This excerpt is unfortunately only the conclusion of Katherine Hayles' book, so I can only base my response on that particular section. That being said, I think it gives an adequate picture of what she was after in the rest of the book. The conclusion is aimed at appeasing fears about the advent of the posthuman. While there is a great deal in her argument that I do not agree with, I will start with a brief synopsis and then move on to my criticisms.
Hayles' objective in her conclusion is to show that fear of the posthuman is unnecessary by offering positive interpretations of what it means to be posthuman. She does so by providing intellectual and argumentative grounds for a positive outlook, thereby remoulding the definition of the term 'posthuman'. First, Hayles identifies why there is a fear: on one level, an intellectual worry that the definition of 'human' will morph into something else; more importantly, the fear of an antihuman or apocalyptic instantiation of the posthuman, meaning our replacement as a species by machines, or our melding with machines to create some sort of cyborg. Either way, the prospect is viewed negatively because humans are assumed to be superior, and we as a species should not be overcome. Hayles then traces this fear to the liberal humanist belief in the autonomy of the subject. That is the view, dating from the humanist Enlightenment, that humans are in control, that we are superior to nature, and that this superiority gives us a clearly defined notion of what 'human' is (it is not machine, it is not cybernetic; it is a collection of particles forming a body of such-and-such average dimensions, with reason, freedom, and the ability to think as its hallmarks). Hayles thinks this view is untrue, and she is joined in rejecting it by similar thinkers such as Haraway and others whom she lists.
Hayles then points to this misconception of human autonomy as the root cause of the fear, but also reverses it to give the posthuman a positive outlook. She argues that we do not think independently but in conjunction with our surroundings, and that the notion of "distributed cognition" shows we are not the autonomous entities we thought we were. Indeed, she says that computers, programs, and machines in general do a great deal of our thinking for us, and that whatever subjectivity machines have was not given but emerged from the external and internal circumstances that prompted it, mainly our creating and improving them. So machines think and judge, and this thinking is used by humans. When we make a decision, it is not by virtue of our autonomous will but through a conjunction of the "thinking" of various entities, conscious and non-conscious alike. Hayles therefore concludes that we need not fear the posthuman, for it is not the end of the human as subject but the natural progression of our interaction with a complex system of distributed cognition that has evolved over thousands of years, with machines and other natural entities as thinking subjects on which we rely. Rather, the posthuman offers exciting chances for humans to evolve and broaden our capabilities. The definition of the human that Hayles opposes, it would seem, is that of humans as separate, independent, autonomous entities that are naturally superior and that impose upon and dominate nature; she replaces it with humans as interdependent subjects in a complex network where thinking is done by a myriad of actors. It is then not a replacement of humans by machines but a more interdependent relationship with them, in whatever form that may take (more integrated prostheses, computer chips in our minds, judgement calls made by robots on various matters, etc.). The human then evolves into the posthuman. Of note, though, is Hayles' claim that this may already be happening, or has already happened, because we have used tools and thought with them for thousands of years. On this view we are only progressing towards greater integration with machines and tools, but have always been 'posthuman' because of our interconnectedness with this distributed cognition.
Now I hope I have given an accurate description of Hayles' arguments and propositions. While brief, it is not a straw man, because the arguments I am criticizing are those in her conclusion itself, not merely the above summation.
I have numerous critiques of Hayles' assertions and premises, though her final conclusion and my own are actually the same. Firstly, I agree that the terror she describes at the advent of the posthuman is unwarranted. I have high confidence in the innovation and ingenuity of the human species; it will not be wiped out or replaced any time soon (unless some sci-fi scenario is realized and we are invaded by more intelligent, virulent aliens). The fear, however, is aimed at mechanical supremacy over humanity, and this too I find unlikely. I do not wish to digress too much, but the idea of AI as generally portrayed in popular culture is not only a long way off but, I think, unlikely in general. If there were to be some sort of intelligent machine, it would have to be of a more cyborg variety, like the Borg from Star Trek: it would require human characteristics, most notably brain functions, to think and have consciousness like ours. Hayles does not discuss this, however, for she remains in the realm of the abstract.

While I agree that there is a fear of the "posthuman" and that it is unwarranted, I disagree with the premise she posits as its cause, namely the liberal humanist view of the human subject's autonomy. While I agree that we are not totally free and that our actions are causally determined to a large extent, we still have a will able to synthesize these causes and see the possibilities open to us to choose, a view much like Schopenhauer's as outlined in The World as Will and Representation (Book III). We have the ability to think, to process information, to synthesize it, and to draw judgements based on our thoughts. While this information does come from a variety of sources, such as machines and computer programs, the cognitive ability comes solely from us. Machines do not think, nor do they have any sort of consciousness akin to our own. They process information according to mathematical rules and spit out a result without "understanding" what it is they are processing. Hayles gives Hutchins' treatment of Searle's Chinese Room thought experiment to underscore "distributed cognition". While I agree that our environment supplies the information we think with, it is our ability to think that enables any cognition at all. One must flip this criticism onto machines to see that they do not think: anything in them resembling "thought" comes from an amalgamation of data received from the environment, largely input by humans. Hayles seems to use "thought" in a rather loose way, referring only to "distributed cognition", so it may turn out that she does not actually hold that computers think; rather, I believe she holds that cognition arises out of a complex network of actors all contributing to the process. While this may be true, I prefer to think of it as a complex network with humans at the centre, because it is our judgement and cognitive ability that make it possible. Remove humans and there would be no cognition of a complex variety (although there might still be cognition in other life forms, that is debatably not thinking).
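Since the Chinese Room is itself a claim about what a program does, a toy sketch may make the point concrete. What follows is a minimal illustration of my own, not anything from Hayles, Hutchins, or Searle, and the symbol names and rulebook entries are invented placeholders. The program answers purely by rule lookup; at no point does it represent what any symbol means, which is precisely the "processing without understanding" the passage above describes.

    # A toy "Chinese Room": responses come from rule lookup alone.
    # The rulebook entries are invented placeholders, not real Chinese.
    RULEBOOK = {
        "symbol-A": "symbol-X",  # rule: when handed A, pass back X
        "symbol-B": "symbol-Y",  # rule: when handed B, pass back Y
    }

    def chinese_room(incoming: str) -> str:
        """Return whatever the rulebook dictates, or a default marker.

        Nothing here 'understands' the symbols; the function only
        matches shapes against the rulebook, as Searle's operator does.
        """
        return RULEBOOK.get(incoming, "symbol-unknown")

    if __name__ == "__main__":
        for note in ["symbol-A", "symbol-B", "symbol-C"]:
            print(note, "->", chinese_room(note))

To an outside observer the room answers correctly, yet the mechanism is exhausted by the lookup; any appearance of thought was put there by whoever wrote the rulebook, which is the sense in which I claim the cognition remains human.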
I would now like to turn to the subject of machines themselves. Hayles' analysis seems to liken them to subjective consciousnesses similar to humans, because they aid in our thinking and perform functions that we no longer have to perform ourselves but still make use of; indeed they often perform them better than we do. But this forgets where they come from. Machines and other tools are a product of the human mind. We created them to maximize our own efficiency, and they have certainly accomplished that task. The main point, though, is that without us not only would they not exist, but, as I just mentioned, there would be no cognitive ability. As Ayn Rand has one of her characters say in Atlas Shrugged: “I thought...of the men who claim that machines condition their brains. Well there was the motor to condition them, and there it remained as just exactly what it is without man’s mind-as a pile of metal scraps and wires, going to rust.” (Atlas Shrugged, p. 745). This passage expresses what Hayles and I agree on: machines condition humans, and we enhance them with our progress. We differ in that I think we remain the dominant factor; without us they would fall to rust. Perhaps, as I mentioned, there will in the future be an artificial intelligence reminiscent of The Matrix, capable of sustaining itself, but I doubt it, and for now I wish to discuss mechanical developments only up to the present day.
What I do agree with in Hayles is her attempt to appease fears of the posthuman. I differ on what the posthuman will look like and what defines it, but what we have in common is the view that the future of humanity and technology is not only positive but exciting. Although Hayles' account of where the fear comes from is different from my own, the prospects for humanity are enthralling. In the short span of several thousand years we have developed vastly complex technology that accomplishes tasks from the most trivial (the doorknob) to the most complex (remote-controlled mining robots). It is our greatest achievement as a race and our best quality, one that bestows on us the ability to conquer nature and proliferate our species. The continued improvement of our technology does not herald the end of humanity but its enduring existence. Hayles offers the view that we have always been posthuman, and this is certainly one way to look at it, but I prefer to see it as humanity and its technology: the former creating and relying on the latter, and the latter requiring the former for its existence while simultaneously aiding the former's own. In this view Hayles and I can also agree that it is a complex interdependency and co-relationship that will continue to evolve for the betterment of both. The pedantic argumentation over thought and consciousness is actually secondary, almost irrelevant, next to the concretely observable evolution of this complex relationship.
Directed at my fellow peers: Hayles mentions Joseph Weizenbaum's assertion that the capacity to make judgements should, as a matter of ethical principle, be left to humans alone. Do you agree? What are the ethical implications of ceding more and more functions to computers and other technology? Is there a possibility of our losing our humanity because we relinquish the ability to form judgements of a given kind? I have supplied a link to the Ayn Rand Institute's webpage for those who are interested in learning more about her. She is a philosopher who is generally despised and disregarded by the academic community, and for that reason I believe she has merit. I have also added a link to a forum of computer ethics articles, essays, and discussions to go along with my questions.