Sai (saizai) wrote in academic_empath,

Chinese room: Reducible complexity & empathy limitation

[Reposted from my LJ.

This is a midterm paper for my Philosophy of Mind class (taught by John Searle). I think the issue of consciousness is pretty much directly tied to empathy; if anything, it is the only thing - that is, the way in which we view the world, and perhaps the potential for being aware of others' conscious states or even the state of the multiple-people meta-organism itself - that differentiates us from perfectly normal people experiencing sympathy.

I also think the last point I make is worth highlighting: that our understanding of others, and our ability to perceive them as conscious (or, say, intelligent), is directly tied to how much we empathize with them. And a good multi-level-description materialist worldview is perfectly compatible with empathy, even with a perfectly mundane underlying process to "explain" it - namely, the combination of perception of behavior, belief/knowledge of structure, and personal experience. The closer our experience has come to someone else's, the easier it is to empathize. E.g., ain't many people who've been a tree, so that's hard.

If any of the references are confusing, ask. If you don't understand the Chinese Room argument from my very brief synopsis, Google-Scholar the reference at the bottom; it's an interesting paper.]


[Note for non-neuropsych people - "prosopagnosia" is a deficit caused by lesions (particularly bilateral ones) to a region of the brain called the "Fusiform Face Area"; people with it have a specific impairment in their ability to recognize faces (both familiar and unfamiliar), though their general visual skills, object recognition, voice recognition, etc., remain more-or-less intact.]

Sai – Phil 132: Philosophy of Mind – GSI: Aaron Lambert
Paper 1, Topic 3b/f (Chinese room per Systems / Impossibility replies)

John Searle proposed his “Chinese Room Argument” about 25 years ago (Searle 1980) as an attempt to argue against the “Strong AI” belief that artificial intelligences – or, in general, systems other than humans – could be (at least in theory) conscious, sentient, or “intelligent”. It invokes a story of a room which, to all outside tests, is able to pass the Turing test by carrying on perfectly intelligent conversation in Chinese writing. On the inside is a man with no knowledge of Chinese, interpreting the symbols he is given by looking them up in a tome of reference tables and – following its instructions – constructing a return message of equally meaningless (to him) symbols.
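(To make the mechanism concrete, here is a minimal sketch of the room as a pure lookup-and-copy procedure. It is only an illustration; the handful of rule-book entries are invented placeholders, not anything from Searle's paper, and a real “book” would of course need to be unimaginably larger.)

# Toy illustration (not from Searle): the operator blindly matches the
# incoming symbol string against a rule book and copies out the reply it
# prescribes. Whatever "understanding" there is lives in the table, not
# in the operator.

RULE_BOOK = {
    # Hypothetical entries, standing in for an impossibly large book:
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你懂中文吗？": "当然懂。",     # "Do you understand Chinese?" -> "Of course."
}

def operator_reply(symbols: str) -> str:
    """Return whatever the rule book prescribes; no meaning is processed."""
    return RULE_BOOK.get(symbols, "……")  # noncommittal squiggles if no rule matches

if __name__ == "__main__":
    print(operator_reply("你好吗？"))  # prints: 我很好，谢谢。

(The point to notice is simply that the rule-follower contributes nothing but matching and copying; whatever competence the room displays was put there by whoever wrote the table.)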

This scenario relies on some plainly impossible features, the most obvious of which is the magic “book of Chinese”. This book would, obviously, need to be written by someone who knows Chinese, and indeed could be construed as nothing but a set of instructions describing how they would reply to any given statement – including, e.g., any that would refer to their emotions, experiences of the world, preferences, etc. Thus, the room becomes equivalent to having an intermediary between the outside world and the true Chinese speaker – one that functions in a marvelously complex manner. However, while this does counter the argument, it does not support Strong AI either, as it still relies on an authentic (human) Chinese speaker in order to work.

By another reply, the worker in the room is merely a component in a larger system – which, taken as a whole, does know Chinese. Searle’s immediate counter is to obviate the room itself, by having the worker simply memorize the entire book, and then to claim that there is no longer a part-vs.-system relationship going on. (ibid., p. 419)

However, internalization doesn’t work as described. For one, as mentioned above, the book would need to be the equivalent of an entire speaker’s knowledge-base, etc. This would mean an extremely (infinitely?) large “book” – hardly plausible for any human to memorize. For two, this maneuver still doesn’t obviate the system – it just hides it.

The Chinese system in this internalized “Chinese room” is not “simply a part of the English subsystem” (419), but the opposite – it is the superset. The English system (as envisioned) is the part of the operator that only knows English and follows certain rules; the “part” that speaks Chinese is the totality. In fact, one could say that the operator himself is not conscious of knowing Chinese – he doesn’t, no more than his hippocampus “knows” English. As Searle says (lecture), this is a question of levels of description.

I would like to take that point one step further than Searle does, however, and claim that it applies to consciousness as well. Just as neurons – which are not “conscious” in the same sense that we are, though they (and the limbic system, etc.) make up the totality of our structure – still manage to create consciousness as a higher-level phenomenon, so too the room – internalized or not – can be described as being made up of lower-level consciousnesses (the operator, his tools, and the book) that constitute a higher-level one (the room). Obviously, the operator will not be aware of this any more than the neuron is.

This argument can be extended, of course. In fact, in many ways it is similar to the counterargument against the Creationists’ claim of “irreducible complexity”. Their claim – and Searle’s – is that some property arises spontaneously, whole, and special to “us” – whether that be humans or mammals in general. However, it can be broken down, and all the steps leading up to it can still be comprehended as types of consciousness – though the farther you go from the integrated whole that is ours, the stranger it seems to call it the same thing. (A direct analogy is the evolution of humans’ “camera” eyes. [Coyne, part V].)

As Block (1997) points out, there is more than one component to “consciousness” as we conventionally think of it – and these components can be experienced (if the word applies) by themselves or together. Likewise, these components too have sub-systems, such as all the varied apparatus that goes into creating a cohesive visual percept. Lesion patients (such as prosopagnosics) give us examples of people who are, in some sense, not “conscious” to the same full extent that normal people are.

These do, indeed, combine to justify Searle’s remark – “… now the mind is everywhere. What we wanted to know is what distinguishes the mind from thermostats and livers…” (420). I will agree with the first part of this; it does entail a sort of universal mentalism.

The latter part, however, is misguided – in the same way as it is misguided to ask, “What essential property distinguishes us from chimps? Dogs? Mollusks?”, and expect them to be utterly disjoint in their makeup. As I have said, the system is conscious; it’s just a somewhat different sort of consciousness than the operator’s; likewise his stomach is a different sort, and likewise his cat. They are related by a sort of ‘genetics’ of similarity – as Searle says in dozens of places, it is by both behavior and structure that we begin to believe other entities to be conscious. Entities more distant from us in behavior and/or structure are necessarily “conscious” in more and more distant ways – from the prosopagnosic who demonstrably has a different experience of normal life, to the cat who has markedly different physiology and senses, to a mollusk that lacks most of the complicated apparatus that enables us to process complex thoughts.

Thus, I would suggest that the “other minds” problem is best accepted as inherent, and turned on its head – that we not try to grant or deny membership in the League of Sentients, but rather acknowledge that it is our very ability to understand that guides our intuition here. Empathy – both by saying “this entity is similar to me” and “this entity reacts in a similar way to me” – leads us to conclude that the other entity’s experience is like ours; this is inescapable.

It sets the limit on what we are capable of making meaningful assertions about when it comes to others’ experiences – be it the experience of being a cat or a bat, or anything else. Insofar as we are similar to machines, we will be able to say that their consciousness is or isn’t similar to ours; if we create something that mimics our brain and behavior neuron for neuron, we will then be forced, chauvinism aside, to extend it the same “polite convention” we give ourselves.


(Note: While I am siding with the “standard” Systems / Impossibility replies, I’m also trying to make two [to my knowledge] novel points, which are similar to that of Systems but significantly different. Call them the “Empathy Limitation” and “Reducible Complexity” replies; relatively briefly dealt with here, due to space limitations.)

References:
Jerry Coyne – “The case against intelligent design: The faith that dare not speak its name”, Edge online. http://www.edge.org/3rd_culture/coyne05/coyne05_index.html, accessed 9/25/05.
John Searle – “Minds, brains, and programs”, The Behavioral and Brain Sciences (1980) 3, pp. 417-457.
Ned Block – “On a confusion about a function of consciousness”, The Nature of Consciousness (1997), ed. Block, Flanagan, & Güzeldere, MIT Press, London.