Chinese Room Argument

One of the tedious but rewarding (crazy people call it “fun”) things about reading is the simple fact that you are going to come across something you don’t understand. An engaged reader will go and look those things up to achieve a better understanding of the text. One of the more interesting things I’ve had to look up lately (and believe me, this happens quite often) was the Chinese Room Argument, mentioned on page 48 of Hayles’ article. It’s a thought experiment about AI and the nature of the human mind, and it is highly relevant to the way we think about and relate to electronic texts.

This argument was developed by John Searle in his 1980 paper “Minds, Brains, and Programs”.

The experiment goes like this. Suppose that AI research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters as output. Suppose that this computer performs its task so well that it convinces a human Chinese speaker that the program is itself a human being. To every question the human asks, it gives appropriate responses, so that any Chinese speaker would be convinced they are talking to another Chinese-speaking person.

Now suppose that someone is sitting in a closed room with a book containing an English version of the aforementioned computer program, along with sufficient paper, pencils, erasers, and filing cabinets. He can receive Chinese characters, process them according to the program’s instructions, and produce Chinese characters as output. The suggestion is that this human operator would be able to perform the task just as well as the computer, even though he does not understand Chinese.

Searle asserts that there is no essential difference between the role the computer plays in the first case and the role the human operator plays in the second. Each is simply following a program, step by step, that simulates intelligent behavior. And yet, Searle points out, the human operator does not understand a word of Chinese. Since it is obvious that he does not understand Chinese, Searle argues, we must conclude that the computer does not understand Chinese either. Without understanding there is no thinking; without thinking there is no mind; so there is a limit to how intelligent artificial intelligence can be.
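To see how mechanical the operator’s job really is, here is a toy sketch (my own illustration, not Searle’s program; the rule-book entries are invented for the example): a lookup table that pairs incoming symbols with outgoing symbols. It can produce sensible-looking replies even though nothing in it understands Chinese.

```python
# A toy illustration of the Chinese Room: replies come from a rule book,
# matched symbol for symbol, with no understanding anywhere in the process.
# (The rule-book entries here are invented for the example.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小房间。",    # "What's your name?" -> "My name is Little Room."
}

def operator(symbols: str) -> str:
    """Like the person in the room: look up the input, copy out the matching output."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(operator("你好吗？"))  # prints a fluent reply, with zero comprehension
```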

This idea is particularly troubling to scholars who subscribe to the computational theory of mind, which holds that the human mind is basically an information-processing system. Does true learning ever really occur? What about the relation between the mind and the body? Is there a soul? An essential identity?

Let’s run with this idea and apply it to Wunderkammer. Here we are given a scenario where we can ask a program a series of questions, and it replies in a meaningful way. When we click on a body part, we are essentially asking the program, “Tell me what this body part means to you,” and it does, by linking us to a page with a story, a description, what have you. It’s almost like we’re having an intimate conversation with the author. However, when we examine the code, we can see that it is set up like a simple list, with each node of the list having its own separate memory attached to it (a rough sketch of that structure follows below). Is this how Shelley understands her body? Each part existing on its own, with its own stories and important moments? It’s almost as if her body parts could exist separately: a leg over here in this bin with a note attached that leads you to the arm in that drawer, with another note and photo that leads you to her face, and so on.
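For the curious, here is that sketch. It is mine, not Jackson’s actual code (the real piece is an HTML hypertext, and the part names and stories below are invented): each body part is a node with its own little memory and a link that sends you on to the next part.

```python
# A schematic of the "list of nodes, each with its own memory" idea.
# Not the actual markup of Wunderkammer; just the shape of the structure.

body = {
    "leg":  {"story": "the summer I learned to ride a bike...", "links": ["arm"]},
    "arm":  {"story": "a note and a photo...",                  "links": ["face"]},
    "face": {"story": "what I see in the mirror...",            "links": ["leg"]},
}

def ask(part: str) -> str:
    """Clicking a body part is like asking: 'what does this part mean to you?'"""
    node = body[part]
    return f"{node['story']} (see also: {', '.join(node['links'])})"

print(ask("leg"))  # each node answers only for itself, then points you elsewhere
```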

This is a reminder that these pieces are constructed. They seem mysterious, alive, and self-aware, but really they are the product of a very thoughtful, meticulous hand.


~ by scootielou on February 20, 2010.
