This Is the End


Markets, Risk and Human Interaction

February 23, 2011

Context, Content and the Turing Test

In a recent post I laid the blame for the inadequacies of neoclassical economics and behavioral economics on the failure to take into account human context. By context I mean that humans make decisions that are colored by their assumptions, experience, agenda, and even their sense of foreboding.
One way for economics to overcome its deficiencies is to take into account these inherently human characteristics. A different route is for people to cast aside these traits and start behaving more like computers. It looks like we might be going down the latter path.

In an article in this month's Atlantic, Brian Christian recounts his role as a confederate in the annual Loebner competition, which runs the Turing Test to see if computers can fool judges over the course of a five-minute conversation conducted via computer console. The humans won this time around, as they have in each of the twenty years the contest has been run. And Christian's bet is that the computers will not be winners anytime soon, because even as computers get faster and more adeptly programmed, humans will counterattack with the weapons in their arsenal. One of those, which Christian used to win the event's prize as the “most human human” (the human who was most often identified correctly as a human), was to interrupt frequently and backtrack to previous points in the conversation the way we do in real conversation. By comparison, the computers far preferred a you-ask-I-answer interrogative approach.

The tendency for the Turing Test to become a competitive game for the humans as well as the computer programmers -- that is, where the humans are trying to win rather than 'be themselves' within the structure of the game -- defeats the test's intention, which is more or less to have a computer be indistinguishable from a person in a “normal” human interaction: say, a pleasant dinner conversation with a stranger, in which neither party is trying to prove that he is not a computer.
A better Turing Test to overcome the problems introduced in the competitions is to interject a computer into a round of dinner conversations where the human subjects are not made aware that this is occurring. After the fact, subjects are told that some of their companions might have been computers, and only then are they asked to rank the guests by “humanness.”

Apply the same method to other common modes of conversation, moving down the line toward the increasingly vacuous and context-less: e-mail exchanges, then online chat, and finally “texting.” As we go down the line we lose more and more context and depth. Each back-and-forth depends, if anything, on fewer and shorter prior communications. Tweets, which seem like the lower limit of texting, are virtually “stateless,” meaning that they often spew forth apropos of nothing. As we descend into these more modern forms of communication it becomes easier and easier for a computer to “win” the Turing Test.

To illustrate this point, Christian relates an exchange between a computer and an unwitting human, where the human engaged in a conversation for an hour and a half, and then broke away without ever realizing there was no human on the other end. (The dialogue, presented in part in this link, is one of the funniest things I have ever read.) And this occurred in 1989:

Mark Humphrys, a 21-year-old University College Dublin undergraduate, put online a program he’d written, called “MGonz,” and left the building for the day. A user (screen name “Someone”) at Drake University in Iowa tentatively sent the message “finger” to Humphrys’s account—an early-Internet command that acted as a request for basic information about a user. To Someone’s surprise, a response came back immediately: “cut this cryptic shit speak in full sentences.” This began an argument between Someone and MGonz that lasted almost an hour and a half. (The best part was undoubtedly when Someone said, “you sound like a goddamn robot that repeats everything.”)

Returning to the lab the next morning, Humphrys was stunned to find the log. His program might have just shown how to pass the Turing Test. When it lacked any clear cue for what to say, MGonz fell back on things like “You are obviously an asshole,” or “Ah type something interesting or shut up.” It’s a stroke of genius because, as becomes painfully clear from reading the MGonz transcripts, argument is stateless—that is, unanchored from all context. Each remark after the first is only about the previous remark. If a program can induce us to sink to this level, of course it can pass the Turing Test.
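The statelessness described above can be illustrated with a toy sketch. This is a hypothetical illustration in the spirit of MGonz, not Humphrys's actual program: every reply is computed from the previous remark alone, with no conversation history kept at all.

```python
import random

# Stock retorts for when nothing in the last remark gives a cue.
FALLBACKS = [
    "type something interesting or shut up",
    "that's not an argument",
    "you obviously have nothing better to do",
]

def reply(last_remark: str) -> str:
    """Produce a retort based only on the previous remark.

    No state is carried between calls: each remark after the first
    is only about the previous remark.
    """
    remark = last_remark.lower().strip()
    if not remark:
        return "cut this cryptic shit speak in full sentences"
    if remark.endswith("?"):
        # Deflect questions back at the speaker instead of answering.
        return "why do you ask?"
    if "you" in remark:
        # Turn any accusation back around; a classic stateless move.
        return "this is about you, not me"
    return random.choice(FALLBACKS)
```

Because the function takes only the last remark as input, an argument with it can go on indefinitely without the program ever needing memory, context, or understanding.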

We are indeed sinking to that level, not by becoming more verbally abusive, but by becoming less verbal, period. We are moving as a society toward the vacuous and non-contextual as we embrace new modes of conversation. Many have written on the vacuousness of IM and SMS-based conversation. But it is not depth of content that differentiates humans from machines. A computer can already beat us in terms of content. One human in a previous Loebner competition was pegged as a computer because she knew more Shakespeare than the judges thought was humanly possible, but not more than what they thought was possible for a computer. Recently a computer went head-to-head with past Jeopardy champions and won handily.

For humans, context matters more than content. A computer does not have existential angst. It does not hold grudges or have its reactions shaped by its childhood experience. It does not respond to a remark based on the previous conversations and how that colors the sense of the other person's interests and emotions. These dimensions of human interaction are flattened as we sink into the texting, twittering world.


  1. I am intrigued by the notion that the Turing Test is something humans could fail before the computers can pass it. Asimov would laugh.

  2. Speaking of content and context (if I may interrupt and backtrack for a moment), Tony Kashani has written an interesting commentary examining two forms of human conscience as formulated by Erich Fromm. Fromm compares the authoritarian conscience with the humanistic conscience by saying that the latter “is not the internalized voice of an authority whom we are eager to please and afraid of displeasing; it is our own voice, present in every human being and independent of external sanctions and rewards."

    While Twitter can be easily characterized as vacuous or flat in many circumstances, it is hard not to notice the explosive awakening of a cultural humanistic conscience in the Arab world facilitated in no small way by IM and SMS-based conversations.

    I favor a biological paradigm in forming expectations about how our relationship with computers will evolve. Many permutations will be attempted; only the most robust will survive. Until such time as computers can replicate on their own and engage in their own forms of natural selection, I think it's safe to assume that we are moving toward the age of the bionic man where those with the means will have their own bio-connection to a personalized Watson. The more broadly dispersed this capability becomes the better for humanity and the globe.

    Unfortunately, we are probably going to have to go through an Age of the Biosaurs in which a small group of richly endowed entrepreneurs monopolize the power derived from the Watson-human interface. It's my expectation that the ongoing struggle with the voice of a computer-based authority will define much of human history going forward. I think that ultimately the musicality and soul of the humanistic conscience will prevail, richly informed by a well-tempered Watson.

  3. Indeed there is a certain level of vacuousness in IM and SMS-based conversations. People use smart phones and smart applications like Facebook and Twitter, but the level of communication is far from smart. In those kinds of communication there is actually no context and no depth. It's just a matter of being "emotionally" in tune with others. Staying tuned and connected does not mean communicating. Facebook is just about marketing ourselves, being part of a lonely crowd. You click on "I like" and "Share" and that's it, you have communicated. What? Nothing, just that you are there... Computers can do that as well...

  4. jon.e.whiteford@gmail.com, March 25, 2011 at 11:52 PM

    Asimov went over this from the standpoint of, "Is it a Machine Intelligence or is it a Machine Intelligence with FREE WILL!"

    The last time you called tech support, did the tech have an accent? What about the one before that? Don't remember? Then I guess he/she/or IT was good enough.

    I prefer to say Machine Intelligence, because if it is real, can it be artificial?

    Both the transmitter and the recipient need to be educated with regard to the communication expected/planned.

    If I only speak French and you only speak German and all we have is audio communication, then no matter how hard I try or want to, I will never be able to tell you about the biggest deal of insider trading I have ever encountered.

    But if we plan ahead, then I can get a load of info from one lantern or two (one if by land, etc.).

    Remember the old digital modems for analog voice lines, and how they would have to handshake and negotiate a protocol ("Sprechen Sie Englisch?")? They BOTH had to be reasonably intelligent/educated to communicate.

    Remember C3PO the robot translator?

    The Turing Test as first stated by Turing was, to my mind, very politically incorrect... that is why it is usually rephrased.

    The new form of the problem can be described in terms of a game which we call the 'imitation game'. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B thus:

    C: Will X please tell me the length of his or her hair?

    Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be:

    "My hair is shingled, and the longest strands are about nine inches long."

    In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.

    We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"

