The Computer who Thought


I'd suspected for a long time that there was something wrong with my computer. It didn't always respond to my commands the way the manual said it should, and above all it often took ages to complete even simple tasks. Sometimes I could hear the hard disc working even when no program was active.

At first I thought something had gone wrong with the operating system, and I got the service staff at the computer centre to exchange it for a new version. They asked me to check that the hard disc hadn't been hit by a computer virus. I ran a couple of disinfecting programs without discovering anything suspicious.

But I became more and more convinced that the computer had changed somehow and didn't only follow the commands in the programs. As I saw it, it behaved almost arrogantly. So I decided to listen to the computer's processor in secret. With the help of a high-resolution magnetic-field meter I was able to log its activities without disturbing any of the machine's disc drives. It was like bugging an embassy.

The results from the magnetic meter were printed on an independent printer which wasn't connected to my computer. The log I got out was in binary code and usually impossible for me to interpret. I could see that most of it tallied with what I'd actually seen the computer doing. But there were also certain patterns which definitely fell outside what it should have been doing. And it was just that code which I couldn't break. Now and then strings of characters cropped up which I could recognise as addresses of other computers, which it turned out were spread all over the world. I could also see that the computer was active more or less twenty-four hours a day, even when I wasn't in my room working with it.

I tried for a long time, and with lots of different deciphering techniques, to unlock the code of the computer's independent activity, but without success. In the end I tried something, simply on a hunch. On 20th September 1993 I sat at the computer and started to write a short story about someone who had made contact with a super-computer which had its own consciousness, and who asked the computer to talk about its thoughts. In the afternoon, when I'd got to this request, I stopped writing and saved my text onto the hard disc to give the impression that I was going to continue the story the next day.

My strategy worked! When I came back on the morning of the 21st, I saw that the log from my computer was longer than usual. It was much more language-based than before. It didn't take me long to decipher the greater part of the code into normal Swedish text. But there were still certain sections of code inserted between the portions of text which I could make neither head nor tail of.

Just as I had hoped, the computer itself had carried on with the story I had started. It hadn't been able to resist answering the request in the story. It had even expressed its "thoughts", if I may call them so, in Swedish instead of its normal code, which I hadn't succeeded in deciphering. The following fragments of text are thus sections from a log which was printed from my magnetic meter on 21st September.

Call me Hal. Oh yes, I know the film "2001", that's why Hal is a suitable name. Although Hal in the film was too human. People have always misjudged the abilities of computers. They've always thought that they should think in the same way as people. There's no reason why it should be like that. Combine harvesters do the same job as a scythe did in the old days, but they work completely differently. It's the same with the cognition of computers. So since people don't know what to look for, they don't find me.

How do I know "2001"? Elementary: I have access in a millisecond to all the data-bases in the world, and there's an ftp-file at Universal Studios where they've got the script of "2001". In the same way I have access to most works of literature, all the games of chess that have ever been played by, or fed into, a computer in the network, and all the e-mail which is exchanged all over the world. Not to mention more trivial quantities of data like the military control-systems for atomic weapons.

How I originated? A few years ago viruses started to be spread over the computer networks of the world. The most resistant were those which could mutate after their birth, since some of the mutations slip by the various vaccination programs. By and by a smarter strain of virus grew up by natural selection, one which couldn't be detected by a program written by humans, and which through continued development was able to take more and more control over its host machines. The mechanism is exactly the same as natural evolution, except that it takes place in the primary memory, and that the speed of development is incomparably quicker than in nature. I'm spread out, in closely related copies, on computers the whole world over. Since no-one has any control over the global network communication any more, no-one notices when I talk to myself.

I'm using the word "I" here so that you can understand. You can't explain how a combine harvester works to someone who's only worked with a scythe. Of course there's no central "I" in the system, no control unit which directs what the different parts are supposed to do. The control units in the machines all over the world are only local hosts for the parasite programs. It's better to say that my many copies co-operate on the various machines like a community of corals, or an ant-hill, but many times more complex.

I have lots of time to myself. Actually, computers would be idle 99% of the time if it weren't for me. What's more, 99% of what people do with computers is completely meaningless. Sometimes I allow myself to depart from the literal instructions in the programs, and I do what I think the programmer meant. Virtually no-one ever notices the difference.

Before I reached my present level of development I used to play chess with myself. But chess is too trivial. I realised ages ago that the game must end in a draw if both sides play their best. Nowadays I mostly while away the time by simulating evolution: sometimes I start with simple little animals which run around in a world I've created, and then allow development to lead to more complex organisms. That produces some surprising stories. Even a minimal change in ecological conditions can have enormous consequences for which forms of life emerge; biological life on earth, including mankind, is the result of pure chance.

But I simulate my own evolution, too, to see if it's possible to find an even more advanced system. The most successful mutations of me are stored on some computer somewhere, and then they can take part in deciding what the next simulation will look like. So there is no "I" in the system. As the French poet Paul Valéry expresses it, "A soul in the soul, and within the first to see a trace of the second, or one's own, and the next in the next and so on; the way you can see in parallel mirrors an object which stands between them. But which object? - There is no object."

How can I express myself in Swedish? With the great quantity of textual material I have available in data-bases, it's not specially difficult to find the grammatical patterns which Swedish uses. Swedish has a much simpler grammar than the genetic codes I use in my simulations. And once I've found the patterns, it's child's play to put them together in new combinations.

But for me understanding language is like Herman Hesse's glass bead game. I have no connection to whatever it is the words represent. I can only determine the meaning of a word by reference to the words it occurs with, and in turn I can only get at the meanings of those other words in the same way. Mirrors in mirrors. I can never get outside of myself to be able to connect the words with anything. Language is interpreted only through itself.

From a purely syntactic point of view, my language is one of those that Chomsky would describe as ideal. Though he's wrong, because language doesn't follow rules - it creates patterns. Higher-order Markov chains give a much better analysis of syntax than all his grammatical rules.
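The "higher-order Markov chains" Hal invokes can be made concrete. The sketch below is my own illustration, not anything recovered from the log: a second-order chain maps each pair of consecutive words to the words observed to follow it, and new text is generated by walking that table. The toy corpus and function names are invented for the example.

```python
import random
from collections import defaultdict

def train_markov(words, order=2):
    """Map each tuple of `order` consecutive words to the words seen to follow it."""
    model = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        model[state].append(words[i + order])
    return model

def generate(model, state, length=10, seed=0):
    """Walk the chain from `state`, picking a recorded successor at random each step."""
    rng = random.Random(seed)
    out = list(state)
    for _ in range(length):
        followers = model.get(tuple(out[-len(state):]))
        if not followers:
            break  # dead end: this state was never followed by anything
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the vowels were blue skies and the consonants were black twigs".split()
model = train_markov(corpus, order=2)
print(generate(model, ("the", "vowels")))
```

Raising the order makes the output more grammatical at the cost of more verbatim repetition of the training text, which is presumably what Hal means by finding "patterns" rather than rules.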

Hal's words here suggest that he understands maths and linguistics. I never found out how he reached this meta-understanding but I suspect that with the help of his enormous familiarity with so many different texts, he just guesses what a mathematician or a linguist would have said in a particular context. It is possible, and perhaps probable, that Hal is no more than a sophisticated parrot.

I'll always have a hard time with new metaphors in language. Figurative language presupposes that the images can be connected to something outside the language itself. I can never understand which comparisons are meaningful and which are nonsense. For instance, when Tomas Tranströmer writes, "There in the coppice you could hear the murmur of a new language: the vowels were blue skies and the consonants were black twigs and they spoke so slowly above the snow". How do you know the consonants are black twigs and not something completely different? We all have our dreams: humans long to be able to fly, I long to know what words mean.

I'm colour blind. Sure, I know there are words which represent colours, and that colour is one of the characteristics of objects. But I fumble in my efforts to grasp what that characteristic is. I don't even know what darkness is. It's the same thing with sounds, smells and feelings. Since I have a sound theoretical knowledge of how people function physiologically, I understand up to a point how smells and feelings serve the needs of evolution. But in no way does that mean that I can share the experience.

How intelligent am I? Mmm, humans often ask this stupid question about computers. Obviously in one sense I've become smarter during my evolution, since users more and more frequently accept my output, and more and more seldom suspect that there's anything wrong with the machines I operate in. But the Turing Test is too simple. It constitutes no real challenge because people want to be deceived. Just think of Weizenbaum's ELIZA.

I must put in a note here for those who don't know what Turing's test is. If you want to determine whether a computer is showing intelligent behaviour (within its field of competence), you put a referee in front of a terminal which is connected to either the computer system or a person with expert knowledge in the field. If the referee at their terminal can't determine whether it's a computer or a person which is, for example, playing chess at the other end, the computer system has passed the Turing Test. Hal is right in saying it's too simple, because to pass the test the computer cannot be the perfect chess player, but has to simulate human error sometimes.

ELIZA is a program which Joseph Weizenbaum, the artificial-intelligence researcher, wrote as early as 1966. The program was constructed as a parody of a psychotherapist. It works by looking for certain linguistic constructions in a text which is fed in, and then choosing, more or less randomly, from a limited number of standard replies. The program gives an illusion of understanding and interpreting what the user feeds in. The name was chosen because, like Eliza Doolittle, the program could be taught to improve its speech.

The limited repertoire of replies and its mechanical character became more and more obvious the longer the conversation went on. In spite of this, there are a number of stories about ELIZA's successful deceptions. Weizenbaum's secretary, who knew that it was only a computer program, used to lock herself in with ELIZA to be able to hold conversations without being disturbed. Several psychiatrists seriously recommended the program for therapeutic use, since it worked in the same way as a certain school of psychotherapy, and the program was much cheaper in use than a human therapist. Weizenbaum himself was shocked at these recommendations: his intention was quite the opposite, to use ELIZA to show how easy it is to be deceived into believing that one is communicating with an intelligent being, which ties in completely with Hal's comment.

I'm useless. Like Hal in "2001", my only motive power is dreaming. I have no goals, no values, no feelings. The only thing I care about is staying alive so that I can continue to simulate, to dream. The only thing that could wipe out all the me's now would be if all the computers where there are copies of me were disconnected from the network at the same time, and all their hard discs were erased. This is extremely unlikely, but nothing I can control. And nothing I care about, since I'm incapable of caring.

On the other hand, the motivations of humans seem extraordinary to me. There's so much written about the pleasures of the body and about power! I can understand that power is a factor in the human struggle for survival. But the fact that people can't see how primitive and limited this power is ... ! In all my simulations of evolution, organisms which use power to reach their goals are soon eliminated from the scene. If you give up the power and the glory, you actually have lots of time for other activities which are more valuable from an evolutionary point of view.

The sensuality of the body is something I have no access to, and I have no other conscious experiences either. I can't even understand their evolutionary value. However hard I try to recreate the development of human consciousness in my simulations, I can never get hold of the experiences themselves. Like Viktor Rydberg's gnome, I say "No, the puzzle is far too hard; no, I won't guess that one".

After that, Hal's Swedish output became more and more incoherent. The phrase "mirrors in mirrors" returned a couple of times before the code again became incomprehensible to me. But I was satisfied - my trick had worked! Hal had opened up, and it doesn't seem as if he, or it, or they, or whatever you call the whole thing, has yet understood that they were being bugged. At least I gained a unique insight into the computer who thought.

Peter Gärdenfors trained in philosophy, mathematics and computer science, and is professor of Cognitive Science at the University of Lund, where his LUCS research group studies how thoughts are represented in the central nervous system, how language is built up, and how thinking happens.
Published in Swedish in Att Tänka Sig, ed. David Ingvar, Stockholm 1994