At a time when astonishing technological creations and discoveries emerge almost daily, this article looks back at the beginnings. While ChatGPT is the centerpiece and brightest highlight of artificial intelligence today, we recount the episode that started it all: chatbots. Join us on a historical journey to the 1950s and 1960s, where we meet Alan M. Turing and Joseph Weizenbaum.
Can Computers Think?
The name of Alan Mathison Turing (1912-1954) appears in most articles that take a historical perspective on the development of artificial intelligence and cryptography. His research went much further and wider than these disciplines: Turing was also active in biology, philosophy, relativity theory and, of course, mathematical logic.
We begin with his 1950 article, Computing Machinery and Intelligence. In it, Turing describes in detail a test meant to help answer the question of whether machines can think. The starting point is an experiment in which the researcher asks questions of an unseen partner and must deduce, from the answers alone, whether that partner is a man or a woman. Turing then takes it a step further: what if, instead of a person, the hidden partner is a computer? He proposes that this approach replace the much vaguer question of whether a machine can think, and he calls the experiment the imitation game.
His treatment is remarkably meticulous: he considers critiques stemming from theology, neuroscience, the social sciences, mathematics, and physics. We recommend the original article, listed in the References section, and we select three such (counter)arguments below:
What brains and computers have in common physically is electricity, so one could argue that thinking, too, is a shared feature, a product of that electricity. Turing's counterargument invokes the Analytical Engine designed by his predecessor Charles Babbage, who had earlier worked on the simpler Difference Engine between 1822 and 1831. The designs were purely mechanical, and yet the devices could, in principle, compute. Babbage never saw his machines completed, mostly for financial reasons. But the Science Museum in London finally built his Difference Engine No. 2 to the original plans, completing the calculating section in 1991 and the printer in 2000, and the machine can calculate with numbers of up to 31 digits! Turing's answer, therefore, is that one must not restrict the definition of computers, and consequently the question about thinking, to electronic devices.
Another point is called by Turing The ‘Heads in the Sand’ Objection. We quote it without further comments, as it is quite spectacular:
[Objection:] “The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.”
[Turing’s response:] I do not think that this argument is sufficiently substantial to require refutation. Consolation would be more appropriate: perhaps this should be sought in the transmigration of souls.
Finally, the article discusses an objection from Ada Lovelace, Babbage's collaborator, often regarded as the first computer programmer: "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform." Turing's refutation relies on complexity. Even in his day, electronic and mechanical devices were sophisticated enough that one could hardly, if ever, be in complete control of all the parameters involved; Turing himself confesses to having been surprised by the output of his own machines. And even if one could control every parameter that influences the machinery's behavior, it would still be impossible to dial in voltages precisely, to the required number of decimal places. The conclusion is that unpredictability is born of complexity; in other words, the more complex a computer is, the more creative freedom it has.
Meet the New Psychologist
In 1964, the German-born scientist Joseph Weizenbaum was leading a team of researchers at MIT whose project would become the first conversational robot, or chatbot: a (ro)bot designed to hold a conversation (chat) on arbitrary subjects. The robot was a computer program the team called ELIZA, after a character in George Bernard Shaw's play Pygmalion: the girl Eliza Doolittle, who learns proper, upper-class speech as the play unfolds. Since a picture is worth a thousand words, here is a fragment of a conversation (see the image):
The purpose of the project was surprising, though: it aimed to show how superficial conversation had become, so much so that it could be simulated with software. An indirect target of the team's satire was the Rogerian school of psychology, founded by Carl Rogers as an alternative to Freudian psychoanalysis and the behaviorism of B. F. Skinner.
Technologically, ELIZA uses ideas and techniques that form the basis of natural language processing (NLP), a field that is highly active today and underlies most of the recent breakthroughs in chatbots, including ChatGPT. The superficial dialogue was programmed with a technique known as pattern matching: the software searches the user's lines for important words (certain pronouns, verbs, nouns), then tries to reply with something that contains some of those words, or related ones (see the image).
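To make the idea concrete, here is a minimal, hypothetical sketch of ELIZA-style pattern matching in Python. This is not Weizenbaum's original program (which was written in MAD-SLIP); the rules, wordings, and function names below are invented for illustration:

```python
import random
import re

# Swap first- and second-person words so a captured fragment
# can be echoed back from the program's point of view.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# Each rule pairs a regular expression with reply templates;
# "{0}" is filled with the reflected captured text.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"my (.*)", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]

# Generic replies when no pattern matches, in true Rogerian style.
FALLBACKS = ["Please, go on.", "I see. Can you elaborate?"]

def reflect(fragment: str) -> str:
    """Swap person-words in a captured fragment ('my coffee' -> 'your coffee')."""
    words = fragment.lower().split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(line: str) -> str:
    """Reply using the first rule whose pattern matches the user's line."""
    for pattern, templates in RULES:
        match = pattern.search(line)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)
```

For example, `respond("I need my coffee")` yields a reply containing "your coffee", while a line matching no rule draws a neutral fallback, which is roughly how ELIZA kept a conversation going without any understanding at all.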
ELIZA was only the beginning of a field that has snowballed incredibly over the past six decades. Moreover, beyond the actual programming of chatbots, Weizenbaum belongs to a distinguished group of researchers who wrote extensively on the ethical and psychological problems such discoveries raise. These go far beyond the Rogerian satire, into significant questions at the crossroads of computer science, programming, mathematics, psychology, cognitive science, and philosophy. An important example is the book by Norbert Wiener, the founder of cybernetics, the science of control and communication between human and machine: God and Golem, Inc., published, perhaps not by accident, in the year ELIZA was born: 1964.
Food for Thought
How do you assess the question of whether a computer could think? How would you define your own version of the imitation game: what must a computer do to pass as a human?
In which fields do you think a computer can successfully replace humans, and in which does artificial intelligence stand no chance?
Which possible advances in technology and artificial intelligence do you consider most useful to humanity, and what would such uses look like? What about dangerous developments?
How do you feel about the ethical problems raised by artificial intelligence? Should they be judged by the same methods and principles as human ethics? Should we develop a new computer ethics? Or should such questions be dismissed entirely?
Try to answer by formulating your own points. Then you can research topics such as computer ethics, the Chinese room argument, friendly AI, cyber ethics, cybernetics.
Write us your answers in the comments or via email at newsletter[at]poligon-edu.xyz.