I Spoke to ChatGPT’s Grandma, ELIZA

Valentin Baltadzhiev
Mar 27, 2023


OpenAI released GPT-4 just a few months after they released ChatGPT. It got jailbroken on the first day, suggesting that it is just as (mis)aligned as its predecessor. These days it feels like Meta, Alphabet and Microsoft are throwing most of their precautions to the wind in a race to bring Skynet out of the realm of fiction and into reality. If I were better versed in philosophy I could probably write a sophisticated piece about what all this means for our society and the human condition in general, but alas, I chose to do something easier. I decided to channel my inner escapist and run away from this doom race to a previous era of simpler (and maybe safer) technology. I travelled all the way back to 1966 and had a conversation about my fears with one of the first chatbots, if not the very first: an entity called ELIZA.

It didn’t help me much. ELIZA is not a very good therapist. Her conversation skills are a joke compared to ChatGPT’s. Yet it is mind-blowing that she was created in 1966. Just for reference, the fastest computer at the time was the CDC 6600: a two-meter-tall, five-ton beast sporting an incredible 10 MHz clock. It boggles my mind how incredibly good programmers had to be back in the day to write anything at all. In a couple of years, we might be able to ask Copilot to write a program that completely simulates that machine. Or tell ChatGPT to do it. That was then, this is now.

What kind of person creates a chatbot in such primitive technological circumstances? The person who created ELIZA was the German-American computer scientist Joseph Weizenbaum. Setting his technological prowess aside, I was extremely curious about the way this man thought. Luckily, he wrote a whole book called “Computer Power and Human Reason” in which he discusses his philosophy when it comes to computers and their place in society. So far so good. I bought the book off Amazon, only to discover that the “Kindle Edition” is just a horribly scanned PDF. Doesn’t matter, I start reading.

Apparently, there was always drama in the computer science field, even before AI came onto the scene. Fascinating. At least it feels like a breath of fresh air to read a book that doesn’t mention the words “alignment” or “deep learning”. One of the main problems described in the book is overreliance on computers (if that was a problem in 1976, I wonder what Weizenbaum would have to say about today’s world). Machines that started as tools allowing people to make complex calculations have turned into an indispensable part of human societies and part of reality itself. As early as 1976 it felt impossible to go back. Weizenbaum argues that such a feeling is a self-fulfilling prophecy more than anything else: as computers become more popular, people offload more and more work to them, making them even more widespread, and so on. The only catch is that computers are limited.

If you are enjoying this article, make sure to subscribe to my free newsletter to get more like this.

This limitation is not one of processing power or memory. It is a more fundamental limitation of the nature of the work that can be done by computers and the way we communicate with them. If we want a computer to carry out a task, the task needs to be specifiable in computer terms. This is perfectly fine when it comes to calculations or predictions of the behaviour of some systems, but it is a problem if the task at hand involves any kind of value judgement. For example, reorganising the inmates in a prison to save space is a good task to give to a computer. However, asking it to look at parole applications and decide whether a person should be released early is impossible without encoding all of the laws and moral and ethical values of society, something that we don’t know how to do, and that Weizenbaum argues is simply impossible.

Why is that a problem? The problem arises not from the limitations that computers have, but from people’s lack of understanding of those limitations. This leads to overreliance on machines in areas where they should not be used. One powerful example that Weizenbaum gives is his own program, ELIZA. When building the chatbot, his intention was never to create a system that could replace a therapist. He didn’t even try to create something “intelligent”. ELIZA was a proof of concept: a chatbot that sounds human to an untrained observer but contains no humanity at all, and certainly no intelligence. As usually happens, the public took his chatbot and drew all the conclusions that Weizenbaum was warning against. People were so enchanted that they asked to be left alone with it so they could share more personal things. What is even more astonishing is that users later insisted that the system “understood” them in some human sense, even after the internal workings of ELIZA were explained to them over and over again. To Weizenbaum’s horror, even psychiatrists claimed that ELIZA was the precursor to a future system that would make therapists themselves obsolete. That was more than fifty years ago, but just substitute “ChatGPT” for “ELIZA” and all the claims will sound like they were made in 2023.

Back in 1966, it was easy to prove that ELIZA didn’t “understand” anything: people could just have a look at its source code, or even run it in debug mode to see how the responses were generated using a few simple techniques, like repeating the person’s statements back to them in the form of questions. Today we don’t have that luxury. All of the big LLMs are essentially black boxes, even for the people who created them. This makes it a lot harder to prove or disprove any claims of “intelligence” or “understanding”. To help us with this, Weizenbaum makes an excellent point. Human understanding of the world is not simply a collection of facts. Whatever it is, it is built in combination with our experience. Being human and living life as a human is deeply connected to the kind of understanding that we develop. A machine can never achieve that, simply by not being a human and not living the way a human does. Therefore, whatever understanding or intelligence it has will be inherently alien to us. For Weizenbaum, this is reason enough to never put machines in positions to make moral or ethical decisions: even if they are better than us at calculating possible outcomes, human ethics should forever remain the domain of humans.
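
For a sense of just how simple those techniques were, here is a minimal sketch in Python of the pattern-matching-and-reflection trick ELIZA relied on. The patterns and canned responses below are my own illustrative inventions, not Weizenbaum’s original DOCTOR script (which was written in MAD-SLIP), but the mechanism is the same in spirit:

```python
import random
import re

# Swap first- and second-person words so "I am sad" echoes back as "you are sad".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# A few illustrative patterns in the spirit of ELIZA's therapist script.
# Each regex captures a fragment that gets reflected into the reply.
PATTERNS = [
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (re.compile(r"(.*)\?$", re.I),
     ["Why do you ask that?", "What do you think?"]),
]

FALLBACKS = ["Please, go on.", "Tell me more.", "I see."]

def reflect(fragment: str) -> str:
    """Turn the user's words back on them by swapping pronouns."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    """Match a pattern, reflect the captured fragment, fill in a template."""
    statement = statement.strip().rstrip(".!")
    for pattern, templates in PATTERNS:
        match = pattern.match(statement)
        if match:
            reply = random.choice(templates)
            return reply.format(*(reflect(group) for group in match.groups()))
    # No pattern matched: fall back to a non-committal prompt to keep talking.
    return random.choice(FALLBACKS)

print(respond("I am scared of the AI race"))
# e.g. "Why do you say you are scared of the ai race?"
```

The whole illusion rests on a handful of regular expressions and canned templates. There is no model of the conversation, let alone of the person having it, which is exactly Weizenbaum’s point.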
