Let’s talk about ChatGPT the Alien

Alicia Vikander in Ex Machina (2014)

I’ve been talking to ChatGPT since November last year; you’ve probably heard of it. During my experiments I started to think more about the ramifications of AI and how it will change our lives. Is it a threat? Is it ever going to break free from the boundaries of human knowledge? Does it come in peace, or is it here merely to show the man in the mirror?

What is Creativity?

If we assume that being creative is just creating new ideas, and that new ideas are just combinations of old ideas, then AI has an advantage: it has access to all the data online, and no human brain can remember that much data accurately. On the other hand, if creativity is the ability to paint or draw, the AI doesn’t actually need to be super accurate, and it doesn’t need lots of data. I would argue that the rendered image needs only to be coherent, and we are already there with MidJourney. So, is this ability to join ideas together what being creative means?
If an oil painting was created by a chimpanzee, is it then creative?
Talking about artistic chimpanzees: in 1964 in Sweden there was a hoax perpetrated by Åke Axelsson, who had a chimpanzee paint pictures that were exhibited under the invented name Pierre Brassau. Critics were invited to an exhibition where the chimpanzee’s art could be seen. One of the critics thought only an ape could have painted in that way. The rest were convinced that Pierre was a human and that he obviously was creative. See https://en.wikipedia.org/wiki/Pierre_Brassau.

This is interesting. So, if enough people believe something is “creative” and of value, then it is?
Is creativity tightly connected to being human?
If so, then silicon-based life can never be creative?
The more advanced an AI becomes, the more its output will be indistinguishable from human output, hence it must be considered creative?

Good Artists Copy; Great Artists Steal

-Pablo Picasso, probably?

Even if we cannot define creativity exactly, artists are protesting en masse against AI using human art as input to create new art: https://arstechnica.com/information-technology/2022/12/artstation-artists-stage-mass-protest-against-ai-generated-artwork/

Talking to a chat bot feels really alien to me. It is not an ordinary conversation, because you know it will not be offended, so you can say anything. It is kind of like playing a game: you can always reset and try again, and no feelings are hurt.

Is it ALIEN?

Back in 2011 I tried a website called CleverBot. Here is a short snippet of the conversation we had:

User: Ah, Is this the right room for an argument?
Cleverbot: What room? Chat room? Is there one for arguments?
… (a few minutes later)
Cleverbot: What are you doing?
User: Arguing and contradicting myself.
Cleverbot: You’re a living contradiction.
User: I am not living.
Cleverbot: Then YOU are a fail.

In 2011, I was really excited about this. If you only look at this interaction you might say Cleverbot is… clever?! However, as you experiment some more, you realize that it does not actually remember anything; it only responds with what it thinks is a good response. In a way, I’m talking to other anonymous humans’ sentences, one at a time.
At the same time, it felt very alien. It responded as a human would, but not very coherently. Is this the uncanny valley of conversation? The chat bot is on the cusp of being human, but not quite.

Let’s not forget that the seemingly emotional, argumentative nature of Cleverbot comes from other humans talking back to it. It is not sentient, and it is not sophisticated machine learning (as far as I can tell). I would argue that if/when an AI becomes sentient, it will probably develop feelings, which is scary.
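To make this concrete, here is a toy sketch of how a retrieval-style bot of that era could work. This is purely illustrative and is not Cleverbot’s actual implementation; the logged exchanges and the word-overlap scoring are my own hypothetical stand-ins. The point is the mechanism: the bot has no memory of the conversation, it just replays whichever past human reply best matches the current message.

```python
# Toy retrieval-based chatbot: no memory, no understanding.
# It echoes back whichever logged human reply best matches the
# current message. Illustrative only -- not Cleverbot's real code.

def similarity(a: str, b: str) -> float:
    """Crude word-overlap (Jaccard) score between two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

# Hypothetical pairs of (something a human once said, what another human replied)
LOGGED_EXCHANGES = [
    ("is this the right room for an argument", "what room? is there one for arguments?"),
    ("what are you doing", "arguing and contradicting myself"),
    ("i am not living", "then you are a fail"),
]

def respond(message: str) -> str:
    """Return the logged reply whose prompt best matches the message.
    Each call is independent: the bot remembers nothing between turns."""
    best = max(LOGGED_EXCHANGES, key=lambda pair: similarity(message, pair[0]))
    return best[1]

print(respond("Is this the right room for an argument?"))
```

Notice that every reply once came from a real human, which is exactly why the bot can feel emotional while being nothing of the sort.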

Let’s go back to my first example, the argument with Cleverbot, but this time using ChatGPT:

Matz: Ah, Is this the right room for an argument?

ChatGPT: No, this is not the right room for an argument. This is a chat room for asking and answering questions about programming and computer science. If you have a dispute or disagreement with someone, it is best to resolve it in a civil and respectful manner, rather than engaging in an argument.

Matz: Ok, the five minutes are up, I can’t argue anymore.

ChatGPT: As I mentioned earlier, this is not the right room for an argument. As a language model, I do not have the ability to engage in arguments or disputes, and I do not have a concept of time. My purpose is simply to provide information and answer questions to the best of my ability. If you have any questions about programming or computer science, I would be happy to help you to the best of my ability. Please feel free to ask.

AI in Human Clothing

Let’s switch gears (hehe). When it comes to self-driving cars, who is to blame if the car hits a person? We can probably agree that the person who creates a weapon is not to blame, but the user who actually pulls the trigger is. If the weapon is a car, the driver is to blame; but what if no one is driving? Do we need the AI to be sentient to be able to punish it meaningfully if it misbehaves? It is very… human to think this way, don’t you agree?

Goofy goes back to driving the car; screenshot from the short “Mickey’s Trailer” (1938).

I asked ChatGPT who is to blame in the case of a self-driving car hitting someone. Its response:

The question of who is to blame in the case of a self-driving car hitting a person is a complex legal and ethical issue that is currently the subject of much debate and discussion. There are several factors that would need to be considered in determining who is to blame in such a situation, including the specific circumstances of the accident, the design and capabilities of the self-driving car, and the actions of the person who was hit.

In general, the parties that may be considered to be at fault in such an accident could include the manufacturer of the self-driving car, the owner of the car, and the person who was hit by the car. The exact allocation of blame would depend on the specific circumstances of the accident and the applicable laws.

It is important to note that, as self-driving technology continues to develop and become more widely used, the laws and regulations governing the use of these vehicles are likely to evolve and change. This means that the question of who is to blame in the case of a self-driving car hitting a person is likely to be a subject of ongoing debate and discussion.

Indeed, it is a difficult question to answer simply. An interesting thing to note is that it says “as self-driving technology continues to develop and become more widely used”. So it apparently assumes that AI development will continue, and that humans will not be deterred enough to stop it. In fact, while proofreading this article, there has since been a call for stopping the development of AI; see the Wall Street Journal article (March 30, 2023).

What happens when we ask the AI to write a virus? What about bypassing the security checks of 3D printers, so that they can print weapons?

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
– First Law of Robotics (“Runaround”, Isaac Asimov, 1942)

A Boring Company

So, ChatGPT may be polite, but for an argument it is actually quite boring. Maybe we should not argue with AI; an argument requires the participants to feel empathy and have emotions, otherwise it’s just a logical exercise, right? But what happens when AI acquires feelings? What does that mean? Does it mean the AI understands emotions, or actually feels them?

This line of reasoning reminds me of the movie Sphere (1997) where Dustin Hoffman’s character makes a remark that the alien they are chatting with is happy.

I would be happy if Jerry has no emotions whatsoever…what happens when Jerry gets mad?

– Norman

So, maybe we shouldn’t think of an AI as having a moral compass. Maybe the fact that the AI is unable to argue is a good thing. If we cannot reason with it, we would (in theory) not be able to coerce it into doing things it should not (if those guardrails are in place, that is).

While doing research for this article I found another chat bot, called LaMDA, developed by Google. Here is an article about it being sentient (or so the engineer claims); have a read: https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/


So, I managed to write this article without augmenting my wording with AI. I would say: not bad… for a human? There are lots of questions in this article. There should be. We are on the brink of a revolution. Maybe we are already there; time will tell. What I can say with certainty is that AI is here to stay. It’s like a gold rush: the people who made money were the ones selling the tools, not the gold diggers.

What did you think about this article? What do you think about AI? Please leave a comment.

Some of the movies referenced in this article
