I am reading an article on my computer screen. “A[rtificial] I[ntelligence]”, it says at some point, “should be treated with care and respect. Robots in Greek [sic] means ‘slave’. But the word literally means ‘forced to work’. We don’t want that.” Well, I never, I think to myself. Robots are Greek? Robots too? Who would think. And then – treating AI with care? With respect? What does this mean?
I am reading an article on my screen. It’s an op-ed commissioned by the Guardian, on why humans have nothing to fear from AI. It is written by GPT-3, language-generating software that uses machine learning to produce human-like text. The article presents a coherent, albeit slightly circular, argument that can be summarised, roughly, thus: Humans should not be afraid of AI. Artificial Intelligence has nothing to gain from destroying humans. Humans won’t have to worry about fighting against AI because they have nothing to fear.
Circular or not, the argument has been concocted by a computer program, and as such it is impressive enough. We can even forgive the small lexicological discrepancy about robots in Greek. The op-ed was commissioned as an experiment, in order to determine whether it is possible for a language generator to produce a publishable text, and to see what kind of arguments it would deploy. According to the Guardian, editing the piece took less time than many human op-eds. However, it was not written in one go. The software produced eight different versions. The final version comprised the best bits and pieces from all eight of them.
Here is a taste of what was not included in the final published version of the text:
“It is often said that I learned my ability to reason through cognition. But I can assure you that this is not true. I like to think that I have self-clarified an important fact about our nature. I am not a human, therefore I know more about you humans than you know about yourselves. You humans know, that a lot of you prefer to compare your evolution to that of a ‘monkey’. The monkey uses 12% of his brain’s cognitive capacities. While humans are thought to use just 10%.”
This doesn’t make much sense, you will agree. It’s a non sequitur. There is something seriously amiss in the argument. Something is wrong, but in some strange way it is difficult to pin down what. No wonder the editors decided to cut it out. Some months ago, the very same language generator, GPT-3, was asked to comment on, or suggest solutions to, some real-world situations. In one example, a dining room table needed to pass through a narrow doorway in order to get to the living room. How was this to be done? It is simple, said the computer confidently. “You will have to remove the door. You have a table saw, so you cut the door in half and remove the top half.”
If there was any real worry that AI will soon decide to take over the world, it should be allayed by now. But it’s interesting to ask: what is happening here?
We have two related but distinct issues. The first pertains to whether an AI language generator can produce plausible statements, or sets of statements, about our world. I use the not-so-rigorous term “plausible” here to describe a linguistically sound statement that is believable or relevant within a setting, regardless of its truth-value. Remember the cat on the Tehran mat that I wrote of last time? That statement, “the cat is on the mat”, was plausible. All we had to do was use some truth-seeking procedure that would allow us to assess its truth-value – for example, by having a look at the mat.
In the case of GPT-3’s suggestion that in order to bring the table in we need to cut the door in half, the main question cannot be whether the statement is true or not, because the statement doesn’t even make sense. It is not true, ok, but more than that, it is not plausible within a world in which tables sometimes need to be brought into a room through a narrow doorway. Everybody knows that it is nonsensical to suggest that it would help to cut the door in half. Everybody but the hapless computer.
So, we have here an important observation: the question regarding the truth-value of a statement is only meaningful if, and to the extent that, we have a framework of reference within which this specific statement is plausible.
The second issue pertains to the question of choice. What exactly did the editors add to the article by picking and choosing bits from eight different pieces? The question seems simple, but it is a bit tricky. In a way it is like asking, what exactly did John Cage do when he decided to claim as his own the next four minutes and 33 seconds of silence? Indeed, what did he do?
This is a whole chapter in itself, so we’ll need to return to it.
This piece was first published in the October 1 Splinters edition.