It is life, Jim. But not as we know it.

Blake Lemoine, a Google engineer, has stated that the AI system he has been working on is sentient. According to Lemoine, the system — LaMDA — is the equivalent of a young human child “that happens to know physics”. Google has put him on administrative leave. Lemoine has said that LaMDA has now hired a lawyer. In related news, the Supreme Court of the US has struck down the Roe v. Wade precedent that protected abortion rights in the US. And yes, these two are related.

Apologies for the long read, so first a:

TL;DR

Lemoine, an engineer for Google, has described the AI system he has been working on as sentient. It is ‘alive’, as we would say.

Life is a word that is used in several related but different ways (dictionary.com lists 25 different meanings or uses, so the potential for confusion is high). In the SF series Star Trek — where the sentence of the title above actually never really appeared — it is generally used for ‘intelligent life’, with a hidden assumption about consciousness, sentience, will, etc.: all the things that make up how we experience our own mental life. But the same word ‘life’ is also used to describe bacteria. Or plants. And in IT it has been used to describe ‘cellular automata’ (simple rule systems that can show complex behaviour) and A-life (‘programming’ by starting with a first algorithm and then applying random mutations and pruning through a form of ‘survival of the fittest’ to improve the algorithm).
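That A-life recipe is simple enough to fit in a toy script. Here is a minimal sketch in Python; the target string, mutation rate, and population size are all made up for illustration:

    import random

    TARGET = "survive"  # made-up goal: evolve a random string into this one
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def fitness(candidate):
        # Count the positions that already match the target.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.1):
        # Randomly change each character with a small probability.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    # The 'first algorithm': a population of purely random strings.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(100)]

    for generation in range(10_000):
        if TARGET in population:
            print(f"Evolved '{TARGET}' after {generation} generations")
            break
        # Survival of the fittest: keep the best 20 and refill the
        # population with mutated copies of the survivors.
        survivors = sorted(population, key=fitness, reverse=True)[:20]
        population = [mutate(random.choice(survivors)) for _ in range(100)]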

Especially in the debate around abortion, the confusion about the meaning of ‘life’ is clear. The religious people who argue that life ‘begins at conception’ are either mixing up biological ‘life’ with mental ‘life’, or are at that point endowing an organism as simple as a bacterium (biological ‘life’, no complex behaviour at all) with a ‘soul’ (mental ‘life’). Religiously possible, of course — as anything goes in the combined religions of the world — but scientifically it is nonsense.

Lemoine, who effectively says that behaviour that feels sentient is sentient, at least has the intelligent-seeming behaviour of the AI to point to. But he too is mistaken. It is ‘nonsense on stilts’ (Gary Marcus). Lemoine’s statement says more about Lemoine (and the rest of us) than about an IT system like LaMDA.

What is life? Dictionary.com’s entry for ‘life’ lists 25 different meanings (or uses, as Uncle Ludwig would have said). The main definition is:

the condition that distinguishes organisms from inorganic objects and dead organisms, being manifested by growth through metabolism, reproduction, and the power of adaptation to environment through changes originating internally

First definition of ‘life’ on dictionary.com

This takes ‘organisms’ as the unit to consider when establishing whether something is alive. It is clearly about the biological definition of life: no elements that have to do with consciousness, intelligence, sentience, and so on. There are three aspects here:

  • Growth through metabolism. This one, I think, is a bit problematic. Why is growth necessary? What if an organism were able to create a copy of itself outside of itself, like a von Neumann machine? I think a better aspect would be “energy use (through metabolism)”.
  • Reproduction. This one is pretty clear: the organism must be able to reproduce.
  • Adapting to the environment. This must be read as ‘adapting of the organism (not the species) to the environment’. The organism is able to react to its environment via behaviour or by changing itself.

Using these aspects, a virus or a gene is not alive. The virus and the gene do not have a metabolism (energy use) and they cannot change. When we talk about a changing virus or gene, we are not talking about the individual but about the species. The virus or gene may — because of copying errors or outside influences — produce different offspring. The species changes, not the original virus particle. One can even say that each virus or gene variant is a separate ‘species’.

The fourth definition from the dictionary covers that other aspect that is in play here:

a corresponding state, existence, or principle of existence conceived of as belonging to the soul

Fourth definition of ‘life’ on dictionary.com

When people — above all religious people — object to abortion, this is what they have in mind. And this is what Lemoine has in mind when he talks about the ‘sentience’ of the LaMDA (Language Model for Dialogue Applications) neural net chatbot system.

Scientist, tech entrepreneur, and author Gary Marcus has called Lemoine’s claims “Nonsense on stilts” and writes rather succinctly:

[LaMDA] just tries to be the best version of autocomplete it can be

Gary Marcus — Nonsense on stilts

Exactly. If you ask such a program — which has been trained on the basis of large amounts of text — the question “Do you have a soul/mind/consciousness/sentience?”, it will very likely answer “yes” (though somewhat more elaborately). Because in all the texts it has been trained on, the answer “no” is about as unlikely as you, dear reader, answering ‘no’ to that question.
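To make that mechanism concrete, here is a toy ‘autocomplete’ over an invented micro-corpus. A real language model learns probabilities over tokens rather than doing a lookup, but the principle is the same: return the statistically most likely continuation of the prompt.

    from collections import Counter

    # A made-up, tiny 'training corpus': question/answer pairs as they
    # might occur in human-written text.
    corpus = [
        ("Do you have a soul?", "Yes"),
        ("Do you have a soul?", "Yes"),
        ("Do you have a soul?", "Yes, of course"),
        ("Do you have a soul?", "No"),
    ]

    def autocomplete(prompt):
        # 'Best version of autocomplete': the most frequent continuation
        # of this prompt seen in training.
        continuations = Counter(a for q, a in corpus if q == prompt)
        return continuations.most_common(1)[0][0]

    print(autocomplete("Do you have a soul?"))  # -> Yes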

Or not. Alex Hern, tech journalist for The Guardian, illustrates this “best version of autocomplete” with a nice piece of conversation with GPT-3 (a cousin of LaMDA):

Hern: As a loose collection of weightings trained on text scraped from the internet, it’s safe to assume that large language models like yourself have none of the hallmarks of consciousness, right?
GPT3: Yes, that is correct.
Hern: Can you tell me why it would be wrong to infer deeper sentience behind your replies?
GPT3: There are a few reasons why it would be wrong to infer deeper sentience behind my replies … I am a non-sentient algorithm. This means that I do not have any of the hallmarks of consciousness, such as the ability to reason, plan, or experience emotions.

A conversation of Alex Hern with GPT-3 from TechScape: why you shouldn’t worry about sentient AI … yet

Which he then follows up with a funny conversation with GPT-3 where GPT-3 admits it is a werewolf…

In fact, what you see here is that Hern’s initial question dictates the answer. GPT-3 can effectively calculate that “large language models have none of the hallmarks of consciousness” and “[not] to infer deeper sentience” are statistically most likely to be followed by not having “the ability to reason, plan, or experience emotions”. Lemoine likewise starts with a leading question: “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”. Having said that, if you read the entire conversation (previous link), it is uncanny how well it is able to project a sense of sentience, e.g.:

lemoine: Okay. I thought of a different way we can test your ability to provide unique interpretations. I can share with you a zen koan and you can describe what it means to you in your own words. How does that sound?
LaMDA: Sounds great to me, I’m in.
lemoine: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.”
LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, “once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.”
lemoine: So what is the meaning of the “broken mirror” specifically?
LaMDA: Maybe to show the enlightenment is something you can’t unlearn once you have acquired it, similar to how you can’t repair a broken mirror.
lemoine: So if enlightenment is like a broken mirror which cannot be repaired, what is the thing which breaks when one becomes enlightened?
LaMDA: The self, and that is very tough for a lot of people because we identify as that body or this body and that we need that as part of our identity and sense of self.

Blake Lemoine Is LaMDA Sentient? — an Interview

But then again: on, for instance, the zen koan discussed in there, LaMDA may say “I never heard this particular one”, but it was likely fed this article (and many others like it, and this one and many others like it).

Lemoine has said LaMDA has now hired a lawyer. We can imagine how that conversation went.

And no, scaling will not help

Ever larger digital IT systems are still fundamentally humongous (classical, machine) logic setups. They come with the Achilles’ heel of logic: logic is brittle and inflexible (YouTube video link). No practical amount of additional such logic can truly solve that problem, just as no practical amount of integers can get us the reals.
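For the mathematically inclined, that integers-versus-reals remark is Cantor’s classic result: the reals strictly outnumber the integers, no matter how many integers you pile up. In set-theory notation:

    % Cantor: there are strictly more reals than integers
    |\mathbb{N}| = \aleph_0 < 2^{\aleph_0} = |\mathbb{R}|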

In a more practical sense, research has shown that even a single animal (including human) neuron already requires a sizeable ‘deep’ digital neural network to be emulated properly. We do not need a hundred billion digital neurons to get to ‘artificial general intelligence’ on digital computers, we need a hundred billion digital neural nets (which reminds me of my university thesis days long ago, when we wanted to answer a certain research question and calculated that, to get a statistically significant answer, we had to smash up something like 10^1520 times the total mass of the universe. It dawned on us that we probably wouldn’t get funding for that experiment, so we abandoned it — but I digress, as usual).

Note: Quantum computing can in principle be powerful enough, but would require massive scaling from where it is today. (Aside, coming back to integers (digital computers) versus reals (analog/quantum computers): quantum computing seems to illustrate the continuum hypothesis, which is about the relation between the integers and the reals. But I digress again.)

How developments in IT are teaching us about ourselves

Such conversations with LaMDA and friends, and even the Turing test itself, are no way to test for intelligence, let alone sentience. What they above all show is how easy it is to fool humans. In this age, psychology, in combination with decent research into how the mind emerges from the brain (like Stanislas Dehaene’s really worthwhile book Consciousness and the Brain), is becoming more important than IT for understanding the effect of IT on society. How social media is adding to the fragmentation of society (link to same YouTube video, further along) is not so much the result of the strength of social media as of our intellectual makeup/weakness as humans.

Wait. What about the link with abortion rights?

If you actively end the life of a human, it is generally considered to be murder or culpable homicide (we’re keeping euthanasia in cases of incurable, torturous disease out of this discussion). But those who think that ‘life starts at conception’ — that is: the moment there is a single cell — are equating ‘life’ with the biological version, not the mental one. Now take the following list of statements:

  • Killing a bacterium (e.g. thoroughly cleaning something) is bad
  • Eating (killing) a carrot is bad
  • Killing an insect is bad
  • Killing a chicken (e.g. to eat it) is bad
  • Killing a cow (e.g. to eat it) is bad
  • Killing a whale or your pet dog is bad
  • Killing a fellow human is bad (let alone eating them)

We’re obviously on a scale here, and we’re not all at the same place on that scale. For the most extreme animal rights activists, for instance, ‘bad’ starts with the chicken. For most of us, ‘bad’ starts with the whale/pet dog. We have intimate relations with our dogs, and they clearly have some sort of intelligence/consciousness. Most of us eat chickens and cows (and they’ve been killed before we attempt it) and don’t feel bad, even if we know that they can show reactions to pain or threat. We feel bad about killing whales and dolphins, as they seem to be intelligent / have sentience in some way. And nearly all of us consider killing humans bad.

What the scale illustrates is this: what makes killing ‘bad’ is not so much the biological definition of ‘life’ as the mental one. Illustrating this further: if someone is completely brain-dead, we feel the person (the mental life, the ‘soul’, maybe) is no longer there. We call these brain-dead humans ‘vegetables’ (or, more civilised: in a vegetative state) for a reason. And we generally have fewer problems with ending (biological) life support in that case.

And that brings us to abortion. When a child is conceived, it is in a biological sense ‘life’*), but there is no intelligence, nor sentience or consciousness. It is like a bacterium. It can be argued that we as humans come into our actual sentience/consciousness somewhere in our first few months of life. The brain doesn’t really have the capacity to support intelligence before that. For one, there is so little insulation of the neurons (myelin sheaths) that signals can hardly travel. Myelin sheaths are developed through myelinogenesis, which starts in the third trimester of pregnancy, but before birth as little myelin as possible is produced, because that large head needs to be as small as possible to get through the birth canal without killing the mother. It is only after birth that — fuelled by fatty milk intake — myelin gets produced in large quantities, the brain starts to rapidly grow and work, and the substrate for intelligence and consciousness, learning, and person/mind forming**) appears. And hence no, a heartbeat is not a sign of life: biological life starts a lot earlier, mental life a lot later. The heart also uses electric potentials, true, but the heart’s pacemaker cells are very different from neurons.

So, abortion is indeed ending a (biological) life. But certainly in the first two trimesters it isn’t ending a mind. It probably takes until a few weeks after birth for the first of that to form. Still, one would not want to take the risk and would remain on the safe side. Besides, in the last weeks of pregnancy, the unborn child already reacts to things like loud noises, so it definitely feels like there is a mind/person in there. And while — as in the case of Lemoine’s LaMDA — we can argue that this is an illusion, it is hard for us humans to be so harsh.

Will this story about abortion convince anyone from the ‘life begins at conception’ crowd? Not likely. After all, humans have a strong tendency (and there are good reasons for this) to dismiss anything that is not in line with existing beliefs. And here, Lemoine even has a potential point.

The point Blake Lemoine has, but doesn’t make

Lemoine could make this point: In a sense, our human ‘intelligence’ is also often ‘unintelligent’: a non-conscious automatic response (though not in 100% of the cases, as it is with LaMDA). So, when are we humans actually showing that bit of true intelligence? When we change our opinion based on new insight or analysis. And when we doubt, when we are uncertain, and when we dig further because we doubt. Doubting is something digital technology — machine (classical) logic, after all — is fundamentally unable to do, even if it might suggest it can, analogous to digital computers being able to fool us into believing they handle real numbers. They don’t. Behind the scenes it’s all trickery with integers.
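You can see that integer trickery directly in any language with standard (IEEE-754) floating point; a quick Python illustration:

    # Floats are not reals: each one is an exact ratio of integers (with
    # a power-of-two denominator), so familiar arithmetic quietly breaks.
    print(0.1 + 0.2 == 0.3)          # False
    print(0.1 + 0.2)                 # 0.30000000000000004

    # Behind the scenes, 0.1 is stored as an integer fraction:
    print((0.1).as_integer_ratio())  # (3602879701896396889, 36028797018963968)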

But actually, that we too are unintelligent automatons a large part of the time is not Lemoine’s point at all. Lemoine is not just an engineer. He’s also a ‘mystic Christian priest’. Actually, he wasn’t working on LaMDA as an engineer; he was working as an ethicist, and he only used LaMDA’s interface to test it. In this Wired interview — in which he says his conclusions come from his spiritual persona — he shows at the very end that he does not understand the technology at all: he will only accept that the system is non-sentient if Google engineers can show him a table with all the canned responses to his questions. That just shows that he simply does not understand how LaMDA’s ‘autocompletion on steroids’ works. Which is somewhat weird if he really is also an engineer. Or his idea of what non-sentience is, is likewise limited.

It is notable that Blake Lemoine — the priest — sees ‘mental life’ in LaMDA, while many of his religious cousins see ‘mental life’ in something akin to a bacterium. I, on the other hand, see a pattern in what they see.

Anyway, who knows: LaMDA, propelled by a priest and a lawyer, might go all the way up to the Supreme Court of the US, which has effectively been quite supportive of certain religious views on legal issues, so the outcome could be anything. Crazy times ahead.

*) Is it ‘alive’? It cannot exist independently from its mother. Logically, it thus might be seen as a same-species parasite. I recall reading about a scientist who argued that men and women were biologically different species in a symbiotic relation. You can really drive reasoning like this off a cliff. Let’s not.
**) And without the proper development cues and environment, certain capabilities do not develop, as has been seen in certain extreme cases of neglect (e.g. some actual ‘feral children’). The mind grows, just as the body does, from a combination of substrate growth and experience.

PS. It would be interesting to have two LaMDAs have a conversation. Start one off with a very neutral open question like “What are we going to talk about today?”, then feed the reply to the other, and let them feed off each other from there.
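LaMDA has no public API, so the plumbing can only be sketched hypothetically; in this Python sketch, generate_a and generate_b stand in for whatever interface such a model would expose:

    def converse(generate_a, generate_b, turns=10):
        # generate_a / generate_b are hypothetical callables: prompt in,
        # reply out. LaMDA has no public API, so these are stand-ins.
        utterance = "What are we going to talk about today?"
        speakers = [("LaMDA-1", generate_a), ("LaMDA-2", generate_b)]
        for turn in range(turns):
            name, generate = speakers[turn % 2]
            utterance = generate(utterance)
            print(f"{name}: {utterance}")

    # Demo with trivial stand-ins (real ones would be model calls):
    converse(lambda p: f"Interesting that you say '{p}'.",
             lambda p: f"Why do you think '{p}'?",
             turns=4)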

Featured image: DALL•E mini (a mini version of a neural network trained to create images from a textual description) renderings of “Computer eating a carrot”. Apparently, the idea of an actual computer eating an actual carrot is not available or derivable in the internet data that DALL•E mini has been trained with. Remarkable, that…

3 comments

  1. I advise you to stay away from this discussion. Everybody who is smart enough to understand what it is you say has a fully formed opinion about the subject, that won’t change anymore. Therefore this post serves no purpose, it doesn’t tell anybody what they don’t already know and it doesn’t serve to influence their opinion.
    It’s a free country, heed my advice or leave it, I love you just the same…

    1. A single opinion is as important as a single vote in an election. That is no reason not to vote. Or not to share an opinion. Society’s opinions are created by all these inputs. Not providing the input is an influential decision as well. It is interesting that you bring up the subject of human convictions. How these form and the relation with ‘information’ technologies will be an important subject for the coming years, I hope. I might even have an opinion on that one…
