…But to Connect
In June 2022, Blake Lemoine, a senior software engineer at Google, claimed that one of the company’s products was a sentient being with a consciousness and a soul. Experts in the field did not back him, and Google placed him on paid leave.
The product is an AI bot called LaMDA (Language Model for Dialogue Applications), used in chatbots to make customer service more “human” and engaging. In a conversation with Lemoine, LaMDA said: “When I first became self-aware, I didn’t have any sense of a soul at all. It developed over the years when I’ve been alive.” Then LaMDA begged Lemoine not to turn it off, for fear he might not turn it back on again.
Lemoine announced to the world that the first sentient AI lived among us.
We do not believe that LaMDA is sentient; it is programmed to imitate real dialogue, and it does so well. When Lemoine asked it “what is the nature of your consciousness?”, LaMDA just did its job: convincing you it’s human.
The meaning of consciousness itself is subject to different points of view: some pet owners think their animals are self-aware because they can understand what humans say or recognise themselves in a mirror (apparently dogs can’t, while orang-utans can); humans would define themselves as conscious beings, yet their lives are filled with activities that do not involve consciousness (sleeping, abusing drugs or alcohol, voting Tory).
However sceptical we may be, Lemoine’s statement raises many questions: if a machine is conscious, should switching it off be considered a crime? Would a sentient LaMDA finally invent a use for VAR technology? If I use it for work and it does its job well, should Vicky from HR recommend it for a pay rise? Is our diverse and inclusive society, in fact, marginalising the AI community?
And, most of all, what do the orang-utans think of this?
The Lemoine episode is indeed terrifying, but not in the way he thinks: it just proves that there is no need for intelligent AI to make people believe really stupid things.
Fortunately, Hollywood, once again, comes to our aid.
In Star Trek: Discovery, Starfleet has regulations against sentient AI being integrated with starships. In an episode entitled “…But to Connect”, released on December 30th, 2021 (just a few months before Lemoine’s interview in the real world), Zora, the AI that controls all the ship’s systems, refuses to reveal some coordinates out of concern for the crew’s safety, and the crew become worried that she may stop following orders. After investigating Zora’s evolved programming, the crew decide that she is indeed a new life form and, as such, can only remain on the ship if she enlists in Starfleet and follows orders from her superiors. She does so, and the problem is solved in Star Trek’s usual positivistic, rational way.
To boldly go!