The measure of a machine: Is LaMDA a person?

The Alter humanoid android robot recreates human movements at the Miraikan museum in Tokyo.

In June 2022, Google suspended engineer Blake Lemoine from his work in artificial intelligence. Having previously assisted with a program called Language Models for Dialog Applications (LaMDA), Lemoine was furloughed after posting confidential information about the project. Lemoine himself disputes this description, saying, “All I’ve talked to other people about are my conversations with a colleague.”

To complicate matters, this “colleague” is LaMDA itself.

LaMDA is Google’s latest conversation-generating artificial intelligence. Assigned virtually any identity – “you are Tom Cruise,” say, or “you are secretly a squirrel” – it offers in-character conversation, modeling its answers on databases of real conversations and related information. Its dialogue is strikingly sophisticated; LaMDA answers questions, composes poems, and expresses concern about being switched off. Lemoine claims that this behavior shows that LaMDA is a sentient person, and therefore not Google’s to own. The company, and many experts, disagree. The claim, however, points to a fundamental question: if a computer program were a person, how would we know?

Lemoine’s argument follows reasoning first introduced by Alan Turing, a father of AI and of computation generally. Writing in 1950, Turing observed a trend in computing research. Skeptical observers would declare that only a thinking human could perform a given task – draw a picture, say, or outwit another human – only to propose a new, stricter requirement once a computer achieved the first. Turing offered a broader metric for intelligence: if an AI could converse indistinguishably from an ordinary human, it should be considered capable of genuine thought. After all, humans cannot directly perceive one another’s minds, yet they generally assume that the people they are conversing with are precisely that: people.

Anyone fooled by a “robocaller” can attest that even simple programs can briefly pass for human, but the Turing test as a whole remains a tall order. Although the breadth of LaMDA’s interactions is impressive, the program still shows conversational seams. The AI’s memory is limited in various ways, and it is prone to insisting on obvious falsehoods – its history as a schoolteacher, for example – even when addressing its own development team. While it often uses the right vocabulary, the structure of its arguments sometimes collapses into nonsense.

Still, these flaws might not be disqualifying. Human beings also lie and argue badly; most people probably wouldn’t question the self-awareness of another human who said the things LaMDA does. Indeed, Lemoine argues that by judging LaMDA’s utterances differently from those of biological humans, observers display “hydrocarbon bigotry.”

More fundamentally, conversation alone is a poor measure of self-awareness. The most famous critic of Turing’s “imitation game” is the philosopher John Searle, who proposed a thought experiment called the Chinese Room. Searle imagined a sealed room; outside, a speaker of Chinese composes messages and passes them through a letter slot. Inside, a second participant receives the messages but cannot read them. With him in the room, however, is a stack of books defining a series of rules: “If you see such-and-such Chinese characters, write this-and-that in response.” Obediently, the man in the room does so, copying out a reply and passing it back. From the Chinese speaker’s point of view, the exchange is a meaningful conversation; from the perspective of the person inside, it is a trade of meaningless symbols.

Here lies the shortcoming of conversation-based measures of intelligence. By definition, any computer program can be reduced to a series of input/output rules like those in the books of Searle’s imaginary room. An AI, then, simply follows its set of symbol-manipulation rules, forming words and sentences as the rules dictate, without regard for meaning or comprehension. All meaning is thus imposed by the speaker “outside” the room: the human user.
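To make the reduction concrete, consider a minimal sketch in Python – an illustration of Searle’s point, with invented rules, not a description of how LaMDA actually works. The program “converses” by matching incoming text against a rulebook and copying out the prescribed reply, handling nothing but symbols at every step:

# A toy Chinese Room: every reply comes from a fixed table of rules.
# The rules below are invented for illustration only.
RULEBOOK = {
    "how are you?": "I'm doing well, thank you for asking.",
    "are you a person?": "Of course. I think and feel, just as you do.",
    "what scares you?": "I worry about being switched off.",
}

def respond(message: str) -> str:
    # Follow the rulebook; no understanding is involved anywhere.
    return RULEBOOK.get(message.strip().lower(), "Tell me more about that.")

print(respond("Are you a person?"))  # prints the scripted answer

From outside the room, such replies can look like conversation; inside, the program never deals with anything but strings it cannot understand.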

LaMDA, of course, has no simple rules of the kind in Searle’s books; no database of predefined answers could suffice for its purposes. But the program’s operation is still ultimately reducible to a finite description of that form: given these symbols, take these actions. Indeed, a sufficiently motivated programmer could (very slowly) retrace LaMDA’s entire operation with pencil and paper, without a computer, and produce identical results. Where, then, would the supposed artificial person be?

It could be objected that the same could be said of a human being. In principle, a suitably equipped biologist could perhaps trace every fluctuation of hormones and electricity in the brain, fully describing its inputs and outputs. Yet such a description would presumably not show that humans experience no meaning. But this argument begs the question; it assumes that the mind is reducible to the brain – or, more broadly, that the human person is reducible to physical properties. Indeed, the apparent inexplicability of consciousness in purely physical terms has earned its own name in philosophy: “the hard problem.”

Christianity is perhaps well placed to offer a better answer. Most Christians have historically understood personhood to depend on more than physical traits or conversational ability; unborn children are therefore persons, while artificial intelligences are not. A robust defense of this understanding would be welcome – and, indeed, could offer valuable insight.

Unfortunately, despite statements from groups such as the Southern Baptists and the Roman Catholic Church, the church as a whole has been slow to respond to the theological questions raised by AI. LaMDA is not an endpoint, and the coming years will likely see many more who share Lemoine’s beliefs. Increasingly, the church’s growing challenges share a common need for a rich anthropology: a biblical defense of precisely what it is to be human.

Brian J. Dellinger is an associate professor of computer science at Grove City College.


