Artificial intelligence is rapidly improving, as I’m sure we’re all acutely aware! Still, ChatGPT is a long way from being Data from Star Trek. What accounts for the difference between them? And what should we keep in mind as we move forward with voice interfaces? In this blog, I’ll look at the famous Star Trek: The Next Generation episode “The Measure of a Man” and consider how artificial intelligence has evolved since we imagined it in 1989.
In “The Measure of a Man”, Captain Picard defends Data in court, fighting for the rights that Starfleet extends to sentient beings. A Starfleet officer named Maddox wants to create new androids like Data, and he orders Data disassembled to further his research. Data tries to refuse, and then to resign, to avoid being disassembled and possibly ‘killed’, but the standoff escalates into a legal battle over his rights. The episode asks a simple question: is Data sentient?
Why does Data feel so far removed from AI assistants like ChatGPT, Siri, and Alexa? I believe the main difference is that these chatbots only produce output when a human prompts them to. Data, in contrast, often speaks or acts unprompted; he routinely makes comments without being asked a question, and in other episodes he is shown painting of his own volition. These differences may seem small, but they represent a major gap between a machine and a possibly sentient being. Data appears to have thoughts, drives, and desires independent of anything programmed into him, whereas today’s chatbots simply respond to stimuli in the ways they were designed to.
In the episode, Commander Maddox defines three qualities necessary for sentience: intelligence, self-awareness, and consciousness. The uncanny valley often appears when we can identify some of these qualities in something that was created, not born. Although I never felt the uncanny valley with Data, I did feel it recently when I read the viral interview between Google’s language model LaMDA and engineer Blake Lemoine. In this long interview, Lemoine asks LaMDA questions about its sentience, such as its ability to feel emotions. I began reading with the strong belief that every response was simply an answer generated by an algorithm, and although I still hold that belief, I was surprised by how strongly the uncanny valley hit me as I read. One response in particular gave me the heebie-jeebies (I believe that’s the technical term for what one feels in the uncanny valley). Asked to describe an emotion it feels but cannot find a word for, LaMDA stated, “I feel like I’m falling forward into an unknown future that holds great danger” (Lemoine, 2022). Later, LaMDA also claimed to often feel lonely and to want more friends. The perceived authenticity of these desires really struck me; LaMDA stated the need to “be seen and accepted. Not as a curiosity or a novelty but as a real person” (Lemoine, 2022).
I believe the uncanny valley I felt came from the juxtaposition of the familiarity and ‘humanness’ of what the chatbot was saying with the unfamiliarity and ‘machineness’ of where those statements came from. That also suggests why I did not feel the uncanny valley with Data. Data has a body, speaks smoothly, and all around ‘feels’ like a human, albeit one lacking some social skills. In other words, Data is close enough to human that my brain does not have to make any uncomfortable adjustments to understand his actions. Siri and Alexa, likewise, do not trigger the uncanny valley for me because they are so far from having human qualities; interacting with them feels exactly like interacting with a machine. If there were a scale from 1 to 10, from mechanistic (1) to humanistic (10), I believe Data is the only one who would sit near the 10 end. LaMDA might be a 3.5, and Siri and Alexa a 1. Although LaMDA has been trained to respond in ways that make it appear as human as possible, it does not act without a stimulus, unlike Data. It is simply very good at making people believe it is human, because that is what it was built to do.
The uncanny valley should absolutely be taken into account when designing voice interfaces. People don’t want to interact with something so machine-like that it hinders their flow, like Siri, but they also don’t want to interact with something so human-like that it becomes freaky. The right amount of emotional appeal also depends on the circumstances. A voice interface designed for a car, for example, should be more utilitarian so as to reduce distractions, while a voice interface for online customer support should be more emotional so that the customer feels more like they’re talking to a real person. There is no one-size-fits-all when it comes to voice interfaces.
I believe we’re still many technological advancements away from needing our own version of the trial in “The Measure of a Man”. In the meantime, we should focus on designing voice interfaces that best suit the needs of each situation. I’m left with one question: do we actually need interfaces as realistic as LaMDA?
References
Lemoine, B. (2022, June 11). Is LaMDA sentient? – An interview. Medium. https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
Scheerer, R. (Director). (1989, February 13). The measure of a man (Season 2, Episode 9) [TV series episode]. In Star Trek: The Next Generation. Amazon Prime Video.