Simulation and Sarcasm in Artificial Emotional Intelligence

Daniel White

In an uncharacteristic moment for me, I recently found myself texting with a friend about relationships. In the slightly mocking tone with which North American males typically express homosociality with one another, I typed “In the realm of love...” This was promptly autocorrected to “I’m the realm of love.” I don’t know what this means, but among all the possible meanings I surmised, I can confirm that I’m unmistakably not, nor have I ever nearly approached, the “realm of love.” Even more, I’m almost hyperconscious of this fact. As such, I wondered if my messaging application had perhaps learned enough about me to be conscious of this too, and whether the singularity hadn’t in fact arrived in the form of artificially intelligent mockery. (Hold on, “Is it, Siri?” [Ta tam 🎵] “Yeah, totally.”)


It turns out, however, that my curiosity about artificial intelligence potentially expressing consciousness through a display of vaguely anti-social impudence is not far off the mark. In his popular account of the recent history of interactive conversation software, or chatbots, Brian Christian (2011) tells the story of an annual staging of Alan Turing's classic thought experiment for evaluating whether a machine can think.


In this particular exhibition, highly criticized by some as more publicity than science, four human confederates are each paired against one of several leading chatbots entered into the competition. A panel of judges guesses which of the conversations are human and which are machine.[1] Known as the Loebner competition, after the controversial figure Hugh Loebner, the contest gives prizes not only for the "Most Human Computer," but also for the "Most Human Human," which names the human "confederate who elicited the greatest number of votes and greatest confidence from the judges" (Christian 2011: 10). Contrary to a variety of humanist traditions in the West that would cite the passions as that which most uniquely distinguishes human nature, and thus as that in which we (as Philip K. Dick suggests with his Voight-Kampff test) might most trust to guard against impersonation by artificial agents who may do us harm, it is apparently not our most authentically compassionate responses that betray our humanity when talking to chatbots but our most irritable ones. In 1994, one of the earliest awards for Most Human Human apparently went to Wired columnist Charles Platt. Curious to learn from Platt in preparation for his own role as confederate in the 2009 simulation, Brian Christian asked him how he did it. The answer: by "being moody, irritable, and obnoxious" (Christian 2011: 9). It was a (temporary) victory for the human race, if mildly incriminating of a particular North American subset of it.


Of course, the lesson one learns from this particular version of Turing's imitation game is not how to become more generally, ideally, or quintessentially human, but rather how to become more distinctively unlike a machine, in ironic contrast to a machine that, especially with recent advances in machine learning, has learned how to become human from us. The misleading divide between the nature of humans and the culture of machines thus becomes an idea that we ritualize through performances such as the Turing test and reify through subsequent instantiations of artificially intelligent machines. From a posthumanist perspective that has long argued that humans have always evolved in mutual dependence with technologies (from walking sticks and eyeglasses to the chatbots featured in Loebner's contest), the divide is a discursive construct that, while ontologically mistaken, is nonetheless highly generative of advances within the field of artificial intelligence.


That it is a kind of North American macho prickly sarcasm (as both an American and an anthropologist I believe I can get away with that description), rather than traditionally lauded emotions of compassion or love, that should most distinguish our humanity from machines has interesting implications for a particularly important area of AI research: the rapidly growing field of affective computing. Affective computing is a field of computer science initially and most programmatically formulated by MIT Professor Rosalind Picard. In her 1995 paper, today seen as inaugurating the field, she asks what it means for computers not only to “recognize and portray affect” but also to “’have’ emotion” (1995: 15). Offering her own assessment of the meaning of feeling for machine thinking, she writes, “Clearly, a machine will not pass the Turing test unless it is also capable of perceiving and expressing emotions” (1995: 2). As products of affective computing research start to exit the lab and enter industry, we are beginning to find out which of the emotions, from the esteemed to the despicable, matter most for generating and extracting profit from consumers.


A number of new companies are currently testing this question, such as Affectiva, Beyond Verbal, iMotions, and Emotient (which was purchased by Apple in 2016 and is responsible for the iPhone’s facial recognition feature). Central to the scientific success of affective computing, and to the profitability of the companies seeking to capitalize on the field's advances, is the ability to create technologies that are universally applicable.[2] So, for example, when researchers at Affectiva attempt to refine software that can better distinguish a smile from a smirk, so that Coca-Cola's advertising unit can discern reactions to commercials for Coke from those for Coke Zero, uniformity is important. Predictably, then, the psychological research most readily adopted is that which has most sophisticatedly illustrated the universality of facial expressions across cultures. One problem with such approaches, however, is that recent psychological research, such as that of Lisa Feldman Barrett, along with a good amount of the ethnographic record, suggests there is substantial cultural diversity when it comes to embodying, experiencing, and displaying feeling. Moreover, with artificially intelligent companion robots like SoftBank's Pepper making their way into homes, there is ample reason to at least consider the possibility that the kinds of feelings we will develop in relationship to them are quite unlike anything we have catalogued before (see White 2018), let alone anything that could be detected by software programmed to read a limited number of fundamentally universal emotions.


What Charles Platt's skillful exhibition of a very particular and contextualized version of North American sarcasm suggests for the future of affective computing and emotional AI is not only that technologies claiming universal applicability are problematic, but, more importantly, that particularity, perspective, and context appear in the eyes of the emotional AI industries not as cultural problematizations to be explored by researchers but as technical problems to be solved by industry. What this means for anthropologists working in posthuman environments like artificial intelligence or affective computing labs is that the present is a particularly critical time to catalogue the variety of particular and peculiar human traits that are being formulated as data for emotion machine learning programs.


My own assessment of these data and machine learning practices suggests what philosophers and historians of science have known for a while: that people and machines have evolved together and are in fact mutually entangled in processes of technological development. Just as with the chatbots featured in Loebner’s competition, the technical limitations of current emotional machines and social robots like SoftBank’s Pepper require the human subject to dramatically adjust his or her emotional expressions in order to be understood or “interpreted.” For Platt, the point is to exaggerate those most distinctively human traits in contrast to what current chatbots can do, which emphasizes our irritability; for users of Pepper hoping to be understood by his emotional engine, the point is to exaggerate facial expressions so as to be clearly captured by Pepper, which emphasizes our performativity. And as interactions with Pepper become more routine as he makes his way into more and more homes, it might also mean the embodiment of what we might call a very robotic “emotional repertoire” (von Poser et al. 2019).


Of course, this may be an entirely positive thing. The point of Christian’s book on chatbots is that we have a lot to learn about being human from machines. In my own research on emotion modeling within affective computing and social robotics, I am equally curious about what we can learn about feeling from robots. But I am interested in doing so less to draw lines between human and machine and more to test and assess the implications of the new affective relationships that will inevitably emerge as a consequence of our mutually dependent development. Within the resolutely hybrid realm of human-robot love, relationships with companion robots and other artificially emotionally intelligent machines require adjustments, as well as technological and performative compromises, in order to establish communicative and affective commensurability between human and robot. Ratcheting up human sarcasm in this process to outwit increasingly emotionally intelligent machines seems not only a bad idea, or bad engineering, but also bad ontology.


References

Barrett, Lisa Feldman. 2017. How Emotions Are Made: The Secret Life of the Brain. Boston: Houghton Mifflin Harcourt.


Christian, Brian. 2011. The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive. 1st ed. New York: Doubleday.


Picard, Rosalind Wright. 1995. "Affective Computing." M.I.T. Media Laboratory Perceptual Computing Section Technical Report No. 321: 1-15.


von Poser, Anita, Edda Heyken, Thi Minh Tam Ta, and Eric Hahn. 2019. "Emotion Repertoires." In Affective Societies: Key Concepts, edited by Jan Slaby and Christian von Scheve. London: Routledge.


White, Daniel. 2018. "Contemplating the Robotic Embrace." In More Than Human Worlds Blog, edited by Paul Hansen, Gergely Mohacsi and Émile St-Pierre. NatureCulture.


Notes

[1] In Christian's words, "Turing predicted that by the year 2000, computers would be able to fool 30 percent of human judges after five minutes of conversation" (2011: 9).


[2] One estimate predicts the “global affective computing market will grow from $9.3 billion in 2015 to $42.5 billion by 2020.”
