CONCEPTS
Some of the key concepts guiding the research approach of Model Emotion
MODEL
What is a model? Unpacking an answer to this seemingly simple question reveals the variety of curious ways that different habits of thought, cultural values, and technological creations become entangled in cultures of emotional robotics engineering. Because the central focus of Model Emotion is to analyze the different ways that people theorize, model, and embody emotion in assemblages of mind, body, and machine, we outline below three different meanings of the word “model” that we consider through our research.
First, a model refers to the actual models of emotion that engineers draw on or build in order to equip a robot or machine with artificial emotional intelligence. These can take the form of conceptual architectures and blueprints for how an emotion-sensing machine could theoretically work in a system, such as the OCC model, the PAD emotional state model, Plutchik’s wheel of emotions, Ekman’s Basic Emotions imagined through an emotion Atlas, or James Russell’s circumplex model. The term can also refer to the algorithms in software and the technological hardware that operationalize artificial emotional intelligence, such as the cameras and software that allow a robot like SoftBank’s Pepper to read human facial expressions. These models, in both their theoretical and mechanical forms, serve as a primary object of ethnographic research and constitute our first meaning of the word “model.”
Second, drawing on early work on the anthropology of emotion (Levy 1973, Rosaldo 1980, Abu-Lughod 1986, Lutz 1988, Schieffelin 1990) and on more recent and diverse work in the anthropology of ethics and value (Faubion 2011, Graeber 2001, Laidlaw 2014, 2018, Robbins 2012), a model can also refer to moral and ethical discourses in which emotions are evaluated as exemplary, ideal, or otherwise “model.” As social scientists have long documented, emotions are constructed through the discursive arrangements and rhetorical ways that groups of people order the value of particular ways of speaking about, representing, and expressing emotion. Referencing more recent work from affect theory, we also add to these dimensions the ways that different social hierarchies of emotional value structure how feeling can be embodied somatically and non-consciously. This is the second primary meaning of the word “model” in our research.
Third, “model” also refers to the practices of subjects mimicking, patterning, and otherwise “modeling” their own modes of emotional expression and communication after others. This notion of model grounds our analysis of emotion in the “affective practices” (Wetherell 2012) in which particular somatic routines become fixed in both social and somatic tissue. This is an important perspective to incorporate in order to understand how affects can be both social and somatic, cultural and biological, discursive and dynamically non-representational. This idea of a model as an organizing device of relational practices and development constitutes the third key meaning of the word “model” in our project.
CULTURE
Culture is a notoriously difficult word to define. Model Emotion is rooted in the disciplinary perspective of cultural and social anthropology, a field that applies long-term fieldwork to understand the varieties of cultural experience around the world. Although we are sensitive to the value of anthropology’s historical roots that, with thinkers like E. B. Tylor, defined culture as “that complex whole which includes knowledge, beliefs, arts, morals, law, customs, and any other capabilities and habits acquired by [a human] as a member of society,” we also find value in approaches that enable more granular, complex, and critical perspectives on what exactly holds such a society and its members together. Thus Clifford Geertz’s definition of a culture as those “stories we tell ourselves about ourselves” relativizes this “we” to shifting currents in constant flux in relation to global movements of people, capital, technology, ideas, and the material and environmental consequences of our contemporary modes of production. To complicate matters further, we also note the value of recent discussions of culture by scholars such as Donna Haraway, Eben Kirksey, and Tobias Rees, who encourage us to think about anthropology as a discipline that might not be focused centrally on the human at all, but rather on its part-to-part connections with other animals, microorganisms, environments, and even technological objects like companion robots. Given our project’s focus on affect and emotion, we thus understand culture as that assemblage of social-material arrangements that gives rise to different affective capacities to relate and respond to others.
ARTIFICIAL INTELLIGENCE
Artificial intelligence is a key concept in our work not because it is a robust and precise philosophical term, but precisely because it is not. Because the term refers to a loosely arranged set of technologies that shift according to social contests and negotiations over what counts as machine “intelligence,” it serves as a useful index for understanding the particular hopes, desires, fears, and other sentiments and ideas driving the automation of intelligence in machines. This ambiguity allows particular arrangements of social values and technological innovation to become entangled in the creation of so-called intelligent machines. Our purpose as anthropologists is to disentangle these techno-social threads of emerging systems with artificial emotional intelligence in order to better understand them, and thus better inform those implicated in their increasing presence and agency in our lives.
AFFECT AND EMOTION
Affect and emotion are two words with long and contested histories. Most simply put, we understand affect as those automatic and non-conscious capacities produced, circulated, and embodied within arrangements of social and material culture. Emotion is the representation and communication of those affective capacities in thought, discourse, sign systems, and material objects. However, although we articulate the difference between affect and emotion, what we are most interested in is the dynamic relation between the two. Based on the premise that the gap between what we feel (affect) and what we know about what we feel (emotion) serves as a productive cultural site for a variety of ethical, aesthetic, political, economic, and technological projects, Model Emotion explores the implications of those projects for political belonging, personal inquiry, and the potential for cultural theory within both the human and hard sciences.
WELLBEING
Like another of our project’s key concepts, artificial intelligence, wellbeing is a concept in flux, changing in relation to the different social pressures that grant the term its meaning, value, and function in a particular time and place. That wellbeing has attracted as much attention in technology discourses in Japan as it has elsewhere in the world points to globally interdependent networks not only of expert discourses but also of the social, economic, and emotional pressures to which they respond. Although “technology” and “wellbeing” have often been opposed to each other in recent times, in a number of experimental settings we observe in labs from Tokyo and Kyoto to Cambridge, Berlin, Boston, and San Francisco, many engineers are thinking about how to leverage technology to increase health, emotional intelligence, and affective wellbeing. In our research we trace the emergence and circulation of this latest understanding of wellbeing and explore how it changes relative to new technologies equipped with artificial emotional intelligence.