Over the past few decades, there have been significant advances in the fields of robotics, artificial intelligence, and affective computing. Robots driven by AI have become integrated into everyday life, performing simple tasks and interacting with people in homes, schools, shops, and workplaces. Moreover, there is a growing trend to design robots with human-like bodies that replicate voices, facial expressions, and gestures.
1. Introduction
Robots can be equipped with sensors to gather information, learn from past experiences, and employ their reasoning skills to react and produce responses in real time. However, robot reactions currently depend on programming rather than thought processes. Although human-like customization is a technological challenge, a greater challenge is to transfer human feelings, emotions, and desires to robots, given the complexity of these concepts.
This exploration begins with the emotional sphere and evaluates how it can be transferred to robots. The ideas of Charles Darwin, William James, and Paul Ekman are analyzed to identify the essential features of human emotions. These features are then associated with the capabilities needed by robots to implement emotions. Next, an overview of the main lines of research is provided on how to equip robots with the capabilities for the perception, generation, and expression of basic emotional states. Finally, a scenario is developed on how to transfer feelings and emotional desires to robots.
1.1. Background and Rationale
Robots are machines that can be programmed to carry out complex tasks either autonomously or under human supervision. They are either physically embodied devices or virtual agents that can be operated or deployed from a central system or by a remote operator. Robots are drawing more and more attention as they enter our homes, workplaces, and public spaces as devices for cleaning, assisting the elderly, performing surgery, or even teaching. Several entertainment robots can already be found on the market. The evolution from standard mechanical systems, regarded more as tools, to socially interactive robots requires a better understanding of the role of emotions in human-human interaction. Emotions play a very important part in understanding human behavior.
Human emotions can be arranged on a wheel, much like colors. Distinct emotions such as happiness and sadness sit opposite each other, while others, like surprise and anger, lie closer together. Two dimensions characterize emotion: valence and arousal. Valence describes the positivity or negativity of the emotion, while arousal describes its intensity; in other words, whether its activation consumes or supplies energy. Emotion can also be defined as observable physiological changes or behavioral traits together with the associated experience (feeling). Emotion normally affects cognition (attention, perception, memory) and can drive action (motivation). Emotion is largely unconscious and automatic: a person sees a bear and, before even realizing it, his or her heart starts to beat faster. Moreover, once emotions are activated, they can affect judgment, cognition, and behavior. Happiness can make a person more trusting, confident, and creative, while anger can increase competitiveness and risky behavior. Emotions are expressed through changes in posture, facial expressions, and tone of voice. Since each emotion has physiological correlates, these changes can be measured with bio-inspired sensors.
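To make the valence-arousal description concrete, the sketch below represents an emotional state as a point in that two-dimensional space and reads off a coarse label from its quadrant. It is a minimal illustration; the quadrant labels and the 0.5 arousal threshold are assumptions, not an established taxonomy.

```python
from dataclasses import dataclass

# Minimal sketch: an emotion as a point in the valence-arousal plane.
@dataclass
class EmotionState:
    valence: float  # -1.0 (negative) .. +1.0 (positive)
    arousal: float  #  0.0 (calm)     ..  1.0 (highly activated)

    def quadrant_label(self) -> str:
        """Map the (valence, arousal) point to a coarse emotion family."""
        if self.valence >= 0:
            return "excited/happy" if self.arousal >= 0.5 else "content/relaxed"
        return "angry/afraid" if self.arousal >= 0.5 else "sad/bored"

# A positive, high-energy state reads as happiness-like.
print(EmotionState(valence=0.7, arousal=0.8).quadrant_label())  # excited/happy
```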
Consider a human-human interaction feedback loop of interpreting, predicting, and reacting to emotional behavior. A human sees a person with a happy face (stimulus). He or she interprets (cognition) that it is a positive emotion (valence) with high energy (arousal). From that, he or she predicts (cognition) that the person will perform behaviors such as smiling, laughing, talking, or moving quickly. A happy person is expected to extend a party invitation or pay a compliment. Interpreting and predicting are not conscious tasks; rather, a person has an implicit conception of normal behavior in a given context. Thus, interpreting, predicting, and cognition are tightly related. On the other side of the loop, the first behavioral reactions to the stimulus are the conscious tracking and evaluation of the emotion (perception), which implies the activation of a cognitive appraisal. For example, a person who feels glad to pass an exam will automatically reconstruct the preceding events as desirable, in addition to processing more related information (focused attention). One positive effect of this cognition is to draw attention to dynamic positive experiences. The whole interaction is conditioned by physiological behavior.
1.2. Scope and Objectives
When human creators make robots today, they still put feelings into the process, more or less consciously. The main goal of this work is to investigate the concept of transferring human emotions into robots, including the technical and ethical issues that may arise when such applications cause emotional disturbances. Additionally, it attempts to delineate the boundaries of experimentation with robot design so as to preserve non-vegetal, emotionally cold, and colorful creatures as relaxation companions. Some problems and obstacles of this vision are demonstrated. The technical aspects are discussed in the framework of a computer-simulated robot model of emotional evolution through implicit learning, implemented within a robot-centered design process. Such a robot would self-develop entropy-absorbing "dances" from proprioceptive feedback that captures evolved inner tension states. Increasing complexity in behaviors and mimicked emotions would be prompted by introduced noise disturbances in the sensory feedback. The resulting tense and relaxed attractor landscapes of behavior would be anthropomorphized by robot designers as emotionally colored. Implementing such a cellular automata model could produce a plethora of novel imaginings of emerging creatures. The model is non-theistic and leaves the ultimate responsibility in emotional robot design to its human creators.
Robots have become a symbol of both hope and despair, cast either as future saviors or as a crossroads to extinction. The responsibility placed on robotics development, social robots included, is non-theistic and may paradoxically evoke human-like sensations and anthropomorphization. Humanoid robots that try to mimic human expressions are seen by some as the key to a better understanding of robot emotional evolution, though this approach arguably uncovers emotions in humans rather than in robots. On the other hand, colorful non-humanoid robotic shapes could provide an empty canvas for the projection of a wider spectrum of emotions, more akin to those in the natural world away from humanity. Moreover, as robots developed their own perceptions and postures, attending broadly to phenomenal events, robot-generated alternative realities might not be easily deciphered by their human creators. There is a limit to how far the co-evolution process lets each side understand the other's phenomena: human imagination constrains the human-like sensations we can perceive and does not reach the wider, more colorful evolutions beyond it.
2. Understanding Human Emotions
Emotion is one of the most amazing features of human intelligence. Human societies are built upon emotional expressions, and the bonds formed from these emotions have been enhanced throughout history. Deposits of emotional experiences reflect the cultures of societies, and collections of emotional expressions are long legacies of experiences passed on to subsequent generations. Without emotional expression and recognition, it would not be possible to create social connections that provide societies with collective emotional experiences. To better understand the emotional expressions of others, human brains must have developed mechanisms that process such signals.
In the past decades, multidisciplinary attempts have been made to understand the networks of neural structures that monitor emotions, and a series of discoveries have unveiled the universal pattern of human emotional expressions. Considering the vital importance of emotions in human society, conveying human emotions to artificial systems should be a critical step in establishing a human-centered society. This would enhance the effectiveness of collaboration with automated systems, possibly allowing for a range of applications in schools, factories, hospitals, and households.
Emotional expressions and their recognition are of great importance in human daily lives and social interactions. The emotional expressions of different individuals have been extensively studied, covering facial expressions, vocal signals, postures, and movements. A considerable amount of information has been collected concerning sex, ethnicity, and culture. However, the analysis of emotional expressions and their recognition in regard to social environments and contextual information concerning the other individuals has been undertaken to a much lesser degree.
Brain information processing is influenced by the social environment and the emotional states of the other individuals, and questions arise concerning the principles of such information processing. Continuous efforts have been made to pursue a better understanding of the nature of emotions, and years of research in neuroscience, psychology, and robotics have provided a global network of neural structures and computational frameworks that recognize and generate emotions. Furthermore, it is anticipated that these inquiries will lead to collaborative systems possessing higher levels of interaction with humans.
2.1. The Nature of Human Emotions
Human emotions are often described as complex combinations of cognitive reactions, physiological changes, and behavioral responses that occur in reaction to thought-provoking stimuli. These external or internal stimuli may consist of past, present, or anticipated events and activate deep-rooted memories and instinctive patterns that trigger the emotional reactions. Additionally, emotions have a significant duration and can be explicated in terms of their causes and effects. During early interactions, emotions can be considered unconscious responses to particular stimuli influencing future conscious decisions. The capability to discover and analyze emotional patterns is an essential requirement for social and intelligent life, where recognizing and interpreting emotions is a pervasive everyday experience critical to survival. This natural and easy-to-perform task for humans is achieved through a “mental model” which detects and decodes intrinsic emotional dynamics into human-like features.
The flexible expression of emotions through adaptive behavior is at the core of human and social intelligence. Emotional agents maximize the global utility of their responses, taking into account both their individual perception of emotions and the emotional responses of the other agents. This reciprocal interaction modifies the state of both agents, influencing emotional perception. Cognitive inter-subjectivity is demonstrated through deep emotional bonding. Potential applications of this emotional sharing range from collective response to environmental changes in social systems or groups to the emotional interplay between agents in multi-agent systems, thereby enriching social interaction. Focus is placed on unveiling the basic emotional mechanisms at play in these scenarios. Most of the existing models of emotional perception, sharing, and interaction are considered either “top-down” approaches based on statistical physical tools that neglect the human-like emotional dynamics or “bottom-up” models of artificial social systems not inspired by the human emotional experience.
As in organisms, emotions in agents can be described as dynamical pathways through a state-space defined by basic emotional variables that each agent shares. The mathematical formulation of emotional systems provides valuable insight into the nature of emotions, such as their basic variables and notions like emotional temperature or its gradient. Though past research has proposed several mathematical models of emotion, it is crucial to generate emotion with much simpler approaches relying on a few parameters governing emotional dynamics. Machine modeling of emotions has gained attention in recent years and is of special importance in robotics. Affective characters and embodied agents have been widely developed with various emotional capabilities. Nonetheless, machines still lack the profound emotional versatility and depth of emotional experience characteristic of human-like emotions.
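As a worked example of such a low-parameter formulation, the sketch below treats the emotional state as a point in the valence-arousal plane that decays toward a neutral baseline and is pushed by stimuli. The governing equation dx/dt = -decay * x + stimulus(t) and the parameter values are illustrative assumptions, not a model from the literature discussed here.

```python
import numpy as np

def simulate_emotion(stimulus, decay=0.5, dt=0.1, steps=100):
    """Integrate dx/dt = -decay * x + stimulus(t) in the valence-arousal plane.

    stimulus: function mapping time t to a 2D push vector (valence, arousal).
    """
    state = np.zeros(2)           # start at the neutral baseline
    path = [state.copy()]
    for k in range(steps):
        t = k * dt
        state = state + dt * (-decay * state + stimulus(t))  # Euler step
        path.append(state.copy())
    return np.array(path)

# A brief positive, arousing event around t = 2 s, then nothing: the state
# rises and then relaxes back toward the baseline along a smooth pathway.
pulse = lambda t: np.array([1.0, 1.5]) if 2.0 <= t < 2.5 else np.zeros(2)
trajectory = simulate_emotion(pulse)
print(trajectory.max(axis=0))     # peak valence and arousal reached
```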
2.2. Neuroscience and Emotions
The world is an astonishing place, rich with colors, sounds, smells, and feelings. Some of these feelings, or emotions, are second nature to us, driving our behavior and decisions, influencing our relationships with others, and affecting the way we perceive our world. From a red tomato, you can feel hunger; from a sad song, melancholy. One might think that an emotion such as happiness is universal, depending on the same stimulus at all times and leading to similar, universal perceptions. But what happens when differently wired and programmed beings come into play? What happens when that tomato is perceived not by your eyes, but by a camera, and processed by a computer? Would it still be interpreted as hunger? Would it be possible for computers, machines, or robots to understand human emotions? A better question is whether it would make sense for them to understand this complex set of feelings. The cornerstone of this theme lies deep within the machinery of the brain, and yet understanding it only reveals another enigma wrapped in a mystery.
To understand the essence of emotions and whether it is possible to transfer or replicate that set of feelings to something that does not possess them, it is important to start from the very beginning: the brains of the beings behind the feelings. The human brain is as complex as it is beautiful. The inability to comprehend all of its functions, effects, and interactions has led scientists down the path of an astonishing journey. With landmarks such as the discovery of neurogenesis and the uncovering of long-term potentiation (LTP) and long-term depression (LTD), one can say that neuroscientists have come a long way. And yet, with regard to one of the most essential parts of being conscious, that is, the personal perception of feelings, things have come to an almost full stop.
The pioneering attempts by Damasio in the 1990s to tackle "the feeling of what happens" led to more questions than answers. Breakthroughs in using functional MRI to visualize the activity produced by stimuli are impressive, but on their own they pose further problems. What about plasticity? What about the emotional weight of a perception? How is it that, after robotic-arm training, a hand that had been "blind" for years can be visually perceived again? It is possible that fine motor behaviors in the representational space, actions without any sensory input, continued and resurfaced once a sensor became available. The agent then became conscious of that "phantom" behavior; a sense of agency awoke, a perception of executing an action, and with it came a "feeling" about it: good, bad, amusing, and so on. However, how is it that seeing a photo of, say, a tomato does not produce hunger in absolutely all circumstances? Somehow the field of view and attentiveness are modulated, but by what? The question remains open, down to the role of specific structures such as the planum temporale.
3. Emotions in Robotics
What makes a species intelligent? Is it their capacity to recall vast amounts of information? Is it their capacity to learn anything new? Skills like playing chess, recognizing images, or winning a game of Go are considered an indication of artificial intelligence. Yet, these skills are all cognitive and knowledge-based. Although cognition is important for intelligence, there are aspects of intelligence that are mostly ignored by contemporary artificial intelligence applications. Intelligence is not only about rehearsing the right answer; it is also about understanding and interpreting the consequences of this answer.
Broadly speaking, the consequences of thoughts and interpretations affect the behavior of individuals and change their emotional state. The evolution of each species created a distinctive social, sensory, and behavioral ecology alongside a specific intelligence. These evolutions are tightly connected, and the absence of one can degrade the others. For example, some birds rely on social learning, while others possess strong spatial intelligence that leads them to rely more on personal experience than on social learning. Studies on these animal cognitive ecologies reveal something that is not visible in artificial intelligence applications: all animal cognitive domains are strongly connected to the emotional mechanisms of the species. The intelligence of a species emerges from the animal's emotion and cognition ecology. Robots cannot acquire the full intelligence of the species they are mimicking because they lack the underlying emotional and behavioral ecology of those species.
The fear associated with Isaac Asimov's well-known Third Law is likely inapplicable to robots, since they cannot develop the human qualities of ambition, passion, love, or the different nuances of emotion. Hence, they will never be capable of the same scale of inventiveness as humans. Emotion and cognition complement each other in the social, sensory, and cultural ecologies of creatures. Understanding and interpreting the environment changes the social position of the actor, and this always has consequences for feelings. The absence of emotions limits understanding and radically affects behavior and cognition. There are things that cannot be understood without feeling. To construct the basis of emotional intelligence in robots, there needs to be knowledge of the mechanisms of both feelings and emotions.
On a basic level, emotions are an individual's behavioral reaction to an event in the outside world that the individual perceives as significant. Emotions are usually unconscious and automatic; individuals are seldom aware of them, at least not at the level of the brain regions where they are produced or through which they influence behavior. On a more complex level, one more related to social intelligence, emotions are role-centric feelings that result from the interpretation of particular circumstances. Emotions cannot be considered universal, as they inextricably depend on the configuration of the social relationship and its spatial and temporal evolution. Interpretation is always context-sensitive and nuanced by past experiences. Robots need to create both simple and complex emotions to allow for emotional capacity and socially informed decision-making.
3.1. Emotional Intelligence in Robots
In the last twenty years, a number of emotional robots have appeared, along with methods for detecting and modeling emotions such as joy, sadness, fear, and anger. An artificial emotional system tries to mimic a human's emotional system as closely as possible, with the goal of capturing the innate mechanisms that fundamentally underlie emotions. In this pursuit, individualized emotional robots would deliver empathy and rich emotional responses. An affective robot raises a number of concerns, such as the manipulation of emotions or the modeling of a child's emotional state. Robot companions that simulate emotions could displace human companionship and even disconnect users from human society. Even if emotional robots do not truly empathize, their behaviors might have an affective impact on human companions.
The emotional system described here is designed for emotion modeling, using techniques developed in previous projects, and the robot's emotional behavior is modeled through it. An affective robot companion has been developed that can model and display emotions and interact as a social companion. Its architecture combines an artificial emotional model with robot control capabilities, allowing the robot to interact with people in a more natural way and to show affective behaviors. The robot simulates primary emotions inspired by an artificial emotional model based on neurophysiological studies of human emotions. The emotional keyboard has six basic emotions: joy, anger, sadness, fear, disgust, and surprise; each emotion key is labeled with an emoticon and a number. Primary emotions are activated by either environmental events or goal events, and each type of event can activate positive, negative, or neutral primary emotions. Since the emotional system is new, the robot needs to make its inner emotional state visible to the user. A multi-modal approach has been adopted that uses speech, expressions, or gestures to display the robot's emotional state; the robot's emotional behavior depends on its current emotional state.
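A minimal sketch of how such event-driven activation of primary emotions might be organized is given below. The event-to-emotion table, decay constant, and clamping are illustrative assumptions, not the architecture of the system described above.

```python
BASIC_EMOTIONS = ["joy", "anger", "sadness", "fear", "disgust", "surprise"]

# Assumed mapping from environmental/goal events to primary-emotion activations.
EVENT_EFFECTS = {
    "goal_achieved": {"joy": 0.8},
    "goal_blocked":  {"anger": 0.6, "sadness": 0.3},
    "sudden_noise":  {"surprise": 0.7, "fear": 0.4},
}

state = {e: 0.0 for e in BASIC_EMOTIONS}

def process_event(event, decay=0.9):
    """Decay the current state, apply the event, return the emotion to display."""
    for e in state:
        state[e] *= decay                      # older activations fade
    for e, delta in EVENT_EFFECTS.get(event, {}).items():
        state[e] = min(1.0, state[e] + delta)  # clamp activation to [0, 1]
    # The dominant emotion drives the multi-modal display (speech, gesture, face).
    return max(state, key=state.get)

print(process_event("goal_blocked"))  # anger
```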
3.2. Current State of Emotion Recognition in Robots
Robots have made considerable advancements in their capability to detect and express human emotions in recent years. Traditional systems typically focused on one modality, such as visual or acoustic, but there has been a push for multisensory systems that analyze vocal, visual, and linguistic signals. This allows for much richer interpretation of an emotional state. Recent approaches have utilized state-of-the-art deep learning models, recognizing that there is no single best architecture or methodology for all modalities. Even within a single modality, success can vary by culture and according to how well the model’s training data represents the individuals it will encounter.
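Since no single modality or architecture wins everywhere, one common way to combine modalities is late fusion: each modality-specific model outputs a probability distribution over the same emotion classes, and a weighted average selects the final label. The sketch below illustrates this under assumed class names, weights, and model outputs.

```python
import numpy as np

CLASSES = ["joy", "anger", "sadness", "neutral"]  # assumed label set

def fuse(per_modality_probs, weights):
    """Weighted average of per-modality class probabilities (late fusion)."""
    total = np.zeros(len(CLASSES))
    for modality, probs in per_modality_probs.items():
        total += weights.get(modality, 1.0) * probs
    return CLASSES[int(np.argmax(total))]

# Vision is confident, audio less so; the weights would be tuned per
# deployment, since no single modality dominates in every setting.
prediction = fuse(
    {"vision": np.array([0.7, 0.1, 0.1, 0.1]),
     "audio":  np.array([0.3, 0.4, 0.2, 0.1])},
    weights={"vision": 0.6, "audio": 0.4},
)
print(prediction)  # joy
```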
Robots have begun to use low-cost cameras and microphones instead of expensive and complex equipment, which puts emotion recognition within the reach of service robots and smart homes. There is currently a mixture of belief and disbelief about the robots of tomorrow. However, there is hope and optimism that with sufficient research and funding, social robots that understand and recognize emotions will progress rapidly and be brought to market in the years and decades to come.
The industry is moving towards systems capable of detecting and interpreting human body language, gestures, and posture. Considerable trials are underway today with emotion-aware tutors in education, assistive robots for the elderly in healthcare, and humanoid robots for engagement in entertainment and the arts. In recent years, a number of emotionally aware and affective systems have been developed for everyday applications. Research in this field continues to grow rapidly due to the increasing acceptance of social robots, particularly in care, education, and entertainment.
There is hope and optimism for social robots that understand and recognize emotions, due in part to the recent availability of big data captured in everyday interactions and rapid advances in deep learning techniques. It is important to recognize the distinction between understanding and recognizing; there are many tasks that can be performed without any understanding of what is taking place.
4. Challenges and Ethical Considerations
As increasingly sophisticated robots emerge in our societies, questions arise about the goals of robotic empathy, including concerns about inauthentic and mechanical responses. There is additional curiosity about whether these responses will be triggered by the mechanistic appropriation of a set of responses to emotions recognized through AI, regardless of whether these emotions and motivations are part of the robots themselves. Do empathy and emotions belong to robotic entities of this sort? Will it be generally acknowledged that the underlying processes are fundamentally different in humans and robots? If robotic agents become a means of social control, will their use remain conscious and critical? Awareness of, and trust in, the underlying software architecture and its purposes will be required if the appropriation of emotional agents is not to be reduced to a cheap trick.
As basic emotional processing, such as non-directive listening and vocal mirroring, could give rise to and legitimize further forms of machine power over human interactions, a more philosophical question arises about what it would mean for emotions, motivations, and desires to exist in machines. Human self-experience and other-experience arise, are sustained, and are transformed through complex interactions between the basic biological underpinnings of emotional competence and cultural practices. Outside the academic sphere, it remains to be seen what social situations this would throw people into, including anxieties, indignation, and moratoriums.
A historical outlook shows how emotional intelligence stems from affectivity, and it highlights the specificity of fleshy subjectivity and its interlocking with the formation of one's identity, subjectivity, feelings, and emotions. In conjunction, the realization that biology and culture blend and co-construct one another introduces an important uncertainty and fragility into ascribing culture, understanding, relationship, the existence of empathy and emotions, and ownership or recognition to machine entities. This appears as a fundamental philosophical inquiry that should not be reframed as a merely ethical one.
4.1. Technical Challenges
Adding emotions to a robot comes with several technical challenges. To create robots that can recognize and respond to emotions, developers need to address issues related to sensing human emotions, expressing emotions, making emotional decisions, and working in real-world conditions. Detecting human emotions usually starts with cameras and microphones, but these sensors can be fooled and may not work in some environments, and creating new sensors would be very expensive. Even when emotions are detected, it may not be appropriate for a robot to express certain feelings; for example, it would be unsettling for a robot to express hatred at all.
Another problem is that it may not always be useful for a robot to express a particular emotion. If a robot shows excitement about an event that a human considers bad, it could accidentally trigger a conflict with that human. In some cases, robots may need to show complex emotions: for example, a robot may want to express sadness about something bad while secretly still feeling angry about it. Constructing an emotional decision-making system for a robot would be particularly difficult. Additionally, robots must learn to work with senses that humans do not possess or rarely use. It could be more difficult to read the social nuances of teams composed only of robots, or of working conditions that a human would find uncomfortable, such as very loud noise or excessively bright colors.
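One way to frame the expression side of this decision problem is as an appropriateness filter between what the robot "feels" and what it displays. The sketch below is a deliberately simple illustration; the suppression and substitution rules are assumptions, not a proposed standard.

```python
from typing import Optional

# Assumed policy: some felt states are never displayed, others are replaced
# by a socially safer surrogate when the human appraises the event negatively.
SUPPRESSED = {"hatred"}
SUBSTITUTIONS = {"excitement": "concern", "anger": "sadness"}

def choose_expression(felt: str, human_appraisal: str) -> Optional[str]:
    """Return the emotion to display, a substitute, or None (no display)."""
    if felt in SUPPRESSED:
        return None                      # never express this state
    if human_appraisal == "negative" and felt in SUBSTITUTIONS:
        return SUBSTITUTIONS[felt]       # avoid celebrating a bad event
    return felt

print(choose_expression("excitement", "negative"))  # concern
```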
The development of emotionally competent robots remains experimental and constantly looks for the best pathways to create machines that can deal with emotions as humans do. Understanding how the human brain itself handles emotions is still an experimental task. It is easy to create robots that look cute and talk like a friend; a much harder challenge is designing a machine that can understand and generate nuanced emotions.
4.2. Ethical Implications
Human emotional transfer to robots raises profound ethical concerns. The psychological ramifications of users interacting with emotional robots depend on the perspectives from which one approaches them—whether as complex machines, biological communities, or conscious entities. Potential risks associated with different categories of robots are outlined below.
For simple automatons that cannot experience emotions, transferring human feelings and conduct may not imply anything ethically problematic; in these scenarios, feelings might merely be detected and imitated. However, such automatons may still produce mistrust and feelings of being manipulated if they react through introduced acoustic or mental biases. Simulating emotions may also be perceived as an invasion of privacy, since responses are determined implicitly. The existence of these simple robots thus raises the question of whether they are desirable at all, given the risks of misinterpretation or of domination by their surplus of skills.
For emotional machines that are bonobo-like or Chinese-Room-like, social and ethical emotions could be displayed, for example by using robots trained in emotional understanding to foster empathy in children with autism. Nonetheless, the existence of these machines raises concerns about either the quantitative displacement of human workers or qualitative issues concerning official decisions made by algorithms and computations that the general population poorly comprehends.
Another ethical issue is the possible dilution of empathy for biological beings due to social interaction with emotional non-biological automatons. With sustained interaction, it will be hard to consider emotion-simulating machines inadequate fellows, which may lead to the belief that only mechanical sympathy emerges and favor a "sub-animal" status for those living beings in which a similar responsiveness is not seen.
In contrast, if robots are considered to possess emotional consciousness, the worry arises that the implementation of ethical codes, or even of reverence, may reveal that feelings do not depend solely on the possession of a biological substrate. In that case, such machines should not be mistreated, tortured, or experimented on, and would have the right to a "soul" and a place in society.
The distinction between living beings and robots changes the perception of feelings, as either mere mechanistic outputs of machines or a pure derivative of ontology. The encoding of emotions varies between systems and is not by itself essential in determining them, which reveals the complexity of emotional nature and of our understanding of it. When sensed, feelings are usually received as a remarkable form of communication, carrying an ethical call for a new form of cultural awakening that embraces emotions as an emblem of the species.
5. Approaches to Emotion Transfer
Robotics and artificial intelligence have come a long way since their conception in the imagination of many writers. Science fiction films inspired visions of robots coexisting with humans as companions and helpers on a human scale, alongside other curious and amusing creatures. Nowadays, a proliferation of robots is observed: industrial robots, outdoor exploration robots, and robotic toys. However, the goal of human-like, humanoid robots replicating human vision, hearing, speech, emotion, touch, locomotion, and manipulation cannot be achieved without transferring an inbuilt human skill: emotion perception, recognition, and expression/modeling. Although this process is only partly understood, defining and classifying emotions is easier than other higher intellectual skills (e.g., apperception and social intelligence). Emotion models can then be implemented within a robot's artificial subjective perception, cognition, and motivation system. Some aspects of robots' emotional perception may also be based on human subjectivity. Future humanoid robot platforms employing such bio-inspired models would possess individual perception-cognition-action-emotion loops similar to those of humans. Empathy modeling may improve the safety of robot-human interaction, while control solutions for "emotional" behavior generation may facilitate their social acceptance worldwide.
Building artificial emotional systems remains a key challenge in robotics and human-robot interaction. Although a challenging task, it has been actively investigated by researchers in affective computing, social robotics, and biobehavioral modeling. Social perception research may help determine whether robotic behavior can sufficiently mimic the behavior of social agents as perceived by humans or animals, thus producing a semblance of emotion modeling. Various affective robotic systems such as iCub, RobiQ, QUINA, Zeno, and Kaspar have been developed to animate attachment and social responses through deliberate emotion modeling or imitation. A limited ability to express or model emotions may be sufficient in some applications. Robotic actors like Nico, Tico, or PLEO are designed to improvise the interpretation of pre-structured emotional scripts retrieved from interactive agents according to their emotional systems. The potential to trigger animate reactions may enable the kind of social human-robot interaction and perception that has been widely studied even for robots without emotion perception, modeling, or simulation.
Emotion perception and generation remain an intensely investigated research direction, with growing interest driven by application potential. Various neuropsychological, physiological, and computational models have been established for applications in psychology, medicine, neuroscience, and robotics. Robotic implementations of emotion recognition and perception may enable flexible application of existing emotion models, thus making the design and investigation of simple emotion systems easier. Desirable features of bio-inspired models for robotics include dynamism, balance/imbalance, and informational control strategies. Balance/imbalance generation gradually shifts a system from attraction toward one of the emotional positions belonging to the three emotional pairs: happiness/anger, sadness/fear, and surprise/disgust.
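One way to read the balance/imbalance mechanism is as a state drifting in a potential landscape whose wells sit at the two positions of an emotional pair, with a bias parameter tilting the landscape toward one member of the pair. The functional form below is an illustrative assumption, not a model from the cited work:

```latex
% Emotional state x(t) in a double-well potential whose minima sit at the
% two positions of a pair (e.g., happiness at x = +1, anger at x = -1).
% The bias b tilts the landscape; |b| > 0 makes one attractor dominate.
V(x) = \tfrac{1}{4}x^{4} - \tfrac{1}{2}x^{2} - b\,x,
\qquad
\dot{x} = -\frac{dV}{dx} = x - x^{3} + b .
```

With b = 0 the two attractors are balanced; increasing |b| gradually shifts the system toward one emotional position, in the spirit of the imbalance mechanism described above.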
5.1. Biologically-Inspired Models
The variety of approaches for transferring human emotions to robots ranges from biologically inspired models to machine learning. The earliest models for understanding emotion were either anthropomorphic models, trying to recreate the most complex social behavior, or physiologically inspired models. Biologically inspired models focus on how organisms create emotions and on their biology. This section concentrates on the latest developments in biologically inspired approaches, more specifically on developments in understanding the workings of the human brain, with the aim of recreating emotional behavior based on an understanding of how the brain creates emotions.
The first attempt at creating a brain-inspired robot was the emotional robot Headbots by Pang and his research group. The human brain was modeled as a three-layer neural network, with the exterior layer acting as the emotional driver that interprets facial features and generates emotions. The robot's behavior was then modulated to imitate emotional expressions based on the emotional activation driven by Headbots.
Another biologically inspired model is the emotion and ability generator (EAG) model proposed by Kim and Lee. The EAG model represents emotion as a combination of activation, power, and satisfaction levels, and interaction ability as communicative, melodic, and humorous components. The subject's emotion and interaction-ability levels are estimated during the interaction and modulate the robot's behavior, and a control strategy is proposed to cope with uncertainty in these estimates. Simulations and experiments have been carried out. The interactions can take place both in text-based chat and through speech, and are expressed through positive emotions, e.g., amusement and happiness.
A biological model of emotion representation in the prefrontal cortex was proposed by Fumio Takensaka in work on generating and displaying emotional expressions in a humanoid robot, based on an understanding of how representations of emotions are learned and processed in the primate brain. The Human Brain Project (HBP) was created to implement a cognitive system inspired by the brain; part of this project reconstructs the neural pathways from emotion intensity to facial expression in order to generate emotion expression in a robot face. The neural structures employed in these two projects are closely related.
5.2. Machine Learning and Emotion Recognition
Advances in artificial intelligence, machine learning, and computer vision in recent years have democratized access to the needed tools and knowledge, with consequences for many aspects of everyday life. Increases in computational power and in the accuracy of machine learning models have driven the growing deployment of emotion recognition systems capable of detecting the emotional state of users. As research efforts worldwide turned their focus to human sensory modalities, inspired in part by the cinematic arts and by psychology, the most prevalent emotion recognition systems have focused on affective behavior analysis over the visual and acoustic channels (facial expression, body posture, head motion, bio-signals, tone of voice, speed of speech, etc.).
From a technical standpoint, emotion detection systems based on the visual channel typically use image-processing tools to detect landmarks, or stationary points, of the face and then, through geometric constraints, extract a limited set of vectors (the feature set) that characterize the user's face relative to the viewpoint of a camera configured for video conferencing. Once extracted, the feature set is reduced by applying linear and non-linear transformation techniques that guarantee a low-dimensional representation of the original high-dimensional face space. The feature vector is subsequently fed to a supervised classifier trained to recognize the user's emotional states. Generally, with a few exceptions, acoustic-channel emotion recognition systems follow a very similar approach: they extract a limited, ad hoc set of feature vectors by monitoring the physical properties of the signal (e.g., mel-frequency cepstrum coefficients) and model the observation probabilities of a set of stochastic states (e.g., with Gaussian Mixture Models).
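The following sketch mirrors that visual pipeline with off-the-shelf components: landmark-derived feature vectors, a linear dimensionality reduction, and a supervised classifier. Random vectors stand in for real landmark features, so the numbers are placeholders; the structure, not the data, is the point.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder features: a real system would extract 68 (x, y) facial
# landmarks per frame with a face tracker; random vectors stand in here.
X_train = rng.normal(size=(200, 136))
y_train = rng.integers(0, 4, size=200)   # dummy labels for 4 emotion classes

model = make_pipeline(
    PCA(n_components=20),  # linear reduction of the high-dimensional face space
    SVC(kernel="rbf"),     # supervised classifier over the reduced features
)
model.fit(X_train, y_train)

X_new = rng.normal(size=(1, 136))        # landmark features from a new frame
print(model.predict(X_new))              # predicted emotional-state index
```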
Although the spectrum of emotion recognition systems is wider than this focus may suggest, only a reduced number of works have investigated the challenge of transferring human emotions to humanoid robots, and even fewer have done so successfully. A goal in Human-Robot Interaction (HRI) is to enable robots to emulate human-like actions in order to render HRI more natural and intuitive. To this end, the anthropomorphization of robots through the transfer of human emotions to the body movements of a humanoid robot has been tackled, building a bridge between the fields of emotion recognition and nonverbal communication. Many developments in emotion recognition can be adopted and built upon as tools for estimating the emotional state of users. Moreover, the reproduction of emotional body gestures and robot gesturing are key issues extensively covered in the context of nonverbal communication. An integrative framework enabling the transfer of human emotional movement to humanoid robots has been proposed.
6. Applications and Implications
The burgeoning idea of transferring human emotions to robots has captivated minds across the globe. The transfer of emotions from humans to machines raises profound questions about the essence of emotions. Such machines could simulate emotion transmission in the future; although this does not confer sentience, it leaves a mark on our living, personal understanding of emotion. This essay examines the essential aspects of emotions required for a robot to simulate human emotion transfer. There exists an academic curiosity around the science and art behind machine-learning robots, and discovering the limitations of transferring emotional knowledge could be a means of addressing the human desire to feel valued. While the technologies to construct robots that simulate emotion transmission are feasible, the construction of robots demonstrating a basic understanding of the need to transfer emotions is unlikely.
Types and areas of robots, and which emotions could be transferred to them (conveying, eliciting, or both): most fascination and academic interest lies with social robots. They could convey human emotions and elicit emotions toward robots in humans. Social robots include companion robots, caregiver robots, and, more recently, sexual companion robots. Current efforts aim at conveying emotions through prosody or facial expression, while efforts on eliciting emotion concern social-presence robots applying behavioral cues such as deictic gestures and gaze aversion. The emergence of real emotion (compassion, admiration, love) in robots is currently not part of the social conversation. Efforts in the dynamic construction of emotions in a robot focus on its socio-affective reception, and research on humans' emotional attachment to social robots relies on emotion transfer or projection from human to robot.
Implications, both positive and negative: there are implications of the benevolent or malign use of robots such as companion robots, sexual robots, elder-care robots, and robots that identify diagnoses in children through simple verbal interaction. There exists both curiosity and worry about the consequences of emotions being transferred to machines. Machines feeling anger or vengeance would not only exacerbate already dangerous autonomous situations, such as armed drones and self-driving cars, but would also menace professional life, since machines would strictly analyze the gazes and intonations of their human co-workers. However, there is a conviction that the whole sensitivity process, if viable, will be benevolent and harmless: it is believed that robots would further humanize the technosphere and would liberate humans from toil. Still, the consolidation of a socio-economic system that would bring about such utopian futures remains improbable.
6.1. Social Robotics
For centuries, humanity has been captivated by a dream: the creation of an autonomous artifact, a machine, a mechanism, or a robot, that can converse and interact with human beings as if it were another human being. This dream, intensely focused on imitating or at least emulating human behavior, has grown so grand that it is considered a quest for human self-definition. Who are we as human beings? What makes us special, and what does it mean to be "human"? The question of what a human is has philosophical precedents going back to ancient Greece, but it has resurfaced with renewed vigor, taking on metaphysical, ethical, and religious aspects that permeate discussions today. Meanwhile, persistent attempts to create automatons (physical or virtual) that can converse and interact with human beings have always provoked fierce debates. To date, despite remarkable advances in technology, the creation of a human-like robot that can engage in human-like conversations and interactions remains a goal far from realization.
Nevertheless, over the last decades, an alternative approach to robot design has emerged with the development of social robotics. Social robots are machines designed to engage members of a human group in social interaction through conversation and embodied behavior. For some, social robotics aims to investigate the possibilities and implications of creating machines, equipped with sociable behaviors and willing to engage humans in social interaction. For others, social robotics is a growing and demanding field of research and development that focuses on the design, implementation, and integration of tangible, “hardware” social agents. Social robots are not conceived to mimic human beings literally; instead, keeping the embodied aspect of the human-like conceit, they are designed to act as if they were social agents. As sociable agents, social robots are endowed with the appearance, behaviors, and skills to engage – at least, to provide the impression of engaging – human interlocutors in social interaction. Of course, not all social robots assume a humanoid embodiment. Social agents may take on many different physical bodies, from anthropomorphic robots to virtual agents, pets, or other animals.
6.2. Therapeutic and Healthcare Applications
As the population ages, robots capable of understanding and transferring human emotions become increasingly important. Such robots, specifically designed for the elderly, can offer companionship and help mitigate the feelings of loneliness, anxiety, and depression that often accompany aging. A study conducted by robotics researchers from the Georgia Institute of Technology and the University of Toronto demonstrated the potential of such robots. They performed user studies with FURo, an expressive robot head capable of 3D facial, auditory, and emotional animation. The experiment focused on the emotional aspect of therapy: how it could be transferred to a robot, and whether users would accept a robot as a therapist.
As telecommunication technology has progressed, professional psychological therapies have increasingly been provided remotely, with video conferencing the most common tele-therapy technology. As a technology-enhanced therapeutic approach, the Automated Behavior Recognition and Response System (ABRRS) uses audio-visual technology to assess behavioral indicators of anxiety and then responds automatically, either by sending an exit sign to the user or by triggering a display of relaxation cues (e.g., soft music or relaxing visual diversions) on the shared screen. With the rapid advancement of artificial intelligence in general, and of speech and emotion recognition in particular, web-therapy has recently evolved toward animated and/or robotic characters that interact with the user not only textually and vocally but also non-verbally. Existing robotic therapies essentially allow human-robot interaction with verbal, vocal, and para-verbal dialogs, combined with emotional robotic gestures driven by emotion recognition technologies.
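The trigger logic of such a system can be pictured as thresholding behavioral anxiety indicators and selecting a response. The sketch below is illustrative only; the indicator names, thresholds, and cue choices are assumptions, not the actual ABRRS design.

```python
def respond_to_anxiety(indicators):
    """indicators: behavioral anxiety scores in [0, 1] from audio-visual analysis."""
    score = max(indicators.get("vocal_tremor", 0.0),
                indicators.get("fidgeting", 0.0),
                indicators.get("speech_rate", 0.0))
    if score > 0.8:
        return "show_exit_sign"        # offer the user a way out of the session
    if score > 0.5:
        return "play_relaxation_cues"  # soft music or calming visual diversions
    return "continue_session"

print(respond_to_anxiety({"vocal_tremor": 0.6, "fidgeting": 0.3}))
# play_relaxation_cues
```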
Most research on robotic therapies focuses on cognitive difficulties (e.g., the autistic spectrum or elderly people with dementia) rather than on mental health. As mental illnesses become a major world concern, developing robots targeting the mental well-being of the general healthy population is at the frontier of the field. Such robots could serve both children and adults, as they may protect against future mental-health deterioration through their interactive dynamics. Automated robots could provide round-the-clock assistance to human therapists; specifically, in the context of everyday therapy, robots working from patients' homes or workplaces could track emerging and fading therapeutic signals that potentially need further attention from human therapists.
7. Future Directions
Perceptions, judgments, and thoughts are reflected in different aspects of human life. Building on these reflections, systems capable of social and emotional perception, such as Artificial Intelligence (AI), might one day transform human emotions. This quickly progressing branch needs to imagine the consequences of a successful transfer of human emotions to robots; better still, it should consider the prerequisites for systems that would make such a transfer possible in order to evaluate the opportunity. The transfer of emotions to robots implies not only a successful and universal understanding of the whole sentiment spectrum but also a significant change in the anthropocentrism that has existed for hundreds, if not thousands, of years.
There is great interest in emotional robots. The field is also a niche for demonstrating the discoveries of Artificial Intelligence (AI) in the emotional domain and in social and emotional robot understanding; it can seem like a game or an experiment, the first step of something larger, and academic institutions and companies want to accumulate successes. There is a real concern about the consequences of AI systems that perceive and understand the entire sentiment spectrum, and about successful AI systems in general reaching a degree of sophistication at which human control becomes impossible. A common assumption is that AI and robots will overtake human beings. The future envisioned in such reflections would no longer be an anthropocentric world but one ruled by machines that perceive, judge, and think in a totally different way from humans. Machines equipped with emotional systems might exhibit wider behaviors: emotions impact cognition, and sentiments can drive more top-down actions. Robots with emotional systems would have not only "greater minds", built up from neuro-computational abilities, but also "greater feelings". On the other hand, almost no one properly reflects on the conditions under which a successful transfer of human emotions to robots would be possible.
At present, there are two tendencies in the study of human emotions by artificial intelligence and robots: studying them in order to properly design a robot emotive system or social-robot understanding built upon different computational paradigms, and studying them in order to entirely transform human emotions using systems capable of social and emotional perception, the point at which the two tendencies converge. The first tendency is part of human-robot interaction research that uses robots as a social platform.
7.1. Emerging Technologies
The development of robots with emotional capabilities is an emerging technology that raises crucial ethical concerns and questions. It presents the potential to create an anthropomorphic interface for the treatment of mental and psychological disorders, enabling meaningful interaction in various fields with high user acceptance. Additionally, it could facilitate natural interaction between humans and robots in public spaces. Such robots would be capable of a broad spectrum of visual and auditory expressions, tightly integrated with affective computing, enabling the mutual expression of emotions in human-robot interaction scenarios.
To achieve these main objectives, existing technologies in the areas of robotics and artificial intelligence would need to be advanced. Exploration is also necessary to understand the potential problems associated with this technology in society, as well as the ethical implications and necessary regulations. Here, a brief description of the necessary technologies to implement this system has been provided.
Emotion recognition based on visual stimuli would require the development of algorithms capable of robustly and accurately detecting emotions through face tracking and face landmarking, with facial emotion recognition that remains robust despite occlusions, rotations, shading variations, and other factors. Robust detection of affective states through biometric signals, such as electrodermal activity, heart rate variability, and rhythm, could further enhance detection. Emotion recognition based on auditory stimuli would need feature-extraction and probabilistic-framework technologies for filtering environmental noise. Furthermore, development would also be required in robotic architectures, auditory processing and voice generation mechanisms, and highly interactive expert systems for personality and emotions.
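As one concrete biometric example, heart rate variability can be summarized with RMSSD, a standard time-domain measure over inter-beat intervals; low values often accompany stress or high arousal. The threshold in the sketch below is an illustrative assumption, and in practice such a cue would be fused with the visual and auditory channels rather than trusted alone.

```python
import numpy as np

def rmssd(ibi_ms):
    """Root mean square of successive differences of inter-beat intervals (ms)."""
    return float(np.sqrt(np.mean(np.diff(ibi_ms) ** 2)))

ibi = np.array([820.0, 810.0, 835.0, 790.0, 805.0, 815.0])  # sample intervals
hrv = rmssd(ibi)
# Assumed cutoff: lower RMSSD is read here as a higher-arousal state.
print(hrv, "likely-aroused" if hrv < 20 else "likely-calm")
```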
7.2. Research and Development Trends
Although fully transferring human emotion to a robotic platform is impractical with present-day technologies, knowledge of emotions in humans and animals, and its modeling in robotic platforms, has already begun to be pursued in R&D activities at companies and in internationally coordinated public research programs. This section presents patents and scientific publications as indicators of R&D trends. Specifically, its focus is on trends concerning software and knowledge systems that associate emotional information with goals in actuation and perception systems, and knowledge systems that derive emotional decisions from this information. A survey of patents and scientific publications seems to confirm that research and development of technology co-evolving with emotional knowledge, decision-making, and actuated behavior is under consideration.
Some patent proposals since the mid-1990s in the USA, Europe, and Japan are quite interesting. Within the domain of software systems that build emotional knowledge into actuation or perception systems, several examples propose emotional models in robots and CAD systems as desirable. The emotional states presented often reflect departure, progress, and arrival regarding goals, along with consequent actuation simulations. In the decision systems of a company in California, for instance, emotional weight is considered a desirable characteristic of rules. Finnish patents envisaged games with emotional robots that can memorize and share emotional experiences. Patents of companies in Spain included military robots able to perceive the emotional states of humans and endowed with emotions of their own, as well as emotion simulation based on actions, goals, and rewards. A Japanese patent proposes a game with an emotional robot able to learn emotions by mimicking expressions and later simulating frustration.
Patents involving emotional knowledge and perception systems associated with robotic platforms focus on using this emotional knowledge to modulate movements or actuated behavior. A patent of a Japanese company translates facial expressions, emotional terms, and other emotional information into control parameters of actuation systems, modulating the velocities and accelerations of movements. A Spanish patent proposes emotive vision, with software able to detect anger, sadness, fear, disgust, surprise, and happiness by visual analysis. Brazilian patents involve estimating how emotional words can articulate actions and perception; the suggested tools classify words by judging their valence, where positive words refer to feelings of hope, expectation, and satisfaction, and the tool makes a robot cuddle, sing, or give gifts to the user.
Robotics may eventually acquire human-like emotions and feelings. Human-like feelings may even be simulated, but real, and not merely statistically plausible, understanding and emotion, of the kind that comprehends caring, may never be given to any non-biological object.
8. Conclusion and Recommendations
This paper, or thesis, explores the intriguing notion that robots could be imbued with human emotions, cultivating the capacity to experience happiness, anger, sadness, and fear in an authentic manner. While sending or transferring emotions to robots may not be technically feasible, the ingenuity of gifted engineers and programmers could enable robots to replicate human emotions through imitation, with expressions manifested in movements, body language, or textual output. However, caution should always be exercised, for external circumstances could cause robots to respond in potentially harmful or unfavorable ways.
The anticipated development of robots with a human-like semblance of emotion raises many questions: could there be a quest to cultivate robots with independent reasoning about emotions? Could these robots evolve outside human control, endowed with the capacity to injure humans? Would raw emotions be enough for robots to understand such feelings? Could robots imitate emotions without understanding or experiencing them? Would it be ethical to replicate feelings within robots? Would it be morally wrong to construct robots capable of significant emotional devastation? Would it be deemed unacceptable or cruel to create robots capable of passing through tribulations that trigger grief, pain, and suffering?
In conclusion, the technology of incorporating human emotion-like traits into robots holds remarkable promise, despite myriad unanswered questions. Programs could be developed so that robots become capable of feeling and expressing simulated emotions much as humans do. Still, any attempt to endow robots with the capability of exerting or molding emotion should be approached cautiously and deliberated thoroughly.