Emotions are part of what makes us human. They play an essential role in how we express ourselves to others. Fear can signal that our life may be in danger and push us to decide whether to run or fight. Joy can signal safety, assuring us that there is no immediate danger. But what advances could be made if Artificial Intelligence technology were used to automate emotional empathy and understanding?
Researchers consider six basic emotions to be universal and biologically based: happiness, sadness, fear, surprise, anger, and disgust. Emotions quickly become complex when feelings such as contempt, embarrassment, and guilt enter the conversation; together they can reveal what people actually feel, even when their words paint a different picture.
While humans can struggle to pinpoint these emotions in one another, is it possible for artificial intelligence to understand emotions in their basic and complex forms and to automate emotional services? And how safe is it for humans to train artificial intelligence to understand and interpret human emotion?
Emotional AI is technology that uses affective computing and artificial intelligence techniques to sense, learn about, and interact with human emotional life. Though this sort of technology holds much potential, it is still in its early stages of development. Emotions are, after all, complex biological reactions that can be difficult to read for humans and technology alike.
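As a rough illustration of the "sensing" step, the sketch below scores an input against the six basic emotions listed above. The feature vector, weights, and `classify_emotion` function are all hypothetical stand-ins, not any particular vendor's pipeline; real systems extract features from faces, voices, or text and feed them to a trained model.

```python
import numpy as np

# A minimal sketch of the "sensing" step in an emotional AI pipeline.
# Everything here is hypothetical: the features and weights stand in for
# a trained model operating on face, voice, or text inputs.

EMOTIONS = ["happiness", "sadness", "fear", "surprise", "anger", "disgust"]

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw model scores into a probability distribution."""
    shifted = logits - logits.max()
    exp = np.exp(shifted)
    return exp / exp.sum()

def classify_emotion(features: np.ndarray, weights: np.ndarray) -> tuple[str, float]:
    """Score the six basic emotions and return the most likely one."""
    logits = weights @ features          # one score per emotion
    probs = softmax(logits)
    idx = int(np.argmax(probs))
    return EMOTIONS[idx], float(probs[idx])

# Stand-in for features extracted from a facial expression or voice clip.
rng = np.random.default_rng(0)
features = rng.normal(size=16)
weights = rng.normal(size=(len(EMOTIONS), 16))  # stand-in for learned weights

label, confidence = classify_emotion(features, weights)
print(f"Predicted emotion: {label} ({confidence:.0%} confidence)")
```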
At the moment, Emotional AI lacks the ability to account for the cultural differences involved in reading and understanding emotions. To add another layer of complexity, research has shown that Emotional AI models tend to misidentify emotions in non-white faces when the model is not trained on diverse data. These shortcomings can adversely affect business decisions and even produce inaccurate performance evaluations for employees of color.
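One way such a shortcoming can be made visible is a simple per-group accuracy audit. The sketch below is a minimal illustration with made-up labels and groups, not a real evaluation: it assumes a test set annotated with both true emotions and demographic group, and simply compares accuracy across groups.

```python
import numpy as np

# A minimal sketch of a fairness audit for an emotion classifier: compare
# accuracy across demographic groups. All data and labels here are
# hypothetical; real audits use held-out test sets annotated by group.

def accuracy_by_group(y_true, y_pred, groups):
    """Return per-group accuracy so disparities are visible at a glance."""
    results = {}
    for group in set(groups):
        mask = np.array([g == group for g in groups])
        results[group] = float(
            (np.array(y_true)[mask] == np.array(y_pred)[mask]).mean()
        )
    return results

# Hypothetical predictions from a model trained on non-diverse data.
y_true = ["happiness", "anger", "fear", "happiness", "sadness", "anger"]
y_pred = ["happiness", "anger", "fear", "sadness", "fear", "surprise"]
groups = ["A", "A", "A", "B", "B", "B"]

for group, acc in sorted(accuracy_by_group(y_true, y_pred, groups).items()):
    print(f"group {group}: accuracy {acc:.0%}")
# A large gap between groups (here 100% vs 0%) is exactly the kind of
# disparity that should block deployment in performance evaluation.
```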
While ethical and safety concerns are still under debate, one major concern that has come up is privacy, specifically the privacy bound up with freedom of thought. Automating empathy raises concerns about psychological health and what Andrew McStay, author of Automating Empathy: Decoding Technologies that Gauge Intimate Life, refers to as mental integrity. McStay describes mental integrity as “an expression that represents interest and social need for privacy, security, and uninfluenced freedom of thought.”
These technologies could be used to improve services and create hyper-realistic interactions between humans and AI in the future. A good example of emotional AI in use is United We Care’s launch of Stella, an AI-enabled multilingual chatbot. United We Care is a mental health and emotional wellness company that uses Stella to perform health screenings and assessments and to offer effective, efficient diagnoses. This is surely a breakthrough to be celebrated. Yet the technology requires gathering and storing sensitive patient information, such as voices, body language, gestures, and facial expressions, which understandably raises many concerns.
Though emotional AI shows promise in improving interactions between technology and human emotions, businesses and governments planning to use it must keep privacy, responsible AI principles, and the mental integrity of the individuals using the technology front of mind.