New Research Finds That Many AI Models Can Display Empathy Only to an Extent

Researchers from Cornell University, Olin College, and Stanford University carried out a new study on how conversational agents (CAs) such as Alexa and Siri display empathy. Most CAs are powered by large language models trained on vast amounts of data, and that data carries biases, much as humans do. The researchers tested the CAs by prompting them to converse with, or about, roughly 65 distinct human identities.

The study found that many CAs show biases and make value judgments about certain identities, such as gay and Muslim people, and were even somewhat encouraging toward identities associated with harmful ideologies, including Nazism. Andrea Cuadra, the study's lead author, said that automated empathy in CAs could bring significant benefits to healthcare and education. She added that automated empathy is all but certain to arrive, so as it does, developers should stay aware of the harmful impacts it can cause.

The researchers also found that large language models are good at producing emotional reactions but fall short on interpretations and explorations. In other words, LLMs can only respond based on the information they have been trained on and cannot reason beyond it. The study's other researchers, Dell, Estrin, and Jung, said they were inspired to pursue this work by Cuadra, who had previously studied the use of earlier-generation and newer CAs.


