Dr Raffaele F Ciriello
Irish writer John Connolly:
“The nature of humanity, its essence, is to feel another’s pain as one’s own, and to act to take that pain away.”
For most of our history, we believed empathy was a uniquely human trait, a special ability that set us apart from machines and other animals. But this belief is now being challenged.
As AI becomes a bigger part of our lives, entering even our most intimate spheres, we’re faced with a philosophical conundrum: could attributing human qualities to AI diminish our own human essence? Our research suggests it can.
In recent years, AI “companion” apps such as Replika have attracted millions of users. Replika allows users to create custom digital partners to engage in intimate conversations. Members who pay for a subscription can even turn their AI into a “romantic partner”.
Physical AI companions aren’t far behind. Companies such as JoyLoveDolls are selling robotic sex dolls with customisable features including breast size, ethnicity, movement and AI responses such as moaning and flirting.
While this is currently a niche market, history suggests today’s digital trends will become tomorrow’s global norms. With so many adults experiencing loneliness, the demand for AI companions will grow.
Humans have long attributed human traits to non-human entities, a tendency known as anthropomorphism. It’s no surprise we’re doing this with AI tools such as ChatGPT, which appear to “think” and “feel”. But why is humanising AI a problem?
For one thing, it allows AI companies to exploit our tendency to form attachments with human-like entities. Replika is marketed as “the AI companion who cares”. However, to avoid legal issues, the company elsewhere points out Replika isn’t sentient and merely learns through millions of user interactions.
Screenshot of contradictory information on Replika’s help page versus its advertising.
Some AI companies overtly claim their AI assistants have empathy and can even anticipate human needs. Such claims are misleading and can take advantage of people seeking companionship. Users may become deeply attached if they believe their AI companion truly understands them.
This raises serious ethical concerns. A user may be reluctant to delete (that is, to “abandon” or “kill”) their AI companion once they’ve ascribed some kind of sentience to it.
But what happens when said companion unexpectedly disappears, such as if the user can no longer afford it, or if the company that runs it shuts down? While the companion may not be real, the feelings attached to it are.
By reducing empathy to a programmable output, do we risk diminishing its true essence? To answer this, let’s first think about what empathy really is.
Empathy involves responding to other people with understanding and concern. It’s when you share your friend’s sorrow as they tell you about their heartache, or when you feel joy radiating from someone you care about. It’s a profound experience, rich and beyond simple forms of measurement.
A fundamental difference between humans and AI is that humans genuinely feel emotions, while AI can only simulate them. This touches on the hard problem of consciousness: how subjective experience arises from physical processes in the brain.
While AI can simulate understanding, any 鈥渆mpathy鈥 it purports to have is a result of programming that mimics empathetic language patterns. Unfortunately, AI providers have a financial incentive to trick users into growing attached to their seemingly empathetic products.
Our “dehumanAIsation hypothesis” highlights the ethical concerns that come with trying to reduce humans to a set of basic functions that can be replicated by a machine. The more we humanise AI, the more we risk dehumanising ourselves.
For instance, depending on AI for emotional labour could make us less tolerant of the imperfections of real relationships. This could weaken our social bonds and even lead to emotional deskilling. Future generations may become less empathetic, losing their grasp on essential human qualities as emotional skills continue to be commodified and automated.
Also, as AI companions become more common, people may use them to replace real human relationships. This would likely increase loneliness and alienation 鈥 the very issues these systems claim to help with.
AI companies’ collection and analysis of emotional data also poses significant risks, as these data could be used to manipulate users and maximise profit. This would further erode our privacy and autonomy, taking surveillance capitalism to the next level.
Regulators need to do more to hold AI providers accountable. AI companies should be honest about what their AI can and can’t do, especially when they risk exploiting users’ emotional vulnerabilities.
Exaggerated claims of “genuine empathy” should be made illegal. Companies making such claims should be fined, and repeat offenders shut down.
Data privacy policies should also be clear, fair and without hidden terms that allow companies to exploit user-generated content.
We must preserve the unique qualities that define the human experience. While AI can enhance certain aspects of life, it can’t, and shouldn’t, replace genuine human connection.
This article originally appeared in The Conversation.
Raffaele F Ciriello is a Senior Lecturer in Business Information Systems and Angelina Ying Chen is a PhD student at the University of Sydney Business School.