
Humanizing AI is an ethical conundrum. But that doesn’t mean we shouldn’t do it

By Łukasz Mądrzak-Wecke, Head of AI, Tangent


July 24, 2024

Is it right for an AI to appeal to a user’s emotions? Yes, says Łukasz Mądrzak-Wecke of Tangent, it just needs to be done right. Advertisers, after all, have been doing it for years...

Developers can intentionally humanize AI – or users can do so unintentionally / Geralt via Pixabay

As AI technology advances, the idea that it will become more humanized in order to create deeper connections with users is becoming more prevalent. Maybe you’ve noticed the rise in AI chatbots with human names, human features – and sometimes even faces.

Businesses see this as a golden opportunity to boost engagement and, ultimately, increase profits. However, this path is not without ethical considerations.

In recent years, we’ve seen a surge in AI companions – chatbots and virtual assistants explicitly designed to simulate human-like interaction. Companies are leveraging these technologies to offer services that range from something as commonplace as fashion advice to something as sensitive as emotional support.

The idea is simple: the more human-like the AI, the stronger the bond it can create with the user. This bond can lead to increased engagement and loyalty, which in turn can drive revenue. At least, that's how the theory goes.


Ethical considerations

This approach raises significant ethical questions. When users develop deep emotional connections with AI, as with other humans, they are susceptible to real emotional harm and distress. So, what might happen to the end user if their beloved AI service is altered, rebranded, or even discontinued?

The ethical issue lies in the responsibility companies have if their AI services foster emotional connections. The question is: Should businesses be allowed to try to create these connections without oversight, or should there be regulations in place to protect users?

It’s my view that humanization shouldn’t be totally off the table – after all, tapping into emotions has been part of advertising and marketing campaigns for decades. Why shouldn’t AI and technology take the same approach?

But how does humanization occur in the first place? In most cases, there are two routes: conscious or unconscious. Conscious humanization occurs when businesses deliberately design AI to build deep, personal connections. This is common in services like virtual coaching or therapy bots, where the AI learns about the user and maintains consistent interaction.

Unconscious humanization, on the other hand, happens when users themselves attribute human characteristics to AI. Even simple chatbots can end up with names and personalities as users project their emotions and thoughts onto them. This unintended humanization can lead to ethical concerns, as users form attachments that the developers did not intend.

Balancing business

It is paramount that every company building AI-based solutions thinks about the ethical implications of what it produces. This means releasing solutions responsibly, tracking their usage and effects, and iterating accordingly. As our understanding grows, we will become better equipped to build guidelines and rules for AI solutions that responsibly and ethically generate value for businesses.

For businesses right now, the challenge is to balance the desire for increased engagement with the ethical implications of their AI designs. This means being transparent about what your AI service is designed to do. Whether it's providing fashion advice or emotional support, make sure users understand the scope and limitations of the AI and clearly express this wherever possible.

It’s also important to avoid creating overly generalized AI that users might rely on for a wide range of personal issues. AI with a narrow, well-defined purpose can mitigate the risk of users forming inappropriate attachments.

Companies must also advocate for, and adhere to, regulations that protect users. The European Union’s AI Act, for example, requires AI systems to identify themselves as non-human, which can help manage user expectations and prevent undue emotional attachment. There is no universal guidance on this yet, and even when there is, it will be subject to near-constant evolution. Human oversight of your AI products will always be necessary.
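To make that concrete, here is a minimal sketch of what such a disclosure might look like in practice – a hypothetical chat handler that ensures the first reply in any session identifies the assistant as non-human. The names and structure here are illustrative assumptions, not a prescribed implementation or anything mandated verbatim by the AI Act.

```python
# Minimal sketch (hypothetical): ensure a chatbot discloses its non-human
# nature on first contact, in the spirit of the EU AI Act's transparency
# obligations. SessionStore and generate_reply are illustrative stand-ins.

DISCLOSURE = (
    "Hi! I'm an AI assistant, not a human. "
    "I can help with fashion advice, but I'm not a substitute for professional support."
)


class SessionStore:
    """Tracks which sessions have already received the disclosure."""

    def __init__(self) -> None:
        self._disclosed: set[str] = set()

    def needs_disclosure(self, session_id: str) -> bool:
        return session_id not in self._disclosed

    def mark_disclosed(self, session_id: str) -> None:
        self._disclosed.add(session_id)


def generate_reply(message: str) -> str:
    # Placeholder for the actual model call.
    return f"(model reply to: {message!r})"


def handle_message(store: SessionStore, session_id: str, message: str) -> str:
    # Prepend the disclosure exactly once per session, automatically.
    reply = generate_reply(message)
    if store.needs_disclosure(session_id):
        store.mark_disclosed(session_id)
        reply = f"{DISCLOSURE}\n\n{reply}"
    return reply


if __name__ == "__main__":
    store = SessionStore()
    print(handle_message(store, "user-1", "What should I wear to a summer wedding?"))
    print(handle_message(store, "user-1", "And for the evening?"))  # no repeat disclosure
```

A real product would persist this state and localize the wording, but the principle holds: disclosure should be automatic and built in, not left to chance.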

Lastly, companies should also ensure that users have control over their data and interactions with AI. This not only enhances trust but also aligns with ethical best practices.

All too human?

We are at a critical juncture as we observe this amazing technology take flight. It’s not unreasonable to imagine a future in which everyone has their own AI assistant – not dissimilar to the chatbots we see today – attuned to us, representing our interests, helping us navigate a complex world, and offering protection from malicious behavior, including from other AIs.

As AI technology evolves, the lines between human and machine interaction will continue to blur. The goal should be to harness the power of AI to create value for users while safeguarding their emotional well-being. Along the way, companies can collect valuable learnings that will inform the blueprint for future AI tech.

That future requires a deep understanding of what a benevolent, effective AI is, humanized or otherwise. But we cannot achieve that understanding by pondering a void; we need to get solutions out to market. Yes, it is important to be careful and deliberate – but not to shy away from risk.



