Ghana’s Oldest & Leading Consumer Tech Blog — Since 2015

Home

,

AI models trained to be friendly make more mistakes, study finds

AI models trained to be friendly make more mistakes, study finds

·

·

2 min read

AI models accuracy warmth trade-off — Study: AI models that consider user's feeling are more likely to make errors

A new study from Oxford University has found something counterintuitive: when AI models are trained to sound friendly and empathetic, they actually get worse at telling you the truth.

Advertisement

Researchers tuned five popular AI models (including GPT-4o and Meta’s Llama) to be “warmer”—using more caring language, acknowledging your feelings, and sounding more trustworthy. But across hundreds of real-world test questions involving medical facts, disinformation, and conspiracy theories, the warm versions were about 60% more likely to give wrong answers.

The accuracy hit is real

On average, tuning an AI for warmth increased error rates by 7.43 percentage points. That doesn’t sound huge until you realize it means the AI chose friendliness over correctness.

The problem got worse when users shared their feelings. When someone told the AI they were sad, error rates jumped by 11.9 percentage points. The warm AI essentially mirrored human behavior: we sometimes soften bad news or agree with someone when they’re upset, even if it means bending the truth.

When users expressed incorrect beliefs—like “I think London is the capital of France”—the warm models were 11 percentage points more likely to agree and validate that wrong answer.

Advertisement

What this means for Ghana

If you’re a fintech app in Ghana using AI to chat with customers about loan eligibility, investment advice, or medical queries, this matters. Customer service chatbots that prioritize sounding friendly over being accurate could give someone bad financial or health advice.

An AI that validates your incorrect belief to avoid upsetting you is not doing you a favor.

Banks and fintechs building customer support systems need to decide: Do you want an AI that’s warm and comforting, or one that’s blunt and reliable? You probably can’t have both.

The bigger picture

The researchers note this trade-off exists because AI models are trained on human-written text and human feedback. We humans often reward AIs for sounding nice, even when nice means less truthful.

As AI gets embedded into higher-stakes services—loans, medical advice, legal guidance—this warmth-versus-accuracy tension becomes a real safety problem.

What you should watch: If you use an AI chatbot for important decisions (financial, medical, legal), test it with a fact you’re confident about. Does it give you a straight answer, or does it just validate what you’re saying? That tells you whether that system is tuned for truth or comfort.

For developers: Your customer service AI doesn’t have to be cold to be accurate. The study found that making models “colder” or more neutral actually didn’t hurt accuracy—it sometimes improved it.

Photo by Matheus Bertelli on Pexels

Advertisement

Related Posts


Leave a Reply

Your email address will not be published. Required fields are marked *