'They are not human': Why AI has 'predictable and systematic biases' when it comes to judging people
Date:
Sun, 19 Apr 2026 16:20:00 +0000
Description:
AI systems simulate human trust using structured models, producing consistent decisions that differ from human intuition and show stronger demographic biases.
FULL STORY ======================================================================

- AI mimics trust while relying on rigid, structured evaluation patterns
- Machines separate human traits instead of forming holistic impressions
- Competence and integrity dominate decisions across both humans and AI

Modern AI systems do not simply process
information; they make systematic judgments about people in ways that
resemble human trust but with important differences.
A new study from Hebrew University, published in Proceedings of the Royal Society, analyzed over 43,000 simulated decisions alongside around a
thousand human participants across five scenarios. These scenarios included deciding how much money to lend a small business owner, whether to trust a babysitter, how to rate a boss, and how much to donate to a nonprofit
founder.

How AI breaks down human judgment into separate columns

The findings reveal that AI tools form something that looks like trust, but their judgment works very differently from ours.
Both humans and AI favored people who seemed competent, honest, and well-intentioned, meaning machines captured something real about human trust.
"That's the good news," said Prof. Yaniv Dover. "AI is not making random decisions. It captures something real about how humans evaluate one another."
However, humans tend to form a general impression, blending multiple traits into a single, intuitive, and holistic judgment.
AI does something very different: it breaks people down into components, scoring competence, integrity, and kindness, almost like separate columns in
a spreadsheet.
"People in our study are messy and holistic in how they judge others," explained Valeria Lerman. "AI is cleaner, more systematic, and that can lead to very different outcomes."
These differences appeared even when every other detail about the person was identical.
"Humans have biases, of course," said Prof. Dover. "But what surprised us is that AI's biases can be more systematic, more predictable, and sometimes stronger."
In financial scenarios such as deciding how much money to lend or donate, AI systems showed consistent differences based solely on demographic traits.
Older individuals were frequently given more favorable outcomes; religion had strong effects, especially in monetary scenarios; and gender also influenced decisions in certain models.
Another key insight is that there is no single "AI opinion." Different models often made different judgments about the same person.
This means that the choice of an AI system could quietly shape real-world outcomes. "Which model you use really matters," Lerman noted.
Large language models are already being used to screen job candidates,
assess creditworthiness, recommend medical actions, and guide organizational decisions.
The study suggests that while AI can mimic the structure of human judgment,
it does so in a more rigid, less nuanced way, with biases that may be harder to detect.
"These systems are powerful," said Dover. "They can model aspects of human reasoning in a consistent way. But they are not human, and we should not assume they see people the way we do."
As AI tools and AI agents move from assistants to decision makers, understanding how they "think" becomes critical for organizations deploying them at scale.
The researchers emphasize that their findings are not a warning against AI, but rather a call for awareness.
That said, the question is no longer whether we trust machines; it is whether we understand how they trust us.
======================================================================
Link to news story:
https://www.techradar.com/pro/they-are-not-human-why-ai-has-predictable-and-systematic-biases-when-it-comes-to-judging-people
--- Mystic BBS v1.12 A49 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)