• Think AI hallucinations are bad? Here's why you're wrong

    From TechnologyDaily@1337:1/100 to All on Friday, March 06, 2026 15:15:32
    Think AI hallucinations are bad? Here's why you're wrong

    Date:
    Fri, 06 Mar 2026 15:03:55 +0000

    Description:
    AI isn't deterministic, it's probabilistic, so reset your expectations and
    build guardrails for business value.

    FULL STORY ======================================================================

    AI hallucinations can be frustrating. If you've used an LLM, you've almost
    certainly seen it deliver an answer that was either confidently wrong or just
    downright mistaken.

    Steve Phillips, Co-founder, Executive Chair and Chief Innovation Officer of Zappi

    I recently ran into a hallucination while using an LLM for competitive
    intelligence. I run a market research software platform that delivers consumer
    insights on ads and products to consumer brands. But when I asked the model to
    assess our customer reviews, it confidently concluded we were underperforming
    due to failures in our electricity structure systems. At first glance: huh?!
    But it became clear the model had conflated us with an unrelated company that
    shares our name and makes EV chargers.

    Most people view hallucinations simply as annoying mistakes, but the truth is,
    hallucinations are byproducts of how LLMs are trained and what they're
    optimized to do. If you're expecting AI tools to be perfect, you're expecting
    the wrong thing.

    Once I provided clearer context, the model produced accurate results, and I
    was able to get the intel I wanted.

    Why hallucinations occur

    So why exactly do hallucinations happen? A recent paper from OpenAI shows
    hallucinations occur because models are rewarded for giving an answer, not
    for saying "I don't know."

    An LLM is never deterministic; it's always probabilistic.

    OpenAI has explained that during the pre-training phase, AI models learn by
    ingesting vast amounts of data from the internet. In this initial stage, these
    models do a good job at signaling how confident they are in the answers they
    provide. They can also signify uncertainty reasonably well, saying, "Here's a
    possible answer, but I'm unsure."

    However, when it comes to post-training, models are refined by reinforcement
    learning that rewards accuracy without penalizing inaccuracy. Just like a
    student sitting a multiple-choice exam, the LLM is trained to give an answer
    even if it's a guess. As with humans, it often serves the system better to
    fill in something rather than leave the question blank.
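    To see why that incentive favors guessing, here is a minimal worked example
    of my own (it is not from the OpenAI paper): on a four-option question, a
    reward scheme that never penalizes wrong answers makes a blind guess strictly
    better than abstaining.

        # Expected score on a 4-option multiple-choice question under a
        # scheme that rewards correct answers and never penalizes wrong
        # ones -- analogous to training that rewards accuracy only.

        def expected_score(p_correct, reward=1.0, penalty=0.0):
            """Expected score: reward * P(correct) - penalty * P(wrong)."""
            return reward * p_correct - penalty * (1.0 - p_correct)

        p_guess = 1 / 4   # no knowledge at all: pick uniformly at random
        abstain = 0.0     # saying "I don't know" earns nothing either way

        print(expected_score(p_guess))               # 0.25 > 0.0: guess
        print(expected_score(p_guess, penalty=1.0))  # -0.50 < 0.0: abstain

    Only when wrong answers cost something does "I don't know" become the
    rational move, and that is exactly the penalty most training schemes omit.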
    LLMs: Confidently wrong by design

    Up until AI came along, we lived in a largely deterministic world. We used
    tools that provided a single, definitive answer, with little room for
    interpretation. For example, if we plugged a mathematical problem into a
    calculator, we'd get an answer.

    If we queried a database for a document, it would provide it. We could trust these tools to return a predictable result.

    LLMs are not the same. AI was designed to mimic how the human brain works,
    and humans are imperfect; they get things wrong all the time. So, if we
    expect that LLMs are going to get things right 100% of the time, we're
    misunderstanding how an LLM works in the first place.

    An LLM is a probabilistic system that produces the most likely answer rather
    than guaranteed truths, which means it can be confidently wrong in the same
    way humans can be (or at least, many of my colleagues accuse me of being!).
    The bottom line: an LLM that never hallucinates is simply not possible.
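    Here is a toy illustration of what "probabilistic" means in this context
    (my own sketch, and vastly simplified; a real model ranks tens of thousands
    of tokens, not four strings): the model assigns probabilities to candidate
    continuations and samples from them, so repeated runs can legitimately
    disagree.

        import random

        # Toy next-token distribution: the model scores candidates, then
        # samples -- it never looks up a single guaranteed-correct answer.
        candidates = ["Paris", "Lyon", "London", "Berlin"]
        weights    = [0.90,    0.05,   0.03,     0.02]   # made-up values

        for run in range(5):
            # random.choices draws one item in proportion to its weight
            print(run, random.choices(candidates, weights=weights, k=1)[0])

    Most runs print "Paris", but now and then another city appears: a
    plausible-but-wrong answer produced by exactly the same mechanism as the
    right one.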

    Demanding perfection and accuracy from such a system is a human flaw.

    How to reduce hallucinations

    When it comes to curbing AI hallucinations, knowing that they're a feature
    and not a bug is half the battle. It starts with resetting expectations:
    realizing that errors are inherent, not a fatal defect. The good news is,
    model developers like OpenAI are also working to decrease the rate at which
    hallucinations occur.

    In the meantime, what can businesses and teams do about them? Here are three practical tips to keep in mind:

    1. You can't rely on the model alone for facts. As I mentioned, LLMs aren't
    deterministic. Companies need to plan for errors by carefully reviewing the
    information returned and double-checking the sources the LLM is pulling from.

    Even if you prompt the LLM to only answer if it is 100% sure, it's often
    still unlikely to say, "I don't know the answer." So, just like you would
    carefully review a colleague's work, you need to monitor LLMs for accuracy
    too.
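    One lightweight way to put that review into practice (a sketch of my own,
    not a method the author prescribes): before accepting an answer, check that
    each claim the model attributes to a source actually appears in that source,
    and flag the rest for a human.

        # Post-hoc check: does each claim the model attributed to a source
        # appear in the source text? Plain substring matching keeps this
        # self-contained; a real pipeline would use fuzzy/semantic matching.

        def unsupported_claims(claims, source_text):
            """Return the claims that cannot be found in the source text."""
            normalized = source_text.lower()
            return [c for c in claims if c.lower() not in normalized]

        source = "Zappi is a market research platform delivering consumer insights."
        claims = [
            "market research platform",
            "manufactures EV chargers",   # the hallucination from earlier
        ]

        for claim in unsupported_claims(claims, source):
            print("FLAG FOR HUMAN REVIEW:", claim)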

    2. Feed the model trusted, connected information. What you give an AI system
    matters as much as what you ask it. The more you ground a model in trusted,
    connected sources (validated research, internal reports, documented
    decisions, and shared institutional knowledge), the more useful and reliable
    its outputs become. When data is fragmented or vague, the model fills gaps.
    But with clear, current, connected inputs, AI can reason within real
    constraints instead of guessing.
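    In practice, grounding usually means placing the trusted material directly
    into the model's context. A minimal sketch, with made-up document names and
    a hypothetical call_llm wrapper standing in for whichever chat API you use:

        # Grounding sketch: build the prompt from trusted internal documents
        # so the model reasons over supplied facts instead of filling gaps.

        TRUSTED_DOCS = {
            "q1_review_summary": "Customers praise survey speed; the top "
                                 "complaint is dashboard latency.",
            "product_scope": "Zappi provides market research software, "
                             "not hardware.",
        }

        def build_grounded_prompt(question):
            context = "\n".join(f"[{name}] {text}"
                                for name, text in TRUSTED_DOCS.items())
            return ("Use ONLY the documents below to answer. If they do not "
                    "contain the answer, say you don't know.\n\n"
                    f"{context}\n\nQuestion: {question}")

        prompt = build_grounded_prompt("What do customers complain about most?")
        print(prompt)   # pass this to call_llm(prompt) in a real pipeline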

    3. Use carefully curated prompts. The more general the prompt, the more
    general the response. You can better control the outcome by providing
    relevant context and source material, and then asking a specific question.
    The prompt then becomes: "Answer this question only using the data I
    provided, and then cite where the information came from."
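    Concretely, a curated prompt of that shape might look like this (a sketch
    with made-up review data, extending the grounding example above):

        # Curated-prompt sketch: a specific question, restricted to the
        # supplied data, with citations and an explicit escape hatch.

        def curated_prompt(question, data):
            return (f"Data:\n{data}\n\n"
                    f"Question: {question}\n\n"
                    "Answer this question only using the data I provided, and "
                    "then cite where the information came from. If you are not "
                    "100% sure about the answer, say you don't know.")

        reviews = "[reviews_2026_q1] 62% of reviewers mention fast turnaround."
        print(curated_prompt("What do customers value most?", reviews))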

    Prompts like this can dramatically reduce hallucinations. You can even prompt
    the model to be more nuanced by saying, "If you are not 100% sure about the
    answer, then say you don't know. Accuracy is very important here."

    AI as a system, not a magic box

    AI is a powerful tool that we will continue to fold into our everyday working
    lives and beyond. We must realize, though, that AI isn't a magic box. It's an
    imperfect system that reflects the training and insights it's been given.

    Only when we stop expecting perfection from AI can we use it in the way it
    works best: alongside us, to deliver real business value.

    This article was produced as part of TechRadarPro's Expert Insights channel,
    where we feature the best and brightest minds in the technology industry
    today. The views expressed here are those of the author and are not
    necessarily those of TechRadarPro or Future plc. If you are interested in
    contributing, find out more here:
    https://www.techradar.com/news/submit-your-story-to-techradar-pro



    ======================================================================
    Link to news story:
    https://www.techradar.com/pro/think-ai-hallucinations-are-bad-heres-why-youre-wrong


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)