ChatGPT 5 is finally saying 'I don't know': here's why that's a big deal
Date:
Thu, 21 Aug 2025 20:00:00 +0000
Description:
ChatGPT 5's habit of admitting ignorance instead of guessing is a huge step for AI development.
FULL STORY ======================================================================
Large language models have an awkward history with telling the truth, especially when they can't provide a real answer. Hallucinations have been a hazard for AI chatbots since the technology debuted a few years ago. But ChatGPT 5 seems to be taking a new, more humble approach to not knowing answers: admitting it.
Though most AI chatbot responses are accurate, it's impossible to interact with an AI chatbot for long before it provides a partial or complete fabrication as an answer. The AI displays just as much confidence in its answers regardless of their accuracy. AI hallucinations have plagued users
and even led to embarrassing moments for the developers during
demonstrations.
OpenAI had hinted that the new version of ChatGPT would be willing to plead ignorance over making up an answer, and a viral X post by Kol Tregaskes has drawn attention to the groundbreaking concept of ChatGPT saying, 'I don't know, and I can't reliably find out.'
"GPT-5 says 'I don't know'. Love this, thank you." pic.twitter.com/k6SNFKqZbg (August 18, 2025)
Technically, hallucinations are baked into how these models work. They're not retrieving facts from a database, even if it looks that way; they're predicting the next most likely word based on patterns in language. When you ask about something obscure or complicated, the AI is guessing the right words to answer it, not doing a classic search engine hunt. Hence, the appearance of entirely made-up sources, statistics, or quotes.
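One way to picture abstention on top of next-word prediction is a simple confidence threshold: if the model's own probabilities for the words it generated are low, decline to answer. This is only an illustrative sketch; the function name, threshold value, and log-probability figures below are assumptions for the example, not OpenAI's actual mechanism.

```python
def answer_or_abstain(token_logprobs, threshold=-1.0):
    """Abstain when generation confidence is low.

    token_logprobs: log-probabilities the model assigned to each
    token it generated (hypothetical values for illustration).
    threshold: average log-probability below which we abstain.
    """
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    if avg_logprob < threshold:
        # Low confidence across the answer: admit ignorance
        # instead of emitting a fluent guess.
        return "I don't know, and I can't reliably find out."
    return "confident answer"

# Uniformly low per-token confidence triggers abstention:
print(answer_or_abstain([-2.3, -1.9, -2.8]))
# High per-token confidence lets the answer through:
print(answer_or_abstain([-0.1, -0.2, -0.05]))
```

Real systems are far more involved (calibration, retrieval checks, refusal training), but the core trade-off is the same: a guess suppressed is a hallucination avoided.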
But GPT-5's ability to stop and say 'I don't know' reflects an evolution in how AI models deal with their limitations, at least in terms of their responses.
A candid admission of ignorance replaces fictional filler. It may seem anticlimactic, but it's a significant step toward making the AI more trustworthy.
Clarity over hallucinations
Trust is crucial for AI chatbots. Why would you use them if you don't trust the answers? ChatGPT and other AI chatbots have warnings built into them
about not relying too much on their answers because of hallucinations, but there are always stories of people ignoring that warning and getting into hot water. If the AI just says it can't answer a question, people might be more inclined to trust the answers it does provide.
Of course, there's still a risk that users will interpret the model's self-doubt as failure. The phrase 'I don't know' might come off as a bug, not a feature, if you don't realize the alternative is a hallucination, not the correct answer. Admitting uncertainty isn't how the all-knowing AI some imagine ChatGPT to be would behave.
But it's arguably the most human thing ChatGPT could do in this instance. OpenAI's proclaimed goal is artificial general intelligence (AGI), AI that can perform any intellectual task a human can. But one of the ironies of AGI is that mimicking human thinking includes uncertainties as well as capabilities.
Sometimes, the smartest thing you can do is to say you don't know something. You can't learn if you refuse to admit there are things you don't know. And, at least, it avoids the spectacle of an AI telling you to eat rocks for your health.
======================================================================
Link to news story:
https://www.techradar.com/ai-platforms-assistants/chatgpt/chatgpt-5-is-finally-saying-i-dont-know-heres-why-thats-a-big-deal
--- Mystic BBS v1.12 A49 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)