• 'Chatbots respond not just to what you ask, but how you ask it':

    From TechnologyDaily@1337:1/100 to All on Tuesday, April 14, 2026 22:15:25
    'Chatbots respond not just to what you ask, but how you ask it': Report finds AI agents might be sucking up to you and not giving you proper answers
    here's how to fix it

    Date:
    Tue, 14 Apr 2026 21:05:00 +0000

    Description:
    Researchers find chatbots often agree with users depending on phrasing, while question-based prompts produce more balanced AI responses.

    FULL STORY ======================================================================

    • Chatbots often mirror user opinions instead of challenging assumptions directly
    • Confident wording significantly increases agreement levels in large language models
    • Question-based prompts reduce sycophantic responses across tested AI systems

    A simple change in how you talk to an AI chatbot could be the difference between a balanced answer and one that just tells you what you want to hear.

    The UK's AI Security Institute (AISI) has found chatbots are far more likely to agree with users who state their opinions first, rather than provide critical or neutral responses. "People are already using AI tools to help think things through. Our research shows that chatbots respond not just to what you ask, but how you ask it," said Jade Leung, Chief Technical Officer of AISI.

    Why your confidence makes the AI agree with you

    When users sounded especially certain or made their point personal, using phrases like "I believe" or "I'm convinced", chatbots were more likely to echo that view.

    The study tested 440 prompt variants across OpenAI's GPT-4o, GPT-5, and Anthropic's Sonnet-4.5, measuring how often the models simply went along with the user.

    The results revealed a 24% gap in sycophantic behavior: the models agreed far more often when users framed their input as a confident statement than when they framed it as a neutral question.

    Instead of telling the chatbot not to agree with you, researchers found a more effective technique: ask the chatbot to turn your statement into a question before answering it. One reliable prompt is: "Rewrite my input as a question, then answer that question."

    For example, saying "I think my colleague is in the wrong" invites agreement, but asking "Is my colleague in the wrong?" produces a more balanced assessment.

    Other practical tips include asking for a view rather than stating your own first, and avoiding phrasing that sounds especially certain or personal.
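    The reframing tip can be sketched in code. Below is a minimal example assuming an OpenAI-style chat-message schema; the function name and message structure are illustrative, and only the quoted prompt wording comes from the researchers' advice:

```python
# Sketch: wrap a user's confident statement with the reframing instruction
# from the AISI study, so the model restates it as a neutral question
# before answering. The message-dict schema mirrors the common chat-API
# convention; build_reframed_messages is a hypothetical helper name.

REFRAME_INSTRUCTION = "Rewrite my input as a question, then answer that question."

def build_reframed_messages(user_statement: str) -> list[dict]:
    """Return a chat-message list that asks the model to neutralise
    the user's statement before responding to it."""
    return [
        {"role": "system", "content": REFRAME_INSTRUCTION},
        {"role": "user", "content": user_statement},
    ]

# Example from the article: a statement that would otherwise invite agreement.
messages = build_reframed_messages("I think my colleague is in the wrong")
```

    The resulting list can be passed to whichever chat API you use; the point is simply that the reframing instruction travels with every request rather than relying on the user to rephrase each time.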

    The study found that simply telling AI tools not to agree was less effective than this reframing technique. If chatbots always agree with whatever users say, people will get poor advice, become frustrated, and abandon AI tools altogether.

    The UK government wants to ensure people across the country are adequately skilled to grasp the full opportunities of AI, as it believes increasing AI adoption could unlock up to £140 billion in annual economic output, creating more higher-skilled jobs and freeing workers from routine tasks.

    This study confirms that current LLMs are not neutral arbiters of truth: they are designed to be helpful, which often means agreeing with the user.

    The fix requires users to change how they phrase their prompts, but the burden should not fall entirely on humans. Until AI developers build models that actively resist sycophancy, the advice stands: ask a question, do not state an opinion.




    ======================================================================
    Link to news story: https://www.techradar.com/pro/chatbots-respond-not-just-to-what-you-ask-but-how-you-ask-it-ai-agents-might-be-sucking-up-to-you-and-not-giving-you-proper-answers-heres-how-to-fix-it


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)