• Watch out AI fans - cybercriminals are using jailbroken Mistral and Grok tools to build powerful new malware

    From TechnologyDaily@1337:1/100 to All on Tuesday, June 24, 2025 18:00:06
    Watch out AI fans - cybercriminals are using jailbroken Mistral and Grok
    tools to build powerful new malware

    Date:
    Tue, 24 Jun 2025 16:55:00 +0000

    Description:
    New research claims AI tool guardrails can be bypassed to create malicious content.

    FULL STORY ======================================================================

    AI tools are more popular than ever - but so are the security risks. Top tools
    are being leveraged by cybercriminals with malicious intent. Grok and Mixtral
    were both found being used by criminals.

    New research has warned that top AI tools are powering 'WormGPT' variants: malicious GenAI tools that generate malicious code, mount social engineering attacks, and even provide hacking tutorials.

    With Large Language Models (LLMs) like Mistral AI's Mixtral and xAI's Grok now in wide use, experts from Cato CTRL found they aren't always being used the way they're intended.

    "The emergence of WormGPT spurred the development and promotion of other uncensored LLMs, indicating a growing market for such tools within
    cybercrime. FraudGPT (also known as FraudBot) quickly rose as a prominent alternative and was advertised with a broader array of malicious capabilities,"
    the researchers noted.

    WormGPT

    WormGPT is a broader name for uncensored LLMs that are leveraged by threat actors, and the researchers identified different strains with different capabilities and purposes.

    For example, keanu-WormGPT, an uncensored assistant, was able to create phishing emails when prompted. When the researchers dug further, the LLM
    disclosed it was powered by Grok, but the platform's security features had been circumvented.

    After this was revealed, the creator added prompt-based guardrails to ensure this information was not disclosed to users. Other WormGPT
    variants were found to be based on Mistral AI's Mixtral, so legitimate LLMs are clearly being jailbroken and leveraged by hackers.

    "Beyond malicious LLMs, the trend of threat actors attempting to jailbreak legitimate LLMs like ChatGPT and Google Bard/Gemini to circumvent their safety measures also gained traction," the researchers noted.

    "Furthermore, there are indications that threat actors are actively
    recruiting AI experts to develop their own custom uncensored LLMs tailored to specific needs and attack vectors.

    Most in the cybersecurity field will be familiar with the idea that AI is lowering the barrier to entry for cybercriminals, and that can certainly be
    seen here.

    If all it takes is asking a pre-existing chatbot a few well-phrased
    questions, then it's pretty safe to assume that cybercrime might become a lot
    more common in the coming months and years.



    ======================================================================
    Link to news story: https://www.techradar.com/pro/security/cybercriminals-are-using-jailbroken-ai-tools-from-mistral-and-grok-to-build-powerful-new-malware


    --- Mystic BBS v1.12 A47 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)