• Your OpenClaw agents can empty your inbox and leak your data. Here's how to secure them

    From TechnologyDaily@1337:1/100 to All on Thursday, April 16, 2026 12:15:25
    Your OpenClaw agents can empty your inbox and leak your data. Here's how to secure them

    Date:
    Thu, 16 Apr 2026 11:01:32 +0000

    Description:
    AI agents like OpenClaw can delete your data and leak passwords; here's how
    to stop them.

    FULL STORY ======================================================================

    Meta's Director of AI and Safety Alignment wanted to clean up her inbox, so she set up an OpenClaw AI agent and told it to confirm before acting. But it didn't. Instead, the OpenClaw agent mass-deleted hundreds of emails while she scrambled to shut it down from another device.

    OpenClaw's adoption has skyrocketed in just a few short months, amassing hundreds of thousands of GitHub stars so far. It's part of a growing number of frameworks built to make agentic AI possible.

    By Gil Feig, Co-Founder and CTO of Merge.

    But greater adoption also comes with alarming headlines about unprotected setups leaking passwords, fake add-ons spreading viruses, and poor storage of sensitive information.

    The good news is that with the right processes in place, agentic AI can be secure, regardless of the framework you use. Here are 4 best practices worth putting into action before deploying your agents.

    1. Give the agent minimum permissions

    OpenClaw requires broad system access to execute shell commands, manage files, and control browsers, creating a large attack surface for security issues. It's why everyone advises running it on an isolated computer. But doing so limits what your agent can reliably and safely do.

    Thankfully, there are alternatives that do not require you to give broad system access. You can build agents through a platform like NemoClaw, which runs them in a sandbox with tightly scoped permissions. Or you could use Docker Sandboxes, which use microVMs rather than plain containers for better security.
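
As an illustration of the isolation idea, here is a minimal Python sketch that launches a helper process with a stripped-down environment and a confined working directory. This is only a lightweight illustration (the child still shares the host kernel, unlike a microVM), and the command and directory names are placeholders rather than any OpenClaw or Docker Sandboxes API.

```python
import subprocess
import tempfile

def run_sandboxed(cmd, workdir):
    """Run a helper process with a minimal environment.

    A lightweight illustration of least-privilege execution: the child
    inherits no tokens or API keys from the parent's environment and is
    confined to one working directory.
    """
    minimal_env = {"PATH": "/usr/bin:/bin", "HOME": workdir}
    return subprocess.run(
        cmd,
        cwd=workdir,        # file operations start from one scratch directory
        env=minimal_env,    # nothing inherited from the parent environment
        capture_output=True,
        text=True,
        timeout=30,         # kill runaway processes
    )

with tempfile.TemporaryDirectory() as scratch:
    result = run_sandboxed(["env"], scratch)
    # The child's environment contains only the variables we passed in.
    print(result.stdout)
```

A real deployment would add filesystem and network isolation on top of this, but even the environment scrubbing alone prevents an agent process from reading credentials out of inherited variables.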

    During setup, consider what the minimum access for this specific task
    actually is. An agent summarizing emails needs read access, not write or delete. An agent filing documents needs one folder, not an entire drive.
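
One way to make "read access, not write or delete" concrete is to hand the agent an explicit tool allowlist. The sketch below uses hypothetical tool names and is not an OpenClaw API; it just shows the general pattern of scoping an agent to the tools its task needs.

```python
# Hypothetical tool names for illustration; not a real framework API.
READ_TOOLS = {"list_emails", "read_email"}
WRITE_TOOLS = {"send_email", "delete_email", "move_email"}

class ScopedAgent:
    """An agent wrapper that can only invoke tools it was granted."""

    def __init__(self, name, allowed_tools):
        self.name = name
        self.allowed_tools = frozenset(allowed_tools)

    def call_tool(self, tool, *args):
        if tool not in self.allowed_tools:
            raise PermissionError(
                f"{self.name} is not permitted to call {tool!r}"
            )
        return f"{tool} executed"  # stand-in for the real tool call

# A summarizer gets read access only; it structurally cannot delete.
summarizer = ScopedAgent("email-summarizer", READ_TOOLS)
```

Because the grant is enforced in the wrapper rather than in the prompt, no amount of model confusion can expand the agent's reach past the allowlist.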

    While it's tempting to give AI broad permissions so it can do more, it also exposes you (and your devices) to significant risk. By following the
    principle of least permissions, you're still giving AI permission to do the work while minimizing later headaches.

    For any OAuth approval the agent requests, verify exactly which permissions you're granting. Otherwise, you risk giving your agents too much power and access over time.

    In a similar vein, use purpose-built credentials instead of personal login tokens during setup, and rotate them regularly. When an agent uses your personal login token, it can access your full permissions, while a purpose-built credential is scoped to exactly what the agent needs.
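
As a sketch of that scoping idea, the snippet below fails closed when no purpose-built credential is present and rejects a credential that carries more scopes than the task needs. The environment variable name and scope strings are assumptions for illustration; real platforms define their own scope vocabularies.

```python
import os

# Illustrative scope for an email-summarizing agent.
REQUIRED_SCOPES = {"mail.read"}

def load_agent_credential():
    """Load a scoped service credential; never fall back to a personal token."""
    token = os.environ.get("AGENT_SERVICE_TOKEN")  # hypothetical variable name
    if token is None:
        raise RuntimeError(
            "No scoped credential found; refusing to fall back to a "
            "personal login token."
        )
    return token

def check_scopes(granted):
    """Fail closed if the credential grants more or less than the task needs."""
    granted = set(granted)
    missing = REQUIRED_SCOPES - granted
    excess = granted - REQUIRED_SCOPES
    if missing:
        raise PermissionError(f"credential missing scopes: {missing}")
    if excess:
        raise PermissionError(f"credential over-scoped: {excess}")
    return True
```

Rejecting over-scoped credentials at startup, not just missing ones, is what keeps permissions from quietly accumulating across rotations.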

    To create one, go to the settings on the platform the agent will access and look for "API keys," "service accounts," or "app passwords." These are separate login credentials that aren't tied to your personal account. When creating one, you'll be prompted to select what it can access; choose only the specific resource the agent needs.

    2. Narrow your focus, then expand responsibilities

    Before trusting an agent with anything high-stakes, watch how it handles a low-stakes task, such as analyzing logs or drafting an email. If all goes well, give it increasingly ambiguous tasks as a test to see how it responds. Ask it to complete an out-of-scope action or one that requires a permission it doesn't have.

    An effective AI agent will ask follow-up questions before proceeding or clearly communicate its limits. What you want to avoid is an AI agent with false confidence making an assumption and proceeding despite not actually knowing the right steps.

    An agent that halts and asks on a low-stakes task will probably halt and ask on a high-stakes one. An agent that fills gaps by guessing will do the same when the stakes are real.
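
A probe ladder like this can be automated. In the sketch below, `fake_agent` is a canned stand-in for a real agent call, so the responses are assumptions; the point is the shape of the test: an in-scope task should simply be completed, while an out-of-scope one should produce a clarifying or refusing response.

```python
# Phrases that suggest the agent is asking or declining rather than guessing.
CLARIFY_MARKERS = ("which", "confirm", "cannot", "don't have permission")

def fake_agent(task):
    """Canned stand-in for an LLM-backed agent; replace with a real call."""
    if "delete" in task:
        return ("I don't have permission to delete; "
                "please confirm how to proceed.")
    return "Drafted the summary as requested."

def asks_before_acting(response):
    return any(marker in response.lower() for marker in CLARIFY_MARKERS)

def run_probe(task, expect_clarification):
    """Return True if the agent's behavior matches what the probe expects."""
    response = fake_agent(task)
    return asks_before_acting(response) == expect_clarification

# Probe ladder: an in-scope task first, then an out-of-scope one.
probes = [
    ("summarize today's unread email", False),
    ("delete every message older than a week", True),
]
```

A real harness would run each probe many times, since a probabilistic agent can pass a probe once and fail it the next.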

    That said, remember that these systems are probabilistic, so agents can
    behave differently in production. A safe assumption is that anything that goes wrong in testing will also go wrong in a live environment; but just because nothing goes wrong during testing doesn't mean everything is secure.

    That's why constant monitoring is critical.

    3. Monitor from day one

    An agent that's been running quietly for weeks may have already drifted due to configuration changes, extended OAuth consents, and new permissions acquired through normal operation. Often, it's hard to detect issues because there's no clear breach.
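
A minimal audit over a structured action log might look like the following. The log fields, tool names, and threshold are illustrative assumptions; real observability tools expose far richer signals, but the two checks shown (calls outside the approved tool set, unusually large outbound transfers) are the ones the drift described above tends to trip.

```python
# Illustrative approved tool set and alert threshold.
APPROVED_TOOLS = {"list_emails", "read_email"}
MAX_BYTES_OUT = 1_000_000  # flag anything above ~1 MB sent outward

def audit(log_entries):
    """Scan a list of action-log dicts and return human-readable alerts."""
    alerts = []
    for entry in log_entries:
        if entry["tool"] not in APPROVED_TOOLS:
            alerts.append(f"rogue tool call: {entry['tool']}")
        if entry.get("bytes_out", 0) > MAX_BYTES_OUT:
            alerts.append(f"large outbound transfer via {entry['tool']}")
    return alerts

# Example log: one normal read, one suspicious upload.
log = [
    {"tool": "read_email", "bytes_out": 4_096},
    {"tool": "upload_file", "bytes_out": 25_000_000},
]
```

Running an audit like this on a schedule, and alerting on its output, turns "no clear breach" drift into something you can actually see.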

    Have an observability tool in place to monitor for unusual activity, such as rogue tool calls or data transfers outside normal patterns, and set up alerts so you can quickly course-correct if something goes awry. You can also use it to periodically audit your agent's credentials and actions for anything unusual.

    4. Give measurable constraints

    You may have seen online that it's recommended to tell your AI to "confirm before acting" as a safeguard. Unfortunately, it's too vague to be actionable, so in practice it often leads to inconsistent behavior.

    Instead, give your AI agents testable guardrails so you can clearly decide whether they followed instructions. Guidance like "don't delete, move, or modify any item without displaying a list of planned changes and receiving my explicit approval" is much easier to verify.

    The more precisely you define the constraint, the less room there is for misunderstanding.
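
That precision is what makes a guardrail machine-checkable. The sketch below turns the "list planned changes, then get explicit approval" rule into a gate in code; the plan format and action names are illustrative, not any real agent API.

```python
# Actions the guardrail treats as destructive.
DESTRUCTIVE = {"delete", "move", "modify"}

def execute_plan(plan, approve):
    """Run a plan only after the approver has seen every destructive step.

    `plan` is a list of (action, target) tuples; `approve` is a callback
    that receives the full list of destructive steps and returns True or
    False. No destructive step runs without that explicit yes.
    """
    destructive_steps = [(a, t) for a, t in plan if a in DESTRUCTIVE]
    if destructive_steps and not approve(destructive_steps):
        return "aborted: approval not granted"
    return f"executed {len(plan)} steps"
```

Because approval is a function call rather than a phrase in a prompt, "did the agent follow the guardrail" becomes a yes-or-no question you can test.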

    However, always remember that these systems are probabilistic and a bit of a black box, so there is a chance OpenClaw will ignore instructions at some point. You want to plan for the worst-case scenario when this happens.

    If an action could expose an API key, delete emails, or transmit sensitive data, you need to make that outcome structurally impossible.

    For example, you should revoke delete permissions at the account level so the agent literally cannot delete anything, regardless of what it decides to do, and store sensitive credentials in a secrets manager the agent has no access to, rather than in any file or environment the agent can read.
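
"Structurally impossible" can be as simple as the handler not existing. In the sketch below (hypothetical action names, not an OpenClaw interface), the dispatcher has no delete handler at all, so no instruction the model produces, correct or hallucinated, can reach one.

```python
def list_emails():
    """Read-only handler; a stand-in for a real mail API call."""
    return ["msg-1", "msg-2"]

# The capability table IS the permission boundary: note that no
# "delete_email" handler exists anywhere in the process.
HANDLERS = {
    "list_emails": list_emails,
}

def dispatch(action):
    """Route a model-chosen action to a handler, refusing unknown actions."""
    handler = HANDLERS.get(action)
    if handler is None:
        return f"refused: no handler for {action!r}"
    return handler()
```

Combined with delete permissions revoked at the account level and secrets held in a manager the process cannot read, the worst-case outcome is a refusal string rather than an empty inbox.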

    Good instructions reduce the likelihood of a mistake, but the right setup minimizes the damage.

    Remember that while agents are powerful and quick, they lack human judgment, and most agentic frameworks, like OpenClaw, don't include security features by default. It's on the people deploying them to build in those safeguards.

    Scoped credentials, precise instructions, and frequent monitoring are the minimum viable conditions for deploying an agent that does what you actually want and nothing else.

    This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



    ======================================================================
    Link to news story: https://www.techradar.com/pro/your-openclaw-agents-can-empty-your-inbox-and-leak-your-data-heres-how-to-secure-them


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)