• Why enterprises need governance frameworks for agentic AI

    From TechnologyDaily@1337:1/100 to All on Tuesday, April 21, 2026 12:00:26

    Date:
    Tue, 21 Apr 2026 10:56:29 +0000

    Description:
    AI agents are making decisions for your business. That's why we need a new model of accountability for them.

    FULL STORY ======================================================================

    Enterprise productivity tools are entering a new phase. Instead of simply automating predefined workflows, platforms like Microsoft's emerging Copilot Cowork concept promise something far more ambitious: AI agents capable of executing complex, multi-step tasks across tools such as Microsoft 365.

    These systems represent a shift from automation to delegation. Instead of defining every step of a process, employees describe an outcome and the agent determines how to achieve it: sending emails, updating documents, adjusting permissions, or coordinating across applications. The promise is significant. But so are the risks.

    Jim Sherlock, VP of AI & Cybersecurity R&D at ProCircular.

    For enterprise security and governance teams, agentic AI raises a fundamental question: what happens when the system making operational decisions isn't a human, or even a traditional piece of software, but an autonomous agent acting on a human's behalf?

    The Check-In-With-My-Human Problem

    Many agent-based systems attempt to mitigate risk with a "human in the loop" approach. When the AI reaches a decision point, it pauses and prompts the user to approve the next step.

    In theory, this introduces oversight. In practice, it may introduce very little.

    The check-in-with-my-human model is often a UX compromise disguised as a safety feature. Employees who delegated a workflow to an AI agent did so
    because they were already overloaded. When the system interrupts them with approval prompts, the likely outcome isn't careful review; it's a quick rubber stamp.

    We've seen this behavior before. Most users click through cookie consent banners without reading them. The same dynamic will apply to AI check-ins.

    Meaningful oversight requires the reviewer to understand what the agent did, why it made a decision, and what the downstream consequences might be. That level of scrutiny directly conflicts with the reason the employee delegated the task in the first place.
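    One way to picture what a meaningful check-in would need to carry is a minimal sketch (hypothetical structure, not any vendor's API): the approval request must surface what the agent wants to do, why it decided that, and the downstream consequences, rather than a bare yes/no button.

```python
from dataclasses import dataclass, field

# Hypothetical shape for a meaningful approval request. The reviewer sees
# the action, the agent's rationale, and the downstream consequences --
# the three things real oversight requires.
@dataclass
class ApprovalRequest:
    action: str                                        # what the agent wants to do
    rationale: str                                     # why it decided this
    consequences: list = field(default_factory=list)   # downstream effects
    reversible: bool = True

    def summary(self) -> str:
        irr = "" if self.reversible else " [IRREVERSIBLE]"
        then = ", ".join(self.consequences) or "none"
        return f"{self.action}{irr}\n  why: {self.rationale}\n  then: {then}"

req = ApprovalRequest(
    action="grant SharePoint edit access to vendor@example.com",
    rationale="task requires external review of the draft",
    consequences=["vendor can modify all files in the project library"],
)
print(req.summary())
```

    A prompt that cannot populate these fields is exactly the rubber-stamp pattern the article warns about.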

    For low-stakes activities, this approach may be sufficient. But the first
    time an agent executes an irreversible action that no one actually reviewed, organizations will discover just how fragile this safety model is.

    When AI Actions Blur Accountability

    Agentic AI also challenges one of the core assumptions of enterprise governance frameworks: that actions in a system are clearly attributable to a human user.

    Tools like Copilot Cowork blur that line and create a major accountability gap. When an AI agent sends an email or modifies SharePoint permissions, it
    is no longer clear whether the employee, the AI, or the productivity
    platform is responsible for making that change. Most governance frameworks weren't built for a world where software makes on-the-fly judgment calls autonomously.

    Audit trails today assume a direct link between a user identity and an action taken within the system. When an AI agent is acting autonomously on behalf of a user, that relationship becomes murky.

    To manage this risk, organizations should treat enterprise AI agents less
    like software features and more like digital employees.

    That means giving them:

    - Their own identities

    - Explicitly scoped permissions

    - Independent logging and monitoring

    - Clear audit trails
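    The four controls above can be made concrete in a minimal sketch (a hypothetical in-house class, not a real agent SDK): each agent gets its own identity distinct from the delegating employee, an explicit permission scope, and an append-only log that records every attempt with full attribution.

```python
import uuid
from datetime import datetime, timezone

class AgentPrincipal:
    """Hypothetical first-class identity for an AI agent ("digital employee")."""

    def __init__(self, name, on_behalf_of, allowed_actions):
        self.agent_id = f"agent-{uuid.uuid4()}"      # own identity, not the user's
        self.name = name
        self.on_behalf_of = on_behalf_of             # the delegating employee
        self.allowed_actions = set(allowed_actions)  # explicitly scoped permissions
        self.audit_log = []                          # independent, append-only trail

    def perform(self, action, target):
        allowed = action in self.allowed_actions
        # Log every attempt -- allowed or denied -- with agent AND user attribution.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent_id": self.agent_id,
            "on_behalf_of": self.on_behalf_of,
            "action": action,
            "target": target,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{self.name} is not scoped for '{action}'")
        return f"{action} -> {target}"

agent = AgentPrincipal("briefing-bot", "jane.doe", ["read_calendar", "draft_email"])
agent.perform("draft_email", "weekly-update")
```

    Because every record names both the agent and the human it acts for, an investigator can answer "who did this?" even when no person touched the keyboard.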

    Without these controls, compliance investigations will quickly become difficult, or impossible, to reconstruct.

    Agentic AI vs. Traditional Automation

    Part of the challenge comes from how fundamentally different agentic AI is from traditional automation.

    Tools like Power Automate or Zapier operate using deterministic workflows. Engineers define each step of a process and the logic connecting them. When triggered, the automation executes those steps exactly the same way every time.

    This model is predictable and auditable.

    Agentic AI flips that model entirely.

    Instead of scripting every action, users describe the outcome they want. The AI determines the path dynamically, making decisions along the way based on context.

    That opens the door to automating work that previously couldn't be automated: tasks that are messy, ambiguous, or dependent on situational judgment.

    But it also introduces variability and unpredictability. Two executions of
    the same request may take different paths depending on context.
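    The contrast can be sketched in a few lines (hypothetical functions, not Power Automate or Zapier APIs): a deterministic workflow hard-codes its step list, so every run is identical and trivially auditable, while an agentic workflow asks a planner for the steps, so the path depends on context.

```python
# Deterministic automation: every step is scripted, so execution -- and
# therefore the audit trail -- is identical on every run.
def deterministic_flow(report):
    steps = ["validate", "notify_manager", "archive"]
    for step in steps:
        run_step(step, report)            # same steps, same order, every time
    return steps

# Agentic delegation: the user states an outcome; the agent plans the steps
# itself, so two runs of the same request can take different paths.
def agentic_flow(outcome, planner):
    steps = planner(outcome)              # plan depends on context/model state
    for step in steps:
        run_step(step, outcome)
    return steps

def run_step(step, payload):
    print(f"{step}: {payload}")

deterministic_flow("expense-report-17")
agentic_flow("prepare board briefing", lambda o: ["gather_notes", "summarize"])
```

    The governance consequence falls out of the structure: in the first function the audit question is "did the script run?"; in the second it is "what did the planner decide, and why?"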

    Organizations shouldn't rush to replace their existing automation pipelines with agentic systems. Traditional automation still excels at repeatable, deterministic tasks.

    The better approach is to apply agentic AI to workflows that were never practical to automate in the first place.

    Where Enterprises Can Use Agentic AI Today

    Despite the risks, agentic productivity tools are genuinely
    exciting. Used thoughtfully, they can reduce friction across knowledge work and free employees from administrative overhead.

    Today, the safest applications tend to be tasks that are low risk but time consuming, such as:

    - Preparing meeting briefings

    - Summarizing project updates across teams

    - Drafting routine follow-up communications

    - Aggregating information from multiple workstreams

    These are tasks that often go half-done or undone entirely because employees simply run out of time.

    AI agents can fill those gaps effectively.

    However, organizations should resist the temptation to push agentic systems into high-consequence workflows too quickly.

    Until the platforms can deliver real observability, enforceable governance, and reliable rollback, organizations need to draw a hard line. Certain domains should be off-limits to agentic AI:

    - Anything touching compliance or audit obligations

    - Regulatory reporting and filing workflows

    - Financial approvals, transactions, or budget authority

    - HR and personnel decisions: hiring, terminations, disciplinary actions

    - Access controls, permissions, and data governance

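    One way to enforce such an off-limits list is a policy gate in front of the agent's tool calls. This is a minimal sketch with hypothetical action names: high-consequence actions require a recorded human approval, and every decision, allowed or blocked, is logged.

```python
# Hypothetical policy gate: actions on the off-limits list require a recorded
# human approval; everything else proceeds but is still logged.

HIGH_RISK = {
    "approve_wire_transfer", "modify_access_controls",
    "terminate_employee", "file_regulatory_report",
}

def gate(action, approvals, log):
    """Return True if the action may proceed; record the decision either way."""
    needs_human = action in HIGH_RISK
    approved = (not needs_human) or (action in approvals)
    log.append({"action": action, "needs_human": needs_human, "approved": approved})
    return approved

log = []
gate("summarize_updates", approvals=set(), log=log)       # low risk: proceeds
gate("approve_wire_transfer", approvals=set(), log=log)   # high risk, no human: blocked
```

    The point of the sketch is that the gate sits outside the agent: the model cannot talk its way past a deny list it never sees.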

    If your AI agent can approve a wire transfer or modify access controls
    without a human being in the loop, you've essentially created an unaudited decision-maker with admin privileges.

    The Guardrails Haven't Caught Up Yet

    Agentic AI's potential is enormous. But right now, most organizations are focused on what these tools can do, not how they should be managed. And it's not like we haven't seen this movie before. Every major tech wave of the past three decades (web apps, BYOD, cloud, scripted bots/automation) has followed the same arc: rapid adoption, delayed governance, then painful correction.

    But the difference with agentic AI is that those were all deterministic
    tools. Those tools did what they were told. Agentic AI doesn't follow those rules. Tools like Copilot Cowork interpret, decide, and act. Two identical prompts can produce two different outcomes that touch email, permissions,
    and workflows before a single human reviews them. Combine that with the fastest enterprise adoption curve we've ever seen (driven by Microsoft embedding these capabilities directly into tools people already use) and the blast radius is significantly larger.

    As agent-based workflows scale, the conversation must shift hard toward observability, accountability, and governance. Enterprises that treat AI agents like trusted employees, with identity, permissions, and auditability, will be far better positioned than those that treat them as just another productivity feature.

    The gains to productivity alone mean tools like Copilot Cowork are here to stay. The smart organizations won't wait for something to break before they figure out how to govern them.



    ======================================================================
    Link to news story: https://www.techradar.com/pro/why-enterprises-need-governance-frameworks-for-agentic-ai


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)