• Governing the hidden risks of generative AI in the enterprise

    From TechnologyDaily@1337:1/100 to All on Tuesday, April 14, 2026 10:00:27
    Governing the hidden risks of generative AI in the enterprise

    Date:
    Tue, 14 Apr 2026 08:53:53 +0000

    Description:
    With the right governance, organizations can scale generative AI securely and build lasting trust.

    FULL STORY ======================================================================Copy link Facebook X Whatsapp Reddit Pinterest Flipboard Threads Email Share this article 0 Join the conversation Follow us Add us as a preferred source on Google Newsletter Tech Radar Pro Are you a pro? Subscribe to our newsletter Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed! Become a Member in Seconds Unlock instant access to exclusive member features. Contact me with news and offers from other Future brands Receive email from us on behalf of our trusted partners or sponsors By submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over. You are
    now subscribed Your newsletter sign-up was successful Join the club Get full access to premium articles, exclusive features and a growing list of member rewards. Explore An account already exists for this email address, please log in. Subscribe to our newsletter Generative AI has quickly moved from experimentation to everyday business use. Organizations are deploying large language models and AI copilots to accelerate workflows, improve productivity and unlock new services across functions from marketing to software development. Ian Jeffs Social Links Navigation

    ISG Country Manager at Lenovo UK&I. Yet as adoption spreads across the enterprise, the governance structures surrounding these systems are often lagging behind. Many organizations remain focused on the productivity
    benefits of generative AI while overlooking the operational, security and reputational risks that accompany its deployment. Research from the British Standards Institution highlights this gap: fewer than a quarter of business leaders say their organization has an AI governance program in place. As generative AI becomes embedded in critical workflows, governance, security
    and human oversight must evolve just as rapidly as the technology itself. Article continues below You may like 3 risks hindering enterprise-ready AI and how low-code workflows help AI governance under strain: what modern platforms mean for data privacy Tame your AI gremlins before the chaos
    becomes permanent A new kind of security challenge Generative AI systems introduce a fundamentally different risk profile compared with traditional enterprise software. Unlike deterministic applications, large language models respond dynamically to natural-language inputs, making them more difficult to control and secure.

    One of the most widely recognized risks is prompt injection, where malicious actors craft inputs designed to manipulate model behavior, bypass safeguards or extract sensitive information. However, this is only one dimension of a broader challenge.

    As generative AI tools become integrated into enterprise platforms, they can also be exploited to automate phishing campaigns, generate malicious code or accelerate other cyber threats. The scale and speed at which AI systems operate means these risks can proliferate quickly if safeguards are not carefully designed.

    Security strategies must therefore move beyond static protections. Organizations are increasingly adopting secure-by-design approaches that
    embed safeguards throughout the lifecycle of AI systems, from the data used
    to train models through to deployment and ongoing monitoring . Are you a pro? Subscribe to our newsletter Sign up to the TechRadar Pro newsletter to get
    all the top news, opinion, features and guidance your business needs to succeed! Contact me with news and offers from other Future brands Receive email from us on behalf of our trusted partners or sponsors By submitting
    your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.

    Data governance plays a critical role in this process. Many organizations
    rely on high-level data classification frameworks that were not designed with AI systems in mind. Without more granular labelling and controls, models may gain access to sensitive data or generate outputs that expose confidential information.

    The risk becomes even more complex in emerging agent-based systems, where autonomous AI tools interact with each other to perform tasks. In these environments, each interaction can create a new vulnerability, potentially allowing data leakage or manipulation to propagate rapidly across connected systems.

    Maintaining human oversight and systematic monitoring is essential to prevent small errors from cascading into larger failures. What to read next The leadership dilemma: Governing the Agentic AI workforce Championing data leadership: how can data strategy shape AI success? How AI is reshaping compliance: Why governance still matters Building trustworthy AI systems Security breaches are often the most visible AI failures, but the longer-term risks associated with biased or unreliable outputs can be equally damaging.

    When generative AI systems produce misleading or discriminatory results, they undermine organizational credibility and erode trust among customers , employees and regulators. In sectors such as healthcare, financial services and the public sector, flawed AI outputs can also carry significant legal and compliance implications.

    Responsible AI governance must therefore extend across the entire lifecycle
    of a system, rather than being applied after deployment. Organizations that succeed in doing so typically focus on several foundational principles.

    Reliable data inputs:

    The quality of AI outputs is directly tied to the quality of the data used to train and prompt models. Strong data governance, including accurate classification, verification and labelling, helps reduce hallucinations and prevents sensitive information from being inadvertently surfaced.

    Built-in governance controls:

    Effective AI governance requires guardrails that are established from the beginning of any AI initiative. Controls should monitor data ingestion, model behavior and generated outputs to ensure systems operate within defined ethical, security and regulatory boundaries.

    Continuous evaluation:

    Generative models evolve over time as they interact with new data and users. Regular testing and validation are essential to detect drift, bias or unexpected behavior that may emerge after deployment.

    Together, these practices support a governance-first mindset that aligns with the security frameworks already used to manage complex enterprise systems. Transparency and explainability are key components of this approach, ensuring that both users and organizations can understand how AI systems produce their outputs.

    Human oversight remains particularly important in high-risk scenarios.
    Skilled reviewers should be involved in validating outputs where decisions could have material consequences for customers, employees or regulatory compliance. Moving from experimentation to operational maturity Despite growing awareness of AI risks, many organizations still lack the processes
    and tools needed to manage them effectively. Generative AI is often
    introduced through pilot projects or productivity tools without the
    governance structures required to support long-term deployment.

    In reality, managing AI risk requires continuous oversight. Security checks cannot end once a system goes live. Instead, organizations should treat AI governance as an ongoing operational function, similar to the zero-trust principles used in modern cybersecurity strategies. Several practical steps can help close this maturity gap.

    First, organizations must expand security awareness beyond technical teams. Business leaders and employees should understand issues such as prompt hygiene, data sensitivity and the potential consequences of AI misuse.

    Second, models should be tested and evaluated continuously throughout their lifecycle. This includes validating training data, assessing model behavior and monitoring outputs after deployment.

    Third, development teams should integrate DevSecOps practices directly into
    AI pipelines so that security and governance checks are embedded into
    everyday engineering workflows.

    Access management also requires close attention. Applying least-privilege principles ensures that both individuals and systems only access the data necessary for their specific tasks.

    Finally, organizations should prepare for the possibility of AI-related incidents. Simulated exercises and scenario planning can help teams
    understand how quickly AI-driven threats might escalate and how best to respond. Trust will determine the future of AI adoption Generative AI has the potential to transform how organizations operate, but its long-term success depends on the trustworthiness of the systems being deployed.

    Organizations that treat governance, security and transparency as
    foundational elements of AI strategy will be far better positioned to unlock the technologys value. Those that treat them as secondary considerations risk exposing themselves to operational failures, regulatory scrutiny and reputational damage.

    The next stage of AI adoption will not be defined by experimentation alone, but by the ability to operationalize trust. Embedding governance throughout the AI lifecycle, from data sourcing to ongoing monitoring, will allow organizations to innovate confidently while safeguarding their customers, employees and reputation. We've featured the best endpoint protection. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



    ======================================================================
    Link to news story: https://www.techradar.com/pro/governing-the-hidden-risks-of-generative-ai-in-t he-enterprise


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)