
Summarize this post with AI
At RSAC's 35th anniversary, Vasu Jakkal, Corporate Vice President at Microsoft Security, opened with a statement that reframed the entire security conversation: enterprises are not losing the AI race because they are under-investing in AI, they are losing because they are over-investing in AI capability while under-investing in AI security governance. With 1.3 billion AI agents projected by 2028, the surface area for attack is not growing linearly, it is exploding, and most enterprise AI cybersecurity strategy was written for a world that no longer exists.
TL;DR: 6 Insights for Security Leaders
Nation-state actors including North Korea's Jasper Sleet and Coral Sleet are already using GenAI to craft malware and autonomous attack operations. Agentic AI creates a new identity explosion where every agent is a non-human identity that must be governed with least-privilege and dynamic access controls. Reactive SOC models are architecturally obsolete and ambient, autonomous security is the only viable response to agentic threats. Prompt injection is the silent killer in AI-powered enterprises and most DLP policies were not built to catch it. The UAE's National XDR and Crystal Ball 2.0 demonstrate that cross-border agent-to-agent intelligence sharing is operationally viable right now. Heavy AI infrastructure investment paired with near-zero governance investment is not a strategy, it is a liability waiting to be triggered.
From Firewalls to Agents: 35 Years of Security in One Paradigm Shift
When the web was born in 1991, security meant perimeters. When the cloud arrived, it meant identity. Today, in the age of agentic AI, security means governing autonomous decision-making systems operating at machine speed across your entire enterprise stack. This is not an evolution of the previous challenge, it is a categorical break from it. The tools, policies, and mental models that worked for human-operated systems fail in environments where AI agents initiate, execute, and complete complex workflows without waiting for a human to approve each step.
Vasu Jakkal put it precisely at RSAC: attackers today think in graphs, not silos. They do not attack one endpoint, they traverse connected systems, exploit trust relationships between agents, and use AI to iterate on attack vectors faster than any human red team can respond. Defenders who still think in silos perimeter here, identity there, data security somewhere else are playing a game where the rules have fundamentally changed.
"Because attackers think in graphs rather than silos, defenders must move from reactive to ambient and autonomous security where protection is embedded into every layer of the AI stack." Vasu Jakkal, Corporate Vice President, Microsoft Security, RSAC 2025
This is the defining insight that must reset your entire AI cybersecurity framework. Ambient security is not a product, it is an architectural philosophy: security embedded not at the edge of your AI systems, but woven into every inference, every agent handoff, every data access event.
Watch: Ambient and Autonomous Security: Building Trust in the Agentic AI Era
The AI-Powered Threat Landscape: What Nation-States Are Already Doing
Most enterprise security leaders still treat AI threats as a future concern. They are wrong. Jakkal revealed at RSAC that North Korean threat groups, specifically Jasper Sleet and Coral Sleet, are actively using Generative AI to craft phishing lures, develop malware, and experiment with autonomous agents for operational attack execution. These are not proof-of-concept demonstrations they are active, deployed AI threat detection evasion techniques targeting enterprise environments right now.
GenAI has dramatically lowered the skill floor for sophisticated attacks. What previously required a team of expert threat actors can now be orchestrated by a smaller group using AI to scale their tradecraft. This asymmetry, where attack sophistication grows cheaply but defense still requires costly human expertise, is the central tension that every enterprise AI security model must urgently address.
Six Agentic AI Risks That Will Define Your Threat Surface in 2026
Risk 01: Agent Autonomy Without Auditability When an AI agent makes a decision that results in a data breach or a compliance failure, can you reconstruct exactly why it made that decision, what data it accessed, and which identity authorized it? In most enterprise deployments today, the answer is no. Autonomy without auditability is not innovation, it is uncontrolled risk that your board will eventually have to answer for.
Risk 02: Prompt Injection and AI Manipulation Prompt injection is the SQL injection of the agentic era. Malicious inputs embedded in data sources, emails, or documents instruct an AI agent to behave in ways its operators never intended, exfiltrating data, bypassing controls, or attacking other agents. Most enterprise AI threat detection systems were not built to catch this attack vector and most security teams have never tested for it.
Risk 03: Identity Explosion Every AI agent is a non-human identity with credentials, access permissions, and behavioral patterns. As organizations deploy hundreds or thousands of agents, the identity attack surface scales proportionally. Without dynamic, least-privilege access management for each agent, a single compromised agent credential can become an enterprise-wide breach vector moving laterally at machine speed.
Risk 04: Data Leakage via AI Systems Traditional data loss prevention tools monitor human-to-system interactions. Agentic AI creates machine-to-machine data flows that bypass these controls entirely. Sensitive data can be summarized, transformed, and transmitted across agent boundaries without triggering a single legacy DLP alert, creating invisible exfiltration channels inside your approved toolset.
Risk 05: Cognitive Offloading and SOC Skill Erosion When SOC analysts rely on AI to triage every alert, pattern-match every anomaly, and draft every incident response playbook, their own investigative skills atrophy. This is not a people problem, it is a design problem. AI-assisted security without deliberate human skill development creates fragile teams who cannot operate effectively when the AI system fails or is itself compromised.
Risk 06: Accountability Gaps in AI Decisions When an AI agent denies a transaction, flags a user, or blocks a critical workflow, who is accountable? In the absence of a formal AI security and governance structure, accountability diffuses across vendors, engineering teams, and compliance functions, ensuring that no one is actually responsible for high-stakes AI decisions. This is not just a legal risk, it is a governance vacuum that adversaries and regulators will both exploit.
"This is not a cybersecurity problem. It is an AI control problem wearing a cybersecurity mask. And most enterprises do not yet have the governance architecture to see the difference." Samta.ai Security Strategy Team
The AASG Framework: Autonomous AI Security Governance
Informed by the four pillars Vasu Jakkal outlined at RSAC and extended through enterprise deployment realities, the Autonomous AI Security Governance (AASG) Framework is the operational architecture every enterprise needs before scaling agentic AI. This is not a checklist, it is a living governance system built for the speed and complexity of autonomous AI environments.
Pillar 01: Dynamic Identity Governance Every AI agent receives a unique, time-bound identity with least-privilege access. Agent credentials rotate dynamically and are audited in real-time, mirroring Zero Trust principles purpose-built for non-human actors operating across distributed enterprise environments. Related reading: Zero Trust for AI Systems
Pillar 02: Autonomous Threat Protection Real-time monitoring and cross-signal correlation specifically detects prompt injection, agent impersonation, and lateral movement between AI agents, not just human-to-system attacks. This is the layer where AI cybersecurity governance strategy moves from policy to active defense.
Pillar 03: Continuous AI-Native Data Security Continuous data loss prevention designed for machine-to-machine data flows, classifying and protecting data as it moves through agentic pipelines, not just at human endpoints. Related reading: AI Risk Management Framework
Pillar 04: Automated Governance and Guardrails Policy enforcement that moves from manual review cycles to automated guardrails, with full audit trails, human-in-the-loop escalation triggers, and board-ready reporting built in from deployment day one. Related reading: AI Governance Guide
Five Mistakes Enterprises Make in AI Security Adoption
Treating AI security as a firewall add-on rather than a governance architecture that spans identity, data, threat, and policy simultaneously is the most expensive mistake enterprises make and the most common. Applying human-centric DLP policies to agentic pipelines leaves entire classes of machine-to-machine data flows completely unmonitored. Deploying AI agents without non-human identity management means using shared credentials or broad service accounts that become catastrophic single points of failure. Waiting for regulation to define governance instead of building a proactive gen AI governance framework ensures you are always one breach behind. Finally, measuring AI security investment by tool count rather than by actual risk visibility and control coverage creates a false sense of security that holds until the moment it does not.
What the UAE Got Right: Global Collaboration as a Security Primitive
H.E. Dr. Mohamed Al Kuwaiti, Head of Cybersecurity for the UAE Government, shared at RSAC that the UAE is not merely deploying AI it is building the governance infrastructure to make AI trustworthy at national scale. The UAE's ambition to deploy 1 billion AI agents is paired with the National XDR initiative and Crystal Ball 2.0, a system enabling agent-to-agent threat intelligence sharing across national borders in near real time.
This is the macro-level signal enterprise CISOs should be reading clearly: AI risk management can no longer be an internal function alone. Cross-border, cross-organization intelligence sharing where AI agents collaborate to surface threats faster than any individual SOC can is the emerging standard. Enterprises that build governance frameworks capable of participating in these ecosystems will have a structural security advantage over those operating in isolation. The broader societal obligation, as Jakkal concluded at RSAC, is ensuring AI systems earn and maintain public trust, and that is not philosophical it is a business requirement measured in regulatory standing, customer confidence, and long-term commercial viability.
The Investment Imbalance That Will Define Enterprise Risk in 2027
Enterprise AI infrastructure investment is growing at 40 to 60 percent year over year while AI cybersecurity governance strategy investment grows in the low single digits. Every dollar spent deploying an ungoverned AI agent is a dollar spent expanding your attack surface without expanding your control. The governance gap does not stay constant as AI scales it accelerates. And by the time an incident forces an enterprise to close it reactively, the cost is measured not in software licenses but in breach response, regulatory penalty, and lost customer trust. Related reading: Enterprise AI Security Checklist
The Security Community's Defining Obligation
Vasu Jakkal closed RSAC with a statement that deserves to sit at the top of every CISO's board deck: the future of AI security depends on whether the security community can collaborate fast enough to ensure AI serves the common good and earns public trust. The agentic AI era is here. The threats are live. The governance gap is measurable. The enterprises that win are the ones who treat the AI security governance framework not as a compliance checkbox but as a competitive architecture the layer that makes everything else possible, sustainable, and trustworthy. Build that layer first. Scale everything else second.
Ready to Close the Governance Gap Before It Becomes a Breach?
Samta.ai is the enterprise AI governance and risk visibility platform that gives security leaders real-time control, automated guardrails, and audit-ready accountability across every AI agent, model, and workflow in your environment. Whether you are mapping your first agentic AI risk inventory or operationalizing a full AI security governance framework at enterprise scale, Samta.ai provides the governance layer your AI stack is currently missing.
Book a strategy demo or explore our AI security and compliance services to see exactly where your controls end and your exposure begins.
Frequently Asked Questions
What is an AI security governance framework and why does every enterprise need one in 2026?
An AI security governance framework is the structured set of policies, controls, monitoring systems, and accountability structures that govern how AI agents and models operate within an enterprise. With agentic AI creating millions of autonomous decisions per day, governance is the only mechanism that keeps AI behavior auditable, explainable, and aligned with both security policy and regulatory requirements. Without it, AI capability and AI risk scale at exactly the same rate.
How is agentic AI different from traditional AI and why does it require a different security approach?
Traditional AI responds to prompts from human users. Agentic AI initiates, plans, and executes multi-step workflows autonomously, often interacting with other AI agents, APIs, and data systems without direct human oversight at each step. This autonomy creates new attack surfaces including agent-to-agent manipulation, non-human identity abuse, and prompt injection via data sources that traditional AI cybersecurity framework controls were never designed to address.
What is the difference between AI governance and AI security?
AI security and governance are complementary but distinct disciplines. AI security focuses on protecting AI systems from external threats including model attacks, prompt injection, and data poisoning. AI governance focuses on ensuring AI systems operate within defined boundaries of accountability, compliance, and auditability. Effective enterprise protection requires both working as an integrated system, not as separate functions owned by different teams.
What are the most critical enterprise AI risk management priorities for security leaders today?
Based on RSAC insights and enterprise deployment patterns, the most critical AI risk management priorities are establishing non-human identity governance for every AI agent, implementing prompt injection detection in all GenAI workflows, extending DLP controls to machine-to-machine data flows, defining clear accountability ownership for AI decisions, and building human-in-the-loop escalation mechanisms that prevent full cognitive offloading to automated systems.
How does Zero Trust apply to AI agent environments?
Zero Trust in AI environments means never implicitly trusting an AI agent based on its origin or previous behavior. Every agent interaction, data access request, and inter-agent communication must be verified, scoped to minimum necessary privilege, and logged for audit a foundational requirement of any robust enterprise AI security model and the architectural principle that underpins the AASG Framework.
