At NVIDIA GTC 2026, Cisco made a statement that should matter to every CISO and CIO in a regulated enterprise. Kevin Wollenweber, SVP and GM of Cisco Data Center and Internet Infrastructure, wrote that enterprise AI must be secure, observable, and governable before it can scale.
Cisco then backed that statement with product announcements. AI Defense now scans models for vulnerabilities, sanitizes prompts in real time, and blocks prompt injections. The new OpenShell integration constrains every tool call an autonomous agent makes at the infrastructure layer. Splunk provides the observability layer with SOC telemetry and SIEM correlation across AI workloads.
Secure and observable are covered. But governable — the word Cisco used — requires a third layer that security infrastructure alone cannot provide.
The Governance Gap
Securing a model and observing its telemetry are necessary. They are not sufficient. Regulated enterprises in financial services, healthcare, and defense face governance requirements that security infrastructure does not address.
Agent identity and delegated authority. Every AI agent acting on behalf of a human employee needs a cryptographic credential binding it to that specific person, with exact action scopes, transaction limits, and account allowlists. A ReadOnly agent should not be able to initiate a payment. When it tries, that attempt must be blocked inline, before execution, and logged to an immutable audit trail. Cisco AI Defense scans the model. But who authorized the agent? What are its limits? Can you prove it to a regulator tomorrow morning?
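To make the distinction concrete, here is a minimal sketch of an inline delegated-authority check. The credential fields, names, and in-memory audit store are illustrative assumptions, not SmartFlow's actual API:

```python
# Illustrative sketch: the credential fields and audit store are assumptions,
# not SmartFlow's actual API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentCredential:
    agent_id: str
    principal: str                    # the human employee the agent acts for
    scopes: frozenset                 # e.g. {"accounts:read"}
    transaction_limit: float          # 0.0 for a read-only agent
    account_allowlist: frozenset

AUDIT_LOG = []                        # stand-in for an immutable audit store

def authorize(cred, action, account, amount=0.0):
    """Inline check before execution: allow only if every constraint
    passes, and log every decision either way."""
    allowed = (
        action in cred.scopes
        and account in cred.account_allowlist
        and amount <= cred.transaction_limit
    )
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": cred.agent_id,
        "principal": cred.principal,
        "action": action,
        "account": account,
        "amount": amount,
        "decision": "allow" if allowed else "deny",
    })
    return allowed

# A ReadOnly agent attempting a payment is blocked inline, before execution.
readonly = AgentCredential(
    agent_id="agt-007",
    principal="jdoe@example.com",
    scopes=frozenset({"accounts:read"}),
    transaction_limit=0.0,
    account_allowlist=frozenset({"ACC-001"}),
)
assert not authorize(readonly, "payments:initiate", "ACC-001", 250.0)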
Regulatory examination readiness. Federal examiners do not ask whether your models are safe. They ask for an SR 11-7 model inventory, a FINRA Rule 3110 supervision package, an FFIEC IT controls assessment, or EU AI Act high-risk system documentation. These are governance artifacts, not security artifacts. They require structured audit data captured at the AI workflow layer, not at the infrastructure layer.
Information barrier enforcement. When a research analyst and an investment banker both use AI tools powered by the same enterprise deployment, Material Non-Public Information must not cross the wall. This requires LDAP and Active Directory group identity integration, real-time content classification at the semantic level, 90-day violation retention, and FINRA Rule 3110 attestation reports. Infrastructure firewalls operate at the network layer. Information barriers operate at the content and identity layer.
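A simplified sketch of that wall check, pairing directory group identity with content classification. The group names and the keyword-based MNPI classifier are stand-ins for illustration only:

```python
# Illustrative sketch: group names and the MNPI classifier stub are
# placeholders, not SmartFlow's implementation.

PRIVATE_SIDE = "investment-banking"   # holds deal-related MNPI
PUBLIC_SIDE = "research-analysts"     # publishes research

def contains_mnpi(text: str) -> bool:
    """Stand-in for real-time semantic content classification."""
    return "non-public" in text.lower()

def barrier_blocks(sender_groups: set, recipient_groups: set, text: str) -> bool:
    """True when MNPI would cross from the private side of the wall to the
    public side. Group membership would come from LDAP/AD in practice."""
    return (
        PRIVATE_SIDE in sender_groups
        and PUBLIC_SIDE in recipient_groups
        and contains_mnpi(text)
    )

# A banker's AI-drafted note carrying MNPI must not reach the research side.
assert barrier_blocks(
    {"investment-banking"},
    {"research-analysts"},
    "Draft summary of non-public acquisition terms",
)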
MCP tool call governance. Agentic AI workflows increasingly use the Model Context Protocol to connect agents to external tools and data sources. OpenShell constrains agent behavior at the infrastructure boundary. But who authorized the specific tool calls the agent is making? What action scopes apply? Is there an immutable log recording every MCP invocation with its authorizing principal? Governance at the workflow layer answers these questions. Security at the perimeter does not.
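In sketch form, a governance proxy wraps every MCP tool invocation in an authorization check and an audit write before forwarding it. The credential shape and function names below are hypothetical, not the MCP SDK or any vendor's implementation:

```python
# Hypothetical governance proxy for MCP tool calls. Names and credential
# shape are assumptions, not the MCP SDK or SmartFlow's implementation.
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for an append-only audit store

def govern_tool_call(credential, tool, arguments, forward):
    """Authorize an MCP invocation against the agent's credential, record it
    with its authorizing principal, then forward it or refuse."""
    allowed = tool in credential["tool_scopes"]
    AUDIT_LOG.append(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": credential["agent_id"],
        "principal": credential["principal"],   # who authorized this agent
        "tool": tool,
        "arguments": arguments,
        "decision": "allow" if allowed else "deny",
    }))
    if not allowed:
        raise PermissionError(f"{tool} is outside this agent's scopes")
    return forward(tool, arguments)

cred = {"agent_id": "agt-007", "principal": "jdoe@example.com",
        "tool_scopes": {"crm.lookup"}}
govern_tool_call(cred, "crm.lookup", {"customer": "ACME"},
                 forward=lambda tool, args: {"ok": True})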
Why This Is an Infrastructure Problem, Not a Policy Problem
Most enterprises approach AI governance as a policy and compliance problem. They write AI usage policies. They establish AI governance committees. They adopt frameworks. These are necessary steps. They are not sufficient.
A governance framework defines what AI should do. The infrastructure determines what AI actually does. When an agent operates autonomously at 2am, the governance framework is not in the room. The control plane is.
This distinction has practical consequences. Enterprises that treat governance as a policy function produce documentation. Enterprises that treat governance as an infrastructure function produce evidence. Examiners accept evidence. They accept well-written policies only as context.
“Enterprise AI must be secure, observable, and governable.” — Kevin Wollenweber, SVP and GM, Cisco Data Center and Internet Infrastructure, GTC 2026
Cisco named all three requirements. Security and observability are product lines. Governability is an architecture decision.
The Three-Layer Stack
The enterprise AI governance stack has three distinct layers, each requiring purpose-built infrastructure.
- Security layer: model scanning, prompt sanitization, runtime guardrails, firewall enforcement. Cisco AI Defense operates here. This is the threat-prevention layer.
- Observability layer: SOC telemetry, SIEM correlation, behavioral analytics, anomaly detection. Cisco Splunk operates here. This is the visibility layer.
- Governance layer: agent identity and delegated authority, inline policy enforcement, regulatory documentation generation, information barrier enforcement, MCP governance. This is the control plane layer. It sits between the organization and every AI model, agent, and workflow it deploys.
The three layers are complementary, not redundant. Security prevents threats. Observability detects anomalies. Governance enforces boundaries, authorizes actions, and generates the audit trail that proves compliance. All three are required. Each is a distinct infrastructure investment.
The LiteLLM Incident as Context
On March 24, 2026, threat actor TeamPCP compromised LiteLLM versions 1.82.7 and 1.82.8. The attack used a compromised Trivy security scanner to inject credential-stealing malware into the most widely used open-source LLM proxy in the Python ecosystem. The stolen data included AI provider API keys and Kubernetes secrets, harvested from users of a package with an estimated 3.4 million daily downloads.
Enterprises running SmartFlow on-premises were not exposed. There is no public package registry dependency. There is no cloud-hosted supply chain. The software runs on hardware the customer owns, behind the customer’s firewall, with customer-held encryption keys.
The incident is not a reason to stop using AI tools. It is a reason to be precise about which AI infrastructure carries enterprise-grade security posture and which carries open-source supply chain risk. The Cisco framing helps: enterprise AI must be secure. The deployment model is part of what makes it secure.
What APERION Builds
SmartFlow is the governance control plane for regulated industries. It deploys on-premises, runs Kubernetes-native, and adds sub-5ms routing overhead. It sits between the enterprise and every AI model, agent, and workflow in the environment.
The core capabilities directly address the governance layer Cisco identified:
- AIDA (AI Agent Identity and Delegated Authority): issues cryptographic credentials binding every AI agent to a human principal with exact action scopes, transaction limits, and account allowlists. Enforcement is inline, before execution. Every decision is logged to an immutable audit trail.
- Maestro: the identity-aware inline policy engine. Enforces governance decisions at the workflow layer in real time.
- Regulatory Examination Suite: generates SR 11-7 model inventories, FINRA 3110 supervision packages, FFIEC IT controls assessments, and EU AI Act documentation from audit data SmartFlow already captures. One API call (sketched after this list).
- Information Barrier Enforcement: uses LDAP and AD group identity with real-time content classification to enforce MNPI controls. Violation events are retained for 90 days. Attestation reports are generated automatically.
- MCP proxy governance: authorizes every tool call against the agent’s AIDA credential and logs the invocation to the audit trail.
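For illustration, that single API call might look like the following. The host, endpoint, and payload fields are assumptions, not SmartFlow's published API:

```python
# Hypothetical request shape for "one API call": the host, endpoint, and
# payload fields are illustrative, not SmartFlow's published API.
import requests

resp = requests.post(
    "https://smartflow.example.internal/api/v1/examination-packages",
    json={
        "framework": "SR-11-7",    # or "FINRA-3110", "FFIEC", "EU-AI-ACT"
        "period": {"from": "2026-01-01", "to": "2026-03-31"},
        "format": "pdf",
    },
    timeout=30,
)
resp.raise_for_status()
with open("sr-11-7-model-inventory.pdf", "wb") as f:
    f.write(resp.content)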
Cisco said enterprise AI must be governable. We agree. We built the layer that makes it so.
References
- Kevin Wollenweber, “Cisco Secure AI Factory: Powering Agentic AI at Scale,” Cisco Blogs, March 2026. blogs.cisco.com/news/cisco-secure-ai-factory-powering-agentic-ai-at-scale
- Cisco Newsroom, “Cisco Secure AI Factory with NVIDIA at GTC 2026,” March 2026. newsroom.cisco.com
- Cisco Blogs, “Securing Enterprise Agents with NVIDIA and Cisco AI Defense,” March 2026. blogs.cisco.com/ai/securing-enterprise-agents-with-nvidia-and-cisco-ai-defense
- Cisco Blogs, “Cisco Gives Its Secure AI Factory with NVIDIA a Secure Multi-Agent Edge Up,” March 2026. blogs.cisco.com/datacenter
- The Hacker News, “TeamPCP LiteLLM Supply Chain Attack,” March 24, 2026.
- Wiz Research, “LiteLLM Cloud Exposure Report,” March 2026.
- Federal Reserve / OCC / FDIC, SR 11-7: Supervisory Guidance on Model Risk Management, 2011.
- FINRA Rule 3110, Supervision (2014).
- EU AI Act, Regulation (EU) 2024/1689, entered into force August 2024.
- FFIEC IT Examination Handbook (updated 2024).
Craig Alberino is CEO and Co-Founder of APERION, the enterprise AI governance control plane for regulated industries. aperion.ai
Ready to govern your AI infrastructure?
See how SmartFlow gives regulated industries complete AI sovereignty.
Request a Demo
View Documentation