OPAQUE Raises $24M to Bridge Enterprise AI 'Trust Chasm' with Confidential Computing Infrastructure
OPAQUE Systems raised $24 million in Series B funding at a $300 million valuation to advance its Confidential AI platform, addressing what CEO Aaron Fulkerson calls the “trust chasm” blocking enterprise AI adoption. The UC Berkeley RISELab spinout has attracted customers including ServiceNow and Anthropic by providing verifiable privacy guarantees for AI workloads running on sensitive data.
The round reflects growing recognition that traditional security approaches fall short when enterprises attempt to scale AI beyond pilots. Gartner identifies Confidential AI techniques as increasingly essential for securing GenAI workflows, and NVIDIA, AMD, Intel, and every major hyperscaler have endorsed the category within the past year.
The Enterprise Trust Bottleneck
The challenge is stark: enterprises eager to deploy AI agents on proprietary data face a fundamental tension between innovation and compliance. CISOs, legal teams, and compliance officers routinely pause AI initiatives over concerns about data leakage, policy enforcement failures, and the inability to audit what happens to sensitive information during model inference.
“AI won’t scale unless organizations can verify, not just assume, that their data and models are protected,” Fulkerson explained. This verification gap has created what OPAQUE terms a “trust chasm” – the space between AI’s technical capabilities and enterprises’ willingness to deploy it on their most valuable data assets.
Traditional security measures like encryption at rest and in transit leave data exposed during computation, when it must be decrypted in memory for a model to process it. Existing governance tools provide policy frameworks but lack runtime verification of compliance. The result: most enterprise AI initiatives stall in the pilot phase as organizations struggle to meet rising compliance standards while maintaining competitive velocity.
Cryptographically Verifiable Infrastructure
OPAQUE’s Confidential AI platform solves this through what the company calls “runtime-verifiable governance backed by cryptographic proof.” The system delivers three core guarantees: data remains private during computation, model weights are never exposed, and policies are enforced exactly as written throughout every AI workflow.
The technical approach builds on Trusted Execution Environments (TEEs) and advanced cryptographic techniques developed at UC Berkeley’s RISELab. Unlike traditional data governance tools that operate on trust assumptions, OPAQUE provides mathematical proof that sensitive operations occurred according to specified parameters.
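To make the attestation idea concrete, here is a minimal, illustrative sketch of the pattern TEEs enable, not OPAQUE's actual API: before releasing sensitive data, a client verifies a signed report attesting to exactly which code is running inside the enclave. Real TEEs (Intel SGX/TDX, AMD SEV-SNP) sign reports with hardware-rooted keys verified via vendor certificate chains; the shared-key HMAC below merely stands in for that signature, and all names are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical key for illustration; a real TEE uses a hardware-rooted
# attestation key verified through a vendor certificate chain.
ATTESTATION_KEY = b"demo-attestation-key"

def make_report(enclave_code: bytes, nonce: bytes) -> dict:
    """Simulate the TEE producing a signed attestation report."""
    measurement = hashlib.sha256(enclave_code).hexdigest()
    body = json.dumps({"measurement": measurement, "nonce": nonce.hex()})
    sig = hmac.new(ATTESTATION_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "signature": sig}

def verify_report(report: dict, expected_measurement: str, nonce: bytes) -> bool:
    """Client-side check: valid signature, approved code, fresh nonce."""
    expected_sig = hmac.new(ATTESTATION_KEY, report["body"].encode(),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(report["signature"], expected_sig):
        return False  # report was tampered with or not produced by the TEE
    body = json.loads(report["body"])
    return (body["measurement"] == expected_measurement
            and body["nonce"] == nonce.hex())

enclave_code = b"approved-model-serving-binary"
nonce = b"fresh-session-nonce"
report = make_report(enclave_code, nonce)
ok = verify_report(report, hashlib.sha256(enclave_code).hexdigest(), nonce)
print("release data to enclave:", ok)  # True only if the code matches policy
```

The key property is that approval hinges on a cryptographic check of what is actually running, rather than on trusting an operator's claim about the deployment.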
The platform extends beyond basic confidential computing by creating an end-to-end trust layer for AI agents. Organizations can verify that their proprietary data never leaked, that model behavior adhered to approved policies, and that agent actions remained within authorized boundaries – all backed by cryptographic evidence rather than monitoring dashboards.
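The contrast with monitoring dashboards can be illustrated with a tamper-evident audit log, a standard building block for this kind of cryptographic evidence (again a generic sketch under simplifying assumptions, not OPAQUE's implementation): each entry commits to the hash of the previous one, so any retroactive edit breaks verification.

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any after-the-fact edit is detectable."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"agent": "claims-bot", "action": "read", "resource": "policy:PII"})
append_entry(log, {"agent": "claims-bot", "action": "infer", "resource": "model:v3"})
print(verify_chain(log))                  # True
log[0]["event"]["resource"] = "raw-PII"   # attempt to rewrite history
print(verify_chain(log))                  # False: tampering is evident
```

A dashboard reports what the monitoring system chose to record; a hash chain lets an auditor independently prove that the record itself was never altered.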
Enterprise customers report moving from pilot to production 4-5x faster with OPAQUE’s infrastructure, as security and compliance teams gain the verifiable guarantees needed to approve AI deployments on sensitive workloads.
Strategic Validation and Market Expansion
Walden Catalyst led the Series B, with participation from returning investors Intel Capital, Race Capital, Storm Ventures, and Thomvest, plus new strategic partner Advanced Technology Research Council (ATRC). The investor mix signals both Silicon Valley conviction and international recognition of sovereign AI requirements.
“OPAQUE solves this problem and has pioneered a platform built for verifiable privacy, policy enforcement, and model integrity, capabilities that are quickly becoming non-negotiable,” said Young Sohn, Founding Managing Partner at Walden Catalyst and board member at Samsung, Arm, and Cadence.
The company recently launched OPAQUE Studio, a development environment for building Confidential AI agents with runtime-verifiable privacy and compliance. This extends the platform from infrastructure-as-a-service to a complete development stack for secure autonomous systems.
Customer adoption spans regulated industries including financial services, healthcare, and insurance, where data privacy violations carry significant regulatory and reputational costs. ServiceNow’s deployment demonstrates enterprise-scale validation, while Anthropic’s participation as both customer and strategic partner underscores the technology’s relevance to foundation model providers.
The Sovereign AI Infrastructure Emergence
OPAQUE is expanding into post-quantum security, confidential AI training, and sovereign cloud environments as governments increasingly require domestic control over AI infrastructure. The ATRC investment reflects this trend, with the UAE-based research council seeking cryptographically verifiable foundations for sovereign AI systems.
“There is no such thing as sovereign AI without verifiable guarantees on how data, models, and policies are protected and governed,” noted Dr. Najwa Aaraj, CEO of the Technology Innovation Institute (TII), explaining ATRC’s dual role as investor and partner.
This sovereign dimension distinguishes OPAQUE from adjacent security solutions focused primarily on threat detection or access control. As AI systems become critical national infrastructure, the ability to prove – rather than merely monitor – compliance with data sovereignty requirements represents a fundamental architectural requirement.
The broader Confidential AI category is experiencing rapid adoption as organizations realize that traditional security approaches cannot scale with autonomous AI systems. Unlike human-supervised workflows, AI agents operating at machine speed require infrastructure that can verify trustworthy behavior in real-time without introducing latency that negates automation benefits.
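To see why enforcement must sit inline rather than in an after-the-fact review, consider a minimal guard that evaluates policy before each agent action executes. Everything here (the policy shape, the action names) is illustrative, but the structure shows why the check can run at machine speed: it is a constant-time lookup in the hot path, not a human approval step.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    resource: str

# Illustrative allow-list; real systems compile richer policies
# (purpose, data classification, jurisdiction) into the same inline check.
POLICY = {("summarize", "claims-db"), ("notify", "email")}

def enforce(action: Action) -> bool:
    """Inline, pre-execution check: the action runs only if policy allows it.

    In a verifiable system this decision would also be appended to a
    tamper-evident log (see the hash-chain sketch above)."""
    allowed = (action.name, action.resource) in POLICY
    print(f"{action.name} on {action.resource}: "
          f"{'allowed' if allowed else 'denied'}")
    return allowed

for action in [Action("summarize", "claims-db"), Action("export", "claims-db")]:
    if enforce(action):
        pass  # execute the action inside the trusted environment
```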
Looking Forward: Trust-First AI Architecture
The funding positions OPAQUE to capitalize on the emerging requirement for trust-first AI architecture. As enterprises move beyond simple chatbot deployments to autonomous agents handling mission-critical processes, the infrastructure must provide the same reliability guarantees as traditional enterprise systems.
Over the next 12-18 months, OPAQUE expects Confidential AI requirements to extend beyond regulated industries to any organization deploying AI on competitive data. The post-quantum expansion anticipates the point at which current public-key cryptography may become vulnerable to quantum attacks, ensuring long-term infrastructure viability.
The broader implication: as AI transitions from experimental technology to operational infrastructure, the security and governance layer becomes as critical as compute and storage. OPAQUE’s approach suggests the winning architecture will be confidential-by-design rather than security-as-an-afterthought.
Infrastructure Spotlight: Enterprise AI adoption often stalls not due to technical limitations but trust gaps. OPAQUE’s cryptographically verifiable approach represents a fundamental shift toward provable rather than presumed security. Overclock orchestration benefits from similar trust-first infrastructure principles, ensuring enterprise automation maintains security and auditability as it scales from pilot to production deployment.