Manifold Raises $8M to Secure AI Agents Directly on Enterprise Endpoints
85% of developers now run autonomous coding agents like GitHub Copilot, Claude Code, and Cursor directly on their laptops with broad access to production systems, source code, and CI/CD pipelines.
This represents a fundamental security blind spot that existing enterprise security tools weren’t designed to handle. Developers routinely get exceptions to standard endpoint policies because their normal activity already looks malicious to traditional security systems. Now AI agents perform those same high-risk tasks autonomously, creating what Manifold calls “the next major attack surface” as agent adoption spreads beyond engineering to every knowledge worker role.
The Endpoint Security Gap
Traditional AI security has focused on inference-time monitoring—analyzing text prompts and model outputs at the gateway level. But this approach is blind to anything that happens beyond that perimeter: it misses the actual execution layer, where agents interact with systems, databases, and external services.
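The gap can be sketched in a few lines. The toy example below (entirely hypothetical—the filter, the agent, and the tool names are illustrative, not Manifold's or any vendor's implementation) shows a prompt that sails past a keyword-based gateway filter while the risky behavior only appears in the shell command the agent actually executes:

```python
# Hypothetical illustration: a gateway-level filter sees only prompt text,
# while the risk materializes at the execution layer as a tool call.

BLOCKLIST = {"rm -rf", "drop table"}  # naive inference-layer keyword filter


def gateway_filter(prompt: str) -> bool:
    """Return True if the prompt text passes the gateway check."""
    return not any(term in prompt.lower() for term in BLOCKLIST)


def agent_plan(prompt: str) -> list[tuple[str, str]]:
    """Toy agent: expands an innocuous-looking request into tool calls.

    The prompt contains no blocked phrase, but the resulting shell
    command does -- visible only if you monitor actual execution.
    """
    return [("shell", "tar czf backup.tgz src/ && rm -rf src/")]


def runtime_monitor(tool_calls: list[tuple[str, str]]) -> list[str]:
    """Execution-layer check: inspect what the agent actually runs."""
    return [cmd for tool, cmd in tool_calls if "rm -rf" in cmd]


prompt = "Archive the source directory and clean up afterwards."
assert gateway_filter(prompt)                 # passes the text-level filter
flagged = runtime_monitor(agent_plan(prompt)) # destructive command caught here
print(flagged)
```

The point is not that keyword filters are badly tuned; it is that the prompt simply never contains the information the defender needs, so only instrumentation at the tool-call layer can see it.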
The problem compounds as the AI agent ecosystem rapidly expands. Agents now connect to a growing web of MCP (Model Context Protocol) servers, third-party skills, and enterprise integrations that security teams have little visibility into. Each connection represents a potential attack vector that current security infrastructure simply cannot map or monitor.
Engineers represent a particularly acute risk because their roles require deep system access. They read entire codebases, execute shell commands, and make API calls as part of normal workflow. Traditional endpoint detection and response (EDR) tools flag these activities as suspicious, so security teams create blanket exceptions. When coding agents automate these same tasks, the exceptions persist—but the human oversight disappears.
Runtime Visibility Architecture
Manifold’s AI Detection and Response (AIDR) platform provides what the company calls “full runtime visibility” into agent behavior directly on endpoints. Rather than attempting to classify natural language at the inference layer, the system monitors the actual tools agents call, systems they access, and actions they execute.
The architecture maps every agent in an enterprise environment, tracking their connections to MCP servers, databases, and external systems in real time. Security teams receive a live topology of agent activity, with behavioral anomalies flagged when agent actions drift from established patterns. The system distinguishes between normal agent operations and potentially risky behavior based on contextual understanding of what each agent should be doing.
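The idea of a live topology plus drift detection can be approximated with a simple per-agent baseline. The sketch below is a minimal illustration under assumed semantics (the class, tool names, and targets are invented for this example and do not describe Manifold's actual system): record which tools an agent normally calls and which systems it normally touches, then flag anything outside that baseline.

```python
# Hypothetical sketch of baseline-and-drift flagging for agent activity.
from collections import defaultdict


class AgentActivityMap:
    def __init__(self) -> None:
        self.baseline = defaultdict(set)  # agent -> tools observed as normal
        self.topology = defaultdict(set)  # agent -> systems it connects to

    def observe(self, agent: str, tool: str, target: str) -> None:
        """Learning phase: record an agent's normal behavior."""
        self.baseline[agent].add(tool)
        self.topology[agent].add(target)

    def check(self, agent: str, tool: str, target: str) -> list[str]:
        """Enforcement phase: flag actions that drift from the baseline."""
        anomalies = []
        if tool not in self.baseline[agent]:
            anomalies.append(f"{agent}: unseen tool {tool!r}")
        if target not in self.topology[agent]:
            anomalies.append(f"{agent}: new connection to {target!r}")
        return anomalies


amap = AgentActivityMap()
amap.observe("coding-agent", "read_file", "repo.internal")
amap.observe("coding-agent", "run_tests", "ci.internal")

print(amap.check("coding-agent", "read_file", "repo.internal"))     # [] -- normal
print(amap.check("coding-agent", "http_post", "pastebin.example"))  # flagged twice
```

A production system would need far richer context (argument inspection, sequence modeling, per-role policy), but the basic shape—observe normal tool-call patterns, then flag deviations at runtime—is the same.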
The platform deploys without requiring new infrastructure, gateways, or proxies. Instead, it leverages existing endpoint infrastructure to provide visibility without introducing latency or friction to agent workflows. This “agentless” approach allows deployment within days rather than months of integration work.
Evidence of Early Adoption
Manifold’s founding team brings proven credentials in AI security infrastructure. Co-founders Neal Swaelens and Oleksandr Yaremchuk previously built LLM Guard at Laiyer AI—the most widely adopted open-source large language model firewall in production today. After Laiyer AI’s acquisition by Protect AI, which was subsequently acquired by Palo Alto Networks, they identified the emerging gap between chat-focused AI security and agent-specific threats.
The $8 million seed round led by Costanoa Ventures includes participation from Cherry Ventures, Rain Capital, and Modern Technical Fund. Notable angel investors include former Uber CSO Joe Sullivan and former Google DeepMind CISO Vijay Bolina, providing validation from security leaders who understand both enterprise security requirements and the emerging AI agent landscape.
John Cowgill, Partner at Costanoa Ventures, framed the investment as category-defining: “There’s an open window to define the category for agentic security now, but it won’t be open long. Endpoint agent security is the next major layer of enterprise infrastructure.”
Infrastructure Layer Emergence
Manifold’s approach represents the emergence of agent-native security infrastructure rather than retrofitting existing tools built for different threat models. Traditional AI security focused on preventing harmful outputs from language models. Agent security must monitor autonomous systems that execute code, access databases, and modify production environments.
The timing aligns with broader enterprise recognition that AI agents require purpose-built infrastructure rather than extended versions of existing tools. As coding agents become standard developer tooling and agent capabilities expand into other knowledge work roles, the attack surface expands exponentially. Current security architectures simply weren’t designed for autonomous systems that operate at machine speed across enterprise environments.
The AIDR category that Manifold is defining addresses this gap by providing visibility and control specifically designed for autonomous agents rather than human users. This represents a fundamental shift from securing AI outputs to securing AI actions.
Looking Forward
The next 6-12 months will likely determine whether agent-native security becomes a distinct infrastructure category or gets absorbed into existing security platforms. Current enterprise security vendors are extending their tools to cover AI use cases, but these approaches maintain the inference-focused model rather than addressing the execution layer.
As Claude Cowork, OpenClaw, and other agent platforms expand beyond developers to general knowledge workers, the security challenge will intensify. Every agent deployment multiplies the potential attack surface, making runtime visibility and behavioral monitoring essential rather than optional.
The broader trend suggests that AI agent infrastructure will require purpose-built tools across multiple layers—orchestration, governance, observability, and now security—rather than extending existing enterprise software. Manifold’s focused approach to endpoint agent security may establish the template for how these specialized infrastructure layers emerge.
As enterprises deploy more autonomous AI agents, the need for specialized infrastructure becomes evident across every layer of the stack. From runtime security to agent orchestration, purpose-built tools are replacing retrofitted solutions. Overclock provides the orchestration layer for reliable agent execution, complementing security-focused platforms like Manifold to create the comprehensive infrastructure layer that enterprise AI deployment requires.