What Perplexity just launched
Perplexity, the AI search company valued at $20 billion, announced at its Ask 2026 developer conference that its multi-model AI agent Computer is now available to enterprise customers. The launch is significant not just for Perplexity, but for what it signals about the direction of enterprise AI as a whole.
Computer is an orchestration engine. When a user describes a goal (say, preparing a briefing document by pulling from the web, internal Slack conversations, emails, and Notion documents), Computer breaks that goal into subtasks, assigns each to a specialised sub-agent powered by the best-suited AI model, and delivers a finished work product. The system coordinates approximately 20 AI models from multiple providers: Anthropic's Claude Opus 4.6 for primary reasoning, Google's Gemini for deep research, OpenAI's GPT-5.2 for long-context recall, xAI's Grok for speed-sensitive tasks, and specialist models for image and video generation.
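The routing pattern described above can be illustrated with a small sketch. The model names and task categories come from the announcement; the routing table and the hard-coded decomposition are hypothetical simplifications, not Perplexity's actual implementation (a real orchestrator would use an LLM to plan the subtasks).

```python
# Hypothetical sketch of task-to-model routing. The mapping mirrors the
# specialisations named in the announcement; everything else is illustrative.

ROUTING_TABLE = {
    "reasoning": "claude-opus-4.6",   # primary reasoning
    "deep_research": "gemini",        # deep research
    "long_context": "gpt-5.2",        # long-context recall
    "fast_response": "grok",          # speed-sensitive tasks
}

def plan(goal: str) -> list[dict]:
    """Decompose a goal into typed subtasks (hard-coded for illustration)."""
    return [
        {"task": f"research background for: {goal}", "type": "deep_research"},
        {"task": f"summarise internal documents for: {goal}", "type": "long_context"},
        {"task": f"draft briefing for: {goal}", "type": "reasoning"},
    ]

def route(subtasks: list[dict]) -> list[dict]:
    """Assign each subtask to the model best suited to its type."""
    return [{**t, "model": ROUTING_TABLE[t["type"]]} for t in subtasks]

for a in route(plan("Q3 competitor briefing")):
    print(a["model"], "->", a["task"])
```

The point of the pattern is that the routing table, not the caller, decides which model handles which work, which is what makes the architecture model-agnostic.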
Enterprise-specific features include Slack integration (query @computer directly in channels), business-grade connectors for Snowflake, Salesforce, SharePoint, HubSpot, and Datadog, and support for custom connectors via the Model Context Protocol (MCP). Each session runs inside an isolated Firecracker microVM, the same technology Amazon uses for Lambda, ensuring one user's session cannot access another's data. Perplexity says more than 100 enterprise customers requested access after a single viral weekend of consumer demos.
Why this matters for enterprise AI strategy
The most telling detail in Perplexity's announcement is a usage statistic buried in the technical briefing. In January 2025, 90% of the company's enterprise queries were routed to just two AI models. By December 2025, no single model commanded more than 25% of usage. In twelve months, the enterprise AI landscape went from near-monopoly to genuine plurality.
This is not a temporary trend. The logic of model-agnostic architecture is compelling: different models genuinely excel at different tasks. Claude reasons well. Gemini searches deeply. Grok responds fast. GPT-5.2 handles long contexts. Forcing every enterprise task through a single model, regardless of whether it is the best fit, is increasingly a strategic liability. Organisations that have bet entirely on one vendor's ecosystem, Microsoft Copilot, Salesforce Einstein, or even OpenAI's enterprise tier, now face the risk of being locked out of the best tool for each job.
There is also a data governance dimension that deserves attention. When an AI agent like Computer reaches into your Slack, your email, your CRM, and your Snowflake warehouse simultaneously, the questions of where your data goes, which model processes it, and under which jurisdiction become non-trivial. Perplexity's microVM isolation is a sensible technical safeguard, but it does not answer questions about data residency or GDPR compliance for European organisations. Those questions land squarely on the integration layer, not the AI model itself.
What this means from Laava's perspective
Perplexity Computer can orchestrate 20 models. But it can only work with data that is exposed to it cleanly. That is the crux. Out-of-the-box connectors for Salesforce, SharePoint, and Slack cover the obvious enterprise surfaces. Your ERP, your proprietary document archive, your legacy CRM with the 15-year data history, your approval workflow system: those do not come with ready-made connectors. Someone has to build them.
The Model Context Protocol, which Perplexity uses for custom connectors and which is fast becoming the industry standard for AI-to-tool integration, is designed to make this tractable. An MCP server sits between your internal system and the AI agent, translating your data into a format any compliant AI agent can consume. Build the MCP server once, and any agent, whether it is Perplexity Computer today or a different orchestration layer next year, can use it. This is exactly the architecture we build at Laava: clean integration layers that separate your data from the AI model consuming it, so your organisation is not locked into any single vendor's roadmap.
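The pattern is easiest to see in code. The sketch below is a conceptual illustration of the adapter layer an MCP server provides, not the actual MCP SDK: a connector wraps an internal system (here a hypothetical legacy CRM with Dutch field names) and exposes it through a uniform tool interface that any compliant agent could call.

```python
# Conceptual sketch of the integration-layer pattern MCP formalises:
# a connector describes the operations an agent may invoke, executes
# them, and translates internal data into a clean, model-readable schema.
# Class and method names are illustrative, not the real MCP SDK.

class LegacyCrmConnector:
    """Adapter in front of a hypothetical legacy CRM."""

    def __init__(self, crm_records: dict[str, dict]):
        self._records = crm_records  # stand-in for the real backend

    def list_tools(self) -> list[dict]:
        """Advertise the operations an agent may invoke."""
        return [{
            "name": "get_customer",
            "description": "Fetch a customer record by ID",
            "parameters": {"customer_id": "string"},
        }]

    def call_tool(self, name: str, arguments: dict) -> dict:
        """Execute a tool call and return a model-readable result."""
        if name == "get_customer":
            record = self._records.get(arguments["customer_id"])
            if record is None:
                return {"error": "not found"}
            # Translate internal field names into a clean schema.
            return {"name": record["NAAM"], "since": record["KLANT_SINDS"]}
        raise ValueError(f"unknown tool: {name}")

connector = LegacyCrmConnector({"C-001": {"NAAM": "Acme BV", "KLANT_SINDS": "2011"}})
print(connector.call_tool("get_customer", {"customer_id": "C-001"}))
```

Because the agent only ever sees the clean schema, the connector can be pointed at a different orchestration layer next year without touching the backend: that is the vendor-independence the architecture buys you.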
The data sovereignty angle is equally important for Dutch and European organisations. Regulations around GDPR and the EU AI Act require you to know where your data goes and which models process it. A cloud AI agent that routes your invoice data, your HR records, or your customer emails through 20 different models across multiple providers creates a compliance surface that most legal and IT teams are not yet prepared for. That is not an argument against using these tools. It is an argument for building a proper integration layer first, one that gives you control over what data is exposed, to which model, and under which conditions.
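What "control over what data is exposed, to which model" means in practice can be sketched as a policy check that runs before anything leaves the integration layer. The model names, field categories, and rules below are entirely hypothetical; the point is only the shape of the mechanism.

```python
# Hypothetical exposure policy: before a payload is handed to an external
# model, strip every field whose category that model may not receive.
# Model identifiers and categories are illustrative.

EXPOSURE_POLICY = {
    "eu-hosted-model": {"public", "internal", "personal"},
    "us-hosted-model": {"public", "internal"},  # no personal data leaves the EU
}

def filter_payload(payload: dict[str, tuple[str, str]], model: str) -> dict[str, str]:
    """Keep only fields whose category the target model is allowed to see.

    `payload` maps field name -> (category, value).
    """
    allowed = EXPOSURE_POLICY[model]
    return {k: v for k, (cat, v) in payload.items() if cat in allowed}

invoice = {
    "amount": ("internal", "1200.00"),
    "customer_email": ("personal", "jan@example.com"),
}
print(filter_payload(invoice, "us-hosted-model"))  # personal field stripped
print(filter_payload(invoice, "eu-hosted-model"))  # full payload
```

A policy table like this also gives legal and IT teams something auditable: the rules about which model sees which data live in one place, instead of being implicit in 20 different model integrations.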
What you can do now
The first step is not to evaluate Perplexity Computer or any other AI orchestration platform. The first step is to map your data. Which internal systems contain the information your teams actually need to do their jobs? Which of those systems have clean APIs? Which are locked behind legacy interfaces or manual exports? That inventory determines what is possible and what has to be built before any AI agent can be useful to you.
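The inventory does not need to be sophisticated to be useful. A minimal sketch, with entirely made-up systems, might capture just two facts per system and derive the build list from them:

```python
# A simple, hypothetical inventory: for each system, record whether it has
# a clean API and whether a ready-made connector exists, then list what
# must be built before any AI agent can reach it.

systems = [
    {"name": "Salesforce",      "clean_api": True,  "connector": True},
    {"name": "SharePoint",      "clean_api": True,  "connector": True},
    {"name": "Legacy ERP",      "clean_api": False, "connector": False},
    {"name": "Approval system", "clean_api": True,  "connector": False},
]

def build_first(inventory: list[dict]) -> list[str]:
    """Systems that need integration work before any agent can use them."""
    return [s["name"] for s in inventory if not s["connector"]]

print(build_first(systems))
```

Even at this level of detail, the output is the roadmap: everything in the build list is a prerequisite, regardless of which orchestration platform you eventually choose.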
The organisations that will get the most out of the current wave of enterprise AI agents are not the ones that rush to adopt every new orchestration platform. They are the ones that invest now in clean, model-agnostic integration layers. If you want to understand what that would look like for your specific systems and workflows, we are happy to walk through it with you.