In the rapidly evolving landscape of AI integration, the Model Context Protocol (MCP) has emerged as a promising standard for enabling AI models to interact with external tools and resources. However, as organizations implement MCP at scale, a few critical components are conspicuously missing:
An enterprise-grade orchestration layer
A mechanism for connecting to non-MCP “legacy” systems
A way to govern data and access via MCP
A production-ready runtime engine for these components
Without these pieces, the proliferation of MCP servers quickly becomes a maintenance and security nightmare that undermines the very benefits MCP promises to deliver.
The Current MCP Landscape
MCP provides a standardized way for AI models to access tools, resources, and prompts: a common interface that simplifies LLM-driven integration. In its current state, it is effective for point-to-point integration. But point-to-point integration has always caused major problems at scale, and a proliferation of MCP servers, each directly wired to multiple LLMs, quickly becomes an AI integration maintenance burden.
The Twin Challenges of Scale
As teams adopt MCP, they typically create dedicated servers for each system integration – GitHub, Salesforce, Google Sheets, databases, and custom applications. What starts as an elegant solution quickly becomes a sprawling network of disconnected servers, each requiring individual maintenance, security oversight, and operational support.
This proliferation creates several critical issues:
Orchestration gaps: No central intelligence to coordinate across multiple MCP servers
Runtime execution limitations: MCP defines communication patterns but not how to reliably execute the resulting plans
Limited data knowledge: Companies can't commit to a single LLM or a fixed set of data sources. They need to organize and govern data centrally so that the best AI-ready data and the best-suited LLM can be chosen for each task.
Governance challenges: Enterprise requirements for security, compliance, and reliability are unaddressed
Operational complexity: Each new integration adds configuration and maintenance burden
The fundamental issue is that while MCP defines an excellent protocol for model-to-server communication, it lacks both an orchestration layer to manage data and coordinate multiple servers, and a robust runtime to execute complex workflows at scale.
The Complete Solution: Orchestration + Runtime
What’s needed is a dual-layer solution that combines:
AI-First Orchestrator: An intelligent coordination layer that discovers and composes tools across multiple MCP servers, creates execution plans based on user intent, and manages the overall workflow.
Integration: Not all sources will support MCP. An intermediate layer is needed to translate between MCP and other protocols seamlessly.
Data Collaboration and Governance: Even once security issues are resolved, AI cannot be given direct access to data without proper governance controls. Teams also need a way to collaborate on data and to make it easier to access and reuse.
Enterprise-Grade Runtime Engine: A reliable, secure execution environment that implements the plans created by the orchestrator, handles data processing at scale, and ensures governance requirements are met.
Our architectural foundation of Nexsets, which provides processing scale and governance, now layered with an MCP server, offers a comprehensive approach:
Intelligent Orchestration
Dynamic discovery of MCP servers and their capabilities
LLM-driven agents (built with AG2) that map user requests to specific tools
Adaptive planning that sequences operations across multiple domains
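As a rough sketch of the discovery step, an orchestrator could call each registered server's tools/list endpoint and merge the results into one namespaced registry. The server names and tool definitions below are illustrative stand-ins; a real client would fetch each catalog over MCP's JSON-RPC transport.

```python
# Sketch of dynamic tool discovery across MCP servers.
# Server names and tool definitions are illustrative; in practice each
# catalog would come from a JSON-RPC "tools/list" call to a live server.

def discover_tools(server_catalogs):
    """Merge per-server tool lists into one registry, namespacing tool
    names by server so identically named tools don't collide."""
    registry = {}
    for server, tools in server_catalogs.items():
        for tool in tools:
            registry[f"{server}.{tool['name']}"] = tool
    return registry

# Simulated tools/list responses (MCP tools carry a name, a description,
# and a JSON Schema describing their inputs).
catalogs = {
    "salesforce": [
        {"name": "query_contacts",
         "description": "Query Salesforce contacts",
         "inputSchema": {"type": "object",
                         "properties": {"status": {"type": "string"}}}},
    ],
    "sheets": [
        {"name": "get_sheet_values",
         "description": "Read a range from a Google Sheet",
         "inputSchema": {"type": "object",
                         "properties": {"range": {"type": "string"}}}},
    ],
}

registry = discover_tools(catalogs)
print(sorted(registry))  # ['salesforce.query_contacts', 'sheets.get_sheet_values']
```

Namespacing by server matters at scale: two teams can each publish a `query_contacts` tool without the orchestrator confusing them.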
AI-Powered Integration
The ability to connect to any source, including any REST or SOAP API, and automatically convert it into AI-ready data products accessible via MCP.
A built-in private Marketplace for each domain, used to publish, reuse, and govern data as data products that are discoverable by both users and agents.
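One way to picture the translation layer is a small adapter that describes a REST endpoint as an MCP-style tool definition plus a callable. Everything here is a hypothetical sketch: the endpoint URL, parameter names, and the pluggable transport are placeholders, not a real API.

```python
# Hypothetical adapter exposing a REST endpoint as an MCP-style tool.
# The URL, parameters, and transport function are illustrative only.

def rest_endpoint_as_tool(name, description, method, url, params, transport):
    """Return (tool_definition, callable) for a REST endpoint, so a
    legacy API can be published and invoked like any other MCP tool."""
    definition = {
        "name": name,
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": {p: {"type": "string"} for p in params},
            "required": list(params),
        },
    }

    def invoke(arguments):
        # Delegate the actual HTTP call to a pluggable transport so the
        # adapter stays protocol-agnostic (and easy to test offline).
        return transport(method, url, arguments)

    return definition, invoke

# Stub transport standing in for a real HTTP client.
def fake_transport(method, url, arguments):
    return {"method": method, "url": url, "args": arguments}

tool, call = rest_endpoint_as_tool(
    "list_invoices", "List invoices from a legacy billing API",
    "GET", "https://billing.example.com/invoices", ["since"], fake_transport)

print(tool["name"])                             # list_invoices
print(call({"since": "2024-01-01"})["method"])  # GET
```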
Production Runtime
Scalable data processing as well as API processing infrastructure for reliable execution
Enterprise security and governance controls
Monitoring, logging, and auditing capabilities
Error handling and recovery mechanisms
How It Works: A Practical Example
Consider a seemingly simple request: “Get all active contacts from Salesforce, match them with rows in a Google Sheet by email, filter to those created in the last 30 days, and upload the result to BigQuery.”
With our complete orchestration + runtime approach:
The orchestrator receives the natural language request
It queries registered MCP servers and Data Products for available tools and resources
LLM-driven agents map the request to specific tools:
query_contacts from the Salesforce MCP
get_sheet_values from the Sheets MCP
Data transformation operations
load_data to BigQuery
The orchestrator creates an execution plan based on this mapping
Nexla’s runtime engine takes over, executing the plan with enterprise-grade reliability
The runtime handles data scaling, security enforcement, and error recovery
Results are returned to the user with execution metrics and provenance information
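The plan-then-execute handoff described above can be sketched in a few lines: the orchestrator emits an ordered plan, and the runtime walks it, passing each step's output forward and recording per-step status for provenance. The stub tool implementations and error handling below are illustrative, not Nexla's actual runtime.

```python
# Illustrative plan executor: runs steps in order, threads each step's
# output into the next, and records a trace for provenance/metrics.
# The lambdas stand in for real MCP tool calls.

def run_plan(plan, tools):
    context, trace = None, []
    for step in plan:
        try:
            context = tools[step["tool"]](step.get("args", {}), context)
            trace.append({"tool": step["tool"], "status": "ok"})
        except Exception as exc:
            trace.append({"tool": step["tool"], "status": f"error: {exc}"})
            break  # a real runtime would retry or compensate here
    return context, trace

tools = {
    "salesforce.query_contacts":
        lambda args, _: [{"email": "a@x.com"}, {"email": "b@y.com"}],
    "sheets.get_sheet_values":
        # Stub "join by email": keep contacts whose email appears in the sheet.
        lambda args, contacts: [c for c in contacts
                                if c["email"] in {"a@x.com"}],
    "bigquery.load_data":
        lambda args, rows: {"loaded": len(rows)},
}

plan = [
    {"tool": "salesforce.query_contacts", "args": {"status": "Active"}},
    {"tool": "sheets.get_sheet_values", "args": {"match_on": "email"}},
    {"tool": "bigquery.load_data", "args": {"table": "contacts_recent"}},
]

result, trace = run_plan(plan, tools)
print(result)  # {'loaded': 1}
```

Separating the plan (data) from the executor (runtime) is what lets governance, retries, and auditing live in one place instead of being reimplemented per integration.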
Benefits of the Complete Approach
This architecture delivers several key advantages:
Enterprise readiness: Production-grade reliability, security, and governance
Scalability: Handles large data volumes and complex workflows that MCP alone cannot
Centralized control: Single pane of glass for monitoring, managing, and governing the MCP ecosystem
Future-proof architecture: Flexibility to incorporate legacy sources, MCP servers, and other standards as they emerge
The Path Forward
As MCP adoption continues to accelerate, the need for both orchestration and runtime execution at an enterprise scale becomes increasingly critical. The approach outlined here provides a pathway to unlock the full potential of MCP without the maintenance nightmare of unchecked server proliferation or the risks of unreliable execution.
By combining an intelligent orchestration layer with a robust runtime engine, organizations can scale their AI integration capabilities while maintaining operational control. The result is a system that leverages the standardization benefits of MCP while addressing the real-world requirements of enterprise deployment.
At Nexla, we’re building both pieces of this puzzle—the orchestration layer that intelligently plans across multiple MCP servers and the runtime engine that executes those plans with enterprise-grade reliability. Together, these components provide the missing links that enterprises need to move MCP from promising experiments to production reality.