The Missing Links in MCP: Orchestration and Runtime Execution at Enterprise Scale

In the rapidly evolving landscape of AI integration, the Model Context Protocol (MCP) has emerged as a promising standard for enabling AI models to interact with external tools and resources. However, as organizations implement MCP at scale, a few critical components are conspicuously missing:

  • An enterprise-grade orchestration layer
  • A mechanism for connecting to non-MCP “legacy” systems
  • A way to govern data and access via MCP
  • A production-ready runtime engine for these components

Without these pieces, the proliferation of MCP servers quickly becomes a maintenance and security nightmare that undermines the very benefits MCP promises to deliver.


The Current MCP Landscape

MCP provides a standardized way for AI models to access tools, resources, and prompts: a common interface that simplifies LLM-driven integration. In its current state, it is effective for point-to-point integration. But point-to-point integration has always caused major problems at scale, and the proliferation of MCP servers, each directly integrated with multiple LLMs, would quickly become an AI integration maintenance nightmare.


The Twin Challenges of Scale

As teams adopt MCP, they typically create dedicated servers for each system integration – GitHub, Salesforce, Google Sheets, databases, and custom applications. What starts as an elegant solution quickly becomes a sprawling network of disconnected servers, each requiring individual maintenance, security oversight, and operational support.

This proliferation creates several critical issues:

  1. Orchestration gaps: No central intelligence to coordinate across multiple MCP servers
  2. Runtime execution limitations: MCP defines communication patterns but not how to reliably execute the resulting plans
  3. Limited data knowledge: Companies cannot limit themselves to a single LLM or a fixed set of data sources. They need to organize and govern data centrally so that the best AI-ready data and the best LLM are chosen each time.
  4. Governance challenges: Enterprise requirements for security, compliance, and reliability are unaddressed
  5. Operational complexity: Each new integration adds configuration and maintenance burden

The fundamental issue is that while MCP defines an excellent protocol for model-to-server communication, it lacks both the orchestration layer to manage data and multiple servers and the robust runtime to execute complex workflows at scale.


The Complete Solution: Orchestration + Runtime

What’s needed is a dual-layer solution that combines:

  1. AI-First Orchestrator: An intelligent coordination layer that discovers and composes tools across multiple MCP servers, creates execution plans based on user intent, and manages the overall workflow.
  2. Integration: Not all sources will support MCP. An intermediate layer is needed to translate between MCP and other protocols seamlessly.
  3. Data Collaboration and Governance: Even once security issues are resolved, AI should not access data directly without proper governance controls. Teams also need a way to collaborate on data and to make it easier to access and reuse.
  4. Enterprise-Grade Runtime Engine: A reliable, secure execution environment that implements the plans created by the orchestrator, handles data processing at scale, and ensures governance requirements are met.

Our architectural foundation of Nexsets, which provides processing scale and governance, layered with an MCP server, offers a comprehensive approach:

Intelligent Orchestration

  • Dynamic discovery of MCP servers and their capabilities
  • LLM-driven agents (built with AG2) that map user requests to specific tools
  • Adaptive planning that sequences operations across multiple domains
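Dynamic discovery can be pictured as a catalog that merges the tool lists of every registered server, so planning agents resolve a tool name to its owner without hard-coded routing. This is a hypothetical sketch; `list_tools` stands in for an MCP `tools/list` call:

```python
class ToolCatalog:
    """Aggregates tool names discovered from multiple MCP servers."""

    def __init__(self):
        self._tools: dict[str, str] = {}  # tool name -> owning server

    def register_server(self, server_name, list_tools):
        # list_tools stands in for a tools/list request against that server
        for tool in list_tools():
            self._tools[tool] = server_name

    def resolve(self, tool_name):
        """Return the server that exposes a tool, or None if unknown."""
        return self._tools.get(tool_name)

    def all_tools(self):
        return sorted(self._tools)
```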

AI-Powered Integration

  • The ability to connect to any source, including any REST or SOAP API, and automatically convert it into AI-ready data products accessible via MCP.
  • A built-in private Marketplace for each domain, used to publish, reuse, and govern data as data products that both users and agents can discover.
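One way to bridge a non-MCP source is a thin adapter that wraps a legacy REST endpoint as a callable tool the MCP layer could expose. The function below is an illustrative sketch, not Nexla's implementation; the `transport` parameter is injectable so the adapter can be exercised without a live network:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def make_rest_tool(base_url, path, transport=None):
    """Wrap a legacy REST GET endpoint as a tool-style callable.

    transport(url) -> response body; the default performs a real HTTP GET.
    """
    def default_transport(url):
        with urlopen(url) as resp:  # real network call
            return resp.read().decode()

    fetch = transport or default_transport

    def tool(**params):
        url = base_url + path
        if params:
            url += "?" + urlencode(params)  # query params from tool arguments
        return json.loads(fetch(url))

    tool.tool_name = path.strip("/").replace("/", "_")
    return tool
```

A production adapter would also emit a parameter schema so agents can discover how to call the tool, but the shape is the same: protocol translation at the edge, MCP semantics everywhere else.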

Production Runtime

  • Scalable data processing as well as API processing infrastructure for reliable execution
  • Enterprise security and governance controls
  • Monitoring, logging, and auditing capabilities
  • Error handling and recovery mechanisms
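The error-recovery piece is often the simplest to picture: the runtime retries transient step failures with exponential backoff before surfacing the error to higher-level recovery logic. A minimal sketch, with hypothetical names, assuming any exception is potentially transient:

```python
import time

def run_with_retries(step, args, max_attempts=3, base_delay=0.01):
    """Run one plan step, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step(**args)
        except Exception:
            if attempt == max_attempts:
                raise  # retry budget exhausted: surface for recovery handling
            time.sleep(base_delay * 2 ** (attempt - 1))
```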

How It Works: A Practical Example

Consider a seemingly simple request: “Get all active contacts from Salesforce, match them with rows in a Google Sheet by email, filter to those created in the last 30 days, and upload the result to BigQuery.”

With our complete orchestration + runtime approach:

  1. The orchestrator receives the natural language request
  2. It queries registered MCP servers and Data Products for available tools and resources
  3. LLM-driven agents map the request to specific tools:
    • query_contacts from the Salesforce MCP
    • get_sheet_values from the Sheets MCP
    • Data transformation operations
    • load_data to BigQuery
  4. The orchestrator creates an execution plan based on this mapping
  5. Nexla’s runtime engine takes over, executing the plan with enterprise-grade reliability
  6. The runtime handles data scaling, security enforcement, and error recovery
  7. Results are returned to the user with execution metrics and provenance information
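The data flow the runtime executes in steps 5 and 6 can be sketched in plain Python. This is an illustration of the transformation logic only — real Salesforce, Sheets, and BigQuery calls are replaced by in-memory lists, and the final load step simply returns the rows that would be uploaded:

```python
from datetime import datetime, timedelta, timezone

def run_pipeline(contacts, sheet_rows, now=None):
    """Join active contacts to sheet rows by email; keep the last 30 days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=30)
    by_email = {row["email"]: row for row in sheet_rows}  # match key: email
    to_upload = []
    for contact in contacts:
        if contact["status"] != "active":
            continue  # filter: active contacts only
        match = by_email.get(contact["email"])
        if match is None or contact["created_at"] < cutoff:
            continue  # drop unmatched or stale records
        to_upload.append({**contact, **match})  # merged row bound for BigQuery
    return to_upload
```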

Benefits of the Complete Approach

This architecture delivers several key advantages:

  1. Enterprise readiness: Production-grade reliability, security, and governance
  2. Scalability: Handles large data volumes and complex workflows that MCP alone cannot
  3. Centralized control: Single pane of glass for monitoring, managing, and governing the MCP ecosystem
  4. Future-proof architecture: Flexibility to incorporate legacy sources, MCP servers, and other standards as they emerge

The Path Forward

As MCP adoption continues to accelerate, the need for both orchestration and runtime execution at an enterprise scale becomes increasingly critical. The approach outlined here provides a pathway to unlock the full potential of MCP without the maintenance nightmare of unchecked server proliferation or the risks of unreliable execution.

By combining an intelligent orchestration layer with a robust runtime engine, organizations can scale their AI integration capabilities while maintaining operational control. The result is a system that leverages the standardization benefits of MCP while addressing the real-world requirements of enterprise deployment.

At Nexla, we’re building both pieces of this puzzle—the orchestration layer that intelligently plans across multiple MCP servers and the runtime engine that executes those plans with enterprise-grade reliability. Together, these components provide the missing links that enterprises need to move MCP from promising experiments to production reality.

Contact us to learn more.
