The recent explosion of interest in context graphs has sparked a fascinating debate about who is best positioned to capture decision traces. Ashu Garg and Jaya Gupta call them AI’s trillion-dollar opportunity and argue that orchestration tools are best placed in the stack to capture them. Arvind Jain positions Glean as uniquely capable of capturing context graphs with its Agent and Assistant tools, while cautioning that “you can’t capture the why, only the how.” Prukalpa highlights integrators’ cross-system visibility as critical for capturing context.
But the debate is missing a more fundamental question: what do captured decision traces actually enable?
Context graphs will make AI dramatically better at reasoning. But they won’t teach it judgment. And in business, asymmetric value comes from judgment, not reasoning. We’ve seen this movie before with self-driving. Despite comprehensive decision traces and massive amounts of data, it has been incredibly hard to bridge the gap from reasoning to judgment.
Let’s not make the mistake of assuming that more context will let reasoning solve judgment problems.
A context graph is more than a log of data. It’s a record of the reasoning behind decisions. When a renewal agent proposes a 20% discount despite a 10% policy cap, the context graph captures not just the discount, but the approval chain, the incident history from PagerDuty, the escalation threads from Zendesk, and the precedent from prior exceptions.
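To make this concrete, a decision trace in a context graph might be modeled as a structured record rather than a flat log entry. The sketch below uses hypothetical field names, not any vendor’s schema:

```python
from dataclasses import dataclass

@dataclass
class DecisionTrace:
    """One decision plus the reasoning context captured around it."""
    decision: str                       # what was decided
    policy_exceptions: list[str]        # rules the decision overrode
    approval_chain: list[str]           # who signed off, in order
    supporting_context: dict[str, str]  # linked evidence, keyed by source system
    precedents: list[str]               # ids of prior similar exceptions

# The renewal-discount example from above, as a trace:
trace = DecisionTrace(
    decision="approve 20% renewal discount",
    policy_exceptions=["discount cap: 10%"],
    approval_chain=["account_exec", "sales_vp", "cfo"],
    supporting_context={
        "pagerduty": "3 Sev-1 incidents last quarter",
        "zendesk": "escalation thread from the account",
    },
    precedents=["prior-renewal-exception-01"],
)
print(trace.approval_chain[-1])  # -> cfo (the final approver)
```

Note what the record holds: the how (chain, evidence, precedent), exactly the layer Jain says is capturable; the why still lives outside the schema.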
The thesis: if we capture these decision traces comprehensively, AI agents will learn to navigate enterprise decision-making the same way they learned to write code and generate content.
There’s just one problem.
The context graphs movement rests on a powerful parallel.

What happened with LLMs and knowledge: trained on massive corpora of text and code, models learned to write prose and working software without anyone hand-encoding the rules.

The implicit assumption about context graphs: trained on comprehensive decision traces, agents will learn to navigate enterprise decision-making the same way.
This assumes business decisions are primarily reasoning problems. They’re not. They’re judgment problems.
Reasoning problems are deterministic given sufficient context. Calculate optimal inventory levels, route support tickets, generate sales forecasts, and approve expenses within policy. Given complete information, there’s a “correct” answer. Context graphs will genuinely help here.
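The reasoning side really is mechanical. A policy-compliant expense check, for instance, is fully deterministic once the inputs are known (the thresholds below are illustrative, not from the original):

```python
def approve_expense(amount: float, category: str, policy_caps: dict[str, float]) -> bool:
    """Deterministic reasoning: the same inputs always yield the same answer."""
    cap = policy_caps.get(category, 0.0)  # unknown categories get no allowance
    return amount <= cap

caps = {"travel": 2000.0, "meals": 75.0}
print(approve_expense(1800.0, "travel", caps))  # True: within the travel cap
print(approve_expense(120.0, "meals", caps))    # False: exceeds the meals cap
```

Given complete context, there is nothing left to weigh; this is exactly the class of problem context graphs will genuinely help with.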
Judgment problems involve weighing incommensurable values under uncertainty. The same inputs can demand opposite decisions depending on latent intent, unobservable state, and relationship dynamics that exist outside any system.
You can’t feed reasoning traces to solve judgment problems. It’s a category error.

Autonomous driving is the perfect example.

Despite comprehensive decision traces and massive amounts of driving data, full autonomy has proven hard because driving is as much a judgment problem as a reasoning problem. Business decisions face the same challenges at greater complexity:
| Challenge | Driving | Business |
| --- | --- | --- |
| Interpreting Intent | Pedestrian at crosswalk: waiting for light or about to jaywalk? The same scene requires different responses. | Customer requests discount: price shopping or genuinely constrained? The same email requires different responses. |
| Reading Unspoken Cues | Blinker on: actually merging or forgot it three turns ago? Watch the wheels, not the signal. | “I strongly suggest we make an exception”: suggestion, directive, or political cover? Message ≠ meaning. |
| Applying Unwritten Rules | Zipper merges, courtesy waves, and when to yield all vary by region and situation. None of it appears in a rulebook. | Which issues need escalation? Which exceptions need paper trails? Official process and actual practice diverge, and the gap depends on the situation. |
| Asymmetric Risk | Hesitating costs seconds; misjudging costs lives. Context determines acceptable risk. | Losing a $10K deal costs $10K; damaging a relationship could cost millions. Automation can’t calculate this. |
These examples demonstrate that the same data point demands different actions depending on context outside any system.
This is why Arvind Jain is right: “You can’t capture the why, only the how.” Even comprehensive process traces miss the judgment layer.
Could richer capture close the gap? Maybe, but judgment depends on counterfactuals that context graphs cannot capture. When a leader approves a “strategic” exception, the graph records the approval, not the similar decisions she declined, the precedents she avoided setting, or the organizational capital she chose not to spend. Judgment is defined by the roads not taken; context graphs only see the roads traveled. That is why learning judgment from decision traces is fundamentally different from learning reasoning from data.
Context graphs will make AI agents dramatically more capable at reasoning tasks. But the decisions that create asymmetric value (building companies, developing talent, navigating crises) will remain human for the foreseeable future.

Why? Because we’re feeding reasoning traces to solve judgment problems.
The limitation isn’t capturing context. It’s the category error: better reasoning enables decision support, not decision replacement.
The opportunity isn’t replacing human judgment. It’s augmenting it.
As CEO of Nexla and creator of Express.dev, I’m deeply invested in building the context engineering layer.
In addition to improving reasoning, context graphs also unlock governance through precedent, error detection before execution, and learning loops built from human overrides.
The pattern: context graphs don’t replace judgment. They make reasoning scale, coordinate, and validate itself. That’s the real opportunity.
Here’s what we can do in practice:

- Build comprehensive context capture.
- Design override mechanisms from day one.
- Focus on “decision-ready context,” not “decision-making.”
- Create asymmetric escalation patterns: automate low-stakes decisions, require human approval for high-stakes ones.
- Build learning loops that treat human overrides as training data showing where judgment differs from reasoning.
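Asymmetric escalation, automating low-stakes decisions while deferring high-stakes ones to a human, can be sketched as a simple router that also keeps overrides as training signal. Names and the dollar threshold are illustrative:

```python
overrides: list[dict] = []  # human corrections, kept as training signal

def route_decision(action: str, stakes_usd: float, threshold_usd: float = 10_000) -> str:
    """Asymmetric escalation: automate low stakes, defer high stakes to a human."""
    if stakes_usd < threshold_usd:
        return "auto_approve"
    return "escalate_to_human"

def record_override(action: str, machine_choice: str, human_choice: str) -> None:
    """A disagreement marks exactly where judgment diverges from reasoning."""
    if machine_choice != human_choice:
        overrides.append({"action": action, "machine": machine_choice, "human": human_choice})

print(route_decision("renew small account", 2_500))   # auto_approve
print(route_decision("strategic discount", 250_000))  # escalate_to_human
record_override("strategic discount", "auto_approve", "decline")
print(len(overrides))  # 1 logged disagreement
```

The design point is the asymmetry: the cost of a wrong low-stakes call is bounded, so it can be automated; the cost of a wrong high-stakes call is not, so the system routes around its own judgment gap.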
The companies that win will not replace human judgment. They will design systems that know exactly when to defer to it.
Context graphs are records of reasoning behind decisions—capturing approval chains, incident history, escalations, and precedents, not just data logs. They enable AI agents to learn patterns in how humans navigate enterprise decision-making, similar to how LLMs learned to write code from massive training data.
Reasoning problems are deterministic given sufficient context—there’s a “correct” answer. Examples: calculate inventory, route tickets, and approve policy-compliant expenses. Judgment problems involve weighing conflicting values under uncertainty, where the same inputs require opposite decisions based on latent intent, unobservable state, and relationship dynamics.
Judgment depends on counterfactuals that context graphs cannot capture. When a leader approves a strategic exception, the graph records the approval but misses similar decisions declined, precedents avoided, and organizational capital not spent. Judgment is defined by roads not taken; context graphs only see roads traveled.
Build comprehensive context capture, design override mechanisms from day one, focus on “decision-ready context” not “decision-making,” create asymmetric escalation patterns (automate low-stakes, human-approve high-stakes), and build learning loops that treat human overrides as valuable training data showing where judgment differs from reasoning.
Context engineering surfaces governance through precedent (how similar cases were handled), enables error detection before execution by validating workflows against historical patterns, and creates learning loops from human overrides. It doesn’t replace judgment—it makes reasoning scale, coordinate, and validate itself while knowing when to defer to humans.
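Error detection against historical patterns can be as simple as checking a proposed action against the envelope of prior traces before execution. This is a minimal sketch under that assumption; a real system would use far richer features than a single number:

```python
def within_precedent(proposed_discount: float,
                     historical_discounts: list[float],
                     slack: float = 0.05) -> bool:
    """Flag proposed actions that fall outside the envelope of prior decisions."""
    if not historical_discounts:
        return False  # no precedent at all: always surface for human review
    return proposed_discount <= max(historical_discounts) + slack

history = [0.05, 0.10, 0.12, 0.15]        # discounts seen in past traces
print(within_precedent(0.14, history))    # True: inside precedent, safe to proceed
print(within_precedent(0.30, history))    # False: outside precedent, escalate
```

Validation like this happens before execution, so the system defers to a human precisely where the context graph has nothing comparable to offer.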