Context Is the Differentiator, Not the Model
Francois Lopitaux, SVP of Product Management at ThoughtSpot, explains why context, not models, is the key to trusted, scalable AI and analytics.
Yorck F. Einhaus, former Global CDO at Liberty Mutual and Farmers Insurance, explains why enterprise AI fails without strong data foundations.
Michael Domanic, VP at UserTesting, explains how enterprises run AI teams of three to drive transformation and measurable ROI.
Rowan Trollope, CEO of Redis, explains how teams achieve 95% cache hit rates and cut LLM costs by 70% using agent memory, semantic layers, and production-grade AI infrastructure.
Fred Gertz of Collide Technologies explains how swarm intelligence solves NP-hard routing and scheduling problems in seconds, without training data or LLMs.
GrowthX founder Marcel Santilli explains the delegation test for AI and why poor context, not weak models, is the real reason AI initiatives fail to scale.
Ortecha’s Stephen Gatchell explains the data governance gap blocking AI production, why unstructured data breaks legacy models, and how data product frameworks enable scale.
BigPanda’s Alexander Page shares how his team designs AI agents that internalize corrections, evaluate tool use, and scale reliably in production.
Ashish Thusoo breaks down how CurieTech AI uses a benchmarks-first discipline and AI-driven build loops to achieve 70–80% productivity gains.
Databricks’ Robin Sutara reveals why generic AI training doesn’t stick and how persona-based enablement drives real adoption.