95% Prompt Cache Hit Rate: How Enterprise LLM Cost Reduction Works in Production
In episode seven of the DatAInnovators & Builders Podcast, Rowan Trollope, CEO of Redis, explains how teams achieve a 95% prompt cache hit rate and cut LLM costs by 70% using agent memory, semantic caching, and production-grade AI infrastructure.