Saket (00:03.567)
Hi everyone, thank you for listening to another episode of Data Innovators and Builders. This is your host Saket, and today I’m speaking with Francois Lopitaux, SVP of Product Management at ThoughtSpot. Francois, thank you for chatting with me today.
Francois Lopitaux (00:18.801)
Thank you, thank you for having me and for the invitation.
Saket (00:23.191)
Great to have you here. You have an amazing background in data and AI. I would love for you to share your background with the audience.
Francois Lopitaux (00:30.821)
Yeah, sure. So my background: I’m a technical person. I started as a full-stack developer, and I’m very proud of that, by the way. Every time I can use it with my team and say, hey, be careful, I was a developer before, they really like it. That’s how I started. I’ve loved computers since I was about five, playing with the computer my parents bought me. So this was really my destiny, to be a technical person working on software.
So that’s what I did. I started with a small startup in Paris for six years, going from software engineer to more of a business consultant traveling the world. Then this company got acquired by a bigger one, Salesforce, so I joined Salesforce in 2008. At the time it was about 3,000 people; now there are about 80,000. It’s massive.

At the time, it was actually a big company for me, because moving from 25 people to 3,000 was a big move. And with this acquisition, I got the opportunity to move to San Francisco, which I did 16, 17 years ago now. I also had another opportunity, which was becoming a product manager, so switching careers,

because my mission at Salesforce was to rebuild the acquired product on top of the Salesforce stack. It was part of Service Cloud, which at the time was the second cloud of Salesforce. I worked there for five, six amazing years. Then I wanted a bit of a change, and at the time Salesforce was building another cloud, the analytics cloud.

I had the opportunity to join that cloud as one of the first PMs, and our mandate was to build a new analytics platform. That’s really how I first got in touch with the data world, building this new BI application on top of Salesforce. I had to learn a lot. It was great. I did that for six years, and we also brought AI into the mix.
Francois Lopitaux (02:52.711)
So not only the BI and visualization aspect of it, but how, at the time, you could bring in AI. Automated discovery was the term back then: how can AI help you figure out which insights are interesting, and also bring in predictive models? That’s what I did for six years at Salesforce, and it was really great. Then, after 11 years at Salesforce, I decided to move on and joined a much smaller startup.

It was almost a pre-product-market-fit startup doing very strong, hardcore AutoML. I was there for about a year, and then I joined another company, which I worked at from Series A to Series C. It was still linked to Salesforce; it did data security for Salesforce. It was named Odaseva.
Francois Lopitaux (03:48.363)
And this was in continuity with the data world, because it was not about deriving insights from data anymore, but more about how to secure the data. Still very interesting, and I spent about five years there. Then one of my previous managers, Ketan Karkhanis, became the CEO of ThoughtSpot and asked me if I was interested in joining and going back into the BI world,

which I did about a year ago. So that’s, in a nutshell, where I come from.
Saket (04:24.282)
Awesome. And you have been pretty early to applying AI around data. ThoughtSpot itself has been one of the earliest companies to bring a search-like, natural language interface to data, well before AI and LLMs were everywhere. So maybe give us a bit of the lay of the land. People have been
Francois Lopitaux (04:38.651)
Mm-hmm.
Saket (04:46.986)
drowning in dashboards for a long time. How has that evolution been?
Francois Lopitaux (04:52.711)
I think if you look at it, the BI world started with Business Objects and tools like that, the first of these generations, I’d say. It was actually a French company, and I’m French, obviously, so I was really proud of it. It was one of the first big startups to join the NASDAQ. It was obviously very complex, but for the first time, it allowed people to get access to data at scale
Saket (04:53.419)
And yeah.
Francois Lopitaux (05:22.395)
and be able to take decisions. Then you had the second generation of these tools, which was more around Tableau and then Power BI. These were more about building pure data visualization tools. Whereas Business Objects was very IT-centric and required a big server and a big implementation, these tools were targeted at analysts: an analyst could start on day one, bring their own data, and

drive insights. And I think this was the state of the industry forever. ThoughtSpot came with a different paradigm. Very early on, they said: let’s not build tools for analysts, let’s really build tools for end users, for business users. And because of that, they had to innovate. They had to bring this search concept into the BI world, which was a first. Obviously, this was very interesting for me.

To make this vision happen, they had to build some strong layers to execute on it, and one of those layers is a semantic layer. Very early on, they had to create the semantic layer to provide a kind of abstraction on top of the data and on top of SQL. Because obviously, if you are targeting end users,

you cannot ask them to write SQL queries. You have to come up with some kind of abstraction that is easy for everybody to use. That’s how they came up with the concept of what they call search tokens. Search tokens are really powerful in that, using keywords, you can generate very complex analyses. This was the premise, the beginning of ThoughtSpot, which, as you can imagine, has changed a lot over the last years with the arrival of LLMs,

which let us continue this vision and make it even better, I would say.
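To make the search-token idea concrete, here is a hypothetical sketch, not ThoughtSpot's actual implementation: a handful of keyword tokens compiled into a full SQL query. The tiny token grammar, the table, and the column names below are all invented for illustration.

```python
# Hypothetical sketch: compile search tokens like "revenue by region last year"
# into SQL. The grammar, table, and date filter are invented stand-ins;
# a real search engine resolves tokens against a semantic layer.

def compile_tokens(tokens):
    """Turn a keyword search into a SQL string using a fixed token grammar."""
    measure, dimension, filt = None, None, ""
    it = iter(tokens)
    for tok in it:
        if tok in {"revenue", "sales", "quantity"}:
            measure = tok                      # a known measure token
        elif tok == "by":
            dimension = next(it)               # the token after "by" is the grouping column
        elif tok == "last" and next(it) == "year":
            filt = " WHERE order_date >= DATE '2024-01-01'"  # stand-in for "last year"
    return (f"SELECT {dimension}, SUM({measure}) AS {measure}"
            f" FROM sales{filt} GROUP BY {dimension}")

print(compile_tokens(["revenue", "by", "region", "last", "year"]))
```

The point of the sketch is the leverage: five keywords expand into a complete aggregation query, without the user ever seeing SQL.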
Saket (07:22.847)
Yeah, yeah. And that’s basically getting to the natural language interface. So you’re saying that the dashboarding tools were targeting analysts as their users, right? But now it’s the business user who is the target. As a product person, I think who the target user is matters a lot. So tell us, maybe: where do organizations misunderstand

what business users really care about and how they want to interact with data?
Francois Lopitaux (07:52.103)
Yeah, sure. I think that for a very long time, we kind of confused the market, to be honest with you, especially on the BI side. Because if you look at all the messaging and marketing around BI tools, it was all about: we are building a product for you, for the end users, for the business person, for the sales manager, for the person working in customer service.

And now, with my BI tool, you can access everything and answer your data questions yourself. I think that has been one of the biggest cons, actually, of the last years, because the reality is that as a business user, I don’t want to explore data. Honestly, I cannot find the answer myself, because the tools are still too technical. So then I need to hire a data team

that is going to become the bottleneck, because obviously the whole organization is going to jump on this data team and ask questions. They will ask a question, and the team will create a dashboard. Then there is a follow-up question, and they will create another dashboard. And if you think about it, the incentive of this whole industry was not really about answering business questions or taking decisions, which at the end of the day is what you want people to do: take decisions.

It was more: hey, build a creator, build a factory. So you end up creating a lot of dashboards, most of which don’t really answer any question. They are really hard to maintain, and business users are not happy, because they don’t get the answers they want when they want them, and they have to wait a long time for them. So for a very long time, we have been in a

real disconnect between the marketing messages and what people are actually expecting. And I think now we are really at a paradigm shift, where we can finally deliver value to these end users, at the level they are expecting.
Saket (10:01.852)
Okay. So would it be fair to say that we basically transitioned from a time when analytics products were creating dashboards, but users really want insights and the ability to drive decisions, and dashboards don’t automatically translate to that? That’s why I’ve heard of companies with thousands of dashboards. As you said, you get one, then you want another one, and another. Is it fair to say that’s where the shift has started to happen?
Francois Lopitaux (10:30.853)
Yes. I think business users don’t want to be empowered to analyze; they just want the answer. And I think this is a big change. People don’t wake up in the morning and say, hey, I need to create five dashboards today and it’s going to make my life super happy. No, they just want insights they can trust and can take decisions on. And delivering that vision with the existing BI tools,
Francois Lopitaux (11:00.955)
this has been really hard, because you need multiple people who are going to create your dashboards. And to be fair, data can be manipulated along the way. The people who create the dashboards can sometimes mislead with the insights they found, and present not the reality of the business, but the reality they want to tell about the business. So the more you intermediate people,

the more what is really true, and what the best decision should be, can be shaded or hidden.
Saket (11:36.325)
Okay, so this whole goal of self-service analytics: we can actually get there now, right? Because it’s not just about the query and the dashboard anymore.
Francois Lopitaux (11:43.408)
Yes.
Francois Lopitaux (11:46.831)
Yes, finally. I think this is what people have been asking for, for years, forever. And now it is finally possible. For the first time, the interfaces, for example the products that we are providing, can really be used by end users, and they really work the way you would want them to work.

So that’s why it’s extremely exciting in our industry right now.
Saket (12:13.147)
Yeah.
Saket (12:20.368)
Okay, so one question I have, because we talk a lot about data being AI-ready, right? When we are thinking about AI-ready data, how would you frame it? Coming from that end-user perspective, the business user and what they are trying to do: what do you think of as data being ready for AI?
Francois Lopitaux (12:27.559)
Mm-hmm.
Francois Lopitaux (12:40.199)
Yeah, I mean, obviously your insights will only be as good as the data available, which is a fact. So at some point you need to have the data available. But here again, we are moving to a very interesting inflection point, because when I was building tools for predictive analytics earlier in my career, the data had to be formatted in a very specific way.
Francois Lopitaux (13:07.387)
You had to flatten the data into one nice table. You needed to add one column, which was the outcome column. All the different values needed to be right. You had to avoid data leakage and things like that to get a good predictive model. And most of the time when we were selling our application, our customers were saying: I don’t track this data, or my data is not clean, or not
Saket (13:33.159)
Mm-hmm.
Francois Lopitaux (13:36.327)
available in the right way. Most of the work was actually preparing the data. Ninety percent of the work was preparing the data, and ten percent was creating the predictive model. And I think now, again, we are at an inflection point where the data doesn’t need to be 100% clean anymore. Data can be structured, but it can also be unstructured. Because now, for example, with ThoughtSpot, we can give you an answer

using structured data in your Snowflake environment, or Databricks, or AWS Redshift, or Google BigQuery, and we can mix that with information from Slack, from your Confluence, or your Google Drive. You don’t have to create this golden data repository with everything in the right format. You can just feed Spotter, which is our agent solution, all these different repositories, and then Spotter is going to deal with it and
Saket (14:30.918)
Mm-hmm.
Francois Lopitaux (14:35.975)
extract what is valuable from them automatically. So I think we are moving away from this barrier to entry of the data needing to be 100% ready. And I think it’s amazing, because the biggest thing slowing businesses down in adopting such new technology was always that the data was not in the right shape. Now that is no longer the requirement, which is really driving great things.
Saket (14:57.7)
Yeah.
Francois Lopitaux (15:04.613)
And I think the second aspect is that, back to my AI world, when you are building a machine learning model, you have to flatten everything into one beautiful table. Now, with semantic layers, you don’t need to flatten all your data anymore. You can just declare: what are the different tables that exist in your system, what are the relationships, what are the business terms on top of them. Then, again,
Francois Lopitaux (15:34.779)
your agent solution, for example Spotter, is going to be able to query one table, two tables, three tables all together, and you don’t have to care about that complexity. So I think the technology now is really helping people, and they should not be so scared anymore about the data being clean enough, because we are moving the barrier to entry.
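The semantic-model declaration Francois describes can be sketched as follows. This is a hypothetical illustration, not ThoughtSpot's actual format: the table names, keys, and the tiny planner API are invented, but the idea is the same. Declare tables and relationships once, and a query planner can join them on demand, with no pre-flattening.

```python
# Hypothetical semantic model: tables and relationships are declared once;
# a planner derives the JOIN clauses needed for any combination of tables.

SEMANTIC_MODEL = {
    "tables": {
        "accounts":      {"key": "account_id"},
        "opportunities": {"key": "opp_id"},
        "cases":         {"key": "case_id"},
    },
    "relationships": [
        # (many side, one side, join column)
        ("opportunities", "accounts", "account_id"),
        ("cases",         "accounts", "account_id"),
    ],
}

def plan_joins(tables_needed, model):
    """Return one JOIN clause per declared relationship that connects
    the requested tables."""
    joins = []
    for many, one, col in model["relationships"]:
        if many in tables_needed and one in tables_needed:
            joins.append(f"JOIN {one} ON {many}.{col} = {one}.{col}")
    return joins

# An agent asking about deals *and* support cases per account:
print(plan_joins({"accounts", "opportunities", "cases"}, SEMANTIC_MODEL))
```

The user asks a question spanning three tables; the joins fall out of the declared relationships rather than from a hand-built flattened dataset.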
Saket (16:00.89)
Yeah, I think this is quite fascinating, and I want to double-click on some of the aspects you touched on. Maybe from your perspective, coming from your expertise in data and insights: how would you define agentic AI when it comes to a business user or a data user?
Francois Lopitaux (16:20.679)
Yeah, I mean, I think that, and this is maybe one of the most complex items, people first need to be ready for it, right? There is big change management involved. There are still a lot of users who need to be grounded in a dashboard, which is fine, right? But they also need to be ready to change the paradigm a little bit,

because now it is really going to be different, and I think they need to embrace this change. So if you think about agentic AI now, I think it is amazing, because with agentic AI you don’t even have to go to a dashboard to get your answers anymore. Now we are able to come back to you directly and say: hey, be careful, you should be aware of these five deals slipping, or these customers not being happy,

or this supply chain item being broken, or this medical trial not going the right way. So you are finally getting to a proactive mode, where you can create agents that are going to monitor the business for you. We are moving from a system of dashboards, where I have to create the right dashboard, I have to create

the right insight, and I have to look at it all the time to be aware of things, to the next stage, where we obviously are now, where you have agents you can ask questions and really get your insights from. And then to the final stage that we are really working on now at ThoughtSpot, which is what we call the working agent: the agent that is working on your behalf to come back to you with insights you should be aware of.
So I think there is a journey inside agentic AI, at least for BI. Stage two is more like agent-assisted creation, where agents are going to help you create a semantic model, help you create your Liveboards and explanations, and where you are going to be able to have conversations, really like an assistant, to get business insights. And finally, the next stage, which is really the one we are pursuing now,
Francois Lopitaux (18:42.033)
which is the most exciting, is where the agent is actually going to come back to you, all the time, in a proactive manner, to tell you what you should focus on in the morning, without you even having to ask. And it’s not some fancy fantasy five years from now, or a sci-fi movie, right? It’s really something that we are going to share with our customers

in the coming months, and I’m super, super excited about that, because it’s really the ultimate version of what they are expecting.
Saket (19:15.867)
Yeah, yeah. I mean, maybe three, four years ago, there was a move around analytics tools that could almost be like your newsfeed. You could go and say: okay, what’s really important, what should I look at? But I think the technology wasn’t there to tie all the pieces together. And if I understand this correctly, in some ways we have transitioned the way of working, right? We would first say: hey, this is the dashboard I need.
Francois Lopitaux (19:32.295)
Mm-hmm.
Saket (19:41.172)
Therefore, this is the data I need to pull from these places, and I would build the pipelines and so on, and then build the dashboard. But then you show this dashboard, there is a follow-on question, and then: wait a minute, let me go back, let me get that data, let me build that, right? And you’re saying we are now very much at a point where, one, the understanding of the business is in different systems. It could be in sales, it could be in Slack, it could be wherever, right? So one, we’re connecting those pieces.
Francois Lopitaux (19:41.447)
Mm-hmm.
Francois Lopitaux (19:46.321)
Mm-hmm.
Saket (20:10.423)
But are you also seeing, fundamentally, that whatever we need to do with the follow-up questions, the insights, the drill-downs, can be done very dynamically, because the data is just so much more accessible? Are we getting to that stage now?
Francois Lopitaux (20:19.975)
Yeah, I mean, very much yes. And the reason it is possible now, and was not possible before, is that historically in BI tools, what you had to do is take your data that was somewhere and prepare it: merge the tables together, augment them, do lookups,
Saket (20:27.811)
Mm-hmm.
Francois Lopitaux (20:49.167)
flatten everything, and then put that into your BI tool. And then from your BI tool, you were able to query the data that was in your BI tool. The problem with that is that once you flatten, you constrain yourself to the types of queries you can do, which is a big problem. Then if you want to drill down, some drill-downs will be possible and some will not. And if your drill-down is not possible, you have to go back to the data stage and rethink your flattening:

create yet another single table to run your analytics product on top of, right? Which is obviously a different team; you have to open a ticket, it takes long, and so on. Or you are missing some field, and so on. I think the big, big change here is that now, within ThoughtSpot, you can query the data directly where it lives, in whatever cloud data warehouse or database. You don’t have to move the data. So that’s step one, which is easier. Step two:

you don’t have to prepare your data, you don’t have to flatten your data. What you have to do is just create a semantic model. A semantic model where you declare: this is my table A, this is my table B. For example: accounts, opportunities, the products you are selling, customer cases, customer adoption metrics. And you just define the relationships between these tables. That is all you have to do, right? Once you have done that, you can run your BI tool on top of it,

which means the BI tool can now answer any type of question across accounts, opportunities, the products you are selling, but also adoption and cases. And you don’t have to think about all the different models or datasets you would have to create and flatten. You just put your system on top of this semantic layer, and then the right query is automatically generated at the right moment. And I think this is a

fundamental change, in that it is what enables an agentic solution to answer any type of query. If you don’t have that, your agent is going to be blind, right? Because it is not going to have access to the right information, or is not able to generate the right queries, because it doesn’t have the right way to do that. So I think that’s a fundamental shift happening in the market now.
Saket (23:08.706)
Yeah, yeah. And I think the semantic model becomes key to this. As you were mentioning earlier, ThoughtSpot was early to the approach of having a semantic layer. One question I have on that front: what does the semantic layer contain? Maybe you can double-click on that a little bit.
Francois Lopitaux (23:16.23)
Yes.
Francois Lopitaux (23:25.127)
Sure. I think for us, it’s something that we are actually expanding as we work on this solution. Early on it was mostly: what are the different tables, what are the different columns, what are the relationships. We have also added: what type of relationship is it, one-to-many, many-to-many? So we give the engine more information, because the more information you give it, the smarter it gets at building the query.

Then we recently added the AI context. What we call AI context is basically a description of all the different columns, to provide more information to the agent so that it is more aware of which column to use. Because sometimes you see demos with two tables and ten columns, and it works super well. Yeah, I love it.

But the reality of the world is not that, right? As you know, our customers have something like 200 tables, which means something like 10,000 fields, and you need to provide context so that an agent can really understand that. That’s why we are moving from the semantic layer to more of a context engineering approach now, where we provide more context. So we have this AI context that provides descriptions of your columns; we have instructions;
Saket (24:29.889)
Yeah.
Saket (24:41.548)
Mm-hmm.
Francois Lopitaux (24:49.125)
we also have fact tables: what are the facts about this information? So more and more layers are being added to your semantic layer, because it is really becoming the full context of how to use this model. And it is really the key to accuracy, and the key to transparency, to being able to trust whatever answers you get from your agent. If this layer is not strong, the answer

is bad. And we are also adding memories on top of it now. Memories meaning: from how people are using it and how they are chatting with us, we learn more about the model, and then we store this memory. So this is really becoming the brain of our system.
Saket (25:20.566)
Mm-hmm.
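One way to picture the layers Francois lists, from schema through AI context, instructions, facts, and memories, is as a single context bundle assembled for the agent. Everything below, the field names, the example rules, and the `render_prompt` helper, is invented for illustration; it is not ThoughtSpot's actual structure.

```python
# Hypothetical "context engineering" bundle handed to an analytics agent.
# Each layer adds signal beyond the raw schema.

context = {
    "schema": {                       # classic semantic-layer information
        "orders": {
            "columns": ["order_id", "account_id", "amount", "closed_at"],
            "relationships": [("account_id", "accounts", "one-to-many")],
        },
    },
    "ai_context": {                   # per-column descriptions
        "orders.amount": "Booked revenue in USD, net of discounts.",
    },
    "instructions": [                 # rules of engagement
        "Fiscal year starts February 1.",
        "Exclude test accounts (account_id < 1000).",
    ],
    "facts": [                        # stable business facts
        "EMEA and APAC report in local currency; AMER reports in USD.",
    ],
    "memories": [                     # learned from usage
        "Users asking about 'churn' usually mean the monthly_churn_pct metric.",
    ],
}

def render_prompt(question, ctx):
    """Flatten the non-schema layers into a prompt preamble for the agent."""
    parts = [f"Instruction: {i}" for i in ctx["instructions"]]
    parts += [f"Fact: {f}" for f in ctx["facts"]]
    parts += [f"Memory: {m}" for m in ctx["memories"]]
    return "\n".join(parts) + f"\nQuestion: {question}"

print(render_prompt("What was Q1 revenue?", context))
```

The accuracy and trust Francois mentions come from exactly this kind of bundle: the same question with and without the instruction about the fiscal year can produce very different queries.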
Saket (25:35.935)
Yeah, yeah. And I think what’s very important is to understand the importance of the context layer, and how much that context helps the AI model figure out the right query to generate. There are many signals you can provide to the model. Would you say the context layer is the evolution of the semantic layer? Does it add to it? How would you frame that?
Francois Lopitaux (26:02.511)
I think the semantic layer has historically been viewed purely from a technical point of view: describing your tables and your schema. And I think now we need to go to the next level. Because again, in the old world, the dashboard was kind of the governance of your data. The dashboard was created by an analyst, and the guardrail was the dashboard, because you were able to control what you exposed in the dashboard. You were able

to understand it, you were able to provide business context in the dashboard with titles, descriptions, and things like that, and you were able to share it. Now, in the new era, the dashboard is not going to be so prominent, obviously, and people are going to query the data directly. But if you query the data directly, obviously it’s a big mess, right? Some people tried; they failed. That’s why the semantic layer is becoming the governance of your data. This is where you describe permissions, this is where you describe tables and joins.

But this is not enough now. You need to bring more context. You need to be able to say: what does this mean, what is this business about? Everything that an analyst joining a company would need to learn about the business, right? Semantic models historically don’t store this information, so we need to evolve them. Some people talk about adding more technology on top of your semantic model,

but I think the semantic layer will move to context. We really need to bring not only technical information about the tables underneath, but really more about your business: what people are expecting, what the rules of engagement are, how people think about these different metrics, and how they query them. And our target, as a provider, as a vendor, is to enrich as much as we can
Saket (27:40.042)
Mm-hmm.
Francois Lopitaux (27:58.374)
this context layer with all the signals that we have. So another thing we are doing, for example, is converting our dashboards into memory that can be used inside the context. Because again, when you create a dashboard, you provide a lot of knowledge about your business. So we want to extract this knowledge and make it part of the context layer. So there are a lot of different initiatives to make this layer more consistent.

Usage-based signals too, obviously: the more a certain field is used, the more interesting it is. But what is really most important is how a company can upload its business rules, its rules of engagement. So that’s really what we are doing. At the same time, what we need to be careful about is not asking users to do this manually, because in the early days of ThoughtSpot, we asked a lot of our users.
Saket (28:38.933)
Mm-hmm.
Francois Lopitaux (28:54.79)
We asked them to create descriptions of fields. And when you have 10,000 fields, who wants to create 10,000 descriptions, right? Nobody. And then we found out that when they did it, the descriptions actually didn’t make sense; they contradicted each other in some ways. So it was actually worse for the system. That’s why we are also working on automating a lot of things. Memory creation: we are automating it. The AI context is fully automated, because
Francois Lopitaux (29:22.736)
people don’t want to spend one month setting up the system. They want to plug it in and run with it. And this is what we are now achieving with the latest version of Spotter, which we announced two or three months ago: much less coaching or training is required, because we have automated a lot of pieces and made Spotter better. The last thing, which is quite interesting, is that
Saket (29:43.507)
Mm-hmm.
Francois Lopitaux (29:50.566)
we see a lot in the market: hey, you are building a conversational analytics engine, great, and now they ask you to create reference questions. So: if people ask this, you need to answer that. And you need to create 20, 30, 50 questions like that, right? What is funny about that is that you are building a system which is open-ended, where people can ask any type of question, but now you are constraining the system to
Francois Lopitaux (30:20.346)
these predefined queries, right? So this is kind of funny. That’s really what we want to avoid. We really want to create a system that can answer any type of question, in a trusted manner, without companies having to train it for months and months to make it work. And I think we are at this stage now.
Saket (30:26.92)
Yeah.
Saket (30:39.912)
Yeah. I think being flexible, for the business user you are saying is the one who wants to use it, right? Being flexible about what questions you can ask is critical, versus being constrained to a fixed set you can work with. As you were talking about the context part of it, one thing I was wondering was:

does this converge the worlds of structured and unstructured data? Because a lot of context today sits in documents and PowerPoints and slides and so on, right? You were at Salesforce before, and Salesforce is a big source of business data for many business teams and analytical use cases, right? Now, Salesforce is such a customizable tool that company A and company B, their instances and their ways of using it, may be different. And a lot of that lives in sales enablement documents and slide decks and so on.
Francois Lopitaux (31:15.974)
Mm-hmm.
Saket (31:30.782)
Are we converging both these worlds to get the best insights from models now?
Francois Lopitaux (31:35.866)
I think that you have no choice. These two worlds have to converge, because you have so much information, obviously, you know, in both sources of information, structured and unstructured. Actually, I will redefine it: you have the data which is inside your cloud data warehouse, there is the data in your enterprise applications, like Salesforce, for example, and then there is your unstructured data, right? And these are three different flavors of your data.
Saket (31:38.514)
Mm-hmm.
Francois Lopitaux (32:03.588)
But the thing is, they cannot live without each other. For example, you know, we have been a launch partner with Slack. Slack has launched an MCP server recently. So now you can go to Spotter, and in every meeting, when I go to see our customers, what I do now is I go to Spotter and I say: Spotter, can you tell me about this customer, about their adoption numbers, about the deals in progress, renewals, open cases?
But also any conversation that may be happening on Slack that I should be aware of. And Spotter is going to be able to query all these different sources. It’s going to be able to query your product adoption that may be in your cloud data warehouse. It’s going to bring the latest deals in Salesforce. It’s going to be able to bring conversations from Slack, even information from the web about the specific company, about what’s happening in the world. Is there an M&A, is there big news, right? And Spotter is going to be able to
grab all of that, bring it together, and give me a concrete answer with a table that is very easy to digest. And if you think about it, before, we had access only to the cloud data warehouse data, and we had a very partial picture of things, actually. So now we can finally aggregate everything together, from structured enterprise data, unstructured data, web knowledge, into one place.
So definitely, it really gives you much better insight than you ever had before.
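[Editor’s note: a minimal sketch of the fan-out pattern Francois describes, where one question is sent to several sources (warehouse, CRM, Slack) and the facts are merged into one answer context. All tool names and return values here are hypothetical stand-ins, not ThoughtSpot’s actual API.]

```python
def warehouse_tool(question):
    # Stand-in for a query against the cloud data warehouse.
    return {"source": "warehouse", "facts": ["product adoption up 12% QoQ"]}

def crm_tool(question):
    # Stand-in for a Salesforce lookup.
    return {"source": "crm", "facts": ["renewal deal in stage 4"]}

def slack_tool(question):
    # Stand-in for a call to a Slack MCP server.
    return {"source": "slack", "facts": ["open escalation thread"]}

def answer_context(question, tools):
    """Fan the question out to every tool and merge the returned facts."""
    context = []
    for tool in tools:
        result = tool(question)
        context.extend(f"[{result['source']}] {fact}" for fact in result["facts"])
    return context

ctx = answer_context("How is this customer doing?",
                     [warehouse_tool, crm_tool, slack_tool])
```

The merged context, tagged by source, is what an agent would then summarize into the single easy-to-digest answer Francois mentions.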
Saket (33:31.793)
Yeah, and then MCP is maybe bringing you even more real-time information. So you have the large volume of data in the warehouse, great. But then MCP enables interaction with, again, different systems. And part of it, I think, is combining data. So your data from sales, to data from your marketing tool, to your customer success tool, right? So a lot of that stitches together, correct?
Francois Lopitaux (33:48.006)
Mm-hmm.
Francois Lopitaux (33:51.962)
Yeah, and the value of MCP is not only to give you access to information, but also to take action on it. So now, at the end of your analysis, you can create a record in Salesforce. You can send a Slack message to the right AEs that need to be brought onto the call, or you can send an email, or you can book a meeting. So really, the capability to automatically act on it. And this,
if you think about the evolution, is for me really the next stage. What I was speaking about at the beginning is, instead of having you ask questions, to have automatic systems running that say: Francois, I see next week you have this meeting. This is a report that you should be aware of. And by the way, we also booked a pre-meeting with the AI so that you can work on the presentation automatically. And we sent a Slack message to the CSM asking for this information. So you can really have a
mechanism of interacting with all the data to create your best insight, and finally also to take actions automatically.
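[Editor’s note: a hedged sketch of the “act on it” step Francois describes: after the analysis, the agent turns each insight into concrete action calls, of the kind an MCP server might expose. Every tool name and payload below is illustrative, not a real API.]

```python
def create_crm_record(summary):
    # Stand-in for a "create record in Salesforce" action tool.
    return {"tool": "crm.create_record", "ok": True, "summary": summary}

def send_slack_message(channel, text):
    # Stand-in for a "post to Slack" action tool.
    return {"tool": "slack.post", "ok": True, "channel": channel, "text": text}

def act_on_insights(insights):
    """Turn each insight from the analysis into concrete action calls."""
    actions = []
    for insight in insights:
        actions.append(create_crm_record(insight))
        actions.append(send_slack_message("#account-team", insight))
    return actions

log = act_on_insights(["Renewal at risk: open escalation thread"])
```

The returned log is the audit trail of what the agent did on the user’s behalf, which matters for the governed, trusted operation discussed later.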
Saket (35:01.243)
Yeah, I think we’re talking about a whole evolution, right? From static dashboards that are built one at a time, to, you know, conversational systems where you are getting insights with your AI agent, to combining data from different places, to even taking actions, right? And you mentioned earlier that Spotter has an agent that sort of works continuously in the background, right? Looking at stuff.
Francois Lopitaux (35:21.414)
Mm-hmm.
Saket (35:30.013)
Tell us a little bit more about Spotter. Is taking action the next set of things that’s coming in the whole 360 cycle?
Francois Lopitaux (35:36.448)
Yeah, I mean, it’s not the next thing, this is what we do today, actually. Even better, it’s here. But I think that, you know, another component which is really a cornerstone for Spotter is the fact that first you need to have the right information to take the right actions. If you don’t have the right information, then your actions will be bad. And this is why, also, I think
Saket (35:40.782)
Okay, future is here. Yeah, yeah, okay. Yeah.
Francois Lopitaux (36:03.942)
the trust capabilities that we have embedded in our application matter. You know, if you go back to what we were creating, the search token. So one approach that we have at ThoughtSpot that is very different from the rest of our industry is we don’t use text-to-SQL as a technique. Because the problem is, you know, when you use text-to-SQL, you may have hallucinations. The SQL query may be a little bit different. Not by a lot, by the way, just, you know, slightly different. But a slightly different SQL query can give a very different result at the end of the day, right? So…
Saket (36:21.478)
Yeah.
Francois Lopitaux (36:32.26)
That’s why we don’t use the LLM to generate the SQL in our product. We actually use the LLM to help us define our semantic abstraction, which is the search token. And then from the search tokens, we generate the SQL query, which means we are always exactly sure about what query you are generating. It’s always the same query if you have the same search tokens, which is the first benefit: you have consistency in the query generation.
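[Editor’s note: a minimal sketch of the idea Francois contrasts with text-to-SQL: the model maps the question to typed tokens from a semantic model, and the SQL is then generated deterministically from those tokens. The token names, semantic model, and SQL template below are all hypothetical, not ThoughtSpot’s internals.]

```python
# Hypothetical semantic model: token names mapped to vetted SQL expressions.
SEMANTIC_MODEL = {
    "measures": {"revenue": "SUM(amount)"},
    "attributes": {"region": "region"},
}

def tokens_to_sql(tokens, table="sales"):
    """Deterministic generation: the same tokens always yield the same SQL."""
    measure_expr = SEMANTIC_MODEL["measures"][tokens["measure"]]
    attr_expr = SEMANTIC_MODEL["attributes"][tokens["group_by"]]
    return (f"SELECT {attr_expr}, {measure_expr} AS {tokens['measure']} "
            f"FROM {table} GROUP BY {attr_expr}")

sql = tokens_to_sql({"measure": "revenue", "group_by": "region"})
```

Because the LLM only picks tokens, never writes SQL, the generated query is reproducible, and the token list itself can be shown back to the user for verification.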
The second thing that is really important is, because we are not using SQL directly, we can express to users what we are doing. Because what is worst is taking an answer from an agent as a fact without understanding it. You always need to be able to check it. You need to be able to be sure that it is using the right filters, the right columns, the right measures, because a mistake can be made very quickly, and then your decision process can be even worse because of it. So that’s why this
concept of verifiability by end users, quite unique also to ThoughtSpot, is really here to build trust with users. Build trust that they can really trust the answers that are provided by the agent. But yes, Spotter is our main assistant, I would say, today. So any customer can interact with Spotter to ask any type of question, from very simple questions to very complex questions, like
even: what questions can I ask about my data? Or, you know, give me the numbers for that, or what should I do next month to be able to improve my number by 10%, using the variety of tools that we have embedded in Spotter to achieve that. And then the next evolution of that is really building more autonomous agents. And this is the one I was speaking about. The one where you are just basically telling it what your target is. And then the…
Saket (38:16.475)
Yeah.
Francois Lopitaux (38:23.75)
the workflow is going to work on your behalf 24-7 and is going to bring back to you the results that you are looking for and even act on top of that. And again, this is possible only because we have access to the data in a very governed way and in a very trusted way.
Saket (38:41.774)
Talking about trust, right? When you’re giving the answers, I think there are two key things that you said, right? One was that you’re not using, you know, LLMs to do text-to-SQL, which I think is a very important point. In our own experience, we have seen limitations of that and sort of issues that can come up. So you’re leveraging your search token capability. That’s something that was a strength for ThoughtSpot before, and you’re leveraging it to do more deterministic SQL generation. And then…
Francois Lopitaux (39:08.186)
Mm-hmm.
Saket (39:10.702)
when you talk about trust, that’s kind of trusting: what query am I generating? Is that consistent or not? And the second part of the trust also comes in when you get the answers: to what degree can you get the right sort of citations and references, so you can cross-verify those? So tell us a little bit more about maybe some of these technical choices and decisions and how you have done that, because that’s pretty unique to your approach.
Francois Lopitaux (39:29.83)
Yes, and I think this is really about, you know, something that you don’t necessarily learn on day one, but that you get through experience. And by experience, I mean, you know, ThoughtSpot obviously started from the search token technology, from the search aspect, and to be able to achieve this mission, we had to build a lot of mechanisms to make it happen. And basically, we are just reusing the same mechanisms to make this agentic solution work. So it’s really through experience that we have learned
what people are expecting, how they want to interact with data, how we can report back to them. And this was even before LLMs. So it’s a full experience that we acquired over the years, building our solution, that we are just reusing in the age we are in now.
Saket (40:19.405)
Yeah. When we talk about context, by the way, as well, right, you mentioned what queries have been asked in the past. There were, of course, as you said, challenges with humans annotating information, back when it was needed. Now there’s a lot of AI being applied towards that. One of the things I’m curious about is, where does the context layer live? You know, I’ve seen catalog companies also try to look through query logs and stuff to get an idea of what questions have been asked, and they were…
a place for people to write documentation, more wiki style. Now we’ve gone past that evolution, like nobody wants to go write a page on everything, right? So how is this, you know, maybe converging between analytics and catalog, or where do you see that going?
Francois Lopitaux (40:56.646)
Mm-hmm.
Francois Lopitaux (41:05.03)
I think it’s a very good question. I mean, I know what I would like to have, but I’m not sure it’s the right answer. I think, you know, obviously you have the vendors that are just doing semantic layers, right? You now have the cloud data warehouses that are also providing semantic layers, starting with Snowflake, Databricks, and so on. You also have all the BI products; some of them have some concept of semantic layers.
So ThoughtSpot obviously started one a long time ago, because we had to, right? And then more recently, you may have heard about OSI, the Open Semantic Interchange, which is a community of basically all these vendors and users coming together to define what should be the format of a common semantic layer. And I think it’s hard to say the semantic layer is going to sit here or there or there. I don’t think there is going to be one unique answer.
I think it’s going to be more like: okay, where does it make more sense for your business? Where should it sit? Should it be higher in the stack, like in your BI? Should it be very low, at your data warehouse? The question, again, is what is inside the semantic layer and who is building it. Because the more it’s business-oriented, the less access it may have to the cloud data warehouse to provide the right information. The higher it is in the stack,
the more you need to be sure that everybody is going to use it, right? So I think that’s why right now OSI is really a good answer, because having a common language across all these different players is actually going to enable people to decide by themselves where they want it to sit. So the fact that you can easily move it from Snowflake to ThoughtSpot, then to dbt, to AtScale, and so on, right? So I think we don’t have the answer yet. I think that
there is value in every layer, because in every layer you have knowledge that the other layers don’t have, right? So for example, you know, in ThoughtSpot, you may create dashboards. In ThoughtSpot, you are also directly in contact with the business end users, who may provide you business information, or this is where you get the conversations. So you have all this information that you are capturing here that you may want to share with the lower layers. That’s why you want a common format that everybody can use to share and exchange information across layers.
Francois Lopitaux (43:29.892)
I think it is a really, really good initiative, and that’s why we are part of it. I think it’s really going to be a good answer to this problem.
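[Editor’s note: a hedged sketch of why a shared semantic-layer format, as in the OSI initiative Francois mentions, matters: the same definition can be exported from one tool and imported by another unchanged. The schema below is purely illustrative, not the actual OSI specification.]

```python
import json

# Illustrative semantic-layer definition: measures and dimensions
# expressed as data, independent of any one vendor's engine.
semantic_model = {
    "name": "sales",
    "measures": [{"name": "revenue", "expr": "SUM(amount)"}],
    "dimensions": [{"name": "region", "expr": "region"}],
}

def export_model(model):
    """What tool A would emit in the shared format."""
    return json.dumps(model, sort_keys=True)

def import_model(serialized):
    """What tool B would read back, with nothing lost in transit."""
    return json.loads(serialized)

roundtrip = import_model(export_model(semantic_model))
```

The round-trip property is the whole point: if export and re-import preserve the model exactly, the semantic layer can sit wherever it makes sense for the business and still be shared across layers.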
Saket (43:38.701)
Yeah, no, I think that’s a great point. Although I would say, and you might have good insights into this, right? Sometimes people are underestimating the complexity of creating solutions. So let me explain that, right? Data is a raw material. The context and semantics are also raw materials, right? But at some level, maybe there’s an assumption that if we have the data and the semantic layer, then anybody can bring an AI model and get all the insights and stuff, right?
But the gap from that raw material to actually really key insights into the business, a lot of that, as you said, is also coming from the prompts, and that’s why the memory layer matters. Business users may be explaining, like, this is how our supply chain works, by the way, and therefore, and so on and so forth, right? So where are people missing it? Are we attributing too much to the AI model and what it can do if we just throw everything at it?
Francois Lopitaux (44:29.946)
I think we definitely give too much credit to the AI model. The AI is like a CPU, right? The LLM is like a CPU. You send instructions as input and you get some output. The better the instructions you input, the better the output.
Don’t get me wrong, right? They are amazing CPUs, right? Very smart, very strong, something that was not available before. But still, the context becomes the differentiator. The context is really what is going to make an amazing output. Because everybody has access to the same LLMs, but not everybody is going to get the same value out of their agent.
Saket (45:07.243)
Yeah, and would you say that kind of creates an advantage for you guys? You know, having seen that text-to-SQL has certain limitations, having seen that business users interacting with you helps enrich that context, and so on, right? So that sort of creates an advantage in how your product serves people better? Yeah. Okay.
Francois Lopitaux (45:24.612)
Definitely, yes. Definitely, yes. And also the fact that we can connect to every different type of content, every type of cloud data warehouse. It’s kind of like an abstraction layer that you are building between the storage of your information and the way you experience the data and the way you interact with your data. You don’t have to pick one unique solution. You can really have an abstraction layer that gives you flexibility, because we don’t know what tomorrow is about, right?
If tomorrow there is a new database, a new repository of information, a new thing that you want to adopt, you need to have an abstraction layer so you can quickly switch and plug it into your experience layer.
Saket (46:07.415)
Okay, yeah, right. So going into one of the things you mentioned: how, in some ways, has the work also changed, right? I mean, from defining those dashboards and data to where we are now, what’s your take? I mean, you’ve been in the space, you’ve seen AI for a long time. How are teams going to evolve around data, analytics, insights? And how will the terminology we use change?
Francois Lopitaux (46:33.065)
I mean, it’s, you know, what we as tech people don’t like: we like to build products, but the reality is usually as much change management as building a product. So this is really about having people with an open mind, having people with curiosity, who are ready to change the way they work, because the new way of working is going to be ten times more efficient than the old way of working. So I think that, you know,
customers don’t need to be stuck in the BI world, having the most beautiful, pixel-perfect dashboard, because this is not how you make money. This is not how you take better decisions. It’s just a piece of art, which is beautiful, but it’s a piece of art, right? So they need to be ready to move to the next stage and be able to embrace a new way of working. So if you think about data leaders, again, back to my point, they need to become agent managers.
They don’t need to be able to build another dashboard. They need to be able to manage agents that are going to provide value to their end users, that are going to be able to take the right decisions on behalf of their end users. And really, working on this kind of workforce management of agents that are going to find insights and take actions. And the end users need to be receptive to this new way of working, too. The fact that now they can interact with an agent to get their answer right away,
versus going to speak to somebody to create a new dashboard. So I think this is a lot of change that needs to happen. Obviously, it’s not just in the BI world, right? It’s in every stratum of what we are doing today. But if you look at BI, the ways of working really need to change.
Saket (48:15.515)
Then ultimately the benefit of all this, all this work that has happened over the years around data analytics, self-serve analytics and so on is to operate our businesses better, right? I mean, get to know what is happening in the business, get a pulse on that, understand what’s going on and take those actions. And that sort of goes to that, you know,
Francois Lopitaux (48:27.514)
Yes.
Saket (48:36.265)
the executive layer, the managers, the individual people, right? What are you hearing from chief data officers or other execs, because you’re at the forefront of bringing this kind of interface out, right? What are you hearing from them in terms of business impact?
Francois Lopitaux (48:47.226)
Mm-hmm.
I think that people are starting to be amazed by the capabilities of what, for example, Spotter and ThoughtSpot can do now. The fact that I can ask: hey, look at my quarter, tell me which deals are at risk, build me a plan to create 10% more leads this quarter. And the value of this, the things that you can do now, it’s just amazing.
For the CDO, I would say, you know, it really comes down to: what decisions do you want to take, how do you want to focus in your industry, in your company, how do you want to help business leaders and IC people take better decisions every day? What are the types of decisions you need to take, and then work up the chain: okay, if you want to be able to do that, this is what you need to put in place. And I think now
It’s a better day than ever to be able to do that through the different applications and solutions that exist on the market.
Saket (49:57.825)
Yeah, no, it’s very fascinating. So tell me a little bit: you know, ThoughtSpot started as a very enterprise-focused business, right? How has the business model also changed, perhaps, as this has gotten more democratized?
Francois Lopitaux (50:10.662)
Yeah, I think there are two things. So it’s quite interesting: our customers range from the Fortune 500 to much smaller companies of like 50 people, right? And there are two main use cases that our customers are using us for. The first one is obviously to revolutionize their internal BI, to move to a more agentic solution,
where they can speak to their data and take faster decisions, for internal teams. But we also have a big new business, which is embedded analytics. It’s about every company that is building a product: as part of the product they are selling to their customers, they need to be able to embed analytics, embed conversational analytics, into their own product. And they don’t want to recreate everything from scratch, because usually
their expertise is in a different industry. It’s not BI expertise. So that’s why half of our business is actually customers buying our solution to put it into their own product, so they can share dashboards, and also conversations, with their own customers. And then they can monetize the data they have inside to their customers, without having to spend all the R&D time to make it happen.
Saket (51:32.915)
To wrap up, for those who are in the world of data analytics, what learning resources might you recommend to them to sort of keep up with these latest changes?
Francois Lopitaux (51:46.47)
Yeah, I mean, I have, you know, obviously subscribed to a lot of mailing lists. I don’t know all of them by heart. But the best subscription I have right now is Medium, actually. Because with Medium, you know, after one week, when you start to look at specific articles, it’s going to recommend exactly what you want to read.
And so through Medium, I was able to really discover new authors that are fascinating and really interesting. So for me, Medium is my source number one. And I have plenty of other newsletters I can share with you after, if you want, but this one is really interesting for me.
Saket (52:29.127)
Yeah.
And what’s the best way for our audience to connect with you, if they want to reach out or ask you questions?
Francois Lopitaux (52:36.292)
Yeah, sure. Connect with me on LinkedIn. That’s the best way to connect with me.
Saket (52:40.339)
Okay, awesome. Well, thank you so much. This has been a fabulous conversation, Francois. Really enjoyed it, we went deep into a lot of topics. Thanks for taking the time.
Francois Lopitaux (52:48.986)
Thank you very much.