The ‘No More Individual Contributors’ Framework: AI Team Management in Enterprise

Episode 8
Mar 10, 2026 | 49:35

Summary

Most companies think turning on ChatGPT Enterprise and running a few lunch-and-learns counts as AI transformation. Michael Domanic, VP and Head of Generative AI Business Strategy at UserTesting and OpenAI’s System Builder of the Year, has spent two years proving otherwise inside an 800-person org.

His starting point is a reframe that changes how the whole program runs: there are no more individual contributors. Everyone is managing a team of three (an intern, an assistant, and a thought partner with PhD-level expertise available all day) and the real leadership skill now is knowing how to direct that team toward what actually moves the business. From there, Michael gets specific on how UserTesting built the enablement infrastructure to make that mental model stick across every function, how they calculate ROI when half the value is genuinely hard to quantify, and why the companies waiting to see how AI plays out are making the mistake they’ll most regret in five years.

Topics Discussed

  • Reframing every employee as a manager of a three-person AI team
  • Anchoring transformation to three business levers to avoid chasing infinite use cases
  • Using custom GPT hackathons to surface bottom-up adoption across all functions
  • Running a two-dozen-person cross-functional ambassador program as an internal force multiplier
  • Quarterly top-10 implementation reviews as a before-and-after ROI measurement framework
  • Why functional leadership, not function type, determines adoption speed
  • Shifting from model selection to purpose-built tooling as the real enterprise differentiator
  • Why transformation requires dedicated leadership and can’t be a distributed side project
  • Honest framing on job displacement: what the data actually supports vs. what is speculation

“The genie is out of the bottle when it comes to AI. That genie is not going to go back in the bottle.”

Michael Domanic
VP of Generative AI Business Strategy at UserTesting
Transcript

Michael Domanic
We’re living through one of the most transformational moments, I think, in human history. This is the final normal year. Everything after this point is going to look radically different. The way that we do work is going to be radically different. And the question is, is AI going to take my job?

 

Here’s the good news. We don’t know. We know that the genie is out of the bottle when it comes to AI. That genie is not going to go back in the bottle.

 

The people who use AI are having a much bigger impact on productivity than people who are not. If you don’t do it today, your competitors might, and don’t let that happen to you. I would be very happy to hear that our competitors are taking the approach of let’s see how it plays out.

 

Saket Saurabh
Hi everyone. Thank you for listening to another episode of Data Innovators and Builders. This is your host, Saket Saurabh. Today I’m speaking with Michael Domanic, VP and Head of Generative AI Business Strategy at UserTesting. Michael, thank you for chatting with us today.

 

Michael Domanic
Saket, thanks for inviting me. I’m excited to be here.

 

Saket Saurabh
Super excited, Michael. Would love to maybe start a little bit with your background.

 

Michael Domanic
Yeah, as you said, I run AI transformation at UserTesting, which means that my job is to look inside the organization and help leaders and individuals across the organization bring AI into their roles, to add rigor and efficiency to almost everything that we’re doing.

 

I’ve been here at UserTesting for just over seven years. I’ve had a lot of different roles in the organization. Before joining UserTesting, I was actually a customer of UserTesting. And that was back when I was building chatbots for companies during the first hype cycle of AI.

 

I think the first hype cycle probably lasted from around 2015 or 2016 to about 2019 or 2020. And that’s when every company felt like they had an NLP chatbot in places like Facebook Messenger, Alexa, Slack, and Skype. So I was the person building a lot of those experiences and then coming to UserTesting to test them.

 

Saket Saurabh
Awesome. I think there’s a lot of rich experience that you’re bringing into this AI role. And congratulations on winning the OpenAI System Builder of the Year Award. Tell us a little bit about what the award is and how you got to that.

 

Michael Domanic
Yeah, so OpenAI has a champions group, which is a group of enterprise customers, folks leading AI across their enterprise. I’ve been a pretty active member of that group. I think I’m one of the founding members.

 

And yeah, so this is the first year that they gave out awards in different categories. And I think they’re recognizing me for building durable systems here at UserTesting that will let us continue to use AI in interesting and meaningful ways in our business and allow that transformation to happen.

 

Saket Saurabh
Awesome. Congratulations on that. Super excited for your recognition and the work you’re doing. You mentioned that you’re bringing AI into the workflows within UserTesting across different functions. So give us a bit of a perspective on what that means. I know almost every enterprise is thinking about doing that. How have you gone at it? Is there a sequence to that?

 

Michael Domanic
Yeah, there’s certainly a lot to unpack there. So maybe a good place to start is, why does an organization like UserTesting need a formal transformation program? I feel like probably every company of over 100 employees needs a formal AI transformation program for two reasons.

 

One is opportunity and two is responsibility. On the opportunity side, it should be pretty clear to most of us by now that if we are really thoughtful about the ways that we bring AI into our organization, to augment workflows, to enhance the things that we’re doing, add rigor to the things that we’re doing, there’s incredible opportunity to add that rigor and efficiency across the company.

 

On the responsibility side, I see responsibility as having two different branches. The first branch is the governance piece, making sure that we’re providing our employees with pathways to use AI responsibly and ethically. That could mean that certain data classes and certain use cases are off limits because we want to make sure that we’re fulfilling our obligations to our customers, to our stakeholders, to our employees.

 

And then the other side of responsibility is the responsibility that organizations have to their workforce. A pretty bold statement that I’ve been making over the last year, and I think a lot of people agree with this, is we’re living through one of the most transformational moments in human history. And I do believe that if you’re a company that advertises yourself as a great employer, you have a responsibility to lead your workforce through that transformation. This is going to impact all of us. There’s no one that’s going to be left untouched by this transformation.

 

Saket Saurabh
Yeah, no, I would completely agree. How we do our work is fundamentally changing. And if the people working in our teams are able to learn through that transition and become that much more AI-first and AI-enabled, I think we are helping them build a better future versus saying that hey, it’s up to the people to learn on their own and figure it out.

 

Absolutely, well said. And let’s maybe double-click first on the opportunity side. Opportunities abound. And I’ve also seen people sometimes overestimate the opportunity, like oh, everything can be done with AI, and then sort of go back and forth. How have you approached that opportunity in being realistic, and maybe what outcomes are you seeing?

 

Michael Domanic
Yeah, I mean, we’re certainly not trying to boil the ocean. We don’t see AI as a solution to absolutely everything. But what we’re trying to do is bring AI capabilities into our workflows in very pragmatic ways.

 

What we teach our workforce is that there’s no such thing as an individual contributor anymore. Every one of us is managing a team of at least three people. So every one of us all day long has an intern, we have an assistant, and most importantly, we have a thought partner with PhD-level expertise in nearly every subject sitting next to us all day, every day.

 

That’s the first thing we should be thinking about. How do we bring this team we’re now managing into the most meaningful things that we’re doing as a business? Teaching people and getting into the practice of doing that is a big part of how we approach this.

 

But then when you pull back and look at UserTesting as a SaaS company, we operate on the SaaS model, which means there are three things we need to be doing really well. First, we need to be providing pathways to grow our revenue significantly. Second, we need to retain our customers. And third, we need to increase product velocity.

 

When you’re bringing capabilities into an organization and teaching employees how to use AI, you want to get really focused on those three things, because we know those are the things that will have the most meaningful impact on our business.

 

Saket Saurabh
Yeah, I like the sort of managing aspect, like those three team members managing the basic different AI functions. And in some ways I think it has empowered the individual contributor a lot more because one person can actually go a little bit more end to end. I’ve seen that in engineering specifically, like a product manager can go from concept to prototype to something functional.

 

But I think the people who can manage these assistants well and understand how their task breaks down is really key to getting success there.

 

Michael Domanic
Right. And those three things are obviously really important to us. They’re not the only things we should be doing, but they’re the three we really focus on. So if someone in HR wants to improve an HR process, then yes, we should definitely be doing that, even if it doesn’t fit directly into one of those three things, because most things we do in our business are going to have either a direct or indirect impact on them anyway.

 

And look, this is hard. It’s not a matter of just turning on models and telling everyone in the company that they have a team of three people working with them all day. This requires systemic change, top down and bottom up. We have been doing this for nearly two years and we’re just now at the point where we’re looking back and recognizing here are the things that we’ve done really well, and here are the things that we may have over-indexed or under-indexed on. This is a constantly iterating thing.

 

Saket Saurabh
Okay, so what practical advice might you give? I mean, this is not just about hey here is an AI tool and here are the three things you can leverage it in. Are you running workshops and training sessions, showing people how it’s done? What are some of the practical things that people can take away?

 

Michael Domanic
There are a ton of enablement initiatives happening. This is a top down, bottom up approach. The things that we do on a very regular basis to provide that enablement include basic stuff like monthly lunch and learns that teach folks in our company how to use new capabilities and existing capabilities with AI.

 

We put a spotlight on people doing things already, so if you’re in sales and you see how your colleagues are using AI in interesting ways, we’ll put a spotlight on that so you can see how your peers are doing it.

 

I host weekly office hours where anyone in the organization can come in and talk about any AI topic at all. It could be things they’re working on or just existential questions about AI transformation. Anything under the sun.

 

We have an ambassadors group, a Center of Excellence, which is a group of about two dozen people. We’re about an 800-person company. So we have about two dozen people in our ambassadors group, a cross-functional group. They’re raising their hand to say, I’m going to spend a couple hours every week teaching my peers, teaching people sitting next to me or adjacent to me, how I’m using AI to transform the things that I’m doing. Whether that’s someone in marketing, someone on our HR team, or a developer.

 

This is ongoing. I think a mistake a lot of organizations make now is they’ll turn AI models on for their workforce and do some basic upfront training and say, okay, we’re on our way to transformation because we’ve provided people with some basic enablement and models and this is just all going to happen on its own now. What we’ve seen is that is usually not the case.

 

Saket Saurabh
Okay. And did you have to change the incentive structure as well? Because one of the things I’ve seen is people at least in some cases felt like if they use AI to do the work, they don’t deserve the credit. But like, no, your success is your success. Did you have to do something there?

 

Michael Domanic
I mean, that’s certainly something that we talk about a lot. When I look back to when we first started this formal transformation journey and started giving people access to AI tools, there was certainly a hesitancy for people to say, like, oh yeah, I used AI to do this.

 

And I think we’ve turned the corner on that, because a question I continue to ask everyone in the organization who did anything meaningful, especially our leaders, is: tell me how you used AI to achieve the thing that you’re talking to us about. And if they don’t have an answer, my frank response is, why the hell would you not use powerful tools to enhance the rigor of the things that you’re doing?

 

I think this is a cultural thing. You’ve got to teach your organization that using AI is somewhat of a responsibility. This is a tool we’re making significant investments in and rolling out to the organization. And we know that tool has the capability to enhance the things that we’re doing.

 

And for those that do talk about it, we put a spotlight on those individuals and say, this is why this is such an incredibly valuable person in our organization, because they saw an opportunity to experiment with a new capability and having done that experiment, they achieved this incredible outcome. Continue to do that across your organization and you can watch that culture shift.

 

Saket Saurabh
Yeah. That’s a very good point. So across the teams, would you suggest there is a specific function that companies should focus on initially and go one after the other? Or is it like just go across the board? What’s worked well for you?

 

Michael Domanic
I mean, it might depend on your company. For a company like ours, there wasn’t one specific function where we said let’s get really focused on this. We know that AI has the capability to enhance pretty much all the work that we’re doing, because UserTesting is a knowledge work company, and that’s where AI fits really well.

 

Engineering might be on its own island because AI from the start has been talked about as an engineering-enhancing tool, a developer tool. I think it’s built into the nomenclature of how we talk about AI.

 

But when you look at the rest of the functions across the business, I don’t think there is anything specific about the marketing persona versus the sales persona versus the HR persona or the finance persona that makes it a better fit. I think really this comes down to the leaders leading each of those different functions, how they talk about AI, how they engage their teams on AI. I think that is actually what makes the biggest difference.

 

Saket Saurabh
And how does this work functionally? Do the teams come up with the ideas of what they want to do? Are you specifically guiding them in a certain direction?

 

Michael Domanic
I think it’s both. That’s kind of the top down, bottom up approach. I think a lot of companies are not focusing enough on the bottom up approach. I think that’s really critical.

 

One of the ways that we do this is we use a number of AI tools at UserTesting. Our center-of-gravity tool over the last two years has been ChatGPT Enterprise, and we got in early on custom GPTs as soon as the capability became available. We democratized that across the company.

 

We love custom GPTs for a number of reasons. If you take the right approach and try not to make it too complicated, a custom GPT often fits really nicely into repeatable workflows. What we say to our organization, and we’ve done hackathons on GPTs, is everyone in the company has access to build these things. Let’s do an experiment. Not all those experiments are going to lead to something valuable, but the experiment in itself often is the point.

 

We’ll say to our organization, build custom GPTs. If you’re building a GPT that has relevance outside of just yourself, we will work together to evangelize usage of that GPT. Sometimes there are two GPTs being built in tandem that solve the same business case, and we’ll get involved and merge the capabilities of those things.

 

I think that has had a really big impact on our transformation journey because it gives people some ownership over customizing the tool for themselves. And I think it’s also giving people a lane to experiment, which might actually be the most important thing to do right now.

 

A lot of the GPTs that people are building don’t actually lead anywhere; they don’t leave the experimentation lane. But if you’ve built a GPT and there’s no additional utility beyond the experiment, you’re probably still discovering some aha moments about other ways that AI can help your role. So we just try to encourage that as much as possible.

 

Saket Saurabh
Yeah. And when I think about some cultural aspects based on what you’re saying, two things stand out. One is people who are naturally curious, who go out there, maybe watch some YouTube videos or see what others are doing, and try it on their own. And the other is people who are willing to experiment. So that willingness to experiment and that curiosity are cultural aspects that help.

 

But not everybody has that. And that goes back to the responsibility part you talked about: how do you make that happen across the organization, because we’re all different. Some people are not naturally experiment-driven or as curious as others. Any thoughts on that?

 

Michael Domanic
Yeah, I’m not expecting everyone to have that same level of curiosity, creativity, or experimental mindset. That’s fine. We don’t need everybody to have that. But there is a critical mass of individuals that do have that.

 

And when you give them a lane to play and experiment, you’re giving them the ability to say, here’s a problem I deal with on a regular basis in my role, I think I can bend this capability to add value to this thing. And then if they do that, then they can go to maybe the people who aren’t as creative. But they’ve already built the solution for those people. And I think that’s a really big difference in the way that we’ve moved through our adoption journey.

 

Saket Saurabh
Yeah. And when I talk about curiosity and experimentation, I feel like that’s kind of the core essence of UserTesting as a company itself. Because I’m building a product, I’m curious how people would react to it, and I’m willing to experiment and try different things and gather the data.

 

So maybe tell us a little bit about how AI has been coming into UserTesting from a product perspective itself. With AI, you can ask questions about almost anything.

 

Michael Domanic
I think culturally we did have a little bit of a head start here. I think this actually goes back 10 years, to when I was building AI solutions as a customer of UserTesting and testing them on the UserTesting platform. There was some natural curiosity about the way that UserTesting played in that space at that time.

 

I joined the team and then in 2019, I think it was 2019, we started building our own homegrown ML models. So we didn’t start our journey in November of 2022 when ChatGPT was launched. We started this journey on AI a while ago. And I think that gave us a little bit of a head start because it’s already something that we were thinking about building our product around.

 

But yeah, doing that early on helped give us that cultural head start that prepared us for the moment when we entered this big hype cycle that we’re in right now. So you fast forward to November of 2022, ChatGPT launches on GPT-3.5, and there were already enough of us in the organization who were pretty engaged in the conversation. We recognized how important that thing was.

 

And then our journey really pivoted into our product. We started thinking very early on about how do these capabilities enhance value when we bring them into our product? How is this going to enhance the value our customers have in our product? So we got really focused on some of the bigger bottlenecks of the workflows that our customers are in when they’re using our product. And that’s where we started building some generative AI solutions. That continues to proliferate. There’s more ambition than we have resources for right now, and that’s something that continues to enhance our product.

 

Saket Saurabh
Yeah. And what’s the state of the art there? Like, in terms of the product itself, sort of mimicking user personas and things like that.

 

Michael Domanic
So I think there’s a lot that we can do. But really, what we’re doing right now, I mentioned earlier that the first problem we tried to solve was one of the biggest bottlenecks our customers experience when they’re in the process of capturing insight. You come to UserTesting, you launch a test, maybe to test a prototype that you’re working on, and you get 15 people who come back with a lot of feedback. These are video sessions, and each of those videos is probably around 25 minutes, times 15. It takes a long time to parse out the critical insights.
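To make the bottleneck concrete, the raw review time is simple to tally. The session count and length below are the rough figures Michael cites, used here only as an illustration:

```python
# Rough review-time math for one study: 15 video sessions of roughly
# 25 minutes each (approximate figures from the conversation).
sessions = 15
minutes_per_session = 25

total_minutes = sessions * minutes_per_session
total_hours = total_minutes / 60

print(f"{total_minutes} minutes (~{total_hours:.2f} hours) of footage per study")
# prints "375 minutes (~6.25 hours) of footage per study"
```

Over six hours of raw footage per study is what the generative summarization features aim to compress.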

 

When we first started bringing generative AI into our platform, that’s the problem that we directed our efforts towards. Let’s bring generative AI into our platform to help our customers more easily surface what those insights are when people are giving them feedback about their product.

 

And this continues to proliferate. It’s also developed into helping you understand and target the right personas to get feedback from, helping you set up the right set of survey or qualitative study questions. And then now it’s also across all the work you’re doing on our platform, what are the trends that you’re seeing as far as insights go, where might there be opportunity for you to enhance your strategy based on what we know about the things that you’re testing.

 

There are so many things that we could be doing and we continue to push the creative buttons to really focus on adding that value for our customers.

 

Saket Saurabh
Yeah, I think talking about your journey into UserTesting, you were previously already working on natural language-driven chat, and you said you were a customer of UserTesting at the time. Given how much of the user experience and customer experience is driven by LLMs right now, I would love to get your take on that particular aspect of the market.

 

You came from the experience of building customer-facing chatbots with natural language way back, well before large language models, and now it’s one of the hottest areas of applying AI. But I’m still seeing some friction at the user end, like, hey, I don’t want to talk to an AI bot, get me to a person. Where are we headed in that direction, just from your expertise?

 

Michael Domanic
Yeah, look, I mean, when we were building NLP chatbots for customer experience in that first hype cycle, all of those experiences were garbage. They were just really bad. And that’s simply because the technology wasn’t ready to deliver on the expectations of customers. And I think going through that moment set expectations maybe in the wrong way around what chatbots should do.

 

And I think a lot of people having had bad experiences in that first hype cycle still remember the pain of trying to get an NLP chatbot to just answer basic questions. So that might drive a lot of the hesitancy that consumers have today.

 

But look, I think it’s really critical for any company that wants to build a consumer-facing or customer-facing experience with AI to first understand what it is their customers actually want. And that’s where UserTesting comes in. We talk to our customers about this all the time. Pretty much every one of our customers is trying to figure that out right now: what is the right set of features we should be delivering to our customers?

 

And a lot of times, if you can give your customers simple ways to solve significant challenges with AI, I think at that point they’re not even recognizing that it’s AI that’s helping them. You’re just creating a more frictionless experience for them. And I think every company is going to have a different version of this.

 

Before you start developing, get into discovery mode research, understand what are the friction points that your customers have and where might AI be a solution. And then once you start prototyping that solution, go to your customers and see if that’s something they would even be comfortable using, if that’s an AI solution they would be comfortable with.

 

It doesn’t look that much different from a typical product discovery or lifecycle stages. You’re going to do your discovery research, you’re going to build a prototype, you’re going to get feedback on the prototype, you’re going to build something that has a little bit more functionality, you’re going to get feedback on that. So the point is continue to get feedback throughout the development lifecycle.

 

Saket Saurabh
Yeah. And I think one interesting movement that has happened in the software industry is that more companies are not just providing tools to customers as a SaaS service and then getting out of the way. Instead, the role of the forward-deployed engineer (FDE) has come in, where they’re actually working to build a final solution for the customer. So you deliver a platform with an FDE, you’re working with the customer building a solution, and I’m guessing there’s a lot of feedback and learning happening in that process, which previously may not have been the case.

 

Michael Domanic
Yeah, well, I think you’re talking about something different now, and that is the way to build product. The way that we build product is changing all of the time. Now we have all these rapid prototyping and vibe-coding tools. That means that we can build something that has more functionality much earlier in the process. That’s something that our PMs can do before they even deliver specs to an engineer. They can say, here’s how I think that this thing should look, and here’s the vision for this product before we even get into speccing this.

 

And you can take that and bring that to your customers and get feedback on that. But certainly these capabilities are changing the way that we build products.

 

Going back to what I was saying earlier, I think the process of getting feedback and where you get feedback, that’s still largely the same.

 

Saket Saurabh
Yeah, I was just thinking that that is there, but then you’re maybe getting even more feedback through the full lifecycle as your product gets adopted and used and you have an FDE there. So it’s not just in the early phases as you’re building the product, but you also become a user of your own product trying to deliver a solution sometimes.

 

Michael Domanic
Yeah, I totally agree with that. The key thing is make sure that you have feedback before making the investment to move to the next stage.

 

Saket Saurabh
Yeah, that’s always been the case. Absolutely. And I think user experience has become extremely critical because a lot of other layers are becoming more and more similar in some ways. A level of intelligence is expected almost in every product today, and the user experience matters a lot.

 

Michael Domanic
I totally agree. I wrote a predictions piece a couple months ago about what trends we might see in 2026. I actually think the role of the UX researcher and UX designer will be significantly elevated in importance this year. And the reason why is, again, we have more ambition than resources. These tools are making it a lot easier to build product, they’re lowering the barriers, which means that we can build product faster. But we still need to get it right. It’s not any easier to build the right product. We still need to make sure that we’re doing this in the right way that delivers against the expectations and adds value to our customers. And that’s where your UX researchers and designers are going to add significant rigor.

 

We’re going to build more product, we’re going to ship more product in 2026. That means we need to get really, really tight on how we’re getting feedback.

 

Saket Saurabh
Yeah. And I think what I’m seeing in our own work day to day is that the UX research aspect has also evolved quite a bit because part of our product is a conversational one where there’s a lot more text and verbosity. You have to get it exactly right, both in terms of latency of response and what you say. But then there’s a generative UI aspect to it as well, which is trying to balance that not everything in an enterprise product can be conversational. So you have to bring up the right visual artifacts along with it. And that’s a very different model than the old everything is buttons and menus kind of thing.

 

Michael Domanic
Yeah, totally. And again, that’s something I think every company is trying to figure out. What’s our version of this? How do our customers want to talk to our products? We’re all figuring that out now.

 

Saket Saurabh
Yeah. Very much. Early in the conversation, you mentioned that you think of yourself as a system design person. Am I saying that correctly? And I would love to have you dive a little bit deeper into what you mean by that.

 

Michael Domanic
Yeah. So we’re all going through this transformational moment right now. And I think the impact that transformation will have on us is we’re all going to be doing work differently.

 

Greg Shove, I don’t know if you know who he is, he’s the CEO of a company called Section. He said something interesting on a webinar a couple of days ago where he was discussing this idea of the last normal year for businesses. He was saying that this is the final normal year. Everything after this point is going to look radically different. The way that we do work is going to be radically different.

 

And if that’s true, that means we need to think about how we’re redesigning the way that we do work. The processes, the workflow. All of that needs a redesign. We need to figure this out so that we’re prepared for how different the future of work is going to be.

 

And this gets back into what we were talking about earlier, the responsibility piece. If this is the final normal year and everything looks weird and different next year and years beyond, we need to help prepare our employees. There’s an intentional design that’s involved in this transformation journey. And that’s essentially what I mean by that.

 

Saket Saurabh
Okay. Yeah, I think that’s extremely thoughtful. And I would say I sort of agree that the normal is shifting very quickly. Especially the coming of the coding tools and how coding has changed fundamentally, and that has happened in just the last 12 months or so. That’s one part of it. The vibe coding aspect and then the MCP and other things that are connecting systems together. These have been big shifts in just the last 12 to 14 months.

 

And we didn’t even talk about autonomous agentic capabilities yet. So when you start introducing that into the things you’re doing across the business, work is going to look very, very different in the future.

 

Michael Domanic
We’re doing a lot of experiments with those right now. I think the focus for UserTesting this year is bringing in a robust orchestration tool that will allow us to build and orchestrate agents into meaningful workflows across our business. That’s going to give us the possibility to do more as a company and continue to add rigor.

 

As we do these things, the work that we do is going to continuously shift. It’s going to look different. And we need to be really intentional about designing what that looks like for everyone in our company.

 

Saket Saurabh
Yeah. And since we’re talking about all these changes, when executives ask about the ROI of this work and the transformation, what are you measuring, and what would you recommend people measure?

 

Michael Domanic
So this gets a little bit challenging. There are a lot of schools of thought on measuring the ROI of AI. One school of thought is you don’t measure the ROI of AI because it’s like electricity, it’s table stakes at this point. You bring AI into your organization because you know it’s going to have a big impact. I am sympathetic to that point of view, but there’s some nuance there.

 

I think that you can measure meaningful implementations of AI. And this is something that we do at UserTesting. On a quarterly basis, we look at the 10 most meaningful implementations of AI in the company. What we’re essentially doing is measuring the before and after state of those implementations.

 

This is a little bit easier when the implementation is happening on your go-to-market team because we’re already measuring the impact of everything we’re doing there. So we know that if an assistant or an agent increases the number of discovery call bookings in a specific quarter, we know the value of every discovery call that we do. So we can actually develop an ROI model around that. Same thing with number of campaigns. Every campaign that we do has a specific dollar value. If we lower the time and increase those campaigns with the same level of resources, we can pretty accurately calculate the ROI.

 

I think of measuring the value of AI in two ways. There are squishy metrics and there are hard metrics. The squishy stuff is the stuff that feels like table stakes. We know that this is important, we know it’s going to add value, we can’t quite calculate what that value is today, but we just know it’s important. And then there are the hard metrics, which is when you measure the before and after state of bringing an AI solution into a workflow or into a part of the business, you can start to get a little bit tighter on what the actual value of doing that thing was.
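The before-and-after model Michael describes reduces to simple arithmetic: value the delta in a metric you already price, then net out the cost of the AI tooling. The sketch below illustrates that calculation; all figures and names are hypothetical, not UserTesting's actual numbers.

```python
# Hypothetical before/after ROI sketch for an AI assistant that
# increases discovery-call bookings. All figures are illustrative.

def roi(bookings_before: int, bookings_after: int,
        value_per_call: float, ai_cost: float) -> float:
    """Return ROI as a ratio: (incremental value - cost) / cost."""
    incremental_value = (bookings_after - bookings_before) * value_per_call
    return (incremental_value - ai_cost) / ai_cost

# e.g. 40 -> 55 bookings per quarter, $2,000 per discovery call,
# $10,000 quarterly tooling cost:
print(roi(40, 55, 2_000.0, 10_000.0))  # prints 2.0, i.e. 200% ROI
```

The same shape works for the campaign example: swap bookings for campaigns shipped and price each campaign instead of each call. The "squishy" metrics have no slot in this formula, which is exactly why they get tracked separately.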

 

Saket Saurabh
Yeah, I think it’s good to look at both aspects. Because if you take a very financial dollar-in dollar-out measure, that may not have a real outcome in a year’s time frame or a six-month time frame. But that doesn’t mean you stop doing that. You have to have that innovation perspective because, as you said, work is changing by itself.

 

Michael Domanic
Yeah, and I think a good example of how that plays out is in the harder-to-measure, squishier stuff. One of the values of having an AI transformation program is it makes recruiting easier. One of the things I talk about all the time is, if you’re looking for a job in 2026, one of the most important questions you should be asking any prospective employer is, tell me about your AI transformation program. Because people want to go to a company that is providing the enablement, providing the tools.

 

If you’re given the option of two different companies, and one company says, we’re not really organized around AI, maybe that’s important, maybe not, while the other is hyper focused on it, I’m pretty sure you’d choose the company that’s more focused on it. And if it makes recruiting easier, there’s probably some measure you could do there. But you have to have a great answer to that question. As a company, if someone’s asking you to tell them about your transformation program, you had better be prepared with a pretty good answer.

 

Saket Saurabh
Yeah. I think that’s a good question to ask to see whether you’re going to invest your time in a place where you actually get to grow with AI, and align your career growth with where the world of work is going anyway.

 

But talking about jobs, do you still see the concern about job security as you’re telling people to adopt AI? Is that coming up as a concern?

 

Michael Domanic
So this is probably one of the trickiest things to figure out at this point. On the question of is AI going to take my job, here’s the good news. Well actually, let’s start with the bad news first. We don’t know.

 

Right. So that’s kind of where we’re at today. Any predictions people make about the level of job displacement when it comes to AI are really just shooting from the hip. It’s a shot in the dark. It’s equally possible that AI will have a detrimental impact on jobs as it is that it will have a positive one.

 

Here’s what we do know today. We know that the genie is out of the bottle when it comes to AI. That genie is not going to go back in the bottle. We know that we’re living through one of the most transformational moments. The people who use AI are having a much bigger impact on productivity than people who are not.

 

At the end of the day, it’s up to each of us to make a choice of which path we want to follow. I would certainly encourage people to engage today. I’m not telling everyone that you need to use AI all day, every day, but engage. Try to figure out what is this thing going to do for me that’s going to benefit me. We’re all just going to have to get really comfortable being in experimentation mode for the next few years because this thing is not going to slow down.

 

Saket Saurabh
Yeah, I think it’s a very fair thing to say, like hey, the honest answer is we don’t know. And perhaps for all of us it’s like, let’s get on with the wave and see where it goes. Better to be with it than oppose it, because I think it’s almost impossible to oppose that force of change that is happening. So it’s better to be aligned to that and get better in what we do with it.

 

Michael Domanic
Right. And I think again this comes back to the responsibility piece, employers leading the workforce through the transformation. I don’t see that enablement happening anywhere else. We advertise ourselves as a great place to work. And a big part of that is we take AI transformation very seriously. It’s a responsibility.

 

Saket Saurabh
Yeah. Right before the start of the conversation, we were talking a little bit about what’s happening in the AI model world. We have some very big companies. From my perspective, I used to think that all the models were becoming very similar, but I’m seeing them diverge in their own ways with their focus on different aspects. We were talking about the Super Bowl ad that was coming out. I do see the companies specializing in specific types of text generation, or more enterprise use cases, or more consumer-friendly experiences, whether it’s Gemini, Claude, OpenAI, and so on. What’s your take on that? How should companies adopt these based on the differences?

 

Michael Domanic
Yeah, I mean, again, I think it will depend on the company. The thing that we’re really focused on right now is probably less on model capabilities and more on the tooling that each of these companies is providing.

 

Go back two years ago, you rolled out a tool like ChatGPT, Gemini, or Claude in an organization and there was a lot of trying to figure this thing out. How is this going to impact the things that we’re doing? What we’re seeing more and more is each of those frontier model providers now developing specific tools that solve specific problems in an organization.

 

Two years ago it was really hard to connect your data. Now with MCP and out-of-the-box connectors and tools like that it becomes a lot easier. That’s really important because we know that bringing more context into the things that we’re doing with AI can enhance the output. So I think that’s where the difference gets made today.

 

All of these models are incredibly powerful on their own. They transform so much of what we do. But it’s a lot easier to make that transformation happen if the model provider is building a specific tool that solves a specific problem. Coding is an easy place to talk about this. There are a lot of purpose-built coding tools in these models that make building and reviewing much easier.

 

But what happens when you’re in sales? What happens when you’re in finance? What are the purpose-built tools that will impact those individuals and help them use the capabilities to add that rigor and enhance the work that they do?

 

Saket Saurabh
Yeah, I think there’s a lot of exciting stuff to come. And maybe as we get to the close of the conversation, if you’d be willing to put on your futurist hat, tell us where we go in five years, and what structural mistakes or decisions companies might make on the way to becoming AI-first.

 

Michael Domanic
Yeah, I mean, making a five-year prediction right now feels really risky, because even a five-month prediction is challenging.

 

Look, you asked about mistakes. I think the biggest mistake a company might make today, the one they’ll look back on in five years, is not getting serious about formalizing the transformation. I think that’s critical. A mistake a lot of companies are making is they’ll say, okay, we’re a Microsoft shop, we’re going to turn on Copilot, or we’re a Google shop, we’re going to turn on Gemini, and we’ll provide some basic training and we’re on our way to transformation. We’ve seen that happen enough times now to know that it doesn’t work.

 

A lot of companies haven’t really come to the realization that transformation has to get formalized. AI transformation is not the side gig of two dozen people in your company. AI transformation requires dedicated focus and dedicated leadership. And I think we have seen that the companies that have done that have made very significant progress and have very significant momentum towards that transformation.

 

If you don’t do it today, your competitors might. And don’t let that happen to you. Get really serious about the transformation, whether it’s a Head of AI, a Chief AI Officer, or whatever it is. Make sure that you’re consolidating leadership around someone who’s capable of leading through the transformation.

 

Saket Saurabh
I think that’s an incredibly important point. Because I’ve heard from folks that hey, let’s see how it plays out and then we’ll get into it. But maybe they will miss out on all the early learnings and mistakes and the relearning and all of the frustration that you deal with sometimes working with AI. But I think they all become learning moments in a way. And if you don’t go through those phases, you can’t just land into a perfect AI execution.

 

Michael Domanic
Right. I would say I would be very happy to hear that our competitors are taking the approach of, let’s see how it plays out.

 

Saket Saurabh
I think this is very well said. You’re doing incredible work in bringing the organization into becoming AI native and making it part of everyone’s job. Your framework, that you’re not an IC but a manager of three assistants you’re working with, is a great way of thinking about it. And I totally agree that the time is now, and that you have to go through that process of learning.

 

Thank you so much. This has been a great conversation, Michael. Thank you for being here. It’s been a fun learning conversation for me as well.

 

Michael Domanic
Yeah, thanks Saket. I really appreciate the opportunity to talk with you today.
