Christine Pierce – 00:00
The advertising ecosystem in the US, when I last looked at it, was well north of $600 billion being exchanged. Ultimately, there are buyers and there are sellers in the marketplace. This whole ecosystem around measurement is now really centered on big data, or transactional data. It's really the exhaust that comes from digital delivery. That's been a fundamental change in how media is measured, and it creates some new challenges for smart data scientists. One of the places where there's failure today is when the focus is on deploying AI for AI deployment's sake rather than on solving a real business problem. And I think that's happening naturally because there is a lot of pressure. CEOs and boards are getting the pressure to demonstrate that they are deploying AI. Your mobile phone replaced your alarm clock. There are going to be more and more things like that where we don't even think about it. Our behavior naturally changes once we have a better tool.
Saket Saurabh – 01:29
Hi everyone. Thank you for listening to this episode of Data Innovators and Builders. This is your host, Saket, and today I'm with Christine Pierce. Christine is a principal consultant at Ozone Insights. Christine, thank you for chatting with me today.
Christine Pierce – 01:46
My pleasure. Thank you for the opportunity.
Saket Saurabh – 01:49
Christine, you have had a very deep and long experience in the world of data. Tell us a little bit about your background.
Christine Pierce – 01:56
Sure. I actually identify as a quantitative social scientist and, you know, someone who's moved into more of a data, analytics, and operations executive role as well. I spent much of my career in market research and in media measurement. Last year I opened my own consulting practice, and in it I focus on media, advertising, and market research, but also on something I think is really critical in this day and age, which is how AI can be deployed in a way that is effective and useful, while making sure that all the right controls are in place for good ethics and good data quality as well.
Saket Saurabh – 02:47
So for those who are not familiar, tell us a little bit about the world of media measurement. Hundreds of billions of dollars of spend, but many people don’t know about it.
Christine Pierce – 02:57
Absolutely. So let me talk a little bit about that. Essentially, the purpose of media measurement is to enable transactions. The advertising ecosystem in the US, when I last looked at it, was well north of $600 billion being exchanged, and ultimately there are buyers and there are sellers in the marketplace. So it's really critical to have a way of measuring. And there are different types of measurement. There's counting of eyeballs. There's also measurement centered on things like detecting fraud or making sure that the ad was delivered when it was supposed to be delivered. But ultimately it comes down to having a mechanism for ensuring that whatever the buyer and seller agreed to is actually what happened when that advertisement was deployed.
Saket Saurabh – 04:04
We talk about decisions of high value being made on data. This is a prime example, as you said, $600 billion plus of decisions are being made based on the measurement of how effective media is and how it’s performing. Tell us, you know, how this space has changed with data or maybe how it has remained the same in some ways.
Christine Pierce – 04:28
Yeah, absolutely. So I've been in market research and media measurement specifically for over 20 years. When I first started, I would say that most of the mechanisms for measurement really centered around methodologies that were built for measurement: think of the traditional surveys or media panels. As the audience has changed, we've moved to a consumer landscape where media is no longer primarily served to us; instead, we as consumers are making the choice about the media we consume. That's made fundamental shifts in measurement as well. It's simply not as feasible to use the same traditional survey and sampling approaches alone. They can still be used, and they're really helpful for certain calibrations, et cetera. But this whole ecosystem around measurement is now really centered around big data, or transactional data. It's really the exhaust that comes from digital delivery. That's been a fundamental change in how media is measured, and it creates some new challenges for smart data scientists.

If you think about it, that exhaust, that transactional data, is really centered around a device, not necessarily the human being. And what advertisers want to reach, and what content producers want to reach, is the human being. So making the tie between the person and the segment or target audience they represent is really critical, and it's not as easy as it was when you were designing a measurement system for that purpose. That's been a big change, and it's created a lot of really cool opportunities for data scientists, modeling, and predictive analytics. So it's been a lot of fun that way.

I will say the one thing that hasn't changed, and I don't think is going to change anytime soon, is the need for reliable, trusted measurement. We've already talked about the size of the purse, but it's also important to have some mechanism in there that is independent, so to speak, so that reconciliation is possible. It's also important that it be consistent. Media buyers and advertisers are really looking to make the right decisions about where to put an ad, and measurement has to enable that, so there has to be some level of longitudinal consistency. At the same time, for content producers and distributors, it's really important that you're being fair, that you're creating a level playing field so that their content is measured in a way that's equal to their competitors in the market. No measurement is ever perfect, but having a useful and consistent barometer is really important.
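To make the panel-calibration idea above concrete, here is a minimal Python sketch. It is not from the conversation or any measurement provider's actual methodology; the demo breaks, audience figures, and column names are all invented, and real calibration uses far more sophisticated weighting than this simple ratio adjustment.

```python
import pandas as pd

# Hypothetical, illustrative numbers only.
# Panel benchmark vs. big-data estimate of total audience, by demo:
benchmark = pd.DataFrame({
    "demo": ["18-34", "35-54", "55+"],
    "big_data_persons": [12_500_000, 9_800_000, 6_100_000],
    "panel_persons":    [10_900_000, 10_400_000, 7_000_000],
})
# Per-demo calibration factor: how much the exhaust over- or under-counts people.
benchmark["factor"] = benchmark["panel_persons"] / benchmark["big_data_persons"]

# Device-level campaign counts taken from the transactional exhaust:
campaign = pd.DataFrame({
    "demo": ["18-34", "35-54", "55+"],
    "devices_reached": [3_200_000, 2_100_000, 1_400_000],
})

# Apply the factors to express campaign reach in calibrated, person-level terms.
campaign = campaign.merge(benchmark[["demo", "factor"]], on="demo")
campaign["calibrated_reach"] = campaign["devices_reached"] * campaign["factor"]
print(campaign)
```

The idea is simply that the panel, measured at the person level, corrects systematic over- or under-counting in the device-level exhaust.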
Saket Saurabh – 07:58
Measurement can also be very tricky to do, right? Because now we are dealing with so many different channels. And you talked about the data exhaust part. So we started with devices that were measuring TV in a sample set of households, and we've moved to the data exhaust from the applications, meaning your streaming applications and your mobile applications, which are pushing out enormous volumes of data every day. So are we in a place where, because of this, the data is very precise and much more trustworthy, or are there still things to think about so that people can trust the data and act on it?
Christine Pierce – 08:43
No, that's a great question. I do think the data can be much more precise. You can literally track the same user from advertisement to measures of ROI in a way that simply wasn't possible in the past. At the same time, there are cases where the signals themselves are inherently probabilistic rather than deterministic, and sometimes they're a little cloudy. And there are also things that are harder to do. One thing that was really nice about a panel or survey is you could track the person the same way across devices or across different programs. A lot of times when you're dealing with integrating first-party data, say from a platform, you may have one really deep and precise view of a subscriber base or a platform's users, but you may not know what happens once the consumer leaves that platform or goes to another device. So there are challenges that come up around important measures like deduplication across devices and making sure you've got a way to measure the frequency with which a user is seeing an ad. So there are some things that are trickier even with the precision that you have.
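To illustrate the deduplication and frequency point, here is a small hypothetical sketch: an impression log keyed by device is joined to an assumed identity graph so that reach and frequency can be reported at the person level. All IDs and the device-to-person mapping are invented for illustration; in practice that mapping is itself probabilistic and incomplete.

```python
import pandas as pd

# Hypothetical ad exposure log: one row per impression, keyed by device.
impressions = pd.DataFrame({
    "device_id": ["d1", "d1", "d2", "d3", "d4", "d4", "d4"],
    "campaign":  ["c1"] * 7,
})

# Hypothetical identity graph: devices resolved to anonymized person IDs.
identity_graph = pd.DataFrame({
    "device_id": ["d1", "d2", "d3", "d4"],
    "person_id": ["p1", "p1", "p2", "p3"],
})

exposed = impressions.merge(identity_graph, on="device_id", how="left")

# Device-level view (no dedup): overstates how many people saw the ad.
device_reach = exposed["device_id"].nunique()

# Person-level view: deduplicated reach and frequency per person.
per_person = exposed.groupby("person_id").size().rename("frequency")
person_reach = exposed["person_id"].nunique()
avg_frequency = per_person.mean()

print(f"device reach={device_reach}, person reach={person_reach}, "
      f"avg frequency={avg_frequency:.2f}")
```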
Saket Saurabh – 10:11
One thing I was curious about, having seen the evolution of the industry, is that over time most publishers have become their own sort of advertising platforms. Of course it started with Google, but then Amazon became an ad platform, and then Reddit, Instacart, and everybody else started to adopt this as a business model. Do you see that measurement and trust have changed with that, when the company benefiting commercially is the one also running the application and measuring it?
Christine Pierce – 10:42
That's a great question as well. I think it depends on the measurement. There's a set of, I'll say, controls in place. If you think about big media buys, say a live program that someone has bought in the upfront media planning process, that is how big programs are still bought and sold, and most advertisers will still require some level of independent measurement for those situations. But there are other types of metrics. I mentioned fraud detection, I mentioned viewability metrics. There are other types of measurement that can be independently audited. So a platform can still do that, or the platform can transmit the data to the measurement provider, and it's up to the measurement provider to then standardize it. It's definitely happening that publishers and distributors have their own level of measurement for certain things. But when it comes to the big television buys or the big video buys, a lot of them are still predicated on the independent measurement that exists. In programmatic exchanges there are different levels of measurement enabled, and there are verification companies that focus on verifying what happened, because advertisers really want to make sure there's trust in that process as well, for something like brand safety. There are measurement companies that focus on brand safety. So I think independent measurement is still important, but there definitely are things the platforms can do, especially understanding their own consumer base. Where they get challenged is when consumers move to other platforms; they would not necessarily have visibility into that. And of course for the high-stakes buys, a lot of times the independent measurement is still required.
Saket Saurabh – 13:09
One thing I was also curious about, as I think about the evolution here and how massive the data exhaust is: I feel like the advertising industry has seen data at a scale very few industries get to, right? And in a real-time way, where measurements have to be made relatively quickly and performance metrics come in, because real dollar decisions are being made. If you look back at the evolution of data collection, measurement, analytics, and computation systems, where do you see that things have gotten really mature, and where would you say there are still challenges in the underlying systems?
Christine Pierce – 13:50
It's a great question. In this day and age, there's so much focus on LLMs as AI. But the media ecosystem is one where machine learning and AI have been part of it for decades now. Honestly, it's been decades. The programmatic exchanges are very mature, and you have entire companies operating to enable that, whether it's the data brokers that are adding segments, the onboarding that happens, the ad exchange, the DSP, the SSP. There are all of these different companies. So I would say it's a very mature and traceable ecosystem. I mean, there are standards. The bid stream, which is basically the log that comes out of programmatic, is auditable. So it's very much a mature ecosystem, and one where predictive analytics and AI have been embedded for quite some time. It's not the LLMs or chatbots we think about today, but it's a space that has had a level of automation maturity for years and years.
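As a rough illustration of why an auditable bid stream matters, here is a hypothetical sketch that reconciles auction wins recorded in a simplified log against a seller-side delivery report. The record format, field names, and numbers are all invented; real OpenRTB-style logs carry far more fields.

```python
import json
from collections import Counter

# Hypothetical, simplified bid-stream records: one auction event per line.
raw_log = """
{"auction_id": "a1", "campaign": "c1", "event": "win",  "price": 2.10}
{"auction_id": "a2", "campaign": "c1", "event": "loss", "price": 0.00}
{"auction_id": "a3", "campaign": "c1", "event": "win",  "price": 1.85}
{"auction_id": "a4", "campaign": "c2", "event": "win",  "price": 3.40}
""".strip().splitlines()

events = [json.loads(line) for line in raw_log]

# Reconcile: wins recorded in the bid stream vs. impressions the seller billed.
wins_by_campaign = Counter(e["campaign"] for e in events if e["event"] == "win")
billed = {"c1": 2, "c2": 2}  # hypothetical seller-side delivery report

for campaign, billed_count in billed.items():
    logged = wins_by_campaign.get(campaign, 0)
    status = "OK" if logged == billed_count else "DISCREPANCY"
    print(f"{campaign}: bid-stream wins={logged}, billed={billed_count} -> {status}")
```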
Saket Saurabh – 15:17
I have to add that this was part of the starting story for Nexla, because I was at Rubicon Project, which was one of the largest RTB platforms at the time and is now known as Magnite. We were seeing all this data, and this was pre-LLM machine learning, used for figuring out, for example, the floor price for an auction. And this continuous data, as you said, is auditable. You see the bid streams, you can predict where things are going, and you have data at a scale, in real time, where you can almost observe microeconomic behavior: you can measure demand and supply and all of that. So do you see the ecosystem as having matured in handling data, or where do you see the cutting edge of the challenges, from your perspective as a data leader?
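As a toy illustration of the floor-price idea mentioned here, the sketch below sets a per-placement floor at a quantile of recent top bids. This is only one simple heuristic with invented numbers; production systems typically learn floors with much richer models and context.

```python
import pandas as pd

# Hypothetical history of the highest bid seen per auction, by placement.
bids = pd.DataFrame({
    "placement": ["homepage", "homepage", "homepage", "article", "article", "article"],
    "top_bid":   [2.40, 1.90, 3.10, 0.80, 1.10, 0.95],
})

# Simple heuristic: set the floor at a chosen quantile of recent top bids,
# trading off fill rate (too high a floor loses auctions) against yield.
FLOOR_QUANTILE = 0.4
floors = (
    bids.groupby("placement")["top_bid"]
        .quantile(FLOOR_QUANTILE)
        .rename("floor_price")
        .reset_index()
)
print(floors)
```

The quantile is the lever: a higher floor raises the clearing price per impression but risks losing auctions and fill.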
Christine Pierce – 16:13
First of all, I do think it's mature, probably more mature than people realize; ad tech has been around for quite some time. But I do think AI getting better and better is going to create more opportunity. The way I see that potentially happening: I mentioned the buyers and sellers and the intermediary companies that exist today, and I think a lot of that over time will likely become more based on agentic models. The key will be for the industry to decide where they want to do that. So much of programmatic exchange and the real-time bidding you mentioned is based on prescribed specs, if you will, but they're still based on a human who's doing the prescription. So over time there are opportunities to get deep learning into that process as well, and that may lead to improvements in targeting and improvements in outcomes, which would be a really good thing. I don't think any industry has reached its full maturity right now. Technology is moving at a pace I've never seen in my lifetime, and I think that's going to continue to be the case for most industries.
Saket Saurabh – 17:41
Very true, very true. And with all the data that is in there, one of the goals has always been to actually understand the audience and their behavior and use that to be more meaningful, appropriate, contextual, and targeted, and not just throw random ads at people, because that doesn't serve anybody well; the audience doesn't like it and you waste money doing it. So tell us a little bit about how AI has helped us, or can help us, get better customer insights and how this problem is being solved today.
Christine Pierce – 18:19
Yeah, so again, you're exactly right. What's happening today is there are mechanisms for measuring devices across different ecosystems, and there are identity platforms that allow you to resolve who that person is, not in a way that reveals any sort of personal information, but in a way that allows you to create a persona and see whether that person or that device is part of your target. And I think that can get better and better with models that can learn from the past, especially if there's a feedback mechanism where you see, okay, when I did X, Y, and Z, I was actually more capable of reaching my target audience, and especially to the degree you can link that to brand outcomes. In certain industries that's still really difficult, especially for things that aren't necessarily purchased online or on the platform; with pharmaceuticals, for instance, you may not have that direct link. So I think the degree to which algorithms can learn from different data sets and bring them together matters, and the barrier to joining data sets is getting lower and lower with the preponderance of vibe coding and other ways that things can happen a lot more quickly. So I think the industry will continue to evolve, and there are going to be better and better predictive capabilities.
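A minimal sketch of the kind of target-audience scoring and feedback loop described here, with synthetic data standing in for real behavioral features; the features, labels, and model choice are all assumptions for illustration, not any vendor's actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical device-level features derived from viewing/engagement history.
# Label: 1 if the device was later confirmed (e.g. via a panel match or
# first-party data) to belong to the target audience, 0 otherwise.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))  # engineered behavioral features
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Score new devices: probability each belongs to the target segment.
X_new = rng.normal(size=(5, 4))
segment_scores = model.predict_proba(X_new)[:, 1]
print(np.round(segment_scores, 3))

# Feedback loop, conceptually: as campaign outcomes arrive, append the new
# (features, confirmed-label) pairs and refit, so targeting improves over time.
```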
Saket Saurabh – 20:08
Yeah, I would say the velocity has certainly increased, and vibe coding is one of the ways. But also, even at the model level, it used to take several months to come up with a model idea, train it, get it to production, and see whether it works well or not. Today, with LLM-based approaches, you are probably testing new models and ideas a lot more rapidly. So certainly things have changed from that perspective, and that has upped the pace of innovation. And I've seen that in earnings reports of companies like Meta, for example, saying that AI has significantly helped them in their operations, especially as the industry has also adopted more ways of anonymizing information so that we can guard privacy but still target effectively.
Christine Pierce – 21:00
Right, absolutely. Yeah.
Saket Saurabh – 21:02
Would you like to maybe shed a little bit of light on how generative AI typically is playing a role?
Christine Pierce – 21:07
Yeah, in a couple of ways, and you touched on anonymization. Synthetic data is becoming more and more common in market research and brand research specifically, whether it's synthetic personas or other approaches. You can actually create generative AI models, or mechanisms, trained on enough people that you can do a brand test with a chatbot and get results that are very similar to what you would get with a human being. There's still a lot we don't know about the way models learn, especially for someone who's more of a layperson when it comes to computer science, like I am. But if you think about how human beings learn, they learn from the data they're exposed to and from their own experiences, and models are the same way. So in a lot of respects, by feeding the right data to an agentic AI system, you can give yourself the capability of doing brand testing or qualitative research in ways where you can get really good insights, not necessarily from actual human beings, but from models that were trained on human beings and human behavior. That's where I think agentic AI will definitely make leaps in the industry.
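A minimal sketch of the synthetic-persona idea, assuming the OpenAI Python SDK with an API key already configured; the personas, the survey question, and the model name are invented for illustration, and this is not any particular vendor's brand-testing methodology.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed and configured

client = OpenAI()

# Hypothetical synthetic personas, each a short profile the model role-plays.
personas = [
    "A 29-year-old urban renter who streams most video and rarely watches live TV.",
    "A 52-year-old suburban parent who watches cable news nightly.",
]

question = "How appealing is a new ad-supported streaming tier at $6.99/month, and why?"

for persona in personas:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"You are answering a brand survey in character: {persona}"},
            {"role": "user", "content": question},
        ],
    )
    print(persona)
    print(response.choices[0].message.content)
    print("-" * 40)
```

In a real study, responses like these would at most complement, and be validated against, research with actual human respondents.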
Saket Saurabh – 22:55
One of the things we were chatting about earlier was productivity at an individual level versus an enterprise level. Tell us how you look at that and compare the two.
Christine Pierce – 23:06
Yeah, this is something I've been really fascinated by, especially because I am in a different position now and have had a lot more time to get to know what AI tools are out there. And I deployed AI when I was in a larger organization. I've really started to look at the distinction between, I'll call it, enterprise or macro productivity versus individual productivity, and I think both are important. The first thing I will say is that it's critical for organizations to enable people to do their own automations, to deploy their own agents or whatever it is individuals want to do. That helps with the culture and creates more of an AI-first culture. Also, if you have skeptics in your organization, it helps with change management, because you can show what the AI can do. But when it comes to really making enterprise-level change, one of the places where companies fail today is when the focus is on deploying AI for AI deployment's sake rather than on solving a real business problem. And I think that's happening naturally because there is a lot of pressure: CEOs and boards are getting pressure to demonstrate that they are deploying AI. What can happen in that situation is that you miss some of the things that are really important to enterprise deployment. Those come down to, number one, change management, and number two, organizational structure and thinking about what's actually going to change. If you're deploying a major enterprise-level change, maybe something in your contact center where you have hundreds of agents, that's going to look very different in terms of the training required and the metrics you set up than if someone is deploying something on their own to make themselves more efficient. So, first, what's always important with a big program like that is making sure you're solving the right problem and then having the right metrics to see whether you're solving it. But the change management piece is very important, especially because there are still going to be humans in the loop, even if it's a different makeup of humans, and ultimately the customers are still going to care about the output. So that change management is really important for enterprise-level AI. Those are human nuts to crack, if you will, rather than technology. The technology is very evolved, but you often have to think about the interaction between the two to be successful.
Saket Saurabh – 26:11
Yeah, the human level can get especially tricky. And one thing you mentioned was that you had a chance to get in and do some of these things for yourself. One of the things I've noticed about this technology is that the only way to learn it is by doing it. The other part is that the pace of the frontier is extremely fast. In a traditional approach, say in engineering, if we want to adopt agile processes, you set up the process, train people, and spend weeks, months, even years adopting it and getting good at it. Here, the frontier changes in three months. So it's very hard to set a process, roll it out, train people, and have a six-month time window, because by then what the technology is capable of has shifted. It's really pushing us all to think about it differently and about how to roll it out to the org. And much of it comes down to letting people try and learn on their own, and to what extent you give them the flexibility and freedom, as well as the resources, to not just say, okay, this is how I'm going to use AI because I've been told to use it this way, but rather, I have this job to do, and how can AI be my partner or resource in that?
Christine Pierce – 27:34
Absolutely. I will say that I've taken some time to see what other tools are out there, and I think a huge share of the population is using one of the GPT-style models, whether it's ChatGPT, Perplexity, or Gemini. That's very commonly used. But what I think is exciting is seeing people I know who maybe have never developed an app in their lifetime actually doing things like that, which is very neat. It really lowers the barriers to entry for innovation, and within an organization it enables people across your teams to do things they maybe couldn't have done in the past. I think that's just super exciting, and the last thing you want to do is limit it. Yes, you have to have guardrails and make sure you've got the right mechanisms in place, but you also don't want to be too strict about those things, because you really do want to encourage people to use AI.
Saket Saurabh – 28:52
Yeah, absolutely. And as you said, how we operate teams and team structures will also change as generative AI gets adopted widely, because it's changing the way we work. So tell us a little bit about Nielsen, where you were managing a really large team. How were you getting a team of that size and scale aligned on the goals, and maybe, if you have a view, how will something similar happen going forward?
Christine Pierce – 29:57
Yeah. It's interesting, because this is one of the first places where I did use AI. When you're leading a large organization, the way that you communicate really matters and what you say really matters. So that's probably one of the first places, when we got access to enterprise-wide Gemini, where I really used it, to make sure I was communicating the right things.
But perhaps most critically, when you're trying to get a team on the same page, you have to start with the business context, especially when you're trying to do something that's cross-functional. Data scientists are going to optimize model performance, your technologists are going to optimize toward efficiency, and operations teams are going to want to optimize toward, I'll say, the lightest-touch approach. When you are trying to get a cross-functional team aligned, you have to start with the business context and make it really clear that this is the goal and we win or lose together. It's a business goal, anchored in what the client needs or what the business needs, whether that's cost savings, revenue, or a product innovation. You really cannot overstate the importance of that. It's so important, and it comes with communication. I would also say operating mechanisms. One of the biggest leadership lessons I learned leading teams was the importance of operating reviews. It sounds bureaucratic, but when you sit down with a team, look at the metrics, and hold them accountable while also giving them an opportunity to help you remove barriers, you're showing that what they do matters. You're holding the team accountable, but you're also showing that their work matters. And I think that's really critical in making sure people understand their priorities and where they need to focus.
Saket Saurabh – 31:56
Yeah. And as a leader, that communication is of course super critical; people are really paying attention to what you're saying. And you're also committing back to the executive leadership of the company, because you're making all these investments in people and data analytics. So I'm curious: you were in a company that is essentially a data business. I don't know if that made it super easy, with no questions asked, just do what you feel like doing, or did you have to think about how to prove the ROI? What advice would you give to people who are trying to justify those investments in data analytics? Or maybe AI has made it easier. Tell us where the reality lies.
Christine Pierce – 33:31
Well, first of all, anytime you're communicating, and this is one of the things I wish I had learned earlier, you always have to think about your audience. If you are reporting back, whether it's to your board or to your boss, you have to really think about what their perspective is going to be, and then tell a story that is, again, focused on the business problem. One of the biggest mistakes that data people make, and even technologists to some extent, is to focus on what the tool can do and put it in the language of "we'll be able to produce X, Y, and Z this much faster" rather than "we'll be able to reduce your queue so that we can get this much more revenue." You have to focus on the outcome metrics, in the terms of the business, and that's something data people sometimes struggle with. It's a skill in and of itself, and it's something I worked really hard on, putting together a process for myself where I would ask, when starting a presentation: Who's the audience? What's going to work for this audience? Is it telling a story? Is it providing data points? Really thinking that through. That's advice I would give to anybody, no matter where they sit in an organization. Being able to translate what you're doing into real business results, from the perspective of the audience, is just critical.
Saket Saurabh – 34:19
Yeah, and sometimes I think it's also about the audience's audience, right? I may be super excited about the technology and go to my board and say, oh, this is the coolest thing and what we should do. But think about who their audience is: their investors. What am I going to tell my investors about how my investment is performing? Exactly. It's all great, but what is the outcome they care about? Something different, right?
Christine Pierce – 34:44
Absolutely. And that's one of the things, too: when I would talk to my team about something hard that we had to do, I would always say, you have to remember that the leadership or the owners of the organization have a different set of stakeholders. Yes, we may be concerned about facing this outcome, but they have a different set of priorities. And it's really important to understand that context, especially when you're going through a transformational program.
Saket Saurabh – 35:19
Yeah, absolutely. So now that you are looking at organizations from the outside and advising them, tell us a little bit about the practice and how that changes your perspective, from being inside and driving things to being outside.
Christine Pierce – 36:29
Absolutely. This has been one of the most fascinating things for me, actually, especially since I spent a long time in one organization. When I started to advise organizations, I set up a variety of different services. I want to keep myself relevant in market research itself, so I have some projects that are very much doing market research. I also advise investors and talk to them about how the ecosystem works, and I've done some work on due-diligence-type exercises where I've had the opportunity to look at the inner workings of other companies. And I will say there's one thing I have really come to value, and I valued it when I was in a big company too, but even more so now: there's something you get when you are looking at it as an outsider. When you're the one who has to solve the problem, or the one who has to face the client or face your boss, it's very natural to get focused on the fix right away. Get it fixed, get it done. You're in urgency mode. You may need heroics to get it fixed. What often doesn't happen is stepping back and asking, why is this problem happening? When you come in and look at an organization from the outside, you can say, oh, wait a minute, their incentives are not aligned, or it's not clear what the escalation path is. You can really see those things when you're not in the mode of trying to fix the problem at hand. That's been extremely valuable and really enlightening. So if I do go back to a big corporate role, I think I will value independent consultants even more. I've always valued having external hires; I think the best mix for a company is people who know how to get things done at the company but are change agents, plus an external point of view. But I really have a better perspective on that now.
Saket Saurabh – 38:01
Very true. And I have seen that for myself too. It's always easier to give advice to other people in similar situations than to solve your own problem, because it allows you to zoom out and see the whole picture. As you said, when you're in there, you're going after the fix and really focused on getting that done. One thing I've also observed: sometimes when I'm on the inside and I get advice from people, I think, yeah, but you don't know that this is the real problem. But then sometimes it's also, you know what, maybe the thing I'm worried about is not the most important thing. The person from outside is actually able to cut through the noise and go to what really matters, and it's hard to do that on the inside.
Christine Pierce
I think that's so important. When I would have to explain something to consultants, or maybe to the board or the executive committee, and I would find myself struggling for words in trying to simplify it, I would ask myself, does it really need to be this complicated? So just the act of having someone come in and ask those questions forces people to simplify, like you said, to cut through the noise. And I think that's really productive for a big organization, not only because it helps the way you think and communicate, but because it also forces you to look and ask, is it really necessary to have all of those layers? So I think that's really healthy, yeah.
Saket Saurabh – 39:44
And as I look at the evolution of the data ecosystem alongside AI, one of the things I believe is that, instead of analytics, it's agents that are going to become the bigger users of data. I'm curious about your take on the evolution of certain technologies. For example, we talk a lot now about the context layer, about understanding the data and its meaning. From your vantage point, how do you see some of these pieces evolving?
Christine Pierce – 40:15
I think that is so hard to predict. The one thing I can definitely say, and there's one of those laws about this, is that the pace of technological change is exponential. That is the one thing we can predict. But think about certain things that have changed so much. If you joined a conference call and someone sent out typed-out notes, wouldn't you think that was the strangest thing? Whereas two years ago that would have been the norm. So it's sort of like how your mobile phone replaced your alarm clock. There are going to be more and more things like that where we don't even think about it; our behavior naturally changes once we have a better tool. So I predict we're going to see more and more things like the note takers. As I said, if someone sent out handwritten notes now, we would probably all give them a strange look, whereas that used to be the norm. There will be more and more things like that, embedded in our micro-tasks, rather than everything being a big automation project.
Saket Saurabh – 41:46
So let's say you are advising a company today and you go talk to them. What are the basics you're looking at? Is it the data quality, the knowledge around the data, or the collection of the data itself? You've been in that world, where even collecting the data can be challenging. How are you assessing a given situation? Maybe some of our audience can learn from that.
Christine Pierce – 42:13
First of all, there are data companies that are looking to monetize or create value from their data, and that's one set of questions, or advice, that I'll say I have a framework around. And then there are those that want to deploy AI, so I'll take them separately. When you're looking at the data: we're in the middle of this SaaS apocalypse, and there's been a lot of really good writing out there, which I absolutely agree with, that there are going to be changes in how the SaaS companies operate, no doubt about it. But for those that are sitting on really valuable data, that data is going to become more and more valuable. So when I talk to companies, and I've actually done a little bit of work with a nonprofit on this, it's really thinking about: if you want to monetize your data, how can you do it in a way that is consistent with your values? Consistent with your donors? In a way that achieves your results but also makes sure you're not compromising the way the data was collected? That's important when it comes to the trust part of it. Absolutely.
Saket Saurabh – 43:46
I'm saying the trust part of it, yeah.
Christine Pierce – 43:49
And that often can be the case, especially if you think about governments or nonprofit organizations; the trust in the data that they have and that they own is really important. If it's something that's more focused on deploying AI, then again, the number one thing is that opening up the tools and encouraging people to use them is first and foremost. But you also don't want agents everywhere without any sort of guardrails. So understanding where your organization is matters. Those of us who work in technology think that everybody is on the AI bandwagon, but the fact is that a lot of people aren't. Understanding where your company is, what underlying fears may exist within your employee base, and where you're going to see resistance, getting a good understanding of where you are, is really important, because you can deploy an AI tool, but if people don't use it, or if the way its success is measured is not the right measure of success and you don't have your stakeholders bought in, then it won't succeed. So that's what I often advise: yes, quick wins are good, but you also really need to think about the people around the system and make sure it's a sustainable deployment.
Saket Saurabh
So, as we get to the close, what advice would you give to the data leaders out there, who are trying to get more out of their data? Of course, everybody is trying to adopt AI.
Christine Pierce – 45:35
I think the number one thing is focusing on the problems that you need to solve. If you're a data leader, it's very easy to fall into a mode of delivering customized or one-off analytics. Getting focused on the biggest one to three problems that this team or this data can solve, and then focusing there, is important, and that's where you get really big value or can demonstrate your ROI in a way that's meaningful. Of course, you also have to be willing to acknowledge that it may change over time; those priorities often change. But if you've done a really good job defining those important, meaty problems, they're not going to change every month or even every year. They tend to be problems with more like an 18-month-plus horizon. If you can add value there, that's a big win, and that's what will ultimately drive results. So I think it's important to focus on the results and not necessarily on what the tech can do or how cool it is.
Saket Saurabh – 46:51
And one last question, more of a curiosity question about social behavior around data, coming from your world of measurement. If we were to measure everything perfectly, and we really knew, hey, this is what this person likes, and we only showed them content that matches their preferences, and content producers also had exact data on how people react to different things, do you think we might lose a sense of creativity on the production side, or of exploration and wonder on the consumption side?
Christine Pierce – 47:27
I think that is such an interesting question. So I'm a parent of two teenagers, and they've grown up in a world that's very different. It's funny: when they see AI-generated content, they can just automatically detect it in a way that I can't, which is interesting. I don't know what that sense or that radar is that they have, but I almost think it's a reaction from this generation wanting to preserve a level of humanity. Human emotions are very difficult to understand and diagnose. I think AI inevitably will be part of the process, whether it's the creative process or measurement; it's already part of so much of the measurement and monetization process, and it will continue to be part of creation. But I do think we will still have those really big, powerful moments. I just finished watching A Knight of the Seven Kingdoms. I'm a big fan of the Game of Thrones universe, and there's just the joy of that show. It's very different from House of the Dragon and the other shows; it's an experience, even though there's a lot of AI-generated content and green screens and things you can see. But I think there's always going to be a place for that real human emotion. Helping it become more efficient is inevitable, but the creative part of the human brain, if you will, I don't think that goes away.
Saket Saurabh – 49:21
You're saying, based on what you're seeing with your kids, that despite all the AI-generated content, the human mind is still capable of detecting what is...
Christine Pierce – 49:29
Yeah.
Saket Saurabh – 49:30
...unique and human. Right. And ultimately, it's just funny how they...
Christine Pierce – 49:34
...have that ability that I don't. Even recently, I bought a poster for the outside of my garden, and they said, oh, that's AI-generated. How did they know that? But they're digital natives; they grew up with it, so they're just able to detect that. And I do think there's a real desire for human connection that is just relentless in humans. So I think it will continue to be there.
Saket Saurabh – 50:07
Well, thank you so much, Christine. It's been a pleasure talking to you. Great conversation.
Christine Pierce – 50:12
Thank you again for the opportunity. It was absolutely a pleasure, Saket.