Growth & Maturity · 39 minutes

5 AI Guardrails COOs Need to Put in Place Today with Sharon Toerek

Harv Nagra, Host
Sharon Toerek, Guest

AI is moving fast in professional services firms. 

And while most conversations focus on efficiency gains and new tools, the harder questions are around risk – ownership, liability, and what happens when AI use goes wrong.

In this episode of The Handbook, Harv Nagra is joined by Sharon Toerek, a leading expert on IP and legal risk in the creative and consulting space, to unpack what AI really changes for operations leaders – and what needs to be in place to protect your business.

Here’s what they dive into:

📄 What actually happens to IP when work is human-created, AI-assisted, or fully AI-generated

🔐 The data privacy risks agencies and consultancies often underestimate

🧭 Sharon’s five-step framework for building an AI-ready workplace

📝 How contracts, policies, and client conversations need to evolve as AI becomes operational

If you’re an ops leader trying to balance innovation with risk, this episode is a practical reminder that AI governance isn’t a legal edge case anymore – it’s part of running a mature, well-managed operation.

Additional Resources:

👉🏽 Follow Sharon on LinkedIn.

💰 Check out Sharon’s website.

👨🏽 Follow Harv on LinkedIn.

📈 Measure your business maturity and find out how to get to the next level: https://bit.ly/assess-business-maturity 

📬 Stay up to date with regular ops insights. Subscribe to The Handbook: The Operations Newsletter.


Transcript

[00:00:00] Harv Nagra: Hi all. I’m Harv Nagra, and welcome back to The Handbook: The Operations Podcast. When I worked as a group ops director at an agency, it was tricky enough navigating the IP (intellectual property) conversations with clients. Who owns what? What happens when they suddenly want source files? Do you hand them over? Do you charge for them? Do you point to your terms and conditions and just say no?

It was already a sensitive area. But in the era of AI, it’s become even murkier. We’ve got human-created work, AI-assisted work, AI-generated work, and not a lot of clarity on where ownership actually lands. But it’s not just about IP.

As agency and consulting firms weave AI into audits, strategy work, creative, and delivery, there are bigger questions too, around data privacy, confidentiality, accuracy, bias, and even what happens when an AI assisted recommendation turns out to be wrong. And when a lot of us are still working out how to operationalize AI into our ways of working, it’s easy to focus on the upside and forget that it comes with a very real set of risks, for us and for our clients.

So today we’re tackling the legal side of all this, how to think about IP in a world where humans and machines are both contributing to your work and how to reduce the wider risks that come with using AI inside your agency or consultancy.

To help us do that, I’m joined by someone who’s an authority on this topic.

Sharon Toerek. She advises agencies and consultancies every day on IP ownership, contracts, compliance, and, more recently, the fast-evolving legal questions surrounding AI.

She’s written extensively about how businesses can reduce their AI risk exposure, how contracts should evolve, and what leaders need to put into place to protect their teams and their clients.

This is a big conversation today and genuinely one of the most important topics agencies and consultancies should be thinking about right now. So let’s get into it.

Thanks for listening to The Handbook: The Operations Podcast. This podcast is brought to you by Scoro. If you’re in ops right now, AI is probably a huge part of your day. ChatGPT, Claude, Gemini. And now there’s MCP, Model Context Protocol, which lets those AI tools securely connect to your systems and actually do things, not just answer questions about them.

That’s exciting. It means your AI can start acting like a control center: pulling data, spotting risks, creating tasks, updating systems. But here’s the catch: AI is only as useful as the systems it’s plugged into. If your data is scattered across a dozen tools, spreadsheets, and workarounds, you’re just automating confusion. Scoro is different. It’s a modern PSA, designed to be the system of record for professional services businesses, from pipeline to projects, to resourcing, billing, and reporting.

One place where the data actually makes sense. And now with Scoro’s MCP server, you can connect that single source of truth directly into your AI workflows. Summarize meetings and log actions. Pull live project data and flag risks. Draft updates, create tasks, and keep work moving. Without jumping between tools.

AI doesn’t replace your PSA. It makes a good one dramatically more powerful. Go to scoro.com/demo to learn how Scoro is helping ops teams work smarter in the age of AI, and for the VIP treatment, tell them Harv sent you. Now let’s get to the podcast.

Sharon, welcome to the podcast. Thank you so much for being here.

[00:03:30] Sharon Toerek: It’s my pleasure, Harv. I’m looking forward to talking with you today.

[00:03:33] Harv Nagra: Sharon, before we get into AI, and we’re gonna spend a lot of time talking about that today, I wanted to start with IP in agencies more generally, because I used to work in an agency and faced this issue, and I’m sure there are other listeners who face this issue.

This kind of confusion around ownership that comes up sometimes, particularly the idea that clients own the final deliverables that are produced, but the editable working files are owned by the agency, which can lead to awkward conversations when clients ask for them later.

So my question first is, at a high level, is that still the typical default position?

[00:04:09] Sharon Toerek: It is still a typical default position, and it’s really only one of the most frequently arising issues with IP. The second is, a lot of agencies and a lot of consultancies create their own proprietary IP, and there’s never any intention that that proprietary stuff become part of the deliverable work for hire from an IP ownership perspective. So the example you cite, which is: do we have rights just to the final delivered work, versus the source files and all the working files that led up to it… And also, if the work you’ve delivered to us as a consultancy or agency has some of your proprietary IP in it, what is our position as a client on that stuff?

And so those are the two issues we see most frequently. And I think they arise with about equal frequency with consultancies as well, because the IP issues are almost identical when it comes to the work product and the client’s interest in owning the rights to it.

[00:05:12] Harv Nagra: From your experience, what is the right way for a business to handle that conversation? It’s one thing, I think, when it comes up during the quoting stage, when you’re preparing the proposal. But separately, when it comes up after the work has already been delivered, I think it’s even trickier sometimes.

So what would you recommend in those scenarios?

[00:05:28] Sharon Toerek: Right. From a purely legal perspective, it’s not hard to document an adjustment in the parties’ understanding around final deliverables only versus origin files and working files. From a practical and financial standpoint, what we usually see is, if the parties haven’t had a conversation upfront setting expectations, the agency or the consultancy usually has to think about the cost of doing that work (assembling the files, transferring them over, making sure they’re in workable, usable condition) and what an equitable way to recover that cost with the client would be. And so you’ll see sometimes an administrative or file-gathering charge of sorts. The other thing, and this is not really a legal consideration, this is more of a business relationship consideration, is: how far do you wanna push this? Is it a client with whom you have a good relationship, where you want to keep the doors open to future projects, future business opportunities, the need for testimonials, all of it?

And honestly, what is the true burden to the agency or the consultancy to actually put the stuff together?

[00:06:45] Harv Nagra: But to your original point, it’s much easier to have the conversation upfront and be explicit. You can always change your mind later if you decide to, but be explicit in your language and also in your contract terms about what they’re actually obtaining the rights to.

[00:07:02] Sharon Toerek: What’s the work for hire? What’s only licensed to you as the client? And how do you handle alterations to that relationship?

[00:07:10] Harv Nagra: I think the idea sometimes might be, when you’re commissioned for a campaign or you win a pitch, that you’re gonna get repeat business from this campaign or this client. So when you release the files, obviously they can hand them over to their in-house team or to another business to continue the work.

[00:07:27] Sharon Toerek: Yeah.

[00:07:27] Harv Nagra: Which is where I think some of the trickiness also comes up from a business perspective.

[00:07:31] Sharon Toerek: Agreed. And so, look, I am an IP warrior. That’s how I really made my way to creating a firm that serves the creative industry: because I believe in the power of IP, and that it is important, and it is property, and you do need to protect it. I think we’re starting to see a little bit more movement towards the idea that agencies or creative partners or consultancies should share in the upside in different ways than they’ve traditionally done. And that means figuring out a revenue formula that compensates the creator on an ongoing basis, based on the success of the buyer, the brand, the client, in using the work that’s been created for them. I will say it’s still a minority practice between creative agencies, and probably consultancies, and their clients. But we’re seeing more and more conversations, particularly between independent agencies and brands that might be enterprises but are willing to think more entrepreneurially about the value of the work and the creativity that’s being provided to them.

So I hope we’ll see more movement in that direction, and certainly the legal and contract side will have to catch up to that. But it’s still a minority practice for most agencies in my experience.

[00:08:56] Harv Nagra: Really good point.

We’re gonna get into AI risks. The way agencies and consultancies use AI does overlap in a lot of ways, but also differs in terms of the outputs that are created.

So I wanted to start off by outlining some of the risks for these groups separately first, if that’s okay. If we start with consulting firms: they might be producing audits, insights, strategies, and working on the implementation of those strategies. I think there was a recent example of Deloitte having an AI scandal, where they were caught with fabricated data and other errors in something like a $440,000 project for the Australian government. So can you talk us through some of the risks for consulting firms?

[00:09:37] Sharon Toerek: I think the risks on the consulting side are basically born out of the practice of taking shortcuts where you shouldn’t be. AI is not an excuse for substituting for human judgment and human insight. Particularly in the consulting realm, you’re being engaged by a client to help them solve a business problem or help them take advantage of a business opportunity.

And ultimately, what they sell is sold either to a human, an individual consumer, or a buyer for an enterprise. So the same negligence principles, the same due diligence principles, the same accuracy principles apply whether or not you’re using AI. AI is just creating more real estate for trouble to occur.

[00:10:26] Harv Nagra: Because it’s very tempting to take shortcuts, and it’s easy to forget that the machines have not caught up with humans when it comes to judgment, when it comes to insight, when it comes to expertise and context, right? Consultants are engaged because they’ve done a body of work, or are exposed to data and experiences, that helps them be strategic advisors to their clients.

[00:10:51] Sharon Toerek: And so if you’re putting reporting out there, or strategy out there, or making recommendations to your clients that are based on shortcuts (garbage in, garbage out), AI has made it easier to do that. But you’re not gonna be any less responsible, any less liable, for those mistakes, for those shortcuts, than you would be if you had used some other technology to create the work product.

[00:11:17] Harv Nagra: Absolutely. The hallucination thing is still a real risk, and the kind of reputational damage that can arise from something like that is just…

[00:11:26] Sharon Toerek: Hundred percent. Yeah. And law firms are not immune. We’ve had a number of cases of law firms stupidly, and that’s my word, submitting briefs to the court, or doing research with AI and not verifying the results, and hallucinations popped up: citing cases that don’t exist, or citing them for propositions that they don’t support.

And the legal industry, the accounting industry, the consulting industry, and certainly the creative services field: none of us are immune to that. We still have to remember that we’re standing by the advice and the work product that we’re delivering to our clients. AI is just a tool; it’s not a responsibility eliminator.

And it doesn’t make us any less liable or responsible for turning out work product that is helpful to our clients and meets the terms that we’ve agreed to with them.

Absolutely. Let’s talk about agencies and risks for them. I think the top two risks for agencies, whether they’re creative or media or digital, whatever their discipline,are the IP questions. because I don’t think the parties are really thinking about at the outset, the ownership position. They want the benefits of integrating AI into the workflow. They wanna be cutting edge in terms of the tools that they’re using. They want any cost savings that are achievable, and those are minimal right now. but they won’t continue to be, they will be more meaningful as time goes on, in my view. so they’re not thinking about who’s gonna own this stuff at the end of the day. So the answer right now, and certainly in the United States and in Europe and in most of the world, is that you can’t own the rights to anything that’s machine created.

[00:13:17] Harv Nagra: Right.

[00:13:17] Sharon Toerek: The question then becomes: what if you incorporate AI-generated work product into the deliverables? What’s the ownership position there? What are the liabilities and responsibilities for it? And the truth is, you can only own what the humans have created. So everybody’s taking a risk that the contributions to the final deliverable that are AI-generated are not material enough to have to worry about IP ownership in, or that they’re gonna treat it just the way they’d treat any other third party work that gets incorporated, like a stock photography image or a piece of software. Part B of that problem slash question is: what if something goes sideways?

What if the work gets delivered and deployed out into the market, and there’s some incorrect claim? There’s backlash because an image has five fingers where it’s supposed to only show two, or whatever it might be. Who’s gonna be responsible for that? And so that’s not really an IP question, it’s more of a liability question.

But the simple pathway here on IP when AI is involved is: if a human didn’t create it, then it’s not ownable, either by the brand or by the consultancy or by the agency. And you just have to decide how much that matters, based upon the purpose of the work, and who’s gonna see it, and how it’s gonna be used and deployed. And then the other thing to keep in mind is that every AI platform that’s used to create generative AI work product has got its own terms and conditions. They’re mostly fine with you owning the end product as a result of using their platforms; they’re just not gonna help you, or indemnify you, if there’s some infringement that happens out in the world, even if it comes as a result of using the AI platform.

So those are the issues I tend to spend a little bit more time worrying about than the actual IP ownership of the deliverable. But you have to handle it in your contract language, and you have to handle it by having upfront conversations with the client about it, and make sure it’s not a detail swept aside until later

[00:15:36] Harv Nagra: Absolutely.

[00:15:36] Sharon Toerek: …in the relationship.

The second risk pool really is the handling and manipulation of data. And that’s for a couple of reasons. First, it relates to making sure that you are keeping sacred, if you will, the confidentiality of any sensitive or proprietary information that the client is putting in your hands so you can do your job for them. But secondly, there are consumer privacy laws, data privacy laws, especially in Europe. In the United States, we don’t have a national standard for data privacy, but we have this patchwork of state laws, California being the most restrictive, although Maryland’s starting to give it a run for its money with its new law. Data privacy compliance and the confidentiality of sensitive data: those questions are top of mind when it comes to risks for agencies, because agencies are handling this data and information every day as they create original deliverables.

So those, along with the IP questions that we’ve talked about, are the issues top of mind for me and for my legal team when we’re counseling clients. There are other issues that probably don’t occur quite as often: reputational risk associated with work product that is inaccurate or distasteful and doesn’t make it through enough scrutiny, or making claims in advertising that aren’t supportable or sustainable because they’re based on facts AI has generated that are wrong.

[00:17:13] Harv Nagra: Sure.

[00:17:13] Sharon Toerek: But those tend to pop up less. And right now, today, as we’re having this conversation, I’m a little less worried about all those for agencies than I am about the IP questions and the data management questions.

[00:17:24] Harv Nagra: Sure. And in talking about this for both these groups, I just realized how much overlap there is. Really, all of this applies to either kind of business, agency or consultancy, even a lot of the examples we’re citing.

So you’ve developed an AI-ready workplace framework. It’s got five steps. I was looking over it, and I’d love for you to walk us through what those steps are.

[00:17:47] Sharon Toerek: Yeah, I’d be happy to. 

The first part of the framework is key conversations: making certain, early in the relationship with your client or your prospective client, that everybody’s talking about how AI is gonna fit into the work or the relationship. What are the expectations of the client, the brand, around it? What do they need to know and want to know? What do they forbid? What do they require? Because a lot of enterprises still have very conservative policies around AI usage, and anybody who’s serving them needs to know what those are and work within those swim lanes so that they’re not violating those policies.

So it’s having conversations with your clients. It’s having conversations with your vendors and your strategic partners. What’s their risk tolerance for AI? What are they doing? What are they using, both tools and use cases? How are they using it, and what tools are they using? And what are their practices in terms of maintaining data integrity and disclosing when they’ve used AI to create a deliverable? Your freelancers: you certainly need to be having conversations with those parties so that they understand if your client has restrictions that might apply to the work they’re gonna do for you. And your team members, obviously, the people who work with you: you need to be having robust conversations internally, training, whether formal or informal, and communicating with clarity, as a leader of one of these organizations, what’s our risk tolerance for using AI? What do we talk to clients about? What do we need to know from clients? And where are the human guardrails, right? If we wanna use a new tool, if we wanna make a new use case recommendation.

So having those critical conversations, that is step one.

[00:19:44] Harv Nagra: So on that, I think what’s important to highlight is that you’re saying it’s not sufficient just to have your terms and conditions outline how you anticipate or plan on using AI. You need to have these conversations just to make sure. And again, that example of enterprise clients having very strict guidelines or policies of their own is an important one to highlight.

[00:20:06] Sharon Toerek: Yeah. And I think that where you don’t wanna find yourself is midway through creating the work or doing the research or developing the strategy, and then hitting a roadblock, or worse, having to undo and redo, because there’s some policy or prohibition or cultural objection on the client’s side to the way you’re using AI.

[00:20:36] Harv Nagra: You don’t wanna find those things out later. Absolutely.

[00:20:39] Sharon Toerek: You wanna know those things upfront. So, crucial conversations.

[00:20:41] Harv Nagra: Yeah. And two other examples that you gave there: talking to your team, and talking to freelancers and contractors. I think that’s such an important point, and I think that is actually something that could quite easily fall through the cracks. Maybe the accounts team or the project delivery team has had conversations with the client on all of this stuff at kickoff or at the planning stage of the proposal, but you need to make sure the rest of your team knows. ’Cause if all these policies are in place, and these are the client’s restrictions, and nobody else is told, then that opens you up to significant risk.

[00:21:12] Sharon Toerek: Yeah, and it should be something you just ingrain in either your onboarding with the client, or your onboarding of the freelancer, the strategic partner, or the vendor. This needs to be a standard part of your vetting process and your onboarding process when you’re working with these parties.

And it should be a regular part of your internal team education and training.

[00:21:36] Harv Nagra: Absolutely. Let’s move on to step two. So tell us about that.

[00:21:40] Sharon Toerek: Step two is somewhat related, and it’s developing written policies around AI use. And these are gonna be dynamic; they’re not gonna look the same six months from now as they look today. But I think that every agency, and consulting firm too, should have a written articulation of its policy around using AI. And you might need two: you probably need one that is outside-facing, that helps clients, vendors, and other partners understand your policy and position around using AI. And then you may need one internally that’s a little bit more detailed, a little bit more process and procedure in addition to policy. But having written policy, and regarding it as a living, breathing thing that needs to be dynamic and updated as practices are updated, is the second plank. And it really can be a trigger for those conversations that we just talked about.

[00:22:40] Harv Nagra: So what happens in that scenario that we were talking about a few minutes ago? Let’s say you have a designer that uses Midjourney or something like that without permission, or doesn’t know any better, or a strategist uses ChatGPT to analyze client data without permission. Or you have a subcontractor that relies on AI-generated content (and this has happened) and it’s found to be infringing.

How would you navigate that kind of situation?

[00:23:04] Sharon Toerek: If you know it’s infringing before it’s released to the client, then, fortunately and unfortunately, you haven’t caused harm to the client yet, ’cause it hasn’t made it out into the world. But…

[00:23:16] Harv Nagra: Mm-hmm.

[00:23:16] Sharon Toerek: …you may have cost yourself a lot of money, because you’re gonna have to redo the work, most likely, or sacrifice the part of it that might be causing the potential issue. If it’s released out into the world and it’s caused some sideways effect as a result, like there was an infringement of somebody else’s original work that nobody knew about, or it’s objectionable in some way, or it opens up the client to some claim or liability, those are issues you handle just like any other error or omission that an agency might make.

It’s an example of just having additional tools, and real estate, frankly, to get in trouble with.

[00:23:52] Harv Nagra: Hmm.

[00:23:53] Sharon Toerek: Those are handled the same way. What concerns me, and the reason why I think the risks are deeper, is that the speed with which the tools can be employed makes them look simple, easy, and less risky than they actually might be. And slowing down is not something that is a favorite of creative firms, in my experience: faster, better, bigger. Taking that beat to assess before you put work out into the world, whether it’s to your client or out into the public market, is something that we’re gonna have to think harder about when we’re tempted to move faster, because AI makes things faster. This is something that can also be reflected in your policies. What are your work review and approval measures, and what’s the client responsible for, and what’s the agency or the consultancy responsible for, before the work goes out into the world?

[00:24:56] Harv Nagra: Absolutely. Excellent. Let’s move into step three then.

[00:25:00] Sharon Toerek: Step three is to spend some time reviewing the terms and conditions of the platforms that you use. And this sounds silly and basic, but most AI platforms have created different levels of access now, so you need to know which level of access you have and what kind of an account you have with these platforms, because there may be different terms and conditions. The platforms are starting to dabble in taking on a little bit more responsibility for indemnification…

[00:25:30] Harv Nagra: Hmm.

[00:25:31] Sharon Toerek: …or liability if you access them at, say, an enterprise level or a paid level, than if you just use the basic free-for-everyone versions. So know the terms and conditions. I can give you a spoiler alert and say that in almost every case, the platforms are fine with you owning the IP to the output from AI. They’re just not gonna protect you if something goes wrong in the world as a result of using that output. Read the terms and conditions of the platforms that you use. Understand the kinds of licenses and accounts that you have, so that you can speak fluently about who’s responsible for what: platform versus creator.

[00:26:10] Harv Nagra: Absolutely. Really good point. And I don’t think that’s silly at all, because we have a tendency to click agree on those terms and conditions and not bother looking at them. But in this kind of business context, using this kind of technology, I think you’ve just really highlighted some of the reasons we have to be really careful. And good point about the different plan tiers sometimes having completely different terms and conditions.

[00:26:34] Sharon Toerek: Yeah, and again, this is not the sexy part of the business for either a consultancy or an agency. Nobody wants to read a software license or a platform license, but for the ones you use most frequently, start there. Be aware of the differences between an enterprise-level account and a basic free-access account, and what it might mean when you use those tools. And understand which versions of the tools your team is using: are they working at home on the weekend on their personal accounts when you need them to only be working in the enterprise accounts? So it’s worth taking some time to look at the terms and conditions of the platforms you’re using. It’s important and meaningful.

[00:27:14] Harv Nagra: Excellent. On to step four then.

[00:27:17] Sharon Toerek: Step four is

 your contract language is important think about. We have completely retooled our approach, um, and client service agreements for agencies at the firm. We used to treat it in the beginning, you know, Harv two years ago when before ChatGPT and make this all very democratic. We used to treat it like any other third party work, but now it’s really worthy of calling out in its own set of terms and conditions. Um, who’s responsible for approving the work before it goes out in the world? Who’s responsible for complying? Who is responsible for addressing problems as they occur? So having specific contract language with your clients, with your contractors and your strategic partners that line up the responsibilities with the right parties. That is I think, a crucial step because this is sort of a new frontier and things we hadn’t thought about before. We’re needing to think about now, and if we don’t catch them in the conversations, if we don’t catch them in policies, then we need to catch them in our contracts with our clients so that have some written expression of our understanding about who’s responsible for what and who will solve problems if they occur.

[00:28:39] Harv Nagra: Step four then.

[00:28:40] Sharon Toerek: Step five.

[00:28:43] Harv Nagra: Oh, step five, sorry.

[00:28:44] Sharon Toerek: Yeah. Step five is that you’ve gotta be cautious when you’re inputting your client information into any AI platform. And you’ve gotta be extra, particularly cautious if you are gonna be inputting any consumer data. Everybody who touches the data, who manipulates or processes it, is gonna have some responsibility in the liability chain when it comes to violating a data privacy regulation. That’s true under GDPR, and it’s true in the United States, primarily because it’s true according to the state of California’s rules and the state of Maryland’s rules. And you have to be really thoughtful about what information you’re gonna be using in your prompts, and what the client’s position is regarding the confidentiality or the sensitivity of that data. And then choose your platforms and your level of access. Closed systems versus open systems are very important to distinguish between, because you wanna know whether the platform you’re using is gonna be training on any data that you input. And then you need to know your client’s policies around protection and use of that information.

So it’s not only data privacy law regulations and compliance with those; it’s also your client’s own particular preferences, and the confidentiality and non-disclosure terms you might be subject to as a result of working with them. So proceed with caution when inputting any client or consumer information. That’s step five.

[00:30:23] Harv Nagra: Really good point.

So Sharon, we’ve been talking about different jurisdictions. You mentioned GDPR, you mentioned California. You know, businesses that work internationally might be exposed to laws in a lot of different places. Are there any major differences listeners need to be aware of, perhaps between the US versus the UK or the EU?

[00:30:43] Sharon Toerek: GDPR is a more global approach, agreed upon by the member nations of the EU, who have harmonized and come to a mutual agreement about how they’re gonna enforce data privacy. Its standards are also the gold standard when it comes to strictness and privacy. The United States is a little bit looser in its constructs, because we don’t have a federal standard, and there hasn’t been any agreement yet on what a federal standard would look like: whether it would be more strict, like GDPR or the California statute, or whether it would be more permissive. So you have to play to the strictest standard.

That's what we always tell our clients. The second thing we would say to any client would be: it's not about where you are, it's not about where your client is geographically. It's not even about the markets that they target. If one single consumer in that database, or on that email list, or in that lookalike audience on a social campaign, is in a jurisdiction that is governed by GDPR or the California Consumer Privacy Act, you're subject to that law. It doesn't matter if you didn't mean to be. It doesn't matter if you don't even care about selling to somebody who lives in California or Germany. It just is; this is the way the law works. And so we play to the strictest standard.

That's always our advice. And to me, in the way we advise clients, it doesn't matter what the differences are, because we want them to be as compliant as they can be with the highest level of privacy governance that they can afford. But in general, to answer your question specifically, the European standard is still a little bit more protective of individuals' data privacy rights. And I think the enforcement mechanisms are more stringent, because you have a unified standard and there's been more time and energy put into enforcing it as a group. Whereas in the United States, like I've said, we're still a patchwork here of regional rules about data privacy.

[00:32:57] Harv Nagra: So that has to do with the data privacy risks of using AI and getting it to analyze your data. When it comes to AI regulation, I suppose the main thing that I've heard people talk about is around IP law. Any differences there you can point to between the different regions?

[00:33:14] Sharon Toerek: Not significantly at this point. I will say that the European market has a long legacy of enforcing moral rights in intellectual property, so that even where there aren't statutory trademark or copyright rights, there's a long history there of enforcing a creator's moral rights, and that will impact, I think, the way that AI generated work gets protected, or the way infringements get handled.

Versus in the United States, where, frankly, for lots of reasons, legal, geopolitical, and other, we have a tension between the states wanting to step up in some cases, New York, California, and enforce AI specific things, like the use of synthetic performers, or making disclosures when AI has been used to create a piece of work. But there's no unified approach, and there's actually a bit of tension between our federal system and our state system about how we're gonna regulate it. And so, from an IP perspective, my prediction is that, at least in the United States, we're gonna see more movement towards these licensing arrangements, like the one that Disney just made with OpenAI. I saw that coming. I didn't see the equity stake part of it coming. But I think we'll see that globally. And I think that Europe will be slower to step away from strict enforcement of creator rights, and they'll hold onto those protections probably a little bit more tightly than the American market might.

[00:34:57] Harv Nagra: That makes sense. So Sharon, we've covered a lot of ground there, but with the context that you've given us today, what are two or three things that you think agencies or consultancies should put in place over, let's say, the next 90 days to protect themselves, if they don't already have some of this stuff in place?

[00:35:16] Sharon Toerek: Yeah, I think three things. First, get with your teams and create a conversation checklist. This is something a consultancy or an agency of any size can do. What are the questions we need to be asking our clients, or what are the things we need to be saying to the client when we're talking with them in the early stages about taking them on, or taking on a new project for them if they're an existing client?

So, creating those conversation checklists: do some brainstorming about what you need to know and why you need to know it, and think about your talking points. That would be step one, very scalable and simple to implement. Step two, have a written AI policy. If you're not sure what to say to clients yet about AI, start internally: have an internal policy around how you're gonna use it. Don't expect your first pass at it to be great or permanent, because it's gonna need to be fluid and dynamic, and updated as the tools change. And then third, take some time looking at your service agreements and determine whether it's right for you to add specific language around how you're gonna be using AI to serve the client. So those are the top three things that I think an agency or a consultancy of any size can do, and they somewhat relate to one another, so they're kind of synergistic in that respect.

[00:36:39] Harv Nagra: Absolutely fantastic advice. I think you're right, there's a lot of excitement around these tools, and we can see that; we've had conversations on the podcast with people doing really innovative things with AI. But it's just so important that we're protecting ourselves, and that's an operations director's or COO's responsibility: to make sure this stuff is in place.

So I hope this conversation has been really helpful for our audience. I know it has been for me. So thank you again for joining us today.

[00:37:06] Sharon Toerek: My pleasure. Thanks for having me.

[00:37:07] Harv Nagra: Before we wrap up, I wanted to share a few of my own takeaways from this conversation. The first is that a lot of the AI risk we talked about today isn't actually new. It's an amplification of the risks that have always existed: IP ownership, confidentiality, accuracy, liability. AI just makes it easier to move faster, and in doing so, easier to get in trouble if the guardrails aren't there.

Second is that none of this is about saying don't use AI. In fact, it's the opposite. What Sharon really reinforced for me is that agencies and consultancies should be using these tools, but in a way that's deliberate, transparent, and operationalized.

Conversations upfront, clear policies, clear contracts, and making sure your team, your freelancers and partners all understand where the lines are.

And the final takeaway, especially for anyone in ops or leadership, is that this is now part of the job.

AI governance isn’t a future problem or a legal edge case. It’s an operational responsibility. If something goes wrong, it’s not the tool that gets blamed, it’s the business.

If you’ve been experimenting with AI without formalizing how it fits into your ways of working, this episode is your nudge to slow down just enough to put some structure in place.

Now, if you've enjoyed today's episode, please share it with someone who would appreciate it.

That’s it for me this week. Thanks very much for joining us.