AI Governance: Navigating the Speed of Change - Sweeney Williams - Guardians of the Data - Ep #41

GOTD - Sweeney Williams
===

Speaker: [00:00:00] Welcome to Guardians of the Data. I'm your host, Ward Balza. Each episode, we'll explore the passions, expertise, and real-world experiences of security leaders who are shaping the future of data security and governance. Guardians of the Data is made possible by support from Sentra. To learn more about our AI-powered data security platform, please visit sentra.io.

Let's dive in.

Ward Balza: Welcome back to another episode of Guardians of the Data. My guest today has over 20 years of experience in the industry, specializing in cybersecurity, privacy, and AI governance, and is currently the head of responsible AI for an insurance organization. Sweeney Williams, welcome to the show.

Sweeney Williams: It's great to be here, Ward. I've really been looking forward to this, and thank you so much for having me on.

Ward Balza: Same here. So let's jump in, Sweeney. In your professional opinion, what's the biggest data security challenge that organizations are facing?

Sweeney Williams: Great question. Before I answer, I just wanted to [00:01:00] mention that anything I say today represents my own views; it doesn't necessarily represent the views of the organization I work for. The other thing I'll mention is that I'm in AI governance, AI risk, and compliance, so surprise, surprise, my response is really going to cover AI perhaps a little more than security. But there's a security tie-in, and I'll try to bring us back there. The biggest challenge, in my opinion, is actually speed. When we think about AI, there are a lot of factors that relate to speed.

I think the obvious one is that we see AI capabilities increasing on what feels like a daily basis. If you're like me, you used to be able to follow the model releases and the capabilities and all of the new things that were happening. I gave up about six to [00:02:00] eight months ago; I stopped trying to follow everything because it's just impossible to keep track of. So there's that standard capability speed. But in my opinion, the bigger issue is the speed at which the world has been required to adapt around everything that AI enables, and everything it's supposedly going to enable in the near future.

I'll give you some examples. There's the speed of geopolitical change, and the shifting geopolitical views around what AI actually means to a country. If we think about the early days with ChatGPT, it was this really, really cool tool.

Me, as a former CPO, I could, if I really wanted to (I never did this), just write a really good foundation for a privacy policy, and that's great. You could do things like that. But I think the discourse [00:03:00] very quickly shifted to countries seeing it as more of a national imperative.

More of a national security question, a relevance question. All of the big frontier labs are talking right now about AGI being just around the corner, or, depending on who you ask, maybe AGI is already here. I don't know that I agree, but in that discussion, whoever gets to AGI first, there's a dominance play: that's the nation, or group of nations, that maybe controls the world, and global economies, and military assets, and all of those things. So there's that geopolitical speed of everyone rushing in to get their arms around this thing. And then, near and dear to my heart, there's the speed at which attitudes toward AI safety and AI regulation have [00:04:00] shifted. If we go back to that ChatGPT moment, the discourse around that time, and not just after it but even a couple of years before, was all about AI safety. We know these technologies are advancing, so let's make sure they're human-controllable.

And that they don't get to the point where they pose an existential threat to humanity. You remember the "Pause AI" letter, right? That big push to get all of the big labs to stop developing this stuff past the capabilities of, I think it was GPT-4, and get our arms around it before it kills us. Do you remember that?

Ward Balza: Not only do I remember that, it's funny you mention it. I was looking at LinkedIn an hour or two ago, and I saw they were doing some sort of march in San Francisco today, again trying to pause AI.

Sweeney Williams: It's starting back up, and it's gotten violent, [00:05:00] right? All of the news about attacks on Sam Altman's house and things like that. I think it's gotten to another level. And again, the speed at which that's changed: the discourse has gone from "let's all hold hands, do things the right way, sign an agreement, and everyone cooperates." If I think about it from a regulatory standpoint: the EU enacted its AI Act in 2024, and it's been coming into force in phases since then. Canada had a very EU-AI-Act-aligned bill called AIDA, the Artificial Intelligence and Data Act, that was proposed and went through cycles and things like that.

So Canada was all in on that. If we look at what was happening in the US: a lot of proposals for federal and state AI laws. The prior administration released that [00:06:00] AI Bill of Rights, and everything was about safety and regulation and all of those things.

And I think everyone was generally aligned with that. So if we're talking about how speed has impacted that: going back to the geopolitical thing I spoke about earlier, the speed at which countries have pulled back on the safety and regulatory discussion is unlike anything I've ever seen.

That happened so quickly. Let's think about some catalysts here. A new administration came on board at the beginning of 2025, and there was this immediate, I'm not saying it was the only trigger, but a really clear and verbose trigger, and that was the Paris AI Action Summit in February of last year. JD Vance got on stage. The previous two summits were all about safety; that's what everyone spoke about. His opening [00:07:00] line said something to the effect of: the US isn't interested in AI safety; they're there to talk about AI opportunity and AI dominance, and continuing that US lead in AI capabilities and the entire infrastructure that runs it. Again, not the only catalyst, but it was articulated very clearly there. And we've seen a lot of fallout from a regulatory perspective. Canada's AI bill never passed; it died when Parliament was prorogued, actually about a month before that speech. But interestingly enough, it was never brought back. We're expecting some sort of policy announcement, potentially some sort of regulation, but I don't think anybody expects it to have the teeth the previous version had. I think we're expecting something...

Ward Balza: A little bit neutered.

Sweeney Williams: Something friendlier to industry. You got it.

Ward Balza: Yeah. Yeah.

Sweeney Williams: And if we're talking about neutered regulation, [00:08:00] what's happening in the EU right now?

The EU, the global bastion for regulation and the rights of individuals and all those things? They've got a digital omnibus package that's currently working its way through the legislative cycle.

Ward Balza: Hmm.

Sweeney Williams: It would simplify some aspects of the Act, and it would delay some of the requirements. The high-risk requirements that are supposed to come into force in, I think, August of this year, they're looking to delay until December of next year. So again, they're trying to level the playing field by, well, the EU would never call this deregulation, and I don't think it is deregulation, but by simplification in order to compete. Because again, this is a national imperative. This is national security. Is your nation going to be competitive in the next 10 years, 20 years? Are you still going to be a global leader?

And this is driving all of that. So the speed at which the world has moved [00:09:00] toward deregulation, I've never seen anything like it. And then, I promised you something security-related, and I'm preaching to the choir here. I know you've spoken about this, and you've had great guests speaking about it in prior episodes. But the speed at which AI has been adopted by bad actors, and by people who were on the fence but now get to be bad actors because the barrier to entry is so low, is astonishing. So how quickly do security teams have to pivot and understand what's going on at any given time?

The speed at which a single individual could launch [00:10:00] potentially thousands of attacks all at once, all of them highly sophisticated. Putting these tools into more hands, and giving those hands more capabilities, is rather astonishing. And that thing in the news right now? What they say it's able to find, and all of those volumes it's able to discover. I think that's a massive security challenge. And maybe I'll stop there; I know I'm always long-winded. But the other challenge is really the speed at which organizations have had to learn, understand, and adapt to all of this. We don't get the regulatory certainty that comes from regulations having been enacted, and a lot of us, certainly in the financial services industry, tend to rely on that. If you don't live in the EU, you're probably not getting that anytime soon.

So how do you operate in this environment and still continue to innovate, move forward, and use these technologies? As you said, I've been in this game for about 20 years, and nothing like this has ever happened. Just the pace. Speed is everything right now. It's insane.

Ward Balza: It is absolutely [00:11:00] crazy. You and I probably entered the industry around the same time, and I too have not seen the speed, and I'll just call it craziness, that the last couple of years have brought.

Sweeney Williams: Absolutely.

Ward Balza: Everybody, from all walks of life in corporate America, kind of feels like their hair's on fire, right? I think you said it best: to sum it up, all of your points there went back to speed, just trying to keep up with all these various facets. I want to go back to speed in general. You mentioned organizations having to adapt, and certainly in your role as head of responsible AI, I imagine that resonates the most, because you're probably getting pressure, or applying pressure, or both.

All these orgs out there are trying to compete, right? Trying to stay relevant. You've got [00:12:00] corporate America, you've got the vendor space, and you mentioned bad actors as well. Everybody's adopting it, or trying to, and trying to keep up. How are you managing, or what tips would you give to others to try to keep up? You said you're no longer worried about keeping up with the new models because it's insane how quickly they come out, but how are you keeping up with the fundamentals of what's coming out?

Sweeney Williams: Yeah, that's a great question, and I think "the fundamentals" is a good term, because it kind of answers the question. We need to go back and rely on the fundamentals of good AI governance, and good governance in general. There are AI practices and guidelines out there that still apply, and as AI accelerates, as the universe around AI accelerates around us, those practices have actually [00:13:00] been pretty robust. They've stood the test of time; it feels like forever, but it's only been crazy for a handful of years. They've stood that test. And what are those foundations? Ensuring you continue to adopt good data governance practices. That's really important to me in my role, and I think to a lot of us in the space. Ensuring that when you're collecting data, you're only collecting the data that you need.

And when somebody comes in and asks for data to do something with AI, organizations shouldn't just say, "yeah, we've all got to use this stuff, let's go," and provide that access. No. Consent still matters, and proper data governance, proper data [00:14:00] access controls, data limitation, data destruction, all of those things still apply. And then there are some of those AI-specific best practices: ensuring that you're being transparent about your AI use when you need to be; ensuring that if you have AI making big decisions, decisions that will materially impact someone, there's that transparency, but there's also the ability to explain a decision.

And if you're going the fully automated route, ensure that someone can request a human review, and have a human come in and make the final decision. Those best practices still apply. And I think we all know about the potential for bias: if you have biased data that trains a model, you're going to get biased outcomes. [00:15:00] Ensure you're checking for that bias and that you have mitigating controls around it. All of those things still apply, and they'll continue to apply. But how we apply them, and how quickly we have to go down that rabbit hole, really depends on what we're deploying, how sensitive it is, and how sensitive those outcomes are.
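The explainability and human-review practices described above can be sketched in code. This is a minimal illustration under assumed names (`decide_claim`, `risk_score`, the reviewer label are all hypothetical), not any organization's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """Record of an automated decision, kept so the outcome can be explained later."""
    subject_id: str
    outcome: str
    rationale: list = field(default_factory=list)
    reviewed_by_human: bool = False

def decide_claim(subject_id: str, risk_score: float, threshold: float = 0.7) -> Decision:
    """Automated first pass; the rationale is stored for explainability."""
    outcome = "approve" if risk_score < threshold else "refer"
    return Decision(subject_id, outcome,
                    rationale=[f"risk_score={risk_score:.2f}", f"threshold={threshold}"])

def request_human_review(decision: Decision, reviewer: str, final_outcome: str) -> Decision:
    """Anyone materially affected can escalate; a human makes the final call."""
    decision.outcome = final_outcome
    decision.reviewed_by_human = True
    decision.rationale.append(f"human review by {reviewer}")
    return decision

# A fully automated referral, then a human overriding it on request.
d = decide_claim("policy-123", risk_score=0.82)
d = request_human_review(d, reviewer="claims-officer", final_outcome="approve")
```

The point of the sketch is the audit trail: every decision carries its inputs and any human override, so "explain this decision" is always answerable.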

Ward Balza: I like it. I want to get your take. So I've been to a lot of conferences, I shouldn't say lately, the last couple of years, and surprise, surprise, AI has been the top topic. Both controlling, governing, and securing AI, and also using AI, right?

Like I mentioned before, just about everybody has some flavor of AI that they're touting they have, or actually have; the devil's in the details of what's going on. But I'm curious about your thoughts. [00:16:00] There are a lot of conversations lately about how to properly start your AI journey, right?

I call it the unicorn-and-fairyland type of conversation: "Hey, you don't have AI? Here's how you start." I think just about everybody has it; the cat is out of the bag, whether these organizations like it or not. So I'm curious about your thoughts for those organizations that know the cat's out of the bag and are saying, "oh my goodness, I'm not ready." What are maybe two or three things you would say? Like, hey, put out the fire, do these couple of things to start, and everything will be okay.

Sweeney Williams: That's a really, really good question. I won't give you two things; I'll give you two buckets.

Ward Balza: Okay.

Sweeney Williams: The [00:17:00] first bucket goes back to that speed-and-ambiguity thing. If you're waiting for clarity, that clarity isn't coming, in terms of what the regulatory approach is going to be, and whether you can wait for it and then decide what to do based on regulations. Those regulations either aren't coming, or if they are, they're not coming anytime soon. So bucket number one is to just start now and get that work going. What's the old saying? The best time to plant a tree was 20 years ago; the next best time is right now.

It's time to plant your AI governance tree, for lack of a better term. And what can you do to get started? You can certainly document an AI governance policy: what are the responsible AI principles that will guide everything your organization does, and what are the standards you can document to flow down from that? [00:18:00] Put together a governance group: get people in the company accountable for the company's AI use, for the protection of assets, and for the protection of individuals when AI is used in ways that materially impact them. You need a governance committee, and you need to ensure that committee is actually doing its work. You certainly want to invest in training: make sure everyone's up to speed on what they need to know and care about relative to AI, and not just the protection bit, the opportunity bit too. So there's a lot there, and it's critically important. And going back to those best practices: where can you get best practices to hang some of this off of, to know what should go into that policy or that governance structure? There's the NIST AI Risk Management [00:19:00] Framework, a standard anyone can pull from and leverage right away. If you want to go a little deeper, there's ISO 42001, a more defined AI standard and framework. There's the EU AI Act I talked about; there are some really good risk-based principles in there. And if we want to go right down to a foundational organization, there's the OECD, which published its AI principles quite some time ago. Those are still relevant. So you have a lot to pull from, right?
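One way to make a framework like the NIST AI RMF concrete is to map each of your documented principles onto its four core functions (Govern, Map, Measure, Manage) and check nothing falls outside them. A minimal sketch; the principle names here are illustrative, not from any real policy:

```python
# The four core functions of the NIST AI Risk Management Framework.
NIST_AI_RMF_FUNCTIONS = {"Govern", "Map", "Measure", "Manage"}

# Hypothetical responsible-AI principles, each mapped to an RMF function.
responsible_ai_policy = {
    "accountability":     "Govern",   # a governance committee owns AI use
    "use_case_inventory": "Map",      # know where AI is deployed and in what context
    "bias_testing":       "Measure",  # test models for biased outcomes
    "incident_response":  "Manage",   # act on risks once systems are deployed
}

def unmapped_principles(policy: dict) -> list:
    """Return principles mapped to something outside the framework's functions."""
    return [p for p, fn in policy.items() if fn not in NIST_AI_RMF_FUNCTIONS]

problems = unmapped_principles(responsible_ai_policy)
```

An empty `problems` list means every documented principle hangs off a recognized function, which is the "flow down from the policy" structure described above.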

So start now, I think, is that first bucket. The second bucket, and I'm speaking to a security guy, so you'll appreciate this, is collaborate. And that's collaborating within your organization first; you can't go it alone. I was [00:20:00] talking about putting together a governance group: a cross-functional governance group is what you need. It can't just be technology, or just technology, security, and privacy. You've got to have legal there. You've got to have corporate strategy there.

You might have to have HR there, right? If you're doing something that might result in employee displacement, there's got to be a voice for those employees in the room, so you might need HR there to really advocate for your employees.

What do I care about, as someone who sees what's happening in various markets right now? I care about continuing to do things that empower people, that give them the ability to do their jobs more efficiently and more effectively, but don't put them out of a job. I think that's critically important. So collaborating within the organization is [00:21:00] really big. And, coming back to the security bit, collaborate within your industry. Collaborate even with your competitors. Everyone is seeing something, and we all get more secure if we understand the threats and attacks that others in our industry are seeing.

We need a forum to properly share that out. We also need a forum to develop and implement best practices. If we're not getting this from regulators, if we're not getting this from government per se, then we need a way to do it ourselves. I've seen this in the past, and it works very well.

I think we need that more than ever right now. And then, last but not least on the collaboration bit: collaborate with your regulators, and collaborate with government. They're trying to learn this stuff just like we are; they're figuring everything out as they go, just like we are. Well, let's help them figure it out. Let's help them understand what's important to our various industries. Let's help them understand, [00:22:00] if they want to go a certain route, how that really impacts our ability to innovate, and to innovate safely. Helping them achieve that balance and understand what's really important to industry, I think, is really important.

So: start now, and collaborate as much as you can.

Ward Balza: Oh man, really great tips. I want to go back to an earlier point you made, again going back to speed, and I'm really curious about your thoughts on this, especially in your role today. You mentioned being transparent about AI usage, and organizations having to do that.

So my question is this. A lot of organizations that I've worked for and with often struggle just understanding their own usage: what is that business unit doing, what is this business unit doing? And that can be fixed through some of your other tips, [00:23:00] collaboration and all that.

I think where there's still a lot of pain is third-party usage: your suppliers. How are your suppliers using it? So when it comes to being transparent about AI usage, where do organizations really need to start documenting and cataloging what their suppliers, the folks they rely on, are using, down to the nth degree?

Sweeney Williams: Yeah, I wish I had an easy answer for that. My answer is: good luck. I think we're all struggling with this, and you made the point earlier, everyone's putting AI into everything. How do we get our arms around that? How do we even know they've done it? You've got different categories of vendors. You've got vendors like, I don't know, the Microsofts of the world, who are [00:24:00] very upfront about this. They bake everything into the contract, so you get to adopt a reasonably strong set of contractual obligations.

You know what they're deploying and what the protections are around it, generally, at least in terms of what they're agreeing to in contract. But then you have vendors who are doing one of two things. They're implementing AI tooling in the pre-existing tools you're already using today, and they're telling you about it. Maybe you get a measure of control, maybe you don't.

Ward Balza: Sure.

Sweeney Williams: Some will flip it off if you don't really want it, or flip it on if you sign a contract and pay more money, that type of thing. And then you can control some things: there's a trigger for due diligence, there's a trigger to renegotiate the contract around those AI components. But then there are the vendors who are introducing those things and not telling anyone.

Ward Balza: Right.

Sweeney Williams: One day you log in, [00:25:00] that stuff's on and it's processing your data, and it's much, much harder to get your arms around. You don't get that advance notice. So there's no easy answer. How do you get your arms around this? I think you need a multi-pronged approach. You can do a number of things. You can reach out to your vendors and simply ask them: are you implementing AI?

And if yes, give us an explanation, because we need to launch new due diligence and recontract on all of those things. You can maybe do some of it that way. I know some organizations in the past, for other reasons, have implemented triggers that say: if you're going to materially change your platform, or materially do something different with our data, there's a trigger and you need to let us know.

And if you're an organization that [00:26:00] was subject to GDPR over the last few years, maybe you have something like that in contract that's reasonably sufficient. A lot of these organizations are using new subprocessors to do this; not everyone's a frontier model provider. So they're adding new subprocessors, and maybe you've got a one-month notice trigger in your contract, and they notify you that way. So there are ways to figure that out, but there's no one silver bullet that says this is the best way to manage it.

But say, for the sake of argument, you've done that: you've figured out which of your existing vendors, or new prospective vendors, are introducing this stuff. Well, now you need robust contract controls. You need to know what you care about in contract. Are you going to allow them to train their models on your data? Are you going to allow them to do testing with [00:27:00] your data? Is PI going to be allowed in anything they do with that AI, or are they only allowed to use it for non-PI applications? You need to know what you want in contract and make sure you negotiate it in. You also want to make sure you're negotiating in their responsibility to govern their own AI properly. They can't just implement it and...

Ward Balza: Yeah.

Sweeney Williams: ...hope everything goes well. They've got to have strong practices, and if they don't, cut bait: if you have the option of not working with them, don't work with them. The other bit is due diligence. You need to know what questions you want to ask, and you don't have to figure that out yourself. You can use, for example, the SIG, the Standardized Information Gathering questionnaire, and there's an AI component to that. So there are a lot of ways to do this, but no single best practice. I think everybody's struggling with this, and [00:28:00] the more vendors you have, the more catch-up you need to do, and it can get pretty cumbersome pretty quickly.
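The vendor-cataloging approach described above, which vendors disclosed AI, what the contract allows, and what notice you get, can be sketched as a simple inventory. A minimal illustration; every vendor name and field here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class VendorAIRecord:
    """One row in a third-party AI inventory; field names are illustrative."""
    vendor: str
    ai_disclosed: bool              # did the vendor tell us, or did we discover it?
    trains_on_our_data: bool        # contract: may they train models on our data?
    pi_permitted: bool              # contract: is personal information allowed?
    subprocessor_notice_days: int   # contract: advance notice for new subprocessors

def needs_follow_up(r: VendorAIRecord) -> bool:
    """Flag vendors for renewed due diligence or recontracting."""
    return (not r.ai_disclosed) or r.trains_on_our_data or r.subprocessor_notice_days < 30

inventory = [
    VendorAIRecord("crm-suite", ai_disclosed=True,  trains_on_our_data=False,
                   pi_permitted=False, subprocessor_notice_days=30),
    VendorAIRecord("doc-tools", ai_disclosed=False, trains_on_our_data=True,
                   pi_permitted=True,  subprocessor_notice_days=0),
]
flagged = [r.vendor for r in inventory if needs_follow_up(r)]
```

The flag conditions are stand-ins for whatever contract terms an organization actually negotiates; the point is having one queryable record per vendor rather than rediscovering terms contract by contract.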

Ward Balza: Yeah, third-party risk and third-party assessments are already difficult. Now add in the additional layer of AI; the world keeps getting crazier. And speaking of the world evolving: you're the head of responsible AI, a job that did not exist 20 years ago when you started.

So what was your journey? How did you get to where you are today?

sweeney-williams_1_04-17-2026_143944: My journey. You know, interestingly enough, I feel like it's kind of come full circle just based on kind of the environment and, and, and, uh,

squadcaster-6gb2_6_04-17-2026_133944: I.

sweeney-williams_1_04-17-2026_143944: Let's see if I can kind of land this well. But, um, I started kind of this, this journey. Uh, around the mid to mid to late two thousands, I I was at a company called Eloqua. Um, Eloqua was [00:29:00] one of the pioneering kind of marketing automation SaaS platform providers. And so they, they, You know, we did things like, um. You know, uh, email marketing deployments and things like that. So, so Eloco was an email service provider. They, they provided kind of, You know, cross, You know, website tracking and all those things.

Some things a lot of us might still consider creepy. But the interesting thing about that place was they were so incredibly focused, right up to our CEO, on always doing the right things, always doing things privacy-first and best-practice-first. And, You know, they were willing to not take on customers who maybe didn't align with that view and who just wanted to spam.

squadcaster-6gb2_6_04-17-2026_133944: Oh.

sweeney-williams_1_04-17-2026_143944: Very privacy-first, very trust-first in data collection. It was one of those rare organizations, especially back then, when everyone was just all about the data collection.

We were all just starting to talk about big data: collect everything you can and figure out what you're gonna do with it later, and all those things. And, You know, the area of specialty that I fell into at Eloqua initially was what was called email deliverability.

And that was the practice of ensuring your IP reputation, your sender reputation, the form of your emails, and the volume of your emails. There was a bit of a science behind it, and a bit of an industry behind it, that would help ensure marketing emails got into the inbox instead of the spam folder or getting blocked. And I met a lot of friends there that I still have today, some of my closer friends. I think that was really interesting, and the tie-in to privacy was: if you're going to avoid getting blocked by anti-spam services and things like that, you should probably follow the law.

And one of the biggest dedicated anti-spam laws back then, it [00:31:00] wasn't the only one, but one of the bigger ones, was CAN-SPAM in the US, right? If you look at CAN-SPAM, it's fundamentally a privacy law, and I found it wildly fascinating. So I just went off and learned privacy law as well, global privacy law. Eloqua was an organization with a global footprint: what are the US privacy laws, what are the Canadian privacy laws, the European ones, and in Asia? All of those things.

I went off and learned that, and I got certified through the IAPP, the International Association of Privacy Professionals. One of my closest friends right now was the Chief Privacy Officer at Eloqua at the time, and he and I built the privacy practice there together.

And I kind of fell in love with the whole thing, global privacy. I was a little bit weirdly obsessed with it. I think we built what I considered one of the strongest [00:32:00] practices in the industry at the time, and we were really proud of that. Fast forward a few years, and Eloqua got acquired by Oracle. I spent what I'd call a cup of coffee with Oracle; it's a wonderful organization. I ran deliverability and privacy for the marketing cloud component there. And then I was recruited away to another SaaS company, called Vision Critical at the time.

It's called Alida now, so I'll just refer to it as Alida. Alida was a market research platform: it let our customers engage with their own customers and get constant feedback and things like that, and there was this whole SaaS platform around it. Another person who became a very close friend of mine was the CISO there; he's the one that hired me. I became the privacy officer there and helped build the [00:33:00] privacy practice. He left a few years in for another opportunity, and I ended up becoming the vice president of security, privacy, and compliance there.

So running the whole shebang, and I had a wonderful team there, a bunch of guys that I love to this day. We built what I, again, considered a very strong privacy, security, and compliance practice. And again, I continued to fall in love with this thing and continued to be really interested in it. Fast forward another few years, and I was recruited over to the parent company of the company I work for now, as Global Chief Privacy Officer. I got to build a privacy practice there. Then that ChatGPT moment happened, and I started doing AI governance off the side of my desk, and it became very quickly apparent that that was a [00:34:00] full-time job. I took that on, I took on my current role, and, You know, there's that full circle moment: those early days at Eloqua really became something very similar to what we're seeing now, having to build things from scratch and build through ambiguity and all those things.

And so, You know, here I am. It's been one hell of a journey.

squadcaster-6gb2_6_04-17-2026_133944: Wow, Sweeney. Quite the journey: from first learning about privacy on the marketing side, to building privacy programs, to, holy cow, here's the onslaught of AI, and now, congratulations.

You own responsible AI. Wow. I like to ask a lot of my guests this one, and I'm curious, given the morph of your own career here: if you could go back 20 years to the beginning, would you have done anything different on [00:35:00] your journey?

sweeney-williams_1_04-17-2026_143944: That's a great question. You heard me talk about how interested I was in everything I've done, right? And you heard me talk about some of the fantastic people and friends I've made along the way. I don't know that I'd trade any of that for anything. So I'd say, You know, nobody's had a perfect journey.

I certainly haven't had a perfect journey, but in my mind it's about as perfect as it could be, and it's been everything I could ask for. So I don't know that I'd do anything differently. My life has been enriched by my career, the people I've met along the way, and all the things I've learned. So no, I'm pretty thrilled at where I've landed and where things continue to go.

squadcaster-6gb2_6_04-17-2026_133944: That's an awesome answer. I love that. Usually I get, "Oh, I would've done this, I would've done that." You could have even said, "Hey, I would've invested in Bitcoin when it first came out." Right? We'd be millionaires or billionaires at [00:36:00] this point if we had done that.

sweeney-williams_1_04-17-2026_143944: I didn't have to say that, because everybody has the same regret.

squadcaster-6gb2_6_04-17-2026_133944: Yeah, everyone has that. "That's never gonna be a thing." And guess what? It sure is.

sweeney-williams_1_04-17-2026_143944: Did you ever hear the story about someone who bought a pizza for, like, eight Bitcoin?

squadcaster-6gb2_6_04-17-2026_133944: Eight, I think that's what it was. Yeah, I did. And then you look at what eight Bitcoin is today, at about 70 grand a Bitcoin. So yeah, it's a crazy price for a pizza these days.

sweeney-williams_1_04-17-2026_143944: Yeah.

squadcaster-6gb2_6_04-17-2026_133944: You'd love to have had that tip as the pizza guy, You know, get one Bitcoin as the tip.

sweeney-williams_1_04-17-2026_143944: Hindsight is 20/20. Pizza industry, early days.

squadcaster-6gb2_6_04-17-2026_133944: Oh, absolutely. Well, Sweeney, if folks want to connect with you, what's the best way to do so?

sweeney-williams_1_04-17-2026_143944: The best way is LinkedIn. I'm a privacy guy, so you're not gonna find me on a ton of social media, but LinkedIn is always a good bet. Feel free to reach out; I'm always happy to discuss.

squadcaster-6gb2_6_04-17-2026_133944: I like that. Eating your own dog food from a privacy perspective: stay limited and only keep what's [00:37:00] important out there.

sweeney-williams_1_04-17-2026_143944: Always on brand, my friend.

squadcaster-6gb2_6_04-17-2026_133944: Oh, man. Well, thank you so much for joining me today. This has been a great episode.

sweeney-williams_1_04-17-2026_143944: Thank you so much for having me. It's been a pleasure.

squadcaster-6gb2_6_04-17-2026_133944: And a big thank you to the audience. I really hope you enjoyed the episode today and learned something. Please tell others in your network to follow and listen. This has been another exciting episode of Guardians of the Data. See you next time.

Speaker 2: That's a wrap on another episode of Guardians of the Data. Thanks for tuning in. For show notes and more, visit guardiansofthedata.show. Guardians of the Data is made possible by support from Sentra. To see how we help organizations discover and classify all of their data accurately and automatically, while quickly achieving scaled data protection without the fuss, please visit sentra.io.

Catch you next time.
