
AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E033 - AI or Not - Debbie Reynolds and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
What happens to your privacy once your data enters an AI system? Can you ever truly regain control? These questions take center stage as Pamela Isom welcomes back Debbie Reynolds, "The Data Diva," for a compelling conversation on the one-year anniversary of the AI or Not podcast.
Reynolds, CEO and Chief Data Privacy Officer of Debbie Reynolds Consulting, dives deep into the evolving landscape where privacy and artificial intelligence collide. She challenges us to consider whether privacy in AI is a myth, arguing that we must shift our perspective from focusing solely on data collection to understanding the complete lifecycle of our information. This paradigm shift becomes crucial as the traditional model of privacy protection, where individuals consent to sharing data in exchange for services, no longer captures the reality of how our information flows through modern systems.
The conversation takes a fascinating turn as Reynolds reveals how location data has recently been elevated to "sensitive data" status by the Federal Trade Commission. This reclassification, partly triggered by concerns about reproductive health privacy following the overturning of Roe v. Wade, represents a significant shift in how location information is regulated and protected. Reynolds explains differential privacy—a mathematical technique that adds "noise" to datasets to protect individuals while preserving statistical utility—and why it matters for everything from census data to medical research.
For organizations navigating the AI revolution, Reynolds offers practical wisdom about building effective AI teams. She advocates for Chief AI Officers with strong foundations in data management, privacy, and cybersecurity, emphasizing that multidisciplinary expertise is essential. As Isom highlights government guidelines for maintaining public trust in AI systems, Reynolds reinforces that innovation need not come at the expense of safety and privacy. Her parting advice resonates deeply: "Data is food for AI systems." Just as we care about what we eat, we must care about what data feeds our AI—and remember that human wisdom remains irreplaceable in distinguishing between AI-generated content that's helpful and that which is merely convincing but incorrect.
Join us for this thought-provoking exploration of trust, privacy, and responsible AI development. Subscribe to AI or Not for more insights from global business leaders on navigating the opportunities and challenges of artificial intelligence.
[00:00] Pamela Isom: This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or other professional advice, nor official statements by their organizations.
[00:45] Guest views may not be those of the host.
[00:53] Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed right now to address issues and guide success in your artificial intelligence and digital transformation journeys.
[01:08] I am Pamela Isom and I am your podcast host.
[01:12] We have an exciting and special guest with us today. She's been with us before,
[01:18] but she's back, and this is my one-year anniversary, and so Debbie is here to kick it off for us. Debbie, welcome to AI or Not.
[01:28] Debbie Reynolds: Oh, thank you so much, Pamela. And I'm so proud of you and I'm so thrilled that you invited me back to be on your show on your one year anniversary. This is tremendous.
[01:37] You're doing a great job.
[01:39] Pamela Isom: Thank you. So, Debbie, tell me, tell me more about yourself again. So tell me about your mission and tell me more about what's next.
[01:49] Debbie Reynolds: Oh, great question. So I'm Debbie Reynolds, the Data Diva. I'm the CEO and Chief Data Privacy Officer of Debbie Reynolds Consulting. I intersect between privacy and emerging tech. I'm a technologist by trade.
[02:02] I have a podcast as well, the Data Diva Talks Privacy Podcast, of which Pamela has been a guest. She's tremendous. And I think, you know, there are so many things brewing now in the data space in terms of, you know, even terminology, you know, safety versus security,
[02:24] you know, transparency and what that looks like in AI systems, really thinking about lineage of data, like data through its life cycle, as opposed to people being obsessed with collection and what that means.
[02:38] So I think because there are new use cases for AI and there's new ways that people are using data, there are probably new or emerging risks that people aren't thinking about.
[02:49] So it's just like a whirlwind of activity and thought around this. So it's a great topic and a good time to talk about it.
[02:58] Pamela Isom: Okay, so what's next for you?
[03:02] Debbie Reynolds: I'm doing a lot of speaking, a lot of writing, probably going to be doing some interesting different things. I put out a lot of content, a lot of information, and so I'll continue to do that.
[03:13] But I may add some other elements in the future.
[03:15] Pamela Isom: Here's a question I have for you, and I have an opinion as well. And I know you're going to tell me, "Pam, no." But is privacy,
[03:27] is privacy a myth when it comes to AI?
[03:31] Debbie Reynolds: That is a fascinating question. Wow. I guess I would say yes and no.
[03:37] Yeah, I have it both ways. So depending on,
[03:42] no matter what the AI system does, there's stuff that goes in, right? And then there are things that come out of the system. So people, companies that are using AI, they're going to be responsible or should be responsible for what goes in and what comes out and what they do with all that stuff.
[03:59] So what goes in, in terms of privacy perspective, I guess a lot of what we're seeing in AI, unfortunately, is like not transparent to people.
[04:11] So if companies want to use that lack of transparency to say, well, we are not mishandling our private data and then nothing happens, right. And like that data isn't transmitted some other place, I guess it could be somewhat seen as a way to protect privacy.
[04:30] But then you don't know what comes out of the other end and you don't really know what companies are going to do with it. So I think it's just going to be a challenge in the future because I think a lot of privacy,
[04:44] a lot of privacy regulations and laws, what they, what they want to do or what they have always thought about privacy was that, you know, you as an individual, give your data to a company,
[04:58] they are, you know, in exchange, they're giving you some type of good or service, right, and then they're supposed to protect your data throughout the life cycle. But that's not all
[05:10] that happens with people's data, right? So there are people like data brokers that you don't know, that you don't have a relationship with, and they're handling your data. There's data going into AI systems.
[05:20] Unfortunately, there may be decisions that are being made about you that you may not know, which is kind of the opposite,
[05:27] the opposite of what privacy should be. And I think one of the things that, one of the challenges AI brings to privacy is that it really exacerbates the idea about someone having control.
[05:41] Right. So once your data is in an AI system,
[05:44] you as an individual sort of lose control and sometimes companies lose control of the data that's in those systems. So I think talking about data as a life cycle, as opposed to just the collection point, is really important.
[05:58] And then trying to determine what happens with that data and why it happens after it comes out the other end, I think it's very important.
[06:08] Pamela Isom: So privacy is not a myth. You can have privacy,
[06:16] but you want to understand the full lifecycle of the data and have that understanding as you're working with the model so that you know what to perhaps share and what not to share and how that information could be potentially leveraged.
[06:33] Is that what you're saying?
[06:35] Debbie Reynolds: Yes. Okay, very good.
[06:37] Pamela Isom: So tell me about this here business that's going on with location services. That's a big deal.
[06:46] Debbie Reynolds: People in the privacy world have been very dismayed that there hasn't been a lot of movement on privacy regulation in the US on a federal level. But what has been brewing on a federal level, and even what we're seeing on a state level, some of this started around Roe v.
[07:07] Wade being struck down, right? Where the impact of that was that women's data, data about women's health, was less private. Right. So, you know, we're seeing different states trying to create cases against people for miscarriages or different things or different types of health services.
[07:28] And part of that, almost like kind of a dragnet type of thing, was centered around location data. Right? So what some of these law enforcement agencies in these different states were doing was tracking people and tracing them, saying, hey, we know that you went to this clinic.
[07:44] And then we, so we suspect that you did X, Y and Z and it's illegal in this state, so we're going to arrest you, and different things like that. So what we started to see on a federal and state level, you know, we're seeing states like, I live in Illinois,
[07:59] Illinois has a law where law enforcement can't share data about people in Illinois with other states if it has anything to do with any reproductive rights case. All right, so we have a couple of states doing that.
[08:15] But then we also have the FTC that has basically come out and said that location data is sensitive data. And so what they mean by that is that location data now has been catapulted into a
[08:30] level in which companies that have location data about people have to protect it in different ways, in more ways. And then also that means that if they run afoul of that, they may be in more trouble.
[08:48] So sensitive data is kind of a category, a higher-level category of data that can be used to discriminate against or harm people. And so to see the US kind of move in that direction, where they're saying location data is sensitive data now and it needs special protection, is very different than what we've seen in the past.
[09:10] And so what we're seeing is kind of enforcement. And then we're also seeing, you know, this kind of elbowing state to state around protecting the location data of people.
[09:24] Pamela Isom: Interesting. So it's beyond the location data of tracking me when I'm driving my car and tracking my vehicle. This goes beyond that. But they're calling it location services because I guess now we're starting to realize the sensitivity and implications of location data and its inappropriate use.
[09:46] Right. Is that what you're saying? What's going on?
[09:49] Debbie Reynolds: Yeah. And then, too, it is broader than reproductive health also. Right. Because,
[09:56] you know, a lot of the push around privacy for some companies has been, you know, let's anonymize data and let's, like, make it less identifiable to a person, which is fine
[10:08] And well,
[10:09] but when you have someone's location data, it's not that hard to figure out who they are.
[10:15] Right. So that re-identification can be done very well with location data. And that's why a lot of companies want that information, because, especially using AI, they can figure out who a person is by their location.
[10:34] Right.
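To make that re-identification risk concrete, here is a small, purely illustrative sketch (not something discussed on the show) of how an "anonymized" location trace can be matched against known home and work locations. All names, coordinates, and the scoring rule are hypothetical assumptions.

```python
from math import hypot

# Known reference points for named individuals, e.g. from another data source.
known_people = {
    "Alice": {"home": (41.88, -87.63), "work": (41.90, -87.62)},
    "Bob":   {"home": (40.71, -74.00), "work": (40.75, -73.99)},
}

# An "anonymized" trace: just coordinates, no name attached.
anonymous_trace = [(41.881, -87.629), (41.899, -87.621), (41.882, -87.630)]

def min_distance(point, anchors):
    # Distance from one trace point to the nearest known anchor.
    return min(hypot(point[0] - a[0], point[1] - a[1]) for a in anchors)

def best_match(trace, people):
    # Score each known person by how closely the trace hugs their anchors;
    # the lowest average distance is the most likely identity.
    scores = {
        name: sum(min_distance(p, list(anchors.values())) for p in trace) / len(trace)
        for name, anchors in people.items()
    }
    return min(scores, key=scores.get), scores

name, scores = best_match(anonymous_trace, known_people)
print(f"Trace most consistent with: {name}")
```

Even this naive nearest-anchor scoring ties the trace back to "Alice" once it contains a handful of habitual locations, which is the kind of linkage the FTC's treatment of location data as sensitive is meant to guard against.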
[10:34] Pamela Isom: Does this tie to differential privacy at all?
[10:38] Debbie Reynolds: It can. It absolutely can. So the differential privacy,
[10:43] and I'm glad you asked me about this, because I think it's a big confusion for people around differential privacy because I hear some people say, oh, we should just use differential privacy.
[10:52] And it's like, there's nothing wrong with that. But a lot of people don't know what it is and what it does. So differential privacy is really used in data sets that have numbers or statistics or some other type of way to kind of group and identify people.
[11:13] So we see differential privacy used very well in things like maybe census or medical research. They use differential privacy as well, where the goal is to be able to glean information about groups of people without really singling out any one person.
[11:34] So let's say, for instance, like an example, let's say you have a study, and the study is about, let's say 10 people in Vermont named John.
[11:48] Okay, so 10 people in Vermont named John. Right. And you try to de-identify this information. Right. But nine of those people named John live in one town. Right.
[12:00] Town A. But then one person named John lives in town B. Okay? So somebody said, well, I can figure out who John is in town B because he's the only person in this town.
[12:12] Right. So differential privacy says, well, let's add more people. Let's pretend like there are more people in town B that are also named John. So it makes it harder for the person to re-identify that individual.
[12:28] So that's really what differential privacy is. So it's really about math and stats, and being able to find a way not to target or single out or discriminate against people in groups or statistics, by changing that calculation to make it fuzzier around those people
[12:56] who are marginalized or sort of on the edge. So that's what differential privacy really is.
[13:04] Pamela Isom: So we could use that in many different situations. But when it's described, it talks about noise, adding noise,
[13:17] adding noise to the data sets so that it is difficult to differentiate. And the example of the noise is what you said, like, just add a bunch of Johns,
[13:33] add a bunch of them. So it distorts, and it really protects. But the system keeps track of what separates the real from the noise.
[13:48] Debbie Reynolds: Exactly, exactly. So a lot of that is used, especially in, like, public data sets or data sets that are given to companies so that they can use them and get the insights there without actually re-identifying that individual or, like, discriminating against them.
[14:05] So this is used a lot in, like, clinical trials where people, let's say they may have a sensitive ailment or something, and they don't want researchers or anyone to be able to identify that person.
[14:17] So they'll add that noise without damaging kind of the result that they get. Right?
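For readers who want to see the "noise" idea in code, here is a minimal sketch of the Laplace mechanism commonly used for differentially private counts; the epsilon value and the example count below are illustrative assumptions, not figures from the conversation.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential draws is Laplace-distributed,
    # giving Laplace(0, scale) noise without any edge cases.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def noisy_count(true_count: int, epsilon: float) -> float:
    # For a counting query, adding or removing one person changes the answer
    # by at most 1 (sensitivity = 1), so Laplace noise with scale 1/epsilon
    # gives epsilon-differential privacy for that single query.
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: publish how many people named John live in town B.
true_answer = 1                                    # the real, sensitive count
published = noisy_count(true_answer, epsilon=0.5)  # smaller epsilon = more noise
print(f"Published (noisy) count: {published:.1f}")
```

Run it a few times and the published count jitters around the true value; aggregate statistics stay useful, but no single individual's presence can be confidently inferred from any one output.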
[14:24] Pamela Isom: Yeah. It even reminds me of the cybersecurity realm, where we talk about tampering or membership inference. So it reminds me of that, because you're trying to figure out, like, is a person a part of this data set?
[14:37] Is that your information in that data set? All right, so then let's talk about the chief AI officers. Okay, so I'm a chief AI officer and an advisor to chief AI officers, and I'm real big on cybersecurity and privacy.
[14:52] So I have a preference, I think that chief AI officers should have some privacy knowledge, go through privacy training,
[15:02] something. But you tell me, what's your perspective on that? Should the role of the chief AI officer evolve or. Or what's your thoughts?
[15:12] Debbie Reynolds: Well, let's see. I'll start with this: for AI, my view is that data is the food of AI systems. And so you need people who understand data and understand what it takes to protect data and to be able to use it in the right way.
[15:32] And because privacy and cybersecurity, in my view, are sort of horizontal to almost any type of data use that you could think of, those are things that, to me, it would be great if an AI officer had one or the other or both of those.
[15:50] But I think eventually,
[15:52] you know, we're not there yet, but eventually there needs to be some level of base foundational knowledge that all AI officers should have around cybersecurity and privacy, because it just touches every part of the organization and what you're handling with AI.
[16:09] So I think people who have that as part of their foundation or part of their expertise, I think it makes them that much more valuable as an AI officer. So yeah,
[16:23] I've seen a lot of different,
[16:26] I know a lot of AI officers, and I see how companies come at it from different points of view. But for me, I think I would gravitate more towards people who have really deep knowledge of data and sort of understand these fields that have a kind of symbiotic relationship, like privacy and cybersecurity.
[16:49] Pamela Isom: So is a course in privacy,
[16:53] or a couple of courses, good enough or what? What do you think about that? Like, what would a team look like in the future?
[17:02] An AI team?
[17:03] Debbie Reynolds: Yeah, that's a good question. That's a great question. I think an AI team should be a multidisciplinary team, right, that understands data from different angles. So I don't think it's going to be one-size-fits-all.
[17:17] I don't think it's going to be, you know, always going to be a cyber person or always going to be a privacy person.
[17:23] I see actually a lot of data governance people moving into AI roles. Right. And so, you know, anyone who has,
[17:31] who had kind of data stewardship in their background, I think could excel at that. But right now it's kind of wide open and I think it's great that it's wide open in that way because I think it's just going to depend on the talent that's out there, the needs of the organization,
[17:48] you know, how much boots-on-the-ground or hit-the-ground-running knowledge the person needs, versus what they need to kind of learn along the way.
[17:58] But I really, if I have to, you know, I'm seeing a lot of people move into those roles who kind of don't really know about data, like they may know about regulation.
[18:09] And to me I don't think that's the right fit because these are, these are data jobs in my view. So you really need to understand data, you need to understand data flows, not just the legality of it, but just, you know, what's happening and why it's happening and what are the questions to ask.
[18:28] And I think an AI officer really is going to be kind of a diplomat and someone who can talk at all levels of the organization about artificial intelligence, what it means, what people need to do, what are their responsibilities, what are the risks, what are the rewards?
[18:44] You know, we're hearing a lot about the reward, but not the risk part. Right. And so being able to talk about that, I think, is important. So, in my view, it's probably not one-size-fits-all.
[18:54] But I think just in general, I think companies,
[18:58] as almost all companies now are data companies, they need a more sophisticated team of people who understand data and how it flows in and out of their organization.
[19:10] Pamela Isom: I agree with you. So I was reading, you know, I check in on what's happening with government, and I'm just going to point this out here. So there's this memorandum. There are a couple of them, 21 and 22, the most recent.
[19:26] But 21 speaks to the use of AI without losing public trust.
[19:34] So my episodes have been speaking a lot lately on trust, just because it's so important and it's so hard to quantify. Right. Trust to one person may not mean the same thing to another.
[19:47] But it speaks to some things that I believe could help mitigate risk. And you were talking, and you made me think of that. So I'm just going to list a few.
[19:56] Pre-deployment testing. So it just seems like,
[20:00] okay, so we know this, so let's just do it: pre-deployment testing. There's an AI impact assessment, to understand the impacts of the solution as well as the implications if something goes wrong. Then there's ongoing and continuous monitoring.
[20:16] I think that that one is critical. Human oversight and accountability is in there. And then they talk about remedies and appeals. So an appeals process.
[20:28] We talk about, and I talk about this too, being able to get to solutions faster and get to decisions faster, but they need to be reliable and trustworthy decisions.
[20:44] So they mentioned the remedies and appeals so that affected individuals must be able to request timely human review. Like who wants to wait a year before you get contacted just to say you have to wait another year.
[21:02] Right. So we've all gone through that. So this speaks to that. And then public feedback mechanisms are in there. And then waivers. And the chief AI officers are the ones with some authority there.
[21:14] So they have the authority to waive specific controls, but the waivers must be tracked and publicly reported when this involves public situations. So I thought that was pretty good.
[21:30] And I only bring that up because I was listening to what you said, and you can hear some of what you said in that list. So I wanted to let you know you're kind of in alignment with that memorandum.
[21:44] Did you have any comments on that?
[21:46] Debbie Reynolds: No. Very good. You're teaching me some things here, so thank you.
[21:50] Pamela Isom: We are kind of wrapping up here. So we have gone through pretty much what we said we would talk about.
[21:58] So I really don't have anything else. I want to wrap up this conversation by seeing if there's anything else you want to talk about before you provide us with a call to action.
[22:12] Debbie Reynolds: Oh, wow. Oh, wow. I don't think there is. I think this is a pretty broad-stroke conversation. And so I'm very pleased with this discussion and the things that we talked about today.
[22:25] So, no, I don't have anything to add.
[22:27] Pamela Isom: You have words of wisdom for us?
[22:30] Debbie Reynolds: Sure, absolutely. Always. Always. Let's see, words of wisdom. I would say keep an eye on data.
[22:40] I always tell people data is food for AI systems. So how people use data will greatly impact what AI can do with it. And so thinking about that connection and thinking about data all the way through its life cycle. And I like the way that you talked about this particular memorandum, and that it had things like remedies and waivers and stuff like that,
[23:06] because I think,
[23:08] you know, there should be some type of adequate redress, and there should be not just a human in the loop, but maybe a human in the driver's seat, right?
[23:19] Where AI isn't the one being relied on for the final decision or the final say, and the human can help raise the issues and help to sort of mitigate them or even eliminate them if they possibly can.
[23:38] So I like that life cycle, and I think that's a pretty good framework to be thinking about and looking at. But I don't like when people say, like, let's forget about safety, let's forget about privacy, let's just innovate.
[23:54] Just, you know, let everything go, let the tractors run over people, because we're innovating, right? And so I think you can innovate without harming people. And I want to see a lot more companies and organizations really think through that, because I think that's the only way that trust will happen.
[24:11] And that's really the only way to get the type of wide adoption that companies really want with AI systems.
[24:17] Pamela Isom: Do you think we trust it too much, or do we just set it aside, set safety aside? Because I often say large language models and AI, especially generative AI, predict, but they don't verify.
[24:31] But when you're dealing with AI and you're working with it, sometimes, especially the generative AI tools, it can sound so compelling, like, oh, yeah, this has got to be correct.
[24:43] And it's not. And plenty of times it's not. So do you think we trust too much or we just. We just don't want to? I don't know. What do you think?
[24:51] Debbie Reynolds: You know, to me, this has happened before. This happened with books, this happened with the Internet, with computers. This is also happening with AI, where people think that if some other system helped create it, that gives it more credence and more authority in some way.
[25:08] Like, you know, like someone thinks, oh, it's published in a book, so it must be true. We know that's not true, right? Or if it's on the Internet, it must be true.
[25:16] We know that's not true.
[25:18] So I think we're going through the same thing with AI, where we're like, oh, AI told me this, so it's true. And it's like, no, that's not true.
[25:26] Right? So understanding what it can do and understanding what its limitations are is really important. I tell people, so they understand how these systems work, especially generative AI: ask it questions about something that you know really well, and you'll start to see those gaps there.
[25:43] You're like, wait, what did it say? Oh, that's not right. But that's what you really need. That's what you need a human for, to be able to say, okay, well, this part is right.
[25:51] This part is wrong. Like, they missed this. They did that. And so having that wisdom of humans is very important, because you can't really just let the AI run off on its own.
[26:04] The systems want to give you an answer regardless of whether it is right or not. So you need a human to know, to judge, to be able to provide their own wisdom and knowledge and experience, to be able to say, okay, this is right and this is wrong, or this is how we need to use it,
[26:21] or this is how we shouldn't use it.
[26:23] Pamela Isom: That goes along with your final words of wisdom. Not your final words, but your last few words here of wisdom. I appreciate you being here and talking to me again, and I am a big fan.
[26:38] So thank you, Debbie, for being here.