AI or Not

E001 - AI or Not – Debbie Reynolds and Pamela Isom

Pamela Isom Season 1 Episode 1

Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

In this episode, we are thrilled to welcome Debbie Reynolds, "The Data Diva," to discuss the nuances of data privacy and governance in the rapidly evolving world of artificial intelligence. A trailblazer who transitioned from library science to becoming a tech titan, Debbie shares her deep insights on the complex interplay between AI and ethical responsibility. We explore her career journey, from her initial encounters with privacy issues that catapulted her to becoming a revered advisor for industry giants, to her current role.

The discussion unfolds as a treasure trove of insights for businesses on the cusp of AI integration. Debbie explains how AI can be effectively harnessed for low-risk tasks to avoid regulatory pitfalls, while high-stakes applications are managed to the highest ethical standards. We delve into common misconceptions about AI, highlighting that many errors attributed to the technology are, in fact, human oversights, and emphasize the crucial role of rigorous testing and a clear understanding of the limits of AI tools.

As our conversation reaches its peak, we focus on the critical aspects of AI governance amidst the digital transformation of business. Debbie underscores the importance of a robust governance framework and asset management and stresses the need for continuous vigilance and expertise in an era of rapidly advancing AI systems. By the end of this episode, listeners will gain a nuanced understanding of the delicate balance required to manage the disruptive capabilities of AI with the seasoned wisdom of human oversight, all under the vigilant guidelines of regulations like the European AI Act. Tune in to elevate your perspective on the global impact and ethical considerations of AI governance.

Pamela Isom:

This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or other professional advice, nor official statements by their organizations. Guest views may not be those of the host. Hello and welcome to AI or Not, a podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey. My name is Pamela Isom, and I'm your podcast host. We have a special guest with us today: Debbie Reynolds. Debbie is founder, chief executive officer, and chief privacy officer of Debbie Reynolds Consulting. Debbie, I want to welcome you to the podcast. I'm going to turn it over to you and ask you to share a little bit on your background, as much as you want, to tell us about your journey into entrepreneurship, which is pretty exciting, and to tell us more about how you got into the field of data privacy and governance.

Debbie Reynolds:

Wow, that's a lot. First of all, thank you so much for having me on the show. You and I know each other. You've actually been on my podcast, and I'm so impressed with you and so happy that you invited me on to be a guest. You said it right: I'm the CEO of Debbie Reynolds Consulting.

Debbie Reynolds:

I'm the CEO and Chief Data Privacy Officer. My technology journey started in library science, so I was creating databases. Pamela, you're probably too young to remember this: when libraries had card catalogs, I was turning those into databases. I've been a technologist by trade. I've helped a lot of companies with digital transformation. Back then it was just called transformation, so a lot of it was going from analog to digital, and I built a lot of data sets, databases, and data systems. That's why I call myself the Data Diva. During that time, a parallel thing was happening: this was around the time that the commercial internet was just getting started, and I developed a personal interest in privacy from a book that I read. It was a book that my mother had called The Right to Privacy.

Debbie Reynolds:

And she was interested in it, and it got me interested in it. When I read the book, I was shocked, because I think in the US we think we're the land of the free, the home of the brave, but privacy isn't in the Constitution. What the book was really about was the legal loopholes around privacy. And so over the years, as technologies continued to develop, I saw those gaps, those cracks that were beginning to form, and I thought, oh wow, it's going to be crazy.

Debbie Reynolds:

And then, as I continued to do my work in the digital transformation and technology space, a lot of people who knew me from the big corporations I was helping with these projects started calling me up, saying, hey, I know you know this privacy stuff, because I was moving data all over the world and you have to know the laws in certain jurisdictions to make sure you're not doing illegal stuff with data. One of the first companies to call me up was McDonald's Corporation. So I went to speak to their global legal department in Oak Brook, Illinois. That's where their headquarters were; I think they're in Chicago now. I spoke in a Big Mac room, a room actually shaped like a Big Mac, and I talked with them about privacy.

Debbie Reynolds:

And this was before a lot of the European regulations started to come out. I was one of the only people in the US at that time who was really talking about this. I'm like, hey, you all need to pay attention, it's going to be a huge deal. And then eventually PBS called me when that law came out, and I spoke on PBS about it. People still call me about that, but I think that interview on PBS was the start of my entrepreneurial journey. I was like, well, if so many people are asking me about this, maybe I should turn this into a business. So that's how my personal interest in privacy joined with my technical work, and that's how I started my business.

Pamela Isom:

Well, that's exciting. So PBS, huh?

Debbie Reynolds:

Yep. People still call me about that interview. They're like, oh, I saw you on PBS, or they laugh because some of the things I said actually came true in that video.

Pamela Isom:

So, yeah. Tell me more about governance. How does governance play into all of that?

Debbie Reynolds:

Governance is vital. Governance is core to what you're supposed to be doing with data. What we've seen a lot of times with corporations is that they just grab data, and it gets duplicated and spread around and used for different purposes. Obviously there may be some good business reasons to do that, but a lot of times, unfortunately, companies haven't spent enough money or time or attention on the governance part. It's not just making sure, once the data gets in, who has the right access; you obviously have to do that. But especially when you're talking about AI, you get into issues of data provenance, meaning, do I even have a legal right to have this data in the first place? And then lineage: not only where the data goes once you obtain it, but where it came from in the first place. That goes back to those legal rights issues.

Debbie Reynolds:

I think there's going to be more focus now, because of artificial intelligence, on governance, and I'm happy about that. I love governance people because they really are the watch, I would say, for data within organizations. So I'm hoping that as companies move toward more digital transformation and introduce AI, which, as you know, brings more complexity to organizations, they see that having a strong foundation in governance is vital. I don't think that you can really implement an AI system without having really good governance. So if you don't have good governance, you probably can't use AI.

Pamela Isom:

Yeah, you're right, and I also use the expression ethical governance, because it's not something we can take for granted anymore. Governance can be not so good, and it can also be unethical. So I typically say that we need ethical governance on top of governance in general, because you can say you have governance, really express that you have it, and still be just as crooked as they come. So I believe in ethical governance, especially in the day and time of data management and artificial intelligence. What do you think about that?

Debbie Reynolds:

I agree with that. I actually saw a guy who did a post. He was a lawyer, a partner at a law firm. Maybe I should not have been surprised by his stance. He had made the statement: you can do whatever you want, as long as it's not illegal. That's so unethical. Do you get customers by telling them, hey, we're going to just do whatever we want with your data?

Debbie Reynolds:

A lot of ethics, to me, is about things for which laws may not even exist, where you need to be able to take a stand and say, hey, this is who we are, this is what we stand for, this is how we're going to use your data. So part of that ethics, to me, is transparency. Companies have not, up to this point, been transparent about data. A lot of that has been by design. A lot of the tools that we use are very much black boxes; they are not made to be transparent. So I think it's going to take some time for companies to really develop that muscle of being transparent about what they're doing, and hopefully the tools will create a situation that helps companies be more transparent. Right now, for example, I saw a study or an article where doctors were running X-rays through an AI system, and they were saying, well, the system somehow could magically tell whether someone was Black or not, just from an X-ray.

Debbie Reynolds:

And they were like, well, we don't know why that happened. I'm like, that is the wrong answer. If you don't know why something's happening, you don't need to be using that tool. Just like if you went to Costco and ordered a cake for your kid, and you went to pick up the cake and they gave you a casserole, you'd be like, wait a minute, that's not what I ordered, right? So in an AI world, if you're getting unexpected results, you have a problem. You need to go back to the drawing board, because you should be able to ask AI to do a particular task and it should give you the answer that you're looking for. And if the result is unexpected, that may mean something's broken down within that system or that process.

Pamela Isom:

I agree with that. I think one of the challenges in industry today is, number one, making data accessible, and then, number two, being concerned that if you make the data accessible, you may be liable. But if you're going to be using AI, and AI is pervasive, it's in everything, then we have to get a better understanding of how AI will use the data. How will the data be used from an AI perspective? There's a balance that my customers are struggling with, which is: how do I make information available without violating any rules, while also looking at the AI and how it is going to be using the information? That balance has to be thought through, and it goes back to governance. So I think that those that are handling our information, those that are providing the algorithms, those that are suppliers of AI solutions, should tell us how they're going to be using the data. They should tell us how data is going to be used, how data is going to be shared, whether the data is only going to remain in a certain location, and what that really means. I always have conversations with folks about that, because chief AI officers are in industry and it's a big deal right now, but do they really understand what that entails? Are we talking to the procurement teams to help them understand that these are things we should be looking for in AI suppliers? I don't think so. I think there's opportunity to do more in that space, and an area of utmost importance goes back to the data: how are they going to be handling and managing our data, and how are they going to have conversations with the leaders in organizations who are responsible for ensuring that our data is safe? So I'm just going to say this, and then I have another question for you.

Pamela Isom:

I went to the doctor recently, and it really disturbed me that they gave me the results of my records on a CD. Who uses CDs anymore? I feel like they did it because the burden then becomes my burden to share my information, which no one can read because it's on a CD; I couldn't even read it. So I think that's what made me realize that they are struggling with how to share information and what information is okay to share, and I felt like the burden became mine to figure out how to get this to my doctor. You know what I mean? So what's your perspective on that?

Debbie Reynolds:

I think you're right.

Debbie Reynolds:

We've been talking for many years about medicine moving to electronic records and things like that, and a lot of it is still paper, and a lot of medical processes still run on faxes.

Debbie Reynolds:

When was the last time you even saw a fax machine? Unfortunately, they still use them, so they have a long way to go on digital transformation, for sure. And I think you're right about the burden shifting to you. To me, that's one of the big problems with AI and a lot of this digital transformation: we still have people who are using ABC123 passwords and who think they have a Nigerian uncle that's going to give them an inheritance, right? And now we're going to bring in these complex data systems that people really don't understand, so it's going to be incumbent upon leaders, if they want to earn people's trust, to understand what these tools are doing and to be able to answer the questions that people have about them. And they need to be looking at it anyway, because depending on the type of data they're using, there may be a high risk of harm for humans. So that's going to be critical in the future.

Pamela Isom:

Yeah, okay, so let's talk a little bit more about AI and digital transformation. So what do you see as good use cases of AI and then where do you see some issues?

Debbie Reynolds:

I guess I have a couple of answers to this. Good use cases, for me, are those low-risk, low-stakes cases. I speak to companies all over the world, so I've spoken to Coca-Cola, PayPal, Uber, Johnson & Johnson, and the thing I tell them all is: hey, think about the things that you do right now that you don't like. Those tedious tasks are lower level, lower risk; no regulator is going to knock on your door if you automate some of those types of things. Start there to see how you can apply AI, and once you have success there, look at maybe higher-stakes use cases. But unfortunately, for whatever reason, I see a lot of companies going for those more dystopian use cases, like using AI on evidence in court cases. I saw that recently, where someone was like, oh, we're going to take a cell phone video and enhance it with AI and use it as evidence, and the judge was like, no, you're not going to do that, because the AI has changed the evidence, and that's the whole point. You see crime shows where people with plastic bags pick things up off the floor because they have to maintain a certain level of accuracy and chain of custody, and you lose that once you're scrambling data with AI.

Debbie Reynolds:

I'm more leaning towards liking to see companies do those lower-risk types of things with AI, because that's what I really think it is suited for. I've heard people say artificial intelligence is like fire, it's like the invention of the internet, it's like the iPhone. I think of artificial intelligence as a washing machine. It's a tool that will probably make your life easier if you can find the right way to use it, and it will free up more time for you to do other things that you're more interested in. But I don't think it's going to cure cancer, and I also don't think it's going to end the world in two years, and those are both things that I've actually seen in press articles.

Pamela Isom:

So your take is: if it's impacting human life or human safety, steer away from it. And if it's more redundant tasks, things to help you with your day-to-day work that aren't really impacting life, what you call low risk, then that's a good use case for AI.

Debbie Reynolds:

I think so. You can't stop people from trying to do these high-risk things, but it creates more complexity and requires more guardrails around what you're doing with AI. Unfortunately, the habit people have is: let's take this huge bucket of data we have and throw it into an AI system. Especially if you're dealing with data that may create a high risk of harm for people, like their personally identifiable information or their health or financial information, that may not be the right thing. So maybe come up with a more discrete data set that has been vetted before it goes into AI systems.

Debbie Reynolds:

And I want your thoughts on this; I'm asking you questions now. It's about expert systems. I actually had a guy ask me about this at a federal conference where I was talking about AI. He was saying, hey, we have this expert system, and we put this stuff in there. It was medical, and it thought that men could be pregnant based on some of the data. And I'm like, well, if this is an expert system, the data going in should be set up to know a man from a woman and that a man cannot be pregnant. That tells me you have a data problem; you didn't have an AI problem. I mean, it was just dumb from the start; you didn't have the right thing to begin with. But that's my thought.

Pamela Isom:

Yeah. So one of the things, and I don't know, I'm eventually going to say this louder, is that we need to stop blaming AI. Don't start blaming AI because of your negligence. There are some common things that we should be doing, and we don't always do them. AI is not going to be a scapegoat. It's not going to be the fall guy or gal. It's a machine, and you see this happening more and more.

Pamela Isom:

And I think the carelessness, like in the situation you mentioned a moment ago, where AI was used without really thinking it through, gets blamed on the AI when you know it's not the AI's fault. There are a couple of examples I'll use. Take the chatbot. There was a situation that's all over the news pertaining to Air Canada, a bereavement fare, and a chatbot, and this came from not testing scenarios. That's what this was. It's not the AI's fault, it's not the bot's fault; this is inadequate governance. The chatbot told the customer to get the ticket because they would get refunded, as long as they requested it within 90 days or so. At least that's what I read. And then, when the company didn't want to reimburse them properly, the individual filed a matter with the small claims court, and the company ended up having to pay them for their inconvenience and reimburse them for the ticket.

Pamela Isom:

So in that situation, it's not necessarily because of the data, but because the company didn't test the bot properly and didn't thoroughly guide it on what to do based on the questions it's asked. What I always say is: those tools are really cool. I had thought about putting one on my website, and I took it down immediately, because when I went out there and tested it, I didn't like the results I was seeing, so my team had to take it down. Business leaders, business owners, people in general have to think about these things: what kind of responses do we want those bots to give? And talk about expert systems: expert systems are, if you have this situation, this is what happens. So you have to think through those scenarios. A man is not pregnant.

Debbie Reynolds:

Right, you should know that before the data goes into the system.

Pamela Isom:

That's how it is. So we have to be careful with the type of data, true enough. And you can't call that an expert system. That's not an expert system.

Debbie Reynolds:

That's not how they work. No, I know. When the guy asked me that, I wanted to slap him backhand. Oh my goodness, I couldn't believe it. Exactly.

Debbie Reynolds:

With your Air Canada example, what the judge said in that case is that the bot basically created a contract with that language, and the company had to honor it because it was their bot. Like you said, they didn't test it, they didn't give it any rules; they just set it loose and it did whatever it wanted to do. I think companies need to really think that through before they set these chatbots free and just let them do anything.

Pamela Isom:

Yeah, or AI models in general.

Pamela Isom:

When you're using AI models, generative AI: when I use generative AI in a lot of the teaching and training that I provide, I tell the students in the class, hey, this is a way to help accelerate research activities. And sometimes they ask, well, when are we going to get to the place where we don't have to double-check? Well, when I onboard people to help me with things, I don't micromanage, but I double-check. Why? Because I'm accountable. And so once we understand, and this whole conversation keeps going back to governance, once we understand accountability, who's really accountable, and that we're not going to be able to pass accountability over to a machine, then I think we will get better at governance. I'm hoping that's what happens in the day and time of AI. I just want to go back to privacy. Do you think AI can amplify goodness when it comes to privacy? Or, if not, what might we consider so that we don't shy away from using AI because of privacy concerns?

Debbie Reynolds:

That's an excellent question. I guess I have a two-part answer. There are tools that use AI to deliver a particular product or service or some other discrete thing. For example, there are privacy-enhancing technologies that use AI to actually help preserve privacy. To me, those are great use cases, and they tend to be more discrete and narrow in how they use AI. That's the good side. The bad side, and I see more of the bad than the good, is how some people are trying to use AI systems.

Debbie Reynolds:

You can't just take a bucket of everything and throw it into an AI tool and think everything's going to be okay. There has to be governance. There has to be structure to the data. You have to know what your problem is. What AI does with personal data is amplify the risk: maybe someone's seeing data that they shouldn't see, or there isn't governance around who needs to access data or what you're going to be able to do with that data. So companies really need to be very careful about the type of private or personally identifiable information that they put into AI tools. If, for example, someone's health or financial data is in the AI system, you have to have really, really tight governance on what goes in, why it goes in, and who sees what.

Debbie Reynolds:

Another thing you have to really think about, especially in more highly regulated industries like finance and health: let's say you have some unstructured data in your company, and you say, hey, I'm going to use this generative AI tool, I'm going to throw all this data in there, and everybody in the company can start using it. Now, let's say someone in accounting has access to something that was maybe someone's financial or health record, and they were never supposed to see that. In some places that can be considered a breach, or at least a reportable incident. We see those things every day, even without AI.

Debbie Reynolds:

A famous person goes to the hospital, and someone who wasn't supposed to see their medical record looks at it. That's happening in regular systems now. But now you have AI, you have people throwing all kinds of information into big buckets, and then you don't know what people are seeing or what they're doing with that stuff. A lot of times you don't even know the prompts that are being used. You don't know the outputs and what's happening with them. So I think there has to be a lot more governance, a lot more control, and a lot more thought about data, as opposed to just throwing it into an AI system and hoping for the best.

Pamela Isom:

What do you see as the future of digital transformation in the AI era?

Debbie Reynolds:

Yeah, well, digital transformation in the AI era will continue to move rapidly. As you know, there's a lot of talk, a lot of money, tons of money and interest in AI. Over the last several years, we've seen these AI systems changing very rapidly. And because the money is there, because the attention is there, companies really want to use it. Some companies buy AI tools and don't even know what they want to do with them, which is a problem. So I think the digital transformation journey will be tough.

Debbie Reynolds:

I think digital transformation is hard anyway, but then you add this whole new level of complexity. The thing that is going to be hard for companies is not only the expense of the tool; they have to really do more training, and they need more experts in the mix. It's not a set-it-and-forget-it type of thing. If you're going to be using these tools, you can't be like, okay, let's set this up, and then go about your business. There has to be monitoring, there has to be checking. Just because you set it up correctly doesn't mean your AI system can't end up off the rails. Some companies aren't really thinking it all the way through: you're going to need people who understand those systems, and if you want your employees to use them, your employees need training.

Debbie Reynolds:

The systems are changing very rapidly. Before, a traditional system that you use within your company might not have had a major change in two or three years, maybe incremental ones. These AI systems we're seeing are changing every month, every two months, every six months. There are things that some of these AI tools couldn't do last year that they can do now. So you have to change the way that you think about training.

Debbie Reynolds:

It may require more training, more incremental training, more people looking at these tools and what they're doing, figuring out which features work best or not and which data sets go in. So I think it's just creating more complexity for companies, but if they really want to take on that task, they definitely should. The companies that will win the day will be the ones who have better governance. I tell people, if you know that your governance is not very good and you don't know where stuff is, don't even bother trying to do AI, because you're just a mess; you need to get the foundation right. If you have a good governance foundation with your data, you know where your data is, you know who's supposed to be looking at it, all those things, then that's a great foundation for doing these higher-level things.

Pamela Isom:

Yeah, I like that, because I think one of the things you're pointing out here that we need is asset inventory: understanding what your assets are and where they reside. And the other area is what we do when the assets are no longer to be used, so disposal or retiring the assets. I do hear more nowadays about asset inventory, and of course I always talk about the chief AI officers, because that's one of the things they should be doing. But security officers do it as well, right? We're making sure that we understand our assets. Data officers do it as well. We want to understand our assets, know where they are, and know who the users are, the personas, et cetera. But the part that I don't hear a lot about when it comes to AI governance and data governance is disposal. The disposal and the retirement: what are those processes, and how do we keep them current in the day and time of AI?

Pamela Isom:

I was talking to someone recently about security plans. We're not going to go into security here, but it is a part of governance. Do you have a system security plan, and how is it evolving, considering that AI is in the middle of your digital transformation activities? We should be thinking about how we evolve these fundamentals. Those are some of the fundamental things that we learn as a part of IT, and history is repeating itself; there's just a new player, a new actor in the game. And I think that's what I was hearing you say: go back to some of the fundamentals. I heard you mention data lineage, provenance, and asset inventory, and those are some of the fundamentals that we need to know.

Pamela Isom:

But we have to know what our assets are, especially those AI models, because, like when cloud computing came out, anybody can go and acquire an AI asset, get it from the app store. So what are our policies? What are our rules? And once we agree that these tools can be used, which isn't always agreed, right? Some organizations don't allow generative AI tools, et cetera. But once we agree, then the question becomes: when we decide that there's a new version, or we're ready to stop using them, how do we handle the disposal, the retiring of the tools? So I think that's something we want to make sure we include in the governance playbook.

Pamela Isom:

So here is a question that I have for you, for our listeners: to help listeners understand the convergence of digital transformation and human wisdom, and to give insights so that businesses are successful and we're keeping up with the times, keeping up with an emerging landscape, whether it be AI or other emerging tech, as part of our digital transformation efforts. Considering all of that, and considering our discussion today, do you have any words of wisdom for us on this call, or any advice that you would give to those who are listening?

Debbie Reynolds:

Very good question. I guess there are two things I want to say. One is that artificial intelligence will have a horizontal impact on almost any business. We've seen that horizontal impact in security, we're now seeing it in privacy, and AI is going to take a similar track, I believe. So it will be very disruptive and transformative for organizations as they're able to leverage those AI tools. On the flip side, we have to make sure that we're not abdicating our human judgment, responsibility, and accountability when we're using AI tools.

Debbie Reynolds:

There was a lawyer, actually more than one; this happened several times, in a couple of states, where a lawyer used generative AI to create a brief for a court, and it cited cases that did not exist. And they said, well, I thought the tool was like the internet; I thought it did things correctly because it did it so fast, and it looked great, it was formatted the way it's supposed to be. But to me that's a cop-out. Hey, you as the lawyer, first of all, I'm sure your client wouldn't be happy to be charged $800 an hour for you to go on a $20 tool on the internet and do a brief. And the second thing is, you're still responsible. You're still accountable for that information, and you always were.

Debbie Reynolds:

Even before AI, before computers, people were checking cited cases and making sure that they were accurate. So you have to check. AI may help you part of the way, but you can't think it's going to do everything, and as the user, that accountability, that responsibility, still falls on you. Use it for the things it's good at, but don't let it be in the driver's seat in terms of what you do with that tool. We as humans have to be the ones in the driver's seat, in my view.

Pamela Isom:

And I would say that I agree with you. I would say that the European AI Act does a pretty good job of categorizing risk types. As a quick reference, it's a European policy with global reach, and it still has some approvals to go through, I think, at this point, but it does a really good job of categorizing the various risk types. So if one wanted to get a sense of what is considered a high-risk item, what is considered low risk, what is no risk at all, and what people are thinking, I think it might be a good idea for listeners to check that out. Whether it's a European act or not, it's got some good insights pertaining to how to categorize the different risk types. But I agree with you, and I do think that we human beings should be responsible for the decisions that we make pertaining to AI adoption and integration. So is there anything else you want to add?

Debbie Reynolds:

No, I agree with you. I highly recommend people check out the AI Act. I think it's around 500 pages so far, so I'm sure there will be digests coming out about it, but I think that act will be very influential worldwide, just as the General Data Protection Regulation was in 2018, which I talked about on PBS. People just couldn't believe it. They were like, well, how is a law in Europe going to have an impact here? We're seeing the impact of that on privacy globally, so I think the AI Act will similarly have an impact globally. And, understanding, like you say, the different risk levels of different things, those things will flow through, and we'll start to see them show up in other laws, regulations, and even standards that come out in the future.

Pamela Isom:

Okay, well, Debbie, I am really happy that you were able to partake in this session today. This is the first episode, and it's exciting. I'm glad to have you as the first guest; it's an honor to have you here. I appreciate you being on the podcast, and I look forward to continued collaboration. Thank you very much for your insights and your wisdom.

Debbie Reynolds:

Thank you so much. Thank you for having me on the show. This was wonderful.

Pamela Isom:

Excellent. Thank you. You're welcome.