
AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E029 - AI or Not - Dominique Shelton-Leipzig and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
Join us for a compelling conversation with Dominique Shelton-Leipzig, the visionary founder and CEO of Global Data Innovation Inc. Dominique’s journey from her academic roots at Brown University and Georgetown to her impactful career in tech litigation is nothing short of inspiring. Her experiences with titans like Yahoo and PeopleSoft have equipped her with unparalleled insights into the world of privacy law and cybersecurity. As Dominique embarks on her venture with Global Data Innovation, she shares her mission to lead organizations through the challenges of AI and digital transformation, emphasizing the critical need for effective communication in cybersecurity.
Discover the complexities of AI governance as we discuss the evolving regulations and their profound implications for businesses. The stakes couldn’t be higher; maintaining trust and accuracy in AI systems is not just good practice but essential for protecting assets and fostering innovation. Through real-world stories, such as a logistics company facing backlash from an AI error, we underscore the importance of continuous oversight and ethical compliance in AI deployment. By prioritizing accuracy, fairness, and trust, companies can navigate the challenging landscape of AI regulations effectively.
The episode also highlights the immediate impact of the EU AI Act and the importance of AI governance. With a growing list of prohibited applications and dynamic legislative changes, staying informed is crucial. We delve into the five key areas of trust in AI, offering guidance to prevent damaging headlines and ensure compliance with global standards. Our challenge to listeners: take actionable steps within their organizations by asking critical questions around regulation, compliance, and communication. Engage with us to gain valuable insights into successfully navigating the AI and digital landscape.
[00:00] Pamela Isom: This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal advice, nor health, tax, or other professional advice, nor official statements by their organizations.
[00:40] Guest views may not be those of the host.
[00:48] Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey.
[01:05] I am Pamela Isom and I am your podcast host. I'm so happy today that we have a special guest with us, Dominique Shelton-Leipzig. Dominique is founder and CEO of Global Data Innovation, Inc.
[01:21] She's a CEO and board advisor on AI, privacy, and cyber. She's a Forbes 50 Over 50 honoree, and she's a speaker and an author. And there's so much more, so I'm going to ask Dominique to fill us in.
[01:40] But Dominique, welcome to AI Or Not.
[01:42] Dominique Shelton-Leipzig: Thank you so much for having me, Pamela. I'm delighted to be here.
[01:47] Pamela Isom: Thank you so much. So please tell me more about you, your career journey, your trajectory, and if I left off anything, tell us all about your credentials.
[01:58] Dominique Shelton-Leipzig: Thank you so much, Pamela. And first of all, I'm so honored to be here with you for all that you have accomplished and pioneered as it relates to data leadership. So this being here today really means a lot to me in terms of my personal journey.
[02:14] I'm from Los Angeles, and I went to undergrad at Brown University. I was very interested in international relations, diplomacy, and global affairs. And so I graduated from Brown in 1988 with a double major in international relations and French civilization.
[02:35] I actually spent a year studying abroad in France, that's how committed I was to the global perspective. And then I went and got my law degree at Georgetown and graduated in 1991.
[02:49] And I went to Georgetown because I was so interested in global affairs.
[02:52] Along the way, though, just as life has its surprises, I learned something back then. It's changed now, but back then, in order to do litigation and international work, we're talking about the International Court at The Hague and international public law, things like the genocide in Rwanda and those sorts of things were on the table then.
[03:15] And I knew that I wanted to do litigation. And I felt like before I would do international public law, it'd be really great for me to understand business as well.
[03:25] So I decided to go to a big law firm. Ended up doing that for 33 years before I started my company. But just one more thing on school while I was at Georgetown, very much focused on global issues.
[03:40] I did take a course that ended up being my favorite course in three years, and it was Privacy in American Law, taught by Professor Anita Allen, a fabulous professor who is, I think, now vice provost or provost at Penn Law.
[03:55] But at the time she was on loan from Harvard to Georgetown. And I'll just mention, since it's Black History Month, she's an amazing African American woman leader in privacy.
[04:07] And I just loved the substance of that course; everything about it I just really enjoyed. Of course, this is pre-Internet at the time, at least pre-public Internet. Then for seven years I was really not working on privacy or tech or anything, just general commercial litigation, IP, trademark, unfair competition, that kind of thing.
[04:29] And I then started working for a boutique law firm that did a lot of work for a startup back then, in 1998, which was Yahoo, because one of our associates had left and gone there.
[04:41] And so I stayed at that firm for 10 years, made partner in 2001, and stayed until 2007. We didn't think about it, but we also did a lot of work with a number of tech companies, PeopleSoft's acquisition by Oracle, and then brick-and-mortar companies that were getting into digital apps in America, enterprise products, et cetera.
[05:01] So all of us were privacy and tech litigators. We didn't think about it at the time, but that's what we were doing.
[05:06] And I just found it fascinating that just being on the cutting edge with all of our clients as they navigated these issues that were matters of first impression and figuring out from the litigation what are the lessons learned, what can we put into practice that will protect the client going forward?
[05:24] Working on the very first privacy policies in the aftermath of litigation so they could be protective blankets for the companies.
[05:32] One of the things that I noticed, Pam, in 2001, when I made partner at that little tiny boutique, there were just 70 attorneys, and I made equity partner for the first time anywhere.
[05:43] And I was so excited. We were working with a lot of IT professionals, privacy and legal. And we knew back then, with all of the frameworks out there, NIST 800-53, ISO, we knew precisely how to prevent cybercrime from destabilizing our markets and our business community.
[06:06] We knew something they called defense in depth, and now we call it cyber preparedness. We basically knew how to protect our companies, not from the criminals coming in, we knew that would happen, but from having it be so disruptive that it got in the way of revenue, companies' operations, and strategies.
[06:24] And as I was working with the IT managers and legal, I kept thinking, even as a baby partner, 10 years out of school, someone's going to tell our CEO and board community how straightforward it is to put in six key steps to avoid harms from cybercrime.
[06:42] I just thought somebody was doing it. And holy moly.
[06:46] Last year cybercrime cost our global economy $9.5 trillion.
[06:51] So no one ever did. And now the same thing is coming up with AI. I left that firm in 2007 and then founded the privacy, ad tech, and data management teams at multiple global Am Law 50 law firms, most recently a global law firm.
[07:10] That was my most recent role before I started the company on November 1, 2024; my last day at that firm was October 31, 2024. But what I realized was that our CEO and board community are not just going to automatically understand when they're at a fork in the road.
[07:31] And we are at a fork in the road when it comes to AI right now. And I just realized, oh, we're going to do cyber all over again, but in spades, on something so important that it involves education, finance, and healthcare.
[07:47] So that's why I started my company: to make sure that this time our CEO and board community are not caught off guard and flat-footed, that they have the information they need to exercise data leadership.
[08:00] Pamela Isom: Well, that's just fascinating. I'm so thankful that you are doing the things that you're doing with the mindset and the mentality to prevent as much as possible a recurrence of history.
[08:19] I mean, it's hard because breaches are pervasive.
[08:26] AI is pervasive.
[08:28] The evaluation of AI and the study of AI and the data associated with it to understand how to amplify adversarial activity is pervasive. Right.
[08:44] Some people just thrive on this. So I appreciate what you're doing and that makes me think about our conversation. So there's a couple of things I think would be helpful for the listeners.
[08:56] One, let's talk about some legalities, like where are we with regulations? In the United States we have the executive orders, and there was an executive order before that.
[09:14] I was involved with and utilized it under the first Trump administration. Then the Biden administration had its executive order, and then people got really nervous because now the new administration has revoked that executive order.
[09:32] And what's your take on all this? Should we be concerned or what? What do we need to be doing? What do you think about that?
[09:41] Dominique Shelton-Leipzig: Thanks, Pamela. This is such an important question, and it's one that a lot of people ask, and I want to put it in context for everyone.
[09:51] Some of this information is on my website. But there are 100 countries on six continents that have aligned on what trustworthy AI looks like. What do I mean by trustworthy? Accurate.
[10:07] How do we make sure our AI is accurate, especially when it's going to be inserted, and it already is, in every aspect of our lives? Let me personalize this for people and take it outside of the technical.
[10:22] Would you want to know if an AI was inaccurate and it's diagnosing a loved one for a potential life threatening disease?
[10:33] Would you want to know if the AI were drifting, if it is involved in transporting your children to school? Would you want to know that the AI is accurate if it's assessing children in your family for whether they proceed with their education or whether they go to a behavioral program for kids that have behavioral issues?
[11:00] What if the AI is wrong in that? What about our elders? Okay, anybody taking care of seniors? Would you want to know if the AI just misidentified your loved one looking for care and didn't understand them reaching out in the middle of the night?
[11:19] You want to know that, right? If the AI is not picking up these things, that is why this is personal to all of us. It is not just business. It's not set and forget.
[11:31] This is our lives, and we have a say in making sure that the things that we are using are accurate. And it's not only personal. I've represented companies my whole career and I'm doing that now,
[11:45] Fortune 500 companies with a market cap of well over $7 trillion. And it's not in any company's interest either for the AI to be inaccurate, because companies that have inaccuracy or quality control issues are 400% less successful than those who don't.
[12:08] So it's the fork in the road and the steps necessary. Let me just say, regardless of whether the US joins these 100 countries on six continents that have their legislation at the federal level, or it doesn't,
[12:27] The point is most of our companies want to get to scale so they can be global.
[12:33] So you don't want to be creating any AI, for example, that's going to be prohibited in a major market like Europe, and you want to be following the protocols that have been set up in growth areas where the population is growing.
[12:46] Africa, Asia, hello business imperative. But it's also very personal to individuals and all of us are concerned.
[12:55] Pamela Isom: I mean, is your message here that we should be thinking about it more from a personal perspective and understand that when you think about it from that respect, it naturally folds into good business practices?
[13:09] Dominique Shelton-Leipzig: Precisely.
[13:11] There's not a CEO of a Fortune 500 or S&P company in the world today that doesn't have an idea of what an accurate product looks like in their company.
[13:26] You probably have very specific standards. Our CEO and board members who are listening, you know exactly what accuracy looks like for your company. This is how we want people to talk to our customers.
[13:37] This is what we want our product to look like. That's how you got to your brand name. And the only thing I'm saying is something the legal frameworks give you guidance on,
[13:47] but you don't need a legal framework to tell you this: the AI needs to know those standards too, if it's going to represent your company.
[13:54] Pamela Isom: So basically, irrespective of what's happening in Washington, irrespective of that, do the things that we know we need to do in order to protect our assets and in order to take care of what we need to take care of.
[14:17] Right. So remember to just take care of the things that we know are fundamental and critical to us and let that be the thing that guides the conversation and the direction.
[14:30] And then what Washington is doing is kind of like, okay, but you do what you know needs to be done. Because your point about accuracy is so important.
[14:41] Dominique Shelton-Leipzig: Exactly. And I will say, as we were discussing earlier, in the original Trump administration, Trump 1.0,
[14:49] frankly, the concepts of trust, accuracy, and fairness were contained there. They were built out just slightly more in the Biden order. So it will be interesting to see what happens. I did read the other executive order that came out in January, you know, to unleash innovation.
[15:07] But of course you can't unleash innovation if the innovation is inaccurate all over the place.
[15:14] Pamela Isom: Because it's not innovation.
[15:16] Dominique Shelton-Leipzig: Yeah. You can unleash it, but people are not going to use it.
[15:19] Pamela Isom: Right.
[15:20] Dominique Shelton-Leipzig: Let me just give you an example, and I'll go from something relatively minor to something more important. So, a major logistics company in London, a global outfit, and they deliver packages very similar to what we have here in the US: you have a tracking number, they hand the package over to someone, et cetera.
[15:43] So when ChatGPT-4 became commercialized, they were one of the first adopters, and they trained an application on their own data, the types of questions their own customers ask all the time.
[15:54] For eight months, it's working beautifully. They're even able to give people additional tasks and move them away from the call desk to some other higher-level work for the company. So everything is humming along, and suddenly the CEO wakes up to the name of the company trending on X, formerly Twitter: 300,000 views went up to 2.2 million and counting, still going viral.
[16:17] Because what happened that morning? A customer asked a very basic question that the chatbot had answered beautifully many times: where's my package, here's my tracking number, when can I expect it, et cetera.
[16:29] And suddenly, even though this chatbot had answered thousands of questions beautifully in the tone of the company, et cetera, suddenly the chatbot started cursing at the customer, criticizing the company as, quote unquote, the worst logistics company in the world.
[16:48] It also blamed the company for letting go of all of its real customer service employees and leaving the customer with this, quote unquote, useless chatbot.
[16:59] I'm paraphrasing because it went on for a while to the point where the customer was able to pull out his phone and tape all of the cursing, etc. That the chatbot was engaged in.
[17:12] And this is why I believe in our people, in our society right now.
[17:17] The first thing the customer did was not go to X and put the company on blast. Instead, he sent the video he'd taken on his phone to the email address that the company had put next to the chatbot and said, here's a tape of the cursing I received and the criticizing of your company that occurred; I wanted you to know, because I'm pretty sure you would not want this, for others, et cetera.
[17:49] But that was an email address that the company did not look at on a regular basis for crisis management. It was just like, you know, a contact at blah, blah, blah, the company put that email next to the chatbot.
[18:01] So the customer contacted the company where the company said to be contacted, and nobody looked at it for 48 hours. So then the customer jumped onto X viral views. And by the way, just think about how that CEO and the board felt and the AI governance team that had worked really hard at that company up to this point, to see the company in headlines and Time Inc.
[18:25] BBC, Guardian, fiasco with chatbot, et cetera, cursing out, that's what they're known for.
[18:32] Pamela Isom: Instead of wow, Instead of all the good, that's what they're known for.
[18:37] Dominique Shelton-Leipzig: And to your point, Pamela, instead of all the good. And so this is really about getting companies back onto their purpose, the good purpose that they have versus AI incidents that they're trying to tamp down.
[18:50] Just like cyber, we can get to the crux of the matter early. Get the governance in there so you are not a headline. And that's my message to everyone. It's not a check the box exercise.
[19:02] There's not a law that says right now that that company can't have their chatbot cursing at customers. But they have standards that they want to implement; this is how we talk to customers.
[19:13] So they ought to be able to monitor that. Just imagine, let's replay this whole scenario, going to the six steps to avoid blind spots that you and I were talking about before the show.
[19:24] What if someone had gone to the customer service supervisor who trains humans and said, what are our no nos? What do we never do with our customers?
[19:35] And what if someone had coded that in terms of ones and zeros that the AI can understand into the AI application itself?
[19:44] So when the AI drifts, not if, but when it moves, when it learns how to curse, et cetera, you know about it immediately. What about that? Taking it from reactive to proactive?
[19:59] Because what if, in that scenario, the company had been alerted that the AI was cursing? Rather than letting that go on for days and metastasize, just get in there and write to the customer.
[20:10] Customer, we're experimenting with our new technologies.
[20:14] There's some verbiage that came to you; we apologize; here's where your package is. And we've taken the chatbot into surgery here to work on tone, and it'll be back online. I could be wrong, but I don't think that customer would have put the company onto X.
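To make the idea of coding a company's "no nos" directly into the application concrete, here is a minimal Python sketch of what a response guardrail with real-time alerting could look like. It is purely illustrative: the prohibited patterns, function names, and fallback wording are assumptions invented for this example, not details from the company in this story.

```python
import re

# Illustrative "no nos" a customer service supervisor might supply.
# These patterns are hypothetical examples, not any company's real standards.
PROHIBITED_PATTERNS = [
    r"\bworst\b.*\bcompany\b",   # disparaging the brand
    r"\buseless\b",              # insulting the product or the customer
    r"\bdamn\b|\bhell\b",        # stand-ins for a fuller profanity list
]

def violates_tone_standards(reply: str) -> bool:
    """Return True if a chatbot reply breaks any coded tone standard."""
    lowered = reply.lower()
    return any(re.search(pattern, lowered) for pattern in PROHIBITED_PATTERNS)

def review_reply(reply: str, alert_supervisor) -> str:
    """Screen every reply before it reaches the customer; escalate drift to a human."""
    if violates_tone_standards(reply):
        alert_supervisor(reply)  # real-time alert to a monitored channel, not a dead inbox
        return ("We're experimenting with new technology and something went wrong. "
                "A human agent will follow up shortly with your tracking details.")
    return reply

if __name__ == "__main__":
    # Simulated drifted reply, similar in spirit to the incident described above.
    drifted = "Honestly, we are the worst logistics company and this chatbot is useless."
    print(review_reply(drifted, alert_supervisor=lambda r: print("ALERT:", r)))
```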
[20:31] Pamela Isom: Tell me more about some blind spots. So that's a good one there.
[20:35] Dominique Shelton-Leipzig: Yeah. Thanks, Pamela, for asking. Because in 33 years, now going on 34 years of practice, representing companies of over a trillion dollars in market cap collectively, I have not walked into a single company yet
[20:53] that hasn't had at least one.
[20:55] And why does that happen? They have wonderful intentions. Everybody's working hard. It's almost like there are too many jobs out there. The consultants are putting everything together.
[21:06] Big Law, where I came from, we're so busy surrounding the companies and customers after their headline with incident response, we're cleaning that up.
[21:16] And the companies and the organizations are busy getting everything going with the AI governance that they have. But the blind spot exists right underneath the surface: the seeds or the patterns that tell us, oh, we're about to go into an explosion of disastrous harm for the brand.
[21:40] Somebody needs to be just focused there. That's what I'm doing right now. Just center in on that. And let me just talk about the blind spot here.
[21:49] Another example: one of our major states. You know, we have 22 states that have automated Medicaid delivery. In other words, they were mired in antiquated old systems and not getting to our neediest people, our disabled, our children, quickly enough for very important decisions about critical medical care.
[22:18] Under our Medicaid programs, there are guidelines for when Medicaid submissions can be approved and disapproved and they're written down. So let's just call that for purposes of discussion, the accuracy metrics.
[22:32] But many people don't know or aren't aware that models can drift and that it's not an aberration. When you hear about hallucinations or model degradation, people think, oh my God, the model hallucinated.
[22:45] What happened? What's wrong? I'm here to tell everybody listening to this podcast once and for all, this is endemic to the technology. It's not an aberration.
[22:57] Just like cars moving on wheels, it's not an aberration that the car moves forward. It's the same thing with drifting with AI: it's not an aberration, it happens.
[23:09] So how do we solve for that part of the technology? Well, with cars, because cars can move and they keep moving,
[23:18] we know that there are times, for traffic reasons and otherwise, when you need to put on the brakes. The car needs to be equipped with brakes, otherwise it's going to hit things.
[23:28] It's the same thing with AI. AI is going to drift. So therefore we've got to find a way to catch that.
[23:35] I talked about cursing out the customers, but what if this were our power grid, Pamela, or our water supply and the model was drifting, potentially contaminating water that people were going to drink and nobody knows about it because nobody's testing and checking.
[23:51] So when we talk about continuous testing, monitoring and auditing, this comes out of the legal frameworks and there's six other things that need to get done. But let's just talk about that alone.
[24:01] Everybody listening needs to know that subject matter experts who know accuracy need to be consulted so that you have the accuracy measure embedded in your AI in terms of code. Not a policy hanging out there that has nothing to do with the technology, but code sitting in your application itself, so that you are alerted in real time when the AI starts to drift. This one state that I'm talking about is Tennessee, but it's also happening in Idaho and multiple other jurisdictions.
[24:34] But I'll bring up this one case because it's already happened.
[24:38] $400 million spent on the algorithm. A top consultant worked on it, and a judge just ruled in August that the AI was inaccurate 90% of the time in denying Medicaid benefits.
[24:51] Now let's replay that whole scenario. How did this happen? You have a whole AI governance team within the state working on it, an AI governance team within the consultant, legal working on it.
[25:04] How did this happen? It's the blind spot I'm talking about. Let's just replay that scenario. What if the blind spot had been dealt with and someone had said, hey, you know what?
[25:15] Since models drift, we better check our output all the time, make sure it's working continuously, every second of every minute of every day. Where do we find accuracy standards? Oh, those are the regs that already exist on the HHS website.
[25:26] Let us download those and code them into the AI in terms of ones and zeros so we can be alerted.
[25:34] Pamela Isom: So you're alerted if any deviation occurs, and alerted again if no response happens. Right, so there's a deviation, and then again if nothing happens...
[25:46] Dominique Shelton-Leipzig: It's in everybody's interest. Maybe that would have cost $403 million instead of $400 million, but they would have an AI system they can use right now. Now they're banned from using it till it gets fixed.
[25:58] It's not efficient to build technology that is not in accordance with these practices I'm talking about. And it just isn't efficient to build it and then have to take it all down or not use it because some of these safeguards haven't been put into place.
[26:15] You can move just as fast and have it built correctly.
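As an illustration of the continuous testing and alerting described above, the sketch below shows one way written guidelines could be coded as an accuracy standard and checked against a model's denials as they happen. The eligibility rule, field names, and thresholds are hypothetical placeholders for this example, not the actual Medicaid regulations.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    # Hypothetical fields standing in for whatever the written guidelines actually use.
    income: float
    household_size: int

def eligible_under_guidelines(claim: Claim) -> bool:
    """Accuracy standard coded from the written rules (this threshold is invented)."""
    income_limit = 20_000 + 5_000 * claim.household_size
    return claim.income <= income_limit

def monitor_denials(decisions, alert) -> float:
    """Continuously compare model denials against the coded standard; alert on deviation."""
    checked = mismatches = 0
    for claim, model_denied in decisions:
        checked += 1
        if model_denied and eligible_under_guidelines(claim):
            mismatches += 1
            alert(f"Model denied a claim the guidelines say is eligible: {claim}")
    return mismatches / checked if checked else 0.0

if __name__ == "__main__":
    stream = [
        (Claim(income=18_000, household_size=3), True),   # wrongful denial -> alert fires
        (Claim(income=90_000, household_size=1), True),   # denial consistent with the rule
    ]
    rate = monitor_denials(stream, alert=print)
    print(f"Deviation rate so far: {rate:.0%}")
```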
[26:18] Pamela Isom: I definitely think that the things that you're saying and the points that you're making here are those critical quality assurance steps that we need to take. I don't know if we have said, it's AI, so let it do its thing.
[26:45] Maybe the mindset is, well, if there is an issue, we have a crisis management team, so we'll let them deal with it. We know what to do because we know how to handle crises.
[26:55] I'm not sure, but I do think that consciously listening to what we're talking about here, it ought to cause a stir to get back to that. Don't be overwhelmed by these capabilities.
[27:14] Don't let that overwhelm us to the point that we set aside the safety of our kids or the safety of our loved ones, which is what you started out talking about.
[27:27] So I really, really love this conversation. And I really did think, I need to get to her about these blind spots.
[27:37] Dominique Shelton-Leipzig: And you know what's so great? This will help people remember.
[27:41] I think of it in terms of trust.
[27:44] So if everybody could just pull out a piece of paper and write the word trust vertically instead of horizontally: T, R, U, S, T. What if T stood for triage: risk-ranking your AI, treating high-risk AI differently than low-risk.
[28:04] And what if R, if you just keep this in mind, stood for righteous data. What do I mean by that? Correct data going in to train the model, data that you have IP rights, privacy rights, and business rights to train with: righteous data to train. U is for uninterrupted testing, monitoring, and auditing.
[28:26] It's not enough to test every week or every quarter. As you mentioned, drift can happen in a matter of seconds, and you want to know about it if you're the CEO, or if you're the board, or if you're working on these matters in the company.
[28:40] Because look, when a company's destabilized with these headlines, it impacts market cap, jobs, lives. Okay? So we all have a stake in making sure we are proactive and not reactive.
[28:54] And then S, that's where you have supervision.
[28:58] When models drift, there's got to be someone to react, to your point, Pamela. Not just letting us know they're drifting, but a supervising human that goes in, looks at the last T, the technical documentation of these continuous tests that you've been doing, diagnoses what the problem is with the model, and gets it back on track.
[29:21] Those are five straightforward trust activities that would take care of most of the headlines I've been talking to you about.
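For listeners who want to turn the TRUST mnemonic into something their teams can track, here is a small illustrative sketch of the five areas as a checklist. The wording of each question is a paraphrase made up for this example, not a quote from the episode or the book.

```python
# The five TRUST areas, phrased as yes/no questions a governance team could track.
TRUST_CHECKLIST = {
    "T - Triage": "Have we risk-ranked every AI system and applied stricter controls to high-risk ones?",
    "R - Righteous data": "Is the training data correct, and do we hold the IP, privacy, and business rights to use it?",
    "U - Uninterrupted testing": "Are outputs tested, monitored, and audited continuously, not just weekly or quarterly?",
    "S - Supervision": "Is a named human alerted and able to step in the moment a model drifts?",
    "T - Technical documentation": "Do we keep records of the continuous tests so drift can be diagnosed and fixed?",
}

def open_items(answers):
    """Return the TRUST areas that still need attention, given a dict of yes/no answers."""
    return [area for area in TRUST_CHECKLIST if not answers.get(area, False)]

if __name__ == "__main__":
    status = {"T - Triage": True, "U - Uninterrupted testing": False}
    for area in open_items(status):
        print("Still open:", area, "->", TRUST_CHECKLIST[area])
```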
[29:28] Pamela Isom: You don't have to have cursing.
[29:29] Dominique Shelton-Leipzig: You don't have to have models denying coverage when someone deserves coverage, or recommending the wrong medication to people. We talked about children: right now in Florida, in a predictive policing model, there is AI that has misidentified children,
[29:46] fourth and fifth graders, as having violent tendencies, because the AI did not understand the tonality of joking and certain slang and so forth,
[29:56] pulling kids out of a learning trajectory into essentially a pathway to prison. Drift.
[30:03] That's not in anybody's interest, certainly not the children, not their families, and not the police department and sheriff's department either that are going down rabbit holes when real criminals are blowing up the schools elsewhere.
[30:17] This is why we have to get to accuracy and governance. And you've mentioned something very important, Pamela: instead of just letting things be reactive and letting crisis management deal with it and blow up the company and market cap and everything else.
[30:32] Look at one of our major retail pharmacies right now that's under bankruptcy protection because their vendor's AI was misidentifying paying customers as criminals. Don't just think about the customers who are inconvenienced everywhere they go because they've been mistagged as criminals, et cetera.
[30:50] But think about the employees of that company in bankruptcy protection right now, having to close stores, be sued, et cetera, instead of be on their way in their mission, which is being a major retail pharmacy.
[31:03] One of our public unions... I do a Digital Trust Summit.
[31:07] We're having it this year, I just announced, on April 23rd in D.C., to bring our CEO and board community together and get down to brass tacks. We've just got to stop mystifying this stuff.
[31:18] It's just five things that you need to talk and make sure are happening in your company.
[31:23] Pamela Isom: Yeah, no, this is great.
[31:26] I have one last question for you, but before I ask you that last question, I want to know. You mentioned the legislation earlier you mentioned different countries have legislation, so there should be no excuse for us not to find something to refer to.
[31:45] But is there a specific legislation that you would want to point people to, or anything that is maybe a good cross reference? In addition to that, the past legislation in the US is good, but is there anything specific that you would want to point people to?
[32:05] Yeah.
[32:05] Dominique Shelton-Leipzig: So I talk about this in my book, Trust: Responsible AI, Innovation, Privacy and Data Leadership. There are a hundred frameworks, but they really come down to the five things that I talked about.
[32:16] All of them say that. So if we at least implement those five things, that would be great. Also important, the EU AI Act is going into effect. Most of the provisions will be going into effect in a couple of years, but one already went into effect, at the time of taping just a few days ago, on February 2nd.
[32:35] Prohibited AI has to stop in Europe right now. Okay? So those of you listening, if you don't know what the prohibited areas of AI are in the EU, I want to invite you to download a free chapter of my book.
[32:49] It's on my book website, not the Global Data Innovation website, but my book one, and you can download that and find out what those 17 are.
[32:58] But please do not think of this as static.
[33:02] That number is going to change. There are 17 prohibited now. Pretty soon, as time passes and more uses get finalized and more use cases come out, look for that number to increase, and you need to stay on top of it.
[33:16] Okay? So this is not set-and-forget legislation. You need to stay on top of it. I talked about testing every second of every minute of every day; there are new developments in legislation every second of every minute of the day.
[33:27] We've coalesced them and synthesized them in what we do, in order to just say, this is what's across the board and this is what's everywhere. So the mnemonic that I gave you with these five letters really is it.
[33:40] I mean, trust with these five letters is really in every framework that I've seen in 100 countries, six continents. On Global Data Innovation, if you want to get into the granularities, I do have a list that hyperlinks to the 100 countries around the world.
[33:55] But the EU AI Act is a great start. There's a lot in there; it's 375 pages; it's not just the word trust. Okay? Your lawyers will burrow down into that for you in terms of, you know, some transparency obligations, explainability.
[34:08] But you really can't do any of that until you've tested the model and know what it's good for anyway. So I would just say start with my mnemonic and then figure out what countries you're doing business in and then make sure you're aligned to those laws.
[34:24] The key things to keep you out of headlines are in the five things I mentioned.
[34:28] Pamela Isom: Okay, well, my last question for you is, are there words of wisdom, advice, or a call to action that you would like to leave with the listeners?
[34:40] Dominique Shelton-Leipzig: My call to action to you is this. If we do not want to be regretting and looking in the rearview mirror 25 years from now, what we do today in the next 15 to 18 months is critical.
[34:57] Most companies, organizations, and governments have already spent the past two years looking at pilot projects for AI. They have now distilled down the ones they want to move forward with, and they're going to step on the gas. And just like getting on the freeway, if you didn't have any brakes, you would have cars hitting each other everywhere, et cetera.
[35:19] It's the same thing with AI right now. We keep seeing a hallucination over there and a hallucination over here as a headline. It stands out because there are just not that many cars on the road.
[35:31] But as soon as everyone starts going at scale, you're going to see these things popping like popcorn.
[35:36] You do not need to be part of those statistics. Your tech needs to be built to the five areas that I mentioned so you can keep an eyeball out on them.
[35:48] Otherwise you will be roadkill with negative headlines around your brand. We have about 15 to 18 months to effectuate this governance, so that means you can't wait. The five areas that I mentioned for trust: triage, making sure you have righteous data, uninterrupted monitoring, supervision, and technical documentation.
[36:13] You can set your organization in the right direction by asking questions about those five areas. In five days you can turn around where you are and course correct immediately. And that's what needs to happen so that you can begin to start to build your technology the way it needs to be built.
[36:29] Because after 15 to 18 months, everything will already be built. Just as we learned with cyber, Pamela: once the tech was built 23 years ago, the standards were over here and they came into effect 23 years later, and it's over $20 trillion
[36:44] that it's cost our global economy. And just think what we could have done in terms of jobs, real estate growth, education, with that $20 trillion, instead of having it go to crisis management.
[36:55] Pamela Isom: Big Law.
[36:56] Dominique Shelton-Leipzig: I was a big part of that, and others were beneficiaries of it. But we can do something much more productive in society with it. And that is peanuts compared to what will happen with AI if we don't get our act together.
[37:08] So in five days this correction can occur. What are you going to do with your next five days after you listen to this? That is my challenge to all of you, and I look forward to hearing from Pamela what you let her know about how you have changed your circumstances by asking these five questions in your company.
[37:27] Pamela Isom: Exactly. Because I'm going to follow up with folks just to find out. I'm going to ask them to get back to me and let me know so that we can get some information back to you.
[37:37] So important, so critical. I mean, I can't tell you how many times I hear that regulations and requirements like that are a burden. But what we discussed today shows that's nothing compared to the cost of not doing it right.
[37:54] So I want to thank you so much for being here and for talking to me and for making it plain, just making it plain for folks. So thank you again.