AI or Not

E022 – AI or Not – Leonard Lee and Pamela Isom

Pamela Isom Season 1 Episode 22

Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

Discover the nuanced landscape of AI economics with me, Pamela Isom, alongside my distinguished guest Leonard Lee, Executive Analyst & Founder of nextCurve. Ever wondered if your business is chasing trends at the expense of true innovation? Leonard and I unpack this dilemma, reflecting on the pressures companies face from their boards to implement AI for competitive parity rather than differentiation. We raise a bold question: Are we prioritizing the appearance of advancement over actual progress as financial strains challenge AI pioneers like OpenAI?

Journey with us as we dissect the real versus perceived capabilities of AI technologies such as ChatGPT. Leonard and I scrutinize the initial promise versus the tangible outcomes experienced by businesses. We highlight the elusive nature of transparency in AI adoption and caution against "AI washing"—the rush to embrace AI without realizing its true benefits. Through vivid analogies, we illustrate how initial excitement can deflate and leave organizations grappling with unmet expectations, calling for a grounded understanding of AI's real-world applications.

In an era where data privacy is paramount, Leonard and I explore the delicate balance between harnessing AI for data aggregation and safeguarding user privacy. We share insights into the ethical considerations of default data collection settings and targeted advertising, making a compelling case for scrutinizing vendor practices to protect user information. Wrapping up, we examine the common pitfalls in project execution and stress the importance of aligning tech initiatives with their intended outcomes. Our conversation offers practical advice to steer clear of oversimplification, ensuring your digital transformation journey is both impactful and responsible.

Pamela Isom: 0:18

This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or other professional advice, nor official statements by their organizations. Guest views may not be those of the host.

Pamela Isom: 0:50

Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey. I am Pamela Isom and I am your podcast host. You know I always have special guests, and we have another one today. My special guest today is Leonard Lee. Leonard is an executive analyst and founder of NextCurve. He's a tech industry advisor and thought leader. He has his own podcast, and there's so much more that I'm going to allow Leonard to elaborate on, because I probably won't get it right. So, Leonard, welcome to AI or Not, and please expound upon your career journey and where you are headed.

Leonard Lee: 1:39

Absolutely. Thank you so much, Pamela, for having me. I might not even get it right myself. Yeah, I've been around for a really long time.

Leonard Lee: 1:54

I started my career developing database applications way back in the day, you know, around the time when ERPs were really hot, and since then I've had a long journey.

Leonard Lee: 2:01

I was a partner principal at EY, PwC, IBM, and I've worked across a number of different industries, across a number of, let's call it, disciplines and domains, from custom app development to CIO advisory to corporate strategy stuff, you know. And then I was with Gartner for three years as a managing partner with their tech, media and telecom practice, so I had a stint as an industry analyst consultant. I started my own firm, NextCurve, about seven years ago to really leverage and capitalize on my broad experience, both technical, business and strategy, for the benefit of the clients I serve today. Many of my clients are some of the leading technology companies that you see out there in the headlines every day, and I'm having a blast working with them on some of their very, very critical top priorities. So it's really fun, and it's really great to be on your show, and I look forward to our chat.

Pamela Isom: 3:19

Well, congratulations on your success, because I know you've had a pretty successful journey, so that's a good thing. So congratulations on that.

Leonard Lee: 3:29

Thank you, I'm happy. That's all that counts, actually. You know, regardless of how people perceive success, I'm happy.

Pamela Isom: 3:37

So I'm glad that you're happy. I'm glad that you're a good role model, a great role model. Now, you mentioned Gartner, so I want to ask a question. I've been paying attention in my role, you know, paying attention to what's happening with AI and digital transformation at large, and how organizations are thriving in this era. And if we look at the analyst reports, including Gartner's, content is surfacing that indicates that the total cost of AI spending is surpassing the benefits. We've got vendors that are experiencing public failures, and there have also been some investigative reports. This morning I was reading one that suggests that OpenAI is on track to make an operational loss of about $5 billion in 2024.

Pamela Isom: 4:35

So I personally talk to my clients about service and product differentiation, but to me it seems that organizations either don't really understand what competitive differentiation means, or we're just trying to, as my mom used to call it, keep up with the Joneses. Seems like they're more about competitive parity: whatever the Joneses are doing, let's do that. But that's not differentiation. And like I said, my mom called it keeping up with the Joneses. I want to get your perspective on that. What do you think about what I just said there, and what's your perspective on what I call AI economics?

Leonard Lee: 5:22

No, it's really interesting that you've started off with this keeping-up-with-the-Joneses analogy, because that sort of is what we're looking at. A lot of organizations, especially their CISOs as well as CIOs, and maybe even CTOs, are challenged and being very heavily pressured by boards to do something with AI without really having a firm understanding of what they're talking about, in my observation, and I do talk to a lot of companies; I look at this problem quite a bit, not only on behalf of organizations but my vendor clients. I mean, what is AI? And then we have generative AI that gets mixed into broader AI without an appreciation for how hard AI was to implement and get value out of for an enterprise even before generative AI. And one of the things that kind of startled me at the beginning of this new hype cycle or gold rush, and we talked earlier about gold rushes, right, and how we continue to follow these and just get enamored with these gold rushes that end up not amounting to anything, is the statement by Sam Altman. I quote this quite a bit, and I might paraphrase here a little bit: that ChatGPT, at the time it was released, did a couple of things really well to give an impression of greatness. And then he goes on to say, we have a long way to go before this stuff can be reliable and trustworthy.

Leonard Lee: 7:00

Okay, but then just a few months later, all of a sudden everyone's talking about how this is going to be exponential and transform everything, and of course that got boardrooms concerned that their organizations, their businesses, might fall behind or be subject to some sort of competitive threat from a first mover with generative AI. And that hasn't turned out to be the case. I mean, I honestly cannot even tell you about a vendor that has been generative-AI-first that is beating everyone else at the moment, right? Because the technology, going back to what Sam Altman was warning everybody, even up to December at Davos, this is like experimental stuff. It's R&D, research.

Leonard Lee: 7:51

Okay, and I think everyone, including maybe Sam Altman, and I'm just assuming here, just based on what he said, is astonished by how much excitement there has been around the technology. But in terms of adoption and the actual expression of value in a business, an end-market business, forget about GPU sales and how much AWS or Microsoft is spending on AI supercomputing data center build-out and all that stuff. Where's the end-market value for organizations and users? That is still a very, very murky and clouded picture, and nobody still knows. It's amazing. After more than a year, nobody knows.

Pamela Isom: 8:49

Yeah.

Leonard Lee: 8:49

But it's supposed to be transformative right.

Pamela Isom: 8:52

Transformative, right. And do you think, from what you said, that he is surprised by the way that AI is taking off from a consumer perspective? Do you think that's what's going on? Is he surprised by the way that we are using it, or just surprised that we're finding it so exciting to use?

Leonard Lee: 9:18

I think he's surprised at... I mean, it's hard to tell. I can't speak for Sam Altman and how he's pursuing things, but definitely, I think everyone's surprised by the excitement. It's new, and what we're actually excited about, I think, is still quite open to debate.

Leonard Lee: 9:46

Initially, why people were so excited was that this was supposed to have been the fastest-ramping, or adopted, technology in history, faster than TikTok and, blah blah blah, Facebook, right? A hundred million active users within a month or so. Well, guess what? A year and a half later, OpenAI, when they introduced GPT-4o, cited the same number: a hundred million active users. We don't know how many are paying, still; no one's being transparent about that stuff. And I think, for a lot of the organizations out there, you have to really start to ask the question: this was supposed to be so transformative, so how come no one's being transparent about these massive impacts of reinvention and transformation that this thing is driving? But it all boils down to this: AI is hard. It's not a mystery. And it's funny to see some of these reports by consulting firms.

Leonard Lee: 11:00

Now talking about how you have to focus on the data. I was like, well, when was data ever easy? But what's more important is you have to start looking at knowledge, right? That's a higher order of input. It's sort of abstract, but it's a higher-order input into these models, right, these large language models and, what have you, mixture-of-experts models. And that's really hard. How many organizations have that in order? If they go back and they look at the quality of their knowledge base, and that's the input, the contributor to, let's say, a RAG, or retrieval-augmented generation, based application, how good is the output going to be? And then what does the total cost of ownership look like?
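To make the RAG point concrete, here is a minimal sketch of the pattern Leonard is describing: the answer is grounded on whatever the knowledge base holds, so a thin or stale knowledge base caps output quality no matter how capable the model is. Everything here is illustrative, not from the conversation; `call_llm` is a stand-in for any model API, and the word-overlap retriever is a deliberately crude substitute for vector embeddings.

```python
# Minimal, illustrative RAG pipeline: garbage in the knowledge base means
# garbage in the answers, regardless of the model behind call_llm().
from collections import Counter

KNOWLEDGE_BASE = [
    "Refunds are processed within 14 days of the return being received.",
    "Enterprise support contracts renew annually on the contract start date.",
]

def overlap(query: str, doc: str) -> int:
    """Crude bag-of-words overlap; real systems use vector embeddings."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Pull the k documents most relevant to the query from the knowledge base."""
    return sorted(KNOWLEDGE_BASE, key=lambda doc: overlap(query, doc), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; it just echoes the grounded prompt."""
    return "MODEL ANSWER grounded on:\n" + prompt

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    # The model sees only `context` plus the question: knowledge base quality,
    # not the model, is the ceiling on answer quality.
    return call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")

print(answer("How long do refunds take to process?"))
```

If the two documents above were stale or wrong, every answer built on them would be too, which is where the total-cost-of-ownership question lands: curating the knowledge base is the ongoing expense.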

Pamela Isom: 11:56

Exactly. So I like that you pointed out that we want to pay more attention to the knowledge, and you mentioned this earlier too: the training and the understanding of what AI really means. The boardrooms are like, hey, we need to adopt this capability because we want to be there right with everybody else, and it has led to AI washing, for instance.

Pamela Isom: 12:22

I oftentimes talk about that: why would we do that? It's not really giving us the advantage; it's not even worth it, right? Because the advantages are there if we use it for certain use cases. And so I think about productivity gains, maybe. Productivity gains to some extent, but be careful with it. And think of a bubble, a balloon: you blow up a balloon and it's full of air, and then you need the air to stay, because if the air doesn't stay in the balloon, it starts to sink, right? It starts to drop. To me, that's what's happening with AI.

Leonard Lee: 13:05

You have, like, the best analogies. It's awesome. I love it. You're right.

Pamela Isom: 13:15

So people get let down, and it's a hard feeling when you get let down, right? Here's what happens. The excitement was there, remember when you were talking about the excitement? The excitement was there and we were so excited. But then you started to not be able to trust the outcomes, and when you weren't able to trust the outcomes, the air starts coming out of the balloon, right, and its ability to add value starts to die down.

Pamela Isom: 13:44

So there are some productivity gains, there are some time savings. Quality, I question, right? I know Gartner had published some information and talked about quality in 2024, but I don't necessarily agree with that, because I think, with all the monitoring that we've got to do and the double-checking, it counteracts the benefits. So I don't think quality, but I think quantity is there. I can use AI to go and search through massive amounts of materials in a way that I just couldn't before. We can use AI for computer vision purposes, to help identify if someone, for instance, needs to be rescued whom we may not be able to see with the naked eye. So I think it's good from those respects, and I think it's so important for us to understand what those good use cases are. Those are, like, productivity and time. Quality is the one that's questionable, more so with generative AI than with traditional AI.

Leonard Lee: 15:03

You know, even there, with the traditional stuff, the machine learning that we've seen a lot of industries work really hard to apply in a reliable way, quality is part of it. Getting to quality, getting to, like what I tell a lot of folks and my clients, where is the enterprise-grade, industrial-grade, telco-grade AI? And this is not new to anybody; this stuff is difficult. The data is a problem, sure, but like I was saying earlier, with generative AI you're looking at higher orders of information. Data is the rudimentary level, if we were really to talk about bits and bytes; from there you move all the way up to, you know, an article or a paper, a PowerPoint presentation with a lot of details in it. The quality of knowledge is what's typically grossly missing. Most of that stuff is in people's heads, right? That's why humans are so important, and why, even at this phase in the evolution of generative AI and the applications that use it, for whatever purpose, search or knowledge management or what have you, people discount how difficult it is to actually build a quality application, something that actually garners productivity versus introduces risk into your operations, which may have a massive material cost, or penalty, for your business at some point.

Leonard Lee: 17:06

We have examples of certain airlines having used these chatbot tools that are Gen AI augmented, where the thing recommends and provides an offer that doesn't exist, and then the company has to honor that offer, right?

Leonard Lee: 17:23

I mean, who do you blame? You have to blame yourself for using a tool that's not reliable. And I'm not trying to discount the technology. There are a lot of really smart people out there who know the state of this technology, know what the roadmap looks like, know what the risks are, and can hone in on what an effective and useful application is. Unfortunately, their voices are actually pretty small. What we have right now are a lot of loud voices talking about how you can do all kinds of crazy stuff with this technology, without due consideration for a number of risks and limitations of the technology and what it actually takes to build that target application that will deliver that exponential insanity of productivity benefit, right? Very few people are going through that cycle. There's more of that FOMO driving it, and misunderstanding, misconceptions and delusions about what this technology can and can't do.

Pamela Isom: 18:38

Right. So I would just say, I'm from the US Department of Energy, and I just had a recent conversation about what we did there. I know while I was there, we even looked at using AI and, working with the national labs, we were able to come up with some really good, viable use cases for large language models and machine learning. It's starting to evolve, and more and more information is coming out now on what some of those use cases are, associated with climate and clean energy, inspecting the health of the transmission lines, and fortifying the grid.

Pamela Isom: 19:12

So there are good use cases. One of the things I try to do on this show is bring out some of those use cases, the good ones, but it's just so important to also make sure that we're understanding, from a sustainable transformation perspective, what are some of the risks that we should be paying attention to and managing. I have a question. You talked about enterprise adoption and some of those considerations. What about personal risks? What's your take on the personal risks and things that we should consider? Any thoughts, like privacy?

Leonard Lee: 19:44

Oh yeah, I mean, it's going to be really interesting. For instance, one of the things I mentioned, and I think folks may know about, is Microsoft's Recall, which was supposed to be launched with the Copilot+ PC. It was delayed, indefinitely, I suppose; I don't know. I don't think they've published a date when they're actually going to release this feature for the Copilot+ PC, but it was held back because of some fundamental privacy protection deficiencies, which Debbie and I discussed, Debbie Reynolds, by the way, shout out to Debbie Reynolds, in the semantic index, I think it's called, that they use, which I think is a graph database combined with a vector index, to profile everything that you do as an individual. If you're using it for your own purposes and personal use, I think, yeah, there could be a little bit of concern. It all depends on how the vendor and all their partners actually implement these types of personalization features on device, right? But in the enterprise, I think it gets a little weird.

Leonard Lee: 21:19

One of the things that I think is potentially a benefit, a use case for why enterprises would use something like Recall, is to capture knowledge, right? It's a great thing to kind of scour through everything that you've done as an employee, and then you basically forfeit your knowledge, your know-how. And OK, I'm not saying that it's going to do a really good job of this, I think this is still really, really difficult to achieve, but enterprises could then capture institutional knowledge through functions like this. And then, for an employee of an enterprise, what does that mean to you as a professional, in terms of your skill set and, let's say, your differentiation in the marketplace, when your knowledge, your person, your talent is codified and captured by enterprises? It opens up a big question, you know.
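As a purely hypothetical sketch of the kind of on-device activity index being described (not Microsoft's actual implementation, which the speakers themselves are unsure of): snapshots of what a user does are logged and made searchable, which is precisely both the knowledge-capture upside and the privacy downside.

```python
# Hypothetical on-device activity index: every snapshot of what the user does
# is stored and searchable. Useful for recalling knowledge; equally useful for
# profiling the person. Not a depiction of any real product.
from dataclasses import dataclass, field

@dataclass
class ActivityIndex:
    snapshots: list[tuple[str, str]] = field(default_factory=list)  # (timestamp, text)

    def capture(self, timestamp: str, screen_text: str) -> None:
        self.snapshots.append((timestamp, screen_text))

    def search(self, term: str) -> list[tuple[str, str]]:
        """Naive keyword search; a real index might use vectors or a graph."""
        return [(ts, txt) for ts, txt in self.snapshots if term.lower() in txt.lower()]

index = ActivityIndex()
index.capture("09:02", "Drafted renewal terms for the Acme support contract")
index.capture("09:41", "Read personal email about a medical appointment")

print(index.search("contract"))  # institutional knowledge, recoverable
print(index.search("medical"))   # so is everything else the person did
```

The two search calls make Leonard's point: the same mechanism that captures an employee's know-how captures everything else, and the enterprise ends up holding both.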

Pamela Isom: 22:22

Yeah, I can definitely understand why that was put on hold and I hope it stays on hold until we get a better picture of what are the viable use cases for something like that.

Pamela Isom: 22:33

Yeah, so I'm just going to add a little bit to that. That particular tool in itself, I remember when I first heard about it, I didn't have a lot of details, but I started making excuses for why they would do this, which would be to aggregate more information for the large language models, right? I was like, okay, this is a tactic to start to get information that can be leveraged to make generative AI even more powerful, because there were lots of communications out that they're running out of data, real-life, practical data. But I felt like, if that's the case, why not be transparent and say, this is what we're doing, and here is how we're making sure that this is truly beneficial to the users of this service? And then, I think the real issue that everyone has, which we don't have to belabor because it was the main reason why it was pulled back, is that it was, by default, active. And who does that? I mean, that's such an invasion of one's privacy.

Leonard Lee: 23:40

Yeah, it's such an invasion of one's privacy.

Pamela Isom: 23:41

Yeah, it's a total invasion of one's privacy. So, I was in a discussion with someone not too long ago, Leonard, and we were doing some research. You're going to like this.

Leonard Lee: 23:51

I'm already smiling.

Pamela Isom: 23:55

We were doing some research on what are the better use cases of AI to amplify privacy, right? To protect privacy, for privacy-preserving practices, et cetera. And I had, well, I'll say she was an intern, I had her go do this research, and three weeks into the research she came back and says, oh my gosh, I don't think that we should use AI for preserving privacy, because the tools ask you too many questions about yourself, and the minute you give it that information, you've just relinquished your privacy. And I was like, OK, let's be cool here. She was nervous.

Pamela Isom: 24:47

She said, I stopped using this tool, I stopped using that tool. I'm just to the place where I want to go work out of a cave, you know. And so she was joking, but she was serious. And privacy is so important, and now we have these tools that are getting introduced to preserve privacy, and we have to figure out what are the right questions to ask the vendors that are making these tools available, and if they're open source as well, right? What are some of those good questions that we should be asking to ensure that our privacy is preserved? Because you can think that you have a tool that's masking information or obfuscating information.

Pamela Isom: 25:31

Yeah, it is, but where is it storing that original information, right? And as she dug into that, she had more and more questions than she had tools, and so we finally nailed it down to a few. It was a great exercise for people that are working with me as interns, but more so, it brought to our attention that we want to be very careful. The reason why I brought that up is because you mentioned Recall, right, and that tool in itself. So, food for thought for us as we are looking at protecting our own privacy and what things we should be doing to help ourselves in this process. All right, any comments?
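A tiny sketch of the intern's question, "where is it storing that original information?": a masking tool can swap personal data for opaque tokens, yet the originals still sit in the tool's own mapping store, so whoever operates that store still holds the private data. This is a hypothetical illustration, not any specific vendor's product.

```python
# Pseudonymization keeps a token -> original mapping ("vault"). Masked output
# looks private, but the vault is exactly what to ask vendors about: where it
# lives, who can read it, and whether the mapping is reversible.
import re
import uuid

vault: dict[str, str] = {}  # token -> original value

def pseudonymize(text: str) -> str:
    """Replace email addresses with opaque tokens, remembering the originals."""
    def swap(match: re.Match) -> str:
        token = f"<pii:{uuid.uuid4().hex[:8]}>"
        vault[token] = match.group(0)  # the original is retained, not destroyed
        return token
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", swap, text)

masked = pseudonymize("Contact pam@example.com about the audit.")
print(masked)  # Contact <pii:...> about the audit.
print(vault)   # the "masked" value is still recoverable by the tool's operator
```

Good vendor questions fall straight out of the sketch: where is the vault stored, is it encrypted, who holds the keys, and is the masking reversible at all?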

Leonard Lee: 26:13

No, I think those are all good points, and I think maybe your intern had a revelation that a lot of people haven't had, which is, oh my gosh, privacy. And then they started asking about it, right, and then maybe started to cycle through a thought process of what the implications could be of using certain tools, or how certain tools could be used by an advertiser. In all likelihood, your information will likely be sold to a third party. But going back to your comment about Recall: it was on by default, or opt-out by default, meaning you have to opt out of it. You have to, number one, be aware of it, and then you have to opt out of it, instead of the other way around, where you're aware of the feature and then you make this conscious decision to make that trade, an assumption of the risk of that tool, or that function, and then you say, okay, I'm cool with it. Nine times out of ten, I think most people just don't do it, which is what Apple kind of proved when they instituted the anti-tracking feature, right? Only about 10% of people actually opted out of that protection, and so what it tells everyone is: no, we don't want to trade our privacy for your free features, or what you're calling targeted ads, or more convenience. Most people don't even care about ads.
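The opt-out versus opt-in distinction Leonard describes often reduces to a single default value in a feature's configuration. A hypothetical feature flag, not any real product's settings:

```python
from dataclasses import dataclass

@dataclass
class ActivityCaptureSettings:
    # Opt-out by default would be `enabled: bool = True`: capture runs unless
    # the user discovers the setting and turns it off.
    # Opt-in by default, below: nothing is captured until the user consciously
    # accepts the trade-off.
    enabled: bool = False

settings = ActivityCaptureSettings()
print("capturing activity" if settings.enabled else "off until the user opts in")
```

One boolean decides whether awareness is required before collection starts, which is why defaults carry so much of the privacy debate.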

Leonard Lee: 28:11

We think that they're creepy, because we had a conversation over here and all of a sudden we're getting a pitch for a product. That's definitely of value for an advertiser, right? And so I think the general public needs to understand who the customer actually is for a lot of these free services. You're not the customer. The way I describe it is: you are the cow that is yet to be made into hamburger, that goes into a burger that then gets consumed by a hungry person. That hungry person is an advertiser. How do you like my analogy?

Pamela Isom: 29:04

It's pretty deep.

Leonard Lee: 29:05

Yeah, but a lot of people don't know about this, as much as folks in our bubble, you know, preach it at all times and assume everyone knows.

Pamela Isom: 29:19

This is like uncommon common sense. I agree, because our information is being used, as you pointed out, by the advertisers. And so part of what I sometimes do is evaluate the solutions of advertisers and make sure that they're not introducing biases. So, for instance, if you're in a certain zip code, they're going to make sure that this type of product is reflected on your television, right, or through your streaming services, but if you're in X zip code, it's a different type of product and services. So sometimes I have the luxury to work with suppliers, advertisers, and look into that and talk to them about their practices. I knew this existed once I became an AI auditor, but then I actually had one experience that really made it hit home, and that was when I contacted my niece and we were trying to decide.

Pamela Isom: 30:28

I kind of wanted to do something for her, so I started to shop around to see what I wanted to do, and the mistake I made was I put her address on my file. That's all I did. I put her address on my file and, lo and behold, I started getting all these advertisements for things that I don't buy, things I'm not interested in, all of a sudden. So then I started to realize, yeah, these brokers. And, you know, I'll have to get to Heidi, don't get me started, because Heidi's one of my good friends now, but these brokers and what they're doing with our information, it's just not healthy. It's really not. And AI is helping their case, right?

Leonard Lee: 31:15

You know, I've actually had some brokers reach out to me. They're more like junior-level folks who are just curious about some of the stuff that I say, and they don't realize what they're doing. They just see it as a business someone hired them into: hey, we're in the business of brokering information. Number one, it's legal. Whether it should be or not is totally debatable. Is it right? That's also debatable, but I think most of us are going to lean toward no, it's not right. You shouldn't be buying information. Companies shouldn't be selling the information that we entrust to them for their business to third parties. And if they do, or if they allow third parties to use our information, are they protecting it like it's gold? It's obviously not happening, because we have this thing called the dark web, where we can pretty much find ourselves. And how do we think the dark web emerged? Well, when you have all these people selling your data everywhere, it goes into a big, big nebulous pool that anyone, including bad actors, can purchase from.

Leonard Lee: 32:37

Everyone talks about data: oh, it's the new oil, and blah, blah, blah. Well, how about this: that data is like the new oil that is fueling the massive and fast-growing cybercriminal economy, which Microsoft now continually points out. I don't know who estimated this, but the estimation is that it's the third largest economy in the world. It's a crying shame. And it's growing very fast, faster than any economy, probably any industry, and it's the cybercriminal industry. So there you go. Consequences, right? There you go.

Pamela Isom: 33:27

I mean, we could really go down that path, because there's just so much in that space, so I won't. But there's just so much, right? Phishing has been amplified. All these different types of threats have been amplified. I will say that I always encourage people to take a really close look at the life cycle, at the AI life cycle, at the project life cycle, right? Take a look at every single angle and understand that, even if it's discovery, it could be a threat. You know, it could be an attack surface.

Pamela Isom: 34:03

So I do tell my clients that.

Leonard Lee: 34:06

That's good advice.

Pamela Isom: 34:08

Yeah, and in my classes we actually go through the life cycle, and we identify where those points of attack are.

Pamela Isom: 34:17

Right, potential points of attack.

Pamela Isom: 34:19

So here's my last question for you. This has been good. Well, not the last question, OK, next to last.

Pamela Isom: 34:27

So, yeah, OK. I was talking to a guest not too long ago, and the question that I asked him is: what's a sure way to excel at digital transformation, and what's a sure way to sabotage it? And while you're thinking about this, one of the things that he pointed out for sabotage is that we forget about the people. We think about technology when it comes to digital transformation, but we don't think about the workforce and the people. If you want a sure way to sabotage your digital transformation programs, you just forget about the workforce, right? So I just want to get your perspective on that. What's your perspective on sure ways to excel at digital transformation, it could be AI transformation, and what's your thinking around a sure way to sabotage it?

Leonard Lee: 35:19

Yeah, I would say it all boils down to the risk of change. Digital transformation is about change, and whenever there's change, there's risk, right? There's always this play between benefit and risk, and unfortunately, risk takes on two roles. Number one, it's inherent in change. And then, number two, there's the risk of the actual technology enablement delivering on the target capability realization, and that oftentimes is off. And this is not coming from an analyst who has never been in the trenches. I've been in big trenches throughout my consulting career, so I can tell you, it doesn't matter what technology it is. I've been through generations of technologies, implementing them, designing them, program-managing them, and managing change management for large transformation programs. That is one of the biggest challenges. Oftentimes, yes, it is the technology enablement that falls short, almost always.

Pamela Isom: 36:31

And is it because of the overemphasis on the technology, or losing sight of what the mission of the organization is, or just getting too focused on the technology perspectives?

Leonard Lee: 36:44

No, it's what we deal with now in markets: it's inflated expectations, having aspirations for a particular technology that don't match reality, and that's why oftentimes you see projects have cost overruns, and there's a lot of things that get in the way because of mismanaged expectations. And that impacts people. The consideration of people, I mean, it's germane. People, process, technology, we hear it, everyone knows this. I don't know why we're still talking about this stuff. That should be, okay, all right, enough, we get it. Let's talk about real problems. Why do projects fail? Why do technologies get implemented in a way that doesn't deliver the promise? And I guess, now that I've ranted a little bit, my answer to you is all this "you should do this" and "oh, you need to take care of your data," like it's a simple thing. Oversimplifying, overdoing, all these things actually build and create risk in a digital transformation program.

Pamela Isom: 38:10

Right. No, that's what you've been saying throughout the whole talk: to focus on the risk. I appreciate it.

Leonard Lee: 38:19

That's you. This is what you do to me, Pamela. I'm going to blame it on you.

Pamela Isom: 38:23

And remember the balloon and the air.

Leonard Lee: 38:26

Oh yeah, oh yeah. Can I borrow that?

Pamela Isom: 38:28

You can borrow that. You can borrow that, because what happens when the air comes out, when the air seeps out? It sinks, and you're so disappointed. So, great. Now I'm to my last question: words of wisdom for those that are listening, or just advice, right? Usually the advice is words of wisdom, but I'll let you be creative here. Please give us some insights, some words of wisdom or advice that we can take with us, including me.

Leonard Lee: 39:06

Okay, yeah. Well, you know everything, so it's hard to give you any kind of advice, but for most folks out there, what I would say is: don't believe the hype. Do your homework, and asking a lot of tough questions is what doing your homework means, and you'll save yourself a lot of headache.

Pamela Isom: 39:28

Okay, that's good, and that pretty much sums it up. That's great, and very succinct. So I really want to thank you for being on AI or Not. I think it's been a really good discussion. I'm so glad that we met, and thank you so much for everything.