AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E046 – AI or Not – Christopher Richardson and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
What if the real edge in AI isn’t bigger models or more GPUs, but leaders who can turn complexity into clear decisions? We sit down with analyst and “tech whisperer” Christopher Richardson to unpack the hard truths behind AI’s economics, why reliability still lags the hype, and how to build systems that actually pay off. From conference rooms where “the cloud” gets mistaken for the sky to board tables demanding ROI, we map the shift from curiosity to concrete roadmaps that stand up under pressure.
Christopher explains why large language models remain probabilistic and how that matters for accuracy, governance, and risk. We dig into the promise and pitfalls of agentic AI, where chaining models often compounds hallucinations and cost. The conversation tackles the “AI bubble” risk head-on: when adoption slows because outputs cannot be trusted for mission-critical work, the funding engine sputters. We also examine a fresh take on competitive advantage through the DeepSeek lens—achieving strong results with fewer parameters and lower GPU use—hinting that smart efficiency may outpace brute-force scaling.
Sustainability and equity thread through every topic. We talk plainly about the byproduct of electricity being heat, why water use and siting decisions matter, and how to measure trustworthiness beyond slogans. Then we turn to access: open models, practical training, and education pipelines that unlock talent in places like Baltimore, building a broader, more resilient workforce. The playbook is pragmatic and hopeful—smaller, domain-tuned models; clear governance; measurable utility; and real investment in people.
If you care about AI strategy, responsible innovation, and turning hype into durable outcomes, this conversation offers a grounded path forward. Subscribe, share with a colleague who’s wrestling with AI decisions, and leave a review with the one change you’d make first to improve trust and ROI.
[00:00] Pamela Isom: This podcast is for informational purposes only.
[00:27] Personal views and opinions expressed by our podcast guests are their own and are not legal advice,
[00:35] nor health, tax, or other professional advice, nor official statements by their organizations.
[00:42] Guest views may not be those of the host.
[00:51] Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence journey, as well as your digital transformation journey.
[01:09] I am Pamela Isom and I am the podcast host today.
[01:13] And as always, we have a unique and special guest with us. This time it is Christopher Richardson.
[01:20] Christopher is a tech whisperer.
[01:24] He's an analyst, a researcher and an advisor.
[01:28] Christopher, welcome to AI Or Not.
[01:30] Christopher Richardson: Thank you so much for having me. Really do appreciate the opportunity to share some insights with your audience.
[01:39] Pamela Isom: All righty. So it's a pleasure to have you here. I'm excited and glad that you are willing to be a guest on the show.
[01:47] Will you tell me more about yourself, starting with your career journey? And while you're at it, give me some insights on what's in your future. What does it hold for you?
[01:57] Christopher Richardson: Sure. So,
[01:59] like a lot of people who listen to this podcast,
[02:04] I started in the advisory space with what some will call the evil empire. I'm an alumnus of Accenture, and I got my chops with a small management consulting firm about 20 years ago. I went in with the idea that I understood business pretty well and I understood, at the time,
[02:28] the tech space pretty well, so that I would be able to go and help everybody. And that was really my ambition.
[02:35] And what I quickly learned was that there is a psychology to problem solving that has nothing to do with problems.
[02:45] And it was something that, if I was going to be successful as an advisor, I needed to learn quickly.
[02:51] So I started a personal journey around understanding humans from a sociology standpoint and from a psychology standpoint,
[03:01] understanding what drives and motivates people to make the decisions that they make, and also understanding what brings that dopamine and serotonin rush to folks. So I spent a lot of time looking for quick wins, looking to negotiate with people around how they could feel like they were winning themselves.
[03:26] And through that I was able to build infrastructures and be able to create systems and programs that really benefited people.
[03:35] And then I took a technical turn and started in the unified communications space when we were still using more hardware. Now it's a software game, but it was a hardware game at the time. And, you know, I went the certification route and started building infrastructures.
[03:52] And I built infrastructures for government agencies and non-governmental agencies, spent some time working for the World Bank,
[04:00] working for industry manufacturers, and still doing that advisory work, but I took it internal. And then eventually I went back to Accenture and started working with larger government clients and started creating solutions the same way, using that psychology and sociology to kind of understand what drove people to make good decisions.
[04:24] And finally went to an ed tech company, started working as a solutions consulting leader for them.
[04:31] And as I was doing that, I realized that a lot of the decision makers didn't understand the technology well enough to make good decisions.
[04:46] So what I did was I decided to take a different approach to explaining the ideas around technology and how they directly related to their bottom line. And that's really where the tech whisperer comes into play.
[05:03] The idea of the tech whisperer for me is taking a complex technical idea and bringing it down to a level where they not only understand the core ideas behind it, but also how it relates to the decisions that they need to make as a leader.
[05:23] Right.
[05:23] So I would sit in conference rooms and, you know, ask a leader, well, do you understand what we mean by cloud infrastructure?
[05:33] And they'd say yeah, and just point to the sky. And I'd say, well no, that's not what cloud infrastructure means. And then, you know, I would tie it back to, let's say, a data center.
[05:40] Like, do you understand the idea of a data center? They'd say, well yeah, we have data centers out in the middle of nowhere. Perfect.
[05:46] Cloud and data centers are synonymous; it's just how you use them and where your equipment sits. And they were like, wow, no one's ever explained it to me this way.
[06:00] I got a reputation for being able to take those kinds of high-level, complex technical ideas,
[06:06] bring them down to a level where people could not only understand them, but understand them in context. I used that skill and taught that skill to others. And that led me, a couple of years ago,
[06:21] to help co-found Gaussian Research,
[06:24] which is where I am now.
[06:26] And our goal with Gaussian Research is taking that tech-whisperer mindset for complex ideas and bringing it down to an accessible level, so that executives can understand exactly the key points they need in order to make smart decisions. And that's how it falls into the AI space.
[06:48] So we cover 26 domains and generative AI is one of the domains that we cover, but we also cover machine learning,
[06:56] we cover other domains in the AI space and what I've come to realize is that there's so much information that you can be overloaded very quickly.
[07:09] And when you have information,
[07:11] you have misinformation. And it's in that misinformation where a lot of the poor decisions are coming from.
[07:19] So our goal is to make sure that we're giving factual information in a streamlined way that is directly tied to the business value that leaders are seeking in order for them to make the best possible decisions for the organization.
[07:39] Pamela Isom: Okay. Okay. I think that that is amazing. Tell me about an experience that has made you feel like what you're doing is rewarding.
[07:51] Christopher Richardson: Sure.
[07:53] So I could do a couple, but here's one that comes to mind pretty quickly.
[07:59] I should note that we play in the SMB space.
[08:03] So we're dealing with businesses that are large enough to have infrastructure and maybe a little too small to have dedicated leadership.
[08:15] So that would mean that they would be in the fractional CIO, fractional CTO space,
[08:21] as opposed to having kind of a full cadre of senior leadership on the tech side.
[08:27] I met a gentleman at a Maryland Tech Council event.
[08:33] That is, his organization is a manufacturing organization.
[08:39] They have three locations. They are a premier distributor for one of the parts that's basically needed in every factory,
[08:49] period.
[08:49] So they get a lot of business just from being the sole provider of these parts.
[08:57] And we had an initial discussion about generative AI. And I said, well, what do you think about generative AI? Like, what are you hearing? And he's like, well, my kid uses ChatGPT and he loves it, and I think we can maybe use something like ChatGPT for our business.
[09:13] And I asked him one question that not only kind of sparked curiosity, but changed his view on how to approach generative AI.
[09:26] And the question I asked him was, why?
[09:29] Now, that sounds like a simple question, why?
[09:32] But what I wanted to do in that moment was challenge his ideas about what was possible and what he was capable of doing with the resources that he had available.
[09:45] Right. So when I asked him why,
[09:47] he kind of took a step back, and he said, well, I don't know. I said, well, what do you think you can do with it? He said, well, I know I can probably use it for writing.
[09:55] So, okay,
[09:56] what else? He said, well, I can probably do something with marketing. We have a small marketing team, and we're not really sure sometimes if our marketing is working. I said, well, okay,
[10:07] can you maybe do analysis with it? I don't know, maybe.
[10:10] I said, okay, what else do you think you can do with it? And he said, you know, I'm not sure. What can I do with it? And it was in that curiosity that we ended up having a fruitful conversation and the start of a really beneficial relationship, because I was able to challenge his early ideas around what was possible.
[10:31] He and his team then attended one of our workshops where we talked about what generative AI can do, what it can't do,
[10:41] the costs, the risks, the use cases.
[10:44] And when we were finished, we said, now that you have a better idea of what it can do,
[10:50] how are you going to secure it? How are you going to govern it? How are you going to manage it? And he said, man, I'm not sure.
[10:57] What we ended up doing was putting together a realistic plan for him to work with his leadership team to attack all of those things. Right. A simple conversation led to practical advice, which then led to a roadmap that they could actually execute down the line.
[11:18] When I see that process happen,
[11:20] that's where magic happens for me, and that's where the fulfillment comes in: when I can take someone with a broad idea and get them to a point where they see it as not only realistic,
[11:31] but tangible.
[11:33] That's really where the magic is for me.
[11:36] Pamela Isom: That makes a lot of sense. So I can appreciate that. I like the tie in to the tech whisperer, because to me,
[11:43] when I hear you speaking on that subject,
[11:47] that sounds like the impact of a tech whisperer.
[11:50] So I appreciate your business, yes, but that sounds like the impact of a tech whisperer.
[11:57] So I spent a lot of time trying to resonate with clients as well,
[12:01] and people in general and meet them where they are. So I don't like to talk over people's heads. I don't like to talk in jargon.
[12:09] So I like to think about what is going on and what's needed and make it, try to make it as simple as possible so that people understand why this relates to them.
[12:19] So a tech whisperer, as you were describing your scenario there,
[12:23] sounds like what a tech whisperer would do. So I'm impressed. I like that.
[12:28] Christopher Richardson: Absolutely. And just kind of as an aside, I know that you are celebrated for doing tech whispering, whether you call it tech whispering or not,
[12:39] but really there are very few people that are in our space that understand the technical side well enough to be able to articulate it in more brass tacks, common sense language that resonates with business leaders.
[12:57] And in order to understand the impact of it, you really have to understand,
[13:03] like being in a corporate space or being in a government space where you have someone who's a practitioner and eventually they become a supervisor and then hopefully through their career track, they become a manager and then they become a leader.
[13:17] And once you're out of the technical day to day operation of work,
[13:22] it becomes very difficult to stay aligned with the technical aspects of the work that's being done.
[13:30] You're supervising it, you're managing the people eventually who will do that work. But there's a disconnect often from the leaders to the practitioners in terms of the work that's being done.
[13:44] They just know that work is being done and there's a difference.
[13:49] So being able to talk to leaders and say, hey, you don't have to push the buttons, but you do have to understand at a very high level how the work of the practitioner eventually leads to the revenue benefit or the operational benefit that comes from them pushing the buttons.
[14:09] The more that you can do that, the clearer it becomes how the puzzle pieces fit together.
[14:16] And when that happens, that's where magic happens. Especially when you're trying to do something like digital transformation or have organizational breakthroughs.
[14:25] Pamela Isom: Exactly. I appreciate that. I appreciate this whole conversation.
[14:29] So you've already explained your career journey. We talked earlier about the economics of AI, and I'd like to go more into that, because there are some things that I think are worth reiterating here.
[14:46] So I'll start with a statistic that I found. Now, I have not verified this yet, but what I found when I did a little bit of research is that there are projections that place the AI market in 2025 at about 371.7 billion US dollars,
[15:07] with anticipated growth to 2.4 trillion by 2032.
[15:15] Now I don't see it, I hope,
[15:18] but I don't see it. Right.
[15:20] I don't know your perspectives on this, but when it comes to the economics of AI, let's talk about that.
[15:28] What do you think about this? And also tell me more about this bubble that you mentioned earlier. What's the bubble, and how does it tie to this market data that I'm finding?
[15:40] Christopher Richardson: Sure. So I am one of the few folks who is willing to openly talk about the idea that the AI industry, and I'll just call it an industry to separate it from the tech industry as a whole,
[15:56] is a bubble that is set to burst at some point.
[16:03] So what does that mean? That means that the industry has basically taken a bet that the AI industry will grow and sustain itself based on the revenue projections of the frontier organizations: that it will not only be widely adopted, but that new things will come from it.
[16:32] So let me just kind of baseline the AI industry and why we are where we are now, because I think it's important.
[16:40] So, the last major innovation that we had was with generative AI. And a lot of people think that generative AI just started in the last two or three years; that is simply not the case.
[16:53] Generative AI as a concept traces its origins back to the 1950s. It has been through several, what we call winters, which are periods of slowed down development.
[17:07] But fast forward: the last innovation was in 2022, with OpenAI's ChatGPT. I think people are very familiar with ChatGPT at this point. So in 2022,
[17:20] we're using large language models to do generative AI through chatbots. That's our ability to type in a request or prompt and be able to get an output. Right?
[17:33] So from that time, from that 2022 to now in 2025,
[17:38] there have been improvements to that large language model,
[17:43] right? So now we have competitors: we have the Geminis, we have Claude through Anthropic, we have the open models through Meta's Llama, and even some international models like DeepSeek, right?
[18:01] So we have all these models, trained on massive amounts of data, with parameter counts running from the millions into the billions (I even saw a 3-billion-parameter model recently), and they are able to generate new text, new video, new audio, and even new code.
[18:25] Right?
[18:26] But here's the problem:
[18:30] the foundation of them all is the transformer model, right? And the transformer model,
[18:36] in kind of layman's terms, is a probabilistic model, not a predictive one,
[18:44] meaning that we're talking complex math and probabilistic outcomes.
[18:51] Meaning that there's not a lot of certainty to the outcomes, right? And you can prove this to yourself by taking a prompt and presenting it to a large language model, then a little later putting the exact same prompt in; you're not going to get verbatim responses, right?
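[Editor's note: a minimal illustrative sketch of this point in Python. The toy distribution below is an assumption for demonstration only, not any real model's output; it shows just that sampling from next-token probabilities, as transformer decoders typically do, can yield different text for an identical prompt.]

```python
import random

# Toy next-token distribution (hypothetical numbers, illustration only).
# A transformer decoder ends each step with probabilities over candidate
# tokens; samplers draw from them rather than always taking the top token.
next_token_probs = {"reliable": 0.45, "useful": 0.30, "risky": 0.15, "costly": 0.10}

def sample_next_token(probs):
    """Draw one token at random, weighted by its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The "prompt" is identical every time, yet the continuation can differ.
for run in range(1, 4):
    print(f"run {run}: AI is {sample_next_token(next_token_probs)}")
```

Run it a few times and the outputs vary even though nothing about the input changed, which is the behavior Christopher is describing.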
[19:11] And you're like, well, why is that? It's because it's a probabilistic model. It's because it's designed that way. And why I'm emphasizing it so much is that in order for the AI industry to make money,
[19:26] people have to see AI as a reliable source of content,
[19:32] right? So that means that if I put in a prompt, I have to have content out of it or output out of it that I can rely on. And what do I mean by rely?
[19:42] I mean that it is truthful, that it doesn't violate any copyrights, that it can be useful, that it has utility.
[19:50] And when that doesn't happen,
[19:53] then that means that I can't use it. Right?
[19:56] So the problem that is happening right now is that the casual user does not understand prompting. They do not understand the idea of prompting, right?
[20:07] So they're not getting good outputs.
[20:10] And when you don't have good outputs, you have low adoption. And when you have low adoption, that means that the economic side of revenue generation isn't going to happen at scale.
[20:23] So what does that mean? That means that if a company has 10,000 or 20,000 users and they say, I want to have widespread adoption of generative AI, and the outputs that are coming out of it are not contributing to the revenue generation piece, at some point they're going to say, what are we doing here?
[20:44] Why are we spending this much money and not getting good outputs? Right. Now, you can say, okay, we can train to get better outputs, and that will help.
[20:56] But at some point we're going to hit a wall, right?
[20:58] And within this kind of three year period for some companies we've already hit that wall.
[21:03] Now there's a second part of this. Now you may have heard of agentic AI and people talking about AI agents more recently.
[21:11] So I want you to think about it like this.
[21:14] You have the large language model, and right now most large language models are hovering between, let's say, 25 to 30% hallucination rates. Hallucination meaning outputs that are not true, factual, or accurate.
[21:32] Right.
[21:33] So you have one large language model's output being compounded into another large language model.
[21:40] So now you're increasing the hallucination rate, or the potential failure rate. And if you have five or six of these agentic AI models chained together to do something, well, now you have a problem,
[21:54] right? And it's in that problem where it's not a question of can I get it working, it's a question of is the cost too high to have this level of success?
[22:07] Right?
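[Editor's note: a hedged back-of-envelope sketch of the compounding Christopher describes. Assuming, purely for illustration, that each chained call fails independently at the per-call hallucination rates he cites, end-to-end reliability decays geometrically:]

```python
# Illustrative assumption: each step in an agent chain succeeds
# independently with probability (1 - p), so a chain of n calls
# succeeds end to end with probability (1 - p) ** n.
for p in (0.25, 0.30):          # assumed per-call hallucination rates
    for n in (1, 3, 6):         # number of chained model calls
        print(f"p={p:.2f}, steps={n}: end-to-end success ~ {(1 - p) ** n:.0%}")
# With p = 0.30 and six chained calls, success falls to roughly 12%.
```

Real agent pipelines are not perfectly independent step to step, so the true numbers differ, but the direction of the math is the point: chaining multiplies failure modes.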
[22:08] So a lot of organizations right now are trying. They are trying to put together something in the agentic AI space, and more often than not they're failing, because the hallucination rate is not low enough and the mission-critical things are not able to be done with AI.
[22:27] And unfortunately, the industry will need those things to be accurate; otherwise there's not enough revenue coming in. And when I talk about the AI bubble bursting, it's the hype versus the actual utility on the ground being out of alignment, to the point where people are no longer willing to try.
[22:47] And it's taken three years, but that's where we are: very soon, people will no longer be willing to try, because they don't have confidence that they can produce mission-critical things, or things with high enough utility to justify the cost.
[23:05] So when that happens, that's where the idea of the bubble bursts. That's where that comes from.
[23:09] Pamela Isom: So you think when the bubble bursts, we'll come to the realization that it's ineffective?
[23:19] Christopher Richardson: I think that what will happen is that the potential for widespread adoption will be limited.
[23:27] And it will be so limited that the cost and the revenue-generating potential will be so out of alignment that there will be no more desire to invest in improving the models to the point where we can potentially increase utility.
[23:44] Because keep in mind, R&D is expensive.
[23:47] Training models is expensive. It's expensive in terms of the raw cost, the people cost, the land cost, the water cost, the resource cost. It's expensive to put these models out, right?
[24:00] So at some point, if they're not getting a return on that investment, they're going to stop doing R&D.
[24:05] They're going to stop improving these models, because they don't have the revenue to pay the people to come in. You know, if you don't have enough money coming in, you're going to close shop.
[24:14] This is how that works.
[24:16] The difference is that right now, and I don't think a lot of people understand this, the frontier model developers are being propped up by larger tech organizations through investment.
[24:29] If that stops, the money flow stops, then the R and D stops, then that means the improvements stop. And once that happens, then we're at a point where there's going to come a day of reckoning.
[24:40] And that's what I mean by the AI bubble bursting. I don't think that it's going to be like an Armageddon kind of situation.
[24:46] But the people who are directly involved in AI are smart.
[24:50] They need to be thinking about what comes next. Right? And I have some thoughts around what I think should come next and what I hope comes next.
[24:59] And it's not scaling. You know, we're going from the large language model idea to more niche small language models and small reasoning models, right?
[25:10] We become less general and more purpose-built. If we can get there, I think that we're going to have some good opportunities to recover. But the economic side of it is going to be a problem.
[25:23] If that bubble, if what we kind of forecasted happens,
[25:27] I'm concerned. I have really, really big concerns.
[25:30] Pamela Isom: Okay.
[25:31] Yeah. There is a lot of backing from the big companies to move this forward at accelerated paces.
[25:42] And some of the drivers behind it are economic advantage and competitive advantage.
[25:50] And so the testing and the verification and the validation, if we can get better at that... Like, we always talk about how we want, how we need, trustworthiness.
[26:03] That's hard to measure.
[26:05] Christopher Richardson: Yes.
[26:06] Pamela Isom: We need to get to the place where we understand:
[26:09] Is trustworthiness just hallucinations?
[26:12] Is it data manipulation?
[26:15] Is it preventing the models from getting manipulated?
[26:19] You know, so we have to better understand what trustworthiness really means. And that's hard to quantify, because, yeah, I can't even quantify today what trustworthiness means in my relationships with colleagues.
[26:31] Like, really.
[26:37] So it's hard. It seems like we need some better measurements.
[26:37] And as you were talking, I mean, I just could see myself running and I'm a very visual person. Clearly,
[26:44] I could see myself running and jogging and running and running and running and running and never reaching the destination.
[26:57] Christopher Richardson: Sure.
[26:57] Pamela Isom: And so that's what I was kind of envisioning as you were talking. And we don't want to be there. We need to get to where we are actually reaping benefits that are sustaining benefits and that we can experience for even greater competitive advantage and greater rewards.
[27:20] And so I appreciate what you're saying. I get it. I get what you're saying. But Christopher, I mean, we have to do it in order to stay competitive as a society.
[27:31] So what we've got to figure out is how to do it.
[27:35] We used to talk about as I was growing up,
[27:38] don't throw your money in bags with holes in it.
[27:42] You need to be sure you're not throwing your money in bags with holes in it because it's not going to work for you. And that's how we were raised. So we were very meticulous about how we invest and how we give and all of that.
[27:57] Right.
[27:58] So you make me think about that when it comes to the economics of AI, in that it is necessary,
[28:05] but we just need to be smart about it. Like, we don't know when we're going to really reap those benefits. And you said the cost; I heard you say the cost of this.
[28:14] When you think about the resources, I'm concerned about the resources as well. You know, I'm real concerned about water.
[28:20] We talk about energy, but I'm really concerned about water. And I had Mashika not too long ago and she was talking about that. So I'm concerned about water because if our water resources dry up, we die.
[28:31] Christopher Richardson: Yep.
[28:32] Pamela Isom: We are born and bred based on water.
[28:35] So we have to be mindful of these types of things so that that doesn't happen. So I get what you're saying, but at the same time we need to balance it because we've got to be competitive.
[28:47] Christopher Richardson: So, just a note on the idea of competitive advantage, and I'm going to use DeepSeek as an example here.
[28:56] So when we think about the idea of competitive advantage when it comes to generative AI,
[29:04] we're talking about two different things that I don't think people are framing correctly.
[29:13] So one is the idea of model production and the second is the idea of infrastructure.
[29:23] So if you look at it from a policy perspective,
[29:26] Nvidia being kind of the dominant force,
[29:29] has been prevented from selling to certain markets, in particular China. Right.
[29:37] So there's an infrastructure piece to this, where we're saying the competitive advantage is that we hold the cards in terms of GPU usage. Right.
[29:46] But the question is what happens when you can train a model on limited GPUs and you don't need the same level of energy to get a similar quality output? Right.
[30:02] And that was the game changer for DeepSeek: they were able to train their model with far fewer parameters and far less GPU usage. Right.
[30:14] So if that continues to be the trend where we're going to smaller parameters, less GPU usage, then the question becomes where does the competitive advantage lie? Does it lie in our intellectual prowess as Americans versus others?
[30:32] Does it lie in our educational institutions producing theorists and physicists and computer scientists and folks dealing with quantum, to be able to produce these things?
[30:44] Where does it lie?
[30:46] Because if it lies in our education system, well, policy-wise, our education system is being eroded. So that can't be the competitive advantage. If it comes in the form of our ability to produce chips,
[31:01] then the CHIPS and Science Act, from a policy perspective, was our goal,
[31:05] was our inroad to competitive advantage, and that's being pulled back.
[31:10] So where does our competitive advantage really lie? If it's in the intellectual prowess,
[31:15] well, now we're creating an environment where we can't have our best and brightest stay in the US to create the things that we need to create in order to have that competitive advantage.
[31:26] So I wonder, where does it lie? Does it lie in the infrastructure? Because what we are doing is creating many,
[31:33] many,
[31:34] many new data centers, Right?
[31:37] Any barren land, they're looking to put a data center on it, right?
[31:41] But if it lies in the outputs,
[31:43] then we should be seeing people working on creating more with less.
[31:50] And in the industry, if you're observing it, we're seeing more with more. So I'm confused as to the strategy around competitive advantage in this moment, because it doesn't feel like we are in alignment with what will allow us to have competitive advantage as the US when it comes to AI.
[32:13] We don't have universal policy around governance. We don't have safety measures.
[32:19] As a matter of fact, we have the opposite. We have an administration that is saying we don't need policies, we don't need guardrails. But if you work in any kind of large organization,
[32:31] you have to have governance, you have to have guardrails. You have to clearly say, yes, you can do this,
[32:38] no, you cannot do this.
[32:40] If for no other reason,
[32:42] to protect your intellectual property and to make sure that the people who are working within your organization stay safe and know what the rules are.
[32:51] But policy-wise, that's not where we're headed.
[32:55] So I wonder if the idea of competitive advantage has a different meaning than the one that I understand because it doesn't feel like the actions are in alignment with that.
[33:06] Maybe you disagree,
[33:07] but that's kind of how it seems, just from an analyst's observation right now.
[33:12] Pamela Isom: I don't disagree.
[33:13] I think that we need to define and strategically craft a plan for gaining competitive advantage, and pay attention to the fact that some of the things that we're talking about on this call,
[33:32] some things will help and some things will not.
[33:35] Right. So take the example of DeepSeek that you used:
[33:39] they are using fewer resources and getting results. If they can figure it out, we can figure it out.
[33:46] Christopher Richardson: Yeah, I agree.
[33:47] Pamela Isom: That's what I mean. So we have to figure out what competitive advantage really means. And it does not mean pumping out a bunch of models thinking that people are going to use them.
[34:01] Yeah, yeah,
right. That's not what it means. It means strategically thinking about and planning innovations that are going to be sustainable and that are going to hold up when our competitors come up with something.
[34:23] Right. So it also means paying closer attention to what the competition is doing. Like it didn't take rocket science for them to come up with a model that used fewer parameters and cost less with strong results.
[34:36] That didn't take a rocket scientist to come up with that.
So we say this a lot: reduce the cost of the chips.
[34:45] Come up with mechanisms that don't cause the chips to overheat, so that they're not driving up consumption of electricity and water. Like, just do it.
[34:56] Christopher Richardson: Yeah, yeah.
[34:57] Pamela Isom: Throw in on that infrastructure and do it right. And let the research help us figure this out. And I think that that's where we're going as a society.
[35:07] I think that's where we're going. But there are a lot of things that you pointed out that are occurring,
but still, we need to define and really understand what competitive advantage means. Now, people like you and I can help push that message, to say we need to really take a serious look at what we mean by competitive advantage.
[35:28] But I think that the conversation is definitely good, because there's also the whole privacy realm. Right, the privacy realm. I don't know how DeepSeek is from a privacy perspective.
[35:38] And DeepSeek isn't the only one.
[35:40] Christopher Richardson: Right.
[35:40] Pamela Isom: So there needs to be some holistic conversations about this,
[35:46] is what I would say. But I don't disagree.
[35:51] I just know that we need to move.
[35:54] You gotta go fast, you gotta go hard,
[35:54] but you gotta be smart. And we gotta figure out what smart means in order to gain real competitive advantage before the bubble bursts.
[36:03] Christopher Richardson: For your listeners, as you were talking about the resource side,
[36:07] there's something that I've told people for a long time that I'd love to share with your audience,
[36:13] which is the byproduct of electricity is heat.
[36:17] Okay. The byproduct of electricity is heat.
[36:21] When you think about the compute power needed for some of these models,
[36:27] what we should be thinking is that the byproduct of electricity is heat and heat rises. You talk about global warming. I know that's a very controversial topic, but I think we can all agree that we've seen,
[36:40] you know, we've seen more natural disasters, we've seen more problems in the climate space, just the overall warming of the world, and that has dire consequences. So when we think about the cost of innovation,
[37:00] we have to put the idea that climate is important into the overall story, especially in the US, because we have ambition to build not just for AI, but for quantum.
[37:16] Right. And we're thinking ahead.
[37:18] Right. And we're strategically planning for what's next. And as part of that,
[37:23] we're thinking about chips and we're thinking about compute. And when we think about those things, we have to think about the resources being used. And we have to remember, again: the byproduct of electricity is heat.
[37:39] So when we think about water, what does water do?
[37:42] It cools, right? And you just said it: water keeps you alive and sustains you. If you're using it to cool chips, then that means people aren't drinking it, that means people aren't consuming it.
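[Editor's note: a rough, hedged estimate of the scale, using standard physical constants rather than figures from the episode. If all of a data center's heat were rejected by evaporating water, the water cost per megawatt-hour would be roughly:]

```python
# 1 MWh of electricity ultimately becomes ~3600 MJ of heat to reject.
heat_per_mwh_mj = 3600.0
# Latent heat of vaporization of water: ~2.26 MJ per kg.
latent_heat_mj_per_kg = 2.26

water_kg = heat_per_mwh_mj / latent_heat_mj_per_kg
print(f"~{water_kg:,.0f} kg (~{water_kg / 1000:.1f} cubic meters) of water per MWh")
# ~1,593 kg, i.e. about 1.6 cubic meters evaporated per MWh under this
# simplification; real facilities recirculate water and mix cooling methods,
# so actual consumption varies widely. Illustration of scale only.
```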
[37:53] And that means heat is rising.
[37:57] I don't know if you've ever heard of cloud seeding,
[38:00] right? But we've done more cloud seeding in the last couple of years than we've done at any other time.
[38:07] Why is that? Because we have droughts; we have near-universal droughts on the coasts, and that all plays together. I don't think a lot of people really think about the environmental impact of technology.
[38:19] But you know who does is the people who have a lot of money to spend and invest.
[38:23] And they're saying, you know what, I'm not going to worry about that. What I am going to worry about is getting these chips into data centers and making sure that we can get a return on investment.
[38:34] When we talk about the economics of AI, we're talking about large scale money being invested, right?
[38:42] And at some point if you invested a penny in a stock, you know, you want a return,
[38:48] you're looking for that return.
[38:50] Do you care if someone has to suffer for it?
[38:54] You should, if you're a conscientious person.
[38:57] But we're at a point where, again, it's too big to fail. So sacrifices are being made. But I just wish that those sacrifices weren't coming at the expense of poor people,
[39:10] of people in already depressed communities. Because that's where data centers are going.
[39:15] They're not going in the Hollywood Hills,
[39:18] they're going in outside of Memphis, Tennessee.
[39:20] They're going in rural areas that already have limited resources.
[39:26] So I just wish that we would speak a little louder about the environmental impacts now so that we can get creative. Because that's not to say that we have to stop.
[39:38] I know you talked about innovation has to keep going,
[39:41] but there is a moment where we can say, is there a different,
[39:46] better or alternative way that we can power these things?
[39:50] And that requires sound policy,
[39:53] right?
[39:54] So can we harvest wind? Can we harvest solar? Can we have alternative fuels? Does it always have to be fossil fuels? Do we always have to cool with water? These are questions that have to be asked and answered if we want to have a sustainable future where we not only have the AI power,
[40:13] but we also have human power,
[40:15] that we have people alive to be able to power the AI.
[40:19] Pamela Isom: You know, we're a smart country. We're a smart country. We will,
[40:23] yeah, we will get there.
[40:25] I'm very optimistic about it. We will get there. But I understand the practicality and the current situation.
[40:33] I hear you right,
[40:35] but we will get there.
[40:37] Christopher Richardson: I just hope that it's fast enough to create the value and utility that people have really high expectations for.
[40:48] But also, and this is something that I don't hear people talk about a lot and it's something I'm very passionate about,
[40:53] which is that the opportunities are democratized.
[41:00] Right.
[41:00] So what does that mean? So I live in Baltimore and Baltimore is a city that has a lot of potential in terms of human capital,
[41:14] but no way to really harness it in ways that would be valuable enough right now. And what I mean
[41:23] is that we've got some really smart kids, and if I could get them trained in AI, if I could get them trained in some of the tech of the future,
[41:33] then we have a ready made workforce that can really be utilized to get things to the next level. Right. We have the brain trust here. We just don't have the infrastructure here,
[41:46] we don't have the resources here.
[41:49] And there's a lot of reasons why, and I'd be happy to discuss that at a later time.
[41:55] But my bigger point is that I originally saw AI as an opportunity to democratize some industries and some spaces that were not democratized. So when I first heard about AI,
[42:12] and I heard about it in the context of maybe being used in the banking sector to make banking more fair,
[42:21] right?
[42:21] So in terms of credit approvals and things like that,
[42:26] to make them more fair.
[42:28] But when you think about it, large language models, and models in general, are programmed by people,
[42:35] and if those people already have biases, then those biases can be embedded in the models. So then it's like, well, okay, if that's the case, then do we have checks, balances, governance, and guardrails in place that preserve the intention and the spirit of it being democratized and being fair?
[42:55] So that's my real concern: making sure that there's fairness and equity baked into the technology,
[43:07] not only in its use, but in its outputs and in who has access to it.
[43:13] You know, that's why I'm so excited about open source models and why I'm not as excited about some of the paid models, because who can access them?
[43:27] I want young people to be able to access models, and be able to learn on models that they can get access to for free, or get access to without kind of the limitations of guardrails.
[43:42] The schools can't afford to pay for a lot of these models. They don't have money for books. They certainly don't have money for subscriptions.
[43:51] So it's a little frustrating when you think about equal access. Now, that's something, again, that's a policy thing, so I understand that, but it's a little frustrating.
[44:02] What I am trying to do to solve that is working with the University of Baltimore, which recently stood up an AI innovation center, to try to get kids trained and get kids access to AI resources, so that they have the baseline resources they need to try.
[44:20] And that's all we want them to do right now. We want them to try, we want them to experiment. When I talk to the business leaders,
[44:27] one of their first thoughts is I want to give licenses to a small group of people so that they can try and they can experiment and then they can innovate.
[44:37] Right.
[44:38] If we want that for the enterprise level customers and for business,
[44:43] we should certainly give it to kids.
[44:45] Yeah,
[44:46] young people, so that they can try. Because, I mean, the reality is that we have an aging populace that is going into retirement. And if we don't fill those roles with qualified,
[44:57] learned, smart young people,
[45:00] then we have a bigger problem than just AI. So, you know, part of it is getting them attached to innovation now, getting them attached to a scientific understanding of process now, getting them thinking about bigger ideas now, having them integrated into the process, so that when it's time for them to do the work,
[45:23] they're not trying to figure out how to innovate. They already know; it's baked into their psyche and culture. So that's just another thing that was kind of on my mind.
[45:32] I know I've been rambling; I apologize for it. But I want your audience to understand that the fact that you have a phone that you can, you know, type queries on and get a response from is only a part of the story.
[45:46] There's a much bigger story that is going on. And the more that you're aware of it,
[45:52] the more that you can make better, smarter decisions around how to use it in your own personal life and then also how to use it in your business.
[46:01] Pamela Isom: Yeah, no, I think those are really great insights. And I still think that the whole conversation holds, even when you talk about...
[46:08] I'm going to the next question. But even when you talk about the economics: when you think about building up a community, and our communities across the globe, right, building up all communities and our young people and equipping them with the tools that they need to carry the torch,
[46:27] You're talking about sustainable economies.
[46:30] Christopher Richardson: Absolutely.
[46:31] Pamela Isom: So I think that, while we always talk about it from the standpoint of workforce development and talent and training and all that, it also goes to sustaining our economy.
[46:43] So I believe that is very relevant and very applicable. And of course, you know, I'm with you 100% on that. I think that our young people are going to carry the torch and they're not that young.
[46:55] Right. So.
[46:57] But our next generations need to be able and equipped to carry the torch. And if we can help them as much as possible, we should do so.
[47:03] Christopher Richardson: Absolutely.
[47:04] Pamela Isom: Tell me about a myth. Is there a myth that you would like to dispel today? Like right now?
[47:10] Christopher Richardson: It's the one that I start most of my presentations with, which is that AI is always right.
[47:16] You know, we start with that one because we have grown accustomed to having a thought or a question in our minds and being able to get an immediate answer from what feels like a reliable source.
[47:33] Right. So when I think about when I was growing up,
[47:36] you know, pre kind of personal computing,
[47:39] I would have to write my questions down, and my mother and father would take me to the library on Sunday, and I would pull out an encyclopedia and try to find the answer.
[47:49] Right.
[47:50] And I came into personal computing as an adult.
[47:55] So it was an interesting concept to me to be able to have a question, thought or idea and be able to explore it in slow time, but ultimately real time to be able to get an answer.
[48:09] Right.
[48:10] And then we get to the point where the web search becomes synonymous with Googling, right? So now everybody's Googling.
[48:20] And that gave you the idea of,
[48:23] well, if it's on the first page, that means that it's more popular than other answers.
[48:28] So I can probably reliably trust this, or I can validate it with one or two easy sources, right? And then we get to the point where we have Wikipedia, which is crowdsourced information that has some guardrails around it in terms of information being accurate or inaccurate.
[48:49] So that becomes a more reliable source, right. And now we're at a point where people are putting search queries into AI thinking that this is crowdsourced information that has some level of reliability without vetting it.
[49:07] So they end up in a world of trouble, because they don't understand the transformer model. They don't understand probabilistic outcomes and predictive math. They don't understand that, right?
[49:19] So what they're doing is taking that information, without the vetting that they would have probably done in yesteryear, and putting it into official documents.
[49:31] I even heard of an attorney getting in trouble because he referenced cases that came from a ChatGPT search. Right?
[49:38] So now the process of searching and being curious enough to validate information is being degraded,
[49:47] right?
[49:48] So the myth is that it's accurate. The response that I like to give with that is you can trust,
[49:57] but you have to verify.
[49:59] And what that verification looks like is going back to what you've already known to do, which is find multiple sources that are saying the same thing.
[50:07] And if you do that, then it can become a useful tool.
[50:11] But if you do not verify, it will cost you at some point, because of how the math works. So the myth that I think is the important one, the one I like to leave people with, is: just make sure that the information that you are receiving is accurate, and that's independent of your ability to prompt.
[50:34] That's independent of your source material. That's independent of all that.
[50:38] It is trust but verify.
[50:41] Make sure that the answers are accurate. That's what I would say in terms of a myth.
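[Editor's note: one way to operationalize "trust but verify" in software (an illustrative sketch, not a method discussed in the episode): require agreement across multiple independent sources, or repeated model runs, before accepting an answer. The function and data below are hypothetical.]

```python
from collections import Counter

def accept_if_corroborated(answers, min_agreement=2):
    """Accept an answer only when at least `min_agreement` independent
    sources (or repeated model runs) agree; otherwise return None so the
    caller knows to verify manually."""
    normalized = [a.strip().lower() for a in answers]
    answer, count = Counter(normalized).most_common(1)[0]
    return answer if count >= min_agreement else None

# Hypothetical answers gathered from three separate sources or re-asks:
answers = ["Paris", "paris", "Lyon"]
print(accept_if_corroborated(answers) or "unverified: check manually")
```

Agreement is not proof of accuracy, so this is a screen, not a substitute for checking a primary source, which is exactly the verification habit Christopher describes.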
[50:45] Pamela Isom: Okay, and then the last question I have for you is, can you share words of wisdom or a call to action?
[50:53] Christopher Richardson: Sure, I'll give both. One, the words of wisdom is remember why you're doing anything that you're doing.
[51:03] Right?
[51:04] And I have a little piece of paper that I tape to any laptop that I have that says, are you sure? With a question mark.
[51:13] Are you sure? Right.
[51:15] Because when I look down at that, if I have to say something, I want to make sure that I'm confident in what I'm saying, and make sure that
[51:24] the information that I'm giving is legit, is accurate, and aligns with my beliefs. Right?
[51:30] So it's making sure that you understand what your purpose is in terms of presenting information,
[51:37] and make sure that you understand the why.
[51:39] The call to action is twofold. One is to continue to listen to this podcast, because I personally have learned a lot about AI and the perspectives on AI through this podcast.
[51:54] So, one, continue listening to the podcast, and two,
[51:58] is be curious and find reliable resources to get your questions answered.
[52:05] So if you have questions around AI,
[52:08] then Ms. Isom is extremely generous with her answers around questions regarding AI. I want to extend to your audience the ability to ask me questions as well.
[52:20] So if you have a curiosity or question,
[52:24] please reach out to me on LinkedIn at Christopher Richardson, and I'm more than happy to explore various topics around AI with you. And it's in building community around AI that we get questions answered and that we start the true, people-powered innovation process.
[52:43] And that's what I would leave your audience with.
[52:46] Pamela Isom: Okay. Well, I appreciate those words of wisdom, and I appreciate you being on the show today and talking with me. Very much appreciated.