AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E047 – AI or Not – Joshua Linard and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
A dark cloud can teach us how to lead AI. We unpack the link between everyday intuition and model design—when probabilistic signals are “good enough,” when deterministic rules must take over, and how a hybrid approach powers safer, more useful agentic systems. Joshua Linard, a former senior geospatial and data leader at the Department of Energy, joins me to trace a path from environmental physics and national-scale modeling to practical enterprise AI, where community and governance make or break outcomes.
We dig into the real work of governance inside flat, federated organizations: building communities of interest that surface experiments, evolving into communities of practice that standardize methods, and guiding executive boards that set policy and unlock funding. You’ll hear why lean rituals—weekly accomplishments, risks, and issues—create more clarity with less burden, and how time-boxed best practices keep pace with fast-moving tech. We also explore enterprise risk beyond cyber, pulling in IP, compliance, operations, finance, and public trust to shape smarter priorities.
On the technical side, we break down agentic AI as a modular, hybrid architecture. Deterministic guardrails handle sensitive boundaries like PII and export controls, while probabilistic components accelerate discovery, summarization, and pattern detection. The key is domain-aware metadata and explicit error tolerances so models stay grounded. We compare precision needs across contexts—from programs counting pennies to portfolios rounding to the nearest half-million—and map process templates to tool selection so AI fits the job, not the other way around.
The leadership thread across it all is humility, curiosity, and courage: admitting none of us knows it all, asking hard questions about why processes exist, and starting small so we can learn quickly and iterate. If you’re navigating responsible AI, digital transformation, or the leap to agentic systems, this conversation offers a clear, field-tested playbook. If it resonates, subscribe, share with a colleague, and leave a review telling us the one guardrail you’ll implement next.
[00:00] Pamela Isom: This podcast is for informational purposes only.
[00:27] Personal views and opinions expressed by our podcast guests are their own and not legal advice.
[00:35] Neither health, tax, nor professional advice, nor official statements by their organizations.
[00:42] Guest views may not be those of the host.
[00:51] Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and your digital transformation journeys.
[01:06] I am Pamela Isom and I am your podcast host.
[01:10] And so we have a delightful guest with us today, Joshua Linard.
[01:15] And Josh and I have known each other for a while now. We worked together at the Department of Energy.
[01:23] Josh is Director of Enterprise Data and AI. He's a leader of geospatial data strategy. When we worked together, he was the Senior Geospatial Information Officer and Chief Data Officer. We just go back.
[01:36] So Josh,
[01:37] welcome to AI or not.
[01:40] Joshua Linard: Thanks Pam.
[01:41] Yeah, it's a real pleasure to be here. I've been listening to the podcasts and yeah,
[01:47] they're great, informative.
[01:48] It's nice to hear other people asking the same questions and thinking about the same things.
[01:53] Pamela Isom: That's great. I'm glad that you are here. So let's get started. Let's have you tell me more about yourself and your career journey, and while you're doing that, tell me what's next for you.
[02:05] Joshua Linard: Sure, let's see. So I started my career at the U.S. Geological Survey and I was tasked with developing and applying agricultural chemical transport models all across the country. And so as part of that work I got to look at how people were doing agriculture all across the country, all the different,
[02:21] say, irrigation practices or the farming methods or the soil mitigation and control measures and then you know, what kind of nutrients they were adding to their soils and how much pesticides they were adding and trying to understand where and how those chemicals made it from point A to point B.
[02:38] And so as part of that, my background was environmental physics and chemistry, but I really quickly had to learn how those very, say, one-dimensional or simplistic equations operate in 3D and 4D settings.
[02:53] And so that's where my geospatial experience really came to play where I had to understand the extent of these physical and chemical processes at various resolutions within say, a watershed.
[03:06] So that could be my looking at a 10 meter square area or 30 meter square area or hundreds of square kilometers.
[03:13] And so,
[03:14] understanding the interaction of that spatial physics and chemistry, combined with how those change in a temporal aspect. Right. So often we would look at, say,
[03:23] sub-hourly data, but we would also look at annual time scales and what the variability is in each of these processes within those time intervals.
[03:32] And so all of that gets to big data, right? And this was before big data was big data,
[03:37] before it was even a term. And my simulation and forecasting is what we would now just call AI. And it was also one of our close peers, Dr. Kelly Rose, she was the one who told me, she's like, Josh, you really just do AI.
[03:48] And,
[03:49] and I said, okay, that makes a lot more sense now that I understand what people are doing. And it also helps to understand the struggles that people encounter when they're trying to deploy AI,
[03:59] especially in today's age of just being confronted with any number of options on how we might exploit AI to improve our business processes.
[04:06] And so that rolled into managing data. And then I came to the Department of Energy and tried to understand how we were going to deal with the nuclear legacy. So, you know,
[04:16] in the process of getting the nuclear materials we use to develop all of our arsenal, there was a big mess left behind. And so there's a lot of environmental modeling and trying to understand where all that goes.
[04:27] And so, and all of this modeling, you know, it just kind of involved either,
[04:32] you know, things that we could measure, specifically, I can go and measure how much rain is falling, I can go and measure how much moisture's in the soil, and I can take water samples of the groundwater and understand what the chemistry is.
[04:43] So there's all things that I can measure and very closely understand what's going on.
[04:48] But then there's also things that we just couldn't understand. Right? You just didn't have the data. So if you tried to characterize what is the temperature and the wind speed of a piece of the atmosphere, and that is only a few square meters, there's no information to do that.
[05:01] You have to make an inference based on some larger scale data. And then you have to downscale it.
[05:05] And so then you put in these statistical approaches. And then with these models, you're trying to characterize all these different things that go into understanding how to get to that endpoint of a watershed, and what is the chemical concentration of some analyte, whatever, to a very fine resolution, a very fine number of significant digits.
[05:22] You don't necessarily have all the data to do that.
[05:24] And so then it was,
[05:26] how can I mix and match the different approaches, whether it be probabilistic or statistical or deterministic,
[05:32] to get to that answer. Because you have to get to an answer, right? That's the job.
[05:36] Those were the issues that we dealt with, and understanding how all that data went together with those algorithms is what got the attention of, I think, the things that you were trying to do at headquarters, Pam, trying to understand what that meant from a geospatial perspective, because there are lots of people trying to deal with this, and then later recognizing the value from a broader perspective in data in general.
[05:58] So that got me to those leadership positions at DOE as the Chief Data Officer and the Senior Agency Official for Geospatial Information. And, yeah, it's been a fun ride.
[06:07] Pamela Isom: I'm so glad that we had the opportunity to work together and get to know one another. And you were one of the leaders in the business units that I worked with at first, while I was Chief Data Officer and Chief AI Officer there.
[06:24] And I distinctly remember working with you on using AI to figure out what was happening with the whales, what the nature, status, and condition of the whales were,
[06:40] and whether those whales were disseminating any harmful chemicals into the atmosphere. And so we had said we'll use AI and computer vision to help figure these things out in support of the mission, while you were actually out in the mission area.
[06:59] That was before you came to the CIO area. So it's really good to circle back and it's a nice throwback. So, so glad to have you here.
[07:08] Tell me more about your thoughts on leading practices for governing digital transformation,
[07:16] especially considering current times and events.
[07:20] I know we were talking about a sense of community to help us to understand the value of some of the outcomes.
[07:31] But I'd like you to elaborate more because you had quite a bit to say around governance, which is what my area is right now, and accelerating governance and making governance effective in this digital transformation era.
[07:46] So tell me more.
[07:47] Joshua Linard: Yeah,
[07:48] this is something that we were kind of forced to do at the Department of Energy, and we may have been a little atypical compared to other agencies in that we
[07:59] were not hierarchical. It was a very flat organization. And so we were in positions of accountability, but not much authority.
[08:07] And so this is where that community aspect comes in, where essentially we were forced to get the buy in of our stakeholders on better ways of doing business.
[08:17] And then also, I think,
[08:19] especially as I went through that process,
[08:21] recognizing that everybody's at a different stage of the journey and even different business lines within a specific organization are at different points in the journey, and everybody has different ways of measuring success.
[08:33] They have different,
[08:34] say, applications and software that they're using, different kind of the tech stack in general.
[08:39] And so if everybody's using something different and they don't have a way to align,
[08:44] then what is the path forward on trying to figure out how can we modernize the organization?
[08:51] And so we looked at this, and this really came to a head recently with AI and part of the organization that you set up. We set up a community of interest, right.
[09:01] And I usually think of a community of interest as a group of people that are talking and commiserating about a similar topic. Right. And so we were often talking about what are the things that people are doing in terms of implementing AI, especially at DOE with the research base.
[09:18] We were often talking about, you know, what are people doing in the R&D space, what are they doing in terms of demonstration, and then how are they working to deploy that to solve real-world problems.
[09:27] And that was the big emphasis at DOE, to try to get things into the taxpayers' hands so that we can improve the way that things are done. And so that community of interest has really evolved in trying to understand, you know, kind of what people are doing.
[09:43] And the next level, right, might be the community of practice, where you're actually trying to figure out, okay, we figured out what we're doing, let's figure out the best ways to do things.
[09:51] And obviously there's a timestamp on that, right? Because technology, if nothing else, changes a lot,
[09:59] and rapidly. Right. And that's something I think a lot of communities struggle to accept: this is going to be the way to, you know, do things now.
[10:08] These are going to be, you know, the best practices that we've identified for the current moment.
[10:13] And recognizing that, you know, we need to have a mechanism to come back and reassess their validity at some point in the future.
[10:19] But that's what that community of practice would do, right. And then ultimately we set up these governance boards comprised of, you know, executives across the department, to get informed by these different communities of practice or these PMOs.
[10:34] In our geospatial org we had a PMO, and then the executives would be guided on these different topics to understand what decisions should be made from, say, a guidance perspective or a policy perspective, or even just a directive, that being the DOE term.
[10:50] DOE aggregates its different policies, plans, and guidance into directives.
[10:59] So, yeah, I mean, I think there are different approaches, right, and everybody's at a different start. So within a small organization, you may have a community of interest, and ideally it's going to feed up to a broader,
[11:11] more comprehensive agency-level community of practice, and then it'll feed up to a more broad,
[11:16] say, data governance board. But it wasn't always the case. Some people didn't have anything, right? A lot of times we found it was Joe or Jane Data Scientist recognizing that there were no SOPs, and we needed to find some peers to talk to about how to improve things.
[11:33] And so it always started at that individual level. And that's essentially how I got started, right? Like I had come from an agency that had very robust and very burdensome processes for governing their data,
[11:46] and then to come to another agency and find those things were not there in any sense and saying, okay, well, what can I do to add this? Because there is value, right?
[11:54] There was burden, but then there is value. And I think that that's something we always struggle with when it comes to governance.
[11:59] Pamela Isom: That's one of the challenges with governance.
[12:04] So for instance, for me today as a principal investigator for some work with the department,
[12:12] there's a lot of rigor when it comes to reporting,
[12:17] when it comes to communicating what we're doing and the progress. But at the same time, a lot of it is just common sense.
[12:26] And so if we can look at governance from the standpoint that it's about doing the things that we need to do, as lightweight as possible but as effective as possible, if we can focus on effectiveness,
[12:40] then it can work. Like, for instance, one of the things that we're doing is we have weekly reviews of accomplishments.
[12:50] We'll talk about accomplishments,
[12:54] and we'll talk about risks and issues,
[12:58] and we'll have that in our discussion. And then we convert that into a weekly report. Not everybody has to do a weekly report because that adds to the burden.
[13:09] But we have to show up and share our accomplishments for the week.
[13:14] We have to share if there are any risks, and then we have to examine whether those risks are truly issues and need attention.
[13:23] Right. And so what's our mitigation strategy? So I think we need to start thinking more, especially in the day and time of AI, of how do we keep our governance lean but effective?
[13:34] And I believe that what you were saying is to always take into account a sense of community, because a sense of community can help us to zero in on the things that we may not have thought about, and on some very varying perspectives.
[13:48] So that's what I heard you say and that's why I wanted you to lean into that some more because I like the value of community to me is very important.
[13:57] And what you were saying makes a whole lot of sense and something that we need not exclude in our thinking and in our governance alignment.
[14:06] Joshua Linard: Yeah, I think the Agile mindset weighs into this a lot, right, in terms of what that community is comprised of.
[14:14] And when I think of an agile approach to say, managing a project that's making sure that all the stakeholders that might have something to weigh in are involved in the conversation.
[14:24] And I've seen a lot of these communities, especially in the IT community,
[14:27] they get very, like, domain-centric, right? You may only have networking people talking to networking people. So they talk about networking, but they don't understand how the networking's being used, how it's getting paid for, and so on.
[14:41] And that's where I think the community gets involved. And this gets to that risk assessment, right? Because often we found that if we wanted money to do anything related to enterprise data governance,
[14:53] enterprise AI, it had to go through cyber, because cyber had the money, because that's where everybody thought the risks were. But we knew at the operational side that we had public stakeholders that were concerned.
[15:03] If we were transitioning R&D to deployment,
[15:08] we had IP rights issues. There's lots of risks. And these are all framed in some nice references that have been put out there by the CFO Council.
[15:16] This enterprise risk management playbook is the one that I use that categorizes 11 types of risk.
[15:21] And I think that making sure that those risks are represented in your governance process is the way to make sure that you're at least considering what there might be and making sure that those executive champions and sponsors that are controlling what work gets done,
[15:40] you know, have the awareness that, okay, yes, there's a cyber risk, we get it, but there are these other things that I need to consider. And that's something, I think, that risk management in general has been advocated for, for at least the last decade or so,
[15:53] you know, and DOE does have an order that gets to risk management.
[15:57] It's often talked about and not really,
[16:00] I don't know, it's not executed consistently enough to make an informed decision about what's working and what's not.
[16:06] I think, you know, we could say the same thing for enterprise architecture, which also weighs into a lot of this. Right? Because if you don't understand those risks, then you're not informing your EA well enough to build it comprehensively to mitigate those risks.
[16:19] Pamela Isom: I think informed decision making is critical for AI and for humans. So I agree with you on that.
[16:27] Now I want to talk about deterministic and probabilistic models and the association with agentic AI.
[16:36] But to get into that conversation, I would like you to tell me more about the dark cloud and weather forecasting that you mentioned to me earlier, because I think it's a wonderful analogy and gets into that conversation and makes it very plain for the listeners.
[16:52] So go for it. Tell me more.
[16:54] Joshua Linard: Okay, so this is the way that I think about trying to understand what AI has to offer, how to understand its risks in how you're using AI, and maybe where you should invest your time and resources.
[17:08] When we think about understanding whether it's going to rain or not,
[17:12] and as we grow up,
[17:15] we get used to it raining and we make an association with a dark cloud and the chances that they're going to be rain.
[17:21] And so that is what we would call in the data world, anecdotal data.
[17:25] It's something that we just experienced. We haven't really kept track of, you know, how many days there have been dark clouds and there's also been rain. Right. We haven't done any of that kind of math.
[17:34] We just kind of have this general association. So that's anecdotal data and those are the kind of things that we use all the time to, to make decisions. Right. We make them in business all the time.
[17:43] Our experiences told us this. And so this is the decision that we're going to make.
[17:47] And then, you know, we've seen over the last, maybe I could be getting close to two decades now about this data driven decisions and how that's the key to success.
[17:55] And so, and this is where we get to measured observations and measured data. And so that might be where you're actually keeping track, okay, it was dark clouds this day and it rained.
[18:04] And so you end up with this time series that you can then use to create a statistic that tells you,
[18:11] okay, well, it's highly likely that it's going to rain if there are dark clouds. And you end up with that probability, and this is where we're getting to probabilistic models.
[18:21] And so for those who don't know,
[18:23] most of your large language models that run your ChatGPTs and your,
[18:28] you know, your Anthropic tools and your Geminis, those are all based on probabilistic models, right? So there's an association that is statistically based from one thing to another.
[18:39] And that works, right? It works for a given kind of context. And so if you wanted to know that,
[18:44] you know, is it going to rain? I'm going to go look at the sky, and if there's a dark cloud: yep, that's good enough, right? So that's a good enough answer.
[18:50] And that's a lot of what AI tends to offer. It's good enough.
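To make that move from anecdote to measured data concrete, here is a minimal Python sketch; the daily log and the resulting numbers are invented purely for illustration.

```python
# Hypothetical daily log of (dark_cloud_observed, it_rained) pairs -- the kind of
# record keeping that turns an anecdote into measured data.
observations = [
    (True, True), (True, False), (False, False), (True, True),
    (False, False), (True, True), (False, True), (True, True),
]

# Keep only the days with dark clouds and count how often it actually rained.
rained_on_cloudy_days = [rained for cloudy, rained in observations if cloudy]
p_rain_given_cloud = sum(rained_on_cloudy_days) / len(rained_on_cloudy_days)

# The "good enough" probabilistic signal: is it likely to rain under a dark cloud?
print(f"P(rain | dark cloud) = {p_rain_given_cloud:.2f}")  # 0.80 for this toy log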
[18:53] But if you wanted to know whether or not a dark cloud was going to create 0.2 millimeters of rain over a surface that was only like my driveway,
[19:03] you know, it's not going to be able to do that.
[19:05] And the way that you get to that kind of an answer, at least in my experience, is through a deterministic model, right? You're going to look at the physics that are involved in that.
[19:14] You're going to go measure all kinds of different things or not. Which gets into the issues with deterministic models. And it all gets to what is the amount of error that I'm willing to accept at the end?
[19:26] And so if we're going to try to think about a deterministic model, it can be really, really quantitative, right?
[19:35] Theoretically, a fully distributed model is the term that we would use to characterize a volume of the natural environment and all the physics that are going on in there and the chemistry. That's your game.
[19:47] But that representative elementary volume,
[19:49] that would be the goal: if you understand that one little thing and how it represents itself in the world. So if you think about a handful of soil, and if you tried to understand everything that was in your hand and you had all the ways to measure that,
[20:03] that's one great thing.
[20:04] But then there's lots of handfuls of soil all over the world.
[20:08] And so trying to characterize all the little things across the world in that handful size, it's just not reasonable, Right? It's not cost effective. Any number of scientists will say it could be done, but again,
[20:19] we'll never know, because it's too expensive. But it's a fully distributed, highly quantitative, very deterministic way to get at the answer. And because you can't get it all, you still have to come up with some kind of answer to represent all these different things.
[20:32] And so that's where you end up with a mix, right? You have some data that represents, say, what is the extent of clouds in the atmosphere, and that's going to be measured at, say, a 4-kilometer grid scale.
[20:44] That's huge,
[20:45] while your land surface is being represented by data that is only like 10 square meters.
[20:50] So that's much smaller. And so how do you get those two to react? And so that's when you have to rely on some statistics, you have to rely on some probabilities.
[20:57] And so you have to mix and match your different kind of math and your statistics to get to this answer. So that's one aspect of how you connect this deterministic and probabilistic approaches.
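As a rough illustration of that mix, here is a small Python sketch that spreads one deterministic coarse-grid measurement across finer cells using statistical weights; the weights are random placeholders, not a real downscaling scheme.

```python
import random

# One measured value for a coarse 4 km grid cell (deterministic input), in mm of rain.
coarse_rain_mm = 12.0

# Hypothetical 10 m cells inside that coarse cell. A real downscaling step would use
# terrain, land cover, and so on; here the weights are just random draws, normalised
# so the fine-scale values still average back to the coarse measurement.
n_fine_cells = 16
weights = [random.uniform(0.5, 1.5) for _ in range(n_fine_cells)]
mean_weight = sum(weights) / n_fine_cells
fine_rain_mm = [coarse_rain_mm * w / mean_weight for w in weights]

print(f"coarse value:    {coarse_rain_mm:.1f} mm")
print(f"fine-scale mean: {sum(fine_rain_mm) / n_fine_cells:.1f} mm")  # matches the coarse value
```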
[21:09] But then your algorithm that you're using can also be comprised of any number of, you know, different parameters and variables. And so, as an example, the models that I ran,
[21:20] we could run them as simplistically as, you know, a handful of parameters, but then we had others that were 300 individual parameters represented over these handful size things that are just a few square meters that ends up with thousands and thousands of data points.
[21:35] I never did the math. Now it's a big thing to say, I've got 10 billion parameters, whatever.
[21:39] Pamela Isom: In your LLM, you know, where are we then? So if you think about that, I think that agentic AI is a combination of deterministic,
[21:53] maybe guardrails, and probabilistic reasoning, and that it's a hybrid. So I see agentic AI as kind of like this hybrid architecture that needs deterministic capabilities in order to ensure the safety of the agentic AI, or the safety of the agent.
[22:19] What's your take on that?
[22:21] Are we in the hybrid era and should we zero in more on that or what's your perspective?
[22:27] Joshua Linard: When I think of agentic, you're trying to represent a process and you're representing that process by any number of different connecting nodes to your algorithm, right? And so each of those may be represented by this probabilistic or deterministic method.
[22:41] And it all depends on the data you have available.
[22:43] And then in terms of the guardrails,
[22:46] every model and
[22:47] every data set, right, they have a domain for which they're representative.
[22:52] And so, like, if I developed a weather model for the arid Southwest, it's not going to work for the tropics of Florida or something like that.
[23:02] And obviously the bigger models, they're designed to handle all these different kinds of variances. But yeah, I mean, I think that those are those guardrails, right?
[23:12] Every model, like there's a saying, all models are wrong, some are useful,
[23:16] and it's because a model is initially designed for a given domain.
[23:20] And then the problems happen when people try to apply it to a different thing without understanding what the ramifications are.
[23:29] So yeah, I think that, so there's definitely guardrails in place,
[23:33] whether people understand them or not. And I think we could get into a whole discussion about metadata and the problems associated with populating it and whether or not people even use it.
[23:44] And I think that rolls into the data and the algorithms and AI.
[23:49] So when we were talking about guardrails and what they might mean, like in the context of, say, a normal application of AI versus agentic AI, one of the nice things about agentic AI is it's comprised of sub-models.
[24:02] And when you have those sub-models, you can add guardrails at that sub level. It allows you to constrain the error and reduce the probabilities of model misuse, versus if you're just trying to do one application in one kind of big model, which is just constrained by one set of guardrails,
[24:19] you know, with an agentic model, you have the ability to, to further control what the outcomes might be.
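A minimal sketch of that sub-model idea, assuming a hypothetical pipeline: each node pairs a probabilistic component (stubbed out here) with a deterministic domain check, echoing the arid-Southwest-versus-tropics example. All names are illustrative.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SubModel:
    # One node in a hypothetical agentic pipeline.
    name: str
    in_domain: Callable[[Dict[str, str]], bool]  # deterministic guardrail on the input
    run: Callable[[Dict[str, str]], str]         # stand-in for the probabilistic model call

def invoke(node: SubModel, request: Dict[str, str]) -> str:
    if not node.in_domain(request):
        # Refuse to extrapolate outside the domain the model was built for.
        return f"{node.name}: request is out of domain, route to another sub-model"
    return node.run(request)

# Guardrail: this made-up model was only fitted for the arid Southwest.
arid_weather = SubModel(
    name="arid-southwest-weather",
    in_domain=lambda req: req.get("region") == "arid_southwest",
    run=lambda req: "low chance of rain",  # placeholder answer
)

print(invoke(arid_weather, {"region": "tropics_florida"}))  # blocked by the guardrail
print(invoke(arid_weather, {"region": "arid_southwest"}))   # allowed through
```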
[24:25] Pamela Isom: I feel like that if we're trying to be effective governance leaders,
[24:31] effective leaders of AI,
[24:34] effective leaders of digital transformation,
[24:37] considering the state of affairs,
[24:41] considering the uncertainties that we are faced with that continue to evolve,
[24:49] I think agentic systems are good.
[24:52] But I believe that we need to zero in on a hybrid of the deterministic and a probabilistic approach.
[25:03] And I think when we talk about deterministic guardrails, an example of that, to make it really clear to folks,
[25:11] an example of that could be logic that blocks releasing or exporting personally identifiable information or export-controlled data. Right? So then there is the guardrail. So the agent is there.
[25:31] It says that this is going on,
[25:34] that I've identified something in the environment.
[25:38] Maybe you're a contracting officer and you're using an AI agent that helps you understand whether you're using SAM properly or whether you followed some requirements.
[25:49] It sees some personally identifiable information and knows not to reveal that information but to redact it, yet still tell me that this is going on, or to share that information with me as a contracting officer but not with others.
[26:05] Right. So this is where I think the deterministic behavior comes in. And I see it more on the safety side, me personally, and I think you said that as well,
[26:19] where I see the other,
[26:22] the more probabilistic, in other areas.
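A minimal sketch of that kind of deterministic guardrail in Python; the pattern, the role names, and the example text are all made up for illustration.

```python
import re

# Deterministic rule: a fixed pattern, not a model judgment. Here a US SSN-style
# pattern stands in for whatever PII or export-control markings the policy names.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def guard_output(text: str, requester_role: str) -> str:
    if requester_role == "contracting_officer":
        # The authorized role still sees the detail; the agent just surfaces it.
        return text
    # Everyone else gets the finding with the sensitive value redacted.
    return SSN_PATTERN.sub("[REDACTED-PII]", text)

finding = "Vendor record lists SSN 123-45-6789 in the uploaded form."
print(guard_output(finding, "contracting_officer"))  # full detail
print(guard_output(finding, "analyst"))              # redacted, but still reported
```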
[26:26] So I feel like, if I'm talking to listeners and I want them to understand, how do I build up my literacy,
[26:33] how do I zero in on the things that I should be focused on, without getting too technical.
[26:40] I would look at things like this. I wouldn't try to understand this exactly to the T,
[26:45] but I would get why this is so important.
[26:48] If you are looking at managing and governing in this era,
[26:52] would you agree with that or anything you want to add to that?
[26:56] Joshua Linard: I do.
[26:56] Pamela Isom: Right.
[26:57] Joshua Linard: And I do agree. And I think about how this gets to the value of defining, I'll say, best practices. And I usually suggest people try to do that before they go try to grab an AI to figure out how to do the process, because of what that allows you to do.
[27:13] So if you have a best practice, theoretically you can derive a form or a template,
[27:18] then you're going to ask people to fill out, populate and then you can measure success by what's in that template.
[27:25] And then that template allows you to develop that deterministic model. Whereas if you were to just say, from a public perspective, go find instances of phone numbers or addresses using this pattern,
[27:39] we all know that the probabilistic AI that's out there is good enough to make those relations and find those things.
[27:44] But then we don't know whether or not it's really PII. Right. Maybe it's just our company address, or it's our HR person. Right. There are different things that are okay, but that's where these templates, I think, come in handy.
[27:55] And so from a governance perspective, I always try to get people to. Again, it gets back to understanding your data and understanding what you're asking for.
[28:02] And this again gets back to the community thing. If you go through the effort of understanding what it is that you really need to answer your questions, and then designing or selecting an AI to meet that, I think that's a better way in the long run.
[28:16] Right. It protects you from other risks.
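A small sketch of the template idea, assuming some hypothetical fields and rules: once the best practice is captured as a form, the check becomes deterministic instead of a model guess.

```python
import re

# Hypothetical template derived from a documented best practice: named fields with
# explicit, deterministic rules rather than a free-form pattern hunt.
template_rules = {
    "project_id": lambda v: bool(re.fullmatch(r"[A-Z]{2}-\d{4}", v)),
    "poc_phone":  lambda v: bool(re.fullmatch(r"\d{3}-\d{3}-\d{4}", v)),
    "is_public":  lambda v: v in {"yes", "no"},
}

submission = {"project_id": "EM-0042", "poc_phone": "555-867-5309", "is_public": "maybe"}

violations = [field for field, rule in template_rules.items()
              if not rule(str(submission.get(field, "")))]
print("template violations:", violations)  # ['is_public'] for this example
```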
[28:19] Pamela Isom: Okay, so how can we become better business and individual leaders in responsible AI and LLM powered design?
[28:31] Joshua Linard: Yeah, I mean, I think I always come to it with trying to understand where the organization is that I'm trying to help.
[28:40] Oftentimes they don't even know where their data is, or they'll just say IT takes care of it or my vendor takes care of it. And so that's one of the things: you have to have a data owner and be responsible.
[28:53] So that's one thing, meeting people where they are. And I was just talking to one of the labs, one of the cleanup sites, a couple of weeks ago about how they had a very detailed process for how people could introduce new data to the environment for broader use.
[29:14] It had specified metadata that had to be populated, and they had a community that was willing to populate it to the level of expectation.
[29:24] And as a result, right, you've got a bunch of people that recognize the value of the data sharing and then quality data, and they were humming.
[29:34] It was impressive.
[29:36] And so there are different groups, and that extends to different business lines. Right. And so we could talk about something like your financial folks and their different objectives, and in a large organization they may only care about to the closest $500 million.
[29:51] So for them, you know, they can get away with a lot more in terms of, like, what AI to pick.
[29:57] Maybe something probabilistic is fine for them. But if you're talking about a small business that's, you know, counting pennies, right, they're going to really need something more refined to understand which processes can be improved.
[30:11] When I used to manage some of our cleanup sites, my annual budget was $22,000 or something silly. Right.
[30:19] When you think about managing a site of tens of square kilometers, with lots of data collection and report writing and technical experts involved, 22 grand doesn't get you very far.
[30:31] And so I was very conscious of my pennies and I was doing earned value management statistics on a monthly basis. Right. Every time I got invoiced and it was a lot of effort to make sure that I was in the black at the end of the year.
[30:46] But then I talked to another part of the organization that was dealing with much bigger problems, and they were one of these groups that rounded to the closest half a million bucks. They didn't care about pennies.
[31:01] My whole budget was in the error margin of their slop.
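For listeners who haven't run earned value management, the monthly check comes down to a few standard formulas; the dollar figures below are made up.

```python
# Standard earned value management metrics on a small monthly budget.
planned_value = 1_800.00   # PV: dollar value of work scheduled to date
earned_value  = 1_650.00   # EV: budgeted value of work actually completed
actual_cost   = 1_700.00   # AC: what the invoices say was spent

cost_variance     = earned_value - actual_cost    # negative means over cost
schedule_variance = earned_value - planned_value  # negative means behind schedule
cpi = earned_value / actual_cost                  # cost performance index, < 1.0 is over cost
spi = earned_value / planned_value                # schedule performance index, < 1.0 is behind

print(f"CV={cost_variance:.0f}  SV={schedule_variance:.0f}  CPI={cpi:.2f}  SPI={spi:.2f}")
```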
[31:05] Pamela Isom: I know the feeling.
[31:06] Joshua Linard: Yeah. Yeah. And so that gets to the issue of tool selection, right, and understanding what it is you need to do at the end of the year. Like, my boss expected me to count to the penny,
[31:16] but his boss expected the closest half a million dollars. Right. Or we could talk about regulatory requirements, and what that means in terms of, I need to get this many data assets out, or I need to have this many pen tests done, or whatever it might be.
[31:28] Pamela Isom: All right, so that's interesting.
[31:31] You just keep making me go into throwback zone. So,
[31:35] yeah, I'm like, I have to come back, come back. Okay, so let's talk about this. Before you share parting words of wisdom or advice or a call to action, and be thinking about that,
[31:48] I just want to know if you could change anything pertaining to the way that we are leading in this AI era,
[32:02] what would that be? If you could change anything,
[32:05] and if there isn't, if you think we got it all good,
[32:08] say that.
[32:09] What would you change?
[32:11] Joshua Linard: I think humility is a big thing, especially from an executive perspective, and recognizing that the world has moved very fast as we've moved through our careers,
[32:22] and there's no way we're going to know it all.
[32:24] And so we have to lean on the other people in our organizations to understand what their business processes are, what, what are their expectations and requirements,
[32:33] and then understanding also that the people that are supporting us, whether that be IT or legal or environmental health and safety,
[32:41] whoever it is,
[32:42] they don't know it all either.
[32:44] And so that's something I feel like CIOs really bear: the burden of having to understand everything in the world about AI, and they don't know it all.
[32:51] And so it's just too much.
[32:53] And so that is just being humble and recognizing that we're all in this together and that all we can do is take that first step and, you know, just try to work together.
[33:02] And as long as we're working together and everybody's aligned on where we're going,
[33:06] then we're only going to fail ourselves. And that's something I say from that big enterprise perspective.
[33:13] Pamela Isom: Right.
[33:13] Joshua Linard: You know, obviously with smaller groups, they can be a little more agile, a little more flexible. But.
[33:20] Pamela Isom: Well, I appreciate that. I oftentimes think, in that context, about
[33:25] curiosity and courage, and how there's humility.
[33:31] And then there, along with that is curiosity and being willing to listen to others and learn from the experiences of others and share our own experiences.
[33:46] And then courage. So take courage to try it out, but also to realize that challenges are going to come and we're going to fail at some things,
[34:02] but we're going to take courage and overcome. And I think that's what I heard you saying. And you called.
[34:09] Joshua Linard: Yeah, yeah, absolutely. I mean, I think that you're dead right. And so you have to have the courage to ask your peers, why? Why are they doing something a certain way?
[34:19] And you have to have the humility to accept that they're asking you for a good reason. They're not trying to poke at you about your way of doing business.
[34:28] They're trying to learn. Everybody's in this together.
[34:30] And one of the things that we used to say a lot back when you were at DOE was all boats rise with the tide.
[34:38] And so, as long as we're all working together, we're all going to benefit, and recognizing that. And it also gets back to the whole agile mindset, trying to work together so that we can get there faster.
[34:49] And. Yeah.
[34:52] Pamela Isom: I don't know if there's anything else you want to share with me while we're in this talk, but I do need you to share either parting words of wisdom or advice or a call to action if you don't have anything else that you wanted to share before you get to that.
[35:08] Is there anything else?
[35:10] Joshua Linard: Let's see, all kinds of things. This has just been really fun, and we could probably talk all day. But one of the things in terms of parting words would be to get started.
[35:20] Lots of groups, they really struggle with where do I start?
[35:24] I'm afraid that I'm going to mess up.
[35:26] My boss is going to yell at me, My stakeholder is going to alienate me, whatever it might be.
[35:32] And I think you can spend a lot of time wrestling with where to start. And this is actually where AI gets to be really handy.
[35:40] It tends to be a little less defensible, but it helps you get started.
[35:43] And that is it. Just start going.
[35:46] You don't know what you don't know until you start asking.
[35:50] And you have to move and find a way to get yourself the answers you need to move forward.
[35:56] And the only way to do that in my experience, is like, you've got to try things out. And if you fail, you fail. The important thing is to fail fast and recognize it quickly so that you can adapt and move on.
[36:06] And that can be hard with complex organizations. You know, you just have to overcome a lot of traditional inertia. It's one thing to get a group to align, but then once you've aligned, asking people to change what you've aligned on, especially if you're talking KPIs or something, that's just a brutal process.
[36:24] And so that's something. But then again, you have to have that conversation with your community, and that is like, hey, we're just going to agree with this for now.
[36:31] Is it good enough to start and then we're going to revisit this at the next quarter, the next year,
[36:36] whenever it might be. But yeah, just get started.
[36:39] Pamela Isom: Okay.
[36:40] And then I just want to take time out and say again thank you for being here. I'm so glad. I do think that this is very informative. I love this discussion and I just love talking to you, I always have.
[36:53] So I appreciate you being here.