AI or Not

E044 – AI or Not – Felix Gonzalez and Pamela Isom

Season 2 Episode 44

Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

A candid journey from chemical engineering to AI leadership sets the stage for a grounded look at where artificial intelligence truly delivers value—and where governance must take the lead. We sit down with Felix Gonzalez, a data science and AI director and educator, to connect past tech waves to today’s GenAI moment, surfacing practical lessons you can use right now. Think of GenAI as a calculator for language: it speeds drafts, code, and analysis, while your expertise sets the guardrails and makes the call.

We walk through real use cases that cut through the buzz. Drawing on public AI inventories, we unpack how federal agencies are already piloting and deploying automation, machine learning, and GenAI to streamline classification, triage documents, and modernize internal processes. In safety and health, we explore computer vision for PPE compliance, fire detection, environmental monitoring, and even search-and-rescue support—areas where human-in-the-loop design turns fast pattern recognition into smarter, safer operations. In healthcare, we address privacy, bias, and why recommendation systems augment clinicians instead of replacing them.

If you’re wrestling with ROI, you’re not alone. We compare traditional ML and LLMs on cost, quality, and risk, showing how to match the tool to the job. We also push back on doomsday takes about disruption: roles will change, but critical thinking gets sharper when AI removes busywork and surfaces contradictions worth investigating. The durable advantage isn’t magic prompts; it’s governance, oversight, and continuous learning that keep you adaptive as models evolve.

Ready to turn AI into better decisions rather than bigger risks? Follow the show, share this episode with a colleague who owns a critical workflow, and leave a review with the one AI use case you want us to break down next.

[00:00] Pamela Isom: This podcast is for informational purposes only.

[00:27] Personal views and opinions expressed by our podcast guests are their own and not legal advice,

[00:35] nor health, tax, or professional advice, nor official statements by their organizations.

[00:42] Guest views may not be those of the host.

[00:51] Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey.

[01:07] I am Pamela Isom and I am your podcast host.

[01:11] So, okay everybody. So we have a wonderful guest with us today, Felix Gonzalez.

[01:18] Felix is a data science and AI director.

[01:21] We work together during my tenure at the Department of Energy as well as currently as he is a data science consultant at my firm.

[01:32] Felix, welcome to AI or Not.

[01:35] Felix Gonzalez: Pam, thank you very much for having me.

[01:37] Pamela Isom: Okay, so let's go ahead and jump right in. Will you tell me more about yourself, your career journey, and what you're thinking about?

[01:46] Tell me what's in your future?

[01:50] Felix Gonzalez: Yeah, absolutely.

[01:51] So I started my career as a chemical engineer. That's what I studied in college for my bachelor's degree.

[01:59] I started as a risk and reliability engineer for the nuclear industry where we did, you know, regulatory analysis and took into consideration risk and informed decision making.

[02:10] Early in my career I was actually exposed to a lot of data analysis, and, you know, I've always been interested in computers. So even though I was a chemical engineer working on risk, I would always get involved in as much data analysis as I could.

[02:26] Anything that was related to programming, I would, you know, raise my hand and volunteer for it. So I was always exploring, looking for opportunities in that area.

[02:37] Also during my career, I worked in environmental safety and health.

[02:43] I also teach at a local university here in the Maryland area.

[02:48] So being exposed to many different areas, you know, many different points of view from experts in the field and from students that are coming in.

[02:56] So I teach data science, which is my focus right now, anything AI.

[03:01] I think everybody's getting involved in how to use generative AI for decision making. So that's one of the classes that I am going to be teaching in the future.

[03:14] We're going to teach students: okay, how do you use all these tools, these large language models, for your work? How do you use them for data science, for coding, to help you code better, understand the problems, and be more efficient in general?

[03:27] So yeah, that's a little bit of my career.

[03:30] So I've been working for a little bit over 19 years now, so not to date myself, but.

[03:35] So it's been a very cool career, going from the first transformation back in the early 2000s with data analysis. And even back then it wasn't called AI. We can talk a little bit about that in the podcast, because even back then, you know, I was working on some projects that would now be called artificial intelligence.

[04:00] So. So that, you know, that's been an interesting part of my career where, you know, I've been involved in AI even before it was actually called AI.

[04:08] Pamela Isom: Yeah, that's pretty cool. We can talk some more about that. So what's in your future? What do you see?

[04:14] Felix Gonzalez: So in my future, definitely continue pushing the boundaries on how to use AI. How to leverage AI,

[04:19] especially Gen AI now is a topic, you know, that nowadays everybody's talking about.

[04:24] So how to leverage GenAI in the processes of the different organizations I work with, and even how I use it personally, which I think is, you know, having an impact on how GenAI is being used within society itself.

[04:36] So. So definitely very transformative technology.

[04:40] You know, sometimes I think people expect it to be more transformative than I personally think it's going to be. But we can talk a little bit more about that as well.

[04:49] But definitely, I'll continue pushing the boundaries, teaching my students how to use the different technologies that are out there, not only AI and GenAI related,

[04:59] as well as teaching my colleagues and you know, pushing those boundaries on where can we take AI within the different areas that I work with. As you mentioned, environmental safety and health is the one area that I work with right now.

[05:11] So it's going to be an area where I think we're going to continue using these new technologies to inform decision making and make our organizations more efficient.

[05:21] Pamela Isom: Yes. I'm looking forward to teaming with you on some of the education and training that we'll be doing together with the students. That's pretty exciting.

[05:30] I would like to ask you about past technical disruptions.

[05:35] AI is a disruptive technology in itself, and I'm starting to think of it more than technology.

[05:42] So I think more about the governance aspects of it, and how it's a capability that can help augment decision making.

[05:50] But I'm starting to say we should look at it beyond the technology realm,

[05:54] but we got to get there.

[05:56] But if I look back at past tech disruptions,

[06:00] I'd like to know from you,

[06:03] do you see any lessons learned that we could potentially carry forward in this AI era?

[06:09] Felix Gonzalez: Oh, yeah, absolutely. And there's a lot. And if you don't mind, I want to mention a few of the technological disruptions and transformations that I lived through during my career, and even before then.

[06:22] So when I started back in the early 2000s and, you know, started my internships,

[06:28] one of the first experiences I had was with a colleague of mine in the organization I worked with. He was an expert in Excel, right?

[06:35] And back then in data analytics, if you're familiar with the early 2000s, there were lots of companies that had, like, spreadsheet software that would do a lot of different things.

[06:45] There was this person in my group who was always sought out. People were always reaching out to him.

[06:51] And he was an expert in Excel. He would do a lot of really complex things in Visual Basic for applications. And, you know, back then that was like, pretty impressive.

[06:59] So I started to get involved with him and learning as much as I could. That was very early during one of my internships. And I was like, when I am an expert in my field, that's who I want to be.

[07:09] I want to be able to do things like that, grab data, any data set and extract insights to be able to inform decision making.

[07:16] So. So that was probably the first experience that I had with the world of data analytics and, you know,

[07:24] Visual Basic programming and programming in general on automating things, right? On automating, how do you transform data to extract insights?

[07:32] So that was my first experience, you know, back then. As I mentioned, there was lots of spreadsheet software; right now there's really not that many. Most of those companies are out of business or not used as much as they were back then.

[07:44] And, you know, you basically get those main players that are able to continue forward and keep pushing the boundaries. So, you know, Microsoft Excel is still big 20 years later.

[07:57] And as I said, there's not a lot of the companies that were out there during that time that are still active today.

[08:04] So that was the first exposure that I had. And back then there were not a lot of people who knew how to do data analysis. It's much, much more common nowadays for people to be able to open an Excel spreadsheet and, at a minimum, manipulate the data a little bit. Obviously not everybody knows how to do Visual Basic for Applications or programming. It's not easy.

[08:28] Even back then we did some pretty cool things where I'm like, I'm not even sure how I did that, because I would definitely not go back to my Visual Basic for Applications days.

[08:37] Especially when you have languages like Python nowadays that are really easy to understand and to work with.

[08:43] But that was kind of like the first, I would say, transformation that I lived.

[08:47] And you know, as I said, I think nowadays almost everyone knows how to open an Excel spreadsheet, or the different spreadsheet software that we have nowadays, or a CSV file, and understand and manipulate it.

[08:57] Back then it was not that common.

[08:59] So that was the first transformation that I lived.

[09:03] Social media. Then a little bit afterwards, especially with the introduction of smartphones, I think that was a big one.

[09:09] Where, you know, nowadays it's like everybody has a smartphone. I don't know anyone who doesn't have a smartphone. And that is literally a mini computer that you have on your pocket that has a GPS that has a camera in it.

[09:21] I mean, if you remember, before the smartphones,

[09:24] we had those GPS units that you would put in the car, and you had to update the maps when you got home. And it was different times.

[09:33] And obviously we had our cell phones as well.

[09:36] We had cameras; a lot of people would have their own digital cameras. You don't see those as often anymore, unless you want a professional camera that gives you manual settings.

[09:47] And so those were really, really big changes, and obviously many companies went out of business. Sometimes I wonder, do you even remember the personal digital assistants? I think this was my favorite one.

[09:57] Pamela Isom: I do.

[09:58] Felix Gonzalez: You do, right.

[10:00] So I was really into the personal digital assistants. I had various Palm OS models, even the Dell Axim X51v, which I sold years later for a lot of money.

[10:11] I was like, man, I wish I would have kept it. But even the Dell Axim X51v was almost the same as a smartphone. And one lesson learned that always surprises me is, if you think about those companies,

[10:24] it was like literally that was almost a cell phone.

[10:26] The only thing that, for whatever reason, they didn't integrate was the cell phone part, that communication part. Because even those PDAs could connect via Wi-Fi, and you could connect through Bluetooth back then.

[10:38] And there were a lot of things you could do almost the same as on a smartphone, except for calling. So I always wonder, man, those companies were primed to make that transformation and they didn't.

[10:50] I don't even know if Palm OS still exists. Obviously they don't do any hardware.

[10:55] Dell doesn't do any PDAs; I don't think they exist anymore.

[11:00] So yeah, that was another big one, with the smartphones and social media communication. All these new apps that we have allow us to do a lot of different things that we weren't able to do before.

[11:15] Pamela Isom: I think that's what mine was called, the PalmPilot.

[11:18] And I had another one,

[11:21] but that one I used the most and I was so excited about having it.

[11:25] And I think back to, if we think about lessons learned, I think about how not only that, but just go back to Blockbuster.

[11:34] So Blockbuster didn't make it, and everybody was at Blockbuster, like everybody. I remember lots and lots of people shopping at Blockbuster and standing in line to get videos, yada, yada.

[11:49] Right.

[11:50] So what this tells me in this conversation when you think about lessons learned, is that we have to keep up.

[11:57] The technology has to keep up. So here's a conversation I was in, this is interesting because I was in this conversation earlier with a different guest and we were talking about how he was concerned that the bubble was going to burst.

[12:11] And he was talking about the AI bubble.

[12:13] And I was like, what are you talking about? If it bursts, what are you talking about? So he was concerned that the return on the investment may not reach what we think if we don't keep up. And we started getting into this discussion about competitive advantage.

[12:35] And one of the things that I think about now, without repeating that conversation, is that they lost competitive advantage. So the lesson learned is: did they even understand? Did they think that what they were doing was going to last forever? Or was it just so good at the time that we thought it would sustain? Right. And so this is something that I am excited about.

[13:03] But I do think it's an opportunity, it's an opportunity for businesses today to think about those things that happened. Like why didn't the PalmPilot and the PDAs, why didn't they continue?

[13:19] Or maybe they did, maybe they morphed into the smartphone,

[13:24] but what about those providers?

[13:27] So it's something we have to think about today because it is. Our economy is dependent on it.

[13:35] Felix Gonzalez: Absolutely. Yeah, absolutely. And as you mentioned, maybe the technology morphed a little bit, or, you know, the industry did. But I think it's the same here with AI and GenAI: we have to ride the wave. We have to learn these new techniques for using these tools, which in the end are making us more efficient.

[13:55] And the way I see GenAI specifically, it's kind of like a calculator for non-numbers. Right? It's a calculator for language, it's a calculator for images, it's a calculator for sound, for video. It's going to continue to evolve as we move forward, but in my view it's probably going to have exactly the same impact as the calculator.

[14:16] It allowed us to be more efficient and more accurate in calculating numbers. It also morphed the way we learn: the calculator allowed us to move from memorizing all these mathematical formulas and just plugging in numbers, sometimes without really understanding what those formulas do, to understanding the concepts behind why you're doing what you're doing with math.

[14:52] Yeah, I think GenAI is going to be exactly the same thing. You see it a lot in academia right now, where the focus is more on the concepts rather than on the coding.

[15:03] For example, with code generators, the focus is more on: do we really understand the code? Now I don't have to worry as much about, oh, I have a period somewhere and I don't know where my error is, because I have to look over 330 lines of code knowing I have an error somewhere, and eventually, after an hour, I'm like, oh, here's the error: a misplaced period. With code generation tools I don't have to worry about that. Right? I don't have to spend time on that. I can focus on the actual concepts and how I'm going to use them in decision making.

[15:35] Right. So it's a calculator for non-numbers, and yes, it's very advanced obviously and it does a lot of things on the back end, but in general I think that's where GenAI is going.

[15:46] Is it a bubble? I don't think so. I think it's here to stay. I mean I see a lot of people using it for a lot of different things and as I mentioned earlier it's going to affect how society as a whole works.

[15:57] It's going to allow us to write a message even if, perhaps, I'm not the best writer. I go into any of these tools, put in my brainstormed ideas as text, hit enter, and tell the tool, hey, rewrite it. And the tool goes in and rewrites me a really nice-looking paragraph. Right?

[16:14] Pamela Isom: I do think that it's not just our students; we're all students. We're all students of AI and emerging tech. Right? But I do think that as we are maturing, we should come up with creative ways to use the tools, and use them as tools.

[16:34] Tools to help us be efficient and effective in our roles and responsibilities.

[16:40] Just as we learn how to use the tools, we have to learn how to oversee them. Right? Govern, and provide excellence, like oversight excellence. So, you know, I emphasize a lot around governance and oversight.

[16:55] And the reason being is that we are leaning on tools, on sophisticated tools, to help us out. But you always want to be the one that is guarding what you are using and monitoring the performance and the outputs, particularly in this AI era.

[17:15] And I think that's one of the lessons learned.

[17:19] Generative AI is going to evolve. It's going to evolve even more as data science evolves and as AI evolves.

[17:25] So what we're learning today may be obsolete in six months. Right? So we've got to keep sharp in that capacity as well.

[17:35] But is AI going to be obsolete? I don't think so. There are concerns, though, that the return on the investment isn't there,

[17:44] but I think that we can get there. I just think we have to do our due diligence to get there. Right. So from another perspective,

[17:53] it's kind of a change in the subject, but not so much. So I'm going to ask you about the impact of AI on safety and health. But before I do that, we've been doing some work together and you did some analysis, particularly for me,

[18:07] on the public inventory that was made available by the government and they were on the AI use cases.

[18:15] Now, I had an intentional goal in asking you to do so,

[18:21] which was advanced analytics and starting to look at data and aggregate that data and look for insights.

[18:29] You did uncover some insights based on that public information.

[18:34] I was wondering if you could share that.

[18:36] Felix Gonzalez: Yeah, absolutely. Absolutely.

[18:38] So, yeah, we looked at the inventory of AI use cases that the federal government has published, with the idea of understanding where the federal government is going, right?

[18:50] And we looked at, you know, the different publicly available data sets from different agencies.

[18:55] So one of the things that I want to mention is in my view,

[18:59] the federal government is actually pretty far ahead in the way it is using AI in general. They're riding the wave of these new technologies.

[19:09] So we explored the impact of new technology across different viewpoints. And you can see that in the AI inventory.

[19:18] And it's not only limited to AI in general.

[19:22] They're looking at gen AI,

[19:25] different impacts from different missions of the different agencies.

[19:30] So they're doing the work that they need to do, and they're focusing on different areas. Obviously there are people who are going to be looking at, okay, where's my cost benefit for this specific technology or that specific technology? Because AI and data science in general are really big fields.

[19:48] So they're looking at automating their internal processes. It's not only the impact of AI on the industries that some agencies regulate and how those industries use AI; the agencies are also using AI to automate their own processes.

[20:02] And I think this is true for almost every organization out there. When I look at data science positions out there and my students ask me where they should focus, I mean, anything automation nowadays is big, either through AI or through GenAI. And there are other technologies that are non-GenAI that can be used for automation as well. So there's a lot of focus on that, on how to make, you know, business processes more efficient.

[20:29] And obviously looking at the cost benefit when you're automating, for example, a predictive model or a classification model. You can use GenAI, or you can use a traditional machine learning classification algorithm. Well, which one is going to be more cost-beneficial, based on the cost and accuracy of running GenAI versus the cost of running a traditional machine learning algorithm?

[20:52] Pamela Isom: Quality.

[20:52] Felix Gonzalez: Right, and quality as well. Yes,

[20:55] So there's a lot of that right now throughout industry, and I think the AI use cases show that: all these technologies are being evaluated, some use cases have been deployed, and some use cases are more exploratory in nature.

[21:10] So yeah, just interesting times, because you can see these organizations trying to figure out, okay, where's the benefit of using this? What can I do with this?

[21:18] And you know, we're just inventing new ways of using these new technologies. So these are really, really exciting times to be in data science and AI, and in any area, because you can use AI for anything.

[21:28] You don't have to be a data scientist to be able to use AI for informing your processes.
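As a rough illustration of the cost-benefit question Felix raises, the GenAI-versus-traditional-ML tradeoff for a classification workload can be sketched in a few lines. Every number below (token counts, prices, training cost, volume) is a made-up assumption for illustration, not a real price or benchmark:

```python
# Back-of-the-envelope comparison of per-document classification cost:
# an LLM billed per token vs. a traditional ML model whose one-time
# training cost is amortized over a year's volume. All figures are
# illustrative assumptions.

def llm_cost_per_doc(tokens_per_doc, price_per_1k_tokens):
    """Marginal LLM inference cost for classifying one document."""
    return tokens_per_doc / 1000 * price_per_1k_tokens

def ml_cost_per_doc(training_cost, docs_per_year, infra_cost_per_doc):
    """Amortized training cost plus cheap per-document inference."""
    return training_cost / docs_per_year + infra_cost_per_doc

def breakeven_docs(training_cost, infra_cost_per_doc, llm_cost):
    """Annual volume above which the traditional model is cheaper."""
    return training_cost / (llm_cost - infra_cost_per_doc)

llm = llm_cost_per_doc(tokens_per_doc=800, price_per_1k_tokens=0.002)
ml = ml_cost_per_doc(training_cost=5_000, docs_per_year=10_000_000,
                     infra_cost_per_doc=0.0001)
print(f"LLM ${llm:.4f}/doc vs traditional ML ${ml:.4f}/doc; "
      f"breakeven about {breakeven_docs(5_000, 0.0001, llm):,.0f} docs/year")
```

The breakeven point is the annual volume above which the amortized traditional model becomes cheaper than paying per token; in practice, the accuracy and quality differences Felix and Pamela mention would weigh into the decision as well.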

[21:33] Pamela Isom: Right,

[21:35] Okay, well, I appreciate the report that you did and the analysis, and I'm still going through it. But I do think that the trends that were reported are real and line up with what we're seeing in 2025 and onwards.

[21:54] So it's a very insightful piece of data.

[21:57] I would encourage listeners: if you are curious about what the trends were, and if you think that some of the activities from 2023 may be obsolete now, you may find that that's not the case. Right. There are some trends that we saw, based on the public information, that are evolving and that support what we are seeing today. And you can see that it started a while ago.

[22:35] And so those are some of the things that the report exemplifies. But it also helps with planning for future activities and future insights. So thank you for that report and for the work that you did there.

[22:47] And definitely I am a consumer of the public information.

[22:51] Now let's talk about the impact of AI on safety and health.

[22:57] What's your take on that?

[22:59] Felix Gonzalez: So this is an interesting one.

[23:01] So as in any field, we were trying to leverage AI, right? And not even just trying; we have been leveraging AI for a long time now. When these AI transformations started to become common in the 2010s, I started looking back at some of the work that I did in the different organizations that I worked at.

[23:26] And I remember back in the like 2008, 2010 timeframe, I was involved in this report that looked at compensatory measures in nuclear power plants. And in that report we were looking at video image detection.

[23:42] If you say video image detection nowadays, I think a lot of people know what it is, how to use AI, how to apply algorithms for video image detection. But back then, when AI started to come out in the mid-2010s and become really common, I went back and looked at that whole report. I was like, yeah, I remember working on that.

[24:06] The whole report doesn't mention AI,

[24:09] not even one mention of the word artificial intelligence.

[24:12] It doesn't even mention the word algorithm.

[24:14] So that just puts in perspective how the language changed between when I wrote that report and just a few years after.

[24:23] So even back then, in the early-to-mid 2000s, from a health, safety, and environment perspective, we were looking at applying different new technologies that now we would talk about as AI.

[24:37] So that was kind of like an evolution. And people ask me, well, how long have you been working with AI? I'm like, oh, I think all my career I've been working with AI.

[24:44] We just didn't call it AI when I started working.

[24:48] So back then, in that specific report, we were looking at how you could detect fires using a video image feed. It was exactly the same thing as we would do now.

[24:57] So you have this algorithm running on the video image feed. You would train the algorithm on what a fire looks like, and then it would use machine learning to detect: okay, there's a higher probability of a fire here, so send someone to check.

[25:12] So that was one of the times early in my career that I worked with video image detection. And nowadays it's more common. I do trainings on how to train video image models for environmental safety and health.

[25:27] For example, PPE detection. It's a big one, where you want to make sure that your workers, when they go into the field, have the appropriate PPE. So you could grab a video feed from the places where they're working and make sure they have their hard hats on. And if they don't have their hard hats on, perhaps you send a notification somewhere that tells them, hey, you really need to have a hard hat on in this area.
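The alerting pattern Felix describes here can be sketched as a small loop over per-frame model scores. The scores below are hard-coded stand-ins for the output of a real computer-vision model, and the threshold and debounce window are assumptions chosen for illustration:

```python
# Sketch of a PPE-compliance alert loop. In practice the per-frame
# scores would come from a trained computer-vision model; here they
# are hard-coded stand-ins. Threshold and debounce are assumptions.

def ppe_alerts(frame_scores, threshold=0.8, consecutive=3):
    """Flag a frame index when `consecutive` frames in a row score
    above `threshold` for 'worker without hard hat'. Debouncing over
    several frames reduces one-frame false positives."""
    alerts, run = [], 0
    for i, score in enumerate(frame_scores):
        run = run + 1 if score >= threshold else 0
        if run == consecutive:
            alerts.append(i)   # send someone to check / notify the worker
            run = 0            # reset so we don't re-alert every frame
    return alerts

# Simulated score stream: a brief blip, then a sustained violation.
scores = [0.1, 0.9, 0.2, 0.85, 0.9, 0.95, 0.3]
print(ppe_alerts(scores))  # → [5]
```

Requiring several consecutive high-score frames before notifying anyone is one simple way to cut down on one-frame false positives before a person is sent to check.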

[25:52] So there are a lot of ways you can use AI, in this case video image detection. Trend analysis is another big one. You want to make sure that you are looking at your trends, because if they're going up, you perhaps have a problem somewhere that you can fix before it becomes a more serious issue.

[26:12] So we use AI for trend analysis, for trying to identify what the issues are. We have a lot of written reports in safety and health, and we want to make sure that from those written reports we can carry lessons learned from past events into the events we predict in the future, so that our workers are kept safe throughout the different work that they perform.

[26:36] So yeah, so you can use AI in many, many different ways.
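The trend-analysis idea Felix mentions — catch a rising incident rate before it becomes a serious issue — can be sketched with a simple least-squares slope over monthly incident counts. The counts and the alert threshold below are made-up illustrations; a production system would use more robust statistics:

```python
# Minimal trend check over monthly safety-incident counts.
# A positive least-squares slope above a chosen threshold flags
# a worsening trend worth investigating. Data and threshold are
# illustrative assumptions.

def slope(counts):
    """Ordinary least-squares slope of counts vs. time index."""
    n = len(counts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, counts))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def worsening(counts, threshold=0.5):
    """True when incidents are rising faster than `threshold` per month."""
    return slope(counts) > threshold

monthly_incidents = [2, 3, 2, 4, 5, 6]   # made-up counts
print(slope(monthly_incidents), worsening(monthly_incidents))  # 0.8 True
```

A flagged trend is an invitation for a person to go look, not a verdict: the human-in-the-loop still decides whether the rise reflects a real hazard or, say, better reporting.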

[26:41] Pamela Isom: I can remember some work that I was doing while working with you in the past.

[26:48] And I can remember using AI, the computer vision side, to detect issues with wells, so that we knew, if the workers were working in the environment, whether there was any type of exposure to chemicals, and also so that they knew the state of the wells, right, and the health of the wells, not only for water consumption but for worker safety.

[27:25] And so I do agree with what you're saying. I think computer vision has been around for a while, and we didn't even call it that. Right? So it's been around for a while.

[27:37] And then to compound that with the machine learning aspects is a wonderful thing. And that is something that,

[27:44] by the way, really shines through in the report that you generated for me on work that was going on in the federal government, not just at DOE, but in the federal government at large.

[27:53] There's a lot of work with ML around computer vision, and that's in that report. But more so, that is evolving. So you can take a look at technologies today,

[28:05] AI technologies. And they have just taken computer vision to a whole new realm.

[28:11] And it's getting better and better all the time. And it needs to get better, because we need accuracy and we need precision, and we need people to train those models like what you described, so that we get even better images and better predictions.

[28:27] I keep thinking about. I'll give it back to you in a second here. But I keep thinking about search and rescue.

[28:33] Search and rescue needs quality vision. We're using it today for that purpose. And I am looking forward to saving even more lives because of the safety aspect that comes along with using tools like AI.

[28:51] So I just wanted to bring that out, and that's something I keep an eye on. There are so many use cases, but we definitely need it in that space, because a lot of times we cannot send everybody in to rescue; we don't even know that they're there.

[29:12] And the tools in the aerial vehicles and different devices can help us detect people so we can save even more lives. So that is a good area for health and safety.

[29:27] So thanks, Felix, for that discussion.

[29:31] Now, from a health perspective and also from an environmental perspective,

[29:37] can we use AI to help from those angles?

[29:43] And also,

[29:45] are there any concerns that you may have?

[29:48] Felix Gonzalez: Yeah. So one of the things that we haven't really talked about is the concerns side. Obviously, when you're using training data, there may be biases in that training data, and you have to consider them. So yeah, there is a lot, from an ethics perspective, that you have to look into with these models.

[30:05] Now,

[30:06] my view is that when you pair AI with a human in the loop,

[30:11] that's where you're really going to reap the benefits best, because you have the best of two worlds working together. You have that experience, that expertise from a professional in a field,

[30:22] along with the efficiency and data quality that an AI model will provide. And across all the different areas that I've worked in,

[30:31] sometimes the AI algorithm will tell you something and you wonder, is this correct or not? And it makes you think differently.

[30:40] Right. You go in and question what the model is telling you, you explore it, and it's like, okay, I can see what the model is perhaps doing.

[30:47] And now it forces you to think differently sometimes.

[30:50] So that's one of the reasons why I always like pairing a human expert with an AI model. And a lot of the systems that I've worked with, we typically deploy as recommendation systems.

[31:01] It automates a lot of things within data analysis, but ultimately the person is responsible.

[31:08] Talking about healthcare, it's exactly the same thing. There's a lot of data.

[31:13] It has specific challenges because now you have health information that you have to protect.

[31:18] So depending on the area, you may have different challenges, different laws that you have to take into account when you are deploying a system.

[31:26] But there's a lot of data there.

[31:28] We as humans don't have the time to go through all of this data. This is where computers and AI algorithms really shine. They are able to process all of this data very quickly and then tell an expert, at a really high level,

[31:43] okay, this is what the AI model is recommending. And you can even have cases where you're automating something, yes, provided you have obviously taken precautions and put controls in place to make sure that when something bad happens, it's caught, addressed, and mitigated.

[31:59] From a healthcare perspective, there's a lot of data there, a lot of research being done on different drugs. I actually volunteer for a group where we analyze healthcare research data as well.

[32:11] I'm not sure I even mentioned that one earlier.

[32:14] So even on the healthcare side, I have a little bit of experience using AI and data science to analyze this data and try to extract insights. And there are a lot of algorithms you can use for different purposes, from classification algorithms to clustering algorithms to natural language processing.

[32:29] Combining natural language processing with classification and clustering, and GenAI as well, will obviously give you insights, and you can search the data.

[32:39] So there are a lot of applications out there for leveraging AI to extract insights. But in healthcare specifically, I would say privacy is important. Obviously, we need to protect people's

[32:52] health information. So that would be one of the challenges you have to consider that are really, really important.

[33:00] Pamela Isom: But there are good use cases for AI in the healthcare space. And I just think of

[33:05] diagnoses and the ability to navigate data much faster than we can individually, using augmenters like AI. And if we keep it in its proper perspective as a recommendation engine,

[33:25] it could be a great asset to doctors. Now, I have an optometrist that I see frequently, and he has a digital assistant that's there,

[33:35] that's listening to the conversation.

[33:39] But he tells me that he has a digital assistant; he has an in-person assistant and he has a digital assistant.

[33:45] And the digital assistant is tracking what's happening with my vision, keeping records, and conveying back in imagery what's going on. Right. So not our conversation, but what are some of the reports,

[34:02] what is some of the imagery saying? Right. So it's displaying back what the images are saying based on the tests that I have taken.

[34:12] And I've seen this with him and my dentist too. My dentist has some things like that as well.

[34:17] And so I do think that it's very helpful. With my dentist, I've said to him,

[34:22] I recognize that technology.

[34:25] Yeah, they simulated my mouth and then they turned around and showed me what it would look like after I had some work done. Right. And so then I recognized the technology.

[34:36] So I was like, yeah, okay. But you're still accountable.

[34:40] Felix Gonzalez: Exactly. Yep.

[34:41] Pamela Isom: That's what you were saying: keep that human in the loop. We can trust some things to some extent,

[34:49] but just don't become naive. Right. So that stewardship is still required of us is what I heard you say. Is that correct?

[34:58] Felix Gonzalez: Yeah. And in that example you gave, you have the dentist looking at a specific recommendation from an assistant, and in many cases it's going to confirm what the dentist probably already knows.

[35:10] Right. But in those cases where it doesn't confirm, it will force the dentist to think, okay, why is the system giving me something different?

[35:18] And I could talk all day long here about many cases where I've gotten into similar situations in the past. And yes, when those cases happen,

[35:28] sometimes you spend 30 minutes, an hour, figuring out why the system is recommending something that is different from what I thought.

[35:36] And it makes me think differently.

[35:37] Maybe go back to the data and analyze it and dive in, into the data and really understand what is happening.

[35:42] And yes, it forces me as a professional to better understand what is happening, drawing on lessons learned from the past and all this data that the digital assistant is giving me. And in some cases,

[35:57] the system may not be correct either. Right. So as a professional, you have to be aware of that. In some cases it is not correct, and you have to make that determination.

[36:05] As you said, at the end of the day, the professional has got to be responsible for what is being done.

[36:11] But these systems really do help make your work better at the end of the day,

[36:17] in my view.

[36:17] Pamela Isom: So would you say that the use of generative AI and recommendation engines and capabilities like that, where you have to go back and cross-check and be that steward,

[36:33] is something like this: say I'm an attorney, and I hire a paralegal to help me, or an assistant to be there to support me.

[36:45] I don't check everything, but I do provide oversight, and I know the level and the extent of oversight that I'm going to provide, and when I'm going to take their word verbatim

[36:55] and when I'm not.

[36:57] My question to you is, when it comes to the AI era,

[37:03] so I used to write code a lot.

[37:07] I am one of those from the Java era,

[37:11] the Visual Basic era and the OPS5 era.

[37:15] And in all of those cases, if there was an error,

[37:18] I would hunt through lines and lines of code trying to find out where the problem was and fix it. Today I would feel relieved if I didn't have to do that.

[37:29] I embrace AI for tools like this.

[37:32] So my question to you is, in all of those examples that I just had,

[37:36] is it helping us from a critical thinking perspective or is it hurting us?

[37:41] Felix Gonzalez: I think it is helping us,

[37:44] yes. There are always these questions. People always complain, oh, I don't have the time to spend on this, I don't have the time to spend on that.

[37:53] And these AI systems, at the end of the day, are making us more efficient. So now perhaps we're going to have the time to spend on higher-risk, higher-value items that in the past we didn't.

[38:05] And there's always this risk perspective that people have to take into consideration.

[38:09] Depending on the work that you're using AI for,

[38:12] that determines the time you have to spend reviewing what the AI is producing.

[38:17] There are going to be low-risk items where perhaps I'm okay if the system gives me something wrong,

[38:22] because there's not going to be a big consequence.

[38:24] There are going to be other items where the risk and the consequence are going to be large. So for those systems I need to have more controls and more mitigation strategies to be able to catch errors.

[38:35] As I mentioned, I started my career as a risk analyst, so I'm now bringing these risk concepts to AI, and I think they work exactly the same in industry.

[38:47] In the AI industry you always see comparisons with the nuclear industry, which coincidentally is another area where I spent a big chunk of my career. And I think that is an accurate comparison: you have to take risks into consideration when you're working with AI.

[39:02] And it's exactly the same whether you are using AI or a different process within your field.

[39:08] You have to take into consideration the consequences of the part of the process where you're using AI.

[39:15] So you're always going to have to be looking at risk, and you're always going to have to be looking at potential mitigative strategies, preventive strategies, and controls, to make sure that when you use that AI output,

[39:28] those risks have already been taken into account and you're going to make a good decision. So in your example about a paralegal,

[39:38] if it's going to be a publicly visible, high-stakes case, yes, you're going to have to spend more time making sure that the output of that tool is going to be correct.

[39:48] There's a whole bunch of different controls that you can put around an AI system's output.
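The risk-tiered review Felix describes, matching the amount of human oversight to the consequence of the task before acting on an AI output, can be sketched as a small decision table. This is a hypothetical illustration of the idea only; the tier names and controls below are invented for the example, not taken from any framework mentioned in the episode.

```python
# Illustrative sketch: choose a human-oversight control based on how
# consequential the task is, before trusting an AI output.
# The tiers and controls are invented examples.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    consequence: str  # "low", "medium", or "high"

def required_review(task: Task) -> str:
    """Map a task's consequence level to a human-oversight control."""
    controls = {
        "low": "spot-check a sample of outputs",
        "medium": "human reviews every output before release",
        "high": "human review plus a second independent check",
    }
    # Unknown tiers raise a KeyError on purpose: an unclassified task
    # should not silently get a default level of oversight.
    return controls[task.consequence]

# A public, high-stakes legal filing gets the strictest control.
filing = Task("draft court filing", consequence="high")
print(required_review(filing))
```

The point of the sketch is the mapping itself: the control is decided from the consequence level first, so low-risk items stay cheap to automate while high-stakes ones always get the extra checks.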

[39:53] Which is another thing that we can.

[39:55] Pamela Isom: Talk about a lot I want to address.

[39:58] I wanted to do just what we did because I wanted to put to bed the idea that AI is dampening our critical thinking skills. Because this discussion is a way to help us see how your critical thinking, your oversight,

[40:17] your human aspects are actually being sharpened, or there's an opportunity for them to be sharpened even further.

[40:27] It's laborious, though. It's a pain having to check things out.

[40:33] Sometimes it can be a bit of a nuisance,

[40:35] but it is if you can put a strategy to it,

[40:38] if you can put the risk mitigation to it,

[40:42] if you can do all of that, then you're sharpening and cultivating your critical thinking skills and keeping up with what's needed today.

[40:52] Critical thinking skills from yesterday need to be sharpened as well as everything else.

[40:58] Would you agree?

[40:59] Felix Gonzalez: Absolutely. And here's another example related to that.

[41:04] So you have to understand the process. I mentioned the analogy with a calculator. Right.

[41:11] In AI, and GenAI specifically, it's exactly the same thing. The output has to make logical sense. And I've had many cases where junior staff do something for me and the output is wrong.

[41:24] And I'm like,

[41:25] dude, did you check this?

[41:27] Does the output make logical sense?

[41:29] And when they look at it, it's like, oh yeah, it doesn't make sense, there's an error somewhere. I'm like, exactly. So, as you were mentioning, Pam, you have to sharpen your critical thinking skills.

[41:38] When you get an output from a tool, you have to make sure that the output makes sense. That's the first thing. And I tell my students that all the time: if your tool should give you an output between 0 and 100 and you're getting an output of a million,

[41:52] ask, why is that?

[41:54] Don't trust it,

[41:55] You have to have that critical thinking, and you have to question the tool the same way you would question me if I gave you something: well, does this make sense?

[42:04] And I tell my students that all the time. Does this make sense? When you see the output, does it make sense? Always question it. If it doesn't make sense, then let's go back and see whether we missed something as professionals, which can be the case as well sometimes.

[42:18] But in many cases my staff have given me something and I'm like, this is wrong, I don't even have to check, this makes no sense. And when we go back, it's like, yeah, there was an error.

[42:28] So you have to use that critical thinking.
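Felix's classroom rule, question any output that falls outside the range it should be in, is easy to encode as a first-line check. A minimal sketch using the 0-to-100 bounds from his example; the function name and bounds handling are invented for illustration:

```python
# Minimal range sanity check in the spirit of Felix's example:
# if the tool should return a value between 0 and 100 and you get
# a million, don't trust it; question the tool.

def sanity_check(value: float, low: float = 0.0, high: float = 100.0) -> bool:
    """Return True if the output falls within the expected range."""
    return low <= value <= high

assert sanity_check(42)             # plausible output: passes
assert not sanity_check(1_000_000)  # a million: flag it and investigate
```

A failed check doesn't settle who is wrong; as Felix says, it's the prompt to go back to the data and figure out whether the tool erred or you missed something yourself.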

[42:31] Pamela Isom: Okay, so that's cool, because I work with students as well as business leaders. The listeners are oftentimes business leaders

[42:43] who are embracing the technologies but are also concerned.

[42:47] There's always that healthy blend.

[42:49] And there have been lots of discussions I've been in about the critical thinking side, and whether AI has taken that away from us. And I don't agree.

[42:58] I think there's extremes with everything.

[43:01] But this will help us to see that,

[43:04] no, it hasn't, if you do the due diligence that you are supposed to do. And if you don't understand what that due diligence is,

[43:11] this will help with that.

[43:13] Now, I want to know about a myth. Is there a myth that you would like to dispel today?

[43:19] Felix Gonzalez: I don't know if it's a myth or not. It goes along the same lines we have been talking about for the past half hour, and we can use it as a kind of conclusion as well.

[43:32] So, about GenAI nowadays.

[43:36] There are a lot of statements out there about how big a transformation this is going to be, and a lot of people saying, oh, well, look at all these things that AI is doing, companies that are laying off people and so on.

[43:52] And I'm like, well, there's a lot of uncertainty right now, not just from the GenAI transformation.

[43:58] And when I talk to people about those statements being thrown out there, a lot of them come from the leaders who are selling these same technologies.

[44:10] I'm like, yeah, they have to sell it. They're good salesmen. That's why they're saying, yes, this is going to transform everything, you need to use it,

[44:18] go use my tool. Of course they're going to say that. I personally don't know that it's going to be that big. And if we dive into what we have been talking about for the last hour, the prior transformations,

[44:30] yes, there are going to be positions that are abolished, that perhaps we may not need anymore.

[44:36] So the question is, how do we transform ourselves if we are in a position like that, so that we develop new skills?

[44:43] So, yeah, we are always going to have to be learning new skills.

[44:47] Is it going to be

[44:49] what some people are saying out there? I don't think so. I don't think GenAI is going to transform industries to the level that I sometimes hear.

[44:59] So I think that personally is a myth. Obviously, we're in the midst of these changes,

[45:03] so I don't know,

[45:04] I may be wrong,

[45:05] but if we dive into past transformations,

[45:09] as I said, I think there's going to be new opportunities, new positions that people are going to be looking at. I mean, 20 years ago, I didn't think I was going to be working in data science, even less teaching data science.

[45:19] I mean, data science as a term did not even exist when I was in school.

[45:24] Pamela Isom: Right.

[45:25] Felix Gonzalez: So, yeah, it has been a really long transformation over the last 20, 30 years. There's always some transformative technology out there.

[45:35] So I think some of the statements that I hear about how big this transformation is going to be, I think they're exaggerated a little bit.

[45:43] Pamela Isom: Okay, that's good insight.

[45:45] I have nothing to add because I agree. Those actually were words of wisdom, and it also goes back to critical thinking.

[45:53] But my last question is, can you share words of wisdom or a call to action?

[45:58] Felix Gonzalez: Yes. So words of wisdom.

[46:01] Never stay still when it comes to learning.

[46:05] Always keep learning. There are always going to be new technologies. I mentioned even my own career: I was trained as an engineer, and I'm working as a data scientist.

[46:14] Data science is a really multidisciplinary field. My students always come from non-computer-science backgrounds; I've had a few with computer science backgrounds, obviously. So always keep learning, never stop learning.

[46:28] I know it's sometimes tough, we want to enjoy life, but you always have to spend some time learning new technologies.

[46:35] In this case, GenAI, I think it's here to stay.

[46:38] But I think five years from now we'll be like, oh, GenAI, yeah, large language models, everybody's going to be able to use it. It's like Google nowadays, right? Everybody knows how to use Google and how to do research using Internet search engines.

[46:52] I think GenAI is going to be similar, where five years from now

[46:56] this is going to be just a drop in the bucket. Everybody will know how to use GenAI, we got exposed to it, and that's it. And there are going to be some other new things that come out.

[47:06] So keep learning.

[47:08] GenAI, I think everybody needs to learn. Play with it. See how you can use it in your personal life, how you can use it in your professional life.

[47:16] It's definitely a transformative technology.

[47:18] So it's going to help us in a lot of different ways, throughout society and in your professional life.

[47:27] Pamela Isom: All right, well, I certainly appreciate you talking to me today. This has been very fun and very enlightening, and I'm impressed with the work that you're doing, particularly the work with the students.

[47:42] I know about your work in government and different places, but I am particularly impressed with the work that you're doing with students, and I'm so happy to have you.

[47:53] So thank you for being here.