AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
AI or Not
E045 – AI or Not – Phil Hartman and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
The shiny demo rarely survives contact with real data. We sit down with Phil Hartman—director, data architect, and seasoned integrator—to unpack what actually makes AI work in production: infrastructure, integration, and testing that respects non-determinism without abandoning reliability. Phil shares how embeddings finally “clicked,” why sending the same prompt to multiple models and merging results can improve quality, and what guardrails and adversarial tests reveal when policies get complex. His insurance parsing example—great up to ten vehicles, then chaos—shows how hidden limits surface only when you move beyond the happy path.
We discuss user experience that respects people’s time, including clear escalation to a human and predictable flows for employee training, as well as brand consistency. Then we dig into a big shift: AI search that visits hundreds of sites and never shows your branding. To stay discoverable, content must be structured and anticipatory—think FAQs on the landing page, schema markup, and concise, high-signal answers that retrieval systems can trust. Phil also makes the case that low-code tools struggle with hierarchical, many-to-many enterprise data, and why leaders should expect custom code, deeper testing, and realistic budgets that reflect the complexity of integration.
If you want AI to be an innovation accelerator, tie it to real outcomes: shorter cycles, cleaner data, higher throughput. Involve end users, measure beyond ROI buzzwords, and design governance that spans model selection, prompt management, privacy, and audit. On jobs, Phil argues for augmentation over replacement—let AI handle the routine tasks so people can focus on judgment-intensive work. The pace is fast and messy, but progress belongs to teams who experiment, test the edges, and build safety into the fabric. Subscribe, share with a colleague who owns AI delivery, and leave a review with your toughest integration challenge—we might feature it next.
[00:00] Pamela Isom: This podcast is for informational purposes only.
[00:27] Personal views and opinions expressed by our podcast guests are their own and not legal advice.
[00:35] Neither health,
[00:36] tax, nor professional nor official statements by their organizations.
[00:42] Guest views may not be those of the host.
[00:51] Hello and welcome to AI or not the Podcast, where business leaders from around the globe share wisdom and insight that are needed now to address issues and guide success in your artificial intelligence and digital transformation journeys.
[01:08] I am Pamela Isom and I am the podcast host and we have a really special guest with us today, Phil Hartman.
[01:16] Phil is a Director and data architect.
[01:19] We worked together during my tenure at IBM. Hey Phil, thank you for joining me and welcome.
[01:25] Phil Hartman: Thank you for having me!
[01:27] Pamela Isom: Welcome to AI or not.
[01:28] Phil Hartman: Appreciate that! I've listened to several of the episodes.
[01:32] Pamela Isom: So to get started, tell me about yourself and your career journey. Tell us more.
[01:40] Phil Hartman: So I kind of embrace the term, you know, architect, a software architect, IT architect.
[01:46] But my career spans - I counted up 44 years. I started in electrical engineering.
[01:54] My first job out of college was at the Department of Defense.
[01:58] Then I when I left that I I joined IBM. As she mentioned, we were at IBM together.
[02:04] I kind of got into the object-oriented software development at that time and I started to kind of specialize in customer service.
[02:13] This was back in the client server days and built a lot of applications that were for call centers and you may have heard the phrase, you know,
[02:22] screen pop. When you would call a call center, the caller ID would be used to pop up a screen and look up things if you already knew information about the customer.
[02:32] I built several call center applications in the day.
[02:35] After that I got more into integration and we used to call it enterprise application integration, middleware, you know, things like that.
[02:44] Then I spent a lot of time working in the area around business-to-business (B2B) website development.
[02:50] A lot of work in the area of dealer extranets, you know, like manufacturers who had dealers that sold their products.
[02:57] After that I got more into ERP projects and SAP in particular and how to integrate SAP with all the legacy systems, all these new web things.
[03:10] And after that I kind of made a pivot into the healthcare arena.
[03:15] Spent some time in IBM's healthcare area and when I left IBM I joined a local consulting firm in the Nashville, Tennessee area.
[03:25] I’ve been doing a lot of work there related to software architecture,
[03:30] cloud services,
[03:33] data warehouses, ETL,
[03:35] ELT,
[03:36] data science and more recently gotten very much into the generative AI and trying to help my firm embrace and adapt and adopt the generative AI and kind of become,
[03:51] I don't know, an unofficial gen AI advocate.
[03:56] Pamela Isom: Okay, you're an advocate. That's cool.
[03:57] Phil Hartman: I made that title up. It's not official.
[04:02] Pamela Isom: I like what you said about,
[04:04] well, everything I like. I'm impressed with your background, of course, and I'm so glad to have you here.
[04:09] I asked you to join because of you’re a data architect and your strong background, because I know you have that. We used to call it business intelligence. So I know you have that data integration and that strong background, which is absolutely essential for strong AI outcomes.
[04:27] Right. So. So I appreciate having you here and I'm looking forward to what you will be sharing with us today.
[04:32] But I do want to start by saying that I've been working on my. Working on a book, and one of the things that I'm focused on in that book is infrastructure.
[04:44] I feel like that infrastructure is an underlying and undervalued component of the AI fabric, the infrastructure component. And by that I mean components that ensures that performance is good.
[05:00] Components like AI governance, I call that a component. Robust data centers, resilient supply chains, those kinds of things are to me a part of the infrastructure that is undervalued and underappreciated.
[05:14] But you mentioned that integration is underappreciated in a conversation that we had earlier.
[05:22] I was wondering if you could expound on that.
[05:24] Phil Hartman: So I think when you first start investigating gen AI, your whole world is basically like a web browser and a chatbot, right?
[05:35] But as you start trying to work AI into real business workflows and you start, I want it to answer questions based on data that I have over here in this database.
[05:47] Maybe it's a customer database, maybe it's a database of my products or the services that I offer.
[05:52] In the case of something like healthcare, maybe it's patients.
[05:56] I want it to be aware of that information.
[05:59] Well, then you start getting into a lot more security issues,
[06:03] particularly in a regulated industry.
[06:05] You know, where are you worried about whether there's HIPAA constraints or, you know, in healthcare, whether it's personally identifiable information, financial information, and you've gotta integrate all those things together.
[06:17] And then if you're in a larger corporation and you're trying to, you know, get the benefits of all this AI, but that corporation has grown through acquisition,
[06:27] right? And they may have many different companies that they bought over time, and they may not all be fully integrated into like a single database table, you may have to go to multiple heritage systems from the previously acquired companies and somehow merge that information together.
[06:46] All of these kind of things make building a real system for a real business a real workflow. With either external or internal users,
[06:58] a lot harder.
[06:59] And it's not just that out of the box thing that you see in the cool YouTube video.
[07:06] And that skill, that need for integration and knowing how to do it in your organization and all the approvals for security and all those kind of.
[07:18] All that stuff still has to happen.
[07:20] It doesn't magically get easier just because you're using Genai. In some ways it gets harder because you've got some new things introduced.
[07:30] It’s not deterministic,
[07:32] you ask the same question to the chatbot, you get a little different answer every time. And it's like, how do you test all that? You know,
[07:39] so all of these things come together and just make it where you still need to know how to do that. And if anything, it's the problem space gets a little bit bigger.
[07:49] Pamela Isom: Yeah. So complicated. So, but so behind the scenes,
[07:53] you're speaking from a integrator perspective and a provider perspective.
[08:00] From a consumer perspective, we just think it happens.
[08:04] So you're kind of from the. Yes, okay, got it. To me, that makes a whole lot of sense. And I think for budget and planning, we should take that into account.
[08:13] Right. As business leaders, you have to take that whole integration aspect into account because it's not. Yeah, I know.
[08:20] Phil Hartman: That whole budget and planning thing is another area that I've recently had to deal with a little bit. I've kind of done a little informal survey of people,
[08:31] particularly developers,
[08:33] that have been developing code the old way before code assist tools with AI.
[08:40] The consensus seems to be that they don't really know what the impact of these code assist tools is. They basically estimate it the old way and then discount it by some percentage.
[08:52] It's like they'll say, you know, if I think I had, if I had to build it from scratch by myself, it would be,
[09:00] you know, 5,000 hours and I'll.
[09:03] But I've got, you know, GitHub, copilot or something. Oh, so I'll take 30% off,
[09:09] you know, something like that.
[09:10] That's kind of the level of metrics that we have for estimating how to do this kind of development.
[09:19] Pamela Isom: There's definitely opportunity to mature in that space. Right. So estimating, when you and I work together, we always talked about how to improve our estimating processes and our estimations so that they are more reliable and more accurate.
[09:33] That is not accurate. Right. So that is not a recipe for accuracy there.
[09:37] But it's because there are so many unknowns that that's happening. So, so that's, that's Good. That you bring that. And how do you measure, how do you estimate for the whole integration component?
[09:49] Yeah, so that. That's a good point.
[09:51] Phil Hartman: Well, that's. I think in my case,
[09:55] I try to use my experience to basically make a big, long list, big long spreadsheet of all the pieces that I know.
[10:04] Where does the data come from?
[10:06] Where does it have to be merged together?
[10:10] Am I creating new records, am I updating existing records? The whole CRUD process. Right.
[10:16] And basically counting all those things. And if I don't have a good number,
[10:21] to me there's safety and having that longer, very detailed list.
[10:25] If I say that they're,
[10:27] I don't know, oh, this one's a hundred hours and this one's 40 hours and this one's,
[10:31] you know, 200 hours, whatever.
[10:34] Making sure that I didn't – that I have as long a list with as much detail as possible.
[10:40] Some of them I'm going to overestimate, some of them I'm going to underestimate. So I'm kind of looking for – there is safety in numbers, just having the detail and, you know, that's kind of my starting point is I think I can come up with a more detailed list that anticipates more things than other people.
[11:00] Pamela Isom: Yeah, yeah, yeah, that's cool. All right, well, I appreciate that.
[11:03] So let's talk about that some more. So we were talking about the rise of AI,
[11:10] how you. When you started working with it, how you started working with it, and how that's impacting you today. Can you talk some more about that?
[11:19] Phil Hartman: So I started investigating Gen AI,
[11:24] you know, about the time that a lot of people did probably,
[11:29] I don't know, in earnest, probably about year and a half, two years ago.
[11:34] And it was baffling to me how, you know, how on earth does it understand any meaning behind these strings of characters that I type in as a prompt?
[11:45] And I did a lot of research trying to understand that. And finally I found some resources that were trying to explain the embedding process to me and how there's these large language models that their purpose is to,
[12:03] I guess they intuit meaning somehow from the words and they assign a very long, you know, wide vector of numbers that each column in that vector basically means something and what I later discovered and when things started clicking for me, I was going through some of the resources and writing some code in Python and I found some examples where 2000 wide vectors were reduced down to like 7 and where it was a small enough matrix and things
[12:40] that you could kind of visualize and what suddenly jumped out at me is like, as long as the going in part,
[12:49] the interpretation coming out were the same model, I don't have to care what those columns are. I don't have to care,
[12:56] you know, the 2,000 different kinds of meanings in that vector.
[13:00] I don't have to know what they are as long as you use the same embedding model in both directions.
[13:07] And it basically just is doing a probabilistic assessment of, you know, which things are closest in meaning to each other.
[13:16] And that was when things started clicking for me and I started, ah. And so now as I was starting to experiment and build some prototypes,
[13:25] I started feeling like,
[13:28] I bet if I do this,
[13:29] I'll get this kind of answer, just it'll change it in this way. And I started realizing I was starting to get it right.
[13:37] I was starting to anticipate if I, if I made certain changes to the way I was doing things, that I would start getting a more desirable result.
[13:45] And I started writing mostly Python code and then I started experimenting with some of the no code, low code tools like N8N.
[13:54] Built a lot of prototypes with the N8N.
[13:58] And I just started following various channels that are putting out the free training content.
[14:07] I was really kind of amazed to find people Talking about how AI works on TikTok and places like that on social media. And you could get little chunks of information and, you know, 60 second digestible bites.
[14:21] And I started to kind of see some patterns, kind of started certain people that I, okay, I like the way they,
[14:28] they say it.
[14:29] I also started to say, wait a minute,
[14:32] I'm not so sure about this.
[14:34] And started to kind of have some skepticism about different things and I don't know, I just started getting to the point where I would start to get problem statements or business problems coming from potential clients and,
[14:49] and I started being able to come up with solutions how I can apply AI, you know, to help solve some of those problems and build some prototypes and things. And I don't know, it just kind of got easier.
[15:02] But it was probably about a year and a half before I kind of started feeling some confidence with it.
[15:09] Pamela Isom: And do you feel like today that you're more comfortable with it? And do you? Oh yeah, the learning curve is pretty consistent with what you're seeing.
[15:20] And for the rest of us.
[15:23] Phil Hartman: That's a hard question to answer. You know, I don't know how necessarily how long it was taking other people.
[15:29] I think I kind of have some aptitude for learning new technical things. Maybe not everybody has that amongst people like you and me, I think I'm probably more typical,
[15:41] but I think my entire 40-something career,
[15:45] 40-something-year career, I've kind of been an early adopter.
[15:49] And I think compared to most people, I'm an early adopter.
[15:54] Obviously the people that were doing AI 10, 20 years ago, no,
[15:59] they're the real early adopters. But in terms of when you don’t have to have a PhD anymore, not having been growning up in that environment, I think I'm an early adopter and it's almost like a hobby for me, I guess, to kind of figure out the new stuff and I get bored if I don't do that.
[16:16] Pamela Isom: Okay, yeah, yeah. I have some colleagues who like to kind of pick things apart and put it back together.
[16:22] And so I hear you kind of often they're probing around, trying to understand how it works. So that's good.
[16:29] So here's something I want to talk about. So as you know,
[16:33] I'm a strong proponent of creative testing and evaluation,
[16:37] particularly around AI and autonomous systems. I think we should move and be deliberate and intentional about moving beyond the happy path which we used to talk about in the past.
[16:50] I know you had mentioned that there is some confusion when it comes to testing. So my question to you is why do you think there is so much confusion when it comes to testing AI, particularly non-deterministic systems where the same inputs don't always yield the same results?
[17:10] And you kind of were going there a moment ago, so I just kind of want to go deeper.
[17:14] Phil Hartman: So that was one thing that I,
[17:18] in my prototyping I started noticing is I could run the workflow, I could make some, what to me would be minor inconsequential change. Maybe I wrote read a different row out of a database instead of the one that I did before.
[17:38] But it's the same kind of data and I might get an answer that's more different than I expected.
[17:45] And I actually then after that I kind of built a prototype that had.
[17:52] I asked some kind of question via a prompt and I would send it to three different AI systems. I might send it to Google Gemini,
[18:01] I might send it. I got to be kind of enthused about Perplexity as a, as a search tool.
[18:07] And so I learned how to do the Perplexity API. And then I downloaded an open source model from Hugging Face.
[18:17] And so I'd sent the same prompt basically to three different systems.
[18:22] And then I, and, and then I'm using N8N to do this. And then I could, in my workflow I could see the answer. I got from the three different systems and the same answer.
[18:33] I get very different answers in different levels of detail between the three different,
[18:40] you know, LLMs that I was using.
[18:43] And then I said, well, all right, I'm going to use another LLM or another step in the process to look at the three and merge them together and try to, all right, summarize the highlights and give preference to the ones that appeared in all three or appear in at least two of the three and de emphasize that the ones that only things that only show up in one.
[19:05] I started making use of the citations or the references, the footnotes that come back in the search results or in the answers and counting things like that and started realizing how it's kind of hard to do that evaluation.
[19:22] It makes it better. I think when you kind of like try it multiple ways and kind of see what the answers are, you do an evaluation of the results from different paths.
[19:31] But that's really what kind of like illuminated the, you know, found a bright spotlight on the issue of “How do you test these things?” is when I started seeing it multiple ways in parallel.
[19:43] And then I started thinking about, well, gosh, you know, we've got these armies of business analysts and quality assurance people,
[19:51] and you got all these executives who've grown up in the last 20 years of technology, and all of a sudden, you know, what are they going to do the first time you put a GenAI-enabled application in front of them?
[20:04] How do you write a test case for something that same input doesn't give you the same answer every time?
[20:12] In the past, that would be a bug.
[20:15] All of a sudden it's not necessarily a bug anymore. It's just, well, I just phrased it different.
[20:21] You know,
[20:22] the answer still means the same most of the time,
[20:26] but you realize it's not.
[20:28] Maybe that's true 85% - 90% of the time, but there's like maybe 10% of the time that it's really different.
[20:34] And you start getting in, there's all these parameters you can tweak on your API, you know, the,
[20:40] what is it? The temperature and things like that that vary how creative you allow the large language model to be in generating those responses?
[20:49] Well,
[20:50] I can envision, you know, some people are,
[20:53] you know, if this is a business system and you're trying to have a consistent brand and a certain level of service with your customers, whatever,
[21:00] you don't want it to be very creative. You want it to give the same answer every time.
[21:07] I don't know. I guess this was kind of made me aware of all this, and I will admit this is something I'm still exploring. I know one of the things that could, you know, think I want to do is include more test cases where I try to break things.
[21:25] Right. I'm going to, I need, if I'm processing some documents, I need to make some documents that have some of that prompt injection stuff embedded in them, where maybe you bury stuff in fonts that are the same color as the background and so they don't show up visually to the naked eye.
[21:43] Maybe you need to make the font like really tiny and kind of sneak it in there somewhere and try to see if the systems pick up on those things.
[21:53] And one of the things I've learned too is like when you start getting into the.
[22:00] I use Microsoft Azure a good bit and their AI Foundry “playground” environment.
[22:08] If you drill down into the menus on the left hand side enough, there's a section on guardrails and there's certain default guardrails that they build in for safety and security.
[22:19] But you have the opportunity to build some custom ones. And I haven't, I will admit I haven't spent a lot of time on that. But I think at some point I'm going to get a project where it's going to be important to put in some custom guardrails for a client,
[22:36] that there's going to be something that they're particularly sensitive to.
[22:41] So I just think, like I said, this is something I feel like I'm still figuring out. But I think it's also compared to,
[22:48] you know, the masses of people that are playing with AI right now. You know, maybe I'm an advanced user.
[22:54] There's probably people who hadn't even thought about this yet. You know,
[22:57] and there's probably people more advanced than me for sure. And I think this is an evolving area.
[23:04] Pamela Isom: It is an evolving area and it's an open space. You know, I've, like I said, I've pushed this hard.
[23:10] I try to insert unusual values. I'll insert like February 29th as a date, see what the system will do, see how the system uncovers outliers, how it handles outliers.
[23:21] But then it becomes what's an outlier? Right. So it's just a whole different animal that we have to deal with and work with. How does it handle extremely large numbers, how's it aggregating and consolidating information?
[23:38] And yes, we are getting inconsistent results. So then it becomes more around contextualizing the outcomes.
[23:46] That's a strategy for testing in itself.
[23:49] So, and then I, you know, I test governance and different things. So I hear you. I think that it is.
[23:55] There is much confusion.
[23:58] There is an opportunity, There is an opportunity to get better at testing the AI systems and the autonomous system. And we have to do it. We have to do it because we're going to be.
[24:12] We're depending on it. We're depending on autonomous driving scenarios and different things. So we have to stop and think about those things that are. Are that we probably wouldn't think would happen and throw it at it.
[24:24] Right, Throw it at it.
[24:26] Phil Hartman: So something, a very, very recent example for me, I was using some AI tools with insurance information and I had some test documents that I was running through AI and I was very impressed.
[24:44] It was doing really great.
[24:46] And you know, you've got like a policy and it's got, you know, the person that took out the policy, the agent that sold the policy interpreting all that fine,
[24:56] and has certain properties, you know, whether it's a home or a vehicle or whatever,
[25:01] that are insured, and it was getting all that fine. Well, some of those properties had a lien holder, you know, like if you got a mortgage or if you got a car loan or whatever.
[25:11] And it was interpreting all that just fine. And then I got a policy where there were four properties. It was like a landlord, they had like these four apartments right next to each other.
[25:19] And it did just fine. Well, then I got this commercial auto policy and it had like 12 or 13 vehicles on it. And I'm looking through the result. It's getting down the first 10 vehicles.
[25:32] Wonderful. When it got to the 11th one,
[25:34] it just all started going to ****.
[25:38] It was like there must have been some internal limit, something in the internal workings that in that one, the many relationship,
[25:49] it just couldn't keep track any further. And it started duplicating things that had already been on there that were correct.
[25:56] And so I think that that's something I would not have uncovered if I hadn't kept testing and progressively more complicated stuff.
[26:05] Pamela Isom: That's why we have to test and that's why we have to keep probing at it. Yeah, so that's good. So you're a good tester. You're definitely a good tester. Bringing out the things that you're uncovering here is a sign of a.
[26:17] Of a persistent tester. And that's how we have to be in the day and time, particularly in the day and time of AI.
[26:22] Let's talk about chatbots.
[26:25] So I had an experience recently where I kept saying, I want to talk to a human being and because the system would not allow me to get to a human being.
[26:35] It was either an AI agent that was voice, a voice agent,
[26:41] or it was a textualized chatbot.
[26:45] And when I kept saying I want to talk to a human being, it would just take me back and forth between those two. And you know, you can tell when you're talking to an AI agent and when you're not and when it's a real person.
[26:57] And so much so until it was an all day affair for me. And I decided whenever I'm looking for something to shopping or something, if I see that this vendor is a potential solution provider, I'd run from that vendor.
[27:09] I don't want to deal with that vendor.
[27:11] So that goes and puts me in the mind of user experience.
[27:15] So we were talking earlier about AI and how AI has impacted us, impacted our lives. I asked you how it has impacted you. You talk something about how you're building up your skills and how you've had this opportun opportunity to really sharpen some skills and dig deeper.
[27:32] But what about the user experience? I feel like is the user experience now not important?
[27:38] But I don't want to talk to a chatbot all the time.
[27:42] Phil Hartman: So I don't know. I have this opinion where if you're building something that's going to be AI enabled and it's for a large group of employees, say or is external facing to a large number of customers,
[27:58] I don't think you want it to be completely free form.
[28:03] You kind of want something to be a little predictable. You know, if you're going to have your employees working with it, you need some way to kind of train them.
[28:11] You know, it needs to be repeatable from one customer to the next or from one employee to the next.
[28:17] And so I think that that carefully designed screen-to-screen navigation, you know what we called user interface design,
[28:27] whether our user experience,
[28:29] I think those things are still important.
[28:32] I think we are starting to see more and more of these chatbot kind of things where it's almost like a parallel universe, like oh, you don't like what we're doing over here?
[28:42] Well, this interact with this chatbot and it just kind of like takes over and people can type in whatever they want. And if you don't have all those safety and privacy type guardrails,
[28:54] bad things can happen.
[28:55] So I think that is an issue.
[28:58] I think the other thing that's impacting user interfaces,
[29:02] it used to be search engine optimization was a big deal and now that you got tools like Perplexity that do your kind of Replace Google search. And now the Google's had to put their own AI summaries in there.
[29:15] Again, perfect example. When a new version of Google Gemini came out, I took as a test case.
[29:21] I've been thinking about getting a new car and well, you know, help me shop for a new car. And I gave it some criteria. You know, I want it to have room for at least four adults.
[29:30] I want it to have decent acceleration, let's say 0 to 60 in 6 seconds or less.
[29:35] You know, I want it, you know, I had four or five criteria. It went off and visited 315 websites on my behalf.
[29:45] And I didn't see any of their visual branding,
[29:48] I didn't see any of the ads that they might have wanted me to click on to earn money off of.
[29:53] And so now what you have is like all the effort that people put into making their websites appealing and trying to draw your attention in.
[30:05] People don't even see that stuff anymore because they're just doing AI enabled search and they're just skipping the whole visit to your website.
[30:14] And I think that's a huge, huge thing. If you're dependent on eyeballs and clicks on ads and stuff for your livelihood, you're really struggling right now.
[30:23] And so there's a lot of things that are impacting the user experience and the economics of the web in general.
[30:31] Pamela Isom: Yeah, so that's something to think about as well.
[30:34] So when we think about how AI is impacting our lives now and business leaders now, we have to really think about the investments that we are making because of the fact that we've got AI and AI powered search engines.
[30:52] That's what you're saying, that that makes a whole lot of sense.
[30:55] Phil Hartman: I think that being able to train employees and stuff is important too, some repeatability.
[30:59] Pamela Isom: As I said,
[31:00] that's a good point. And I think that customer service is an opportunity as well.
[31:09] Like I said about the chatbot or just talking to an AI agent, I may not want to, I may need to get to a human and I need to be able to get to it without like begging and pleading and it taking all day to get there.
[31:20] But what you said that really got my attention was we spent a lot of time, we do spend a lot of time trying to design our websites to make sure that our websites are appealing and that it says certain themes and da da, da.
[31:34] When really the AI agents are out there now perusing our sites for us. So we have to be in tune with that so we know how to build our sites.
[31:44] Phil Hartman: And what I've Seen recommended on various articles I've read.
[31:49] It looks like the guidance now is to build your user interface to be like an faq.
[31:54] All the AI search engine things are like trying to anticipate what your next question might be so that when they do their summary they're like anticipating, well if he had done this himself, he would have clicked this next, he would have asked this follow up question and they're trying to give it to you right up at the front.
[32:13] And so saying like in order to get the attention of these AI search bots,
[32:18] you basically have to put your FAQ on the landing page.
[32:21] Pamela Isom: You know what, that's a, that's a good point. Because we gotta know how to navigate and be efficient and get business results in the day and time of AI. That's a good point.
[32:32] Honestly I never thought about that part of the equation. So I appreciate this is good.
[32:38] So now considering that, what are some misconceptions about data that you wish more business leaders understood,
[32:48] Phil Hartman: I think it's real easy for a business leader to kind of get,
[32:53] I hate to use the phrase sucked into thinking things are a lot simpler than they really are.
[32:58] One of the things I've noticed as I've played around with these AI tools,
[33:04] particularly these low code/no code tools,
[33:07] they struggle with data that isn't like nice one to one correspondence like customer to order.
[33:16] If you've got customer and they've got 10 different orders but they went to three different ship-to’s and some of them were for services and some of them for products and some of them have special warranty considerations, blah, blah.
[33:28] If you got all this more of a hierarchy of things and lists of things that have their own hierarchy.
[33:35] Those kinds of complexities are not easy to do in these quickie, you know, things, things that you see in the YouTube videos where they build this magnificent workflow in 10 minutes and those kinds of things with.
[33:49] When you got real data that's not trivial, you know, that's like the stuff that corporations have when they got lots of products and lots of employees and lots of services and maybe they're got different terms and conditions based on what state you live in or what country you live in.
[34:07] All those kind of things make it so much more difficult. You just can't write it once and it's going to work. In all those situation there are a lot of things to test for.
[34:16] Just like we kind of got talking about.
[34:18] I have just found that a lot of those things you wind up having to write custom code even though they're supposed to be a low code or a no code tool because the quickie way of handling it can't go find the third element in a list and then inside that element go find something that's in a one to many relationship.
[34:37] Underneath these things turn out to be harder and don't expect them to be as easy as the YouTube video.
[34:45] Pamela Isom: Okay, no, that's a good point.
[34:47] So let's, let's talk about this for a minute. So we were talking about AI and making AI a true innovation accelerator.
[34:56] And I'm exploring the possibilities of whether that is possible and if so, what are some good practices so that we make sure that it truly is an innovation accelerator. Are there things that we can do better?
[35:15] Is there advice that I could give to boardrooms or senior executives that to help them focus on making AI a true innovation accelerator for themselves,
[35:27] not just IT project. You know what I mean?
[35:31] Phil Hartman: So I think you as a business leader you kind of need to focus on,
[35:38] What am I improving? And is there a manual process that you can help speed up, make faster more people? People can get more done in the same amount of time.
[35:50] Is there something where you can drive additional revenue? I don't want to get completely wrapped into some kind of hard and fast ROI type discussion,
[36:01] but there's gotta be a business case and some of it may be intangible,
[36:07] but you just don't do it for the sake of doing it because you think it's cool or because everybody else is doing it. It's like do it because you have a specific thing that is not as good as it can be.
[36:20] Make it better and involve the end users, involve your customers, whatever, and deciding if it is better and how much better is it and, and be willing to test some of the more complicated scenarios.
[36:35] You mentioned Happy Path earlier. Well, what about some of those outliers, you know, some of those crazy ones where, you know like the, the business with the 12 or 13 different automobiles that I mentioned earlier.
[36:46] If you don't explore some of those, you, you probably could be led to think things are, may be too easy, you might roll it out too quickly,
[36:54] you might not build the security and safety type guardrails that you should.
[37:00] Pamela Isom: Okay,
[37:02] so focus on is there a real business problem? Is AI the right fit?
[37:10] What about the concern that we have today and the talk around AI replacing people?
[37:18] What's your perspective on that in this context?
[37:23] Phil Hartman: So I think that the concern is real, particularly I think for entry level people.
[37:31] I think I saw just a couple of days ago some Survey that had said they thought that entry level jobs were down like 13% or something like that. So I think that is real.
[37:42] I will say, like I'm working with a client right now that says that they're mostly interested in helping people get more done,
[37:49] not getting rid of the people.
[37:51] And I think that's real too in the sense that I think AI can help you make the routine things better and faster, cheaper.
[38:03] And I think in a lot of real world companies there's enough of the more complex situations, the outliers, whatever,
[38:11] the 13 autos, whatever, that you need the human being to kind of devote their brain power to those and not so much to the routine one. And so that's where I'm hoping that the AI is going to go, that it's going to augment us and make us be able to focus on the things where we can add the most value.
[38:32] I think there will be breakage along the way.
[38:35] There's going to be somebody probably lose their job. There's probably more than we all want to admit that that's probably already happened.
[38:43] I think it's true that the advocates say, you know, that it's creating some jobs as well.
[38:51] The pace of change is really fast, it can be scary. But I can see a lot of positives in this as well. And there's a lot of things that are just mind numbing to me that I would love to kind of outsource to my large language model.
[39:06] I get bored easily,
[39:08] so the more routine it is, the better.
[39:11] Pamela Isom: So that's some criteria for thinking about how we want to delegate some of the responsibilities to the AI agent, which I think is good.
[39:20] I like what you said about think about the throughput,
[39:24] think about productivity enhancing.
[39:27] Don't pop into this with the mindset of you're going to replace people. That's what I heard you say.
[39:34] Phil Hartman: Yeah. I think we need to free people from the most routine, the most boring tasks,
[39:40] give them the harder situations to deal with,
[39:44] the ones that take the most judgment and knowledge and maybe past history,
[39:51] corporate knowledge.
[39:54] How do we need to handle this to keep this customer? This is an important customer. We want to keep them happy, those kind of things.
[40:02] Pamela Isom: So can you share words of wisdom or a call to action?
[40:06] Phil Hartman: So I guess maybe a call to action and some wisdom, maybe advice. But I would say “you can teach old dogs new tricks”,
[40:18] you can learn this stuff.
[40:20] I would say that one of the things that I have found valuable is like if you can link AI to something that you're already good at,
[40:30] then you know, it doesn't feel like you're getting paid to learn something new. You're bringing Phil Hartman in because of his experience with integrating applications and things like that.
[40:40] He's bringing in AI.
[40:43] Yes, I'm learning things still,
[40:45] but I am able to draw on all of that other kind of experience.
[40:49] And so I'm bringing you something hopefully that's you're not going to get from other people. Right.
[40:55] And then the. The other thing I would say is there's the pace of change is so fast.
[41:02] Yes, you're learning, but it's easy to feel, like,
[41:05] overwhelmed because the pace is. The change is so fast. You're like, there's no way I can keep up. I still haven't learned how to do this.
[41:12] Everybody's going through that.
[41:14] And I think you just kind of have to be at peace with that. There's just no way to everybody know everything.
[41:21] And if you're the kind of person that is motivated enough to learn this stuff and kind of study up on it and try things and experiment and see what works and what doesn't, you know, you're probably ahead of 95% of the world population.
[41:37] Right. I think the Nike ad says, just do it. Right. I think if you haven't started, start.
[41:43] And if you're going down the path and feeling overwhelmed,
[41:47] everybody is, but just do your best and keep at it.
[41:51] Pamela Isom: Well, that's great. I appreciate those words of wisdom and the call to action. I heard both in that. In that commentary there. So I'm honored to have you talk to me today.
[42:02] I'm so glad you were able to join.
[42:05] I appreciate all the comments that you made and all the insights that you share, particularly for business leaders, but for those that are not necessarily leaders as well, you've given us some good wisdom and some insights that we can run with.
[42:17] So thank you very much.