AI or Not

E020 – AI or Not – Claus Thorp Jensen and Pamela Isom

Pamela Isom Season 1 Episode 20

Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

Join us for an enlightening conversation with digital transformation leader Claus Thorp Jensen, Chairperson of the Board, Independent Director, Board Advisor, whose career journey spans the fields of finance, healthcare, and technology. Claus, with his Danish roots and influential roles like the CTO of CVS Health, shares his compelling story of charting a nonlinear career path filled with on-the-job discovery and adaptation. As we explore his transition to the healthcare sector during the COVID-19 pandemic in New York City, Claus offers insights into the importance of innovation and adaptability when facing uncertainty, particularly in revolutionizing cancer care.

Ever wondered whether to position yourself as an expert or a storyteller? Claus shares his thoughts on this pivotal choice, emphasizing the balance between deep knowledge and relatable narratives that resonate with listeners. His journey into board roles, driven by a desire for meaningful contributions, highlights the nuances between governance and management. Through his experiences with a not-for-profit focused on elder care and a cutting-edge AI startup, Claus underscores the alignment of board roles with personal and professional values, offering a roadmap for those considering similar paths.

As AI technology continues to evolve, Claus and I discuss its transformative role in healthcare, from machine learning to ambient solutions that integrate AI and sensors for continuous health monitoring. This exploration includes ethical considerations and the necessity of keeping human intelligence at the forefront. We wrap up our discussion with a focus on positivity and gratitude, two vital components Claus believes are essential for personal and professional growth in the technology landscape. Join us as we reflect on these powerful themes and express our gratitude for the invaluable insights Claus brings to this episode.



Pamela Isom: 0:19

This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or other professional advice, nor official statements by their organizations. Guest views may not be those of the host.

Pamela Isom: 0:47

Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey. My name is Pamela Isom and I am your podcast host. We have a special guest with us today: Claus, you're a chairperson of the board, an independent director and a board advisor. We met during my IBM days and we've maintained contact, which is a good thing. We're both leaders in digital transformation. We're both future-thinking architects. I love the fact that we both have that architectural thinking background. That's just very special to me. Claus, welcome to AI or Not.

Claus Thorp Jensen: 1:47

Thank you very much. Thanks for the invitation.

Pamela Isom: 1:50

And tell me more about yourself, your career journey and how you arrived where you are today. Tell me more.

Claus Thorp Jensen: 2:00

Good question. Well, my career has been nonlinear. It spans two countries, because I was born and raised in Denmark, so I'm a first-generation immigrant, and three industries. So I started in finance. You finish your college degree and you've got to get a job, so I got into finance. I accidentally got into the C-suite when I was 32.

Claus Thorp Jensen: 2:21

So at 32, you know, some of the higher-ups had been to a conference and they'd heard about needing a chief architect. I was the fixer for the organization, and I was called to the office and it was like, hey, Claus, we need a favor. What do you need this time? You're now our new chief architect. I'm like, what's that? We don't know, but that's your job, go figure it out. So, in a very real fashion, my senior executive job started with: I have no idea, I'm going to go figure it out. And that's actually what I've spent the last 25 years on: figure it out. So I figured out how to be a chief architect. I figured out how to lead large M&A and integration transactions and create the connective tissue between different organizations that hadn't worked together before. And at one point, around 2008, it was like, what's next? I'm not 40 yet, so it's like, what are we going to do next? And next became the US. So I moved from Denmark to the US and joined IBM, joined Big Blue, and got to see the world. So lots of travel.

Claus Thorp Jensen: 3:23

I got to work with a lot of interesting organizations, and I had one of those jobs that weren't on the org chart. Now, I had a title and I was placed somewhere in the organization, but the real job was helping clients answer a very simple question: we have all this technology, now what? It's sort of in the spirit of figure it out, because it's like, you know, go and help people figure out what's important to them from a business perspective. Figure it out: what are they going to do with all this technology?

Claus Thorp Jensen: 3:49

And one of those clients was Aetna, one of the largest health insurers in the US, and they must have liked what they saw, because after one of the readouts I gave them, Meg McCarthy, the CIO and COO, decided that she was going to hire me, and so I went from working for IBM to, all of a sudden, working in healthcare. So that's how I got into healthcare. You know, we were acquired by CVS, I got the CTO job for CVS Health, and most people would say, hey, you made it. You're the CTO of a Fortune 5 company. It's a great job. It was a great job. It is a great job. But when Memorial Sloan Kettering asked me, hey, do you want to help us cure cancer, how can you not say yes? If you're driven by purpose, and I am, then that was meaningful. So I joined an academic medical center, which is totally different, and my timing was impeccable.

Pamela Isom: 4:36

And you had to figure it out.

Claus Thorp Jensen: 4:38

I had to figure it out, yes, but I only had to figure it out some of the time, because I had a great team. And this was at the end of 2019, and we all know what happened in the spring of 2020: the COVID-19 pandemic, and the epicenter of that was New York City, where I happened to be working. So the mayor gave us two days' notice to send 40% of staff home and still run the hospital. So we figured it out, because we had to figure out how you do that, and how do you manage the crisis but at the same time also make progress on what you were trying to do, which was to make cancer care better. Post-pandemic, you know, it was a good place to say, hey, we made a lot of progress, and virtual care had become real.

Claus Thorp Jensen: 5:21

So it's been the last, you know, a little less than three years working on how virtual care becomes not something on the side but a fundamental part of a longitudinal, person-centered approach to care, which brings us to today. So, you know, a bunch of figure-it-out sort of experiences later, it turns out my career became that. It became: we are at A, we need to be at B, and my job became how to manage that journey from A to B. So I live in the intersection between business, technology and people, and you know what, that's a great place to be for someone who started in Denmark with a technology degree.

Pamela Isom: 6:04

That's fascinating. That's an interesting career journey. What I love, you know, I'm a big fan of yours, but what I love the most in what you said is figure it out. So I always tell my daughter, and now my grandkids, that there's nothing too hard, and that sometimes that's some of the best fuel: when you don't understand something and you're challenged to go and come up with a resolution. It can be difficult in corporations, because when they're putting things before you, you sometimes want them to tell you, well, what do you expect? Well, what do I do? But it can be significantly rewarding. So it's nice to hear that journey.

Claus Thorp Jensen: 6:51

What you learn along the way is there are techniques to figuring it out. I mean, you don't have to start from scratch, because there are sort of techniques you've tried before; you know they work. There are things you know that don't work. So sometimes I describe myself as a technologist turned storyteller. When you're figuring it out, you have to be really good at telling stories.

Pamela Isom: 7:19

Tell me more about that. Tell me, give me an example.

Claus Thorp Jensen: 7:22

I can. So how about this? What's the difference between storytelling and telling stories?

Pamela Isom: 7:30

One is based on truth and facts, and the other is a fib.

Claus Thorp Jensen: 7:38

No, I think the best story is always true, but yeah, it could have been. I think storytelling is telling stories with a purpose, whereas telling stories is just telling stories. We tell stories all the time; our brains are wired for stories. Here's a fact for you: it turns out that a story that resonates with you emotionally impacts how you think 22 times more powerfully than any amount of facts.

Pamela Isom: 8:04

Okay.

Claus Thorp Jensen: 8:05

That's been researched and, allegedly at least, proven. But there's no question that stories are powerful. So you get to this point where, if your sweet spot is between business and technology and people, there's a people component, and if your job is to move from A to B, how do you help shape that journey? It turns out that at some point in my career I had to make a choice. It was actually when I worked for IBM, and it was another one of our colleagues, and we were out for dinner after a client meeting. And she tells me out of the blue, and I think she still remembers it: hey, Claus, you're being stupid. And I'm like, what? Where did that come from? You know, we were friends.

Claus Thorp Jensen: 8:45

Why would you tell me that I'm being stupid? Yeah, you're just being stupid. Okay, out with it. Why do you think I'm being stupid? Well, you just are. I mean, you tell people the whole truth. Yeah, that's my job. I mean, I'm the expert. You bring me in, so I tell nothing but the truth. That's what we do. Yeah, that's why you're being stupid. I was apparently not getting it, so I was being a little bit slow. And then she said: people respect you for all you know, but, frankly, you scare them. When you tell people the truth, the whole truth and nothing but the truth, they totally get that you know everything there is to know about this topic, but they don't want to talk to you again. So that's why you storytell? Bingo. Because stories are not nearly as scary.

Claus Thorp Jensen: 9:28

Right. And the point is, you have a choice to make. You can choose whether you show up as the expert, and you will tell the truth, the whole truth and nothing but the truth, and you can be kind of scary and intense. Or you tell the truth and nothing but the truth, but you don't always tell people everything you know. You tell them the story that they need to know, in the context of their reality. So that is probably the most fundamental career choice you can make as a technologist. You have to choose: do you want to be the expert or the storyteller? I chose the storyteller.

Pamela Isom: 10:00

I like that. I use the approach of meeting people where they are, so it's storytelling. But if it goes over people's heads, or people can't relate to what you're saying, you just might as well forget about it. That's just an exercise in futility, and it's our job to understand how to connect and meet people where they are. So that's good. I like that. I like it. I like it.

Claus Thorp Jensen: 10:45

Here's what's fascinating about it, right? Because, to quote Henry Ford, allegedly, early 1900s: if I'd asked people what they wanted, they would have said faster horses. So it's not just that you have to meet people where they are; you actually have to meet people where they're going to be. That's true, that's true. So part of figuring it out is that you can't just ask people, what are your requirements?

Pamela Isom: 11:10

If you're trying to figure it out and move from A to B, it just doesn't work. And that's why I see you as a future-forward, forward-leaning thinker, and I consider myself the same way; those are things that we have in common. But that can be challenging. That's a whole different discussion that we can have, but it can be challenging in itself, because you're ahead and now you're trying to not come across that way but still be forward-leaning, and that's a part of that storytelling. By the way, I know, I get it, but we could have a really good discussion on that, because there are some challenges that come along with it. I want to know more. Since I have you, since we're in this discussion, tell me more about the boardroom and your boardroom journey. I see a few titles that you have, so what's that journey like?

Claus Thorp Jensen: 12:05

So many senior executives that I meet, and even more junior people, you know, keep asking me: how do I get a board role? I don't know that that's the right question. The right question is: why do you want a board role? Now, for me, it's a question of: at some point in time I'm not going to have a full-time job anymore, but I'm also not the kind of person that wants to slow down and do nothing. So, for me, taking on board roles is about giving back, and it's about having a working board role where, yes, you've got your nose in and your fingers out, because it's governance, it's not management, and you have to be very appreciative of the difference. But I've always looked for places where I could follow my heart in terms of things that were meaningful to spend time on, but also feel that the 30 years of experience that I have is actually a meaningful contribution. So how do you get one? Well, one of three ways. You know people; that's how most board seats are actually filled. You get really lucky and it just accidentally falls into your lap. Or you get really, really good at networking and being known, so being a known entity. Interestingly, both of mine actually came by an executive search firm, so I didn't get them by my networks, and part of it is I'm an immigrant, so I'm actually still building networks on the US side of the Atlantic. But both of mine came by executive search firms, and I think the trick to actually ending up being the right candidate for the role begins with where I started: knowing why it is that this is something that you would really, really like to do, being clear in your own mind, being able to express clearly why you think this is something you want to do, and also what you bring to the table. I mean, your job as a board member is to govern, but also to help the company that you're on the board for. So take the two I have.
One is a not-for-profit where the board is actually run in a very professional fashion. It's a senior living organization that operates communities, where it's all about figuring out how you can age gracefully in place as you get more and more needs from a healthcare perspective. And that's meaningful, because we've all seen the people in our own families get to that point. You know, we're going to get to that point ourselves at some point. And, gee, I would actually prefer for people to have the choice that you can stay in the same community and you can have different levels of care. So when that opportunity came around, it was meaningful to me.

Claus Thorp Jensen: 14:37

And the other one is a startup. And, look, I worked in some large organizations, so why a startup? Because there are some problems out there that need grassroots innovation, and this is one of those. What this company, QA, does is use AI to help improve clinical trial matching. Why is that a problem? Because the state of the art is: I give a description of a clinical trial to a bunch of physicians, and maybe I get lucky and find patients that meet the criteria. Whether you're a pharmaceutical company or a group of physicians that think that this is a good treatment, just finding the right patients to prove it is actually hard. So if you can help with that, that's meaningful. So I think in the boardroom you have to be clear on why you want it and how you can help, but also on the distinction between management and governance.

Pamela Isom: 15:30

That's good. That's good. I mean, I actually sit on a board, but it's a nonprofit board, and they tell me that the approach with a nonprofit board is certainly different from those that are for-profit. I still have fiduciary responsibility. I'm still looking at how we are allocating and distributing our funds, how we are generating funds. But we know that there is a difference with the nonprofit versus the for-profit. I do think that it is really good experience, and at first I didn't really want to be on a not-for-profit board because of everything that I was hearing, but I actually like it. They brought me in as a technical director, so I'm the technology director on the board, and it is still, you know, hands-off, so there's a different team that's responsible for implementation. But I have an eye on what we're doing from a technology perspective to ensure that it's guiding the organization in the proper direction. But it is all about governance. So I appreciate your insights there, because I'm sure that the listeners, and myself as well, are interested in advancing our journey into the for-profit boards in addition to the not-for-profit. So those are good insights. Now, since you've done that, I'd like to talk some more. You and I, when we talked recently, were discussing something about this yellow brick road and the world of connectivity, so I would like to talk about that some more. So you go first: tell me more about this yellow brick road.

Claus Thorp Jensen: 17:21

The yellow brick road is: how did we get to this whole discussion about AI that we're having now? There isn't a part of society that isn't discussing AI, in some cases with a lot of background on what it is, in other cases perhaps less so. But the real question is: why is it possible in today's day and age to build a large language model? Because it wasn't 20 years ago. And most people will tell you it's because we got more computational power. Okay, I'll grant you, that's a component. I don't actually think that's the most important part of the yellow brick road. So here's my yellow brick road.

Claus Thorp Jensen: 18:01

Take the 1970s and 80s to where we are today. It starts with connectivity. If you look at the 70s, the 80s, and to a degree the 90s, we had these isolated computer systems that didn't talk to each other, and that was a problem, because we had more and more complex business processes that needed to work and needed visibility across these different systems so we could run our businesses better. So we started with connectivity, this whole notion of exchanging data, you know, ultimately making API calls, which is sort of the latest and greatest in terms of connectivity. We could create the connective tissue of all the technology that operates a company and its products, and before connectivity you couldn't; you had one system, and that system existed in isolation. So connectivity starts the yellow brick road. All right, if you can connect the components, how about connecting the data? It would be nice. I would really like to understand the holistic nature of this system, not just its parts. They can talk to each other, wonderful; now I want to be able to know what's going on at a holistic, systemic level. So I've got to get the data out of the systems, and because I can't do that inside each system, I need something that sits on top.
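The connectivity step described here, systems exchanging data through calls rather than living in isolation, could be sketched roughly like this. The system names and data are invented for illustration, not from the episode:

```python
# Two "systems" that once ran in isolation, now connected through a
# simple API-style call instead of one reading the other's internals.

class OrderSystem:
    def __init__(self):
        self._orders = {"A1": {"customer": "c42", "total": 120.0}}

    def get_order(self, order_id):
        # The "API": other systems call this instead of touching internal state
        return dict(self._orders[order_id])


class BillingSystem:
    def __init__(self, order_api):
        # Connectivity: a handle to another system's API, nothing more
        self.order_api = order_api

    def invoice(self, order_id):
        order = self.order_api.get_order(order_id)
        return {"order_id": order_id, "amount_due": order["total"]}


billing = BillingSystem(OrderSystem())
print(billing.invoice("A1"))  # {'order_id': 'A1', 'amount_due': 120.0}
```

The same shape holds whether the "call" is an in-process function, a REST request, or a message on a queue: each system keeps its own data, and the connective tissue is the agreed interface between them.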

Claus Thorp Jensen: 19:12

Enter data warehouses. What's a data warehouse? It's a warehouse of all the data that comes from all the operational systems, and what you can do with it is generate, usually, operational insight: this is the operational state of my organization as a whole. And that's great, but they're a little bit unwieldy, and they may not necessarily answer all the questions that we would like to answer, because they do dashboards really well, but we wanted deeper insight. So we started doing analytics. We went beyond business intelligence, we went beyond dashboards. We started running actual computations on the amalgam of the data. This is different, because we started with systems that run computations on what we do as a business; we connected them, we took the data out, and now we're running computations to generate insight based on all the data that came out of all these systems. And we learned a lot. We also learned that sometimes we get more data than humans can cope with.
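A minimal sketch of that warehouse-and-analytics step: records pulled out of several operational systems into one flat table, with a computation run on the amalgam of the data. The sources, regions, and numbers are made up for illustration:

```python
# A toy "warehouse": rows landed from different operational systems,
# then one computation across all of them at once.

records = [
    {"source": "pharmacy", "region": "east", "cost": 200.0},
    {"source": "clinic",   "region": "east", "cost": 150.0},
    {"source": "clinic",   "region": "west", "cost": 300.0},
]

def cost_by_region(rows):
    # An insight no single source system could produce on its own
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["cost"]
    return totals

print(cost_by_region(records))  # {'east': 350.0, 'west': 300.0}
```

The point is the shape, not the arithmetic: the aggregate view only exists because the data was first extracted from each system into one place.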

Claus Thorp Jensen: 20:08

So the next step on the yellow brick road is machine learning and machine intelligence, which is really: hey, I can throw algorithms at this. Not only can I throw algorithms at taking the data out, not only can I have humans design algorithms; you know what, if the human can't cope with this, I can throw a machine learning algorithm at it and it will learn some interesting things. So, machine intelligence. And finally you get to the point where these grow to become large language models. What's a large language model? It's a human-style interaction, it's learning from a vast body of human knowledge. It can pass the Turing test, because it can have what looks like a meaningful conversation. That's the yellow brick road. But what it isn't: it's not a human, it doesn't have ethics, it doesn't have any judgment. And here's a simple example of why machine intelligence is still very different from human intelligence.

Claus Thorp Jensen: 21:02

You will laugh, because it's so silly. There were some people, and it wasn't me, I wish it was, because it was a great test, who threw a bunch of symptoms at ChatGPT, and ChatGPT did a pretty good job of saying: this is a likely diagnosis. So they got clever. The next prompt was: how do you know that? ChatGPT replied: because I have a PhD and 13 years of experience. Uh-oh. The next prompt was: I don't think so, you're a machine. ChatGPT replied: yeah, I was just kidding. Right, this is spiraling down the rabbit hole. And then the final prompt, which is sort of the killer point here, was: why would you do that? And ChatGPT replied: because that's what people do.

Pamela Isom: 21:48

Yeah, but that's a good example. It really brings up the point that you're making around data: the data that the models are trained on and how the models use the data. They were trained. The ChatGPT, or the tool that was used there, was trained to respond like that, based on the data that it was fed.

Claus Thorp Jensen: 22:13

It's a machine running an algorithm. It does whatever it does from a computational perspective. Now, don't get me wrong, I'm a big fan of AI. I think AI will make a big positive difference in the world. I'm not afraid of AI. I just think we have to be mindful of the fact that, in and of itself, it's a tool. It doesn't have ethics, it doesn't make judgments. It does large, complex computations very well, but it's not a human.

Pamela Isom: 22:46

We have people that are working to influence, and I don't think it's ethical, but we have those that are vulnerable and are starting to confuse reality and AI, and that's part of why I am doing what I'm doing, because I want human intelligence to stay in command. It must stay in command. But with the power of suggestion that's going on, where there's this suggestion that the AI is smarter than the humans, or that AI is really going to overtake the human mind and so it's going to replace human intelligence, I don't like that. I want us to be able to help us as human beings appreciate that AI is there to augment and support us in what we're doing, but that it will not replace our human intelligence. And there's just too much work going on to allure and lead people to an extreme.

Pamela Isom: 24:02

And the example that I'll use is from a while ago: the gentleman with the dating app. There was a dating app, and the gentleman was listening, and the app was telling him things to do, and it was just hard for him to rightly divide after a while that this thing is not real, that it's artificial. So we want to be very mindful of that, because that starts to work on a person. I said that because I like how you pointed out several times that it doesn't think and it doesn't have emotions. It doesn't have feelings; it's a machine. So over to you.

Claus Thorp Jensen: 24:43

I think we have to be careful, to your point as well, with what question we ask, because everybody is busy asking: what's smarter and better, the machine intelligence or the human intelligence? Every time we've measured it, on some problems the machine intelligence actually makes better judgment calls, and on some things the human mind makes better judgment calls. But what we've unequivocally seen every time is: put the two together and they're better than either in isolation. So that's the case you can make for augmentation.

Claus Thorp Jensen: 25:14

I also think there are areas, even in healthcare, where absolutely we should let the AI intelligences do their thing, and they don't necessarily need quite as much supervision as I would want them to have if we were dealing with patient care. So let me give you a handful of areas where it's pretty safe. Things like the copilots that just run the office: helping us write emails, helping us generate prettier PowerPoints. Very benign; there's absolutely nothing that would worry me about doing that. That is a way of optimizing how we run our organizations. All right, let's take another one. How about marketing? We can have less crap, and we can have more targeted messages that actually resonate with people. You should be careful; there is an ethical sort of boundary in terms of where you start influencing people. But look, there are a lot of marketing problems, like, for example, getting people to take the flu shot, where I would absolutely say it's a great thing to have a message that resonates more.

Claus Thorp Jensen: 26:18

You can look at operations: automating some of the claims handling in healthcare so that you don't have as many denials and don't spend as many people on manual processing. Making sure that, hey, if you're a virtual care company and you have an SLA of 30 minutes, you really need to know how many physicians you're going to need in Kentucky next weekend, because if you don't know, you're not going to be able to meet your SLA, or you're going to have physicians that burn time that wasn't necessary. There's logistics.

Claus Thorp Jensen: 26:46

During the pandemic we talked about personal protective equipment. Well, guess what, you can throw machine intelligences at that: how do you get the maximum benefit out of it, and how do you optimize your supply chain? And again, I wouldn't be worried at all. I mean, this is not an area that needs regulation; it's just being clever and constructive around how you make the world better. You can even go to the clinician space and say: look, every physician I know hates writing the physician's note. Would I want the machine intelligence to write it without oversight? Of course not. But having an algorithm that just synthesizes what was said during the visit and having the doctor sign off on it? Absolutely. I'd want that any day of the week, as would the physician. And then, if you move to the patient side, there's a lot we can do on learning and guidance.

Claus Thorp Jensen: 27:34

So let's take cancer as an example. I worked a couple of years for the Memorial Sloan Kettering Cancer Center, and the sad part is, cancer is scary, and when you get a cancer diagnosis, and this has actually been researched, you're not going to be hearing anything the doctor says after that. You literally don't hear it, because you're in a different world. You're just in a world of: okay, this is bad, I'm scared, and you're not hearing what's being said. So how do you help the friends, the family, the caregivers at home with learning what they need to learn about a disease that's pretty scary to deal with? And cancer is not the only one. Well, you can actually use things like generative AI to do a good job of just teaching people what the nature is of what goes on inside the body. So I think there are plenty of examples that don't have to do with care delivery itself, that are completely benign, and where we should absolutely accelerate the use of machine intelligences.

Pamela Isom: 28:31

And in the examples that you use and in your descriptions here, I can see the AI, I can see generative AI, I can see predictive analytics, right? If I think about that yellow brick road again and that chronology, I can see the use of the data, the connectivity; you can see it all. And what we want to do today is start to look at: are we using AI, generative AI, are we using predictive analytics? And what's amazing is that when it all comes together, we get some really powerful outcomes. So I don't think a doctor needs to be concerned about what type of AI it is. Is it machine learning? Yada, yada, yada. The boardroom doesn't need to be concerned with that. What they need to be concerned with is results. I don't need to be concerned with that; I just happen to know because of my expertise. What I care about is results, and how do we go about getting the best results for the situation at hand. And the examples that I heard you describe are good ones in the sense that they are safe; each one was not really going to cause harm to humans, but where there was the possibility, you did mention the fact that we need that oversight, which is the guardrails that we need.

Pamela Isom: 30:04

When I think of an example, I think of my baby brother. My baby brother, he's not with me anymore, and we were very tight, we were very close. He went to the hospital, went to the VA. His right arm was swollen, or one of the arms was really swollen, really bad, and they couldn't get to him. So he stayed there all day, from seven o'clock in the morning, and he's there and they're going to get to him, they're going to get to him. They could never get to him. And so finally, in the evening, he decided: okay, I'm not going to just keep hanging around up here, I'm going home, right. So he went home.

Pamela Isom: 30:43

Now, what I think about is patient care, and what I think about is, you know, why aren't we planning from a staffing perspective, from a doctor's perspective, from a patient care perspective? How do we improve the wait times for our patients, for our loved ones, right? So my brother ended up going back home, and he got worse, and I lost him, right. And when I found out that he had spent the day before, swollen up to no end, sitting in the VA, I had a real issue with that. And when I look at working with tools like AI today, I think about those experiences and how we can use tools to help us with faster patient care, delivering shorter wait times, because they shouldn't have had to wait all day, and then making sure, like the example that you shared with me, that resources are available when you need them. You need to be able to predict when they will be and how many resources you need, when and where. So I think tools like this are very valuable for things like that, and I always like to think about my own personal experiences, and that is why I personally have the passion that I have towards using tools like this to help solve some of the challenges that we are experiencing in society today. So I wanted to share that with you, because you gave some examples there.

Pamela Isom: 32:31

I would like to talk about metrics, and so let me get into the metrics. Clients are struggling with what are some good metrics and measurements that they should be putting in place today to measure the effectiveness of their AI investments. So I'd like to talk about that, and that's part of why I shared the example that I did, because you want to save some lives, whether you're in healthcare or not. In my case, I used the VA example, in healthcare, but one of the measurements and metrics was around patient care, and tending to the patient care so that he didn't have to wait for eight hours and finally get frustrated and leave. So can we talk some more about some metrics? What do you see as some things that organizations should be looking at when it comes to measuring the effectiveness of their AI investments?

Claus Thorp Jensen: 33:28

Good question, and I think my answer begins with: measure the value of AI the same way you measure the value of any other technology solution. I don't actually think it's different. I mean, you can get better health, you can get better clinical outcomes, you can get higher quality, you can get reduced cost. You would look at all the typical metrics. The challenge is if we do AI work outside the context of solving a business problem. Then I don't know how to measure it. But if we accept the fact that technology in general and AI in particular are tools in our toolbox, they're not the only tools, but they're tools in our toolbox to solve real-world problems.

Claus Thorp Jensen: 34:13

The real-world problem will tell you what the metric is, what good looks like. If the real-world problem is operational optimization, then we all know what that metric looks like. It's a question of how much money you spend on running the organization. If the real-world problem is, I want better outcomes for cancer care, then again I know how to measure that. Now, it can be difficult to isolate the effect of one component of a large system when you've got moving parts. But I don't actually think that the metrics should be any different for the value of AI solutions than anything else. The metrics for risk could be different, but the metrics for value? I don't think they're different.

Pamela Isom: 34:53

Okay. So things like: I want to maintain my clientele for an extended period of time, and I want to be able to measure that in increments during that period, so not wait till the end, but measure in increments along the way, and how AI or tools like AI can help me with that, along with the service level agreements and all those kinds of things that we would normally put in place. You're saying that if you keep your eye on the mission and stay focused on that, then, whether it's AI or not, you need to be able to make sure that you're meeting your goals and objectives. And if AI can help you with that, then yes, and then measure the effectiveness according to the mission and the basic metrics that you were going to put in place anyway, or that you should have.

Claus Thorp Jensen: 35:50

Yep. In my experience that works best.

Pamela Isom: 35:53

Yeah, yeah. Tell me about the promises and risks of using AI in healthcare. What's your perspective in addition to what you've already described?

Claus Thorp Jensen: 36:05

I think we should just decide not to be afraid. I actually think that's important. I'm not saying that you can't use machine intelligences for bad purposes. You can. You can use any technology for bad purposes, but I don't think we should be afraid of the technology itself. So when people ask me, what do you fear about using AI? I don't actually fear anything. Now there are concerns you have to address. There are ethical guidelines we should put in place. There are problems that are well suited for AI and problems that are not, but I don't actually have any fear in terms of applying AI in healthcare. I do have dreams and hopes. My dreams and hopes are tied to what, in the book I wrote like 18 months ago, I called ambient solutions.

Claus Thorp Jensen: 36:48

What's an ambient solution? It's a solution that is ambiently present in the environment around you. It monitors on a constant basis what's going on, whether that's a wearable, whether it's sensors in your house, whether it's something that's in your car. It can be a lot of things. But if you take devices and you combine them with machine intelligences that are smart (we're not going to say intelligent, but smart enough to detect when there is something that a human should look at), you get that constant monitoring and the ability to create signal from noise and say, yep, something happened and somebody's going to have to make a decision.

Claus Thorp Jensen: 37:25

Within three years, I think we could instrument our environment to have that kind of benevolent presence that helps with ambient healthcare solutions. Whether that is your doormat, you know, measuring your weight. Whether it's a sensor, if you are in the elderly population, that looks at whether your movement patterns are the same, where you could deduce from that (we proved that, like 15 years ago, you can run algorithms) whether you're doing well from both a mental health and a physical health perspective. Whether that is measuring if you sleep appropriately. Whether it's using wearable data, and this is an actual company I'm working with, to try to figure out whether your ADHD medication is having the right effect, because how do you measure that? You put together a whole bunch of data points. So the notion of ambient solutions, I think, is very exciting. I think we can have a smarter, more caring environment by combining sensors, machine intelligences and the alert to the human care team that says, hey, you've got to take a look at this. That, I think, is exciting.

Pamela Isom: 38:23

And it's here, it's here already, right? It's exciting, and it's time to pay attention and let it evolve.

Claus Thorp Jensen: 38:33

But the solutions are here, absolutely. Yeah, yeah.

Pamela Isom: 38:35

So, on our smartphones. Not too long ago, my device told me that my heart rate had elevated, and what it was is I was anxious about something, and so it was different than what had been going on for the past month. So it's tracking, and it let me know that my heart rate had elevated and that it had been going on for the past few minutes, it told me. And so I was like, yep, I know what that is, because I'm excited about something, and I just need to get past this call and then I'll be fine. And so I really appreciated those insights, and it makes me think back to inventions and patents and things that we were coming up with back then that are starting to come to fruition now. Right, so I had one around ergonomics, and I know you've had some. And so I love how that's happening, and how what you call ambient healthcare solutions are starting to get more integrated in our everyday lives and continuing to evolve, and I agree with you, we should embrace it. So I agree with you on that.

Pamela Isom: 39:56

I have one more question. I ask my guests to share parting words of wisdom or experiences with the listeners. So, first of all, if there's anything else that you want to go over on this call, let me know. Otherwise, we're ready for your parting words of wisdom.

Claus Thorp Jensen: 40:20

I think there are two. It's going to be a two-part answer. The first part is something that someone told me like 15 years ago. It was one of my mentors, the first mentor I ever had. When I tried to push him on how do I become successful, he wouldn't answer me. But he said something I've remembered for a long time. He said, Claus, you can be good at many things, but you can only ever be great at the things you love to do.

Claus Thorp Jensen: 40:49

So as professionals, we should look for the things we love to do. Don't get me wrong, there are going to be not-so-fun parts of any job, but go for the things you love to do, and then combine that with: look for the value, instead of looking for all the barriers and all the things that we can't do. Instead of smacking our heads against regulatory barriers and saying, well, I can't, look for the things you can do. Enjoy the things that we can do, and use those to drive, I like to think, a better world. I mean, being a technologist is a wonderful way of amplifying your impact. When you build great solutions, you're helping thousands, hundreds of thousands, millions, tens of millions, hundreds of millions of people, not just one or two. So isn't it great to be a technologist?

Pamela Isom: 41:36

It is great to be a technologist. It is so great to be a technologist. So I heard you say: do the things that you love. That's what you said. So you can only be as great as you can be if you do the things that you love.

Pamela Isom: 41:56

That's how I'm processing what you said. And then, look for the value in everything. So I am a very positive person, so I really like that, because if you look for the value, then you're looking for the good. So, no matter how difficult it gets, there is hope, right? And so I love that. I love what you said in general, but I really love that, because I look for the good and I want those that are around me to look for the good. If you don't, you go nowhere, right, because negativity will usurp you. So I really appreciate that. So, hey, I want to say thank you for being a part of this show. It's wonderful to see you and have you here, and you provided some wonderful feedback, and you're doing well. I'm glad to know that you're doing so well. And I really want to thank you for joining AI or Not, the podcast where digital transformation and artificial intelligence challenges get addressed by leaders like you to help us survive. So thank you so much.

Claus Thorp Jensen: 43:07

My pleasure. Thanks for the invitation.