
AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E024- AI or Not - David Broniatowski and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
Join us in a captivating discussion with Dr. David Broniatowski from George Washington University as we explore the intersection of artificial intelligence, engineering, and human psychology. What happens when technology meets human perception, and how can AI be a force for good when steered correctly? Dr. Broniatowski shares his unique career journey that combines social sciences with technical expertise, offering insights into the complex societal impacts of AI, from its benefits to its potential pitfalls like bias and inequities.
Trust is the cornerstone of AI adoption, and in our episode, we unpack the importance of aligning AI outputs with user values and expectations. We delve into systems engineering principles, emphasizing the role of engaging diverse stakeholders and promoting participatory AI design. Discover TRAILS, the NIST-NSF Institute for Trustworthy AI in Law & Society. TRAILS is the first organization to integrate participation, technology, and governance during the design, development, deployment, and oversight of AI systems, conducting scientific research into what trust in AI looks like, how to create technical AI solutions that build trust, and which policy models are effective in sustaining trust.
We also tackle the challenge of misinformation and the critical role of active listening in the design process. By discussing innovative strategies like the accuracy nudge and accuracy shove, we highlight the importance of fostering responsible information sharing online while listening to diverse voices to create systems that truly empower individuals. This episode is a call to action for designers and tech creators to prioritize genuine participation and listening to bridge community gaps and build a trustworthy AI future.
[00:00] Pamela Isom: This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or professional advice, nor official statements by their organizations.
[00:41] Guest views may not be those of the host.
[00:46] Welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and your digital transformation journey.
[01:03] I am Pamela Isom and I am your podcast host.
[01:07] And we have an exciting guest with us today, a very special guest, Dr. David Broniatowski. He's a professor of Engineering Management and Systems Engineering at George Washington University.
[01:24] David, thank you for being a guest on this podcast and welcome to AI or Not.
[01:30] David Broniatowski: Thank you so much. It's a pleasure to be here.
[01:33] Pamela Isom: So, as we get started, will you tell me more about yourself and your career journey? Tell me how things are going over there at GWU. What's going on?
[01:45] David Broniatowski: Absolutely. So I'm really, really excited to be here. My own background and career journey is a little bit of a circuitous path. I started out as an undergraduate in aerospace engineering and then did a doctorate in a program at MIT called Engineering Systems, which is all about bridging the gap between engineering and the humanities and the social sciences, and being able to understand the context, the social context in which technology is used, and using that to improve our design.
[02:24] And so while I was doing that, I spent a fair amount of time doing work with natural language processing and AI, using that to really try to understand how groups of people made decisions.
[02:35] And in particular, I was looking at FDA advisory panels and how people voted on things like devices for heart surgery. And so it was a really interesting overlap between a lot of these different things.
[02:47] And then I worked for a couple of years in the defense sector at a small systems engineering firm, and then did a postdoc in the Department of Emergency Medicine at Johns Hopkins doing mathematical modeling.
[02:58] So I've kind of been bouncing around a lot before starting at GW, and I've been at GW in the Department of Engineering Management and Systems Engineering ever since. And it's a really fantastic place to be.
[03:09] GW as a university is, of course, located right in the heart of Washington, D.C., and so it's got a lot of the excitement of being right there, right next to the White House, not too far from Congress, not too far from various executive branch agencies.
[03:25] And so there's obviously a lot of interest in ways in which we can productively impact policy and have a real impact on people's lives, and using our research to help people.
[03:35] And so that's something I'm really excited about.
[03:38] Technology can be a force for good if it's appropriately managed, but that appropriate management is not to be taken for granted. And so that's something that I'm really excited to talk about, really excited to study and really, really glad to be here to be able to discuss.
[03:55] Pamela Isom: That's so good. I'm impressed with your background and I'm just happy that we got to meet one another. So we met on a panel. I was a panelist at the GWU event, and the panel was a great opportunity for me to get to know some folks and get to know who's who.
[04:17] So you're one of those people. I'm so glad that we stayed in touch, and I'm just delighted that you are willing to come and talk to me today.
[04:25] So your background is interesting and I love how you interweave technology, engineering and the social aspect of humanity. So psychology. Right. I love how you do that. And so I'd like to know more about the interest that you have and that you bring about with psychology and the psychology of AI, the concept of interpretability.
[04:59] And we talked earlier about consequential decision making.
[05:04] So can you talk some more about that?
[05:07] David Broniatowski: Well, absolutely. So thank you for bringing that up. I think we spend a lot of time talking about AI at two levels. There's the level of the technology development and that's a level where I think a lot of people who are sort of in the trenches developing algorithms spend a lot of their time and massive progress has been made at that level.
[05:29] And then we also spend a fair amount of time nowadays talking about the impact of AI on society. And we have very legitimate concerns about harms from AI, bias from AI, ways in which AI can entrench existing inequities or hurt specific populations, because it draws upon data that reflects those inequities and then, if not appropriately managed, can emphasize and enhance those inequities.
[05:58] And so this sort of two-level conversation is going on, where on one hand, in order to be a good algorithm developer, you need to abstract away all that societal context, because as computer scientists, one of the things we're trained to do is to look at a problem in the abstract, without regard to how it's used in practice.
[06:18] On the other hand, the way in which it's used in practice is the thing that really impacts real people.
[06:24] And so I've spent a fair amount of time in my career trying to figure out how do we connect those two levels. And one of the things that I believe quite strongly is that it has to go through this third level, this middle layer of human psychology.
[06:40] And that layer of human psychology is all about how do people, how do individual people with their own values, their own goals, their own concerns, their own loves and hatreds, how do they make sense of what the algorithm is doing?
[06:56] And then how does that ultimately bubble up to some of these bigger societal issues?
[07:03] Because at the end of the day, if somebody, for example, feels that an algorithm is biased against them, they make that judgment, or form that perception, based on their own experiences.
[07:18] And that experience might be what other people tell them, or it might be that they experience some bad outcome themselves.
[07:25] And in many cases, it may be that they don't actually realize that it's the AI that's causing this bad outcome. And so they have something bad that happens to them, they don't know why.
[07:36] Or on the other hand, they're actually getting a good outcome from the AI, but because they don't trust it, they don't use it because they see other people having outcomes that are not good.
[07:47] And so this perception of to what extent is this system operating for me, to what extent is it putting me at risk, to what extent is it engaged in harming myself or my friends or people that I care about, and to what extent is it helping?
[08:05] Those are all the top of mind concerns. And so one of the things that we talk about when we talk about interpretability from a psychological perspective, is making that link.
[08:17] You see what the algorithm gives you as an output, but what does it mean to you? And it's that question of what does it mean? That to me is core to interpretability.
[08:27] And so I think that one of the things that I try to distinguish is between interpretability, which is again, this question of what does the algorithm do? What does it mean for me if my automatic loan recommendation system denies my loan?
[08:44] What does that mean? Does that mean that I need to go and get a different job? Does that mean that I'm being discriminated against?
[08:53] Or does that mean that the algorithm made a mistake and the people who designed it didn't take into account important information? And do I have any recourse? Is there anything I can do about it?
[09:04] None of those questions can be addressed by a computer scientist because none of those questions are built into how the algorithm works.
[09:13] Those questions are how people make sense of what the algorithm is doing in the broader context of their life.
[09:20] And so we really need to figure out how do we make that link, how do we connect the design to the societal concerns, and how can we do that through that sense making process?
[09:33] I distinguish between that and what in computer science we call explainability. And explainability, in large part, is how does this thing work? And the two of them may have some relationship.
[09:44] If I understand how the algorithm works, I may be able to make sense of what that means in the context of my life. But we don't necessarily have to have those two things connected.
[09:54] I don't really know how my car works nowadays, but I do know that it helps me get to work on time. And if my car breaks down, I could lose my job.
[10:02] And so the interpretability component is, if this thing doesn't work for me, if this thing is not reliable, I could lose my job. That's bad, right? Whereas the explainability component is, well, the carburetor is connected to the engine and it produces this much force and this much torque, and that means that I can go this distance before having to refill with gas.
[10:25] So the idea of interpretability is to really try to do targeted research on the psychology. How do people perceive what AI is doing and how do they link it to their needs, to their societal needs?
[10:38] And importantly, this is true for everyone.
[10:41] To my mind, this is a universal human thing. We as human beings are sense makers.
[10:47] If somebody sees the output of an algorithm based on their personal history, their values, one person may look at the output of an algorithm and say, you know, this isn't fair, this is not right.
[10:59] I am being discriminated against in some way because of, because of who the algorithm perceives me to be. Another person may look at that same output and say, no, no, this is totally fair, this is fine.
[11:11] This is, you know, this particular case just didn't make whatever metric, and you just need to work on improving that metric. And so, given that there are differences in interpretability, different people may interpret things in different ways.
[11:27] We need to figure out from a scientific standpoint what are those ways, because it's not that any interpretation is possible.
[11:36] There are interpretations that align with people's values and people's prior knowledge. And so if we can understand how somebody's prior knowledge and their values shape their interpretation of the output, then we can design algorithms that help them to understand whether their values are actually being executed on, whether the output is aligned with those values or whether the output is not.
[12:02] And to my mind, that's one of the biggest predictors of trust.
[12:07] If I see the output of an algorithm and I say like this thing is doing things that I think are bad, it's putting me in a situation that's harming people, harming myself, I'm not going to trust it, I'm not going to use it.
[12:20] And if I see that what it's doing is what I wanted to do, what I intended to do, what makes sense to me, and it's doing so reliably and it's making my life easier, great, I'll use it.
[12:31] Pamela Isom: But how do we get there? Like we always talk about how we want to have diversity of perspectives, diversity of opinions, multi stakeholder inputs, multi stakeholder feedback, but you don't have time to get all that.
[12:45] I mean, how do we get there in a reasonable way so that the different perspectives are taken into account? Because, you know, I'm a big person that focuses on ethics and equitable outcomes and you can't have equity if you are not taking into consideration the various perspectives which you pointed out earlier.
[13:06] Right. So the challenge is how do we get there? And is that some of the work that you're doing at the university?
[13:13] David Broniatowski: Yeah, absolutely. And so I think one of the things that we are used to in the computer science world is this mentality of move fast and break things. And we do want to move things fast, we want to scale things big.
[13:27] And when you do that, there's always a trade off between speed and accuracy. There's a trade off between moving really fast and getting an answer versus understanding what the consequences of that may be longer term.
[13:40] And so in engineering, you know, I'm in a systems engineering department and in systems engineering we have a process that we call requirements analysis. Requirements analysis always starts with requirements elicitation.
[13:53] You identify the people who are your stakeholders or your customers or the people who will be affected in some way.
[14:02] And you talk to them, and more importantly, you listen to them and you understand what it is that they need out of the system. And it's well understood within the systems engineering community that if you don't get this requirements elicitation process right, you will spend a lot more money and commit a lot more effort downstream.
[14:23] Because you're going to deliver something, you're going to put a lot of effort into designing something and then ultimately deliver something which is not going to be used and you're going to have to go back to the drawing board.
[14:32] So for large-scale government projects, this usually translates into things like schedule overruns and cost overruns and losses of major contracts, and things like that. But we can learn a lot from that perspective, and we can apply some of that to the ways in which we develop AI.
[14:47] Now in the process of doing so, we may become a little bit less experimental. And so we have to really figure out what is the balance that we're comfortable with as a society.
[15:00] And this is one of the reasons why I think I'm going to use this as an opportunity to put in a plug for TRAILS, which is the NSF-NIST Institute for Trustworthy AI in Law & Society. I'm the GW lead for that.
[15:13] And I also lead the sensemaking research thrust in that.
[15:17] One of the key ideas underlying TRAILS is that AI design and development requires participation.
[15:25] And what we mean by that is if you're going to get AI that works for the people who are going to be using it, you really have to be able to get their input into the process.
[15:38] We can't have a situation where one person makes a decision about what it is that the public needs, puts a system out there, and it starts impacting everybody's life.
[15:52] And then the people who are involved and whose lives are being impacted never had any say in what's happening to them. Because even if it is making their life better, because in many cases it is, even if it is making their life better, there was no consent, there was no ability to say, well, this is something that I agree with and that I approve of.
[16:12] And so maybe some people are fine with that, but then there are other people who maybe would object to the use of such systems, even, again, even if they're making their life better by some objective metric, because that may not be aligned with their values, that may not be aligned with who they are.
[16:31] Not everybody wants to live the lifestyle that that one designer put out there. And so, you know, out of the simple idea of having respect for people's own choices, we want to be able to deploy systems in a manner that provides people with the ability to opt in or opt out.
[16:49] Pamela Isom: I love the concept of TRAILS. I'm looking forward to some collaborations, as we've been talking about, in that space. But what I really like is the focus on diverse perspectives and inclusiveness.
[17:07] And I like that because of the fact that we don't know what the different needs are. You don't know unless you get out there and you start to do things to understand what those needs are.
[17:18] Which goes back to your concept of requirements. And here's what happens when we don't do the types of things that we're talking about, when you don't have collaborations like TRAILS.
[17:28] What happens is you find that people don't trust the algorithms. So again, we get to that algorithm aversion, or what I call AI aversion, and they have no say in the matter because AI is going to be there irrespectively.
[17:45] So the AI aversion is really just wasted energy, because it's happening anyway. Right. But what happens is they just don't trust it, which you pointed out, you mentioned earlier; there starts to be this form of distrust.
[18:02] And that could be avoided if we start to take on actions like what you described and if we really start to appreciate the human psychology. So the human psychology, if I go back to that, you mentioned explainability and you mentioned interpretability.
[18:19] And we've always said that the two are different, but we need both. And so your examples help to clarify why we need both. And really there's a tight coupling between that interpretability and ethics.
[18:34] Right. So you can see, if we can get that right, we can get at the whole set of ethical concerns around AI and autonomous systems. We can get a deeper understanding of why that's so important and really start to address it.
[18:50] And I do agree with you that our computer scientists are there to address a problem and not necessarily look at some of these other cursory things. Except for today.
[19:04] Today we have to take into account the human psychology and the social aspects. We have to figure out a way to integrate that into the process.
[19:14] David Broniatowski: And we need a team. We obviously need the computer scientists.
[19:18] And at the same time, we don't need only the computer scientists. And I actually think that this is a dynamic that we see across a lot of different kinds of technology.
[19:29] And so in this way, AI is not really unique. And when you compare it to other technological developments. So, for example, we relatively recently came through a major pandemic. And during the pandemic we got vaccines in amazing record time, vaccines that helped to blunt that pandemic.
[19:52] And at the same time, there were people who were concerned, and they didn't have a good interpretation, or good interpretability, of what that would do for them.
[20:04] And there are people out there who have the perspective: if I were to take this, I'm being told what to do. I don't agree.
[20:13] And that created a lot of what we call in psychology reactance, a lot of backlash, a lot of anger, and a lot of refusal to do something that would ultimately have helped people. We don't want to see that happen in AI.
[20:26] We don't want to see that happen. And of course, this is not just in the area of vaccines. We also see this, you know, with the development of nuclear power, for example.
[20:33] Nuclear power is a technology that other countries around the world certainly rely on much more heavily than we do. But there's a lack of trust in nuclear power here, due to some very prominent near failures and due to questions about who pays the cost.
[20:51] And at the end of the day, do we have trust in the systems that will mitigate the waste?
[20:57] And as a result, because of this lack of trust, we are unable to draw upon what might be a major source of benefit to society because there's a lot of opposition.
[21:09] We don't want AI to fall into that same trap where people oppose the adoption of AI when it could help them because they just don't have trust in the people developing it.
[21:23] And again, that lack of trust may or may not be warranted. If it is warranted, we need to fix that. And the only way we can know whether or not it's warranted is that we need to help people to express those concerns.
[21:36] We need to take them seriously, we need to investigate those, and then if there are problems, we need to fix them. To do all of this, first, we need to listen.
[21:43] We need to hear what those concerns are, and we need to take them seriously.
[21:47] Pamela Isom: I agree with you. So let's talk about this concept that you have in this paper that you published on combating misinformation. Your paper is on shoves and nudges in combating misinformation, and it speaks to evidence on a new approach.
[22:04] Now, part of that approach involves encouraging people to speak up, to share what they're thinking.
[22:16] Tell us more about this whole concept and what you discovered.
[22:21] David Broniatowski: Absolutely. So when it comes to combating misinformation, especially misinformation online, there is a relatively small number of tools in the arsenal that we use. And one of the major tools is fact checking.
[22:35] But fact checking some people object to, because when you fact check, you're essentially telling people what to think. You're telling people, look, this is wrong, here's the right answer. And that creates or can create some kind of a reactance that is similar to what we've been talking about in other cases.
[22:54] So in order to address this, some colleagues have come up with the concept of an accuracy nudge. And this is the work of David Rand and Gordon Pennycook. Rand is at MIT.
[23:05] Pennycook is at Cornell. And the idea underlying the accuracy nudge is that rather than telling people this is the right answer, this is right, this is wrong, you just tell people: think about whether this is accurate.
[23:14] And the accuracy nudge has been found over many studies to reduce people's willingness to share misinformation a little bit. And it's that last point, "a little bit," that I think was a lot of the motivator for this particular paper, because we wanted to ask whether there was some way that we could have a bigger effect.
[23:37] Could we help people resist misinformation again, without necessarily telling them this is the right answer, this is the wrong answer. But is there some way we could encourage people to be, if you will, more responsible about what they shared online?
[23:51] And so we, my colleagues Ethan Porter, Tom Wood, Pedram Hosseini, and I, came up with this concept of an accuracy shove, where rather than telling people, think about whether or not this is accurate, we tell them: misinformation is really a big problem, and you have the power to help.
[24:09] It's very similar. Imagine like Smokey the Bear, only you can prevent forest fires. We need you. Right? It's appealing to people's social responsibility.
[24:20] And this is based on literature on the bystander effect. If you see somebody is being harmed, if you tell someone you have the power to do something about it, you can do something.
[24:29] People don't just stand back and let bad things happen. They intervene. They make things better. And so we asked whether we could use a similar kind of an approach to help combat misinformation.
[24:39] And what we found is that, indeed, yes, when you tell people you have the power to do something about it, they are less likely to share misinformation. But here's the catch.
[24:47] They're also less likely to share true things.
[24:50] They're less likely to share overall, which was unexpected.
[24:54] But it's something that we can explain through a psychological theory called fuzzy trace theory. There are other explanations, too, but I want to focus on this one because I happen to think that it's most consistent with prior data.
[25:07] And the basic idea is that when people are trying to share something online, they largely see it in terms of upside. They see it as: either I could do nothing and get nothing, or I could share it and get all these likes and comments, and I get positive reinforcement from my friend group.
[25:24] And so I'm going to share because, like, hey, I want attention. I like that. I want that positive reinforcement. What our intervention does is it draws people's attention to the fact that sharing also has a downside.
[25:36] And so it reframes the question of whether or not to share from "do nothing and get nothing, versus do something and get positive social reinforcement" to "do nothing and lose nothing, versus share it and maybe get negative social reinforcement."
[25:57] Because what we're again, doing is we're appealing to people's values, we're appealing to their responsibility. And this idea that, like, sharing false things is not something most people want to do.
[26:07] Most people don't want to feel like they're sharing lies or that they're engaged in spreading misinformation. That's a bad feeling. To share something and be like, oh man, I shouldn't have done that, like, I'm responsible for spreading lies.
[26:20] Nobody wants to feel that way. And so what we're doing in our shove intervention is reframing the question of whether or not to share to one about, do I hold my fire, keep my powder dry, let me think about this versus do I share?
[26:38] And then maybe get that negative backlash. And so again, the reason why we expect a negative effect even for true content is because we're not telling people whether it's true or false.
[26:50] And so we're just pointing out the risks. We're saying, look, this might be false, this might be true, but given that it's false, you might be spreading misinformation. And so we're really drawing people's attention to those downside risks.
[27:06] Pamela Isom: And you're drawing attention to conducting due diligence, or at least putting some diligence into verifying what the impacts will be, right, of information that you're thinking about sharing.
[27:23] Is that right?
[27:24] David Broniatowski: Absolutely. And I think one of the things that you point to is that we found that after giving people the accuracy shove, the accuracy nudge didn't have an effect. And so that points exactly to what you're saying is that when you tell people, think about the downside, think about the risk, think about the ways that this could possibly harm someone.
[27:46] One of the outcomes of that is that if you tell people, think about whether or not this is accurate, that's included within that broader set of considerations. So if you tell people, just think about whether or not this is accurate, but you don't necessarily point to the fact, well, if it's not accurate, it might be harmful.
[28:06] Well, some people may be like, well, it's not accurate, but it's harmless. Who cares? I'm still going to get my likes. And so they're going to look at that upside again.
[28:14] Whereas if you tell them, well, it might not be accurate, and by the way, it'll hurt people, it'll hurt people you care about, your friends, then what we see is that asking them to also think about whether it's accurate doesn't give us any additional explanatory power.
[28:26] It plugs into this idea of people making decisions that are consistent with their values and simply reminding people, hey, we know that you don't want to share falsehoods online.
[28:38] Think about that. We're not telling people that they're bad people or good people, or that what they're sharing is false or true.
[28:48] We're just telling them: think about who you want to be. Do you want to be that person who might be sharing falsehoods, or do you want to be that person who, you know, takes their time and really thinks this through?
[28:57] Pamela Isom: Yeah, I like this a lot. I am very careful when I publish anything or post anything. I sometimes feel like I overthink it. Like, should I put this out there? Well, okay, well, let's make sure I give the people credit.
[29:10] Right? So, yeah, I go through this whole spiel before I will repost something or publish or post something myself. And it's because of that very purpose. Because I'm always thinking about how that will not necessarily be received, but I'm more focused on is it accurate, is it true, is it real information, and is it information that matters?
[29:40] I don't like and don't want to spread misinformation. If I see something that's blatant misinformation, I don't share it. If I'm making others aware of something that's going on, I try to tell myself, if it's misinformation or bad information, then they need to find out a different kind of way.
[30:00] Right? Because otherwise you're just adding to the flame, right? Which is what you're bringing out about that people empowerment. And it makes me think about that even more.
[30:12] So I'm empowered to do that. I should not feel bad about the fact that I do that. I am empowered to do so. And if it causes me to not share information as much as some of my colleagues or something like that, don't worry about that.
[30:28] Focus on the values that you're speaking to. And that's what this paper is all about. Is that correct?
[30:36] David Broniatowski: That's right. And I think one of the key things here is that those of us who have been using social media for almost a couple of decades now have had that experience of going back and looking at old posts and saying, wow, that did not age well.
[30:49] I really should not have done that. And this is part of the process: we as a society are learning how to use these new technologies. When they first came out, it was all moving so fast, and we thought, well, it's at the top of my newsfeed and it's gone five seconds later.
[31:08] But here we are some 10, sometimes even 20 years later and we can go back into our newsfeeds and we can be like, wow, yeah, that's still there.
[31:17] And I may not be that person anymore, or I might be, I don't know, but wow, that did not age well. So I don't want to put myself in a situation where at some point I look back and say, I really wish I hadn't done that.
[31:34] Pamela Isom: That's good. I appreciate that. This is so thought provoking today. What I really like is that it's giving us examples of how to apply proper, more advanced governance to our approaches to AI and emerging tech.
[31:49] So I like this conversation a lot. So I have a final question for you as we wrap up today. You have some choices. You can share words of wisdom or you can share experiences that you've had for the listeners or a call to action.
[32:06] David Broniatowski: Thank you. Yeah, so I'm almost going to say a call to inaction. What I mean by that is we need to focus on listening. If there's one thing the body of work I've been talking about today points to, it's that we as a society get a lot more when we're able to listen to one another's concerns, address those concerns, and really understand and interpret how those translate to our designs.
[32:39] So we need to learn how to listen. And that may mean in some cases slowing down, taking the time to really converse and integrate what it is that other people are saying.
[32:53] We talk about participatory design and participation in the AI process. If somebody participates but no one listens to them, then that's not participation at all.
[33:04] If somebody feels that their voice is being heard and that they were respected in the process of design, then they feel empowered to change things if an outcome works out in a way that they didn't anticipate or didn't approve of.
[33:20] And so being able to have that dialogue, being able to really focus on how we can make sure that people's voices are heard, regardless of what communities they come from, across all communities, will help us design systems that are helpful to people in those communities and maybe even bridge those communities.
[33:54] Otherwise, we are stuck with a system that is based only on imperfect information: what the designer was able to hear and translate into that design.
[34:08] Because the design is always what the designer intended.
[34:12] And if what the designer intended is not what the user needs, because the designer was not able to listen to the user's needs, or because nobody ever asked the user, maybe you'll get lucky and you'll get a design that helps.
[34:26] That certainly does happen. But we can't run a country, we can't run a society based on maybe you'll get lucky. We have to be intentional about this.
[34:34] Pamela Isom: Yeah. Oh, I love the concept of balancing the designer's intention and the user's needs.
[34:41] David Broniatowski: Absolutely.
[34:41] Pamela Isom: That was just wonderful. So, hey, this was great. I really appreciate you taking the time to talk to me today, partake in this discussion, and share some insights. And we hear the call to action, which is to listen more and pay attention to how we listen.
[34:59] And I think that's something where there's always an opportunity to do better, so I really think it's good that you brought that out. Thank you so much for being here and taking the time to talk to me today.