AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E021 – AI or Not – Victor George and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
How do perceptions of AI expertise shape professional opportunities and impact workplace culture? As we welcome Victor George, Principal Consultant, Ethixera Consulting, a distinguished strategist in regulatory risk and ethics, we unearth the pivotal role of authenticity and ethical governance in digital transformation. Victor shares his career journey, including an unusual encounter with a consultancy firm that assumed AI contributed to his interview responses. This intriguing experience sets the stage for a broader conversation on the misconceptions and ethical considerations of AI in professional settings.
We navigate the intricate implications of AI in recruitment and education, shedding light on scenarios where candidates and students face unjust accusations of AI usage. Victor and I dissect the misuse of AI as a scapegoat for human biases, emphasizing the urgent need for ethical guidelines and a nuanced understanding of AI's role in our lives. From recruitment to education, we explore how AI feedback systems often fail to capture the cultural subtleties of modern environments, calling for more refined and culturally aware AI governance frameworks.
Turning our focus to AI's role in anti-money laundering and fraud detection, we uncover the challenges of relying solely on technology in these areas. Victor highlights the irreplaceable insight of human auditors, who provide the nuanced judgment that AI currently lacks. Our discussion underscores the importance of ethics and accountability in communication, bringing to light real-life examples of the chaos that ensues when these principles are neglected. Join us for this enlightening episode as we explore the critical intersection of ethics, governance, and AI in today's rapidly evolving digital landscape.
This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or other professional advice, nor official statements by their organizations. Guest views may not be those of the host.
Pamela Isom:So hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey. My name is Pamela Isom and I'm the host of this podcast, and we have one of those special guests with us today: Victor George. Victor is a leading strategist in regulatory risk and ethics and a principal consultant. Victor, your experience is absolutely incredible. I thought it would be a good idea to spend some time talking with you today so you can share your experiences with us. So welcome to AI or Not.
Victor George:Oh yeah, thank you, Pam, so much for inviting me on. Seeing you, connecting with you, and our shared backgrounds and stories has just been a privilege in itself. So I'll share about myself. I am, as I said before, where ethics, governance, and audit meet. Ethics is about understanding who people are and what their cultures are. I love our cultural uniqueness, from my African-American background to understanding who people are and how we think and consider. Then there's the governance part: how an organization thinks, what its brain is, what its purpose is, what drives it.
Victor George:And then the audit piece has been: does it really make sense from a regulatory side? Are you following through? Are you doing what you say you do? That has been a big part of my career for the last 10 to 15 years. I've worked across US and international teams, partnered with Big Four firms, and I continue to drive that ethics, governance, and audit process, from cybersecurity to remediation efforts. Now we're in AI, so that's exciting.
Pamela Isom:So, if we talk about your career journey, where are you in your journey today and where do you see yourself headed?
Victor George:Well, at this moment in my journey, I have my consulting firm, Ethixera, where I consult with companies, whether startups, banks, fintechs, or software firms, on their ethics and their governance. Our programs can cover cybersecurity, anything you name it: keeping them from being sanctioned by the government, sued, or hit with negative media, and really helping organizations think ahead of time, before these actions happen. Ethics, to me, is not about this morality thing. It's about thinking, considering, and mapping it internally with your people so they can understand and move fluidly in your organization, because it does help, and it serves both the company and the individuals within it.
Victor George:Most of the regulatory issues I find come from a lack of psychological safety inside the organization. Employees don't have it: they don't want to provide documents, they don't want to speak up, they feel their jobs are at risk if they do. That makes it harder for the organization and harder for the people there, and that's how a lot of problems come up; most of it ends in fines and missed deadlines. So I want to continue to harness and build that up for organizations, as well as, as we also talked about before, build equity in our communities and in the workplace.
Pamela Isom:That's interesting. We had a conversation about an experience of yours that touches on ethics, equity, diversity, inclusion, accessibility, and artificial intelligence. Do you mind sharing that with me again? Because I have a few questions about that experience. So would you mind going over it with me again?
Victor George:Yes, I had one of those what-the-F moments. I interviewed with a GRC (governance, risk, and compliance) consulting firm, a small boutique, and actually the recruiter reached out to me, so I was like, okay, I'll check this out. I made it to the second and final round with the actual owners of the firm, and it went well; I thought it did. Even the recruiter was like, hey, we're going to have this offer letter going. I was still unsure about accepting it, because in my mind I hadn't decided whether I would or not. But then the recruiter sends me a message like, hey, the company is going to move on, because they felt that you used AI to respond to their questions during the interview. And I was perplexed, and still am, because of that association and because of the questions that were asked during the interview. My read on it is this. As you know, Pam, you and I met at a Black AI Think Tank event online, and a few weeks earlier I had been one of the panel speakers there. I'm assuming that could also have been a factor when they looked at my profile: oh, he knows all about AI. Because this role was, to me, a piece of cake. I showed up as my most authentic self. I didn't do a whole bunch of preparing, reading this and going over that. I just showed up.
Victor George:And there were parts in the interview, and I'll give you an example, where my son had a bug bite and had to go to the doctor's office because it was getting infected. While he was in school, I thought I saw a notification about that, so I was a little on edge and glanced down at my phone, and one of them noticed. I let them know I was just looking at my phone really quickly: my son had a bug bite, and I just wanted to make sure everything was good to go. They were like, oh fine, yeah, yeah, and we went on with the interview. There was another time when the interview was stopped, ironically, because of the owners of this consulting firm, a husband and wife: a doctor's office called one of them about a hospital matter, and he said, hey, can we take a few moments? We need to go take this call. We probably stopped for a break for about five or ten minutes, then came back. They did apologize, and I extended grace, because I had just had my own 20- or 30-second interruption. Life happens, life is life, and we both laughed about it.
Victor George:The other part that came up in that interview was they asked me things I don't even know how AI could have created. They asked me, where do you see yourself in five years? And part of my answer was, very explicitly: I want to be in a good environment with good people, collectively. Rather than in a place of politics, I want a collectiveness to get the job done. And they were a little surprised that that's all I said.

Victor George:The other part that came up was that highly intelligent people can struggle in this role because, as I interjected, it requires understanding people, and at times people who are highly intelligent can get caught up on the academic side. That's what I like about audit: there's a scope. And yes, they were connecting on that. I gave very specific work instances from my experience, like building cybersecurity programs and helping organizations. I don't know how AI could come up with my experience helping build a cybersecurity program, or the challenges I had with managers who were stressed and how I managed that conflict. And if you look on my LinkedIn, I have a recommendation about exactly that, to support it.
Victor George:The recruiter also used this word, verbatim, about their reasoning: odd. He asked them, hey, what did he do in the interview? And this was a video interview; I was looking directly at them. I'm thinking, huh, what could it have been? My thinking is that I was very good for the job. I knew the work, I understood it. And their other question was, why would I take this job with my law degree?
Victor George:Because, and I hadn't mentioned this, I do have a law degree. I went to a law school that focused on compliance and audit; it had a compliance program. I like to be solution-based rather than litigation-based in that organizational space. That's why I did this, and I have the coursework to support it. That was their other question: why would I want to take this job when I went to law school? And I'm like, hey, y'all are paying me, why not? Plus, I don't want the stress that lawyers have in that law environment. So I just think about it: how can AI be a justification for that? And, as you and I were just talking about, I actually did go ask an AI for feedback afterward, on ChatGPT.
Pamela Isom:We're talking about ethics today and governance, and so that was an example of very poor governance.
Pamela Isom:It was an example of horrible governance, and I'm sorry that you had to go through that experience. It also seems to have been a misuse of your time, and how one could jump to the conclusion that you were using AI makes no sense to me, so there's no sense in trying to figure it out. I know you told me that after the experience you queried one of the AI tools to get some perspective on how you could have done things better, but before we get to that: I'm sorry that you went through that experience, number one. And second, I think this is why we want to bring forward the fact that discrimination and just plain bad judgment are a problem. It's just a problem in society. So help me understand, based on what you said: how did you make it to the second round of interviews?
Victor George:Yeah, it was the second and final round.
Pamela Isom:So you made it through the first round, then you got called back for the second round. And did you partake in the second round?
Victor George:Yes, yes, I did.
Victor George:And I'm sure it was those two who made that assumption, the two being the owners of the firm.
Pamela Isom:Yeah. So it was during the second round that it was concluded that you were using AI. Did they explain to you to what extent they thought you were using AI? Did they think you were a deepfake?
Victor George:The recruiter was unsure. He just said they felt that I was using AI to respond to their questions. And these were the people interviewing me in the second, final round. In his words, again, it was odd. He was like, I don't know what to do with this.
Pamela Isom:Okay. So I think that's definitely a problem, and you, with your legal background, know that it's a problem: blaming their actions, their negligence, their discriminatory behaviors, their unethical practices on AI. This is the opposite of what I normally talk about, in that I've had examples where the AI gave a response and, instead of people taking responsibility for the response, they blamed the AI, which is not a legal entity. That was just foolish, like, so foolish. In this case it's different, but it's still an example of what we are experiencing. And what I don't appreciate is that they didn't tell you any rules. I just think this whole thing is a mess and just wrong. It's just wrong.
Victor George:And I'm very concerned, because AI is supposed to improve our lives, so that I can have a good life with my family, so I can work well and not be overwhelmed. Why is this now a problem? Now AI is being used against us. AI is supposed to enhance our lives, and that's where I'm really concerned: what if someone else gets accused like this in an interview and they actually needed the job? That's my thing. I didn't need it. That's where I'm at.
Pamela Isom:But this is a common problem, a common problem in society today, in the school system as well. In the school system, kids are getting accused of using AI, sometimes falsely. There have been lots of claims out there where students and young adults, those in college, have been falsely accused of using AI. The schools run detection tools that are not that accurate, and then the tools come back saying you have been using AI, and it has caused anxiety in some of our youth. It has caused all kinds of issues.
Pamela Isom:I believe the school system, the undergrad schools and the colleges and universities, are trying to work through this, but it's a common problem, and the thing is it has been going on for a while now, so it needs to get fixed. But this isn't just the schools. This is an example of what humans do, and this is why we have to put governance and guardrails in place. I don't like it when people do this, because our own prejudices, our own biases, get in the way. So I don't care what they were thinking: they didn't have facts. They didn't have facts. This was just wrong, just outright wrong. And where is the governance in all of this? Where's the ethical governance? This goes back to my concept of ethical governance. How do you tell a person, we thought you used AI, so we're not going to extend the offer to you? I mean, they can do better than that.
Victor George:And I'll add this too: tell me how AI gives that person an unfair advantage.
Pamela Isom:That's what I mean.
Victor George:And when I say audit, I'll inject this too: I'm where community meets audit, because I understand cultural uniqueness and I love it. I connect with people, because most things in audit come down to psychological safety; that's how you get someone to provide their documents. And I gave specific examples. So when someone hears that, is that fake?
Pamela Isom:So the second part of this scenario is that you then went to one of the AI tools and put in a prompt that said, here's what happened in this interview; could you give me feedback? And my understanding is that the feedback you got was off, way off base. What kind of feedback did you get?
Victor George:I'm going to read some of it. It said, you know, key ways to improve. Ensure a distraction-free environment from the outset to set a strong initial impression. Balance structure and continuity: use structured responses, but allow for natural conversational elements to avoid seeming rehearsed. Articulate clear career goals: align your career aspirations clearly with the company's strategic direction. Emphasize practical skills: highlight your practical, on-the-ground experience. And it said, speak specifically. So I asked it for its thoughts, and its response was basically, maybe you can do this better, or that better. I actually pointed out to the AI tool, ChatGPT, that I was directly on video. I didn't even say, hey, there may be biases here; I had to give it more information first. Then it said, well, it could be bias based on this or that, but it just wasn't ready to acknowledge that. It was more about what I needed to do.
Pamela Isom:Sounds like the tool gave textbook responses: when you have an interview, sit up straight, make sure there are no distractions. Those are things we should do, but that's textbook, and it doesn't fit the culture we live in today, where everything is connected and your social life is a part of your work life. So it doesn't fit.
Pamela Isom:So that's a matter of the data still not being where it needs to be in the AI models, in the repositories the LLMs are drawing on. But I'm so glad you brought this up, because I think a lot of times people don't really understand why we say that ethics, equity, and these types of biases are so important to identify and address, and they don't understand how this carries forward into the models: the mindset, the attitude you experienced, can carry forward into the models. There are plenty of instances where things have carried forward. Now, I don't know exactly why they responded the way they did, and we don't want to jump to conclusions. But what I do know is you don't let a person go through the interview, make it to the second round, and then tell them, well, we didn't extend you the offer because we thought you used AI. That's poor governance, poor ethics, just poor everything. It just sounds so weak.
Pamela Isom:And so I would ask that we, as leaders in GRC and related fields, help organizations and people understand that we should set the guidelines up front, let folks know the rules, and understand that candidates are going to be human. My instinct says you wouldn't have taken the job anyway. Despite all of that, you wouldn't have taken it; you wouldn't have been happy, and if you had taken it, you would have been there for a minute and gone. That's my instinct about it. So, yeah, I appreciate you bringing it up, because the thing for us is to help people understand that how we treat others carries forward as data in the models.
Victor George:Right, and I agree. Because I'm human, I can start to second-guess myself: did I say this right? How can I not be associated with an AI tool? It creeps in subconsciously, and that's stress we don't need. Because now, with these AI tools, and we were going to talk later about investigations and how those could create bias, it'll be in my head: did I say this right? Did I do this right? How can I not be seen as AI? How can I be real, but not so real that I'm seen as a threat? So there's another aspect where AI creates more anxiety and stress, now in the workplace too. And that's where, on AI or Not, if that's what it leads to, I'm on the "not." I hear you, it's supposed to enhance our lives, but that's not AI's fault. That is people, and the configurations based on people's thoughts, concepts, consciousness, and ethics. As you say, and I've said this too, we use the term ethics too broadly. Ethics is a set of principles and values unique to one group or another, so how we measure against the ethics of a person or an organization should be more carefully considered. Are we including the ethics of a particular culture, group, or age? Are we including the principles of this group? Then I think we'll have more understanding, because the AI tool just basically said, you've got to work harder, you've got to do this and that.
Pamela Isom:Exactly. It was a horrible experience, a horrifying experience, actually, and we just have to think about it. My mom always used to tell me, make sure you don't stoop to the same level. So you be forthright with people. And if they had a question about it, they could have asked you the question, right? If they really had a question. But honestly, that was neither here nor there, just a weak excuse as far as I'm concerned, because they had no rules, they had no guidelines. And I don't think anything says you can't do research before you speak, even though you weren't doing that. I don't think there's anything that says you can't do research or refer to your notes in the middle of an interview. So that's just nonsense. And I want to make a note here; I'll tell you something.
Pamela Isom:So one of the things we've done in my organization is create an ethical governance framework that brings together cybersecurity, AI, and privacy principles. We put them all together into 14 pillars, and at the very front of it is values. Where are the values? Because what I feel is that ethical values often aren't there, and for many, being unethical seems to be taking priority. So we lay it all out, and when I get some time, I'm going to share it with you, because maybe we can look at how we convey this even more. It's a serious matter; it's not just something to talk about, it's something to pay attention to. But I blend cybersecurity tenets, AI tenets, and ethics all together and call it an ethical governance framework. I'll let you take a look at it, and you can give me some feedback.
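For listeners who think in code, here is a minimal sketch of how a values-first framework like the one Pamela describes might be represented for an assessment. Everything concrete in it is hypothetical: the episode does not enumerate the 14 pillars or their tenets, so the names below are placeholders for illustration only.

```python
# A minimal sketch of a values-first governance framework. Pillar and tenet
# names are hypothetical placeholders; the episode only states that the
# framework blends cybersecurity, AI, and privacy tenets across 14 pillars,
# with values at the very front.

from dataclasses import dataclass, field

@dataclass
class Pillar:
    name: str
    tenets: list[str] = field(default_factory=list)

@dataclass
class GovernanceFramework:
    values: list[str]       # checked first, before any pillar-level controls
    pillars: list[Pillar]   # 14 pillars in the framework Pamela describes

framework = GovernanceFramework(
    values=["transparency", "accountability", "fairness"],         # illustrative
    pillars=[
        Pillar("AI assurance", ["model testing", "bias review"]),   # hypothetical
        Pillar("Cybersecurity", ["access control", "incident response"]),
        Pillar("Privacy", ["data minimization", "consent"]),
        # ...the remaining pillars would follow the same shape
    ],
)

def assess(fw: GovernanceFramework) -> None:
    """Walk the framework, values first, mirroring 'values at the very front'."""
    print("Values:", ", ".join(fw.values))
    for pillar in fw.pillars:
        print(f"Pillar: {pillar.name} -> tenets: {', '.join(pillar.tenets)}")

assess(framework)
```

The only design point the sketch commits to is the ordering: values are evaluated before any pillar-level control, which is the "values at the front" idea Pamela emphasizes.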
Victor George:Oh, absolutely, I'd love to. Because, as you said, with a lot of cybersecurity issues, and we're talking about GRC here, I mean, CISOs didn't always see this. But GRC has always been the same; it's just that people didn't want to connect the technical aspect to that broader picture of what ethics and governance look like. Now they're doing it. And no matter how technical we get with AI, if we're not, like you're saying, getting to the principles and ironing out and measuring what that ethics looks like, it will be incredibly costly to people such as ourselves who are in other groups, or to people who may not even have the proximity to share their truths. I've had many people I've interviewed who spoke English as a second language, who had an urban dialect or a very Southern drawl, and I appreciate that; it's their uniqueness. Okay: can you do the job? Can you connect? Are you kind? Do you live those principles?

Pamela Isom:But you see, in that example you just gave, let's say there is a dialect. The AI models need to be able to conduct natural language processing across the various dialects and draw no conclusions or recommendations based on the dialect itself. I met a person about a year and a half ago, and one of the first things she said to me was, Pam, I think AI is racist. And I said, why do you say that? She uses the voiceover feature, and she asked the AI a question in English. She's from a different ethnic background and has an accent, but she asked in English, and it responded back in her native language. So she's upset, because she says, I want you to respond to me in English, and it responds back to her, in audio, in her native language, right?
Pamela Isom:So she said, I think it's racist. Is it racist? And I was like, no, it's not. It's the way it was trained, the way they trained the algorithms and the models. I spent some time explaining it to her, and I said, the thing for you to do is send feedback via the tool to let them know this is going on and that the different languages are not being considered. The different desires of people with different dialects and accents are not being considered, or the model is drawing the wrong conclusions. If I tell the model I want it to respond to me in English, I don't care if my accent sounds like some other culture; I want it back in English.
Pamela Isom:Oh yeah, that's a problem; that's a training issue. But the thing is that we have to be mindful. We talk about diversity of opinions, diversity of perspectives, multi-stakeholder feedback and input. That's what has to go into the models, including things we probably wouldn't think about, when you're testing the models and proving them out to ensure they're going to deliver without introducing harm. That's why I created the ethical governance framework.
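The failure Pamela describes can be framed as a simple precedence rule: an explicit, user-stated output language should always win over whatever language the system infers from the speaker's voice or accent. Below is a minimal sketch of that rule in Python; the detect_language helper and all names are hypothetical stand-ins, not any vendor's actual API.

```python
# A minimal sketch of the precedence rule implied by Pamela's explanation:
# an explicit, user-stated output language must override the language the
# system infers from the speaker's accent. detect_language() is a
# hypothetical stand-in for a language-identification model.

from dataclasses import dataclass
from typing import Optional

@dataclass
class VoiceRequest:
    transcript: str
    stated_output_language: Optional[str] = None  # e.g. "en" if the user asked for English
    inferred_speaker_language: str = "en"         # what a voice/accent model guessed

def detect_language(text: str) -> str:
    """Hypothetical language ID over the transcript text itself."""
    return "en"  # placeholder: assume the words spoken were English

def choose_response_language(req: VoiceRequest) -> str:
    # 1. An explicit user preference always wins.
    if req.stated_output_language:
        return req.stated_output_language
    # 2. Otherwise fall back to the language of the words actually spoken,
    #    NOT the accent-based guess about the speaker.
    return detect_language(req.transcript)

# The failure mode from the episode: English words, accented voice,
# explicit request for English -- the reply must still be English.
req = VoiceRequest(
    transcript="Please answer in English: what's the weather today?",
    stated_output_language="en",
    inferred_speaker_language="xx",  # accent-based guess, deliberately ignored
)
assert choose_response_language(req) == "en"
```

The design point is only the ordering: inference is a fallback, never an override of what the user explicitly asked for.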
Victor George:Absolutely. It's just that with some of the models and AI tools I've been testing, where companies reach out to me about their AI GRC tool that audits cybersecurity processes, I've seen misses in how they review the control design, and I'm like, okay, this doesn't create the psychological safety or the cultural awareness the tool should have. So even through a cybersecurity GRC lens, a lot of the control designs I've found are inadequately mapped and created, without the ability to customize to the organization's needs. Organizations want diversity in those tools so they can have adequate protection and the right governance in place to protect them from hacking and other threats.
Pamela Isom:Right. If I think about your experiences and your background, you have a specialty in AI risk management in the anti-money laundering and fraud arena. Can you tell me more about that?
Victor George:Yes. In my experience, I've worked on anti-corruption, third-party risk management, anti-bribery, and anti-money laundering within applications over the last five years, including crypto applications. Are these applications escalating the correct transactions? If there's a KYC issue, or a transaction made from a high-risk country, is the system or tool escalating it as a problem? When you're onboarding a particular client with AML risks involved, are those risks being correctly escalated, managed, and remediated? Are all those controls covering and managing the AML risks involved? Because we know transactions and remittances now move through many different means, from cash-out to wiring to crypto, and often the controls aren't adequately covering, managing, and safeguarding those risks from an AML standpoint.
Pamela Isom:So is your role in all of this to help organizations understand the vulnerabilities in the AI? Tell me more about how you're working with organizations in this capacity.
Victor George:So, without the AI part, I've been doing some data testing. I want to say that because I do not think AI is ready yet. I'm like, nope, because it's going to create more problems, and we're going to overly depend on that AI tool to create a beta, this synthetic auditor, that reviews the controls in place. I don't think the tools are quite there yet.
Victor George:What I'm doing now is working directly on KYC and anti-money-laundering efforts and investigations, and on the applications that review them: configuring them, opening up what information is being pulled in, whether from transactions or personal data, and how the rule sets are applied that trigger an escalation result, a red flag. So I'm working in that space, on the data rule sets, but AI is not quite ready yet. I work directly with organizations on how this fits their service offerings and how we can create the data sets that will correctly safeguard them on AML and KYC, because the government now requires you to have these safeguards in place.
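To make the "rule sets that trigger a red flag" concrete, here is a minimal, hypothetical sketch of a declarative escalation rule set of the kind Victor describes. The thresholds, country codes, and field names are invented for illustration; real AML rule sets are far richer, institution-specific, and jurisdiction-specific.

```python
# A hypothetical sketch of declarative AML escalation rules: rules over
# transaction data that produce red flags for a human analyst to review.
# All thresholds, codes, and field names below are invented placeholders.

from dataclasses import dataclass

HIGH_RISK_COUNTRIES = {"XX", "YY"}   # placeholder jurisdiction codes
LARGE_CASH_THRESHOLD = 10_000        # illustrative amount, not a legal figure

@dataclass
class Transaction:
    amount: float
    origin_country: str
    channel: str         # e.g. "wire", "cash_out", "crypto"
    customer_risk: str   # KYC rating: "low", "medium", or "high"

def red_flags(txn: Transaction) -> list[str]:
    """Return the names of every escalation rule this transaction trips."""
    flags = []
    if txn.origin_country in HIGH_RISK_COUNTRIES:
        flags.append("high_risk_jurisdiction")
    if txn.channel == "cash_out" and txn.amount >= LARGE_CASH_THRESHOLD:
        flags.append("large_cash_out")
    if txn.customer_risk == "high" and txn.channel == "crypto":
        flags.append("high_risk_customer_crypto")
    return flags

txn = Transaction(amount=12_500, origin_country="XX",
                  channel="cash_out", customer_risk="high")
print(red_flags(txn))  # ['high_risk_jurisdiction', 'large_cash_out'] -> escalate for review
```

The shape of the system is the point: deterministic, auditable rules whose output goes to a human analyst, which is exactly the hand-off where, as Victor notes later, analysts need to understand the why, not just the trigger.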
Pamela Isom:When it comes to anti-money laundering and fraud, are they using AI at all? Or is it more that you don't recommend it?
Victor George:I haven't seen it yet, and I don't recommend it, not now. I have worked with some service providers on it, and I do think it can help with certain reviews: if you have a very large data set you're reviewing, it can assist and give you guidance on what to look for. But I'm still not comfortable relying on it solely, the way we would on a proven technical tool or software. I'm not comfortable with it yet.
Pamela Isom:Okay. I was reading some documentation out there that says between two and five percent of global GDP, that's up to two trillion dollars, is laundered each year. That's a lot of money. And it says these cash flows cost financial institutions up to hundreds of millions of dollars annually in anti-money laundering technologies and operations.
Pamela Isom:So I know the technologies are emerging every day, and I've been involved in conversations with people who, during the COVID crisis, saw quite a bit of fraud going on. It wasn't money laundering, but it was fraudulent activity, and they used technologies like AI to help, simply because there was so much information to sort through. It was an agency, not a government agency, but one doing work for the government, and they were using AI to help look for commonalities and trends and sort through the massive amount of fraud that was going on. So I can see some use cases. But I hear you saying, be careful with that, because we don't want to falsely accuse anyone, and more so, you need it to be accurate. You need your responses to be accurate, and AI is not yet at the place where the responses will be as accurate as we would like for something as significant as risk management in anti-money laundering.
Victor George:I've worked in AML, and I'll say this: working with AML and KYC gives me a headache. I'm protecting someone's accounts and all that, and it's a headache because the controls that are in place, and how they're mapped, are incorrect. That is the biggest issue. And AML analysts and experts will tell you this: you very rarely find people coming from an audit background, where you review things in scope and there's a tight methodology in place. Most times, from an organizational standpoint too, the KYC and AML people they're using are people who don't really understand it.
Victor George:I work alongside these teams, and this is where my connection piece comes in. I'll ask someone who's an AML investigator or analyst, hey, what are we supposed to do here? And their answer is, I don't know, I'm just doing what they say: if this number comes up, I escalate it. So the why part, a lot of them just don't know. And that, to me, is where the risk is. We overanalyze and overconsider, when it should be a place of connecting, like I said, with the people: the frontline workers, mid-management, and the big picture. They're just not correctly creating that program for their clients. And very few people understand audit as a methodology, because it's a way of thinking; in my own experience, maybe 10 percent.
Pamela Isom:Yeah, I'm not sure I would want an AI agent as an auditor either, so I hear you. I mean, that's what I do for a living: I audit AI solutions. And I'm not worried that AI is going to replace me at all, because there's no way it can do the things one would require of a human auditor. So I understand where you're coming from. Are there any words of wisdom or experiences you'd like to share, above and beyond what you've already gone over today, that we can take with us?
Victor George:My grandma would say, seek first to understand, then to be understood. And another one for me is, I say, know ethics, know accountability, and it's not N-O, it is K-N-O-W. Know ethics, know accountability; no ethics, no accountability. I think most things will flow more fluidly that way, because most things are a misunderstanding.
Pamela Isom:I believe your example today was a pretty good example of what happens when ethics are not considered, because, yeah, that was just nuts. So I appreciate you being on the show, and I'm so glad that we've met. Thank you for being on the show.

Victor George:Oh, absolutely. Thank you so much for having me and letting me share with you and the listeners today. I appreciate you for doing this, for having me on.