AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E013 - AI or Not: Kohei Kurihara and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
Ever wonder how a career in e-commerce can lead to becoming a leading advocate for data privacy? Join us as we chat with Kohei Kurihara, the co-founder and CEO of Privacy by Design Lab, who shares his unique entrepreneurial journey. From his beginnings at a major Japanese e-commerce company to launching a manga and anime crowdfunding platform, Kohei navigates us through his pivot to international marketing and his impactful work with blockchain organizations. Gain insights into his realization of data categorization's crucial role in innovation and how this inspired him to create a privacy-focused community, culminating in the Privacy by Design Conference.
In the latter part of the episode, we unravel the complex world of data transfer and privacy governance, spotlighting TikTok as a case study. Explore the multifaceted challenges of international data protection and the implications for national security. Kohei emphasizes the significance of data categorization, understanding data lineage, and ensuring governance to prevent misuse. We also discuss Japan's AI guidelines, which promote transparency, privacy protection, and human-centric design. Exciting AI innovations that prioritize privacy and data minimization are paving the way for more accurate and trustworthy user interactions, making this episode a must-listen for anyone navigating the digital landscape.
This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or other professional advice, nor official statements by their organizations. Guest views may not be those of the host.
Pamela Isom:Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey. My name is Pamela Isom and I'm your podcast host, and we have yet another special guest with us today: Kohei Kurihara. Kohei is co-founder and chief executive officer of Privacy by Design Lab. We met recently when I was a guest on his podcast, and we have remained in contact. Kohei, welcome to AI or Not.
Kohei Kurihara:Yeah, thank you for having me, Pamela. It's a privilege to join your channel, so thank you again.
Pamela Isom:Yeah, and so the first thing I want to know is: can you please tell me a little bit more about yourself? You have a very interesting journey. You're an entrepreneur with Privacy by Design Lab, and you have an interesting career at large. So tell me more about yourself, your career, how you got to where you are today, how you got going with entrepreneurship, and why Privacy by Design Lab?
Kohei Kurihara:Sure, thank you for the question. Going back to my history, I started my own company in 2014, when I was 24. My previous career was at the largest Japanese e-commerce company, and I started my first business with four colleagues from different countries. My first project was a platform for manga and anime creators to support their fundraising, a crowdfunding platform, because a lot of manga and anime creators have been challenged to raise their own funds to deliver content, so we wanted to help them through our platform. But the first project was not successful due to some business failures, so I had to reconsider our opportunities and how we could help other people who are trying to create new things. So I changed: my business was no longer focused only on creators but also on people who wanted to sell their products across borders, so I focused on the marketing channel to bring products and businesses to international opportunities. That's my brief history.
Kohei Kurihara:And I was lucky to be part of a new initiative from the United States in 2017, a blockchain organization. It works as a nonprofit and supports American governments and public-sector bodies in starting to use the new technology. As president, my role was to build a community and engage with Japanese regulators to encourage them to start using blockchain. I also tried to reach community leaders in the international market, such as regulators in Singapore and in Europe, and talked with them about how we could create a new environment for using blockchain in the future. But I realized through those conversations that blockchain is infrastructure, and infrastructure by itself matters less to society, because infrastructure on its own doesn't work so well; it needs information and data to be effective. I also realized at that time that there were no specific taxonomies to define the data itself, and that is what matters more. To create innovation in our societies, we have to categorize each kind of data: what kind of data set is important to create innovation? That is where I started to work on the privacy and data protection side.
Kohei Kurihara:I came back from UNESCO in 2019. After talking with some executives there, I returned to Japan with new ideas for creating an international collaboration community. I talked with some colleagues and gathered four members, and we said, let's get started with a grassroots approach. We created the company in 2020, during COVID, and we started community-based communication with regulators, business people, and several organizations, and we tried to engage with international practitioners as well. Then, finally, we started our main business, the Privacy by Design Conference. We organize it annually, and the first edition was in 2021. We invited regulators, business people, and other organizations to gather and have a dialogue about creating a better society in the privacy world. So that's a brief story of my career.
Pamela Isom:Well, that's pretty interesting. So there are a couple of things that I want to reiterate from what you just said. One, I appreciate the fact that you learned from your experiences and didn't stop, but brought that forward. If I think about entrepreneurship, I believe that's what it's all about. So I heard you say you started out focusing on content and content development, then you expanded to the marketing side of the house and focused more on marketing, and then you morphed into Privacy by Design Lab, which is where you are today.
Pamela Isom:I'm also interested to see that you focus on community. That international community, I think, is really good, and I appreciate that, because we honestly need more of that. The reason why I like talking to you, and the reason why I was honored to have you as a guest, is because it's really important for businesses to understand, not just within their local domains but from a global perspective, what the regulations are and how we make sure that our products are going to be effective as they are applied at a global scale. Conversations like this are very helpful, and it's really nice to know that you have a focus on communities and a community orientation at an international scale. I heard you say that a couple of times when you were going over your background. So thank you very much; I appreciate your business, and I appreciate understanding your growth and your trajectory.
Pamela Isom:One thing I have for you, really, that I want to discuss. It's not so much a question, but I'd like to discuss more around the AI Guidelines for Business. I think it's version 1.0; the Japanese government has compiled AI Guidelines for Business. Can we talk some more about that? I'd like to know more about how that's impacting you and what things I should be aware of, for instance, as a small business owner here in the United States. Let's talk some more about that.
Kohei Kurihara:Sure. That guideline was released this April. Before publishing it, the Japanese government took several related actions, like the Hiroshima AI Process, which became very famous at last year's G7 gathering, and it has been discussing a lot of things with different communities, such as the industrial community and the scientific community. A number of working groups were organized by the Japanese government, and finally the whole discussion was consolidated into one guideline and published. That's a short history of how it came about, and it has a very big impact on industry people, because last year we talked a lot about how we can use AI itself; there was not much discussion of other things, especially privacy and copyright. People cared about them, but the business side drove almost all of the action. From this year, the atmosphere has changed to focus on privacy, copyright, and fundamental rights. I think that important dialogue has now started, and this is very important.
Kohei Kurihara:The government released this guideline and included some very important concepts. The Japanese government is trying to give the business side more freedom to use AI to create new innovations. From this year they have tried to change a bit and regulate some parts of AI usage, but they are keeping it more open and voluntary for enterprises to comply with the principles on using personal information for AI. So this is a little bit different from the perspective in, say, Europe or the United States. The government is trying to balance regulation and business at this moment. The guideline works as a kind of compass: once a company wants to use AI technology, it's good navigation, even for a small company. They can pick up some of the points and apply them to their own principles. So this is a very important development in the AI market this year.
Pamela Isom:So you're saying that what's different about it or what's advancing about it is there's more focus on innovation and using AI because the culture sees it as valuable. But did I hear you say that now they're starting to introduce more around privacy and the regulations but at the same time, still promote the use of AI for innovation?
Kohei Kurihara:Yeah. Actually, very few companies invest in AI compared to Western regions, because the amount of investment has been very small in our country at this moment. Once you put very strict regulations on the enterprise side, the field becomes more conservative about investing in new technology. That's the Japanese mentality. So the government side is also paying attention to how it can encourage new innovations, not just regulate them. That's why the government is looking for the best way to create innovation through AI opportunities.
Pamela Isom:Okay, yeah, that makes a whole lot of sense, and I know that culture is everything, so it takes time to really start to mold and allow culture to adapt to innovations. But I was reading that Japan is experiencing an aging population, that projections show major worker deficits by 2040, and that AI is intended to boost productivity. I believe that the caution the regulators are introducing is global, so that makes a lot of sense, because you want the AI solutions to not introduce privacy risk and other types of risk. It's nice to see, actually, that there is a focus on using AI to boost productivity, and that blends in with what's happening in Western culture as well, so I think it's great that you point that out. So, within the AI Guidelines for Business, I saw categories around human-centric design, safety, fairness, privacy protection, and transparency.
Pamela Isom:I'd like to talk some more about privacy protection and accountability. I've been involved in numerous discussions with folks, and one discussion that I've been having, and that I'd like to continue here, is around data categorization. I believe that in order for us to protect information, we have to categorize it. We have to understand the data and we have to categorize the data so that we can protect it, particularly around one's privacy. I'd like to get more insights on that from the standpoint of privacy protection and data categorization. But first, give me your perspective: am I on the right track? Do you want to add to what I'm saying? I personally believe that categorization is everything when it comes to privacy protection, but what are your thoughts?
Kohei Kurihara:Yeah, I totally agree with your opinion. Categorization may be very important for protecting information, especially because a lot of enterprises now need to know what kind of information could be personally identifiable. That determines the coverage of the regulations and whether your business model or business data has to comply or not. So that's critical for the business side. But it's also very difficult to define these kinds of data and categories, and there are some differences in context between regions. Europe, under the GDPR, has the special category data, for example.
Kohei Kurihara:That data does not always get the same level of protection in a different region, even in our country. Data that could be special category under the GDPR may or may not get the same level of protection under our regulation. So there are gaps between the different regions. The problem is how we can harmonize the different levels of personally identifiable information and how we can harmonize actions across regions. That's why I'm now paying attention to cross-border matters. We have different levels of granularity in how personally identifiable information is perceived, but we need to protect fundamental rights, and the question is how we can cooperate together. That's a very important context for us in the privacy and data protection community right now.
Pamela Isom:So it's the protection of the fundamental rights and the PII. Well, you mentioned both: protection of the fundamental rights, and you said that there are differences when it comes to geographic locations, so, for instance, categorization of PII in one nation may be different than in another. Can we talk some more about that? What are some examples of how it might be different? I'm trying to understand more about what we mean.
Kohei Kurihara:I mean the difference is the kind of threshold: what level of protection the data has to have, whether the personal information needs stronger or weaker security. Of course we have that kind of difference, just like with ISO. That kind of standardization is a good reference, but regulation does not always stick to the level of the international standards. So in this case we struggle: ISO defines a level of security assurance, but a given regulation might treat it as in or out of scope, and how do we deal with that? That's very important. We have some standards, but a standard is not always applicable to each local regulation. That's the challenge from my perspective.
Pamela Isom:Got it. So what you're saying is there needs to be some consistency, because, for instance, an individual's personal information in the United States might be categorized as personal, sensitive information, but in another nation it may not necessarily be considered sensitive information. It may be personal, but it may not be categorized as sensitive, and I think a lot of times it has to do with context, and that is true. I've studied the various categories and I know that there are differences. There's overlap as well, but I know that there are differences. Sometimes gender is not considered personal information in some nations, whereas in the United States it is part of that categorization of personal information. So I understand what you're saying there.
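To make the data categorization point above concrete, here is a minimal sketch (hypothetical, not from the conversation) of how a data team might tag the same field with different sensitivity labels per jurisdiction. The jurisdiction keys, field list, and category labels are illustrative assumptions, not legal classifications; a real mapping would come from counsel and the applicable regulations.

```python
# Hypothetical sketch: jurisdictions, fields, and category labels are
# illustrative assumptions, not legal classifications.
from dataclasses import dataclass

# Per-jurisdiction sensitivity labels for a few example fields.
FIELD_CATEGORIES = {
    "email":    {"EU_GDPR": "personal", "US_EXAMPLE": "personal", "JP_APPI": "personal"},
    "religion": {"EU_GDPR": "special_category", "US_EXAMPLE": "sensitive", "JP_APPI": "special_care_required"},
    "health":   {"EU_GDPR": "special_category", "US_EXAMPLE": "sensitive", "JP_APPI": "special_care_required"},
    "gender":   {"EU_GDPR": "personal", "US_EXAMPLE": "personal", "JP_APPI": "personal"},
}

@dataclass
class Classification:
    field: str
    jurisdiction: str
    category: str

def classify(field: str, jurisdiction: str) -> Classification:
    """Look up the sensitivity category of one field for one jurisdiction."""
    category = FIELD_CATEGORIES.get(field, {}).get(jurisdiction, "unclassified")
    return Classification(field, jurisdiction, category)

if __name__ == "__main__":
    # The same field can carry a different label depending on the region.
    for jurisdiction in ("EU_GDPR", "US_EXAMPLE", "JP_APPI"):
        print(classify("religion", jurisdiction))
```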
Pamela Isom:I'd like to talk some more about data transfer and data localization. If I think about the situation with TikTok and the whole concept of what's going on there, in that we need data that resides in the United States to be managed and governed by the United States, or at least governed by allied resources, there's this whole discussion around transference of data and use of data at a global scale. In your perspective, does that touch on data transfer mechanisms and areas that we need to improve from a privacy perspective? Does that also touch on categorization of data, and what do you see as the connection?
Kohei Kurihara:There are a bunch of issues that still exist, especially for data transfer. As I mentioned in the last part, we have a different level of data protection for each category, and besides that we see some different issues from the business perspective. Take TikTok, as I mentioned: they have a lot of presence in the United States, but from a business perspective, this company is originally from China. That is actually very controversial at the level of national security. International business has come under much stricter security requirements. It's not just one single company; it's a kind of international movement.
Kohei Kurihara:At this moment there is a similar case between Japan and South Korea involving the largest Japanese messaging app. Half of its ownership is held by a South Korean company and half by a Japanese one, so the South Korean side has some control over the data management. It's not easy to have free control of data administration, even for a big company. Imagine any platform company that holds a lot of market share in your country but whose ownership sits in a different country. In that case, access to information can slip in for national security purposes as well.
Kohei Kurihara:That's a problem that actually happens on the business side. Those are the different layers of systemic issues in data protection and privacy. But finally it comes down to data categorization as well. Very sensitive data, such as religious data, gender, or many other things, can be transferred to a third country for national security purposes. Infringements of democratic decisions, such as elections and voting, actually happen in some countries, with the democratic process influenced by other parties. With the data transfer issue, we try to solve the problem through international dialogue, so that's very important.
Pamela Isom:That makes me think about governance and accountability. I often talk about governance and putting the structures in place to guide safety of the information and protection of our privacy, and to provide those guardrails. That has everything to do with the data sets that are used for AI and with the AI models themselves. So I think it's important to maintain an inventory of the AI models so that we know what our models are used for. But more important, which is something that I always mention, is understanding the data, the data lineage, and the data provenance, so that we know: should you even have access to the data to begin with, and who should have access? I think that's what's happening with the whole TikTok situation: who should have access and, if so, who is going to be accountable if there is an invasion of privacy or if data is misused? What we're trying to do from that particular governance perspective, and governance at large, is prevent things from happening to begin with, because it is going to be hard to track the lineage and the provenance, given the proliferation of the data. So you bring up some really good points. I'll go back to the guidelines, Japan's AI Guidelines for Business, version 1.0. We talked about accountability. We talked about transparency some, in that you want to know who has the information; privacy protection, which is what you specialize in; and safety. And then the human-centric design. You know, I associate that with privacy, in that we are looking at ensuring that our privacy is protected and respected, and earlier you mentioned human rights, so I think the human-centric design is so important. And then fairness.
Pamela Isom:So I am going to review some more about that, but I think that's a pretty important piece of work that could potentially scale internationally. So the last thing I want to do is go into AI innovations. Is there an AI innovation that you're excited about that you want to share with us today? I know you have a lot going on in the privacy lab. Can you tell us how you're using AI and are there any AI innovations that just really excite you?
Kohei Kurihara:Yeah, my point is that privacy is essential to protect fundamental rights.
Kohei Kurihara:Also, privacy is a new inducement for users to provide more accurate information. The guidelines mention that it's very important to protect fundamental rights, because then users are willing to provide more information, and the AI can draw out more accurate information and more accurate feedback. That's very important. Even if you have a lot of data, if that data is garbage, you get garbage out of the information processing. So if you want productive, competitive AI, you should be more transparent and more accountable in protecting privacy, copyright, and the fundamental rights of users. Users can feel safe; users can be trusted to provide better information for the AI. That could be the competitive edge in the future. That's why I'm very excited about new AI solutions that protect privacy rights. That's very important.
Kohei Kurihara:And also I think AI will take a new step, which means small amounts of data, data minimization, while creating better feedback for users, because the data itself increasingly reflects how the user behaves on the internet. So you don't have to store too much data; you just need the minimum data, and the user wants to give feedback. Feedback is much more important than the data in the future. So some AI is being trained to be more clever with small amounts of data.
Kohei Kurihara:I think that's a very big innovation for the future, because large data sets require more electricity and power to process, and that's a big cost for the business side as well. So how can we minimize those costs and still give users great feedback? I think that's a source of competitiveness for the future. Privacy is one of the factors directing these new AI technologies, alongside cost-effectiveness and competitiveness from a business perspective. That's why I'm excited that future AI can bring those elements together to produce a more effective future.
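As a minimal sketch of the data minimization idea described above (hypothetical and not from the conversation; the field names and allow-list are illustrative assumptions), an application might keep only the fields its feedback loop needs and drop everything else before anything is stored:

```python
# Hypothetical sketch of data minimization before storage: keep only what the
# feedback loop needs and drop everything else. Field names are illustrative.
from typing import Any, Dict

# Only these fields are retained; anything not listed is never stored.
ALLOWED_FIELDS = {"session_id", "feedback_score", "feedback_text"}

def minimize(event: Dict[str, Any]) -> Dict[str, Any]:
    """Return a copy of the event containing only allow-listed fields."""
    return {key: value for key, value in event.items() if key in ALLOWED_FIELDS}

if __name__ == "__main__":
    raw_event = {
        "session_id": "abc123",
        "feedback_score": 4,
        "feedback_text": "Helpful answer",
        "email": "user@example.com",             # dropped: not needed for feedback
        "browsing_history": ["page1", "page2"],  # dropped: excessive for this purpose
    }
    print(minimize(raw_event))
    # {'session_id': 'abc123', 'feedback_score': 4, 'feedback_text': 'Helpful answer'}
```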
Pamela Isom:That's just a really great way to think about it, because I look a lot at sustainability and what we can be doing from a sustainability perspective when it comes to AI. And you just mentioned something really cool, because you talked about the feedback, which I hadn't talked about with my clients yet. I've talked about other things, but not the feedback loop and the fact that instant feedback will require us to store less data, which could ultimately add to sustainability. So that's a great innovation; I hadn't really thought about it or even had that discussion with my clients, so I appreciate you bringing it up. I do have discussions around competitive advantage, because the more the solutions are sustainable and not adding to energy consumption, the greater your competitive advantage is. But I love the concept of the real-time feedback, which means less storage. When we think about these sustainability concepts, we don't usually think about storage; we think more about processing power and speed and how it's consuming energy. So that's a great way to think about it.
Pamela Isom:You also mentioned data minimization, which still comes down to: do you need all this information? Do you need to be storing all this information? Because that's what businesses are doing today; they're storing too much information, more than what is really needed, and that ultimately can get us into trouble, especially if we don't have good archive and decommissioning programs and processes in place. So you brought up some really good points, and I love how you tied that to not only protecting privacy but sustainability. Those are great innovations. That's just great. Okay, so I want to know from you: do you have any words of wisdom or experiences, outside of what you've already shared, that you'd like to share with the listeners?
Kohei Kurihara:Yeah, I would say, as I mentioned, I'm working on privacy by design, and the "by design" part is becoming very important. For example, AI means the machine is becoming part of society. It can create more innovative things, just like ChatGPT: we don't have to write code from scratch; it can produce basic code or things like that, because it has memorized an enormous amount of information. On the other side, the developers and the business side have to embrace the by-design concept. That's very important, because everything is becoming automatic.
Kohei Kurihara:So the creators, the business side, and the engineers carry liability for the services they produce. It's a similar concept to the automobile: manufacturers have a lot of responsibility to their businesses and to the market, because they carry a very large responsibility to all the people who drive their cars. If anything goes wrong, like a car having a faulty system, drivers might die in accidents. AI has that kind of power: if these systems are integrated into physical products, a car being one example, they might kill drivers if something goes wrong.
Kohei Kurihara:So the thing is, AI is not just software. The software will migrate into physical things in the future, so everyone concerned with producing AI has to take responsibility. The by-design approach is fundamental; every other industry has worked through the same thing. That's why I try to deliver my message that by design is important: privacy, of course, and security and other things that protect fundamental rights.
Pamela Isom:So that's great. That's really good insight. Your emphasis is on by design, because you're saying that we want to integrate some of the principles up front, at the outset, rather than waiting until the end. And privacy by design: I often say equity by design, because I think we should build in equitable solutions, and sustainability as well. I think we should be thinking about that at the outset rather than wait until the software has made it all the way through the life cycle and then come back and try to retrofit. So by design is very popular and very needed, and I agree with that. I also think, and agree with you, that it's about the life cycle and looking at the entire life cycle. I heard you say that, and that AI is not just about the software but about integrating fundamental principles up front or, as you said, by design.
Pamela Isom:I really want to thank you. I hope I caught everything that you were saying. Did I miss anything when I summarized it there? Did I get it? Yeah, that's pretty good. So I really want to thank you for being with me and talking to me. I really appreciate you. I had a great time when I was on your show and we had a good dialogue, and this conversation today is really interesting because you are a privacy person but also an innovator, and the innovation really came out in this discussion, which is how I am as well. I believe in balancing innovation and risk, so balance the two and pay attention to both by design. I appreciate the fact that you brought that up, and I hope that we can have more dialogue. This has been a great discussion, so thank you so much for being here, and if you need to get in touch with me, you know how.