IGF 2024 - Day 1 - Workshop Room 9 - OF 26 High-level review of AI governance from Inter-governmental P

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> YOICHI IIDA:  Thank you very much for your patience. We will now start the session on Internet Governance.

Japanese Minister of Internal Affairs ‑‑ (audio is distorted) ‑‑ in Civil Society. (Indiscernible)

(Audio is distorted)

>> YOICHI IIDA:  ‑‑ privacy policy (?) And we have one speaker for an international organization. The Deputy Director of science and technology and innovation.

We have two speakers.

Did I pronounce that correctly?

She's director of (?) From Smart Africa.

And we have one more speaker, in particular, the youth committee.

Thank you very much to all of you for joining us, and we will have a very productive session.

So, in the beginning, I would like to invite the speakers to speak about their views on the general current status and also challenges in AI governance, probably in your domestic AI governance situation, or probably, you can talk about the global AI governance situation.

What are you most expecting from AI, and what are you doing now?

So I would like to start with Ambassador ‑‑

>> Good afternoon, everyone. You did ask us a very important question, and you asked us to do it in five minutes, which is impossible.

This is simple to say, not to do.

Are we sure that this revolution will be progress? It's not just innovation. It's progress for humankind.

And it is probably the main responsibility of governments to be sure that we balance innovation and security, economic growth and equality, efficiency and diversity, and to find a good balance.

And that's why we are (?) We have asked citizens to pay attention to this. I'm a diplomat. We have the responsibility to do it in an international framework and in conversation with the important logistics committee.

And if we start with the idea of progress, then I will pass the mic to other speakers.

I just want to emphasize that, since the beginning of this AI era, we have had different conversations about it ‑‑ now, we are speaking about economic development. Are we addressing the needs of energy economics? I think that's the most important start: to recognize that we have to face a lot of challenges and to try to have a broad vision of those challenges.

So security matters, and security is not just AI becoming crazy and attacking (?). It's also cybersecurity. Security is also bias. Are we sure we (?) ‑‑ (audio is distorted) ‑‑ and that we're not reproducing inequalities? We also have to think about diversity. I believe that if you don't have (?) It will disappear as an economic language.

So we have to be sure that we have the possibility to (?). Diversity doesn't mean just (?) because you also have to train the model with your own knowledge and make sure your point of view, your perception of the world, will be taken into account.

We also have to face the question of the environmental impact of (?). Are we sure we still know how to (?) and have a good framework? Maybe we will talk about this. Maybe we have to find new ways to protect privacy. Maybe just protecting my personal data is not enough to protect my privacy.

And we can continue. Maybe we need to make sure we train enough skills and competencies in managing these economies, so that they will take the driver's seat (?) consumers. Maybe we have to think about education, to be sure that future citizens will be ready for this new world, and that they will be free minds and free citizens in the new world.

I could continue. I won't, but the idea is that the approach and the questions we have to face is, from my point of view, very important.

Thank you.

>> YOICHI IIDA:  Thank you very much, Ambassador.

You talked about a lot of various risks and challenges. And, in particular, you talked about security and diversity. Diversity will be very important as AI continues to develop. And, also, we need to recognize the importance of protecting the next generation.

The risks and challenges were talked about. At the same time, we also recognize the importance of innovation.

What about the (?) perspective?

I would like to invite (?) to share with the group.

>> Yes. Thank you so much. Can everyone hear me? Great.

Just to set the stage from my perspective from Meta. In AI, we're very much an open‑source AI company. What that means is we're all in on providing our AI technology on an open‑source basis. So our large language model, Llama, it's available to anyone to download for free. This, in our view, is the best way forward, in terms of approaching AI innovation, for a few reasons. One is that for developers, this is the most valuable and flexible option for them to build on and be able to customize applications to their local needs, fine‑tune with the data they want. To

It's also the best from an economic perspective. Being able to provide a diverse set of tools to developers and to countries is going to have the most benefits economically because people won't be locked into a few companies that provide closed models.

Finally, it benefits us as a company because we won't be beholden to other operating systems, so to speak ‑‑ (audio is distorted) ‑‑ the global frameworks, in the last year, including the frontier AI commitments that will require us to publish a safety framework before the AI Summit in France next year.

We were also an early adopter of the G7 Code of Conduct. So that's our perspective.

What I've seen happening in the AI governance landscape, there are some positives and some challenges.

I think the positives are that we've seen a real harmonization in AI safety at the global level. So there's an increased understanding of the safety risks. There's an increased understanding of the steps we need to take to mitigate those risks, and, more importantly, I think a firm understanding that we need to have a harmonized global approach to this technology.

I think some of the challenges, however, that we're seeing are that there's a lot of conversations happening that are not necessarily relating to each other. So while we have international agreement on the safety conversation, as the Ambassador pointed out, there are other conversations happening.

So there's the data conversation around data privacy and the use of data.

There's the copyright conversation that's happening. There's a conversation around AI, not just advanced AI but also classic AI, and how we look at the risks and harms of regular AI when it's making decisions that affect people's lives.

Of course, there's a lot of industry standards being developed that are important in a lot of different ways.

And then there's the conversation around safety institutes, which, I think, in a positive development, are being stood up around the world that will help with the science of AI and the evaluations and benchmarks that should be looked to for AI governance.

So I think the question is going to be how to tie a lot of these things together, as they deepen, as the science deepens, how do we connect these pieces to make sure they talk to each other.

Finally, the point I want to make about one of the priorities we have, in terms of the AI governance conversation is, how do we reflect in our governance frameworks the realities of the AI value chain, and, particularly, open‑source AI?

What do I mean by that?

We have to reflect the different roles that the actors in the ecosystem and value chain have to play. Those are different roles.

What role does the model developer have in terms of safety and risk mitigation, and then what role does the deployer of the model play?

All of these players have unique goals and responsibilities.

I think as we look at a comprehensive governance framework for the ecosystem, we need to take that into account.

So, speaking from the open‑source perspective, we don't have the control and the visibility into the downstream uses of the model that a closed model provider might, simply because anyone can use our model for any purpose.

In that case, what are the responsibilities of the developers that are developing the applications for very specific use cases?

So I think we need to bring that complexity to the conversation to make sure that we're using the right tools in the toolbox to address the harms that may arise.

Thank you.

>> YOICHI IIDA:  Thank you very much. The parallel conversations that are going on, and how the international governance frameworks (?) relate to them, will be the second discussion point in our framework. I would like the other speakers to share your overviews of the current situation.

The previous two speakers covered a lot of elements, like risks, challenges, and opportunities, and diversity and inclusiveness will also be a very important part.

Now, I would like to invite (?) to speak from an African point of view. As we talk about AI governance, what do you want to prioritize, and how do you regard the current situation?

>> Thank you very much. So, first, I would like to clarify that (?) ‑‑ (audio is distorted) ‑‑ supporting, together with the private sector, the digital transformation.

Now, from the perspective of AI in the African context, I would like AI (?) It helps us. It helps us to be more efficient. It helps us to digest a lot of information. It helps us to (?) for Africa. (?) in terms of education, health, transportation, for instance. But if I take places like Rwanda (?)

(Audio is distorted)

>> These are things we would not be able to do on our current infrastructure. For us, AI is a way to (?) If we don't cover anything, what happens? (?) It's going to be disastrous. (?) If we don't have (?) It can also take (?) So the efforts (?) A multistakeholder approach.

It's very important that we come together, first of all, in a way that's harmonized and, also, fair and ethical. I think my other colleagues have spoken about it. And inclusive.

For us, if AI is going to help us to leapfrog, we need to be able to make it inclusive as well.

In terms of the challenges, within the African perspective, we know that the bedrock of AI is data, right? You need a lot of data to be able to properly utilize AI.

But the number of data centers in Africa equals the number of data centers in Ireland.

Look at the population of Africa, 1.4 billion, compared to the population of Ireland.

So we need to increase the infrastructure. We could say that we would leverage other people's infrastructure, which is what we are doing now, but I believe that takes away some sort of sovereignty from us. And if we're talking about ethical AI, fair AI, using AI within your context, it's important that data is also within your jurisdiction, so we are able to properly leverage it and make sure it's sovereign.

The second challenge for us is the skills gap. I think five years ago, there was a craze about creating a lot of coders, but now AI can also code. What do we do now? I believe, for instance, that what we need in Africa is to be able to develop (?) skills. For example, we're talking about electric cars. We are talking about assembling phones in Africa. Why can't we use Generative AI in classrooms to teach these skills? We cannot scale that today, but we can leverage AI to scale it up. But, also, within the space of software development, we still need people who can develop these AIs and robots and the rest.

And the last one is the datasets. I think it speaks to the biases that my colleagues have spoken about. Most of the AI tools we have have datasets from other regions; we do not have the datasets in Africa. So that's one key thing we also need to focus on: building the datasets so that the AI is African, true to our story, so we can leverage our story to tell our story. AI is as intelligent as the data you feed it. If you feed it with other cultures' datasets, it will always go against us. We've seen some of them, when we apply AI tools from other regions, rejecting people applying for loans just because they're of a certain race or a certain gender.

So it's important for Africa.

I know it sounds very (?) But that is what it is. We need to be able to create those datasets.

Thank you.

>> YOICHI IIDA:  Okay. Thank you very much. Very deep thoughts about the current situation of Africa and also challenges of Africa.

I think many of those are shared by people from around the world, but your talk also reminded me of the speech by the Minister in the opening session and other speakers. Actually, it was very impressive to hear many speakers talk about AI in the opening of the Internet Governance Forum. Everything is related and reflects each other. We need to link one to another to get the best benefit from these technologies, and we need to be future proof. From that perspective, I would like to ask the viewpoint of the youth generation.

>> Thank you, Iida. The other speakers have raised this aspect, the good and the bad. Talking about the youth perspective, I think a great number of youths, give or take, have contributed largely to the development of artificial intelligence, be it from the stage of how some of the systems are working. Some have used it to cheat their way through, to leapfrog, but others are using it holistically, based on honesty and truth as well as transparency and accountability.

Now, when you look at it, maybe touching a bit back on Africa, the majority of youths that I think have used AI have, among other things, been the source of data. We talk about data mining, for instance. Most of the AI tools have been trained by a number of youths. Let me give an example. Recently, I think some Kenyan youths were protesting because the amount they were getting to feed these systems was way lower than the amount of money that these systems are going to generate from it. So it's the issue of the balance: you have the Global North developing most of the AI systems, but the Global South is training them ‑‑ in terms of how many people are working in local AI centers based on these systems. One of the arguments that I've had is that most of the platforms and systems and technologies being used by the majority of Africans are not developed in Africa, which means data localization doesn't exist in context. It can exist on paper, but it's not working. As much as we talk about data regulation, we're building data protection bills based on data we don't host. I think that creates a disparity.

Most of the data that we're actually operating on is not being hosted on local platforms. So you have a bigger gap, in terms of governance of data in Africa, based on data that we're actually not hosting in the first place. So it comes back to who then hosts most of the data, and how do we create local acts or bills or laws to govern the data that we're using when it's actually not being hosted in Africa.

So when you talk about what should then be the priorities and expectations from the youth perspective, it's how much influence do we have on the data that we are producing at a local level when it's being hosted outside of Africa? Because, give or take ‑‑ I can use this as an example ‑‑ the majority of governments and Civil Society organizations that are based in Africa are not using local tools. For example, I think Microsoft is one of the big techs that most African countries and governments and Civil Society are using as a platform.

With Microsoft 365, most of the data centers are not based in Africa, yet what is produced there is expected to be used in AI. When it comes to the youth's perspective, how do we create a balance between globalized (?) governing data from the local perspective, and how much benefit do African countries have from the data they're generating when it's not being leveraged and hosted from the African perspective?

I think that's the youth perspective. I will end here for now.

>> YOICHI IIDA:  Thank you very much. That covers a lot of things. It's very interesting to see that we gather for the Internet, we are talking about AI, and now we are talking about data. So everything is related, of course, to each other, and, probably, we also need to talk about data and computing power.

Also, we talked about the supply chain and the benefits trying to reflect the governance discussion onto the reality of supply chain.

But it's very interesting because, from the government perspective, we are trying to reflect the reality of the supply chain in the discussion of policy‑making. So there could be a kind of very healthy, mutual interaction, but, if that fails, the framework will not be very effective in the future.

As governments, we have been making efforts to develop governance frameworks, such as the G7 Hiroshima (?) that was talked about, and we spent time talking with those from G7 countries.

And we had the AI Convention of the Council of Europe, we had the AI Act in place, the Global Partnership on AI was integrated with the OECD AI community, and a lot of things happened over the last one or two years.

And everything was connected to, actually, the OECD, and my partner (?) is looking after probably everything, and she knows everything, I believe.

I would like to ask for her comment and overview of the current situation and, also, the points that previous speakers touched upon.

So, please.

>> Sorry I could not be there today. Thank you for having me and OECD on the panel. I will be brief so we can have more discussion. Just to say, of course, we've had, in the last couple of years ‑‑

>> YOICHI IIDA:  I'm sorry. We cannot hear you. Just a moment.

>> Ah.

>> YOICHI IIDA:  Okay. Try again.

>> Does this work? No?

>> YOICHI IIDA:  I'm sorry.

Okay. So the technical people are working on this. Please wait. We will come back to her later.

So we talked about a lot of things, and we have touched on data and infrastructure, and we talked about the challenges and risks, of course, as well as opportunities. And, also, we heard the views from different communities.

So, now, we do not have enough time, but I would like to invite all speakers to make a comment on your perspective about the responsibilities or laws or what you are planning to do in your own community, such as government industry.

So I would like to invite the last comment for all speakers.

Before that, I would like to invite Audrey to try again.

>> Does it work now?

>> YOICHI IIDA:  Okay. Now we can hear you.

>> Thank you. I think maybe I was in the observer room and not the speaker room.

Thank you for the kind word, Yoichi.

I'm sure you're having a great time in Riyadh.

We've seen a lot of changes to the global Internet space in the last five years since we initially adopted our first AI recommendation, which we just revised earlier this year.

As Yoichi mentioned, we have seen the emergence of safety institutes. We've had changes just recently here at the OECD with the integration of the Global Partnership on AI into our work program and the emergence of a lot of different policy topics, many of which other speakers talked about.

Just to give a couple of examples, we know the issues of data and AI are super important. They're critical. Everybody has said that in their own way. Our expert community here at the OECD has more than 400 people, and one of the more recent things we did, in the last six months, is create a group focused on privacy, data, and AI.

So just to say that the topics at this table, and the issues that you're coping with in different regions around the world, we see very much across the community that we work in, which is a broad, global community. I would just say that if you're not familiar with the OECD's AI Policy Observatory, it's really the place where we're trying to put as much data and evidence as possible behind trends that are happening in AI, trends like what kind of language models are being built on what languages, so that policymakers can look at that data and start to shape a policy environment that implements the broad principles that I think we all agree on, things that have been mentioned before already, like equality and fairness, bridging divides, and other things.

If you have not checked out the Observatory, it's a great thing to look at: what is happening across countries, where patents are being filed, and where investment is going into AI.

We've built that out. Right now, we have 60 jurisdictions participating in the Observatory. I invite others to join us. I have a colleague in the room that you can talk to since I'm not there.

But a big thing we're trying to do at the OECD is provide harmonization across different approaches, whether that be standards approaches or policy approaches. Many people have said how important it is that we operate in a global space and in a global way, and we're trying to bring our analytical and data‑driven approach to AI.

I think as we move forward into 2025 ‑‑ 2024 has been jam‑packed with AI ‑‑ there's a lot of focus on the safety of frontier models, but I also note the importance that others have placed on, maybe, not frontier models but the day‑to‑day integration of AI. I will just close by saying some of the data we released earlier this year shows just how much runway there is for AI to diffuse, or to be adopted, across industries for a lot of potential benefit.

I think, at best, we see about an 8% diffusion rate (?) mostly in large companies.

To make AI both more accessible and more widely adopted, we think ‑‑ we know it has to be trustworthy but also that we have to put some of these other framework conditions in place around safety, security, and fairness to make those numbers around diffusion go up.

I look forward to the rest of the discussion.

Thank you very much, Yoichi.

>> YOICHI IIDA:  Thank you for the comment. I think the remaining time is very limited, but I would like to hear one more round of voices from all speakers.

We (?) inclusivity when we talk about AI governance. There are a lot of people talking about the AI gaps, the AI divide.

Now, I think the OECD has 38 member countries, but they have more than 40 members and are welcoming more.

So there will be a more inclusive group for AI discussion, but I think France has a similar perspective on AI governance in organizing the summit for next year.

I would like to ask the ambassador what will be the objectives and goals of the AI Action Summit.

You have just two minutes.

>> Can you hear me?

(Audio is distorted)

>> Hello? Hello? So, in two minutes, first, maybe there's something we did not say enough in this room, and maybe not within the IGF itself. Of course, AI is not just a very promising technology. It's also a source of power and of intense competition between companies; they want to take the lead. It's a geopolitical competition between models, and there is a competition between international organizations to take the lead. We have to face this. If we don't recognize this, we won't do a good job.

And, for us, for France, for a lot of people, the big threat regarding the future of AI and the future of AI governance would be a fragmentation of AI governance. If we let fragmentation happen, we have a race to the bottom, a race to the worst.

Everyone, to remain strong in the competition, goes for the weakest regulation or governance. So we have to stick together. So, yes, of course, we think that the political framework of the OECD is of the utmost importance. We discuss. We integrate. We aggregate. And we're doing a great job within the OECD, but we think we also need a universal conversation.

So the Paris summit, in two months, on February 11th, will probably be the biggest international summit so far. (Chuckling). We did invite heads of state, and we expect something like 80 heads of state and government, and most of the heads of international organizations, and we will propose an agenda around what a good but broad governance would be, regarding the holistic approach I mentioned earlier. It will also be a very intense conversation.

We expect something like a thousand or 2,000 delegates from research, from the digital sector, the private sector, and civil society, and we will try to organize the conversation around three main topics, or maybe four. Risk, security, and safety institutes: even the question of catastrophic risk still matters. But we'll add three layers: a conversation regarding system‑enabled AI ‑‑ let's try not to break the (?) again with this new and promising technology; a conversation about a broad governance addressing all I mentioned earlier; and a conversation about the needs ‑‑ (audio is distorted) ‑‑ because, frankly, we don't want this movement to be completely privatized (?) public resources.

I travel a lot, to the biggest universities. Today, even they cannot reproduce the research of the biggest companies. So ‑‑ (audio is distorted) ‑‑ we want a situation where public knowledge and research can reproduce the results of the private sector.

So we need finance. We need more money in the public sector. 

>> YOICHI IIDA:  Thank you. We have five minutes left.

What do you think you can do in developing AI governance?

(Audio is distorted)

>> Thank you. Just a couple of things I want to touch on. I think companies have significant responsibility here, clearly. I think particularly around participating in the international frameworks and initiatives that are being developed, adhering to them, working with safety institutes in their home countries to develop the research and evaluations of advanced models.

I think being transparent with everyone about how their large models are developed, what they're capable of, what the risks are, how they're addressing risks, all of those things, I think, are really important and squarely on the shoulders of developers.

And then I think there's a lot of partnerships that need to be developed, in terms of public‑private sector, whether it's on research capabilities, whether it's working to develop data that's going to be representative of the entire world.

Meta is working with the Gates Foundation, which is not the government, to develop African data for training.

So how can we partner together to advance some of the common goals? I think that's going to be really important going forward. We're starting to get a sense of what the real needs and opportunities are, and no one can do this alone, right? Everyone has their own interests to further, but we need to be doing it together.

So I think really getting a clear understanding of where we can partner together to advance the governance, I think, is the next phase.

>> YOICHI IIDA:  Thank you very much.

And what about your organization?

>> Thank you. So I will speak on the (?) Perspective. One of the key things we've seen is a disparity between (?) ‑‑ (audio is distorted) ‑‑ so that it's effective. There's no point in developing (?) And on that, we're looking at things like giving experience to companies that are (?) This kind of thing. So it's a key thing to consider.

Also ‑‑ (audio is distorted )‑‑ approach to AI governance, and we fully support what others have said. That's what we do also (?) civil society organizations and the government, but it's Africa. So we need to go beyond (?) to create a (?) model.

Thank you.

>> YOICHI IIDA:  Thank you very much.

Enforcement is very important. Actually, this is what we are working very hard on.

Before I invite comments on this, I want to invite (?) to share your view. What is your expectation? What do you think you can do (?) community?

>> There are two parts. The first is (?) in the conversation, especially when it comes to governance. (?) As much as I have an idea of what other regions have done with regard to youth involvement, with regard to Africa there is partly a feeling that governance must come in line with (?) ideas before they actually thrive. So I think, from the youth, the call is for the government to engage youths with an open mind, so that they allow innovation to thrive, and then the (?) zones, in terms of what the risks are, can be talked about with the youth in the meetings and in the room. And talking about (?) At the end of the day, we're trying to bring regulation. And it (?) allows the youth ‑‑ the youth have been at the heart of the most innovative ideas. So when we talk about AI and emerging technologies, we don't have to think of the youth as not knowing something; we should give them the benefit of the doubt and allow them to thrive and grow. Most of the technical ideas, or some of the infrastructure or developments where we have (?), were actually done by the youth. So involving them from the start all the way to the end of the process is crucial.

The question is how to engage the youth. If it's government mostly making these policies, there's a high chance that the youth's perspective is left out of the room. And then the youths are not involved at a stage when they're behind most of the innovation.

I think trying to create a balance between the government perspective and the youth perspective in most of the governance (?) becomes very (?).

>> YOICHI IIDA:  Sounds good.

Thank you very much for the comment.

And now (?) important innovation. I would like to invite Audrey to talk about the (?) on my behalf.

>> Thanks, Yoichi.

I just want to say that governance is more than regulation. Regulation is really important, but governance can include other tools.

I just don't want to miss the opportunity to talk about one we hope to be finalizing very soon this week, which is an implementation framework, or an implementation reporting framework, for the Hiroshima AI Code of Conduct. The purpose of this framework is to allow companies, institutions, and organizations to report publicly on their activities related to the Code of Conduct, so that we can take voluntary tools and move them from just nice words on paper to building out an ecosystem of information that can inform policy decisions. I think the prior speaker said rightly that, you know, we often operate in a vacuum. Those of us who work on AI day in and day out may know a lot, but there's a lot we don't know about AI. And it's hard to implement good governance regulation in that vacuum.

So I think filling that vacuum, that void, is an important step. As part of the process to develop this reporting framework for the Hiroshima Code of Conduct, we have also mapped it to other codes of conduct, and we're hoping to start to engage in this governance dialogue in more than just talking ‑‑ which is also really important ‑‑ but in concrete information sharing that can help inform researchers and the public, so that we make these tools as interoperable as possible, and that's not a negative thing for the world. It's, instead, advancing a system whereby we can compare things, as we say, in an apples‑to‑apples way, to see what is happening on a global scale.

I hope to be able to announce good things there, and I know you do, too, Yoichi, as we work on these governance activities that have taken place over the last few years and try to move them out of the negotiating rooms and into the practical implementation phase.

Thank you so much.

(Audio very low)

>> (?) That will be an experimental mechanism where the private sector and the government will work together to ensure safety and security and (?) over AI systems (?).

We know this is not the only answer, but we are making a lot of efforts to build up a governance framework, which should be inclusive. We, of course, understand this effort is among the different stakeholders: not only the government but industry, civil society, and academia.

I hope today's discussion was very productive and helpful to the audience. I hope we continue working together toward an open, free, and not fragmented (?) AI ecosystem.

Thank you to the speakers and to the audience.

I am very sorry about the audio system, but I hope you enjoyed it. Thank you very much.