The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
I'm here and next to me is Lisa, Lisa Vermeer. Welcome. Juliana is joining us online. Welcome as well. And Ananda is also here next to me in the room. Welcome all. I will give you the floor first, Lisa, to introduce yourself.
>> LISA VERMEER: Yes, thank you so much. My name is Lisa Vermeer. I work at the Ministry of Economic Affairs in the Netherlands, and one of my main jobs is to work on AI regulation --
(Audio Difficulties).
>> ANANDA GAUTAM: Hello, everyone. I work with the internet community and have been part of civil society, working on capacity building for young people and on making the internet more accessible and inclusive.
>> MODERATOR: And Juliana, may I give you the floor as well?
>> JULIANA SAKAI: Sure. Can you hear me?
>> MODERATOR: Yes, I can hear you.
>> JULIANA SAKAI: Thank you so much. I am Juliana Sakai, the director of Transparência Brasil, an independent organization promoting transparency and accountability in the Brazilian government. That also includes the government's use of AI. So we have been monitoring how the Brazilian government is deploying and using AI, making recommendations in this field, and in parallel also monitoring how AI regulation is being discussed
(Audio Difficulties)
Thank you.
>> MODERATOR: Thank you very much. So that is our panel for today.
But today we are in an interactive session, so I would encourage you all to join the discussion once we are there. But first I would like you to participate in a vote. You can scan the QR code or go to kpmgvote and log in with IGF 2024. As a starter I would like to introduce to you the global impact of European AI regulation, and for this I would like to give the floor to Lisa.
>> LISA VERMEER: Thank you so much. Well, for this policy question I would like to start with the poll online, and then we can see what comes out of it. So let's see if it works. The question is: do you believe that the EU AI Act will be the de facto global standard for AI governance? It is often presented as one of the first comprehensive AI laws in the world.
Or will other laws become the global standard? So what do you think, yes or no? Or do you actually have no idea what the EU AI Act is about? That's an honest answer too.
>> MODERATOR: We have six votes in already.
>> LISA VERMEER: Don't be afraid to just choose something.
>> MODERATOR: I'm looking at the room. Most of you have voted by now. So let's go to the results.
>> LISA VERMEER: So everyone knows what the EU AI Act is. That's a good thing to know. It's 55% yes and 45% no. So the question is: why yes or why no? I'm looking forward to hearing more about your perspectives on how this would work. For this session, I would like to share my thoughts about why it can be yes and why it can be no, and what is, in my perspective, a challenge for all of us. If you look at the European AI Act, it is product policy: for all the AI systems that enter the market in the European Union, you can assume they are safe, because the Act sets requirements for risky AI, for several risk categories of AI.
And if an AI system falls into one of these risk categories, it has to meet certain requirements before it can be sold or used in the EU, by the private sector and by the public sector. Basically everyone. So that means for lots of AI systems there will be requirements that make them actually safer, and safety you can look at as --
(Audio Difficulties).
>> MODERATOR: You have to hold it closer.
>> LISA VERMEER: Perfect. This is better, I think, for the audience. The safety of AI systems will be improved, and it means that there are requirements for secure AI, for healthy AI and for AI that abides by fundamental rights. So the risks that may come with AI in these areas will have to be tackled for all AI systems before they enter the market. At least that's the premise of the law. It means that these systems, whether they are made by European companies or by companies across the world, will be safe enough and meet the requirements if they are made for the European market. That may have the effect that lots of companies will build one type of AI system to sell everywhere.
Because of the EU's requirements they will build a system, for example, for the health sector to use in a hospital, and then they will meet the requirements for AI in Europe. But other areas of the world will also benefit from the fact that this AI meets the requirements, for example in a hospital in Nepal or any other area in the world.
So for a whole range of topics that is going to be the case, and that makes me expect that you can say yes, because it will set the standard for lots of AI across the world. But there are a lot of risky areas of AI where it may be difficult to say whether the requirements will really be adopted in the future. For example, if you look at critical infrastructure or biometric AI, they will be regulated, but we can expect that some companies will build multiple product lines: they will make the safest products for the EU, but also other products that do not meet the same requirements, for example the data safety requirements. We see that happening a lot in the world.
So it depends on the incentive for the company whether they will make one product for the whole world or one super secure product for the EU. That's why the yes is maybe a bit limited in a way.
I would also say there is very much a case for no as an answer, because the EU is a very specific area of the world in terms of regulation. A lot of regulation has already been adopted for the digital economy and for personal data, for example with the GDPR and the Data Act. There is a very dense regulatory field, which is very different from the regulatory ecosystem in other areas of the world. For big areas like India or maybe Brazil, the legal context in which a law can be adopted is very different, and that's why the AI Act design may not be suitable for other areas of the world to replicate.
Also, product regulation is a very old way of regulating markets. There's lots of such regulation in the EU, but it may not be the approach in other areas of the world, where it's not that easy to replicate.
And then another challenge I wanted to share with you is that enforcement of the AI Act promises to be really challenging, because the AI Act is very broad: it sets rules for a whole range of areas, basically touching all industries and all public domains. And how do you effectively enforce such a law? How do you make sure that not just policymakers and lawmakers like myself, but the regulators that will do the oversight of the law, are really able to work with it and make sure that people and companies and organizations abide by the law? Already in the EU this is a major challenge, and it's something I'm working on a lot and wrapping my head around. And I think that will also be the case in lots of other areas of the world, where it may be quite difficult to adopt the AI Act, because how do you enforce a law which is so broad? That creates uncertainty.
So, yeah, I think I will leave it at that.
>> MODERATOR: Thank you very much, Lisa.
So I hope at this point that the AI Act can steer us in the right direction, and hopefully it can be adapted once we learn how to make use of it. That's also partly up to the market, I guess, and partly up to the standardization bodies that are involved in making standards for AI, which is the bridge to the next policy question. This question is about the actions for standardization bodies.
There are local standardization bodies trying to get involved in AI standardization, but we also see some challenges for the global standardization bodies. And now again a question for you all: what are the biggest challenges for global standardization of AI? I would encourage you to vote again. The answers will be on the screen. I will give you some time to grab your phone and log in at this page.
Scoping, so what do we consider AI. Consensus. Compatibility. Translating fundamental rights into technical standards. Finding common ground. Different views on the existence of human beings, also interesting. Capacity of experts. Local context. Trust. Tough question. Universal concepts. Understanding that AI replicates traits of minds and persons in large language models. We will move on to the next one.
So really interesting challenges, and I think those are all true, to be honest. We are indeed maybe in an early stage of AI and AI standardization. Currently a lot of standardization bodies are active, trying to come up with standards, both ethical guidelines and technical standards.
However, what we also see is that those standards are quite fragmented. All different kinds of standardization bodies are trying to deal with AI in their own particular way. I don't believe it's a hype, as one of the participants just mentioned.
However, what I do see is quite some overlap between standardization bodies and initiatives, each trying to make a standard according to their own best practices.
That brings me to the second point I was trying to make, and that's sector-specific complexity. What I see in my work is that some standards being created are quite generic, and those might be applicable to all kinds of AI use cases. However, some sectors really do want more steering on how to make use of those standards.
For instance, health care requires very different standards from the mobility industry with autonomous cars, and the defense industry also requires different standards, which might not even be publicly shared. My third point is regulatory diversity. Lisa just mentioned the AI Act, which is applicable in the EU for EU inhabitants.
And that's according to the EU way of thinking. When we create ethical standards and guidelines in the standardization bodies, they may differ depending on where those guidelines are created, which can contribute to a good debate that we might be able to have in the future on the global scale as well.
My fourth point is: how can we balance this? Balancing between regulation, standardization and innovation. It was also mentioned in a different session that I attended today: how can we strike a balance between the regulation of AI initiatives and innovation? A small startup is not the first one to look at for an industry standard, for instance; it just creates according to its own best practices and its own way of working, and regulation could hinder those startups in being innovative.
And my last point is the dynamic technology landscape. What we saw in regulation is that when the EU AI Act was being developed, Generative AI was not that big in the beginning, but it did become big and there was a need to regulate it. It is the same with standardization: standardization takes a lot of time and is built on what works best.
With a sector that is moving so fast and trying to take on all the newest challenges, this may be a problem at the global standardization scale.
With this I would like to conclude my introductory remarks and I would like to give the floor to Juliana.
>> JULIANA SAKAI: Thanks, everyone. Thank you. So we have right now policy question 3: enhancing enforcement through civil society. And I would like you to answer: in your opinion, what is the most significant role civil society can play in AI governance? Please share with us your thoughts.
>> MODERATOR: Log in and share your opinion.
>> JULIANA SAKAI: They are coming in right now. Monitor the government. Awareness. Don't think in problems, think in solutions. Capacity building. Advocacy. Vote for parties that have good plans. Voice concerns. As users of public services, citizens have a role in democratic control. Research the potential impact of AI systems. Defending and protecting values and the public interest. Facilitating dialogue. Agenda setting. And monitor the government. We are back again. Thank you for sharing so much. Make the minority heard. Yeah.
So, as a member of Civil Society, I would like to talk a little bit about the context we are in now, with AI regulation coming.
I would like to look at two different contexts. One is the context in which specific AI regulations are being implemented, like the EU, or where they have a direct effect, like the U.S., where, as Lisa mentioned, the mass producers, the tech companies that are selling AI systems, have to comply with the EU AI legislation. And the other context is a place like Brazil, for example: we don't really sell products to the EU, not massively. So let's say the global majority, which is not being affected directly, at least, by these legislations, but might experience an indirect influence.
Let's begin with where regulations are being implemented. There, Civil Society has a critical role in shaping enforcement: it can identify the problems, where enforcement is not working, and advocate for institutions to take measures. But in order to do those assessments of what is working and what is not working, and in the end present these problems to society and to the institutions, we have to have real transparency.
So one thing is being able to access information, both from the government and from the companies, to understand what is working and what is not working.
Once we secure a level of transparency, we can in the end really do the reports, the analyses and so on, and all the follow-up work. The second context is where we don't have a specific AI framework and don't sell products to the EU. I want to share a little bit of the Brazilian experience, the Civil Society experience, of AI governance without a specific framework for this.
So I think the first thing we have to think about is really to understand what is the legal framework, the current existing legal framework, that can actually protect rights in the context of AI use.
In Brazil, for example, we have consumer rights law and the general data protection law, and both of them have been used to stop the use of certain systems, especially facial recognition systems. Civil Society at this point can bring cases to Brazilian institutions. For example, the (?), an institute for consumer protection in Brazil, has filed administrative procedures against the use of this tool in different scenarios.
For example, when the subway started to use it for marketing and advertising purposes, collecting information on the reactions of the people watching, and also in clothing stores, where they were likewise recording and capturing information through facial recognition systems.
In most cases there was absolutely no consent from the consumer. In this situation, the institute for consumer protection won the case, both in the judiciary and in the administrative sphere, so that the metro and the clothing store stopped using it.
So I think this is more or less what I wanted to share to start a conversation. Even in contexts where we do not have a specific AI framework, we still have a lot of work to do on the governance of these AI systems. And just to close: as I mentioned, we are currently discussing a bill on AI in Congress, and a risk-based framework has also been discussed there. So at the end of the day, Civil Society is also fighting to protect its rights in a more specific way with regard to AI systems.
>> MODERATOR: Thank you very much for your great contribution. We really see the importance of Civil Society being active in this, so it's great that you and your organization are a part of that.
Before we move on I would like to look at the -- we will do that after the next speaker. I will share my screen again. I'm giving the floor to Ananda.
>> ANANDA GAUTAM: Thank you so much for describing the role of civil society; I think Juliana has covered that well. So I will be discussing the global perspective. We started the discussion by asking: will the EU AI Act be the de facto regulation for AI? I think it's the best thing going today, and it can regulate products and services that are outside of the EU as well. So let me speak from the context of developing nations and the context of human rights.
The basic fundamentals of AI that we talk about are the quality of the data and the bias of the data. Looking from that perspective, there are two things that need to be considered.
One is the social side and the other is the technological side. The quality of the data, the fundamental data, is what trains the algorithm of the AI. In developing nations there are many challenges. We are talking about AI governance, but there still exists a digital divide, and we are also talking about an AI divide now.
There are still more than 2 billion people who don't have access to the internet itself.
And then there are the data centers and the quality and collection of data; I think that is very challenging. Without standard data it is very challenging to build AI models: they might be biased, or they might be unrealistic for developing nations. Developing nations lack the capacity to either implement such models or build their own. And if we talk about policy and legislation, they also lack the capacity to build their own legislation.
As with the EU AI Act, many countries are trying to adopt similar legislation, but they can't do it properly because they don't have that capacity. So how we develop that capacity is one thing. Another is what AI could mean for developing nations. If used correctly, maybe we can use it to close that digital divide. Maybe we can use it to empower the population that does not have, let's say, the literacy of a person living in New York or somewhere in the EU region, where digital literacy is no longer a problem, access to the internet is no longer a problem, and access to technology is not a problem. When we come to developing nations there are many other challenges as well.
There's a language barrier, and models cannot work in the native languages. Even if we want to train them, we don't have enough data. And if they are trained using publicly available data, there are other consequences of public data and other kinds of things.
This makes the development of AI a bit complex. But if developed nations helped those countries to build capacity, these developing nations could leverage the power of AI to actually address the issues we have been facing: giving people access to technologies, making systems more accessible in terms of language and other areas, bringing facilities to rural areas, or enhancing the education system by implementing AI in education.
We can create virtual teachers who can interact with students. There are many kinds of use cases for AI in developing nations. But we have to be very mindful of the consequences; it is a global debate how we make AI systems more responsible, and that has to be both societal and technological. If there is bias in society, there will be bias in the system, because the system is based on the data that is available in society; it is data that we have created. Unless the bias in the system is eliminated, it will remain.
So we have to be very mindful of what we feed AI systems; that is very important. There is one thing in the AI ecosystem that developing nations didn't get to do: during the dot-com boom of the 2000s, developing nations couldn't leverage the internet the way developed nations could. We still see that today. But AI is at a very early stage, and if we could accommodate the needs of those developing nations, with AI we could make them much more prosperous in economic terms and in terms of other social benefits. I would like to stop here; I think we have to go to the discussion.
>> MODERATOR: Yes, we have to pause here.
>> ANANDA GAUTAM: Thank you, sir.
>> MODERATOR: The question is: does the AI Act offer possibilities for leveraging AI in developing nations while upholding human rights? No, not by itself. Yes. I don't think so; human rights are not globally enforceable. Yes, as a guide to develop local regulatory frameworks. Possibly, on the contrary. Yes, global organizations are being developed. Partly influenced. Being inspirational. Only for European companies operating elsewhere. No, not by itself. Does anyone on the panel want to react?
>> ANANDA GAUTAM: I think there is something there about global standards and measures. My response would be that the AI Act by itself cannot deliver on upholding human rights, because it is focused on what can be done and what can't be done; legislations are always focused on the do's and don'ts of something.
But we could have policies that look at the development of AI, or at how developed nations can help developing nations, third world developing nations, to leverage it; there can be other options. Another path is to uphold human rights through the existing frameworks: UNESCO has one, and the OECD is working on the next iteration of its principles. Those kinds of frameworks would be among the fundamental instruments that could help ensure human rights, not only in the developing context but in the global context.
>> MODERATOR: Thank you.
>> REMOTE MODERATOR: We were going to have breakout discussions, but given that it's a quarter to 6 and we have until 6 o'clock, let's just take questions from the floor and from the online participants. Maybe first the question that was asked in the chat. It was about Gaza being a test ground for using AI, which I think is very urgent and has been quite shocking. Thanks.
>> LISA VERMEER: I'm taking the question from the chat about Gaza being the testing ground for the use of AI. It's rather difficult to answer this question because there's a lot of nuance to it. Let me first say that the AI Act in the EU is of course an initiative to try to make AI more responsible and to avoid AI systems globally that really pose a lot of risk to, for example, the safety and fundamental rights of people. But the AI Act does not touch all AI systems. The context is that there is also a discussion about, for example, AI in the military domain.
The military domain, defense and national security are excluded from the AI Act. But that does not mean there is no ongoing discussion about these areas. Rather, there has already been a long discussion, especially at the international level, about the responsible use of AI in the military domain, also initiated by the Netherlands. It's called the REAIM trajectory. And you also have a long Geneva-based conversation about lethal autonomous weapons.
Of course that does not change the fact that in Gaza still a lot of AI was used. And I'm afraid, that's my personal opinion, I'm afraid that AI will be used for very bad purposes. But the discussion about how to tackle this, how to disincentivise this, how to make it impossible, is really on the table, very straightforwardly. It's being discussed between stakeholders, between governments and UN bodies. So it gives some hope that it is on the table and there may be change.
But that's where we are now.
>> MODERATOR: Thank you, Lisa, for answering the question from the chat; it is indeed an urgent topic. I would also like to ask a question to the audience, because I do see some people in the room and also online. My question to the audience is: what can we learn for the creation of AI standards from the internet standardization process? Can I give someone in the audience the floor?
>> Yeah, I can say something about it. I'm with the Platform Internet Standards. From my perspective there's quite a difference: the internet itself was built on standards, and I think those standards really formed the internet as we know it right now. AI, although not my expertise, seems more to me like a technology that is already out there, and now we are trying to introduce standards to limit or to control AI in that sense. But it's not really founded on standards like the internet was.
So there is something to learn but I think that's the difference between AI and internet standards in that sense.
>> MODERATOR: Thank you for sharing your thoughts. My reflection on that is that indeed AI is not built on standards; it is being regulated now that the threats have been identified or have become more apparent. So now we are trying to reverse engineer it by creating useful standards in certain domains. Is there any other reflection from the audience?
>> I have a question, connecting to balancing innovation and control. I think it's for you, Auke. Do you think there is a risk that big tech says no to the EU? And if so, what can be done to balance the vision of the EU and the vision of big tech?
>> MODERATOR: Let me think about it. There is a big risk that big tech says no to the EU, and on other topics too the EU is being challenged by big tech. So your question is what the EU can change in dealing with big tech. To be honest, I don't think I have a clear answer to that. Maybe some of my colleagues do.
>> LISA VERMEER: You see this happening already; I think Meta at the moment is really ramping up against the AI Act and its consequences. The AI Act regulates large language models, or general purpose AI models, with requirements that companies find difficult to put into practice. Most large companies from the U.S. have signed up to the AI Pact from the European Commission, to collaborate on becoming compliant with the AI Act. But we now see, especially with Meta, that some companies are pushing back on the general purpose AI model requirements, the large language model requirements; some are more constructive, and others are saying: we don't want this, because it will make things very hard for us.
I got this question a lot, especially during the negotiations of the AI Act. Basically all countries were asked: do you see the AI Act as a barrier to innovation? The idea is that the AI Act is not a barrier to the right kind of innovation. Because it takes a risk-oriented approach, a lot of AI falls outside of its scope, although, to be honest, a lot of AI also falls within the scope. But then the argument is that the EU deliberately chooses a certain direction: we want responsible AI to develop, to innovate, to grow and to scale in the EU. And if large companies, for example from the U.S. but also from other areas of the world, are not responsible enough, i.e., not meeting our EU criteria, theirs is the kind of AI that we don't want.
So it is a balance. But if big tech says no to the EU, then the EU says no to big tech, and it really comes down to: do you want access to our market or not? I think the upcoming years will be interesting, because we have a new European Commission in Brussels with quite some enforcement power, also for the Digital Services Act, a large law affecting large platforms. With the AI Act becoming applicable in about a year, how will they play their cards? We first had Commissioner Thierry Breton, who took a firm line with Elon Musk, and now there is a more conciliatory tone toward the owner of X.
So the law is there, but it depends on how it's going to work out and how firmly the EU will stand and defend it. That remains to be seen. Thank you for your question.
>> MODERATOR: Thank you, Lisa, for also answering that question. I saw a hand raised from Karen online. Karen, are you there? I will unmute you.
>> I'm sorry, I was writing my concern in the chat. I think the difference with the internet, for example, is that AI is not limited to giving access to information: the information that it gives is already biased. Another concern is that it also replicates and improves on human traits.
It also interprets data, for example when it's used in medical devices and neurotechnological devices. It will read information, interpret this data and give feedback to the neurotechnology as well. I'm talking about, for example, devices that will read brain activity, interpret it, and send signals back to the neurotechnology to either activate or suppress activity in the brain.
And it is not regulated from the design and development stage, nor is it regulated in how it will be moved transnationally.
So I think we have a lot of concerns. It's a broader concern. Because it affects many dimensions, many fields. And I do think that society does not fully understand the profound impact of using and interacting with AI. Thank you.
>> MODERATOR: Thank you very much, Karen. Juliana, do you want to reply on that?
>> JULIANA SAKAI: Thank you for your comments. We are always chasing the technological advancements, so to say. This is really where Civil Society plays a role: trying to explain what is going on and making more information available. When I say making more information available, I also mean breaking down the consequences that Karen just mentioned. For each kind of use in each field, Civil Society has to monitor the results, how the implementation is going, and how the testing of each system is working.
And I think this is something that sometimes has to be developed in parallel with the help of the government. When we are talking about an assessment, it has to happen prior to a launch. And once a system is launched, Civil Society has to be able to see the data, what algorithmic bias it has produced, and how it might badly impact equality.
So I think this is pretty much the field that we have to work on. At the end of the day, Civil Society, the population, the consumers and users as a whole should have more influence on how we protect ourselves, right? And for this, organized Civil Society, academia and journalists are there to gather information and develop advocacy work.
This is really important, because at the end of the day, institutions are held accountable also when Civil Society demands it. So there is a flow of Civil Society demands and institutions answering to them. We have to press and demand that institutions take action, the relevant measures, to protect people and to implement the regulations that are being proposed.
>> MODERATOR: Thank you very much for your response. I'm getting the sign that this session is nearly at its end. I would like to give the opportunity for someone to reflect or make any last comments. If there are none, I would like to thank my panelists, Lisa, Ananda and Juliana, for being a part of this session. I do really think that much more discussion could go on about this topic, but not within the 60 minutes that we received today.
With this, I really encourage you to stay in touch with us through LinkedIn. Add us if you need us or want to start a new discussion. And with this I would like to close the session. Thank you very much.