IGF 2025 - Day 2 - Workshop Room 3 - WS 187 Bridging Internet and AI Governance From Theory to Practice - Raw

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> We are starting.  So Oliver ‑‑ start.

>> I think I should, shouldn't I.  My machine is talking as well.

>> Oh, let me remind everyone here that the mics are always on.  So be careful, because everyone is listening.

>> Is it the...

>> While we wait for Oliver's preparations, let me also invite everyone here in the room to sit here, because the seats are free.  If you want to sit closer, you will have the benefit of having a mic on and not having to stand in line to ask questions at the Q&A moment.

Okay.  Welcome everybody to this joint session of the Dynamic Coalition on Core Internet Values and the Dynamic Coalition on Network Neutrality.  I'm Olivier Crepin-Leblond.  Co-chairing is going to be Luca Belli, and Pari Esfandiari for the Core Internet Values coalition.  It is great to see so many of you here.  If anybody wants to step up over to the table here, they are very welcome to do so.

We are going to have a session that is going to be quite interactive.  So we'll have the speakers speak, and then we'll see if we can have a good discussion in the room about the topic.  I'm just going to do a quick introduction of the speakers that we have.  We'll start with four speakers, each providing their angle on the topic.  We'll have Vint Cerf, who is joining us remotely.  Unfortunately he couldn't make it in person at this IGF, so he's over in the US.  He will let us know at some point when he is online, because, as often, he is doing more than one session at the same time.

>> VINT CERF: Actually I am ‑‑

>> OLIVIER CREPIN-LEBLOND:  ‑‑

>> VINT CERF: I am online.

>> OLIVIER CREPIN-LEBLOND: He's already there.  Goodness gracious.  Sorry.  Vint.

So Vint Cerf.  Renata Mielli.  And Hadia Elminiawi.  And Sandrine Elmi Hersi.

Then three commentators: William Drake, Bill Drake; Roxana Radu; and Shuyan Wu, who has just arrived from China.  And after that we'll open it to a wider discussion.

But I'm kind of wasting time.  We've only got 75 minutes, so I'm going to hand the floor straight to Luca and to Pari for the next stage.  Thank you.

>> PARI ESFANDIARI: Thank you Olivier.  And welcome everybody.  It is great to be with all of you.  So we convened this session, bridging internet and AI governance from theory to practice, not just because things are changing fast but because the way we think about digital governance is being fundamentally reshaped.  As technologies converge and accelerate, our governance systems haven't kept up.  And at the centre of this shift is artificial intelligence.

Let's start with theory.  The internet's core values: global, interoperable, open, decentralized, end to end, robust and reliable, and freedom from harm.  These were not just technical features but deliberate design choices that made the internet a global commons for innovation, diversity and human agency.  Now comes generative AI.  It doesn't just add another layer to the internet.  It introduces a fundamentally different architecture and logic.

We're moving from open protocols to centralized models, gated (?) and controlled by a handful of actors.  AI shifts the internet's pluralism towards convergence, replacing open enquiry with predictive narration and reducing user agency.  This isn't just a technical shift.  It is about who gets to define knowledge, shape discourse and influence decisions.

This is a profound governance challenge, and a choice about the kind of digital future we want.  If we are serious about preserving user agency, democratic oversight and an open information ecosystem, the core internet values can serve as signposts to guide us.  But they need active support, updated policies and cross-sector commitment.  This is where the practice begins.

The good news is we are not starting from scratch.  From UNESCO's AI ethics framework to the EU AI Act, the US AI Bill of Rights and efforts by Mozilla and others, we're seeing real momentum to root AI governance in shared fundamental values.

So yes, there is a real divergence.  But there are also real opportunities to shape what comes next.  And that's our focus today.  With that I'll hand it over to Luca Belli.  Thank you.

>> LUCA BELLI: Thank you very much Pari and Olivier.  And also, let me ‑‑ is this working?

Are you sure because I'm not hearing myself?

>> I can hear you.

>> LUCA BELLI: Okay.  So my headphone is not working.  It is not useful when I have to hear myself anyway.

So thank you very much Olivier and Pari for having organised this and for having been the driving force of this session.  It actually builds upon what we have already done last year in our first joint venture, which was already quite successful.

And I think that, as has already emerged, I always say it is good to build upon the sessions and building blocks and reports we have already elaborated, so that we move forward.

And something that already emerged as a source of consensus last year in Riyadh is two main points.  First, we have already discussed Internet Governance and internet regulation for pretty much 20 years here at the IGF.  So we can start to distill some of these teachings and lessons into what we could apply to regulate the evolution of AI and AI governance.

And second, to quote an expression Vint used last year, the internet and AI are two different beasts.  So we're speaking about two digital phenomena, but they are quite different.  And the internet, as Pari was reminding us very eloquently, has been built on an open, decentralized, transparent, interoperable architecture that made the success of the internet over the past 50 years, at least since Vint penned it in '74.

But yeah, the question is how do we reconcile this with a highly centralized AI architecture?  And I think that here there is a very important point we have been working on in the net neutrality and internet openness debate over the past years: the concept of generativity, the capacity of the internet to evolve through the unfiltered contributions of its users.  It is the consequence of the fundamental core internet values.  Openness and transparency create a level playing field, a capacity to innovate, to share and use applications, services and content, to make the internet evolve according to how the users want it to.

So users are not only passive users.  They are prosumers.  They create the internet.  This is in fundamental tension with AI, which is frequently proprietary, non-interoperable and very opaque, both in the datasets that are used for training, usually the result of massive scraping of both personal data and copyrighted content in very peculiar ways that might be considered illegal in most countries with data protection or copyright legislation.

And then the training and the output of it are very much opaque for the users.  And very few companies can do this and supply this.  So there is an enormous concentration phenomenon ongoing, which is quite the opposite of what the original internet philosophy was about.

Now to discuss this point we have a series of fantastic speakers today.  As I was mentioning before, we are celebrating 51 years of the paper by Vint and Bob Kahn on the internetworking protocol, the protocol for interconnecting networks, if I'm not mistaken.  I think the first person we should give the floor to is Vint.  Pari, please, the floor is yours to present Vint.

>> PARI ESFANDIARI: Thank you very much.  We actually have two overarching questions, and we would like our speakers to focus on those overarching questions.

I will read them for you.  First: how can the internet's foundational principles of openness and decentralisation guide transparent and accountable AI governance, particularly as generative AI becomes a main gateway to content?  And the second question: how can fundamental network neutrality principles, such as generativity and competition on a level playing field, apply to AI infrastructure, AI models and content creation?

So Vint, drawing on your unique experience both in founding the architecture of the internet and in your work with the private sector, we are curious to hear your comments on these questions.  Over to you.

>> VINT CERF: Well thank you so much for this opportunity.  I want to remind everyone that I am not an expert on artificial intelligence.  I can barely manage my own intelligence let alone artificial.  But I work with a company that has invested very heavily in AI and in AI‑based services.  So I can reflect a little bit of that in trying to respond to these very important questions.

The first thing that I would observe is that the internet was intended to be accessible to everyone.  And I think the AI efforts are reflective of that as well.  The large language models ‑‑ well, let me distinguish between large language models and machine learning tools for just a moment.  All of you are well aware that AI has been an object of study since the 1960s.  It's gone through several phases.

The most recent are machine learning, reinforcement learning and then large language models.

The reinforcement learning mechanisms have given us things like programmes that can beat the best players of Go, programmes that can tell you how proteins fold up, which tells us something about their functionality, and recently at Google, AlphaEvolve, which is an artificial intelligence system that will invent other software to solve problems for you.

The large language models that we interact with embody huge amounts of content.  But they are specialized when they interact with the users.  We use the term "prompting" for eliciting output from these large language models.  And the point I want to make here is that every time someone interacts with one of those, they are specializing it to their interests and their needs.

So in a sense we have a very distributed ability to adapt a particular large language model to a particular problem or to respond to a particular question.  And that's important.  The fact that we are able to personalize our interaction with these sources of information is a very important element of useful access.

The question about interoperability of the various machine learning systems is partly answered by the agent model idea.  That is to say, large language models are becoming mechanisms by which we can elicit not only responses but also actions to be taken.  So the so-called agentic generative AI is upon us.

And consonant with that are two other standards being developed.  One is agent to agent interaction, and the second is called MCP, the Model Context Protocol, which gives these artificial intelligences a concept of the world in which they are actually operating.

The reason these are so important, and the way they create interoperability among various agentic systems, is precision.  It is important that the agents, when they interact with us and when they interact with each other, have a well-defined context in which that interaction takes place.

We need clarity.  And we need confidence that the semantics are matched between the two agents.

If anyone has ever played that parlor game called telephone, where you whisper something in someone's ear and they whisper it in the next person's ear and you go down the line: whatever comes out at the other end is almost never what started at the beginning.  We don't want chains of agents to get confused, so these mechanisms try to make that work a lot better.

I think this is a very important notion for us to ingest into the work on the core internet values, except they will have to become core AI values: clarity, and interaction among the various agents, among other things of course.
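
To make the idea concrete, here is a minimal sketch in Python, with illustrative field names that are assumptions for this example, not the actual Model Context Protocol schema.  It shows how two agents might pin down shared semantics explicitly instead of guessing, avoiding the "telephone game" drift described above:

```python
# Minimal sketch only: illustrative field names, not the real MCP schema.
import json

def make_context_envelope(task: str, schema_version: str, assumptions: dict) -> str:
    """Bundle a request with an explicit, machine-readable context."""
    return json.dumps({
        "schema_version": schema_version,  # both agents must speak this version
        "task": task,
        "assumptions": assumptions,        # units, locale, domain vocabulary...
    })

def accept_envelope(raw: str, supported_version: str) -> dict:
    """The receiving agent rejects contexts it cannot interpret,
    rather than silently guessing what the sender meant."""
    envelope = json.loads(raw)
    if envelope["schema_version"] != supported_version:
        raise ValueError("context version mismatch: refusing to guess semantics")
    return envelope

# Agent A asks agent B for a conversion with the units made explicit.
msg = make_context_envelope(
    task="convert_amount",
    schema_version="1.0",
    assumptions={"value": 100, "from": "EUR", "to": "USD"},
)
print(accept_envelope(msg, supported_version="1.0")["assumptions"])
```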

The last point I would make is that as you interact with large language models in these so-called prompting exchanges, one of the biggest questions we always have is how accurate the output we get from these things is.  We all know about hallucination and the generation of counterfactual output coming from agents.

It is very important that the provenance of the information used by the agents or by the large language models, and references, be available for our critical thinking and critical evaluation of what we get back.  And so once again that is a kind of core internet value.  How do I evaluate, or how can I evaluate, the output of these systems to satisfy myself that the content and the response are accurate?

So those are just a few ideas that I think should inform the work of these dynamic coalitions as we project ourselves into this online AI environment.

So I'll stop there, because I'm sure other people have many more important things to say in response to these questions.

>> PARI ESFANDIARI: Thank you very much, Vint.

And thank you for that very informative discussion.  With that, I would go to Sandrine.  Sandrine, based on your experience shaping digital strategies within government, how do you see this?

Thank you.

>> SANDRINE ELMI HERSI: Thank you.  Let me first say that it is a real pleasure to join the session today and to discuss this important topic with partners from the net neutrality and core internet values coalitions.  And before we ask how to apply openness and transparency to AI governance, I would like to insist on the "why": why this application has become essential.

So as was already covered by Vint, LLMs and related tools are becoming a new default point of entry to online content and services for users.  Since our conversation at the last IGF in Riyadh, we've been seeing this trend accelerating: the development of the use of individual chatbots, but also the establishment of response engines integrated into mainstream tools.  Generative AI is also increasingly embedded directly in end-user devices.  And we're seeing a shift from early-generation LLMs to a new generation of systems now included in AI tools.  And looking ahead, we may see a wide range of user actions folded into a single AI interface.

So the question is really: will tomorrow's internet still be open, decentralized and user driven if most of our online activity is mediated by a handful of AI tools?

So now regarding the how: the French regulatory authority for electronic communications is currently conducting technical hearings and testing to explore this very question.

Through our report, currently in development, we can already identify three main areas for internet principles to apply to AI governance.  First, accelerating AI transparency.  Understanding generative AI models, what data they use, how they process information and what limits they have, is a prerequisite for trust.

There is some progress.  More and more players are now engaging with researchers and through sector initiatives and codes of conduct.  But many models remain black boxes.  We need greater openness, especially towards the research community, to improve the explainability and also the efficiency of models.

The second area is preserving the notion of intelligence at the edges, which is the original spirit of the internet: intelligence residing with users and applications, not centralized in platforms or infrastructure.

And ensuring that users are able to choose among diverse sources.  This may require working on the technical and economic conditions that shape AI outputs, to guarantee a certain level of neutrality, plurality of uses and openness to a diverse range of content creators and innovators.

Last but not least, regarding the principle of non-discrimination, which is also a central part of net neutrality: the non-discrimination principle was originally applied to prevent internet service providers from favouring their own services in markets.

Today, ISPs are not the only digital gatekeepers that can narrow the perspective and freedom of choice of end users.  We're now working on to what extent this principle of non-discrimination and openness can be extended to AI infrastructure and models, but also to how AI creates and presents content.  On this, very shortly, we are notably investigating two questions.

The first one is how to preserve the openness of AI markets, notably by ensuring that a plurality of economic players have access to the key inputs necessary for LLM development, including data and computing resources but also energy.

And the second question we are diving into is ensuring that we keep diversity of content on the internet, knowing that when users turn to AI chatbots and response engines, they only have access to one answer instead of hundreds of web pages.  So we must ensure that generative AI is not simply amplifying already dominant sources but is open to smaller and independent content creators and innovators.  That might mean, in the future, working on defining sector-wide frameworks for interconnection on fair conditions, as was done for IP interconnection.

And to end: the goal is not to block innovation but, on the contrary, to make sure that innovation and AI are compatible with preserving the internet as a common good.

>> LUCA BELLI: Thank you very much Sandrine for these excellent thoughts.  And I think it is very good to see how you were illustrating that what has been done in terms of internet openness regulation, or net neutrality, over the past 15 years is precisely trying to enshrine into law the original philosophy of openness, transparency, decentralisation and plurality of the internet; to make sure that when what we can call gatekeepers or points of control emerge, they behave correctly; and, if necessary, that a law protects the rights of the users and the regulator oversees the laws to make sure that the obligations are implemented.

Now what is very difficult is to understand who the new gatekeepers are, and how to implement law that maybe does not even exist yet in these terms.  I would like to now give the floor to Renata Mielli, Coordinator of CGI.br at this moment.  CGI.br has also been leading the debate on net neutrality and openness and many AI issues in Brazil.  Renata, the floor is yours.

>> RENATA MIELLI: Thank you, Luca.  We are deepening the debate we started in Riyadh, when we discussed AI from the perspective of sovereignty and the empowerment of the Global South last year, and how to reduce the existing asymmetries in this field.

And now we're talking about how to bridge.  To contribute to the session, I chose to look at the work we have done at CGI.br on principles for the internet, and to reflect on what makes sense and what does not make sense when we're thinking about AI, from the perspective of establishing a set of principles for the development, implementation and use of AI technologies.

Taking into account what Luca just said about the differences, the high economic concentration, the opacity of the systems, and taking into account also what Vint said: they are two different beasts.  But despite these differences, I believe this exercise of trying to interlink these principles is interesting and is possible, because in the case of our catalogue the principles are general without being superficial.  They constitute a complete framework that should be observed and applied, developed and adjusted according to each technology.

And just a point very quickly.  The principles we have are: freedom, privacy and human rights; democratic and collaborative governance; universality; diversity; innovation; neutrality of the network; unaccountability of the network; functionality, security and stability; standardisation and interoperability; and legal and regulatory environments.  These are our 10 principles.

In this sense, I would like to start by looking at what is not covered when we are talking about AI.  And the first thing I see, and a lot of people mention this, is transparency and explainability, because these two principles are essential when you talk about AI.  AI involves a series of procedures that are not thought of in the same way when we are dealing with the internet.

The internet is open.  The internet is decentralized.  All the protocols are built in a very collaborative way.  But this is not the case with AI.

So AI governance, deployment and development need to ensure high levels of transparency, especially for the social impact assessment of this type of technology, as well as for the creation of compliance processes that ensure other principles like accountability, fairness and responsibility.

In other words, we're discussing specific principles for AI that were not necessary for Internet Governance.  In the terms suggested, I would like to point out which ones can be in some way interoperable with AI principles.

In this case, I think of course of freedom, human rights, and democratic and collaborative governance; universality in terms of access to and the benefits of AI; diversity when talking about language, culture and the inclusion of all kinds of expression; also standardisation and interoperability between and with the various models; and of course we need a legal and regulatory environment for these systems.

We can think that the perspective used for internet governance is applicable to AI principles in this context.  From another perspective, principles like security need to be addressed together with two other principles: safe and trustworthy, and ethical.  And I would point out another one: the convergence of these discussions impacts rights like privacy and data protection.

Finally, an important part of this exercise of evaluating Internet Governance principles and their possible alignment with AI governance principles is to identify what was conceived for the internet that is not applicable in the AI context.  Quickly, because I don't have more time: I point to the principle of net neutrality, because the proposal there is to preserve neutrality in relation to the telecommunications infrastructure, and this is not applicable to AI.  And there is no neutrality in the technology itself.  AI is not neutral.

And I think unaccountability is another principle that is not easily transferred from the internet to AI, because here we have to understand responsibility along the AI chain.  So these are some thoughts I have to share at the beginning of this panel.  Thank you very much.

>> LUCA BELLI: Thank you very much Renata, also for bringing into the picture something extremely relevant, for which the IGF is also an appropriate forum, being a UN forum.

We have been debating this for 20 years.  There are a lot of debates also going on in the Global South about this, since at least 20 years ago.  But what we see in terms of mainstream debates and policy making, and even the construction of AI infrastructure, especially cloud infrastructure, is an enormous predominance of what we could call the global North.  So it is very interesting to start to bring Global South voices into the debate.  We continue with Ms. Hadia Elminiawi, here representing the African continent, which is an enormous responsibility.  So please Hadia, the floor is yours.

>> HADIA ELMINIAWI: Thank you so much and happy to be part of this very important discussion.

So let me first start by highlighting the similarities between AI and the internet that make the core values well suited for AI governance.

General purpose technologies have a wide impact on productivity growth and connectivity.  And AI can be considered one of those general purpose technologies, impacting economic growth maybe quicker than any other general purpose technology that has emerged in the past, such as the steam engine, electrification and computers.

AI is driving changes in all aspects of life: health care, agriculture, finance, services, policies and governance.  By definition AI isn't just one technology; it is a constellation of them, including machine learning, natural language processing and robotics, that all work together.

Similarly, the internet stands as another powerful general purpose technology that has fundamentally changed the way we live, work and interact, enabling new ways of communication, education, service provision and conducting business.

The internet infrastructure is foundational to artificial intelligence, enabling cloud services, including managed on-site data centres, and realtime applications.  In addition, many of the services and applications being delivered over the internet infrastructure are using AI to deliver better experiences, services and products to users.

So when it comes to Africa, the capabilities of African countries regarding AI vary significantly across the continent, due to differences in the availability of resources and infrastructure, including reliable and efficient electricity, broadband connectivity, data infrastructure like data centres and cloud services, access to quality data sets, AI-related educational skills, research and innovation, and investment.

So last year, in July 2024, the African Union Executive Council endorsed the African Union Continental AI Strategy.  The Continental AI Strategy is considered pivotal to achieving the aspirations of the sustainable development goals.

And likewise, the internet plays a critical role in achieving the sustainable development goals: no poverty, good health and well-being, quality education, and industry, innovation and infrastructure.  Other relevant regulatory approaches around the globe include the EU's AI Act, adopted in 2024, and the executive order on removing barriers to American leadership in AI of January 2025, with sectoral oversight in the US.

There is the UK framework for AI regulation, and the 2023 G7 guiding principles and code of conduct.  China also has developed some rules.  And Egypt has the second edition of its national artificial intelligence strategy, from 2025.

So in all these strategies, we see some of the core principles that have shaped the internet, such as openness, interoperability and neutrality, guiding various AI governance approaches.

So the question now becomes: how do we translate these agreed principles and frameworks into action?  And in some cases, what do those principles mean or look like in practical terms?

So let's look at openness and transparency.  What does this mean?

>> LUCA BELLI: May I ask you to wrap up in 30 seconds.

>> HADIA ELMINIAWI: Sure.  That will be very quick.  I'm almost done.  Openness could mean open access to AI models, maybe requiring them to include components that enable full understanding and auditing.  But what does ensuring transparent algorithms mean in practical terms?  Is it realistic or even desirable to expect all AI models to be made fully open source?  Given the amount of capital invested in these models, requiring complete openness could discourage investment in models, hindering innovation.  At the same time, transparency and openness raise some important ethical and security concerns.  Is it safe to allow unrestricted access to tools that could be used to plan weapons or other harmful actions?  We need layered safeguards.

AI algorithms on top of other AI algorithms to ensure responsible and secure use.  So what alternative solutions could we consider?  One possibility could be requiring all AI developers to implement robust safety guardrails and to have these guardrails open source, rather than the models themselves.

In addition, AI developers could be required to publish the safety guardrails that they have put in place.

I guess this is an open discussion.  And with that, I would like to wrap up and thank you.

>> PARI ESFANDIARI: Thank you very much Hadia.  And on that, I want to thank all of the panelists for their insightful contributions, and now I want to invite our invited community members to comment on what they have heard.

You are also welcome to share your own views on the broader issues we have touched upon.  And on that, I would just start with Roxana.  Roxana Radu, you have five minutes.  Please start.

>> ROXANA RADU: Thank you very much.  Sorry for not being able to join you in person.  I would like to start by saying there is a flourishing discussion now around ethics and principles in AI governance.  In fact it is what we've seen develop over the last 5 or 6 years: a plethora of ethical standards and guidelines and values to adhere to.  But a key difference with Internet Governance is the level of maturity in these discussions, and also the ability to integrate those values that are newly agreed into technical, policy and legal standards.  What we've done in Internet Governance over the last 30 years is much more than identifying core values.  We apply them.  We've embedded them into core practices.  And we are continuing to refine these practices day by day.

I think there are four key areas that require attention at this point in time, where we can bridge the Internet Governance debates and the AI governance discussions.  First is the question of market concentration.  Luca was already alluding to gatekeepers and how we define them in this new space: highly concentrated ownership of the technology, of the infrastructure, and so on and so forth.

Second is diversity and equity in participation: engaging different stakeholders, but also stakeholders from parts of the world that are not equally represented.

Thirdly, there is the hard-learned lesson of personal data collection, use and misuse.  We have more than 40 years of experience with that in the Internet Governance space.  And we've placed emphasis on data minimization: do not collect more than what you need.  This lesson does not seem to apply to AI.  In fact it is the opposite: collect data even if you are not sure about its purpose currently; machines might figure out a way to use that data in the future.

It is the opposite of what we've been practising in recent years in Internet Governance.

And fourthly, and very much linked to these previous points, there is a timely discussion now around how to integrate some of these core values into technical standards.  With AI there seems to be a preference for unilateral standards, the companies developing their own standards through APIs, versus globally negotiated standards, where a broader community can contribute and those voluntary standards could then be adopted by companies and by participants in those processes more broadly.

I think we need to zoom in on some of these ways of bringing those core values into practice.  And it is very opportune to do that now at the IGF.

Thank you.

>> LUCA BELLI: Thank you very much Roxana.  And I think that there are some interesting points emerging here.  Also something I want to very briefly comment on, because it was raised before: we are discussing here how core internet values can apply to AI.  And I think it is interesting to do this in a joint venture with the net neutrality coalition, net neutrality being actually an implementation of core internet values into law.  As any lawyer that has studied Montesquieu would tell you, what comes into the law is the spirit of the law.

I remember 10 years ago writing an article on the spirit of the net, where I was mentioning precisely this: net neutrality enshrines into law the spirit of the net, the core internet values.

So we have to understand a way to translate this into something applicable to AI.  And I think that is the huge challenge we have here today.  And I am pretty sure that our friend Bill Drake knows how to solve this challenge.  Bill, I believe the floor is yours.

>> WILLIAM DRAKE: Obviously I do not.  Thank you.

Well, first of all, let me congratulate the organisers of this session on putting together an interesting concept.  Trying to figure out how you map internet properties and values into the AI space is, I think, definitely a worthwhile activity.  As Roxana noted, it kind of builds on the discussions at the international level in recent years about ethics, whether in UNESCO or other places.

And I think that, you know, it is worth carrying this forward.  But I would start by noting that there are a few constraining factors, three in particular.  First, conceptually, let's bear in mind, again, going back to what Vint said: we're talking different beasts.  We're not talking here about a relatively bounded set of network operators and so on.  We're talking about a vast and diverse range of AI processes and services in an unlimited range of application areas, from medicine to environment and beyond.

So which internet properties will apply generally or in specific contexts simply can't be assumed.  We need to do close investigation and mapping.  And I think there is a great project there for somebody who wants to develop that matrix.  I look forward to reading whoever does that first.

There are reasons to wonder whether some of these things really do apply clearly.  Renata suggested net neutrality might not be so directly applicable.  There are a lot of other challenges there, I think, intellectually.

Second is the material interests of the private actors involved.  Luca referred to the concentration issues.  It is nice to think about values, but I wouldn't expect the US and Chinese companies involved in this space to join an AI engineering task force and hum their support for voluntary international standards.  To the contrary, they have kind of demonstrated that they will do pretty much anything to promote their interests at this phase, including sponsoring military parades for dear leaders in Washington, and so on.

So it is unclear how much they would embrace any externally originated constructs like neutrality, openness, transparency, et cetera, that don't really fit well into their immediate profitability profile.

And how well would these things apply to very large online platforms and search engines, et cetera?  Again, real challenges there.

And lastly, of course, the material interests of states.  Net neutrality is forbidden in the United States now.  Applying it to AI of course would be too.  Generally speaking, multilateral regulatory interventions are impossible to contemplate in the Trump era, at least for those of us who are in North America.  And I'm not sure what China would sign onto in that context.

So in principle you would like to think, though, that transparency and openness with regard to governance processes, especially international governance processes, could be pursued.  And there I would just like to flag a couple of quick points before I run out of time.  Lessons from Internet Governance I think are relevant.  One: we have to be really clear about where there is an actual demand for international governance and regimes and the application of these kinds of values and so on.

We simply can't just assume that because the technology is there and the issues are there, there is a functional demand.  Often people point and say, oh, there is some new phenomenon, we must have governance arrangements.  But very often that demand is not equally distributed across actors, and the highfalutin aspirations don't get fulfilled.

We used to talk about safety.  There was a lot of discussion around safety.  Now suddenly safety is out the window and we're all saying, well, we want to promote innovation and investment.

So it is easy to say that we have this demand to do all these new wonderful normative things.  But in reality, when push comes to shove, we have to look at where there is a real functional demand.  Where do you actually need international governance, interoperability or harmonisation of rules?

In the telecom space, we had to have non-interference.  Telecom networks had to be interconnected and have standards to allow the networks to pass traffic between them.  So there was a strong incentive for states to get on board and do something, even if they had different visions of how to do that and you could fight over it.

What are those aspects of the AI process that absolutely require some kind of coordination or harmonisation?  It is not entirely clear.  And I think we can't just assume that.

One last point, and I'm going to run out of time, is just to say, as someone who was around 20 years ago and remembers the fight over Internet Governance, what is Internet Governance and so on: we're in a liminal moment like we were 20 years ago, where people are not clear.  What is the phenomenon?  How do we define it?  What does governance mean in this context?  Et cetera.

This requires a great deal more thinking when applied to the AI space and its specificities.  I hear a lot of discussions in the UN where people seem to be just grafting constructs from other international policy environments onto AI, saying we'll apply the same rules.  It is like applying the rules for the telegraph to the telephone: with every new technology, we look through the lens of previous technologies.  But often that doesn't work so well.

And my last point, and then I'll stop: multilateral action, I'd be very careful in thinking about.  I notice that the G77 and China, in a reaction to the co-facilitators' text on AI, are saying that they want binding international commitments coming out of the UN process, that they will not accept purely informal agreements coming out of the UN process.

I look at what is going on in the AI space, and I'm thinking, seriously?  What kind of binding international agreements are we going to be negotiating in the United Nations in the near term?  And if you set that up at the front end as the object you are trying to drive towards, you can just see how difficult all this is going to become very quickly.  I probably went over 5 minutes, so I'll stop.  Thank you.

>> PARI ESFANDIARI: Thank you very much Bill.  For the sake of time I'm not going to reflect back on the awful lot of information in that intervention; we don't have enough time.  So I go directly to Shuyan.  The floor is yours.

>> SHUYAN WU: Thank you.  Hello everyone.  It is a pleasure to attend this important discussion.  I am from China Mobile, one of the world's largest telecom operators.  I would like to share practices and experiences from China on internet and AI governance.  In the age of the internet, we have continued to promote the development of an internet ecosystem towards fairness, transparency and inclusiveness.  This commitment is reflected in our efforts across infrastructure development, user protection and bridging the digital divide.

In terms of infrastructure development, we strive to ensure equal access to and inclusive use of internet services, with (?) networks across the country.  We've also built the world's largest and most extensive 5G network.

Second, when it comes to protecting users' rights and interests, we work actively to create a transparent and trustworthy online environment.  We provide clear, user-friendly service mechanisms and have introduced quality management tools to ensure users' rights to information and independent decision making.

For specific groups such as the elderly and minors, we focus on fraud prevention education and customized services to build a safer and greener digital space.  Third, to bridge the digital divide and support inclusive growth, we've implemented (?) and have tailored our smart services to their needs.

For minors in rural areas, education cloud network services are helping to reduce the gap in education resources between urban and rural communities.  As we transition from the internet era to the age of AI, China Mobile is actively adapting its experience and capabilities to the (?) of AI governance, striving to build a digital existence balancing universal access, decentralisation, transparency and inclusiveness.  We are investing in AI infrastructure to promote resource sharing and encourage decentralized innovation, backed by our strong computing power, data resources and solutions, large models and AI development platforms.  At the same time, we leverage AI capabilities to build a transparent and trustworthy digital environment.

This means effectively safeguarding user rights.  For instance, China Mobile applies AI-powered detection technologies in scenarios like (?) and financial services to help users identify false or harmful content.  Moreover, we are committed to ensuring the benefits of AI (?), with personalized education and (?) interactive solutions.

For the elderly, there are AI-powered entertainment, home monitoring and safety services.  And in rural areas, an AI doctor system delivers quality health care to remote communities.

That is all for my sharing.  Thank you.

>> OLIVIER CREPIN-LEBLOND: Thank you very much.  And now we're going to open the floor for your input and your feedback on what we've heard so far.  I'm the remote participation moderator as well, and there's been a really interesting debate going on online.  I'm not sure how many of you have been following it.  I was going to ask whether we could have the two main participants that were speaking back and forth online: Alejandro, and Vint Cerf as well, because Vint of course is always active both online and with us.

And then, after those two, we'll start with the queue in the room.  Yeah?

All right.  Let's get going.  Alejandro you have the floor.

>> Thank you.  Good morning.  Can you hear me well?

>> OLIVIER CREPIN-LEBLOND: Yes.  Thank you.

>> Thank you.  I was making these points also in discussions in the Core Internet Values dynamic coalition.  If you are trying to look at translating the experience of governance from the internet to artificial intelligence, I think a few points are valuable, and many have been made already.  I will just group them.

First is that you have to define pretty well what you want to govern: what branch of the enormous world of artificial intelligence you actually want to apply some governance to.  Otherwise you will have some serious ill effects.

Using AI for molecular modelling, protein folding and so forth is one kind of problem.  Using it in a back office system for detecting fraud in credit cards and so forth is another.  These are very different beasts in turn.

So it is very important not to regulate with such generality that rules from one of them will impede progress in others where they are absolutely not necessary.

Second, what I think is very important, and we learned this from 30 years of Internet Governance, is to make sure you are governing the right thing, in the following sense: what does AI, as the internet did in its turn, bring that is new to things we already know?  What rules do we already have that we can just apply or modify to take AI into account?

For example, we have purchasing rules, especially in government, where you have constraints on the systems that you can buy for government: they cannot be discriminatory, they cannot be harmful and so forth.  So you can apply those rules instead of creating this whole new world.  Likewise for medical devices: there are already so many rules for automated medical devices, and you can extend those to artificial intelligence.  The harms and the amplification of harms will be different, amplified, with probability and uncertainty.  But we know how to deal with that, and we just need to change the scale and gain a better understanding of these factors.

Next, what do you expect to obtain from governance?  Do you want more competition?  Do you want a reduction of discrimination and bias?  Do you want more respect for property?  Global access to resources for the Global South?  And so forth.  Because these will determine the institutional and organisational design.

And next, and most important, and something the net neutrality coalition, for example, does with others: how do you actually bring in these different stakeholders?  Who are the stakeholders, and how do you bring them to the table?  If you want to regulate large language models provided over the internet for chatbots, the dominant aspect of public discussion these days, why would they come to the table?  Why would OpenAI, Google, Meta, et cetera, not to speak of (?) and other providers in China which are operating under completely different sets of rules?

Why would they come together and agree to limit themselves in some way?  And to sit at the table with people who are their users or their clients, and potentially their competitors if something arises from their innovation?

And especially, how do you bring them together to put some money into the operation of the system?  To agree to have a structure, to agree to have their hands tied to some extent?

What has happened, for example, in Internet Governance is very different for, let's say, the domain name system and (?) scams.  For the domain name system, you have companies fearing that strong rules for competition would come from the US government, and they finally agreed to come together with Civil Society and the technical community, which is also a key point.  Experts always have to be at the table.  As an ICANN paper stated very recently for Internet Governance, the technical community is not one more participant.  It is a pillar.  And you have to know the limitations and capabilities of the technology.  I'll stop there, thank you.

>> OLIVIER CREPIN-LEBLOND: Thank you Alejandro.  Next Vint Cerf.

>> VINT CERF: First I have to unmute.  So thank you so much Alex.  I always just enjoy your line of reasoning.

Let me suggest a couple of small points.

First, with regard to regulation of AI-based applications, I think a focus of attention should be on risk to the users of those technologies and, of course, potential liability for the provider of those applications.  So a high-risk application, such as medical diagnosis or recommended medical treatment or maybe financial advice, ought to have a high level of safety associated with it.  Which suggests that if there is regulation, the provider of the service has to show due diligence: that they have taken steps that are widely agreed to reduce risk for the users.

So risk is probably a very important metric here.

And concurrently, liability will be a very important metric for action by the providers of AI-based services.

I think that another thing of significance is the provenance of the material used to train these large language models, for example.  And the explainability, chain of reasoning, chain of thought: those sorts of things help us understand the output that comes from interacting with these large language models.

And finally, I mentioned this earlier, but let me reiterate that the agent to agent protocol and the model context protocols are there, I think, partly to make things work better, more reliably.  But they might also be important for limiting liability.  In other words, there is a motivation for implementing these things with great care, and designing them with great care, so that it is clear, for example in a multi-agent interaction, which agents might be responsible for what outcomes.  Again, something that relates to liability for parties who are offering these products and services.

So I'll stop there.  I hope that others who are participating in this will be able to elaborate on some of these ideas.

>> OLIVIER CREPIN-LEBLOND: Thank you, Vint.  Just one point.  Earlier in the chat you mentioned, I'm seeing, "indelible ways to identify source content used to train AI models."  Could you explain a bit?

>> VINT CERF: Yes.  I was trying to refer to provenance here.  The thing people worry about is that the material used to train the model may be of uncertain origin.  And someone says, how can I rely on this model?  How do I know what it was trained on?

Here I think it should be possible to identify what the sources were, in a way that is incontrovertible.  Digitally signed documents, or materials whose provenance can be established, are important, because then we can go back to the parties providing those things and ask them questions about the verifiability of the material that is in that training data.
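
As an illustration of this provenance idea, here is a minimal sketch in Python.  It records a digest of each training source in a manifest and signs the manifest so it cannot be silently altered.  This is an assumption-laden sketch, not an established scheme: a real system would use public-key signatures over the documents themselves, while the shared-key HMAC here only keeps the example dependency-free.

```python
# Minimal sketch only: a signed manifest of training sources. Real systems
# would use public-key signatures; HMAC is a dependency-free stand-in.
import hashlib
import hmac
import json

def content_digest(data: bytes) -> str:
    """Fingerprint one training source."""
    return hashlib.sha256(data).hexdigest()

def build_manifest(sources: dict, signing_key: bytes) -> dict:
    """Record a digest per source, then sign the whole record."""
    entries = {name: content_digest(data) for name, data in sources.items()}
    payload = json.dumps(entries, sort_keys=True).encode()
    signature = hmac.new(signing_key, payload, "sha256").hexdigest()
    return {"entries": entries, "signature": signature}

def verify_manifest(manifest: dict, signing_key: bytes) -> bool:
    """Anyone holding the key can check the manifest was not altered."""
    payload = json.dumps(manifest["entries"], sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

# A model card could point at such a manifest, so "what was this trained
# on?" has a checkable answer.
key = b"provider-signing-key"
manifest = build_manifest({"corpus/news-2024.txt": b"example content"}, key)
assert verify_manifest(manifest, key)
```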

>> OLIVIER CREPIN-LEBLOND: Okay.  Thanks very much for this, and apologies for the wait.  But please, over to the gentleman standing at the microphone.  And please introduce yourself in your intervention.  Thank you.

>> Hi.  Thank you for the excellent panel.  I'm Dominique (?).  I work for (?), where I oversee work around AI and its impact on the web.  So, a place where a lot of web standards are being developed.

I guess I wanted to make two remarks: one on scope and one maybe on incentives for governance, one of the topics that was brought up.

In terms of scope, we are at the IGF, and it's been mentioned a number of times that AI is extremely broad.

One useful way, I think, to segment the problem is to look at the intersections of AI and the internet.  And there are a number of those.  AI has been fed from a lot of web content, and a lot of web content is now being produced through AI.  AI is starting, as Vint was describing, to be used as agents on the web and on the internet.

So looking exactly at these intersections, and at what AI changes in the existing governance expectations and rules in these systems, is, I think, more productive than trying to have a general conversation about AI, which is too broad a topic.

And I guess that leads me to the question of what the incentives are for these AI system operators to come to the table in this governance discussion.  I think at the end of the day, the internet, the web, is a critical component of their strategy for both building their tools and distributing those tools.  And they can only keep that true if they don't impoverish the ecosystem to a point where there is no more content they can feed on, or no more services that could accept to reuse or integrate with their systems.

So at the end of the day, I think it is really a matter of making sure, in the emerging agentic architecture that Vint was describing, that we understand what the expectations for these agents are, and that we learn from rules that already exist.  For instance, in the web space we have a number of very clear expectations as to what you ought to do if you are a browser, literally a user agent.  And understanding how they apply to AI-based agents is going to be (?)

>> OLIVIER CREPIN-LEBLOND: Thank you very much.  Next in line.  Please introduce yourself.

>> My name is Andrew Campling.  In this context I'm an internet standards and Internet Governance enthusiast.  To build on Bill's comment, I probably wouldn't start from here either.  But here we are, and we're probably too late, to be somewhat pessimistic.

But if I was going to look anywhere to start, it wouldn't be the internet.  I'd probably look closely at lessons from social media specifically, where we've got, in my opinion, a small number of highly dominant players who are disinterested in collaborative multistakeholder initiatives unless they are commercially worthwhile to them.  If we take the internet model and try to collaborate and build a multistakeholder governance model, I don't think there is a commercial imperative for the players to do that.  It is far too easy to game the system, it would take a long time, and by the time it is agreed it will be irrelevant.

So if I was to start anywhere, I'd look closely at duty of care as a key sort of requirement, and also explore why we wouldn't apply the precautionary principle widely, and use those as two foundational building blocks.  I wouldn't start with Internet Governance.  Apologies for the pessimism, but I think we have to be pragmatic and realistic about where we are.

>> OLIVIER CREPIN-LEBLOND: Quite a British intervention.  Thank you very much.

Over to Luca to look at the conclusions.

>> LUCA BELLI: I think we can go.  We have 6 minutes.  Do we have any other comments or questions in the room?  I don't see any hands, and we have exhausted the comments from online participants.  I think we can go for a round of very quick conclusions.  Like very prehistoric tweets, of 240 characters.

>> PARI ESFANDIARI: We don't have time, because we've got to go now to Yik.

>> LUCA BELLI: Well ‑‑ ooh, Yik Chan.  Sorry.  Like ChatGPT, she will distill all the knowledge into a 5-minute result.

>> YIK CHAN CHIN: Okay.  Thank you very much for giving me 5 minutes to make some comments.  It is very interesting to have this joint session.  I found the discussion really fascinating.  And so I have two observations, based also on the past three years of research on AI governance, for example with these two reports, where the (?) question and interoperability are big issues.  There are two issues I would like to comment on.

First, about the institutional setting.  Bill asked how we can collaborate at the global level, and what the initiatives or interests are.

I think first of all we know that there is a UN process going on, in terms of the scientific panel and also the global dialogues.  So we should probably give them some opportunity and a little bit of trust, and hold on to see what the outcomes are.

And second, I think from my experience, what really makes a difference between AI governance and Internet Governance or social media governance is that we learn from our past experience, especially social media's experience.  So we have such a vibrant discussion: early intervention, the precautionary principle, as the British speaker said, and different stakeholders, from Civil Society, from academia and from the industry.

So I think in that sense we are much more precautionary than in the social media and internet eras.  So we will probably make a difference.  And then the second observation, in terms of which areas we should look at.  From my experience, and also the PNAI experience, I agree with Vint.  First of all, risk.  Risk is very important.  And secondly, the safety issues.  And of course liability, which is the mechanism by which we hold AI developers and deployers accountable.  So that is very important.

The third of course is interoperability.  When we talk about interoperability, it is not only about principles and ethical norms, but also standards.  Standards will play a significant role in regulating AI.  And we're glad there is a lot of progress in terms of AI standards making; for example at the UN there are a lot of standards, and there is going to be an announcement of the EU standards in terms of the AI Act.

But there is also huge progress in standards making in China, in terms of safety issues and moral, ethical issues.  So I think AI standards will be one of the very crucial areas for AI in the future.  I'll stop here.  Thank you very much.

>> OLIVIER CREPIN-LEBLOND: Thank you.  And there are two minutes left.  I guess I'll ask our co-moderators for their reflections.  I was going to say one tweet from each participant, but I don't know if we can do it in the two minutes.  Should we try?

>> PARI ESFANDIARI: Why not.

>> OLIVIER CREPIN-LEBLOND: Let's start with the table then, with the person furthest to my right, your left: Bill Drake.

>> LUCA BELLI: A message of hope in 20 seconds.

>> WILLIAM DRAKE: Message of hope in 20 seconds wow.  I was going to say abandon all hope.  All right.

Well I'd just echo again the point about being clear about exactly what demand is there for what kind of governance over what kinds of processes.

Too much of the discussion is just too generic and high-level to be very meaningful when we get down to the real nitty-gritty of what is going on in different domains of AI development and application.  So we need a dose of realism there.  But I like the idea of the mapping effort that you are trying to do, and I look forward to seeing you guys develop it more.

>> OLIVIER CREPIN-LEBLOND: Thank you, Bill.

>> SHUYAN WU: I hope I have another chance to exchange our ideas with all of you.  Thank you.

>> OLIVIER CREPIN-LEBLOND: Thank you.  Hadia.

>> HADIA ELMINIAWI: Regional and international strategies and cooperation should not be seen as conflicting with national sovereignties.  National and international strategies, cooperation and collaboration should go in parallel and hand in hand.

They should support and strengthen the goals of one another.  They need to have aligned objectives and be implemented simultaneously.

>> OLIVIER CREPIN-LEBLOND: Thank you, Hadia.  Sandrine?

>> SANDRINE ELMI HERSI: We can no longer think of AI governance and Internet Governance as separate entities, given, as we noted today, the strong interlinks between LLMs and internet content and services.  Applying internet core principles to AI is not a whim or an accessory.  It is the only way to preserve the openness and richness of the internet we spent years building.

And we can and must act now to establish a multistakeholder approach with that in mind.

>> OLIVIER CREPIN-LEBLOND: Thank you.  Renata?

>> RENATA MIELLI: Just three words: how to transform these principles into technical standards.  We talked about this.  And I want to say we need oversight, agency and regulation.  We need to remember that governance and regulation are two different things.  Governance needs to be multistakeholder.  And we need national regulations for AI systems.

>> OLIVIER CREPIN-LEBLOND: Thank you.  Roxana.

>> ROXANA RADU: I'll just say that we need to walk the talk.  Now that we've done this initial brainstorming session, I look forward to seeing what we can come up with together in terms of bridging the gap between what we've learned in Internet Governance and where we're starting in the AI discussions.  This is not to say everything applies, but we've learned a lot, and we shouldn't reinvent the wheel.

>> OLIVIER CREPIN-LEBLOND: Thank you.  And finally, Vint?

>> VINT CERF: I think my summary here is very simple.  We just have to make sure that when we build these systems, we keep safety in mind for all of the users.  That's going to take a concerted effort from all of us.

>> OLIVIER CREPIN-LEBLOND: Thank you very much.  And if anybody in the room is interested in continuing this discussion, which I hope you are after this session, then please come over to the stage and share your details with us.  You can get onto the DCs' mailing lists, continue the discussion and participate in future such work.

Thank you.