The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> PAUL FEHLINGER: Perfect. Welcome, everybody, to our Town Hall. I'm very happy to see familiar faces. And we have an absolutely amazing line-up of speakers today, both physically here with us and also online. So this is the Town Hall called Towards Ethical Principles for Responsible Technology. And it is the first Town Hall at the Internet Governance Forum of Project Liberty's Institute. I'm Paul Fehlinger, Governor of that new organization. And before I tell you more about what we do, I would very much like to introduce our speakers today.
And maybe each of you can just quickly mention your name and where you come from, with what organization you are. I start to the left, Elizabeth.
>> ELIZABETH THOMAS-RAYNAUD: Thank you, Paul. My name is Elizabeth Thomas-Raynaud, and I'm with the OECD Global Forum on Technology.
>> HIROKI HABUKA: I'm Hiroki Habuka, a research professor at Kyoto University Faculty of Law and used to work for the Japanese government until last year. Thank you.
>> PAUL FEHLINGER: I think it takes a second if you speak.
>> VIVIAN SCHILLER: Vivian Schiller, Aspen Digital. We focus on media and technology on the impact on society and based in Washington, DC. Happy to be here.
>> PAUL FEHLINGER: Wonderful. And I just got a message from our panelists that apparently the Zoom link that was on our event page led them to a different session.
Maybe I can ask the technical team to give me the right Zoom link so I can share it with the co-director, who is joining us remotely from Australia.
And with me remotely today is also Sarah Nicole, who is our policy and research associate, and who is probably in the same wrong session right now. So if somebody could send me the right Zoom link by e-mail, that would be absolutely amazing. Maybe somebody from the technical team could take care of this.
We want to talk today about the future. We want to talk about what is next in the innovation pipeline and how we can foster responsible innovation, and look at things like the economy, artificial intelligence, web3, neurotechnology, and all of this. The underlying question today is: how do we create a fair, sustainable, and innovative digital future? Those are very big topics. And so is the mission of Project Liberty's Institute.
Our mission is to advance responsible innovation and ethical governance for the common good. We were founded by Frank McCourt, and we do three things. We catalyse solution-oriented research to foster evidence-based governance innovation, and we are fortunate to already have three founding academic partners: Stanford University in Silicon Valley, Georgetown University in Washington, DC, and Sciences Po in Paris, France.
Our second mission pillar: we bring together leaders spanning international organizations, governments, entrepreneurs, businesses, technologists, academia, investors, and Civil Society. And we lead three initiatives, one of which you will hear a bit more about, on topics such as the good technical governance of web3 technical infrastructures for public and private innovation makers, or ethical principles for responsible technology. This is quite particular for an organization such as ours.
And we have DSNP, the Decentralized Social Networking Protocol, which allows developers to build social web applications that enable users to share their data, and which also gives them the possibility and the infrastructure to participate economically in the value created with personal data.
So what is the intention for this session? We want to talk about ethical principles for responsible technology, and we want to look at the entire innovation cycle: how technology is designed, how we invest in new technologies, how we deploy new technologies commercially, and how we regulate new technologies. Both the substantive side of ethical principles, but also the processes for responsible innovation in the ecosystem.
So with this, I would like to start with the first round of questions. But before this, could I ask if somebody has the right Zoom link for online participants?
>> I sent it.
>> PAUL FEHLINGER: You are amazing. Thank you. I hope that Paul can join us soon as well. Thank you very much.
Hiroki, if we think about ethical principles, the first thing a lot of you realize, and you are all experts on this, I see, here on the panel, and I recognise some of you who have worked on very important sets of principles in the real world: somebody from UNESCO said that for AI alone they identified more than 630 different sets of principles. So there is no shortage of ethical principles for innovative technology.
And there is this sort of tension: we have so many principles that actually nobody knows anymore what to do. And that is a big problem for a society that is heavily digitalized. This is slightly worrisome for where we are in the ecosystem.
So my first question to our amazing speakers: what is the state of responsible innovation from your view? And as a sort of sub-question, what do you think we should have learned, or have already learned, from the past 10 or 20 years of technology development? Where are we today? I would like to give you each the floor and ask you to first give us your ecosystem view. You will have an opportunity to talk about the work you are leading a bit later.
But I want to start with the state of responsible innovation today. Maybe, Elizabeth, we could start with you.
>> ELIZABETH THOMAS-RAYNAUD: Thank you, Paul. I will just start out with a few thoughts that I had before and might embellish this a little bit.
I think the first thing is there is quite a strong consensus around the need for significant and timely work on these questions. We have seen, and are seeing, a lot of different new initiatives cropping up.
I think there are traditional approaches and also recent lessons that give policy makers and stakeholders an impetus to say we need to start thinking collectively further upstream, and to find development in line with respecting human rights, building in privacy, security, and accountability by design. Not as sort of add-on features, trying to encase the technology after certain concerns are raised or issues have developed. Global cooperation has been cited from the early discussions as needing to be an essential feature of the effort, given the borderless nature of many of the technologies we are talking about and the impact they have on citizens around the world. The approaches need to be broader than national or regional.
So it's important that they are directed at human-centric, values-based, and rights-oriented development and use of the technology. That can't be done at the end of the cycle, and it is something that is sort of forcing us to say: are we getting this right? Are we waiting too long? Do we need to think about this in alignment with each other sooner in the equation?
And then we need to factor in not only the technological implications but also the societal ones, and offer resolutions that are adapted to people, both in their culture and their needs. I'm not contradicting myself: we need borderless, global approaches, but with informed, sensitive consideration of those factors. There is a strong need for that.
There are instances where we have seen this working, in some of the dialogues at the IGF where certain issues are discussed and then go into a technical activity or policy-making activity, and you get better action and more initiatives that can have impact in the community because of the thoughtful discussions that take place earlier.
And then, of course, we need to think about how to enable the ecosystem and the technologies, not just mitigate some of the concerns that we could have. The last thing I will mention on this: there are a lot of spaces and lots of actors contributing, and all of that is positive.
It could be easy to say there are too many things happening. Instead, we should focus on whether we are moving in the same direction, reinforcing and complementing the efforts we are making, and then playing to the strengths and linking those up, rather than trying to create a one-stop shop for everything.
>> PAUL FEHLINGER: Thank you so much. I just recognised that Paul joined us. Happy that it works. Do you quickly want to introduce yourself? And I will give you the floor after Hiroki.
>> PAUL TWOMEY: I hope you can hear me okay. I'm the co-chair of the Global Initiative for Digital Empowerment, which is a group of 70 to 80 experts from about 29 countries concerned about data governance rules, technology, and innovation. And I was a sort of founding figure and CEO of ICANN, for those who have a long memory.
>> PAUL FEHLINGER: Happy to have you with us remotely. Hiroki, you mentioned you used to be in the Japanese government, and you are in an interesting position because you led important work on agile approaches to the governance of disruptive technology and cyber-physical systems. I think the name was Society 5.0, which covers all of what we just said, from XR to artificial intelligence, all of it together as a complex: how do you govern this? You wrote a very important strategy paper for the Japanese government in that regard. So I think you are very well placed to answer the question.
Where are we with the state of responsible innovation today when we look across the entire cycle? Are you confident? How do you assess the status quo?
>> HIROKI HABUKA: Thank you very much for the extremely kind words.
Yes, so let me talk about the history of AI governance and AI regulation. As you mentioned, we now have something like 630 sets of AI principles. They started from around 2010, just after we started to implement these technologies.
And according to my understanding, across those principles the pillars are almost the same: fairness, safety, security, privacy, transparency, accountability. We have all of these on the table. But how to implement them is the real question. And then, after 2020, some countries started to make new regulations on AI. The EU AI Act would apparently be one of the most significant new regulations on AI. And Canada is now discussing a new regulation on high-impact AI. So there is a lot of discussion going on.
Now, Japan hasn't adopted a comprehensive approach. We take a more specific and more soft-law-based approach. So let's see how it goes. Governance should always be risk-based. Whether you regulate or not, we already have some very good risk management frameworks for AI, such as the United States NIST AI Risk Management Framework or ISO standards. Japan also has AI guidelines for companies and operators of AI, and they look similar: they say you should do impact assessments.
And then the risks are assessed in a multistakeholder way and reviewed or monitored, sometimes by a third party. Of course, the contents differ from each other. But now we understand what we should do, and we should run the risk management processes in an iterative manner, or in an agile manner, I should say.
The real question is: what is the impact? For example, what is the privacy that should be protected in the age of AI or generative AI, and how do we balance the risks and benefits of AI? For example, if you use AI in cameras in public spaces, it will dramatically increase the privacy risk, but it will also dramatically increase efficiency and the safety of the public. How to balance those different values, which are in a tradeoff situation, is the real question.
So, you know, defining values, balancing values, and how to solve this. I mean, government doesn't have any idea about how to technically solve this question. So we always need to talk with tech people as well. Those questions cannot be solved solely by the government.
And that's why we need multistakeholder dialogue in an agile manner. The Japanese agile governance concept is based on that. We always have to try to be more multistakeholder and agile in governance. Governance doesn't only mean regulation, but also softer instruments like guidelines, democratic processes, or corporate governance. How to materialise the rule of law in this agile and multistakeholder manner is the real question. I don't think any single person has the correct answer. All of us are struggling with that.
>> PAUL FEHLINGER: Thank you very much. That's interesting, because you highlight the fact that this is a process and a journey for all of the stakeholders to come to grips with.
I want to give the floor to Paul, who joined us remotely. In your view, what is the state of responsible innovation in general today, and what do you hope we have learned from the first wave of technological innovation, the past one or two decades, if we look back? What is your assessment of the status quo, Paul?
>> PAUL TWOMEY: I'm optimistic and pessimistic. As we would say in Australia, I'm having a bob each way.
Let me say where I'm optimistic. I think what we are seeing, interestingly often from the larger or more established institutions, is a real sense of trying to think about ethics as it applies to the evolving technologies. And I would point to a couple of things.
You know, quite specific things. I think we are seeing people who are producing ethical frameworks. There are initiatives in Silicon Valley, at Santa Clara, and the Centre for Culture in the Vatican, of all things, working together on a road map for ethical technology development. A practical road map.
So there is an example of people who are working on how you put this into effect. A lot of companies that I'm aware of have also taken the lessons from our community and are using multistakeholder processes to identify issues early in the product development stage, so that they are not caught, as you said before, having developed something and then having to fix the problems afterwards.
How do you run a series of processes with various stakeholders that help identify issues? That part is interesting. AI is a good case study in some of this.
To give an example of where I think I'm pessimistic, or where we have a challenge: it is one thing for established corporations to have a view about this. It is another thing altogether for what happens in the startup space. One example is the implementation of facial recognition, where we have now heard through reporting that people like Google and others said you should not do this. And then Clearview turns around and breaches human rights and copyright or whatever, because it is a startup.
That dynamic is going to keep continuing. And while I'm a fan of the creative-destruction aspects of capitalism and innovation, this is all good, I think there is a very big ethical difference between boards and investors and VCs saying we want to upset an industry structure or a supply chain and find new ways of innovating, and a model that breaches people's rights.
Take, for example, the ride-sharing companies. There is a big difference between those two things. Someone told me that all of the students want to create these businesses. I said: think about sovereign risk. The question is how long before that risk is going to hit you. I think that is a route for the sovereigns to send signals to the VCs, not just to the big corporates but to the VCs: be careful about the stuff you start throwing money at. If it begins to breach human rights and other issues, they will come down on you hard. I think that is an important issue.
>> PAUL FEHLINGER: Thank you so much, Paul. This is an amazing framing from the international landscape to the view of one specific government on more agile processes.
Paul, thank you for mentioning the entire innovation cycle and the role of startups and of VCs, because those are questions we need to discuss. I want to give the floor to Vivian, who has a very particular view, because she has been a very accomplished journalist. Where are we today?
>> VIVIAN SCHILLER: Thanks, Paul. My co-panelists have said so many smart things that I'm busy crossing things off because I don't want to be too repetitive. Like you said, I'm a journalist and journalists, we are observers and reporters.
A few observations. We are too slow. A lot of processes understandably need to be inclusive, and of course they do. But the technology is moving so fast that no matter how hard we try to futureproof what we are trying to govern, we are still always going to be behind. And we need to think about that. Related to that, I don't think we always have the right people in the room for these conversations. There needs to be much more emphasis on technologists, in addition to all of the other key stakeholders looking at these issues: Civil Society groups, government, et cetera. We need technologists.
We also need big tech in the room. I cannot tell you how many times I have been in rooms with big tech not represented. And when I ask, they go: well, they are the problem, so we don't want to let the fox into the henhouse. But the fox -- I don't know how to continue with that metaphor.
First of all, they hold an incredible amount of power to make the change, and they understand how a lot of the technology works better than anyone on the outside possibly could. They need to be in the room. That doesn't mean you necessarily have to have complete consensus in the room, including with big tech, about your decisions, but they need to be there.
My next point is going to sound like the opposite of what I just said. I have seen so much focus, in various contexts when we're talking about AI, on: oh, well, we've got an agreement now with Google and OpenAI and Microsoft, so we're set. They are incredibly powerful players and they need to be in the room. But there is an entire world of open source out there that is not represented. And that is just going to become more substantial.
Just a couple of other points. We are too quick to see regulation as the answer to everything. And Hiroki made this point: we need to think more broadly about governance. And then I will also repeat something, because it is worth repeating, that Elizabeth said: we need to be thinking much more upstream in the process. The whole idea of safety or security by design is critically important. Thank you.
>> PAUL FEHLINGER: Thank you so much for this first framing round. The reason why you are sitting here on the panel is because you all lead very important initiatives in the ecosystem around the use of new technologies.
And I would like to give you the floor to explain a bit, to the people who follow us here in the room and online, what exactly you do regarding how we design technology, invest in technology, build and deploy technology, and regulate technology.
Where in the innovation ecosystem do you fit in with the kind of work that you are leading? And Elizabeth, you are leading the OECD Global Forum on Technology, which is a very interesting forum. What is the vision and role of this forum?
>> ELIZABETH THOMAS-RAYNAUD: Thank you very much, Paul. And Vivian, I feel like you just handed me the greatest ball to hit, so I will try to hit it really well. It's really gratifying to hear the points that you're making, because it helps me feel encouraged about the approaches that we are hoping to pursue with the Global Forum on Technology, which we like to call GFTech.
So, those who know the OECD may be familiar with the fact that it has policy committees that work on policy topics; sometimes they even agree on principles, like the OECD AI Principles. And these are tables of predominantly government delegates from within the OECD membership. And then they have stakeholder communities in the digital policy area: the technical community, Civil Society, labour and trade, and the private sector. And so they work on the policy issues.
The OECD Global Forum on Technology is not about that. It is about opening up to a wider dialogue, with non-OECD members included, and with other stakeholders from other areas of expertise. It was launched during the ministerial on digital economy policy in the Canary Islands, Spain, in December of 2022. And we had our inaugural event alongside the OECD ministerial event this June.
Now, the forum is not just an annual event; it is a venue for regular, in-depth, strategic, and multistakeholder dialogue. So it has that wider community: industry, academia, Civil Society included. We intend to do it through two tracks.
The first track is going to explore technologies that are identified as ripe for immediate work. These technologies are going to be looked at through the lens of cross-cutting themes: sustainable development and resilient societies; responsible, values-based, and rights-oriented technologies to achieve human-centric technological transformations; and bridging digital and technology divides.
The other track is horizon scanning. This is where we're going to bring in a broader community, in an event or some activity like that, in order to explore longer-term opportunities and risks, figure out where the bright spots are happening in other technologies that may not already be explored and discussed, and identify and analyse the emerging technologies that may be of interest for work at a later stage.
In those discussions, we'll probably also end up talking a bit about convergence issues. Obviously, if you're going to talk about quantum, it is hard to do so without AI. Synthetic biology is hard to discuss without some of the interplay between the technologies. I gave it away, but the three technologies that we're going to focus on initially in the forum are immersive technologies, synthetic biology, and quantum technologies.
And so, in order to go into those technology discussions, we are going to do what Vivian said we should do, which is to bring the technologists into the room. We have the government delegates working in the policy committees; they are going to help us identify national experts, and we are building up a community of that broader scope. But we are looking for real technologists, and we are also looking for experts in the ecosystem: people who understand how the technology works in the ecosystem and what the implications of that are.
We are going to try to channel those insights. And again, to the point about going too slowly, this is also an opportunity for us to use the Focus Groups as a policy accelerator, to orient the OECD and its partners towards the most relevant and needed policy work. So we take the priorities and do them first.
And then the focus groups will also share those insights and perspectives amongst themselves; they'll hopefully increase collective understanding of the technologies as well as of those ecosystems I keep mentioning. We're going to look to them to help identify gaps and bright spots, as well as to provide examples and data, and to do what the OECD is known for: building up the evidence base to help inform policymakers. I will stop there.
>> PAUL FEHLINGER: Thank you so much. This is such a bold vision, I think, and so necessary in the ecosystem, to try to bring together the different threads and operate at this level. We are often surprised by new technology, and I think this forum is an important initiative to get on top of the innovation curve. So fascinating.
Hiroki, you already mentioned you worked in the field of artificial intelligence. Many might not have heard about what agile governance is. Explain it, as they say, at the level of a 10-year-old. What does it mean for responsible innovation?
>> HIROKI HABUKA: The most simple meaning of agile governance is to make our governance systems agile and multistakeholder, in a distributed way. The reason is clear based on our discussion so far: technology is changing so fast. In Japan, it takes at least two years to make a new regulation or even revise an existing regulation. And that is just too long considering the pace of change.
We needed to change our approaches. What is the alternative? The market? It doesn't necessarily work very well, for a lot of reasons. For example, there is a huge information gap between the government and the private sector, in the sense that the government now has much less information than the private sector. Or, you know, unequal negotiating power.
If you just want to use the service provided by a big tech company and you don't have any other options, you just click yes to the terms and conditions and privacy policies. So the market is not always perfect. Then there are norms and ethics. Again, they are good, but they are not always so helpful. First of all, it is really hard to define the meaning of principles like privacy, security, and safety, because AI is a system that operates based on probability. It doesn't give you a clear-cut answer; it always answers in a probabilistic manner. So we need to define the values, which is really difficult.
Also, we don't always correctly understand what the risks are or how the technology works. We easily get furious about easy-to-understand news where the problem is caused by new technologies, but it is also difficult to understand the benefits of the new technologies.
For those reasons, each of these different mechanisms will not work in a perfect manner. We have to combine the different tools so that we can make our technology more trustworthy. And the only direction we can go is to try different approaches and see whether they work or not. And if something doesn't work very well, we should quickly update it.
When we talk about regulations, the first question we should ask is: is this activity already regulated or not? If it is already regulated, for example car driving or giving legal or medical advice, and you try to make AI do the same work, we need to ask whether we need an alternative regulation for AI doing the same task, or whether the existing rules suffice. If the activity is not regulated for human beings, we have to ask why we would need a new regulation just because this is done by AI.
What AI does is statistically analyse past data and give you the most probable answer, which is something human beings have done for many years. AI can just do it more quickly and efficiently. Anyway, we always have to consider these questions.
And there will be no answer before you try. You only understand the risks after you try. That's why we really need iterative approaches. In Japan, we traditionally believe that the government shouldn't make mistakes; there should be no mistakes by the government. But we just have to change our mindset. We have to admit that the government can make mistakes, and so can private citizens.
This is the same mindset that is necessary in software development.
>> PAUL FEHLINGER: It makes me think: you yourself used the term mindset shift, because it is almost counterintuitive, especially from a public sector point of view, to experiment and to potentially expose yourself to, or accept, a certain amount of risk or uncertainty.
But what I find very interesting in this approach is that it spells it out and says there is always uncertainty and risk. You don't really know. If you know what could go wrong but don't know to what extent, you call it risk, okay. But it is an interesting mindset shift to say: if this is what reality looks like, what do we do?
I want to give the floor to Paul, because he leads a very interesting initiative, which he already mentioned himself, and which is yet a different puzzle piece in the international landscape, or the ecosystem.
It is an example of an initiative in the public interest to work more at the infrastructure level. Paul, can you explain a bit more: what do you want to achieve, and how does it enhance responsible uses of technology and responsible innovation?
>> PAUL TWOMEY: I was on mute. I will try that again. Thank you.
GIDE is an international initiative by accident. It started as a conversation that I had with the then head of the Kiel Institute, at the G20 meeting hosted by Saudi Arabia, where I think the buses were late or something, and we ended up having a long conversation about capitalism.
And we decided people should really think about this and what the implications are. The first product was thinking through the implications of the model we have in the digital economy from an economist's perspective, and he is a senior international economist, together with some people from the communications, Internet, and policy world.
And we have since been joined by 70 or 80 of our closest friends, if you like. I say that jokingly. People who are experts in various areas around the topics that come up. What have been the consequences of this analysis? One is that we actually thought through the whole market structure of the international digital economy. If you think about it in simple terms, you have users, consumers, and you have digital service providers over here. That is the basic I-sell-you-something-and-you-buy-something sort of model.
And for e-commerce they interact with other producers and manufacturers. If that was all the digital economy was, it would all be fine, and it works as a market in the sense that the individuals who are involved actually do have power. Partly because they have power under the sale of goods acts, et cetera, but they also have an understanding of what's going on. One of the best indications that this works as a market is that the margins are relatively thin and the consumer is king.
It is about a $5 trillion market at the moment, per annum. And that's fine. If that was it, we wouldn't be here. Of course, we all know there is the dirty underground activity, which is the service provider saying: in return for my letting you participate in the international digital economy, you can get my connection or whatever; in return for which, I'm going to have these unbelievable amounts of information about you, and you've got no idea how much. I would argue not a single person on the planet has an idea of how much data gets gathered.
Now, in thinking through that, we looked at it and said: that data gets used, but it is done in such a way that the market is all driven by advertisers and others.
And the skills all sit in that market. But the people who should be the consumers are not consumers; they are the product. So we focused a lot on what the implications would be if you changed that, such that the users were actually consumers and could play a role in the market for personal data in particular. And we're not necessarily talking about creating data privacy rights; we're saying, essentially, that if people had the right to decide who has access to key parts of their personal data and under what terms, and had the right of association and representation, such that the skills that are already sitting in one part of the market could come and actually work for consumers, we would end up with the sort of liberalization and innovation that we saw in financial services over a long period of time.
As Dennis would say, you are basically taking what took place between 1860 and 1870 in the industrial revolution: that whole process that was both bringing people together and empowering them, which ended up with the middle class. So we think there are several propositions there, and we've been in discussions with a number of governments and entities about how just making that shift in policy, and there is a lot more detail behind it, could end up addressing many of the misalignments in the digital economy. Because at the moment it is driven by only one incentive system, which is advertising, and that advertising serves only one part of what is an ecosystem.
>> PAUL FEHLINGER: Thank you very much, Paul. An interesting initiative, and a very important one. In many ways it follows a philosophy similar to the one that also led to the development of the Decentralized Social Networking Protocol, whose governance we are stewarding.
I want to give the floor to Vivian. You sort of wear two hats on this panel. You are with Aspen Digital, and Aspen Digital does a lot of amazing things. But you're also jointly leading the global multistakeholder initiative on ethics and principles for responsible technology.
But before we get to this, I just wanted to also give you the opportunity to tell us a bit more. Aspen Digital, what are you doing and how does it foster responsible innovation?
>> VIVIAN SCHILLER: I will not talk about our project together; I will save that for a couple of minutes from now. I will talk more broadly about Aspen Digital. Our favourite topics are ones that are incredibly complicated and tangled and thorny and fast-moving, where there is a lot of chaos and a lot of misunderstanding. We love those. The good news is there are a lot of issues like that out there, so we have plenty of work to do.
What we do best, and we're not researchers, is bring groups together across sectors for information sharing and sense-making. I know this sounds simple, but what we have discovered is that it fills a really incredible gap, because it never ceases to be shocking to me that groups are not talking to each other when it comes to complicated issues. Take just the US government: people in different parts of the US government are not talking to each other.
I cannot tell you how many meetings I've had where representatives from different parts of the United States government meet each other for the first time and go: oh, you are doing that? So are we. So we bring those people together. If we did nothing else, I think that would be worth it.
Groups from government. Groups from the private sector. Private sector tech, but also the rest of the private sector, the non-tech companies who often get left out of the conversation. Academics and researchers, Civil Society groups. And we try to bring in, if not the public, at least representatives of the public. And we do not drive the action, but we ignite the ability to drive action.
A couple of quick examples. There is a lot of focus on information integrity. We had something called the Aspen Commission on Information Disorder, which was a panel of people across sectors, where we identified complicated issues: lots and lots of people working on them and not a lot of action happening.
So what we did there, which is our model for everything we do, is look at all of the great people who are already doing the work. We are not there to reinvent the wheel. We're not trying to pretend that we are the first in the space. But what we do is elevate those best ideas, distill them with a group of experts, and drive action. Which is actually what we did with the recommendations from this commission.
Cybersecurity is a big area of focus for us, both US-domestically and globally. We bring groups together to, again, share critical information and then drive action. Countless examples there. Another example is our tech accountability work, holding companies responsible for the commitments they have made but don't necessarily follow through on when it comes to diversity, equity, and inclusion: both in the way that they recruit, hire, and upskill, but also in the way that they approach the development and release of products, and how those impact underrepresented or vulnerable communities.
And of course, lately everything is about AI. We have been asked to come in and help different groups figure out what's going on. We literally convene members of Congress, at their request, to help educate them. So many people are in the weeds, and there are so many groups that have specific interests. We don't have a particular interest in any of it; we are unaffiliated, so we are able to bring in experts and help bring sense-making to them.
We have done the same thing, and continue to do the same thing, for journalists and news media. We have convened all around AI: we have convened the heads of philanthropies and foundations to help them understand what is happening with AI and how it can impact philanthropic giving, and we are doing the same thing with private sector companies around how they think about AI and labour markets. And we are setting off on very, very substantial work around the intersection of AI, elections, and trust. I think most people in the room know that 2024 is going to be a gigantic year for elections, with an unprecedented number of very consequential national elections.
That's a little bit of what we do, without mentioning our project, Paul. And what are you giggling about, Paul?
>> PAUL FEHLINGER: A collector's item. Elizabeth has a collector's notepad from the second global conference we hosted in 2018, and I was very surprised to see it five years later here in Kyoto.
>> VIVIAN SCHILLER: The power of swag is very important.
>> PAUL FEHLINGER: This swag has quite a carbon footprint as well. I think this is a segue to tell you a bit more about the global multistakeholder initiative that we are leading on ethical principles for responsible technology. Mark, if I can put you on the spot: you asked a perfect question in the chat that is the perfect segue to the next part of our discussion. Ask your amazing question, which is exactly the provocation we need. Please introduce yourself.
>> MARK CARVELL: Mark Carvell, I'm an Internet Governance consultant. I work with a Dynamic Coalition here at the IGF on cybersecurity standards deployment. Our coordinator is here at the back.
My question really goes right back to the beginning, when you started to talk about ethical innovation. That sounds like a wonderful ambition and a starting point for addressing risks. But innovation is inherently groundbreaking. It comes from the Latin nova, new. Everything is new. And the designer of the product or service is focused on the new market opportunity. History shows that innovation can actually lead to unintended consequences and impacts, even, you know, inadvertently, things that were not anticipated.
And sometimes, maybe when a product or service is maturing through the development phase, some of those things might become apparent, but the commercial instinct is to try and sort of skirt around them and keep going.
So the question in my mind is how you can be certain, in pursuing ethical innovation, when there are those risks of consequences that would derail such an ambition but which may be unstoppable. So that was the question I put in the chat.
>> PAUL FEHLINGER: Thank you, I could not have invented a better segue. There is a name for this, by the way: it is called the Collingridge dilemma. You cannot regulate something, address something, before you know; technology has a life cycle, and you don't know what happens with the technology, what it does itself, or what people might or might not do with it. So how can you regulate it before it is mainstream-adopted?
So this is a conundrum that almost sounds impossible in the world we live in, with the massive breakthrough technologies coming that we already know of today, from computer-brain interfaces to quantum, AI, and XR, which will transform our entire economies and social fabric. It is a question we have to ask ourselves, even if it is an incredibly wicked problem. This is why we created this initiative. It is incredibly bold, but I hope we approach it with the necessary level of humility, because this is very, very difficult. And this is why we decided to consult the smartest people we can find on those questions.
And this global initiative that we launched a couple of months ago has, I think, five criteria or elements that make it special or particular.
The first element is that we want to look at new technology at large, not just one vertical bucket like AI or XR. Over and over again, it is the same mechanics: very disruptive things arrive, and we do not know what to do with them, with all the different economic and social factors involved. The second element is that we want to take a 360-degree view throughout the entire innovation cycle.
Because a lot of efforts have historically focused just on regulation at the very end. We want to look from the onset: how is technology designed and developed? How is capital allocated, through venture capital and other modes? How are things commercially deployed? And ultimately, yes, how are things regulated, of course.
And the third element is that we don't only want to look at principles, and produce yet the 631st set of principles. We want to look at what especially Hiroki, I think, highlighted very strongly: the processes for managing uncertainty and managing risk. This is a complex, dynamic equilibrium, a mindset shift that we hear about from everybody we talk to.
Fourth, our ambition is that what comes out of the process is operational. We don't need a 631st set of principles; there are extremely good sets of principles already out there. The question that everybody is wondering, no matter what stakeholder group in the ecosystem you are in, is: so what? How does this work in practice and apply to my work? Yes, it is difficult, but we need to figure this out, and this is part of talking about ethical principles. And last but not least, and this goes to what Paul talked about: the enabling infrastructures of the entire ecosystem and how they interplay with ethics, by embedding or enhancing them.
So, with this, Vivian, I would like to kick the ball back. Can you tell us a bit more about what we actually do, how we do it, what we have already done, and what the next steps are?
>> VIVIAN SCHILLER: Thanks for laying out the core principles that guide the work. It is important, and it has been a wonderful project; it has been great to collaborate with Paul and his colleagues at Project Liberty's Institute on this.
We have been doing a series of consultations, because we want to make sure that we are building upon all of that. We have been doing these consultations around the world, bringing together leaders and advocates and builders and funders from various sectors. We've had 10 different meetings so far, across what in a couple of weeks will be five continents.
We have included, to date, 200 people across those consultations. We started at RightsCon in Costa Rica. We were then in Kenya, then in Paris for a European consultation. We are here now. And then the final set of consultations will be at Stanford, in the US, in a couple of weeks.
And everything that we have heard and learned through these consultations will influence and guide a draft document that we will share for feedback in mid-December, with a final document that we hope to finish in February of next year. So we are on a very fast timeline. It will have very specific proposed actions.
As Paul mentioned, it will not just be a set of principles; it will be a set of processes, and the "so what" and the "so who" as well.
So, you know, we have learned a lot. I don't want to preempt; we are still synthesizing everything. But a couple of insights, which have come up on this panel today. First: all of the 630 sets of principles are out there, but the problem is still with us. Our work is not done. That is one thing that is very clear. Another insight: regulation is a very powerful tool, but not the only tool. Often, what we fail to take into consideration, and, Hiroki, you alluded to this, is that we don't have to create a new law or regulation for every new technology that comes along.
Often, existing regulations or laws can be applied: human rights law, copyright, consumer protection, or safety standards. They are in place and they are time-tested. And it's maybe not always obvious how they apply to new technologies, but we should start there.
The third is that, and Elizabeth, you alluded to this earlier, these ideas cannot just come from the US and the EU; they must come from all over the world. The kinds of ideas and amazing things we heard at the Kenya consultation, from across Africa, for example, were really enlightening and stunning.
We talked about this earlier on the panel: we need to find ways to influence what is built before it enters the market. To do that, we need more than regulatory approaches. We need a sort of open development process that balances speed and innovation, because those are important and we cannot ignore them, with the public interest. So that's where we are. But much, much more to come.
>> PAUL FEHLINGER: This is why I would like to ask the question to everybody following us remotely and in the room and to our amazing speakers.
So how should we develop, invest in, deploy, and regulate new technologies? In a nutshell, what do you think is the important crux in the ecosystem that is not yet working as it should, from your point of view? Are there some remarks? Yes, please take the floor, and please introduce yourself.
>> AUDIENCE: Thank you, Paul. I'm the coordinator of the Dynamic Coalition on Internet Standards, Security and Safety. I think about 50% of our members are present, at least 50% here in the room.
What I miss in this discussion is the following. We now have several research reports out, and they all point to the same thing: governments discuss cybersecurity, but they don't procure by design. All of the research that we have done shows that there is hardly any government in the world that has anything about cybersecurity in its procurement documents, let alone about the Internet standards that run the Internet.
They don't recognise them in their legislation either. We talk about the public core of the Internet and protecting the public core, but they don't even recognise what the public core is. So when governments buy ICT in whatever form, should there be a component where they take the lead? Because that would be a major driver for industry and create a level playing field: everybody not delivering would not be selected. Would that be an option?
>> PAUL FEHLINGER: I think this is a very interesting remark, and yet another remark that highlights the link between capital investment, procurement, and ethics when it comes to new technology, and how you implement or roll out standards across an ecosystem.
Thank you for this question. Let's collect a few more, if somebody else from the audience or online would like to take the floor. Otherwise, I will kick it back to Paul, who is following us from Australia.
>> PAUL TWOMEY: Just picking up on the cybersecurity one. Sorry, but if there is somebody in the room from the OECD, then I just have to target them.
Here is the challenge I put to the OECD audience. In most parts of the world, we presently have a governance system for private companies that relies on the concept of a board having some form of accountability for how the company works. And yet what we increasingly have, in the cybersecurity example, is boards that are made up of lawyers and accountants. Overwhelmingly lawyers and accountants, because those were the risk issues for the last 150 to 200 years in companies.
But nearly every company now is a digital company. Nearly every company is equipped to become a data company. We talk about digitalisation, and yet we don't have, sitting as a core part of the curriculum or of the general multilateral governance models around corporate governance, any signalling which says the board should actually know something about technologies and about data.
There is a clear issue there around cybersecurity. But the other part of it is also around the ethics of technology. The lawyers and accountants are often there because they are trained, in their backgrounds, not only in accuracy but also in ethics and what is the right thing to do.
We should be doing the same thing, I think, in trying to put a challenge to the corporate governance models, so that we have much more building-in of accountability for the ethics and the operations of data and technologies inside companies.
So I put that to you as one of the things that could be a real forcing mechanism. Because when the companies themselves have board members who worry about these things, the vendors will change their behavior.
>> PAUL FEHLINGER: Thank you, Paul. I think that is a very important point you raise about the bucket of deploying new technologies. And I think we can even look at other sectors.
If we talk about sustainability and climate, especially at the board governance level: it is not perfect, but if we compare it to the technology sector, it is much more developed in terms of the recognition that there is a need to act. And we are only starting now, thanks to artificial intelligence being a global, cross-cutting, massive issue.
Those discussions, I think, are only just starting; we are at the very, very early stage. I think this is a very important point. I see Hiroki nodding a lot. What do you think, across the innovation cycle? What is your recommendation? What is missing?
>> HIROKI HABUKA: First of all, what I believe is missing is the mindset of each stakeholder. For example, the government should change its mindset and accept that it cannot change everything, rule everything, and oversee everything. Government should play more of a facilitator role, or maybe an incentive-provider role, rather than a controlling role.
>> PAUL FEHLINGER: Can I just interject? Does this mean anything goes? An ultra-liberal approach to technology?
>> HIROKI HABUKA: When I said incentive provider, I meant designing liability and sanction mechanisms in a way that promotes more ethical behavior, or private parties' initiative in setting their own ethics and implementing them.
For example, if, as is the case now, we make regulation in a very specific manner, it will on the one hand harm innovation. Instead, regulation should be more principle-based or process-based, rather than consisting of specific, prescriptive rules.
But still, there will be a gap between what the regulation says and actual operations. So we need some intermediate rules, which take in the tech perspective and the Civil Society perspective and are updated in a more agile manner. And since they are soft law, you can even achieve better goals in a different way. This is the kind of regulation we could consider.
And if we consider liability systems: at least in Japan now, if you disclose bad information, you will be criticized more, because it has more news value, and the regulators just come to you and try to trigger sanctions.
Maybe we could make new incentive mechanisms that reward companies that detect problems, report them, cooperate with the investigation, or suggest new improvement measures. We could reward those companies even after the accident, to incentivise companies to be more ethical after something bad happens. This kind of design of regulation or liability systems is something we have to consider, instead of trying to understand everything and trying to control everything. So that is the governance part.
And, of course, citizens have to understand there is no perfection in regulation or in new technologies. We always have to consider that there are tradeoffs, and focus not only on the risks but also on the opportunity costs, which will be large if we just miss the opportunities of new technologies.
Maybe a lot of public services will not be delivered, or will not happen, because of that.
>> PAUL FEHLINGER: You started with the mindset shift and finished by talking about almost the entire culture of innovation and regulation, because of how we react to mistakes. This is more than just a mindset; it is a fundamental shift in our risk-averse cultures, where everything is focused on safety. And it is counterintuitive to the mainstream way that we approach regulation. Thanks for sharing that. That is the kind of topic we need to discuss.
Maarten, if you could introduce yourself, and please give us your sort of hope and view.
>> AUDIENCE: Thanks, Paul. I'm Maarten Botterman, a pleasure to be here. And thanks for that. Basically, the segue you just gave is from responsible disclosure to responsible technologies, right? I think the principle of responsible disclosure is that you avoid more problems by being transparent and active, and you reduce the risk.
And the good news about responsible disclosure is that you are not the only country trying to find its way with it; there are many countries around the world that can learn a lot from each other. Making responsible disclosure the ethical norm around the world would favour that. We have seen some examples where responsible disclosure has led to better acceptance of the company involved.
It is not hopeless, and I think it is the only way. The same goes for responsible technologies. Paul, I appreciated what you said about boards being responsible and having a responsibility, and some of that may be ethical. The problem with ethical responsibility is that, like law, it differs per jurisdiction. Ethics is not the same around the world.
How can we come to some global understanding? Because we are talking about global technologies that may be deployed in one country and used in 70 other countries, even if the service originates from a third country or the same country.
So that is why I think some kind of global guidance is important. At the least, you can generate some global practice that sets the standard around the world, which companies, and maybe also governments, will take into account when they are thinking about how to implement it.
It is also fundamental that governments don't only drive this in their own jurisdiction, because with different law everywhere, you may end up with something that is impossible to deliver for the global companies that provide the services. The earlier we have some global understanding about good practice, and this project may well contribute to that, the better.
>> PAUL FEHLINGER: Thank you so much, Maarten. I want to give the floor to Elizabeth to tell us: given the amazing opportunity and ambitious mandate that GFTech, the acronym I just learned, has, what is your hope for the future of constant innovation, and for how we deal with those cutting-edge breakthrough technologies and their public-interest development?
>> ELIZABETH THOMAS-RAYNAUD: That's very ambitious. I will be a little bit more local and talk about the practical elements that we can offer to the equation.
One thing that strikes me as we start to look at some of the technologies, perhaps not every single one of them, is that we already have a lot of policy work there.
Take the immersive technologies, where people talk about the metaverse. There is a lot already there. Some of the questions, like all of the biometric data that is going to be collected, and the speed and intensity of that, and what the implications are, will then go into the discussions of the privacy commissioners and others who look at the issues and try to figure out what elements need to be tweaked, or whether the OECD privacy framework will apply as is.
But what are the nuances? What are the details? Beyond that, there are the component parts that already exist. That is one of the things we think about in terms of how we can help and support that exercise in the OECD context: what already exists in the policy areas. Security, connectivity, privacy, but also things like competition and IP rights and trade, and all of the issues that have that lever component.
Paul, good for you for being provocative. I also think about the private sector piece, understanding what the levers are there, and the procurement idea. I'm very much hoping that this is exactly what we are going to hear inside those Focus Groups: all the different ideas. And then some of the things we can do is go away and understand the measurement side of that. What happens when people use that example in this sector? What can we understand, what are the unintended consequences of such ideas, and where are the opportunities that might exist that we haven't tapped to tweak the policy?
The last thing I will say about that is that the principles get you started; they orient and help everyone understand where north is. But once you have north, you have to figure out what the different roads are that you are going to take to get there, and how you get around the roundabouts and other obstacles, to take that analogy a bit further.
I was going to use the term toolkit, but since I'm on this little journey on my road, I'm going to use the backpack. So what are the things you need? You need your compass, you need your granola, whatever.
So you pull in the policies and develop the kind of guidance and understanding. Think of the AI example inside the OECD right now. The principles were done, but there is this experts group, and they are working constantly to help understand things like compute and the demand for compute, and what is happening because of certain policies that are in place in certain markets, and what the implications of that are for the way the technology is developing, and also for divides and other things.
So all of those questions and pursuits come together. Finally, it is a composite of things that get brought together to help the understanding, rather than anything sort of very visionary and overarching, if I may.
>> PAUL FEHLINGER: Thank you so much. Last but not least, and you might be slightly biased in this: Vivian, what is your hope for how we address responsible innovation, and what can we achieve with the global process and all of the wisdom that people share with us?
>> VIVIAN SCHILLER: I think it's everything we have been talking about: marrying the principles, in a practical way, with seeing those principles turn into action. So the processes, like Hiroki was talking about, and the global multistakeholder nature of it, as the other panelists have been talking about. And the dream is that we can actually see some of this happen.
All of these smart people that we have been talking to throughout these consultations: to see their incredible ideas actually come to pass and have influence and take us on a better journey. I will pick up your metaphor, Elizabeth. So even though we're heading south, we need to take a right turn here and a left turn there, and decide which passengers we're going to pick up along the way.
>> PAUL FEHLINGER: Thank you for sharing this. If I ask this question of myself, as a sort of final afterthought: I hope that this global consultative process can help to shed light on the known unknowns, as they are called in risk management.
We cannot address what we don't know. And right now, with all of the hype in the ecosystem on AI in tomorrow's headlines, and on quantum, who is actually talking about computer-brain interfaces? We are not talking enough about them, and they will again be a complete game changer in how we interact with technology. Often, more is different.
What are the things we need to discuss? All of the things raised here today: from the kind of fora we need and who needs to be involved, to questions of mindset shift, the cultures of regulation that have existed up until now, and the learnings that come from previous waves of innovation, web 1 and so on.
Things we have learned about how to deal with disruptive technologies. For all of this, we need alignment in the ecosystem, and I'm personally careful, almost pessimistic. I believe there is no perfect set of principles for how to address this; it is extremely complex. If you ask 200 people, you get slightly different views. The question, then, if this is the ecosystem we live in, is: how do we optimize for this? We don't yet have the discussion on how ethical principles actually work in practice.
We can learn from other areas, and I want to finish on this, because it is also a note of kindness to the ecosystem. If we look at health and bioethics: should you clone a human, yes or no? There was the rapid reaction after the sheep Dolly was cloned, when people said this is too much, let's not go there.
In health and medicine, we have millennia of people thinking about what a good life is, which lives should be saved, health and all of this. Technology is 30 years old, so we are still learning, and that is part of where we are in the process. It is very important that we focus the entire energy of the ecosystem on this.
Thank you to our speakers for the amazing and important work they are leading in the ecosystem. Thank you to all of you in the audience. I know we are standing between you and the German reception, with traditional German beverages.
So thank you very much. If you want to get engaged in the work of Project Liberty's Institute, please reach out. And enjoy the evening in Kyoto. Thank you.