The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> Hello? You can hear?
>> OLIVIER ALAIS: Yes, I can hear you. I'm just wondering if the room can hear us. Is it working fine?
>> Yes. I can hear you.
>> OLIVIER ALAIS: Are we missing anyone in the room, or are we all here?
>> Yeah. Gbenga, I'm not sure he's here yet. No.
>> MODERATOR: Good afternoon in Riyadh, and good morning and good night wherever everyone is. I'm happy to introduce you all to today's session on building trust with technical standards and human rights. We all recognize, in this multistakeholder approach, that there is a real need for a rights‑based approach to technical standards, and we are very happy to introduce quite a number of good experts here today, from Civil Society, government, and corporate perspectives as well.
I will now just hand it over to Olivier to give an initial introduction.
>> OLIVIER ALAIS: Thanks a lot. Good afternoon and good morning to all colleagues and friends. It's great to welcome you to this session, organized by the Czech Republic, the ITU, and OHCHR, on how to include human rights in emerging technologies to build technical standards and build trust. At the ITU, we understand the importance of technical standards for our connected world, and traditionally we've been focused on two goals: technical and commercial success. Today, we must add the human rights perspective.
So this new approach requires us to ask pressing questions: how to protect privacy and data, how to ensure freedom of expression and access to information, and how to guarantee nondiscrimination and inclusivity.
Addressing these questions is vital to building trust, so that emerging technologies can be adopted by and serve everyone.
So why does this matter now? Technologies like AI, the Internet of Things, and the metaverse offer incredible opportunities but also create challenges, and without clear guidance, technology can unintentionally hurt those it is meant to protect. That's why collaboration, and venues like this one, are so important. For example, the Freedom Online Coalition issued a joint statement explicitly linking technical standards and human rights. The recent ITU resolution on the metaverse, adopted two months ago, is also a milestone: it is the first to refer explicitly to human rights.
And our partnership with OHCHR is also key: working with human rights specialists reflects the commitment to implement the Global Digital Compact by turning human rights principles into technical guidance.
So thanks a lot for your attention. I look forward to this conversation and to working together to ensure technology is managed in the best way. I'm giving you back the floor. Thank you.
>> MODERATOR: Thank you so much, Olivier. To speak to these issues, we have five different speakers today, and I'm very happy to introduce them. Marek Janovsky, First Secretary for Cyber Diplomacy at the Permanent Mission of the Czech Republic in Geneva, is joining online. Shirani De Clercq is here from the Ministry of Digital Technology.
Yoo Jin Kim from OHCHR is also joining online; thanks for joining.
And Gbenga Sesan, Executive Director of Paradigm Initiative, is on my right. And finally, on my left, Florian Ostmann, Director of Regulatory Governance at the Alan Turing Institute, is also joining online. Let's kick off the discussion, starting with Marek. The first question: how can international cyber efforts promote the inclusion of human rights in the development of technical standards for emerging technologies?
>> MAREK JANOVSKY: Hello, everybody. I hope you can hear me. Greetings from Geneva, from the Czech Republic Mission. I'm glad to be here with you and the expert community. I will try to be brief; I hope not to exceed my time, so if I am too long, please don't hesitate to stop me. I've got a few points and some questions.
So to your question: what the diplomatic cyber community can actually do is keep raising awareness of these matters. Linking human rights and the development of standards is not a self‑standing issue. It's actually linked to, let's say, broader international relations and how new and emerging technologies change international relations. So we're not talking about a vacuum; it's a change of, let's say, the environment that we live in as humans and as societies. That is one of the reasons why I think the diplomatic communities, globally, started to take an interest in these matters. So just by way of introduction, I wanted to point to that.
I think the first thing I would like to mention is the attention we need to pay to the whole life cycle of new and emerging technologies and their development. What we've been working on here with the ITU specifically and the OHCHR colleagues is one of the points in that cycle: standards development, standardization. But there is also inception, there is also, you know, use and development, and also the disposal of technologies; those are the other phases that need to be, let's say, heeded. We're now talking to you, the expert audience, and I think you have experience with the other ones as well.
The next element I would like to mention is that it is important for us, for the diplomatic cyber community, or let's say the new tech people working in diplomacy, to try to break the silos between the experts who work on human rights, such as the High Commissioner's Office in Geneva or elsewhere, and the specific bodies, such as the ITU or ISO or IEC or IEEE, et cetera. It's important to note that the ITU is not the only game in town; there are others that can join the efforts, and need to join the efforts, in order for the tech and digital transformation to be a success.
Another point I would like to mention is the importance of youth. I would like to point out that young people, NGOs, and others are actually here in Riyadh now, and they are a crucial part of this. So I would actually be quite happy to hear from them what they think about this important link.
And maybe one of the other points, and this is the question that I basically wanted to raise, is how the IGF could help to advance this. The diplomatic community is one thing, but we need a wholesale approach, a change of paradigm in how we perceive the development and use of new and emerging technologies. I think it is key that not only diplomatic communities but others join in this effort, and that each of us plays a role in making it a success. Once we decide to follow the path of digital transformation, we don't have much choice but to try to make it safe and human rights based. Thank you very much.
>> MODERATOR: Thank you very much, Marek. That was a really good overview of the multistakeholder approach. Maybe now zooming in on the Saudi Arabian experience, I'll ask Shirani specifically about the work in Riyadh concerning nondiscrimination. What are the use cases of the impact of technology biases in Saudi Arabia, and what solutions have you seen implemented here to address those challenges?
>> SHIRANI DE CLERCQ: Thank you, again, for inviting me to this great panel. In the ministry where I work, there is a technology foresight department. For the monthly meeting, someone on the team tests the latest apps on the market, these days mostly AI, generative AI, and presents the pros and cons. One thing the team has observed is the biases within the applications.
A simple example: when you ask a Gen AI model to depict a traditional Saudi family, you sometimes end up having a Saudi woman with a ‑‑ (audio broke up) ‑‑ we also have other stereotypes, but to comment on the issue: is it really a big deal to be falsely represented? In the digital world, an entire minority could be falsely represented or even erased, erased from, for example, an AI‑assisted recruitment process. So it could become a big issue in the long run.
So bias in AI systems often stems from how data is collected and how models are trained, but improving the fairness of AI requires more than just diversifying the datasets. While ensuring the training data reflects the full range of cultures and practices, we must also design AI systems that don't automatically reject what they consider unusual.
Also important is the composition of the development team. Inclusive teams representing various backgrounds and experiences are more likely to recognize blind spots and train algorithms with genuine data that reflects their populations.
So what do we do in these situations? In 2023, the Saudi Data and AI Authority, SDAIA, issued AI ethics principles aimed at the responsible use of AI technologies. More recently, in September 2024, they issued the AI Adoption Framework, designed with a use‑case‑driven methodology, a very flexible approach.
There is another issue: language. Have you ever heard of the principle of linguistic relativity? It says that the way people think of the world is directly influenced by the language they use to talk about it. While a sizeable share of Internet users are Arabic speakers, only 8% of Internet content is in Arabic.
So models trained on Modern Standard Arabic fail to understand regional dialects, and research shows that customizing models to local language variations significantly improves accuracy. For example, Arabic‑focused language models improved dialect accuracy from about 84% to 92% simply by incorporating dialect‑specific data. In doing so, we not only improve technology effectiveness, but also ensure that the digital world genuinely represents Saudi linguistic and cultural richness. So it's very important for us.
>> MODERATOR: Yeah. Now switching gears to Yoo Jin: maybe you could speak a bit more on the technologies. We heard the Saudi perspective, and now maybe the global trends you've been observing from the office.
>> YOO JIN KIM: Thank you. Thank you so much. Thank you for this question. It's great to see some familiar faces online and on site. To start, I would mention, again, a report that we published last year on the relationship between human rights and technical standards in relation to digital technologies.
So, to recap: on one hand, the report showed how technical standards are related to the enjoyment of human rights, for instance, showing that many standards define processes and actions that directly respond to certain human rights‑related concerns.
To be more concrete, some examples include standards on privacy by design, privacy risk assessment and management, and accessibility standards on the web, for example, which allow people with impaired vision to navigate and access the Internet.
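To make that concrete in code: below is a minimal sketch, using only the Python standard library, of one small, machine‑checkable slice of what a web accessibility standard asks of page authors, namely that every image carry the alternative text that screen readers rely on. The sample HTML fragment and the check itself are illustrative assumptions, not taken from any specific standard's test suite.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags that lack an alt attribute.

    Screen readers used by people with impaired vision depend on alt text
    to describe images, so a missing attribute is an accessibility defect.
    """

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            print("Image missing alt text:", dict(attrs).get("src", "<unknown>"))

# Hypothetical page fragment, purely for illustration.
checker = AltTextChecker()
checker.feed('<img src="chart.png"><img src="logo.png" alt="Session logo">')
# Prints: Image missing alt text: chart.png
```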
So the ways these standards are designed are important for protecting the right to privacy, for example, freedom of expression and association, nondiscrimination, and the right to life; in essence, the whole spectrum of human rights, although I've listed only a few rights here.
And let's look at a few more recent examples. The Internet Engineering Task Force has done a great amount of work around this, and right now they're discussing an Internet protocol related to AirTags and (?): the working group on DULT, which stands for Detecting Unwanted Location Trackers. This protocol is being discussed in the IETF to protect people against being unknowingly tracked. It addresses a real issue, in a way that tackles gender‑based and domestic violence cases, by creating a standard that allows trackers to communicate with Bluetooth devices so that the person being tracked can detect and discover the hidden tag.
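To illustrate the underlying idea rather than the specification itself, here is a minimal, self‑contained Python sketch of the detection logic such a protocol enables: a Bluetooth device that is not the user's own, yet keeps reappearing nearby across different places over time, gets flagged as a possible unwanted tracker. All thresholds, device names, and the sighting log are illustrative assumptions, not values from the DULT draft.

```python
from collections import defaultdict

# Illustrative thresholds -- assumptions for this sketch, not DULT values.
MIN_SIGHTINGS = 3        # a foreign device must be seen at least this often...
MIN_SPAN_MINUTES = 30    # ...over at least this long a period...
MIN_PLACES = 2           # ...in more than one location

def find_suspicious_trackers(sightings, own_devices):
    """sightings: (device_id, minutes_since_start, place) tuples collected
    from the Bluetooth advertisements a phone overhears around it."""
    seen = defaultdict(list)
    for device_id, minute, place in sightings:
        if device_id not in own_devices:   # ignore the user's own gear
            seen[device_id].append((minute, place))

    suspicious = []
    for device_id, events in seen.items():
        minutes = [m for m, _ in events]
        places = {p for _, p in events}
        # A device "travelling with" the user across places and time is suspect.
        if (len(events) >= MIN_SIGHTINGS
                and max(minutes) - min(minutes) >= MIN_SPAN_MINUTES
                and len(places) >= MIN_PLACES):
            suspicious.append(device_id)
    return suspicious

# Hypothetical day: a stranger's tag follows the user from home to the office.
log = [("tag-A", 0, "home"), ("tag-A", 40, "cafe"), ("tag-A", 95, "office"),
       ("earbuds", 0, "home"), ("earbuds", 95, "office")]
print(find_suspicious_trackers(log, own_devices={"earbuds"}))  # ['tag-A']
```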
On the other side, I would say our report showed the risks related to standards development. For example, standards that define technical features necessary for digital infrastructure to function have particular relevance for human rights, as we have seen with the Transmission Control Protocol and HTTP. In this case, weak or absent encryption in these protocols can facilitate mass surveillance programs that systematically undermine the right to privacy, and can facilitate targeted surveillance by both state and non‑state actors.
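A small sketch of that difference, using only Python's standard library: over plain HTTP, every byte of the request, its path, headers, and any credentials, crosses the network readable to any on‑path observer, whereas the same bytes sent through a TLS tunnel expose only the server name and traffic patterns. The host and path here are placeholders for illustration.

```python
import socket
import ssl

HOST = "example.com"  # placeholder host, for illustration only
REQUEST = (f"GET /private-page HTTP/1.1\r\n"
           f"Host: {HOST}\r\nConnection: close\r\n\r\n").encode()

# Plain HTTP (port 80): request line, headers, and body travel in cleartext,
# readable and recordable by every network the packets pass through.
with socket.create_connection((HOST, 80)) as plain:
    plain.sendall(REQUEST)
    print(plain.recv(120))

# HTTPS (port 443): the identical request inside TLS; an on-path observer
# sees the server name (via SNI) and packet sizes/timing, not the content.
context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        tls.sendall(REQUEST)
        print(tls.recv(120))
```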
There have, of course, been some really important resolutions recently from the UN General Assembly and the Human Rights Council, and reports on human rights and technical standards, but also other reports, for example on Internet shutdowns, the right to privacy in the digital age, and the risks related to new and emerging technologies, as well as reports such as the one on the use of AI in border management and by law enforcement, which gets into the use of AI, for example facial recognition technologies, in real time and the disproportionate impacts it has.
I think one important thing to note is the lack of transparency, which has really been the undertone in the development, design, and use of AI, and this lack of transparency, and thus lack of accountability, has really led to harm and increased risks to human rights.
Let me conclude by saying that while technical standards can have an important role in creating conditions that are conducive to exercising human rights, there are clearly risks posed to human rights by the way standards are designed, but also by the way they are deployed. This is why we really need to put human rights front and center in digital technologies and the standards that underpin them. We have to make sure that standards processes really rest on multistakeholder principles and become as transparent, inclusive, and open as possible. Thank you.
>> MODERATOR: Yes. We heard from Yoo Jin, Shirani, and Marek on the importance of voices in this process, and Marek also mentioned a change in paradigm, so it's good to have someone from Paradigm Initiative; I'm sure you've heard that many times before. So, Gbenga, how can Civil Society organizations influence tech standards‑setting processes and include voices that are traditionally excluded from the room?
>> GBENGA SESAN: Thank you. I definitely heard paradigm mentioned, and I've been hearing that all week. That's great; it's good to know our brand is spreading. Very quickly: I've organized my response around five Ps, to make it easier to remember.
The first is prioritization. It's important for Civil Society to prioritize participation in conversations around technical standards. I say that because for many years, even myself and a few colleagues had conversations around, you know, is it worth it? Because it's expensive to participate, you need to decide, and that is where you begin to ask yourself what the connection is between human rights and these technical standards. And thanks to the Office of the High Commissioner: your report and some of the conversations we've had have helped in terms of painting that picture. If you don't have the conversations at the design stage, and the standards are set, you will end up fighting fires eventually.
The second is participation, because it's one thing to complain that there are issues, but it's another thing to participate and to bring knowledge to the table. I think this is really important, because when you bring knowledge to the table, you may just be presenting a side of the conversation that people have not even considered at all. We see that in our work when we have conversations with, for example, the judiciary. Knowledge about human rights or digital rights is obviously not what they discuss every day, so it's helpful for them to see that.
A third is partnership. I said earlier today in one of the sessions that we need to see that spirit: it's not just about governments being in the room. Even in the case of governments, government is a representation of the people. So governments need to start going to technical standards conversations with Civil Society, with business, and, of course, they don't have a choice with the technical community. So there is a partnership that brings in all of these elements.
Which then brings me to the fourth point, which is people. At the end of the day, my suspicion, and the reality, is that the services are focused on markets, and the markets are made up of people. So the people need to be at the center of the experience; it's like UX. I think that Civil Society has a unique opportunity to bring users' lived experience to the table in this conversation. We were talking about Internet fragmentation, and we got all technical until we stepped back and said, wait a second, how does fragmentation affect the Internet user? And that was very helpful.
Finally, of course, is the process itself. I was glad to hear Yoo Jin mention earlier something about privacy by design. This is where we need to begin to have conversations about, basically, human rights by design. It is not human rights as a mere consideration; it is the fact that human rights, because it is people focused, is at the center of the entire process. It's at the center of the conversation that we're having about technical standards.
>> MODERATOR: Thank you, Gbenga. Those are very important points. I'm now going to turn to Florian on how to make that effective and actually incorporate the human rights perspective into technical standards processes, and where you see the challenges in this. The question comes to you.
>> FLORIAN OSTMANN: Thank you very much. Thank you for the invitation. It's great to be part of this discussion. I'll be speaking from the perspective of our work at the Alan Turing Institute, the UK's national institute for AI, and an initiative we set up a couple of years ago called the AI Standards Hub. I'll say more about that later if I get a chance. Just to say, our work is focused on AI rather than digital technologies more broadly, but I think a lot of the considerations apply more broadly as well.
I won't go into too much detail on why AI raises human rights implications; I think that's already been eloquently set out by my colleagues on the panel. But just to illustrate, as we've heard, there are privacy implications; AI can raise questions of physical safety; there are important questions around nondiscrimination and due process when AI is used in legal or administrative processes; and there are important questions around surveillance, just to name a few important human rights aspects.
So it's very clear that AI raises human rights questions. Now, the question is why it is important for human rights to be considered in the context of standardization for AI. One important thing to emphasize here is that standardization, as we've seen over the last couple of years, is increasingly being looked at as a tool for AI governance, often through important links between regulation and standards. In the EU context, for example, there is an important link between the AI Act and standards being developed at the European level. Standards aren't just a standalone tool; they are an important part of the landscape being considered for AI governance more broadly, and that's why it's important, absolutely critical, that human rights are part of standardization; otherwise standards won't be able to play that overall enabling role.
One thing that I also wanted to briefly emphasize is the importance of thinking about the use of technology rather than just the properties of systems, because I think that's an important shift. To some extent, traditional domains of standardization are primarily about properties and specifications for systems. That is very important in the AI context, as it is everywhere, but as we've heard already, some of the most critical human rights‑related impacts in the AI context may be associated with the use of systems, regardless of whether the properties meet certain specifications. So, again, that's something where I think standardization needs to broaden its scope, if standards are meant to play this broad role of governance enablers, and think about use rather than just the properties of systems.
Now, to the second part: what are the main challenges for including human rights expertise in standards development? I think at a high level there are two points. The first one is that we are dealing with two different cultures: think about the standardization community and the human rights community. There are different conceptual frameworks at play, different languages, and different cultures of collaborating.
The second one is the simple point of these being different communities. The people traditionally involved in standardization are separate from the group of people whose traditional or professional focus is human rights and human rights due diligence.
Now, the fact that there are these different communities means that stakeholders with expertise in human rights and human rights due diligence are, in many cases, not particularly familiar with standardization as a field, and that creates obstacles to their active engagement. It starts with the fact that the space is very complex: there is a wide range of standards‑development organizations, and they each have their own rules for participating, so it's difficult to understand what the most important developments and standards projects are, and then how to get involved in them, given the different rules.
Secondly, there are obstacles around skills: skills and knowledge about how the standards development process works, and knowledge about the different types of standards. There is a very common misconception, I think, around the notion of technical standards. We tend to avoid using the term technical standards because it tends to imply that the content of standards developed in organizations like the ITU or ISO is by necessity particularly technical; that is the case for some standards, but not for all standards. That often creates misunderstandings.
And then, as the last point, it has to be mentioned that there are important challenges around resourcing, as previous speakers have already noted. It's important to recognize that if you think about the multistakeholder approach to standards development, certain stakeholder groups have a business case for being involved; industry is the obvious example. But important pockets of human rights expertise are to be found outside of industry, especially in the Civil Society space, and it's much more difficult for Civil Society organizations to find resources and make the business case for being involved. That is changing ‑‑ I mean, the awareness is changing ‑‑ but the challenge of finding resources remains the same so far.
>> MODERATOR: Thank you so much, Florian. I think at this time we would like to open the discussion to the floor and check if anyone has questions they would like to pose to our speakers here. Apologies; I think, yeah, maybe it's that time of the day. Then we can move on to other questions that we had for the speakers, and maybe bring Marek back in online. What role do you envision the Czech Republic playing, as a government, in ensuring standards align with the multistakeholder process and also include the human rights perspective in their development?
>> MAREK JANOVSKY: Yeah. Many thanks again. On the national perspective, I would like to refer very quickly to what I already mentioned at the beginning of the panel: breaking silos. If we are to succeed, we need to make sure that people actually communicate, so that the experts on both sides find a way to respect one another and do not disregard one another in joint work. It sounds small, but it proves to be a very difficult thing to do from a diplomatic perspective. That's maybe one point: again, breaking silos. That's what the Czech Republic does within the EU, not alone, because we would not be able to do that, but we try to foster cross‑regional dialogue as well. As we have heard before, the EU needs to talk with African countries, for example, because we're actually facing some joint challenges in this ever‑increasing digital world. Linguistically, for example, as we have heard before, it's the same issue. I have to subscribe to the previous speaker's point: once we are exposed to a different language, once we think in a different language, we of course subconsciously change the way we perceive the world; that's for sure.
But there are other things that are similar to that, which are actually influencing our brains and thoughts. I'll probably come back to that.
Another point maybe worth mentioning, beyond the outreach we want to foster, actually here through you guys: the IGF would need to play a bigger role in this, and seriously, the Czech Republic would like, not to task, but to ask the IGF to be of help in this specific area of work, because we think we need more opinions, more, let's say, recommendations from you, because you're the experts who can actually help, be it the governments, the Civil Society organizations, the researchers, or the companies that are there.
That brings me to the point of private companies. I think this is also one of the ways or elements that the Czech Republic is trying to foster. That's why we're extremely grateful for OHCHR's B‑Tech project that is being run, at least, in Geneva. I think it would benefit from worldwide coverage as well, trying to talk not only to big tech but to emerging companies and SMEs, and again to the young people who are actually driving companies, who are at the, let's say, frontier of development of these applications. Because, you know, basically the world is going to be theirs, and they need to step up and tell us how to do it as well: to develop, but to develop in a responsible way.
So I would end there, and maybe just add a few remarks, if I may, on the previous conversation. It's interesting to focus on standardization and use; I agree. But maybe a question to the audience or to the other colleagues: when we talk about new technologies such as neuroscience and other applications which are going to be ‑‑ which are already ‑‑ directly focusing on our brain activities, on our thoughts, how can we improve the users' capacities and skills? In my view, there is no way. It needs to be already clean and responsible from the inception phase. You know, the systemic properties need to be done in a good way, in a responsible way. I don't think that a user who is being swamped by countless applications and, you know, Internet of Things, connectivity, et cetera, is going to be able to rely on responsible usage and skills alone. I think, just from a daily usage perspective, there is no way.
So, again, the inception and the first phases of the development cycle are extremely key to getting it right. Thank you.
>> MODERATOR: Thank you so much, Marek. We have a lady in the room who would love to ask a question at this point. I'll just pass the mic.
>> AUDIENCE MEMBER: First of all, thank you so much for this insightful discussion. Allow me to introduce myself: my name is Ms. Nefrigi, the representative of the Saudi Green Building Forum, an organization committed to sustainable development and also to fostering innovative principles.
Now, the discussion highlighted critical gaps in how human rights are considered in technical standards for emerging innovations. Despite the rapid advancement, overlooking these principles exposes marginalized groups to discrimination, privacy violations, and a lack of transparency.
Now, this challenge is further compounded by limited involvement of diverse stakeholders, which threatens to create unsafe and uninclusive technological environments.
Now, to address these issues, there is a pressing need to ensure active participation from all sectors in the development of these technical standards. By integrating a human rights‑based approach, we can design systems that prioritize transparency, fairness, and accountability, and strengthening collaboration between the public and private sectors and Civil Society is equally crucial to ensure these standards reflect the needs of all communities.
Now, moving forward, actionable steps include creating a human rights guide to align technical standards with principles like equality, equity, and justice.
Also, we can establish a robust multistakeholder platform that fosters the exchange of experiences, expertise, and best practices, while regular human rights impact assessments will ensure alignment with the Sustainable Development Goals.
Now, my question is: how can we create a scalable model for integrating human rights into technical standards without slowing the pace of innovation? And how can we involve, as His Excellency said, the private sector in implementing these standards? Thank you.
>> MODERATOR: Thank you so much. That was a really good question. I would invite anyone in the room to comment on that, because I think you raised multiple points that actually spoke to all of the panelists' presentations today. So, anyone who wants to respond? Gbenga, maybe?
>> GBENGA SESAN: Thanks. That's a fantastic question, actually. One of the reasons it's fantastic is that it takes us to the center of innovation. The question is how we ensure standards are human rights compliant without slowing down the pace of innovation. I think it's important to say this: innovation is about improving experiences, making services better, doing things in different ways. At the center of innovation are the people who interact with these experiences. And the whole sense of human rights is basically saying that we want to make sure that the rights of those who engage with those platforms, who interact with these experiences, are respected.
A very simple example I would love to give is international norms. If we bake in the fact that shutdowns are not allowed, which is now part of the GDC, all countries agreed, paragraph 29(d), no shutdowns, that in itself doesn't slow down the improvement of the Internet; it accelerates it. It means people can use it, people can give feedback, people can have an experience of the entire, unfragmented Internet. So thanks for asking that, because it takes us to the point of why human rights conversations are so helpful with technical standards: they actually promote the sense of innovation. It's not about the person creating the tool but about the user experience and how users' rights are respected.
>> MODERATOR: Can you hear us okay? I'm just seeing from the chat that the audio from the room is apparently a bit low for participants online. Please do continue to type in the chat, and we'll try to get it sorted. Thanks.
Yeah. I wanted to see if anyone else was interested in responding to the question that was posed. If not, then I might move on to questions for both Yoo Jin and Shirani, directly on what you raised. Speaking of Saudi Arabia, you spoke about the challenges of the inclusion of minorities and marginalized communities. What do you think are the financial and social implications that need to be addressed to tackle this issue?
>> SHIRANI DE CLERCQ: Something on the approach: one of the questions is the dilemma between financial revenue and ethical, social, and traditional values. It's always a dilemma everywhere, whatever the country; you always have to balance them. I have just two or three examples; there is no single answer, of course. In the race for AI leadership, you have projects projected to contribute 5.1 billion dollars to GDP by 2030. So there are many projects, and one of those identified is the Kiaserata, pushed by or supported by satera and GDP: 160 million dollars to fund more than 20 AI startups in Saudi Arabia. While these startups can make returns, implementing the Saudi AI ethics principles deployed in 20(?) and the framework issued last year is going to slow down the whole process. You have operational KPIs where the companies have to become future unicorns; that's the main objective. But on the other side, there is a principle that comes in and says this has to be by design: you have to include ethical values in it, why you do it and how you do it.
Another example is unbiased datasets versus financial constraints. You know about the Arabic language model developed by Saudi Arabia; it addresses biases often found in global LLMs, like GPT‑4, that misrepresent Arabic dialects. But while it addresses this gap, expanding such initiatives costs enormously. So what do you fund: this kind of language work, or a future unicorn that makes you proud?
And something else which is more difficult to discuss: the platform with 1.8 million daily digital verification users, Tawakkalna, I'm sorry for the pronunciation, which was developed during the COVID period. It has 19.9 ‑‑ or no, 17.9 million users, which is enormous. On one side, you need data to reduce biases; on the other side, there is the privacy rule. Saudi Arabia published, I think in 2023, the PDPL, which is equivalent to the GDPR, to protect data privacy in Saudi Arabia. The real question is the same as yours: how do you classify and prioritize what is most important for a country? I suppose it depends on the time period and the economic situation, because not all countries face the same problems at the same time.
And I'm really for the agile method. No standard can be deployed in one shot; I think you have to try and adjust. I have the feeling that the AI frameworks, the ethical framework, follow this agile methodology: you test with the use‑case methodology, and if it works, you improve it or implement it step by step. Thank you.
>> MODERATOR: Thank you. Also, building on that point, and on what our kind participant raised earlier about principles and standards that already exist on human rights ‑‑ I'm sorry, we have another question, so I'll pause. Please feel free to speak.
>> AUDIENCE MEMBER: Thank you. Yeah. I think I'm sitting on the wrong side of the room. I have a question about Blockchain standards making and multilateralism. I'm from India, and I worked on a project, an Indo‑Australian project, where we started working on Blockchain standards. It was a three‑year project, and by the end of it we were looking at what involvement in standardization processes through multilateral forums looked like, so we worked with BIS and Standards Australia and also at an international level.
The problem we found was that most people just didn't participate in standardization processes, not because they didn't know it was important, but because it's not practically feasible. So I think it's important to get a Civil Society approach, and participation from small and medium enterprises. But contributing to standards conversations, which is something we've also attempted to do, is an extremely resource‑ and time‑intensive process. It's something that requires significant upskilling: you have to upskill yourself, and you have to spend a lot of time working on these things. It's also something you don't get paid for; most of us do it in addition to the work that we do.
So in that context, even when we did the project, we came up with a roadmap of how standards organizations could potentially get people more invested in the standards‑making process. I think unless there are significant incentives being offered, which some governments are doing but the majority don't, that's not likely to change. A large organization can have people contribute; small and medium enterprises can't afford it, and Civil Society can't afford it. It requires a lot of expertise and a lot of time. With that kind of standards‑making conversation, unless things drastically change in the next two years so that it becomes possible to contribute without that much investment, I'm not sure how a multilateral conversation on standards would really work out.
>> MODERATOR: That's a really important point, and I do recognize Florian spoke a bit to this before. Yeah, Florian, please come in to respond.
>> FLORIAN OSTMANN: Yeah. Thank you. I'll just briefly come in, also because I need to drop off in five minutes, I'm afraid.
I think it's a really important point. We don't work on Blockchain, but I think the point applies more generally, really, when it comes to Civil Society involvement in standards development for emerging technologies.
I'll briefly give some examples of what we've done. I think the problem really needs to be addressed at different layers. One thing we've done in the AI space with the AI Standards Hub is to build a database of standards projects that are under development. That's something that didn't exist before. There is a real challenge, at the very foundational level, for organizations that aren't familiar with the space in understanding and tracking what is going on across the large number of SDOs active in the space, and then in being able to decide which standards project, with my limited resources, is the one I should be focused on and engaging with. That's an example of how we try to contribute to solving that in the AI space, through that kind of database that tracks those developments.
Secondly, to your point on skills and upskilling, we've developed a range of e‑learning materials, and we also have occasional in‑person events. Some of those are not AI specific, so it might be useful, if you're interested, to take a look. We have several e‑learning modules on, for example, how the standardization process works and the role of different SDOs.
And then lastly, and I think this is probably the most important point and where the most value lies, because the biggest difference can be made: thinking about ways of lowering the barrier to participating and contributing to standards development. Traditionally, of course, the way to contribute is to join a committee, whatever committee that is. In some organizations, like ISO or IEC, it's through the mirror committee at the national level; in other cases it's a study group, like in the ITU. It's a formal process and quite a time commitment, which can be quite daunting, and many organizations might feel they're unable to join a committee. We've been trying to experiment with ways of lowering that barrier in collaboration with SDOs, for example with CEN‑CENELEC and working group chairs, to create spaces, workshops for example, that on a one‑off basis provide opportunities for interested organizations, especially those that struggle ‑‑ we have a dedicated workstream for Civil Society organizations ‑‑ to look at a project that's under development and give targeted input on certain questions that the committee can then consider in its work.
I think that's something we've had a lot of positive feedback on, and we'll try to do more. We'll hopefully have a workshop for Civil Society organizations to feed into European standards for risk management for AI in February, so anyone who is interested, please take a look at our website.
We've been very pleased with, and are proud of, our partnership with Yoo Jin and colleagues from OHCHR, having done a couple of events together, and we'll also be working together on a summit that the AI Standards Hub will host in March. You'll find more on the summit on our website; Civil Society inclusion will be a really important focus there.
>> MODERATOR: Thank you very much, Florian. I also want to share a comment from the chat: in my opinion, it's best to have responsibilities before rights. Making responsible citizens is now crucial to building trust using technology standards. We need a trust layer lower than the network layer in the ISO/OSI model. Thank you. I think it also relates to the question, as we speak about responsibilities before rights. Yoo Jin, could you speak a bit more on the B‑Tech project, which encourages companies to actually meet the responsibilities that they have to protect the rights of individuals as well?
>> YOO JIN KIM: Sure. I'm really glad to hear from my colleagues on the panel about the challenges that the non‑technical community and CSOs face, which I think is really important to tackle. Just to note: we understand that standards development takes time and is a complex and challenging environment, so we really need all stakeholders involved, and a continued effort in a collaborative manner, to make sure that human rights are front and center in tech standards development. It's not something we can flick on or off like a switch. I think this really highlights the purpose of the panel we have here today, with diverse participants.
So I will focus a bit more, like you said, on the B‑Tech project and the companies, and perhaps outline some of our steps for next year. The role of tech companies in the standard‑setting space is something that we would really like to stress and emphasize more next year. We already engage with tech companies through our B‑Tech project, which was mentioned, and which focuses on the role of tech companies and how they can operationalize the UN Guiding Principles on Business and Human Rights. Within this B‑Tech project, our office has been able to discuss how to foster responsible business conduct when it comes to AI, provide technical guidance to tech companies operationalizing the UNGPs, and recently publish a paper on generative AI, as well as a taxonomy of human rights risks connected to generative AI. These are some of the more recent outputs we have produced to provide guidance to tech companies, and I'm happy to share a link where you can access these resources on the work we have done so far.

But the challenge now is really looking at how to engage meaningfully with companies on technical standards for human rights, because companies are a stakeholder group that it is really crucial to keep engaged: they take part in standards processes, and sometimes, at various fora, they're the ones driving some of this tech standards development. So human rights due diligence is a key element that should be highlighted: human rights due diligence for both companies and standard‑setting organizations. In order to raise awareness vis‑a‑vis tech companies and also standards‑development organizations, we will be conducting some consultations next year to better understand their views and how to engage with them better, and by better I mean more effectively.

And this, again, brings us to the importance of participation and multistakeholderism. It's already been noted, but the challenge really is getting non‑technical communities involved. When I say that, I mean Civil Society organizations and academia, but also noting that there is a disparity in participation, for various reasons, between the global north and the global majority. And I heard from the floor as well, and from other panelists, that we need more SMEs; I think Marek mentioned this. The point is not just engaging with big tech, but looking at the ecosystem: we also need to be engaging with the entrepreneurs who are really at the frontiers of tech development, and with small and medium‑size enterprises.
So, I think, and I hope this is clear, there is enormous benefit to multistakeholder participation. Gbenga mentioned this: it's really important to bring in a perspective, a side of the conversation, that maybe no one in the room has heard before. That's really the value added of multistakeholder participation in developing technologies and standards that serve everybody.
And in this sense, I would like to emphasize that the IGF, the Internet Governance Forum, has been an important venue to discuss, in a multistakeholder setting, human rights and technical standards, but also a whole host of other issues in this space. That said, I think we're now arriving at a critical juncture, with the steps to implementing the Global Digital Compact and the review of the multistakeholder model at WSIS+20 next year. So let me finish by noting that in the GDC, if you look at line 58 of the adopted text, it places great importance on human rights in standards; it relates to AI standards that must respect human rights. We must also emphasize the importance of this forum in continuing to discuss how to develop and deploy technical standards that respect human rights and that will foster more public trust in different technologies. Thank you so much.
>> MODERATOR: That was a really good summary of the whole discussion. At the end of the day, we are here at the IGF, and we hope to continue this conversation at the next IGF. It's been great to have all of you online. I would like to call upon Olivier for any final words. I've been told we have two minutes left, so we might get kicked out of the room. But Olivier, please do come in.
>> OLIVIER ALAIS: Thanks a lot. It was a very interesting conversation, and thanks a lot to all the speakers. Of course, technical standards have to be human rights enablers. We need to be more collaborative and to have a multistakeholder effort. We also need, as we said, to come up with actionable solutions for the technical community, really in line with the Global Digital Compact. Thanks a lot for being here. I know that we will all keep working together to move forward on standardization and human rights.
>> MODERATOR: Thank you. Marek, Yoo Jin, Gbenga, Shirani, thank you.
>> OLIVIER ALAIS: Thanks a lot to all. Thank you.