The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> CARLOS AFFONSO SOUZA: So hey, good morning. It is a pleasure to have you all here. I'm Carlos Affonso Souza from the Institute for Technology and Society, ITS Rio. This is a Round Table jointly organized by ITS Rio and the Centre for Communication Governance at National Law University, Delhi. The purpose of this Round Table is for us to take this time to discuss questions around Artificial Intelligence and inclusion. This is a follow‑up to an event that we jointly organized with our friends from the Berkman Klein Center at Harvard University back in Rio de Janeiro, from November 8 to 10. Our main goal in this session is to bring you the take‑aways from our event in November, to present them to the enlarged audience of the IGF, and to get new perspectives, new ideas on how we take this conversation forward, since it is a very important topic concerning the future of AI and how Developing Countries especially can play an important role in this conversation.
The slide that you see here is a brief overview of the event that we hosted in Rio last month. It was a three‑day event. On the very first day we framed it around building a shared understanding of concepts in AI and inclusion, with two keynotes: one more technical, the other focused on the social impacts of AI. Right after those two keynotes we had a preliminary session on advancing equality in the Global South, focusing on the challenges brought by the so‑called AI revolution.
On Day Two we explored opportunities and challenges with a Plenary session on data and economic inclusion. Then we broke out into ten breakout groups discussing issues as different as algorithm and design, and health and well‑being, and concluded the day with possible approaches and solutions. On Day Three we focused on action points in areas for research, education and interface building. And that's the conversation that led us here.
Those are the slides that just show you a little bit of what happened on those three days. That was Day One, on building a shared understanding. Day Two focused on opportunities and challenges. And there you can see the breakout meetings, the breakout groups, so that you can see the topics we are interested in: user behavior and expectations; algorithm and design; data infrastructure; business models; law and governance, which I understand is a topic a good chunk of the room is interested in; shifting industries and workplaces; social inclusion; and humanitarian crisis prevention and mitigation. After those meetings on Day Two, on Day Three we focused on our take‑aways and action points.
In this slide I would just like to refer you ‑‑ if we can go to the website ‑‑ to the second link, which is the website for the event. If you click on the resources tab up there in the front, you will have access to the reading materials and the outputs of the event. This could be helpful if you are starting research on issues concerning AI and inclusion: a good starting point where you can see the reading materials that we prepared for our event and some other documents that might be helpful.
So can we go back to the slides to talk a little bit about the attendance of the event, the diversity of the group that we brought together in Rio. I would like to call on Sandra, if you want to address the dot plot of the event; that's the very first link there.
>> Sandra: Maybe just very briefly. I work at the Berkman Klein Center. One of the goals we had for the event was to connect people in the field more with each other. So we created this tool at the Berkman Klein Center called the dot plot, which is based on survey data from attendees. It gives you the opportunity to explore who was there, where people come from, and what kind of stakeholder communities they represent. We also asked them, for instance, about application areas or the biggest challenges and opportunities, and depending on what people answered it rearranges the dots. Each dot represents one person. And if you are particularly curious and ask, okay, who is particularly interested in a given challenge, you can also click on the dots and see whose response it was. So if you feel particularly inclined to connect with someone, there are options there on how to connect. That was one of the tools to basically connect people at the symposium with each other.
>> CARLOS AFFONSO SOUZA: Thanks, Sandra. And if we go back to the slides, I would like to call on our friend Malavika Jayaram to talk about the conceptual framing of the event.
>> MALAVIKA JAYARAM: We had two provocative keynotes, one from a computer scientist and the second from the humanities, looking at it from a very, very different perspective. The keynote that we had from Colombia was really interesting, because she talked about how, you know, there is all this hype around AI, but it is not a new phenomenon; it is something we have had since the '50s, and many of those tropes and narratives about AI still help shape the way we see it today. A lot of the talk about discrimination is really something that happened in a different sphere but is now coming into the AI conversation more and more. She talked about the different levels at which discrimination can take place ‑‑ whether at the level of the algorithm, or of the datasets that Artificial Intelligence systems are trained on, which reinforce the results and outcomes ‑‑ and how there is a real deficiency in understanding how these algorithms and systems discriminate and exclude people.
She came up with a very interesting framework for achieving inclusive AI, the 4 Ds as she called it. The first D was to develop knowledge and promote efficiency through education, to close the digital gap. This is not something new to any of you; we have been thinking it through in all the conversations we are having around access and inclusion, whether it is mere connectivity or something as sophisticated as Artificial Intelligence. The second was deciphering models, and this is a hot topic in the discussions around black boxes: even if the GDPR requires a right to explanation, how can this actually be implemented on a technical level? That is interpretability, both in the short term as well as the long term. She also stressed the need to de‑identify data, so that we can achieve a level of anonymity. That is one way to prevent discrimination, or minimize it, and to preserve the privacy of the people represented in the datasets, because ultimately it is not just data; there are people behind all the data, and we have to remember that.
And the fourth D she talked about was debiasing: how you identify and treat bias in data, given the ways in which the data is not exactly representative of the people it serves as a proxy for, and how you account for bias in the machine learning algorithms themselves while reducing bias in data collection and analysis.
The second keynote, from Dr. Nishant, contested the whole framing of AI and inclusion. He talked about how these two themes mutually constitute each other and are in constant play; they have to be addressed in a mutually reinforcing manner. And he talked about a few tensions that I think we see in a lot of the ways these applications are developed: a tension between the real and the artificial, a tension between something that is aspirational versus something that is really being represented in the data. He talked about how, if we want to do this properly, we can't think of inclusion as a target. It is not something that we treat as a top‑down measure we have to achieve; it is something that has to come from below.
And he talked about how one way to do that is to rethink and to denaturalize the units of measurement, showing how lived realities and processes are constructed on multiple scales, and to situate both AI and inclusion within a much larger ecosystem. So you move, as he said, from living with AI ‑‑ as if AI is something external to us ‑‑ to living within AI, as in we are part of an ecosystem where AI actually influences and shapes a lot of our lived experiences. By thinking of it as a continuously, constantly negotiated entity, instead of something stable and static and some external reference, he thought we could treat inclusion as a real motive and lived reality rather than keep reinforcing the old sort of narrative.
So these were two keynotes with great conclusions, and I think what was really powerful for me was the move away from inclusion as something external, toward looking at it as something where people have agency and choice, and actually have a right of self‑determination in how they are represented within AI systems ‑‑ which affects real outcomes, their experiences and their life chances.
So to sum up: what we are seeing are all the old AI narratives that come to us through movies, through science fiction and pop culture. Not all of them are equitable narratives. And if we want AI to be a measure of empowerment, rather than something that exacerbates existing digital divides, we need to collaborate meaningfully as a community, in a really intersectional way, across interdisciplinary silos.
>> CARLOS AFFONSO SOUZA: So thanks. Our next slides present the research questions that became clear in this event and that we want to take to you. Chinmayi, if you want to speak to this.
>> CHINMAYI ARUN: If you take a look at the research questions, specifically ones like how user expectations in the Global South differ from the Global North, you will see that the framing of the questions themselves has a truly global point of view and specifically takes the Global South into account. It was very valuable that this event took place with researchers and academics from around the world, with a reasonable geographic balance. So the list is up there.
What I'm going to do is describe the research question that is most exciting to me and why I feel it creates an opportunity for all of us to work together at an early stage. The second one Sandra has been kind enough to agree to introduce. And after that I would invite some of you to pick up research questions from the slide and discuss things that you are thinking about doing, ways in which you might want to get involved, because it would be nice to hear from you.
My favorite is how user expectations in the Global South differ from the Global North, and how AI design can accommodate these varied expectations. As someone who has worked on Internet governance for a while and was able to moderate the panel from the Global South, it was valuable for me to think about how the Internet governance narratives are useful in understanding how we want to engage with Artificial Intelligence. We know how influential architecture has been, but there have been a lot of complaints that the architecture was created in the Global North, and in a lot of cases that accounts for the specific problems that are created in the Global South. So AI researchers should be encouraged to think about this from a Global South point of view and to engage with formation at the architecture level early on, and that's something I am keen on getting all of us to do. Sandra, if you want to discuss the one about young stakeholders.
>> Sandra: Many people at the symposium came from areas such as education or health. A key question was how AI‑based technologies impact young people, usually 12 to 18‑year‑olds ‑‑ so minors. We were particularly interested to explore questions around education and well‑being, but there are other areas that we think are also relevant in the youth context, such as the future of work and the skills that we need to equip young people with in order to be successful in this new environment. We are interested in questions around privacy and safety: how can some of these technologies be used to enhance young people's privacy or to protect them from harm? And beyond that we look at entertainment questions ‑‑ for instance, can AI‑based technologies enable creativity among young people, or are there other areas where young people are increasingly contributing and participating in different ways? Thank you.
>> CARLOS AFFONSO SOUZA: And with your permission, let me see if anybody else would like to discuss the questions. If anyone wants to take a look: is there a question up there that you are researching or interested in?
>> PARTICIPANT: Good morning, everyone. Claudio, from a University in Brazil and Portugal. I'm doing research specifically relating to the landscape of AI and law enforcement, so I don't think my topic is specifically addressed up there. But still I would like to take the opportunity to record a compliment to all of the organizers. The comment I would like to make has a historic background ‑‑ I might be wrong, and this is a bad place to be wrong about historic references ‑‑ but I do not think a technology with such power of impact was ever so widely discussed as we are doing now with Artificial Intelligence.
With the inclusion of so many different parties from so many different backgrounds, we have a chance here. Of course, I followed the symposium ‑‑ I wasn't able to be in Brazil back then; I tried to participate, and I followed the developments and outputs. We have a session proposed for Latin America specifically; next year I will let the community know, if it is interested. But clearly we have a chance here: we are discussing a very disruptive technology with the opportunity to debate it in time to make the necessary interventions, if that is possible. And I don't think we have had that opportunity with any other of the very disruptive technologies in history.
About datasets: datasets are a clear concern. We have no clear guidelines or principles for training datasets. Many of the datasets being used to develop these systems come from the geographic centers of power ‑‑ historically the same ones. This was one of my concerns, and it was absolutely addressed during the symposium. I once more congratulate you publicly on the organization of the event, and thank you very much for the update.
>> CARLOS AFFONSO SOUZA: Thanks. And I think we will definitely address those issues very, very shortly. Let me just give the floor to the gentleman here on the left, and then to a couple of questions there. We will start here on the left.
>> PARTICIPANT: I have been thinking about this issue for quite a while. In the beginning I think I was stuck in the "the machines are coming" kind of thinking, but for the past few years my thinking has been much different. It is that this will actually empower a given class ‑‑ not to use that language, but a given set of people, a very limited number of people ‑‑ and empower them to set up world governance around themselves. That is the aspect I think is being missed. And if we are talking about reacting to it, the reaction needs to be a set of systemic measures that will distribute the wealth that will inevitably accumulate, and that is the key thing to start thinking about with Artificial Intelligence. Because it will be a very powerful tool for those who can use it, and those who use it will be the empowered ones. The answer to that has to lie within governing entities ‑‑ maybe a new one is coming, perhaps an empire, perhaps corporations ‑‑ but there has to be some kind of entity that can respond to it. And I think that needs to be a very important research question to address.
>> CARLOS AFFONSO SOUZA: Thanks for that. It is a very, very helpful comment. Let me just make three very quick comments in terms of housekeeping. First, if you want to take a look at the videos of the symposium, they are all online on YouTube. So if you go to your favorite search provider and type "AI and inclusion", you will find the videos of the symposium as well. And I really have to mention that when we organized the symposium, it came up as a result of the Ethics and Governance of AI Fund. A good chunk of the research that led to the symposium was done by the centers that are part of the Global Network of Internet and Society Research Centers, the NoC. We are all part of this network of Internet and society research centers, and we were using the NoC as a hub to discuss those issues. If you want to know more about the NoC, feel free to reach out to any of us at the end of the session. We will be more than happy to talk about the network with you.
And one additional piece of housekeeping: when you make your comment, please say your name and organization so that we can keep track of your comments and make this a proper conversation, even though the setting of this room is somewhat weird, because sometimes people are talking behind your back. So let's go for another round of comments. Next.
>> PARTICIPANT: So hi. My name is Benin. I had the chance of being at the Rio symposium. I think one of the main take‑aways that I came out with, which touches on the excellent points that were made here, is about encouraging participatory design of AI and empowering people to set the rules of the game. One of the things I have been participating in is shaping standards around autonomous and intelligent systems, a process run by the IEEE. It is free to join, completely voluntary; everyone can contribute. And these standards are turning out to be a pretty significant factor in setting the rules of the game. There is an interest in making sure we represent all people, so that they not only help set the rules of the game but also benefit from them.
>> CARLOS AFFONSO SOUZA: Perhaps two comments there in the middle. Go ahead.
>> PARTICIPANT: Yeah. My name is Arito, and I come from a society for children in India. When I look at these questions, I get the sense that the entire debate is being framed from a default framework of market logic. Notions like inclusion and agency are very political notions, and I would think that the default frame for Artificial Intelligence should be a larger social and civilizational one, in which the market is only one element. I think it would be a misnomer to believe that something becomes a Global South perspective just by having the youth of the Global South included in online markets ‑‑ by getting the youth to design products or to have a voice in inclusive design. Getting them to be part of the global economy does not necessarily achieve a Global South perspective.
So I would think an important question to be framed in this context is through the prism of (?). And it is really important that the cities of tomorrow and the rural areas can claim Artificial Intelligence, so that local populations can determine how they plan their agriculture vis‑à‑vis, say, data that comes from agriculture or climate, et cetera. So that you are actually looking at self‑determination and inclusion in the political sense. Thank you.
>> CARLOS AFFONSO SOUZA: Thank you. One here, and the one in the back, and then we are going to go back to our panel for any comments. And then we will go for another round of comments.
>> PARTICIPANT: My name is Gabon, Internet Society Switzerland. I find it difficult to separate a number of things when we talk about AI. One is who owns the actual AI. And then, when we talk about inclusion: inclusion into what? We have major corporations fighting over the minds of citizens all over the world, and it is the most powerful and richest companies that develop this AI. They develop it in a way designed to include you, and the advances in marketing that they finance help them reach that goal. The people who are doing research in AI at Universities also belong to Facebook and Google; they use those systems and are possibly influenced by the AI that these large organizations are using, built with the advances of marketing.
They may take us in directions we don't really necessarily want to go. So the question is: how do we reconcile the benefits that AI gives us, by understanding how we work and all the information that we provide, with AI that is used by the owner of the AI to get us to do things that we would not necessarily want to do?
>> CARLOS AFFONSO SOUZA: Thank you. So, comments, reactions, before we take more comments?
>> PANELIST: I wanted to say the beautiful thing about the IGF is that you get to have this conversation with the stakeholders, so thank you. I am taking notes, and this will be part of the conversation. A quick side note: I don't know how many people know the AI narratives work. I think you might find that some of the framing reflects that the people at the symposium were probably part of a similar process of how AI is being developed and where it is being developed. It is not so much a matter of timing the market as of responding to the direction in which it seems to be going. But we accept that there are probably additional questions that should be asked, and, to the end, there can be further engagement for anyone who is interested.
>> CARLOS AFFONSO SOUZA: Thanks. Let me just take two comments. Parminder and Stefanie have their hands up.
>> PARMINDER SINGH: I am Parminder. I would put it in some of the same language. I am surprised it has not been said that Artificial Intelligence is a structure of power. It is not a small thing; the whole power structure does not change just because of Artificial Intelligence. How can we even address the question of inclusion without mentioning the word regulation? I did not hear it in the set of descriptions, and that is the question, because we cannot visualize this as individuals: however many other people you may touch, they are powerless against the big structure. Only one structure can meet that structure, and that structure is regulation. Participatory design is helpful, but it does not represent interests; participants can shape it about as much as we have been participants in the design of Facebook or Google over the last ten years. It is completely controlled by the large corporations today. And I know there will be a debate about market and state, but we have to respond to where we stand. As we stand, corporations control Artificial Intelligence and are one big actor; the other big actor is the state, with regulation. "Participate in design" may sound good, but where it goes is very predictable. Thank you.
>> PANELIST: Thank you. I will give my comments because I work in Egypt in access management, and we also attended the conference. Our interest is more the economic side. There were also discussions about what business models AI has and what the role of data is. The take‑away questions that we came away with are: what are some of the alternative business models to what companies like Google and Facebook are doing, where user data and knowledge feed AI hidden from our view? So we are trying to find and look for examples of open source or maybe data commons approaches, especially with companies or startups that are working outside the Global North. So yeah, thanks.
>> CARLOS AFFONSO SOUZA: Thanks, Stef. The idea of bringing the research questions to you is to give a glimpse of a couple of questions that we asked ourselves during the event, but the idea, of course, is to push this conversation forward and move on to solutions and action points that we can actually cooperate on in this subject matter. So, to discuss action points and solutions with you, I would ask Sebastian from the Berkman Klein Center to address the ideas on solutions and action points.
>> SEBASTIAN SPOSITO: Just to highlight a few things: importantly, there were three days of conversation among around 170 people from more than 40 countries. It has been an extremely rich conversation with dozens of breakouts, so I cannot do justice to the richness of those debates, but in the spirit of carrying it forward I want to share four take‑aways of mine. It is a little bit like PowerPoint karaoke, because someone else drafted the slides; forgive me for that.
While we were listing the various challenges that AI poses for inclusion, including some of these conceptual questions, one thing that emerged while moving from problem description to solution space is the question of the right altitude ‑‑ and it goes back to your comment ‑‑ what's the perspective to adopt? I think what crystallized is that we need to look at it as an ecosystem problem. The disruptions we see are fundamental; there are structural shifts happening, and we have to address them at the systems level. There was a lot of debate about how much can be pushed down to the individual ‑‑ for instance, when talking about education and skills: just re‑educate people, so the individual has to learn how to succeed in this new world of AI in order to participate in society. Is that the approach? There was a lot of skepticism that you can push it back onto the individual to succeed, and rather a sense that we need a systemic response, that we need the nation state. So this ecosystem perspective was an important one, and one analogy that resonated with me is to look at it as a problem as big as environmental protection. That was the first take‑away.
The second take‑away for me, when we were brainstorming possible solutions to some of these inclusion problems, was about interdisciplinarity. It became clear, and was confirmed once again, how hard it is to talk about AI‑based technology and inclusion across disciplines. Computer scientists and engineers may have different perspectives than the social scientists, so we were struggling even with basic language issues and vocabularies that we don't share from the beginning. At the IGF we have an advantage, because we have been working together for a while, but this is a big challenge going forward. Just now the term citizenship was mentioned. It is a great term for many of us, but it doesn't work for other parts of the world, and we cannot simply replace one term with another; there was very strong disagreement on these basic terms. It is a hard problem: how do we create this interoperability across disciplines, and how do we translate fundamental terms from one area to the other?
The third take‑away for me, again going back to something that was said earlier and that crystallized during the conversations, is that there are sometimes social solutions to technical problems and, vice versa, technical solutions to social problems. In the category of social solutions to technical problems, going back to the problem of unemployment and the impact of AI on labor, there was the idea of a universal basic income, or the idea of a solidarity fund, so that not only the advanced countries benefit from the technology but we have some sort of shared benefit from these technologies. The robot tax is another example of a social response to technically induced challenges.
On the other side, technical solutions to social problems: going back to something that Malavika mentioned, there is this problem of bias and the different sources of bias when it comes to AI‑based systems ‑‑ whether the biases sit within the datasets, in the creation or design of the systems, or with us in society ‑‑ and how we deal with these different sources of bias. That is just one deep dive.
One technical solution was a focus on bias in datasets. There was quite a bit of conversation, and education by the computer scientists and data scientists, on how we can de‑bias datasets and produce training data that doesn't share some of the characteristics of society, which is typically a biased society.
The last point I want to make, also as a cross‑cutting theme, is about timing and speed. I think there was broad acknowledgment that the train is moving fast and decisions are already being made about who the winners and the losers are. We have to be really fast as a society, as policymakers, as decision makers, so as not to miss this window of opportunity, as I mentioned earlier ‑‑ to really put the right policies in place, to make sure that the next wave of technology benefits all people and doesn't increase the existing gaps and divides.
So how do we do that, particularly going back to the research questions, where we have such uncertainties, where we don't have a strong evidence base yet, where study A says one thing and study B says another? We have to act under conditions of complexity and uncertainty: there are many things we don't know, yet we have to make decisions. What does that mean, and what new types of governance frameworks do we need to help here ‑‑ to make sure that we build learning mechanisms into policies and frameworks? These are some of the things that emerged, in addition to the concrete action items that we see on the screen, which we can perhaps touch on at the end of the discussion. Thank you.
>> CARLOS AFFONSO SOUZA: Let me open the floor for comments. Go ahead on the left.
>> PARTICIPANT: Yeah. I ran a workshop two years ago where the same sort of questions were addressed that we have now, and there were a few outcomes. We had politicians, we had people from education, we had people from corporations ‑‑ everyone in the room, with an open mic session: where are we at this point in time? When we asked the people from education whether they saw any programs in schools and higher education on Artificial Intelligence, preparing youth for society, simply nobody saw any programs in high schools or in the education built after that, let alone Universities.
The politicians had absolutely no interest ‑‑ none of my colleagues did, and we had several different countries at the table, even the European Parliament: no interest from my colleagues in the disruptive techniques that are coming our way. In other words, we hear what politicians say, and they are basically not doing anything. The same went for the big corporations. There was a guy saying, "I am not going to say my name because I don't want to be known, but it is really worse than you think, where we are already." He didn't want to put the blame on the program, but he actually stepped forward and said that. In other words, we are in a disruptive phase where nobody knows where we are, and the people who could make decisions were, at least two years ago, not interested in engaging in the discussion at all.
In other words, how do we drive this discussion forward? There was really no one in the room ‑‑ and it was bigger than this one, better attended; I think there were 150 people ‑‑ who had an answer. In all the industrial changes, some people lose out, some people win, and others adapt. That's the phase we are at now, except that ‑‑ and the last thing we discussed is jobs ‑‑ nobody saw any new jobs, only disappearing jobs. And I think when the first industrial change started rolling, we could imagine the thousands of jobs the change would bring.
Imagine the new jobs that were created. Nobody can see what the new jobs are at this point in time, and I think that's something very important. I will end with a comment I read yesterday saying we need to hire people to make themselves dispensable, and then hire them again to make sure that they do it again. And that means that we are in a sort of creative society that will have boundaries that we can't imagine yet. It may also lead to different power sources, perhaps even different state perspectives; we are in a very disruptive phase. And I think when we end this phase ‑‑ at the IGF of perhaps 2025 ‑‑ we may not recognize what we are living in. I don't have an answer. Sorry. Just a lot of questions.
>> CARLOS AFFONSO SOUZA: Thanks for your comment. And since you are mentioning IGF ‑‑ and John and those who have been there; John, I believe, has very fond memories of that IGF and that lovely city. It is interesting to see how the issue of Artificial Intelligence has been around the IGF for a number of years now. But the discussion seems to be getting into a situation in which we have a number of workshops and Round Tables like this one at this IGF, framing slightly different scenarios of the issue of Artificial Intelligence. But let me go back to you: what do you see changing from 2015 to 2017, after those two years?
>> SEBASTIAN SPOSITO: That's a very hard one. I'm not in Artificial Intelligence; I am in simple workshops. But when the person to my right was presenting, the questions you asked were exactly the same that we asked in that session. So the question here is no different. I think that a session like you organized, that is something which may have changed in those two years. So there is more perception of the urgency. But just as an idea: in 2018 or wherever we are ‑‑ Hong Kong ‑‑ why not use a session like that, completely not related to the IGF, and bring all these minds together and find industry and find Government to come up with some sort of framework or some sort of action field, and take that out of the IGF. And I know that sort of program does not exist. That's something I am working on, and I try to make it a feasible thing, but I wanted to start that discussion here.
But that is something which, if you think it is of interest ‑‑ there are so many bright minds walking around here that may not have been at your academic conferences, but they are here. What can that change in the perception of a topic like this? It is one of the main topics that we should be discussing right now, and we are at the IGF. What can change at the IGF?
>> CARLOS AFFONSO SOUZA: This is a ‑‑ I couldn't agree more. And I think it really shows that we have the discussion being, I wouldn't say well represented, because we have a good number of events, panels and workshops. But by the end of the day, are we able to get a conclusion from all of those events that could really push this conversation forward, so that the IGF can live up to its mandate of being the hub that brings people together to really advance the conversation on one particular topic? Let me just see if you want to address something, and then come back to Claudio and to the one next.
>> CHINMAYI ARUN: I want to say that a purpose of this follow‑up is that we get to meet people like you and we don't rest in our academic silos. I wanted to make a side note as someone who has asked the same question: answers were not available for many years, as I know from my own and other institutions. But there is a value to that. The point is that we keep the question alive, and it works out, and we should continue asking and continue thinking about it.
>> CARLOS AFFONSO SOUZA: So just ‑‑ in the order. Malavika, if you want to jump in and then on the left and then we go back to Claudio.
>> MALAVIKA JAYARAM: A quick response on the education point. I think what we have seen is that the trend so far has been that everyone else has to learn how to code. Nontechnical people are asked to do the labor of learning how the technology works in order to safeguard our rights. But as these issues of bias and discrimination are getting more traction in the mainstream media, we are seeing more of a shift: why don't all of you computer scientists learn something about ethics and the law and governance and regulation? Because it has to be something that is joined by different disciplines. And we had a thread in the community where people said, we don't have academic curricula that address ethics and design; does anybody know of any? And it turned into such a huge flurry of information ‑‑ this happened on Twitter as well ‑‑ and there are now very, very rich resources of all the different syllabuses that are out there. That's one way we can actually help.
And I think the other point ‑‑ this again comes back to what Parminder was saying, and you mentioned it as well: you don't actually work in AI. I think that's a fundamental problem. None of us really work in AI as computer scientists, yet we are required to understand this. There is the labor of everyone around the world saying, here's what I am an expert on; how is this affected by AI, or how can I change it? The whole world has to work on AI, whether they like it or not. That's where the funding is, right? Or that's where the movement is. We have reached a point where it is hard to say you don't affect or are not affected by AI, or that you don't know.
Something you do is being touched by it, implicitly or explicitly. I think that's a fundamental problem for people who aren't experts and need to understand how to navigate it. So I think all these questions and this participation are so important, because this is a very reductive summary that we are giving of a rich conversation. If you look at the materials you will see words like colonialism and subjectivity. So it is something very, very rich. We would encourage you to play with the material that's already out there, and add some more.
>> (Off microphone).
>> CARLOS AFFONSO SOUZA: Thank you. Three comments here. (Off microphone).
>> PARTICIPANT: Thank you. My name is John Ken. I am a Google technology fellow at the European Health Forum. I had many comments, but I will try to make a few short ones. Thank you for putting this on; it is wonderful to be a part of. I think, to the question of how young people can be involved in Artificial Intelligence: we should ask them and let them tell us how they want to be involved. And we should do it on channels that they participate in.
A brief anecdote: the term "designated driver" in the United States was designed by the Harvard School of Public Health in partnership with the Writers Guild of America, where they had characters on TV shows use the term "designated driver" to show that driving drunk was bad. We can think about it in a big way without propaganda. And I think also that young people in this conversation have to be protected from exploitative experimentation.
It is already a problem, and it is going to be even more of a problem with kids on mainstream platforms. If we don't start regulating the way that social media is designed to be like a gambling addiction, I don't think that we will have the attention that we need.
And the last point I will make: we have storytelling on the board. I think that is really important. I don't know if this is too American an example, but the New Deal, as it is taught in high school, was the joint initiative that got people back to work. And I wonder if there is some equivalent in AI that we are missing ‑‑ a kind of unifying narrative of the future that we all want ‑‑ and if we can start there. So maybe you can tell us moving forward.
>> PARTICIPANT: Comments? Just a quick note on Government engagement and Government involvement. If we take it back to 2015, I bet that was an absolutely right assessment; no Government would even think of it. If we take the same measurement today, I think you might get a very different picture. Tomorrow at 9 we have a meeting with the Internet Society of China where we list policy initiatives from various sources, including governments ‑‑ policy initiatives from a report from the White House, from the House of Lords in the UK, concerning among other things AI. And that means that yes, it is exhausting and it is very hard, but I would risk answering the question that yes, the IGF is the right Forum for that. A meeting organized in 2015 on AI for inclusion ‑‑ we have the IGF, then Governments, and then it started. We are in the very early stages, but it is not the same: they are not uninterested. They might have been in 2015. Thank you.
>> CARLOS AFFONSO SOUZA: Thanks, Claudio. Let me remind you that Parminder is organizing a workshop on this as well. It is going to be Wednesday afternoon here. This is me doing publicity for a friend's workshop.
>> PARTICIPANT: Hello. From the youth ‑‑ IPR fellow. I have seen all these proposed action items, like from ‑‑ Global Pulse, the whole thing. And I am seeing these proposed action items ‑‑ education, Global South participation, influence ‑‑ and a lot of proposed actions that have been floating around. But we don't have ‑‑ we don't have anything from the Global South so far. So can we view ‑‑ how can we change ‑‑ and I think all this stuff is going to happen in the next years. And like, oh, the companies open their ‑‑ and their ‑‑ a universal basic income will be created.
So my question is: how do you stay interested in the future of AI when I can see a future where ‑‑ what we are doing is in your hands, to fight back for the next 20 years, and fight back against the companies? Fight back against all the bad stuff from the companies that own AI systems ‑‑ how do we fight this?
>> CARLOS AFFONSO SOUZA: Thanks. We will get a very short answer to that in a moment. And it is not a movie. I am going to take two quick comments and then we return. So KF.
>> PARTICIPANT: Yes. Thank you. I am Yana, innovation advisor at the global Forum for democracy, and I wanted to address this question of involving young people in education. I see on the slide there is revamping education. Yesterday we had an event, a free event, on digital literacy and harnessing Big Data, and this came up in our approach. In the session I suggested that you should add to, or complement, this STEM approach with HELLP. These are the kinds of developments that are missing in the STEM approach.
When you are talking about people in education, it is currently so old‑fashioned, and we are even going backwards: there are countries now, including France, that are beginning to forbid children from coming to school with their Smartphones. I think there is a very big importance to conducting some of these workshops that we are having here at the IGF in a completely different context as well.
Second, one of the gentlemen's ideas is to create a special AI event where we will get different strands of information. And coming to my second point, I also attended the social responsibility and ethics in Artificial Intelligence workshop this morning, and the panel outlined very clearly that the ultimate goal, or one of the goals, of Artificial Intelligence, whether or not it will be achieved, is to imitate the cognitive abilities of humans.
That is the right approach to Artificial Intelligence, but I want to ask a provocative question: if we create Artificial Intelligence that has the emotions that humans have, don't we have to ‑‑ beyond the Human Rights approach ‑‑ create an Artificial Intelligence rights approach? Thank you.
>> CARLOS AFFONSO SOUZA: That will take us to the great question about personality for autonomous systems and things like that. Let me go back to KF if you want to jump in.
>> PARTICIPANT: I think there are three layers of exclusion that truncate, or are supposed to truncate, our millennium goals and this theme of AI and inclusion. No. 1 is AI replacing human jobs, widening the income gap between the poor and the rich who have access to AI. But that's not a new issue: technology ‑‑ even before information technology ‑‑ has displaced people for centuries. And some countries have responded by developing the welfare state model. And I think some of the same responses can be developed.
The second layer of exclusion: some of us humans are biased, including people evaluating other people's work, and hopefully ‑‑ if you automate, you can detect that bias. And worse, this can ‑‑ there are discriminatory practices that we have engaged in unbeknownst to us. AI can be used backwards: AI can be used to detect discriminatory tendencies in us, and we can come up with better norms. If you look at the take‑aways, one of them is informational equivalence; that was my contribution. Basically what we are saying is that our decision making has been changed already, and better decision making with algorithms may worsen the problem that we had before unless we change the results. I mean, if we are aspirational, we are not science based. So if the results do not comport with what we wanted, then we can always tweak those. So the bigger question is: what keeps AI in check, right?
We have to go back to our ethics ‑‑ classical ethics stipulations on how we want to work together. I teach, and people say teaching is the best opportunity to learn. What's really meant by this is that teaching is difficult. So we think regulating AI is bad, but the truth is we don't want to teach ‑‑ to decide what ethical instructions we want to give to AI.
Now the third layer of exclusion is this: AI is a program. AI is a program. So, you know, AI in a few years can be sold to different people. People in African regions can run AIs in their homes. It is not rule making, access to the cloud. So the program itself may be readily available. What is not readily available is the data to train it. Right now a small number of groups and governments are building silos of personal data, and we don't know whether the data ‑‑ how or (?). Making that data more readily available to different people will decide who has access to the next level of AI. So that's why we have data protection law among the take‑aways.
So I just wanted to lay it out there that, you know, that's why I think the inclusionary strategy has to be multi‑layered. There is exclusion at an ethical level and exclusion at a data availability and privacy level.
>> CARLOS AFFONSO SOUZA: Thanks. Yes. So three comments. There. Can you take ‑‑ and then go back here.
>> PARTICIPANT: Yeah. My name is Bishka, and I work with a non‑profit in India called Point of View. So actually I'm going to try and frame this as a question, but I'm not sure how to. The most common example that I think of when I think about AI is actually auto correct, right? Which we have on our phones, et cetera, and we use all the time. So what I want to do is actually ask: how can we build a sort of ‑‑ I think this is really useful ‑‑ how can we build a research agenda for Artificial Intelligence that takes into account everyday users of Artificial Intelligence, and comes up with answers, et cetera, that are relatable to people's everyday experiences? So that we don't end up creating another digital divide with Artificial Intelligence.
>> Thank you.
>> MALAVIKA JAYARAM: Just to point out a final comment ‑‑ this is a different event: there is a workshop by APC and a law centre from India called Making Artificial Intelligence Work for Equity and Social Justice. That's on Wednesday. That's what I wanted to say. And the comment I want to make is a quick one. There was a question, I think, about what changed between 2015 and 2017. Being a person from an NGO working for justice and development, I would like to point out how I feel better informed about Artificial Intelligence and what has really changed in the last two years. Jack described how AI used to be a program that popped up on the screen. Now AI is completely invisible; it has kind of merged and disappeared into society. And that is something that we really, really ‑‑ it is not about a pacemaker in our heart which is connected to the network. It is about the way in which mergers are taking place between one sector and another ‑‑ mergers which actually are about controlling seed data and soil data, so that the planet can be controlled through Artificial Intelligence and markets can be controlled at scale. It is everywhere, and it is everything. It is not about it being everywhere; it is about how it becomes everything. Here is where I want to prioritize again, and I don't think I have the vocabulary ‑‑ we are all struggling with the vocabulary. We think about the user, and we think about a young person as a user. But I would like to ask the question: is it bounds and subjectivity, rationale, or is it something that is still out there? I think we need to understand subjectivity as this: we are all workers, we are citizens, we are everything. We are not just users. So Artificial Intelligence has taken even the job.
So at some level it is a question that is economic, political, social, et cetera. So I would urge that the research looks at agency ‑‑ something that is fed on structures ‑‑ so that the bounds and subjectivities that we are talking about, and self‑determination, are about a larger social structural phenomenon. This is not to deny individual rights, but to talk about individuals in relation to society.
>> CARLOS AFFONSO SOUZA: Thank you. So we are heading towards the end of the session. So we have two comments here. Now I go back to Sandra. Yes. Okay.
>> PARTICIPANT: Thank you. Thank you. I'm Kira. My knowledge on this topic is quite limited, but you have remarked on some questions that I would like to hear more about. Just to note ‑‑ Saudi Arabia granted citizenship to an Artificial Intelligence, and this raises, as in another session, some important questions: what are the rights of ‑‑ and, more importantly, what are the obligations? Because you saw in March how a bot, an Artificial Intelligence, was deployed and in 24 hours it became ‑‑ so questions such as: who could own Artificial Intelligence systems? Under which set of frameworks do they operate? And also importantly, what kind of repercussions should there be in case of accident? Thank you.
>> PARTICIPANT: Hi. My name is Hondura. I am a policy advisor. I very much like your idea about looking at the different dimensions of the relation between the problem description and the solution phase. And in this sense you already mentioned two dimensions: social solutions to technical problems, and technical solutions to social problems. I want to expand these two dimensions with two others. First, I want to encourage everyone in the room to think also about technical solutions to technical problems, which is, for example, how we had to address the address space problem of IPv4, and now we are already in transition to IPv6.
And the second thing I want to add is that we need to think about social solutions to social problems, which is maybe migration and integration. But in general I very much like your idea about the two dimensions.
>> CARLOS AFFONSO SOUZA: Thank you. You want to go ahead? Go ahead.
>> PANELIST: I would like to respond to one word. We want to emphasize that we are customers and not users. The relationship between the ICT industry and us as customers has changed, and that's something which I think needs a lot more attention. If we start seeing ourselves as customers ‑‑ being a customer comes with rights, while being a user comes with dependencies ‑‑ that may change a lot ‑‑
>> CARLOS AFFONSO SOUZA: Thank you. We are coming towards the end of the session. And I go back to my friends here on the left if you want to have any comments in the session. Sandra.
>> SANDRA: I just ‑‑ I am the director of the Youth and Media project at the Berkman Klein Center, and I am happy to see all these people in the room who are interested in youth issues. If you look at some of the biggest reports around AI, these issues are rarely mentioned. So I wanted to end, on my end, with three invitations. The first one: we are meeting with colleagues on January 15 and 16 for an event on youth issues and AI in Costa Rica. If you are from Latin America and interested in it, please let me know. The second one: someone mentioned that we should be talking to young people. At Berkman Klein we are going to start the next round of Focus Groups with young people. Please send your questions to me, and I am happy to ask them on your behalf. And the third point: someone mentioned that we should be doing more on education. At Berkman Klein we are codesigning educational activities around AI with youth. I could very much use your help. If you are interested in helping, let me know.
>> CARLOS AFFONSO SOUZA: Thanks. Anyone else?
>> PANELIST: I wanted to say, from a labeling ‑‑ regulation point of view, along the lines of the initial question that I highlighted here a second time, that the new rules should engage people from the start. I hear all my colleagues in the room, and I think the last two things are: what do we need to do immediately so that these norms are not frozen; and the other is ‑‑ for my Indian colleagues, we have a platform that you are all a part of. Perhaps we can discuss this a little further locally. And I look forward to meeting you at the IGF and discussing these questions more, and learning from your disciplines how I can be helpful and what I need to understand.
>> CARLOS AFFONSO SOUZA: Thank you for the comment about users. One of my colleagues, Alex, very famously said at Stockholm that only two industries call their customers "users": one is technology and the other is the drug industry. It says a lot about the power structures in which we view people as something ‑‑ as a mass of data. So I'm glad you brought that up. And following on Jaya's point, I am based in Hong Kong, and we have been running a series of events on AI in Asia, looking at whether the solutions are different in Asia and whether the problems that AI is trying to solve are different, addressing different societal balances. We have had one in Hong Kong, one in Tokyo and one in Seoul. We will be having one in India and possibly one in Indonesia this year. If you are interested, let me know; we would love to include people thinking about this across different disciplines. We are doing one on evidence‑based approaches and how you can measure the impact of AI, and that's 4:30 to 6 on Wednesday. And we are presenting our findings at 11:15 on Wednesday. Guys, this is a quick note to say a big thank you for the comments and suggestions as we move this conversation forward. Sebastian, you want to say something?
>> SEBASTIAN SPOSITO: I'm sorry, but I want to say that I am a user. I use the transport system, and I think I am a user of the Internet. If we want to have that discussion, I think it is important to say it ‑‑ but don't forget that I don't pay for everything. I use the Internet system. And therefore I am not just a customer; I am much more than a customer. I am a user. Thank you.
>> CARLOS AFFONSO SOUZA: We have a few minutes, and it looks like we are getting into a discussion on the topic of the "user". And Sebastian, I hope you are not addicted to private transportation ‑‑ quite the opposite. Let me go back and have a question here, a comment here, and then one in the back.
>> PANELIST: Yes. Thank you very much. Neither customer nor user works. User implies a particular relation, and customer means it is clearly business. I suggest to think of it as participant or cocreator, especially when it comes to Artificial Intelligence. This would be my comment.
>> PANELIST: Okay. Sorry. User is ‑‑ user is a definition of a human being in relation to the machine, which I don't like. You are not a user of a machine; that is centered on the machine. It is not very ingenious, and it does not work for nonhuman beings. There is customer, and there is citizen ‑‑ and customer and consumer, citizenship, do not work everywhere; we are in a new thing here. But I think we need to reclaim some parts of citizenship. It is simply about everyone being equal and everyone having rights which come from a social contract. And I think that is a simple concept everyone agrees to ‑‑ so that's what matters about citizenship. And, also important, none of those rights can be removed. The concept of citizenship is also a good one.
>> CARLOS AFFONSO SOUZA: Thank you. Comment here on the left. Very quick.
>> PANELIST: Just very quickly, I think that ‑‑ I'm sorry, he gave the key answer here, and that is: if we understand this whole thing as relational, then it doesn't really matter. If we are talking about a computer that is generating data, it is the relations that matter, and those relations are what we should focus on. Not the individual node ‑‑ as valuable as human individual life is, it is the relations that are going to be changing, and that needs to be our focus.
>> CARLOS AFFONSO SOUZA: Thanks for that. I'm afraid we have to wrap up. I'm sorry; we are sort of running out of time, and I believe there will be another session in this room. So thanks to Sebastian for opening up this bonus track of our discussion on the concept of the user. And I really have to thank you all for your comments and suggestions. This is a quick reminder that you can find ‑‑ you can see there the link for the Web ‑‑ for the event in Rio, the symposium. And if you want to know more about the Global Network of Internet and Society Research Centers, reach out to us. This was a pleasure. And my big take‑away is that we need to make the IGF a place where this conversation is organized and coordinated, so that we can really advance it. Thanks for taking part in the dialogue. Hope to see you at some other AI event quite soon. See you next time.
(Applause.)