The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> MODERATOR: Hello. Hello, everyone. Hello. Good afternoon, everyone. Good afternoon. I work for the Swiss Office of Communications, and I was asked to moderate this session. Thank you very much to the session organizers, first of all, for this honor, and I will try to give you a short introduction to this 90-minute session. We are going to discuss AI, Artificial Intelligence, and the future of diplomacy: what's in store.
Later on, Katharina will give you more details on how the session is structured, but first of all, as somebody who is more or less engaged in diplomacy, although from a non-diplomatic ministry, I wanted to share with you some personal impressions about Artificial Intelligence. This leads later on to how the session is structured.
AI as a topic is, of course, something which is on the agenda of all of us who are dealing with international topics related to digital policy, and we are seeing that AI comes up in many different formats, be it in the ITU, here in the IGF, in the Council of Europe, and in many other places. There's a plethora of places where AI is really popping up as a very important topic, even in relation to lethal autonomous weapons.
So it goes from what's the impact of AI on freedom of expression and democratic rights, as we are discussing in the Council of Europe, to much more serious, let's say more critical, topics: to what extent autonomous weapons can be something that can be admitted by the international community.
We heard the U.N. Secretary-General, only 10 days ago or one week ago in Lisbon, calling for the prohibition of lethal autonomous weapons.
Then we have, of course, Artificial Intelligence and its geopolitical implications. We are seeing a run of national and regional strategies to position different regions, different economic and military alliances, on this issue, because, as will be mentioned later on, Artificial Intelligence is being seen as one key technology which in the future may change the power structure of our international relations.
Lastly, but not less importantly, Artificial Intelligence is of course also an instrument for practitioners, those working in diplomatic environments. At least in my case we don't use it so much, but I'm aware of colleagues in other parts of the Swiss administration who, for instance, use algorithmic tools and big data to monitor the perception of Switzerland in other countries which are important to us.
So, I guess, with this, also perhaps a plea to all of us to try to distinguish when we are talking about specific Artificial Intelligence, Artificial Intelligence related to specific functions, and when we are talking about a more future general Artificial Intelligence, which, at least as far as I know, is still some decades away.
I would like to introduce also ‑‑ if the music allows ‑‑‑
[ Music ]
After this commercial break.
We have three excellent leads for these three streams of work. First of all, we have Mike Nelson, here to my left, who also teaches at Georgetown University, and who will be leading on AI as a topic on the international agenda. I'm very interested in learning from you.
Then we have Claudio, to my right, a professor at a state university in Brazil, who wears many other hats as well. He will be leading our work on Artificial Intelligence and the global geopolitical environment.
Lastly, Katharina, from the DiploFoundation, will introduce us to Artificial Intelligence as a tool for diplomacy. I think, perhaps, Katharina, you want to give more details on how we structure this interactive session, and then we continue.
>> Thank you, first of all, for coming. I saw and heard there are a number of people that tried to get into this session and couldn't. Consider yourself privileged to be here. This session is a little different. We will try to be as interactive as possible.
After I finish explaining the format, you will hear a pitch on each of these three topics, and we will split up into three groups corresponding to the topics. These groups will be guided by us but led mainly by you, to see what your knowledge is and what your position is on these questions. We will have 30 minutes for the interactive part.
The way we will do this: one group in the back, and we will turn the chairs around so you can sit on both sides of the tables; one group over here; and a third group will try to find space outside to discuss. After all this commotion and 30 minutes of discussion, we come back here.
Then, again, it's over to you, because each group needs one or two rapporteurs to report on its discussion, and we will hear your answers. Be prepared for quite some movement and interaction during the session.
>> MODERATOR: Thank you so much, Katharina. I think we start with the pitches. Claudio, do you want to go first? It makes sense that we look at the geopolitical environment with Claudio.
>> CLAUDIO LUCENA: Thank you very much. We are trying to adopt law enforcement measures; that's the connection. We are approaching the future of diplomacy with a slightly different perspective. I think it was very interesting that this disclaimer was mentioned in the beginning, since we didn't make it in the session description. I notice an interesting point: we're not interested in general AI, we're looking at algorithms and mechanisms. To be specific, we're looking at big data and how this impacts the geopolitical environment.
For our group, I'm looking at AI-related initiatives. We are going to look at or discuss some institutional ones. You may have heard about the Montreal Declaration, headed by the University of Montreal, with a more ethics-oriented approach. You might have heard about the Toronto Declaration earlier this year. These are instruments we will discuss.
You might have also heard about Ethically Aligned Design, from the Institute of Electrical and Electronics Engineers. These initiatives do not necessarily plan to regulate or intervene in a hard-law way in Artificial Intelligence, but they do touch it from some perspective.
We also have a number of national strategies, as I mentioned. They are already there; many are drafted, some are already finished. This has been a systematic movement over the past two years.
And the third group is the international ones. We have a joint initiative, AI for Good, coming from the ITU in connection with other organizations, including the World Health Organization, this year. In legislation, properly speaking, we have the report from early last year, it has been that long, a proposal from the European Parliament that touches some aspects of Artificial Intelligence, going as far as promoting, in a controversial provision, paragraph 59, legal personality for robots.
These are the instruments we will focus on, concentrating much less on what they aim at than on what they represent for the developments in the geopolitical scenario, in the international arena. That's pretty much our focus.
>> MODERATOR: Perfect, Claudio. Thank you so much.
Now it's the turn for your pitch. Let's see how many people you can convince to take part in your group, Mike.
>> MIKE NELSON: It looks like we have a good group around the table. I'm Mike Nelson, and I started working with Senator Gore in 1988. Since then, for at least 15 of those years, I have kept a list of the most emotional buttons that get people excited about internet policy. They all begin with the letter P: privacy, policy, protectionism, policing, psychology, procurement, payments.
It turns out, when you take this list I've got, almost all are hot buttons when we talk about Artificial Intelligence. That's the last time I will use the phrase "Artificial Intelligence," a term that's been around since 1958 and has at least 37 different definitions.
Like you, I will focus on the question of big data and how machine-learning allows us to pull out insights. I'm not going to talk about robots or vision systems or speech algorithms; I will talk about data and how you do something creative with it. That power to give companies and individuals superhuman capabilities is why everyone's freaking out. That's why there are at least five sessions at this meeting talking about AI when they really mean machine-learning.
This meeting is important because it will inform discussions that go back to national capitals and other organizations.
Let me give you my list of how national governments will try to understand machine-learning and how it impacts what they do.
Obviously, the one that has gotten a lot of attention here is about content: how will we use machine-learning to differentiate between legal content and illegal content?
There's another big issue I haven't heard mentioned yet: competition and antitrust. There's a lot of concern that the biggest companies, who have the biggest amounts of data, are going to have data dominance. They are going to have the best, most powerful machine-learning algorithms because they have the data. This isn't just Microsoft and IBM; this is Baidu and some of the Chinese companies as well. The third issue is privacy: is machine-learning going to allow companies and countries to learn so much about us that we will have a sense we have no privacy?
Even if we haven't shared information about our private habits, they will infer things we don't want to share.
Fourth, cybersecurity, which is where the cloud comes in. We provide protection to 12 million websites. That means we have data on what 2.5 billion internet users are doing every month; as a result, we can use machine-learning to find out where the bad guys are.
The most recent issue: elections, and the use of social media to alter election results. Let me finish with the last few. Number 6, obviously, machine-learning and job creation.
Number 7, machine-learning and defense. There's a concern that foreigners will take over the technology and use it for military advantage.
Number 8, you mentioned it already: policing and the use of machine-learning in detecting crime on and offline.
Then the last is another relatively new one: what are we doing to our kids with smart toys that interact with them as if they were human or superhuman? So, anyway, those are nine issues I hope come up. I imagine several of you will bring a 10th and 11th and 12th issue to the discussion of what national governments are doing. Thank you. Sorry it took so long.
>> MODERATOR: Thanks so much, Mike.
Now, we have Katharina.
>> KATHARINA HONE: I guess the pressure is on. I will focus on AI as a tool for diplomats and policymakers. Let me start with three preliminary observations, or three questions.
My first question is, are we starting from the right kind of question when we look at AI applications as a tool for diplomacy and policy‑making? Are we looking at what is technically possible or are we looking at the tasks and problems we're facing in these areas?
Second, I want to look at the tasks of policymakers, the things that stick out like a sore thumb.
Third, a human approach, in the sense of asking: are we looking at AI, Artificial Intelligence, or are we looking at intelligence augmentation, services that augment human intelligence, or a totally new form of intelligence?
Let me talk about the three AI tools for policy and policy-making that I have in mind.
Usually at the Foundation we talk about the various functions of diplomacy and how various tools can support those functions. Here, I have broken it down further. I will talk about speaking, writing, and researching.
As you can guess, we're in the area of natural language processing and machine-learning: the ability to understand language and interact with human beings.
For speaking, we're looking at chatbots, when it comes to first contact with citizens, and automatic messages in terms of public diplomacy or interaction.
For writing, we've seen a great example this year: Project Debater from IBM, which was able to hold a reasoned debate with human beings.
Abstracting from that, I'll be talking about the possibility of AI writing speeches, how far this can be taken, and how far we are comfortable with this application of AI.
Lastly, researching: AI in the preparation of policy talks and interaction. We've seen a couple of examples of this. One recent example is the Cognitive Trade Advisor, an AI that works with natural language processing, looks at legal documents, and is able to give the researcher a very quick overview of specific topics across the documents available for research.
The three questions I think we can focus on: what are the opportunities and limits given current developments, and what constitutes meaningful human control? I think this goes beyond lethal autonomous weapons. The question comes up: how happy are we to relinquish intelligence to a second entity, how happy are we to outsource the tasks of speaking, writing, and research to something not human?
Lastly, what kind of dangers and pitfalls do we see as these tools become more prominent?
>> MODERATOR: Perfect, Katharina. Those were the three pitches. Now, I think, we break up into the three groups. Who wants to discuss geopolitics with Claudio?
>> Maybe we should see which group is which, and the smallest group leaves the room. Ask them.
>> MODERATOR: Perhaps Claudio may stand up, and those that want to follow geopolitics stand up, and then we do the same with Mike and Katharina, so we see more or less the proportions. So: geopolitics.
>> MIKE NELSON: The stakes are high.
>> MODERATOR: Let's go with Mike.
>> MIKE NELSON: Is he learning everything?
>> MODERATOR: This is a bit smaller. Then we have Katharina, AI for diplomacy.
>> KATHARINA HONE: Tools.
>> MODERATOR: Okay.
[ Audio quiet ].
>> MIKE NELSON: The bar. Back that way.
>> MODERATOR: Lead persons, please come back at 52 minutes past the hour.
[ Groups disperse ]
[ Audio quiet ]
[ Standing by ]
>> I'm so sorry.
>> MODERATOR: Dear colleagues, dear friends, friends in the middle of the room, gentlemen.
Okay. I think that you were so engaged that you would have continued for at least one hour more in these breakout groups, but we have limited time, so we have to continue now in plenary. I would now ask, in the opposite order, the leads of the different breakout groups to report on their insights, on what they have been discussing in such an engaged manner during the last 30 minutes.
So, I would start with Katharina. Are you ready?
>> KATHARINA HONE: Well, I'm not going to summarize, which I'm happy about; it's a formidable task. I have two volunteers who will summarize. Shall we start with you? Yeah.
>> So we were looking at AI tools. One of the things we looked at is the use of AI in speech-writing and research. We had a lot of great ideas. One is translation: the possibilities of Google Translate, and of providing customizable translation, which works right now.
Of course, we looked at how to take account of data and audience analytics while doing speech-writing, and, as was mentioned, this can open up a lot of possibilities in diplomatic writing. Another is AI in the preparation of negotiations, using trade agreements as an example: it takes a long time to go through the whole process, and using AI would cut down on the research by training machine-learning models on it. Those are the three tools we looked at.
>> Well, another point that was discussed during the consultations was the disarmament question: the use of Artificial Intelligence in the development of non-lethal weapons poses ethical questions, and there is reluctance about how to regulate that, because there are interests from the private sector and from other countries in this regard.
There was a discussion about what to take into consideration when writing a speech: maybe gather information first about the audience and then build in the information, based on a mechanism of fact-checking what we are going to deliver, and also taking sentiment into consideration, adopting a sentiment-analysis approach to target the audience correctly.
There were other questions, coming from India, on how to use Artificial Intelligence in public policy-making.
Another point was about chatbots for customer services.
And the last point: will using Artificial Intelligence tools make people happy? That question remains open.
>> MODERATOR: Okay. Thank you so much. For me, I'm looking forward to benefitting from automated speech‑writing. That would be very practical.
Perhaps, Mike, in about three minutes, you or any of yours could summarize your discussions?
>> MICHAEL NELSON: I will talk for 10 seconds to say that when I introduced our topic, I laid out nine possible topics and predicted we would have a 10th, 11th and 12th, and we did. We mostly tried to focus on machine-learning and big data, and on how government can do the right thing or the wrong thing, both by itself and in encouraging the private sector to do the right things. Anyway, we had two people that didn't move fast enough and got volunteered.
>> So one of the points that came out was transparency, especially in government: having government lead by example and establish trust with the population. That goes through education as well, showing the public how the algorithm is made, and raising awareness of digital literacy.
We also talked about responsibility: if AI is managing something, who will be held responsible, and what is the role of the courts in that case?
Would you like to ‑‑
>> So, among many other things, we reached a consensus on the fact that machine-learning systems can be used, or seen, or perceived as an opportunity to detect existing biases; and after we have used machine-learning systems to identify problems, the stakeholders can then come together to find solutions.
>> MICHAEL NELSON: Any particular examples of successes? We did have some nice examples, both of successes and failures. Do you want to speak to that? No?
Just on the successes: we have seen machine-learning used quite effectively in cybersecurity and policing activities. On the failure side: school admissions, monitoring infrastructure, and trying to make sure everybody has good roads. There are a lot of places where, if you don't get a full dataset, you end up with a biased conclusion. That was one of the recurring themes.
Thank you so much for that, and I hope we kept within our three minutes.
>> MODERATOR: You did great. Thank you so much, Mike. That sounds interesting. I think, Claudio, are you ready? Do you have your mic?
>> CLAUDIO LUCENA: No, but my colleagues do. We don't have a way to see how this will develop in the future. The idea was to look at which issues governments consider relevant in the strategies they're devising now, and then at possible alternatives to build, consolidate, or improve their geopolitical position in the future. So, Elena, would you like to start?
>> ELENA: Yes. We had a few inputs from countries on the situation they face with AI right now, from Australia and India, and what came out was that there is not a lot of regulation of AI, except for initiatives in the EU.
We wondered whether the fact that the European Union would like to have some legislation on Artificial Intelligence might scare off investors; there is a paradox there.
We also talked about digital capacities and the fact that some countries could be doomed to stay out of the system if they don't have access to them. And countries need to ask themselves if they actually want to keep the balance, or whether Artificial Intelligence is a way to change the balance. I think that's it, if it's accurate.
>> MODERATOR: It is accurate. If you allow me one second more: concerning the access-to-data point, a very interesting proposition came from our friends from Amsterdam and a professor from the United States, about ways to access the data held in corporations. This could be food for thought. Could you elaborate a minute or two on that?
>> Yes. We're working on a protocol, or practical platform, for companies to exchange data. For example, we have a stock exchange, right? We don't have a data exchange yet, while there's a strong interest for any country, any company, to exchange more data.
One example I gave is flight data: airplanes have a black box and generate data for security purposes and efficiency purposes, but airlines just analyze their own data. If you aggregate this data to make more data, it will benefit companies. You need a protocol, a platform, something very practical, across every sector, to exchange data. That's what we're working on.
>> MODERATOR: Have you written something up on this?
>> Yes. We call it the Amsterdam Data Exchange. If you want to know more about that, get in touch with me.
>> MODERATOR: Okay. We have heard the reports from the breakout groups, and we still have 23 minutes to discuss. The floor is open, really. Does anyone want to break the ice, react to something which has been explained now, something still lingering from the breakout groups, or something that has not been discussed yet but is related to the three main topics we were handling?
I see Mike, who is eager to take the mic.
>> MICHAEL NELSON: Not because I want to say anything, I want to ask something. How many people here are working for a government or with an intergovernmental organization?
Okay. So, I've been to a lot of U.N. meetings and ITU meetings this year, and I keep hearing governments say, we're really concerned about Artificial Intelligence, we've got to do something. Have any of you figured out what you need to do? This is a serious question.
What tool do you have to control machine‑learning and Artificial Intelligence?
>> I would say, first, we should start by defining what Artificial Intelligence is, and what comes out of it, because Artificial Intelligence is a theme, and what comes out of it are the tools. When it comes to tools, you have machine translation, speech recognition, autonomous weapons, and so on. There is confusion about Artificial Intelligence. It has probably been overhyped during the last few years; everybody has given it more importance than it deserves. No one can explain it; it looks just like a ghost, but it needs to be defined first.
>> MICHAEL NELSON: That was actually the most important low-hanging fruit our group identified. The second one was R&D, into which countries are pouring a lot of money. Other tools you have here?
>> I disagree with my colleague.
I think we need to pay attention to Artificial Intelligence. In the working group we were talking about constructive criteria to discuss several applications of Artificial Intelligence. I must say, specifically in the field of lethal autonomous weapons, we already have international criteria that may apply directly to the design, deployment and use of lethal autonomous weapons. There are many potential applications; some are beneficial, some we don't know about yet. At the rapid pace at which technologies are evolving and transforming our lives, I would say we do need to pay serious attention to what's going on. Thank you.
>> MICHAEL NELSON: Another lever we talked about was procurement. Governments can make sure they're buying transparent tools and buying without discrimination. That was another lever. Back here.
>> I would say that, given every country has identified workforce as an issue, something every country can do is make investments in its workforce. Make R&D investments and education investments if you expect this to be transformative.
>> Building off that, we talked about government being a role model in our group, leading by example in terms of its own transparency: "this was an algorithmic decision we made on your behalf as the government." If the government can lead in that way, consumers will in turn demand and expect the same from the private sector.
>> MICHAEL NELSON: Thank you very much.
>> I just want to identify, not something our government is actually doing, but a problem we're struggling with. That is, as a government, we don't have the resources to hire the best experts in the field so as to actually come up with solutions. I don't know if that's a general issue with governments or not, but it is something we're struggling with. To give an example in the field of cybercrime: our federal cybercrime unit is down to its last two people, because the others are working for private companies where they get paid a lot more.
>> MICHAEL NELSON: That's definitely a problem we see in Switzerland, for instance. I saw a hand here. You want to chime in?
>> Yes. Thank you. My name is Maria. I represent the government of Quebec. In the area of Artificial Intelligence, in December we will launch an observatory on social impact. We give money to different organizations who work on the ethics of Artificial Intelligence. For example, we have a group in Montreal that did ‑‑ and presented on December 15th, and they are very involved in ethics issues, in how to use Artificial Intelligence in a good way.
We give money to different groups who are working on these issues, and here at UNESCO we are involved in different activities and different steps; maybe we're going to see if we can have something, you know, maybe a declaration or something like this. Ethics, I guess, is really important.
When we say it's important, it's not just words; we do things. Yeah.
>> MODERATOR: Thank you so much. Perhaps two follow-up questions. How did it work with other stakeholder groups beyond government? And how do you interact in the international arena? Are you following these issues here at the IGF and also elsewhere? How do you tackle that?
>> We are here at UNESCO, so we take part in different activities and discussions, and in Quebec we have different roles and experts here, too. So, it's about how the government can be involved. But we need an international guideline, you know, like a convention. We have to be all together to decide something; it's not only Quebec.
We observe, and in our territory we are working with different groups, different experts, to think about ethics, to think about laws. I guess we are at the beginning of this big movement; it's all over the world, and we are not alone in these issues.
It's important to know, when you say government: the government said, okay, we like ethics, we want ethics, and we do things for that. It's not just a word for us. I don't know about the others.
>> MODERATOR: I think Katharina would like to react to this?
>> KATHARINA HONE: Not directly, but on this question: what governments are we talking about? What countries are we talking about?
In our group discussion, one important question that came up was the issue of divides. Applications and tools for diplomats are generally being used, but what about developing countries?
What about small states? We're talking about a lot of investment, and the thinking that goes into it, which some countries can afford and others can't. There's a question of the divide and, from our perspective at the Foundation, a question of capacity development.
We talk about developing AI tools, AI applications. Are they being shared? How are they being shared?
Can we think of mechanisms to do that?
One important thing to keep reminding ourselves of in this room: what governments are we talking about? What countries are we talking about? Are we keeping in mind those that might not have the same opportunities to start investing and experimenting in these areas?
>> MODERATOR: Any reaction to that? Because, at least as far as I'm aware, there are some strategies, some countries from, let's say, emerging economies, who are taking a lead on this because they see it as an opportunity to leapfrog in terms of development. How do you see this question? If there are no reactions to this: do you agree with the idea put forward that we need an international convention on Artificial Intelligence?
And what kind of convention? Who will be drafting that? Who will be discussing that? So, is there any reaction? Yes.
>> Looking at how this ecosystem is evolving: in fact, in that small group we came up with great ideas about how it can be used. I think over the next year or so we should be able to get a clearer idea of where it is, and probably look at doing that then. It's premature at this time, unless we want to set out a vision of what we want to do.
>> MODERATOR: Any other opinions on this?
>> I'm Lynn, from an NGO in Finland. From the NGO side, there was the Toronto Declaration on protecting the right to equality and non-discrimination in machine-learning systems. I agree with the previous speaker that we don't really have a comprehensive understanding of AI, or intelligent systems, or machine-learning, or how it will change the world and the way we do things. In general, I would say that when we start using various AI or machine-learning systems, it's important that we iterate again and again and keep understanding what's going on, instead of trusting that this is going well.
There will be unintended consequences; there will be biases we don't realize are there, et cetera, et cetera. It will be very much an educational process. Or, as we say in Finland, Siberia educates.
>> MODERATOR: That's a good opinion.
>> I would like to react to the argument that we should be fully aware of what Artificial Intelligence is before trying to put some limits on it. I think it would be too late: when we realize what it really is, we may not have the option to really do something about it. I think we are already aware of some principles that should be upheld in the face of this trend.
Although we do not fully know what Artificial Intelligence is going to be in the future, we should affirm some principles that will stand whatever the evolution of Artificial Intelligence in the future.
>> MODERATOR: What principles?
>> Well, I don't have a list of them, but, for instance: the decision of life or death shouldn't be given or entrusted to a machine. There should always be a way to retake control of a process that has been given or entrusted to a machine; it is not a one-way street, and we should be able to get back control when necessary. And we should be aware that a machine is being used when a decision is made.
Those are like three principles that come to my mind right now.
>> MODERATOR: That's a pretty substantive list of very basic principles.
I think our Finnish colleague wants to chime in again.
>> Yes. There is already a boatload of international law, of international conventions, so maybe it's more about how to apply those to this new machinery. If we managed to make a world where, in more nations, the international conventions on human rights were followed better, it would be a much better world already. I don't think we need so many new standards; we should apply the existing ones more and better.
>> MODERATOR: Thank you so much. I see some colleagues are leaving the room already, because we are nearing the end of this session, so I would ask all of you, especially the leads: what do you see as the possible follow-up to this? How do we avoid coming back in one year in Berlin and discussing the same thing again from scratch? How do you see this moving forward in the next 12 months? How do we make a difference with the rich engagement we had in this room, and in the other sessions about Artificial Intelligence happening during these three days?
Anyone wants to take the lead?
>> CLAUDIO LUCENA: I'm moderating another session in nine minutes. First thing: when Mike made the second call, I realized that our group, in spite of the fact we were dealing with how states view AI and how they view it as a tool to perform in the international scenario, was not left with many of the people who raised their hands as working for governments or the diplomatic services.
This may be one takeaway: if governments came back in a year with a clearer view of which issues they consider could impact them as state institutions, concerning the application of, I don't like to use the expression AI again, machine-learning analytics of big data, and of how this decision process matters to them as states.
A second takeaway from the rich discussion we had is the contribution on data-exchange mechanisms brought by our friend from Amsterdam. The idea of cooperation mechanisms could be a takeaway for next year. Thank you very much.
I apologize for having to leave for the next session.
>> MODERATOR: Thank you for these substantive ideas. I think Mike wanted to talk?
>> MICHAEL NELSON: Just a big thank you to those who broke out in our session and bigger thank you to those in government who chimed in. When I was in government it was always safer not to open my mouth.
I think next steps are up to each of us, to go back and take some of the things we've learned here, particularly some of the references, some of the examples, and share those.
In our group, as I mentioned, I heard over and over again: Stephen Hawking thinks Artificial Intelligence will take over the planet, we need to do something about it. We need to have a reasonable discussion about it and understand what can and cannot be done. To that end, tomorrow you can go to a Best Practices Forum session on the internet of things, big data and Artificial Intelligence.
We have a report on the IGF website. You can engage in that discussion, point us to more sources of good information, point us to more examples. It's a very early draft, so it's not too late to have a huge impact on a paper that, I think, will be read by a lot of people. It's being read right now in Dubai, where the ITU is trying to figure out what to do about Artificial Intelligence.
>> MODERATOR: That's a very reasonable, sensible proposal, to look into that Best Practices Forum paper, which I glanced over; I saw a lot of interesting and useful information there, including work, for instance, in the Council of Europe.
I think that surely, Katharina wants to give us final remarks from her side.
>> KATHARINA HONE: Very quickly because time is advancing.
I have two points to conclude. We need to seriously think about meaningful human control, not only in the area of lethal weapons but in the area of various AI applications: how much control are we comfortable relinquishing in decision-making, what cognitive ability do we want to retain, what kind of analysis are we happy to outsource, and what kind of analysis should come from us? How comfortable are we with that?
The other point goes back to the question of next year: will we have the same kind of conversation? My answer: I would encourage more experimentation. We are seeing lots of small-scale projects and pilot projects. I think there needs to be more of that, more exploration, trial and error, when it comes to tools for policy-making, and we need to learn from it. I think that's the only option we have going forward.
>> MODERATOR: Okay. Thank you so much. I think we have three minutes left, but if we are done with the discussion, we can give you back two minutes of your life now. Unless there is any final burning comment you want to make.
Otherwise, I suggest, and I urge you, to follow up on this good work and these good discussions, and to profit from the summary of this session that will surely be made by the platform. With this, I'd like to sincerely thank you for your engagement and for being here this afternoon. Thank you.