The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> MODERATOR: Good afternoon. We should begin. Welcome to the workshop on AI: Intelligent Machines, Smart Policies. We only have 90 minutes on this topic. We know that AI is getting into our daily lives, and with that kind of progress of AI, AI also (static). From the policy side, the OECD itself has had two conferences on AI ‑‑ their conference in Paris, and in 2016 the Japanese Ministry of Internal Affairs and Communications had a conference in Tokyo. This workshop is co-organized with the Japanese Ministry of Internal Affairs and Communications, and I would like to say thank you to the ministry for supporting this event. We have great panelists with us today. Because we only have 90 minutes, we would like to focus on the policy agenda and policy priorities on AI today, and if you would like to make comments, first ‑‑ (?). So, today we are focused on the issues related to policy.
First I would like to invite the Vice Minister of the Japanese Ministry of Internal Affairs and Communications to provide an overview of (?) AI's impact on society and the public policy responses that are needed to ensure that AI benefits people widely. (?), please.
>> Thank you, Mr. Chairman. (lost audio) .
>> -- telecommunications. It is my great pleasure to give a speech on this occasion. We, the government of Japan, would like to pay our highest (?) tremendous achievement. We especially thank you for ‑‑ giving the thought-provoking speech on AI. Today I would like to talk about ‑‑ Japan and address the ‑‑ issues of AI. Next, please.
First of all, I would like to explain the background. As you know, research and development on the use of AI have been advancing rapidly in recent years, and they are expected to progress dramatically in the years to come. With this development, AI systems will be connected to each other over the internet or other networks. We call this AI networking, and it will evolve in the future. Enormous benefits are expected for humans, society, and the economy, such as AI's contributions to solving various ‑‑ that the international community is confronted with. But there are also some risks associated with AI, such as lack of transparency in judgments and loss of control.
From the viewpoint of promoting the benefits as well as mitigating the risks, it is necessary to ‑‑ social, economic, and legal issues. In particular, since services using AI systems will be provided across ‑‑ borders by networks, it is essential to have international discussions to deal with such issues.
Recently there have been a lot of discussions (?) of AI around the world. Here on this map are some of them. There are ‑‑ by private sector groups such as the ‑‑ (?) institute. There are also studies at the government level, such as one by the White House. Furthermore, discussions are going on.
I would like to show you some of the international discussions. Last year, Japan hosted the G7 meeting in (?). The then minister proposed that the G7 countries take the lead in international discussions (?) AI guidelines, which are guidelines that AI developers should take into consideration in terms of promoting the benefits and mitigating the risks for society and the economy. The ministers agreed to her proposal.
Next, please.
At this year's meeting in Italy, the ICT ministers ‑‑ continued the discussion about AI in the ministers' declaration. It embraced the importance of AI, saying we recognize that AI could bring immense benefits to economies and societies ‑‑ and drive innovation and growth in today's economy. We believe that all stakeholders have a role to play in enforcing and promoting ‑‑ perspectives.
Next, please. Moreover, as was explained, a few weeks ago a conference on AI was held in Paris ‑‑ (?) communications. I, as a participant, felt that we had many useful and fruitful discussions about various aspects of AI, including policy matters.
Next, please.
The groups and organizations in the world are now discussing the social, economic, and ‑‑ issues of AI from their perspectives. Today I would like to show you a study and a discussion conducted in Japan. In 2016, my ministry established an advisory expert group to study social, economic, political, and other issues related to promoting AI networking. Many of the members are from ‑‑ and private sectors. One thing the conference is doing is to assess the socioeconomic impact. This is an ‑‑ (?) over the conferences this year. It offers inputs, as well as benefits and risks of AI networking. There are two types of ‑‑ one, (?) from the viewpoint of AI developers, and ‑‑ from the viewpoint of AI users. The assessment is conducted in several areas based on ‑‑ (?).
Next, please. The conference drew several conclusions from the scenario analyses. I will show you typical ones among them. First, although AI systems each have positive impacts independently, collaboration among them (?) a more significant impact. Secondly, although employment opportunities in some business fields may decrease, a transfer of employees to higher-value positions or the ‑‑ of employment opportunities is expected.
Third, throughout the different cases, we can find common risks, and it is important ‑‑ with them. Next, please.
Some people are afraid that as more AI systems or robots are used, humans will be replaced and may lose their jobs. On this employment issue, the conference made some suggestions. For example, since there are some workers that might be affected by the spread of AI, enabling new opportunities to be created is important, as well as education and human resource development for job seekers to allow a smooth transition to the new opportunities.
Next, please.
One other thing the conference has done is to formulate the draft AI ‑‑ guidelines for developers, in preparation for international discussions such as the G7 meetings and the OECD.
Next, please. The basic ‑‑ of the draft guidelines are these five items. In particular, the first one, achieving a human-centered society, is very important, as many other groups have pointed out.
The second one is that the guidelines are non-binding, and sharing best practices among stakeholders internationally is very important. In particular, I would like you to note that by saying non-binding ‑‑ law, the conference does not intend to regulate AI. The core of the guidelines is the principles: there are nine principles that developers of AI should take into consideration, which are collaboration, transparency, controllability, safety, security, privacy, ethics, ‑‑ (?) and accountability.
Next, please.
This shows what each of the principles means. The first one is that AI developers should pay attention to the interconnectivity of AI systems; developers should pay attention to the verifiability of the inputs and outputs of AI systems and ‑‑ over their judgments.
I think these principles can be helpful when you consider AI's social (?), so I hope they will be shared among stakeholders. Next, please.
As was mentioned at the ‑‑ (?) meeting in Italy, this is an example of events to foster the exchange of views on AI. From now on, it is ‑‑ to have further information sharing and discussions to deepen our understanding of the opportunities and challenges brought by AI. This is what the G7 ‑‑ emphasizes. I hope that today's discussion ‑‑ information sharing, and that this session will help deepen your understanding of the opportunities and challenges that AI will bring.
Thank you very much.
(Applause).
>> MODERATOR: Thanks to the Vice Minister of the Japanese Ministry of Internal Affairs and Communications. Thank you very much. Now I would like to invite Carolyn Nguyen and ask her to provide an overview of the work. Carolyn, please?
CAROLYN NGUYEN: Thank you, chair. So, I'm going to give a brief overview of the OECD's emerging work on AI. AI has risen to the ‑‑ can I change slides? Can we change slides? AI has risen to the top of the policy agenda for many of the OECD member countries as well as non-member countries and many of our stakeholder groups. This slide shows just a few of the budding initiatives, but there are many more. AI is important to the OECD because of its very strong impact on economic and social well-being: its impact on things like productivity, employment, and business models, things that the OECD cares about, as well as its effects on societies in terms of inequality, well-being, and many other aspects. So, next slide, please. So, as Mr. Masahiko Tominaga said, the OECD began a multi-stakeholder dialogue to study the social and economic impacts of AI in a structured manner, both the good implications and the bad ones, or the less good ones, rather. The idea of the work at this stage is to take stock of who is doing what and the similarities and differences that are emerging. Early next year we begin the analytical work.
We are also looking at ways -- and that's kind of the bread and butter of the OECD -- that we can measure some of the impacts of AI. Then, based on this analytical work and stocktaking, as well as measurement if we manage to get some solid measurements, we might work on high-level principles for AI to help guide its development: some sort of very high-level, non-binding framework that can help governments, providing a checklist of things to consider when you are developing a policy in this area.
So, what I would like to do now is give a brief overview of a recent and very successful conference we held a couple of months ago, which is informing our work moving forward. So, the specificity of the event ‑‑ go to the next slide, please.
The specificity of this event compared to other events is that it brought together many different public policy areas: not just the part that focuses on digital economy policy, but also the ones that focus on employment policy and bring together labor ministries and their stakeholder groups. We also had data protection agencies from member and non-member countries, consumer protection authorities, and science and space agencies, just to name a few, because AI impacts all of these areas, and we can't look at one area in isolation from the others.
So, we kicked off the event with a keynote video address by Garry Kasparov, and it was funny: he told us he claims to be the first knowledge worker to have his job displaced by AI, when he lost the ‑‑ 20 years ago.
A few things came out of that conference. Compared to other technologies, many were surprised by the speed at which AI has been diffusing and transforming human life in the last couple of years, in particular with deep learning, and applications that sounded like sci-fi just a few years ago are already, or will soon be, part of our daily lives. I can't wait for the driverless cars. But also smart shopping assistants, translation, speech recognition, et cetera, even very, very precise mapping information.
We tried to have a balance between looking at the promises of AI for the economy and society and looking at the risks: not only how to maximize the benefits, but also how to mitigate the risks and find ways to address them. We discussed the promise of AI allowing us to discern complex patterns, detect irregularities or any variations from normal patterns, and allocate resources efficiently in fields like health, transportation, and security, but also many other sectors. So, we dug into a few specific applications.
Next slide.
Specific applications. We had a focus on applications in the areas of space and space data, as well as the use of AI for scientific discovery, to accelerate it and revolutionize the way the scientific process takes place; that was quite fascinating. We also discussed a few other unusual applications, like this project, next slide, please. This project, by Google Arts & Culture, uses machine learning to find similarities ‑‑ from thousands of museums, galleries, and institutions worldwide. That's a fantastic application, it's just amazing, and I advise everyone to go have a look.
So, of course, we had a full day on policy concerns and challenges for policy makers as AI penetrates economies and societies. A major focus is jobs, and AI capacities are advancing very fast. A recent publication began to measure computer capabilities against the adult skills that we usually measure ‑‑ and it found that only 11 percent of adults are actually above the AI skill level in literacy skills. That's just the beginning of the measurement work. But we need to measure this; we need to find ways to measure exactly what we are talking about in order to find solutions, mitigate the risks, or take advantage of the opportunities.
Some concerns also include very strongly accentuating ‑‑ with potential massive technological unemployment and downward impact on wages, not just in the long term but also in the very short term in some industries: for example, the trucking industry, which we dove into; this could take place in the next five years. There are also potential competition concerns and centralization of power.
Then privacy was again a big issue, with the data protection authorities participating. That conversation focused on transparency of algorithms, access to data, and the governance more broadly of platform businesses, as well as safety, responsibility, and liability. I'm almost done; each of these is a very large topic, so to go over them in a few minutes is difficult. But concerns were raised about algorithms that impact people directly, and there was debate on the different roles of stakeholders, including software developers, hardware developers, the designer of the algorithm, and in some cases the driver or the operator of the machinery. Transparency and explainability were also huge issues.
So, one of the common threads that emerged from the event was the need for international analysis and benchmarking of the social and ethical implications of AI technologies. There was a really strong willingness for the OECD to engage with other groups, and vice versa, in such multi-stakeholder cooperation and dialogue: with governments, of course, and with the business community, very strongly, and we have representatives here today, Carolyn from Microsoft, as well as more technology-focused organizations like the IEEE. The EU, the G7, and the G20 are also cooperating, and we are beginning close cooperation with those groups. So, with that, thank you very much; that's a short introduction.
MODERATOR: We had an excellent conference last October in Paris on AI, with great speakers, and our next speaker is one of those great speakers we had at the October conference: Dr. Bryson, affiliated with the ‑‑ at Princeton University. She will speak on ethical issues raised by AI. Joanna, please?
JOANNA BRYSON: Thank you very much. I will sort of talk, but I will keep advancing my slides so that I can keep track. If you want to look, you can try. But don't worry about the slides, okay?
So the first thing I was going to do is define intelligence, and I know that we have done this a lot. Oh, you can see.
So, intelligence is doing the right thing at the right time. Sorry, let me go back. Intelligence is doing the right thing at the right time in a ‑‑ environment; we don't want to say the chairs are doing the right thing at the right time. It's something about agility. I use this simple minimalist definition; you will see it will be used later. So if we are going to do that, we need to be able to perceive the context, we need to have a set of actions that we can perform, and what commonly gets thought of in the psychology of intelligence is how you associate these two things. One thing that AI has taught us is that it's not easy to detect context, or to learn how to do things like juggle and balance and throw and run. All three of these things are hard.
So, the next thing I was going to say, obviously, if you advance the slides, is that general intelligence is a complete myth. The reason is that finding the right thing to do at the right time is actually a matter of search, and search is challenged by something called combinatorial explosion ‑‑ it's a computer science term to describe a problem that we can visualize here. Imagine you had a robot that could only do a few things. What you have just seen is that in only a few steps ‑‑ I can't do the math in my head ‑‑ you have huge numbers: you have 10,000 possible plans. It seems simple to go from here to here, but ‑‑ nine months. There are more games of chess of more than 35 moves than there are atoms in the universe.
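The combinatorial explosion she refers to is easy to sketch numerically: with b possible actions per step (the branching factor) and plans of depth d, there are b to the power d candidate plans. A minimal illustration; the specific branching factors here are hypothetical, chosen only to match the numbers mentioned in the talk:

```python
# Combinatorial explosion: the number of candidate plans grows as b**d,
# where b is the number of actions available at each step (the branching
# factor) and d is the number of steps in the plan.

def plan_count(branching_factor: int, depth: int) -> int:
    """Number of distinct action sequences of the given depth."""
    return branching_factor ** depth

# Even a small robot with 10 possible actions per step already has
# 10,000 four-step plans to consider.
print(plan_count(10, 4))  # 10000

# Chess offers roughly 35 legal moves per position; a game of 35 moves
# per side (70 plies) already dwarfs the ~10**80 atoms in the
# observable universe.
print(plan_count(35, 70) > 10**80)  # True
```

Exhaustive search is therefore hopeless past trivial depths, which is why planners and learners must prune, approximate, or reuse prior search, exactly the point made next about exploiting what has already been done.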
When people worry about a single breakthrough that will suddenly give us perfect intelligence, artificial general intelligence, that is a confounding of two ideas. One is that some algorithm will give us perfection; that's not going to happen, because intelligence depends on computation, and computation is a physical process: it takes time, space, and energy. Therefore we can get faster and faster, but we will never know everything.
Think about the 35-move games of chess taking up more atoms than the universe: even if we could search all of that, where would we put it? Time and space are issues. So how is it that we get so smart? How is it that we can play chess in this situation? How is it that humans are so smart? It's because we exploit what has already been done. So machine learning is exploiting the search that humans have already done. That's why we are able to train it. Actually, that's why we are so smart and so different from the other apes.
We are also exploiting the history that has happened before us. That's how we ‑‑ the ecosystem; we have gotten good at communicating results among humans. There is work that you may have heard about, and we will leave the details for Q&A, on biases normally considered racist or sexist ‑‑ for instance, that women are more associated with domestic things and men with careers. These are, first of all, the same biases that humans show, and we can find them by looking at word embeddings, which are standard machine learning representations drawn from text on the Web.
And not only can we find the same biases that everyone thought were horrible in humans; these exact same biases are almost perfectly correlated with the actual truth about what jobs women have. So in the graph on the left, the Y axis is the bias score from the machine learning embeddings and the X axis is the US labor statistics, for a bunch of words. The words are words like programmer, doctor, nurse. And you can see that it's almost perfectly correlated: 90 percent correlated, despite the fact that we had to do some scraping for that data. The Web is biased towards America.
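The measurement behind that graph can be sketched roughly as follows: score each occupation word by how much closer it sits to female terms than to male terms in the embedding space, then compare those scores against labor statistics. This is a toy sketch with made-up three-dimensional vectors and hypothetical occupation statistics; the real study uses embeddings trained on large web corpora, many gender terms, and actual US labor data:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def gender_association(word, female_terms, male_terms, emb):
    """Mean similarity to female terms minus mean similarity to male terms."""
    f = sum(cosine(emb[word], emb[t]) for t in female_terms) / len(female_terms)
    m = sum(cosine(emb[word], emb[t]) for t in male_terms) / len(male_terms)
    return f - m

# Made-up 3-d vectors purely for illustration; real embeddings have
# hundreds of dimensions learned from text on the Web.
emb = {
    "she":        (1.0, 0.1, 0.0),
    "he":         (-1.0, 0.1, 0.0),
    "nurse":      (0.8, 0.5, 0.1),
    "programmer": (-0.7, 0.6, 0.1),
}

scores = {w: gender_association(w, ["she"], ["he"], emb)
          for w in ("nurse", "programmer")}

# Hypothetical stand-ins for the US labor statistics used in the study
# (share of women in each occupation).
pct_women = {"nurse": 0.90, "programmer": 0.21}

# The published result correlates the two series at about 0.90; these toy
# vectors just reproduce the direction of the effect.
print(scores["nurse"] > scores["programmer"])  # True
```

In the actual study, scores like these are computed for many occupation words and correlated against the labor statistics, yielding the roughly 90 percent correlation described above.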
All right. So, I want to point out that this does not imply that AIs are human-like moral subjects. I want to confirm what we said earlier: we need to be human-centered. First of all, what is ethics? Ethics is the way that a society defines and governs itself, ensuring some kind of group-level good for the society.
AI and ethics are both ‑‑ there is no necessary position that AI fits into in ethics, because we can change AIs so that they fit our ethics, or we can change our lives and put AI around them. The reason I bring up this somewhat complicated case is because, next slide, please, there was a proposal in the European Parliament to give legal agency to AI, to make it a legal person. The only reason that legal personhood works for corporations, to the extent that it does work, is because humans are behind them, and humans are the ones who get sanctioned: the executive board of the company, right?
So my point is, well, I will go ahead. How much time do I have left? Do you want me to stop or continue? All right, I will go. So, we will go ahead and do the next slide.
Some people, and I worry about this, want AI to replace us, to be our new children. And I also hear, unfortunately, sometimes in the corridors of power, discussion of a permanent separation between those who have money and those who don't. We can't displace humans. Here we hear people talking quite openly about wanting to help all people. I'm going to give you a functionalist reason for that. First of all, it doesn't make sense to replace us with AI: all of our values are based around things that help humanity; even aesthetics is about helping a group of apes coordinate. But I'm going to skip the next slide.
I want to get to this slide. Yeah. Sorry, the next slide after that. Next one after that. Keep going. You got it. All right. So, what are people for? Right?
One of the things that we know in biology is that the more variation you have, the more robust you are. So where does innovation come from? It comes from having a lot of people with very different educations and very different backgrounds. It's the fundamental ‑‑ of natural selection; if you are in machine learning, you should know it. It's not okay for us either to restrict people using AI and box them into little things, or to replace a bunch of people with a single algorithm. I would argue that's one of the reasons you get fake news: if we have a single algorithm, it's easy to learn how to get around it. If you have hundreds of people editing your news, you are not likely to get around all of them, and you have a better chance of a whistleblower. I will stop here; we can talk about employment later.
>> Thanks. We will have an opportunity to discuss employment later. So, thank you.
(Applause). Now I would like to invite our next speaker, Carolyn Nguyen, who will provide a view of the opportunities and challenges of AI and also introduce the Partnership on Artificial Intelligence to Benefit People and Society. Carolyn?
CAROLYN NGUYEN: Thank you so much. And thank you also to the OECD for enabling me to take part in this dialogue on artificial intelligence. I have titled my talk Realizing the Promise of Artificial Intelligence, so let's talk about what that promise is. There is a huge economic promise, in the sense that recent research has estimated that AI could double (?) economic growth rates by the year 2035 for some developed countries and boost productivity by up to 40 percent.
There were about 12 countries cited by the report; for the US, it's a growth rate of 2.6 to 4.6 percent, or a cumulative increase of up to $8.3 trillion in gross value added to the economy. One of the reasons for starting the discussion there: it's really important to talk about how ICT can enable realization of the SDGs, and within the UN agencies there is an embrace of AI as an accelerator of ‑‑ the agenda. One of the other things that I want to emphasize is that the future doesn't just happen. At the IGF, with opportunities like this, we can sit down, share, and have conversations among the stakeholders. Through dialogue such as this, it's great that we can come together, talk about our different views, and ask: how do we shape the development of AI? It's this shaping of the development of AI that I want to talk about.
When we talk about the promise of artificial intelligence, this gathering has done an excellent job of summarizing some of the applications. I just want to go back a little bit to ask: how does AI work? As alluded to by Joanna and Carolyn, we think that AI is really more accurately portrayed and called computational intelligence; it's a tool. I think these are some of the points that Joanna made very nicely as well. It can help to make recommendations and provide insights based on past learnings.
And AI can enable new leapfrogs when it is used by subject matter experts. Let me give you an example. In healthcare, in the work of our lab in Cambridge, England, with project AI, our researchers use AI technology to lower the amount of time doctors need to diagnose cancerous cells. It's a perfect combination of what AI can do, computational intelligence and detection, with what humans can do, which is very much about spending more time to understand a personalized treatment. This really gets us to the next part.
One of the things that was really gratifying to see at the OECD conference on AI was that everyone really, really believed that the development of AI should be human-centered. At Microsoft, we firmly believe in using AI to empower and create new opportunities for every person and every organization, which can then lead to this leapfrog. That's why, if you look at this, our vision is very much about using AI to amplify human ingenuity. We believe that in order for AI to realize its huge potential, it first needs to be very much human-centered, and ‑‑ (?) what humans are good at: creativity, the ability to frame problems, fairness, judgment, the ability to collaborate, et cetera. And so, next, please.
When we start to look at this, what does it take for AI to be broadly adopted? At Microsoft, we use the term designing AI to earn trust, a trustworthy AI, rather than ethics, because ethics, as in the definition that Joanna used, is defined by a society. So as a global ‑‑ it needs to be trusted by all the stakeholders involved. From that perspective, here are some of the principles. It was interesting, because I looked at the principles that were put on the table and proposed by Vice Minister Masahiko Tominaga, and there's a perfect overlap in terms of reliability and safety, fairness, privacy and security, but also inclusiveness.
We believe that AI needs to support and enable every person. And underlying all of this is transparency and accountability. I want to take one of the dimensions, which is fairness. This was brought up by, I believe, Joanna, and the word embedding work that she described is a clear example of how AI can be used as a tool to address issues of fairness.
Fairness means understanding what is going on with the algorithm: is it in the data sets, for example, or is it in the way the algorithm works? What we believe when we look at fairness is, first of all, let's make sure we can understand the characteristics of the data sets, et cetera. But in terms of what to do with that, that's a very nuanced conversation that needs all the different stakeholders at the table, because what is fair differs from one community to another, one application to another, and one society to another.
Next slide, please.
So, the last issue that I want to address in my comments is that in order for AI to realize its potential and be trustworthy, we really need to address issues around the future of work. There is no consensus on whether the net benefits of AI will be positive or negative; however, I think what everyone will agree on is that AI will change the nature of work, for example, in three ways. First, the type of work: research shows 20 to 30 percent of the working-age population are engaged in some kind of independent work. Second, the types of jobs will change, because the skills that people learned when they first entered college won't remain applicable, for example. These are issues that we are working with others to try to understand.
So we believe that, because many of the skills are changing, the conversation around work needs to change to one around skills. One of the issues is very much about how we collect and identify what skills are needed and compare that with the skills that are available. So one step that's necessary is to agree on, for example, a taxonomy of skills to be used. Lifelong skills development is an example, including digital skills. Here is where AI can provide some of the answers in terms of enabling people to understand the sociological issues, and lastly, work protections and social safety net issues really need to be addressed.
Next slide, please.
So, I was also asked to talk a little bit about the Partnership on AI. This is an organization that was formed in September 2016; there are currently more than 50 partner organizations across the world. This includes business, but also, for example, civil society: on the board are Human Rights Watch and others, and UNICEF is also involved as one of the organizations, as well as academics, et cetera. It's got organizations not just from the US but from all over the world. We recognize that as AI develops as a new technology, what's best is really for the stakeholders and people involved in the development and deployment of the technology to get together and share best practices, because this is a way in which we can talk about how to mitigate challenges, et cetera, and really very much foster AI to realize its potential for enabling and transforming societies.
So, next slide, please.
Just some policy considerations, and I believe that all of this is already addressed in the other slides. At this point in time, what we believe is necessary to realize the potential is very much policy frameworks that enable broad deployment and continued innovation, involving multiple stakeholders, like this forum and the other forums.
We believe it needs to be anchored in practical principles, not only high-level principles but how the principles are applied, sharing best practices. Skills training is a major part of that. And governments really need to continue to fund research, because AI can be a part of the solution: how can AI help with fairness and transparency measures, for example? And, to come to a point that was alluded to earlier, how do we address questions about data availability to enable AI? Thank you very much for your attention.
(Applause).
MODERATOR: Thanks, Carolyn. ‑‑ (?) Our next speaker, Mr. ‑‑, who works on technology policy at the IEEE, will introduce the (?) initiative for ethical considerations in artificial intelligence and ‑‑ systems and also provide the IEEE viewpoint on priorities for multi-stakeholder cooperation.
>> Thank you so much, and thank you to the OECD; it's a great pleasure to share the work we are doing. As you are hearing, many great initiatives and programs are under way, and we are very honored to be part of this collection of activities. This afternoon I will share with you what we have been doing in this space through our global initiative. So, next slide, please.
>> It's a little slow.
>> Okay. So, the work actually officially began in IEEE in late 2015, when we started examining these issues and how a technical body like IEEE can engage in them. The mission of IEEE is to advance technology for the benefit of humanity, and we have been focusing on that "for humanity" piece, looking at it from a well-being perspective, primarily.
So, we gathered a lot of momentum and participants, and we officially launched the program in 2016. Right now we have well over 200, probably closer to 300, experts; I need to update the slide every day. We have experts from all around the world, not just North America; we also have experts from African nations as well as Asia, et cetera. We are pleased to say that at this point we have 13 working groups working on various topics. I know it's a little bit of an eye chart, but I'm (?). We started by looking at general principles, but we have got an amazing response.
We are looking at different aspects regarding policy and law, and last year we even engaged and created new working groups on mixed reality, when you think about the intersection of intelligent systems and AI with augmented reality and mixed reality, looking at it from a well-being perspective as well. We are pleased to say we just completed the second version of Ethically Aligned Design ‑‑ it's now out for public comment; I believe the comments are due ‑‑ excuse me, April 30th.
So, if you are interested, you can Google this, and I will be more than happy to meet with you to explain all of that. IEEE is rooted in being open and transparent and inclusive; those are our values. We know that the work really needs to be representative of the community and interested parties and stakeholders, and it's a multi‑stakeholder process by which we are developing this. Again, more to come on that. But we are very open to getting feedback from anyone who wants to get involved in this.
The other thing to note, and I will get to it in a few seconds, is that this work inspired 11 standards. It's a little bit of an interesting space for us, because we are a technical standards developer: how do you incorporate ethics and these issues into your standards? The way we look at it is, if you think of common practices like security by design and privacy by design, these are, in a sense, ethics by design or trust by design, and our standards can play a role in examining those issues.
So, on version two (?), we do recommend that people take a look at it; it's quite a substantial piece of work based on contributions from all those 13 committees I mentioned. It produces a set of recommendations, if you will, primarily for technologists and developers but also for policy makers, and it takes into consideration the various aspects of the complex set of issues that we address when we look at AI, as we have been discussing on this panel. This document helps to frame them with a set of practical recommendations that people can use moving forward.
Next slide.
So, as I noted, these are the standards that have been produced out of this. And I know at the standards board meeting of IEEE about a week and a half ago, I think three more standards were added to this list, so the work is growing. Just so you know, for those who are not familiar with the standards world, the P means it's a project, so it's under development. All these projects that you see have working groups. They are global in nature; people from all around the world are working on these.
Like the working groups for the document itself in the initiative, these working groups are also open to anyone who has an interest. We can definitely make this information available to you, and you can look them up and get involved in these. You can see here we are looking at a range of issues, from data privacy to well‑being as well, and looking at how we can put indexes together on this. Think of it as a hybrid standard: looking at technical aspects and technology, but again, how we can address these issues of AI, economics and ‑‑ from a standardization perspective as well. A new era of standardization is evolving as the standardization ecosystem changes, because of the challenges we are facing and the new levels of technology that are impacting standardization.
So, that's all I had. I know we want to get into a dialogue, and I don't want to belabor going over the initiative, but I know I have been asked to share some insights on the policy aspect. As we have been hearing, autonomous and intelligent systems are becoming increasingly a part of our society. We know that they are here already, even if sometimes we don't realize it, and we see the great promise of what can come because of this level of technology; but we also need to be going forward with our eyes wide open and asking about some of the challenges and how we might address them. The use of these new, powerful technologies can promote a range of social goods and can spur development in numerous applications, including commerce, healthcare, politics, public safety, et cetera. But to protect the public from harms resulting from these applications, effective public policies and government regulations will be needed.
The goals of an effective AI policy should center around the protection and promotion of safety, privacy and intellectual rights, as well as the ‑‑ impact of these systems on society. Without policies designed with these considerations in mind, there can be critical failures, loss of life and high‑profile social controversies, and such events can stifle entire industries, so that regulations do not advance ‑‑ So, from the IEEE's perspective, informed by the work we have been doing so far, we want to ensure that best practices are developed that can inform and help educate the policy makers. We believe that effective AI policies should embody a rights‑based approach. We look at it from five basic principles: that we develop workforce expertise in the technology; that we include ethics as a core competency in research and development leadership; that we regulate AI and ‑‑ intelligent systems; and that we educate the public on the societal impact of these technologies. With that, since I know we are getting close to some dialogue here, I will close. Thank you for your time. Thank you.
(Applause).
>> Thanks, Carolyn. I do believe that the policy maker is ‑‑ (?) here. Considering the ‑‑ progress ‑‑, conveying knowledge and skill sets not just to the general public, but to the policy makers, is very important.
With that, I would like to introduce our last speaker, Jean‑Marc Rickli, who is from King's College and also teaches. He is also an advisor to the AI Initiative on the governance of AI, and he will provide an overview of the project. You have the floor.
>> Thank you very much. I am in charge of global ‑‑ and resilience. But what I want to present this afternoon is this initiative from the AI Initiative for society: the global civic debate. It originates from the understanding that AI is a ‑‑ dual use technology, and if you have followed the discussion, for instance ‑‑ within this building on autonomous systems, you will see there is increasing discrepancy between what is ‑‑ and the policies that are being adopted. So the idea of this global civic initiative is to give the population a voice in driving the debate, and really the purpose is to give global citizens a voice in learning AI, so that it maximizes benefits and the debate is open and inclusive.
So, this initiative was launched on the seventh of September, two months ago, and it comprises four phases. The first phase was a discovery phase, where people could share concerns as well as ideas about AI. From that phase, we moved into an ideation phase in November, which will last up until this month, and where we have six clusters of different ideas; I will come to that in a few seconds. The next phase will be an exploration phase, where basically the ideas that came up in the ideation phase will be put into tension between positive and negative impact, and in the last phase, the convergence phase, the idea is to come up with (?) recommendations. Next slide.
So, the methodology is really an exercise of ‑‑ intelligence, where you have four different types of individuals besides those contributing. I don't want to go into too much detail, but basically there are people who are trying to extract the meaning of the different threads and discussions that are taking place on the platform.
Now, as I said, we are in the second phase of the debate, the ideation phase, and basically here six themes have been identified. The first one is about reinventing machine and ‑‑; the second one is security; the third one, governance of AI; the fourth, ‑‑ the workforce for the age of AI; the fifth is about driving AI for public good; and the sixth one is imageries of AI. If you scroll down, within the first one there are ‑‑: one is about greater human rights in the age of AI, another about embedding ethics and values in machines, and another about resolving algorithmic bias. In AI safety and security, the three sub topics are possible risks to humanity, the renewability of cybersecurity, and the risk of a global AI ‑‑. And in the third cluster, on governance: one is about enabling free flow of data, the second is creating regulatory sandboxes, the third is about solving for explainability, and the fourth is developing and ‑‑ regulation.
The fourth basket is about adapting the workforce for the age of AI, and three ‑‑ new skills and cognitive ‑‑ 21st century, a (?) and a fourth ‑‑. The fifth has to do with integrating social arts and ‑‑ including communities. The last one is ‑‑ AI, with two ‑‑: collective imageries and reinventing the future. So for each of these thematics, you can basically log on to the website, which is AI civic debate, and participate, provide inputs and your opinions about each of these different themes. Beyond that, there have been events created around these different themes; for instance, two days ago, in connection with IEEE, there was a webinar on imageries of ‑‑. The idea is to gather as much support as possible around the different issues that relate to artificial intelligence, and (?) and the conclusions, the findings, the knowledge that will be extracted from these different debates on the platform will then be delivered to different organizations as well as governments.
The idea is to use that as a way to let people, as global citizens, contribute to the debate on AI, to have some kind of a say, and maybe to guide some government or organization policies about where we should go in terms of the values that we want to put into ‑‑ (?). Very last one.
So, this is the website, AI civic debate.org. This is where you can have a direct say on these issues. Thank you for that.
(Applause).
>> Thank you for presenting the initiative on the civic debate.
I should say that the vice minister excused himself to catch an airplane. On behalf of vice minister Masahiko Tominaga, we have Mr. ‑‑ here, so if you have a question for Mr. Tominaga, Mr. Norbert will provide responses to those questions. Having said that, I would like to open the floor. If you have any comments or questions on these panelists' presentations, please raise your hands.
>> Hi, I'm from Brazil; I'm here as part of the IGF program. My question has to do with the qualifications and skills required to deal with AI.
The first part is innovation and development of the AI technology, but I have a concern about the qualification process for the people who are users of AI; I don't see an effort ‑‑ on this matter. What I see is a very serious politics of intellectual property enforcement and protection from the technology creators, instead of a policy of access to technology (?). I would like to know, then, what is the agenda, or what should be the agenda, for working on qualification in the ‑‑, given the fact that we are basically changing the labor force and people will be required to have higher skills to manage AI.
>> Thank you.
>> Johnny, University of Cambridge. I guess mine is a ‑‑ In that presentation, a call was made to pay for more research. My question is: is Microsoft doing all that it can to pay tax and to fund that research, if that's (?). Thank you.
>> Thank you. Yes, please.
>> Yes, this goes to the comment about AI as doing the right thing at the right time.
Who gets to define what is right? I realize that's part of the ethics discussion, but I'm also particularly hearing comments about how the user maintains agency when they may or may not have had input into what that definition of right is; they might not be able to intervene if they disagree with what the system's definition of right is.
>> First we will address these questions and then we will open the floor again. Joanna, I will ask you to address the first and third questions.
>> I'm going to be rude and address all three of them very quickly, because the first and the second relate to each other, actually. A lot of people worry about AI creating unemployment, but in fact unemployment is at a record low in the areas that have a lot of AI right now. And the real problem is not unemployment, it is inequality and inequality comes down to wealth redistribution.
And you are absolutely right that the core thing we have to realize, when we see this incredibly low level of redistribution and this incredibly high inequality, is that the elite need to realize, the same as the rest of us, that it is not to their advantage to have the level of anarchy we are seeing right now. The last time this happened, it led to World War I and the crash and World War II. I don't think you can take these two things apart.
It is a big issue in the global south, and also in Asia, that there isn't the welfare state to support people as they transition between jobs. And I think that's a huge issue we need to think about across the globe. We expect, from increasing intelligence, an increasing pace of change. How do we help people across these transitions?
And I have heard people like ‑‑ at large companies expect that they are essentially going to have part of their workforce on sabbatical, like a fifth of the workforce being retrained. How does that work for people in the gig economy? And in villages in the global south? I think the way it works is, as long as we keep enough money circulating (it doesn't have to be huge; I'm not talking about old school communism), as long as we keep the minimum wage up, and put money in big tech and put some money out in wages, people will find ways; they are innovative, they will find ways to generate new services for each other and to produce flows of capital. But this is what government is for; it needs to get rid of the (?).
>> I'm sorry, who has agency? I think I answered that all in one question. Sorry about that.
>> Thank you for the questions. With regard to the working skills question, there are a couple of different things that we are doing in terms of ensuring wide access to AI capabilities, and these have to do with the kinds of services that we are trying to make available on the platform; our notion here is to enable micro, small, and medium ‑‑ We have also open sourced some capabilities; we open sourced exactly the same kind of tools we are using, on AI ‑‑ (?) interesting, integrating solutions using AI to address challenges locally. We do have a program to make all of that available.
Another part is the training and the social work process that's not really talked about a lot: the sociological studies behind the barriers that people have to retraining, for example; we are doing a lot of work in this area to address that. So when we think about policy, we think about balancing economic, technical, social, cultural and governance concerns, and what we are seeing is that as technology such as AI becomes an integral part of society, understanding the social and cultural aspects of this is really important. Microsoft Research is very ‑‑ in the sense that we have some fairly prominent sociologists working in India as well as other parts of the world, doing studies on some of these issues to understand, that's the way a 20‑year‑old will not learn very different in a global ‑‑ across their career development. This is why we are ‑‑ looking at using those capabilities to look at social ‑‑ and what some of the ideas around that are.
>> And on the question of research: Microsoft itself invests in research, both on the technology side as well as on the sociological side, and we comply with all regulations in every country that we operate in. We also maintain research facilities in multiple countries around the world and, in all of those areas, we collaborate heavily with academics across the world.
>> Thank you. Before opening up the floor, I would like to provide my take on those three questions. By the way, I didn't introduce myself: I'm a professor at the State University of ‑‑ Korea. Before that, I was the deputy minister at the ministry of science and ‑‑ in Korea. So, as I ‑‑ took ‑‑ from the government perspective, when I was deputy minister, when you look at this issue, when you look at the project that the OECD ‑‑ and well‑being, not just ‑‑ (?) the issue of inequality. Inequality of income and wealth, and ‑‑ (?) that's the reason that ‑‑ economists point out the ‑‑ (?). I'm not quite sure when you look at the statistics (?).
So it's a more complicated and compressed issue, and I do not want to touch on that issue here. But using this technology, AI, to address ‑‑ and well‑being is critical. And our colleague mentioned this ‑‑ issue. We are talking about AI here, but at the same time ‑‑ of the global population is not connected to the internet. In one part of the world we are talking about how to use this fast technology, artificial intelligence, and emerging technologies; in another part of the world, people do not have any kind of experience of this kind of technology, while here we are sitting at the ‑‑ (?). We have to look at this issue from the global perspective. But ‑‑ this issue, (?) the use of the technology and the access part is one thing, but (?) the measures to address this issue are much more difficult than recognizing the problem itself. That's the reason we need more discussion on this issue, and that's the reason that we always ‑‑ ITU, and many people are working on that.
At this table, I don't think anybody can provide the right answer or set of answers which can resolve all matters. But building consensus about the urgency and the importance of the issue is the first step, and that's the kind of idea from which we can start to gather ‑‑ That's the reason I ‑‑ working on this project, and hopefully at the end of next year we will provide some concrete findings, which obviously we could share with the community.
And I would want to say, on this agency question ‑‑ because we do not have a governance system to address these AI related matters. That's the reason ‑‑ is (?), working together with the ‑‑ technical community. We have a choice ‑‑ and the internet governance. But in internet governance, we have this mechanism, ICANN, and actually ‑‑ the global population did not have an opportunity to agree or disagree with the creation of ICANN, but ‑‑ I got maybe ‑‑ involvement, with the development of this technology, and here we have this AI.
The implications of AI might be ‑‑ the internet itself, so we have to think about what kind of model we will use to address this issue of governing AI. And we actually got lessons from the internet: nobody, nobody could provide the answer. That's the reason that the multi‑stakeholder approach ‑‑ and that's the reason we need this kind of program, to find a solution together. Having said that, I would like to open the floor again. Any questions? I'm not quite sure you are ‑‑ but, please.
>> Hello, everyone. We have a question from one of the participants. He is Carlos, and he is asking how you think internet governance is connected with AI policy making, and whether you believe that in the future we will need something like the ‑‑ to discuss artificial intelligence more deeply, as this topic becomes more and more important for our society.
>> Thank you for the question, I think I already provided the response on that specific topic. I would like to, again, yes, sir, ladies first, and then ‑‑
>> Hi. Theresa here, Oxford Institute. I was thinking in this direction also, and my question is simple and more concrete, and it especially concerns the responsible application of AI. It's basically: how are we dealing with the question of the short term growth of algorithms and AI in terms of the more long term implications for the evolution of society? If I rephrase that: basically, how can we avoid algorithmic determinism, caused by algorithms supported by AI serving us information, instead of the other way around, instead of us influencing the dynamics of the AI for the future, for alignment with our thinking? Yeah.
>> I'm wondering, when we are talking about issues around transparency, bias and fairness in AI, most of the solutions that I tend to hear focus on the developer side, on the technologists side saying we need more training for the people developing the AI systems. I wonder if we don't have that backwards?
Shouldn't we actually be training the public policy professionals to understand AI systems and to work directly with the developers? These are people who are trained, and whose profession it is, to balance issues to create policy. I don't hear a lot of people talking about the need for these types of specialties within the policy profession. I'm curious what the panelists' views are on it.
>> Thank you. Yes, please.
>> Thank you, Mr. Chairman. My name is ‑‑ from Tokyo, and I really enjoyed this panel. I have some stupid questions, if I may. How many of you have heard of the case, or the ‑‑ case ‑‑ which was ‑‑ early in November? Have you heard of that? Okay ‑‑ it's a digital publication, it's not AI, or not related, but the headline says that they are sort of ‑‑ by the practical ‑‑ (?) diffusion of the digital culture.
And there are countries against globalization that want to be their own country first, and certain elements against being left out from technological development. But my question is: have you at IEEE considered, or are you going to consider, these kinds of new activists, this activism against ‑‑ the AI? In the presentation by ‑‑, it could become uncontrollable. So ‑‑ that humans at large may not be able to control it, but also the humans in the south, or those very much against ‑‑ the status quo, may target certain kinds of AI activities, which might be difficult to deal with. Those are my concerns and my question. Thank you.
MODERATOR: Thank you, we have approximately 15 minutes. So, we will take these silly questions first, and then I will open the floor once more; we will take three more questions and wrap up the whole session. And this is the last session, so we have some liberty to be (?). So, we have three questions. The first one is about technological determinism: how we can avoid it, and how we can take advantage of this AI rather than ‑‑ become the victim of this ‑‑ development. And I like this part, actually, because (?) we call the government a policy maker, that we could ‑‑ policy maker, which means that they have the kind of legitimacy to make the decision.
But sometimes, actually, when you look at this technological development, engineers and scientists themselves make some kind of fundamental decisions, and ‑‑ the impact could influence all ‑‑ of society. So I think that it's a very good question. So ‑‑ (?) intention to address these social issues, not particularly ‑‑ issues. So, I will ask Karen first about the third question, and then I want to ask ‑‑ about the first question. John, can you address ‑‑
>> The first one.
>> Joanna.
>> I thought I had a minute to think about this.
So, if I'm understanding the question, because it was a little bit complex: we are talking about the social issues of the ‑‑ yeah. Our initiative is open, and we are looking at the social implications; you know, when we talk about well‑being, that is also part of it.
So the initiative itself, in its second document, does not necessarily have a working group dedicated to that, or has not incorporated that as part of the dialogue in its work. However, you do catch my attention; we are looking to expand the work and to address these types of issues as they arise. So my answer, you know, is that we would bring that back, and I welcome you to bring that to the initiative to see how they can address it and how they are going to handle it. After the session I would be happy to get your contact and we will connect with you.
>> I was looking forward to responding to this kind of question ‑‑ our organization that ‑‑ growth and well‑being. (?) But the thing is, actually, addressing this issue is not just providing technical solutions; we have social policy measures to address these kinds of issues, so ‑‑ is a body which addresses these issues, and (?) is actually working on that agenda. And in terms of the second question, about how to educate the policy makers and give them the right knowledge and skill sets to address this issue, I think this is the ‑‑ (?) I have been working for the government for 29 years, and whenever new issues come up, even when they understand the sense of the problem, they have to deal with the Congress, the legislature, and also with the media. They have to deal with the incumbents who are affected, who would be affected, by this kind of disruptive technology.
So, it's not easy, but at the same time it is very important and critical, because they are the kind of body that ‑‑ formulates policy. That's the reason I think this kind of multi‑stakeholder approach is very important, including the technical community like IEEE. So, I think a dialogue with the technical community and also with civil society, which would be so affected by this policy ‑‑ (?) in order to do the right things ‑‑ has some kind of flexibility to ‑‑ (?) engineers within the government systems. Each government has a different kind of recruiting system, so I think this ‑‑ should think about how to make the system more ‑‑ (?) or advising on these kinds of technical issues. I would like to invite Joanna.
>> Here we go.
JOANNA BRYSON: So, the question was this thing about the algorithmic‑izing, I can't say it, and I'm a native English speaker. This is related to, I talked about the problem with artificial general intelligence, this question is related to the problem of super intelligence, although it's a better description of it than you normally get.
And super intelligence is an idea that's been kicked around for about a century now: that if you can learn how to learn, you are going to get this exponential increase in intelligence. Some people are afraid that that's what AI is going to do without us. But I would actually argue that this is what we have been doing since we have had writing. So if you look at the period 10,000 years ago, when we got writing, since then we have been exponentially increasing in numbers. And now we are dominating the biomass of the planet, and sustainability is one of our two biggest problems, that and inequality.
So, it's a problem, it's one we are embedded in. That's why I like the way you cast it about, that we have this concern about, you know, what the north is doing to the south and what the big tech is doing to the rest of us and what government is doing to us if it has all of our data. And I think the only thing we can do is be vigilant, I think this is the ongoing power of politics, one of the previous questioners asked ‑‑ (?) that definition I gave was about intelligence in general.
The only difference between natural and artificial intelligence is that artificial intelligence is generated by artifacts, which means we are responsible for having built it, right? So computationally they are the same, but the difference is about our responsibility and how we govern it. And I think the things ‑‑ I have been very cheered by this panel, because there didn't use to be this consensus about the importance of transparency and auditability. But contrary to something said on the previous panel, I want to say that's not even a tradeoff.
When people go and look at it, when they make the systems easier to understand, it turns out they are easier to debug. The rules we are using to figure out how to make it clear are like the ways we audit humans without having access to the synapses; in the worst case, it's like dealing with a bunch of accountants, and we have more information than that. There's no easy answer; it's the ongoing problem of how we govern ourselves and make these tradeoffs. I think it's important that we sign up for auditing and regulation and governance, and that we do pay our taxes and contribute to governance and make sure we are vigilant and doing the right thing.
>> Thank you for the questions. I just want to add a couple of thoughts. On the question of determinism, one of the things we look at and try to emphasize is that AI is really automating what human beings are already doing. I think this was part of the point that Joanna made; for example, look at the use of algorithms in a loan application. Today, human beings are doing this.
And so what's going on with artificial intelligence is that now computational intelligence is trying to automate part of those tasks. But we absolutely believe that human beings are ultimately responsible and need to understand what is involved in the recommendation, and also the information around that recommendation. If a particular application was approved, or let's say rejected, how certain is that decision? 80 percent certain, or 40 percent? And if it's 80, what makes up the remaining 20 percent? Let me give you a funny example, right?
The ex‑chairman ‑‑, when he stepped down and retired, applied to get his mortgage, his house, refinanced, and his application was rejected. He didn't have a job, but the mortgage he was applying for was more than he could provably show he was earning, and so (audio). There needs to be an understanding of what the technology is used for, and how the model was developed, et cetera.
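The kind of confidence reporting described in this exchange can be sketched in a few lines. This is purely illustrative (a toy logistic score, not any system mentioned on the panel); every feature name, weight, and threshold below is hypothetical.

```python
import math

def score_application(features, weights, threshold=0.5):
    """Return a decision together with the confidence behind it.

    Toy logistic model: the point is that the decision, its confidence,
    and the per-feature contributions are reported side by side.
    """
    z = sum(weights[name] * value for name, value in features.items())
    approve_probability = 1.0 / (1.0 + math.exp(-z))
    decision = "approve" if approve_probability >= threshold else "reject"
    # Per-feature contributions show a reviewer what drove the score.
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return {
        "decision": decision,
        "confidence": round(max(approve_probability, 1 - approve_probability), 2),
        "contributions": contributions,
    }

# Hypothetical applicant: decent income and credit history, but no current job.
result = score_application(
    features={"income_ratio": 1.2, "credit_history": 0.8, "employment": -2.0},
    weights={"income_ratio": 1.0, "credit_history": 1.5, "employment": 1.0},
)
print(result["decision"], result["confidence"])  # approve 0.6
```

A reviewer who sees "approve, 0.6" knows the recommendation is only marginally confident and can weigh the contributions (here, the negative employment term) before acting on it.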
So, we very, very much believe in developing and working with others to make sure they understand transparency. But ‑‑ let's develop a transparency template: what went into the data, what was used to test the algorithms, how the algorithms were chosen, et cetera. This is where standards can be an interesting and helpful conversation. I think ‑‑ otherwise there is a tendency to ascribe more authority to technology than exists.
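The "transparency template" the speaker sketches could be as simple as a structured record that must be completed before a system ships. A minimal sketch, using only the categories named in the discussion; the field names themselves are made up for illustration, not taken from any IEEE standard:

```python
# Fields a transparency template might require; the names are hypothetical.
TEMPLATE_FIELDS = [
    "training_data_sources",    # what went into the data
    "data_collection_dates",
    "evaluation_method",        # what was used to test the algorithms
    "algorithm_and_rationale",  # how the algorithms were chosen
    "known_limitations",
]

def check_transparency_record(record):
    """Return the template fields a record leaves unanswered."""
    return [field for field in TEMPLATE_FIELDS if not record.get(field)]

record = {
    "training_data_sources": "2016 loan applications, region X",
    "evaluation_method": "held-out test set with demographic breakdowns",
    "algorithm_and_rationale": "logistic model, chosen for auditability",
}
print(check_transparency_record(record))  # fields still to be documented
```

The value of such a template is less in the code than in making the gaps visible: an incomplete record is an explicit signal before deployment rather than a surprise afterwards.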
On the question of the policy maker: AI is the latest in a long line of technologies that policy makers don't understand and have had a negative attitude towards. And what we are seeing in research and surveys across the world is this tension between having to rely on something that they don't understand and the need to use it, and this is further exacerbated in the conversation around artificial intelligence. There are some really good projects that we have come across and collaborated in, in terms of how to use AI to better policy development; I think those sit at that interesting ‑‑ But I completely agree with you that this is very much a needed conversation.
>> Thank you.
>> I have two remarks. I very much agree with you that there is a ‑‑ tendency to solve technological problems with technological solutions. I think that the ‑‑ from the internet of things, cares about the attitude ‑‑ it's impossible for a human being to deal with that, so we look for a technological solution that will itself create its own problems. I think that ‑‑ has ‑‑ (?) a risk society. We are creating the conditions for us to become more and more vulnerable.
On the other hand, on the issue of urgency: what we are witnessing is that we are empowering (audio) that we have never seen in human history. A single human being like Snowden can steal an amount of data that is beyond what was thought possible a few years ago; this is new. And in the field of national security, there are strong concerns about how the technological development will (?) the human being.
What you are doing is shifting the burden from what used to be here, within the frame ‑‑, to almost 7 billion ‑‑ of securities. We are talking about AI, but if we talk about synthetic biology, the ability for people to interfere with genetics (and we had a case of ‑‑ a hacker a few weeks ago), this would create huge security problems and concerns in the very near future. There needs to be a debate, not just on the values being put into algorithms, but also on the way the technologies will be used. I can only tell you, I have been working on this for quite a while; just look at the use of drones. These people are very reactive. So what I'm really afraid of is that even if 99.99 percent of the population has good intentions, you give a very small minority the power to have a tremendous impact, and this should be a global concern.
>> Yes, and that is a real concern. And I think we should collectively address that issue together.
So, the floor is open again. Yes, please.
>> Thank you for giving me the opportunity. I'm a journalist for international ‑‑ watch. Although the challenges have been described as being very serious, I was wondering where intellectual property issues fit into the policy debate, with the increasing use of trade secrets, patents and copyrights, for example. Thank you.
>> Thank you. You two are the final two. Please.
>> Hi, my name is ‑‑ from Geneva. I have maybe a naive question here, and you may have partly answered it a little earlier. We have seen there's a lot of positive potential in AI, especially human and artificial intelligence working together. And it's really good to see so many frameworks around channeling that energy in the right direction.
My question is: What kind of ‑‑ do we have that prevents this kind of abuse, or, as they say in French, ‑‑ that runs away from us, either on purpose or on its own? It seems to me that AI is very different from previous technologies that we have had ‑‑ (?) revolutions. And I think we need to treat it in a different way. Thank you.
>> Hi. I'm Judy, from an organization that deals with ‑‑ and digital rights. And my question is this: we know that bias in algorithms raises concerns related to the right of ‑‑ (?) in AI. The algorithms ‑‑ patterns and disregard differences and specificities. This can occur because of hidden bias in developers or because of bias in the databases. One way to act on the decision ‑‑ is to have more diverse databases, on the one hand, or to have more diverse developers, on the other. Which brings up the importance of more democratic access to the databases.
So my question is: how do you reconcile this with the aim to protect personal data involving highly sensitive ‑‑ (?), if we are talking about the right of cognition, and the fact that anonymization is something that can be reversed, especially when the ‑‑ is to collect more and more data?
>> If you have more questions, I think we can take them after the session. Formally, we have come to the end of the session; we are already beyond the scheduled time. What I would like to do is give the floor to all panelists. You all know the questions presented to us; if you can address one or any of those questions, take advantage of this opportunity, and if you want to make any kind of ‑‑ remarks based on the comments and interventions from the floor, this is the opportunity. We will start with Marc.
>> I just want to ‑‑ I feel this is ‑‑ an issue. For instance, the use of AI in weapons systems, autonomous weapon systems, has been debated in the UN for the last few years. And there has been no agreement so far among states about what kind of values or limitations we should put into them. And the idea of the so‑called ‑‑ group ‑‑ (?) was originally to think about whether we should preemptively ban a weapons system from being developed, and the analogy was with blinding lasers, which were banned in the 90s ‑‑ that these weapons would be weapons that would ‑‑ in the military field. As for safeguards, it is very difficult to say.
I think this is where global society and civil society should play a role by putting some pressure on governments to put some safeguards in place, because once the genie is out, my fear is it will be too late. Take 3D printing, for instance: very early on, the first code to create 3D printed guns was published open source on the internet. They are not very accurate, but they are out there.
So, if you take a virus ‑‑ like the one (?) by Israel and the US to disable the ‑‑ that virus is still out there. Once the code is out, it is out. As for putting safeguards in place, I don't think any of us has a solution, but gathering support and opinions and pressuring governments through a global ‑‑ is very important. These are key issues.
>> Thank you for the question. With regards to safeguards, beyond autonomous weapons and the more security related issues ‑‑ (audio) . Related to AI at this point, but one point: an important part of the work we are beginning is actually to determine where we have existing frameworks that apply to AI and where we might need new safeguards. It's important not to start setting up new safeguards where existing frameworks ‑‑ in the area of privacy protection, for example ‑‑ may in some cases be adequate and in some cases may not. That's one of the main purposes of the work we are beginning.
>> Okay. Here we go. I want to get to the bias question to start with, then I will wrap up. I want to point out, from my talk, if you remember the slide with the actual picture: bias is not solved just by getting enough data. So yes, we should have diversity and be careful about the data. But the fact is the world is unfair now. If we go to the right sessions, we hear that a lot. It's an ongoing problem. Fairness is not something you get by adding noise into the vectors; then you get nothing, you just get noise.
Right? Fairness is about politics, about humanity choosing what our goals are. And I really wanted to say that in the previous session, you talked about (audio) . About tech, but big tech is basically becoming as big as some of the big countries in the world, and it needs to know more about the social sciences; this is a two‑way problem.
So, this is sort of my wrapping up statement, then. With respect to things running away from us, as I said, you could think of the technology we already have, our dominance of the ecosystem: there are almost no mammals left on the surface that aren't people, or that we eat or play with. We see these nice movies of animals vanishing; there is almost nothing left. Our society is a runaway process, and AI is a way of continuing this domination.
This is about politics, not a separate thing, though I do agree we have to keep working on the technology. I don't mean to be overly scary; I really do love this stuff and some of the innovations people are coming up with. For example, the IEEE standards are one way by which technologists are trying to connect to politics: we are trying to offer the standards as we try to understand how to make transparency easier. We do have to be under regulation; we have to figure out how to make that as effective and as flexible as possible, and able to deal with the future. We have a joint goal here, and government is something that we use to have substantial, stable societies.
>> Yeah, I echo, Joanna, what you said; I was asked to address the bias question as well. There is no simple fix, there is no simple answer. It's really going to take a collective effort across many disciplines, generations and cultures. From a technical community perspective, we look at it, as I referenced in my opening presentation, through the initiative that we have at IEEE and the document Ethically Aligned Design. We are looking at how we can address these issues.
Technologists can address these issues and challenges, including bias, if we look at how we are developing the technology and make sure we are looking at the ethics and the impact. There is not one single answer. But I think if we all play our role, keep an open dialogue, and break down the silos between the different sectors we are in, or the communities we represent, then we can make progress. But it's going to take time.
>> Thank you. I want to address the question ‑‑ and take that into a broader conversation around regulation. So, to the question of IP as well as copyright, et cetera: we do agree that AI needs data; however, existing laws apply, so privacy ‑‑ with regards to the copyright issue, it's about enabling reasonable use of the assets. Copyrights are there to protect the commercial capability of assets. What AI is doing, for example, if you are looking at building a translation machine or a text recognition machine, is that there is a need to go out there and scrape the data to train the algorithms, to learn the language and the context and structure of the language. That's very different from what copyright law was meant for initially.
There was a sense around the room, and elsewhere, that AI today is not regulated. That is not true. AI is regulated by existing regulations. For example, in the US we have the Fair Credit Reporting Act, which pre ‑‑ privacy regulations, et cetera. All of those regulations apply to any application of AI, safety in vehicles, as well as horizontal regulations such as the ‑‑ and we absolutely agree that all of those laws need to be complied with if the technology is going to be trustworthy, going back to the point I made.
The last point I want to make goes back to the fact that we are at the IGF, where there is a strong belief in multi‑stakeholder processes; the panelists have brought this up already. I think building up safeguards and making the technology trustworthy requires all of us to be at the table so we can understand what the issues are. How can the technology be part of the solution? What are the other issues? Technology is not the answer to everything.
How can policy makers sit at the table and think through the issues that need to be addressed, and what can be shared through best practices, given that the technology is evolving so quickly? That's why we are all here, and that's where the safeguards are going to come from: we can identify what is important, and then it becomes a question of how we work together to make this technology work. Because we are all here, we have the power and the ability to shape the future.
>> Thanks, Karen. I think those were very good remarks from the panel. I would like to say thanks to our special panelists here. Thanks to Mr. Masahiko Tominaga, who is not with us at this point, but thank you very much. I also want to say thanks to all the participants for your active engagement in this session. Going back to the IPR issue, there are many issues, such as ‑‑ (?) develops new technology ‑‑ IPR. So, this is just one example. There are so many new policy issues here in relation to artificial intelligence. And this is not a future issue; this is a present issue.
And ‑‑ (?) we should address it at the global level. This is by definition a global issue and we have to work together, and I think that the discussion at the IGF is very important, because all stakeholders should be involved in this discussion. And I'm looking forward to receiving more comments on this discussion, because obviously our work ‑‑ is open to everybody, and you can visit our website, the OECD website; we have a dedicated website. So we appreciate your comments on the work of the OECD, which actually encompasses all policy areas. Having said that, I would like to say thanks to everybody, and thank you very much.
(Applause).