IGF 2018 - Day 2 - Salle VIII - OF17 Children and AI - Securing Child Rights for the AI Generation

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR:  Good morning, everyone.  We're going to start on time.  We only have an hour.  We realize this session is in the middle of two other sessions. 

Today we are going to talk about children's rights and emerging technologies. My name is Jasmina Byrne. I'm from UNICEF. Today's panelists are Jennie Bernstein, who works with the UNICEF Office of Innovation as an urban innovation specialist and is also leading work with the World Economic Forum. Then I'll introduce Sandra Cortesi, whom many of you already know, from the Center for Internet and Society at Harvard University. Steve is my colleague who is a digital policy specialist at UNICEF headquarters. Fanchon is Chief of the Unit for ICT in Education, focusing on ICT education policy, professional development and emerging technologies. And finally, we have John, currently with the University of Cambridge.

So we'll start with Jennie, who will tell us a little more about this initiative that UNICEF has started. Just to clarify, we really want this to be an interactive session. We hope to get a lot of input from you, and we will give contact details so we can continue having a conversation with you; we hope that this could really be a joint effort. Jennie, over to you.
     >> JENNIE BERNSTEIN:   All right.  Okay.  Does this work?

Fantastic.  I'm also with UNICEF, and I work on our innovation team.  We just started some work trying to explore the intersection of children's rights and AI.  And I think it's important to frame this discussion first by stating that UNICEF, as an organization, our main mandate and what we are all trying to do on a daily basis is to protect the rights of every child. 

When we talk about children's rights, we talk about them from the framework that is given to us by the Convention on the Rights of the Child. This is a very comprehensive document that's holistic in nature. It's not just talking about, let's say, physical or social rights of the child; there's also reference to political rights and emotional rights. And that's something that we think is really valuable when talking about emerging spaces and challenges. So it's really valuable for us to look back to this framework.

It's also a public sector mandate. States that have subscribed to this convention are mandated to uphold it, and there are mechanisms in place to make that actionable. That's another really valuable thing.

There's private sector relevance; we'll talk a lot about that today. And it's a nearly universal convention.

And within it, there are a few principles to keep in the back of our minds that underpin the entire, very long convention. Respect for the views of the child: really making sure that children's views are respected, maintained and also heard. Nondiscrimination. The right to survival and development. Those are pretty self-explanatory but critical to everything we'll discuss. And then dignity and the best interests of the child, which are harder to define but also central to the discussion.

So the children and AI work is a new initiative, very much in the exploratory phases, trying to map out research that can help us understand the most pressing areas of opportunity and, of course, risk. This is about getting the right people to the table, understanding that's not only technologists or child rights advocates, but really everyone who has experience with, or a concern around, advancing children's rights in the context of generation AI and the AI age.

We are designing early stage research to map out existing literature that signals where people have stepped in to either advance child rights or protect against abuses in relation to a specific AI application. I'll talk briefly about some of those. What we're trying to get to with this work, which we expect to be many years of collaboration, is actionable recommendations for governments, companies, caregivers and so on, for how we can take our new-found understanding of the rights and risks at play and move children's rights to the center of not just the conversation but the action being taken.

And so we've built a framework with very high-level areas of both opportunities and risks. It's pretty self-explanatory, so I won't go into too much detail. Of course, we see tremendous opportunity around what adaptive AI can mean, especially when you think of things like tailoring learning systems to a specific child's needs and context.

What can big data tell us about how to target resources and design interventions? UNICEF is trying to map every school in the world using satellite imagery and AI, and that will help us to understand gaps in services and where we can target work.

Cognitive support can span a whole range of interventions, and AI can help us enhance our existing intellectual abilities and fill in gaps where needed. This is closely related to enabling accessibility: children who are differently abled stand to gain perhaps the most through AI, and the opportunities there are almost endless. Tied to these opportunities are risks that we are committed to understanding. Privacy, safety and security are probably at the center of most conversations when it comes to human rights and child rights as they intersect with the digital age and AI in particular.

There are also less visible concerns around access to services. When we enable machines to dictate who gets accepted to a school, who gets a job, or who is lent credit or insurance, we see existing biases reinforced, new patterns of inequality, and a whole suite of risks we don't understand.

When it comes to livelihood and dignity, there will be a huge disruption in terms of what employment and work mean in the age of automation, and we are concerned with understanding what that means for children's dignity and preparing them for this next phase of life. We think there are also emotional and psychological implications of what it means when a child is interacting with a smart device as their best friend, for instance. That's a really nascent space of research.

Apologies for the formatting here. 

We have heard from the Berkeley students we're working with, who conducted exploratory research into where AI is interfacing with children in a visible, direct way. One example is educational robots; that market is growing tremendously quickly in places like the U.S.

And just as a side note here, we recognize that these case studies are very global north focused. That's where a lot of the advances in technology are being seen to date. But something we want out of this conversation is to look at other examples and case studies, so just keep that in mind.

But as deep learning enables more sophisticated robots to be put into traditional education settings, and to teach children in nontraditional settings as well, we're interested in trying to understand what the interaction between a child and a robot really means in terms of their learning process and fundamental right to education. There's a lot of literature out there around the benefits: building skills and giving new opportunities to children who are differently abled.

There's less literature available around the downsides of these things, and that's self-evident. So we're curious to understand the right to protection from exploitation and what that means when children are interacting with robots in learning environments. A similar example is seen in smart toys, which are devices targeted at children that, unknown to children, are collecting data and surveilling children's behavior. This presents new challenges around data and privacy, and questions around what it means for children to be engaging with these artificial intelligence products at different points in their lives.

So the rights here aren't just around privacy and data misuse but around the duty to protect, and whose duty it is to step in and make sure smart toys around children are representing their rights. As I mentioned, this is a very long-term research initiative that UNICEF is leading with our partners, and part of our mission here today is to figure out who else needs to be at the table for this conversation to be really fruitful and yield the recommendations and partnerships that we hope to see come out of this. One question we'll put to you is whether these are the right areas to look into. Of course, there's so much that we could address; this is an extremely ambitious research agenda. We'd love to hear if there are more specific case studies from you all that we can share and integrate, and who else we should be working with. So I'm going to pass it off to Sandra now.
>> SANDRA CORTESI: Yes. Well, first of all, thanks so much for having me as part of this session. My name is Sandra Cortesi. I'm here with my colleague, with whom I collaborate on this project. My background is in psychology; I'm not a rights expert, as the title of the session might suggest, but I have many colleagues who are rights experts. I just wanted to make a quick pitch for a report that came out very recently, looking not just at young people. When I say young people, I mean ages 12 to 18, so in the U.S., minors.

I want to make a pitch for this report and share just a few observations. The first one is that determining the impacts of AI on human rights is really not easy, as these technologies are being introduced into institutions and spaces, in the case of young people spaces such as schools, the entertainment industry or healthcare systems, that are not rights-neutral.

The second point in determining the human rights impact of AI is that in many cases the impacts on those rights are complicated, because the rights contradict each other. If you think about the right to more privacy versus the right to more safety, it doesn't mean they go hand in hand.

And when you measure the impacts, again, of AI on human rights and the other way around, you can say that AI impacts the full range of human rights, as Jennie just mentioned and showed a few to you, and that the impacts are not distributed equally. Certain communities or certain populations are more affected than others, either in a positive or negative way, which I think is important to remember. In the case of young people, if you read most of the big reports that have come out, young people are rarely mentioned in them. So I feel like the verdict is still out to an extent. What we have done is define the four areas that we think are relevant, again when we talk about youth ages 12 to 18. I won't go into each of them; I only have very few minutes. But the four are: identity, privacy, learning and wellbeing.

And across these four areas, you can imagine a range of rights in the sense of the CRC and how they apply. So when we talk about identity, you could think about the right to leisure, play and culture, to be abstract. But, for instance, we are looking into art created by AI. What does it mean for a young person? How will it impact his or her motivation to create their own artistic work in that context?

Or, for instance, we look at privacy, and we are figuring out what it means when young people are in a space where much of the adult world around them is trying to protect them. How does the right to protection play out from the perspective of youth? Who decides, basically, at what cost will it come, and who is carrying the costs?

Third, in the learning space, we are looking at education and feel there is a need to develop curricular material around the ethical considerations of AI systems. We're not sure who is developing this and, basically, what narrative is going to be used when developing these materials. And there are many questions. One of them is: how will these voice-operated assistants shape young people's behavior, in a positive or negative way?

How can they be used to shape behavior?

If you look at the next slide, this is one way we're trying to have a slightly more holistic conversation. In most cases, young people are considered just the users of these technologies, so in most cases we talk about the deployment of these tools and the impact they will have. By splitting it up and also looking at the design process, the development process and the evaluation stage, you get a slightly more nuanced view of the issues. You see some of the areas here; to give you one question for each.

On the design side, we feel like we have to make sure young people have the interfaces to participate and co-design the policies that will shape their futures. How are we going to do that? That is a big challenge. On the development side, one question is: how can we make sure that technologies do not embody and reinforce the biases of their designers or of the available training data? Here, again, the designers are rarely young people themselves, and the training data often does not include youth data. And the last question we have: how can we ensure youth can understand how decisions are made, in order to maintain confidence in the systems and institutions? So these are some of the high-level questions, and I'm curious to hear your responses.
>> JASMINA BYRNE: Thank you so much. We've seen the two presentations, and now we'll hear quick reactions and responses. Steve, will you go first and say a few words about your work and your thinking on this?
>> Thanks. So I'm on Jasmina's team. It's good to be here. I want to say that we at the policy lab are focusing on digital skills and literacies for children. What's particularly interesting in this context is: what does that mean in an AI world?

And so Jennie gave good examples of advanced and well-developed toys. But also, for example, if you open Google Maps, your route suggestion is AI-based. Autocorrect on your phone is AI-based. Netflix kids' recommendations of what to watch next are AI-based. So some of these are benign; they are useful and improve our user experience.

And then there's the other end of the scale. I come from an education tech background, so within education: as more data is collected about children, there's an incredible opportunity to streamline and personalize education for each individual child. But you can also begin to stream and profile a child in a way that defines them for the rest of their life, and says this child isn't so good at math in grade 7, so they shouldn't follow this career.

Even though it's potentially a good thing, it can define you, in the way that perhaps posting something you shouldn't have on Facebook can define you for the rest of your life, your memories never being forgotten.

So how do we develop the digital skills and literacies of children in a way that enables them to survive and thrive in an AI world?

There are the traditional digital literacies: using the tools, the technical skills. But how do we get children to be not just users but conscious consumers, who know: you have rights; you have a right to question that; how is that data being used?

And can you get that data back and delete it?

Can you own it?

Critical thinking skills. And, very important in this case, reflective skills. This applies as much to adults as to children. Thinking around the ethics of AI, fairness and inclusion: these are not things you think of when you are using autocorrect or posting on Snapchat, but increasingly they are important. Those are the questions we need to have in the back of our minds all the time. So we are working closely with Jennie and the innovation team. And maybe I can end with two quick questions. We're going to be asking: what policies enable the inclusion of children in the AI design process?

Not just as users but also as co-creators.

And thinking around the private sector: the private sector are our partners, and they have much power in developing the systems that hold so much potential. How do we raise the flag, and how do we work with the private sector and governments to ensure the CRC is held at the heart of the products and services being developed?

Thank you. 
     >> JASMINA BYRNE:  Thanks.  Over to you for a few words. 
>> We are putting artificial intelligence at the top of our priorities in our core areas. We are developing a normative instrument, a recommendation relating to AI, which should be adopted, to my memory, in 2022; we are still developing it. And, just to build on the ideas mentioned earlier, we're also developing a policy guideline, and hopefully we can complete it during Mobile Learning Week 2019, which we have been organizing for years now. The next one is a special edition on artificial intelligence that will take place from 4 to 8 March. I would like to invite all the panelists to join us.

We are also working with the private sector on the readiness of the education sector to adopt AI in education and learning, including the capacity of policy makers, and on an index for the sector's readiness for artificial intelligence, more like an AI readiness index. At the same time, we are developing curriculum guidance that could help especially public institutions in terms of what curriculum and skills to develop for the next generation. I want to ask another fundamental question. We are talking about how we should prepare the next generation to become active and responsible users and creators of artificial intelligence. For this purpose, we should answer a question: what is the intelligence we should develop, for the benefit of individuals and of the public? We should think of four layers. First, what are the unique human intelligences we should reinforce in the face of ever stronger artificial intelligence?

For example, we have been outsourcing our working memory to computer memory. We have been outsourcing our information processing to Google and other search engines.

And we continue to outsource our decision-making ability to machine learning, to deep learning. So what's the boundary?

I know the boundary will be dynamic; it keeps changing. But we need to have a principle about what unique intelligences we should develop among our next generation.

The second layer: what cognitive abilities should we develop among the next generation, both in terms of data privacy but even more about when to stop using the machine and use our own intelligence?

And the third layer is human-machine collaboration, which could help the next generation find a job in the AI era. AI is creating new jobs and also creating job losses.

So we need to help them find a job in the AI economy. And the fourth layer, related to this, is the social values we should develop, which relates to what you said. So I still want to pose the question: we should think about human-machine collective intelligence before we develop and use AI in education and learning. That's all.
>> JASMINA BYRNE: Thank you so much. These are really interesting observations, and these are things and questions that concern us as well; we are working across a number of sections on employability and skills, including the skills related to the use of technologies. Finally, over to you for a few concluding words, and then we have about 25 minutes for discussion.
>> Thank you. I'll keep my comments short. I had dinner with the CEO of a big Chinese ed tech company about two weeks ago, and he was describing the process for how they want to prepare the workforce. When we look at learning now, we think some people are physical learners, or visual learners, or auditory learners. You can think about these categories as pixels: very low resolution. What AI will allow is to see the sequences and bring more resolution to this picture of how a child learns, because each of us is very different. He said they have 30,000 indicators in their system in China for different ways people can learn. You might hear this and think it is fantastic, but I would argue it's not. Although this will enable us to do new things, it's also a trap. Part of that is because, as you've heard today, it will predict for children who they have to be in the world; they can't go on to do the profession they want because 30,000 indicators suggest they are not going to learn it sufficiently well to be useful in that job.

Weirdly, that's not so different from the system we have already. If you don't score highly enough in high school, you can't go to the college you want to go to, even though you might have gone on to a successful career. My worry is AI will make this much worse. My comments are as simple as this. First of all, we need to break the myth of seeing children as incapable of contributing to important decisions. There is strong evidence to suggest children are much smarter than artificial intelligence. Developmental psychologists who have looked at the way children learn show that children, and teenagers especially, are wired not to discount hypotheses; they think like scientists. They say: what if?

Machine learning and AI think like adults: they look for existing patterns and are trained to ask, does this conform to what I already know?

Young people are willing to take an experimental view of what is right or wrong, or how things ought to be. I see this in young people's treatment of gender and sexuality: they refuse the idea that you are either a man or a woman, and I think we're all better off for it. When we think of how young people can participate in the future, we have to remember that they are better equipped than adults are.

And we should recognize that childhood, the period of our lives we call childhood, is an act of resistance and not a process of conforming young people. We could think of it as a laboratory and not a factory, so that you are not harvested but honored for your contribution.

To close out, looking to the future in terms of AI and children, power rests in two areas, in my view. The first area is choosing metrics: which metrics we optimize for. As you've heard from our experts today, young people need to be a part of that process, part of the co-design process.

There's a high school 300 meters from here, and there is not one young person in this room. What right do we have to choose young people's futures for them?

And in fact, it could be as simple as instituting a system where young people are brought in almost randomly, like jury duty for kids, to be a part of this process.

And similarly, lawmakers have suggested rewriting terms and conditions so teenagers can understand them.

To close out, the second area of power in an era of AI, I would argue, is choosing what is not measured. We can make a decision not to optimize certain aspects of life, and childhood would be one of them. The first proposal is a right to be forgotten for youth. We should think about the data that's collected about young people as sensitive, and they should be given greater design affordances to delete their Facebook or Snapchat or Kik activity from the age of 18 and prior, should they choose to, or even during childhood. That should be made easier.

The second is a right to fluidity. This is partly to move us away from the belief that big data equals truth. It doesn't. Big data informs our understanding of the world but is not necessarily what is true of the world. Young people should be allowed to have fluidity in terms of what categories are put upon them. The trouble the pixel metaphor gives me is that pixels describe the natural world; a color relates to what you see. Categories of how people learn are not rooted in natural science; they are just opinions. I won't get into that.

The final one is a right to flourish, which is maybe not the right way to convey this point. The idea here is that we must recognize that training young people for the world today is not necessarily an adequate use of their talent, I think. We have to imagine the future can be better than it is today, and not just train them to be economic cogs in a wheel. We've seen in the rising suicide rate that this isn't working. Thank you.
>> JASMINA BYRNE: Thank you, John, for this excellent philosophical contribution to our discussion. I think we can argue about whether we should have children in this room or not. I have heard people mention several times the importance of child participation. How do we make sure participation is meaningful and representative of all children and groups? We have about 25 minutes for your observations, comments and discussion.

We have more questions than answers, and we have put back up the slide where we are asking these questions. When the session is over, reach out to us, contact us; you have Steve's and Jennie's emails. I hope we form a little group of those interested in working on and exploring this topic more. Just please introduce yourselves.
>> AUDIENCE: I'm a member of the GNI board. Just to pick up on children's participation: as a company, we have done that. We have had children's advisory panels throughout the company, and for those interested, some results of those advisory panels are available on our home page. So we are trying. As you said, it's difficult to make it meaningful. When it comes to the panels, there are outputs we have published. Thanks.
     >> JASMINA BYRNE:  Wonderful.  Do we have any participants with comments or questions?  No?

Anybody else from the room?
>> AUDIENCE: Hello. My name is Milga. I've worked in the tech sector for 20 years, and I'm working with UNICEF at the moment as a consultant on children and online gaming, looking at the issues that sector poses for child rights. In my previous work with a telecom operator, I worked with the business units, specifically taking the Children's Rights and Business Principles and applying them to telecom operators, and identifying the most salient child rights issues. As for how to engage the private sector: that operator was one of the champions that UNICEF found, and they were working on digital chat rooms and online games as well. This is one way to go.

As you mentioned, artificial intelligence is coming to all sectors, and identifying some champion companies willing to dig deeper, to open up their operations and really look at the business models, how the company works and how they interact with communities, is, I think, definitely one solution, based on my past experience.
>> JASMINA BYRNE: Thank you so much. Definitely working with businesses; UNICEF has always had really good collaborations there. But we also need to bring other businesses around the table and talk about what this means in terms of artificial intelligence. And I'll add my comments at the end.
>> AUDIENCE: Hi. Thank you for all the talks; it was really interesting. I'm Vanessa. I just wanted to focus on an issue which seems really important to me: what is so specific about this generation of native users of artificial intelligence? I have a feeling this could be very important in our discussion, so I would really love to hear about this.
>> I push back on the idea of digital natives. I think it's too broad a brush to paint youth with. Not all youth have the same access to bandwidth and interfaces; Sandra would be in a better position to talk about that than I would. Personally, I stay away from talking about youth and children in my own work. I prefer to talk about millennials and the particular challenges they face in a world in which 1% of the global population is on track to control two-thirds of the world's wealth 12 years from now. If we aren't talking about that, I feel like we're neglecting a responsibility to account for a very particular set of challenges. So I think you are right; I also wonder whether that's a separate conversation or not.
     >> JASMINA BYRNE:  We have one more question over there. 
>> AUDIENCE: Hi. Gabe here from the youth IGF in Germany. I have two points. One is a question, a rather quick one, and I want to explain it a little bit. The question is: why is it better if humans decide what happens with children rather than AI? If your parents or your teachers decide that for you, that's considered okay. And that's, in my case, the point. I was, for example, rather recently diagnosed with ADHD, and looking back I would have been very happy if AI had diagnosed me with it 20 years ago; it would have saved me a lot of struggles. No one noticed, because I have ADHD but I'm not hyperactive, which makes things sometimes hard. The other point is about youth participation.

I'm 28, but I represent youth while being older than half of the youth population. And, again, also kind of pushing back on the digital native idea: what I do see, and what I've learned, is that new technology is generally adopted first by young people, because they grow up with it. The older you get, the more you already know, and the harder the changes in the world are to adapt to; you are adapted to some kind of world. Therefore, I think it is pretty important to get young people involved, even if it is exhausting. They do have a different approach, and therefore, from my point of view, it is very important to always have them at the table, because they have a different view of the world. They grow up with it differently.
     >> JASMINA BYRNE:  Thank you.  Very challenging question.  Let's see who wants to try to answer.  Steve and Sandra. 
     >> Very good points.  Just quickly on the first question, why shouldn't AI decide?

It's a good question. I think the issue that's often raised is that we have a problem when the AI decision is in a black box that nobody understands. With humans, you could potentially question that decision, in theory. I know what you are saying; a lot of what we're saying today is about trying to create a world that doesn't exist yet. In the AI world, that's aspirational, and it's about undoing biases.

Algorithms are being used to stream children in education, to make criminal justice decisions about them, or to decide whether to hire or fire, in ways that are not transparent. And that's problematic when you don't understand the algorithm and the data behind it. That's one of the big principles: accountability and transparency. The New York City government, I think, is one of the first governments in the world that is now mandating that any city funds spent on IT systems with algorithms require those algorithms to be open and open to scrutiny, because often companies will say: we can't show you, because this is our intellectual property.

And then on the youth point, just one reflection. If you look at some of the cycles happening between generations: I read a report recently that said the average American teenager drinks less than their parents, smokes less, swears less, is more conscious of the environment, recycles more. This is the average, and it's an interesting example. So my secret hope is that maybe we were the first generation to be impacted by these technologies, and that the next generation, if we can get this right, engage them and create the right boundaries, will be more conscious and ask more questions than we have about the systems we use and the way we've given away our data and our behavior.

So maybe the next generation will be more circumspect.  That's the secret hope. 
>> JASMINA BYRNE: Thank you, Steve. And obviously, any data processing or machine learning is only as good as the data you put into it. So if the data that was collected by humans is biased, you are going to have a biased outcome, obviously. Sandra.
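To make that point concrete, below is a minimal sketch, using entirely hypothetical data and plain Python rather than any real machine learning library. A naive "model" trained on biased historical admission decisions simply reproduces that bias in its predictions; and because this toy model can be printed and inspected, it also illustrates the transparency point raised above, in contrast to a black box.

```python
# A minimal, hypothetical sketch: a naive "model" trained on biased
# historical admission decisions reproduces that bias in its predictions.

from collections import Counter

# Hypothetical history of human decisions: (group, admitted) pairs.
# Group "A" was favoured by the humans who generated the data.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """'Learn' the historical admission rate per group."""
    totals, admitted = Counter(), Counter()
    for group, ok in records:
        totals[group] += 1
        admitted[group] += ok
    return {g: admitted[g] / totals[g] for g in totals}

def predict(model, group):
    """Admit whenever the learned rate for the group exceeds 50%."""
    return model[group] > 0.5

model = train(history)
print(model)                # {'A': 0.8, 'B': 0.3} - inspectable, unlike a black box
print(predict(model, "A"))  # True:  bias in the data ...
print(predict(model, "B"))  # False: ... becomes bias in the outcome
```

Nothing in the sketch corrects for the skew in the training data, so the system simply automates the historical bias; that is the "bias in, bias out" dynamic being described here.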
>> SANDRA CORTESI: My response would have been almost the same. I was going to say, on the one hand, there are many areas where I do believe AI would make better decisions than a human; it could be healthcare, it could be decision-making about potential applicants or candidates. In that case, I do think it would be good for a young person to understand how the technology came to that decision. I think that will strengthen the trust that youth have in institutions and systems. On the other hand, this is a youth session and not an adult session: most technologies are currently being developed with adult data, and systems that use adult data to make decisions about youth are just, in my opinion, inherently complicated. They might not make decisions that are in youth's favor.

So I think you should at least have youth along in that decision-making process, so that in case the machine makes a decision that is not in youth's favor, someone is able to detect it, mention it and point to it, so we can change the system itself. Thank you.
     >> JASMINA BYRNE:  Thanks, Sandra.  Yes?
>> AUDIENCE: Thank you. My name is Martin, and I work in the Joint Research Centre. I'm also a psychologist and computer scientist. We have been carrying out foresight work in my institution on the impact of artificial intelligence on education, training and learning, and we have come up with a few messages; I will send them to you in a few days. AI will definitely enter the classrooms. Thanks to artificial intelligence, it will be possible to create personalized environments and even deal with big problems that we have had with children that we could not tackle in time, because AI can detect them in time: dyslexia, attentional disorders and the like. And then the classrooms will look smarter thanks to the technology. On the other hand, we have seen the obscure sides. You remember the well-known concerns: AI could put children into clusters.

And this is a popular concern: AI will classify people, and people will do what they are expected to do in the future. Kind of science fiction? Could be, but there is some truth in it. It's about saying to the policy makers that it can happen, and that solutions should be provided to prevent it. In artificial intelligence we are, I think, in my perception, very often thinking of other contexts. If we think of kids, that changes the game completely.

Actually, one of the mottos that circulates here and there, and I support this in education, is: no artificial intelligence without UI, no AI without user interface. Which is something that can complement expectations, in a way, by grounding them in reality. We are not dealing with adults; we are dealing with vulnerable groups like children. Thank you.
     >> JASMINA BYRNE:  Thank you.  Do we have anybody else here?

I just wanted to come back, if possible, to you. We keep mentioning education, the use of AI in education, and data. We have heard about the challenges with data in terms of quality and the problem of clustering children into groups; on the other hand, the benefits of personalized learning. How do you see us overcoming some of these challenges in education, in particular?
>> I think, so far, the discussion on the use of AI in education has been focused on using AI to personalize the learning process, meaning that we are using AI to do the analytics, to trace the learning patterns, and to provide advice or push what we call smart content, learning activities, to the learners. To me, that is only one perspective on the use of AI in education. We need to recall, in all this discussion: who is designing the data structure?

Which means what tags are we putting on the data?

And who is developing the algorithm? It is still based on human-designed algorithms, until maybe some learning systems adopt approaches like Google's AlphaGo Zero. But, so far, the AI algorithms used in learning systems have not reached this level yet. Basically, it is still the designer developing the data structure, the data and the algorithm, which means it is human-defined, not machine-defined.

And some governments are deploying learning systems; for example, in China, in Beijing, they are deploying a large learning system. I want to see the results of the pilot after using this learning system with a million students for three years, and how the machines are helping: a machine teacher and a human teacher coaching the student at the same time, with learning data coming in every day from every student in the secondary schools. So I want to see what will come out of that data collection. That's one perspective. Beyond it, I just want to mention other areas for the use of AI in education. For example, how could we use AI to reduce or break through the barriers of access to education? The barriers include language. Next year is what we call the Year of Indigenous Languages. We have many, many languages. How can AI help us?

For these people, more than 90% of online content is not accessible, because they cannot understand English. Can AI really help us to reduce this kind of barrier, including for persons with disabilities?

How, and to what extent, can AI help them break through the barriers, even to access basic education? I think this is a topic we need to think about very critically.

And for another purpose: can AI help in remote testing?

For example, can we conduct face recognition on a mobile phone, and how can we use this technology to support remote testing and even remote certification in education?

More importantly, I think the area with the most potential is to use AI for education management information systems, which means enabling data-based policy making and education management. So we need to think wider than the use of AI to personalize the learning process. That's my input. I believe you have more ideas than me. Thank you.
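As a deliberately simplified illustration of the "analytics plus smart content" loop described at the start of this answer, here is a minimal sketch. The class name, topics and scoring rule are all hypothetical; a real adaptive learning system would use far richer learner models than a per-topic success rate.

```python
# A minimal, hypothetical sketch of a personalized-learning loop: record each
# exercise result per topic ("learning analytics"), then recommend the topic
# where the learner looks weakest ("smart content" push).

class SmartContentPusher:
    def __init__(self, topics):
        self.topics = topics
        # [correct answers, attempts] per topic, for a single learner
        self.stats = {t: [0, 0] for t in topics}

    def record(self, topic, correct):
        """Log one exercise result - the analytics step."""
        self.stats[topic][1] += 1
        self.stats[topic][0] += int(correct)

    def next_topic(self):
        """Recommend the lowest-success-rate topic, unseen topics first."""
        def success_rate(topic):
            correct, attempts = self.stats[topic]
            return correct / attempts if attempts else -1.0
        return min(self.topics, key=success_rate)

pusher = SmartContentPusher(["fractions", "geometry", "algebra"])
pusher.record("fractions", correct=True)
pusher.record("geometry", correct=False)
print(pusher.next_topic())  # "algebra": never attempted, so pushed first
```

Even this toy loop shows where the concerns raised in this session enter: whoever chooses the topics and the scoring rule, the "data structure" and "algorithm" mentioned above, determines what the system pushes to the child.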
>> JASMINA BYRNE: Wonderful. A lot of food for thought. I kind of feel like the young people of today are the generation that we experiment with the most, and we don't know how they are going to turn out. We'll have to wait a few years to see what happens with this in China. Yes?
>> AUDIENCE: My name is Andres. I work for the Internet Society. Thank you for the presentations. I have a question related to the home environment and family dynamics. I know this is a round table on children's rights, but I wonder what the role of parents would be, and how family dynamics would change, as we push for this kind of co-design of AI and for more control of data by youth. What kind of preparation and learning would parents also need for enabling this at home?
>> JASMINA BYRNE: Wow. Yes. Training. Does anybody volunteer to answer this question?

What can parents do, and what kind of learning do parents need, in order to support their children to be engaged in this, even with AI nowadays?

Jennie, do you want to answer and also say a few concluding words?
>> JENNIE BERNSTEIN: Sure. Thank you so much for your question. This might not be the most satisfying response, but just to note that a lot of the principles, guidelines and recommendations emerging in the space of AI, and how to manage it against ethical principles or human rights guidelines, focus either on governments or on companies and how those actors should respond. Something that we've been really clear on from the very outset of this large children and AI initiative is that caregivers, not just parents but other people who are raising children, need to be a target audience for us, and we're trying to identify what the right format for presenting recommendations to such a diverse group of caregivers should look like. So it's a research question we have that we don't have great answers to yet.
>> One idea, from when we were looking at children and online gaming: I think the private sector has a responsibility to help parents in their task, and they also have an incentive to do so. When they help parents feel more in control of their children's use of technology and interaction, then there are less emotional responses, and less regulation based on emotional responses. So I think talking to the private sector about how they can help parents understand how their solutions work, everything from what data they collect et cetera, would be a solution.
     >> JASMINA BYRNE:  We have somebody here at the back. 
>> AUDIENCE: Hello. I work for a Polish national research center, and we conducted research about IoT and the internet of toys: is it an opportunity or a challenge?

We did it together with the emergency response teams, from two focuses, one social and the other technological. As a result, we published a guide for parents; the special focus was exactly on parents, and on privacy, also from two points of view, the social and the technological. There's no time now, but if anybody would like to talk about this research: what kind of data is collected, is it safe, how to access the data, but also how family life can change. I think it's an extremely important ethical issue. Kids have no idea they are being recorded. Even parents were surprised that they can listen to all the conversations the kids are having with the toys. So I think it's an extremely interesting topic as well, how family life can change.
>> JASMINA BYRNE: We need to wrap up; this is the end of our session. We heard some really interesting, wonderful insights. We asked who else we should be working with: it's great to have here the European Commission, academic psychologists, as well as the private sector. I think we missed government people; in yesterday's panel we had ministries from Afghanistan. And now, really, we would like to see how to continue this conversation and bring on board different stakeholders, to see what is really needed in terms of recommendations. Do we need regulation, or self-regulation in the private sector?

Or do we need more loose recommendations that everybody can adhere to when it comes to children's rights and business, children's rights and AI?

So please stay in touch with us, and thank you all for your participation. Have a great day today.