The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> CORINA CALUGARU: So I hope that this new platform will manage to increase the visibility of the Council, but especially to bring more attention to the setting of standards and to implementing standards, because nowadays it is important to have protection of rights online and offline, but at the same time to recognize and understand that by using the Internet we have a lot of challenges, and these challenges are on both sides. So by having the recommendation on Internet intermediaries we are setting up obligations for the states, for Governments, but at the same time for Internet intermediaries, and, as I said, even if recommendations are not legally binding, they are really important in helping us reach a shared understanding, but at the same time to share responsibilities, because at the end of the day we want to have these standards, but we should understand at the same time that all of us have some responsibilities, and we hope that further discussion will help us towards this understanding.  So I offer now the floor to our moderator.

>> MODERATOR:  Yes, thanks so much to the Council of Europe.  My name is Wolfgang Schulz.  I'm a law professor from Germany.  I hope that does not sound too terrifying.  I will not really give an input again here, because we only have 55 minutes now for the discussion and Q and A.  The only thing I would say at the beginning is that I have been studying Internet‑related issues for a couple of years, and with many issues I would say that, in the end, there are not so many structural changes: things are amplified, things change a little in this or that direction.  But there are some points where we really find structural changes, and one is that intermediaries are a new category of organisations that are extremely important, helpful, but also powerful.

And so a lot of recent discussions in the Member States, at the European level and at the international level revolve around intermediaries.  I am happy that I have four panelists here who are extremely knowledgeable when it comes to this issue, and for the sake of saving time I will just give you their names and affiliations; you are, of course, free later on to give some additional input on your work and so on.

So, first of all, Niccolo Singalis from Rio de Janeiro; he is an expert on intermediaries and especially studies terms of service and their effect on Human Rights.  Great that you want to share your experience and knowledge on that.  Next is Karmen Turk from the Triniti law firm and the University of Tartu.  She was Vice Chair of the committee that drafted the recommendation on the roles and responsibilities of Internet intermediaries; I had the pleasure of working together with her on that, and I very much hope she will give us input from the discussion we had and present what we found out and would like the Committee of Ministers to recommend when it comes before them in February.  Marco Pancini is from Google's public policy and government relations team and heads the European Commission team at Google.  Thanks for being here and sharing your views, Marco Pancini.  And Andy O'Connell from Facebook's public policy team is here as well, so it's great that we have the opportunity not only to talk about intermediaries, but to have representatives of two of the most important intermediaries here with us.  That's the setting.  We will have a discussion of about 25 minutes among the four of you, and then I will open the floor so that we can have some input from you, questions and remarks, of course.  We have remote participation as well, so at the end of the Q and A I will hand the floor over, and we will get some comments from people who do not have the opportunity to join us here.

So I will start with Andy, if I may.  What I would like to ask you, and you, Marco, later on as well, is about the actual practices of dealing with problematic, let's call it problematic for now, content.  What are the procedures in place?  What role do your internal guidelines play?  To what extent do you reflect what Governments want you to do?  This is so that we are all on the same page about what the actual procedures are, and feel free to mention problematic cases as well, and recent changes in policy, so that we know what we are talking about.

>> ANDY O'CONNELL:  Thanks for having us and thanks, everybody, for coming.  I will keep it brief.  I'm Andy O'Connell from Facebook's public policy team in California.  I focus on issues of Human Rights and free expression, so I'm really excited for this conversation.  A word about Facebook's mission: it is to give people the power to build community.  It used to be to make the world more open and connected, but at its core what we are trying to do is empower people to have voice and to build community, so we think from a Human Rights perspective our mission is completely aligned with those principles, and that is what people at Facebook are trying to do every day: to give people more voice and to promote these values.

As a practical matter we think Facebook is a very helpful tool for advancing Human Rights causes and for empowering people to exercise their Human Rights, so we think it's perfectly aligned with what we are talking about here today.  One more word about our Human Rights approach: as a member of the Global Network Initiative, we have committed to the GNI principles, which are consistent with the UN Guiding Principles on Business and Human Rights, and we are an active participant in GNI, including the GNI audit process, whereby every two years our Human Rights compliance program is audited independently.  So we are really proud of that work.  Part of that work, and part of what my team does, is focus on Human Rights due diligence for both products and policies, and, again, we do that on an ongoing basis.

On the specific question about how we deal with content issues, how we think about problematic content, again, trying to be very brief, at a high level we have a set of community standards that are publicly available that outline the type of behavior and the type of content that is not permitted on Facebook.  Those community standards are informed by consultations with Civil Society, with academics, with people in this room, with Governments, and at their core what we are trying to do is build a community and have a set of guidelines that allow us to achieve our mission of giving people voice, and that means specific things like prohibiting behavior that we think is detrimental to safety.  Prohibiting behavior that creates an environment where people don't feel empowered or respected.  Practically that's things like bullying, harassment, endorsement of terrorism, those sorts of issues.  Those are laid out in our community standards which are public online.

In terms of enforcing those rules, we largely rely on reports from the community of users, so any piece of content on Facebook can be reported as violating our community standards, at which point we have teams around the world, working 24 hours a day in dozens of languages, to review those reports, assess them against our community standards and take appropriate action.
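
To make the reporting workflow Andy describes easier to picture, here is a minimal sketch of a report queue routed by language and assessed against a set of standards. Everything in it, the function names, categories and routing, is an illustrative assumption, not Facebook's actual tooling.

# Hypothetical sketch of the user-report workflow described above:
# any piece of content can be reported, reports are routed to review
# teams by language, and each report is assessed against the
# community standards. All names and categories are illustrative.
from collections import defaultdict

REVIEW_QUEUES = defaultdict(list)   # language -> pending reports

def report_content(item_id, reason, language):
    """A user report: anything on the platform can be flagged for review."""
    REVIEW_QUEUES[language].append({"item": item_id, "reason": reason})

def review_next(language, community_standards):
    """A reviewer assesses the oldest report against the community standards."""
    if not REVIEW_QUEUES[language]:
        return None
    report = REVIEW_QUEUES[language].pop(0)
    violates = report["reason"] in community_standards
    return {"item": report["item"], "action": "remove" if violates else "keep"}

report_content("post-123", "harassment", "de")
print(review_next("de", {"harassment", "hate_speech", "terror_endorsement"}))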

The last thing I will just say is that when it comes to specific Government requests for censorship, we review those initially to see whether the content is a violation of our community standards, because sometimes Governments do report content to us, like hate speech, which does violate our community standards, in which case we will take it down under our terms of service.  In the event it is not a community standards issue, we will do a legal due diligence process, which might result in us making the content unavailable in that jurisdiction.  In those cases, we publish in our transparency report every six months data, by country, on how we dealt with those Government censorship requests.
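
The sequence just described can be summarised as a simple decision flow: check the platform's own rules first, then do legal due diligence, then log the outcome for the periodic report. The sketch below is purely illustrative; the rule set, field names and logging format are assumptions, not a description of Facebook's actual systems.

# Hypothetical sketch of the government-request flow described above.
COMMUNITY_STANDARD_VIOLATIONS = {"hate_speech", "harassment", "terror_endorsement"}

def handle_government_request(content, jurisdiction, transparency_log):
    """Decide the outcome of a government report, mirroring the steps above."""
    if content["category"] in COMMUNITY_STANDARD_VIOLATIONS:
        # Content that breaks the platform's own rules is removed globally
        # under the terms of service, regardless of who reported it.
        action = "removed_globally"
    elif content.get("valid_local_law_basis"):
        # Otherwise a legal due-diligence step may lead to the content being
        # made unavailable only in the requesting jurisdiction.
        action = "restricted_in_" + jurisdiction
    else:
        action = "no_action"
    # Every request is logged so it can later be aggregated, by country,
    # into the transparency report published every six months.
    transparency_log.append({"jurisdiction": jurisdiction, "action": action})
    return action

log = []
print(handle_government_request({"category": "hate_speech"}, "DE", log))
print(handle_government_request({"category": "defamation", "valid_local_law_basis": True}, "FR", log))
print(log)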

And then just a last word on transparency: we do think transparency, and improving transparency, needs to be an important part of our strategy as the private sector, but also working with Civil Society and Governments, so we have really been pushing on this transparency theme, and we are trying to be a lot more public about our consultations and how we are thinking about these issues.  One thing I will point everyone toward is our Facebook Newsroom.  There are two things to highlight.  One is that we have a recurring blog series called Hard Questions, where we publicly grapple with really challenging issues as they relate to social media, Facebook and technology policy.

I think it was just on Friday we published one on the question of whether social media is good or bad for your well-being, and we try to grapple with those issues publicly because these are important things to think through.  The other part of the Newsroom I would highlight is our News Feed FYI.  We get a lot of requests for clarification on how Facebook's News Feed works.  News Feed is the main screen you land on when you open the Facebook app; it's the scrolling list of all of the posts by your friends and family, everybody you have chosen to follow or friend.

We get a lot of questions about how that works and what the principles animating it are, so we have a dedicated series of blog posts explaining how it works, and whenever we make significant changes we are public about that.  So I will leave it there in anticipation of questions.

>> WOLFGANG SCHULZ:  Thank you so much for sharing.  I remember a parliamentary commission a couple of years ago, where I was there as a witness and Facebook was present, and the demand of the members of Parliament was: you have to adapt the policy to regions and countries and so on.  The answer of the Facebook representative was: no, the beauty of the service is that it is global.  Is that still the case, or are you open to regional specifics, not only when it comes to law enforcement, but when it comes to the policies you set up yourselves?

>> ANDY O'CONNELL:  The community standards are global, but we do try to take into account regional specifics, particularly when it comes to things like newsworthiness.  We announced last year that there are certain circumstances under which content that would otherwise violate community standards ought to be left up because it's newsworthy.  The situation where that comes up most frequently, as you might imagine, is graphic violence, which is important for documenting Human Rights abuses and disseminating information about ongoing Human Rights abuses, so we take that into consideration when making enforcement decisions.

We also, as we have announced in a couple of different ways, we are investing more and more in making sure that our community operations teams who actually review these user reports have more and more local context and knowledge beyond just improving our language coverage.

>> MARCO PANCINI:  I have been working on these issues for a few years, and one key pillar in our approach to them is the multistakeholder kind of conversation we are having today.  That's why we work very hard to partner with special (?) to start looking into this in a less emergency-driven mode.  (Audio is breaking up).

The GNI was already a first attempt to look at this issue from a different perspective, in which it cannot always be companies presenting our position in answer to a question, but coming together, looking at some of the issues that are on the table and trying to come up with a joint solution for some of them.  Now, with this in mind, in the last months we made some announcements about the way we deal with controversial content and put them up there for discussion and for some interesting feedback, we do (Audio breaking up).  You as big companies need to look at your actions and consider the impact of these actions and why they are there.  We should all be striving for solutions together.  Going back to the announcements, there are three different levels we focus on.  One focus is the policy side, so I absolutely agree with you that there are benefits to being global.

It is not perceived as illegal, it's not against our policies, but we as a society have started to question it.  For this kind of content, we believe that the project is not ‑‑ (Audio breaking up).  This has actually had an impact: for some of this content, 80 percent of users agree.  We know that more and more we focus on what is legal and what is illegal; that is very important.  On top of that, we are really focusing a lot on trying to make sure that we use the best (Audio breaking up) we propose, not to make human review of this content useless, but actually to help humans do a more effective job.  In the last months we have made progress that is very, very important: today, basically 98 percent of the videos we remove are caught by our machine learning algorithms.

So what we have here is an additional source of flags, of notices that our organisation is receiving about content.  And this is very useful.  How does it work?  There are a lot of questions about that.  Actually, this is the result of the good signals that we are receiving from the experts we have across the world reporting content to us.  So our algorithms learn from the good quality flags that we are receiving from NGOs across the world and learn how to be even more effective in flagging this content.  But, of course, when it comes to reviewing the content, the human factor is important.  That's why we made the pledge to reach a number of 10,000 people working on controversial content online by the end of 2018, and that's for us another way to show that this is a commitment the company wants to take to be more responsible.
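
To illustrate the division of labour Marco describes, a classifier trained on past trusted-flagger decisions can propose candidate flags that are then queued for the same human review as any other report. The following sketch is purely illustrative: the features, toy training data, model choice and threshold are assumptions, not a description of Google's actual systems.

# Hypothetical sketch of ML-assisted flagging: a model trained on past
# trusted-flagger reports and the human reviewers' final decisions
# produces an extra stream of flags, and every machine flag still goes
# to human review under the same criteria as user and NGO reports.
from sklearn.linear_model import LogisticRegression

# Toy training data: feature vectors from past flagged items, labelled
# with the outcome of human review (1 = removed, 0 = kept).
X_train = [[0.9, 1], [0.8, 1], [0.2, 0], [0.1, 0]]
y_train = [1, 1, 0, 0]
model = LogisticRegression().fit(X_train, y_train)

def triage(new_uploads, review_queue, threshold=0.7):
    """Machine flags are an additional source of notices, not a final verdict."""
    for item, features in new_uploads:
        score = model.predict_proba([features])[0][1]
        if score >= threshold:
            # High-scoring items join the same human review queue as
            # every other report; humans apply the same criteria to all.
            review_queue.append((item, round(score, 2)))
    return review_queue

print(triage([("video-1", [0.85, 1]), ("video-2", [0.15, 0])], []))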

>> MODERATOR:  Thanks so much, extremely interesting.  Maybe on your last point, using AI: we know from research in other areas that there is always the risk, when you have expert systems, that people just accept what the systems suggest.  Do you have safeguards in place so that there is no overblocking just because people press delete when the system recommends it?  And maybe an additional question: I was a little puzzled a couple of weeks ago when I read on the blog something about the limited state policy of Google, where content that is not really illegal but somehow doubtful or questioned by many is not deleted, but put in a confined space and not dealt with like other content.  Is that a new tendency to be tougher on that, or what kind of policy shift does that reflect, if any?

>> MARCO PANCINI:  So that's the example that I was mentioning.  Through these new policies, what we do is go back to a pure hosting status, so the content is available online and can be reached through the link, but since we don't agree with it, and we perceive that a lot of people in society don't agree with this kind of message, we don't want to create any kind of engagement with it or any kind of facilitation of the spreading of this content.  So, again, that is also a way to answer a societal question that we are receiving.

We as an Internet company, an important Internet provider, need to be responsive to the voices that are raised in society in relation to the services we provide.  Going back to your first question on AI, humans are important factors in two of the activities.  The first is training the machine: the AI is trained not on artificial cases but on the real flags that we are receiving from across the world.  The second is human review: humans review every flag in the same way, using the same criteria.

And by the way, the level of quality of the machine in terms of flagging content is the same.  So we have a new kind of expert, the machine trained by humans, working together with humans, and that can help us do a better job.

>> MODERATOR:  Thank you.  I'm coming to Luca.  You have done research on that, especially when it comes to content policies of intermediaries.  What kind of patterns do you see, and significant shifts?  And you are free to comment on what you have heard from our colleagues from Google and Facebook, of course.

>> LUCA BELLI:  Thank you very much, Wolfgang.  I should put in a disclaimer that I am no longer at the Center for Technology and Society, but I will mention the study that we conducted while I was at the centre, on terms of service and Human Rights, trying to measure the extent to which companies were complying with freedom of expression, due process and privacy standards in their terms of service.  This was actually a success story for the IGF, because we elaborated some recommendations on terms of service and Human Rights at the 2014 IGF in Istanbul.  We developed a Working Group that then produced recommendations, and then, on the basis of those recommendations, we created a methodology to basically score the extent to which companies were complying with those three rights, and we found, well, a number of problematic issues.

In the interest of time I am not going to go into detail in that regard, but something that is not very often brought into the public discussion is not so much the concern over data accumulation, collection and exploitation, but due process.  So are individuals going to be put in a position to make choices in an informed manner about their use of the platform?  In particular, two issues were problematic.  The first was notification before removing content: users are usually not informed about this kind of removal.  The second issue was dispute resolution: there was typically a class action waiver imposed as part of the terms of service, together with mandatory jurisdiction clauses, so companies would require consumers to file lawsuits in California, which might be quite problematic for a number of consumers; in addition, some terms of service made an alternative dispute resolution mechanism the only way of resolving certain kinds of disputes.  So consumers were giving away their right to sue certain companies.

I mean, I'm not naming any particular companies at the moment.  We did a study of over 50 online platforms, and the major ones were part of this study.  This study was the first input provided for the Dynamic Coalition on Platform Responsibility, and let me flag the Dynamic Coalition's work here: this is a Forum that meets annually at the IGF to discuss platforms, in particular with regard to respect for Human Rights.

And this, I think, is in line with what is happening at the Council of Europe, because I was very happy to see that this recommendation for the first time focuses on the notion of responsibility of intermediaries.  Previously we always had the discussion about liability, trying to limit and clarify how far this liability would stretch, but I think it's important to also discuss the notion of responsibility, because otherwise it is going to be developed only by policy makers drafting laws, and maybe inserted into trade agreements that are automatically implemented into law without the participation of Civil Society.

Or it can be inserted into private agreements between copyright holders and intermediaries regarding what they should do and what their responsibility is.  So I think we need that sort of discussion, and I welcome everyone to come tomorrow to our session, which will be from 9:00 to 10:00 in the morning.  The two small points I wanted to make in the discussion are regarding the range of measures that can be adopted to tackle what is called illegal content or inappropriate content.  I think there are different options, and I'm personally not in favor of going straight to regulation without showing that there is a market failure.  I think the strongest form of intervention is to say you must do something; that is command and control regulation.  Then there is a slightly softer measure of intervention, which is to impose secondary liability, which is to say: if you don't do something, you might be liable for not doing it.  Then there is another measure, which is co‑regulation.

So you create some principles at the Government or European Commission or policy-making level, and then you ask intermediaries to abide by those principles.  And then you have a last type of approach, which is to leave this completely to self‑regulation: maybe you impose limits, but you say this is a matter for the market to sort out.  And I think, particularly when you are talking about something in a very early phase, there needs to be some discussion at the level of industry, and the market needs to figure out what the best solutions are; then, when we can identify that there is a market failure, there is a reason to do something, basically.

There is a sliding scale, and the more you have a company with a significant position in the market, the more the state should be concerned about the potential detrimental effects of leaving this simply to the market itself.  And I think the recommendation on intermediaries of the Council of Europe goes in this direction: it mentions a sliding scale approach.  It is also important to consider the incentives.  This is something that the recommendation mentions: if you are imposing the cost of taking very granular measures, for example to balance the interests of copyright holders against freedom of expression, you need to realize that if you impose those costs on the intermediary itself, which will try to minimize its expenditure in that regard, then you potentially have detrimental effects on freedom of expression.

So I think in general there is some consensus that the cost should fall more on, well, the copyright holders, or, if we are talking about terrorism, it should be somehow subsidized by the public authorities.  The second quick point I wanted to make is that I think, in an era of big data and Artificial Intelligence, due process requires a bit more than simply a notification.  I think the new matter that is increasingly being discussed, and increasingly relevant to user autonomy, is the right to an explanation for any automated decision that significantly impacts individuals.  This is part of our data protection framework in Europe and is increasingly discussed in other jurisdictions as well.

So the question is how you provide an explanation for these kinds of decisions.  And, it's just my last plea, I think you will need to have a notion of personalized transparency, just as today you have a personalized service.  Because what we have today in transparency reports is an aggregated account of the kinds of interventions, removals and handovers of data that intermediaries carry out, but as a user you don't know how that affects you individually.  So I think that's something to discuss and to offer as a way forward.
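
One way to read this "personalized transparency" idea is as a per-user ledger of moderation decisions, from which the existing aggregate report is only a summary. The sketch below is a hypothetical data model for illustration; it is not any platform's actual API or schema.

# Hypothetical contrast between today's aggregate transparency report
# and a per-user, per-decision view with an explanation attached.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    user_id: str
    item_id: str
    action: str        # e.g. "removed", "geo_restricted", "data_handed_over"
    legal_basis: str   # rule or law relied on
    explanation: str   # human-readable reason, including any automated scoring

def aggregate_report(decisions):
    """What transparency reports show today: counts, no individual context."""
    return Counter(d.action for d in decisions)

def personal_report(decisions, user_id):
    """Personalized transparency: every decision affecting this user, with reasons."""
    return [d for d in decisions if d.user_id == user_id]

decisions = [
    ModerationDecision("alice", "post-42", "removed", "community standards: harassment",
                       "flagged by classifier (score 0.91), confirmed by human reviewer"),
    ModerationDecision("bob", "post-77", "geo_restricted", "local law request",
                       "restricted in one jurisdiction after legal review"),
]
print(aggregate_report(decisions))
print(personal_report(decisions, "alice"))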

>> MODERATOR:  Thanks very much for the input, which raises a number of questions.  Self‑regulation is an interesting topic, but maybe we come to that later, and I hand over to Karmen Turk to give her view and present elements of the recommendation we have drafted.

>> KARMEN TURK: I have four minutes, I think, to try to convince you why the Council of Europe spent the last two years drafting a recommendation on intermediaries with regard to their duties and responsibilities, and why this time was well spent, in my opinion.  So what did we have before?  We had quite a number of Human Rights cases which affect the 47 member states of the Council of Europe.  We started with Estonia, which was probably a bit overwhelming with regard to strict liability for hate speech.  We moved on to Hungary, where the court said that, well, we need to take into account whether there is a risk that the platform would close down and we would have fewer fora for expression, a very good approach from the court.  But it got better. 

The next one was Pihl versus Sweden, saying that, well, there need to be even more arguments to consider before you decide that an intermediary should be liable for speech it is not the author of.  And last but not least, the case concerning Google, saying we need to have a benchmark, and the benchmark needs to be serious harm suffered by the applicant in order for them to have standing.  I think that is a very rational direction for the court to move in, especially considering, as we have heard, that the companies are considering these issues themselves. 

We are talking about market failures, and this was just the legal framework; there is still a lot of uncertainty and, of course, there are challenges that the states need to address in order to be states.  Let's start from content, as Marco put it, that we are concerned about.  Content is such a wide concept.  It could be viral information that you are simply unable to stop.  If something goes viral, it doesn't matter what country it started from: no country, no state, no person, probably not even a platform really has the possibility to stop it.  But, of course, viral content is not the main concern.  The main concern is radicalization and terrorism online.

That sometimes makes heads of state resort to very strange sayings, that the Internet will be good and pure and we will strip the terrorists of their safe places online, and so forth.  In addition to terrorism and radicalization, we have fake news and misinformation and, last but definitely not least, intellectual property, which comes crawling from every corner, and rightfully so.  For hate speech we have Germany, which has quite an interesting approach in its national legal system; I know Wolfgang Schulz is very proud of it.  For fake news we have Italy and, I'm not naming them all, many countries trying to launch initiatives on how to tackle something that is fake news in somebody's mind.

So, of course, to take all of this together, we are in a very scrambled place.  No state, no private sector actor and no user really knows how to act, so I think the Council of Europe should try to rebalance the situation, and we really tried to bring the principle that the same rights apply online as offline back into this recommendation, and to find how to put those rights back into the very muddy waters of intermediaries and of online content moving through those intermediaries.

So in my opinion this panel will have ended well if at least one person takes this draft recommendation, prints it out, reads it in full and says, well, this is balancing indeed.  Just a few points.  It is a very long and substantiated recommendation, but there are three main points that I think make it very different and will give it the power to find that balance again.  Firstly, it is not undertaking‑based; it is based on functions. 

So let's take the industry right next to me.  Let's take Facebook or Google: in some circumstances they may be authors of content, in some circumstances they could be very traditional service providers, and in some circumstances they could have functions that are intermediary.  In that case we are only talking about those functions, not about the undertaking as a whole, which has been the approach of, let's say, the European Commission with the E‑commerce Directive, where the undertaking is addressed even though a company could have thousands of functions.

The second important point from the recommendation, in my opinion, is that it really tries to tackle the issue that we have so many different intermediaries.  We have intermediaries that are platforms for pictures of cats and dogs.  We have intermediaries that are platforms for public political debate, for parliamentary debates.  So it's very hard to see much in common between those platforms.  So the recommendation really tries to take a nuanced and graduated approach to every single function that we are trying to address; so not the company, but the function.  And last but not least, as was already mentioned by the representative of Facebook, it is about duties and responsibilities not just for states but also for intermediaries.  Of course, this could sound a bit uncommon, because the Council of Europe is an international organisation, so how could there be any obligations on the private sector?

As already mentioned, this goes through the UN Guiding Principles on Business and Human Rights, which oblige everyone active in the field of the Internet, through the multistakeholder process, to make sure that they do no harm to the Internet or its users and, if necessary, not only the states but also the companies are under an obligation to take positive measures in order to ensure Human Rights.  So it's not just for the states to do something, it's for all of us to do something.  So the recommendation really contains a duty for a state to do something, but also a duty or responsibility for an intermediary to do something.  That starts from transparency, and we could end up with all of the other subject themes that are there.

And it is built up in this way.  It has two chapters, for states and for intermediaries, coming from the UN Guiding Principles, and I think it's a very interesting read, and I think it will make a difference to how we see this sphere, at least for the 47 Member States of the Council of Europe.  So that was my very quick introduction.  I went over time; I always do, I'm very sorry about that.

Thank you.

>> MODERATOR:  Thank you.  It's such a fascinating issue, and I hope that one of you will ask for more detail, but now I will open the floor so that you have the opportunity to ask questions or give comments or whatever you like.  Please be brief, please introduce yourself briefly, and if it's a question, please let us know who you are addressing it to.  Yes, please.

>> AUDIENCE MEMBER:  Net Locks.  I was wondering if there has been an effort to improve real-time reporting on platforms?  Because right now the reports come through, say, at the end of six‑month cycles, and it seems that real-time reporting could really improve things and support communities who may need to apply for an objection.

>> MODERATOR:  Who do you want to comment on that?

>> AUDIENCE MEMBER:  Perhaps on the platform side with either Facebook or Google.

>> MODERATOR:  Which one of you?

>> ANDY O'CONNELL:  So the short answer is it is something we discuss internally, and we are always trying to get better in our transparency, including the timelines.  The big challenge in preparing those aggregate reports every six months is making sure it's all accurate.  It would be really problematic if we published something in a transparency report and had to correct it a week later, so that is the big challenge we are facing.

I would just say, and I think this connects to your question and something Nicola said about personalized transparency.  We do do a lot of transparency things in real time.  If you are the user whose content is removed, you get a notification in real time that explains what is happening and why.  Admittedly the kind of level of detail in that communication has not always been great and we are getting a lot better at that, but there is still room for improvement and it's a big focus for the teams who work on that, and then just another example, personalized transparency on Facebook, for example, whenever you see an ad, you can click on why am I seeing this, and it's that explanation that you are talking about that explains why that ad was delivered to you.

Maybe the last example is if you are a user who is trying to access content that has been censored by a local Government and you follow the link to that, it's not just a 404 page.  It tells you this content is being restricted based on local laws, which, again, happens in real time.

>> MODERATOR:  Thank you.  Could you press the microphone button again?  Thanks so much.  You wanted to comment as well, very quickly, so that we hear it loud and clear.

>> MARCO PANCINI:  So we were one of the first, but along the way we understood that transparency in that form could not add more to the debate, so what we are trying to do at the moment is really to understand what kind of data and what kind of time frame of publication are needed, and, of course, as soon as this is launched, feedback from everyone will be more than welcome.  But, yes, this is a very, very clear and important request.

>> MODERATOR:  Thank you.  Please.  Do we have a microphone?  Please, if possible, approach a microphone so that the text is visible as well and can be captured.  Thanks so much for that.

>> AUDIENCE MEMBER:  I'm from a Civil Society organisation called IT for Change in India, and my question is about Facebook's community standards.  When there are global standards or guidelines, especially in the context of gender‑based violence, a lot of women's rights groups from the South worry that the guidelines as they stand now will not capture specific dimensions of harassment that may put women in the South, in certain particular cultural contexts, at great risk.  I want to know what the thinking about that is.

>> AUDIENCE MEMBER:  Is there any thinking around changing the approach to gender‑based violence and coming up with more culturally rooted responses?

>> MODERATOR:  Thank you.  Will you comment on that?

>> ANDY O'CONNELL:  That is a really important issue.  We have a member of the global safety team here, and she and her colleagues in particular have done outreach in India to make sure that we understand that set of issues in India.  And to hearken back to something I said earlier, we definitely do want to take that cultural context into consideration when making these enforcement decisions, and also when making the policies themselves.  So is that an area where we have got it right?  I'm not sure, but we definitely hear the concern and we have lots of people who are focused on it, and I would be happy to connect you with my colleague who's here and working on that exact issue.

>> MODERATOR:  Thank you.

>> KARMEN TURK:  The platform also has as a main objective to share and better understand these settings, and at the Council of Europe we have the Convention on violence against women, which is not limited to just the Member States of the Council but is open to countries that are not members of the Council of Europe, worldwide, as in the case of Convention 108 or the Convention on Cybercrime.  So we have a platform where the Internet companies, Internet intermediaries, can take into account the international standards we have on gender‑based violence as well.

>> ANDY O'CONNELL:  Hate speech, incitement, harassment and gender‑based violence are not allowed on the platform.  The issue you are bringing up is an important one: making sure we understand that something that may not be considered harassment, or may not create a safety risk, in one context does so in a different situation.  You are spot on that we always need to stay on top of that and make sure we are constantly improving in various contexts.

>> MODERATOR:  Thank you for the clarification.

>> AUDIENCE MEMBER:  My name is Richard Wichfield.  My question is about terms of service or community standards.  They are often at a very high level, and if a Government, for example, talking about restrictions on freedom of expression, said something like "we don't allow hate speech", that would breach the principle of legality: it's imprecise, it is a vague term and doesn't allow people to know what is and isn't allowed.  So my question for Karmen Turk is whether the draft recommendation, or your work on this, suggests that there should be some equivalent obligation to provide terms of service that are sufficiently clear and precise to allow users to know what will and won't be allowed in different contexts.  The follow‑up question to the platforms themselves is: you obviously have internal guidance, presumably, on interpreting and implementing the terms of service, so why not make it public so users can understand what will and won't be allowed?

>> MODERATOR:  Thank you for that.  So, Karmen, will you start?

>> KARMEN TURK:  Thank you.  I think the issue of legality with regard to content is an eternal sinkhole.  No one is able to solve it unless you go eight years through the courts and get a decision.  However, of course, you need to find a way out, because there are people who are suffering, and there is content that seems to be outside the boundaries that we seem to accept.  So it's a difficult question, but how to approach it then?  This recommendation really tries to find this balance again, and I think the limit of the balance, in addition to legality, is due process, and by this I mean remedies.  Remedies always have to be there, and by remedies I don't only mean judicial remedies; it can be an explanation, it could be anything.  It doesn't have to be judicial, but the remedy has to be there, and at least at the end of the line there needs to be access to judicial remedies.  I think through this, and taking into account the transparency demands and publications from all sides, the overarching idea of the recommendation is at least that the states cannot delegate this very difficult assessment of legality as they have been doing for the last five or six years.  That is the idea behind it, and it can only be achieved through always providing remedies, access to judicial and non‑judicial ones, and transparency.  I'm not sure if that gave you the answer, but if you are the one person who reads the recommendation after that, then.

>> MODERATOR:  Comments on this.

>> PANELIST:  What you raise is a very important current issue.  I'm not sure that we can frame this as an application of the principle of legality when it comes to the private sector, but we have instruments in the law that take issue with too much discretion in terms of service.  There was an action by consumer protection authorities in the fall of last year taking issue with the fact that companies, you know, just vaguely define what content is illegal.  And this relates to what the European Commission did in its code of conduct on hate speech: it required, or allowed, the companies to further define it, but the companies don't want to give the wrong definition, so they have very broad and general clauses, and this was then taken up by the consumer protection authorities, who said, no, you cannot keep it as it is, so it's quite interesting.

And just to add to that, last week the Article 29 Working Party issued some guidelines on the notion of transparency, applying only in the field of data protection law, but interestingly they say that when you use a clause saying we might use your data to provide a better service, or to provide a personalized service, or for research purposes, that's not enough.  It's not sufficiently clear for users.

>> MODERATOR:  Specification is an interesting issue in itself.  Do you want to comment on that as well?  And then I have three interventions, and then we are already over, or will be over, time.  Marco?  Andy?

>> ANDY O'CONNELL:  It's a challenging issue and our community standards are written for a general audience.  The audience is the people who use Facebook and making sure they understand what's expected of them and what's not permitted, but we completely agree that there is a need for more explanation on issues like hate speech and terrorist content in particular, and our strategy on that so far has not been to increase the length of our community standards because frankly we already face a lot of criticism that they are too long already and too complicated.  So the strategy we have adopted is this hard questions blog post I mentioned earlier and both on hate speech and terrorist content we have got a multi-thousand word essay explaining our thinking on those issues, and I would expect to see a lot more from us in terms of transparency on those issues because as I said at the beginning, we believe in the principles of transparency and accountability, but we have all of the other interests of users we are trying to advance. 

>> MARCO PANCINI:  On the code of conduct we signed with the Commission: you asked whether it defines hate speech in a fair way, or gives enough focus to this specific point.  The code of conduct makes reference to the European definition of illegal hate speech and defines it in a way which is consistent with the highest international standards.  You can discuss that later, but let's say that we can live with this definition.  And in looking at that, we also found that our policies align with that definition.

>> MODERATOR:  I will sacrifice my closing remarks and then Mr. Bodner has the opportunity to continue as well.  And if you feel comfortable just briefly commenting on that, please.

>> AUDIENCE MEMBER:  My name is (?) from the German Opportunities Foundation, and my question goes to Karmen.  I know that, in parallel to you drafting the recommendation, a group called IT, which I had the honour to work with, was working on recommendations for the Council of Europe on children's rights in the digital world.  I have read your recommendation and I have seen you have put in a reference to child pornography.  I would have preferred to also have a reference directly to children's rights.  Have you been in exchange with them?  Have you considered giving it more impact and recommending that Internet intermediaries take into account children's rights?

>> MODERATOR:  Please make a note, and we will come back to it later on.

>> AUDIENCE MEMBER:  Hi, I'm a professor at the University of Calgary, and my question is really about the scope of the duty to respect Human Rights.  I was really pleased to hear that this is kind of a focal point of the draft recommendation, so I'm quite interested to see it, but in my reading and the work I have done on it, I have always conceived of it broadly as the business's impact on all Human Rights.  But there seems to be, and I would love the platforms to comment on this and perhaps on what you have drafted, a disconnect about what the scope of that duty to respect is.  Sometimes when I see the work, say, with the Global Network Initiative, it's quite focused on Government interferences with Human Rights, but not as much on how platforms deal with private interferences.  When we look at the publication or transparency of requests for takedown of content, for example, it's focused on Government interferences, but there is no data on private requests for takedown.

So there seems to be this issue about what that scope is, or at least maybe a different viewpoint on it.  So I would love clarification from the platforms on how they see the duty to respect Human Rights and what its scope is.

>> MODERATOR:  Thank you.

>> AUDIENCE MEMBER:  My name is Gabriel Foot.  I'm from Brazil, and I want to know more about how big companies manage their transparency about metadata, whether they have a policy and how they deal with this.  Thank you.

>> MODERATOR:  Thank you.

>> AUDIENCE MEMBER:  European Broadcasting Union.  I have listened with much interest to the new policy on disturbing content, which is not taken down but does not allow engagement and is not facilitated in its distribution.  Is that, let's say, the remedy for the problem that we have seen in social media and elsewhere, that some radical extremist content, sometimes also hate speech, is in a way ranked up because it creates emotional reactions, interaction, sharing, et cetera, while mainstream media content may, for example, lose in that game of visibility?

So an answer to that: is this kind of negative result for democratic debate, where extremist content gets more visibility, remedied?

>> MODERATOR:  Thank you.  Please be brief.  We are already over time.

>> KATARINE GARCIA:  Katarine Garcia from the Dutch delegation to the IGF.  I would like to take up the question of our colleague from India.  I would like to know whether, perhaps, the Council of Europe could inform us: there was a motion on an Internet ombudsman that was promoting enhanced accountability towards vulnerable groups such as women, transgender people, children and other specific vulnerable groups, and the motion did not develop further.  Could you elaborate on that and on what the state of it is?

>> MODERATOR:  Thanks for your question.  We have no remote questions, so we can go back to the panel, I would say.  By the way, Marco told me before that he has to leave sharp at 20 past, as was planned; he is not escaping the questions, he just has another engagement.  So, Andy, will you pick up the points that were addressed to you?

>> ANDY O'CONNELL:  I will try to answer all of those in 90 seconds.  So on the scope of the duty to respect, very candidly, I think it's something all of us as a community are trying to grapple with and understand.  On the specific piece about private sector takedowns, transparency around community standards enforcement and in particular intellectual property takedowns, I think that is a space where we can and will do better as Facebook.  So I would expect to see more on that front.

On the question of metadata transparency, as I understand it you are asking about disclosing metadata to Governments; that is included in our transparency report, and I don't want to speak for Google, but I'm pretty sure it is in theirs as well.  And on the question about extremist content being promoted through algorithmic ranking, I would say that hasn't been what we have necessarily observed, but more than that, it is something that can be adjusted, and because things like extremist content, hate speech and terrorism are prohibited on our platforms, it is something we can address.  So even if, in previous iterations or in certain companies' algorithms, that does happen, there are ways to address it; certainly from the perspective of our company, we don't allow that content, we wouldn't want it on the platform, and we wouldn't want it to be promoted by the algorithm.

>> MODERATOR:  Thank you.

>> AUDIENCE MEMBER: I think it would be interesting to look at developing some sort of principle of responsibility, as we have in data protection, for (?) due process.  (Audio breaking up) technical measures to ensure that they are able to comply with these standards and also to test at all times that they are in compliance.  We don't have something parallel in this area, so I think this could be an interesting way to go.

>> KARMEN TURK:  I think I'm not lying if I say we spent hours discussing which vulnerable groups to name in the recommendation, and I'm also honest enough to say that we could not leave the room as friends after that.  But we did make a decision, in the sense that because there are so many vulnerable groups, in the gaming industry, among elderly people, on entertainment platforms and so forth, we decided to name only the most serious crimes and to treat all of the others without discriminating between one vulnerable or very vulnerable group and another.  So that was the compromise we reached in the end.

>> PANELIST:  I will add that, as in the case of the draft recommendation on Internet intermediaries, in the platform of cooperation between the Council of Europe and Internet companies the idea was to have a general platform where we can add and take into account (Audio breaking up) I would like to create some priorities where, in the case of the Internet, we have both sides to take into account.  At the same time we have (Audio garbled).

>> MODERATOR:  Thank you all.  Thanks to the panel.  You all know that in regions where we celebrate Christmas, everybody makes a wish list, and my wish list is short.  One wish is that you all grab a copy of the recommendation and read it during lunch, and the second one I can fulfill myself: I have never used a kind of hammer like this to end a session, and I will do that now, thanking you all and wishing you a good IGF meeting.  Closed.

(Applause).

(Concluded at 1:28 p.m.)