The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> MODERATOR: Good morning, can you hear me? Welcome to the DC on Platform Responsibility. We have Julia Reda and David Kaye here as keynote speakers and a panel that will join us after our introduction and keynote remarks. As you may see, this is a very peculiar configuration of the room. We will have to move quite fast between the keynote speakers and the panelists to guarantee a smooth course of the session.
As you know, we have been producing a lot of outcomes over the past years, not only outcomes but also recommendations on terms of service and human rights, which are included in the annex of the book, the annual official outcome of the DCPR. We felt the need for a book to start a conversation, a multi-stakeholder debate.
By the way, there are still some copies of the book around, so feel free to grab one until they run out. The main idea of doing this volume on platform regulations is a very obvious but not so explicit observation that many of us have been making over the past years, working on these issues as academics or practitioners: platforms are undertaking the role of private regulators, private police of cyberspace. Different from ordinary regulators and police, they are not bound by geographical limits but by the architecture of the platform, which is the limit of their cyber land. They have quasi-judicial and quasi-executive powers. They have quasi-regulatory power because they unilaterally define rules that have to be respected by all users. And users only have the possibility to take it or leave it; they can only respect the rules. The rules are implemented in the architecture of the platform and applied to users algorithmically through the terms of service.
Obviously, in many cases the platforms do not need public enforcement of their rules. They implement them algorithmically and also implement automatically the decisions taken under the terms of service. For instance, when content is deemed abusive or something is labelled fake news, that determination can be made by the platform itself, with no precise criteria for what those terms mean. This is the policy debate that has been going on over the past year, and we have a lot of material here for an excellent discussion. I would like Nicolo to introduce the work we have done.
>> NICOLO ZINGALES: I wanted to briefly mention the connection between liability and platform responsibility. In our chapter for this book, we touched upon three issues which I think exemplify the role of platforms as regulators. This is a consequence of the liability rules imposed on platforms; the consequence is that platforms have duties of care that are not clearly defined under the law. The three examples that we give in this chapter are, first, the case of injunctions against third-party intermediaries, intermediaries that have nothing to do with the content but are somehow involved in its transmission. Sometimes they are subject to an obligation of result, so they have to effectively prevent access to copyright-infringing material. But they also have to implement measures which, in a way, preserve users' rights, freedom of expression among them. And the cost of taking these measures in a granular way, so that they balance copyright interests on the one hand and the interests of users and society more generally on the other, falls on them. So of course, platforms have an incentive to minimize that cost and take measures that are basically as blunt as possible. We see this increasingly with automated enforcement mechanisms. So this is one example.
We have a second example of basically a delegation of public functions to platforms, in the case of the right to be forgotten. You know, you have this basic obligation on search engines, following the Google Spain decision, to delist links to content that is inadequate or no longer relevant, according to the rules of data protection law.
So in this case, the Court did not impose very clear safeguards for freedom of expression. The platforms are left to be the regulators defining what content should stay up, and their incentives are not clearly aligned with users' interests in this case. And the third example is with regard to hate speech. We have this code of conduct that the IT sector signed with the European Commission, whereby companies have to react quickly to notifications of possibly illegal content under the laws on racism, xenophobia, and hate speech.
The law does not define in great detail what such content is, so platforms are left to define illegality themselves, which might be different under the standards of international law. This is another case where you have platforms.
(Background audio overriding speaker)
>> NICOLO ZINGALES: By private delegation of power. Before that, we have two very interesting keynotes: by David Kaye, U.N. Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, and by Julia Reda from the European Parliament, who has been heavily engaged in the discussion of online platforms over the past couple of years, at least.
So with that, I will leave the floor to David.
>> DAVID KAYE: Many of the writers are sitting here looking at me. I feel like we should be switched around and you should be up here telling us what to think about these issues. I want to make only a couple of brief remarks, and I'll try to put them in the context of what I think is a remarkable moment. Those of you who have been working in this space for a number of years know it can be lonely work. I have been working in the stacks of the library since I was old enough to be in the stacks of a library. Now we're Googling or using different databases, doing research and publishing our work, and it doesn't always involve the kind of interaction you get in forums like this.
What I think is interesting in this moment is that the work encapsulated in this volume is in many ways the public policy issue of the day. I mean, it is not just how platforms regulate content, or what the modes of protecting privacy are, or how they protect fundamental rights, and so forth. That is not exactly what this volume is about anymore, in many ways. It is also about the way platform regulation has become perceived as an issue of democratic governance around the world. I think that this is especially evident in Europe, where European regulators are looking at what the platforms are doing in a way that, I think, regulators and policymakers elsewhere in the world are not. Right?
So most places around the world are ‑‑ to use a nontechnical term ‑‑ freaking out over the dominance of particular companies and of search. But they're not doing that much regulation; repressive societies, in particular, are just demanding takedowns.
Europe is actually taking steps, for better or worse, that are deeply involved in regulating, or starting the process of regulating, this space. They're starting to look at, well, what are the platforms actually doing? And they're doing it in a context, I think, where there is a sense that control over public space, which many people see the platforms as controlling, has been essentially outsourced inadvertently, if you can inadvertently outsource, to private actors. So I think it is just natural for places that are used to regulating to regulate, and they're doing it in this space.
We already heard about some of these spaces, whether it is in the context of the code of conduct, the context of the copyright directive, or the context of a new effort on approaching disinformation and propaganda.
I will just close here. I think this is an extremely important moment to have a volume like this. Because hopefully policymakers will take this ‑‑ or their staffers will take it and boil it down to bullet points ‑‑ because there is important material here that should frame, for regulators and policymakers, how they think about the platforms and the risks that regulation entails. And also, for the platforms themselves, what they might do as a defensive move in the face of regulation, which, generally speaking, in my view tends towards overregulation; I think the private companies are going to need to deal with that. But it is also a guide for users to think about what it is that they're using when they're online, the nature of that space, which is a very fundamental question about whether we consider online space as public or not public.
And also to engage the public more generally in thinking about these kinds of questions. So congratulations to all of you for producing the volume. It will be my ‑‑ maybe it will be my winter reading. But it is really fantastic that you have done this at an essential moment. Thank you.
>> NICOLO ZINGALES: Thank you, David. Julia.
>> JULIA REDA: Thank you, Nicolo and Luca, for letting me participate. I'm in the middle of the discussions we are having in the EU, dealing particularly with the question of content removal. On the one hand, there has been the communication from the European Commission on tackling illegal content online, which should really be called removing or preventing illegal content online, because that is what this communication is about, and in which the European Commission strongly encourages platforms to use automated content removal technology. At the same time, we have the copyright directive, where it is proposed to actually make content removal technology mandatory on quite a large number of platforms acting on the Internet. And what I perceive in this debate around platform regulation is a great irony: on the one hand, the honeymoon period of online platforms is increasingly over.
We have seen some of the negative effects that the platforms' power can have on our democracy, and more and more people, and constituents especially, are viewing the algorithms employed by platforms as scary. They are intransparent and put us in boxes we don't feel we fit into. This is one of the main reasons why there is a strong call for platform regulation. There is a perception that the platforms have become too smart, that they don't act as neutral hosts or conveyors of information anymore.
The irony is that the kind of regulation being proposed, which is Internet filters and online content recognition technologies, exactly exacerbates these problems. It gives more power to algorithms, because at the end of the day, that is what filters are. They are intransparent. So there will be more intransparent decisions made by algorithms, more discrimination, and at the end of the day, more power will be given to the platforms to change the rules under which we communicate on the Internet by encoding them into algorithms. So in a way, we are treating the problem we have perceived with a medicine that is exactly more of the same thing.
And I would really like to discuss this great irony: in a way, we are very uneasy about the power that algorithms have over our individual lives, and I think we have also seen, with the data protection regulation, that in Europe there is really a need and a demand from the public to have decisions about them made by humans and not by algorithms.
Yet at the same time, in the tangible proposals on platform regulation that we have, there is a strong emphasis on having more decisions made by algorithms.
One example of this phenomenon is the copyright proposal, where copyright-infringing material is supposed to be deleted by algorithms. There is a slight problem there, which is the copyright exceptions. From a technological point of view, copyright exceptions are anomalies, because it is relatively easy to take a particular piece of copyrighted material, compare it to the material that is uploaded, and recognize it. But the question of whether a copyright exception applies, whether the thing you are looking at is a parody, whether it is a legal quotation, whether it is used for educational purposes, is an extremely difficult distinction for an algorithm to make. Generally, algorithms simply do not interpret the copyright exceptions; they treat them as an anomaly that is not possible to detect. And if there are no sanctions for removing too much material, but there are sanctions for removing too little material, the platforms are invariably going to err on the side of caution.
The problem with that is that copyright exceptions are not just a technical or legal anomaly; they are also the embodiment of fundamental rights in copyright law. They are there in order to protect competing interests, such as the fundamental right to education, access to information, and freedom of expression. So if we delegate decision-making to private companies who are either legally obliged or strongly encouraged to use tools that are not able to respect fundamental rights, or to implement the parts of the law that are there in order to respect fundamental rights, then we are undermining the application of fundamental rights on the Internet.
And this is doubly concerning because there is the open question of whether the Charter of Fundamental Rights, or human rights in general, can be enforced against private companies.
So if we continue to go in this direction of delegating decision-making power and the power to implement our laws to private companies, we also need to have a conversation about to what extent we can enforce our fundamental rights regime against those companies. There is interesting food for thought in the book on whether this should be done, how it should be done, and on what legal basis fundamental rights could be enforced.
>> NICOLO ZINGALES: Thank you, Julia, also for the food for thought. We will now ask the authors to join us. We have Krzysztof Garstka and David Erdos from the University of Cambridge, Krisztina Huszti-Orban from the University of Essex, Lofred Madzou from the French Digital Council, Emily Laidlaw from the University of Calgary, Maryant Fernandez Perez from European Digital Rights, and Rolf Weber from the University of Zurich. We have a big set of panelists. We will regulate from here.
I hope you can manage to keep to three or four minutes per person so we can also debate at the end. We start with Emily, following the order of the book. Please, Emily, the floor is yours.
>> EMILY LAIDLAW: Hi, everybody. I'm Emily Laidlaw. I know everybody wants a discussion about the particular issues. I will talk about platform responsibility and human rights. The general theme of the questions I have been asking myself is: what are the responsibilities of platforms under human rights standards? Or, another way I approach the question, what exactly do we expect of them?
So the work I have done, and that I talk about in the book, is at the intersection of corporate social responsibility, regulation, and human rights. We're really in the early days of exploring a lot of these particular questions, yet at the same time, I think we're at a critical juncture where a lot of social issues ‑‑ online abuse, fake news, data, fill in the blank, whatever you want to put there ‑‑ are all occurring in the space between law and corporate responsibility.
Now, I have done quite a bit of work creating regulatory models, but I want to focus on the major issues I see going forward in this area of law, CSR, and human rights.
There are really four that I think about at the moment, and I don't by any means think this captures all of them. One: consider for a moment online abuse. If a company defines freedom of expression more narrowly than the base legal standard ‑‑ and we see this often with the community standards on social media sites ‑‑ is this a company being socially responsible, or is it censorship? Is this privatization, or living up to their human rights responsibility? The uncomfortable reality is that it is both, and it depends on context. What we need going forward is a more open conversation with companies about what exactly it means to walk that particular line. The other issue is: what are the ways that we can use law, or craft law, to incentivize social responsibility? I have looked at a lot of different things here. One is requiring that Internet companies report annually on their human rights impact; we have seen these types of commitments in other areas of law. It is almost like good parenting: I won't tell you what to do, I won't be prescriptive that way, but I will mandate that you report back on what you are doing. Then there are stronger co-regulatory models and a governmental role, although that might make people uncomfortable as well. Dispute resolution. How do you harmonize intermediary rules, and can you do that?
I think a lot of this is less about old-school strategies of command and control, and more about a system of governance with legal and nonlegal elements.
Now, the last two points I want to make are to do with business and human rights, in particular Ruggie's Guiding Principles. An enduring question that we haven't resolved is the scope of the duty to respect human rights. Does it require respect for human rights in all instances, including interferences by private parties? My reading of it is that it does cover all interferences with human rights, but that is not necessarily the approach taken by many platforms.
So if my friend posts some intimate, embarrassing picture of me on social media, does the company have a duty to respect my privacy, separate from whatever is outlined in the community standards? Is there an obligation or social expectation to respect my privacy? Or are we just looking at a duty that only covers government interferences with rights? This impacts the last question I have, which is ‑‑ and I will be quick about this.
My last question is really about the right to a remedy under the Guiding Principles. In general, that is an underdeveloped area in the Guiding Principles, but it is a particularly pressing problem when we look at platform governance. We have talked a lot here about issues of fake news: is there a right to be heard, should you have a right to appeal, how do you deal with algorithmic decision-making. My question is, what makes a good dispute resolution system? I have done other work on defamation reform, where I went back to the beginning and asked: how do you resolve a dispute? We need to look at what the Guiding Principles mean in human rights terms when you look at the question of remedies. And I will wrap up for you now, Nicolo.
>> NICOLO ZINGALES: Thank you, Emily. Now to Maryant Fernandez Perez.
>> MARYANT FERNANDEZ PEREZ: I'm here representing 35 NGOs; we have (?) (Not speaking into microphone)
We see a trend in Europe of using platforms to deal with several policy objectives. Big companies can become not only the police but also the Internet legislator, educator, and judicial power. That is privatized Internet enforcement. You have public authorities that are not taking diligent or appropriate steps towards a framework that is respectful of rights, the rule of law, and basic principles. The Commission's communication on illegal content is an example of this. You also have newer approaches such as (?) or referral units.
On the other hand, there is hope. I see that there is quite a bit of work and support in the international arena towards having a rights-based, collegial approach. We welcome the work in the Council of Europe and (?) and the work on this draft. We appreciate the work by David Kaye and by Julia Reda and others who have made several visits to the European Commission. So you may be wondering, okay, what is your take on all of this? From the EDRi perspective, we believe in a competence-based approach. So how do we do this?
First, we propose to have an understanding of, and respect for, the fundamental rights framework. What does it mean for a public authority to respect fundamental rights, and more specifically, what does it mean to have Article 52(1) of the Charter, which says that restrictions on fundamental rights need to be provided for by law? As you know, there is a push for companies to take the lead and act on the basis of their terms of service and not on the basis of democratically adopted law. That is the first step. And also, we need to see, for example, whether there is any robust protection for due process, safeguards for legal content, how the right to a remedy is being implemented, and so on and so forth. That leads to the second point, which is learning from the past: we need an approach where we really look at what has worked, what has not worked, and what to do about it. The third point that we want to bring is building a flexible and effective approach to the issue. We're seeing that regulation by algorithm is not the solution. Also, one-size-fits-all is not a solution.
If we regulate the Internet as if it is Google, it will become one. We don't want to move in that direction. We keep regulating content in this area on the basis of copyright holders' interests. We want to really solve the public policy objective. By removing content, the problem (?) is not solved, quite the opposite. It is a very superficial action, which does not determine whether the public objective is being achieved.
So in the paper, we go a step further and make a recommendation for the European Union; we have seen with recent political developments that we're not there yet. So before we get there, you will see in the chapter that my colleague and I wrote that we propose, for example, that the directive could include prevalence indications, and we also try to map which stakeholders we are trying to make react and take responsibility, whether that is just the big companies or not, and many other aspects that you can find in the chapter.
>> NICOLO ZINGALES: I guess this is a reminder to be brief, please, mainly due to the fact that this year our session has been slashed from 90 to 60 minutes. By the way, I would urge you to ask why the MAG decided to give 60 minutes to a session of a coalition that works the entire year, rather than the 90 we had in previous years; write to the IGF and ask why they decided to slash the session. We can go on and have 10 extra minutes at the end, since we started with a delay because of the queue at the entrance.
Next one is David.
>> DAVID ERDOS: David Erdos, University of Cambridge. We have heard quite a bit already today about the right to be forgotten off a search engine. I think it is fair to say that it is an issue generally seen as an EU issue, and even potentially an issue linked just to one idiosyncratic decision of the Court of Justice, the Google Spain decision. The purpose of our chapter is to gain understanding by comparing what is going on in the EU with the transnational frameworks beyond the EU. So we looked at the broader Council of Europe framework, the OECD, which bridges a number of different regions, the Asia-Pacific framework, the framework in Southeast Asia, and the SOS framework, in terms of what they say about the right to be forgotten and search engines.
I think this question has salience not only because of the decision, but because it is becoming key in Europe itself as the Court looks at what the reach of the Google Spain decision is; it is an issue to be decided again by the Court of Justice. Krzysztof Garstka will go through the implications for those frameworks. But what I want to do is briefly clarify what we mean by the right to be forgotten, and what is necessary for it to apply to search engines.
What we don't mean by the right to be forgotten is one specific right.
Yes, the new General Data Protection Regulation does have something that carries the badge of the right to be forgotten. That is not in the current EU data protection framework.
Rather, the right to be forgotten is a broad concept, basically saying that individuals who are concerned about potential or actual damage should have some ability to restrict access to even published personal data about them, so long as there is no overriding legitimate interest opposing that.
So turning back to the EU, it was that concept which was seen to stand behind more specific aspects of the law, the right to erasure and the right to object.
Secondly, what is necessary for that to apply to search engines? Fundamentally, it is necessary that they are considered to be controllers, controllers of data. Again, in the EU, that demands that, alone or jointly, they determine the purposes and means of processing. The Court of Justice actually found that this was clearly the case: no one was forcing a search engine to collect data and use it in a particular way. But it also laid emphasis on what the purpose of EU data protection was about: protecting the fundamental right to privacy and related rights, which are severely impacted by the nature of modern technology and search on an individual's name across a vast range of different sources.
There are a myriad of issues in this debate, which we have absolutely no time to go into. It is those issues that also confront the transnational data protection frameworks, so I will pass over now.
>> KRZYSZTOF GARSTKA: Thanks, David. Within the frameworks, we looked for the presence of a specific erasure right, one which would enable individuals to act against data processing that violates data protection standards. And what we found is that four frameworks, those of the EU, the Council of Europe and others, can be seen as containing such a right. Two frameworks were found to only contain rights against data that is inaccurate, as opposed to data violating data protection standards more broadly. The APEC instrument contains the principles and a requirement to provide a remedy when they are violated.
Hence, the right we're looking for could not always be found there in its entirety; such an interpretation, however, was found to be possible. Now, secondly, we assessed the instruments in search of a definition of a data controller that covers search engines (audio is fading in and out, speaker not close enough to microphone)
by a lack of a standalone definition. Now, the OECD and (?) were a less obvious match, requiring us to look closer at the decision-making role over the data. We use the OECD as an example: a controller has to decide on the means of processing, with deciding on the content and use of personal data drawn from the guidance. Now, in general, search engines are not deciding on the content, but they do decide over the use, by deciding to collect, retrieve, store, and disclose information on the platform.
Now, the CJEU found it does not matter whether the search engine exercised such a choice or not, and we do think it should be seen as a data controller.
To summarize our findings: beyond the EU, five out of six international data protection frameworks we studied could, with the right interpretation, accommodate the tenets of the Google Spain claim, though not in a clear and direct manner. The one framework where it is not possible is a minor one, with (audio fading in and out)
What can be done? In our view, there is a need for international consensus on this. The way we propose for this to occur is through the International Conference of Data Protection and Privacy Commissioners; this body could act by beginning work on a resolution on the right to be forgotten.
Consensus on this matter in the international arena would not only enhance the presence of such a right, but also prevent further legal fragmentation of cyberspace and the rise of conflicting court orders, such as the one in the trade secrets case. Thank you.
>> NICOLO ZINGALES: Thank you very much. Now, we move on to Rolf Weber.
>> ROLF WEBER: Thank you very much for the presentation. My introductory statement reads basically as follows. The growing discussions about data, particularly in Europe, concern the online platform market and the quantity and quality of data available on certain platforms. With the assessment of issues in the platform markets to come, the question of who owns the data becomes crucial, as data can be used by the legal owner on the one side and the actual controller on the other side, and the question is to what extent it is possible that one is exploiting the other. In my contribution, I looked at four regulatory challenges: collective data ownership, data portability, access to data, and data sharing. I will now quickly address these challenges.
Collective ownership goes back to the already well-known concept of sharing and developing data; we come across it in actual data analytics. Under such a concept, the data controller provides the data in a usable form, and equally, users should be allowed to take advantage of the application in order to analyze their own data and draw conclusions from it.
The legal implementation is less clear. We have certain instruments, for example the cooperative, quite often used nowadays in the health environment, and we have the form of joint ownership; however, the situation is by far less than clear, and further research, maybe regulatory action, will be needed.
The second challenge is data portability; users may be interested in changing the provider of their data use from time to time. The right to data portability means that those whose data is controlled have the right to move their data, at their request, from one online provider, from one platform, to another. As you all know, data portability, as far as personal data is concerned, is now enshrined in Article 20 of the GDPR; I'm not going into the details of this. However, nonpersonal data is not governed, even though portability would play a role in respect of machine-generated data. This is why we do have competition law instruments against the lock-in effect, which leads to the creation of market barriers. However, at this stage, antitrust proceedings are usually lengthy and costly, meaning that in practice antitrust plays a limited role and can only play a limited role in the future, as long as we do not have new alternative procedural elements, such as ADR, in competition law.
The third challenge is access to data. Obviously, as far as personal data is concerned, we do have the GDPR rules, which also obviously do not apply in the field of machine-generated data, as long as it is not personal. In this context, we now have the most recent communication of the European Commission, called Building a European Data Economy. It is about 11 months old, and it has a couple of objectives, such as improving access to anonymous machine-generated data. (Audio not clear, speaker not close enough to microphone)
we have to wait what is going to happen in this context over the next couple of weeks or months.
And finally, the fourth challenge: data sharing. Here, again, I would like to share some thoughts with you. On this occasion, a compulsory licence granting access to data has been proposed; for facilitating data sharing and increasing the availability of data, a compulsory licence regime is obviously again an instrument. And this means that some antitrust law has to be taken into account, in particular because this revolves around FRAND terms: fair, reasonable and non-discriminatory terms.
This avenue would be a possible way to improve data sharing; however, again, procedural rules could be introduced which would allow the implementation of these data sharing ideas in the short term, and not only after a couple of (?)
>> NICOLO ZINGALES: Thank you. We have seen the first two parts of the book, dealing with human rights and with data governance. Now we switch to the third part of the book, which explores the new roles of platforms. We start with the presentation from Lofred Madzou and Julie.
>> LOFRED MADZOU: I am presenting today with my colleague Julie Herzon. We are going to dive into what we published in the book.
I won't go that much into the topic itself, but the question is basically how you hold platforms accountable, and what policy response you can bring to these issues.
Basically, we introduced the principle in the law in 2016 in France, which is basically the cornerstone of our framework for online platforms. It is built on two main pillars. The first is transparency: online platforms should be transparent about the processing of data and about their activities, to end users and society at large. That is the first main pillar. The second one is basically visibility.
Platforms shouldn't just be transparent about their activities; you should be given the means to actually check what they do with your own data. That is the critical point here. Basically, to this extent, we think we should have the capabilities to do this. So the question becomes: how do you develop the capabilities to oversee online platforms? You know, more engineering capabilities, with lawyers and engineers working more closely together around these issues. I will let my colleague Julie finish this.
>> JULIE: Okay. Good. What we say in our contribution concerns the accountability of various platforms. We propose to introduce a broad concept of accountability. (?) That would be a complement going above a lot of already existing rights, which suffer from inefficiency because we don't have, as we said, the capabilities to make them effective: we do not have enough understanding of what is going on to produce metrics, and we do not have enough conventions to agree on what should be acceptable.
So we propose principles under which these capabilities should be developed. The first one should be that platforms are open to public evaluation and that there is the ability to reach out to the public, even when platforms do not agree to be evaluated from the outside. This would mean, of course, that regulators need to have the capability to access data from the inside and to run tests from the outside. It also means that in the future we really need to build stronger alliances between regulators, academia, and civil society to work on these capabilities.
We already have that kind of discourse around the world, the government as a platform. So the idea would be to rely on bottom-up information streams to detect best practices and good practices as well. But there is another reason for this. It is not only about (?), because we know that the capabilities for supervising firms are distributed across society. In the future, we can foresee that this will be a problem. For example, we are starting to realize that personalization of content, personalization algorithms, creates some adverse effects. They may tend to discriminate against categories of the population, and we do not have the capability to foresee the next categories of people who will be discriminated against. We need people and advocacy groups to have the capability to run tests from outside the platform, to access governance, to have the algorithms audited, and to bring the evidence, with the data, to the regulators.
>> NICOLO ZINGALES: Excellent. And something to mention: in the book, you also propose the agency, the trust, and the digital prosperity economy. People can read further in the book and ask questions later.
We have to move on to the next speaker, Krisztina Huszti‑Orban.
>> KRISZTINA HUSZTI-ORBAN: Thank you very much. I work with the Human Rights, Big Data and Technology project at the University of Essex. My contribution this year focuses on the human rights challenges related to content regulation by social media platforms when dealing with terrorist and violent extremist content. This is in reaction to providers facing increasing pressure, including pressure to monitor content posted by users. The baseline for the analysis is that the standards used to decide how to deal with content, whether to permit it, remove it, or block it, and the procedures employed in this regard, should comply with international human rights standards. My analysis looks at whether this is the case.
When it comes to the standards used by social media platforms to decide on the legality or permissibility of content, they look in two main directions for guidance. On one hand, of course, there are the terms of service, community standards, and internal policies. This is something that has been addressed at length at the IGF during other sessions, and the shortcomings of terms of service when it comes to human rights compliance have been highlighted.
I would like to focus on the other direction that social media platforms have to look at, which is the laws related to terrorism, attempted terrorism, and violent extremism.
As we know, there is no universally agreed definition of terrorism or violent extremism. Definitions are found in a host of domestic laws and domestic standards that are relevant to this area are extremely diverse.
There has been sustained criticism of many domestic definitions of terrorism and violent extremism by human rights bodies, NGOs, and other stakeholders for being overly broad and encroaching on freedom of expression.
Under some of these definitions, almost any kind of view that deviates from the social norms accepted by the majority may be suppressed, and measures may target thought, belief, and opinion as opposed to actual conduct. When businesses, social media platforms, implement these laws, they contribute to the negative human rights impact of these laws and measures, and this creates a tension between the business's obligation to comply with the domestic laws of the jurisdictions where they operate and their responsibility to respect human rights under the United Nations Guiding Principles.
In addition, the diversity of standards across jurisdictions comes with its own challenges and may result in overcensoring. That brings us to the importance of the human rights compliance of the procedures used for assessing content. Assessing whether a certain post or a certain piece of content is terrorist or violent extremist in nature generally requires a sophisticated legal analysis. We have little information on how the platforms make this determination, and when they use artificial intelligence or human review. The shortcomings of artificial intelligence have been highlighted: algorithms are not foolproof; they can be over- and underinclusive.
However, human review is not without problems either. Doing this kind of moderation right requires social media platforms to employ a highly trained workforce. Based on the reports we have seen, it seems this is frequently done by low-paid and insufficiently trained moderators, which does not bode well for freedom of expression online. Because of these shortcomings, it is very important that there is due process and that there are adequate safeguards when it comes to content regulation and moderation. That is so far lacking. Users have little information at their disposal about the process and about how to challenge decisions. Platforms, on the other hand, face their own challenges, because they need to deal with a horrendous amount of content and the liability frameworks imposed on them, and they receive little to no help and guidance from public authorities. Considering all of this, the platforms are likely to err on the side of overcensoring.
Which leads to my conclusion: there is much to be said in favor of self-regulation, but when it comes to online platforms carrying out quasi-enforcement and adjudicative functions, states cannot just sit back and leave it to companies to regulate these processes.
The state obligation to protect human rights under the international human rights covenants requires setting up the proper framework and creating conditions that make human rights-compliant content regulation possible. This is something where we need more work.
>> NICOLO ZINGALES: Thank you for these interesting thoughts. Now we end with Natasha, who is speaking about how digital platforms may become revenue chokepoints, and then we will open it up to the floor.
>> NATASHA: (?) Canada. I wanted to make four quick points in relation to my chapter. The first is the role of the state. Despite varying narratives of the state in retreat, the state absent, voluntary regulation, and in particular regulation by platforms instead of the state, the state plays a strong, direct role in shaping, calling for, and demanding regulation by platforms. The U.S. Government, the U.K. government, and the European Commission have employed strategies of encouraging or coercing Internet intermediaries to assume greater enforcement responsibilities, including legislation and, in the case of Google in the United States, threatened and actual criminal investigation.
The second point is that the goal of state pressure on intermediaries to act and assume greater responsibility is to institute what I call beyond-compliance regulation. This refers to pressure on Internet intermediaries to exceed their legal responsibilities in the absence of legislation and in the absence of formal legal orders. Intermediaries occupy an uncertain regulatory environment: there are legal disputes about the nature of their responsibilities and the degree to which they're responsible for third-party content on their platforms, as my fellow speakers have discussed. We see that intermediaries are vulnerable to state pressure and more willing to adopt self-regulatory measures in the absence of a legal requirement to do so.
Unlike legislation, this beyond-compliance regulation is highly flexible, highly elastic, which means states can continually pressure the companies to adopt more and more responsibilities in the absence of accountability and oversight. The third point is that traditionally we have seen the big Internet platforms, especially the large U.S.-based ones, pressured by powerful states: powerful states pushing the companies to act as regulators in the absence of legislation. We are now also seeing powerful multinational corporations in the copyright, trademark, pharmaceutical, and apparel industries pushing them to act to protect their intellectual property rights. This is because the protection of intellectual property rights is a key political and economic priority, especially for the U.S. Government, and we are seeing the U.S. Government pushing the companies to adopt greater enforcement responsibilities to protect intellectual property rights.
The fourth point is: why are countries, are states, pushing for beyond-compliance enforcement? One reason the U.S. is doing so is that it is able to embed its preferred standards, its preferred rules for the protection of intellectual property rights, within the big Internet platforms. So as these Internet platforms set and enforce rules, they're acting in the way that the United States prefers in relation to intellectual property rights.
I think this raises a number of different questions. What we're seeing is a pattern of global rule diffusion in which rules are set quite secretly among closed groups of corporate and political actors, largely in the United States and some other areas in the Global North, and these are then diffused globally. One question I want to focus on, or raise, is whose rules apply, and where. This raises a host of questions, and it is something others will talk about as well.
>> NICOLO ZINGALES: Thank you, Natasha. The last point resonates with the initial chapter that we wrote on how YouTube regulates, which demonstrates that some of the platforms are proxies to extend national regulations, in this case the U.S. copyright regime, globally. That is a question of jurisdiction.
So we can open the floor for a couple of questions. Do we have some remote questions? Let me look at the remote moderator. Do we have questions from online participants?
>> No, we have people watching it.
>> NICOLO ZINGALES: Do they have a question we can take quickly? Don't be shy. We will take two from the floor. Please go ahead.
>> QUESTION: Hi, my name is David Luca. I am with the European institute. I have a question relating to the point about how these strong corporates in areas like copyright and trademark put pressure on the platforms for regulation. I'm wondering, in the similar context of fake news, where does the pressure come from there? You don't really have such private actors. You do have private media that put some pressure, but not the same kind of power.
>> NICOLO ZINGALES: We will take another question, and then leave for the panelists to give final remarks.
Okay. Wonderful. We will have the panelists give final remarks and finish almost on time.
>> PANELIST: Fake news is an interesting question. I think it raises the broader question of governments turning to platforms as de facto, go-to regulators. Whatever the issue of the day is, extremism, copyright, interference in elections, they're turning to the platforms saying: you have great algorithms, experienced engineers, do something. I think that is problematic. The "something" is ill-defined. We don't know what they're doing or how they're doing it. The platforms have admitted they're ill-equipped to distinguish legality from illegality online. These are complex questions, and how platforms undertake this regulation is murky. So the idea of the platforms as de facto regulators raises a host of questions.
>> NICOLO ZINGALES: Let me add a remark. It is in the nature of platforms to maximize revenue and minimize cost. The goal of the platform in implementing or designing a remedy will be the reduction of cost, the most cost-effective remedy, not the one most respectful of human rights. They're the wrong stakeholder to take on this role and implement it. But I think there are other remarks.
>> PANELIST: I will jump in with one comment, because I think my fellow panelists have captured a lot here. One comment is that a lot of this, in some ways, is not new. The pressure from government on intermediaries to regulate has been there from the beginning. If we look back at issues of child pornography, a lot of that was, in a way, outsourced to the intermediaries to try to handle.
The issue that I think is new in the area of fake news is that it is more difficult to implement a system of regulation. In a way, what it has done is expose this arena of shadow regulation that has been happening for years, but we can't pinpoint an easy method to actually regulate it. So we're faced with this double problem.
>> PANELIST: Just a couple of broad points on the issue. I think, maybe from a data protection point of view, it has been clear going right back to the early 1980s that these new technologies, whether that is digital archives or what is going on now, do have enormous power. Their activities create a lot of good things and also create very real negative effects on people. And it is appropriate, I think, for that to be looked at by society and for solutions to be looked at. I don't think we should lose sight of that.
The second thing I would say is that we do need to be a bit careful that we don't miss the incentives. These companies have a great incentive to disseminate a lot of information, whether it is harmful or not harmful, and also, I would say as an aside, whether it infringes intellectual property. They have an incentive to provide the material, even if it is of questionable legality, because they are the go-to point.
But again, that may produce many, many social costs. We need precise regulation, with precise responsibilities, with good accountability mechanisms. That doesn't mean no regulation or self‑regulation that is incredibly weak. We need effective regulation.
>> NICOLO ZINGALES: I want to pick up two issues mentioned. One is about incentives that you and Luca mentioned. The other is about accountability.
These platforms have a duty to shareholders to maximize revenues. That is why they're going to adopt the least expensive, the least sophisticated sort of measures. I think what we should do in our discussion, maybe as a task for next year, is develop some ideas of how accountability can be injected into the corporate culture of these firms, so that they will have to recognize to their stakeholders that some of these measures need to be taken, that they're simply part of the due diligence that corporations should embrace as part of their role in society. As a parallel to what we have done in previous years, yesterday we had a meeting amongst the authors of the book.
We thought about things that could be done over the course of the next year. This should be a dynamic coalition, not a static one. The idea is that we're now brainstorming about possible projects, and you're welcome to approach us after the session. One of the focuses might indeed be this accountability role of platforms.
We can take a last comment.
>> Thank you. A couple of comments. The first one: there seems to be a push for accountability of platforms, but we see a lack of accountability from the others involved in this environment, such as governments and NGOs. For example, there have been some cases where governments and legislators are (audio fading in and out, speaker not close enough to microphone)
We ask, out of that, how many investigations have led to prosecution. We're talking to parents: is there anything there? The answer we are seeing is that there is none whatsoever. We see there needs to be accountability and responsibility from all parties involved.
A second point: instead of regulation, we see privatized law enforcement. It is not self-regulation by the platforms, because they're not deciding to fight terrorism because they think they need to; they're really facing political pressure. It is not new, I think that was said by my colleague, it has been there a long time. We need to ensure counterbalancing obligations so that companies respect human rights, in line with the recommendations from the Council of Europe and also from the (audio fading in and out)
>> NICOLO ZINGALES: By the way, there is a call for submissions on the topic of content regulation in the digital age, which is open until today, I think. Or tomorrow. You're still in time.
Last comment?
>> ROLF WEBER: I want to respond to David and his presentation topic. I think the experience related to that case shows that it is not ideal to simply move these decisions over to the provider. I mean, Google has installed a huge administration for looking at these issues. Nevertheless, you have to see the potential social cost: you make the private company more or less a judge. With regard to human rights, to freedom of expression, that is not really ideal.
>> NICOLO ZINGALES: Krzysztof Garstka, you wanted the right to reply?
>> KRZYSZTOF GARSTKA: I think it is more than a right to reply, rather a right to conversation. I have a different comment, probably right before the break. It is about algorithms. Some of you might be wondering: there is so much algorithmic governance and regulation, why not have academia look at the algorithms and judge which are good and bad? Well, it is actually highly problematic, because first of all, it is not just an issue of transparency, that the algorithms are kept secret because they're core to the business; with YouTube, for example, the mechanism there (not close enough to microphone to hear individual speaking)
those algorithms are constantly changing. The YouTube algorithm changes every hour; it could be different each time. So that is a very, very difficult thing to do.
>> NICOLO ZINGALES: With that injection of optimism regarding algorithms, I think we can close the meeting. Thank you very much. If you want to make suggestions on future work, we have a mailing list. Fortunately, that has not been blocked by the MAG yet, so I advise you to use it. If you don't find the information on the Byzantine IGF website, type "UN IGF platforms" into your browser and you will arrive at the page with the information. Thank you very much, everyone.
(Applause)
(session concluded 10:18 a.m. CET)