
IGF 2021 WS #261 Best Practices in Content Moderation and Human Rights

    Time
    Wednesday, 8th December, 2021 (16:30 UTC) - Wednesday, 8th December, 2021 (17:30 UTC)
    Room
    Ballroom C

    Organizer 1: Abby Vollmer, GitHub
    Organizer 2: Jan Rydzak, Ranking Digital Rights
    Organizer 3: Vladimir Cortes, Article 19
    Organizer 4: Peter Cihon, GitHub

    Speaker 1: Vladimir Cortes, Civil Society, Latin American and Caribbean Group (GRULAC)
    Speaker 2: Abby Vollmer, Private Sector, Western European and Others Group (WEOG)
    Speaker 3: Berges Malu, Private Sector, Asia-Pacific Group
    Speaker 4: Brenda Dvoskin, Civil Society, Latin American and Caribbean Group (GRULAC)
    Speaker 5: Allison Davenport, Civil Society, Western European and Others Group (WEOG)

    Moderator

    Veszna Wessenauer, Civil Society, Eastern European Group

    Online Moderator

    Veszna Wessenauer, Civil Society, Eastern European Group

    Rapporteur

    Peter Cihon, Private Sector, Western European and Others Group (WEOG)

    Format

    Panel - Auditorium - 60 Min

    Policy Question(s)

    Digital policy and human rights frameworks: What is the relationship between digital policy and development and the established international frameworks for civil and political rights as set out in the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights, and the further interpretation of these in the online context provided by various resolutions of the Human Rights Council? How do policy makers and other stakeholders effectively connect these global instruments and interpretations to national contexts? What is the role of different local, national, regional and international stakeholders in achieving digital inclusion that meets the requirements of users in all communities?

    This session explores how human rights frameworks can inform and guide content moderation policies on digital platforms. As platforms endeavor to serve increasingly global user bases, they owe it to those users to establish policies that are fair, appropriate, and protective of their rights. Some platforms and civil society organizations are actively pursuing strategies to connect global human rights instruments to national and community contexts in support of users’ human rights. Several fundamental human rights, such as the rights to free expression, free association, and peaceful assembly, have clear relevance to content moderation, as does the right to remedy for violations of those rights. Others, like the right to privacy, apply to any platform that collects and shares its users’ information. Just as these contexts vary, so too do platforms, yet discussions tend to focus on a few large platforms. The session therefore seeks to increase the diversity of voices and platforms within the discussion while addressing a range of human rights. Broadly, this session examines some of these strategies, with a discussion of their opportunities and challenges in an effort to distill best practices.

    SDGs

    16.6
    16.7
    16.8

    Targets: The session aims to identify best practices for using human rights law to guide the content moderation policies of digital platforms. This aim and the content of the discussion support accountable institutions (16.6), participatory decision-making (16.7), and stronger participation of users from developing countries in the governance of digital platforms (16.8).

    Description:

    In his 2019 report, Clément Nyaletsossi Voule, Special Rapporteur on the Rights to Freedom of Peaceful Assembly and of Association, observed that online platforms have become gatekeepers, “wielding enormous power over whether individuals and civil society actors can access and participate in the democratic space.” The intervening years and, particularly, the COVID-19 pandemic have only increased our collective reliance on digital platforms. Online and off, the rights to free expression, free association, peaceful assembly, and privacy are essential.

    Stakeholders in civil society and the private sector have important roles to play in translating these rights into online protections that work in practice and meet the needs of community and national contexts. This session explores the different roles that these stakeholders play at the international, regional, and national levels and the opportunities for stakeholders to collaborate to improve human rights-respecting content moderation.

    The session’s panel discussion will feature speakers from civil society, academia, and private sector digital platforms.

    Ranking Digital Rights (RDR) and Article 19 will share their experiences as established global civil society groups. RDR publishes an annual Corporate Accountability Index that assesses how platform and telecom companies implement their content moderation and transparency reporting policies in practice. Article 19 runs the global #MissingVoices campaign to surface stories of content removals from leading platforms and demonstrate their impact. Both RDR and Article 19 use public campaigns and direct outreach to the private sector to improve global practices.

    The panel discussion will also feature diverse perspectives from digital platforms. Discussions of content moderation policies and human rights often focus on a few global platforms; in practice, however, the digital ecosystem varies considerably.

    The Wikimedia Foundation, the host of Wikipedia and several other open-knowledge projects, will share how human rights can serve as a stable underpinning for making consistent and fair decisions about content, features, and policies on platforms with a global reach. It will also share its efforts defending the free expression rights of its community of contributors, whose long-established governance and moderation structures are threatened by newer regulations aimed at promoting platform responsibility.

    The panel will additionally feature the global software collaboration platform GitHub and the Indian social media platform ShareChat (and its sister platform Moj). GitHub supports software developer collaboration among local communities and international collaborators alike, and takes numerous steps to align its platform policies with human rights protections, including by holding consultations on policy updates and using a company-supported, community-driven content moderation approach to enforce them. ShareChat was created to serve Indian users who speak one of 15 Indic languages, expanding social access beyond a few dominant languages online. Together with its sister app, Moj, ShareChat has over 250 million monthly active users and uses a combination of human moderators and AI tools to ensure safe participation on its services. Both GitHub and ShareChat will share their experiences with content moderation on the panel to fill a gap in discussions that often focus on a few global social media platforms.

    To complement the experiences of leading civil society organizations and private-sector platforms, the panel will also feature the academic research of Brenda Dvoskin, who has documented civil society’s participation in platforms’ consultative processes.

    The panel aims to illustrate how human rights law can inform best practices in stakeholder consultation, drafting platform rules, and enforcing those rules, and how those practices can guide the activities of both civil society and private sector platforms in promoting human rights online.

    Participants include:
    - Jan Rydzak, Ranking Digital Rights
    - Vladimir Cortés, Article 19
    - Abby Vollmer, GitHub
    - Berges Y. Malu, ShareChat & Moj
    - Brenda Dvoskin, Harvard Law School
    - Allison Davenport, Wikimedia Foundation

    Proposed agenda:
    - 35 minutes - moderated panel discussion
    - 20 minutes - open Q&A from the audience
    - 5 minutes - concluding remarks from each speaker

    Expected Outcomes

    This session will educate panelists and the audience alike on best practices for using human rights law to guide the content moderation policies of digital platforms. The moderator will seek to draw out from the panelists and the discussion best practices for platform providers in drafting their rules, approaches to policy creation and enforcement, and civil society advocacy toward a human rights-respecting online platform ecosystem. After the event, the organizers will publish a blog post summarizing these best practices to share them with a wider audience.

    The session will tentatively be held in a hybrid format. The moderator will encourage wide and accessible online attendee participation by inviting (and compiling) questions via chat while the panelists are speaking and subsequently welcoming verbal Q&A from the audience.

    Online Participation

    Usage of IGF Official Tool. Additional Tools proposed: In introducing the topic of the session, the moderator will use slides with written text to aid those in the audience who may not be native English speakers. Panelists will similarly be invited to use slides for their introductory remarks. To facilitate discussion, the moderator will use the platform to invite audience input through both spoken questions and typed chat. We aim to increase participation by having the organizations running the session promote it in advance on their social media accounts and by inviting the speakers to do the same.

    Key Takeaways

    There was near consensus among panelists on the usefulness of human rights frameworks to guide platform activities, particularly around transparency in content moderation processes. There was disagreement, however, on the extent to which platforms can reserve their rights as private actors: are they akin to restaurants, or do their scale and ubiquity in daily life require international obligations beyond voluntarily following guiding human rights principles?

    Call to Action

    Allison Davenport called on policymakers to ensure that regulation reflects the diversity of content moderation approaches taken by diverse platforms. Vladimir Cortes issued a call to ensure that the continuing discussion of platforms’ roles in digital life follows a multistakeholder model, so that it can benefit from civil society, private sector, and government expertise.

    Session Report

    The session explored differences in perspectives and approaches to using human rights as a guide for platform content moderation. After brief introductions, the panelists discussed how to weigh the need to protect freedom of speech against the risks of harmful content.

    Berges Malu, Director of Policy at ShareChat, described his company’s approach in the Indian context. The Indian Constitution does not guarantee free speech in its entirety, and ShareChat will remove content that is illegal under Indian law. It may also remove, or restrict the reach and virality of, content that is harmful to the community. This approach has reduced instances of harmful speech, in contrast to the experiences of other platforms in India.

    Abby Vollmer, Director of Platform Policy and Counsel at GitHub, described her company’s approach. Human rights offer a guide in shaping content moderation practices for GitHub’s 73 million developers worldwide. Context and culture matter, so when taking content moderation actions, GitHub considers where a user may come across content and how they may perceive it. For example, a user’s avatar often appears without context and thus may be the subject of enforcement, while the same image could be permissible elsewhere on the platform with sufficient context. Ultimately, GitHub is guided by a principle of “least restrictive means” in content-related enforcement and in many cases will start by contacting the user in question directly to see if they can remedy the situation before taking action on content.

    Veszna Wessenauer, Research Manager at Ranking Digital Rights, introduced civil society strategies around platform content governance, noting that the concept includes practices beyond content moderation. 

    Vladimir Cortes, Digital Rights Program Officer at Article 19, described how his organization analyzes the impacts that platform content governance decisions have on local groups, including artists, activists, and collectives, as well as feminist and LGBT groups. In one case, a Mexican journalist whose content was in his native Zapotec language faced mistaken takedowns. Article 19 has found problems with Facebook’s transparency in enforcing its community standards: individuals did not understand which standard was violated or how. Article 19 advocates on behalf of these groups to improve platform practices and align them with human rights standards. The organization recognizes that freedom of expression is not without limits, but restrictions must be legal and proportionate.

    Allison Davenport, Senior Policy Counsel at Wikimedia, explained how Wikipedia approaches content moderation very differently from dominant platforms like Facebook. Wikis are divided by language communities, not political jurisdiction, so the global Spanish-speaking community sets its own standards, complicating compliance with a potential law in the country of Spain, for example. Wikimedia staff do some limited content moderation due to legal requirements and toxicity concerns, including child sexual exploitation content, but the model is predominantly community-based. She offered a warning on regulation: legislators often assume platforms exercise top-down control over content, which contradicts how the community moderation model actually works.

    Brenda Dvoskin, PhD researcher at Harvard Law School, challenged the presumption that platforms should align their approach with human rights law. In her view, applying this framework to companies does not reflect its original meaning. Under the UN Guiding Principles on Business and Human Rights, companies are expected to identify which rights individuals should have and then not infringe upon them. In practice, however, platforms have specific rules that impact these rights. For example, The New York Times comment section prohibits the use of ALL CAPS: is this an infringement on human rights? In seeking to apply human rights frameworks to platforms, people make a political choice that erases normative preferences in favor of an “objective” standard, and this process of hiding is irresponsible.

    Brenda’s intervention pushed panelists to articulate how they viewed human rights as a useful framework for content governance. 

    • Berges described platforms as akin to a restaurant, which has the right to kick you out without that being a violation of human rights. For him, political censorship is concerning, but treating human rights as absolute does not work.
    • Allison described one appeal of the human rights framework as its acknowledgment that platforms are ubiquitous and inescapable in certain areas: people live online, get jobs via platforms, and interact with family there.
    • Abby agreed that companies are private and legally can make their own rules. That is where human rights come in: for a platform looking to draft rules, human rights law provides an instructive starting point.
    • Vladimir challenged the restaurant analogy. Platforms shape civic space and operate at huge scale; they are relevant for protests and democratic participation. The UN Human Rights Committee has interpreted the right of peaceful assembly to extend even to private spaces. Some platforms are incorporating the Rabat Plan of Action’s guidance on hate speech. Although human rights apply to platforms differently than to states, they remain important for platforms.

    Panelists converged on a need for transparency in platform content governance.

    • Berges described the need to help users understand why their content may be taken down, as well as Indian regulations that require monthly transparency reports tallying takedowns and their justifications, along with an appeal process.
    • Veszna explained Ranking Digital Rights’ approach, which calls on platforms to transparently disclose practices for public evaluation. Their evaluation criteria prioritize transparency in the content governance process.
    • Vladimir encouraged the audience to review the Santa Clara Principles, which emphasize transparency, accountability, and explainability.

    Brenda agreed that restraints on companies are needed to ensure that platforms operate in the public interest. Too often, human rights are used as a heuristic for the public interest, but human rights entail a balance between values that is not necessarily in the public interest. For example, popular campaigns to push back against Holocaust denial demonstrate a public interest that does not align with human rights protections for free expression.

    An audience member, who worked for an Iranian regulator, raised a question about how to ensure content diversity amid a concentration of media power. Vladimir shared Article 19’s approach: (1) invite multistakeholder advice and input, and (2) promote unbundling as a pro-competitive remedy that separates the hosting of content from content moderation itself, in an effort to decentralize.

    A review of #WS261 found no contributions aside from those of the organizers. GitHub published a summary available here: https://github.blog/2021-12-13-github-at-the-un-internet-governance-forum/