
IGF 2019 WS #28
Towards Human Trust in AI

    Subtheme

    Organizer 1: EVA THELISSON, AI Transparency Institute
    Organizer 2: Himanshu Verma, HES-SO, EPFL
    Organizer 3: Carmela Troncoso, EPFL

    Speaker 1: Raja Chatilla, Technical Community, Western European and Others Group (WEOG)
    Speaker 2: Karine Perset, Intergovernmental Organization, Western European and Others Group (WEOG)
    Speaker 3: Maël Pégny, Civil Society, Western European and Others Group (WEOG)
    Speaker 4: Friedhelm Hummel, Technical Community, Western European and Others Group (WEOG)
    Speaker 5: Joanna Bryson, Technical Community, Western European and Others Group (WEOG)
    Speaker 6: Michele Loi, Technical Community, Western European and Others Group (WEOG)
    Speaker 7: Johan Rochel, Civil Society, Western European and Others Group (WEOG)
    Speaker 8: Martin Jaggi, Civil Society, Western European and Others Group (WEOG)
    Speaker 9: Carmela Troncoso, Technical Community, Western European and Others Group (WEOG)

    Moderator

    Carmela Troncoso, Technical Community, Western European and Others Group (WEOG)

    Online Moderator

    Himanshu Verma, Technical Community, Asia-Pacific Group

    Rapporteur

    EVA THELISSON, Civil Society, Western European and Others Group (WEOG)

    Format

    Break-out Group Discussions - Round Tables - 90 Min

    Policy Question(s)

    The workshop will address the following questions:

    Standards: Addressing conflicts between dataset quality (representativeness), data minimization, fairness and algorithmic bias.

    Autonomous intelligent systems and decision-making processes: Finding the appropriate combination of algorithmic autonomy and accountability of actors to avoid a dilution of liability in decision-making processes.

    Responsibilities of intermediaries: Should major internet platforms demonstrate due diligence in policing content posted on their platforms, given that proactive content monitoring can have detrimental human rights implications?

    Trustworthy AI: Expanding existing efforts and cooperation to define metrics of trustworthiness.

    Liability: Which liability scheme is most appropriate to foster innovation while developing safeguards for the users of AI-based systems?

    Lawfulness: Which procedural standards should assess the legality of AI services (e.g. lip reading, personality profiles based on voice and facial analysis)?

    Notification: Handling the notification of users and their capacity to object to AI outcomes having negative legal effects.

    Trust-building measures: Finding the appropriate combination of hard law (an oversight body, a safety authority with prior safety checks and authorization for high-risk AI-based systems) and soft law (certification, labels, recommendations...).

    Due diligence: Mechanisms for carrying out due diligence prior to the deployment of high-risk systems on the market, as a way for private actors to demonstrate accountability.

    Digital ethics: Standards to embed ethical principles in the design of technologies, and addressing the ethical challenges of augmented cognition.

    Re-skilling of the workforce: Addressing the need for re-skilling of the workforce and for financial and social safeguards. Who should bear the cost?

    Innovation: An appropriate governance framework to foster innovation while protecting fundamental freedoms and human rights.

    Transparency: Expanding efforts on self-explaining systems and human control of AI systems.

    Economy of AI: Addressing the trade-off between privacy and wealth creation arising from the virtuous cycle "data collection, low-cost predictions, wealth creation, fair redistribution".

    SDGs

    GOAL 3: Good Health and Well-Being
    GOAL 4: Quality Education
    GOAL 5: Gender Equality
    GOAL 8: Decent Work and Economic Growth
    GOAL 9: Industry, Innovation and Infrastructure
    GOAL 10: Reduced Inequalities
    GOAL 11: Sustainable Cities and Communities
    GOAL 12: Responsible Production and Consumption
    GOAL 16: Peace, Justice and Strong Institutions
    GOAL 17: Partnerships for the Goals

    Description: On 23 March 2019, over 40 senior-level participants from academia, industry, civil society and international organizations met in Geneva, Switzerland for the first AI Governance Forum.
    In Geneva, participants in the AI Governance Forum (https://ai-gf.com) identified the benefits and risks of Artificial Intelligence and the need for specific AI governance.
    Following this first conference, the same participants proposed a workshop at the Internet Governance Forum in order to define a Work Plan, which formulates concrete common objectives that stakeholders set for themselves and lays out a clear list of components for the development of operational frameworks.
    These components will guide the multistakeholder policy development work within the AI Governance Program of the AI Transparency Institute. The workshop will be a milestone moment to identify concrete focus areas and priorities for developing policy standards and operational solutions to major AI challenges.

    Expected Outcomes: The fundamental objective of the workshop is to build human trust in AI across disparate countries, fostering social acceptance and growth while respecting the values and principles of democracies. The participants of the AI Governance Forum have agreed upon achieving clarification and coherence with respect to the following points as a common objective:

    - Norms applicable to AI systems
    - Respective obligations of states and respective responsibilities and protections of other actors
    - Decision-making, standards and procedures, including the escalation path for individual decisions and appeals mechanisms.
    - The necessary due diligence process and transparency standards that should be applied to AI actors across borders, as well as the lawfulness of AI systems prior to their deployment on the market.
    - The implementation of AI Trustworthiness principles and appropriate metrics.
    - The challenges of algorithmic detection of abusive content and making algorithmic tools more broadly available and transparent.

    Best practices will be developed on those topics.

    The session organizers plan to facilitate and encourage interaction and participation during the session by asking open questions and using a flipchart and small thematic working groups of 3-4 persons each. The organizers have experience in organizing this kind of workshop. Himanshu Verma is also a researcher in human interaction and education, and is well versed in engaging speakers in discussion.

    Relevance to Theme: Societies are becoming increasingly dependent on the datasets feeding Artificial Intelligence and Machine Learning technologies. Due to economies of scale and the decreasing cost of algorithmic predictions, all sectors are impacted by Artificial Intelligence: healthcare, transport, insurance, administration, commerce, news, advertising, entertainment, robotics, social media, voting, weapons...

    AI technologies are able to process large quantities of data from multiple sources with strong computational power. This results in huge efficiency gains, scalability and growth. AI technologies decrease the cost of producing outcomes such as predictions, scores, medical diagnoses and algorithmic decisions, which facilitates the spread of AI technologies across all sectors. AI therefore provides a significant competitive advantage for companies with access to large datasets.

    Data is the oxygen of the digital economy, as it allows companies to provide personalized services and goods and to generate growth. Data access and data analytics play a central role in the digital economy. AI policy will have to deal with the trade-off between data sharing and data protection/privacy.

    In order to build human trust in AI technologies, quality control of AI should be put in place. Fairness is a key concern, as AI replicates our implicit biases. Algorithmic bias arising from the data input, vectorization, labeling and learning context can result in discrimination. One of the main challenges is to build due diligence mechanisms throughout the AI lifecycle, and in particular to assess the quality of datasets (completeness, absence of bias, representativeness...).
    Principles, standards, and best practices can be embedded into the design of technologies (Privacy-by-Design, Ethics-by-Design). As engineering digital trust becomes of key importance, trust must be built for the whole ecosystem.
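    The bias-assessment challenge described above can be made concrete with a toy metric. The sketch below (all data and function names are hypothetical illustrations, not part of the workshop materials) computes the demographic parity difference, one simple indicator a due diligence process might track:

```python
# Illustrative sketch only: the "demographic parity difference" is the
# gap in positive-decision rates between two demographic groups.
# All data below is hypothetical.

def positive_rate(decisions):
    """Share of positive (1) outcomes among 0/1 algorithmic decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-decision rates between two groups.

    A value near 0 suggests parity; a large gap flags potential bias
    for further due diligence (it is not, by itself, proof of
    discrimination).
    """
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical decisions (1 = favourable outcome) for two groups
decisions_a = [1, 1, 1, 0]  # 75% favourable
decisions_b = [1, 0, 0, 0]  # 25% favourable
print(demographic_parity_difference(decisions_a, decisions_b))  # 0.5
```

    A real audit would combine several such metrics (e.g. equalized odds, calibration) rather than rely on a single number, which is precisely why the workshop's question about metrics of trustworthiness remains open.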

    Several institutions are engaged in developing AI governance frameworks, and coordination is ongoing. Industry is investing in soft law mechanisms, Singapore has published an AI Governance Framework, the OECD published its Draft Recommendation of the Council on Artificial Intelligence dated 14/15 March 2019, and the European Commission convened a High-Level Expert Group to issue Ethics Guidelines on AI.
    AI technologies may result in a dilution of responsibilities due to the number of intermediaries involved. It is therefore essential to define a clear liability scheme (i.e. to determine who assumes what responsibility in the event of an algorithmic accident). Only humans should be accountable for the proper functioning of AI systems. Due diligence processes should be put in place so that AI trustworthiness can be assessed based on the information the system provides about what it is doing and how decisions were taken. Transparency allows corporations to demonstrate due diligence: if we legislate and adjudicate for accountability, transparency will follow.
    Trustworthy AI should respect all applicable laws and regulations, and specific due diligence can help verify the application of each of the key requirements. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and should not decrease, limit or misguide human autonomy. Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life-cycle phases of AI systems. AI systems should be used to drive positive social change and to enhance sustainability and ecological responsibility. Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

    Relevance to Internet Governance: The specific questions guiding the workshop discussions relate to an appropriate AI governance framework that fosters innovation while promoting human rights, economic inclusion, lifelong learning, and the fair distribution of wealth generated by AI technology. The AI Transparency Institute plays a key role in organizing stronger coordination among efforts currently underway in that regard and in developing best practices in this field.

    Online Participation

    Usage of IGF Tool