IGF 2024 WS #205 Contextualising Fairness: AI Governance in Asia

    Organizer 1: Nidhi Singh
    Organizer 2: Tejaswita Kharel, Centre for Communication Governance at National Law University Delhi
    Organizer 3: Joanne D'Cunha, Centre for Communication Governance at National Law University Delhi

    Speaker 1: Jason Grant Allen, Civil Society, Asia-Pacific Group
    Speaker 2: Jhalak Mrignayani Kakkar, Civil Society, Asia-Pacific Group
    Speaker 3: Prateek Sibal, Intergovernmental Organization, Intergovernmental Organization

    Moderator

    Nidhi Singh, Civil Society, Asia-Pacific Group

    Online Moderator

    Tejaswita Kharel, Civil Society, Asia-Pacific Group

    Rapporteur

    Joanne D'Cunha, Civil Society, Asia-Pacific Group

    Format

    Roundtable
    Duration (minutes): 60
    Format description: A workshop-based roundtable layout would be ideal for our session, as it would foster collaborative discussion. This format encourages interaction and collaboration among participants, enabling diverse perspectives to be shared and explored effectively.
    We plan for the session to be informative and collaborative. The session will begin with a 5-minute introduction, followed by two 10-minute sections on (i) an overview of fairness in AI, and (ii) fairness in Singapore and India. This will be followed by a 15-minute simulation exercise on bias and discrimination in AI models. We will then conduct a 15-minute open discussion among participants on fairness metrics in their respective jurisdictions. The session will end with a 5-minute reflection by the speakers and some closing remarks. The roundtable layout would be especially useful for the open discussion, allowing participants to collaboratively craft fairness metrics tailored to their own contexts.

    Policy Question(s)

    A. How can we make AI fair? What does the principle of fairness entail in the governance of AI?
    B. Are existing metrics of AI fairness adequate to account for the unique issues that arise in a socio-culturally diverse region like Asia?
    C. What are the metrics across which fairness is measured? How can we make these metrics more representative of Asian contexts?

    What will participants gain from attending this session? In this session, participants will learn what fairness entails, why context-specific fairness metrics are needed, and how to identify bias and discrimination in AI models. By the end of the session, participants will be able to identify bespoke metrics for evaluating AI fairness in their own socio-cultural contexts. Ensuring fair and ethical AI, contextualised to socio-cultural nuances, would create a more conducive environment for innovation.
    Through discussions on the contextualised ethical governance of emerging technologies, participants will be better equipped to contribute to future dialogue on the governance of emerging technologies. These conversations will work towards balancing the risks that AI systems present, fostering a safer, more secure, and more trustworthy approach to their use.
    This session will highlight to the participants that the principles and ethics guiding the governance of emerging technologies are subjective in nature and must be tailored to specific regional, social, and local contexts.

    Description:

    This collaborative workshop session (roundtable format) will focus on the principle of fairness in AI, emphasising the need for context-specific fairness metrics for ethical governance. We will unpack the multifaceted concept of fairness in AI by discussing key components of the principle (equality, bias and non-discrimination, inclusivity, and reliability). While these components are relevant globally, their interpretation varies across jurisdictions. For example, unlike in Western liberal democracies, factors such as caste and religion are key aspects of non-discrimination in India. Understanding these components is essential for developing and deploying AI systems that are safe, secure, and trustworthy.
    As concepts of fairness in AI have often been developed with a primary focus on the US and Europe, they may be difficult to adopt in Asian countries, which have unique socio-cultural contexts and may interpret fairness differently. We will discuss fairness in India and Singapore to showcase how the concept varies across Asia, and from the broader global concept of fairness.
    Further, we will discuss case studies, such as a biased AI job recommendation system in Indonesia, to illustrate the complexities of fairness in AI. We will also conduct a simulation exercise using a hypothetical model to illustrate how bias can manifest through data points such as age, gender, and address. Finally, we will conduct an open discussion to gain perspectives from participants on fairness metrics in their own countries and to analyse how fairness as a concept differs based on their socio-cultural contexts.
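    To make the simulation exercise concrete, the sketch below is a minimal, hypothetical illustration in Python of the kind of model we have in mind: a toy job-recommendation scorer that never uses a protected attribute directly, yet produces a group-level disparity because an address-derived feature acts as a proxy. The scorer, threshold, and data are illustrative assumptions rather than the session's actual model, and the demographic parity difference computed at the end is only one of several contested fairness metrics.

        # Hypothetical sketch: a toy scorer where postcode acts as a proxy
        # for group membership. All names, thresholds, and data below are
        # illustrative assumptions, not the session's actual exercise.

        def score_applicant(years_experience: int, postcode: str) -> float:
            """Toy scorer: experience plus a bonus for 'central' postcodes."""
            bonus = 0.3 if postcode.startswith("1") else 0.0  # location proxy
            return min(1.0, 0.1 * years_experience + bonus)

        # Synthetic applicants: (group, years_experience, postcode).
        # Group membership correlates with postcode, so the postcode bonus
        # becomes an indirect source of group-level disparity.
        applicants = [
            ("group_a", 5, "110001"), ("group_a", 3, "110002"),
            ("group_a", 4, "110003"), ("group_a", 2, "110004"),
            ("group_b", 5, "560001"), ("group_b", 3, "560002"),
            ("group_b", 4, "560003"), ("group_b", 2, "560004"),
        ]

        THRESHOLD = 0.5  # applicants scoring above this are "recommended"

        def selection_rate(group: str) -> float:
            members = [a for a in applicants if a[0] == group]
            selected = [a for a in members
                        if score_applicant(a[1], a[2]) > THRESHOLD]
            return len(selected) / len(members)

        rate_a = selection_rate("group_a")
        rate_b = selection_rate("group_b")

        # Demographic parity difference: the gap in selection rates between
        # groups -- one common (and contested) fairness metric.
        print(f"selection rate, group_a: {rate_a:.2f}")
        print(f"selection rate, group_b: {rate_b:.2f}")
        print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")

    In the live exercise, participants would vary the proxy features, threshold, and choice of metric to see how the measured disparity shifts, motivating the discussion of context-specific fairness metrics.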
    This session will leverage learnings from an Asia-level dialogue conducted by SMU and CCG, which brought together diverse stakeholders from the APAC region to discuss the multifaceted concept of fairness in AI. We will also have a speaker from UNESCO who has experience with AI norms and can speak to global perspectives on fairness in AI.

    Expected Outcomes

    We will use the session as a platform to introduce the idea of contextualising the ethical governance of technologies by discussing the subjective metrics that guide the principle of fairness in AI. The dialogue will help reframe conversations around fairness from an Asian perspective.
    As an academic legal research centre focused on Global Majority and Asia-Pacific perspectives in technology policy, we will use the insights gathered about contextualising AI in our work on ethical AI. The workshop will bring together organisations and researchers to form a community and build momentum towards sharing Global Majority and Asian perspectives in global norms development processes. The session will also create additional pathways for research and collaboration between stakeholders in the ecosystem. We will also publish a post on the CCG Blog and produce an episode of the CCG Tech Podcast discussing the learnings and key takeaways from the session.

    Hybrid Format: This workshop session includes a case study discussion (e.g., a biased AI job recommendation system in Indonesia), a simulation exercise illustrating bias in AI, and an open discussion encouraging participants to share their perspectives on fairness metrics. Through these interactive segments, we aim to engage with participants both online and onsite.
    To facilitate seamless interaction, we will have both an onsite and an online moderator to ensure equitable participation. We will also ensure that all participants have the opportunity to ask questions and seek clarifications both during the session and afterwards. Further, to engage online and onsite participants alike, we will use interactive tools such as Mentimeter, whiteboards, and polls. With these, we will be able to conduct an inclusive and accessible session with active and meaningful participation by all participants and speakers.