Scaling Responsible AI Solutions (SRAIS) | 2023 Global Cohort
Description and Context
The first iteration of the SRAIS project was initiated by the Global Partnership on AI (GPAI) working group on Responsible AI, as part of its approved 2023 work plan. Proposed following the vision of GPAI Experts and project co-leads Amir Banifatemi and Francesca Rossi, the project delivered concrete results within a year. Initially titled the RAI Deployment Challenge & Fund, the project was founded on the premise that artificial intelligence (AI) appeared to offer solutions to challenges across many domains, but that there was also widespread understanding of the range of potential risks and harms to people and the planet that AI could produce if conceived, designed, and governed irresponsibly. In response, many proposals, frameworks and laws had been advanced for the responsible development and use of AI systems. In parallel, more and more AI ‘solutions’ were emerging around the world that attempted to contribute to the public good while upholding best-practice standards of responsibility. It was important that AI systems that met responsible AI (RAI) best practices and had positive socio-environmental impacts be supported to grow and reach the potential users and communities who could benefit from them.
However, nascent AI initiatives had encountered challenges in practically implementing RAI principles and in scaling AI solutions. Key RAI challenges included mitigating bias and discrimination, ensuring representativeness and contextual appropriateness, providing transparency and explainability of processes and outcomes, upholding human rights, and ensuring that AI did not reproduce or exacerbate inequities. Frameworks for RAI had proliferated but tended to remain high-level, without technical guidelines for implementation across different uses and contexts. At the same time, the process of scaling itself could introduce obstacles and complications to realising or preserving RAI adherence. All major countries and jurisdictions had established recommendations and guidelines for the responsible use of AI, with over 100 published since 2017, and organizations such as ISO, IEEE, and the OECD were creating frameworks for AI classification and governance. However, the interpretation and implementation of these standards into scalable solutions remained limited at the time. There was growing awareness among leaders in the public and private sectors of the need for a responsible approach to AI, but a clear path to achieving this systematically while creating value for organizations and communities was partly missing. Implementing responsible AI across organizations and deploying it at scale remained problematic.
From January to October 2023 the Global Partnership on Artificial Intelligence (GPAI)’s Working Group on Responsible AI undertook a project in response to these challenges. The project, called Scaling Responsible AI Solutions (SRAIS), set out to match teams working on RAI solutions with mentors with relevant expertise, in order to identify challenges teams were facing with both responsibility and scaling, and assist in tackling these challenges. In response to an initial call for participation, 23 teams from 14 countries on 5 continents applied to take part in the project. The project was guided in particular by OECD’s Recommendation on Artificial Intelligence (2019).
Reports and Research
Partners
Global Partnership on AI (GPAI)
The Global Partnership on Artificial Intelligence (GPAI) is an integrated partnership that brings together OECD members and GPAI countries to advance an ambitious agenda for implementing human-centric, safe, secure and trustworthy artificial intelligence (AI) embodied in the principles of the OECD Recommendation on AI.
International Center of Expertise In Montreal on Artificial Intelligence (CEIMIA)
CEIMIA is positioned as a key player in the responsible development of artificial intelligence, based on the principles of ethics, human rights, inclusion, diversity, innovation and economic growth. CEIMIA delivers high-impact projects in responsible AI through influential scientific diplomacy on an international scale.
Objectives
The SRAIS project was established with the core objective of producing tangible outcomes towards scaling Responsible AI (RAI).
Aligned with GPAI’s goals of strategic guidance on AI governance, development, use, and global cooperation, SRAIS focused on:
- Deploying and scaling Responsible AI solutions.
- Showcasing concrete results from these deployments.
- Fostering cross-functional collaboration among participating teams and experts.
- Defining and adopting robust performance metrics for RAI.
- Sharing insights about the challenges and opportunities for RAI with the internal RAI community.
- Formulating general recommendations for AI teams and policymakers to foster the scaling of RAI solutions.
Highlights and Takeaways
Five teams participated in a mentoring process focused on Responsible AI (RAI) principles, selected for their public good potential, ability to institutionalize RAI, and existing scaling and RAI challenges. These teams represented diverse country contexts and sectors. The mentoring involved initial workshops to share challenges and match teams and mentors, followed by three one-on-one mentorship sessions per team. During these individual sessions, the focus was on pinpointing a crucial responsibility challenge, then crafting a tailored ‘RAI deep dive summary,’ and finally, outlining an action plan to implement a mitigation strategy for that specific challenge. The goal was to ensure these deep dive summaries were broadly relevant for other AI actors seeking to meet responsibility standards during scaling. The plans were then evaluated by an expert committee, which provided further feedback and advice to strengthen them.
The 2023 SRAIS project highlighted common challenges in integrating and validating RAI principles and scaling responsibly. These included resource maintenance, establishing robust data governance, stakeholder consultation, building trust, safe testing, ensuring appropriateness and safety across contexts, human-centredness, user education, and sustaining RAI adherence. Based on mentor engagement, recommendations emerged for policymakers and AI teams. Among these recommendations were facilitating safe testing, equitable access to AI infrastructure (secure datasets, computational and connectivity infrastructure), considering RAI throughout the AI lifecycle, and incentivizing safety over speed. Following the project’s success, a report was prepared to summarize the mentorship experience and its learnings, specifically highlighting the challenges and opportunities in scaling responsible AI. The SRAIS project then planned to expand mentorship to more teams and countries in subsequent years, aiming to systematically analyze experiences to potentially create a detailed “blueprint” for scaling responsible AI solutions.
That year, one team was highlighted as an RAI Changemaker for its successful participation and team effort to deploy a responsible AI solution, with evidence of readiness to scale responsibly. The ErgoCub team developed thorough trustworthy AI guidelines for wearables in healthcare and industry, along with a solid and detailed plan for implementing them in their operations.
“The SRAIS project gave us the unique opportunity to deepen and advance the state-of-the-art in the standardization methods for deploying Responsible AI-powered smart clothes, namely wearables, applied to future workers in industry and healthcare application domains. We have been able to define the steps towards a framework that manages all risks associated with the concept, design, and deployment of AI-powered wearables that predict health indicators of future workers, without impacting their privacy and while guaranteeing reliable outcomes. Now the problem is to understand how the framework may lead to clear regulations impacting Italy and Europe, while ensuring a level of compatibility with the international standards.”
Daniele Pucci, Researcher, Head of Artificial and Mechanical Intelligence, Center for Robotics and Intelligent Systems, Istituto Italiano di Tecnologia
SRAIS Mentee – RAI Changemaker – SRAIS 2023 cohort
Participating Teams
Wysdom - Smart AI Analytics Tools, Canada
Description
Acquired by Calabrio in 2024, Wysdom is a leader in AI and virtual agent performance solutions.
Summary of RAI progress
Developed responsible AI pillars for conversational systems analytics.
COMPREHENSIV - At Home Universal Primary Health Care, India
Description
COMPREHENSIV is a smartphone application designed to be used by trained field personnel to screen for and manage a large range of early-stage disease conditions, in real time, based on images of the conditions and multidimensional context.
Summary of RAI progress
Identified and documented the challenges and strategies for responsible data collection and management for future AI healthcare applications.
ErgoCub - Artificial Intelligence in Wearables and Robotics for Assessment, Italy
Description
ErgoCub develops embodied AI technologies, including wearables and humanoid robots, to prevent musculoskeletal disorders in workers and for use in healthcare. They highlight musculoskeletal diseases as the most common occupational illness worldwide.
Summary of RAI progress
Developed trustworthy AI guidelines for wearables in healthcare and industry.
Jalisco's AI Forest Mapping System, Mexico
Description
Summary of RAI progress
Participation and Feedback Platform, Germany
Description
Particip.ai One is a telephone voicebot-based participation and feedback platform, intended for use by citizens, employees and consumers to voice their feedback and participate in decision-making processes.
Summary of RAI progress
Developed guidelines for the responsible use of AI voice and chatbot technology for people-centred participation and feedback.
