Scaling Responsible AI Solutions (SRAIS) | 2024 Global Cohort

Description and Context

The 2024 SRAIS cycle marked the second year of a growing global initiative led by GPAI and CEIMIA. This year, the project expanded its scope and consolidated its methodology, supporting 21 AI teams worldwide in addressing the challenges that arise when responsible AI principles meet the realities of scaling: data governance, cultural integration, regulatory compliance, interoperability, and the broader social, economic, and environmental impacts of AI systems.

Through a structured mentorship program, SRAIS 2024 paired innovators with expert mentors to identify priority obstacles and develop actionable strategies, while also producing insights and recommendations that extend beyond individual teams. By combining tailored guidance with knowledge-sharing, this year’s cycle advanced the creation of an international community of practice dedicated to scaling AI responsibly, strengthening both the credibility and competitiveness of responsible approaches in global markets.

Please note that all names of individuals, organizations, and their affiliations mentioned on this website reflect their status at the time of publication of the report covering this SRAIS mentorship track. We acknowledge that names and affiliations may evolve over time.

Partners

Global Partnership on AI (GPAI)

The Global Partnership on Artificial Intelligence (GPAI) is an integrated partnership that brings together OECD members and GPAI countries to advance an ambitious agenda for implementing human-centric, safe, secure and trustworthy artificial intelligence (AI) embodied in the principles of the OECD Recommendation on AI.

International Center of Expertise In Montreal on Artificial Intelligence (CEIMIA)

CEIMIA is positioned as a key player in the responsible development of artificial intelligence, based on the principles of ethics, human rights, inclusion, diversity, innovation and economic growth. CEIMIA delivers high-impact projects in responsible AI through influential scientific diplomacy on an international scale.

Objectives

SRAIS aims to support AI innovators worldwide in scaling their solutions responsibly, providing hands-on mentorship that bridges the gap between responsible AI principles and real-world deployment.

This strategic vision extends beyond individual projects, with the aim of consolidating a global community of practice dedicated to responsible and scalable AI. 

Through this collaborative effort, our goal is to create an enriching exchange of knowledge and best practices, strengthening both the credibility and competitiveness of responsible AI approaches on the global stage.

Highlights and Takeaways

In 2024, the second edition of the GPAI SRAIS project, sponsored by CEIMIA and the Global Partnership on AI, attracted participants from around the world to its mentorship program. Over the course of the year, the project grew in scope and impact and took strides towards consolidating a global network of collaboration and knowledge-sharing, including the creation of an independent and complementary SRAIS track in Africa. Teams and experts from Poland emerged as a particularly invested community of AI practitioners, demonstrating a strong dedication to responsible AI.

The Orange Innovation Poland team earned “Responsible AI Changemaker” recognition for their innovative work on two projects: a “traceability solution” that balances privacy with continuous improvement in product personalization, and a document outlining the uses, objectives, and “rules of engagement” for a satisfaction recognition system that enhances virtual agent interactions.

The broader discussions between experts and teams generated further insights useful for the advancement of the practice of responsible AI.

Scaling responsibly is complex

It is not enough to embed RAI principles at the design stage: responsibility must be maintained across the full lifecycle, from conceptualization to global deployment.

Long-term lens needed

AI systems must be assessed not only for immediate risks (bias, privacy, safety) but also for their broader social, economic, and developmental impacts, including effects on human rights, livelihoods, and sustainability.

Responsibility cannot be automated

Technical fixes are essential, but true responsibility requires ongoing dialogue, collaboration, and shared governance across stakeholders.

Trade-offs are interconnected

Efforts to improve fairness, privacy, or robustness in one area may create new challenges in data governance, labour practices, or environmental impact.

Need for stronger RAI literacy

Developers, regulators, and users often lack nuanced understanding of responsible AI principles. Mentorship and knowledge-sharing, as demonstrated by SRAIS, are critical to building capacity and embedding responsibility in scaling practices.

Participating Teams

ASLAC (Automatic Sign Language Avatar Creation) – Migam.ai (Poland)

Description

Cloud-based sign-language translation service using AI/avatars to expand accessibility for deaf and hard-of-hearing users globally.

Summary of RAI progress

Developed a framework for responsible acquisition and management of video-based training data, ensuring privacy of data subjects and compliance with intellectual property. Designed a “Data Clean Room” to depersonalize sensitive training data.

Jalisco Diabetic Retinopathy Detection & Referral (Mexico)

Description

AI-driven predictive tool to detect diabetic retinopathy and support referrals in clinical settings, developed with Jalisco State Government.

Summary of RAI progress

Worked on aligning outputs with local clinical needs, creating secure long-term strategies for data sharing/storage, and ensuring integration into medical workflows. Emphasized transparency, explainability, and environmental impact.

Personality AI – Orange Innovation Poland

Description

Hyper-personalization system based on the OCEAN framework, running locally on users’ devices without transmitting personal data externally.

Summary of RAI progress

Created a “traceability solution” and a “Privacy Guardian” dashboard to keep personalization relevant while protecting privacy and ensuring informed consent.

PredictGAD – ICGEB (India)

Description

AI model supporting glaucoma diagnostics by analyzing ASOCT scans to detect angle dysgenesis.

Summary of RAI progress

Expanded dataset diversity to reduce demographic bias, ensured explainability, and explored frameworks for patient data privacy and follow-up mechanisms.

Employee Performance Management, Learning & Development (Poland)

Description

AI platform for employee development, training, and performance management.

Summary of RAI progress

Focused on transparency and clarity so users understand how tasks, recommendations, and rewards are generated—whether by managers or the system.

Knowledge Chat – Websensa (Poland)

Description

LLM-based assistant enabling organizations to query internal knowledge bases.

Summary of RAI progress

Developed comprehensive, audience-specific guidelines to mitigate hallucinations and ensure transparency, data security, and explainability.

Satisfaction Recognition in AI Conversations (Poland)

Description

AI system for measuring and recognizing user satisfaction in interactions.

Summary of RAI progress

Defined objectives, safe-use guidelines, and responsibility frameworks to prevent manipulation, inaccuracies, or disruptions in user experience.

Responsible AI Co-worker – AgentAnalytics.AI (India)

Description

Multi-agent LLM orchestration platform providing “AI co-workers” for SMEs, monitored by RAI oversight agents.

Summary of RAI progress

Focused on fairness in multi-agent orchestration and preventing privacy leaks, while defining how responsibility scales with flexible use cases.

Project Team

Amir Banifatemi

2024 SRAIS Project Co-Lead

Professional Affiliation: Cognizant (Chief Responsible AI Officer), USA

Francesca Rossi

2024 SRAIS Project Co-Lead

Professional Affiliation: IBM Research, USA

Arnaud Quenneville-Langis

Professional Affiliation: Project Manager, CEIMIA, Canada

Laëtitia Vu

Professional Affiliation: Project Coordinator, CEIMIA, Canada

Stephanie King

Professional Affiliation: Director of AI initiatives, CEIMIA, Canada

Kelle Howson

Specialized Writer

Professional Affiliation: African Observatory on Responsible AI (AORAI), South Africa

Cohort Mentors

Borys Stokalski

Mentor

Professional Affiliation: RETHINK, Poland

Anurag Agrawal

Mentor

Professional Affiliation: Ashoka University, India

Stéphanie Camaréna

Mentor

Professional Affiliation: Source Transitions, Australia

Gilles Fayad

Mentor

Professional Affiliation: IEEE Standards Association, Canada

Monica Lopez

Mentor

Professional Affiliation: Cognitive Insights for Artificial Intelligence (CIfAI), USA

Zümrüt Muftuoglu

Mentor

Professional Affiliation: Digital Transformation Office of the Presidency of the Republic of Türkiye / Yildiz Technical University, Türkiye

Furukawa Naohiro

Mentor

Professional Affiliation: ABEJA, Inc., Japan

Sandro Radovanović

Mentor

Professional Affiliation: University of Belgrade, Serbia

Nava Shaked

Mentor

Professional Affiliation: Holon Institute of Technology (HIT), Israel

Amitabh Nag

Mentor

Professional Affiliation: BHASHINI, India

Caroline Gans Combe

Mentor

Professional Affiliation: INSEEC / OMNES Education, France

Ricardo Baeza-Yates

Advisor

Professional Affiliation: Institute for Experiential AI of Northeastern University, USA

Ulises Cortés

Advisor

Professional Affiliation: Universitat Politècnica de Catalunya, Spain

Kudakwashe Dandajena

Advisor

Professional Affiliation: African Institute for Mathematical Sciences (AIMS), Rwanda

Andrea Jacobs

Advisor

Professional Affiliation: Code Caribbean, Antigua
