AI

Helping changemakers navigate the world of AI

A Changemaker’s Compass for Navigating the World of AI

This hub is a practical starting point that guides you on a four-part journey: from understanding the core Dilemmas (the ethical, environmental, and privacy risks of AI adoption) and exploring Case Studies (real-world examples from our community), to building Practical AI Skills and, finally, adopting an open-source Framework (our manifesto and policy).

The Dilemmas

This first set of resources frames the hard questions we must ask. It explores the critical dilemmas we must navigate before building or deploying a solution: from algorithmic bias and environmental costs to data privacy and the AI divide.

Navigating Algorithmic Bias & Justice


If we’re not actively fighting bias in our AI, we are complicit in building systems of oppression.


The pressure to use AI to scale our work is immense. But many of us are hesitant, and rightfully so. We have a massive responsibility to ensure any AI we use, build, or advocate for does not ignore or discriminate against the very communities we serve. If we are to engage with this technology, we must build for fairness and justice, ensuring these tools reflect our missions.

The Challenge

The core challenge is that neutral data doesn’t exist. Data comes from our unequal world. If we feed an AI biased hiring data from the past, it will become a biased hiring tool for the future. To truly serve everyone, any AI solution must evolve to include a justice lens from day one, actively checking and correcting for these hidden biases.

Key Resources for Algorithmic Justice

Ready to build fairness into any AI you use? Here are some trusted resources to get you started.

1. Foundational Knowledge

  • The Algorithmic Justice League (AJL): Founded by Joy Buolamwini, this is the place to start. Their groundbreaking research uncovers real-world bias in facial recognition and other systems, showing exactly what’s at stake.
  • Weapons of Math Destruction (Book): In this super-accessible book, Cathy O’Neil explains how algorithms can scale up inequality. It’s a must-read for understanding the problem.
  • Race After Technology (Book): A deeper, foundational dive by Ruha Benjamin into how new technologies can reinforce old injustices.
  • The AI Now Institute: Want to understand the big picture? AI Now publishes fantastic, easy-to-read reports on the social and economic impact of AI.

2. Frameworks & Global Declarations

3. Technical Toolkits

  • AI Fairness 360 (IBM): An open-source toolkit with code to help tech teams detect and reduce bias in their machine learning models.
  • Google’s What-If Tool: A great visual tool that lets you poke your AI model to see how it behaves with different kinds of data, helping you spot potential bias.
  • AI Explainability 360 (IBM): A related toolkit that helps you understand why your model is making certain decisions.
  • TensorFlow Responsible AI Toolkit: A library from Google’s TensorFlow team to help developers build fairness and privacy into their models from the start.
  • Microsoft Responsible AI Toolbox: A suite of tools from Microsoft to help you build AI systems that are more fair, interpretable, and secure.
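To make the idea concrete, here is a minimal sketch of the kind of check these toolkits automate: measuring the "demographic parity difference" on a toy set of hiring decisions. This is not the API of any of the toolkits above; the data and function names are invented for illustration.

```python
# A minimal fairness check: does one group get selected at a much
# higher rate than another? (Toy data, invented for illustration.)

def selection_rate(decisions, group, target_group):
    """Fraction of applicants in `target_group` with a positive decision."""
    outcomes = [d for d, g in zip(decisions, group) if g == target_group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(decisions, group, group_a, group_b):
    """Gap in selection rates between two groups; 0.0 means parity."""
    return selection_rate(decisions, group, group_a) - selection_rate(decisions, group, group_b)

# 1 = hired, 0 = rejected, paired with each applicant's group label
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, group, "A", "B")
print(f"Selection-rate gap between groups: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A gap this large would be a red flag worth investigating; the toolkits above compute dozens of such metrics and suggest mitigations.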

4. Deeper Dives & Learning

  • AI Incident Database: A database of real-world AI failures. It’s an incredible resource for learning from others’ mistakes to avoid repeating them.
  • AI Ethics: Global Perspectives: A free, self-paced course that explores the ethical implications of AI from a global and interdisciplinary perspective.
  • Stanford Embedded Ethics: An amazing program that embeds ethics lessons directly into core computer science courses. They share many of their materials, which are great for changemakers with a tech background.
  • EU AI Act Explorer Tool: Especially for Europe-based initiatives, this unofficial tool helps you understand if your AI system might be considered high-risk under the new law.
  • The Oxford Handbook of Ethics of AI: For a very deep, academic dive, this handbook brings together leading scholars to discuss the full spectrum of AI ethics.

Navigating AI’s Environmental Costs


The negative environmental impact of AI is huge, and still unfolding.


The term ‘AI’ is a clunky catch-all, obscuring the vast differences in each tool’s environmental impact. While some applications are light, we are deeply critical of Generative AI, which carries an immense environmental toll. Training these large models requires massive amounts of energy and water, creating a direct tension for any changemaker focused on climate justice.

The Challenge

The challenge is the hidden cost of the emerging ‘AI Empire’. The largest Generative AI models are controlled by a few massive companies and built on a model of environmental extraction. These companies are not transparent about their energy and water consumption, making it nearly impossible to measure the true footprint. This lack of transparency is a core problem. We must challenge this model, ask “who truly benefits?”, and prioritize ‘Green AI’: tools that are efficient, transparent, and don’t undermine our planetary goals.

Key Resources for Sustainable AI

Here’s how you can start measuring, and reducing, the environmental impact of technology.

1. Understand the Footprint

  • “Green AI” (The Paper): Start by reading the paper that started this conversation. It argues we should prioritize AI that is efficient, not just powerful—a direct challenge to the “bigger is better” model.
  • ML CO2 Impact Calculator: A useful online tool that helps you estimate the carbon emissions from training a machine learning model. This makes the invisible cost visible.
  • Greenpeace’s Clicking Clean Report: This report ranks major tech companies on their renewable energy use. It’s essential for holding the “AI Empire” accountable for its energy sources.
  • Climate Change AI: A global non-profit initiative and community that catalyzes impactful work at the intersection of climate change and machine learning. A key community for resources and networking.

2. Build Lighter, Smarter Models

  • Hugging Face (Platform): This is a massive hub for AI models. Look for smaller, pre-trained models (like DistilBERT) that are good enough for your task, instead of training a large, power-intensive model from scratch.
  • The TinyML Foundation: Explore Tiny Machine Learning. This is a whole field and foundation dedicated to running AI models on tiny, low-power devices (like a small sensor). It’s a field focused on AI efficiency.
  • CodeCarbon: An open-source Python tool that helps you track the CO2 emissions from your code. It’s a very practical way to measure your footprint as you build.
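The arithmetic behind these calculators is simple: energy used (kWh) multiplied by the carbon intensity of the local grid. Here is a rough back-of-envelope sketch; every number and the function name are illustrative assumptions, not measurements from any real training run.

```python
# Back-of-envelope estimate of training emissions: the same basic
# arithmetic tools like the ML CO2 Impact Calculator and CodeCarbon
# perform (they measure the inputs instead of guessing them).
# All numbers below are illustrative assumptions.

def training_emissions_kg(gpu_power_watts, hours, num_gpus,
                          grid_intensity_kg_per_kwh, pue=1.5):
    """Estimate CO2 emissions (kg) for a training run.

    pue: data-center Power Usage Effectiveness, i.e. overhead
    for cooling and other infrastructure (1.5 is a rough default).
    """
    energy_kwh = (gpu_power_watts / 1000) * hours * num_gpus * pue
    return energy_kwh * grid_intensity_kg_per_kwh

# Example: 4 GPUs at 300 W for 48 hours on a grid emitting 0.4 kg CO2/kWh
kg = training_emissions_kg(300, 48, 4, 0.4)
print(f"Estimated emissions: {kg:.1f} kg CO2")
```

Even this crude estimate makes the trade-off visible: a smaller pre-trained model that skips the training run entirely avoids that footprint altogether.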

3. Join the Sustainable Tech Community

  • The ClimateAction.tech Community: This is a global community of thousands of tech professionals, designers, and changemakers who are all dedicated to building climate-friendly technology. A strong community for asking questions.
  • The Green Web Foundation Directory: Is your website hosted on a server powered by renewable energy? This tool lets you check any URL and find green hosting providers.
  • Google Cloud Carbon Footprint: If you use Google Cloud, this is their built-in tool to help you measure, report, and reduce your app’s carbon emissions. (Microsoft and AWS have similar tools!)
  • Stanford HAI – Sustainable AI: A research hub from Stanford’s Human-Centered AI institute, with papers and ideas on how to make AI more environmentally and socially sustainable.

Navigating AI & The Future of Work


How to use AI to augment our teams and creativity, not just replace them with efficiency.


Much of our work is about creating decent jobs and empowering communities. So it’s understandable to be worried about AI’s potential to automate those very jobs. We are also concerned about its potential to flatten the unique human creativity our work depends on. This creates a tension. We must ask: “What exactly are we optimising for?”. We have an opportunity to lead this transition, focusing on AI as a tool for upskilling, not just for cutting costs.

The Challenge

The challenge isn’t just the technology; it’s managing the human transition. AI is effective at repetitive tasks, which can create space for deeper, human-centered work (like mentoring, strategy, and community building). But this is not a given. Our solutions must evolve to include AI as a partner, helping us reskill our teams and refocusing our energy on the work only humans can do.

Key Resources for a People-First Future

The goal is to use AI as a tool that helps a team, not replaces it. Here are some resources to help you build an augmentation strategy.

1. Understand the New World of Work

  • World Economic Forum’s “Future of Jobs” Report: This report is a key source for data on what skills are becoming more valuable. (Spoiler: It’s critical thinking, creativity, and emotional intelligence—all human skills!)
  • OECD AI Principles: Take a look at their principles for “inclusive growth, sustainable development and well-being.” It’s a strong framework for thinking about the big economic picture.

2. Upskill Your Team (and Yourself!)

  • “AI for Everyone” by Andrew Ng (Coursera): This is a popular and accessible course for non-techies. It’s perfect for demystifying AI for your entire team, so everyone feels empowered, not scared.
  • Google’s AI Essentials: A free course from Google designed for everyone, not just techies, to understand and use AI tools in their day-to-day work.

3. Build to Augment, Not Automate

  • “Human-in-the-Loop” (HITL) (Concept): This is a key design principle. Build systems where the AI suggests and a human decides. This keeps your team’s wisdom and values in control.
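The HITL principle can be sketched in a few lines. This is a minimal illustration of the design pattern, not a real system; `draft_reply` and `human_in_the_loop` are hypothetical names standing in for any AI call and review step.

```python
# A minimal human-in-the-loop sketch: the model only *suggests*;
# nothing is acted on without an explicit human decision.

def draft_reply(message):
    # Placeholder for a real model call (e.g. a local LLM).
    return f"Thank you for reaching out about: {message}"

def human_in_the_loop(message, approve):
    """`approve` is a callable representing the human reviewer."""
    suggestion = draft_reply(message)
    if approve(suggestion):
        return suggestion  # human accepted the AI draft
    return None            # human rejected it; nothing is sent

# Simulated reviewers: one accepts, one rejects
sent = human_in_the_loop("volunteer onboarding", approve=lambda s: True)
blocked = human_in_the_loop("sensitive case notes", approve=lambda s: False)
print(sent)     # the approved draft
print(blocked)  # None: the human kept control
```

The design choice is the point: the AI has no path to act on its own, so your team's judgment stays in the loop by construction, not by policy alone.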

Navigating the AI Divide


We must challenge the emerging ‘AI Empire’ and ask who truly benefits from a technology trained on an unequal past.


Impact looks different in every community. So, why would AI be any different? We are deeply critical of an AI revolution that is not globally inclusive. Most large AI models reflect the languages, data, and cultural norms of the places where they were built. This is not a neutral act. We have a responsibility to challenge this emerging ‘AI Empire’ and ensure these tools serve all communities, not just the ones where the tech was born.

The Challenge

The biggest challenge is the AI Divide, or Digital Colonization. Most of the world’s largest AI models are built by Western companies, trained on English-language data, and reflect Western cultural norms. This means they often fail to understand other languages, can’t process local contexts, and sometimes offer solutions that are completely wrong for a different culture. We can’t just copy and paste a Silicon Valley solution and expect it to work in Cairo, Nairobi, or Bogotá.

Key Hubs & Inclusive Communities

The solution isn’t to bring AI to the Global Majority—it’s to connect with and learn from the AI communities already there. Here are some of the key hubs, networks, and resources leading this charge.

1. Africa-Led AI Hubs & Research

The AI ecosystem across Africa is one of the most vibrant and fastest-growing in the world, serving as a powerful example of community-led innovation. Here are some of the key players to follow:

  • Data Science Africa (DSA): A non-profit that builds local capacity by running workshops and mentorship programs for data scientists across the continent.
  • Data Science Nigeria: This organization is building a world-class AI ecosystem in Nigeria through education, research, and community-building.
  • AI4D (Artificial Intelligence for Development): A major program that funds and supports AI research and policy across Africa to help achieve the UN’s Sustainable Development Goals (SDGs).
  • Deep Learning Indaba: An important annual gathering for the African AI community. Their mission is to “Strengthen African AI.”
  • AI and Human Rights in Africa Course (UCT): A course from the University of Cape Town that specifically addresses the unique human rights challenges and opportunities of AI in the African context.
  • Masakhane: A grassroots, community-led research movement to build and support Natural Language Processing (NLP) for African languages. Their work is essential for building an AI that understands everyone.

2. Global Majority Research & Policy

3. Inclusive Communities & Networks

These groups are building the inclusive, diverse, and representative AI community we all need.

  • Black in AI: A global community and non-profit working to increase the presence, inclusion, and visibility of Black people in the field of AI.
  • Women in Machine Learning (WiML): A global organization that supports and promotes women and gender minorities working in machine learning.
  • DataKind: A global non-profit that connects data scientists with social impact organizations for pro-bono projects, bringing skills to the frontline.
  • Indigenous AI: A hub exploring AI from an Indigenous perspective, focusing on data sovereignty, cultural protocols, and building AI that respects and incorporates Indigenous knowledge.

4. Youth & Skill-Building

  • Intel AI for Youth Program: A global program that partners with governments and communities to provide AI skills to young people, helping build the next generation of “AI-ready” leaders.
  • AI4ALL Ignite: An organization dedicated to opening doors to AI for underrepresented youth, running free summer programs and mentorship for high school students.
  • AI for Good – Asia: A foundation that supports and scales AI-driven social impact ventures across Asia, with a strong focus on building a robust regional ecosystem.

Navigating AI & Data Privacy


How do we use AI to serve our communities without putting their data, and their trust, at risk?


In the social impact space, our data is one of our most sensitive assets. It represents the real lives of our community members, our donors, and our partners. But the dominant AI models are built on data. Many ‘free’ Generative AI tools work by training on the prompts we give them. Pasting a sensitive community story, a donor list, or internal notes into one of these tools is not a neutral act. It can be a profound breach of trust, sending that private data to a corporate server, where we lose all control.

The Challenge

The challenge is two-fold. First is Data Leakage: We are using black-box tools without any transparency, risking our community’s privacy for the sake of efficiency. Second is Data Minimization: Our values demand we collect less data and protect it. AI models demand more data to be effective. This is a direct conflict. We must evolve from being data collectors to being data stewards, ensuring any AI we use serves our mission, not the data-hungry models of Big Tech.

Key Resources for Data Stewardship & Privacy

Here are some key resources to help you navigate AI with a privacy-first mindset.

1. Understand the Principles & Laws

2. Explore Privacy-Preserving Tools & Techniques

  • Open Data Institute: A Guide to Anonymisation: You can’t leak data you don’t have. Anonymization is a key technique. This guide from the ODI explains how to do it.
  • LM Studio (Tool): One of the best ways to prevent data leaks is to never let your data leave your computer. This tool lets you download and run powerful LLMs (like Llama) 100% locally on your own machine.
  • PrivateAI (Tool): A tool designed to find, remove, and replace Personally Identifiable Information (PII) from your data before you send it to an AI model, so you can use the model without leaking private info.
  • Hugging Face Gated Models: For more technical teams, Hugging Face (a key AI hub) allows creators to “gate” models, requiring users to request access. This is a step towards controlling who uses a model and on what data.
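To show the idea behind pre-prompt PII scrubbing, here is a rough stdlib sketch that strips emails and phone numbers before text ever leaves your machine. Real tools like PrivateAI detect far more PII types (names, addresses, IDs); these two regexes are deliberately simplistic illustrations, not production patterns.

```python
import re

# A rough sketch of PII scrubbing before text is sent to an AI model.
# Only two toy patterns; real redaction tools cover many more PII types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text):
    """Replace recognizable PII with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Contact Amina at amina@example.org or +49 170 1234567 about her case."
print(redact(note))
# → "Contact Amina at [EMAIL] or [PHONE] about her case."
```

Note that the name “Amina” still slips through: catching names reliably is exactly why dedicated tools (or keeping data fully local) matter.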

3. Learn About Data Governance & Sovereignty

  • Global Indigenous Data Alliance (GIDA): This group created the CARE Principles of Indigenous Data Governance (Collective Benefit, Authority to Control, Responsibility, and Ethics). It’s a foundational framework for data stewardship.
  • The Aapti Institute (India): A key research institution in the Global Majority focused on “data stewardship” and “data collaboratives”—new models of governance that empower communities to own and manage their own data.
  • The GovLab: Data Collaboratives: A research hub exploring new ways to share data for public good. This is essential for moving beyond “data extraction” to “data collaboration.”

Case Studies

We share inspiring stories from changemakers in our community from all around the world who are already navigating these tensions and building AI-powered solutions with intention.

XATOMS

Pioneering clean water solutions for all. Xatoms harnesses AI and quantum chemistry to design revolutionary materials that purify water sustainably and efficiently.

How Xatoms is Using AI and Quantum Chemistry to Design Water-Purifying Materials

Impact Area: Clean Water, Health, Climate Action | AI Tool Type: Generative AI, Quantum Chemistry Simulation, Data Analysis

“We want to be the leading water purification company in the world, offering affordable and efficient solutions and reaching some of the most vulnerable communities in the world.” — Diana Virgovicova, Co-Founder of Xatoms & CXC Member

Billions of people still lack access to safe drinking water. Traditional methods often rely on energy-intensive processes or harsh chemicals. Diana Virgovicova, a CXC member and co-founder of Xatoms, has been tackling this since she was 17, when she discovered a way to break down microplastics using photocatalysis. Now, Xatoms aims to scale this vision: harnessing the power of light, guided by AI, to design entirely new materials that clean water sustainably.

The challenge isn’t just filtering water; it’s finding materials that can efficiently break down persistent pollutants – like pesticides or industrial chemicals – using only sunlight or simple LED light. Discovering these novel materials through traditional trial-and-error chemistry can take decades and millions of dollars.

Xatoms is accelerating this discovery process exponentially. They use AI combined with quantum chemistry simulations to rapidly design and test millions of potential *photocatalytic nanomaterials* in silico. This AI-driven approach identifies promising candidates – materials that activate under light to neutralize contaminants – in weeks or months, not years. Having secured $3 million in funding and launched pilot projects, they’re developing materials aiming to purify water significantly faster than current methods.

Xatoms isn’t just using AI to optimize an existing system; they’re deploying it at the frontiers of materials science to invent fundamentally new, sustainable solutions for a global crisis. It’s a powerful example of using AI for deep R&D to tackle root causes, aiming for a future where clean water is accessible to all.

Learn More


BENTHOS

Empowering ocean conservation with local wisdom. Benthos.ai’s AI-powered WhatsApp chatbot brings vital ocean data and community knowledge together to protect our marine ecosystems.

How Benthos.ai Uses AI to Weave Local Knowledge into Ocean Conservation

Impact Area: Ocean Conservation, Indigenous Knowledge, Data Accessibility | AI Tool Type: Chatbot, Data Analysis, Retrieval-Augmented Generation (RAG)

“Our vision is to make nature conservation more effective by enabling better decisions with data that reflects on-the-ground reality.” — Francielly Monteiro, Founder of Benthos.ai

Coming from a polluted bay in Rio, Francielly Monteiro saw firsthand how local communities’ deep understanding of the ocean was often ignored in scientific discussions. Now based in Berlin, her tech non-profit, Benthos.ai, aims to change that by putting local knowledge and stewardship at the heart of marine conservation.

Benthos.ai tackles this with an AI-powered platform accessible via a WhatsApp chatbot – the first of its kind for ocean conservation. It delivers vital ocean data through both text and audio, making it usable regardless of income or literacy. Critically, instead of relying on generic LLMs, Benthos.ai uses a local-knowledge-first approach, combining Retrieval-Augmented Generation (RAG) with curated databases built in collaboration with communities. This allows them to, for example, work with artisanal fishers in Brazil to map mangroves using tech, science, and local expertise to fight real estate speculation and secure legal protection.

What’s truly mindful here is Benthos.ai’s deliberate choice to reject the standard AI approach. By prioritizing local knowledge and co-creating content with community representatives, they’re building AI that empowers, rather than marginalizes.

Learn More


ZAIN SAMDANI

Explore Zain Samdani’s groundbreaking work using AI to make health rehabilitation more accessible and bring visual art to life for the visually impaired.

Zain Samdani: Using AI to Bridge Gaps in Health and Art

Impact Area: Health, Accessibility, Disability Inclusion, Arts Access | AI Tool Type: AI Sensor Analysis, Computer Vision, AI-Generated 3D Models

“You have a 70% chance of going blind… That’s what the doctors told me in 2018.” — Zain Samdani, Founder of ExoHeal & Co-Initiator of ‘Please Touch This Art’

Zain Samdani isn’t just an entrepreneur; he’s a changemaker driven by deeply personal experiences. Witnessing his uncle’s struggle with paralysis and facing his own potential vision loss ignited a passion to use technology, particularly AI, to improve accessibility and quality of life for others.

The first challenge Zain tackled was the reality of hand paralysis rehabilitation: often time-consuming, psychologically strenuous, expensive, and lacking modern technology. This led him to found ExoHeal. His solution is Neuro-ExoHeal, a wearable exoskeletal robotic glove. Using AI algorithms to analyze sensor data, the glove provides personalized rehabilitation routines, aiming to help patients regain sensation and movement more effectively and affordably.

Facing his own potential blindness sparked another mission: making visual art accessible. Traditional tactile models in museums are rare and costly. Zain co-initiated ‘Please Touch This Art’, an innovative project that uses AI (likely computer vision) to interpret paintings and generate instructions for precise, tactile 3D models, often created via 3D printing. An AI voice assistant adds context, allowing visually impaired individuals to truly ‘be-greifen’ (grasp) art.

What’s mindful across both projects is the deep empathy driving the innovation. ExoHeal puts the patient’s lived experience at the center. ‘Please Touch This Art’ was co-designed every step of the way with blind and visually impaired associations, ensuring the solution was genuinely useful. Zain’s work shows how AI, when guided by personal understanding and community collaboration, can create truly meaningful tools for inclusion.

Learn More


VODA

VODA offers personalized, evidence-based digital therapy designed by and for the LGBTQIA+ community.

How Voda Uses AI to Build Affirming Mental Health Support for the LGBTQIA+ Community

Impact Area: Mental Health, LGBTQIA+ Inclusion | AI Tool Type: Data Analysis, Personalized Content Delivery

“We believe everyone deserves access to mental health support that is both grounded in science and in empathy.” — Jaron Soh, Co-Founder & CEO of Voda

Finding mental health support that truly gets you is tough, especially for the LGBTQIA+ community who often face stigma and navigate systems where they cannot safely disclose their identities to clinicians. Voda, co-founded by Jaron Soh, was created to change that: offering a digital mental health companion specifically designed for the needs of queer people.

Photo: Voda featured as Apple’s “App of the Day”. Voda founding team: Chris Sheridan MBACP (Accred) (they/them), Jaron Soh (he/him), Dr. Kris Jack (he/him).

Research shows that LGBTQIA+ individuals are more than twice as likely to experience mental health disorders than their cisgender, heterosexual peers (Psychiatry.org). Yet much of the available support remains one-size-fits-all. Voda tackles this gap head-on.

The Voda app provides discreet and culturally competent mental health support through structured programs, self-guided practices, and reflective journaling prompts designed around the diverse experiences of LGBTQIA+ individuals. The app combines evidence-based therapeutic modalities (such as CBT, ACT and IFS) with anti-oppressive psychoeducation to provide clinically-informed, affirming guidance.

AI works behind the scenes, analyzing journaling patterns and usage to tailor content recommendations and personalize the therapeutic journeys for each individual. Since launching, they’ve supported over 45,000 users and secured significant funding to expand their reach.

VODA was built by and for the LGBTQIA+ community. The app is designed with accredited therapists, clinical psychologists and LGBTQIA+ community experts, ensuring every resource is relevant, affirming, and genuinely speaks to the community’s lived experiences.

Learn More


CHILLI

Discover Chilli, the app connecting citizen activists to high-impact climate campaigns, right from their phones.

CHILLI (GLOBAL)

How Chilli Uses AI to Turn Climate Anxiety into Real-World Action

Impact Area: Climate Action | AI Tool Type: Generative AI, Data Analysis

We all know that feeling of climate doomscrolling. chilli is a social network for citizen activists designed to flip that script. It’s built to turn passive concern into real-world, strategic action by connecting thousands of people to the most effective climate campaigns, right from their phones.

This is where chilli’s AI assistant, ‘Pepper,’ comes in. It’s not just a simple chatbot. It uses machine learning to analyze data from thousands of past social movements to find the most impactful strategies. Then, it uses Generative AI to help anyone launch a multi-level campaign in seconds – drafting personalized emails, generating strategic social posts, and coordinating targeted calls. This has allowed their 85,000 app and website users to take hundreds of thousands of digital actions, contributing to massive wins like the EU Nature Restoration Law and getting 29 banks to drop their financial support of the controversial EACOP oil pipeline. They’re now even piloting ways for users to directly fund activists through the app, helping them focus on impact, not income.

They use energy-efficient generative models to make every email and comment unique and adapted to each person’s values and tone, simplifying climate advocacy actions and making them more impactful.

Learn More


FAIRCADO

Imagine effortlessly finding pre-owned treasures. Faircado’s app and browser extension make second-hand shopping the easiest choice, saving you money and reducing your carbon footprint.

How Faircado Uses AI to Make Second-Hand the Default Choice

Impact Area: Circular Economy, Climate Action | AI Tool Type: Image Recognition, Natural Language Processing (NLP)

“Our mission is for second-hand items to become the obvious first choice for consumers.” — Evolena de Wilde d’Estmael, Co-Founder & CEO of Faircado

It’s tough work being a conscious consumer. Finding that specific item you need second-hand often means hours trawling through dozens of different websites. Faircado, a Berlin-based startup, wants to flip that script and make buying used items as easy, or even easier, than buying new.

Faircado’s clever solution is available as a free browser extension and mobile app. Using AI image recognition and natural language processing (NLP), it automatically scans the product you’re viewing online. Within seconds, it searches over 150 million products across 60+ partner websites to find you pre-owned alternatives, often saving users money and significantly reducing CO2 emissions. After success in Germany and France, they’re now scaling to new countries across Europe.

They aren’t marketing themselves as an ‘AI startup’ or chasing the latest hype. For them, AI is simply the how behind their core mission: making second-hand the easiest choice. By using smart tech to instantly find used alternatives while you shop, they are actively designing for less consumption, not more.

Learn More


RECYCLER AI

Transform waste into a learning opportunity! Recycler AI’s smart sorting bins engage students with real-time feedback and education, fostering a new generation of environmental stewards.

How Recycler AI is Turning School Bins into Sustainability Classrooms

Impact Area: Environmental Education, Waste Management, Youth Engagement | AI Tool Type: Machine Learning, Computer Vision, Data Analysis

“Recycler AI empowers schools to cultivate a generation of environmentally conscious students equipped with the knowledge and skills to make a positive impact on the planet.” — Recycler AI Mission Statement

Recycling bins in schools often end up contaminated, and environmental education can feel disconnected from daily habits. Christopher Daccache, a software and environmental engineer from Beirut, saw an opportunity to tackle both problems at once. His venture, Recycler AI, aims to revolutionize waste management in schools by making it smart, engaging, and educational.

The challenge is twofold: standard recycling bins rely on users sorting correctly (which often doesn’t happen), leading to contaminated waste streams. At the same time, traditional environmental lessons might not stick or translate into real-world action for students.

Recycler AI’s solution is an AI-powered waste sorting bin designed specifically for schools. Using machine learning algorithms and sensors, the bin accurately identifies and sorts recyclable materials, drastically reducing contamination. But it doesn’t stop there. An interactive interface provides students with real-time feedback and engaging educational content about recycling and sustainability *as they use the bin*. The system also collects data on waste composition, helping schools optimize their overall waste management.

What’s mindful here is the holistic integration of technology and education. Recycler AI isn’t just a smarter bin; it’s a tool designed to foster long-term environmental stewardship. By meeting students where they are and making learning interactive and immediate, it aims to inspire a new generation of eco-conscious leaders, turning a daily chore into a moment of learning and impact.

Learn More


SIGNVERSE

Break down digital barriers! Signverse’s innovative AI avatars translate text and speech into African Sign Language in real-time, opening up a world of information for Deaf Africans.

How Signverse’s AI Avatars are Opening Digital Doors for Deaf Africans

Impact Area: Accessibility, Inclusion | AI Tool Type: Generative AI, Avatar Animation, Speech-to-Text

“We are bridging communication barriers for millions of Deaf individuals who have historically been excluded.” — Elly Savatia, Founder & CEO of Signverse

Imagine navigating a digital world where almost nothing speaks your language. That’s the reality for millions of Deaf Africans. Signverse, born in Kenya and Rwanda, has a bold vision: an Africa where every digital platform, classroom, and service is instantly accessible to the Deaf community.

Signverse’s answer is Terp 360, an AI-powered platform. It uses 3D avatars and motion capture to convert text and speech into accurate African Sign Language (AfSL) in real-time. Already deployed in classrooms, government portals, and websites across East Africa, it’s allowing Deaf users to access digital content independently, often for the first time. Backed by accelerator programs like Innovate Now, they’re now working to expand their AI vocabulary and support even more African sign languages.

What’s really mindful here is Signverse’s community-centered approach. Unlike many tech projects built for communities without their input, Signverse is recognized for prioritizing the actual needs of Deaf Africans. They’ve also focused on inclusion within the tool itself, creating representative avatars.

Learn More

Building Practical AI Skills

Once we start experimenting with AI tools, the next step is learning to use them mindfully and ethically. This collection is a starting point for the ‘how’: from building an AI strategy, to prompting, to finding the right tools.
Our Compass

A Changemaker’s Guide to AI Strategy & Governance

Read more

A Changemaker’s Guide to AI Strategy & Governance

“The challenge: How to translate our values into a practical, daily guide for using AI.”


You see the dilemmas. You see the tools. The biggest risk now is random, ad-hoc adoption by your team, which puts your community’s data and your organization’s integrity at risk. Before you can build skills, you must build a framework. This isn’t just about rules; it’s about translating your values—your “why”—into a practical “how” for your entire team.

The Challenge

The challenge is that most “AI Policies” are written by and for massive corporations. They are dense, legalistic, and don’t address the unique ethical tensions of the social impact sector. It’s hard to know where to start. The solution is to create a “living document”—a simple, mindful policy rooted in your values, not just legal compliance.

Key Resources for Building Your AI Policy

Here’s how you can start building a simple governance framework for your team.

1. See Real-World Examples (Start Here)

You don’t need to start from a blank page. Learn from others who have already built these frameworks.

  • The CXC Mindful AI Policy: This is our own internal “how”, shared as a practical example. It’s built on a Manifesto (the “why”) and this Policy (the “how”). We invite you to read it, adapt it, and use whatever serves you. Key concepts inside:
    • A “Traffic Light Check” (a simple Red/Yellow/Green system) for new tools.
    • An “AI Stewardship Circle” (a dedicated team for guidance).
    • A “Dynamic AI Inventory” (a living list) to track vetted tools.
  • NetHope’s AI Ethics for Nonprofits Toolkit: A comprehensive toolkit with facilitator guides and case studies specifically designed to help non-profits build their own ethical frameworks.
  • The Markup: How to Write an AI Policy: A practical, journalistic guide on what questions to ask before you write your policy.
  • The Aapti Institute: AI Framework for Non-Profits: A framework focused on the Global Majority, helping organizations assess AI’s role in their context.

2. Vet Your Tools (The Inventory Process)

A core part of governance is vetting tools. Your policy should name a team or process (what we call our “AI Stewardship Circle”) responsible for this. They will need good information to build a list of approved tools (what we call our “AI Inventory”).

3. Establish Your “Rules of the Road”

Your policy needs simple, daily rules. Here are some of the most important ones to adapt for your own framework:

  • Data Scrubbing: A non-negotiable rule to anonymize any sensitive data (e.g., replacing names with “[Community Member]”) before pasting it into a public AI.
  • Set Clear Boundaries: Your policy must have non-negotiable “red lights”. For example, a core rule is “Human-in-the-Loop”: AI can assist a high-stakes decision (like selections), but can never make the final, unreviewed decision about a person.
  • Proportionality Check: A rule to avoid high-energy uses (like video generation) for low-value tasks, respecting AI’s environmental cost.
  • Transparency: A rule to disclose when AI has been substantively used to generate content for external partners.
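To make the data-scrubbing rule concrete, here is a minimal sketch of a scrubbing helper. The function name, the sample names, and the `[email]` placeholder are our own illustrations, not part of any specific policy or tool:

```python
import re

def scrub(text: str, known_names: list[str]) -> str:
    """Replace known names and email addresses with neutral placeholders,
    so sensitive details never reach a public AI tool."""
    for name in known_names:
        # Swap each known community member's name, case-insensitively.
        text = re.sub(re.escape(name), "[Community Member]", text, flags=re.IGNORECASE)
    # Mask anything that looks like an email address.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[email]", text)
    return text

raw = "Amina Yusuf (amina@example.org) asked about the grant."
print(scrub(raw, ["Amina Yusuf"]))
```

A real workflow would keep the name list alongside your community records and run every draft through a check like this before pasting it into a chatbot.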
Our Compass

A Changemaker’s Guide to Effective Prompting

Read more

A Changemaker’s Guide to Effective Prompting

“The difference between a useful AI response and generic ‘AI slop’ isn’t just the tool, it’s the prompt.”


Many of us have experimented with AI and received generic, unhelpful, or hollow-feeling results—what our policy calls “AI slop”. The difference between this and a useful response often lies in the prompt. Learning to ask clear, contextual questions is a core skill for using these tools responsibly and avoiding the “doom-prompting” spirals that waste time.

The Challenge

The challenge is that our first prompts are usually too simple (“write a social media post”), so we get generic, wrong, or hollow answers. This can lead to a “doom-prompting” spiral, where we waste time trying to fix a bad output. This isn’t just a waste of our time; it’s also a waste of resources, as each new prompt uses more of the data center’s energy. We need to evolve from simple commands to giving clear, contextual instructions—just as we would with a new team member.

Key Resources for Effective Prompting

Here are some resources for levelling up your prompting skills.

1. Use a Simple Context Framework

A good prompt gives context. Here is a simple framework to follow:

  • Give it a Persona: Always start by giving the AI a role. “Act as a world-class grant writer for a non-profit focused on climate justice in Latin America.”
  • Give it a Task: Be specific. “Draft a 300-word funding proposal abstract.”
  • Give it Context: Paste in your mission statement, project goals, and target funder. “The proposal is for the ‘Future of Earth’ fund, which cares about community-led solutions.”
  • Give it an Example: If you have a past abstract that worked, paste it in and say, “Use a similar professional and urgent tone.”
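The four-part framework above can be sketched as a tiny template helper. The function name and section labels are our own illustration of the pattern, not a feature of any particular AI tool:

```python
def build_prompt(persona: str, task: str, context: str, example: str = "") -> str:
    """Assemble a persona / task / context / example prompt,
    following the context framework described above."""
    parts = [
        f"Act as {persona}.",
        f"Your task: {task}",
        f"Context: {context}",
    ]
    if example:
        # Only include the tone example when you actually have one.
        parts.append(f"Here is a past piece to match in tone:\n{example}")
    return "\n\n".join(parts)

prompt = build_prompt(
    persona="a world-class grant writer for a climate-justice non-profit",
    task="Draft a 300-word funding proposal abstract.",
    context="The proposal is for the 'Future of Earth' fund, which cares about community-led solutions.",
)
print(prompt)
```

Even if you never script your prompts, writing them in these four labelled sections is a reliable way to avoid the generic answers that too-simple prompts produce.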

2. Use Critical Prompting Techniques

LLMs are often agreeable and will tell you what you want to hear. To get more critical, useful feedback, you must prompt for it. Our Mindful AI Policy includes this “Anti-Sycophant” prompt chain:

  • Prompt 1: “Critically examine the core assumptions, unstated premises, and potential cognitive biases in my request above.”
  • Prompt 2 (After it answers): “Now, based only on logic and established facts, analyse my idea. Focus also on its limitations, counter-arguments, and potential downsides. What is being overlooked?”

3. Explore Prompting Guides & Inspiration

  • Google’s “Prompting Guide 101”: A free guide from Google that covers the basics of how to write effective prompts.
  • PromptBase (Marketplace): This is a marketplace where people buy and sell prompts. You don’t need to buy anything, but it’s a useful place for inspiration and to see how experts structure their instructions.
  • Prompting Webinars: Keep an eye on hubs like TechSoup and Wild Apricot’s Blog. They often host free webinars with prompt-specific formulas for fundraising and marketing.

4. Create a Team Prompt Library

  • Start a Simple Google Doc: When you write a prompt that gives you a perfect result, save it in a team Google Doc.
  • Make it a Team Resource: Create categories like “Social Media Prompts,” “Fundraising Prompts,” “Event Planning Prompts.” This turns one person’s skill into a team asset.

5. Foundational AI Learning

Want to understand the tech behind the prompt? These courses are perfect for building your foundational knowledge.

Yellow Light Check In

A Changemaker’s Guide to Finding AI Tools

Read more

A Changemaker’s Guide to Finding AI Tools

“The right AI tool isn’t the most powerful. It’s the one that solves the right problem.”


The AI world is noisy. There is a new “game-changing” tool every day. As changemakers with limited time and budgets, we can’t (and shouldn’t) try them all. Our goal is to be effective, which means we need a smart strategy for finding the right tools that will save us time, not waste it.

The Challenge

The biggest risk is “shiny object syndrome”. We hear about a new tool, sign up, waste three days trying to make it work, and then drop it. This drains our energy and our budget. We need to evolve from being AI-curious to being AI-strategic, starting with our problem, not the tool.

Key Resources for Finding the Right Tools

Here’s how to find the right tools for your mission without getting overwhelmed.

1. Start with the Problem First Framework

  • Step 1: Define the Pain. Write down a specific, repetitive task that drains your team’s time. (e.g., “It takes 10 hours a week to transcribe and summarize our community feedback calls.”)
  • Step 2: Define the Gain. What is the perfect outcome? (e.g., “Get a 95% accurate transcript and a 1-page summary of key themes within 30 minutes of each call.”)
  • Step 3: Search for That Solution. Now, you’re not just searching for “AI.” You’re searching for an “AI meeting transcription and summarization tool.” This is much easier to find and evaluate.

2. Use Vetted Hubs & Partners

  • TechSoup: This is your #1 stop. TechSoup vets technology and provides massive discounts (and sometimes free licenses) for non-profits. Always check here first.
  • Tech to the Rescue: This is one of our key partners. As a non-profit, you can get matched with a tech company that will build a custom AI solution for you pro bono, tailored to your specific workflow.
  • AI for Good Foundation: This global organization, linked to the UN, maintains a hub of AI for Good projects and tools. It’s a great place to see what’s being built specifically for the social impact space.
  • PoliSync & Globethics: These hubs also curate lists of tools and resources that are relevant for non-profits and mission-driven organizations.

3. Explore Big AI Directories (When you have a specific need)

  • There’s an AI for That: A massive, searchable database of AI tools. It’s overwhelming, but useful for discovering what’s possible once you’ve defined your problem.
  • Futurepedia: Similar to the above, this is another huge directory of AI tools, updated daily. Use their filters to narrow down your search.
  • Hugging Face: This is more for tech-savvy teams. It’s the world’s biggest hub for open-source AI models and datasets.
Google.org

A Changemaker’s Guide to Responsible Automation

Read more

A Changemaker’s Guide to Responsible Automation

“The goal: Using automation mindfully for repetitive admin, to free up time for high-value human connection.”


This is the next practical step in AI readiness. It’s not just using one tool; it’s getting your different tools to talk to each other. This is how you can automate repetitive, time-draining tasks (like data entry and report-building). Used mindfully, this can free up your energy for the deep, creative, human-centered work that only you can do.

The Challenge

Our tools often live in silos. Our donor list is in one spreadsheet, our email list is in Mailchimp, and our event signups are in a Typeform. We spend hours just copying and pasting data from one box to another. This is low-value work that burns out our teams. We can address this by building digital bridges that let the information flow automatically.

Key Resources for Responsible Automation

Here are the best tools and concepts for building your first automated workflows.

1. Use No-Code Digital Glue

  • Zapier / Make.com (Tools): These are the “digital glue” of the internet. They let you connect thousands of apps with no code. You build simple recipes: “When someone fills out my Typeform, automatically add their email to Mailchimp and create a new line in my Google Sheet”.
  • n8n.io (Tool): A powerful, open-source alternative to Zapier and Make. It gives you more control and can be cheaper (or even free) if your team has some technical skills.
  • TechSoup Discounts: Always check here first! Zapier, Make, and other tools often have big discounts for non-profits available through the TechSoup marketplace.
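Under the hood, a “recipe” like the Typeform-to-Mailchimp example is just a data translation. Here is a minimal sketch of that idea; the field names and record shapes are hypothetical illustrations, not the real Typeform or Mailchimp API payloads:

```python
def typeform_to_targets(submission: dict) -> tuple[dict, list]:
    """Translate one form submission (hypothetical field names) into
    a subscriber record and a spreadsheet row, like a no-code recipe does."""
    subscriber = {
        "email_address": submission["email"],
        "status": "subscribed",
        # Merge fields let the email tool personalise messages.
        "merge_fields": {"FNAME": submission["name"]},
    }
    sheet_row = [submission["name"], submission["email"], submission["signup_date"]]
    return subscriber, sheet_row

sub, row = typeform_to_targets(
    {"name": "Sam", "email": "sam@example.org", "signup_date": "2025-01-15"}
)
print(row)
```

Tools like Zapier, Make, and n8n do exactly this mapping for you through a visual editor, plus the authentication and delivery steps, which is why they are worth the subscription for non-technical teams.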

2. Automate Inside Your Existing Tools

  • Airtable Automations: If you use Airtable for project management, it has a powerful “Automations” tab. You can build triggers right inside your base. (e.g., “When a task ‘Status’ is changed to ‘Done,’ automatically send a Slack message to the #wins channel.”)
  • Custom GPTs (OpenAI Feature): This is a simple, powerful form of automation. You can create your own “Grant Writing Assistant” GPT and feed it your mission, vision, and past successful grants. Now, it’s an automated expert on your organization.

3. Understand AI Agents (The Next Step)

  • What are they? Start thinking about AI not just as a tool, but as an agent you can delegate multi-step tasks to. (e.g., “Act as a research assistant. 1. Scan these 50 websites. 2. Find all mentions of ‘youth climate justice grants’. 3. Put the grant name, deadline, and URL into a table.”)
  • Activepieces: An open-source, no-code tool that is built for creating more complex AI automations and agents that can run on their own.
  • Zapier’s Guide to AI Agents: A simple blog post from Zapier explaining what agents are and how you can start building simple versions of them right now.
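The research-assistant delegation above has a simple skeleton: loop over sources, check each one, and collect the hits into a table. Here is a minimal sketch of just that skeleton; the URLs and page texts are invented, and a real agent would also fetch the pages and use an LLM to pull out grant names and deadlines:

```python
def find_grant_mentions(pages: dict[str, str], keyword: str) -> list[dict]:
    """Scan already-fetched pages (url -> text) for a keyword and
    build a table of mentions, mimicking the delegated research task."""
    rows = []
    for url, text in pages.items():
        # Case-insensitive check; a real agent would do smarter extraction here.
        if keyword.lower() in text.lower():
            rows.append({"keyword": keyword, "url": url})
    return rows

pages = {
    "https://example.org/a": "New Youth Climate Justice Grants open in March.",
    "https://example.org/b": "Unrelated announcement.",
}
print(find_grant_mentions(pages, "youth climate justice grants"))
```

The point of agent platforms like Activepieces is that you describe this loop in plain language and the system handles the fetching, extraction, and scheduling for you.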

Our Mindful AI Manifesto & Policy

We share our own internal governance framework on how we mindfully approach AI, particularly Generative AI, as an open-source resource. We invite you to read it, adapt it, and use whatever serves your own journey.

Supported by

SAP
EY

Pro Bono Tech Support

Find out more about new opportunities for changemakers based in, or with impact in, other regions, as well as news around our masterclasses, case studies and free AI resources. Sign up to our waitlist now and stay tuned!