AI for All: Reducing Bias, Building Trust

The “reflecting canopy” in Marseille’s Vieux Port, also known as ‘L’Ombrière’, mirrors the diversity of users of this unique public space in the heart of Marseille.

AI is reshaping nearly every part of our lives – from how companies hire talent and doctors detect health risks to how cities manage energy and students learn, explore, and collaborate in new ways.

Yet there is growing concern about racial, disability and gender bias in AI and machine learning algorithms, and about their wider impact on society. In the race to harness AI’s power, many organisations face a common challenge: data quality. When AI systems are built on biased or incomplete data, they risk replicating – and even amplifying – the inequalities that already exist in society.

In an era of rapid technological change, building trustworthy, transparent and fair AI has never been more critical. Black-box systems that make decisions without explanations risk not only eroding public trust but also deepening social divides. To create technology that truly serves humanity, we must design systems that are not only intelligent but also ethical, co-evolving with the vision of a regenerative future.

Disaggregated data in urban planning matters because the act of counting people shows that they count.

The Importance of Disaggregated Data

A study from the Berkeley Haas Center for Equity, Gender and Leadership, which examined 133 AI systems across various industries, found that 44% exhibited gender bias and 25% reflected both gender and racial bias. These findings highlight the urgent need for ethical AI development practices that ensure technology benefits everyone equally.

Take urban planning, for example. When data isn’t disaggregated by sex, gender, or other identity factors, it presents only a partial view of reality – overlooking key differences in how people experience cities.

Disaggregated data matters because the act of counting people shows that they count. Without it, planners and policymakers operate in ambiguity — designing policies and allocating resources for safety, transportation and public spaces without fully understanding the diverse needs, perceptions and experiences of different demographic groups, particularly women and girls.
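
To see concretely how aggregate figures can mask these differences, consider a toy example. The sketch below, written in Python with pandas, disaggregates a hypothetical safety-perception survey by gender; all data, column names, and figures are invented purely for illustration.

```python
# Hypothetical sketch: disaggregating a city survey on perceived safety by gender.
# All data and column names here are invented for illustration.
import pandas as pd

survey = pd.DataFrame({
    "gender":     ["woman", "woman", "woman", "man", "man", "man"],
    "feels_safe": [0, 0, 1, 1, 1, 1],  # 1 = feels safe using public transport at night
})

# The aggregate figure suggests roughly two-thirds of residents feel safe...
print("Overall share who feel safe:", survey["feels_safe"].mean())  # 0.67

# ...but disaggregating by gender reveals a very different picture.
print(survey.groupby("gender")["feels_safe"].mean())  # man: 1.00, woman: 0.33
```

The aggregate number alone would justify very different transport and safety policies than the disaggregated view does.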

By treating data as a living ecosystem — diverse, evolving, and context-aware — we can begin to build AI that reflects the full spectrum of human experience.

UNESCO developed the Readiness Assessment Methodology (RAM) — a tool that enables governments to assess how prepared they are for ethically aligned AI governance and implementation.

The Path to Ethical and Inclusive AI

In 2021, UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence, the first global standard-setting framework addressing the ethical use of AI. Endorsed unanimously by all 193 UNESCO Member States, the Recommendation warns of the risks of AI embedding or amplifying bias, discrimination and inequality. It calls for principles such as transparency, explainability, human rights protections, gender equality and environmental sustainability to guide the responsible development of AI systems.

At the same time, technology leaders are translating these ethical principles into action. With scalable and transparent tools — such as those available on IBM Z and LinuxONE — organisations can gain deeper insights into how and why AI models make specific predictions. This interpretability fosters accountability, fairness, and trust, laying the foundation for responsible innovation.
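
As a generic illustration of model interpretability (not the IBM tooling itself), the minimal sketch below uses scikit-learn's permutation importance on a synthetic dataset: shuffling one feature at a time and measuring the drop in accuracy reveals which inputs a model actually relies on when it predicts.

```python
# Minimal interpretability sketch using permutation importance (a generic
# technique; this is not IBM Z tooling). All data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train a simple classifier on synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```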

Ultimately, building ethical and inclusive AI requires collaboration between global policymakers, technologists and communities. It’s about developing technology that mirrors the diversity of the real world — moving beyond technical efficiency to embrace social responsibility and human-centered design.

How We Can Reduce Bias in AI

Creating fair and inclusive AI starts with intentional design — from the data we collect to the people who build the models.

1. Diversify Data and Development Teams

Bias often begins at the data level. Ensuring datasets include a wide range of demographic and cultural perspectives — and involving diverse teams in model design — can drastically reduce blind spots. Inclusive data leads to inclusive decisions.

Take IBM’s AI Fairness 360, for example. Developed using diverse datasets and multi-disciplinary teams of engineers, ethicists, and social scientists, the toolkit detects and mitigates bias in AI models. This shows how inclusive data, combined with diverse human perspectives, can create AI systems that are fairer and more equitable across demographic groups.
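
As a rough sketch of the kind of check this toolkit enables, the example below uses the open-source aif360 Python package to compute disparate impact on a tiny, entirely hypothetical hiring dataset; the column names, group encoding, and data are assumptions made for illustration.

```python
# Sketch: measuring bias with IBM's open-source AI Fairness 360 (aif360).
# The hiring data below is entirely hypothetical and kept tiny for clarity.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical outcomes; gender encoded as 1 (privileged) / 0 (unprivileged).
df = pd.DataFrame({
    "gender": [1, 1, 1, 1, 0, 0, 0, 0],
    "hired":  [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (unprivileged / privileged).
# A common rule of thumb flags values below 0.8 for further review.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

The package also ships mitigation algorithms (for example, reweighing training examples) that can be applied once a metric like this flags a problem.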

2. Implement Continuous Bias Audits

AI systems should never be “set and forget.” Continuous monitoring, testing, and auditing help identify and correct emerging biases over time. Regular bias audits ensure that models evolve alongside society — remaining fair and relevant as cultural contexts, language, and social norms change. Ethical AI is not a fixed achievement — it’s a continuous process of learning, adaptation, and renewal.
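
To make “continuous” concrete, here is a minimal, self-contained sketch of a recurring audit check; the function, threshold, and batch data are hypothetical, and in practice such a check would run on a schedule against each new batch of production predictions.

```python
# Hypothetical sketch of a recurring bias audit over recent model predictions.
from dataclasses import dataclass

@dataclass
class AuditResult:
    disparate_impact: float
    passed: bool

def audit_batch(records, group_key="gender", outcome_key="approved",
                unprivileged=0, privileged=1, threshold=0.8):
    """Compare favorable-outcome rates between two groups in one batch.

    `records` is a list of dicts such as {"gender": 0, "approved": 1}.
    The 0.8 threshold follows the common "four-fifths" rule of thumb.
    """
    def favorable_rate(group):
        members = [r for r in records if r[group_key] == group]
        return sum(r[outcome_key] for r in members) / len(members)

    ratio = favorable_rate(unprivileged) / favorable_rate(privileged)
    return AuditResult(disparate_impact=ratio, passed=ratio >= threshold)

# Run against each new batch (e.g. daily) and alert reviewers on failure.
batch = [
    {"gender": 1, "approved": 1}, {"gender": 1, "approved": 1},
    {"gender": 1, "approved": 0}, {"gender": 0, "approved": 1},
    {"gender": 0, "approved": 0}, {"gender": 0, "approved": 0},
]
result = audit_batch(batch)
if not result.passed:
    print(f"Bias audit failed: disparate impact = {result.disparate_impact:.2f}")
```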

Communities can build trust in AI through transparency, accountability, and direct involvement in AI development and deployment.

How We Can Build Trust in AI

Reducing bias is just one part of the equation. The other is building trust — ensuring that AI systems are transparent, accountable, and shaped by the people they affect.

1. Transparency and Accountability

Organisations must clearly communicate how AI systems are trained, what data they use, and how decisions are made. Open reporting builds public confidence and encourages responsible oversight.
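
One widely used convention for this kind of open reporting is the “model card”: a short, structured summary published alongside a model. The sketch below is a minimal, hypothetical example; every field and value is invented for illustration.

```python
# A minimal, hypothetical "model card": a publishable summary of how a model
# was built, what data it used, and its known limits. All values are invented.
model_card = {
    "model": "loan-approval-classifier v1.2",
    "intended_use": "Pre-screening of consumer loan applications",
    "training_data": "2019-2023 applications, disaggregated by gender and age "
                     "for bias evaluation",
    "evaluation": {"accuracy": 0.91, "disparate_impact_gender": 0.84},
    "limitations": "Not validated for applicants outside the training region",
    "human_oversight": "All automated rejections are reviewed by a loan officer",
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```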

2. External Ethics and Oversight Committees

Independent ethics boards or review committees can help ensure AI systems meet fairness and compliance standards. External oversight provides an essential safeguard against internal blind spots or commercial pressures that might compromise ethical integrity.

3. Community Engagement and Feedback Loops

Trust grows when the communities most affected by AI are included in its design and evaluation. Creating spaces for dialogue — where users can question, challenge, and influence AI systems — ensures that technology serves the many, not the few.

Like all technologies before it, artificial intelligence reflects the values of its creators.

Beyond Inclusion: Why Co-Evolving Mutualism Matters for Ethical AI

Traditional inclusion methods often focus on adding underrepresented voices to existing systems. While necessary, this approach can still preserve existing power dynamics instead of transforming them.

Co-evolving mutualism, by contrast, offers a more transformative path. It treats AI development as an evolving living partnership between technology and the communities it serves. Rather than designing AI for people, we design it with people — continuously adapting algorithms, data, and processes to reflect evolving human values, needs, and feedback.

This approach moves ethical AI beyond static inclusion toward dynamic collaboration, where both technology and society learn and evolve together. It ensures AI systems are not only fair in design but resilient, responsive, and capable of addressing the intersections of gender, race, class, and accessibility.

AI holds immense potential to reflect and influence a more equitable world – if we build it responsibly. By embedding fairness, explainability, and accountability into every stage of development, we can ensure AI truly serves everyone.

Reducing bias isn’t just about better algorithms; it’s about better values. And as AI continues to evolve, our shared commitment to ethics will determine whether this technology deepens divisions or helps bring us closer together.

When we design AI systems that learn from the world and give back to it — balancing innovation with empathy, intelligence with co-evolution — we create technology that doesn’t just serve humanity but helps humanity thrive.


To listen to these discussions and learn more about ways to reduce bias and build trust in AI, visit IBM Z Day 2025.

My guests in this conversation include:

Zinnya del Villar, Director of Data, Technology, and Innovation, Data-Pop Alliance

Jayesh Nair, AI Product Manager, IBM

May East