Why Fairness in AI Is More Complex Than You’ve Heard

In my previous article, “25 Definitions of Fairness in AI,” I dove into the various forms of bias that can manifest in AI systems and how different definitions of fairness aim to address these biases. If you haven’t had a chance to read it, I highly recommend doing so, as it sets the stage for the deeper exploration we’re about to embark on.

It wasn’t long ago that I was sitting in a meeting with a group of engineers discussing a new AI tool we were developing. We were excited about its potential to streamline hiring decisions by removing human biases from the process. Then someone raised a question: “But how do we ensure the algorithm itself is fair?” I paused. We were so focused on eliminating overt human bias that we hadn’t stopped to consider the biases lurking in the data or the very assumptions we were building into the system. That question lingered in my mind, and it sparked a deeper exploration into the concept of fairness in AI.

It’s easy to assume that fairness in AI is a straightforward problem—just treat everyone the same, right? But the more I’ve studied, the more I’ve realized how profoundly complex fairness can be, especially when we’re talking about systems designed to make decisions on our behalf. I was floored when I learned that there are about 25 different definitions of fairness in AI. How can something as fundamental as fairness, which seems like it should be universal, have so many interpretations?

It turns out it’s because fairness is contextual. What’s fair in one situation might not be in another. This complexity is both staggering and necessary. And it’s this nuance that we need to understand if we’re going to create AI systems that genuinely serve everyone.
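To make that conflict concrete, here is a minimal Python sketch using made-up numbers. It compares two widely cited definitions—demographic parity (equal selection rates across groups) and equal opportunity (equal true positive rates)—on the same toy predictions. One criterion can hold while the other fails, which is exactly why “fair” has no single formula. The data and groups below are purely illustrative assumptions, not results from any real system.

```python
# A minimal, hypothetical sketch: two common fairness definitions applied to
# the same predictions can disagree. All numbers are invented for illustration.
import numpy as np

# Toy labels and predictions for two groups, A and B (hypothetical data).
group  = np.array(["A"] * 10 + ["B"] * 10)
y_true = np.array([1,1,1,1,0,0,0,0,0,0] + [1,1,1,1,1,1,0,0,0,0])
y_pred = np.array([1,1,1,0,1,0,0,0,0,0] + [1,1,1,1,0,0,1,0,0,0])

def selection_rate(pred, mask):
    """Share of people in a group who receive the positive decision."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Share of truly qualified people in a group who are accepted."""
    qualified = mask & (true == 1)
    return pred[qualified].mean()

for g in ["A", "B"]:
    m = group == g
    print(g,
          "selection rate:", round(float(selection_rate(y_pred, m)), 2),
          "TPR:", round(float(true_positive_rate(y_true, y_pred, m)), 2))

# Demographic parity asks for equal selection rates; equal opportunity asks
# for equal true positive rates. Here the selection rates match (0.4 vs 0.4)
# while the TPRs do not (0.75 vs 0.67): one definition is satisfied, the
# other is violated, on the very same predictions.
```

With different base rates between groups, you often cannot satisfy both criteria at once, so choosing a definition is itself a value judgment.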

The Challenge of Defining Fairness

One of the central challenges of fairness in AI is that it depends on what we value. For instance, take the question of whether AI should be race-aware or race-blind. At first glance, it might seem obvious that ignoring race would ensure fairness. If the algorithm doesn’t “see” race, how could it be biased?

But the reality is more nuanced. Ignoring race can mean ignoring the historical and structural inequalities that persist in society. In some cases, race-aware models are necessary to ensure equitable outcomes because they account for these disparities. On the other hand, focusing on race too explicitly can reinforce divisions we’re trying to overcome. There’s no easy answer, and both approaches have their merits depending on the context.

I recall reading about a healthcare algorithm designed to allocate medical resources more efficiently. Initially, the developers opted for a race-blind approach, assuming it would avoid bias. However, what they found was that minority groups, who historically had less access to healthcare, were disproportionately deprioritized by the algorithm. By not considering race, the system unintentionally perpetuated existing inequalities. It’s a perfect example of how even well-intentioned designs can go wrong if we don’t take into account the broader context.
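To see how that can happen mechanically, here is a hedged toy simulation in Python. The model never sees group membership, but a proxy feature—historical spending on care, in this invented setup—correlates with it, so the “blind” rule still deprioritizes one group. The feature names, effect sizes, and numbers are assumptions for illustration only, not a reconstruction of the actual healthcare algorithm.

```python
# A hypothetical sketch of a "group-blind" audit: the model never sees the
# group column, but a correlated feature (past spending on care) still
# produces unequal outcomes. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Group membership (0/1), never shown to the decision rule.
group = rng.integers(0, 2, n)

# Past healthcare cost correlates with group because of unequal access,
# not unequal medical need (an assumption of this toy setup).
need = rng.normal(loc=1.0, size=n)                  # true medical need
cost = need + np.where(group == 1, -0.8, 0.0) + rng.normal(scale=0.3, size=n)

# "Blind" rule: prioritize the patients with the highest past cost.
priority = cost > np.quantile(cost, 0.7)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: prioritized {priority[mask].mean():.0%} "
          f"(mean need {need[mask].mean():.2f})")
# Both groups have similar need, but the group that historically spent less
# is prioritized far less often - blindness did not prevent the disparity.
```

The point of the sketch is simply that removing the sensitive column does not remove its influence when other features carry its signal; only a group-level audit makes the disparity visible.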

“Fairness is not about treating everyone equally; it’s about recognizing where people start and ensuring equitable outcomes.”

Unintended Consequences and Interconnected Systems

The ripple effects of AI decisions are another layer of complexity. Fixing bias in one area doesn’t necessarily mean we’ve achieved fairness across the board. AI operates within an interconnected web of systems—finance, healthcare, education, housing—and a change in one domain can have unexpected consequences elsewhere.

Imagine a company that uses AI to improve hiring practices, aiming to eliminate bias by focusing solely on qualifications and experience. Sounds great, right? But then consider how educational and career opportunities are unevenly distributed across socioeconomic backgrounds. If the algorithm only looks at formal qualifications, it might overlook candidates who didn’t have access to elite institutions or high-profile internships but have other valuable skills. In trying to solve one problem, we inadvertently create another.

This idea became very real to me during a project where we tried to optimize AI for lending decisions. Our goal was to reduce bias against minority applicants, and we made significant strides in ensuring fairness in credit scoring. But what we hadn’t fully considered was how these changes would affect downstream systems, like housing and insurance. The interconnectedness of these domains meant that a fairer lending algorithm had unintended consequences in other areas, highlighting the importance of a holistic view.

“In an ecosystem of interconnected systems, fairness in one domain can introduce unfairness in another.”

The Trade-off Between Fairness and Privacy

One of the most challenging trade-offs in AI development is between fairness and privacy. Making AI fairer, particularly in sensitive areas like financial services or healthcare, often requires feeding models more granular data. This means that in pursuit of fairness, we’re also expanding surveillance—gathering more personal information to ensure the system isn’t biased.

A prime example is in financial lending. New AI-driven tools are offering a fairer assessment of applicants by incorporating a wide range of data points, from spending habits to social media activity. While this can reduce bias in lending decisions, it raises significant concerns about privacy. Is it fair to base someone’s financial future on their private life? And where do we draw the line between fairness and invasive data collection?
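One way to feel this tension in practice: you cannot even measure disparity without collecting the sensitive attribute. The sketch below, with hypothetical column names and toy data, shows a simple audit that fails the moment the protected attribute is withheld in the name of privacy. It is a minimal illustration, not a real lending audit.

```python
# A minimal sketch of the tension: auditing for bias requires collecting
# exactly the attributes that raise privacy concerns. Column names and data
# here are hypothetical.
import pandas as pd

def disparity_report(df: pd.DataFrame, decision: str, sensitive: str) -> pd.Series:
    """Approval rate per group - impossible to compute if the sensitive
    attribute was never collected in the first place."""
    if sensitive not in df.columns:
        raise ValueError(
            f"Cannot audit for bias: '{sensitive}' was not collected. "
            "Privacy-preserving data collection can make fairness unmeasurable."
        )
    return df.groupby(sensitive)[decision].mean()

# Hypothetical lending decisions with group membership recorded.
loans = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
})
print(disparity_report(loans, decision="approved", sensitive="group"))
```

The same logic applies to mitigation: techniques like reweighing or group-specific thresholds need group labels to work, so every step toward measurable fairness asks for more of the data we might prefer not to collect.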

These are not hypothetical issues. They reflect real-world challenges that need thoughtful, ethical consideration. We have to ask ourselves whether fairness at the cost of privacy is truly fair at all, or whether we’re simply shifting the burden elsewhere.

When Fairness Becomes Personal

The more I reflect on the complexities of fairness in AI, the more personal this issue becomes. As someone who works closely with AI systems, I’m constantly reminded that the decisions made by these systems are not abstract—they affect real people in real ways. Whether it’s a wrongful arrest due to a flawed facial recognition algorithm or a missed opportunity for a loan, the stakes are incredibly high.

One of the hardest lessons I’ve learned is that fairness in AI isn’t something we can “solve” once and move on. It requires ongoing vigilance, constant questioning, and an openness to revise our assumptions. Fairness, much like the systems we’re trying to improve, is dynamic.

So, as we continue to develop AI, the question we should be asking ourselves isn’t just “Is this algorithm fair?” but rather, “What does fairness mean in this context, and who stands to benefit or be harmed by our definition?”

The challenge, then, is not just building fair algorithms but ensuring those algorithms operate within a broader framework of equity, responsibility, and respect for privacy.

The next time you interact with an AI system, whether it’s a hiring tool, a loan application, or even a recommendation engine, consider this: Fairness is not a single destination, but an ongoing journey. And it’s a journey that requires all of us to be vigilant, thoughtful, and most of all, humble.

A Personal Call to Explore the Deeper Side of Fairness in AI

Inspiration can strike in unexpected ways, and for me, one pivotal moment came after watching a TEDx talk by Professor S. Craig Watkins, titled “Artificial Intelligence and the Future of Racial Justice.” Watkins raises the alarm about how AI, left unchecked, can perpetuate the systemic inequalities we aim to eradicate. It’s a must-watch for anyone working in AI, as it frames the issue of fairness in a way that brings home the urgency of the matter.

I highly recommend this talk to anyone who, like me, is passionate about not just developing smarter systems, but fairer ones. It serves as both a warning and a guidepost for where our efforts should be focused as we move forward.