25 Definitions of Fairness in AI

Understanding Bias and Fairness in AI: More Complex Than We Think

We’ve all heard about bias in AI and the need for fairness in these systems, but the topic is much broader than it seems. When we talk about AI bias, it’s not just about addressing a single issue like racial or gender bias. Fairness in AI is multi-dimensional, and the decisions we make when designing these systems have real-world consequences for every aspect of society.

I realized this firsthand during a project we were working on. We thought we’d solved the problem of bias in a hiring algorithm by ensuring gender and racial equality. It wasn’t until later that we noticed that our algorithm was favoring candidates from elite schools, unintentionally discriminating against people from less privileged backgrounds. It was a perfect example of how fairness isn’t just about solving one kind of bias—it’s a complex web of ethical decisions.

Let’s dive into some of the specific definitions of fairness in AI, their complexities, and how they play out in the real world.


1. Equality of Opportunity

Description:
This type of fairness ensures that individuals who qualify for a certain opportunity, regardless of their background, have an equal chance to benefit. The idea is to level the playing field so that the only thing that matters is merit or ability.

Real-World Example:
Imagine an AI system used for hiring employees. If designed with equality of opportunity in mind, it would ensure that applicants are judged solely on their qualifications and not on factors like their race, gender, or where they went to school. For instance, someone from a small community college should have the same chance of being selected as someone from a prestigious Ivy League school, assuming their qualifications are comparable.


2. Equality of Outcome

Description:
Fairness here is about ensuring that different groups (e.g., racial, gender, socioeconomic) receive the same outcomes. It aims to correct for historical inequalities by adjusting decisions to create equal results for all groups.

Real-World Example:
In a university admissions system, equality of outcome would mean that if 30% of applicants are from a minority group, then 30% of those admitted should also come from that group, regardless of whether that’s reflective of their qualifications. This approach can help rectify past imbalances but can also lead to debates about whether it’s fair to other groups.


3. Demographic Parity

Description:
Demographic parity means that the decisions made by the AI (e.g., loan approvals, hiring decisions) are distributed equally across different demographic groups. If a model predicts who should get a loan, it should approve loans for different groups (e.g., men and women) at the same rate.

Real-World Example:
In a lending system, demographic parity would ensure that men and women receive loans at the same rate. If an AI system approves 70% of loan applications from men, it should also approve 70% of women’s applications. This helps avoid favoring one group over another but can be tricky if the groups have different levels of creditworthiness based on historical inequalities.
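The lending example above can be sketched in a few lines of code. This is a minimal illustration with made-up approval data, not a production fairness audit: demographic parity holds when the approval rates of the two groups match.

```python
def approval_rate(decisions):
    """Fraction of applicants approved (decision == 1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in approval rates between two groups.
    A gap of 0 means demographic parity holds exactly."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

# Hypothetical loan decisions: 1 = approved, 0 = denied
men   = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]   # 7 of 10 approved
women = [1, 0, 1, 1, 1, 0, 1, 1, 1, 0]   # 7 of 10 approved

gap = demographic_parity_gap(men, women)  # 0.0 -> parity holds
```

In practice a small tolerance (rather than an exact zero gap) is usually acceptable, since real samples are noisy.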


4. Equalized Odds

Description:
This fairness definition requires that an AI system perform equally well for all demographic groups. It calls for equal true positive rates (e.g., correctly approving applicants who would repay) and equal false positive rates (e.g., wrongly approving applicants who would default) across groups.

Real-World Example:
In predictive policing, a model that satisfies equalized odds would flag actual criminal activity at the same rate for different racial groups, and it would also falsely flag innocent people at the same rate across groups. For instance, if the system correctly identifies 90% of genuine suspects in one racial group, it should do the same for the others, and its false-alarm rate should likewise match.
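Checking equalized odds comes down to comparing true positive and false positive rates between groups. Here is a rough sketch with toy labels and predictions (all data hypothetical):

```python
def rates(y_true, y_pred):
    """True positive rate and false positive rate for one group."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

def equalized_odds_gaps(truth_a, pred_a, truth_b, pred_b):
    """Gaps in TPR and FPR between two groups.
    Both gaps are 0 when equalized odds holds exactly."""
    tpr_a, fpr_a = rates(truth_a, pred_a)
    tpr_b, fpr_b = rates(truth_b, pred_b)
    return abs(tpr_a - tpr_b), abs(fpr_a - fpr_b)
```

Note that this needs ground-truth labels, which is exactly why equalized odds is harder to audit than demographic parity: you must know who actually repaid, reoffended, and so on.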


5. Predictive Parity

Description:
Predictive parity ensures that when the AI predicts a positive outcome (like approving a loan), the likelihood that the prediction is correct is the same for all demographic groups.

Real-World Example:
In a loan approval system, predictive parity means that if the AI predicts that an applicant will repay a loan, the likelihood of that prediction being correct should be the same for all groups. So, if the system predicts that 80% of applicants from a certain group will repay their loans, the actual repayment rate should be around 80% for all groups.
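Predictive parity is about the precision (positive predictive value) of the model's positive predictions being equal across groups. A minimal sketch, again with hypothetical labels:

```python
def positive_predictive_value(y_true, y_pred):
    """Among applicants the model predicted would repay (pred == 1),
    the fraction who actually did (true == 1)."""
    predicted_pos = [t for t, p in zip(y_true, y_pred) if p == 1]
    return sum(predicted_pos) / len(predicted_pos) if predicted_pos else 0.0

def predictive_parity_gap(truth_a, pred_a, truth_b, pred_b):
    """Difference in positive predictive value between two groups."""
    return abs(positive_predictive_value(truth_a, pred_a)
               - positive_predictive_value(truth_b, pred_b))
```

A well-known tension worth keeping in mind: when base rates differ between groups, predictive parity and equalized odds generally cannot both hold at once.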


6. Proportional Representation

Description:
This fairness metric ensures that the results of an AI system proportionally represent the population. If a particular demographic makes up 30% of the general population, they should also make up 30% of the positive outcomes from the AI system.

Real-World Example:
Imagine an AI system that identifies job candidates for a company. If 30% of the local population is made up of a particular ethnic group, a system adhering to proportional representation would ensure that roughly 30% of the candidates it selects come from that group. This helps ensure that underrepresented groups have fair access to opportunities.
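The check here is a simple share comparison: does the group's share of selected candidates match its share of the population? A sketch with invented numbers:

```python
def representation_gap(selected_from_group, total_selected, population_share):
    """How far the group's share of selections deviates
    from its share of the general population."""
    selected_share = selected_from_group / total_selected
    return abs(selected_share - population_share)

# Hypothetical: a group makes up 30% of the local population and
# 15 of 50 selected candidates come from it -> shares match exactly.
gap = representation_gap(15, 50, 0.30)  # 0.0
```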


7. Individual Fairness

Description:
Individual fairness ensures that similar individuals are treated similarly by the AI system, regardless of their group membership.

Real-World Example:
In a healthcare system, if two patients have the same symptoms and medical history, individual fairness would ensure that they receive the same treatment recommendations, regardless of their race or gender. This fairness principle avoids group-based discrimination and focuses on the individual.
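Individual fairness is often formalized as a Lipschitz-style condition: the difference between two people's scores should be bounded by how different they actually are. A toy sketch, where the distance metric and the risk model are both made up for illustration:

```python
def distance(x, y):
    """Illustrative similarity metric over numeric patient features.
    Choosing a defensible metric is the hard part in practice."""
    return sum(abs(a - b) for a, b in zip(x, y))

def individually_fair(model, x, y, lipschitz=1.0):
    """Similar individuals get similar scores:
    |model(x) - model(y)| <= L * d(x, y)."""
    return abs(model(x) - model(y)) <= lipschitz * distance(x, y)

# Hypothetical risk model over (age_in_decades, symptom_severity)
model = lambda v: 0.3 * v[0] + 0.5 * v[1]
```

Two patients with identical features must receive identical scores (distance 0 forces a score difference of 0), which is the healthcare example above in miniature.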


8. Group Fairness

Description:
Group fairness ensures that groups, rather than individuals, are treated fairly. This is especially relevant when considering historically marginalized groups, like racial or gender minorities.

Real-World Example:
In a hiring system, group fairness would ensure that underrepresented groups receive fair treatment as a collective, even if it means making adjustments to balance out historical disadvantages. For example, ensuring that women are considered for leadership roles at the same rate as men, even if individual qualifications seem slightly different, is a step towards correcting historical bias.


9. Fairness Through Unawareness

Description:
This principle suggests that fairness can be achieved by simply not including sensitive attributes like race or gender in the AI’s decision-making process. The idea is that if the model doesn’t “see” these attributes, it can’t discriminate based on them.

Real-World Example:
In a hiring system, fairness through unawareness would mean that the model does not use information about the applicant’s gender or race to make decisions. However, this can be problematic because biases can still creep in through other variables that are correlated with sensitive attributes, like ZIP codes.


10. Counterfactual Fairness

Description:
Counterfactual fairness ensures that an AI model’s decision would remain the same if a protected characteristic (like race or gender) were different. In other words, changing someone’s race or gender in a “counterfactual world” shouldn’t change the AI’s decision.

Real-World Example:
In a hiring algorithm, counterfactual fairness would mean that if a qualified candidate who is a woman were hypothetically treated as a man, the model would still recommend hiring her. This ensures that the decision isn’t influenced by the candidate’s gender.
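A crude (non-causal) version of this check can be sketched by flipping the protected attribute and confirming the decision is unchanged. Real counterfactual fairness requires a causal model of how the attribute influences other features; this toy version only tests the direct path:

```python
def counterfactually_fair(model, applicant, attribute="gender",
                          values=("woman", "man")):
    """Flip the protected attribute across the given values and
    check that the model's decision never changes."""
    decisions = {model(dict(applicant, **{attribute: v})) for v in values}
    return len(decisions) == 1

# Hypothetical model that only looks at experience
fair_model = lambda a: a["years_experience"] >= 5
applicant = {"gender": "woman", "years_experience": 8}

counterfactually_fair(fair_model, applicant)  # True: gender flip changes nothing
```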


11. Calibration Fairness

Description:
Calibration fairness ensures that the predicted scores from an AI model reflect the same likelihood of an outcome across different groups. If an AI predicts that a person has an 80% chance of success, that prediction should hold true for all demographic groups.

Real-World Example:
In a recidivism prediction system used in criminal justice, calibration fairness would mean that among individuals the model scores as having an 80% chance of reoffending, roughly 80% actually do, and that this holds equally for people of all racial and ethnic backgrounds.
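A bare-bones calibration check compares the average predicted score to the observed outcome rate within a group (a real audit would do this per score bin; here everyone shares one score, and all data is invented):

```python
def calibration_error(scores, outcomes):
    """Gap between the mean predicted score and the observed
    outcome rate for one group (single-bin simplification)."""
    predicted = sum(scores) / len(scores)
    observed = sum(outcomes) / len(outcomes)
    return abs(predicted - observed)

# Hypothetical: every individual is scored 0.8, and in each group
# 4 of 5 actually reoffend -> both groups are well calibrated.
group_a_err = calibration_error([0.8] * 5, [1, 1, 1, 1, 0])
group_b_err = calibration_error([0.8] * 5, [1, 1, 1, 1, 0])
```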


12. Conditional Statistical Parity

Description:
This fairness metric ensures that outcomes are similar across groups when controlling for certain non-sensitive characteristics, like qualifications or performance metrics.

Real-World Example:
In a hiring system, conditional statistical parity would mean that applicants with the same level of education and work experience should have the same probability of being hired, regardless of their race or gender.
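The word "conditional" is doing the work here: instead of comparing raw hiring rates, you compare rates within a slice of applicants that share the same legitimate qualification. A sketch over a hypothetical applicant pool:

```python
def conditional_rates(applicants, education_level):
    """Hiring rate per group, restricted to applicants who share
    the given (non-sensitive) qualification level."""
    rates = {}
    for group in {a["group"] for a in applicants}:
        matching = [a for a in applicants
                    if a["group"] == group and a["education"] == education_level]
        if matching:
            rates[group] = sum(a["hired"] for a in matching) / len(matching)
    return rates

# Among bachelor's-level applicants, both groups are hired at 50%.
pool = [
    {"group": "g1", "education": "BA", "hired": 1},
    {"group": "g1", "education": "BA", "hired": 0},
    {"group": "g2", "education": "BA", "hired": 1},
    {"group": "g2", "education": "BA", "hired": 0},
]
```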


13. Rawlsian Fairness

Description:
Derived from philosopher John Rawls' A Theory of Justice, this fairness principle emphasizes improving the outcomes of the least advantaged members of society.

Real-World Example:
In healthcare, a Rawlsian fair AI system would prioritize care for the most vulnerable populations, such as low-income patients or those with pre-existing conditions, even if it means diverting resources from healthier, more privileged groups.


14. Meritocratic Fairness

Description:
This form of fairness focuses on rewarding individuals based on their effort and ability, rather than characteristics beyond their control, such as socioeconomic background.

Real-World Example:
A meritocratic AI system used for college admissions would ensure that students are evaluated primarily on their academic performance and extracurricular achievements, without being influenced by their family’s wealth or connections.


15. Max-Min Fairness

Description:
Max-min fairness prioritizes improving the welfare of the least advantaged group. The goal is to maximize the minimum outcome, ensuring that the worst-off benefit the most.

Real-World Example:
In a school admissions system, max-min fairness would prioritize access for students from the most disadvantaged backgrounds, ensuring that those with the least resources have the greatest opportunity to benefit.
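Max-min selection has a direct computational reading: among the candidate policies, pick the one whose worst-off group fares best. A toy sketch with invented "access scores" per group:

```python
def max_min_choice(policies, outcome):
    """Pick the policy that maximizes the minimum group outcome."""
    return max(policies, key=lambda p: min(outcome(p).values()))

# Hypothetical admission policies mapping group -> access score.
# Here each policy IS its outcome table, so outcome is the identity.
outcome = lambda policy: policy
policies = [
    {"advantaged": 0.9, "disadvantaged": 0.2},   # higher average, low floor
    {"advantaged": 0.6, "disadvantaged": 0.5},   # lower average, higher floor
]

best = max_min_choice(policies, outcome)  # picks the second policy
```

Notice the contrast with a utilitarian rule, which would pick the first policy for its higher total.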


16. Disparate Impact

Description:
Disparate impact occurs when an AI system disproportionately affects one demographic group over another, even if the system isn’t explicitly designed to do so. This definition of fairness ensures that the AI doesn’t unintentionally harm one group more than another.

Real-World Example:
In employment screening, a disparate impact might occur if an AI tool used to screen resumes unintentionally favors applicants from a particular race or gender because it’s trained on biased historical data. Even if the system doesn’t explicitly consider race or gender, it may still discriminate by favoring candidates from privileged backgrounds due to hidden biases in the data. For instance, an AI might favor candidates who use certain language patterns, which correlate with a particular demographic group.
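Disparate impact is commonly screened with the "four-fifths rule" used in US employment law: if the selection rate for a disadvantaged group falls below 80% of the rate for the most favored group, the disparity is flagged. A minimal sketch:

```python
def disparate_impact_ratio(rate_disadvantaged, rate_advantaged):
    """Selection-rate ratio between groups (1.0 = equal rates)."""
    return rate_disadvantaged / rate_advantaged

def violates_four_fifths_rule(rate_disadvantaged, rate_advantaged):
    """The common screening threshold: flag ratios below 0.8."""
    return disparate_impact_ratio(rate_disadvantaged, rate_advantaged) < 0.8

# Hypothetical resume screen: 30% of one group passes vs. 60% of
# another -> ratio 0.5, well below the 0.8 threshold.
```

The threshold is a rule of thumb for flagging cases, not a legal verdict in itself.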


17. Disparate Treatment

Description:
Disparate treatment occurs when an AI system explicitly treats people differently based on protected attributes like race, gender, or age. This type of fairness ensures that sensitive characteristics are not used as decision-making criteria unless absolutely necessary.

Real-World Example:
In an insurance underwriting process, if an AI system explicitly charges different premiums based on gender (e.g., higher premiums for women), that would be considered disparate treatment. A fair system would ensure that gender isn’t used as a criterion for decision-making unless it’s directly relevant (and legal), such as in health insurance, where actuarial data may justify certain considerations.


18. Unbiasedness

Description:
This fairness principle focuses on ensuring that an AI system does not show systematic bias in its predictions or decisions. The system should be designed to avoid any skew that favors or disadvantages any particular group.

Real-World Example:
An AI used in facial recognition should perform equally well across all demographic groups. However, many facial recognition systems have been shown to have higher error rates for people with darker skin tones. A truly unbiased system would have the same accuracy for all skin tones, ensuring that no group is unfairly misrepresented or mistreated.


19. Procedural Fairness

Description:
Procedural fairness refers to the fairness of the process by which decisions are made, rather than focusing solely on the outcomes. It’s about ensuring that the steps taken by the AI system are transparent, consistent, and impartial.

Real-World Example:
In hiring, an AI system that transparently follows the same decision-making process for all candidates, without any hidden or inconsistent rules, is adhering to procedural fairness. Candidates would know exactly how they were assessed, and the process would be open to scrutiny. This contrasts with opaque systems where applicants don’t understand why they were rejected.


20. Outcome Fairness

Description:
Outcome fairness ensures that the final decisions or predictions made by an AI system lead to equitable results, even if the process itself wasn’t perfectly fair. It focuses on correcting for disparities in the results of the system.

Real-World Example:
In the context of a hiring algorithm, outcome fairness would mean that the system’s final hiring decisions don’t disproportionately exclude underrepresented groups, even if those groups started with fewer resources. For example, an AI tool might be adjusted to ensure that women, who may have faced historical barriers in a particular field, are equally represented in the final pool of hires.


21. Inclusion Fairness

Description:
Inclusion fairness ensures that marginalized and underrepresented groups are included and treated equitably in the AI system’s decision-making process. It requires deliberate efforts to ensure that AI systems do not exclude or overlook these groups.

Real-World Example:
In a content recommendation system (such as a news or social media platform), inclusion fairness would ensure that news stories from minority perspectives or less popular viewpoints are still promoted and made visible to users, rather than being drowned out by more mainstream content. This prevents the reinforcement of existing power structures and ensures a diversity of viewpoints.


22. Utilitarian Fairness

Description:
Utilitarian fairness is rooted in the principle of maximizing the overall good for the greatest number of people. It focuses on designing AI systems that produce the best outcomes for the majority, even if this leads to some inequalities.

Real-World Example:
In public health, a utilitarian approach to fairness might involve using AI to allocate resources, such as vaccines or healthcare services, in a way that maximizes the overall health of the population. While some individuals may receive fewer resources or attention, the overall impact is positive for the majority.


23. Libertarian Fairness

Description:
Libertarian fairness is based on the idea of respecting individual freedoms and ensuring that AI systems do not restrict people’s autonomy or freedom of choice. It prioritizes personal liberty over equality of outcomes.

Real-World Example:
In a financial system, libertarian fairness would ensure that individuals have the freedom to access certain financial products or services without being restricted by overly paternalistic AI systems. For example, an AI-driven credit scoring system might give individuals more freedom to choose high-risk loans if they wish, rather than restricting their access based on predictive models that assume they will default.


24. No Harms Fairness

Description:
No harms fairness focuses on ensuring that AI systems do not cause harm to any particular group or individual, even if they are designed to benefit the majority. It prioritizes minimizing harm over achieving specific fairness metrics like equal outcomes.

Real-World Example:
In the criminal justice system, no harms fairness would require that an AI-driven risk assessment tool not increase the likelihood of wrongful convictions or harsher sentences for any group, even if the system improves accuracy overall. For instance, it would be unacceptable if the tool disproportionately harmed Black or Hispanic defendants, even if it was accurate for the majority of cases.


25. Equality of Access

Description:
Equality of access ensures that all individuals, regardless of their background or demographic, have equal access to the opportunities and resources provided by the AI system.

Real-World Example:
In education, an AI system designed to recommend scholarships or academic programs would ensure that students from rural areas, lower socioeconomic backgrounds, or underrepresented minorities have the same access to opportunities as their more privileged peers. This might mean providing additional resources to ensure that disadvantaged students have the same ability to apply for and access educational programs.

What’s Next: Moving from Theory to Action

Now that we’ve explored the various dimensions of fairness in AI, the question becomes: how do we apply these principles in practice? Designing AI systems that are truly fair requires more than just a theoretical understanding—it demands actionable steps, thoughtful design, and ongoing evaluation to ensure we’re addressing real-world biases effectively.

As AI developers, policymakers, and users, it’s crucial to reflect on these definitions and consider which are most relevant to the systems we’re building. We must ask ourselves: which definition of fairness best fits our specific context? How can we balance conflicting fairness goals without introducing new forms of inequality? And most importantly, how do we ensure that fairness in AI is not just a buzzword but a guiding principle?

Next Steps: To dig deeper into how fairness is defined and operationalized in AI, check out my next article, Why Fairness in AI Is More Complex Than You’ve Heard, where I explore the specific frameworks that guide fairness in AI systems and provide practical insights into how these can be applied in real-world scenarios.