
Artificial Intelligence is changing the way we make decisions — from who gets a loan, to who gets a job interview, to who is flagged for additional security checks. But there’s a challenge that goes beyond predictive performance: bias.
Bias in AI isn’t always intentional. Sometimes it’s the byproduct of skewed datasets, hidden correlations, or unexamined assumptions in algorithms. The result? Models that are accurate on paper but systematically disadvantage certain groups. In sensitive domains like finance, healthcare, or justice, this isn’t just a bug — it’s a social risk.
The Research Question
Our team at Dundalk Institute of Technology — Zahid Irfan, Róisín Loughran, Muhammad Adil Raja, and Fergal McCaffery — asked a simple but important question:
Can we design AI systems that are both accurate and fair — without sacrificing too much of either?
Our Approach: Cause Meets Evolution
To tackle this, we brought together two powerful ideas:
- Causal Bayesian Networks (CBNs)
These are graphical models that capture cause-and-effect relationships, not just correlations. By structuring knowledge in a causal way, we can better understand why predictions are made and detect pathways that lead to unfair outcomes.
- Grammatical Evolution (GE)
An evolutionary algorithm that “evolves” solutions over generations, guided by a grammar that ensures valid models. Think of it as natural selection for algorithms — only the fittest survive.
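To make the GE idea a little more concrete, here is a purely illustrative Python sketch (not the grammar or encoding from our paper): a genotype of integer codons is decoded into a set of directed edges that respects a fixed variable ordering, so every decoded structure is a valid acyclic graph. That validity guarantee is exactly the role the grammar plays in GE. The variable names below are hypothetical.

```python
# Illustrative sketch only: a toy grammar-style mapping from a GE genotype
# (a list of integer codons) to a valid causal graph. The variable names and
# the decoding rule are hypothetical, not those used in the paper.

VARIABLES = ["age", "sex", "credit_amount", "duration", "credit_risk"]

def genotype_to_edges(codons):
    """Map codons to directed edges that respect a fixed variable ordering,
    so every decoded graph is acyclic by construction."""
    edges = []
    i = 0
    for child_idx, child in enumerate(VARIABLES[1:], start=1):
        for parent in VARIABLES[:child_idx]:
            if codons[i % len(codons)] % 2 == 1:   # codon decides: include this edge?
                edges.append((parent, child))
            i += 1
    return edges

# Example: decode one genotype into a candidate causal structure.
print(genotype_to_edges([3, 8, 1, 4, 7, 2, 9, 5, 6, 0]))
```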
We combined these in a multi-objective optimisation setting, using NSGA-II (the Non-dominated Sorting Genetic Algorithm II), to evolve CBNs that balance two fitness measures:
- Accuracy — the percentage of correct predictions.
- Fairness — measured by Equal Opportunity Difference (EOD), which compares true positive rates across protected groups (in our case, male vs. female applicants).
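As a rough sketch of how these two measures can be computed, assume binary labels, binary predictions, and a binary protected attribute; the sign and grouping conventions below are one common choice, not necessarily exactly those used in the paper.

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of correct predictions."""
    return np.mean(y_true == y_pred)

def equal_opportunity_difference(y_true, y_pred, protected):
    """Gap in true positive rates between the two groups encoded in
    `protected` (e.g. 0 = male, 1 = female). Closer to 0 means fairer.
    The absolute-value convention here is an assumption for this sketch."""
    tpr = {}
    for group in (0, 1):
        mask = (protected == group) & (y_true == 1)   # actual positives in this group
        tpr[group] = np.mean(y_pred[mask] == 1) if mask.any() else 0.0
    return abs(tpr[0] - tpr[1])

# Toy example: 1 = good credit risk.
y_true    = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred    = np.array([1, 0, 0, 1, 0, 1, 0, 0])
protected = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(accuracy(y_true, y_pred), equal_opportunity_difference(y_true, y_pred, protected))
```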
Testing Ground: The German Credit Dataset
We tested our approach on the German Credit dataset, which contains credit application data with a notable gender imbalance (70% male, 30% female).
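For readers who want to follow along, the dataset is available on OpenML as credit-g. In that copy, sex is folded into the personal_status field, so a binary sex attribute has to be derived from it; the snippet below is one way to do that and is not necessarily the exact preprocessing used in the study.

```python
from sklearn.datasets import fetch_openml

# Load the OpenML copy of the German Credit dataset.
credit = fetch_openml("credit-g", version=1, as_frame=True)
X, y = credit.data, credit.target              # target is 'good' / 'bad' credit risk

# Derive a binary sex attribute from personal_status (one possible preprocessing).
sex = X["personal_status"].astype(str).str.contains("female").map(
    {True: "female", False: "male"}
)
print(sex.value_counts(normalize=True))        # roughly the 70/30 male/female split noted above
```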
We ran two sets of experiments:
- Single-objective: optimise for fairness alone or accuracy alone.
- Multi-objective: optimise both together.
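The post does not go into implementation details, but the multi-objective setup can be sketched with an off-the-shelf NSGA-II implementation such as the one in DEAP. In this sketch the individual encoding (a short codon list) and the evaluation function are placeholders standing in for the real GE decoding of a CBN and its accuracy and EOD scores; only the two-objective selection machinery is the point here.

```python
import random
from deap import base, creator, tools

# Two objectives: maximise accuracy (+1.0), minimise the fairness gap / EOD (-1.0).
creator.create("FitnessAccFair", base.Fitness, weights=(1.0, -1.0))
creator.create("Individual", list, fitness=creator.FitnessAccFair)

toolbox = base.Toolbox()
toolbox.register("attr_codon", random.randint, 0, 255)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.attr_codon, n=10)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

def evaluate(individual):
    # Placeholder: in the real pipeline the codons would be decoded into a CBN,
    # the network fitted and queried on the credit data, and accuracy / EOD
    # computed from its predictions (e.g. with the functions sketched earlier).
    acc = random.uniform(0.6, 0.8)
    eod = random.uniform(0.0, 0.3)
    return acc, eod

toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxOnePoint)
toolbox.register("mutate", tools.mutUniformInt, low=0, up=255, indpb=0.1)

pop = toolbox.population(n=50)
for ind in pop:
    ind.fitness.values = toolbox.evaluate(ind)

for gen in range(20):
    # Clone and vary the current population.
    offspring = [toolbox.clone(ind) for ind in pop]
    for i in range(0, len(offspring) - 1, 2):
        if random.random() < 0.9:
            toolbox.mate(offspring[i], offspring[i + 1])
    for ind in offspring:
        toolbox.mutate(ind)
        ind.fitness.values = toolbox.evaluate(ind)
    # NSGA-II environmental selection keeps a Pareto-balanced population.
    pop = tools.selNSGA2(pop + offspring, k=len(pop))

# The surviving population approximates the accuracy/fairness trade-off front.
```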
The Results
The results were striking:
- Single-objective fairness: high fairness, low accuracy.
- Single-objective accuracy: high accuracy, poor fairness.
- Multi-objective optimisation:
  - Fairness improved by 32% compared to accuracy-only optimisation.
  - Accuracy dropped by just 2.85% — a small price for a large fairness gain.
In other words, it is possible to have AI that is both fairer and still highly accurate.
Why This Matters
This work shows that fairness in AI isn’t just a theoretical ideal — it’s an achievable design goal. By combining causal reasoning with evolutionary search, we can navigate the trade-offs between accuracy and fairness more intelligently.
For industries deploying AI in sensitive decision-making, this means:
- More equitable outcomes.
- Greater transparency in why decisions are made.
- Less risk of unintentionally embedding social biases into automated systems.
Looking Ahead
We’ll be presenting this research at GECCO 2025 in Malaga, Spain. As AI becomes more embedded in critical infrastructure, balancing accuracy with fairness will be essential for maintaining public trust.
If we want AI to truly serve everyone, we need to design it with fairness as a first-class objective — not an afterthought.
Keywords: Fair AI, Causal Models, Bayesian Networks, Evolutionary Computation, Multi-objective Optimisation, Machine Learning Ethics, GECCO 2025.