Algorithmic bias


The problem of ‘algorithmic bias’ can arise where an AI-informed decision-making tool produces outputs that result in unfairness. Often this is caused by one or more forms of statistical bias. Algorithmic bias has arisen in AI-informed decision making in the criminal justice system, advertising, recruitment, healthcare, policing and elsewhere.

Algorithmic bias can sometimes have the effect of obscuring and entrenching unfairness or even unlawful discrimination in decision making. The Commission recommends greater guidance for government and non-government bodies in complying with anti-discrimination law in the context of AI-informed decision making.

Key messages

  • Algorithmic bias can result in unfair outcomes for individuals and communities.
  • Guidance is available on how to address the problem of algorithmic bias.


  • Recommendation 18: Anti-discrimination law

    The Australian Government should resource the Australian Human Rights Commission to produce guidelines for government and non-government bodies on complying with federal anti-discrimination laws in the use of AI-informed decision making. 


Technical Paper

Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias

The Commission published a Technical Paper on algorithmic bias in partnership with Gradient Institute, Consumer Policy Research Centre, CHOICE and CSIRO’s Data61. 

Using a synthetic data set, the Technical Paper examines how algorithmic bias can arise through a hypothetical simulation: an electricity retailer uses an AI-powered tool to decide how to offer its products to customers, and on what terms. 

The simulation identified five forms of algorithmic bias that may arise from problems in the data set, the use of AI itself, societal inequality, or a combination of these sources. 
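One way data-driven bias of this kind can emerge is through measurement bias: a recorded feature systematically understates the true attribute for one group, so a decision rule applied evenly to the recorded data still produces uneven outcomes. The following is a minimal illustrative sketch, not drawn from the Technical Paper itself; the groups, numbers and threshold are hypothetical.

```python
import random

random.seed(0)

def make_customer(group):
    # True ability to pay is identically distributed in both groups.
    ability = random.gauss(0.6, 0.15)
    noise = random.gauss(0, 0.05)
    # Hypothetical measurement bias: the recorded score systematically
    # understates ability for group "B" (e.g. a thinner credit history).
    score = ability + noise - (0.1 if group == "B" else 0.0)
    return group, ability, score

customers = [make_customer(g) for g in ("A", "B") for _ in range(5000)]

def approval_rate(group):
    # Naive decision rule: offer the standard plan if recorded score >= 0.6.
    members = [c for c in customers if c[0] == group]
    return sum(c[2] >= 0.6 for c in members) / len(members)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
print(f"approval rate A: {rate_a:.2f}  B: {rate_b:.2f}")
```

Even though true ability is distributed identically across the two groups, the rule approves group B far less often, because the bias sits in the measured feature rather than in the rule itself.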

The Paper investigates how algorithmic bias can arise in each scenario, examines the nature of any resulting bias, and provides guidance on how these problems might be addressed. Specifically, it shows how businesses can address these problems by acquiring more appropriate data, pre-processing the data, increasing the model's complexity, modifying the AI system and changing the target variable. 
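To make one of these mitigations concrete, the sketch below illustrates a simple pre-processing step: shifting each group's scores to a common mean before applying a decision threshold, so that a group-level offset inherited from historical data no longer drives the outcome. This is an illustrative example with hypothetical numbers, not the method used in the Technical Paper, and real deployments would need to validate any such adjustment against anti-discrimination law.

```python
import random
from statistics import mean

random.seed(1)

# Synthetic scores with a group-dependent offset, as might be inherited
# from biased historical data (hypothetical numbers for illustration).
data = [("A", random.gauss(0.60, 0.15)) for _ in range(5000)] + \
       [("B", random.gauss(0.50, 0.15)) for _ in range(5000)]

def rates(rows, threshold=0.6):
    # Approval rate per group under a fixed score threshold.
    by_group = {}
    for g, s in rows:
        by_group.setdefault(g, []).append(s >= threshold)
    return {g: sum(v) / len(v) for g, v in by_group.items()}

before = rates(data)

# Pre-processing: re-centre each group's scores on the overall mean,
# removing the group-level offset before the decision rule is applied.
overall = mean(s for _, s in data)
group_means = {g: mean(s for gg, s in data if gg == g) for g in ("A", "B")}
adjusted = [(g, s - group_means[g] + overall) for g, s in data]

after = rates(adjusted)
print("before:", before)
print("after: ", after)
```

After the adjustment, the gap in approval rates between the two groups shrinks to roughly zero, because the only systematic difference in the synthetic scores was the group-level offset the pre-processing removed.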

The Paper, the first of its kind in Australia, highlights the importance of multidisciplinary, multi-stakeholder cooperation to produce practical guidance for businesses wishing to use AI in a way that is responsible and complies with human rights.

Technical Paper Partners

Gradient Institute
Consumer Policy Research Centre
CHOICE
CSIRO Data61