AI-informed decision making

Overview

Government and the private sector are using AI to make decisions. Many of those AI-informed decisions affect legal or similarly significant rights—in areas as diverse as social security, recruitment and financial services. 

The use of AI can lead to better, more data-driven and efficient decisions. But it can also bring risks—including to human rights. AI-informed decisions should be lawful, transparent and subject to human oversight and review. Reform is needed to make such decision making more accountable and protect against harm. 

When the Australian Government makes an automated decision that affects you:

  • 87% support a right to appeal the decision
  • 88% want reasons for the decision
  • 85% want to know the decision was automated

Source: Essential Research Report for the Australian Human Rights Commission, 2020

Key messages

  • AI is disrupting economic, social and governmental systems.
  • We must address the risk that AI can cause harm, including to human rights.
  • Governments and businesses should only use AI that respects our human rights.

Recommendations

  • Recommendation 2: Human rights impact assessment

    The Australian Government should introduce legislation to require that a human rights impact assessment (HRIA) be undertaken before any department or agency uses an AI-informed decision-making system to make administrative decisions.

    An HRIA should include public consultation, focusing on those most likely to be affected. An HRIA should assess whether the proposed AI-informed decision-making system:

    1. complies with Australia’s international human rights law obligations
    2. will involve automating any discretionary element of administrative decisions, including by reference to the Commonwealth Ombudsman’s Automated decision-making better practice guide and other expert guidance
    3. provides for appropriate review of decisions by human decision makers
    4. is authorised and governed by legislation.
  • Recommendation 3: Notification of AI

    The Australian Government should introduce legislation to require that any affected individual is notified where artificial intelligence is materially used in making an administrative decision. That notification should include information regarding how an affected individual can challenge the decision.

  • Recommendation 4: Audit of Government AI

    The Australian Government should commission an audit of all current or proposed use of AI-informed decision making by or on behalf of Government agencies. The AI Safety Commissioner (see Recommendation 22), or another suitable expert body, should conduct this audit.

  • Recommendation 5: Right to reasons

    The Australian Government should not make administrative decisions, including through the use of automation or artificial intelligence, if the decision maker cannot generate reasons or a technical explanation for an affected person.

  • Recommendation 6: Right to reasons

    The Australian Government should make clear that, where a person has a legal entitlement to reasons for a decision, this entitlement exists regardless of how the decision is made. To this end, relevant legislation including s 25D of the Acts Interpretation Act 1901 (Cth) should be amended to provide that:

    1. for the avoidance of doubt, the term ‘decision’ includes decisions made using automation and other forms of artificial intelligence
    2. where a person has a right to reasons, the person is also entitled to a technical explanation of the decision, in a form that could be assessed and validated by a person with relevant technical expertise
    3. the decision maker must provide this technical explanation to the person within a reasonable time following any valid request.
  • Recommendation 7: Guidance on reasons

    The Australian Government should engage a suitable expert body, such as the AI Safety Commissioner (see Recommendation 22), to develop guidance for government and non-government bodies on how to generate reasons, including a technical explanation, for AI-informed decisions (an illustrative sketch of one such explanation follows this list of recommendations).

  • Recommendation 8: Right to review

    The Australian Government should introduce legislation to create or ensure a right to merits review, generally before an independent tribunal such as the Administrative Appeals Tribunal, for any AI-informed administrative decision.

  • Recommendation 9: Human rights impact assessment

    The Australian Government’s AI Ethics Principles should be used to encourage corporations and other non-government bodies to undertake a human rights impact assessment before using an AI-informed decision-making system. The Government should engage the AI Safety Commissioner (Recommendation 22) to issue guidance for the private sector on how to undertake human rights impact assessments.

  • Recommendation 10: Notification of AI

    The Australian Government should introduce legislation to require that any affected individual is notified when a corporation or other legal person materially uses AI in a decision-making process that affects the legal, or similarly significant, rights of the individual.

  • Recommendation 11: Responsibility for AI

    The Australian Government should introduce legislation that provides a rebuttable presumption that, where a corporation or other legal person is responsible for making a decision, that legal person is legally liable for the decision regardless of how it is made, including where the decision is automated or is made using artificial intelligence.

  • Recommendation 12: Right to reasons

    Centres of expertise, including the newly established Australian Research Council Centre of Excellence for Automated Decision-Making and Society, should prioritise research on the ‘explainability’ of AI-informed decision making (one such technique is sketched after this list of recommendations).

  • Recommendation 13: Right to review

    The Australian Government should introduce legislation to provide that where a court, or regulatory, oversight or dispute resolution body, has power to order the production of information or other material from a corporation or other legal person:

    1. for the avoidance of doubt, the person must comply with this order even where the person uses a form of technology, such as artificial intelligence, that makes it difficult to comply with the order 
    2. if the person fails to comply with the order because of the technology the person uses, the body may draw an adverse inference about the decision-making process or other related matters.
  • Recommendation 14: Co- and self-regulation

    The Australian Government should convene a multi-disciplinary taskforce on AI-informed decision making, led by an independent body, such as the AI Safety Commissioner (Recommendation 22). The taskforce should:

    1. promote the use of human rights by design in this area
    2. advise on the development and use of voluntary standards and certification schemes
    3. advise on the development of one or more regulatory sandboxes focused on upholding human rights in the use of AI-informed decision making.

    The taskforce should consult widely in the public and private sectors, including with those whose human rights are likely to be significantly affected by AI-informed decision making.

  • Recommendation 15: Co- and self-regulation

    The Australian Government should appoint an independent body, such as the AI Safety Commissioner (Recommendation 22), to develop a tool to assist private sector bodies to undertake human rights impact assessments (HRIAs) in developing AI-informed decision-making systems. The Australian Government should maintain a public register of completed HRIAs.

  • Recommendation 16: Co- and self-regulation

    The Australian Government should adopt a human rights approach to procurement of products and services that use artificial intelligence. The Department of Finance, in consultation with the Digital Transformation Agency and other key decision makers and stakeholders, should amend current procurement law, policy and guidance to require that human rights are protected in the design and development of any AI-informed decision-making tool procured by the Australian Government.

  • Recommendation 17: Co- and self-regulation

    The Australian Government should engage an expert body, such as the AI Safety Commissioner (Recommendation 22), to issue guidance to the private sector on good practice regarding human review, oversight and monitoring of AI-informed decision-making systems. This body should also advise the Government on ways to incentivise such good practice through the use of voluntary standards, certification schemes and government procurement rules.
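
To make the notion of a ‘technical explanation’ (Recommendations 5–7) concrete, the following Python sketch is illustrative only: the scoring model, weights, feature names and threshold are hypothetical and do not describe any real government system. It shows how a decision maker using a simple linear scoring model could generate both a plain-language reason and a per-feature breakdown that a person with relevant technical expertise could assess and validate.

    # Hypothetical sketch: generating reasons for a decision made by a
    # simple linear scoring model. The weights, feature names and
    # threshold are illustrative only.
    WEIGHTS = {"declared_income": -0.8, "reported_debts": 1.5, "missed_reports": 2.0}
    THRESHOLD = 1.0  # scores above this produce an adverse decision

    def explain_decision(applicant: dict) -> dict:
        """Return the decision plus a validatable technical explanation."""
        # Each feature's contribution to the overall score.
        contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
        score = sum(contributions.values())
        decision = "adverse" if score > THRESHOLD else "favourable"
        # Rank features by how strongly they pushed the score upward,
        # so a reviewer can see which inputs drove the outcome.
        ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
        return {
            "decision": decision,
            "score": round(score, 3),
            "threshold": THRESHOLD,
            "contributions": ranked,  # the technical explanation
            "plain_reason": (
                f"The decision was '{decision}' because the overall score "
                f"{score:.2f} was {'above' if score > THRESHOLD else 'not above'} "
                f"the threshold {THRESHOLD}; the largest contributing factor "
                f"was '{ranked[0][0]}'."
            ),
        }

    print(explain_decision(
        {"declared_income": 0.4, "reported_debts": 0.6, "missed_reports": 1}
    ))

For more complex models, producing an equivalent breakdown requires model-specific attribution or surrogate techniques; developing and validating those techniques is part of the explainability research agenda in Recommendation 12.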
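
One such explainability technique is the counterfactual explanation: the smallest change to a person’s inputs that would have altered the decision. The sketch below reuses the hypothetical scoring model above and is likewise illustrative only.

    # Hypothetical sketch: find the smallest single-feature change that
    # would flip an adverse decision under the illustrative model above.
    WEIGHTS = {"declared_income": -0.8, "reported_debts": 1.5, "missed_reports": 2.0}
    THRESHOLD = 1.0

    def counterfactual(applicant: dict) -> str:
        score = sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
        if score <= THRESHOLD:
            return "The decision was already favourable."
        best = None
        for feature, weight in WEIGHTS.items():
            if weight == 0:
                continue  # this feature cannot move the score
            # Value of this feature, holding the others fixed, at which
            # the score would fall back to the threshold.
            needed = applicant[feature] - (score - THRESHOLD) / weight
            change = abs(needed - applicant[feature])
            if best is None or change < best[2]:
                best = (feature, needed, change)
        feature, needed, _ = best
        return (f"The decision would have been favourable if '{feature}' "
                f"had been {needed:.2f} instead of {applicant[feature]}.")

    print(counterfactual(
        {"declared_income": 0.4, "reported_debts": 0.6, "missed_reports": 1}
    ))

Counterfactuals of this kind are one way to give an affected person actionable reasons without disclosing a full model, which is one reason they feature prominently in explainability research.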

AI systems are designed to discriminate, to amplify hierarchies, and to encode narrow classifications. When applied in contexts such as criminal justice, education, and hiring, they can reproduce and intensify existing structural inequalities.

PROF. KATE CRAWFORD, AUTHOR OF ATLAS OF AI (2021)

Predictive analytics, algorithms and other forms of artificial intelligence are highly likely to reproduce and exacerbate biases reflected in existing data and policies.

In-built forms of discrimination can fatally undermine the right to social protection for key groups and individuals.

There needs to be a concerted effort to identify and counteract such biases in designing the digital welfare state.

PHILIP ALSTON, UN SPECIAL RAPPORTEUR ON EXTREME POVERTY AND HUMAN RIGHTS