
Summary

Overview

New technologies, like artificial intelligence (AI), are reshaping our world. They bring significant opportunities, but also threats, especially to our human rights. The Report sets out a roadmap for Australia to seize those opportunities and address the threats.

Australia should innovate in ways that are consistent with our liberal democratic values. This means being consultative, inclusive and accountable. New technology should come with robust human rights safeguards.

The Australian Human Rights Commission is Australia’s national human rights institution. The Commission is independent and impartial. It aims to promote and protect human rights in Australia. 

The recommendations in the Report are informed by the Commission’s expertise, our research and extensive public consultation with the community, government, industry and academia.

The Report aims to foster a deeper understanding of the human rights implications for Australia of new and emerging technologies such as AI.

The Report is divided into four parts: 

  • Part A: a national strategy on new and emerging technologies
  • Part B: the use of artificial intelligence in decision making by government and the private sector
  • Part C: supporting effective regulation through the creation of an AI Safety Commissioner
  • Part D: accessible technology for people with disability.

National strategy on new and emerging technologies

What should be Australia’s overarching approach to new and emerging technologies?

International human rights law sets out globally accepted legal principles that uphold the dignity of all people. As a liberal democracy, Australia should place human rights at the centre of its approach to technology, with a view to promoting fairness, equality and accountability in the use and development of all new and emerging technologies.

This approach provides the foundation for the Commission’s recommendations for law and other reform throughout the Report.

The Department of the Prime Minister and Cabinet is leading a process, on behalf of the Australian Government, to create the Digital Economy Strategy (previously the Digital Australia Strategy). This presents an excellent opportunity to articulate the key, big-picture elements of how Australia will respond to the rise of new and emerging technologies, such as AI.

The Commission urges the Australian Government to embrace technological innovation that holds true to our liberal democratic values. This means putting human rights at the centre of how Australia approaches new and emerging technologies.

The Commission recommends that the Digital Economy Strategy promote responsible innovation and human rights through measures including regulation, investment and education. This will help foster a firm foundation of public trust in new and emerging technologies that are used in Australia.

AI-informed decision making

AI is changing the way important decisions are made by government and the private sector—with significant implications for how human rights are fulfilled. 

Part B of the Report focuses on ‘AI-informed decision making’—the use of AI to make decisions that have legal or similarly significant effects on individuals. The Commission makes recommendations to ensure human rights are protected where AI is used in decision making, and to provide effective accountability for such decisions.

Government use of AI: Chapter 5

Before the Australian Government introduces a new AI system to make administrative decisions, it should be required to undertake a human rights impact assessment (HRIA) (Recommendation 2). Where such an AI-informed decision-making system is adopted by the Government, the Commission recommends:

  • measures to improve transparency, including notification of the use of AI and a strengthened right to reasons or an explanation for AI-informed administrative decisions (Recommendations 3-7)
  • an independent merits review for all AI-informed administrative decisions (Recommendation 8).

Private sector use of AI: Chapter 6

Human rights and accountability are also vitally important when corporations and other non-government entities use AI to make decisions. The Commission recommends that: 

  • corporations and other non-government bodies be encouraged to undertake HRIAs before using AI-informed decision-making systems (Recommendation 9)
  • individuals be notified when AI is used in decisions that affect them (Recommendation 10).

Co- and self-regulation: Chapter 7

Co- and self-regulation should complement and support legal regulation to create better AI-informed decision-making systems, including through:

  • standards and certification for the use of AI in decision making 
  • ‘regulatory sandboxes’ that allow for experimentation and innovation (Recommendation 14)   
  • rules for government procurement of decision-making tools and systems (Recommendation 16).

AI Safety Commissioner

Supporting effective regulation

Government agencies and the private sector are often unclear on how to develop and use AI lawfully, ethically and in conformity with human rights. 

Part C of the Report recommends the creation of an AI Safety Commissioner to address this problem, by supporting regulators, policy makers, government and business in applying laws and other standards in respect of AI-informed decision making.

The challenge of AI

Regulators face challenges in fulfilling their functions as the bodies they regulate make important changes in how they operate. 

Legislators and policy makers are under unprecedented pressure to ensure Australia has the right law and policy settings to address the risks, and seize the opportunities, connected to the rise of AI.

The rapid rise of AI presents a once-in-a-generation challenge: to develop and apply regulation that supports positive innovation while addressing risks of harm.

Technical expertise and capacity building

The Commission recommends the creation of an AI Safety Commissioner to provide technical expertise and capacity building. As an independent statutory office that champions the public interest, including human rights, an AI Safety Commissioner could help build public trust in the safe use of AI by:

  • providing expert guidance to government agencies and the private sector on how to comply with laws and ethical standards regarding the development and use of AI
  • working collaboratively to build the capacity of regulators and the broader 'regulatory ecosystem' to adapt and respond to the rise of AI in their respective areas of responsibility
  • monitoring trends in the use of AI in Australia and overseas, and providing robust, independent and expert advice to legislators and policy makers with a view to addressing the risks and seizing the opportunities connected to the rise of AI (Recommendation 22).

The AI Safety Commissioner should be independent from government in its structure, operations and legislative mandate. It may be incorporated into an existing body or formed as a new, separate body. It should be required to have regard to the impact of the development and use of AI on vulnerable and marginalised people in Australia, and draw on diverse expertise and perspectives (Recommendation 23).

Biometric technology and privacy

There is strong community concern regarding some forms or uses of biometric technology, especially facial recognition. Where biometric technologies are used in high-stakes decision making, such as policing, errors can increase the risk of human rights infringements, including intrusions on individual privacy.

The Commission recommends law reform to provide better human rights and privacy protection regarding the development and use of these technologies (Recommendations 19, 21), and a moratorium on the use of biometric technologies in high-risk decision making until such protections are in place (Recommendation 20).

Algorithmic bias, unfairness and discrimination

‘Algorithmic bias’ arises where an AI-informed decision-making tool produces outputs that result in unfairness or discrimination. Often this is caused by forms of statistical bias. Algorithmic bias has arisen in AI-informed decision making in the criminal justice system, advertising, recruitment, healthcare, policing and elsewhere. 
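
To make the mechanism concrete, the short Python sketch below works through a hypothetical scenario (the groups, scores and approval rules are invented for illustration and are not drawn from the Report): a decision tool that learns from historically skewed approval records goes on to reproduce that skew for new, equally qualified applicants.

    # Hypothetical illustration of algorithmic bias. Groups "A" and "B", the
    # scores and the approval rules below are invented for this sketch.
    import random

    random.seed(0)

    def historically_approved(group, score):
        # Past human decisions applied a stricter bar to group B.
        return score > (55 if group == "A" else 70)

    # Historical records: both groups share the same underlying score distribution.
    history = [{"group": g, "score": random.gauss(60, 10)}
               for g in ("A", "B") for _ in range(1000)]
    for record in history:
        record["approved"] = historically_approved(record["group"], record["score"])

    # A naive "model": use the average score of past approvals in each group
    # as the cut-off for future applicants.
    cutoff = {}
    for g in ("A", "B"):
        approved = [r["score"] for r in history if r["group"] == g and r["approved"]]
        cutoff[g] = sum(approved) / len(approved)

    # New applicants, again drawn from identical score distributions.
    applicants = [{"group": g, "score": random.gauss(60, 10)}
                  for g in ("A", "B") for _ in range(1000)]

    for g in ("A", "B"):
        decisions = [a["score"] >= cutoff[g] for a in applicants if a["group"] == g]
        print(f"Group {g}: learned cut-off {cutoff[g]:.1f}, "
              f"approval rate {sum(decisions) / len(decisions):.2%}")

    # Equally qualified applicants in group B are approved far less often:
    # the tool has inherited the statistical bias embedded in its training data.

Comparing outcome rates across groups in this way is one simple check for the kind of unfairness and indirect discrimination described above.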

The Commission recommends greater guidance for government and non-government bodies in complying with anti-discrimination law in the context of AI-informed decision making (Recommendation 18).

Accessible technology

Good technology design can enable the participation of people with disability as never before—from the use of real-time live captioning to reliance on smart home assistants. On the other hand, poor design can cause significant harm, reducing the capacity of people with disability to participate in activities that are central to the enjoyment of their human rights, and their ability to live independently. 

Part D of the Report focuses on improving, for people with disability, the accessibility of goods, services and facilities that use Digital Communication Technology.

Technology is an enabling right: Chapter 11

The accessibility of new technology, and especially of Digital Communication Technology, is an enabling right for people with disability because it is critical to the enjoyment of a range of other civil, political, economic, social and cultural rights.

Improving functional accessibility: Chapter 12

The ‘functional accessibility’ of goods, services and facilities that rely on Digital Communication Technology refers to whether people can actually use them in practice. Problems in this area commonly arise where the user interface of a tech-enabled product is designed in a way that excludes people with one or more disabilities.

To improve functional accessibility, the Commission recommends: 

  • the creation of a new Disability Standard, focused on Digital Communication Technology (Recommendation 24)
  • new government procurement rules to require accessible goods, services and facilities (Recommendation 25)
  • measures to improve private sector use of accessible Digital Communication Technology (Recommendation 26).  

Broadcasting and audio-visual services: Chapter 13

The 21st century has seen a massive expansion in how content is delivered, including through subscription television, video and online platforms. Reform is needed to ensure that all media respect the right of people with disability to receive news, information and entertainment content in ways that they can understand. 

The Commission recommends reforms to facilitate:

  • increased audio description and captioning for broadcasting services, as well as video, film and online platforms (Recommendations 27-29)
  • reliable accessible information during emergency and important public announcements (Recommendation 30)
  • better monitoring of compliance with accessibility requirements and voluntary targets for the distribution of audio-visual content (Recommendation 31).  

Availability of new technology: Chapter 14

The availability of goods, services and facilities can be reduced where people with disability cannot afford them, or do not know about them. Exclusion can worsen inequality and disadvantage for people with disability. The Commission recommends: 

  • better provision of accessible information on how goods, services and facilities can be used by people with disability (Recommendation 32)
  • more affordable broadband internet, through a concessional rate for people with disability (Recommendation 33)
  • National Disability Insurance Scheme funding to improve access to Digital Communication Technology for people with disability (Recommendation 34). 

Design, education and capacity building: Chapter 15

Good design, education and capacity building can promote accessible Digital Communication Technology. 

The Commission recommends a 'human rights by design' approach, with the Australian Government adopting, promoting and modelling good practice, and with this approach incorporated into education, training, professional development and accreditation (Recommendations 35-38).


Human Rights and Technology Final Report (2021)

This Report is the culmination of a major project on new and emerging technologies, including artificial intelligence. The Report reflects the Commission’s extensive public consultation regarding the impact of new technologies on human rights. 

Summary of the Final Report (2021)

This document is a summary of the Australian Human Rights Commission’s Human Rights and Technology Final Report.

Our lives are being transformed as we discover new ways to connect, to understand and to be included. But we also face unprecedented threats to our basic rights. How we respond will shape the community we become.

EDWARD SANTOW, AUSTRALIAN HUMAN RIGHTS COMMISSIONER

Neither technology nor the disruption that comes with it is an exogenous force over which humans have no control. All of us are responsible for guiding its evolution.

PROF. KLAUS SCHWAB, EXECUTIVE CHAIRMAN, WORLD ECONOMIC FORUM

I also know we can still shape that world, and make it into a place that reflects our humanity, our cultures and our cares… It requires that we enter a conversation about the role of technology in Australian society, and about how we want to navigate being human in a digital world.

PROF. GENEVIEVE BELL, DIRECTOR, 3A INSTITUTE, ANU