Artificial intelligence (AI), big data and decisions that affect human rights

The Commission seeks stakeholder views on how best to protect and promote human rights in AI-informed decision making. By ‘AI-informed decision making’, the Commission means decision making that relies wholly or in part on artificial intelligence (AI). Most such applications apply machine-learning algorithms to big datasets.

AI-informed decision making is increasingly used in everyday life, including in the delivery of government services, justice and policing, entertainment, employment and banking. This kind of decision making raises ethical, moral and legal questions about how we protect human rights. For example, what avenues do people have to question decisions made about them?

If an AI-informed decision is made without a human to detect and correct errors, incorrect decisions may harm the human rights of an individual or group.

It can be difficult to balance the positive outcomes of AI-informed decision making against its risks. Positive outcomes include using AI-informed decision making to improve the accuracy of diagnosis and treatment of disease. Risks already identified include decisions that are biased on the basis of a person’s gender, race, socio-economic status or other aspect of who they are. Such bias can limit people’s human rights and increase social inequality.

More detail on this topic, together with the related consultation questions, can be found in Chapter 6 of the Issues Paper.

Consultation questions on AI, big data and decisions that affect human rights

How well are human rights protected and promoted in AI-informed decision making? In particular, what are some practical examples of how AI-informed decision making can protect or threaten human rights?

How should Australian law protect human rights in respect of AI-informed decision making? In particular:

  1. What should be the overarching objectives of regulation in this area?
  2. What principles should be applied to achieve these objectives?
  3. Are there any gaps in how Australian law deals with this area? If so, what are they?
  4. What can we learn from how other countries are seeking to protect human rights in this area?

In addition to legislation, how should Australia protect human rights in AI-informed decision making? What role, if any, is there for:

  1. An organisation that takes a central role in promoting responsible innovation in AI-informed decision making?
  2. Self-regulatory or co-regulatory approaches?
  3. A ‘regulation by design’ approach?
