Consultation now open: Australian Human Rights Commission and World Economic Forum White Paper on Artificial Intelligence: Governance and Leadership
The Australian Human Rights Commission is conducting a project on Human Rights and New Technology (the Project). As part of the Project, the Commission and the World Economic Forum are working together to explore models of governance and leadership on artificial intelligence (AI) in Australia.
This White Paper has been produced to support a consultation process that aims to identify how Australia can simultaneously foster innovation and protect human rights amid unprecedented growth in new technologies such as AI. The White Paper complements the broader issues raised in the Commission’s Human Rights and Technology Issues Paper. The consultation conducted on the Issues Paper and White Paper will inform the Commission’s proposals for reform, to be released in mid-2019.
The White Paper asks whether Australia needs an organisation to take a central role in promoting responsible innovation in AI and related technology and, if so, what that organisation could look like.
Making a written submission
The Australian Human Rights Commission is calling for submissions responding to the questions in the White Paper on Artificial Intelligence: Governance and Leadership.
Submissions may be formal or informal, and can address some or all of the consultation questions.
The deadline for receiving submissions has been extended to 5pm on 18 March 2019.
Submissions received after this date may not be considered by the Commission.
How will my submission be used?
The information collected through the consultation process will be used for the purposes of the Project and may be drawn upon, quoted or referred to in any Project documentation.
The Commission also intends to publish submissions on the Project website unless you state that you do not wish it to do so. Publication is at the Commission’s discretion. We may decide not to publish submissions, or to remove information in submissions, such as personal information, prior to publication.
If you would like your submission to be confidential or anonymous, please clearly state this when you make your submission. We will not publish confidential submissions on our website. We may use material from confidential submissions, such as quotes, case studies or other references in the report produced as a result of the Project. If we do this, we will remove any personal information so that you or other people referred to in your submission cannot be identified.
We will not release confidential submissions to anyone without your consent unless required by law. In the case of a request under the Freedom of Information Act 1982 (Cth) there are likely to be relevant exemptions to production (including for material obtained in confidence and personal information). We would consult you about any FOI request before any decision was made about releasing information.
The Commission’s submission policy provides further information on the use, publication and access to submissions. The submission policy is located at: https://www.humanrights.gov.au/submission-policy.
Submissions can be sent to the Commission:
- by email to firstname.lastname@example.org
- by post to: Human Rights and Technology Project, Australian Human Rights Commission, GPO Box 5218 Sydney NSW 2001
Please download and complete a submission form and send it to us with your submission.
What if I need to make a submission in a different way?
If you would like to make a submission in a different way please contact the Project team at email@example.com, Ph (02) 9284 9600 or TTY 1800 620 241.
Human Rights and Technology Project
The Australian Human Rights Commissioner is leading a project focusing on human rights and technology. More information about the Human Rights and Technology Project is available on the Project website.
Easy Read information on the focus areas of this project is also available.
The Australian Human Rights Commission has launched a major three-year project on the intersection of human rights and new technologies. Focusing on responsible innovation, the project will explore the opportunities and challenges that new technology presents for our human rights. The Commission is interested in hearing the views of members of the public, business, government and academia on the human rights impacts of new technologies.
As new technology reshapes our world, we should pursue innovation that reflects our national values: equality, fairness and liberal democracy. We must also address the risk that new technology could worsen inequality and disadvantage.
The impacts of new technologies are not experienced equally by all parts of the Australian community. New technologies may advance, or limit, one or more human rights.
Some of the human rights that may be engaged by new technology include:
- The right to equality and non-discrimination
- Freedom of expression
- Right to benefit from scientific progress
- Freedom from violence
- Accessibility for people with disability
- Right to privacy
- Right to education
- Access to information and safety for children
- Right to a fair trial and procedural fairness
The Australian Government has committed to respect, protect and fulfil these human rights, which are outlined in international human rights treaties.
The Australian Human Rights Commission Issues Paper outlines a number of human rights that are affected by different types of new technology including artificial intelligence-informed decision making and disability accessibility. It includes a number of questions about human rights and new technology.
Human rights and new technology consultation questions
What types of technology raise particular human rights concerns? Which human rights are particularly implicated?
Noting that particular groups within the Australian community can experience new technology differently, what are the key issues regarding new technologies for these groups of people (such as children and young people; older people; women and girls; LGBTI people; people of culturally and linguistically diverse backgrounds; Aboriginal and Torres Strait Islander peoples)?
The Commission is interested in hearing the views of members of the public, business, government and academia on the most appropriate frameworks for regulating new technology to protect and promote human rights. ‘Regulation’ refers to processes that aim to moderate individual and organisational behaviour, setting out the rules that everyone must abide by.
Regulation can come in different forms. For example, it may be a law, a set of standards for industry to follow, or a set of principles which guide the development and use of technology.
Regulating rapidly developing new technology can be challenging. Whatever form of regulation is adopted, it has to allow for innovation and new ideas, and be consistent with Australia’s commitment to human rights.
Many business and public leaders globally agree that regulation is important for new technologies. Facebook CEO Mark Zuckerberg has noted that the issue is not whether there should be regulation, but what the right regulatory framework is. Internationally, many countries are introducing regulatory initiatives, such as the European Union’s new General Data Protection Regulation for data and privacy.
More details on regulation and new technology can be found in Chapter 5 of the Issues Paper which includes consultation questions about responsible innovation.
Artificial intelligence (AI), big data and decisions that affect human rights
The Commission seeks stakeholder views on how best to protect and promote human rights in AI-informed decision making. By ‘AI-informed decision making’, the Commission refers to decision making which relies wholly or in part on artificial intelligence (AI). Most such AI applications apply machine-learning algorithms to big datasets.
AI-informed decision making is being used more and more in everyday life. This includes the delivery of government services, justice and policing, entertainment, employment and banking. This kind of decision making raises ethical, moral and legal questions about how we protect human rights. For example, what options are provided for people to ask questions about decisions that are made about them?
If an AI-informed decision is made without human oversight to detect and correct errors, incorrect decisions may be made which harm the human rights of an individual or group.
It can be difficult to balance the positive outcomes of AI-informed decision making with its risks. Positive outcomes include using AI-informed decision making to improve the accuracy of diagnosis and treatment of disease. Risks already identified include biased decisions based on a person’s gender, race, socio-economic status or other aspects of who they are. Such bias can limit people’s human rights and increase social inequality.
More details on artificial intelligence (AI), big data and decisions that affect human rights can be found in Chapter 6 of the Issues Paper, which includes consultation questions on this topic.
New technology is becoming part of almost every aspect of life. It is central to our experience of daily activities including shopping, transport and accessing government services. The Commission seeks stakeholder views on how best to protect and promote the human rights of people with disability by promoting the accessibility and usability of new technology.
It is important that the whole community can access and use technology. This principle is referred to as ‘accessibility’. Just as everyone should be able to access our education system, public transport and buildings, technology also should be accessible to all.
Accessibility focuses on the experience of the person using the technology and minimising barriers to using that piece of technology. For example, a person with a vision impairment might use voice recognition, a mouse, touch screen or keyboard to input information into a device. To receive information from the device, they may use text-to-speech (TTS), magnification or Braille.
Using technology is an important way that people can participate in the community, education and employment, and political life. Technology must be accessible to everyone, regardless of their disability, race, religion, gender or other characteristics. Universal design allows technology to be used by all people in the community, as much as possible, without the need for other special features or assistive technology.
Accessible and assistive technologies for people with disability
Developers are creating technologies that improve participation and independence of people with disability. These developments can help people with disability enjoy their human rights protected by the Convention on the Rights of Persons with Disabilities. There are principles in the Convention that depend on technology being accessible for people with disability.
These principles include:
- respect for inherent dignity, individual autonomy including the freedom to make one’s own choices, and independence of persons
- full and effective participation and inclusion in society
- equality of opportunity.
Examples of innovations that protect and promote the human rights of people with disability include:
- An intelligent home assistant that recognises a person’s voice and carries out household and daily tasks on request.
- An application that allows a person who is blind or who has low vision to point their smartphone camera at an everyday object. The application then describes the object or the person. It can help with, for example, identifying a product in a supermarket.
More details on accessible technology can be found in Chapter 7 of the Issues Paper which includes a number of questions about accessible technology.
An algorithm is a step-by-step procedure for solving a problem. It is used for calculation, data processing and automated reasoning. An algorithm tells a computer what its author wants it to do; the computer then implements it, following each step, to accomplish the goal.
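As a purely illustrative sketch (this example does not appear in the Issues Paper), the step-by-step nature of an algorithm can be shown in a few lines of code:

```python
# A simple algorithm: find the largest number in a list.
# Each step is explicit, and the computer follows the steps in order.
def find_largest(numbers):
    largest = numbers[0]          # Step 1: start with the first number
    for n in numbers[1:]:         # Step 2: examine each remaining number
        if n > largest:           # Step 3: keep whichever is larger
            largest = n
    return largest                # Step 4: report the result

print(find_largest([3, 41, 7, 19]))  # prints 41
```

However trivial, this captures the definition above: a fixed sequence of steps that the computer carries out exactly as written.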
Artificial Intelligence (AI) is the theory and development of computer systems that can do tasks that normally require human intelligence. This includes decision making, visual perception, speech recognition, learning and problem solving. Current AI systems are capable of specific tasks such as internet searches, translating text or driving a car.
AI-informed decision making is made possible where AI, including through machine learning, applies and/or adjusts algorithms to big datasets. It can be used in areas such as risk assessment in policing.
Artificial General Intelligence (AGI) is an emerging area of AI research and refers to the development of AI systems that would have cognitive function similar to humans in their ability to learn and think. This means they would be able to accomplish more sophisticated cognitive tasks than current AI systems.
Assistive technology is the overarching term for technology that is specifically designed to support a person with disability to perform a task. An example of an assistive technology is a screen reader, which can assist a person who is blind, or who has a vision impairment, to read the content of a website. Correctly implemented universal design supports assistive technology when required.
Big data refers to diverse sets of information produced in large volumes and processed at high speed using AI. The data collected is analysed to identify trends and make predictions. AI can automatically process and analyse millions of datasets quickly and efficiently and derive meaning from them.
Bitcoin is a system of open source peer-to-peer software for the creation and exchange of a type of digital currency that can be encrypted. This is known as a cryptocurrency. Bitcoin is the first such system to be fully functional. Bitcoin operates through a distributed ledger such as Blockchain.
Blockchain is the foundation of cryptocurrencies like Bitcoin. A blockchain is an ever-growing chain of data or information blocks that is shared and can be updated continuously and simultaneously. These blocks can be stored across the internet, cannot be controlled by a single entity and have no single point of failure.
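The chaining idea can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the field names and structure are invented for the example and do not reflect any real blockchain implementation.

```python
import hashlib
import json

def make_block(data, previous_hash):
    """Create a block whose identity depends on its own data AND its predecessor."""
    block = {"data": data, "previous_hash": previous_hash}
    # The block's hash is computed over its contents, including the
    # previous block's hash, which is what links the blocks into a chain.
    serialised = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialised).hexdigest()
    return block

genesis = make_block("first entry", previous_hash="0")
second = make_block("second entry", previous_hash=genesis["hash"])

# Because each block embeds the previous block's hash, altering any
# earlier block would change every hash that follows it, making
# tampering detectable by anyone holding a copy of the chain.
assert second["previous_hash"] == genesis["hash"]
```

This dependence of each block on its predecessor, combined with copies being shared across many computers, is what gives a blockchain its resistance to control or alteration by any single entity.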
A Chatbot is a computer program that simulates human conversation through voice commands, text or both. For example, in banking, a limited bot may be used to ask the caller questions to understand their needs. However, such a Chatbot cannot understand a request if the customer responds with an unexpected answer.
Data sovereignty is the concept that information which has been converted and stored is subject to the laws of the country in which it is located. Within the context of Indigenous rights, data sovereignty recognises the rights of Indigenous peoples to govern the collection, ownership and application of their data.
Digital economy refers to economic and social activities that are supported by information and communications technologies. This includes purchasing goods and services, banking and accessing education or entertainment using the internet and connected devices like smart phones. The digital economy impacts all industries and business types and influences the way we interact with each other every day.
The fourth industrial revolution refers to the fusion of technologies that blur the lines between the physical, digital and biological spheres. This includes emerging technologies such as robotics, Artificial Intelligence, Blockchain, nanotechnology, the Internet of Things and autonomous vehicles. The earlier phases of the industrial revolution were: first, mechanised production using water and steam; second, mass production using electricity; and third, automated production using electronics and information technology.
Machine learning is an application of AI that enables computers to learn and improve from experience automatically, without being explicitly programmed by a person. The computer collects and uses data to learn for itself. For example, an email spam filter can collect data on known spam terminology and unknown email addresses, merge that information and make predictions to identify and filter sources of spam.
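The spam-filter idea can be sketched as a toy example. This is an invented illustration, far simpler than any real filter: the program is never given rules for what spam looks like; it derives its behaviour from labelled examples.

```python
# A toy "spam filter" that learns from labelled examples rather than
# from hand-written rules. The training messages are invented for
# illustration only.
def train(examples):
    """Count how often each word appears in spam vs non-spam messages."""
    word_counts = {}
    for text, is_spam_example in examples:
        for word in text.lower().split():
            counts = word_counts.setdefault(word, [0, 0])
            counts[0 if is_spam_example else 1] += 1
    return word_counts

def is_spam(text, word_counts):
    """Flag a message whose words appeared more often in spam than not."""
    score = 0
    for word in text.lower().split():
        spam_count, ham_count = word_counts.get(word, (0, 0))
        score += spam_count - ham_count
    return score > 0

model = train([
    ("win free prize now", True),
    ("claim your free prize", True),
    ("meeting agenda for monday", False),
    ("lunch on monday", False),
])
print(is_spam("free prize waiting", model))  # prints True
print(is_spam("monday meeting", model))      # prints False
```

The key point is that the filter's behaviour comes from the data it was trained on, not from rules a programmer wrote; this is also why biased or unrepresentative training data can produce biased decisions, as discussed earlier in relation to AI-informed decision making.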
The Internet of Things (IoT) refers to the ability of any device with an on/off switch to connect to the internet and send and receive data. For example, on a personal level, a coffee machine could begin brewing when an alarm goes off; on a larger scale, ‘smart cities’ could use connected devices to collect and analyse data to reduce waste and congestion.
Universal design refers to an accessible and inclusive approach to designing products and services, focusing on ensuring that people with disability, as well as others with specialised needs, are able to use those products and services. Applying universal design to technology means designing products, environments, programmes and services so they can be used by all people, to the greatest extent possible, without the need for specialised or adapted features. Correctly implemented universal design supports assistive technology when required.
A White Paper on Artificial Intelligence: Governance and Leadership, co-authored with the World Economic Forum, was published in early 2019. Consultations are now open and the results will inform the Discussion Paper.
A Discussion Paper will be published in mid-2019. The Australian Human Rights Commission will be conducting consultations on the Discussion Paper proposals.
A Final Report and recommendations will be published by early 2020. Following publication, the Commission will focus on implementation of the Final Report.
For more information about participating in the consultation process please contact us at firstname.lastname@example.org or Ph (02) 9284 9600 or TTY 1800 620 241.