Human Rights and Technology Project Consultation
The Australian Human Rights Commission is conducting consultations on human rights and new technology.
Written submissions to respond to the Issues Paper have now closed. For more information about participating in the consultation process please contact us at email@example.com or Ph (02) 9284 9600 or TTY 1800 620 241.
Easy to read information on the focus areas of this project is also available.
The Australian Human Rights Commission has launched a major three-year project on the intersection of human rights and new technologies. Focusing on responsible innovation, the project will explore the challenges and opportunities that technology poses to our human rights. The Commission is interested in hearing the views of members of the public, business, government and academia on the human rights impacts of new technologies.
As new technology reshapes our world, we should pursue innovation that reflects our national values: equality, fairness and liberal democracy. We must also address the challenge that new technology could worsen inequality and disadvantage.
The impacts of new technologies are not experienced equally by all parts of the Australian community. New technologies may advance, or limit, one or more human rights.
Some of the human rights that may be engaged by new technology include:
- The right to equality and non-discrimination
- Freedom of expression
- Right to benefit from scientific progress
- Freedom from violence
- Accessibility for people with disability
- Right to privacy
- Right to education
- Access to information and safety for children
- Right to a fair trial and procedural fairness
The Australian Government has committed to respect, protect and fulfil these human rights, which are outlined in international human rights treaties.
The Australian Human Rights Commission Issues Paper outlines a number of human rights that are affected by different types of new technology including artificial intelligence-informed decision making and disability accessibility. It includes a number of questions about human rights and new technology.
Human rights and new technology consultation questions
What types of technology raise particular human rights concerns? Which human rights are particularly implicated?
Noting that particular groups within the Australian community can experience new technology differently, what are the key issues regarding new technologies for these groups of people (such as children and young people; older people; women and girls; LGBTI people; people of culturally and linguistically diverse backgrounds; Aboriginal and Torres Strait Islander peoples)?
The Commission is interested in hearing the views of members of the public, business, government and academia on the most appropriate frameworks for regulating new technology to protect and promote human rights. ‘Regulation’ refers to processes that aim to moderate individual and organisational behaviour, setting out the rules that everyone must abide by.
Regulation can come in different forms. For example, it may be a law, a set of standards for industry to follow, or a set of principles which guide the development and use of technology.
Regulating rapidly developing new technology can be challenging. Whatever form of regulation is adopted, it has to allow for innovation and new ideas, and be consistent with Australia’s commitment to human rights.
Many business and public leaders globally agree that regulation is important for new technologies. Facebook CEO Mark Zuckerberg has noted that the issue is what the right regulatory framework is, rather than whether or not there should be regulation. Internationally, many countries are introducing regulatory initiatives, such as the European Union's new regulation for data and privacy.
More details on regulation and new technology can be found in Chapter 5 of the Issues Paper which includes consultation questions about responsible innovation.
Responsible innovation consultation questions
How should Australian law protect human rights in the development, use and application of new technologies? In particular:
- What gaps, if any, are there in this area of Australian law?
- What can we learn about the need for regulating new technologies, and the options for doing so, from international human rights law and the experiences of other countries? What principles should guide regulation in this area?
- In addition to legislation, how should the Australian Government, the private sector and others protect and promote human rights in the development of new technology?
Artificial intelligence (AI), big data and decisions that affect human rights
The Commission seeks stakeholder views on how best to protect and promote human rights in AI-informed decision making. By ‘AI-informed decision making’, the Commission refers to decision making which relies wholly or in part on artificial intelligence (AI). Most of these AI applications involve the application of machine-learning algorithms to big datasets.
AI-informed decision making is being used more and more in everyday life. This includes the delivery of government services, justice and policing, entertainment, employment and banking. This kind of decision making raises ethical, moral and legal questions about how we protect human rights. For example, what options are provided for people to ask questions about decisions that are made about them?
If an AI-informed decision is made with no human involvement to detect or correct errors, incorrect decisions may be made which harm the human rights of an individual or group.
It can be difficult to balance the positive outcomes from AI-informed decision making with the risks. Positive outcomes include using AI-informed decision making to improve the accuracy of diagnosis and treatment of disease. The kinds of risks that have already been identified include the risk of decisions biased against someone's gender, race, socio-economic status or other aspect of who they are. This can mean that people have their human rights limited and that social inequality increases.
More details on this topic can be found in Chapter 6 of the Issues Paper, which includes consultation questions about artificial intelligence (AI), big data and decisions that affect human rights.
AI, big data and decisions that affect human rights consultation questions
How well are human rights protected and promoted in AI-informed decision making? In particular, what are some practical examples of how AI-informed decision making can protect or threaten human rights?
How should Australian law protect human rights in respect of AI-informed decision making? In particular:
- What should be the overarching objectives of regulation in this area?
- What principles should be applied to achieve these objectives?
- Are there any gaps in how Australian law deals with this area? If so, what are they?
- What can we learn from how other countries are seeking to protect human rights in this area?
In addition to legislation, how should Australia protect human rights in AI-informed decision making? What role, if any, is there for:
- An organisation that takes a central role in promoting responsible innovation in AI-informed decision making?
- Self-regulatory or co-regulatory approaches?
- A ‘regulation by design’ approach?
New technology is becoming part of almost every aspect of life. It is central to our experience of daily activities including shopping, transport and accessing government services. The Commission seeks stakeholder views on how best to protect and promote the human rights of people with disability by promoting the accessibility and usability of new technology.
It is important that the whole community can access and use technology. This principle is referred to as ‘accessibility’. Just as everyone should be able to access our education system, public transport and buildings, technology also should be accessible to all.
Accessibility focuses on the experience of the person using the technology and minimising barriers to using that piece of technology. For example, a person with a vision impairment might use voice recognition, a mouse, touch screen or keyboard to input information into a device. To receive information from the device, they may use text-to-speech (TTS), magnification or Braille.
Using technology is an important way that people can participate in the community, education and employment, and political life. Technology must be accessible to everyone, regardless of their disability, race, religion, gender or other characteristics. Universal design allows technology to be used by all people in the community, as much as possible, without the need for other special features or assistive technology.
Accessible and assistive technologies for people with disability
Developers are creating technologies that improve participation and independence of people with disability. These developments can help people with disability enjoy their human rights protected by the Convention on the Rights of Persons with Disabilities. There are principles in the Convention that depend on technology being accessible for people with disability.
These principles include:
- respect for inherent dignity, individual autonomy including the freedom to make one’s own choices, and independence of persons
- full and effective participation and inclusion in society
- equality of opportunity.
Examples of innovations that protect and promote the human rights of people with disability include:
- An intelligent home assistant that carries out household and daily tasks by recognising the voice speaking to it and completing the requested task.
- An application that allows a person who is blind or who has low vision to hold their smartphone camera to an everyday object. The application then describes the object or the person. It can help with, for example, identifying a product in a supermarket.
More details on accessible technology can be found in Chapter 7 of the Issues Paper which includes a number of questions about accessible technology.
Accessible technology consultation questions
What opportunities and challenges currently exist for people with disability accessing technology?
What should be the Australian Government’s strategy in promoting accessible and innovative technology for people with disability? In particular:
- What, if any, changes to Australian law are needed to ensure new technology is accessible?
- What, if any, policy and other changes are needed in Australia to promote accessibility for new technology?
How can the private sector be encouraged or incentivised to develop and use accessible and inclusive technology, for example, through the use of universal design?
Glossary
An algorithm is a step-by-step procedure for solving a problem. It is used for calculation, data processing and automated reasoning. An algorithm tells a computer what its author wants it to do; the computer then implements it, following each step, to accomplish the goal.
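As an illustration only (this example does not appear in the Issues Paper), the step-by-step character of an algorithm can be sketched in Python with a simple task such as finding the largest number in a list:

```python
# A hypothetical illustration of an algorithm: explicit, ordered steps
# that a computer follows to accomplish a goal.
def find_largest(numbers):
    """Return the largest value in a non-empty list of numbers."""
    largest = numbers[0]       # Step 1: assume the first number is largest
    for n in numbers[1:]:      # Step 2: examine each remaining number
        if n > largest:        # Step 3: compare it with the current largest
            largest = n        # Step 4: if it is bigger, remember it instead
    return largest             # Step 5: report the result

print(find_largest([3, 41, 7, 29]))  # prints 41
```

The same steps could be followed by hand with pen and paper; writing them as code simply lets the computer carry them out.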
Artificial Intelligence (AI) is the theory and development of computer systems that can do tasks that normally require human intelligence. This includes decision making, visual perception, speech recognition, learning and problem solving. Current AI systems are capable of specific tasks such as internet searches, translating text or driving a car.
AI-informed decision making is made possible where AI, including machine learning, applies or adjusts algorithms using big datasets. It can be used in areas such as risk assessment in policing.
Artificial General Intelligence (AGI) is an emerging area of AI research that refers to the development of AI systems with cognitive function similar to humans in their ability to learn and think. This means they would be able to accomplish more sophisticated cognitive tasks than current AI systems.
Assistive technology is the overarching term for technology that is specifically designed to support a person with a disability perform a task. An example of an assistive technology is a screen reader, which can assist a person who is blind, or who has a vision impairment, to read the content of a website. Correctly implemented universal design supports assistive technology when required.
Big data refers to the diverse sets of information produced in large volumes and processed at high speeds using AI. The data collected is analysed to understand trends and make predictions. AI can automatically process and analyse millions of datasets quickly and efficiently and give them meaning.
Bitcoin is a system of open source peer-to-peer software for the creation and exchange of a type of digital currency that can be encrypted. This is known as a cryptocurrency. Bitcoin was the first such system to be fully functional. Bitcoin operates through a distributed ledger such as a Blockchain.
Blockchain is the foundation of cryptocurrencies like Bitcoin. A blockchain is an ever-growing set of data or information blocks that is shared across many computers and updated continuously. These blocks can be stored across the internet, cannot be controlled by a single entity and have no single point of failure.
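As an illustration only (a much-simplified sketch, not part of the Issues Paper and not a real cryptocurrency implementation), the linking of blocks can be shown in Python: each block records the hash of the block before it, so altering any block invalidates the chain.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (excluding its own stored hash)."""
    content = {k: block[k] for k in ("index", "data", "prev_hash")}
    return hashlib.sha256(
        json.dumps(content, sort_keys=True).encode()
    ).hexdigest()

def make_block(index, data, prev_hash):
    """Create a block that stores the hash of the previous block."""
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    """Every block's stored hash must match its contents, and every
    block must reference the hash of the block before it."""
    return all(
        chain[i]["hash"] == block_hash(chain[i])
        and (i == 0 or chain[i]["prev_hash"] == chain[i - 1]["hash"])
        for i in range(len(chain))
    )

# Build a tiny three-block chain.
chain = [make_block(0, "genesis", "0" * 64)]
chain.append(make_block(1, "alice pays bob", chain[-1]["hash"]))
chain.append(make_block(2, "bob pays carol", chain[-1]["hash"]))

print(chain_is_valid(chain))   # True: the chain is intact
chain[1]["data"] = "tampered"  # altering one block...
print(chain_is_valid(chain))   # False: ...invalidates the whole chain
```

Because each block commits to the one before it, no single participant can quietly rewrite history, which is why a blockchain has no single point of failure or control.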
A chatbot is a computer program that simulates human conversation through voice commands, text or both. For example, in banking, a limited chatbot may be used to ask the caller questions to understand their needs. However, such a chatbot may not understand a request if the customer responds in an unexpected way.
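As an illustration only (a hypothetical sketch, not from the Issues Paper), a limited rule-based chatbot of this kind can be expressed in a few lines of Python. The keywords and canned answers below are invented for the example:

```python
# A hypothetical, limited rule-based chatbot: it matches known keywords
# and cannot understand requests outside its fixed rules.
RULES = {
    "balance": "Your account balance is available in the mobile app.",
    "card": "To report a lost card, please say 'lost card'.",
    "hours": "Our branches are open 9:30am to 4pm, Monday to Friday.",
}

def reply(message):
    """Return a canned answer if a known keyword appears, else give up."""
    for keyword, answer in RULES.items():
        if keyword in message.lower():
            return answer
    # Anything outside the fixed rules falls through to a human.
    return "Sorry, I didn't understand. Let me transfer you to a person."

print(reply("What are your opening hours?"))  # matched: branch hours
print(reply("My dog ate my chequebook"))      # unmatched: fallback reply
```

The fallback line illustrates the limitation described above: the program has no understanding of language, only a fixed list of patterns.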
Data sovereignty is the concept that information which has been converted and stored is subject to the laws of the country in which it is located. Within the context of Indigenous rights, data sovereignty recognises the rights of Indigenous peoples to govern the collection, ownership and application of their data.
Digital economy refers to economic and social activities that are supported by information and communications technologies. This includes purchasing goods and services, banking and accessing education or entertainment using the internet and connected devices like smart phones. The digital economy impacts all industries and business types and influences the way we interact with each other every day.
The fourth industrial revolution refers to the fusion of technologies that blur the lines between the physical, digital and biological spheres. This includes emerging technologies such as robotics, Artificial Intelligence, Blockchain, nanotechnology, the Internet of Things, and autonomous vehicles. Earlier phases of the industrial revolution are: phase one, mechanised production with water and steam; phase two, mass production with electricity; and phase three, automated production with electronics and information technology.
Machine learning is an application of AI that enables computers to automatically learn and improve from experience without being explicitly programmed by a person. The computer does this by collecting and using data to learn for itself. For example, an email spam filter collects data on known spam terminology and unknown email addresses, merges that information and makes predictions to identify and filter sources of spam.
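As an illustration only (a hypothetical, much-simplified sketch, not from the Issues Paper), the spam-filter example can be shown in Python. Rather than being given explicit rules, the program counts which words appear in example spam and legitimate messages, and uses those counts to classify new mail. The example messages are invented:

```python
from collections import Counter

# Invented example data: labelled spam and legitimate ("ham") messages.
spam_examples = ["win free prize now", "free money click now"]
ham_examples = ["meeting agenda attached", "lunch on friday"]

# "Learning": count how often each word appears in each category.
spam_words = Counter(w for msg in spam_examples for w in msg.split())
ham_words = Counter(w for msg in ham_examples for w in msg.split())

def looks_like_spam(message):
    """Classify by whether the message's words were seen more in spam."""
    words = message.lower().split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    return spam_score > ham_score

print(looks_like_spam("free prize now"))     # True
print(looks_like_spam("agenda for friday"))  # False
```

The key point is that the classification rule is derived from data, not written by a person; feeding the program more labelled examples changes its behaviour without changing its code.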
The Internet of Things (IoT) refers to the ability of any device with an on and off switch to be connected to the internet and to send and receive data. For example, on a personal level a coffee machine could begin brewing when an alarm goes off; on a larger scale, 'smart cities' could use connected devices to collect and analyse data to reduce waste and congestion.
Universal design refers to an accessible and inclusive approach to designing products and services, focusing on ensuring that people with disability, as well as others with specialised needs, are able to use those products and services. Applying universal design to technology means designing products, environments, programmes and services so they can be used by all people, to the greatest extent possible, without the need for specialised or adapted features. Correctly implemented universal design supports assistive technology when required.