UNSW Law Society Court of Conscience
PROTECTING HUMAN RIGHTS IN AI REGULATION
Sophie Farthing*
I Introduction
The concept of ‘artificial intelligence’ (‘AI’) is not a new phenomenon. In 1950, in his seminal paper, the great British scientist Alan Turing asked, ‘Can machines think?’, and set out some of the earliest theoretical underpinnings of machine learning. He expressed his belief that by the turn of the century the concept of ‘machines thinking’ would not be as far-fetched as it seemed in the 1950s.[1]
Fast forward to 2024, and AI has certainly emerged from the realms of science fiction. It is increasingly embedded in the products we use and the services we receive, in the way we interact with each other and engage with governments, in the information we read and see, in how we drive a car and in how complex medical conditions are diagnosed.[2] While AI has the potential to solve some of the greatest policy challenges of our time, it simultaneously poses a significant threat of harm to people and communities: it threatens to reinforce social inequality, undermine our human rights protections and challenge the very foundations of democracy.[3]
While countries all over the world have adopted a regulatory approach to AI, we are yet to see a comprehensive AI reform agenda adopted in Australia. At the time of writing, the Department of Industry, Science and Resources had just published a proposals paper to introduce mandatory guardrails for AI in high-risk settings.[4] The guardrails proposed by the Government ‘aim to address risks and harms from AI, build public trust and provide businesses with greater regulatory certainty’.[5]
As Australia moves towards a policy to regulate AI, it is important to have a clear objective of what we want regulation for AI to achieve. For AI to support human flourishing, the approach adopted will necessarily have to be human-centred. But what does human-centred mean? In this article, the case is made to ground Australia’s regulatory approach to AI in international human rights law. Applying this global set of substantive legal norms to AI development and deployment enables a clear understanding of AI harms and the risks they pose, and provides informative structures to help balance the various vested interests in AI development and deployment. Crucially, applying a human rights approach will ensure individuals and society are protected from harm, while the societal and economic benefits that AI promises are pursued.
II What is AI?
There is no universally accepted definition of AI; rather, the term refers to a cluster of data-driven technologies. The OECD Principles for trustworthy AI define an ‘AI system’ as:
a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.[6]
AI generally relies on some form of machine learning, a term which refers to a computer being able to learn from a data input, such as identifying a pattern or regularity, without a specific instruction, usually involving an element of automation.[7]
AI has seen rapid development in recent years due to technological advances and increases in computational power, combined with the technical ability to capture, store and use personal data. The public release of the first widely accessible generative AI tools in November 2022 has seen AI become ubiquitous, as well as accessible to everyone, not only data scientists.
In its current state of development, AI should be thought of as being embedded in sociotechnical systems.[8] That is, AI is being used in decision making in a way that involves both technical infrastructure and human involvement. While there is some speculation on the risks attached to the development of artificial general intelligence[9] (that is, a future mode of AI computing that equals, or even surpasses, human intelligence), that technology has not yet been developed. A sociotechnical approach is useful to guide regulatory approaches because it demands a consideration of how AI is embedded in current social and economic structures. In turn, this approach demands transparency and accountability regarding how AI is being developed and deployed, which underpins a human-centred approach to regulation focusing on safeguarding against known risks of the use of AI technologies.[10]
III How Does AI Impact Human Rights?
International human rights law refers to a set of substantive legal norms that bind states at the international level, which have been partially incorporated into domestic Australian law.[11] Key human rights are primarily set out in the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights.[12] These core agreements are supplemented by additional international agreements, focusing on the rights of women,[13] children,[14] and people with a disability.[15]
AI impacts a number of human rights. Biased recruitment algorithms[16] impact the right to non-discrimination and equality,[17] as do biased risk assessment tools in sentencing;[18] the right to life is impacted by the use of AI in Lethal Autonomous Weapons;[19] while the right to privacy is engaged by unregulated surveillance using facial recognition technology by private companies,[20] and the scraping of publicly available photos of children to train AI tools.[21] The newest AI technological leap, the development of generative AI, has led to a new wave of AI harms, broadly impacting several human rights ranging from the right to equality before the law to the right to be free from physical and psychological harm.[22]
IV Australia’s Proposed Regulatory Response to the Risk of Harms Posed By AI
The challenge faced by the Australian government, and governments all over the world, is how to balance the promise of AI to support economic growth and human flourishing, while putting in place the regulatory settings needed to protect humans from the kinds of harms articulated above. When it comes to new and emerging technologies like AI, regulation is at times held out to be the enemy of innovation. However, effective, fit-for-purpose regulation should be understood to facilitate responsible innovation by providing clear parameters within which innovators can operate.
Australia already has a number of laws that apply to AI, including discrimination, copyright, privacy and corporations law. AI does, however, challenge how we understand these laws to apply, and creates new issues for which there is no legal protection. Consequently, there is an immediate need for a comprehensive reform response to protect Australian individuals and society from AI-related harms.
For several years, governments in overseas jurisdictions have enacted various law and policy reforms specifically targeting AI. Most notably, the EU’s AI Act came into force in mid-2024. The world’s first comprehensive law on AI allocates different risk levels to AI applications, with various mitigating measures that must consequently be put in place to address the level of risk.[23] Other jurisdictions have approached the AI challenge in different ways. The United Kingdom, for example, determined in its 2023 White Paper to adopt a principles-based approach to inform the work of its current regulators.[24] More recently, the UK Government introduced legislation that will affect the development and use of AI in the Data Protection and Digital Information Bill.[25]
In 2019, the Australian Government adopted the Australian AI Ethics Principles.[26] These eight voluntary principles are intended to be adopted by both public and private entities to ‘ensure AI is safe, secure and reliable’.[27] Broadly, these Principles support the adoption of AI that benefits individuals, society and the environment, provide for AI systems that respect human rights and diversity, and put in place governance structures to ensure transparency, accountability and reliability.[28]
In January 2024, the Australian Government published its interim response to its 2023 consultation on Safe and Responsible AI.[29] In this interim response, the Government acknowledged that current Australian laws may be inadequate to protect Australians from AI harms in high-risk contexts. It indicated its policy intent would be based on five principles reflecting its commitment to regulate AI in a way that ‘builds community trust and promotes innovation and adoption while balancing critical social and economic policy goals’.[30] These principles include adopting a risk-based framework, being balanced and proportionate, adopting a collaborative and transparent approach, being a trusted international partner and placing people and communities at the centre of its regulatory approach.[31]
Subsequently, in September 2024, the Government published a Proposals Paper to mandate guardrails for high-risk AI. The proposed guardrails would impose obligations on developers and deployers of AI systems in high-risk settings to prevent harm from occurring and to require that steps be taken to ensure AI products are safe.[32]
V Grounding Australia’s Regulatory Approach in International Human Rights Law
Regulation for AI is undoubtedly challenging. The pace of technological development tends to be much faster than regulatory processes. There are complex questions of how to assign liability across the AI life cycle of design, development and deployment. AI also poses challenges to understanding how existing laws apply to the use of AI, and the barriers AI might put in front of existing review and redress mechanisms. There are also gaps in our current regulatory framework where there is an existing AI harm that is not addressed by our domestic laws.
Relevant to all these questions is what we want regulation to do and, as a corollary, what harm we are seeking to address. Even the briefest review of regulatory approaches across the world presents a range of ideas of what we want AI to be, such as ‘safe’, ‘fair’, ‘responsible’ or ‘ethical’. The challenge lies in defining exactly how these terms are understood by private companies and governments across the AI life cycle.
One approach that would address this uncertainty is to adopt an international human rights law approach to AI regulation, providing a source of substantive and globally applicable legal and societal norms. This would also promote global interoperability, given international human rights ‘are currently the only internationally agreed set of moral and legal norms collectively expressed by humanity as central to living a life of dignity and respect’.[33] There are several examples of how a human rights approach can guide the development of AI regulation.
First, human rights provide a framework to define what should be considered ‘harmful’ – and, therefore, the subject of regulatory protection. The use of AI engages the full spectrum of human rights, in both positive and negative ways. As outlined above, AI impacts on the realisation of civil and political rights, such as the right to be free from discrimination, the right to privacy, and freedom of expression and association. It also engages a range of social, economic and cultural rights, such as the rights to employment, education and housing. AI also impacts the protection and promotion of the human rights of specific groups within our communities, such as children and women.[34]
It is significant that the Proposals Paper considering how to apply mandatory guardrails to safeguard against harm suggests that one way to determine whether the development or deployment of AI is high-risk would be to consider whether the AI system risks ‘adverse impacts to an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights law obligations’.[35]
Second, human rights should be central to the regulatory objective. An objects clause is a standard provision in modern Australian laws, offering a concise articulation of the outcomes a Government is seeking to advance in a law, regulation or reform process. The objects clause of the Net Zero Economy Authority Act 2024 (Cth), for example, supports the Albanese Government’s policy agenda to reduce emissions and make Australia a renewable energy superpower, while also ensuring support for regional areas and workers.[36] Any regulatory reform or legislation adopted to safeguard against AI harm should include an aim to support AI innovation that pursues societal and economic benefit, while also protecting human rights.
Third, international human rights law offers a mechanism to balance competing interests: the proportionality test. International human rights law recognises that protecting some human rights may need to be balanced against other human rights and legitimate interests. The Siracusa Principles recognise that non-absolute human rights can be limited where those limitations are lawful and can be demonstrably justified in a free and democratic society.[37] In practice, this means asking whether the restriction on a human right is for a legitimate aim, and whether the measure is reasonable, necessary and proportionate in a specific context.
VI Building Community Trust: AI Regulation in Context
While Australia does not yet have a comprehensive AI policy in place, proposals to address high-risk harms posed by AI are taking place in the context of a wider whole-of-economy reform agenda, including recent government inquiries, regulatory action and test case litigation, as well as concurrent consultation processes that consider the impact of AI.
The Attorney-General’s Department’s review of Australia’s privacy law, for example, proposed to strengthen the protection of personal information, give individuals greater control over their information and provide new pathways for redress for privacy breaches.[38] Reform of Australia’s Privacy Act 1988 (Cth) is long overdue; as currently drafted, these laws are inadequate to address the impact of the use of data-driven technologies like AI. In September 2024, the Australian Government tabled the Privacy and Other Legislation Amendment Bill, which includes some, but crucially not all, of the measures proposed by the Attorney-General’s Department’s review.
Current reform approaches also build on the work undertaken by Australia’s regulators that relates to AI. The Australian Human Rights Commission report on Human Rights and Technology,[39] the Australian Competition and Consumer Commission’s ongoing inquiry into digital platforms,[40] and the eSafety Commissioner’s program of work to ensure safer and more positive experiences online[41] are examples of ongoing, in-depth law reform work that should inform the development of AI regulation by the Australian Government.
VII Conclusion
The Australian Government has recognised that fulfilling the promise of AI will only be possible if Australians trust the underlying technology, and how it is used by public and private sector organisations. Context is also relevant; trust in AI is particularly important in high-risk decision making, such as receiving social services and child protection, in policing and in judicial proceedings. Research is now showing that Australians have significant concerns when it comes to AI, particularly about how their personal data is being used.[42] Australians are also increasingly demanding effective regulation for the use of AI.[43] Good regulation that protects and promotes human rights will build community trust in AI and is therefore imperative.
The adoption of AI presents an unprecedented regulatory challenge. The exponential rise in AI has driven the Fourth Industrial Revolution around the world, and yet Australia has been slow to adopt a clear and effective regulatory reform agenda. There is an opportunity for Australia to take a human-centred and economy-wide regulatory approach to AI. Using international human rights law as its north star, AI regulation in Australia should ensure our law is effective, coherent and innovation-enhancing, while also safeguarding against risks of harm. Such an approach would also find support from Australians, who are seeking safeguards so that AI technologies fulfil their great promise, without delivering a future we fear.
* Sophie Farthing is Head of the Policy Lab at the Human Technology Institute (HTI), University of Technology Sydney (UTS).
[1] Alan Mathison Turing, ‘Computing Machinery and Intelligence’ (1950) 59(236) Mind 433.
[2] For examples of how AI is being used and its impact, see Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-risk Settings (Proposals Paper, September 2024) 12 (‘Mandatory Guardrails for AI Proposal Paper’); Department for Science, Innovation & Technology (UK), A Pro-innovation Approach to AI Regulation (Policy Paper, CP 815, March 2023) 7–11.
[3] See, eg, Australian Human Rights Commission, Human Rights and Technology (Final Report, 1 March 2021); United Nations Human Rights Office of the High Commissioner, Taxonomy of Human Rights Risks Connected to Generative AI (Report, 2023) (‘Taxonomy of Human Rights Risks Connected to Generative AI’); Australian Government Department of Home Affairs, Strengthening Australian Democracy: A Practical Agenda for Democratic Resilience (Report, 2024) 32–4; Stephanie A Bell and Anton Korinek, ‘AI’s Economic Peril’ (2023) 34(4) Journal of Democracy 151.
[4] Mandatory Guardrails for AI Proposal Paper (n 2).
[5] Ibid 2.
[6] Organisation for Economic Co-operation and Development, Principles for Trustworthy AI: Recommendation of the Council on Artificial Intelligence (OECD Legal Instruments, No 0449, adopted 22 May 2019, amended 3 May 2024) 7.
[7] Organisation for Economic Co-Operation and Development, Explanatory Memorandum on the Updated OECD Definition of an AI System (OECD Artificial Intelligence Papers, No 8, March 2024) 8.
[8] See Lyria Bennett Moses, ‘Regulating in the Face of Sociotechnical Change’ in Roger Brownsword, Eloise Scotford and Karen Yeung (eds), The Oxford Handbook of Law, Regulation and Technology (Oxford Handbooks, 2017) 573; Lyria Bennett Moses, ‘How to Think about Law, Regulation and Technology: Problems with ‘Technology’ as a Regulatory Target’ (2013) 5(1) Law, Innovation and Technology 1, 6.
[9] See, eg, ‘Statement on AI Risk’, Center for AI Safety (Open Letter, May 2023) <https://www.safe.ai/work/statement-on-ai-risk#press>.
[10] See generally Seth Lazar and Alondra Nelson, ‘AI Safety on Whose Terms?’ (2023) 381(6654) Science 138.
[11] See, eg, Sex Discrimination Act 1984 (Cth); Racial Discrimination Act 1975 (Cth); Disability Discrimination Act 1992 (Cth); Age Discrimination Act 2004 (Cth).
[12] International Covenant on Civil and Political Rights, opened for signature 16 December 1966, 999 UNTS 171 (entered into force 23 March 1976) (‘International Covenant on Civil and Political Rights’); International Covenant on Economic, Social and Cultural Rights, opened for signature 16 December 1966, 993 UNTS 3 (entered into force 3 January 1976) (‘International Covenant on Economic, Social and Cultural Rights’).
[13] United Nations Convention on the Elimination of All Forms of Discrimination Against Women, opened for signature 1 March 1980, 1249 UNTS 13 (entered into force 3 September 1981).
[14] United Nations Convention on the Rights of the Child, opened for signature 20 November 1989, 1577 UNTS 3 (entered into force 2 September 1990).
[15] United Nations Convention on the Rights of Persons with Disabilities, opened for signature 30 March 2007, 2515 UNTS 3 (entered into force 3 May 2008).
[16] Jeffrey Dastin, ‘Insight – Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women’, Reuters (online, 11 October 2018) <https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/>.
[17] See International Covenant on Civil and Political Rights (n 12) art 2; International Covenant on Economic, Social and Cultural Rights (n 12) art 2.
[18] Mallika Chawla, ‘COMPAS Case Study: Investigating Algorithmic Fairness of Predictive Policing’, Medium (online, 23 February 2022) <https://mallika-chawla.medium.com/compas-case-study-investigating-algorithmic-fairness-of-predictive-policing-339fe6e5dd72>.
[19] See, eg, Bethan McKernan and Harry Davies, ‘“The Machine Did it Coldly”: Israel Used AI to Identify 37,000 Hamas Targets’, The Guardian (online, 4 April 2024) <https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes>.
[20] See, eg, Office of the Australian Information Commissioner (Cth), ‘Clearview AI Breached Australians’ Privacy’ (Media Release, 3 November 2021).
[21] Human Rights Watch, ‘Australia: Children’s Personal Photos Misused to Power AI Tools’ (Press Release, 2 July 2024).
[22] Taxonomy of Human Rights Risks Connected to Generative AI (n 3) 1.
[23] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence [2024] OJ L 2024/1689.
[24] United Kingdom Government Department for Science, Innovation & Technology, A Pro-Innovation Approach to AI Regulation (Policy Paper, March 2023).
[25] At the time of writing, the Data Protection and Digital Information Bill (UK) had passed the House of Commons and was being considered in the House of Lords.
[26] ‘Australia’s AI Ethics Principles’, Department of Industry, Science and Resources (Web Page, 2019) <https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles#:~:text=Principles%20at%20a%20glance&text=Fairness%3A%20AI%20systems%20should%20be,ensure%20the%20security%20of%20data.>.
[27] Ibid.
[28] Ibid.
[29] Department of Industry, Science and Resources (Cth), ‘The Australian Government’s Interim Response to Safe and Responsible AI Consultation’ (Media Release, 17 January 2024); see Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia Consultation: Australian Government’s Interim Response (Consultation Report, 2024) (‘Safe and Responsible AI in Australia Consultation’).
[30] Safe and Responsible AI in Australia Consultation (n 29) 18.
[31] Ibid.
[32] Mandatory Guardrails for AI Proposal Paper (n 2) 2, 35.
[33] Taxonomy of Human Rights Risks Connected to Generative AI (n 3) 1.
[34] See, eg, United Nations, General Comment No. 25 (2021) on Children’s Rights in relation to the Digital Environment, UN CRC/C/GC/25 (2 March 2021) [50]; Council of Europe, The Digital Dimension of Violence Against Women as Addressed By the Seven Mechanisms of the EDVAW Platform (Thematic Paper, 2022).
[35] Mandatory Guardrails for AI Proposal Paper (n 2) 19.
[36] Anthony Albanese and Chris Bowen, ‘Historic Legislation to Establish the Net Zero Economy Authority’ (Media Release, 27 March 2024).
[37] American Association for the International Commission of Jurists, Siracusa Principles: On the Limitation and Derogation Provisions in the International Covenant on Civil and Political Rights (Principles Paper, April 1985).
[38] Attorney-General’s Department (Cth), Privacy Act Review Report (Report, 2022) 122, 174.
[39] See Australian Human Rights Commission, Final Report: Human Rights and Technology (Final Report, 1 March 2021).
[40] Australian Competition and Consumer Commission, ‘Project Overview’, Digital Platform Services Inquiry 2020–25 (Web Page, 2024) <https://www.accc.gov.au/inquiries-and-consultations/digital-platform-services-inquiry-2020-25>.
[41] Australian Government and eSafety Commissioner, Australia’s eSafety Strategy 2022-25 (Strategy Report, 2024).
[42] See Office of the Australian Information Commissioner (Cth), Australian Community Attitudes to Privacy Survey (Survey, August 2023); Katharine Kemp, Charlotte Gupta and Marianne Campbell, Singled Out: Consumer Understanding — and Misunderstanding — of Data Broking, Data Privacy, and What It Means for Them (Report, February 2024).
[43] Nicole Gillespie et al, Trust in Artificial Intelligence: A Global Study (Report, 2023) 4.
URL: http://www.austlii.edu.au/au/journals/UNSWLawSocCConsc/2024/6.html