The Crucial Task of Regulating Artificial Intelligence

A new era

The year 2023 marked a new era of “AI hype”, rapidly steering policy makers towards discussions on the safety and regulation of new artificial intelligence (AI) technologies.

The feverish year in tech was set in motion by the launch of ChatGPT in late 2022 and ended with a landmark agreement being reached on the EU AI Act. The final text of this legislation is still being ironed out in technical meetings, but early signs indicate that the western world’s first “AI rulebook” falls short in a number of crucial areas. While the Act goes some way towards protecting people from the harms of AI, it fails to ensure human rights protections, especially for the most marginalised.

The agreement came soon after the UK Government hosted an inaugural AI Safety Summit in November 2023, where global leaders, key industry players, and select civil society groups gathered to discuss the risks of AI. Robust debate on AI governance is welcome and urgently needed. The key question for 2024 is whether these discussions will generate concrete commitments and, critically, whether they will translate into further substantive action in other jurisdictions.

Whilst AI developments do present new opportunities and benefits, we must not ignore the documented dangers posed by AI tools when they are used as a means of societal control, mass surveillance and discrimination. All too often, AI systems are trained on massive amounts of private and public data that reflect societal injustices, leading to biased outcomes that exacerbate inequalities.

From predictive policing tools, to automated systems that determine access to healthcare and social assistance, to systems that monitor the movement of migrants and refugees, the use of AI has flagrantly and consistently undermined the human rights of the most marginalised in society.

Other forms of AI, such as fraud detection algorithms, have also disproportionately impacted ethnic minorities, who have endured devastating financial problems as a result. Facial recognition technology, meanwhile, has been used by police and security forces to target racialised communities and entrench Israel’s system of apartheid.

What makes regulation of AI complex and challenging?

First, the term AI itself is vague, which makes efforts to regulate this technology more cumbersome. There is no widespread consensus on the definition of AI because the term does not refer to a singular technology but rather encompasses a myriad of technological applications and methods.

The use of AI systems across many domains in the public and private sector means a large number of varied stakeholders are involved in their development and deployment. Further, these systems cannot be strictly considered as hardware or software; their impact comes down to the context in which they are developed and implemented, and regulation must take this into account.

Crucial elements of regulation

Alongside the EU legislative process, the UK, US, and others have set out their own distinct roadmaps for identifying the key risks of AI technologies and how they intend to mitigate them. Whilst these legislative processes are complex, that complexity should not delay efforts to protect people from the present and future harms of AI, and there are crucial elements that we, at Amnesty, know any proposed regulatory approach must contain.

Regulation must be legally binding and centre the already documented harms to people subject to these systems. Commitments and principles on the “responsible” development and use of AI – the core of the current pro-innovation regulatory framework being pursued by the UK – do not offer adequate protection against the risks of emerging technology and must be put on a statutory footing.

Similarly, any regulation must include broader accountability mechanisms over and above the technical evaluations being pushed by industry. Bans and prohibitions cannot be off the table for systems fundamentally incompatible with human rights, no matter how accurate or technically efficacious they purport to be.

Gaps in legislation

Others must learn from the EU process and ensure there are no loopholes allowing public and private sector players to circumvent regulatory obligations. Removing any exemptions for AI used within national security or law enforcement is critical to achieving this. It is also important that, where future regulation limits or prohibits the use of certain AI systems in one jurisdiction, no loopholes or regulatory gaps allow the same systems to be exported to other countries where they could be used to harm the human rights of marginalised groups. This remains a glaring gap in the UK, US, and EU approaches, as they fail to take into account the global power imbalances of these technologies. This is especially important for communities in the Global Majority, whose voices are not represented in these discussions. There have already been documented cases of outsourced workers in Kenya and Pakistan being exploited by companies developing AI tools.

Rights-respecting by design

As we enter 2024, now is the time to ensure that AI systems are rights-respecting by design. We must guarantee not only that those impacted by these technologies are meaningfully involved in decision-making on how AI should be regulated, but also that their experiences are continually surfaced and centred within these discussions.

We need more than lip service from lawmakers: we need binding regulation that holds companies and other key industry players to account. Profits must not come at the expense of human rights protections. International, regional and national governance efforts must complement and catalyse one another, and global discussions must not undermine meaningful national regulation or binding regulatory standards. Finally, we must learn from past attempts to regulate tech, which means ensuring robust mechanisms are introduced to allow victims of AI-inflicted rights violations to seek justice.

By David Nolan, Hajira Maryam & Michael Kleinman, Amnesty Tech