
Artificial Intelligence Governance: Prospects for International Law


In recent years, the world has witnessed an overwhelming proliferation of Artificial Intelligence (AI). However, efforts to address the risks posed by AI have not materialised with equal velocity at the international level. While the benefits of AI are alluring, its implementation involves a slew of risks, ranging from privacy concerns to algorithmic bias to energy consumption. These multifaceted risks must be addressed through regulation, which raises new questions for the future of AI governance.


Given the scope of AI risks and the need to ensure ethical governance, this proliferation must be matched by an equally rapid development of international law that comprehensively addresses risks to human rights and sustainable development. To determine the prospects for future international law on AI, this article reviews the current state of play by exploring the approaches applied in existing legal frameworks and voluntary guidelines, across both international circles and domestic arenas.


At a special meeting held by the United Nations Economic and Social Council in May 2024, H.E. Paula Narváez summarised the pressing need for responsible AI governance: “Algorithmic bias, data privacy, and job displacement, among others, underscore the critical need for ethical and responsible AI governance frameworks that prioritise human rights, equity, and sustainability.”


Though the international community has been characteristically slow to develop binding agreements, several major international organisations have addressed the risks posed by AI technology and produced voluntary guidelines on AI development. UNESCO adopted its Recommendation on the Ethics of Artificial Intelligence in 2021, addressing a variety of concerns, including algorithmic discrimination, privacy, data protection, and digital divide inequalities. It advises that states and companies create due diligence and oversight tools to assess and mitigate AI’s impact on human rights and poverty.


The Recommendation emphasises that participation from various stakeholders is needed for inclusive AI governance, and it encourages states to involve all AI actors, from investors to engineers and users, in establishing best practices and norms.


Outside the UN, G7 leaders endorsed the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems in 2023, a voluntary set of guiding principles for AI developers. The Code of Conduct recommends that organisations use a risk-based approach to developing management policies.


Though the Code’s 11 principles are generally vague, it specifically encourages AI developers to apply content authentication mechanisms that help users identify AI-generated content, alongside disclaimers informing users when they are interacting with AI. The Code also calls for public reporting of AI systems’ capabilities and limitations, the advancement of international technical standards, and information sharing across industry, academia, governments, and civil society.


These recommendations serve as starting points from which future international law can be drawn. The Hiroshima Code of Conduct has already informed the domestic AI regulatory approaches of several G7 members, such as Japan and the U.S. Notably, the Code’s guidance directly inspired certain sections of the EU AI Act, including the Act’s transparency obligations. Members of the European Parliament have even expressed support for maturing the G7’s Code of Conduct to the point where AI models demonstrating compliance with it could receive a “presumption of conformity” with the EU AI Act’s requirements, demonstrating openness to regulatory interoperability.


In terms of establishing swift AI legislation, Europe has proved to be a trailblazer. The Council of Europe’s Framework Convention on Artificial Intelligence, adopted in May 2024, is the first-ever legally binding international agreement on AI. The Convention focuses particularly on AI’s potential to interfere with human rights, democracy, and the rule of law, and it covers the entire AI system lifecycle, from design to decommissioning, across both the public and private sectors.


The treaty establishes requirements for transparency and oversight, with parties obliged to assess the need for a moratorium in the case of AI uses that are incompatible with human rights standards. Parties must ensure that AI systems respect privacy rights, prevent discrimination, and implement procedural safeguards such as notifying AI users that they are interacting with AI. It is also the responsibility of parties to ensure that AI systems are not used in a manner that undermines democratic processes.


However, the Convention’s provisions do not apply to national security matters, so long as those activities are not in conflict with international law. The treaty is open to signature by non-Council of Europe member states and has been signed by the United States, Canada, Uruguay, and others. The Council of Europe’s Convention provides a basis for how international AI standards can be approached.


Meanwhile, the landmark EU Artificial Intelligence Act, adopted in 2024, is the world’s first comprehensive standalone legislation regulating AI, with Brussels anticipating that the text will serve as a blueprint for AI legislation in other jurisdictions. The European Parliament’s stated objective was to ensure that AI is transparent, non-discriminatory, safe, and environmentally friendly.


The Act takes a risk-based approach, classifying AI systems into tiers according to the risks they pose to individuals: unacceptable risk, high risk, limited risk, and minimal risk. Certain uses of AI, including real-time remote biometric identification and cognitive behavioural manipulation, are deemed unacceptable and thereby banned, with narrow exceptions permitting biometric identification for certain law enforcement purposes. This horizontal, risk-based approach is a promising framework that future international agreements are likely to adopt.


These two European-led legal instruments are impressive feats that set a positive precedent for the future of AI regulation. Yet a crucial challenge that lawmakers face is the need to balance risk prevention and regulation with fostering innovation, experimentation, and development. This came to the fore during negotiations for the Framework Convention, with the U.S. reportedly advocating for more lenient obligations for the private sector. Prospects for stronger international frameworks at the global level are thus likely to be stifled by technological competition, particularly as the U.S. and China vie to be leaders in AI. Another hurdle for achieving robust international agreements is the lack of cohesion in domestic legislation. AI regulation in both the U.S. and China – the world’s top two AI developers – remains largely fragmented, which makes developing wide-scale international regulations challenging.


In contrast to the EU, China does not have comprehensive, centralised AI legislation. Lacking a unified legal definition of AI (although the term has been defined within certain technical standards and guidelines), China has approached AI regulation by promulgating technical standards and implementing industry-specific rules.


Regulations in China have been targeted at specific high-risk uses, such as recommendation algorithms and deep synthesis. Local regulations have also been issued, such as in Shanghai. Likewise, AI regulation in the U.S. consists of a patchwork of individual state laws and federal consumer protection laws. This piecemeal state of AI governance is likely to make compliance challenging for companies that operate across states. Comprehensive federal legislation on the regulation of AI has not yet been achieved.


With the Trump administration focused on competitively strengthening AI innovation, it is unlikely that the U.S. will follow the EU’s model of swiftly imposing centralised regulations. In January 2025, President Trump loosened restrictions by signing the executive order “Removing Barriers to American Leadership in Artificial Intelligence,” which rescinded President Biden’s earlier executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Biden’s order had built upon principles outlined in the Hiroshima Code of Conduct. These developments suggest that stronger international regulations would not receive support from the current administration.


Trends in AI legislation and guidelines suggest that future international law on AI would benefit from a risk-based, non-sector-specific approach developed in consultation with AI developers and researchers. The innovative legislation produced in Europe showcases the great potential for international frameworks to create effective safeguards. Future international conventions are likely to address the recurring regulatory themes of transparency, oversight, and alignment with human rights, while at times making concessions for national security.


A main obstacle to the pursuit of binding international conventions is that major AI-developing countries are hesitant to endorse frameworks that might impede AI innovation, and their domestic policies remain convoluted and evolving, meaning that advances in establishing regulations at the international level are likely to be incremental. For the foreseeable future, AI governance will therefore rest largely in the hands of national and local jurisdictions, which would benefit from following international guidelines.


Image by DeltaWorks via Pixabay
