Channel: Global – Future of Privacy Forum

The World’s First Binding Treaty on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law: Regulation of AI in Broad Strokes


The Council of Europe’s (CoE) Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law (Framework Convention on AI) was adopted on 17 May 2024, after two years of drafting and negotiation. This is the world’s first binding treaty on AI, focusing on protecting human rights, democracy and the rule of law, and it will be open for States’ signature from 5 September 2024.

FPF has published a Two-Page Fact Sheet overview of the Framework Convention on AI.

While efforts to regulate the development and deployment of Artificial Intelligence (AI) systems have, for the most part, unfolded at the national or regional level, there has been increasing focus on the steps taken by the international community to negotiate and design cross-border regulatory frameworks. As a result, the data protection community, technology lawyers, and AI experts must now increasingly look beyond regional borders for a holistic view of the legislative frameworks aiming to regulate AI. 

The Framework Convention on AI is one such significant initiative, spearheaded by the CoE, an International Organization founded in 1949 with the goal of promoting and advocating for human rights, democracy, and the rule of law. Recognizing that AI systems are developed and deployed across borders, the CoE established an ad-hoc intergovernmental Committee on Artificial Intelligence (CAI) in January 2022 and tasked it with drafting a binding legal framework on the development, design, and application of AI systems. 

There are several key reasons why the treaty is a significant and influential development in the field of global AI law and governance, not only for the CoE and its Member States, but around the world. 

Firstly, the Framework Convention was drafted by the CAI, composed of Ministers or high-level representatives not only of the CoE’s 46 Member States, but also of the Governments of the United States, Canada, Mexico, Japan, Israel, Ecuador, Peru, Uruguay, and Argentina. Alongside representatives of prominent human rights groups, the meetings of the CAI and the drafting of the Framework Convention also included representatives of the European Commission, the European Data Protection Supervisor, and the private sector. Inter-governmental and multi-stakeholder participation in the drafting of a cross-border, binding instrument is often a critical factor in determining its impact. Crucially, the Framework Convention will also be open for ratification by countries that are not members of the CoE. 

Secondly, the importance of the Framework Convention lies in its scope and content. In addition to general obligations to respect and uphold human rights, it aims to establish a risk-based approach to regulating AI and a number of common principles applying to activities within the entire lifecycle of AI systems. Its general principles include, among others, respect for human dignity; transparency and oversight; accountability and responsibility; non-discrimination; and privacy and personal data protection. States Parties to the Framework Convention will have to adopt appropriate legislative and administrative measures which give effect to the provisions of this instrument in their domestic laws. In this way, the Framework Convention has the potential to affect ongoing national and regional efforts to design and adopt binding AI laws, and may be uniquely positioned to advance interoperability.

With this brief overview in mind, this blog post contextualizes the work and mandate of the CAI within the CoE and international law. It then outlines the Framework Convention, its scope, applicability, and key principles, including its risk-based approach, and highlights its approach to fostering international cooperation in the field of cross-border AI governance through the establishment of a ‘Conference of the Parties.’ The post also draws some initial points of comparison with the EU AI Act and with the CoE’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, otherwise known as Convention 108. 

  1. Human Rights Are At The Center of the Council of Europe’s Work, Including the Mandate of the Committee on Artificial Intelligence (CAI)

The CoE comprises 46 Member States, 27 of which are also Member States of the European Union; its membership includes Turkey, Ukraine, and the United Kingdom. In addition to its Member States, a number of countries hold the status of “Observer States”, meaning that they can cooperate with the CoE, participate in its Committees (including the CAI), and become Parties to its Conventions. Observer States include Canada, the United States, Japan, Mexico, and the Holy See. Through the Observer State mechanism, CoE initiatives reach well beyond the confines of European borders.

As an International Organization, the CoE has played a key role in the development of binding human rights treaties, including the European Convention on Human Rights (ECHR), and Convention 108. Leveraging its experience in advancing both human rights and a high level of personal data protection, among other issues, the CoE has been well-placed to bring members of the international community together to begin to define the parameters of an AI law that is cross-border in nature. 

Since its inception in January 2022, the CAI’s work has fallen under the human rights pillar of the CoE, as part of the Programme on the Effective Implementation of the ECHR and the sub-Programme on freedom of expression and information, media, and data protection. It is therefore grounded in existing human rights obligations, including the rights to privacy and personal data protection. To grasp the possible impacts of such a treaty, it is crucial to understand how it will function under international law, and to draw a comparison between the Framework Convention on AI and Convention 108. 

1.1. International Law in Action to Protect People in the Age of Computing: From Convention 108 to the Framework Convention

Traditionally, international law governs relations between States. It defines States’ legal responsibilities in their conduct with each other, within the States’ boundaries, and in their treatment of individuals. One of the ways in which international law governs the conduct and relations between States is through the drafting and ratification of international conventions or treaties. Treaties are legally binding instruments that govern the rights, duties, and obligations of participating States. Through treaties, international law encompasses many areas including human rights, world trade, economic development, and the processing of personal data. 

It is on the basis of this treaty mechanism under international law that the CoE Convention 108 opened for signature on 28 January 1981 as the first legally binding, international instrument in the data protection field. Under Convention 108, States Parties to the treaty are required to take the necessary steps in their domestic legislation to apply its principles to ensure respect in their territory for the fundamental rights of all individuals with regard to the processing of their personal data. 

In 2018, the CoE finalized the modernization of Convention 108 through the Amending Protocol CETS No. 223. While the principle-based Convention 108 was designed to be technology-neutral, its modernization was deemed necessary for two key reasons: 1) to address challenges resulting from the use of new information and communication technologies, and 2) to strengthen the Convention’s effective implementation. 

Following this modernization, Convention 108 is now better known as Convention 108+, and as of January 2024 it has 55 States Parties. The modernized Convention 108+ is also better aligned with the EU General Data Protection Regulation (GDPR), particularly through the expansion of its Article 9 on the rights of the data subject, which now includes the individual right “not to be subject to a decision significantly affecting him or her based solely on automated processing of personal data” (automated decision-making). 

As the only international, binding treaty on personal data protection, Convention 108 is an important reference point for the Framework Convention on AI. Already in its Preamble, the Framework Convention makes reference to the privacy rights of individuals and the protection of personal data, as applicable through Convention 108.  Furthermore, both Conventions are similarly grounded in human rights and recognize the close interplay between new technologies, personal data processing, and the possible impacts of these on people’s rights. 

Notably, and unlike Convention 108, the Framework Convention on AI takes the form of a so-called “framework convention”, a type of legally binding treaty which establishes broader commitments for its parties. In essence, a framework convention serves as an umbrella document which lays down principles and objectives, while leaving room for stricter and more prescriptive standards and their implementation to domestic legislation. 

Framework conventions are effective in creating a coherent treaty regime, while elevating the political will for action and leaving room for consensus on the finer details for a later stage. In this way, and considering that the Framework Convention on AI will also be open for ratification to non-Member States of the CoE, the instrument may become more attractive to a greater number of countries. 

  2. The Framework Convention on AI Proposes a Risk-Based Approach and General Principles Focusing on Equality and Human Dignity 

2.1. A Harmonized Definition of an AI System

One of the first challenges of international cooperation and rule-making is the need to agree on common definitions. This has been particularly relevant in the context of AI governance and policy, as national, regional, and international bodies have negotiated at length to converge on a common definition of AI. The Framework Convention on AI addresses this in its Article 2, adopting the OECD’s definition of an AI system as a “machine-based system that for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that may influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.” 

Promoted by one of the leading International Organizations in the global AI governance conversation, the OECD’s definition of an AI system has also been influential in regional contexts. For example, the EU’s Artificial Intelligence Act (EU AI Act), which received its final green light on 21 May 2024, adopts a very similar definition of an AI system. Similarly, Brazil’s draft AI Bill adopts the OECD’s definition, signaling the country’s intention to align its legislation with the mounting international consensus on a common definition of AI. U.S. President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the recently enacted Colorado AI Act likewise adopt AI definitions similar in scope to the OECD’s. 

The alignment on definitions is not insignificant, as it is by first agreeing on the subject matter of rule-making that a body of specific, intentional rules and principles can emerge. Furthermore, an initial alignment on definitions can help to already establish common ground for facilitating interoperability between different AI governance frameworks internationally. 

2.2. The Framework Convention Only Applies to Public Authorities and Private Actors Acting on Their Behalf

Before outlining the principles and obligations elaborated by the Framework Convention, it is important to establish the treaty’s scope and applicability. Its Article 3 states that the Convention covers “the activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy and the rule of law.” 

Notably, the draft of the Framework Convention on AI from 18 December 2023, which formed the basis for negotiations until the treaty’s adoption in May 2024, made repeated references to the lifecycle of an AI system as including the design, development, use, and decommissioning stages. The finalized Framework Convention, however, references these stages only once, in its Preamble. With the treaty opening for signature later this year, it remains to be seen how the lifecycle of an AI system will be interpreted by States Parties in practice, and how this will affect the scope of the Convention’s applicability in different countries’ domestic laws. 

Regarding scope, Article 3(1)(a) provides that each Party to the Framework Convention on AI will have to apply its principles and obligations to activities within the lifecycle of AI systems undertaken by public authorities, or by private actors acting on their behalf. Private actors fall under the scope of the Convention only if they meet two requirements: 1) the country in which they are established, or in which they develop or deploy their AI products and services, is a State Party to the Convention; and 2) they are designing, developing, or deploying artificial intelligence systems on behalf of that State Party’s public authorities. 

Therefore, even once ratified by States Parties, the Framework Convention does not by itself impose obligations on all private actors with a role in the lifecycle of AI systems, unless States Parties decide to extend its scope in national law.

In addition to defining what falls within its scope, the Framework Convention also defines what falls outside its purview. Article 3(2) provides that a Party to the Convention shall not be required to apply its obligations to activities within the lifecycle of AI systems related to the protection of its national security interests. States Parties nevertheless remain under an obligation to comply with applicable international law and human rights obligations, including for purposes of national security. 

The Framework Convention will similarly not apply to research and development activities regarding AI systems not yet made available for use, unless their testing has the potential to interfere with human rights, democracy and the rule of law (Article 3(3)). Finally, the Framework Convention will not apply to matters relating to national defence (Article 3(4)). 

2.3. General Obligations and Common Principles Include Accountability, Individual Autonomy, and Safe Innovation 

Rather than imposing more prescriptive requirements, the Framework Convention on AI establishes a broader, umbrella approach for international AI law, while making specific and continued reference to existing obligations, such as those found in international human rights law. 

Articles 4 and 5 of the Framework Convention on AI address the requirements to ensure that activities within the lifecycle of AI systems are consistent with obligations to protect human rights, that they are not used to undermine democratic processes, and that they respect the rule of law. This includes seeking to protect individuals’ fair access to, and participation in, public debate, and their ability to freely form opinions.

In addition, Articles 7 to 13 elaborate seven common principles that apply to activities within the lifecycle of AI systems:

  1. Respect human dignity and individual autonomy (Article 7); 
  2. Maintain measures to ensure that adequate transparency and oversight requirements tailored to specific contexts and risks are in place (Article 8);
  3. Adopt or maintain measures to ensure accountability and responsibility for adverse impacts on human rights, democracy and the rule of law (Article 9);
  4. Ensure that activities within the lifecycle of AI systems respect equality, including gender equality, and the prohibition of discrimination as provided under applicable international or domestic law; Article 10 also goes further, including a positive obligation to maintain measures aimed at overcoming inequalities to achieve fair, just, and equitable outcomes in relation to the lifecycle of AI systems (Article 10); 
  5. Adopt or maintain measures to ensure that the privacy of individuals and their personal data are protected, including through international laws, standards and frameworks, and that effective guarantees and safeguards are put in place (Article 11); 
  6. Take measures to promote the reliability of AI systems and trust in their outputs, which could include requirements related to adequate quality and security (Article 12); 
  7. Establish controlled environments for developing, experimenting and testing AI systems under the supervision of competent authorities (Article 13). 

The agreed-upon principles attempt to strike a balance between stipulating broad yet effective principles on the one hand, and leaving requirements to States Parties’ discretion within their own jurisdictions and domestic legislation on the other. 

Notably, the draft of the Framework Convention from 18 December 2023 included a general principle related to adopting and maintaining measures to preserve health, with the option of a clause extending the principle to the protection of the environment. Similarly, in the same draft text, the previous iteration of the above-mentioned Article 12 included options to specify more prescriptive requirements regarding accuracy, performance, data quality, data integrity, data security, governance, cybersecurity, and robustness. Both provisions were amended during negotiations and did not make it into the final text of the Convention. 

A separate Article 21 specifically states that nothing in the Framework Convention shall be construed as limiting, derogating from or otherwise affecting human rights and obligations that may already be guaranteed under other relevant laws. Article 22 goes further to state that the Convention also does not limit the possibility of a State Party to grant wider protection in their domestic law. This is an important addition to the text, particularly at a time in which many countries and regions are drafting and adopting AI legislation. 

2.4. The Risk-Based Approach Differs from That of the EU AI Act and Focuses on Mitigating Adverse Impacts of AI Systems

In its Article 1 on the object and purpose of the treaty, the Framework Convention on AI elaborates that measures implemented in the lifecycle of AI systems shall be “graduated and differentiated as may be necessary in view of the severity and probability of the occurrence of adverse impacts on human rights, democracy and the rule of law” (emphasis added). In this way, the Framework Convention on AI captures the risk-based approach that has become a familiar component of regulatory discussions and frameworks for AI thus far. 

Article 16(1) further outlines what the risk-based approach will entail in practice. It provides that each State Party shall adopt or maintain measures for the identification, assessment, prevention and mitigation of risks posed by AI systems by considering actual and potential harms to human rights, democracy, and the rule of law. Article 16(2) proposes a set of broad requirements for assessing and mitigating risks, including to: 

  • Take due account of the context and intended use of an AI system (Article 16(2)(a));
  • Take due account of the severity and probability of potential impacts (Article 16(2)(b));
  • Consider, where appropriate, the perspective of all relevant stakeholders, in particular persons whose rights may be impacted (Article 16(2)(c)); 
  • Apply the risk-management requirements iteratively and throughout the lifecycle of AI systems (Article 16(2)(d));  
  • Include monitoring for risks and adverse impacts (Article 16(2)(e));
  • Include documentation of risks, actual and potential impacts, and on the risk management approach (Article 16(2)(f));
  • Require testing of artificial intelligence systems before making them available for first use and when they are significantly modified (Article 16(2)(g)). 

The risk-based principles adopted by the Framework Convention on AI resemble obligations in the EU AI Act, particularly the requirements for risk monitoring, documentation, and testing. However, the Framework Convention does not take a layered approach to risk (from limited risk to high risk), and as such it does not prescribe contexts or use cases in which AI systems may be prohibited or banned. Rather, in its Article 16(4), the Framework Convention on AI leaves each State Party the discretion to assess the need for a moratorium, ban, or other appropriate measures with respect to certain uses of AI that may be incompatible with human rights. 

  3. A Newly Created Body Will Promote International Cooperation on AI Governance 

International cooperation and coordination in the field of AI governance has been called for by many regional and international organizations and fora. Cross-border cooperation is consistently identified as a priority in the work of the OECD, forming one of the core tenets of the OECD AI Principles. Similarly, the United Nations’ High-Level Advisory Body on Artificial Intelligence is tasked with advancing international, multi-stakeholder governance of AI, and calls for interoperability of AI frameworks and continued cooperation. The United Nations Human Rights Office of the High Commissioner recently released its Taxonomy of Human Rights Risks Connected to Generative AI, in the interests of stimulating international dialogue and agreement. At the intergovernmental level, the Group of 7 (G7) approved an international set of guiding principles on AI and a voluntary Code of Conduct for AI developers as part of the Hiroshima AI Process.

The Framework Convention on AI aims to establish its own proposal for furthering international cooperation, on the basis of a two-pronged approach. First, Article 23 calls for the formation of a ‘Conference of the Parties,’ to be composed of representatives of the Parties to the treaty. Second, under Article 25, Parties are to exchange relevant information among themselves, and to assist States that are not Parties to the Convention in acting consistently with its requirements, with a view to their becoming Parties to it. The Preamble similarly recognizes the value of fostering cooperation and of extending it to other States that share the same values.

In this way, the Framework Convention on AI encourages continued cooperation and dialogue at the State Party level, and codifies a requirement to take an inclusive stance towards countries that are not (yet) Parties to the treaty. This inclusive approach also extends to involving relevant non-State actors in the exchange of information on aspects of AI systems that may impact human rights, democracy, and the rule of law, suggesting ongoing cooperation and exchange with public and private actors. 

For an insight into how such continued cooperation may work in practice under the auspices of the Conference of the Parties, we can draw a useful example from the Consultative Committee established under Convention 108. The Consultative Committee is composed of representatives of Parties to the Convention, and observers such as non-Member States, representatives of International Organizations and non-governmental organizations. The Consultative Committee meets three times a year, and is responsible for the interpretation of Convention 108 and for improving its implementation, ensuring that it remains fit-for-purpose and adapting to an ever-growing set of challenges posed by new data processing systems. 

  4. Closing Reflections: Future Areas of Interplay? 

As the world’s first treaty on artificial intelligence, the CoE’s Framework Convention on AI can help codify the key principles that any national or regional framework should include. With a strong foundation in human rights law, including respect for equality and non-discrimination, human dignity and individual autonomy, and privacy and personal data protection, the Framework Convention on AI is conceived as a foundational, umbrella treaty on top of which more prescriptive rules can be adopted at the country level. 

In this way, complementarity can be achieved between the Framework Convention on AI and the EU AI Act, and between the Framework Convention on AI and Convention 108: the EU AI Act and Convention 108 both go beyond principles into prescriptive requirements for the regulation of AI systems and the processing of personal data, respectively. From 5 September 2024, when the Framework Convention formally opens for signature and ratification by States, the breadth of the treaty’s adoption beyond CoE Member States should be closely monitored, as should how its mechanisms for international cooperation on AI regulation progress in practice. 

FPF has published a Two-Page Fact Sheet outlining the scope, key terms, general obligations and common principles, risk-based approach requirements, and guidance on international cooperation.

