Global AI Governance: Five Key Frameworks Explained

With generative artificial intelligence (AI) technologies entering nearly every aspect of human life, it has become ever more urgent for organizations to develop AI systems that are trustworthy and subject to good governance. To that end, various international organizations and technical bodies have established standards for responsible AI development and deployment.

To make sense of this rapidly evolving landscape of AI governance, this article summarizes five of the most influential AI-related standards and frameworks from different organizations. We begin with the OECD’s foundational AI principles, which established international consensus on AI values, and UNESCO’s Recommendation on AI ethics, which addresses the broad societal implications of AI development.

Following those are three more technical standards that translate high-level commitments into actionable practices: the U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework, the ISO/IEC 42001 international standard for AI management systems, and the IEEE 7000-2021 standard for ethical system design.

Taken together, these five standards give organizations a solid foundation on which to build responsible and ethical AI systems.

In 2019, the Organisation for Economic Co-operation and Development (OECD), an intergovernmental group of developed nations, established five core principles that form a global consensus on the responsible and trustworthy governance of AI: (1) inclusive growth, sustainable development and well-being, (2) respect for the rule of law, human rights, and democratic values, including fairness and privacy, (3) transparency and explainability, (4) robustness, security, and safety, and (5) accountability.

These non-binding but influential principles emphasize a rights-based approach, guiding the development and deployment of AI systems in a way that promotes human rights and democratic values.

Governments around the world use the OECD recommendations and related tools to design policies and develop AI risk management frameworks, laying the groundwork for global interoperability across regulatory jurisdictions. OECD member countries are expected to actively support these principles and make their best efforts to implement them.

The General Conference of the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted its _Recommendation on the Ethics of Artificial Intelligence_ in 2021 to address the broad societal implications of AI development.

Endorsed by all 193 member states, the Recommendation centers on the protection of human rights and fundamental freedoms, articulated through principles of “Do No Harm,” safety and security, fairness and nondiscrimination, privacy, sustainability, transparency, human oversight, and accountability.

In January 2023, NIST released its AI Risk Management Framework (AI RMF), a voluntary set of guidelines addressed to individuals and organizations who want to act responsibly in developing products and services containing AI.

The AI RMF breaks down AI management into four core functions: (1) “Govern” – implementing policies to encourage a culture of risk awareness and management with respect to AI systems, (2) “Map” – ensuring that people within the organization thoroughly understand the risks and benefits of the AI system in question, (3) “Measure” – continuously testing and monitoring the AI system to ensure its trustworthiness, and (4) “Manage” – making sure that enough resources are allocated to deal with the mapped and measured risks.
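As a rough illustration of how these functions might be operationalized, the Python sketch below tracks risk items against the four AI RMF functions. The class names, fields, and example entries are our own hypothetical choices, not artifacts of the framework itself:

```python
from dataclasses import dataclass, field

# Hypothetical internal tracker organized around the AI RMF's four
# functions. Names and fields are illustrative, not defined by NIST.
RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

@dataclass
class RiskItem:
    description: str      # e.g., "Training data may underrepresent a user group"
    function: str         # which AI RMF function the activity falls under
    owner: str            # accountable person or team
    status: str = "open"  # open / mitigated / accepted

    def __post_init__(self):
        if self.function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown AI RMF function: {self.function}")

@dataclass
class RiskRegister:
    items: list[RiskItem] = field(default_factory=list)

    def add(self, item: RiskItem) -> None:
        self.items.append(item)

    def open_items(self, function: str) -> list[RiskItem]:
        # Surface unresolved risks for a function, e.g., at a release gate.
        return [i for i in self.items
                if i.function == function and i.status == "open"]

register = RiskRegister()
register.add(RiskItem(
    description="Model outputs not yet evaluated for demographic bias",
    function="measure",
    owner="ml-assurance-team",
))
print(len(register.open_items("measure")))  # -> 1
```

Gating releases on open “Measure” items is one simple way a team could make the framework’s expectation of continuous testing and monitoring operational.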

ISO/IEC 42001 (“ISO 42001”) is an international standard promulgated in December 2023 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).

This standard focuses on the _management_ structure of AI systems, as opposed to the AI systems themselves. It is billed as “the world’s first AI management system standard.” Although compliance with this standard is voluntary, ISO/IEC 42001 sets out a more formal set of guidelines that organizations can use to create and manage a well-functioning AI management system (or “AIMS”), while balancing governance with innovation.
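By way of a loose sketch (and not an official artifact of the standard), an organization might track AIMS readiness against the harmonized management-system clause structure that ISO/IEC 42001 follows. The task descriptions below are our own illustrative assumptions:

```python
# Hypothetical readiness checklist keyed to the harmonized
# management-system clause structure (Clauses 4-10) that ISO/IEC 42001
# follows. Task text is illustrative, not quoted from the standard.
AIMS_CHECKLIST = {
    "4. Context of the organization": ["Inventory AI systems in scope"],
    "5. Leadership": ["Approve an AI policy at executive level"],
    "6. Planning": ["Run AI risk and impact assessments"],
    "7. Support": ["Train staff on AI policy and roles"],
    "8. Operation": ["Apply controls across the AI lifecycle"],
    "9. Performance evaluation": ["Schedule internal AIMS audits"],
    "10. Improvement": ["Track nonconformities and corrective actions"],
}

done: set[str] = set()

def readiness() -> float:
    """Fraction of checklist tasks marked done (a toy metric)."""
    total = sum(len(tasks) for tasks in AIMS_CHECKLIST.values())
    return len(done) / total

done.add("Approve an AI policy at executive level")
print(f"{readiness():.0%} of AIMS tasks complete")  # -> 14%
```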

IEEE 7000 was published in 2021 by the Institute of Electrical and Electronics Engineers, before generative AI exploded into public consciousness with the introduction of ChatGPT by OpenAI. This standard is addressed primarily to engineers and technical workers developing software-based products and services (or “systems”).

The IEEE 7000 standard consists of five main processes: (1) defining the system’s stakeholders and its expected operation and context of use, (2) eliciting ethical values from various stakeholders, (3) formulating specific ethical value requirements for the system, (4) ensuring that these ethical requirements are implemented into the design of the system, and (5) maintaining transparency throughout the process, including sharing how ethical concerns have been addressed during system design.
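To make the traceability idea concrete, here is a hypothetical Python sketch of processes (2) through (5): stakeholder values are refined into ethical value requirements, which in turn trace to system requirements, with a simple coverage check supporting transparency. All class and field names are our own, not drawn from the standard:

```python
from dataclasses import dataclass

# Hypothetical sketch of IEEE 7000-style traceability: each elicited
# stakeholder value is refined into an ethical value requirement (EVR),
# which concrete system requirements must then satisfy.

@dataclass(frozen=True)
class EthicalValueRequirement:
    value: str        # elicited stakeholder value, e.g., "privacy"
    requirement: str  # what the system must do to honor that value
    rationale: str    # recorded for transparency (process 5)

@dataclass(frozen=True)
class SystemRequirement:
    identifier: str
    text: str
    satisfies: tuple[EthicalValueRequirement, ...]  # traceability link

evr = EthicalValueRequirement(
    value="privacy",
    requirement="Personal data is retained no longer than the stated use requires",
    rationale="Elicited from end-user panel during value elicitation",
)

req = SystemRequirement(
    identifier="SR-012",
    text="Purge user interaction logs after 30 days unless legally required",
    satisfies=(evr,),
)

# Simple coverage check: every EVR should be satisfied by at least one
# system requirement before design sign-off.
all_evrs = {evr}
covered = {e for r in (req,) for e in r.satisfies}
assert all_evrs <= covered, "Untraced ethical value requirements remain"
```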

Impact of Responsible AI Governance Frameworks

Although these AI frameworks share common foundational elements, each has its own focus and nuances. They generally apply to both developers and deployers of AI systems, and they tend to be industry-agnostic.

To illustrate, the EU AI Act follows the OECD’s definition of AI systems. Colorado’s AI Act requires deployers of high-risk AI systems to maintain a risk management program that is reasonable in light of established frameworks such as the AI RMF, ISO 42001, or other nationally or internationally recognized frameworks that are substantially equivalent.

These frameworks appear as reference points in subregulatory guidance, industry codes of conduct, and standards of practice that reflect prevailing industry norms.

Conclusion

The five standards described above (the OECD AI Principles, the UNESCO Recommendation on the Ethics of Artificial Intelligence, the NIST AI RMF, ISO 42001, and IEEE 7000) are complementary rather than competing: all encourage ethical and responsible AI development, but each serves a different purpose.

Organizations can layer these approaches to translate high-level ethical principles and AI risk management structures into concrete AI management controls and design standards. This approach can align AI assurance programs with binding laws and best practices.