
AI Ethics Guidance Manual 2025


Introduction

The General Secretariat of the Gulf Cooperation Council issued the AI Ethics Guidance Manual 2025 as part of its efforts to develop institutional frameworks and keep pace with rapid technological advancement worldwide. With the significant expansion in the use of artificial intelligence technologies, an urgent need has emerged to strengthen the governance of these technologies to ensure performance efficiency, accuracy of outcomes, and the protection of data and information.

The manual provides a unified ethical framework for the GCC countries to regulate the development and use of artificial intelligence systems, ensuring the preservation of human dignity, safeguarding values, and promoting the responsible and safe use of modern technologies. This framework aims to support the adoption of AI across the GCC at various technical, social, and political levels, based on four core values and seven key principles as follows:

First: The Four Core Values

1. Respect for Human Dignity, Freedom, and Autonomy:

The guide emphasizes the intrinsic value of human beings, which must not be undermined or diminished in any way, including by artificial intelligence systems. In this context, it stresses the need to develop these systems in a manner that respects individuals' physical and mental safety, preserves their cultural identity, and enables them to exercise their will freely and autonomously.

2. Respecting Sharia, the Constitution, and Strengthening Gulf Cohesion:

The guide clarifies that artificial intelligence systems must comply with the principles of Islamic Sharia, the constitution, national legislation, and the applicable regulations in the GCC states, while respecting local standards and procedures. These systems must not undermine the effectiveness of such laws or diminish their impact.

3. Environmental Protection and Promotion of Sustainability:

The guide highlights that digital transformation initiatives face significant environmental challenges due to the potential negative impacts on ecosystems affecting humans and living beings. It therefore emphasizes the need to consider environmental risks arising from the operation and training of AI systems—such as carbon emissions and water usage for cooling—ensuring environmental protection and sustainability for future generations.

4. Promoting Peaceful Use to Enhance Human Well-Being:

The guide urges that AI systems contribute to improving the quality of life for individuals and communities, and that responsible authorities ensure these systems are not deployed in ways that threaten security, safety, or social cohesion. It also emphasizes that AI systems must be technically safe and reliable, taking into account the needs of vulnerable groups—such as persons with disabilities and the elderly—during development and implementation.

Second: The Seven Core Principles

1. Principle of Human-Centered Decision-Making:

AI systems must ensure human autonomy and decision-making capacity by functioning as tools that empower society, support fundamental rights, and provide human oversight mechanisms to mitigate potential risks. This principle emphasizes enhancing individuals’ ability to make informed choices aligned with their goals.

2. Principle of Safety and Harm Prevention:

This principle emphasizes adopting a preventive approach in developing AI systems, ensuring reliable behavior and minimizing unintended harm. The guide recommends implementing fail-safe mechanisms for high-risk scenarios, evaluating the accuracy of system predictions and decisions, and proactively testing both the systems and their safety measures.

3. Principle of Fairness, Equity, and Non-Discrimination:

This principle requires preventing biases in data and algorithms that could lead to prejudice or discrimination against specific groups or individuals. It also calls for ensuring fair access, so that individuals and communities can benefit from artificial intelligence, and for establishing oversight processes that guarantee fairness and equity throughout all stages of a system's lifecycle, thereby securing equal opportunities and minimizing the risk of discrimination.

4. Principle of Privacy and Data Protection:

This principle emphasizes regulating data collection and processing, and controlling access in accordance with governance principles to prevent privacy breaches. It requires protecting users' personal rights throughout all stages of system usage by limiting unrestricted access to data, monitoring data integrity and quality, and establishing an appropriate framework for data protection.

5. Principle of Transparency and Explainability:

This principle requires enhancing the explainability and interpretability of artificial intelligence, which is essential for building user trust in these systems. It also requires that the relevant actors inform users of any decision made on the basis of AI-derived information, clarify the mechanism by which such decisions are reached, and document the processes that generate the outputs, thereby fostering trust and understanding.

6. Principle of Accountability and Responsibility:

This principle requires establishing mechanisms to ensure accountability for the performance of AI systems, identifying the entity legally responsible for the systems and their outcomes, setting clear regulations for evaluation and auditing, and activating mechanisms for reporting errors, thereby ensuring that these systems operate under clear and defined responsibility.

7. Principle of Integrity and Non-Falsification:

This principle aims to prevent the falsification of facts and the exaggeration or distortion of AI capabilities for secondary purposes, such as profit-making, while establishing clear standards before deploying these systems on a wide scale to ensure their reliability and prevent unethical use.

Conclusion

The AI Ethics Guidance Manual 2025 represents a strategic milestone in the Gulf Cooperation Council countries' path toward building an advanced and secure digital ecosystem that respects privacy and values while balancing innovation and responsibility. Adopting this ethical framework is not aimed solely at regulating the development of modern technologies; it also seeks to ensure that artificial intelligence serves as a supportive and complementary tool for humans, rather than posing a threat to their rights or to societal harmony.

By adhering to the four core values and the seven principles established by the framework, the GCC countries lay a solid foundation for effective governance that enables them to face future challenges, enhances their ability to leverage AI capabilities in supporting sustainable development, improving quality of life, and advancing the digital transformation journey in the region.

The coming phase will require putting these principles into practice through modern legislation, clear strategies, and the building of national capacities capable of ethical and responsible innovation. Accordingly, the GCC countries' success in shaping an advanced model of ethical artificial intelligence will mark a defining milestone in their journey toward leading the digital future, both regionally and globally.
