
EU Moves Forward with General-Purpose AI Code of Practice

The European Union is taking a significant step toward shaping the future of artificial intelligence by establishing its first ‘General-Purpose AI Code of Practice,’ part of the broader AI Act. Spearheaded by the European AI Office, the initiative brings together hundreds of global experts from academia, industry, and civil society to collaboratively create a framework addressing critical topics such as transparency, copyright, risk assessment, and governance.

The initiative kicked off with an online plenary session on September 30 that attracted nearly 1,000 participants, marking the start of a process intended to produce a finalized draft by April 2025. The Code of Practice is designed to serve as a foundational guideline for applying the AI Act to general-purpose AI models, including large language models (LLMs) and AI technologies used across different industries.

To facilitate this development, four working groups have been formed, each led by notable figures in the industry. These groups will concentrate on key areas, including transparency and copyright issues, risk identification, technical risk mitigation, and internal risk management. Experts such as AI researcher Nuria Oliver and German copyright law expert Alexander Peukert are among those guiding these efforts. The European AI Office has indicated that these groups will convene between October 2024 and April 2025 to draft relevant provisions, gather input from stakeholders, and refine the Code of Practice through continuous dialogue.

The AI Act, which was passed by the European Parliament in March 2024, represents a groundbreaking legislative effort to regulate AI technology throughout the EU. It employs a risk-based framework for AI governance, categorizing systems by risk level and enforcing specific compliance requirements. This act is particularly significant for general-purpose AI models due to their wide-ranging applications and potential societal effects, often placing them in higher-risk classifications.

Despite the EU’s intentions, some prominent AI companies, including Meta, have expressed concerns that the regulations may be overly stringent and could hinder innovation. In light of this, the EU’s collaborative drafting process for the Code of Practice aims to strike a balance between ensuring safety and ethics while encouraging innovation. The multi-stakeholder consultation has already received over 430 contributions, which will inform the code’s development.

With the goal of setting a standard for the responsible development and management of general-purpose AI models by April 2025, the EU is focused on minimizing risks while maximizing societal advantages. As the global AI landscape continues to evolve, this initiative could significantly influence AI regulatory approaches in other countries, as many look to the EU for guidance on managing emerging technologies.
