The Commission revealed disagreements between general-purpose model providers and other stakeholders at the first Code of Practice plenary for general-purpose artificial intelligence (GPAI) on Monday (30 September).
For providers of GPAI systems like ChatGPT, the EU AI Act relies heavily on the Code of Practice, which will detail what the Act’s risk management and transparency requirements entail in practice until standards are finalised, expected sometime in 2026.
On Monday, the Commission shared an international, academia-heavy list of chairs and vice-chairs for the working groups that will draft the Code and “welcomed almost 1,000 participants” to the first virtual plenary of the drafting process, according to an email from a Commission spokesperson.
The working groups will receive input from three sources: a multi-stakeholder consultation, workshops bringing GPAI model providers together with the chairs and vice-chairs, and the Code of Practice plenaries.
The first GPAI provider workshop is scheduled for mid-October, and the Code’s first draft will be ready around 3 November, according to two sources.
“A comprehensive report [on the stakeholder consultation] will be published in autumn,” and the final version of the Code of Practice will be “published and presented in a closing plenary, which is expected to take place in April 2025,” the email from the spokesperson said.
At the plenary, the Commission presented slides with preliminary results from the stakeholder consultation, which ended on 18 September and received “almost 430” submissions from industry, civil society, and academia, according to the Commission.
Provider and non-provider input
The slides, seen by Euractiv, showed statistics and listed the measures many stakeholders want included in the Code.
GPAI providers accounted for only 5% of the input, but the measures they supported were marked with a star.
The AI Act requires providers to summarise the data used to train a GPAI model and to report it according to a template to be designed by the Commission’s AI Office.
About 70-80% of non-provider stakeholders want this template to cover licensed content, data scraped from the internet, and open data repositories.
Meanwhile, GPAI providers supported disclosing licensed, scraped, proprietary, user-generated, and synthetic data used for training but were less supportive of sharing what open datasets they use.
In terms of risk assessment, GPAI providers were less keen than others on strict measures, like third-party audits or safety demonstrations related to specific risk thresholds.
Instead, most stakeholders, including GPAI providers, agree that the documentation should specify the model’s licence, the AI systems it can be part of, and the tasks it is intended to perform.
Academia and “experts in personal capacity”
According to the statistics presented at the plenary, the Commission received written input from a diverse group: 32% of responses came from industry, 25% from rightsholders, 16% from civil society, and 13% from academia.
Civil society organisations had previously worried that Big Tech would have too much influence over the process.
Meanwhile, of the “almost 1,000” stakeholders attending the first plenary, the two biggest groups were “experts in personal capacity” (34%) and academia (30%).
With so many participants, one person involved in the drafting told Euractiv that “the Commission and the chairs will have to closely control the drafting and the comments that can be integrated; otherwise, the process won’t work.”
[Edited by Eliza Gkritsi/Martina Monti]