Civil society organisations have criticised the Commission’s drafting of guidelines on ambiguous Artificial Intelligence (AI) Act prohibitions in a joint statement, demanding that human rights and justice be at the guidelines’ core.
AI Act prohibitions on systems of “unacceptable risk” enter into force on 2 February, potentially setting the scale of the entire risk-based framework. However, the Commission has yet to provide guidelines on interpreting the legal text, and stakeholders complained that a December consultation was late and did not include a draft document despite one circulating inside the Commission.
In a statement published on Thursday, the organisations asked that problematic practices be included in the provisions to address what they see as “grave loopholes” in the final AI Act text.
They also want the Commission to decide that “simple” systems should count as AI in the wake of an AI definition consultation run in parallel to the prohibitions consultation.
They wrote that they are concerned developers could use the definition of AI and high-risk AI systems to bypass the AI Act’s obligations. This could be done by simplifying a system without changing its functionality.
The statement is signed by 21 civil society organisations and four professors, including Access Now, Amnesty International and European Digital Rights (EDRi).
In a separate comment, Blue Tiyavorabun, a policy advisor at EDRi, criticised the Commission’s consultation process as deeply deficient in transparency, inclusion and accessibility. Tiyavorabun added that the Commission should give proper notice and publish the draft so that meaningful feedback can be given.
Setting the bar for the AI Act
The bulk of AI Act requirements apply to “high-risk” AI systems, less risky than the “unacceptable” prohibited use cases but more risky than the largely exempted “limited” risk areas.
The questions raised are foundational to the Act, determining which systems are covered and the extent of the obligations imposed on them.
Counsel and Programme Director for Equity and Data at the Centre for Democracy and Technology, Laura Lázaro Cabrera, told Euractiv that if too few AI systems are prohibited, it could skew the entire risk scale, raising the bar to be classified as both high and limited risk.
However, the Commission will have little time to review the feedback, let alone make changes, as a senior Commission official said at an event this week that the guidelines will be published “very soon.”
[Edited by Alice Taylor-Braçe]