CAIDP Europe Calls for Stronger AI System Definition and Clearer Prohibited Practices in EU AI Act Guidelines

The Center for AI and Digital Policy Europe has submitted comprehensive comments to the European AI Office regarding the implementation guidelines for the AI Act, focusing on two critical areas: the definition of AI systems and prohibited practices.

Definition of AI Systems

CAIDP Europe proposes a more precise definition of "AI system":

"A machine-based system that produces predictions, synthetic content, recommendations, decisions or other outputs that may mimic human behaviour and can influence physical or virtual environments. AI systems vary in safety, reliability, and transparency."

This proposed definition emphasizes:

· Human control and oversight over AI systems' autonomy levels
· Recognition that even systems with zero autonomy fall under the Act's scope
· Clear distinction between safety-impacting or rights-impacting AI systems and simple software
· Avoidance of anthropomorphizing language that attributes personality to AI systems

Scope of Prohibited Practices

For the Article 5 prohibitions, CAIDP Europe recommends:

1. Private Sector Scope:

· Explicit coverage of AI systems deployed by private companies
· Recognition of dark patterns as manipulative or deceptive techniques
· A clear distinction between "placing on the market," "putting into service," and "use," with corresponding stakeholder obligations

2. Rights Protection:

· Prohibition of systems that impede fundamental rights
· Safeguards for procedural rights, including the right to a remedy, a fair trial, and the presumption of innocence
· Recognition of linguistic and cultural analysis as a potential means of racial inference

Legal Framework

Following the Court of Justice of the European Union's judgment in Ligue des droits humains (C-817/19), CAIDP Europe emphasizes:

1. Algorithmic Transparency: Systems must function transparently and produce traceable results
2. User Understanding: Affected individuals must be able to understand how the criteria and programs at issue operate
3. Human Oversight: Decisions affecting fundamental rights must rest on pre-determined criteria and be subject to individual human review

These recommendations aim to strengthen the AI Act's implementation while ensuring robust protection of fundamental rights and clear standards for the AI ecosystem.
