CAIDP provides comments to French Competition Authority on AI

The Center for AI and Digital Policy provided detailed comments to the French Competition Authority in response to a public consultation on generative AI.

The use of image-to-text techniques to analyze images of people has staggering implications for personal privacy and autonomy: a GPT-4 user could not only link an image of a person to detailed personal data available in the model, but also have OpenAI's product GPT-4 make recommendations and assessments, in a conversational manner, regarding that person.

The existence of open-source resources, such as models and data, is likely to reduce barriers to entry, encourage the emergence of new players, and improve the competitive functioning of the sector. […] But open-source models also raise concerns about compliance with regulatory safeguards intended to protect public safety and fundamental rights. Without the ability to ascertain who is responsible for certain AI systems, it may become more difficult to protect important public interests.

Leaner foundation models that require less data and computing power can enable smaller companies or individuals with limited resources to participate in the AI market.

There are concerns about the lack of transparency in the functioning of gen AI systems, the data used to train them, issues of bias and fairness, potential intellectual property infringements, possible privacy violations, third-party risk, as well as security concerns.

To address these risks, we recommend the following:

  1. The entire lifecycle of generative AI needs to be governed through a transparent monitoring system that is periodically reviewed and updated. Monitoring should be based on human-centric metrics and must include impact assessments for human safety and risk minimization. The results of such assessments need to be published.

  2. Rigorous documentation and disclosure of training-set data.

  3. Within the human rights impact assessments, any red flags (public safety or civil rights risks) need to be highlighted so that non-AI-based systems remain a viable alternative.

  4. Implementation of independent third-party audits of the system.

At CAIDP, we believe human-centric values and fair, competitive practices go hand in hand. Thus, we call for a human-centric approach to the design, development, and deployment of generative AI systems. By this, we mean such systems must have, at their core, democratic values and privacy principles, including data minimization and anonymization, data capture safeguards, data protection, data quality, data privacy, transparency, and confidentiality.

Find the whole statement here.
