The European Union has published a document outlining the code of conduct for companies building large general-purpose AI models.
The European Union yesterday published the first draft of its code of conduct for organizations that train general-purpose AI models, giving companies guidelines for managing risk and avoiding fines. The published text is an initial version; the rules are not expected to take their final form until spring 2025.
Code of conduct in four key areas
In the meantime, the broad outlines are already clear. The document is the first attempt to establish a code of conduct for builders of advanced AI models trained with computational power in excess of 10²⁵ FLOPs. Companies expected to be covered include OpenAI, Google, Meta, Anthropic and Mistral. The European AI Act, which took effect Aug. 1, left the detailed GPAI rules to be worked out later. That has now been done, and the document is being presented for feedback and refinement.
The code of conduct addresses four key areas: transparency (e.g., providing information on web crawlers used), copyright compliance (citing sources when training AI models), risk assessment (to prevent cybercrime, discrimination and loss of control) and technical risk mitigation (such as fail-safe access controls and model data protection).
Fines for violations could reach 35 million euros or seven percent of annual global turnover. Companies can submit feedback until Nov. 28, and the final document will be released in May 2025.