As of August 2, 2025, companies like OpenAI, Google, and Meta must comply with obligations for “general-purpose” AI models under the AI Act. Meta does not agree.
The European Commission has published guidelines to alert developers of “general-purpose” AI models to their obligations under the AI regulation. The guidelines aim to clarify who falls under the regulation and what is expected of these actors. The Commission seeks to provide legal clarity as the deadline approaches.
These guidelines apply to companies developing “general-purpose” AI models, such as OpenAI, Google, Meta, and Anthropic. Their deadline is August 2.
Meta Does Not Participate
However, the AI Act has met with a lukewarm reception across the Atlantic. American tech companies and the U.S. government view the European rules primarily as an administrative and legal burden, and Meta has vocally opposed the EU rules before.
In a public LinkedIn statement, Meta’s Chief Global Affairs Officer Joel Kaplan said that the (voluntary) code of practice “goes too far” and that Europe is “heading in the wrong direction” with AI. During the rollout of Meta AI, the company already faced considerable pushback in the EU. Meta will not willingly comply with the European AI rules. The Commission responded via Bloomberg that it will enforce compliance if necessary.
Exception for Open Source
The guidelines introduce technical criteria that developers can use to determine whether their AI model counts as “general-purpose”. Specific obligations apply to such models under the AI regulation. For downstream parties that build on an existing model, only those making “significant modifications” are subject to these obligations; those making only minor changes are exempt.
Exceptions are also provided for open source. The guidelines clarify when providers of open-source AI models are exempt. The Commission aims to promote transparency without hindering innovation.
Deadline Approaches
The AI Act is being implemented in phases. An initial set of rules already came into effect in February. From August 2, 2025, providers of general-purpose AI models entering the market after that date must comply with the obligations. Models posing systemic risks are additionally subject to a notification requirement to the AI Office.
From August 2, 2026, the Commission will have enforcement powers. From then on, fines can also be imposed for non-compliance. AI models already on the market before August 2025 must be compliant with the obligations by August 2, 2027.
Although the guidelines are not legally binding, they indicate how the Commission interprets the AI regulation. They serve as a guide for enforcement and supplement the code of practice for AI models published this month. Developers are advised to evaluate their obligations in a timely manner and bring their models into line accordingly.