European Commission clarifies definition of AI systems


The European Commission provides clarity on which systems fall within the scope of the AI regulation.

The European Commission has published guidelines clarifying the definition of an AI system. They are intended to support correct application of the rules in the AI Regulation, which has been in force since Aug. 1. The guidelines are not binding and may be updated in the future based on new insights and practice.

Guidelines

The AI Regulation, the first rules of which have been in effect since Feb. 2, classifies AI systems by risk level: there are prohibited systems, high-risk systems and systems subject to transparency obligations. The new guidelines should clarify when a software system counts as an AI system under this legislation, helping providers and other interested parties comply.

The Commission published these guidelines to complement its earlier documents on prohibited AI practices. It emphasizes that the guidelines impose no legal obligations but serve as a tool for interpreting and applying the AI Regulation, and that they will be revised as needed in light of practical experience and new technological developments.

First rules in effect

With the first provisions of the AI Act now applicable, some restrictions are already in place, including a ban on AI uses deemed to pose unacceptable risk. The provisions in effect also cover AI literacy and the legal definition of AI systems.

The Commission has approved the draft guidelines but has not yet formally adopted them; further updates and adjustments remain possible.