The European Commission is considering weakening the AI Act. Violations of rules around high-risk AI that could negatively impact citizens’ lives would not automatically lead to fines because Trump and American Big Tech find that inconvenient.
The European Commission is considering weakening the AI Act under foreign pressure.
The AI Act has been in force since August 2024, but many important provisions have yet to take effect. The rules concerning high-risk AI systems, for example, are due to apply from August 2026. These are systems in which the use of AI can pose serious risks to the health, safety, or fundamental rights of citizens.
Less Fundamental and Less Transparent
In an amendment proposal framed as simplification, the Commission appears not to view these rights as so fundamental after all. For this highest risk category, the EU wants to introduce an additional year-long transition period during which large tech companies can violate the rules protecting EU citizens with impunity.
This is striking, given that the AI Act and its rules have been known for some time. Innovation in AI is progressing so rapidly that current systems and models even date from after the regulation entered into force in 2024.
Transparency requirements would also be relaxed initially. If the proposal is adopted, fines for violations of these rules will be postponed until August 2027. This is meant to give providers of AI tools sufficient time, yet these are not long-standing tools whose legacy software must be adapted. This concerns new software whose creators simply do not bother to build in transparency, because they find it complex and the effort would slow their self-imposed, extremely high pace of innovation.
Pressure from the US
The major American technology companies are backed by Donald Trump in their ruthless AI innovation sprint. Self-imposed rules for responsible AI development were discarded some time ago. The unreliability of AI systems plays no role either: solutions are rolled out whether they function properly or not.
The European AI Act promised to offer a minimum counterbalance to that approach, by at least demanding responsibility and transparency for critical systems. This is not to the liking of Big Tech and Trump, and the European Commission now seems inclined to dilute the AI Act significantly.
This approach is also gaining support in Europe. Large companies involved in AI have likewise opposed the AI Act. They want to innovate at an American pace and believe the new rules make this more difficult; they therefore fear a blow to their competitive position.
The Commission’s new proposal still needs to be approved by the member states. The measures and adjustments can still change before November 19.
