Since Sunday, the first rules of the European AI Act have been in effect.
The European AI Act bans a range of risky AI applications and imposes fines of up to €35 million or 7 percent of global annual turnover for violations. A dedicated AI Office within the European Commission will serve as an advisory body to member states.
What rules?
Article 4: AI literacy
Providers and deployers of AI systems must ensure an adequate level of AI literacy among their staff and users. This includes training and education tailored to the technical complexity of the AI systems and to the people who work with them.
Article 5: Banned AI practices
Several AI practices are banned outright, such as systems resembling China's social credit system, in which people's scores are adjusted based on their behavior and reputation. An overview:
- Manipulation and subliminal influence: AI that subconsciously manipulates behavior, steering people toward important choices that could harm them, is not allowed.
- Exploitation of vulnerable groups: AI systems may not exploit vulnerabilities such as age or disability to influence behavior.
- Social credit systems: AI that scores people over long periods based on personality traits and behavior is banned.
- Crime prediction: AI may not predict whether someone will commit a crime based solely on personality traits.
- Biometric surveillance: the use of AI for facial recognition in public spaces by law enforcement is strictly regulated and permitted only in exceptional cases.
The law applies to all companies operating within the EU, including non-European technology companies. Many major players, including Google, OpenAI and Microsoft, have said they will abide by the rules of the AI Act. Others, such as Meta and Apple, have not, arguing that the law stifles innovation. But regardless of their objections, they too are simply required to comply.