What does the AI Act mean for the public sector? The legislation can foster trust in AI, but it walks a delicate tightrope between regulation, technology, and practical implementation.
The AI Act officially entered into force on August 1, 2024, and its rules are being phased in gradually. The legislation's overarching purpose is to guide the rapid integration of AI into our daily lives in a controlled manner. That guidance is certainly needed, as uncertainty about what is permissible hinders adoption in the business world.
This tension is even more palpable in the public sector, where decisions made by or with AI can have a significant impact on citizens. The Smals AI Competence Center recently brought together four experts from different disciplines to discuss the impact of the AI Act on the public sector. The panel believes the law can have a positive impact on trust in the technology, but sees obstacles to practical implementation.
Systemic Risks
The rules of the AI Act are determined by a risk categorization of AI systems. To certain AI systems, the European legislator unequivocally shows a red card. This applies, among other things, to technology that can be used for adverse social scoring, or to facial and emotion recognition in public spaces.
“Then there is the category of ‘high-risk AI systems’, which are subject to specific data quality requirements”, explains Thomas Gils, AI Act expert at KU Leuven.
Further down the ladder, we encounter AI systems that must comply with specific transparency obligations, such as chatbots and image generators. These systems can build on 'general-purpose' AI models, a category that covers most LLMs. Those models are subdivided further depending on whether or not they may pose systemic risks.
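How a system ends up in a tier can be pictured as a decision ladder, checked from strictest to lightest. The sketch below is a deliberately simplified illustration in Python, not a legal test: the tier names follow the description above, while the boolean checks are placeholder assumptions rather than the actual criteria of the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "banned outright, e.g. adverse social scoring"
    HIGH_RISK = "allowed only under strict requirements"
    TRANSPARENCY = "allowed with disclosure duties, e.g. chatbots"
    MINIMAL = "no specific AI Act obligations"

def classify(prohibited_practice: bool,
             high_risk_use: bool,
             interacts_with_people: bool) -> RiskTier:
    """Walk the ladder from the strictest tier down (illustrative only)."""
    if prohibited_practice:      # e.g. social scoring, emotion recognition in public spaces
        return RiskTier.PROHIBITED
    if high_risk_use:            # e.g. output that significantly affects citizens
        return RiskTier.HIGH_RISK
    if interacts_with_people:    # chatbots, image generators: users must know it's AI
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# A chatbot that is neither prohibited nor high-risk lands in the transparency tier:
print(classify(False, False, True).name)  # TRANSPARENCY
```

General-purpose models themselves are assessed on a separate track, with the extra subdivision for systemic risk mentioned above.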
Strengthening GDPR
This already sounds complex, but in essence the AI Act reinforces the GDPR, notes Kurt Maekelberghe, DPO at the Crossroads Bank for Social Security (KSZ). “The GDPR applies in full to AI systems. For high-risk systems, we must conduct a comprehensive analysis of the threats to the fundamental rights of the individuals whose information is processed or who depend on the output. The government has no room for wrong decisions”.
“Certain data can be retained for training models. Therefore, it is important to carefully review the contract with your supplier to ensure that your data is not kept longer than necessary for processing”, adds Katy Fokou, AI researcher for Smals.
Literacy
An important principle of the AI Act is 'AI literacy'. The law requires that people who use AI have the necessary technical, practical, and legal knowledge to handle it responsibly. “In theory, this is very broad, as everyone with a smartphone essentially uses AI. Whether it will be enforced that broadly is another question”, says Gils.
Fokou clarifies how Smals approaches this. “We take a pragmatic approach, with specific training for each role. The first step is recognizing the importance of regulation, but also of innovation. AI tools are coming to market, so it is essential to inform everyone and raise awareness of what is and isn’t possible, to ‘demystify’ the technology and build trust. But I don’t think there is one single training that suits everyone”.
Karel Van Eeckhoutte, Strategic Advisor at the National Social Security Office (RSZ), testifies: “The RSZ is strongly committed to increasing AI literacy within the organization. We offer a wide range of activities, including training sessions, newsletters, and after-work sessions. These cover what AI is, the forms it takes, and the risks associated with it. We actively encourage employees to ask questions and share experiences.”
Who is Responsible?
From literacy, the conversation moves to governance. Maekelberghe points to a lack of uniform rules. “There is currently no material available of which we can say: follow this and you will succeed. The existing standards still need to be interpreted. At this moment, we need to work on a practical model”.
The panel unanimously agrees that governance should not stand alone within the organization. “At RSZ, we work with a multidisciplinary AI Governance group, each with their own expertise. This team develops initiatives and provides support on the ground. In the Innovation Board, we involve all directors in determining the strategic direction around AI. This way, AI is considered organization-wide”, says Van Eeckhoutte.
“The Flemish government has developed a ‘playbook’ for the public sector,” Gils adds. “It’s an interesting document to look at, but as an organization you need to determine where you want to go. In every AI project, you must involve people and inform them of the risks and limitations, both technical and legal. That cannot happen if governance sits in an ‘ivory tower’. Who performs control and oversight differs for each project and each company.”
AI in the Shadows
A broad governance strategy must guard against so-called shadow AI, where employees start using AI tools on their own. Maekelberghe: “The problem is related to the access employees have to applications on the internet. This is not a new phenomenon. Be cautious with tools that leak personal data or take it outside the company environment. As an employer, you must clearly define which resources can be used”.
“We saw this almost immediately with ChatGPT”, Fokou interjects. “One of the first measures we took within Smals was to establish policy rules for the use of generative AI. It’s normal for people to want to use these tools, but you must properly inform them on how to do so correctly”.
“You can whitelist or blacklist tools, but the latter is difficult because AI is now everywhere. How are you going to manage that?”, Van Eeckhoutte wonders aloud. Maekelberghe is quick to respond:
“I often find the discussion too black-and-white. The question for me is whether you can use AI for a specific purpose. Can you use ChatGPT to write a macro in Excel? Yes, but know that there may be errors in the code. Can you use ChatGPT to draft personal case files? No, because then you’re dealing with personal data. In practice, it’s often more nuanced”.
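Maekelberghe's nuance, judging a tool together with its purpose rather than the tool alone, can be sketched as a small deny-by-default policy table. Everything below is hypothetical: the tool names, purposes, and rules are illustrative examples built from the quote above, not an actual Smals or RSZ policy.

```python
# Hypothetical purpose-based policy: decisions are made per (tool, purpose)
# pair, instead of whitelisting or blacklisting a tool outright.
POLICY: dict[tuple[str, str], tuple[bool, str]] = {
    ("chatgpt", "write_excel_macro"): (True, "Review the generated code; it may contain errors."),
    ("chatgpt", "draft_case_file"): (False, "Involves personal data; not permitted."),
}

def may_use(tool: str, purpose: str) -> tuple[bool, str]:
    """Deny by default: anything not explicitly listed goes to the governance team."""
    return POLICY.get((tool, purpose), (False, "Not covered by policy; consult governance."))

allowed, note = may_use("chatgpt", "write_excel_macro")
print(allowed, note)  # True Review the generated code; it may contain errors.
```

The deny-by-default lookup reflects the earlier advice that an employer must clearly define which resources may be used, and for what.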
Role of the Government
The government plays an active role in providing tools and frameworks for compliance with the AI Act. In a complex country like Belgium, this is not always straightforward. “The Flemish government has spent two to three years examining how to handle AI responsibly. Interregional consultation is therefore important. For the citizen, it doesn’t matter which government a tool comes from. We must avoid repeating the mistakes made in other countries”, says Gils.
The importance of collaboration between government institutions and external experts is highlighted here, analogous to the NIS2 legislation. “But approaching this solely from a security perspective is too limited. You need to elevate it to the management, operational, and IT levels to achieve good collaboration”, Maekelberghe points out.
Referee
The government will also need to take on the role of referee. “The AI Act essentially relies on market supervision, as we already know it from medical devices, for example. The provider is responsible for complying with certain standards before bringing a high-risk AI system to market. That is a very different form of supervision”, Gils explains.
Who is liable for damage caused by AI can be a complex question. Liability issues fall into a gray, non-contractual area. “Around AI, many questions remain open. The European Commission has withdrawn its proposed AI liability directive, so national legislation applies. Product liability for software also still needs to be transposed into Belgian law”, Gils continues.
Who the referee will be is also not entirely clear. The BIPT will likely act as the main regulator, but the legislation does not define what its role will be alongside other relevant bodies such as the FPS Economy or the Data Protection Authority. Not to mention potential regional issues that are always lurking around the corner in Belgium.
“We currently have no authorities to enforce the rules. The last political word on this has not yet been spoken”, says Gils, tactfully leaving the question open. “The authorities have a lot to implement, but we don’t even know when they will be ready”, Fokou sharply points out.
Finding Balance
The panel briefly discusses the technical pitfalls the government must avoid at all costs. These are essentially no different from those facing commercial companies, where “black boxes” and “vendor lock-in” are equally to be avoided. Fokou: “That architectural issue sometimes prevents us from making progress. There is not enough competition and not enough choice. Suppliers currently offer attractive formulas, but we don’t know what the final price will be”.
“In my opinion, you need a specific risk analysis here that takes into account the risks of the major suppliers. These are traditional IT questions. I think too much energy is being spent on developing LLMs ourselves for the public sector. An economic analysis would mainly favor the existing LLMs, albeit with the risk that continuity is not guaranteed”, Maekelberghe replies.
The implementation of the AI Act in the public sector will be a delicate balancing act. That is the short conclusion of a long story that has only just begun. The final word goes to Van Eeckhoutte:
“How you find the balance between regulation and innovation is a difficult question. The goal the AI Act aims for is a good one, because it provides the safety and certainty needed to set the tone. To what extent the AI Act will act as a brake on innovation remains to be seen”.
“We don’t know what the final price of AI will be.”
Katy Fokou, AI researcher at Smals
This editorial contribution was made in collaboration with our partner Smals.