Will companies be fined for not training employees in AI?

28.03.2025

The European Union wants companies to monitor their employees’ AI literacy. But what does this requirement – and the other regulations introduced by the AI Act – mean in practice? Let’s take a closer look.

Concerned about the rapid development of artificial intelligence and the risks it poses, the European Union adopted the AI Act last summer. It is the world’s first comprehensive regulation governing the use of artificial intelligence systems.

The EU aims to ensure that only safe, reliable, transparent, and non-discriminatory AI systems are used within member states.

The regulation applies to both public and private entities within the EU – as well as those outside the EU – that provide or deploy AI systems on the European market. In contrast, the AI Act imposes virtually no obligations on ordinary users. Instead, its primary goal is to protect them.

However, several companies and organizations have criticized the regulation. According to the Confederation of Industry and Transport of the Czech Republic (SP ČR), excessive bureaucracy could make the EU less competitive in the AI field. SP ČR joined other business organizations in signing an open letter to the European Commission expressing their concerns about the regulation.

The Artificial Intelligence Act (EU 2024/1689) officially came into force in August 2024, but full enforcement will not begin until August 2026. Exceptions include provisions on prohibited practices, administrative and enforcement systems, and sanctions, which take effect earlier.

In addition to setting rules, the AI Act introduces penalties for non-compliance. For example:

  • up to €35 million or 7% of annual global turnover (whichever is higher) for placing a prohibited AI system on the market;
  • up to €15 million or 3% of annual global turnover for violating most other obligations;
  • up to €7.5 million or 1% of annual global turnover for supplying incorrect or misleading information.

These penalty provisions came into force on February 2, 2025.


AI Literacy: Will Companies Comply?

As part of Phase 1 of the AI Act, one article specifically addresses the need for sufficient AI literacy among employees in companies that use AI systems. This provision has sparked online reports in recent months claiming that companies have only a few months to train their employees or risk heavy fines.

However, as Ondřej Hanák from the law firm eLegal explains, while regular employee training will indeed be necessary, no authority will proactively monitor the level or frequency of that training.

In the future, once the AI Act is fully enforced – or if an issue arises due to an employee’s use of artificial intelligence – the company’s approach to AI implementation and employee training could be taken into account during investigations or legal proceedings.

If shortcomings are identified in this area, they could be considered an aggravating factor for the company. Still, the threat of large fines is exaggerated: no specific penalties are tied to this article, lawyer Ondřej Hanák assures.

Ban on Manipulative Systems

The first part of the AI Act, which is already in force, focuses on banning certain types of AI systems – rather than addressing AI literacy. The regulation classifies AI systems into four risk levels:

  • Prohibited systems
  • High-risk systems
  • Limited-risk systems
  • Minimal-risk systems

The first category (completely prohibited systems) includes, for example, AI that uses manipulative techniques. Naturally, the term “manipulative” can be subjective; even advertising can, to some extent, be considered manipulative. In reality, this is a rather broad and vague concept that the European Union is now working to define more precisely.

In general, the regulation targets subconscious manipulation techniques that AI systems could use to influence individuals without their awareness, thereby impairing their ability to make informed decisions and potentially causing them serious harm, explains Ondřej Hanák.

In early February 2025, the European Commission released guidelines offering a more detailed definition of these manipulative systems. The Czech Association for Artificial Intelligence summarized the key points of these guidelines as follows:

  • No manipulation through subconscious methods: Advertising or chatbots must not use hidden signals or subliminal techniques that bypass users’ conscious decision-making.
  • No exploitation of vulnerabilities: AI must not target children, the elderly, people with disabilities, or individuals in financially vulnerable situations.
  • Ban on social scoring: AI systems are prohibited from assessing citizens based on their behavior to assign social scores.
  • No predictive criminal profiling based solely on personality: AI systems must not predict the likelihood of someone committing a crime based on social media activity, psychological profiles, or similar personal traits.
  • Ban on the automatic collection of facial data: It is prohibited to automatically gather facial images from websites or surveillance cameras to build facial recognition databases.
  • No emotion recognition in the workplace or schools: Except for health and safety reasons, AI must not analyze employees’ or students’ emotions.
  • Restrictions on biometric data categorization: AI systems must not analyze biometric data to classify individuals by race, political views, religion, or sexual orientation.
  • Restrictions on real-time biometric surveillance: Police may not use real-time biometric identification in public spaces, except in specific cases such as searching for missing persons, addressing terrorist threats, or pursuing dangerous criminals.

The Second Phase of the AI Act: What Lies Ahead?

The second phase of the AI Act, scheduled for implementation in August 2026, focuses on AI systems in the remaining three risk categories. This includes high-risk systems that are not banned but will be subject to strict operational requirements imposed by the EU.

In practice, this could involve software used to screen CVs during recruitment, or systems that assess applicants to secondary schools and universities. In the financial sector, high-risk AI might include systems that score clients for loans or detect potential fraud. According to Ondřej Hanák, the regulation primarily targets situations where AI plays a direct role in decision-making. Scenarios where AI merely provides recommendations are of less concern to the EU.

Limited-risk systems cover AI tools designed for interacting with people or generating artificial content.

For these systems, the EU will require at least a basic level of transparency – specifically, the obligation to inform users that they are interacting with an AI system.

Finally, AI systems classified as minimal risk are generally not affected by the AI Act and will remain largely unregulated.

Need Personal Advice?

For personalized guidance on laws and regulations in the Czech Republic that may affect your business, reach out to the experts at 360WEDO.

We specialize in business setup in the Czech Republic, outsourced accounting, and legal support – helping your business stay compliant and thrive.

https://www.mesec.cz/clanky/hrozi-firme-ktera-neproskoli-zamestnance-v-ai-pokuta-zbytecne-straseni-rika-pravnik
