
The European Union wants companies to ensure their employees have a sufficient level of AI literacy. But what does this requirement – and the other rules introduced by the AI Act – mean in practice? Let’s take a closer look.
Concerned about the rapid development of artificial intelligence and the risks it poses, the European Union adopted the AI Act in the summer of 2024. It is the world’s first comprehensive regulation governing the use of artificial intelligence systems.
The EU aims to ensure that only safe, reliable, transparent, and non-discriminatory AI systems are used within member states.
The regulation applies to both public and private entities within the EU – as well as those outside the EU – that provide or deploy AI systems on the European market. In contrast, the AI Act imposes virtually no obligations on ordinary users. Instead, its primary goal is to protect them.
However, several companies and organizations have criticized the regulation. According to the Confederation of Industry and Transport of the Czech Republic (SP ČR), excessive bureaucracy could make the EU less competitive in the AI field. SP ČR joined other business organizations in signing an open letter to the European Commission expressing their concerns about the regulation.
The Artificial Intelligence Act (Regulation (EU) 2024/1689) officially entered into force in August 2024, but most of its provisions will not apply until August 2026. The exceptions include the provisions on prohibited practices, the governance and enforcement framework, and penalties, which take effect earlier.
In addition to setting rules, the AI Act introduces penalties for non-compliance. Engaging in a prohibited AI practice can, for example, be fined up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher; breaches of most other obligations carry fines of up to EUR 15 million or 3% of turnover; and supplying incorrect or misleading information to authorities can cost up to EUR 7.5 million or 1% of turnover.
The ban on prohibited practices has applied since February 2, 2025, while the penalty provisions themselves apply from August 2, 2025.
As part of Phase 1 of the AI Act, one article specifically addresses the need for sufficient AI literacy among employees in companies that use AI systems. This provision has sparked online reports in recent months claiming that companies have only a few months to train their employees or risk heavy fines.
However, as Ondřej Hanák from eLegal explains, while regular employee training will indeed be necessary, no authority will a priori monitor the level or frequency of that training.
In the future, once the AI Act is fully enforced – or if an issue arises due to an employee’s use of artificial intelligence – the company’s approach to AI implementation and employee training could be taken into account during investigations or legal proceedings.
If shortcomings are identified in this area, it could be considered an aggravating factor for the company. Still, the threat of large fines is exaggerated. There are no specific penalties tied to this article, assures lawyer Ondřej Hanák.
The first part of the AI Act, which is already in force, focuses on banning certain types of AI systems – rather than addressing AI literacy. The regulation classifies AI systems into four risk levels: unacceptable (prohibited), high, limited, and minimal risk.
The first category (completely prohibited systems) includes, for example, AI that uses manipulative techniques. Naturally, the term “manipulative” can be subjective; even advertising can, to some extent, be considered manipulative. In reality, this is a rather broad and vague concept that the European Union is now working to define more precisely.
In general, the regulation targets subconscious manipulation techniques that AI systems could use to influence individuals without their awareness, thereby impairing their ability to make informed decisions and potentially causing them serious harm, explains Ondřej Hanák.
In early February 2025, the European Commission released guidelines offering a more detailed definition of these manipulative systems, and the Czech Association for Artificial Intelligence has summarized their key points.
The second phase of the AI Act, scheduled for implementation in August 2026, focuses on AI systems in the remaining three risk categories. This includes high-risk systems that are not banned but will be subject to strict operational requirements imposed by the EU.
In practice, this could involve software used to screen job applicants’ CVs or systems that assess applicants to secondary schools and universities. In the financial sector, high-risk AI might include systems that score clients for loans or detect potential fraud. According to Ondřej Hanák, the regulation primarily targets situations where AI plays a direct role in decision-making; scenarios where AI merely provides recommendations are of less concern to the EU.
Limited-risk systems cover AI tools designed for interacting with people or generating artificial content.
For these systems, the EU will require at least a basic level of transparency – specifically, the obligation to inform users that they are interacting with an AI system.
Finally, AI systems classified as minimal risk are generally not affected by the AI Act and will remain largely unregulated.
Need Personal Advice?
For personalized guidance on laws and regulations in the Czech Republic that may affect your business, reach out to the experts at 360WEDO.
We specialize in business setup in the Czech Republic, outsourced accounting, and legal support – helping your business stay compliant and thrive.