EU Artificial Intelligence Act

Enreach 14/03/2024
4 min read

Europe is leading the world in regulating AI. Just yesterday, the European Parliament approved the first EU regulatory framework for AI.

With 523 votes in favour and 46 against, the regulation has been given the green light to come into force by mid-2026.

All that remains is for member states to approve the final text and for it to be published in the EU’s Official Journal.


The law has its origins in the European Commission’s 2021 proposal to create the first regulatory framework for AI.

Three years ago, the Commission proposed new rules and measures aimed at making Europe the global hub for trustworthy artificial intelligence.

These rules focused on categorising AI by risk level and proposing regulatory measures for each.

Following the provisional agreement in December, the Commission launched the GenAI4EU initiative, which aims to support start-ups and SMEs in adopting this technology.

And more recently, in February 2024, it was agreed to set up the European Artificial Intelligence Board, which will include a representative from each member state to oversee the implementation of the law in their country.


With all the pieces in place, this law has two very clear objectives:

  • To ensure that artificial intelligence systems in the European Union are safe and respect the rights of citizens.
  • To stimulate investment and innovation in AI in Europe.


As mentioned above, this regulation imposes obligations on providers and users based on the level of risk posed by AI.
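As an illustration only, the risk-based approach described above can be sketched as a simple lookup. The tier names and obligations below paraphrase this article, not the legal text itself:

```python
# Illustrative sketch of the AI Act's risk-based approach.
# Tier names and obligations paraphrase this article, not the legal text.
RISK_TIERS = {
    "minimal": "No specific obligations; free to use.",
    "limited": "Transparency duties (e.g. disclose that a bot is a bot).",
    "high": "Stricter scrutiny before and after being placed on the market.",
    "unacceptable": "Banned in the EU once the regulation applies.",
}

def obligations_for(tier: str) -> str:
    """Return the obligations attached to a given risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"Unknown risk tier: {tier!r}")
    return RISK_TIERS[tier]

print(obligations_for("limited"))
```

The point of the tiered design is exactly this: a provider first classifies the system, and the classification alone determines which obligations apply.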


Narrow AI, also known as weak AI, is AI designed to perform a single, well-defined task.

It is the most widely used in Europe and, as it poses minimal risk, it can be used freely under the law.

This category includes virtual assistants (Siri, Alexa, Cortana…), spam filters in emails, and similar everyday tools.


General AI, also known as AGI or strong AI, can in principle perform any intellectual task a human can, though, like any human, it does not know everything.

The regulation stipulates that, while it does not pose a major threat to citizens, it must meet minimum transparency requirements to allow users to make informed decisions.

For example, when using AI systems such as chatbots, people must be aware that they are interacting with a bot and have the option to end the conversation at any time.

In addition, when using generative AI, such as ChatGPT, providers must state that the content is artificially generated and must incorporate safeguards to prevent the generation of illegal content and to respect copyright.
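The two transparency duties above can be illustrated with a minimal sketch: a chatbot that identifies itself as a bot, lets the user end the conversation at any time, and labels its output as AI-generated. The function names and message texts are illustrative, not taken from any framework or from the regulation:

```python
# Hedged sketch of the transparency duties described in the article:
# disclose the bot, allow the user to leave, and label generated content.
AI_LABEL = "[AI-generated content]"

def greet() -> str:
    # Users must be aware they are interacting with a bot.
    return "Hi! I am an automated assistant. Type 'exit' to end the chat."

def respond(user_message: str) -> str:
    # The user can end the conversation at any time.
    if user_message.strip().lower() == "exit":
        return "Chat ended. A human colleague can take over from here."
    # Generated content must be flagged as artificially generated.
    return f"{AI_LABEL} Here is my answer to: {user_message}"

print(greet())
print(respond("What are your opening hours?"))
print(respond("exit"))
```

In practice the disclosure would live in the UI and the label in content metadata, but the compliance logic is the same: disclose up front, and mark every generated message.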


Thirdly, there is super-intelligent AI, or ASI, a hypothetical form of AI that would surpass human skills and abilities at any task.

Of course, the law indicates that this AI could endanger the safety and/or fundamental rights of consumers.

Therefore, it can only be put on the market if it respects fundamental rights and EU values, and it will be subject to greater scrutiny to ensure that it is used safely and responsibly.

Its use will be restricted in the following areas:

  • Critical infrastructure such as transport systems, as it could endanger the lives and health of citizens.
  • Education or training, such as examination marking or any process that determines a person’s career path.
  • Product safety components, such as the use of AI in robotic surgery.
  • Employment, workforce management and access to self-employment, such as resume review software.
  • Essential public and private services, such as credit scoring that can deny citizens a loan.
  • Law enforcement, which can interfere with people’s fundamental rights, such as the automation of visa application checks.
  • Administration of justice and democratic processes, such as AI solutions to search for court decisions.


This final category of AI is also known as the Singularity and has been featured in countless science fiction films.

It is no wonder that it is seen not only as a threat to human beings and fundamental rights, but also as undermining democracy, the rule of law and the environment.

That is why AI that is deemed to pose an “unacceptable risk” will be banned in the EU once the regulation comes into force.

This group includes all AI systems that perform “social scoring”, those that use subliminal techniques, those that exploit the vulnerabilities of certain groups of people, or those that recognise emotions in the workplace or in education.


Following the adoption of the world’s first law regulating artificial intelligence, we can foresee that this is only the beginning.

This technology, which is increasingly present in our daily lives, needs regulation that provides security and, of course, encourages its use.

Are you using AI to serve your customers, or thinking about starting? Find out how you can use this technology to free your agents from tasks where they do not add value.

Get in touch with our team of experts by calling +34 900 670 750 or sending a message through our website chat. We look forward to helping you!
