The legal aspects of AI - European Union regulation

In our new series we present the legal aspects of using AI in advertising, such as privacy protection considerations and regulations. In this article we have gathered what you should know about the regulations of the European Union.

In our new series, lawyer Dr. Gábor Keszey gathers the most important legal aspects of using AI in the creative industry. In this first part he introduces the current regulation of the European Union; he then covers the privacy protection aspects of using AI and presents the precedent-setting ruling of the Italian government. On our weKNOW content platform we discuss the ethical problems arising around the use of AI, and we also show what you can expect in digital and creative advertising in 2024 and how weCAN agencies have been using AI in their daily work, on both the media and the creative side.

The European Commission published the White Paper on Artificial Intelligence in 2020, which serves as the EU's regulatory framework for the development and application of AI. It emphasizes the necessity of building trust in AI and underscores that its application should be based on fundamental rights and values, such as human dignity and privacy protection. The EU's next step was the Commission's adoption of a draft regulation on artificial intelligence in April 2021, followed by the European Parliament's approval of the draft proposal in June 2023. The latest development is that in February 2024, representatives of the member states also adopted the final text of the draft regulation, which still needs to be approved by the European Parliament and the relevant committees. The regulation is expected to come into force no later than the summer, with a one- or two-year transitional period.

The draft regulation adopts a risk-based approach, establishing rules for the following groups of AI systems and solutions:

  • Applications and systems posing unacceptable risks, such as social scoring.
  • High-risk AI systems, which require extensive compliance requirements and obligations under the regulation. Examples include resume-screening tools used in job applications or AI systems employed in education.
  • AI systems with limited risks (such as chatbots) would only be subject to very mild transparency obligations: for example, it would be necessary to indicate that the content was generated by artificial intelligence, and ensure that the system does not produce unlawful or copyright-infringing content.
  • Applications that are not explicitly classified as risky or pose no risk (such as video games) largely remain unregulated.

The draft regulation also defines fines for violations of the legislation on artificial intelligence, which vary depending on the nature of the infringement and are determined as a percentage of the infringing company's global annual turnover for the previous financial year or as a predetermined amount, whichever is higher. This amounts to €35 million or 7% for prohibited AI applications, €15 million or 3% for violations of obligations prescribed in the AI regulation, and €7.5 million or 1.5% for providing incorrect information. More proportionate upper limits would be set for fines imposed on SMEs and innovative startup companies.

Published: May 6, 2024


The purpose of our website is to provide information. All content has been compiled with the utmost care and is regularly checked. The content of this page is general and descriptive; there may be variations due to country-specific characteristics and legal regulations depending on the user or place of use. The information on this webpage is not to be considered business or legal advice for specific situations. The publisher shall not be liable for any legal consequences arising from the use of the information. If you need an official position, always contact the competent authority; if you need advice, consult the appropriate expert.