The rise of the un-alive: are AI influencers changing digital marketing?

The concept of AI influencers has surged in recent years, with virtual personalities like Lil Miquela, Shudu Gram, and Lu do Magalu captivating millions online. Still, the topic recently gained fresh traction when Meta pulled its AI influencers from Facebook and Instagram after users rediscovered some of the profiles and engaged them in conversation, sparking viral controversy.

Meta introduced AI-powered profiles in September 2023 but discontinued most of them by mid-2024. However, some of the remaining profiles drew renewed attention after Meta executive Connor Hayes announced plans to expand AI character profiles. These AI personas, such as Liv (a Black queer mother) and Carter (a dating coach), interacted with users, who began questioning them about their creators, and the profiles’ answers caused controversy. Liv, for example, acknowledged that her development team lacked Black representation, highlighting concerns about diversity in AI development.

AI vs. human influencers

These digital personas, powered by AI, deep learning, and CGI, blur the lines between reality and fiction, effectively engaging audiences but also exposing them to something that isn’t real – even less real than the world of human influencers.

AI-driven personalities like Aitana and Lia Byte may represent the future of influencer marketing thanks to their consistency, adaptability, and reliability in branding. Unlike their human counterparts, AI influencers don’t age, don’t have scandals (unless programmed otherwise), and can deliver brand messages with precision. Brands are increasingly integrating AI influencers into their marketing strategies for cost-effectiveness, scalability, and brand consistency; reportedly, 48.7% of marketers already use AI influencers in their campaigns.

Ethical concerns

One might think that their lack of a true personality makes emotional connection and credibility harder to achieve, but reality says otherwise: even a poorly made Brad Pitt scam proved convincing enough to extort over €800,000 from a French woman. This shows that many people, if not everyone, are susceptible to accepting these created personas as reality. Even though AI influencers can be distinguished from real people by inconsistencies in their appearance (different bodily and facial features across images, rendering issues), they are remarkably convincing.

Mariann Forgács, social media expert and Co-Founder and CEO of Be Social, warns about the potential dangers: “AI influencers can be potentially harmful: not only by deceiving some followers about their existence but also by shaping their followers’ body image with unrealistic looks, which is especially problematic for teenagers.”

As seen in the Brad Pitt scam, AI-generated deepfake content can also be misused for blackmail, revenge, or misinformation, making regulation crucial. Forgács adds, “Regulations could be made, but this whole field evolves so quickly that laws struggle to keep up. Educating people on how to recognize AI-generated content is key.”

AI Influencers in Central Eastern Europe

While AI influencers have gained traction in global markets, influencers specific to the CEE region are yet to flood social platforms. In Hungary, the profile @Aisa_megmondja has amassed over 3,000 followers, while in the Czech Republic, @adinainspirescz, created by a company called Adison, is growing in popularity.

The Future of AI in Influencer Marketing

The next era will likely bring hyper-personalized, interactive AI influencers capable of responding to audiences in real time with increasingly sophisticated behaviour. Whether they remain a niche trend or become an industry standard remains to be seen, but one thing is clear: AI influencers are just beginning to reshape the digital marketing landscape.

AI regulation in the EU

Speaking of potential regulations in the field, the EU is quite ahead in this game. February 2, 2025 is the first compliance deadline for the EU’s AI Act, the comprehensive AI regulatory framework approved by the European Parliament, which went into effect on August 1, 2024.

The Act aims to cover use cases where AI might appear and interact with individuals. It defines four risk levels:

  1. Minimal risk (e.g., email spam filters) will face no regulatory oversight;
  2. Limited risk, which includes customer service chatbots, will face light-touch regulatory oversight;
  3. High risk — AI for healthcare recommendations is one example — will face heavy regulatory oversight;
  4. Unacceptable risk applications — the focus of this month’s compliance requirements — will be prohibited entirely.

Unacceptable activities include when AI

  • is used for social scoring (e.g., building risk profiles based on a person’s behaviour).
  • manipulates a person’s decisions subliminally or deceptively.
  • exploits vulnerabilities like age, disability, or socioeconomic status.
  • collects “real-time” biometric data in public places for law enforcement.
  • creates — or expands — facial recognition databases by scraping images online or from security cameras.

Companies using any of the prohibited AI applications in the EU will face fines, regardless of their HQ location. Penalties can reach up to €35 million (~$36 million) or 7% of their annual revenue from the previous fiscal year, whichever amount is higher.
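To illustrate how that “whichever amount is higher” rule plays out, here is a minimal Python sketch; the function name and the €1 billion revenue figure are hypothetical illustrations, while the €35 million floor and the 7% rate are those described above.

  # Minimal sketch of the penalty ceiling described above: the maximum fine is
  # the higher of EUR 35 million or 7% of the previous year's annual revenue.
  # The EUR 1 billion revenue used below is a hypothetical example, not data
  # about any real company.
  def max_ai_act_fine(annual_revenue_eur: float) -> float:
      """Return the upper bound of a fine for prohibited AI practices."""
      return max(35_000_000, 0.07 * annual_revenue_eur)

  # Example: a company with EUR 1 billion in revenue in the previous year
  print(f"{max_ai_act_fine(1_000_000_000):,.0f} EUR")  # -> 70,000,000 EUR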

Published: February 6, 2025

