Is AI stealing our jobs, data and art? - Ethical concerns about the use of generative AI

As AI technology spreads through the industry, concerns about its safety and its impact on our lives are increasingly voiced. These can stem from an instinctive fear caused by the lack of transparency in how AI operates, but also from very recent examples of AI misusing personal data, infringing copyright, or simply being imperfect by design. In our article series on AI, we start by summarizing the most common ethical concerns around its use.

Bias

Ethical concerns arise because AI systems are trained on biased data, which can lead to discriminatory practices and even legal consequences. AI perpetuates gender bias in search results, for example by favouring male figures when users search for influential leaders, or by generating sexualized depictions of schoolgirls but not of schoolboys. Search engines, driven by user clicks and location, create echo chambers that reinforce (often harmful) real-world biases online.

Privacy

Generative AI models, such as large language models (LLMs), may ingest personally identifiable information (PII), making it difficult for consumers to locate and remove their data. Privacy concerns have been raised about the collection, processing, and storage of the massive datasets used for AI training. Furthermore, the use of AI in surveillance by law enforcement raises worries about potential misuse.

Distribution of harmful content

AI's automated content creation brings productivity gains but also risks of unintentional harm on many levels, from offensive language in generated emails to far more concerning content. Deepfakes, which can defeat voice and facial recognition, pose serious challenges: impersonations can influence public opinion in politics, possibly even enabling market or election manipulation. The lack of oversight is a significant threat, as illustrated by the widely circulated deepfake video of Ukrainian president Volodymyr Zelenskyy.

Transparency

Transparency in AI algorithms is much needed and expected; deep learning processes are often opaque, with little disclosure of their "mechanics". Generative AI such as ChatGPT may hide the data associations behind its outputs, raising concerns about trustworthiness. Understanding AI decision-making, and making it transparent, is vital, especially in fields like healthcare and law enforcement, where human lives may be at stake.

Accountability

The growing use of AI in daily decision-making has led to accountability challenges. Identifying responsibility for negative outcomes becomes complex: does it lie with the companies validating purchased algorithms or with the creators of the AI tools?

Fallibility

AI-based decisions are prone to inaccuracies due to the inherent imperfections of software. Like databases, games, and websites, AI algorithms, including generative chatbots such as ChatGPT, can produce false information; their primary task is to mimic human conversation, not to be accurate. Reliance on AI can therefore be problematic for critical business models and analytics.

Job displacement

AI solutions have already sparked fears among employees of losing their jobs as automation replaces certain roles. Ethical responses involve investing in preparing the workforce for the new roles created by generative AI applications. For now, weCAN partner agencies put it this way: copywriters using AI will replace copywriters not using AI. Current AI is far from able to replace humans, but those who can use it effectively gain a massive advantage.

Copyright ambiguities

A lawsuit against ChatGPT highlights AI's impact on intellectual property: writers including George R. R. Martin and Jodi Picoult sued OpenAI for allegedly using their work to train AI without permission. The case embodies fears about authors' livelihoods and the exploitation of intellectual property, alongside the broader issue of AI's influence on creativity, seen in the AI-generated Rembrandt painting of 2016. The question of authorship arises, and in its current state it heavily challenges traditional definitions.

Where are we heading?

Even though we don’t have definitive answers to all these concerns, the urgency for companies to prioritize ethical AI practices is obvious. What we can do is scrutinize companies and AI projects, and push for transparency and concrete action from them.

Published: January 30, 2024

