Controversy between disruption and ethics
Artificial intelligence (AI) has made significant progress in generating images that are difficult to distinguish from those created by humans. This is made possible by deep learning algorithms, which can be trained to recognize patterns and generate images similar to those in a training dataset. While AI-generated images have the potential to revolutionize a range of industries and applications, they also raise several ethical concerns.
One of the first widely known examples of image-generating AI is deepfake technology, which uses machine learning to generate realistic videos of people saying and doing things they never actually did. More recently, advances in both computer vision and language models have allowed AI models such as DALL-E, Midjourney, and Stable Diffusion to generate images based on user prompts. This progress is astonishing: the model must both understand the semantic meaning of a prompt and generate a fitting image from it. Image-generating AI models have now advanced to such a degree that they can win art competitions, create stock images, and compete with human digital artists.
While AI image-generating technology has been used for a range of creative applications, such as creating realistic-looking scenes in movies and video games, it has also raised concerns about its potential use for malicious purposes, such as spreading misinformation or manipulating public opinion.
Another ethical concern with AI-generated images is the potential for AI to be used to deceive or manipulate people. For example, AI-generated images could be used to create fake news stories or to impersonate real people online. This could have serious consequences, such as damaging reputations or inciting violence.
In addition, these AI systems have the potential to perpetuate or amplify existing biases and stereotypes. AI algorithms are trained on datasets that reflect the biases and prejudices of the people who created them. As a result, AI-generated images may perpetuate or amplify these biases, with negative consequences for marginalized groups.
Finally, there is the issue of ownership and control over AI-generated images. Who owns the rights to an AI-generated image, and who is responsible for its content? These are important questions that need to be addressed to ensure that AI-generated images are used ethically and responsibly. Following backlash from artists, the Stable Diffusion model has, for example, removed the ability to copy artist styles and to generate NSFW content. Whether and how artists whose work has been used to train these models should be compensated remains unclear.
Overall, it is important for companies and organizations that use AI-generated images to be transparent about their use and to consider the potential ethical implications of their operation. It is also important for regulators and policymakers to develop guidelines and regulations that ensure AI-generated images are used ethically and responsibly. In a prominent example, the proposed EU AI Act subjects deepfake systems to specific transparency obligations. By taking these steps, we can realize the benefits of AI-generated images while minimizing the potential risks and negative consequences.
But enough about ethical concerns. Besides all the useful things image-generating models can do, they can of course also be used by an anonymous Calvin Risk team member to spend far too much time setting up Stable Diffusion on their personal laptop and finding the best selection of Christmas prompts!