Why Andy Warhol would like – and dislike – AI

Andy Warhol AI image

In a series of blog posts about AI, I’ve been looking at how intelligent ChatGPT is, how good ChatGPT and Bing are when you employ them as a technology writer, and how the engineering team at Redgate is using GitHub Copilot to aid with writing code.

Now it’s time to take a look at image creation tools, and where better to start than Andy Warhol? I like Andy Warhol. He was a rebel as well as an artist, challenging convention and what art actually means while, I think, having fun and making a lot of money at the same time. He was also a precursor to AI-generated art, because he often took existing images and changed the way people looked at them.

That’s why I think he would have liked the main image used for this piece. It was created by DALL·E 2, OpenAI’s image creation tool that produces realistic images and art from a description in natural language. The description I used to create it, in just a couple of minutes, was: “Create a picture of a modern data center, in the style of Andy Warhol.”

And there you go. It’s not brilliant – it’s not actually very good, frankly – but Andy Warhol would probably have found it amusing, and fascinating. He used to spend hours and hours creating his pieces in a variety of media, from traced epidiascope projections of photographs to paintings, silk screens and sculpture. Me? I used a keyboard and it took two minutes.

But it also raises a question: how are DALL·E and other text-to-image AI engines like Stable Diffusion able to create images? The picture on the right, for example, is another from DALL·E, the result of the request: “Create a picture of a modern data center, in a photorealistic style”:

It’s not bad. It’s not great, either, but if you’re in a hurry, it’s usable, it’s free, and it literally took around a minute to create, from the moment I opened up DALL·E in a browser window to the time it downloaded to my PC.

In my previous posts about AI, I also used DALL·E to create a pencil drawing of a Viking Princess on a battlefield in the style of Leonardo da Vinci; an AI robot holding a pen as a 3D render; and a robot walking towards a cliff edge:

So surely, everyone wins, don’t they?

Er, no.

Welcome to the Achilles Heel of AI

The ability to create images, in seconds, for free, sounds wonderful. Everyone does indeed appear to be a winner, until you take a closer look at the fault line it opens up for AI. Not just for images, but for text as well, because AI needs machine learning as a first step. And machine learning needs training data. All of which raises a simple question:

Where is the training data coming from?

With images in particular, this suddenly becomes a relevant question. If DALL·E can create a photorealistic picture of something like a data center, it has been trained on a lot of existing pictures of a lot of things … like data centers. Where did they come from? Who holds the copyright? Did OpenAI, which developed DALL·E, ask for permission to use them?

Getty Images has already asked the same kind of questions and is now suing Stability AI, the company behind Stable Diffusion, over alleged copyright violation. This is serious stuff, with the February 2023 lawsuit stating: “Stability AI has copied more than 12 million photographs from Getty Images’ collection, along with the associated captions and metadata, without permission from or compensation to Getty Images.”

It’s also affecting the working lives of people who make a living from creating unique images. People like Greg Rutkowski, a Polish artist who creates fantasy landscapes, dragons and wizards in a classic painting style. His work really is good – so good that images in the style of Greg Rutkowski were requested over 400,000 times on Stable Diffusion, beating the usual suspects like Picasso and Leonardo da Vinci.

In one way, it’s an affirmation of his skill and appeal as an artist. In another, it’s worrying, because he never gave consent for his images to be used as the training data necessary to reproduce his style, and even he has trouble telling his own images apart from those produced by Stable Diffusion.

Hats off to Stability AI, which this time did listen to Greg Rutkowski and other artists whose work was being emulated. The tool’s ability to create images ‘in the style of’ named artists was restricted in Stable Diffusion 2.0. The restriction proved very unpopular, and users declared the update ‘nerfed’.

But that’s not the end of the story – and this is the plot point where I suspect Andy Warhol’s fascination with AI would rapidly fade.

Say hello to LoRA

Alongside the ability to use Stable Diffusion to create free images from text prompts, users have another trick up their sleeves: Low-Rank Adaptation (LoRA) models. First proposed by Microsoft researchers in a paper published in 2021, LoRA is a method of reducing the number of trainable parameters required to fine-tune large language models by up to 10,000 times, and the GPU memory requirement by three times.

LoRA models are open source, they’re fast and relatively easy to use, and they’re widely available from repositories like Civitai and Hugging Face. The important thing is that they can be used to train AI image generators like Stable Diffusion to emulate different concepts, such as characters and styles, using as few as ten images.
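The arithmetic behind that parameter saving is easy to sketch. What follows is a minimal, self-contained illustration of the LoRA idea in plain NumPy – not Stable Diffusion’s actual implementation – where the layer size and rank are hypothetical numbers chosen purely for illustration:

```python
# LoRA in miniature: instead of fine-tuning a full d x k weight matrix W,
# freeze W and train two small low-rank matrices, B (d x r) and A (r x k),
# whose product is added to W. Only A and B ever receive gradient updates.
import numpy as np

d, k, r = 4096, 4096, 8           # hypothetical layer size and LoRA rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))   # frozen pretrained weights
A = rng.standard_normal((r, k)) * 0.01  # trainable, small random init
B = np.zeros((d, r))              # trainable, zero init: B @ A starts at 0,
                                  # so the model initially behaves exactly like W

def forward(x):
    # The effective weight is W + B @ A; training moves only A and B.
    return x @ (W + B @ A).T

full_params = d * k               # parameters touched by full fine-tuning
lora_params = d * r + r * k       # parameters touched by LoRA
print(f"full fine-tune params: {full_params:,}")
print(f"LoRA params:           {lora_params:,}")
print(f"reduction:             {full_params / lora_params:.0f}x")
```

Even on this single made-up layer, LoRA trains 65,536 parameters instead of 16,777,216 – a 256x reduction – and the saving compounds across every layer of a real model, which is how the paper reaches its headline figures.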

And that ability to create images in the style of Greg Rutkowski? It’s been enabled again using a LoRA model. And this time it’s open source and it can be replicated over and over with, at this point, no apparent way to stop it.

Which is why Warhol, who wasn’t afraid of blurring copyright lines himself, would, I think, have an issue. This ability to copy an artist’s whole style rather than a single image, and replicate it, for free, is a worry. The US Copyright Office issued a statement of policy on March 16, 2023, asserting that AI-generated works are not protected by copyright. There has still been no decision, however, on whether using copyrighted works to train AI image generators is itself a violation of copyright.

So what about the AI engines? Do they have an opinion? OpenAI’s DALL·E 2 can’t speak, so I asked its partner, ChatGPT, about it instead. Given that we started this piece with Andy Warhol, I thought we’d end with him as well. I particularly like the closing paragraph of ChatGPT’s response:

Screenshot of ChatGPT’s response