Dear friends of Daily Philosophy,
I’m happy to report that just two weeks after reaching 25,000 monthly pageviews on Daily Philosophy, we have now passed 30,000. This is wonderful and shows that we are reaching more and more readers who are interested in engaging with the world in a more thoughtful way than TV and many other mass media allow. If you’d also like to support philosophy publishing that makes a difference, please consider taking out a premium subscription. For only 7 USD per month, you’d be supporting Daily Philosophy, giving a forum to our authors, and improving, in a small way, the lives of the tens of thousands of readers who find and enjoy our articles every month. Thanks!
In other news, next week we’ll be recording the first episode of the Accented Philosophy podcast after our one-year break. We will be talking about the phenomenon of “quietly quitting” work and, more generally, the anti-work movement. What are its roots? What does it aim to achieve? And is it a healthy way of dealing with the pressures of modern working life? I’ll post a notice here when the episode is live, so that you can go and listen to it — or just follow the link above right now and add Accented Philosophy to your podcast listening app!
And with this, we’re back at today’s topic: AI art. I recently taught a class on AI, and I thought I’d show the students a few examples of AI-generated papers and pictures. Having followed the developments in AI for years, I was surprised to see how stunned the students were when they first realised what AI programs were capable of.
So I thought that perhaps it would be good to showcase what AI programs can create right here in this newsletter. I will split up the discussion a bit. Today, I’ll just show you a few images with a brief comment on each — and next time, we will talk about the moral problems that these technologies might create.
Let’s jump in!
AI generating art
There are multiple systems currently on the market that are able to generate art, and you have probably heard some of the names: generative adversarial networks (GANs), DALL-E, or Stable Diffusion. I am not familiar with the details of how they work, or with the ways in which they differ from each other; but they are all trained on millions of images, so that they learn to associate a particular string of words (“hamster on a beach”) with a particular image content — in this case, a collection of images of hamsters on beaches. When the user enters a prompt to generate an image, the program then composes a new image out of the partial images that it has associated with the different parts of the prompt. So, for example, “a camel on a boat, in the style of Dali” will produce an image containing a camel, a boat, and stylistic elements that can be found across the works of Dali.
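If you are curious how simple the user-facing side of this is, here is a minimal sketch in Python using the open-source Hugging Face diffusers library, which runs Stable Diffusion on your own machine. To be clear, this is my own illustration, not the code behind any particular service: the images below were made through the Dreamstudio.ai web interface, where you simply type the prompt into a text box.

```python
# A minimal sketch: generating an image from a text prompt with
# Stable Diffusion via the open-source Hugging Face `diffusers` library.
# Requires: pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# Download a publicly available pretrained checkpoint
# (several gigabytes on the first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")  # an NVIDIA GPU is assumed here

# The prompt plays exactly the role described above: each part of it
# pulls in image content the model has associated with those words.
prompt = "a camel on a boat, in the style of Dali"
image = pipe(prompt).images[0]  # returns a PIL image
image.save("camel_on_boat.png")
```

Here’s what this looks like using Dreamstudio.ai, a service that uses Stable Diffusion to generate the images (all images are scaled down to fit them into the newsletter; the prompts used to generate the images are in the captions):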
One thing that soon becomes apparent is that these systems don’t analyse or understand the grammar of the prompts. They simply register that they have image elements for the words “camel,” “boat” and “Dali” and put these together into a new picture. Whether the camel is “on” or “under” the boat is (mostly?) left to chance. So, for example, the same prompt generates this image, which matches the intent of the prompt much less well:
Beauty
The best-looking images are those where the observer’s mind has no reliable way to judge whether the generation process has succeeded. Abstract images and painting styles that obscure the details work best and can produce truly stunning output: