Comparing and Contrasting Three Artificial Intelligence Text-to-Art Tools: Stable Diffusion, Midjourney, and DALL-E 2 (Plus a Tantalizing Preview of AI Text-to-Video Editing!)

HOUSEKEEPING NOTE: Yes, I know, I know—I’m off on yet another tangent on this blog! Please know that I will continue to post “news and views on social VR, virtual worlds, and the metaverse” (as the tagline of the RyanSchultz.com blog states) in the coming months! However, over the next few weeks, I will be focusing a bit on the exciting new world of AI-generated art. Patience! 😉

Artificial Intelligence (AI) tools which can create art from a natural-language text prompt are evolving at such a fast pace that it is making me a bit dizzy. Two years ago, if somebody had told me that you would be able to generate a convincing photograph or a detailed painting from a text description alone, I would have scoffed! Many felt that the realm of the artist or photographer would be among the last holdouts where a human being was necessary to produce good work. And yet, here we are, in mid-2022, with any number of public and private AI initiatives which can be used by both amateurs and professionals to generate stunning art!

In a recent interview of David Holz (co-founder of augmented reality hardware firm Magic Leap, who went on to found Midjourney) by The Register’s Thomas Claburn, there’s a brief explanation of how this burst of research and development activity got started:

The ability to create high-quality images from AI models using text input became a popular activity last year following the release of OpenAI’s CLIP (Contrastive Language–Image Pre-training), which was designed to evaluate how well generated images align with text descriptions. After its release, artist Ryan Murdock…found the process could be reversed – by providing text input, you could get image output with the help of other AI models.

After that, the generative art community embarked on a period of feverish exploration, publishing Python code to create images using a variety of models and techniques.

“Sometime last year, we saw that there were certain areas of AI that were progressing in really interesting ways,” Holz explained in an interview with The Register. “One of them was AI’s ability to understand language.”

Holz pointed to developments like transformers, a deep learning model that informs CLIP, and diffusion models, an alternative to GANs [models using Generative Adversarial Networks]. “The one that really struck my eye personally was the CLIP-guided diffusion,” he said, developed by Katherine Crowson…
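
To make that “reverse CLIP” idea a little more concrete, here is a tiny, purely illustrative sketch (assuming you have PyTorch and OpenAI’s open-source CLIP package installed). It simply nudges the pixels of a random image towards a text prompt by maximizing the CLIP similarity score; the tools discussed in this post use far more sophisticated diffusion models, so treat this as a toy demonstration of the principle, not as how any of them actually work:

```python
# Toy illustration of "reverse CLIP": optimize an image so that CLIP scores it
# as similar to a text prompt. Requires PyTorch and the openai/CLIP package
# (pip install git+https://github.com/openai/CLIP.git). This is NOT how
# Midjourney, DALL-E 2, or Stable Diffusion work internally; it only shows the
# CLIP-guidance principle mentioned above.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # keep everything in float32 for simplicity

# Encode the text prompt once (no gradients needed on the text side).
text = clip.tokenize(["a thatched cottage by a lake at golden hour"]).to(device)
with torch.no_grad():
    text_features = model.encode_text(text)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# Start from random noise and nudge the pixels towards the prompt.
# (For brevity we skip CLIP's usual image normalization.)
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    image_features = model.encode_image(image.clamp(0, 1))
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    loss = -(image_features * text_features).sum()  # negative cosine similarity
    loss.backward()
    optimizer.step()

# The resulting "image" will look abstract and noisy; real systems pair CLIP
# guidance with generative models (GANs or diffusion) to get plausible images.
```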

If you need a (relatively) easy-to-understand explainer on how this new diffusion model works, well then, YouTube comes to your rescue with this video, which offers four explanations at various levels of difficulty!


Before we get started, a few updates since my last blogpost on A.I.-generated art: After using up my free Midjourney credits, I decided to purchase a US$10-a-month subscription to continue to play around with it. This is enough credit to generate approximately 200 images per month. Also, as a thank you for being among the early beta testers of DALL-E 2, the AI art-generation tool by OpenAI, they have awarded me 100 free credits to use. You can buy additional credits in 115-generation increments for US$15, but given the hit-or-miss nature of the results returned, this means that DALL-E 2 is among the most expensive of the artificial intelligence art generators. It will be interesting to see if and how OpenAI will adjust their pricing as the newer competitors start to nip at their heels in this race!

And I can hardly believe my good fortune, because I have been accepted into the relatively small beta test group for a third AI text-to-art generation program! This new one is called Stable Diffusion, by Stability AI. Please note that if you were to try to get into the beta now, it’s probably too late; they have already announced that they have all the testers they need. I submitted my name 2-3 weeks ago, when I first heard about the project. Stable Diffusion is still available for researcher use, however.

Like Midjourney, Stable Diffusion uses a special Discord server with commands: instead of Midjourney’s /imagine, you use the command !dream, followed by a text description of what you want to see, plus you can add optional parameters to set the aspect ratio, the number of images returned, etc. (a concrete example of the syntax appears below the video). However, the Stable Diffusion team has already announced that they plan to move from Discord to a web-based interface like DALL-E 2’s (we will be beta-testing that, too). Here’s a brief video glimpse of what the web interface could look like:


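To give a concrete (and hedged) example of the !dream syntax: a command in the beta Discord looks something like !dream a thatched cottage by a lake at golden hour, highly detailed painting -W 1024 -H 512 -n 4, which would ask for four images at 1024×512 pixels. The exact parameter names may change as the beta evolves, so treat this as illustrative and check the bot’s built-in help for the current list.
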
Given that I am among the relatively few people who currently have access to all three of the top publicly-available AI art-generation tools, I thought it would be interesting to create a chart comparing and contrasting all three programs. Please note that I am neither an artist nor an expert in artificial intelligence, just a novice user of all three tools! Almost all of the information in this chart has been gleaned from the projects’ websites, online news reports, and the active subreddit communities for all three programs, where users post pictures and ask questions. Also, all three tools are constantly being updated, so this chart might very quickly go out of date (although I will make an attempt to update it).

| Name of Tool | DALL-E 2 | Midjourney | Stable Diffusion |
| --- | --- | --- | --- |
| Company | OpenAI | Midjourney | Stability AI |
| AI Model Used | Diffusion | Diffusion | Diffusion |
| # Images Used to Train the AI | 400 million | “tens of millions” | 2 billion |
| User Interface | website | Discord | Discord (moving to website) |
| Cost to Use | credit system (115 for US$15) | subscription (US$10-30 per month) | currently free (beta) |
| Uses Text Prompts | yes | yes | yes |
| Can Add Optional Arguments | no | yes | yes |
| Non-Square Images? | no | yes | yes |
| In-tool Editing? | yes | no | no |
| Uncropping? | yes | no | no |
| Generate Variations? | yes | yes | yes (using seeds) |

A comparison chart of three AI text-to-art tools: DALL-E 2, Midjourney, and Stable Diffusion

I have already shared a few images from my previous testing of DALL-E 2 and Midjourney here, here, and here, so I am not going to repost those images, but I wanted to share a couple of the first images I was able to create using Stable Diffusion (SD). To make these, I used the text prompt “a thatched cottage with lit windows by a lake in a lush green forest golden hour peaceful calm serene very highly detailed painting by thomas kinkade and albrecht bierstadt”:

I must admit that I am quite impressed by these pictures! I had asked SD for images with a height of 512 pixels and a width of 1024 pixels, but to my surprise, the second image was a wider one presented neatly in a white frame, which I cropped using my trusty SnagIt image editor! Also, it was not until after I submitted my prompt that I realized that the second artist’s name is actually ALBERT Bierstadt, not Albrecht! It doesn’t appear as if my typo made a big difference in the final output; perhaps for well-known artists, the last name alone is enough to indicate a desired art style?

Here are a few more samples of the kind of art which Stable Diffusion can create, taken from the pod-submissions thread on the SD Discord server:

Text prompt: “a beautiful landscape photography of Ciucas mountains mountains a dead intricate tree in the foreground sunset dramatic lighting by Marc Adamus”
Text prompt: “incredible wide screenshot ultrawide simple watercolor rough paper texture katsuhiro otomo ghost in the shell movie scene backlit distant shot”
Text prompt: “an award winning wallpaper of a beautiful grassy sunset clouds in the sky green field DSLR photography clear image”
Text prompt: “beautiful angel brown skin asymmetrical face ethereal volumetric light sharp focus”
Painting of people swimming (no text prompt shared)

You can see many more examples over at the r/StableDiffusion subreddit. Enjoy!

If you are curious about Stable Diffusion and want to learn more, there is a 1-1/2 hour podcast interview with Emad Mostaque, the founder of Stability AI (highly recommended!). You can also visit the Stability AI website, or follow them on social media: Twitter or LinkedIn.


I also wanted to submit the same text prompt to each of DALL-E 2, Midjourney, and Stable Diffusion, to see how the AI models in each would respond. Under each prompt you will see three square images: the first from DALL-E 2, the second from Midjourney, and the third from Stable Diffusion. (Click on each thumbnail image to see it in its full size on-screen.)

Text prompt: “the crowds at the Black Friday sales at Walmart, a masterpiece painting by Rembrandt van Rijn”

Note that none of the AI models are very good at getting the facial details correct for large crowds of people (all work better with just one face in the picture, like a portrait, although sometimes they struggle with matching eyes or hands). I would say that Midjourney is the clear winner here, although a longer, much more detailed prompt in DALL-E 2 or Stable Diffusion might have created an excellent picture.

Text prompt: “stunning breathtaking photo of a wood nymph with green hair and elf ears in a hazy forest at dusk. dark, moody, eerie lighting, brilliant use of glowing light and shadow. sigma 8.5mm f/1.4”

When I tried to generate a 1024-by-1024 image in Stable Diffusion, it kept giving me more than one wood nymph, even when I added words like “single” or “alone”, which is a known bug in the current early state of the program. I finally gave up and used a 512×512 image. The clear winner here is DALL-E 2, which has a truly impressive ability to mimic various camera styles and settings!

Text prompt: “a very highly detailed portrait of an African samurai by Tim Okamura”

In this case, the clear winner is Stable Diffusion with its incredible detail, even though, once again, I could not generate a 1024×1024 image because it kept giving me multiple heads! The DALL-E 2 image is too stylized for my taste, and the Midjourney image, while nice, has eyes that don’t match (a common problem with all three tools).

And, if you enjoy this kind of thing, here’s a 15-minute YouTube video with 21 more head-to-head comparisons between Stable Diffusion, DALL-E 2, and Midjourney:


As I have said, all of this is happening so quickly that it is making my head spin! If anything, the research and development of these tools is only going to accelerate over time. And we are going to see this technology applied to more than still images! Witness a video shared on Twitter by Patrick Esser, an AI research scientist at Runway, where the entire scene around a tennis player is changed simply by editing a text prompt, in real time:


I expect I will be posting more later about these and other new AI art generation tools as they arise; stay tuned for updates!

A.I.-Generated Art: Comparing and Contrasting DALL-E 2 and Midjourney as Both Tools Move to an Open Beta

UPDATE Aug. 12th, 2022: I have just joined the beta test of Stable Diffusion, another AI art-generation program! For more information, please read Comparing and Contrasting Three Artificial Intelligence Text-to-Art Tools: Stable Diffusion, Midjourney, and DALL-E 2 (Plus a Tantalizing Preview of AI Text-to-Video Editing!)

You might remember that I was one of the lucky few who received an invitation to be part of the closed beta test (or “research preview”, as they called it) of DALL-E 2, a new artificial intelligence tool from a company called OpenAI, which can create art from a natural-language text prompt. (I blogged about it, sharing some of the images I created, here and here.)

Here are a few more pictures I generated using DALL-E 2 since then (along with the prompt text in the captions):

DALL-E 2 prompt: “feeling despair over a uncertain future digital art”
DALL-E 2 prompt: “feeling anxiety over an uncertain future digital art”
DALL-E 2 prompt: “feeling anxiety over a precarious future” (sensing a theme here?)
DALL-E 2 prompt: “award-winning detailed vibrant bright colorful knife painting by Françoise Nielly” (Note that this used an inpainting technique; I expanded the canvas borders and asked DALL-E 2 to fill them in to match the Nielly knife painting of the man’s face in the middle)

Meanwhile, other DALL-E 2 users have generated much better results than I could, by skillful use of the text prompts. Here are just a few examples from the r/dalle2 subReddit community of AI-generated images which impressed and sometimes even stunned me, with a direct link to the posts in the caption underneath each picture:

DALL-E 2 prompt: “an image of the Cosmic Mind, digital art”
DALL-E 2 prompt: “cyborg clown, CGSociety award winning render”
DALL-E 2 prompt: “a young girl stares directly at the camera, her blue hijab framing her face. The background is a blur of colours, possibly a market stall. The photo is taken from a low angle, making the girl appear vulnerable and child-like. Kodak Portra 400”
DALL-E 2 prompt: “a close-up photograph of a man with brown hair, ice-blue eyes, red and brown stubble Balbo beard, his face is narrow, with defined cheekbones, he has a scar on the left side of his lips, running down from his top to the bottom lip, he wears a dark-blue hoodie, the background is a blurred out city-scape”

As you can see by the last two images, you can get very detailed and technical in your text prompts, even including the model of camera used! (However, also note that in the fourth picture, DALL-E 2 ignored some specific details in the prompt.)

Yesterday, OpenAI sent me an email to announce that DALL-E 2 was moving into open beta:

Our goal is to invite 1 million people over the coming weeks. Here’s relevant info about the beta:

Every DALL·E user will receive 50 free credits during their first month of use, and 15 free credits every subsequent month. You can buy additional credits in 115-generation increments for $15.

You’ll continue to use one credit for one DALL·E prompt generation — returning four images — or an edit or variation prompt, which returns three images.

We welcome feedback, and plan to explore other pricing options that will align with users’ creative processes as we learn more.

As thanks for your support during the research preview we’ve added an additional 100 credits to your account.
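
(An aside for the programmatically inclined: at the moment, DALL-E 2 is only accessible through the web interface, and OpenAI has not released a public image API. Purely as a hedged illustration of the same workflow described in the quote above, one prompt returning several images plus variations of a chosen result, here is a sketch of what scripted access could look like via OpenAI’s Python library if and when they expose image endpoints the way they already do for GPT-3. The function names, parameters, and file names below are assumptions on my part, not something I have been able to test.)

```python
# Hypothetical sketch only: DALL-E 2 has no public API during the research
# preview described in this post, so none of this has been run against a real
# endpoint. It mirrors the workflow above: generate images from a prompt, then
# request variations of one result. Names and parameters are assumptions based
# on OpenAI's Python client conventions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Generate a batch of images from a text prompt.
result = client.images.generate(
    prompt="feeling anxiety over an uncertain future digital art",
    n=4,                 # number of images to return
    size="1024x1024",
)
print([img.url for img in result.data])

# Request variations of a previously generated image
# (analogous to the web interface's "Variations" button).
# "favourite_result.png" is a placeholder file name.
with open("favourite_result.png", "rb") as f:
    variations = client.images.create_variation(image=f, n=3, size="1024x1024")
print([img.url for img in variations.data])
```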

Before DALL-E 2 announced their new credits system, I had spent most of one day’s free prompts during the research preview to try and generate some repeating, seamless textures to apply to full-permissions mesh clothing I had purchased from the Second Life Marketplace. Most of my attempts were failures, pretty designs but not 100% seamless. However, I did manage to create a couple of floral patterns that worked:

So, instead of purchasing texture packs from within and outside of Second Life, I could, theoretically, generate unique textile patterns, apply them to mesh garments, and sell them, because according to the DALL-E 2 beta announcement I received:

Starting today, you get full rights to commercialize the images you create with DALL·E, so long as you follow our content policy and terms. These rights include rights to reprint, sell, and merchandise the images.

You get these rights regardless of whether you used a free or paid credit to generate images, and this includes images you’ve created before today during the research preview.

Will I? Probably not; it took me somewhere between 20 and 30 text prompts to generate only two useful seamless patterns, so it’s just not cost effective. However, once AI art tools like DALL-E 2 learn how to generate seamless textures, that will probably have some sort of impact on the texture industry, both within and outside of Second Life! (I can certainly see some enterprising soul setting up a store and selling AI-generated art in a virtual world; SL is already full of galleries with human-generated art.)
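
Incidentally, if you want to check whether a generated pattern really does tile seamlessly, a quick trick is to wrap the image around on itself and look for visible seams in the middle. Here is a minimal sketch using the Pillow library (the file name is just a placeholder for one of your DALL-E 2 downloads):

```python
# Quick seam check for a candidate tiling texture, using Pillow.
# "pattern.png" is a placeholder; substitute one of your generated images.
# ImageChops.offset wraps the image around, so any seam that would show up
# when the texture repeats ends up in the middle of the shifted image.
from PIL import Image, ImageChops

img = Image.open("pattern.png")
shifted = ImageChops.offset(img, xoffset=img.width // 2, yoffset=img.height // 2)
shifted.save("pattern_seam_check.png")  # inspect the centre for visible seams
```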


Another cutting-edge AI art-generation program, called Midjourney (WARNING: ASCII art website!), has also announced an open beta. I had signed up to join the waiting list for an invitation several weeks ago, and when I checked my email, lo and behold, there it was!

Hi everyone,

We’re excited to have you as an early tester in the Midjourney Beta!

To expand the community sustainably, we’re giving everyone a limited trial (around 25 queries with the system), and then several options to buy a full membership.

Full memberships include; unlimited generations (or limited w a cheap tier), generous commercial terms and beta invites to give to friends.

Although both DALL-E 2 and Midjourney use human text prompts to generate art, they operate differently. While DALL-E 2 uses a website, Midjourney uses a special Discord server, where you enter your prompt as a special command, generating four rough thumbnail images, which you can then choose to upscale to a full-size image, or use as the basis for variations.

I took some screen captures of the process, so you can see how it works. I typed in “/imagine a magnificent sailing ship on a stormy sea”, and got this back:

The U buttons will upscale one of the four thumbnails, adding more details, while the V buttons generate variations, using one of the four thumbnails as a starting point. I chose thumbnail four and generated four variations of that picture:

Then, I went back and picked one of my original four images to upscale. You can actually watch as Midjourney slowly adds details to your image; it’s fascinating!

I then clicked on the Upscale to Max button, to receive the following image:

My first attempt at generating an image using Midjourney

Now, I am not exactly satisfied with this first attempt (that sailing ship looks rather spidery to me), but as with DALL-E 2, you get much better results with more specific, detailed text prompts. Here are a few examples I took from the Midjourney subReddit community (with links back to the posts in the captions):

Midjourney prompt: “cyberpunk soldier piloting a warship into battle, the atmosphere is like war, fog, artstation, photorealistic”
Midjourney prompt: “Dress made with flowers” (click to see a second one on Reddit)
Midjourney prompt: “a tiny stream of water flows through the forest floor, octane render, light reflection, extreme closeup, highly detailed, 4K”

So, as you can see, you can get some pretty spectacular results, with incredible levels of detail! And unlike DALL-E 2, you can set the aspect ratio of your pictures (as was done in the fourth image generated). You do this with a special “--ar” command in your text prompt to Midjourney, e.g. “--ar 16:9” (here’s the online documentation explaining the various commands you can use).
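
For example, appending that argument to the prompt I used earlier would look something like /imagine a magnificent sailing ship on a stormy sea --ar 16:9. Treat this as a hedged illustration and check Midjourney’s documentation for the current syntax, since the available parameters are still evolving.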

And one area in which Midjourney appears to excel is horror:

Midjourney prompt: “a pained, tormented mind visualized as a spiraling path into the void”
Midjourney prompt: “a beautiful painting of Escape from tarkov in machinarium style, insanely detailed and intricate, golden ratio, hypermaximalist, elegant, ornate, luxury, elite, horror, creepy, ominous, haunting, matte painting, cinematic, cgsociety, James jean, Brian froud, ross tran”

You can see many more examples of depictions of horror in the postings to the Midjourney SubReddit; some are much creepier than these!


So, in comparing the two tools, I think that Midjourney offers more parameters to users (e.g. setting an aspect ratio), which DALL-E currently lacks. Midjourney also seems to produce much more detailed images than DALL-E 2 does, whereas DALL-E 2 is often astoundingly good at a much wider variety of tasks. For example, how about some angry bison logos for your football team?

I think these images are all very good! (Note that DALL-E 2 still struggles with text! Midjourney does too, but it gets the text correct more often than DALL-E 2 does at present, although that might change over time as both systems evolve.)


So, the good news is that both DALL-E 2 and Midjourney are now in open beta, which means that more people (artists and non-artists alike) will get an opportunity to try them out. The bad news is that both still have long waiting lists, and with the move to beta, both DALL-E 2 and Midjourney have put limits in place as to how many free images you can generate.

Midjourney gives you a very limited trial period (about 25 prompts), and then urges you to pay for a subscription, with two options:

Basic membership gives you around 200 images per month for US$10 monthly; standard membership gives you unlimited use of Midjourney for US$30 a month.

For now, OpenAI has decided to set DALL-E 2’s pricing based on a credit system (similar to their GPT-3 AI text-generation tool), as described in the first quote in this blogpost. There’s no option for unlimited use of DALL-E 2 at any price, just options for buying credits in different amounts (and there are no volume discounts for purchasing larger amounts of credits at one time, either). The most you can buy at once is 5,750 credits, which costs US$750. So, yes, it can get quite expensive! (As far as I am aware, your unused credits carry over from one month to the next.)

There’s quite a bit of discussion about OpenAI’s DALL-E 2 pricing model in this thread in the r/dalle2 subReddit; many people are very unhappy with it, particularly since it can take a lot of trial and error with the DALL-E 2 text prompts to generate a desired result. One person said:

In my experience, using Dall-E 2 to generate concept arts for our next project, it takes me between 10 to 20 attempts to get something close to what I want (and I never got exactly what I was asking for)…

Dall-E 2, at this point, is not a professional tool. It’s not viable as one, unless you produce exactly the type of content the AI can produce instantly just the way you want it.

Dall-E 2, at this point, IS A TOY! And that’s OpenAI’s mistake right now. You can’t sell a toy the way you sell a professional service! I’m ready to pay for it because I’m experimenting with it. I’m having fun with it and, when it works, it provides me with images I can also use for professional project. However, I wont EVER spend hundreds of dollars on this just for fun, and I certainly wont pay that amount for it as a tool until it can provide me with better and more consistent results!

OpenAI is going after the WRONG TARGET! OpenAI should be seeling it at a much lower price for everyday people and enthusiasts who want to experiment with it because this is literally the only people who can be 100% satisfied with it at this point and these people wont pay hundreds of dollars per month to keep playing when there are other shiny toys out there, cheaper and more open, existing or about to.

Several commenters said that they will be moving from DALL-E 2 to Midjourney because of its more favourable pricing model, but of course it’s still early days. Also, there are any number of open-source AI art-generation projects in the works, and competition will likely mean more features (and better results!) at lower cost. One thing is certain: we can anticipate these tools improving at an accelerating pace.

The future looks to be both exciting and scary! Exciting in the ability to generate art in a new way, something which up until now has been restricted to experienced artists or photographers, and scary in that we can no longer trust our eyes to tell whether a photograph is real or has been generated by artificial intelligence! Currently, both systems have rules in place to prevent the creation of deepfake images, but in future, things could get Black Mirror weird, and the implications for society could be substantial. (Perhaps now you will understand the first three DALL-E 2 text prompts I used, at the top of this blogpost!)

P.S. Fun fact: the founding CEO of Linden Lab (the makers of Second Life), Philip Rosedale, is one of the advisors to Midjourney, according to their website. Philip gets around! 😉

UPDATE July 22nd, 2022: Of course, the images generated by DALL-E 2 and Midjourney can then be used in other AI tools, such as WOMBO and Reface (please click the links to see all the blogposts I have written about these mobile apps).

Late yesterday, a member of the r/dalle2 community posted the following 18-second video, created by generating a photorealistic portrait of a woman using DALL-E 2, then submitting it to a tool similar to WOMBO and Reface called Deep Nostalgia:

What you see here is an AI-generated image, “animated” using another deep learning tool. This is a tantalizing glimpse into the future, where artificial intelligence can not only create still images, but eventually, video!

UPDATED! DALL-E 2: Some Results (and Some Thoughts) After Using OpenAI’s Revolutionary, Amazing Artificial Intelligence Tool Every Day for Two Weeks to Create Images

For 50,000 years, artistic expression has been unique to mankind.

Today, this hallmark of humanity is claimed by another.

These images, generated by A.I., offer a glimpse into a future with unfathomable creative possibilities.

What will the next 50,000 years bring?

BINARY DREAMS: How A.I. Sees the Universe

This quote comes from an imaginative 3-minute YouTube video by melodysheep, illustrated with images created using Midjourney, one of many new AI-based systems that can create realistic images and art from a text description in natural language.

Such computer systems have surprised most observers by how rapidly they are evolving and learning over time, taking on tasks that were formerly thought to be the exclusive domain of humans. They have sparked curiosity, creativity, and, in some cases, dread, among people—along with much frustration at not yet being able to get their hands on these tools! Some are still unavailable to the public (like Google’s Imagen), while others have long waiting lists (e.g. Midjourney, by an independent research lab, and OpenAI’s DALL-E 2).

As of July 1st, I am one of a little over 50,000 people who have been lucky enough to receive invitations to test out one of the leading text-to-art AI tools, called DALL-E 2. DALL-E 2 is an initiative by OpenAI, an artificial intelligence research company backed by Microsoft and other investors. (Among OpenAI’s earlier offerings is GPT-3, an AI tool which uses deep learning to produce ever more human-like text.)

Over the past two weeks (since I got my invite via email on June 19th, 2022, and set up my account), I have been spending almost every day crafting and submitting text descriptions, and waiting for DALL-E 2 to spit back six result images. Each image in turn can be used as the basis to generate six variations, or if you wish, you can upload an image, erase part of its background, and then use it as a start for your creativity. Some people have uploaded famous works of art from throughout art history, to have DALL-E 2 expand the canvas beyond its original borders, a technique called “uncropping”.

Here’s one example of uncropping which somebody posted to the r/dalle2 community on Reddit, using the famous painting The Swing by French painter Jean-Honoré Fragonard. Here’s the original painting, and here’s the uncrop:

Fragonard’s The Swing

See the tiny coloured squares in the bottom-right corner of the image? Those are watermarks generated by DALL-E 2. You might be wondering if such images can be used for commercial purposes (advertising, album covers, etc.). The answer, from DALL-E 2’s detailed Content Policy, is:

As this is an experimental research platform, you may not use generated images for commercial purposes. For example:

• You may not license, sell, trade, or otherwise transact on these image generations in any form, including through related assets such as NFTs.
• You may not serve these image generations to others through a web application or through other means of third-parties initiating a request.

I have noticed that there are some kinds of images which DALL-E 2 seems to excel at. Among them is food photography. Check out these pictures, based on the following text prompt: “Food photography of delicious freshly fried chicken tenders with a side of honey mustard dipping sauce topped with green onion” (click on each thumbnail below to see it in greater detail).

You would be extremely hard pressed to find any difference between these AI-generated pictures, and actual photographs taken by professional food photographers! As one person commented on Reddit, “Incredible. It really got this one. So many people are going to lose their jobs.”

You can also specify the brand of camera, shutter speed, style of photography, etc. in your text prompts. There are still many problem areas, but people have been able to create some amazing “photographs” and “movie stills”, as the following examples illustrate (text prompts are in the caption of each image):

Prompt: “A still of a woman with heavily made-up eyes and lips, holding a martini glass. Fuji Pro 400H.” (Note how the eyes don’t quite match? DALL-E still has trouble matching eyes in portraits.)
Prompt: “A woman’s face in profile, looking pensive. The lighting is soft and flattering, and the background is a warm, golden colour. Cinestill 800t.”
Prompt: “A man and a woman embracing each other passionately, their faces inches apart, lit by flickering candles. Cinestill 800t.”

Another popular topic is bizarre juxtapositions, entering text prompts of unlikely subjects combined with various art styles, for example, Star Wars stormtrooper recruitment in the style of Soviet-era propaganda posters:

Prompt: “Stormtrooper recruitment, soviet propaganda poster”

Or, perhaps, some advertising for McDonald’s new Minecraft Hamburger?

As you may have noticed, one area where DALL-E 2 fails (often quite humorously!) is in text captions. It’s smart enough to know that there needs to be some text in an advertisement along with the image, but it’s not bright enough to get the spelling right! (It’s become a bit of an inside joke within the DALL-E 2 subReddit.)


So, how have I been using DALL-E 2 over the past couple of weeks?

Well, I generated the following image using the text prompt: “Jesus at the Sermon on the Mount award-winning portrait by Annie Leibovitz dramatic lighting.” (The faces were messed up, so I used DALL-E 2’s built-in erase function to erase both faces and regenerated variations of the original image until I found one I quite liked.)

Prompt: “Jesus at the Sermon on the Mount, award-winning portrait by Annie Liebowitz, dramatic lighting”

Inspired by another member of the r/dalle2 subReddit, I tried the following prompt:

Prompt: “a human interfacing with the universe colorful digital art”

Then, I tried my hand at several variations of the wording: “Human female face in a colorful galactic nebula detailed dreamlike digital art”, to get the following series (please click on each one to see it in a larger size):

(Adding the words “digital art” and “colorful” really makes a difference in the results!)

I also tried my hand at creating some improbable art! Here’s Jesus scrolling through Twitter on his iPhone, by Gustave Doré:

And the same subject as a print by Albrecht Dürer (interestingly, using the word “woodprint” gave me monochrome results, while just “print” threw up a few coloured prints!):

(I love how cranky Jesus is in the last image! He’s definitely gotten into an argument with a Twitter troll!!!)

Finally, I did the same subject as a stained-glass window:

I absolutely love how DALL-E 2 even tried to include some garbled text messages in a few of the resulting images it spit back at me!

Yesterday, I wanted to see how well DALL-E 2 could mimic an existing artist’s style, so I selected renowned French knife-painter Françoise Nielly (website; some examples of her work), who has a very distinctive, vibrant look to her oeuvre:

Here are some of the better results I was able to get after trying various prompts over the course of a couple of hours (interestingly, most of these portraits are of African faces, although I did not specify that in my text prompts!). Again, please click on each thumbnail to see the full image.

And, as I have with previous AI apps like WOMBO and Reface, I have also been feeding Second Life screen captures into DALL-E 2. Here’s an example of an uncrop of one of my favourite SL profile pictures, of my main male avatar Heath Homewood (note that among many of the beta test restrictions imposed by OpenAI, you cannot upload photographs of celebrities or other human faces, but the stylized look of SL mesh avatars doesn’t trigger the system!):

Here are five results I got back, using the text prompt: “Man standing in a library holding a book very detailed stunning award-winning digital art trending on artstation” (click on each to see it in full size):

I had an image of Vanity Fair dressed in an Alice in Wonderland Queen of Hearts costume, where I erased the background of the screen capture, and tried out several different prompts, with some surprising results (I certainly wasn’t expecting a playing card!):

Here are some variations of the SL selfie of one of my alts, where I once again erased the background and expanded the canvas size using Photopea (all the blank white space in this image, I asked DALL-E 2 to fill in for me):

Here are some results of variations of the following text prompt: “fairytale lake forest and mountains landscape by Albert Bierstadt and Ivan Shishkin and Henri Mauperché” (notice again the text failures, and also in some cases how DALL-E 2 “enhanced” the model’s original flower headdress!). Again, click through to see the full-size images.

So, as you can see, I am having fun! But I have also been pondering what this creative explosion within AI means for society as a whole.

I think that we are going to begin to see an accelerating wave, as these AI tools and apps improve, and start to encroach upon existing creative industries. The days of companies meticulously compiling and licensing stock photography are surely numbered, in an age when you can create photorealistic depictions of just about anything you can imagine. And I suspect that the food photography industry is in for an unexpected shake-up!

Many creative types have suggested that tools like DALL-E 2 will become a useful way to mock-up design ideas, saving hours of work at the easel, behind the camera, or sitting in front of PhotoShop. But others fear that many artists and photographers will someday be out of a job, and sooner than they anticipate, in the face of this AI onslaught. For example, why pay an artist to design wallpaper when you can create any sort of pleasing, repeating design yourself, matching specific colours on demand? And keep rerunning the prompts until you get a result you like, in a fraction of the time it would take a human artist to churn them out?

I don’t know how long the closed beta test of DALL-E 2 will run, or when and how OpenAI will start charging for the service; I suspect I will be writing more blogposts about this over time.

UPDATE July 5th, 2022: Laura Lane writes about DALL-E 2 in The New Yorker magazine, in an article titled DALL-E, Make Me Another Picasso, Please.

UPDATE July 10th, 2022: Photographer Thomas Voland has written up a lengthy blogpost about DALL-E 2, including over 100 generated images. The original is in Polish, but here is an English version via Google Translate. Well worth the read!
