UPDATED! DALL-E 2: Some Results (and Some Thoughts) After Using OpenAI’s Revolutionary, Amazing Artificial Intelligence Tool Every Day for Two Weeks to Create Images

For 50,000 years, artistic expression has been unique to mankind.

Today, this hallmark of humanity is claimed by another.

These images, generated by A.I., offer a glimpse into a future with unfathomable creative possibilities.

What will the next 50,000 years bring?

BINARY DREAMS: How A.I. Sees the Universe

This quote comes from an imaginative 3-minute YouTube video by melodysheep, illustrated with images created using Midjourney, one of many new AI-based systems that can create realistic images and art from a text description in natural language.

Such computer systems have surprised most observers by how rapidly they are evolving, taking on tasks formerly thought to be the exclusive domain of humans. They have sparked curiosity, creativity, and, in some cases, dread among people—along with much frustration at not yet being able to get their hands on these tools! Some are still unavailable to the public (like Google’s Imagen), while others have long waiting lists (e.g. Midjourney, by an independent research lab, and OpenAI’s DALL-E 2).

As of July 1st, I am one of a little over 50,000 people who have been lucky enough to receive an invitation to test out one of the leading text-to-image AI art tools, DALL-E 2. DALL-E 2 is an initiative by OpenAI, an artificial intelligence research company backed by Microsoft and other investors. (Among OpenAI’s earlier offerings is GPT-3, an AI tool which uses deep learning to produce ever more human-like text.)

Over the past two weeks (since I got my invite via email on June 19th, 2022, and set up my account), I have been spending almost every day crafting and submitting text descriptions, and waiting for DALL-E 2 to spit back six result images. Each image in turn can be used as the basis to generate six variations, or if you wish, you can upload an image, erase part of its background, and then use it as a start for your creativity. Some people have uploaded famous works of art from throughout art history, to have DALL-E 2 expand the canvas beyond its original borders, a technique called “uncropping”.
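For the programmatically inclined: the three operations I describe above (generating from a prompt, making variations, and erase-and-regenerate) map onto the image endpoints of OpenAI’s `openai` Python package. This is my own illustrative sketch, not official OpenAI sample code—the beta web interface returns six images per prompt, which I mirror here with `n=6`, and an API key is required for the actual calls:

```python
# A sketch (my own, not OpenAI sample code) of how the DALL-E 2 workflow
# maps onto the `openai` Python package's Images API (pre-1.0 versions).
import os

def generation_params(prompt: str, n: int = 6, size: str = "1024x1024") -> dict:
    """Parameters for a text-to-image request, matching the web UI's six results."""
    return {"prompt": prompt, "n": n, "size": size}

params = generation_params(
    "food photography of freshly fried chicken tenders, honey mustard dip"
)

if os.environ.get("OPENAI_API_KEY"):  # only call the API if a key is configured
    import openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    # 1. Text-to-image generation (typing a prompt into the web UI)
    images = openai.Image.create(**params)
    # 2. Variations on an existing image (the "Variations" button)
    # openai.Image.create_variation(image=open("input.png", "rb"), n=6)
    # 3. Edits/uncropping: an image plus a mask whose transparent areas
    #    DALL-E 2 fills in based on the prompt (erase-and-regenerate)
    # openai.Image.create_edit(image=open("input.png", "rb"),
    #                          mask=open("mask.png", "rb"),
    #                          prompt=params["prompt"], n=6)
```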

Here’s one example of uncropping which somebody posted to the r/dalle2 community on Reddit, using the famous painting The Swing by French painter Jean-Honoré Fragonard. Here’s the original painting, and here’s the uncrop:

Fragonard’s The Swing

See the tiny coloured squares in the bottom-right corner of the image? Those are watermarks generated by DALL-E 2. You might be wondering if such images can be used for commercial purposes (advertising, album covers, etc.). The answer, from DALL-E 2’s detailed Content Policy, is:

As this is an experimental research platform, you may not use generated images for commercial purposes. For example:

• You may not license, sell, trade, or otherwise transact on these image generations in any form, including through related assets such as NFTs.
• You may not serve these image generations to others through a web application or through other means of third-parties initiating a request.

I have noticed that there are some kinds of images which DALL-E 2 seems to excel at. Among them is food photography. Check out these pictures, based on the following text prompt: “Food photography of delicious freshly fried chicken tenders with a side of honey mustard dipping sauce topped with green onion” (click on each thumbnail below to see it in greater detail).

You would be extremely hard pressed to find any difference between these AI-generated pictures, and actual photographs taken by professional food photographers! As one person commented on Reddit, “Incredible. It really got this one. So many people are going to lose their jobs.”

You can also specify the brand of camera, shutter speed, style of photography, etc. in your text prompts. There are still many problem areas, but people have been able to create some amazing “photographs” and “movie stills”, as the following examples illustrate (text prompts are in the caption of each image):

Prompt: “A still of a woman with heavily made-up eyes and lips, holding a martini glass. Fuji Pro 400H.” (Note how the eyes don’t quite match? DALL-E still has trouble matching eyes in portraits.)
Prompt: “A woman’s face in profile, looking pensive. The lighting is soft and flattering, and the background is a warm, golden colour. Cinestill 800t.”
Prompt: “A man and a woman embracing each other passionately, their faces inches apart, lit by flickering candles. Cinestill 800t.”

Another popular topic is bizarre juxtapositions: entering text prompts of unlikely subjects combined with various art styles, for example, Star Wars stormtrooper recruitment in the style of Soviet-era propaganda posters:

Prompt: “Stormtrooper recruitment, soviet propaganda poster”

Or, perhaps, some advertising for McDonald’s new Minecraft Hamburger?

As you may have noticed, one area where DALL-E 2 fails (often quite humorously!) is in text captions. It’s smart enough to know that there needs to be some text in an advertisement along with the image, but it’s not bright enough to get the spelling right! (It’s become a bit of an inside joke within the DALL-E 2 subreddit.)


So, how have I been using DALL-E 2 over the past couple of weeks?

Well, I generated the following image using the text prompt: “Jesus at the Sermon on the Mount award-winning portrait by Annie Leibovitz dramatic lighting.” (The faces were messed up, so I used DALL-E 2’s built-in erase function to erase both faces and regenerated variations of the original image until I found one I quite liked.)

Prompt: “Jesus at the Sermon on the Mount, award-winning portrait by Annie Leibovitz, dramatic lighting”

Inspired by another member of the r/dalle2 subreddit, I tried the following prompt:

Prompt: “a human interfacing with the universe colorful digital art”

Then, I tried my hand at several variations of the wording: “Human female face in a colorful galactic nebula detailed dreamlike digital art”, to get the following series (please click on each one to see it in a larger size):

(Adding the words “digital art” and “colorful” really makes a difference in the results!)

I also tried my hand at creating some improbable art! Here’s Jesus scrolling through Twitter on his iPhone, by Gustave Doré:

And the same subject as a print by Albrecht Dürer (interestingly, using the word “woodprint” gave me monochrome results, while just “print” produced a few coloured prints!):

(I love how cranky Jesus is in the last image! He’s definitely gotten into an argument with a Twitter troll!!!)

Finally, I did the same subject as a stained-glass window:

I absolutely love how DALL-E 2 even tried to include some garbled text messages in a few of the resulting images it spit back at me!

Yesterday, I wanted to see how well DALL-E 2 could mimic an existing artist’s style, so I selected renowned French knife-painter Françoise Nielly (website; some examples of her work), who has a very distinctive, vibrant look to her oeuvre:

Here are some of the better results I was able to get after trying various prompts over the course of a couple of hours (interestingly, most of these portraits are of African faces, although I did not specify that in my text prompts!). Again, please click on each thumbnail to see the full image.

And, as I have with previous AI apps like WOMBO and Reface, I have also been feeding Second Life screen captures into DALL-E 2. Here’s an example of an uncrop of one of my favourite SL profile pictures, of my main male avatar Heath Homewood (note that among many of the beta test restrictions imposed by OpenAI, you cannot upload photographs of celebrities or other human faces, but the stylized look of SL mesh avatars doesn’t trigger the system!):

Here are five results I got back, using the text prompt: “Man standing in a library holding a book very detailed stunning award-winning digital art trending on artstation” (click on each to see it in full size):

I had an image of Vanity Fair dressed in an Alice in Wonderland Queen of Hearts costume; I erased the background of the screen capture and tried out several different prompts, with some surprising results (I certainly wasn’t expecting a playing card!):

Here are some variations of the SL selfie of one of my alts, where I once again erased the background and expanded the canvas size using Photopea (all the blank white space in this image, I asked DALL-E 2 to fill in for me):

Here are some results of variations of the following text prompt: “fairytale lake forest and mountains landscape by Albert Bierstadt and Ivan Shishkin and Henri Mauperché” (notice again the text failures, and also in some cases how DALL-E 2 “enhanced” the model’s original flower headdress!). Again, click through to see the full-size images.

So, as you can see, I am having fun! But I have also been pondering what this creative explosion within AI means for society as a whole.

I think that we are going to begin to see an accelerating wave, as these AI tools and apps improve, and start to encroach upon existing creative industries. The days of companies meticulously compiling and licensing stock photography are surely numbered, in an age when you can create photorealistic depictions of just about anything you can imagine. And I suspect that the food photography industry is in for an unexpected shake-up!

Many creative types have suggested that tools like DALL-E 2 will become a useful way to mock-up design ideas, saving hours of work at the easel, behind the camera, or sitting in front of PhotoShop. But others fear that many artists and photographers will someday be out of a job, and sooner than they anticipate, in the face of this AI onslaught. For example, why pay an artist to design wallpaper when you can create any sort of pleasing, repeating design yourself, matching specific colours on demand? And keep rerunning the prompts until you get a result you like, in a fraction of the time it would take a human artist to churn them out?

I don’t know how long the closed beta test of DALL-E 2 will run, or when and how OpenAI will start charging for the service; I suspect I will be writing more blogposts about this over time.

UPDATE July 5th, 2022: Laura Lane writes about DALL-E 2 in The New Yorker magazine, in an article titled DALL-E, Make Me Another Picasso, Please.

UPDATE July 10th, 2022: Photographer Thomas Voland has written up a lengthy blogpost about DALL-E 2, including over 100 generated images. The original is in Polish, but here is an English version via Google Translate. Well worth the read!


UPDATED! Dallying with DALL-E 2: My First Three Days Testing Out AI-Generated Art from Text Prompts (and Some Resulting Images!)

I know this post is off-topic, but I do hope you will indulge me! Today I checked my email and discovered that I have been among the first few lucky people to be accepted into the testing phase of DALL-E 2!

What is DALL-E 2? DALL·E 2 is a new AI system that can create realistic images and art from a description in natural language. Here’s a two-minute video that explains the concept:

Vox has released a 13-minute YouTube video that explains the concept behind DALL-E 2 and related AI-generated art systems in more detail:

DALL-E 2 is a significant step up from the original DALL-E system, promising more realistic and accurate images with four times greater resolution! It can combine artistic concepts, attributes, and styles, as well as make realistic edits to existing images. It can also create variations of an image based on the original.

UPDATE June 23rd: According to its creators, who participated in an AMA session on the r/dalle2 subreddit, DALL-E 2 was trained using roughly 650,000,000 images along with their captions. These images were either licensed or publicly available on the internet.

So today, on my first day using DALL-E 2, I decided to put it through its paces, and I discovered some of the strengths—and weaknesses—of the AI program from OpenAI.

First, I wanted to see what it could do with a selfie from Second Life of my main avatar, Vanity Fair.

I uploaded a picture and clicked on the Variations button, and it generated what looked like reasonable Second Life avatars with slight changes to the original, as if I had fiddled with the face sliders and tried on different wigs:

Then, I wanted to try erasing the background of the image, and using it with a text prompt: “Vanity Fair wearing a ballgown in a highly-realistic Regency Era ballroom with elegant dancers”.

Among the results I got back were these:

I love how it gave Vanity elf ears in the second picture! Then, I decided to erase the background from a shot of my main male SL avatar, Heath Homewood:

The text prompt I gave DALL-E 2 to fill in the erased area was “man in a highly detailed photograph of an elaborate steampunk landscape with airships and towers”. Here are five of the six results it spit back at me (please click on each image to see it in a larger size):

The backgrounds are all quite varied, and also quite intricate in some cases! I also noticed that the AI “augmented” Heath Homewood’s hair in some of the pictures, while it left it alone in others. Innnteresting…..

My next prompt, “smiling man wearing a virtual reality headset with a fantasy metaverse background very colourful and clean detailed advertising art”, also generated some astoundingly good results, any of which could easily be used in a magazine advertisement or article illustration! (Again, please click on the images to see them in full size.)

So, I continued. As my apartment patio looks out over a small forest known for its deer and rabbits, I decided to enter the same text prompt, “a lush green forest with deer and rabbits”, each time appending an artistic style. For each prompt, I picked the best of the six pictures DALL-E 2 gave me back, along with the text prompt I used (in the caption below each picture).

A lush green forest with deer and rabbits digital art
A lush green forest with deer and rabbits impressionist art
A lush green forest with deer and rabbits by Johannes Vermeer
A lush green forest with deer and rabbits by Salvador Dali
A lush green forest with deer and rabbits by Andy Warhol
A lush green forest with deer and rabbits in the style of Sunday on La Grande Jatte by Georges Seurat
a lush green forest with deer and rabbits in the style of Inuit art
A lush green forest with deer and rabbits by Piet Mondrian
A lush green forest with deer and rabbits as a Disney cartoon
A lush green forest with deer and rabbits as a medieval tapestry
A lush green forest with deer and rabbits synthwave
A lush green forest with deer and rabbits cyberpunk
A lush green forest with deer and rabbits kawaii anime style (this wasn’t what I was expecting, but it’s so beautiful, like an illustration from a children’s book!)
A lush green forest with deer and rabbits chibi cartoon style
A lush green forest with deer and rabbits horror movie film still high quality
A lush green forest with deer and rabbits ancient Egyptian carvings
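Iterating through a series like this is easy to script. Here is a hypothetical helper of my own (the style strings are a sample of the ones I actually tried) that builds one prompt per style from a fixed base subject, ready to submit one at a time:

```python
# Build a series of prompt variants by appending an artistic style to a
# fixed base subject (a hypothetical helper; not part of DALL-E 2 itself).
BASE = "A lush green forest with deer and rabbits"
STYLES = [
    "digital art",
    "impressionist art",
    "as a medieval tapestry",
    "in the style of Inuit art",
    "synthwave",
]

def prompt_variants(base: str, styles: list[str]) -> list[str]:
    """One prompt per style, in the order the styles are listed."""
    return [f"{base} {style}" for style in styles]

for prompt in prompt_variants(BASE, STYLES):
    print(prompt)
```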

While I am mightily impressed by these results, I did notice a few things. First, sometimes DALL-E 2 gave me a misshapen or mutated deer or rabbit, or even a mixture of a deer and a rabbit (and in one case, a deer merging into a tree!). Second, DALL-E 2 still seems to have a lot of trouble with faces, both of animals and of people (you can see this most clearly in the Disneyesque image above). In particular, you get terrible results when you put in the name of a real person, e.g. “Philip Rosedale wearing a crown and sitting on a throne in Second Life”, which gave some rather terrifying Frankenstein-looking versions of Philip that I will not share with you!

I did try “Strawberry Singh and Draxtor Despres dressed in Regency costumes in an episode of Bridgerton in Second Life”, and this is the best of the six results it spit back:

Strawberry Singh and Draxtor Despres dressed in Regency costumes in an episode of Bridgerton in Second Life

If you squint (a lot), you can just about make out the resemblances, but it’s very clear that presenting realistic human (or avatar!) faces is something DALL-E 2 is not really very good at yet. However, given how alarmingly quickly this technology has developed in a year (from DALL-E to DALL-E 2), the ability for AI-generated art to more accurately depict human faces realistically is probably not too far off…

However, the fact that you can already generate some stunning (if imperfect) art shows the power of the technology! This is AMAZING stuff.

But it also raises some rather unsettling questions. Will the realm of the professional human artist be supplanted by artificial intelligence? (More likely, tools like DALL-E 2 might be used as a prompt to inspire artists.) And, if so, what does that mean to other creative pursuits and jobs currently done by human beings? Will artists be out of a job, in much the same way as factory workers at Amazon are being replaced by robots?

Will we eventually have such realistic deep fake pictures and videos that they will be indistinguishable from unretouched shots filmed in real life? Are we going to reach the point where we can no longer distinguish what’s “real” from what’s AI-generated—or trust anything we see?

And how will all this impact the metaverse? (One metaverse platform, Sensorium Galaxy, is already experimenting with AI chatbots.)

So, like WOMBO and Reface (which I have written about previously on this blog), DALL-E 2 is equal parts diverting and discomforting. But one thing is certain: I do plan to keep plugging text prompts into DALL-E 2, just to get a glimpse of where we’re going in this brave new world!

UPDATE June 23rd, 2022: I’ve spent the past couple of days playing around with DALL-E 2 a bit more, and I have discovered that, with the right kind of text prompts, you can generate some astoundingly photorealistic human profiles! Here are a couple of examples:

Prompt: “show the entire head and shoulders in a face forward picture of a handsome blonde man with blue eyes and a strong chin award winning photography 35mm realistic realism”
Prompt: “stunning breathtaking head and shoulders portrait of a beautiful African woman golden hour lighting. brilliant use of light and bokeh. Canon 85mm”

It doesn’t have to be a human, either; how about a wood nymph with green hair?

Prompt: “stunning breathtaking photo of a wood nymph with green hair and elf ears in a hazy forest at dusk. Dark, moody, eerie lighting, brilliant use of glowing light and shadow. Sigma 85mm f/1.4”

I’ve also discovered you can combine two or more artistic styles in one result. Here are the six pictures DALL-E 2 spit back in response to the text prompt: “a cottage in a lush green forest with mountains in the background and a blue cloudy sky by Albert Bierstadt and Charles Victor Guilloux and Vilhelm Hammershøi” (please click on each picture to see it in a larger size):

However, there are also some prompts which fail miserably! For example, I tried to create an image using the text prompt: “steampunk gentleman in a top hat riding a penny farthing bicycle in a steampunk landscape with airships in the sky colorful digital art”. Here’s what I got back:

Here are four of those AI-generated pictures (click on each thumbnail to see a larger version):

It’s very clear that DALL-E 2 has no concept of what a penny farthing bicycle looks like! For your reference, here are the results of a Google image search for the vehicle in question:

I assume that DALL-E 2 will get better the more images it is fed (including, hopefully, images of penny farthing bicycles!).

My last prompt yesterday was “Vogue fashion models eating cheeseburgers at MacDonalds”.

Now, while the thumbnails may look good, most of these pictures are nightmare material when you look at them full-size: mismatched, misshapen eyes, wonky face shapes, etc. Really uncanny valley stuff. In thumbnail number six, you can also clearly see that several of the Vogue fashion models have more than two hands!

So, while DALL-E 2 is certainly capable of generating stunning results, it is far from a perfect tool. I don’t think that human artists and designers have to worry about losing their jobs just yet! 😉

I leave you with this thought-provoking half-hour YouTube video by an industrial designer and professor named John Mauriello who claims, “with recent advancements in Artificial Intelligence design tools, we are about to see the biggest creative and cultural explosion since the invention of electricity in the 1890s.”

P.S. With my blogposts about AI tools such as WOMBO, Reface, and now DALL-E 2, plus my coverage of AI implementations of NPCs in social VR platforms such as Sensorium Galaxy, I decided it was time to create a new blogpost category called Artificial Intelligence (please give me a bit of time to go back and add this category to older blogposts, thanks!).

UPDATED! Sensorium Galaxy Update: The Now and the Not Yet

I have been monitoring the progress of the ambitious social VR platform Sensorium Galaxy ever since I first wrote about it on my blog in October of 2020. There have been a couple of very slickly-produced videos recently released by the company, teasing a forthcoming performance by superstar deejay David Guetta on the platform:

In a second video, David is shown being scanned in high resolution in order to create his avatar:

A recent tweet makes it sound like David Guetta’s performance in Sensorium Galaxy is imminent:

David Guetta – The #1 DJ in the World – is the next of ’The Chosen Ones’ to set off for Sensorium Galaxy’s PRISM World – the epicenter of entertainment in the digital metaverse. Don’t miss his upcoming epic shows. Register now to get early access.

Unfortunately, when you do register, all you are presented with is a downloadable technical demo, which requires a high-end gaming PC with either an Oculus Rift or HTC Vive tethered VR headset (alas, no Valve Index support yet):

The Sensorium Galaxy tech demo has some fairly steep hardware requirements

This is NOT a platform for the Oculus Quest crowd! While the social VR platform has not yet launched, the company is already selling full-body avatars in its online store for you to wear while attending future shows:

The company is also demoing its AI bots, including releasing a couple of “interview” videos, where they respond to a reporter’s questions:

I must confess that these chat bots, while certainly able to string together English sentences in response to questions, leave me a bit cold. Why would you want to engage in chitchat with an AI-enabled NPC, other than for novelty’s sake, to test it out for a few minutes? I’m not 100% convinced that a social VR platform really needs a feature like this, especially one where the obvious focus is on music performances.

With its provision of a ready-to-accept-your-cryptocurrency store before the actual product launch, its high-resolution scanning of celebrities, and its audacious, selling-the-sizzle-instead-of-the-steak promotion, Sensorium Galaxy reminds me of nothing so much as the ill-fated MATERIA ONE (formerly called Staramba Spaces; you can follow that sad saga here). MATERIA ONE, while embracing celebrity endorsers such as Paris Hilton and Hulk Hogan, foundered for any number of reasons: misplaced priorities, overweening ambition, and a limited target audience given its requirement for high-end PCVR.

Frankly, I’m not quite sure what to make of Sensorium Galaxy so far. I do know that, with my current hardware setup, I cannot participate in it. The company is definitely trying to generate some serious buzz for the product, and I wish them every success in what is becoming a rather competitive marketplace for virtual events.

But as far as I can tell, and based on what I have seen and read so far, there’s a bit of a gap between the now and the not yet. I will continue to monitor Sensorium Galaxy as it develops!

UPDATE August 11th, 2021: I came across this April 2nd, 2021 press release which says:

Sensorium Corporation today announced the closed beta launch of Sensorium Galaxy — a social metaverse uniting people through high-quality virtual experiences. Selected users have gained access to the platform to explore worlds PRISM and MOTION.

The main goal of this closed beta test is to collect valuable insights to enhance the experiences at Sensorium Galaxy ahead of its public launch in Q2 2021.

“Sensorium Galaxy is revolutionizing how the arts are created, distributed, and enjoyed. From music festivals to dance shows, we’re creating the world’s first social metaverse where everyone can get together, experience high-quality virtual content, and find new opportunities for self-expression,” says Vladimir Kedrinsky, CEO at Sensorium Corporation.

“The SG beta test helps us streamline the in-platform user mechanics, and get actionable insights before the metaverse goes public in the upcoming months. Participants of this invite-only beta test are able to experience some of the sophisticated user-level mechanics that Sensorium Galaxy has to offer,” explains Ivan Nikitin, Head of Product at Sensorium Corporation.

So it sounds as though a lot of work is going on behind the scenes.

Feeding Second Life Selfies into WOMBO and Reface: Now with Duets!

Ever since I discovered it in March, I have been endlessly fascinated with what I can create by plugging Second Life avatar selfies into the two AI facial refacing and animation apps on my iPhone, WOMBO and Reface (here’s my original post, and I wrote three more here, here, and here).

There’s just something about the stylized, perfect way that Second Life avatars look that lends itself perfectly to being manipulated by WOMBO and Reface, which makes the resulting images and videos leap right over the Uncanny Valley for me!

WOMBO only creates lipsync videos, and Reface started off with inserting any face into still images, artworks, and Hollywood movie clips. But recently, Reface also started letting you take any selfie and create lipsync music videos as well:

Marilyn Monroe: “I wanna be loved by you…”

But not to be outdone, WOMBO has just released WOMBO combos: duets where you can select two facial images to animate! Here are three examples:

Promiscuous, by Nelly Furtado
Anything You Can Do, from the musical Annie Get Your Gun

The following is one of my all-time favourite Monty Python comedy sketches! (My brother and I used to say this to each other all the time…)

“Help! Help! I’m being repressed!” from the movie Monty Python and the Holy Grail

Both WOMBO and Reface are available for Apple iOS and Android devices. They’re great fun, the results are entertaining, and I can recommend them both highly!