UPDATED! Dallying with DALL-E 2: My First Three Days Testing Out AI-Generated Art from Text Prompts (and Some Resulting Images!)

I know this post is off-topic, but I do hope you will indulge me! Today I checked my email and discovered that I am among the lucky first few people accepted into the testing phase of DALL-E 2!

What is DALL-E 2? DALL-E 2 is a new AI system that can create realistic images and art from a description in natural language. Here’s a two-minute video that explains the concept:

Vox has released a 13-minute YouTube video that explains the concept behind DALL-E 2 and related AI-generated art systems in more detail:

DALL-E 2 is a significant step up from the original DALL-E system, promising more realistic and accurate images with four times greater resolution! It can combine artistic concepts, attributes, and styles, as well as make realistic edits to existing images. It can also create variations of an image based on the original.
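For the programmers among my readers: DALL-E 2 access is currently through OpenAI’s web interface, but here is a purely hypothetical Python sketch of what a programmatic workflow covering all three capabilities (text-to-image generation, variations of an image, and edits) might look like. The openai package calls, parameters, and file names below are my own illustrative assumptions, not a documented interface:

```python
# Purely hypothetical sketch of programmatic DALL-E 2 access; the
# endpoint names and parameters are illustrative assumptions, not a
# documented interface (access is currently web-only).
import openai

openai.api_key = "sk-..."  # placeholder API key

# 1. Generate brand-new images from a text prompt.
generated = openai.Image.create(
    prompt="a lush green forest with deer and rabbits, digital art",
    n=6,  # the web interface returns six candidates per prompt
    size="1024x1024",
)

# 2. Create variations of an uploaded image (the "Variations" button).
variations = openai.Image.create_variation(
    image=open("vanity_fair_selfie.png", "rb"),
    n=6,
    size="1024x1024",
)

# 3. Edit an image: the transparent region of the mask is erased and
#    regenerated to match the text prompt (this is how I replaced the
#    backgrounds behind my avatars).
edited = openai.Image.create_edit(
    image=open("heath_homewood.png", "rb"),
    mask=open("heath_homewood_mask.png", "rb"),  # background made transparent
    prompt="man in a highly detailed photograph of an elaborate "
           "steampunk landscape with airships and towers",
    n=6,
    size="1024x1024",
)

# Each response would contain a list of URLs to the generated images.
for item in edited["data"]:
    print(item["url"])
```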

UPDATE June 23rd: According to its creators, who participated in an AMA session on the r/dalle2 subreddit, DALL-E 2 was trained using roughly 650 million images along with their captions. These images were either licensed or publicly available on the internet.

So today, on my first day using DALL-E 2, I decided to put it through its paces, and I discovered some of the strengths—and weaknesses—of the AI program from OpenAI.

First, I wanted to see what it could do with a selfie from Second Life of my main avatar, Vanity Fair.

I uploaded a picture and clicked on the Variations button, and it generated what looked like reasonable Second Life avatars with slight changes to the original, as if I had fiddled with the face sliders and tried on different wigs:

Then, I wanted to try erasing the background of the image, and using it with a text prompt: “Vanity Fair wearing a ballgown in a highly-realistic Regency Era ballroom with elegant dancers”.

Among the results I got back were these:

I love how it gave Vanity elf ears in the second picture! Then, I decided to erase the background from a shot of my main male SL avatar, Heath Homewood:

The text prompt I gave DALL-E 2 to fill in the erased area was “man in a highly detailed photograph of an elaborate steampunk landscape with airships and towers”. Here are five of the six results it spit back at me (please click on each image to see it in a larger size):

The backgrounds are all quite varied, and also quite intricate in some cases! I also noticed that the AI “augmented” Heath Homewood’s hair in some of the pictures, while it left it alone in others. Innnteresting…

My next prompt, “smiling man wearing a virtual reality headset with a fantasy metaverse background very colourful and clean detailed advertising art”, also generated some astoundingly good results, any of which could easily be used in a magazine advertisement or article illustration! (Again, please click on the images to see them in full size.)

So, I continued. As my apartment patio looks out over a small forest known for its deer and rabbits, I decided to enter the same base text prompt, “a lush green forest with deer and rabbits”, appending a different artistic style to it each time. For each prompt, I picked the best of the six pictures DALL-E 2 gave me back; the exact text prompt I used appears in the caption below each picture. (If you’re curious how this experiment might be automated, see the code sketch after the gallery below.)

A lush green forest with deer and rabbits digital art
A lush green forest with deer and rabbits impressionist art
A lush green forest with deer and rabbits by Johannes Vermeer
A lush green forest with deer and rabbits by Salvador Dali
A lush green forest with deer and rabbits by Andy Warhol
A lush green forest with deer and rabbits in the style of A Sunday on La Grande Jatte by Georges Seurat
A lush green forest with deer and rabbits in the style of Inuit art
A lush green forest with deer and rabbits by Piet Mondrian
A lush green forest with deer and rabbits as a Disney cartoon
A lush green forest with deer and rabbits as a medieval tapestry
A lush green forest with deer and rabbits synthwave
A lush green forest with deer and rabbits cyberpunk
A lush green forest with deer and rabbits kawaii anime style (this wasn’t what I was expecting, but it’s so beautiful, like an illustration from a children’s book!)
A lush green forest with deer and rabbits chibi cartoon style
A lush green forest with deer and rabbits horror movie film still high quality
A lush green forest with deer and rabbits ancient Egyptian carvings
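As promised above, here is how that style-sampling experiment might be scripted, again using the hypothetical API sketch from earlier (the openai calls are my illustrative assumption, not a documented interface):

```python
# Hypothetical automation of the style-sampling experiment above;
# openai.Image.create is an assumed, illustrative endpoint.
import openai

openai.api_key = "sk-..."  # placeholder API key

base_prompt = "A lush green forest with deer and rabbits"
styles = [
    "digital art",
    "impressionist art",
    "by Johannes Vermeer",
    "synthwave",
    "as a medieval tapestry",
    "ancient Egyptian carvings",
]

for style in styles:
    result = openai.Image.create(
        prompt=f"{base_prompt} {style}",
        n=6,  # six candidates per prompt, as in the web interface
        size="1024x1024",
    )
    # Print the URL of each candidate so the best one can be hand-picked.
    for item in result["data"]:
        print(style, item["url"])
```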

While I am mightily impressed by these results, I did notice a few things. First, sometimes DALL-E 2 gave me a misshapen or mutated deer or rabbit, or even a mixture of a deer and a rabbit (and in one case, a deer merging into a tree!). Second, DALL-E 2 still seems to have a lot of trouble with faces, both of animals and of people (you can see this most clearly in the Disneyesque image above). In particular, you get terrible results when you put in the name of a real person, e.g. “Philip Rosedale wearing a crown and sitting on a throne in Second Life”, which gave some rather terrifying Frankenstein-looking versions of Philip that I will not share with you!

I did try “Strawberry Singh and Draxtor Despres dressed in Regency costumes in an episode of Bridgerton in Second Life”, and this is the best of the six results it spit back:

Strawberry Singh and Draxtor Despres dressed in Regency costumes in an episode of Bridgerton in Second Life

If you squint (a lot), you can just about make out the resemblances, but it’s very clear that rendering realistic human (or avatar!) faces is something DALL-E 2 is not really very good at yet. However, given how alarmingly quickly this technology has developed in a year (from DALL-E to DALL-E 2), the ability of AI-generated art to depict human faces realistically is probably not too far off…

However, the fact that you can already generate some stunning (if imperfect) art shows the power of the technology. This is AMAZING stuff!

But it also raises some rather unsettling questions. Will the realm of the professional human artist be supplanted by artificial intelligence? (More likely, tools like DALL-E 2 might be used as a prompt to inspire artists.) And, if so, what does that mean for other creative pursuits and jobs currently done by human beings? Will artists be out of a job, in much the same way as warehouse workers at Amazon are being replaced by robots?

Will we eventually have such realistic deep fake pictures and videos that they will be indistinguishable from unretouched shots filmed in real life? Are we going to reach the point where we can no longer distinguish what’s “real” from what’s AI-generated—or trust anything we see?

And how will all this impact the metaverse? (One metaverse platform, Sensorium Galaxy, is already experimenting with AI chatbots.)

So, like WOMBO and Reface (which I have written about previously on this blog), DALL-E 2 is equal parts diverting and discomforting. But one thing is certain: I do plan to keep plugging text prompts into DALL-E 2, just to get a glimpse of where we’re going in this brave new world!

UPDATE June 23rd, 2022: I’ve spent the past couple of days playing around with DALL-E 2 a bit more, and I have discovered that, with the right kind of text prompts, you can generate some astoundingly photorealistic human profiles! Here are a couple of examples:

Prompt: “show the entire head and shoulders in a face forward picture of a handsome blonde man with blue eyes and a strong chin award winning photography 35mm realistic realism”
Prompt: “stunning breathtaking head and shoulders portrait of a beautiful African woman golden hour lighting. brilliant use of light and bokeh. Canon 85mm”

It doesn’t have to be a human, either; how about a wood nymph with green hair?

Prompt: “stunning breathtaking photo of a wood nymph with green hair and elf ears in a hazy forest at dusk. Dark, moody, eerie lighting, brilliant use of glowing light and shadow. Sigma 85mm f/1.4”

I’ve also discovered that you can combine two or more artistic styles in one result. Here are the six pictures DALL-E 2 spit back in response to the text prompt “a cottage in a lush green forest with mountains in the background and a blue cloudy sky by Albert Bierstadt and Charles Victor Guilloux and Vilhelm Hammershøi” (please click on each picture to see it in a larger size):

However, there are also some prompts which fail miserably! For example, I tried to create an image using the text prompt “steampunk gentleman in a top hat riding a penny farthing bicycle in a steampunk landscape with airships in the sky colorful digital art”.

Here are four of those AI-generated pictures (click on each thumbnail to see a larger version):

It’s very clear that DALL-E 2 has no concept of what a penny farthing bicycle looks like! For your reference, here are the results of a Google image search for the vehicle in question:

I assume that DALL-E 2 will get better the more images it is fed (including, hopefully, images of penny farthing bicycles!).

My last prompt yesterday was “Vogue fashion models eating cheeseburgers at McDonald’s”.

Now, while the thumbnails may look good, most of these pictures are nightmare material when you look at them full-size: mismatched, misshapen eyes, wonky face shapes, etc. Really uncanny valley stuff. In thumbnail number six, you can also clearly see that several of the Vogue fashion models have more than two hands!

So, while DALL-E 2 is certainly capable of generating stunning results, it is far from a perfect tool. I don’t think that human artists and designers have to worry about losing their jobs just yet! 😉

I leave you with this thought-provoking half-hour YouTube video by an industrial designer and professor named John Mauriello who claims, “with recent advancements in Artificial Intelligence design tools, we are about to see the biggest creative and cultural explosion since the invention of electricity in the 1890s.”

P.S. With my blogposts about AI tools such as WOMBO, Reface, and now DALL-E 2, plus my coverage of AI implementations of NPCs in social VR platforms such as Sensorium Galaxy, I decided it was time to create a new blogpost category called Artificial Intelligence (please give me a bit of time to go back and add this category to older blogposts, thanks!).

UPDATED! Sensorium Galaxy Update: The Now and the Not Yet

I have been monitoring the progress of the ambitious social VR platform Sensorium Galaxy ever since I first wrote about it on my blog in October of 2020. The company has recently released a couple of very slickly-produced videos, teasing a forthcoming performance by superstar deejay David Guetta on the platform:

In a second video, David is shown being scanned in high resolution in order to create his avatar:

A recent tweet makes it sound like David Guetta’s performance in Sensorium Galaxy is imminent:

David Guetta – The #1 DJ in the World – is the next of ’The Chosen Ones’ to set off for Sensorium Galaxy’s PRISM World – the epicenter of entertainment in the digital metaverse. Don’t miss his upcoming epic shows. Register now to get early access.

Unfortunately, when you do register, all you are presented with is a downloadable technical demo, which requires a high-end gaming PC with either an Oculus Rift or HTC Vive tethered VR headset (alas, no Valve Index support yet):

The Sensorium Galaxy tech demo has some fairly steep hardware requirements

This is NOT a platform for the Oculus Quest crowd! While the social VR platform has not yet launched, the company is already selling full-body avatars in its online store for you to wear while attending future shows:

The company is also demoing its AI bots, including releasing a couple of “interview” videos, where they respond to a reporter’s questions:

I must confess that these chatbots, while certainly able to string together English sentences in response to questions, leave me a bit cold. Why would you want to engage in chitchat with an AI-enabled NPC, other than for novelty’s sake, to test it out for a few minutes? I’m not 100% convinced that a social VR platform really needs a feature like this, especially one where the obvious focus is on music performances.

With its provision of a ready-to-accept-your-cryptocurrency store before the actual product launch, its high-resolution scanning of celebrities, and its audacious, selling-the-sizzle-instead-of-the-steak promotion, Sensorium Galaxy reminds me of nothing so much as the ill-fated MATERIA ONE (formerly called Staramba Spaces; you can follow that sad saga here). MATERIA ONE, while embracing celebrity endorsers such as Paris Hilton and Hulk Hogan, foundered for any number of reasons: misplaced priorities, overweening ambition, and a limited target audience given its requirement for high-end PCVR.

Frankly, I’m not quite sure what to make of Sensorium Galaxy so far. I do know that, with my current hardware setup, I cannot participate in it. The company is definitely trying to generate some serious buzz for the product, and I wish them every success in what is becoming a rather competitive marketplace for virtual events.

But as far as I can tell, and based on what I have seen and read so far, there’s a bit of a gap between the now and the not yet. I will continue to monitor Sensorium Galaxy as it develops!

UPDATE August 11th, 2021: I came across this April 2nd, 2021 press release which says:

Sensorium Corporation today announced the closed beta launch of Sensorium Galaxy — a social metaverse uniting people through high-quality virtual experiences. Selected users have gained access to the platform to explore worlds PRISM and MOTION.

The main goal of this closed beta test is to collect valuable insights to enhance the experiences at Sensorium Galaxy ahead of its public launch in Q2 2021.

“Sensorium Galaxy is revolutionizing how the arts are created, distributed, and enjoyed. From music festivals to dance shows, we’re creating the world’s first social metaverse where everyone can get together, experience high-quality virtual content, and find new opportunities for self-expression,” says Vladimir Kedrinsky, CEO at Sensorium Corporation.

“The SG beta test helps us streamline the in-platform user mechanics, and get actionable insights before the metaverse goes public in the upcoming months. Participants of this invite-only beta test are able to experience some of the sophisticated user-level mechanics that Sensorium Galaxy has to offer,” explains Ivan Nikitin, Head of Product at Sensorium Corporation.

So it sounds as though a lot of work is going on behind the scenes.

Feeding Second Life Selfies into WOMBO and Reface: Now with Duets!

Ever since I discovered them in March, I have been endlessly fascinated with what I can create by plugging Second Life avatar selfies into the two AI face-swapping and facial animation apps on my iPhone, WOMBO and Reface (here’s my original post, and I wrote three more here, here, and here).

There’s just something about the stylized, perfect way that Second Life avatars look that lends itself beautifully to being manipulated by WOMBO and Reface, and makes the resulting images and videos leap right over the uncanny valley for me!

WOMBO only creates lipsync videos, and Reface started off with inserting any face into still images, artworks, and Hollywood movie clips. But recently, Reface also started letting you take any selfie and create lipsync music videos as well:

Marilyn Monroe: “I wanna be loved by you…”

But not to be outdone, WOMBO has just released WOMBO combos: duets where you can select two facial images to animate! Here are three examples:

Promiscuous, by Nelly Furtado
Anything You Can Do, from the musical Annie Get Your Gun

The following is one of my all-time favourite Monty Python comedy sketches! (My brother and I used to say this to each other all the time…)

“Help! Help! I’m being repressed!” from the movie Monty Python and the Holy Grail

Both WOMBO and Reface are available for Apple iOS and Android devices. They’re great fun, the results are entertaining, and I can recommend them both highly!

Vanity Goes Vintage! (As Does Moesha!)

The Reface app on my iPhone just added a whole whack of vintage photographs to play with, so I had some fun tonight! Here’s what Vanity Fair looks like in Second Life:

And here is what my Vanity looks like as a vintage model! Just click on any thumbnail to see it in full size:

Not to be left out of the fun is my Afro-Canadian model, Moesha Heartsong, who looks like this in her native Second Life (and who has been through the Reface app before here on my blog):

And here is Moesha in a variety of vintage poses! Once again, you can click on any thumbnail to pull up a full-sized version.

Of course, you can then feed the Reface-d image into WOMBO, for even more fun and genre-bending, history-defying hilarity!!!

My Milkshake
Hollaback Girl

Between WOMBO and Reface, I am having so much fun! It’s helping me stay sane and entertained whilst under pandemic lockdown here in Winnipeg.