UPDATED! Dallying with DALL-E 2: My First Three Days Testing Out AI-Generated Art from Text Prompts (and Some Resulting Images!)

I know this post is off-topic, but I do hope you will indulge me! Today I checked my email and discovered that I have been among the first few lucky people to be accepted into the testing phase of DALL-E 2!

What is DALL-E 2? DALL·E 2 is a new AI system that can create realistic images and art from a description in natural language. Here’s a two-minute video that explains the concept:

Vox has released a 13-minute YouTube video that explains the concept behind DALL-E 2 and related AI-generated art systems in more detail:

DALL-E 2 is a significant step up from the original DALL-E system, promising more realistic and accurate images with four times greater resolution! It can combine artistic concepts, attributes, and styles, as well as make realistic edits to existing images. It can also create variations of an image based on the original.

UPDATE June 23rd: According to its creators, who participated in an AMA session on the r/dalle2 subreddit, DALL-E 2 was trained using roughly 650,000,000 images along with their captions. These images were either licensed or publicly available on the internet.

So today, on my first day using DALL-E 2, I decided to put it through its paces, and I discovered some of the strengths—and weaknesses—of this AI program from OpenAI.

First, I wanted to see what it could do with a selfie from Second Life of my main avatar, Vanity Fair.

I uploaded a picture and clicked on the Variations button, and it generated what looked like reasonable Second Life avatars with slight changes to the original, as if I had fiddled with the face sliders and tried on different wigs:

Then, I wanted to try erasing the background of the image, and using it with a text prompt: “Vanity Fair wearing a ballgown in a highly-realistic Regency Era ballroom with elegant dancers”.

Among the results I got back were these:

I love how it gave Vanity elf ears in the second picture! Then, I decided to erase the background from a shot of my main male SL avatar, Heath Homewood:

The text prompt I gave DALL-E 2 to fill in the erased area was “man in a highly detailed photograph of an elaborate steampunk landscape with airships and towers”. Here are five of the six results it spit back at me (please click on each image to see it in a larger size):

The backgrounds are all quite varied, and also quite intricate in some cases! I also noticed that the AI “augmented” Heath Homewood’s hair in some of the pictures, while it left it alone in others. Innnteresting…

My next prompt, “smiling man wearing a virtual reality headset with a fantasy metaverse background very colourful and clean detailed advertising art”, also generated some astoundingly good results, any of which could easily be used in a magazine advertisement or article illustration! (Again, please click on the images to see them in full size.)

So, I continued. As my apartment patio looks out over a small forest known for its deer and rabbits, I decided to enter the same text prompt, “a lush green forest with deer and rabbits”, appending the text with an artistic style. In response to each prompt, I picked the best of the six pictures DALL-E 2 gave me back, along with the text prompts I used (in the captions below each picture).

A lush green forest with deer and rabbits digital art
A lush green forest with deer and rabbits impressionist art
A lush green forest with deer and rabbits by Johannes Vermeer
A lush green forest with deer and rabbits by Salvador Dalí
A lush green forest with deer and rabbits by Andy Warhol
A lush green forest with deer and rabbits in the style of Sunday on La Grande Jatte by Georges Seurat
a lush green forest with deer and rabbits in the style of Inuit art
A lush green forest with deer and rabbits by Piet Mondrian
A lush green forest with deer and rabbits as a Disney cartoon
A lush green forest with deer and rabbits as a medieval tapestry
A lush green forest with deer and rabbits synthwave
A lush green forest with deer and rabbits cyberpunk
A lush green forest with deer and rabbits kawaii anime style (this wasn’t what I was expecting, but it’s so beautiful, like an illustration from a children’s book!)
A lush green forest with deer and rabbits chibi cartoon style
A lush green forest with deer and rabbits horror movie film still high quality
A lush green forest with deer and rabbits ancient Egyptian carvings

While I am mightily impressed by these results, I did notice a few things. First, sometimes DALL-E 2 gave me a misshapen or mutated deer or rabbit, or even a mixture of a deer and a rabbit (and in one case, a deer merging into a tree!). Second, DALL-E 2 still seems to have a lot of trouble with faces, both of animals and of people (you can see this most clearly in the Disneyesque image above). In particular, you get terrible results when you put in the name of a real person, e.g. “Philip Rosedale wearing a crown and sitting on a throne in Second Life”, which gave some rather terrifying Frankenstein-looking versions of Philip that I will not share with you!

I did try “Strawberry Singh and Draxtor Despres dressed in Regency costumes in an episode of Bridgerton in Second Life”, and this is the best of the six results it spit back:

Strawberry Singh and Draxtor Despres dressed in Regency costumes in an episode of Bridgerton in Second Life

If you squint (a lot), you can just about make out the resemblances, but it’s very clear that presenting realistic human (or avatar!) faces is something DALL-E 2 is not really very good at yet. However, given how alarmingly quickly this technology has developed in a year (from DALL-E to DALL-E 2), the ability of AI-generated art to depict human faces realistically is probably not too far off…

However, the fact that you can already generate some amazing (if imperfect) art shows the power of the technology! This is AMAZING stuff.

But it also raises some rather unsettling questions. Will the realm of the professional human artist be supplanted by artificial intelligence? (More likely, tools like DALL-E 2 might be used as a prompt to inspire artists.) And, if so, what does that mean for other creative pursuits and jobs currently done by human beings? Will artists be out of a job, in much the same way as warehouse workers at Amazon are being replaced by robots?

Will we eventually have such realistic deep fake pictures and videos that they will be indistinguishable from unretouched shots filmed in real life? Are we going to reach the point where we can no longer distinguish what’s “real” from what’s AI-generated—or trust anything we see?

And how will all this impact the metaverse? (One metaverse platform, Sensorium Galaxy, is already experimenting with AI chatbots.)

So, like WOMBO and Reface (which I have written about previously on this blog), DALL-E 2 is equal parts diverting and discomforting. But one thing is certain: I do plan to keep plugging text prompts into DALL-E 2, just to get a glimpse of where we’re going in this brave new world!

UPDATE June 23rd, 2022: I’ve spent the past couple of days playing around with DALL-E 2 a bit more, and I have discovered that, with the right kind of text prompts, you can generate some astoundingly photorealistic human profiles! Here are a couple of examples:

Prompt: “show the entire head and shoulders in a face forward picture of a handsome blonde man with blue eyes and a strong chin award winning photography 35mm realistic realism”
Prompt: “stunning breathtaking head and shoulders portrait of a beautiful African woman golden hour lighting. brilliant use of light and bokeh. Canon 85mm”

It doesn’t have to be a human, either; how about a wood nymph with green hair?

Prompt: “stunning breathtaking photo of a wood nymph with green hair and elf ears in a hazy forest at dusk. Dark, moody, eerie lighting, brilliant use of glowing light and shadow. Sigma 85mm f/1.4”

I’ve also discovered that you can combine two or more artistic styles in one result. Here are the six pictures DALL-E 2 spit back in response to the text prompt: “a cottage in a lush green forest with mountains in the background and a blue cloudy sky by Albert Bierstadt and Charles Victor Guilloux and Vilhelm Hammershøi” (please click on each picture to see it in a larger size):

However, there are also some prompts which fail miserably! For example, I tried to create an image using the text prompt: “steampunk gentleman in a top hat riding a penny farthing bicycle in a steampunk landscape with airships in the sky colorful digital art”. Here’s what I got back:

Here are four of those AI-generated pictures (click on each thumbnail to see a larger version):

It’s very clear that DALL-E 2 has no concept of what a penny farthing bicycle looks like! For your reference, here are the results of a Google image search for the vehicle in question:

I assume that DALL-E 2 will get better the more images it is fed (including, hopefully, images of penny farthing bicycles!).

My last prompt yesterday was “Vogue fashion models eating cheeseburgers at MacDonalds”.

Now, while the thumbnails may look good, most of these pictures are nightmare material when you look at them full-size: mismatched, misshapen eyes, wonky face shapes, etc. Really uncanny valley stuff. In thumbnail number six, you can also clearly see that several of the Vogue fashion models have more than two hands!

So, while DALL-E 2 is certainly capable of generating stunning results, it is far from a perfect tool. I don’t think that human artists and designers have to worry about losing their jobs just yet! 😉

I leave you with this thought-provoking half-hour YouTube video by an industrial designer and professor named John Mauriello who claims, “with recent advancements in Artificial Intelligence design tools, we are about to see the biggest creative and cultural explosion since the invention of electricity in the 1890s.”

P.S. With my blogposts about AI tools such as WOMBO, Reface, and now DALL-E 2, plus my coverage of AI implementations of NPCs in social VR platforms such as Sensorium Galaxy, I decided it was time to create a new blogpost category called Artificial Intelligence (please give me a bit of time to go back and add this category to older blogposts, thanks!).

The Brittle Epoch: Canadian Artist Bryn Oh Unveils a New Immersive, Interactive Art Installation in Second Life

Brittle Epoch

Bryn Oh is a Canadian artist and sculptor about whom I have often written before on the RyanSchultz.com blog (here is a link to all my blogposts which mention her). She has long been active in the virtual world of Second Life, and has also created art installations on other metaverse platforms such as Sansar and the former High Fidelity.

On November 1st, 2021, Bryn unveiled her latest immersive, interactive art installation, a continuation of her previous work, called The Brittle Epoch. Second Life blogger Inara Pey writes:

Opening on November 1st at her arts region Immersiva is Bryn Oh’s latest work, entitled The Brittle Epoch, an installation that has been several months in development.

Most of the work is set in a mysterious frozen landscape, with a howling winter wind blowing the snow around, under an oversize full moon. The art installation makes good use of hidden teleporters to whisk you from scene to scene, and you are encouraged to click on everything in order to learn more about what happened! A HUD that is automatically attached to your viewer when you enter helps guide you through the experience (be sure to read all the signs at the entrance!).

Here’s a two-minute promotional video for The Brittle Epoch, released today by Linden Lab (the makers of Second Life):

You can pay a visit to the experience here. Inara Pey also notes:

As noted, Bryn’s installations all take place within the same over-arching universe, and thus share degrees of connectedness. As such, for those possibly unfamiliar with her work, or who wish to re-acquaint themselves with her themes and ideas, I recommend the following resources:

Thanks to Inara for compiling this list of links! You can read her full review of The Brittle Epoch here.

Raspberry Dream Land: A Brief Introduction

Raspberry Dream Land is an invite-only artist platform which is accessible via desktop and VR devices. According to its “soft launch” announcement:

Raspberry Dream Labs is bringing to the world its brainchild, the social WebXR event platform for progressive arts & entertainment: Raspberry Dream Land! A mecca of electronic music and alternative nights, creative hub and multisensory playroom opens its doors with an invite-only soft launch event in all major regions worldwide.

Our platform celebrates the solarpunk future coming into existence by uniting art, technology, sustainability and cyber-sexuality. Lose your avatar on the dance circuit to the live deep techno tunes, join the Central Plaza stage for the artists and brand talks, discover community generated 3D worlds, and experience the one-of-a-kind ‘Sense Magick’ Cyber-Tantra Ritual in the Underworld, the erotic playspace of the Future.

There’s not a whole lot of detail on the website so far, but the project has already attracted a number of artists:

Some of the artists associated with Raspberry Dream Land

There’s also a statement from the founder:

From multisensory academic VR study, 25+ IRL and VR events to [the] world’s 1st Burning Man in VR – over the past two years at Raspberry Dream Labs we explored how the potential of technologies-of-today can redefine self-expression, social entertainment and intimate connections.

While the interests in our events kept growing, we faced censorship from existing VR platforms which made us realize that while there is growing interest from users across the globe, there is no such platform that caters to these needs.

We are excited about our mission and the societal impact of what RD Land is going to unlock.

—Angelina Aleksandrovich, Founder, CEO and Creative Director of Raspberry Dream Labs

If Raspberry Dream Land intrigues you and you want to learn more, you can visit their website, join their Discord server, or follow the project on social media: Facebook, Instagram and Twitter. The platform is currently invite-only, but if you want to add your name to the waiting list, you can do so here.

New Art City: A Brief Introduction

New Art City is a virtual world platform created by students at San José State University (SJSU) in San Jose, California, which is intended to be a virtual gallery and exhibition toolkit for online art exhibitions. As SJSU’s Jon Oakes told me, “It’s like [Mozilla] Hubs, but for artists.”

According to the project’s website:

New Art City is a virtual exhibition platform for new media art with a focus on co-presence and experiencing digital art together. Shows are real-time multiplayer and accessed using a web browser on computer or mobile device, with no need to register, install extra software or enter any personal information. Using built-in tools to manage artworks and room layouts, curators and organizers can create a show and hold a virtual exhibition online. Participants can attend virtual openings together, chat and see each other moving around the space while experiencing digital art in its native format.

According to its mission statement, New Art City’s curation and product design prioritize those who are disadvantaged by structural injustice. An inclusive and redistributive community is as important to this project as the toolkit itself, and the platform seeks to support artists who face barriers in the traditional art world, promoting and amplifying works by queer artists and artists of colour.

Galleries are accessible via the New Art City website, and run inside your web browser (Chrome is the preferred browser). New Art City is not yet compatible with VR headsets, but the creators have built it in a way that will make this possible in the future.

Examples of galleries in New Art City

There are already dozens of exhibits available for you to visit. New Art City is currently in private beta, and access for exhibitors is granted on an invite-only basis. They are planning to launch open signups soon, but in the meantime you may submit a proposal for access here.