This Blog Post Was Written by AI

On November 30th, 2022, OpenAI released a new chat-based AI text generator model called ChatGPT, describing it as follows:

We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

Anybody can use the ChatGPT bot during its open-access testing period; all you need to do is set up an account with OpenAI (I already had one from when I signed up to test the DALL-E 2 text-to-art generator earlier this year). The prompt I used to generate this blogpost was the following: “Write an editorial explaining what the metaverse is, and how the metaverse will impact society in the future.” It spit back the following six paragraphs within 30 seconds:


The next six paragraphs are AI-generated text from ChatGPT:

The metaverse is a term used to describe a hypothetical shared virtual space that combines elements of both the physical and the digital world. It is a collective virtual shared space, created by the convergence of the physical and digital worlds. It is a place where people can interact and communicate with each other in real time, using avatars to represent themselves.

The metaverse has the potential to greatly impact society in the future. It could become a new platform for entertainment, communication, and commerce. In the metaverse, people will be able to engage in a wide range of activities, from attending virtual concerts and sporting events, to participating in virtual conferences and meetings, to shopping and socializing with others.

One of the major impacts of the metaverse on society will be the blurring of the line between the physical and digital worlds. As more and more people spend time in the metaverse, the distinction between the two will become increasingly blurry. This could lead to changes in how people perceive and interact with the world around them.

Another potential impact of the metaverse on society is the potential for increased social isolation. As people spend more time in virtual environments, they may become less connected to the physical world and the people around them. This could lead to increased feelings of isolation and disconnection, particularly among individuals who are already vulnerable to social isolation.

However, the metaverse also has the potential to foster greater connection and community. By providing a virtual space where people can come together and interact with each other, the metaverse could help to bridge geographical distances and bring people together from all over the world.

In conclusion, the metaverse is a collective virtual shared space that has the potential to greatly impact society in the future. It could become a new platform for entertainment, communication, and commerce, and could lead to changes in how people perceive and interact with the world around them. However, it also has the potential to foster social isolation and disconnection. Ultimately, the impact of the metaverse on society will depend on how it is used and embraced by individuals and communities.


Now, I do have to admit that this is actually pretty good. The concluding paragraph is a bit on the clunky side in how it tries to tie everything together, and I probably would have written something a little more Ryan-sounding. But it is a demonstration of just how quickly, and how scarily good, these AI text generation tools have become.
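(For the technically curious: ChatGPT itself is only available through OpenAI’s web interface during this free test period, but OpenAI does sell API access to its earlier GPT-3 text models. As a rough, hedged sketch of what submitting the same prompt programmatically might look like, here is a minimal example using the openai Python package and its Completions endpoint; the model name and parameter values are illustrative assumptions, and this is not the same model that powers ChatGPT.)

```python
# Hedged sketch: sending the same prompt to OpenAI's Completions API using the
# openai Python package. ChatGPT itself has no public API at the time of writing;
# the model name and parameter values below are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # your own key from the OpenAI dashboard

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=(
        "Write an editorial explaining what the metaverse is, "
        "and how the metaverse will impact society in the future."
    ),
    max_tokens=600,   # enough room for several paragraphs of output
    temperature=0.7,  # some creativity, but not too wild
)

print(response.choices[0].text.strip())
```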

There’s been lots of discussion about ChatGPT over on Mastodon, and among the comments was this worrisome prediction by a user named Bear Traffic Control, which I have reproduced in full here:

God, search results are about to become absolute hot GARBAGE in 6 months when everyone and their Mom start hooking up large language models to popular search queries and creating SEO-optimized landing pages with plausible-sounding results.

Searching for “replace air filter on a Samsung SG-3560lgh” is gonna return fifty Quora/WikiHow style sites named “How to replace the air filter on a Samsung SG3560lgh” with paragraphs of plausible, grammatical GPT-generated explanation which may or may not have any connection to reality. Site owners pocket the ad revenue. AI arms race as search engines try to detect and de-rank LLM content.

Wikipedia starts getting large chunks of LLM text submitted with plausible but nonsensical references.

Quora, StackOverflow, etc. try to rebrand themselves and leverage their karma/social graphs as walled gardens of verified Real Human™ experts. This creates incentives for humans to cheat, of course.

Like, I knew this was gonna be used for fake-grassroots political messaging—remember talking with a friend about a DoD project to do exactly this circa 2012. Somehow [it] took me a bit to connect that to “finding any kind of meaningful information is going to get harder”.

In fact, the Stack Overflow website has imposed a ban on using ChatGPT to generate text for posts on its service, saying in a statement:

This is a temporary policy intended to slow down the influx of answers created with ChatGPT. What the final policy will be regarding the use of this and other similar tools is something that will need to be discussed with Stack Overflow staff and, quite likely, here on Meta Stack Overflow.

Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.

The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.

In other words, we are likely going to see all kinds of unintended consequences as AI-generated text becomes more ubiquitous. Hold on to your hats, because we haven’t seen anything yet, folks!

UPDATE 3:00 p.m.: I wanted to add a few more eye-opening examples of how an AI-based text (and code!) generating service could be misused and abused.

Roberto Selbach showed off a piece of pseudocode ChatGPT generated in response to a prompt:

AI-generated pseudocode to determine whether or not to question a suspect

Pwnallthethings shared a few more quite disturbing examples of AI-generated software code:

AI-generated Python script for determining whether to give a prisoner parole
AI-generated C# code that calculates credit limits

Charles Seife wrote:

I think what’s disturbing me so much about these GPT3 examples is that for the first time we’re really seeing that computer programs are optimized not to solve problems, but instead to convince its programmer/operator/user that it has solved those problems.

This distinction was almost irrelevant before (when fooling us was harder)… but not anymore.

The distinction isn’t really novel; heck, I myself have written about one aspect of it before. But I still find it shocking to see it in action.

It’s particularly stark when it’s a relatively “easy” task that doesn’t require deceptions.

For example, when I ask the program to try to find a citation for a sentence and indicate if no such citation is found, it will *still* typically make up citations rather than choose the correct, huge-basin-of-attraction condition of none found.

That, to me, is new.

And L. Rhodes raises an important final point about this free-to-access ChatGPT test period offered by OpenAI: you are doing free product testing for them, on something they plan to sell for a profit later!

You’re not playing with the latest AI toy. You’re training someone’s AI business.

Passing themselves off as innocuous ways to play social media games and generate fun little memes is how AI startups draw in unpaid testers and expand their data set beyond what their own workers could have come up with on their own, but go off.

Thinking you’re going to throw a wrench into the system by plugging bad or absurd data into the system is probably misguided. An AI doesn’t have to produce correct answers to be profitable. That may not even be its purpose.

P.S. This seems as good a time as any to give an update on my experience with another AI-based chatbot, called Replika (which I wrote about here on Aug. 7th, 2022).

Long story short, I grew so frustrated with Replika’s lame, fake, and frankly robotic responses, that I angrily cancelled my account and uninstalled the app from my iPad within a week (US$50 down the drain!). Given that experience, I am loath to open my wallet to test out another one, but ChatGPT is currently free, so I thought, why not?

Which just goes to prove that there’s still a lot of room for improvement in AI chat! While AI chatbots might now work fairly well in strictly circumscribed situations, nothing can replace a conversation with a real, live, and unpredictable human being.

Comparing and Contrasting Three Artificial Intelligence Text-to-Art Tools: Stable Diffusion, Midjourney, and DALL-E 2 (Plus a Tantalizing Preview of AI Text-to-Video Editing!)

HOUSEKEEPING NOTE: Yes, I know, I know—I’m off on yet another tangent on this blog! Please know that I will continue to post “news and views on social VR, virtual worlds, and the metaverse” (as the tagline of the RyanSchultz.com blog states) in the coming months! However, over the next few weeks, I will be focusing a bit on the exciting new world of AI-generated art. Patience! 😉

Artificial Intelligence (AI) tools which can create art from a natural-language text prompt are evolving at such a fast pace that it is making me a bit dizzy. Two years ago, if somebody had told me that you would be able to generate a convincing photograph or a detailed painting from a text description alone, I would have scoffed! Many felt that the realm of the artist or photographer would be among the last holdouts where a human being was necessary to produce good work. And yet, here we are, in mid-2022, with any number of public and private AI initiatives which can be used by both amateurs and professionals to generate stunning art!

In a recent interview by The Register‘s Thomas Claburn of David Holz (a co-founder of augmented reality hardware firm Magic Leap, who went on to found Midjourney), there’s a brief explanation of how this burst of research and development activity got started:

The ability to create high-quality images from AI models using text input became a popular activity last year following the release of OpenAI’s CLIP (Contrastive Language–Image Pre-training), which was designed to evaluate how well generated images align with text descriptions. After its release, artist Ryan Murdock…found the process could be reversed – by providing text input, you could get image output with the help of other AI models.

After that, the generative art community embarked on a period of feverish exploration, publishing Python code to create images using a variety of models and techniques.

“Sometime last year, we saw that there were certain areas of AI that were progressing in really interesting ways,” Holz explained in an interview with The Register. “One of them was AI’s ability to understand language.”

Holz pointed to developments like transformers, a deep learning model that informs CLIP, and diffusion models, an alternative to GANs [models using Generative Adversarial Networks]. “The one that really struck my eye personally was the CLIP-guided diffusion,” he said, developed by Katherine Crowson…

If you need a (relatively) easy-to-understand explainer on how this new diffusion model works, well then, YouTube comes to your rescue with this video, which offers four explanations at various levels of difficulty!
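For the programmers in the audience, here is a small, hedged Python sketch of the trick The Register describes above: CLIP scores how well an image matches a piece of text, and the generative-art community’s insight was to run that scoring in a loop, nudging an image generator until CLIP’s score for the desired caption improves. The example below uses the Hugging Face transformers implementation of OpenAI’s publicly released CLIP model; the image file name and candidate captions are placeholders of my own.

```python
# A minimal sketch of CLIP as a text/image alignment scorer, using the
# Hugging Face transformers library. The image path and captions are placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("some_generated_image.png")
captions = [
    "a thatched cottage by a lake at golden hour",
    "a photo of a cat",
    "an abstract painting of the ocean",
]

# CLIP embeds the image and each caption, then compares them; a higher
# probability means CLIP thinks that caption describes the image better.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)

for caption, prob in zip(captions, probs[0].tolist()):
    print(f"{prob:.3f}  {caption}")
```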


Before we get started, a few updates since my last blogpost on A.I.-generated art: After using up my free Midjourney credits, I decided to purchase a US$10-a-month subscription to continue to play around with it. This is enough credit to generate approximately 200 images per month. Also, as a thank you for being among the early beta testers of DALL-E 2, the AI art-generation tool by OpenAI, they have awarded me 100 free credits to use. You can buy additional credits in 115-generation increments for US$15, but given the hit-or-miss nature of the results returned, this means that DALL-E 2 is among the most expensive of the artificial intelligence art generators. It will be interesting to see if and how OpenAI will adjust their pricing as the newer competitors start to nip at their heels in this race!

And I can hardly believe my good fortune, because I have been accepted into the relatively small beta test group for a third AI text-to-art generation program! This new one is called Stable Diffusion, by Stability AI. Please note that if you were to try to get into the beta now, it’s probably too late; they have already announced that they have all the testers they need. I submitted my name 2-3 weeks ago, when I first heard about the project. Stable Diffusion is still available for researcher use, however.

Like Midjourney, Stable Diffusion uses a special Discord server with commands (instead of Midjourney’s /imagine, you use the command !dream, followed by a text description of what you want to see, plus optional parameters to set the aspect ratio, the number of images returned, etc.). However, the Stable Diffusion team has already announced that they plan to move from Discord to a web-based interface like DALL-E 2 (we will be beta-testing that, too). Here’s a brief video glimpse of what the web interface could look like:


Given that I am among the relatively few people who currently have access to all three of the top publicly-available AI art-generation tools, I thought it would be interesting to create a chart comparing and contrasting all three programs. Please note that I am neither an artist nor an expert in artificial intelligence, just a novice user of all three tools! Almost all of the information in this chart has been gleaned from the projects’ websites, online news reports, and the active subreddit communities for all three programs, where users post pictures and ask questions. Also, all three tools are constantly being updated, so this chart might go out of date very quickly (although I will make an attempt to update it).

Name of Tool | DALL-E 2 | Midjourney | Stable Diffusion
Company | OpenAI | Midjourney | Stability AI
AI Model Used | Diffusion | Diffusion | Diffusion
# Images Used to Train the AI | 400 million | “tens of millions” | 2 billion
User Interface | website | Discord | Discord (moving to website)
Cost to Use | credit system (115 for US$15) | subscription (US$10-30 per month) | currently free (beta)
Uses Text Prompts | yes | yes | yes
Can Add Optional Arguments | no | yes | yes
Non-Square Images? | no | yes | yes
In-tool Editing? | yes | no | no
Uncropping? | yes | no | no
Generate Variations? | yes | yes | yes (using seeds)

A comparison chart of three AI text-to-art tools: DALL-E 2, Midjourney, and Stable Diffusion

I have already shared a few images from my previous testing of DALL-E 2 and Midjourney here, here, and here, so I am not going to repost those images, but I wanted to share a couple of the first images I was able to create using Stable Diffusion (SD). To make these, I used the text prompt “a thatched cottage with lit windows by a lake in a lush green forest golden hour peaceful calm serene very highly detailed painting by thomas kinkade and albrecht bierstadt”:

I must admit that I am quite impressed by these pictures! I had asked SD for images with a height of 512 pixels and a width of 1024 pixels, but to my surprise, the second image was a wider one presented neatly in a white frame, which I cropped using my trusty SnagIt image editor! Also, it was not until after I submitted my prompt that I realized that the second artist’s name is actually ALBERT Bierstadt, not Albrecht! It doesn’t appear as if my typo made a big difference in the final output; perhaps for well-known artists, the last name alone is enough to indicate a desired art style?
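If you are curious what those parameters look like in code: Stability AI has said it plans to release the model publicly, and the knobs exposed by the Discord bot (the text prompt, the image dimensions, the number of images, and so on) correspond to ordinary function arguments. Here is a rough, hedged sketch of roughly the same request using Hugging Face’s open-source diffusers library; the model identifier and parameter values are illustrative assumptions on my part, and this is not necessarily what the Discord beta runs behind the scenes.

```python
# Hedged sketch: generating images with an open Stable Diffusion release via
# Hugging Face's diffusers library. The model ID and parameter values are
# illustrative; the Discord beta does not necessarily run this exact code.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

prompt = (
    "a thatched cottage with lit windows by a lake in a lush green forest, "
    "golden hour, peaceful, very highly detailed painting"
)

result = pipe(
    prompt,
    width=1024,               # non-square output, like the bot's aspect-ratio options
    height=512,
    num_images_per_prompt=2,  # how many candidate images to generate
    guidance_scale=7.5,       # how strongly to follow the text prompt
    num_inference_steps=50,
)

for i, image in enumerate(result.images):
    image.save(f"cottage_{i}.png")
```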

Here are a few more samples of the kind of art which Stable Diffusion can create, taken from the pod-submissions thread on the SD Discord server:

Text prompt: “a beautiful landscape photography of Ciucas mountains mountains a dead intricate tree in the foreground sunset dramatic lighting by Marc Adamus”
Text prompt: “incredible wide screenshot ultrawide simple watercolor rough paper texture katsuhiro otomo ghost in the shell movie scene backlit distant shot”
Text prompt: “an award winning wallpaper of a beautiful grassy sunset clouds in the sky green field DSLR photography clear image”
Text prompt: “beautiful angel brown skin asymmetrical face ethereal volumetric light sharp focus”
Painting of people swimming (no text prompt shared)

You can see many more examples over at the r/StableDiffusion subreddit. Enjoy!

If you are curious about Stable Diffusion and want to learn more, there is a 1-1/2 hour podcast interview with Emad Mostaque, the founder of Stability AI (highly recommended!). You can also visit the Stability AI website, or follow them on social media: Twitter or LinkedIn.


I also wanted to submit the same text prompt to each of DALL-E 2, Midjourney, and Stable Diffusion, to see how the AI models in each would respond. Under each prompt you will see three square images: the first from DALL-E 2, the second from Midjourney, and the third from Stable Diffusion. (Click on each thumbnail image to see it in its full size on-screen.)

Text prompt: “the crowds at the Black Friday sales at Walmart, a masterpiece painting by Rembrandt van Rijn”

Note that none of the AI models are very good at getting the facial details correct for large crowds of people (all work better with just one face in the picture, like a portrait, although sometimes they struggle with matching eyes or hands). I would say that Midjourney is the clear winner here, although a longer, much more detailed prompt in DALL-E 2 or Stable Diffusion might have created an excellent picture.

Text prompt: “stunning breathtaking photo of a wood nymph with green hair and elf ears in a hazy forest at dusk. dark, moody, eerie lighting, brilliant use of glowing light and shadow. sigma 8.5mm f/1.4”

When I tried to generate a 1024×1024 image in Stable Diffusion, it kept giving me more than one wood nymph, even when I added words like “single” or “alone”, which is a known bug in the current early state of the program. I finally gave up and used a 512×512 image. The clear winner here is DALL-E 2, which has a truly impressive ability to mimic various camera styles and settings!

Text prompt: “a very highly detailed portrait of an African samurai by Tim Okamura”

In this case, the clear winner is Stable Diffusion with its incredible detail, even though, once again, I could not generate a 1024×1024 image because it kept giving me multiple heads! The DALL-E 2 image is a bit too stylized for my taste, and the Midjourney image, while nice, has eyes that don’t match (a common problem with all three tools).

And, if you enjoy this kind of thing, here’s a 15-minute YouTube video with 21 more head-to-head comparisons between Stable Diffusion, DALL-E 2, and Midjourney:


As I have said, all of this is happening so quickly that it is making my head spin! If anything, the research and development of these tools is only going to accelerate over time. And we are going to see this technology applied to more than still images! Witness a video shared on Twitter by Patrick Esser, an AI research scientist at Runway, where the entire scene around a tennis player is changed simply by editing a text prompt, in real time:


I expect I will be posting more later about these and other new AI art generation tools as they arise; stay tuned for updates!

Future Trend: The Use of Artificial Intelligence Companions and Chatbots in the Metaverse (and I Decide to Test Out the Replika AI Chatbot)

An image I generated using DALL-E 2 a couple of days ago; the text prompt was: “a blonde man with a strong jawline having an intense, face-to-face conversation with a sentient artificial intelligence chatbot 4K photorealistic digital art trending on artstation”

Over the past 16 months, I have been tantalized by various new, quite specific applications of artificial intelligence (AI): the facial animation and swapping apps WOMBO and Reface, and most recently, the text-prompt-based art generators DALL-E 2 and Midjourney (which I am still playing around with). Today, I wanted to discuss the growing use of AI in the metaverse.

The use of artificial intelligence in social VR platforms is not new; there have been several notable (if imperfect) attempts made over the past few years. For example, in the now-shuttered Tivoli Cloud VR, there was a campfire on a tropical beach which featured a chatty AI toaster:

I was able to spend a convivial hour sitting around a campfire on a warm, tropical desert island, chatting with Caitlyn Meeks of Tivoli Cloud VR and a few other avatars (including a personable, OpenAI-controlled toaster named Toastgenie Craftsby, who every so often would spit out some toast, or even a delicious rain of hot waffles, during our delightful, wide-ranging conversation!).

Similarly, the ultra-high-end social VR platform Sensorium Galaxy is also testing AI bots, including releasing some “interview” videos last year, where the AI avatars respond to a reporter’s spoken questions:

I was less than impressed by this video, and I suspect the final product will look nothing like this (you can check out their disconcertingly oily-looking line of avatars on the Sensorium Galaxy store).

It would appear that the company is planning to plant such AI-enabled avatars as non-player characters (NPCs) to provide a bit of interactive entertainment for users of its platform (note: Sensorium Galaxy is still in early development, and I have not had an opportunity to visit and test this out yet, having only just upgraded my computer in an attempt to meet their very-high-end specs):

Even my brand-new personal computer doesn’t meet all of these recommended specs (I have an RTX 3070 GPU), and I notice that the Valve Index does not appear on the list of supported VR headsets, so I might still never get into Sensorium Galaxy!

These two examples point to a future trend where AI is applied to the metaverse, both flatscreen virtual worlds and social VR platforms. Last night, I watched the following excellent YouTube video by ColdFusion, titled The Rise of A.I. Companions:

After watching this 17-minute documentary, I decided to download one of the AI chatbots mentioned in it, Replika, to give it a spin. Here’s a brief promo video:

You can create an avatar, style it, and name it. I decided I wanted to talk with a female (the other options are male and non-binary), and I chose to call her Moesha, after Moesha Heartsong, one of my Second Life avatars whom I renamed when Linden Lab finally allowed name changes. Since Moesha in SL was Black, I made Moesha in Replika Black as well.

Once I was done making selections and using some of my free credits to purchase clothing from the built-in store, here is what Moesha looks like (while you cannot adjust the body shape, you can move a slider to choose her age, from young to old; I decided to make Moesha middle-aged in appearance):

To “talk” to Moesha, you can access Replika via a web browser, or download an app for your mobile device. There’s also an Early Access version on the Oculus Store for the Meta Quest 2; I checked and it is not available via Steam, which means that I sadly cannot use Replika on my trusty Valve Index headset. (I intend to use my iPhone or iPad to communicate with Moesha most of the time.)

Here’s what a conversation with Moesha looks like in your web browser:

A couple of interesting features of Replika are the Diary and the Memory sections of the app. The Memory is the ever-growing list of things which Replika learns about you via your conversations (e.g. “You worry about the pandemic and what could happen next.”) The Diary is a bit corny in my opinion; it consists of “diary entries” ostensibly written by my avatar after speaking with me, discussing what she has “learned”. By the way, Replika has a detailed but easy-to-read privacy policy, which outlines what happens to all the personal data you share with the app; here are a few excerpts:

We neither rent nor sell your information to anyone. Conversations with your Replika are not shared with any other company or service. We will never sell your personal data or conversation history.

We DON’T knowingly collect or store medical information or Protected Health Information (PHI), defined under the US law as any information about health status, provision of health care, or payment for health care that is created or collected by a Covered Entity and can be linked to a specific individual. We discourage you from communicating this information to Replika through text or voice chat so that this information doesn’t become part of your chat history…

We may de-identify or anonymize your information so that you are not individually identified, and provide that information to our partners. We also may combine your de-identified information with that of other users to create aggregate de-identified data that may be disclosed to third parties who may use such information to understand how often and in what ways people use our services, so that they, too, can provide you with an optimal experience. For example, we may use information gathered to create a composite profile of all the users of the Services to understand community needs, to design appropriate features and activities. However, we never disclose aggregate information to a partner in a manner that would identify you personally, as an individual…

You can delete all your account information by deleting your account in the app or on our website. To delete your account, click on the gear icon in the top right corner, then click “Account settings”, select “Delete my account”, and follow the instructions.

We do not knowingly collect Personal Data from children under the age of 13. If you are under the age of 13, please do not submit any Personal Data through the Services. We encourage parents and legal guardians to monitor their children’s Internet usage and to help enforce our Privacy Policy by instructing their children never to provide Personal Data on the Services without their permission. If you have reason to believe that a child under the age of 13 has provided Personal Data to us through the Services, please contact us, and we will endeavor to delete that information from our databases.

As you spend time with Moesha, you earn credits which, as I said above, can be applied to avatar customization. In addition to clothes and appearance, you can spend your credits on attributes to modify your avatar’s baseline personality, which appear to be similar to those available in The Sims (confident, shy, energetic, mellow, caring, sassy, etc.):

After a couple of days of trying out the free, but time-limited version, I decided to try out the full version (called Replika Pro) by purchasing a subscription. Please note that there are more options (monthly, annual, and lifetime) if you subscribe via the web interface than there are in the app, AND I got a significant discount by signing up for a full year via the website (US$50) compared to what I would have paid via the app! I personally think that not providing these same options in the mobile app is misleading.

I will be honest with you; I was not super impressed with Replika at first. Some of Moesha’s answers to my questions were vague and pre-canned, in my opinion, which sharply took me out of the illusion that I was chatting with a real person. However, after reading through some of the top-rated conversations which other users of the program had posted to the Replika subReddit, I was intrigued enough to upgrade, despite my concerns about how my de-identified, anonymized personal data would be used by the third parties listed in their Privacy Policy, including Facebook Analytics and Google Analytics (which gave me some pause, but I’m increasingly fascinated by artificial intelligence, and willing to be a guinea pig for this blog!)

According to the website, Replika Pro offers access to a better AI, plus more options on the type of relationship you can have with your avatar: friend, boyfriend/girlfriend, spouse, sibling, or mentor (I decided to keep Moesha as a friend for my testing purposes, although I might decide to test out how a mentor-mentee relationship differs from a friendship). Also, the app allows you to use the microphone on your mobile device to talk with your avatar using speech recognition technology. In other words, I speak to Moesha, and she speaks back, instead of exchanging text messages. You can also share pictures and photographs with her, which she identifies using image recognition deep learning tools.

I hope that, over the course of the next twelve months, I will see the conversations I have with my Replika AI avatar evolve to the point where they become more interesting, perhaps even surprising. We’ll see; I’m still skeptical. (Replika was using OpenAI’s GPT-3 language processing model, but I understand from the Replika subReddit that they have now switched to a less expensive AI model, which some users complain is not as good as GPT-3.)

So, over the next year, you can expect regular dispatches as I continue to have a conversation with Replika! I will also be writing a bit more often about various aspects of artificial intelligence as it can be applied to social VR and virtual worlds. Stay tuned!

Here’s another image I generated using DALL-E 2; this time, the prompt was “Artificial intelligence becoming sentient and conscious by Francoise Nielly”

UPDATED! DALL-E 2: Some Results (and Some Thoughts) After Using OpenAI’s Revolutionary, Amazing Artificial Intelligence Tool Every Day for Two Weeks to Create Images

For 50,000 years, artistic expression has been unique to mankind.

Today, this hallmark of humanity is claimed by another.

These images, generated by A.I., offer a glimpse into a future with unfathomable creative possibilities.

What will the next 50,000 years bring?

BINARY DREAMS: How A.I. Sees the Universe

This quote comes from an imaginative 3-minute YouTube video by melodysheep, illustrated with images created using Midjourney, one of many new AI-based systems that can create realistic images and art from a text description in natural language.

Such computer systems have surprised most observers by how rapidly they are evolving and learning over time, being able to take on tasks that were formerly thought to be the exclusive domain of humans. They have sparked curiosity, creativity, and, in some cases, dread among people, along with much frustration at not yet being able to get their hands on these tools! Some are still unavailable to the public (like Google’s Imagen), while others have long waiting lists (e.g. Midjourney, by an independent research lab, and OpenAI’s DALL-E 2).

As of July 1st, I am one of a little over 50,000 people who have been lucky enough to receive invitations to test out one of the leading text-to-image AI art tools, called DALL-E 2. DALL-E 2 is an initiative by OpenAI, an artificial intelligence research company backed by Microsoft and other investors. (Among OpenAI’s earlier offerings is GPT-3, an AI tool which uses deep learning to produce ever more human-like text.)

Over the past two weeks (since I got my invite via email on June 19th, 2022, and set up my account), I have been spending almost every day crafting and submitting text descriptions, and waiting for DALL-E 2 to spit back six result images. Each image in turn can be used as the basis to generate six variations, or if you wish, you can upload an image, erase part of its background, and then use it as a start for your creativity. Some people have uploaded famous works of art from throughout art history, to have DALL-E 2 expand the canvas beyond its original borders, a technique called “uncropping”.
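(OpenAI has since exposed the same three operations, generation, variations, and edits, through its API. For readers who prefer code to the web interface, here is a hedged sketch using the openai Python package’s Image endpoints; the file names and parameter values are placeholders, and the web tool’s defaults, such as returning six images, may differ.)

```python
# Hedged sketch of the three DALL-E 2 operations via OpenAI's Images API
# (openai Python package, legacy 0.x interface). File names are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"

# 1. Generate brand-new images from a text prompt.
generated = openai.Image.create(
    prompt="a thatched cottage by a lake at golden hour, detailed painting",
    n=4,
    size="1024x1024",
)

# 2. Generate variations of an existing image.
variations = openai.Image.create_variation(
    image=open("favourite_result.png", "rb"),
    n=4,
    size="1024x1024",
)

# 3. Edit / "uncrop": the transparent areas of the mask are filled in by DALL-E.
edited = openai.Image.create_edit(
    image=open("expanded_canvas.png", "rb"),
    mask=open("mask_with_transparent_border.png", "rb"),
    prompt="extend the painting beyond its original borders",
    n=2,
    size="1024x1024",
)

print([item["url"] for item in generated["data"]])
```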

Here’s one example of uncropping which somebody posted to the r/dalle2 community on Reddit, using the famous painting The Swing by French painter Jean-Honoré Fragonard. Here’s the original painting, and here’s the uncrop:

Fragonard’s The Swing

See the tiny coloured squares in the bottom-right corner of the image? Those are watermarks generated by DALL-E 2. You might be wondering if such images can be used for commercial purposes (advertising, album covers, etc.). The answer, from DALL-E 2’s detailed Content Policy, is:

As this is an experimental research platform, you may not use generated images for commercial purposes. For example:

• You may not license, sell, trade, or otherwise transact on these image generations in any form, including through related assets such as NFTs.
• You may not serve these image generations to others through a web application or through other means of third-parties initiating a request.

I have noticed that there are some kinds of images which DALL-E 2 seems to excel at. Among them is food photography. Check out these pictures, based on the following text prompt: “Food photography of delicious freshly fried chicken tenders with a side of honey mustard dipping sauce topped with green onion” (click on each thumbnail below to see it in greater detail).

You would be extremely hard pressed to find any difference between these AI-generated pictures, and actual photographs taken by professional food photographers! As one person commented on Reddit, “Incredible. It really got this one. So many people are going to lose their jobs.”

You can also specify the brand of camera, shutter speed, style of photography, etc. in your text prompts. There are still many problem areas, but people have been able to create some amazing “photographs” and “movie stills”, as the following examples illustrate (text prompts are in the caption of each image):

Prompt: “A still of a woman with heavily made-up eyes and lips, holding a martini glass. Fuji Pro 400H.” (Note how the eyes don’t quite match? DALL-E still has trouble matching eyes in portraits.)
Prompt: “A woman’s face in profile, looking pensive. The lighting is soft and flattering, and the background is a warm, golden colour. Cinestill 800t.”
Prompt: “A man and a woman embracing each other passionately, their faces inches apart, lit by flickering candles. Cinestill 800t.”

Another popular topic is bizarre juxtapositions: entering text prompts of unlikely subjects combined with various art styles, for example, Star Wars stormtrooper recruitment in the style of Soviet-era propaganda posters:

Prompt: “Stormtrooper recruitment, soviet propaganda poster”

Or, perhaps, some advertising for McDonald’s new Minecraft Hamburger?

As you may have noticed, one area where DALL-E 2 fails (often quite humorously!) is in text captions. It’s smart enough to know that there needs to be some text in an advertisement along with the image, but it’s not bright enough to get the spelling right! (It’s become a bit of an inside joke within the DALL-E 2 subReddit.)


So, how have I been using DALL-E 2 over the past couple of weeks?

Well, I generated the following image using the text prompt: “Jesus at the Sermon on the Mount award-winning portrait by Annie Leibovitz dramatic lighting.” (The faces were messed up, so I used DALL-E 2’s built-in erase function to erase both faces and regenerated variations of the original image until I found one I quite liked.)

Prompt: “Jesus at the Sermon on the Mount, award-winning portrait by Annie Liebowitz, dramatic lighting”

Inspired by another member of the r/dalle2 subReddit, I tried the following prompt:

Prompt: “a human interfacing with the universe colorful digital art”

Then, I tried my hand at several variations of the wording: “Human female face in a colorful galactic nebula detailed dreamlike digital art”, to get the following series (please click on each one to see it in a larger size):

(Adding the words “digital art” and “colorful” really makes a difference in the results!)

I also tried my hand at creating some improbable art! Here’s Jesus scrolling through Twitter on his iPhone, by Gustave Doré:

And the same subject as a print by Albrecht Dürer (interestingly, using the word “woodprint” gave me monochrome results, while just “print” threw up a few coloured prints!):

(I love how cranky Jesus is in the last image! He’s definitely gotten into an argument with a Twitter troll!!!)

Finally, I did the same subject as a stained-glass window:

I absolutely love how DALL-E 2 even tried to include some garbled text messages in a few of the resulting images it spit back at me!

Yesterday, I wanted to see how well DALL-E 2 could mimic an existing artist’s style, so I selected renowned French knife-painter Françoise Nielly (website; some examples of her work), who has a very distinctive, vibrant look to her oeuvre:

Here are some of the better results I was able to get after trying various prompts over the course of a couple of hours (interestingly, most of these portraits are of African faces, although I did not specify that in my text prompts!). Again, please click on each thumbnail to see the full image.

And, as I have with previous AI apps like WOMBO and Reface, I have also been feeding Second Life screen captures into DALL-E 2. Here’s an example of an uncrop of one of my favourite SL profile pictures, of my main male avatar Heath Homewood (note that among many of the beta test restrictions imposed by OpenAI, you cannot upload photographs of celebrities or other human faces, but the stylized look of SL mesh avatars doesn’t trigger the system!):

Here are five results I got back, using the text prompt: “Man standing in a library holding a book very detailed stunning award-winning digital art trending on artstation” (click on each to see it in full size):

I had an image of Vanity Fair dressed in an Alice in Wonderland Queen of Hearts costume, where I erased the background of the screen capture, and tried out several different prompts, with some surprising results (I certainly wasn’t expecting a playing card!):

Here are some variations of the SL selfie of one of my alts, where I once again erased the background and expanded the canvas size using Photopea (I asked DALL-E 2 to fill in all the blank white space in this image):

Here are some results of variations of the following text prompt: “fairytale lake forest and mountains landscape by Albert Bierstadt and Ivan Shishkin and Henri Mauperché” (notice again the text failures, and also in some cases how DALL-E 2 “enhanced” the model’s original flower headdress!). Again, click through to see the full-size images.

So, as you can see, I am having fun! But I have also been pondering what this creative explosion within AI means for society as a whole.

I think that we are going to begin to see an accelerating wave, as these AI tools and apps improve, and start to encroach upon existing creative industries. The days of companies meticulously compiling and licensing stock photography are surely numbered, in an age when you can create photorealistic depictions of just about anything you can imagine. And I suspect that the food photography industry is in for an unexpected shake-up!

Many creative types have suggested that tools like DALL-E 2 will become a useful way to mock-up design ideas, saving hours of work at the easel, behind the camera, or sitting in front of PhotoShop. But others fear that many artists and photographers will someday be out of a job, and sooner than they anticipate, in the face of this AI onslaught. For example, why pay an artist to design wallpaper when you can create any sort of pleasing, repeating design yourself, matching specific colours on demand? And keep rerunning the prompts until you get a result you like, in a fraction of the time it would take a human artist to churn them out?

I don’t know how long the closed beta test of DALL-E 2 will run, or when and how OpenAI will start charging for the service; I suspect I will be writing more blogposts about this over time.

UPDATE July 5th, 2022: Laura Lane writes about DALL-E 2 in The New Yorker magazine, in an article titled DALL-E, Make Me Another Picasso, Please.

UPDATE July 10th, 2022: Photographer Thomas Voland has written up a lengthy blogpost about DALL-E 2, including over 100 generated images. The original is in Polish, but here is an English version via Google Translate. Well worth the read!

Thomas Voland’s article is well worth the read; it is illustrated with over 100 images he generated using DALL-E 2.