A.I.-Generated Art: Comparing and Contrasting DALL-E 2 and Midjourney as Both Tools Move to an Open Beta

UPDATE Aug. 12th, 2022: I have just joined the beta test of Stable Diffusion, another AI art-generation program! For more information, please read Comparing and Contrasting Three Artificial Intelligence Text-to-Art Tools: Stable Diffusion, Midjourney, and DALL-E 2 (Plus a Tantalizing Preview of AI Text-to-Video Editing!)

You might remember that I was one of the lucky few who received an invitation to be part of the closed beta test (or “research preview”, as they called it) of DALL-E 2, a new artificial intelligence tool from a company called OpenAI, which can create art from a natural-language text prompt. (I blogged about it, sharing some of the images I created, here and here.)

Here are a few more pictures I generated using DALL-E 2 since then (along with the prompt text in the captions):

DALL-E 2 prompt: “feeling despair over a uncertain future digital art”
DALL-E 2 prompt: “feeling anxiety over an uncertain future digital art”
DALL-E 2 prompt: “feeling anxiety over a precarious future” (sensing a theme here?)
DALL-E 2 prompt: “award-winning detailed vibrant bright colorful knife painting by Françoise Nielly” (Note that this used an inpainting technique; I expanded the canvas borders and asked DALL-E 2 to fill them in to match the Nielly knife painting of the man’s face in the middle)

Meanwhile, other DALL-E 2 users have generated much better results than I could, by skillful use of the text prompts. Here are just a few examples from the r/dalle2 subReddit community of AI-generated images which impressed and sometimes even stunned me, with a direct link to the posts in the caption underneath each picture:

DALL-E 2 prompt: “an image of the Cosmic Mind, digital art”
DALL-E 2 prompt: “cyborg clown, CGSociety award winning render”
DALL-E 2 prompt: “a young girl stares directly at the camera, her blue hijab framing her face. The background is a blur of colours, possibly a market stall. The photo is taken from a low angle, making the girl appear vulnerable and child-like. Kodak Portra 400”
DALL-E 2 prompt: “a close-up photograph of a man with brown hair, ice-blue eyes, red and brown stubble Balbo beard, his face is narrow, with defined cheekbones, he has a scar on the left side of his lips, running down from his top to the bottom lip, he wears a dark-blue hoodie, the background is a blurred out city-scape”

As you can see from the last two images, you can get very detailed and technical in your text prompts, even specifying the model of camera used! (However, also note that in the fourth picture, DALL-E 2 ignored some specific details in the prompt.)

Yesterday, OpenAI sent me an email to announce that DALL-E 2 was moving into open beta:

Our goal is to invite 1 million people over the coming weeks. Here’s relevant info about the beta:

Every DALL·E user will receive 50 free credits during their first month of use, and 15 free credits every subsequent month. You can buy additional credits in 115-generation increments for $15.

You’ll continue to use one credit for one DALL·E prompt generation — returning four images — or an edit or variation prompt, which returns three images.

We welcome feedback, and plan to explore other pricing options that will align with users’ creative processes as we learn more.

As thanks for your support during the research preview we’ve added an additional 100 credits to your account.
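For the curious, the paid pricing quoted above works out to pennies per image. Here is a quick back-of-the-envelope calculation (my own, not OpenAI’s):

```python
# DALL-E 2 credit pricing, per the open-beta announcement quoted above.
PACK_CREDITS = 115         # credits per purchased pack
PACK_PRICE_USD = 15.0      # price per pack
IMAGES_PER_PROMPT = 4      # one credit returns four images

cost_per_prompt = PACK_PRICE_USD / PACK_CREDITS        # about $0.13 per prompt
cost_per_image = cost_per_prompt / IMAGES_PER_PROMPT   # about $0.03 per image
print(f"${cost_per_prompt:.3f} per prompt, ${cost_per_image:.3f} per image")
```

Of course, that assumes every generation is a keeper, which (as you will see below) is far from the case.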

Before DALL-E 2 announced its new credits system, I had spent most of one day’s free prompts during the research preview trying to generate some repeating, seamless textures to apply to full-permissions mesh clothing I had purchased from the Second Life Marketplace. Most of my attempts were failures: pretty designs, but not 100% seamless. However, I did manage to create a couple of floral patterns that worked:

So, instead of purchasing texture packs from within and outside of Second Life, I could, theoretically, generate unique textile patterns, apply them to mesh garments, and sell them, because according to the DALL-E 2 beta announcement I received:

Starting today, you get full rights to commercialize the images you create with DALL·E, so long as you follow our content policy and terms. These rights include rights to reprint, sell, and merchandise the images.

You get these rights regardless of whether you used a free or paid credit to generate images, and this includes images you’ve created before today during the research preview.

Will I? Probably not, because it took me somewhere between 20 and 30 text prompts to generate only two useful seamless patterns, so it’s just not cost-effective. However, once AI art tools like DALL-E 2 learn how to generate seamless textures, they will probably have some sort of impact on the texture industry, both within and outside of Second Life! (I can certainly see some enterprising soul setting up a store and selling AI-generated art in a virtual world; SL is already full of galleries with human-generated art.)
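Incidentally, one way to sanity-check whether a generated pattern actually tiles, without uploading it and applying it to a mesh first, is to compare the texture’s opposite edges. This is my own rough heuristic, sketched in plain Python on grayscale pixel values, not anything either tool provides:

```python
import math

def is_seamless(pixels, tol=8.0):
    """Heuristic tiling check: the left edge should roughly match the
    right edge, and the top edge the bottom edge.
    pixels: list of rows, each a list of 0-255 intensity values."""
    h, w = len(pixels), len(pixels[0])
    # mean absolute difference between the left and right columns
    horiz = sum(abs(row[0] - row[-1]) for row in pixels) / h
    # and between the top and bottom rows
    vert = sum(abs(a - b) for a, b in zip(pixels[0], pixels[-1])) / w
    return horiz <= tol and vert <= tol

# A pattern built from one full cosine cycle wraps around, so it tiles...
smooth = [[128 + int(100 * math.cos(2 * math.pi * y / 32))] * 16
          for y in range(32)]
# ...while a plain top-to-bottom ramp does not.
ramp = [[y * 8] * 16 for y in range(32)]
print(is_seamless(smooth), is_seamless(ramp))  # -> True False
```

It would have saved me a few of those 20 to 30 wasted prompts!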


Another cutting-edge AI art-generation program, called Midjourney (WARNING: ASCII art website!), has also announced an open beta. I had signed up to join the waiting list for an invitation several weeks ago, and when I checked my email, lo and behold, there it was!

Hi everyone,

We’re excited to have you as an early tester in the Midjourney Beta!

To expand the community sustainably, we’re giving everyone a limited trial (around 25 queries with the system), and then several options to buy a full membership.

Full memberships include; unlimited generations (or limited w a cheap tier), generous commercial terms and beta invites to give to friends.

Although both DALL-E 2 and Midjourney use human text prompts to generate art, they operate differently. While DALL-E 2 uses a website, Midjourney runs on a Discord server, where you enter your prompt as a command. This generates four rough thumbnail images, which you can then choose to upscale to a full-size image, or use as the basis for variations.

I took some screen captures of the process, so you can see how it works. I typed in “/imagine a magnificent sailing ship on a stormy sea”, and got this back:

The U buttons will upscale one of the four thumbnails, adding more details, while the V buttons generate variations, using one of the four thumbnails as a starting point. I chose thumbnail four and generated four variations of that picture:

Then, I went back and picked one of my original four images to upscale. You can actually watch as Midjourney slowly adds details to your image; it’s fascinating!

I then clicked on the Upscale to Max button, to receive the following image:

My first attempt at generating an image using Midjourney

Now, I am not exactly satisfied with this first attempt (that sailing ship looks rather spidery to me), but as with DALL-E 2, you get much better results with more specific, detailed text prompts. Here are a few examples I took from the Midjourney subReddit community (with links back to the posts in the captions):

Midjourney prompt: “cyberpunk soldier piloting a warship into battle, the atmosphere is like war, fog, artstation, photorealistic”
Midjourney prompt: “Dress made with flowers” (click to see a second one on Reddit)
Midjourney prompt: “a tiny stream of water flows through the forest floor, octane render, light reflection, extreme closeup, highly detailed, 4K”

So, as you can see, you can get some pretty spectacular results, with incredible levels of detail! And unlike DALL-E 2, you can set the aspect ratio of your pictures (as was done in the fourth image generated). You do this with a special “--ar” parameter in your text prompt to Midjourney, e.g. “--ar 16:9” (here’s the online documentation explaining the various parameters you can use).
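If you are wondering what an aspect-ratio string like “16:9” actually means for your picture, it just fixes the ratio of width to height. Here is a hypothetical helper of my own to illustrate (Midjourney picks its own output resolutions, so treat this as a sketch only):

```python
def dimensions_for_ar(ar, base=1024):
    """Turn an aspect-ratio string like "16:9" into (width, height),
    keeping the longer side at `base` pixels. Illustrative only."""
    w, h = (int(n) for n in ar.split(":"))
    if w >= h:
        return base, round(base * h / w)   # landscape: pin the width
    return round(base * w / h), base       # portrait: pin the height

print(dimensions_for_ar("16:9"))  # -> (1024, 576)
print(dimensions_for_ar("9:16"))  # -> (576, 1024)
```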

And one area in which Midjourney appears to excel is horror:

Midjourney prompt: “a pained, tormented mind visualized as a spiraling path into the void”
Midjourney prompt: “a beautiful painting of Escape from tarkov in machinarium style, insanely detailed and intricate, golden ratio, hypermaximalist, elegant, ornate, luxury, elite, horror, creepy, ominous, haunting, matte painting, cinematic, cgsociety, James jean, Brian froud, ross tran”

You can see many more examples of depictions of horror in the postings to the Midjourney SubReddit; some are much creepier than these!


So, in comparing the two tools, I think that Midjourney offers more parameters to users (e.g. setting an aspect ratio), which DALL-E 2 currently lacks. Midjourney also seems to produce much more detailed images than DALL-E 2 does, whereas DALL-E 2 is often astoundingly good at a much wider variety of tasks. For example, how about some angry bison logos for your football team?

I think these images are all very good! (Note that DALL-E 2 still struggles with rendering text! Midjourney does too, but it gets the text correct more often than DALL-E 2 does at present, though that might change as both systems evolve.)


So, the good news is that both DALL-E 2 and Midjourney are now in open beta, which means that more people (artists and non-artists alike) will get an opportunity to try them out. The bad news is that both still have long waiting lists, and with the move to beta, both DALL-E 2 and Midjourney have put limits in place as to how many free images you can generate.

Midjourney gives you a very limited trial period (about 25 prompts), and then urges you to pay for a subscription, with two options:

Basic membership gives you around 200 images per month for US$10 monthly; standard membership gives you unlimited use of Midjourney for US$30 a month.

For now, OpenAI has decided to set DALL-E 2’s pricing based on a credit system (similar to their GPT-3 AI text-generation tool), as described in the first quote in this blogpost. There’s no option for unlimited use of DALL-E 2 at any price, just options for buying credits in different amounts (and there are no volume discounts for purchasing larger amounts of credits at one time, either). The most you can buy at once is 5,750 credits, which is US$750. So, yes, it can get quite expensive! (As far as I am aware, your unused credits carry over from one month to the next.)
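To put the two pricing models side by side, here is a rough calculation of my own, assuming every purchased credit actually gets used and every credit yields four images (real trial-and-error usage will be worse):

```python
import math

def dalle2_cost(images):
    """US$15 buys a 115-credit pack; one credit yields four images.
    The free monthly credits are ignored for simplicity."""
    credits = math.ceil(images / 4)
    return math.ceil(credits / 115) * 15.0

# At 200 images a month, Midjourney's basic tier is a flat US$10,
# while DALL-E 2 needs 50 credits, i.e. one $15 pack.
print(dalle2_cost(200))  # -> 15.0
```

And since Midjourney’s US$30 standard tier is unlimited, DALL-E 2 only gets more expensive from there as your volume grows.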

There’s quite a bit of discussion about OpenAI’s DALL-E 2 pricing model in this thread in the r/dalle2 subReddit; many people are very unhappy with it, particularly since it can take a lot of trial and error with the DALL-E 2 text prompts to generate a desired result. One person said:

In my experience, using Dall-E 2 to generate concept arts for our next project, it takes me between 10 to 20 attempts to get something close to what I want (and I never got exactly what I was asking for)…

Dall-E 2, at this point, is not a professional tool. It’s not viable as one, unless you produce exactly the type of content the AI can produce instantly just the way you want it.

Dall-E 2, at this point, IS A TOY! And that’s OpenAI’s mistake right now. You can’t sell a toy the way you sell a professional service! I’m ready to pay for it because I’m experimenting with it. I’m having fun with it and, when it works, it provides me with images I can also use for professional project. However, I wont EVER spend hundreds of dollars on this just for fun, and I certainly wont pay that amount for it as a tool until it can provide me with better and more consistent results!

OpenAI is going after the WRONG TARGET! OpenAI should be seeling it at a much lower price for everyday people and enthusiasts who want to experiment with it because this is literally the only people who can be 100% satisfied with it at this point and these people wont pay hundreds of dollars per month to keep playing when there are other shiny toys out there, cheaper and more open, existing or about to.

Several commenters said that they will be moving from DALL-E 2 to Midjourney because of its more favourable pricing model, but of course it’s still early days. Also, there are any number of open-source AI art-generation projects in the works, and competition will likely mean more features (and better results!) at lower cost. One thing is certain: we can expect these tools to improve rapidly over time.

The future looks to be both exciting and scary! Exciting in the ability to generate art in a new way, something which up until now has been restricted to experienced artists and photographers, and scary in that we can no longer trust our eyes to tell whether a photograph is real or was generated by artificial intelligence! Currently, both systems have rules in place to prevent the creation of deepfake images, but in future, things could get Black Mirror weird, and the implications for society could be substantial. (Perhaps now you will understand the first three DALL-E 2 text prompts I used, at the top of this blogpost!)

P.S. Fun fact: the founding CEO of Linden Lab (the makers of Second Life), Philip Rosedale, is one of the advisors to Midjourney, according to their website. Philip gets around! 😉

UPDATE July 22nd, 2022: Of course, the images generated by DALL-E 2 and Midjourney can then be used in other AI tools, such as WOMBO and Reface (please click the links to see all the blogposts I have written about these mobile apps).

Late yesterday, a member of the r/dalle2 community posted the following 18-second video, created by generating a photorealistic portrait of a woman using DALL-E 2, then submitting it to a tool similar to WOMBO and Reface called Deep Nostalgia:

What you see here is an AI-generated image, “animated” using another deep learning tool. This is a tantalizing glimpse into the future, where artificial intelligence can not only create still images, but eventually, video!

UPDATED! EDITORIAL: Minecraft Bans NFT Servers and In-Game Items, Catching NFT Worlds Off Guard

In the wake of the ongoing cryptocrash, and the falling dominoes of crypto firms, I have been spending a bit of time lately learning more about the blockchain space, hanging out in various Reddit communities where such matters are discussed. As I commented on one post:

Crypto culture is kinda fascinating in a train wreck kind of way.

Yesterday, Mojang Studios (the makers of the phenomenally successful voxel-based building game/metaverse Minecraft, which is owned by Microsoft), posted the following announcement on their official blog:

Hello everyone! Recently, we’ve received some feedback from members of the community asking for clarification and transparency regarding Mojang Studios and Minecraft’s position on NFTs (non-fungible tokens) and blockchain. 

While we are in the process of updating our Minecraft Usage Guidelines to offer more precise guidance on new technologies, we wanted to take the opportunity to share our view that integrations of NFTs with Minecraft are generally not something we will support or allow.

This news appears to have come as a most unwelcome surprise to the blockchain gaming company NFT Worlds, which posted the following message to their Discord announcements channel* and to Twitter:

First and foremost – this out-of-nowhere announcement by Microsoft/Minecraft to outright ban all possible uses of NFTs & blockchain tech within Minecraft feels like a step backwards in innovation, and may even have painful downstream effects for them in the long run—we’ll see how that plays out.

Regardless, we’re working through this internally and have all hands on deck brainstorming solutions around the Minecraft EULA changes, as well as outright pivots for the NFT Worlds ecosystem and team if necessary.

Our order of operations in figuring this out is as follows.

We’re working to get in contact with the right decision makers within the Minecraft policy enforcement team as well as the general Minecraft studio to understand the details of this policy change, what the true internal motivators may have been, and how if at all we can find an alternative outcome that’s beneficial to the Minecraft player base as well as Microsoft’s vested interest in Blockchain / NFT technology and GameFi.

In the event after the above conversations we come to the conclusion we can continue to operate, the show goes on as it’s been.

However, if we’re truly banned because of the risk of C&D/DMCA/Lawsuit by Minecraft/Microsoft from innovating on top of the Minecraft ecosystem, we move forward, we pivot.

The first option from here is we transition into our own Minecraft-like game engine & games platform. There’s been dozens of minecraft-like game engines developed over the last decade by various 3rd party teams – These were people wanting to innovate beyond the idea of Minecraft and add their own spin on it. This option means acquiring one of these engines & development teams to join us, and developing on top of it to bring the same vision for NFT Worlds to fruition but with Minecraft & Microsoft entirely out of the picture with no ability to stop us.

The second option is a pivot to a GameFi platform as a service for any game developer or games studio to effortlessly implement the same proven, patent pending, friction removing tech for GameFi we’ve developed over the last year and have intentionally generalized the last 9 months in the event we decided to or needed to branch out into a GameFi platform. All the systems that we’ve already built would be extremely quick and easy for us to pivot to an implementation for anyone to use. The other interesting piece here is as soon as the Minecraft news was announced, we’ve had multiple other metaverse / gamefi projects immediately reaching out to us wanting to use this tech we’ve already proven, strongly kickstarting possible adoption of such a platform. If we go this route, existing NFT Worlds, $WRLD and Genesis Avatar holders would have an equivalent stake via token and/or NFT(s) related to this platform based on their NFT Worlds related holdings once launched.

Like always, we’d love to hear our community’s opinion on everything presented above.

Bottom line, we’re not leaving. We have the community, we have the war chest, and we know we can build.

Here’s more details on that “war chest” they’re talking about (this article is dated February 24th, 2022, well before the crypto crash):

Clearly, people are into it — NFT Worlds has already generated $90 million in trading [on Opensea], even though it gave the 10,000 worlds away for free and only makes money from “royalties and secondary sales.” Worlds are currently going for a minimum of $45,000.

Yes, that’s right—NFT Worlds created 10,000 “fully decentralized, fully customizable, community-driven, play-to-earn” Minecraft worlds, which people have been buying and selling on the NFT marketplace Opensea. Gee, I wonder what those US$45,000-apiece worlds are worth right now, because without Minecraft’s cooperation, they’re pretty much worthless.

Needless to say, over on the blockchain/crypto/NFT snark subReddit community called r/Buttcoin†, people are having an absolute field day discussing this! Honestly, you need to go over there and read through the discussion, it’s quite entertaining. One commenter, after reading NFT Worlds’ announcement above, summarized it hilariously:

TL;DR: We want Microsoft to know they are wrong, and we are innovators. If they don’t allow the project, we will make our own, better Minecraft with blackjack and hookers.

Another Redditor responded:

Wow, imagine running a business totally dependent on someone else, yet being caught unaware on major business decision of this entity on whom you completely dependent upon.

Sounds like just the way people in crypto do business…with zero awareness of whats going on around them.

Seriously…how on earth do you build a company whose business model goes out the window with a single decision by the corporation who RUNS THE PLATFORM THEY’RE DEPENDENT UPON?!?? This is a prime example of a harebrained, half-baked cryptoscheme that somebody hatched up and was able to earn a tidy profit from, selling highly volatile, speculative blockchain-based assets to ignorant customers, who perhaps thought that they would be able to sell them for a profit to the next fool who comes along. It’s maddening.

Minecraft goes on to explain its decision:

In our Minecraft Usage Guidelines, we outline how a server owner can charge for access, and that all players should have access to the same functionality. We have these rules to ensure that Minecraft remains a community where everyone has access to the same content. NFTs, however, can create models of scarcity and exclusion that conflict with our Guidelines and the spirit of Minecraft.

To ensure that Minecraft players have a safe and inclusive experience, blockchain technologies are not permitted to be integrated inside our client and server applications, nor may Minecraft in-game content such as worlds, skins, persona items, or other mods, be utilized by blockchain technology to create a scarce digital asset. Our reasons follow.

Some companies have recently launched NFT implementations that are associated with Minecraft world files and skin packs. Other examples of how NFTs and blockchain could be utilized with Minecraft include creating Minecraft collectible NFTs, allowing players to earn NFTs through activities performed on a server, or earning Minecraft NFT rewards for activities outside the game. 

Each of these uses of NFTs and other blockchain technologies creates digital ownership based on scarcity and exclusion, which does not align with Minecraft values of creative inclusion and playing together. NFTs are not inclusive of all our community and create a scenario of the haves and the have-nots. The speculative pricing and investment mentality around NFTs takes the focus away from playing the game and encourages profiteering, which we think is inconsistent with the long-term joy and success of our players.

Amen. 100%! CRYPTO ADDS NOTHING TO MINECRAFT! (I can’t believe I am cheering for Microsoft here…)

It is honestly refreshing to see yet another major corporation draw a line in the sand, and explain so clearly why they are drawing that line! Minecraft is a game, and games are supposed to be fun, people. (By the way, it would appear that Axie Infinity and all the other “play-to-earn” NFT games are bleeding users during this cryptocrash, as they can no longer earn enough to make it profitable. And it’s not just play-to-earn, it’s all the X-to-earn NFT schemes, like the NFT-based running app StepN, whose payouts to users have cratered in just two months.)

As I have editorialized before, a harsh, long, bitter crypto winter is going to shake out a lot of sketchy companies with poorly-thought-out plans, like NFT Worlds.

I suspect that NFT Worlds is going to go through a rough patch…

*To see this message, you will have to join the NFT Worlds Discord, which requires you to jump through a few hoops to verify that you’re a human being. I joined just to get a copy of the announcement, but I might stick around as a lurker, just to see how the company and its users attempt to spin this disaster 😉

†Seriously, if you haven’t checked out r/Buttcoin yet, please do so, along with Molly White’s excellent website, Web3 Is Going Just Great (the title is meant to be sarcastic), which outlines the latest crises, hoaxes, scams, and fiascoes in the blockchain space, keeping a running total of money lost to date in a ticker in the bottom right-hand corner.

UPDATE July 27th, 2022: Ars Technica has an update on the saga here. Apparently, the NFT Worlds token’s value has plummeted over 60 percent in a week following Mojang’s announcement.