UPDATED! A New Comparison Chart of 15 Social VR Platforms (Last Update: April 9th, 2024)

Image by Pexels from Pixabay

I have—finally!—had an opportunity to update my Google spreadsheet of social VR platforms, removing those which are no longer around, and adding a few new ones since the last update, which was several years ago. Please note that this spreadsheet is focused solely on metaverse platforms which support virtual reality, i.e. social VR. If a platform is flatscreen access only (e.g. Second Life), it is not included in this spreadsheet.

Please also note that I will no longer be writing about any metaverse platform which incorporates blockchain, cryptocurrencies, and/or Non-Fungible Tokens (NFTs).

Comparison Chart of 15 Social VR Platforms © Ryan Schultz, Published to RyanSchultz.com (Google Sheets version)

Please note: any changes to this spreadsheet are made in real time, as information comes in; this link will automatically update every five minutes.


Comparison Chart of 15 Social VR Platforms © Ryan Schultz, Published to RyanSchultz.com, Date of Last Update 9 April 2024 (PDF version)


Please note: this PDF has to be generated by hand, and I will only do it once per day, near the end of the day, if there have been any changes made that day. Therefore, you should consider the Google spreadsheet link to be the most up-to-date content!

UPDATE April 17th, 2024: Upon request, I have slightly updated the Google Sheets spreadsheet so that the column header text now always displays at the top of your screen, even when you scroll down! This means you don’t have to keep scrolling up and down to figure out what a spreadsheet cell value refers to.

Thank you to Mat of the XR Live Discord server for the suggestion, and the explanation of how to set it up (in Google Sheets, it’s the View > Freeze > 1 row menu option)! I learned something new today. 🙂

UPDATED TO VERSION 1.3! Second Life Steals, Deals, and Freebies: A New Comparison Chart of Seven Options for Free or Inexpensive Female Mesh Bodies (Including Senra Jamie)

Now that Linden Lab has launched the beta version of its Senra mesh starter avatars, I decided to take a stab at creating a chart comparing and contrasting six (now seven; see the update below) options for free or inexpensive (L$250 or less) female mesh bodies. (I will probably follow up with a similar chart for free/inexpensive male mesh bodies, female mesh heads, and male mesh heads.)

The seven mesh bodies I have chosen for this chart are:

  • Senra Jamie, by Linden Lab (UPDATE March 16th, 2024: These are now out of beta test, and finalized to version 1.0)
  • Erika Zero X, by Kalhene
  • Atenea, by LucyBody
  • Classic Meshbody (often referred to as TMP)
  • eBody Classic (free version)
  • eBody Curvy (free version)
  • UPDATE Aug. 4th, 2023: After some hemming and hawing, I have decided to include the open-source Ruth 2.0 mesh bodies in this spreadsheet. You can find a list of vendors for Ruth 2.0-based mesh bodies here (scroll down to the Ruth2 section). While clothing specifically designed for Ruth 2.0 bodies is limited, with a good set of BoM/system alphas, some Maitreya Lara clothing and standard-size clothing does fit.

UPDATE March 15th, 2024: You used to have to pay to join the Erika Mesh Body group to pick up the free group gift of the Erika Zero X mesh body, but you can now join the group for free! I have updated my comparison chart to version 1.3 with this updated information. This is a lovely female mesh body which responds very well to the body sliders, allowing you a wide variety of body shapes, from thin and slim to “thicc” and curvy!

For each mesh body, I look at the following:

  • Price
  • Bakes on Mesh support
  • Bento support
  • Feet options and compatibility
  • Mesh clothing compatibility (please note that all Bakes on Mesh bodies support BoM/system layer clothing; here are some places where you can find those)
  • Mesh clothing availability (obviously a subjective estimate!)

You can view (but not edit) version 1.3 of my comparison chart here on Google Drive. I am open to suggestions for improving this chart, and I expect to keep it (somewhat) updated as the situation evolves over time. If you have any corrections, edits, or suggestions, please leave a comment, thanks!

Here’s a snapshot of version 1.3 of the comparison chart, which you can view and download in full size over on Flickr if you, like me, find the fine print a little too small:

Comparison Chart of Free and Inexpensive Female Mesh Bodies 16 March 2024

Please note that I have deliberately excluded some mesh bodies, for example, the free Altamura bodies you can pick up at various locations (because you cannot change the skin, and you cannot use Bakes on Mesh with them). I have also left out those bodies which have poor or even non-existent third-party designer support. An example of this is the Ultra Vixen mesh body, which is now free only to avatars under 30 days old, and for which, as far as I am aware, the only clothing that fits is made and sold by the body’s creator.

Looking forward to hearing your comments and suggestions!

Comparing and Contrasting Three Artificial Intelligence Text-to-Art Tools: Stable Diffusion, Midjourney, and DALL-E 2 (Plus a Tantalizing Preview of AI Text-to-Video Editing!)

HOUSEKEEPING NOTE: Yes, I know, I know—I’m off on yet another tangent on this blog! Please know that I will continue to post “news and views on social VR, virtual worlds, and the metaverse” (as the tagline of the RyanSchultz.com blog states) in the coming months! However, over the next few weeks, I will be focusing a bit on the exciting new world of AI-generated art. Patience! 😉

Artificial Intelligence (AI) tools which can create art from a natural-language text prompt are evolving at such a fast pace that it is making me a bit dizzy. Two years ago, if somebody had told me that you would be able to generate a convincing photograph or a detailed painting from a text description alone, I would have scoffed! Many felt that the realm of the artist or photographer would be among the last holdouts where a human being was necessary to produce good work. And yet, here we are, in mid-2022, with any number of public and private AI initiatives which can be used by both amateurs and professionals to generate stunning art!

In a recent interview with David Holz (co-founder of augmented reality hardware firm Magic Leap, who went on to found Midjourney), conducted by The Register‘s Thomas Claburn, there’s a brief explanation of how this burst of research and development activity got started:

The ability to create high-quality images from AI models using text input became a popular activity last year following the release of OpenAI’s CLIP (Contrastive Language–Image Pre-training), which was designed to evaluate how well generated images align with text descriptions. After its release, artist Ryan Murdock…found the process could be reversed – by providing text input, you could get image output with the help of other AI models.

After that, the generative art community embarked on a period of feverish exploration, publishing Python code to create images using a variety of models and techniques.

“Sometime last year, we saw that there were certain areas of AI that were progressing in really interesting ways,” Holz explained in an interview with The Register. “One of them was AI’s ability to understand language.”

Holz pointed to developments like transformers, a deep learning model that informs CLIP, and diffusion models, an alternative to GANs [models using Generative Adversarial Networks]. “The one that really struck my eye personally was the CLIP-guided diffusion,” he said, developed by Katherine Crowson…
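If you want a concrete sense of what “evaluating how well generated images align with text descriptions” means in practice, here is a minimal sketch using the open-sourced CLIP model. (The clip package from OpenAI’s GitHub repository, PyTorch, and the image filename are my assumptions here; this is an illustration, not anyone’s production code.)

```python
# A minimal sketch of using CLIP to score image-text alignment, assuming
# PyTorch and the clip package (github.com/openai/CLIP) are installed.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Encode one image and several candidate captions into the same embedding space.
image = preprocess(Image.open("cottage.png")).unsqueeze(0).to(device)  # hypothetical file
texts = clip.tokenize([
    "a thatched cottage by a lake",
    "a city street at night",
    "a portrait of a samurai",
]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, texts)
    probs = logits_per_image.softmax(dim=-1)

# A higher probability means better alignment between the image and that caption.
for caption, p in zip(["cottage", "city street", "samurai"], probs[0].tolist()):
    print(f"{caption}: {p:.3f}")
```

“Reversing” this process, as Ryan Murdock did, amounts to using another generative model to produce images, and then nudging them until this alignment score is as high as possible.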

If you need a (relatively) easy-to-understand explainer on how this new diffusion model works, well then, YouTube comes to your rescue with this video with 4 explanations at various levels of difficulty!
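And if you would rather see the core idea in code than in a video, here is a toy sketch (my own simplification, using only numpy) of the forward “noising” process that diffusion models learn to reverse:

```python
# Toy illustration of the forward (noising) process at the heart of diffusion
# models, using numpy only. Real systems train a neural network to predict
# (and remove) the noise, running this process in reverse to generate images.
import numpy as np

rng = np.random.default_rng(0)

T = 1000                              # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)    # noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)       # cumulative "how much signal survives" factor

def noise_image(x0: np.ndarray, t: int) -> np.ndarray:
    """Jump straight to step t of the forward process:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

x0 = rng.uniform(-1, 1, size=(64, 64))  # stand-in for a 64x64 grayscale image
for t in (0, 250, 500, 999):
    xt = noise_image(x0, t)
    # Correlation with the original image falls toward zero as t grows.
    print(t, round(float(np.corrcoef(x0.ravel(), xt.ravel())[0, 1]), 3))
```

The magic, of course, is in training a neural network to run this process backwards, turning pure noise into an image, with CLIP-style guidance steering that reversal toward a text prompt.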


Before we get started, a few updates since my last blogpost on A.I.-generated art: After using up my free Midjourney credits, I decided to purchase a US$10-a-month subscription to continue to play around with it. This is enough credit to generate approximately 200 images per month. Also, as a thank you for being among the early beta testers of DALL-E 2, the AI art-generation tool by OpenAI, they have awarded me 100 free credits to use. You can buy additional credits in 115-generation increments for US$15, but given the hit-or-miss nature of the results returned, this means that DALL-E 2 is among the most expensive of the artificial intelligence art generators. It will be interesting to see if and how OpenAI will adjust their pricing as the newer competitors start to nip at their heels in this race!
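For the curious, here is the back-of-the-envelope arithmetic behind that “most expensive” claim, using only the prices quoted above (and my rough estimate of 200 Midjourney images per month):

```python
# Rough per-image cost comparison, using only the numbers quoted above.
dalle_cost = 15.00 / 115       # US$15 buys 115 generations -> ~US$0.13 each
midjourney_cost = 10.00 / 200  # US$10/month buys ~200 images -> ~US$0.05 each

print(f"DALL-E 2:   ~US${dalle_cost:.2f} per generation")
print(f"Midjourney: ~US${midjourney_cost:.2f} per image")
```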

And I can hardly believe my good fortune, because I have been accepted into the relatively small beta test group for a third AI text-to-art generation program! This new one is called Stable Diffusion, by Stability AI. Please note that if you were to try to get into the beta now, it’s probably too late; they have already announced that they have all the testers they need. I submitted my name 2-3 weeks ago, when I first heard about the project. Stable Diffusion is still available for researcher use, however.

Like Midjourney, Stable Diffusion uses a special Discord server with commands (instead of Midjourney’s /imagine, you use the command !dream, followed by a text description of what you want to see, plus you can add optional parameters to set the aspect ratio, the number of images returned, etc.). However, the Stable Diffusion team has already announced that they plan to move from Discord to a web-based interface like DALL-E 2’s (we will be beta-testing that, too). Here’s a brief video glimpse of what the web interface could look like:


Given that I am among the relatively few people who currently have access to all three of the top publicly-available AI art-generation tools, I thought it would be interesting to create a chart comparing and contrasting all three programs. Please note that I am neither an artist nor an expert in artificial intelligence, just a novice user of all three tools! Almost all of the information in this chart has been gleaned from the projects’ websites and online news reports, as well as the active subreddit communities for all three programs, where users post pictures and ask questions. Also, all three tools are constantly being updated, so this chart might go out of date very quickly (although I will make an attempt to update it).

| Name of Tool | DALL-E 2 | Midjourney | Stable Diffusion |
|---|---|---|---|
| Company | OpenAI | Midjourney | Stability AI |
| AI Model Used | Diffusion | Diffusion | Diffusion |
| # Images Used to Train the AI | 400 million | “tens of millions” | 2 billion |
| User Interface | website | Discord | Discord (moving to website) |
| Cost to Use | credit system (115 for US$15) | subscription (US$10-30 per month) | currently free (beta) |
| Uses Text Prompts | yes | yes | yes |
| Can Add Optional Arguments | no | yes | yes |
| Non-Square Images? | no | yes | yes |
| In-tool Editing? | yes | no | no |
| Uncropping? | yes | no | no |
| Generate Variations? | yes | yes | yes (using seeds) |

A comparison chart of three AI text-to-art tools: DALL-E 2, Midjourney, and Stable Diffusion

I have already shared a few images from my previous testing of DALL-E 2 and Midjourney here, here, and here, so I am not going to repost those images, but I wanted to share a couple of the first images I was able to create using Stable Diffusion (SD). To make these, I used the text prompt “a thatched cottage with lit windows by a lake in a lush green forest golden hour peaceful calm serene very highly detailed painting by thomas kinkade and albrecht bierstadt”:

I must admit that I am quite impressed by these pictures! I had asked SD for images with a height of 512 pixels and a width of 1024 pixels, but to my surprise, the second image was a wider one presented neatly in a white frame, which I cropped using my trusty SnagIt image editor! Also, it was not until after I submitted my prompt that I realized that the second artist’s name is actually ALBERT Bierstadt, not Albrecht! It doesn’t appear as if my typo made a big difference in the final output; perhaps for well-known artists, the last name alone is enough to indicate a desired art style?
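(Stability AI has since released the Stable Diffusion model weights publicly, so if you would like to reproduce a request like this one yourself, outside of the Discord beta, here is a minimal sketch using Hugging Face’s diffusers library. The library, the model identifier, the seed, and the filename are assumptions on my part, and you will also need a CUDA-capable GPU.)

```python
# A minimal sketch of requesting a 1024x512 image from Stable Diffusion
# via Hugging Face's diffusers library (assumes diffusers, torch, and a GPU).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # assumes access to these public weights
    torch_dtype=torch.float16,
).to("cuda")

prompt = (
    "a thatched cottage with lit windows by a lake in a lush green forest "
    "golden hour peaceful calm serene very highly detailed painting "
    "by thomas kinkade and albert bierstadt"  # Albert this time, not Albrecht!
)

# Width and height must be multiples of 8; a fixed seed makes the run repeatable.
generator = torch.Generator("cuda").manual_seed(42)
image = pipe(prompt, width=1024, height=512, generator=generator).images[0]
image.save("thatched_cottage.png")
```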

Here are a few more samples of the kind of art which Stable Diffusion can create, taken from the pod-submissions thread on the SD Discord server:

Text prompt: “a beautiful landscape photography of Ciucas mountains mountains a dead intricate tree in the foreground sunset dramatic lighting by Marc Adamus”
Text prompt: “incredible wide screenshot ultrawide simple watercolor rough paper texture katsuhiro otomo ghost in the shell movie scene backlit distant shot”
Text prompt: “an award winning wallpaper of a beautiful grassy sunset clouds in the sky green field DSLR photography clear image”
Text prompt: “beautiful angel brown skin asymmetrical face ethereal volumetric light sharp focus”
Painting of people swimming (no text prompt shared)

You can see many more examples over at the r/StableDiffusion subreddit. Enjoy!

If you are curious about Stable Diffusion and want to learn more, there is a 1-1/2 hour podcast interview with Emad Mostaque, the founder of Stability AI (highly recommended!). You can also visit the Stability AI website, or follow them on social media: Twitter or LinkedIn.


I also wanted to submit the same text prompt to each of DALL-E 2, Midjourney, and Stable Diffusion, to see how the AI models in each would respond. Under each prompt you will see three square images: the first from DALL-E 2, the second from Midjourney, and the third from Stable Diffusion. (Click on each thumbnail image to see it in its full size on-screen.)

Text prompt: “the crowds at the Black Friday sales at Walmart, a masterpiece painting by Rembrandt van Rijn”

Note that none of the AI models are very good at getting the facial details correct for large crowds of people (all work better with just one face in the picture, like a portrait, although sometimes they struggle with matching eyes or hands). I would say that Midjourney is the clear winner here, although a longer, much more detailed prompt in DALL-E 2 or Stable Diffusion might have created an excellent picture.

Text prompt: “stunning breathtaking photo of a wood nymph with green hair and elf ears in a hazy forest at dusk. dark, moody, eerie lighting, brilliant use of glowing light and shadow. sigma 8.5mm f/1.4”

When I tried to generate a 1024-by-1024 image in Stable Diffusion, it kept giving me more than one wood nymph, even when I added words like “single” or “alone”; this is a known issue in the program’s current, early state. I finally gave up and used a 512×512 image. The clear winner here is DALL-E 2, which has a truly impressive ability to mimic various camera styles and settings!

Text prompt: “a very highly detailed portrait of an African samurai by Tim Okamura”

In this case, the clear winner is Stable Diffusion, with its incredible detail, even though, once again, I could not generate a 1024×1024 image because it kept giving me multiple heads! The DALL-E 2 image is a bit too stylized for my taste, and the Midjourney image, while nice, has eyes that don’t match (a common problem with all three tools).

And, if you enjoy this kind of thing, here’s a 15-minute YouTube video with 21 more head-to-head comparisons between Stable Diffusion, DALL-E 2, and Midjourney:


As I have said, all of this is happening so quickly that it is making my head spin! If anything, the research and development of these tools is only going to accelerate over time. And we are going to see this technology applied to more than still images! Witness a video shared on Twitter by Patrick Esser, an AI research scientist at Runway, where the entire scene around a tennis player is changed simply by editing a text prompt, in real time:


I expect I will be posting more later about these and other new AI art generation tools as they arise; stay tuned for updates!

To Teleport or Not to Teleport: Teleporting Versus Walking in the Metaverse

Ever wish you could teleport in real life?
(Photo by Chris Briggs on Unsplash)

Earlier this week, I had a guided tour of the blockchain-based social VR platform Somnium Space, where I was informed by my tour guide that the virtual world had just implemented teleporting. Scattered throughout the one large, contiguous virtual landscape which comprises Somnium Space were teleporter hubs, where you could pull up a map, click on the teleporter hub you wanted to travel to, press a button, et voilà! You were instantly transported to your destination.

A teleporter hub in the central city square of Somnium Space (at night)
The red arrows indicate the location of teleporter hubs on the map

What makes Somnium Space unusual among metaverse platforms is that you cannot simply teleport from one place to another, distant location; you must either make use of the provided teleporters, or walk/run/fly/swim to your destination. (Of course, you can certainly “short hop” using a limited form of teleporting, but that is only for shorter distances, not for instantly getting from one end of a large, contiguous landmass to the other.)

In other words, the teleporter hubs of the Somnium Transportation System are set up much like a modern urban subway system, where you can only travel to a particular, pre-built subway station situated nearest to your intended destination, and then walk the rest of the way. Many people might remember that in the very earliest days of Second Life, there were also teleporter hubs, in the days before avatars could instantly teleport themselves from one location to another!

Another thing that sets Somnium Space apart from other social VR platforms is that there are only going to be so many “public” teleporter hubs. In fact, some of these hubs are going to be auctioned off as NFTs (Non-Fungible Tokens), and the successful bidders with such a teleporter hub on their properties will be able to charge a cryptocurrency fee for the use of their teleporters! (In other words, they would operate much the same as a real-life toll road or highway.)

Closely intertwined with the idea of teleporting vs. walking is the layout of a metaverse platform. Is it one large contiguous landmass, like Somnium Space, Decentraland, Cryptovoxels, and (to a certain extent) Second Life? Or is it a collection of smaller worlds, like VRChat, Rec Room, Sansar, and Sinespace? If it is the former, then means of transportation (and ease of access to transportation) becomes more important. If it is the latter, then another tool which many of the newer social VR platforms offer is the ability to create a portal—either temporary or permanent—between two worlds. (Of course, you could consider a teleporter hub a portal.)

So, keeping all this in mind (particularly the distinction between SHORT HOP teleporting and teleporting to a DISTANT location), we can create a chart outlining the transportation affordances of the various metaverse platforms:

| Name of Platform (Layout) | Walk/Run? * | Distance Teleport? ** | Create Portals? † |
|---|---|---|---|
| Second Life (mostly one contiguous landmass, with private islands) | YES | YES | YES |
| Sinespace (separate worlds) | YES | NO | YES |
| Sansar (separate worlds) | YES | NO (but you can create teleport hubs) | YES |
| VRChat (separate worlds) | YES | NO | YES |
| Rec Room (separate worlds) | YES | NO | YES |
| AltspaceVR (separate worlds) | YES | NO | YES |
| NeosVR (separate worlds) | YES | NO | YES |
| Cryptovoxels (one contiguous landmass with some islands) | YES | NO (you can add coordinates to a URL, though) | YES |
| Decentraland (one contiguous landmass) | YES | YES (/goto X,Y) | NO |
| Somnium Space (one contiguous landmass) | YES | NO (but there are teleport hubs) | NO (unless you count teleport hubs) |

* – Can a user walk/run/fly/swim from one location to another? This includes SHORT HOP teleporting.
** – Can a user personally choose to teleport from one location to a second, DISTANT location?
† – Can a user create a temporary or permanent portal from one location to another?

Obviously, all metaverse platforms offer some form of personal locomotion for your avatar (walk, run, fly, swim, short-hop teleporting, etc.). This is standard.

It is also clear from this table that the metaverse platforms which consist of many smaller worlds (Sinespace, Sansar, VRChat, Rec Room, AltspaceVR, and NeosVR) all prefer the creation of temporary and permanent portals to allowing users to teleport great distances under their own steam. On the other hand, all the social VR platforms and virtual worlds which consist of one contiguous landmass tend to allow some form of teleportation across great distances.

You will notice that Cryptovoxels uses a rather brute-force method of “teleporting”, which consists of appending the destination coordinates to the end of the URL you enter into your web browser client (much like the coordinates that form part of the SLURLs used in Second Life, but not nearly as convenient, in my opinion); see the sketch below.
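To make the general idea concrete (without claiming to reproduce Cryptovoxels’ actual URL syntax, which I am deliberately not quoting from memory), here is a hypothetical sketch of “teleporting by URL”; the base URL and the coords parameter are illustrative placeholders only:

```python
# Hypothetical sketch of "teleporting by URL": encode a destination's
# coordinates into the page address, so loading the URL drops the avatar there.
# The base URL and coordinate format below are illustrative placeholders,
# NOT Cryptovoxels' actual syntax.
from urllib.parse import urlencode

def teleport_url(base: str, x: int, y: int, z: int = 0) -> str:
    """Build a world URL that encodes a target location in its query string."""
    return f"{base}?{urlencode({'coords': f'{x},{y},{z}'})}"

print(teleport_url("https://example-world.com/play", 120, -45))
# https://example-world.com/play?coords=120%2C-45%2C0
```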

Transportation affordances are yet another way to classify metaverse platforms in my continuing effort to create a taxonomy of social VR platforms and virtual worlds.

So, what do you think? Have I made an error in my table? Do you have an opinion about the benefits of teleporting and portals versus walking around and exploring the landscape? I’d love to hear your opinions, so please leave a comment, thank you!