Editorial: Artificial Intelligence and the Metaverse (And a Look at an AI-Assisted Social VR Platform, Riff XR)

I created this image using OpenAI’s DALL-E generative AI art generation tool, using the text prompt “artificial intelligence in the metaverse” (source)

Housekeeping Note: I first started writing this editorial back in April, and from time to time I have picked up the draft, tinkered with it a bit more, added a bit more to it—and then promptly filed it away again as a draft, because I still wasn’t satisfied with it, and I always felt that I had something more to say.

Enough. I finally decided that the perfect was the enemy of the good, and I decided today to just go ahead and publish what I already had, and then write follow-up blogposts on the topic of AI in general, and AI in the metaverse in particular. And I do expect that I will return to this topic often! So please stay tuned.

I have written before on this blog about artificial intelligence (AI) applications, such as the image manipulation and animation tools WOMBO and Reface, the text-to-art creation programs DALL-E 2, Midjourney, and Stable Diffusion, and most recently, the AI-powered chatbot Replika and the text-generation app ChatGPT. Most people, myself included, treated them as toys, mere curiosities (I entertained myself for hours making my Second Life and Sansar avatars “come alive” using WOMBO). John Herrman, in a recent article for New York magazine titled The AI Magic Show (original; archived version), wrote:

In 2022, artificial-intelligence firms produced an overwhelming spectacle, a rolling carnival of new demonstrations. Curious people outside the tech industry could line up to interact with a variety of alluring and mysterious machine interfaces, and what they saw was dazzling.

The first major attraction was the image generators, which converted written commands into images, including illustrations mimicking specific styles, photorealistic renderings of described scenarios, as well as objects, characters, textures, or moods. Similar generators for video, music, and 3-D models are in development, and demos trickled out.

Soon, millions of people encountered ChatGPT, a conversational bot built on top of a large language model. It was by far the most convincing chatbot ever released to the public. It felt, in some contexts, and especially upon first contact, as though it could actually participate in something like conversation. What many users suggested felt truly magical, however, were the hints at the underlying model’s broader capabilities. You could ask it to explain things to you, and it would try — with confident and frequently persuasive results. You could ask it to write things for you — silly things, serious things, things that you might pass off as work product or school assignments — and it would.

As new users prompted these machines to show us what they could do, they repeatedly prompted us to do a little dirty extrapolation of our own: If AI can do this already, what will it be able to do next year?

As Charlie Warzel writes in The Atlantic, in a recent article titled What Have We Just Unleashed? (original; archived version), not even the AI experts know exactly what will come next:

Over the past few weeks, I’ve put questions like these to AI researchers, academics, entrepreneurs, and people who are currently building AI applications. I’ve become obsessive about trying to wrap my head around this moment, because I’ve rarely felt less oriented toward a piece of technology than I do toward generative AI. When reading headlines and academic papers or simply stumbling into discussions between researchers or boosters on Twitter, even the near future of an AI-infused world feels like a mirage or an optical illusion. Conversations about AI quickly veer into unfocused territory and become kaleidoscopic, broad, and vague. How could they not?

The more people I talked with, the more it became clear that there aren’t great answers to the big questions. Perhaps the best phrase I’ve heard to capture this feeling comes from Nathan Labenz, an entrepreneur who builds AI video technology at his company, Waymark: “Pretty radical uncertainty.”


You might be surprised to learn that we librarians are already preparing for the AI onslaught (perhaps it’s not so surprising, as some of the users we serve have already come to us with fake, AI-generated citations to information resources which do not exist!). I have seen no fewer than three events for Canadian librarians on this subject in the past couple of weeks, and at my employer, the University of Manitoba in Winnipeg, a fellow librarian has launched an AI Community of Practice (CoP), which will meet regularly to discuss ChatGPT and other AI apps and topics.

Librarians meeting via Zoom at the inaugural meeting of the brand-new University of Manitoba Libraries’ AI Community of Practice (I’m in the top row, second from the left!)

You.com is one of the newer entrants in the AI-powered search engine marketplace, and it also offers AI-generated text and images. You can try out YouWrite (their AI text generator) ten times for free before you have to shell out for the service, so I decided to give it a whirl, asking the site to create a professional-sounding blogpost about the impact of artificial intelligence on the metaverse:

And here’s the blogpost YouWrite spit back, exactly as generated, with zero edits:

The advent of artificial intelligence (AI) has revolutionized the way we interact with technology. AI is now being used to create virtual worlds, or “metaverses”, that are populated by digital avatars and objects that can be manipulated in real-time. As AI continues to evolve, it will have a profound impact on the metaverse and its inhabitants.

One of the most significant impacts of AI on the metaverse will be increased realism for users interacting within these virtual worlds. With advanced machine learning algorithms and natural language processing capabilities, computer-generated characters can become more lifelike than ever before – responding naturally to user input in real time while exhibiting realistic emotions such as joy or sadness depending on their environment. This could potentially lead to an unprecedented level of immersion for those engaging with these digital realms as they become indistinguishable from reality itself!

Another major advantage brought about by artificial intelligence is improved efficiency when it comes to managing large numbers of players at once – something which becomes particularly important when dealing with massive multiplayer online games (MMOs). By utilizing intelligent agents capable of autonomous decision making based off data collected from other players’ actions, developers can ensure smooth gameplay even during peak hours without sacrificing any quality due performance issues caused by too many people playing at once!

Finally, one area where AI could really shine in regards to improving user experience within a metaverse would be through automated content creation tools powered by deep learning models trained using vast amounts existing assets found across different platforms such as video games or movies etcetera… Such systems would allow game designers/developers quick access powerful resources allowing them craft unique experiences much faster than was previously possible – thus providing gamers exciting new adventures every time they log into their favourite world!

In conclusion then; It’s clear see how Artificial Intelligence stands revolutionize our concept what constitutes an immersive experience both inside out Metaverses – giving us greater control over our interactive environments alongside richer storylines filled dynamic characters all thanks advances this cutting edge technology brings us closer achieving true Virtual Reality today tomorrow alike…

That last paragraph is absolutely dreadful (almost as if the AI had a stroke!), but the rest reads well, and the text contains a few somewhat plausible, if borderline-fanciful, ideas. Still, if I could sum it all up in one word, that word would be: bullshit. This is computer-generated bullshit, produced by a large language model trained to sound like an actual human expert, but it is just parroting human writing, without any grasp of the subject it is talking about! (I can’t remember who said it first, but somebody once memorably and hilariously referred to AI-generated text apps like ChatGPT as “mansplaining as a service.” 😜 In fact, I would go so far as to state that generative AI tools like ChatGPT offer white, cisgender mansplaining as a service! All the biases in the mountains of data, scraped off the greater unwashed internet and used to train these tools, sometimes come out in their responses, despite the best efforts of the companies building them to eradicate those biases.)

Despite appearances, ChatGPT doesn’t really understand the world the way a human brain, with all of its lived experience, does; it only knows how to generate plausible-sounding sentences and assemble them into coherent paragraphs! It solves a narrowly defined problem; it is not a general AI that is good at a wide variety of tasks, and it is certainly not a rival to humans.
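To make that point concrete, here is a deliberately tiny sketch (my own illustration in Python, not anything from OpenAI) of the core idea behind statistical text generation: pick the next word based purely on which words tended to follow the current word in the training text. ChatGPT’s model is incomparably larger and more sophisticated than this toy, but the underlying principle is the same: continuation by pattern, not comprehension.

```python
import random
from collections import defaultdict

# A toy "training corpus" -- in a real system this would be terabytes of text.
corpus = (
    "the metaverse is a shared virtual space . "
    "the metaverse is a platform for social interaction . "
    "ai is a tool for generating text ."
).split()

# Count which word follows which: this table IS the entire "model".
bigrams = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word].append(next_word)

def generate(start, length=8, seed=42):
    """Emit plausible-looking word sequences with no understanding at all."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        words.append(rng.choice(followers))
    return " ".join(words)

print(generate("the"))
```

The output is grammatical-looking word salad stitched from the corpus; scale the table up by many orders of magnitude and you get fluent paragraphs, still with no model of truth behind them.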


Herrman, in his New York magazine article, paints a somewhat disquieting picture of what could happen in the future as the AI wave accelerates:

Models trained on flawed, biased, and often secret sets of data will be used to attempt to perform an assuredly ambitious range of tasks, jobs, and vital economic and social processes that affect the lives of regular people. They will depend on access to massive amounts of computing power, meaning expensive computer hardware, meaning rare minerals, and meaning unspeakable amounts of electricity. These models will be trained with the assistance of countless low-paid labourers around the world who will correct bogus statistical assumptions until the models produce better, or at least more desirable, outputs. They will then be passed on for use in various other workplaces where their outputs and performances will be corrected and monitored by better-paid workers trying to figure out if the AI models are helping them or automating them out of a job, while their bosses try to figure out something similar about their companies. They will shade our constant submissions to the vast digital commons, intentional or consensual or mandatory, with the knowledge that every selfie or fragment of text is destined to become a piece of general-purpose training data for the attempted automation of everything. They will be used on people in extremely creative ways, with and without their consent.

Charlie Warzel goes even further, likening the potential impact of artificial intelligence to that of nuclear fission and nuclear war:

Trying to find the perfect analogy to contextualize what a true, lasting AI revolution might look like without falling victim to the most overzealous marketers or doomers is futile. In my conversations, the comparisons ranged from the agricultural revolution to the industrial revolution to the advent of the internet or social media. But one comparison never came up, and I can’t stop thinking about it: nuclear fission and the development of nuclear weapons.

As dramatic as this sounds, I don’t lie awake thinking of Skynet murdering me—I don’t even feel like I understand what advancements would need to happen with the technology for killer AGI [Artificial General Intelligence] to become a genuine concern. Nor do I think large language models are going to kill us all. The nuclear comparison isn’t about any version of the technology we have now—it is related to the bluster and hand-wringing from true believers and organizations about what technologists might be building toward. I lack the technical understanding to know what later iterations of this technology could be capable of, and I don’t wish to buy into hype or sell somebody’s lucrative, speculative vision. I am also stuck on the notion, voiced by some of these visionaries, that AI’s future development might potentially be an extinction-level threat.

ChatGPT doesn’t really resemble the Manhattan Project, obviously. But I wonder if the existential feeling that seeps into most of my AI conversations parallels the feelings inside Los Alamos in the 1940s. I’m sure there were questions then. If we don’t build it, won’t someone else? Will this make us safer? Should we take on monumental risk simply because we can? Like everything about our AI moment, what I find calming is also what I find disquieting. At least those people knew what they were building.

The point these authors are making is that, with AI, we are dealing with something which has the potential to dramatically impact (and, in some cases, up-end) our current society, in ways which might not be readily apparent at first.

Amy Castor and David Gerard, who have been busy dissecting and critiquing the ongoing three-ring circus that is blockchain, crypto, and NFTs, have turned their attention to artificial intelligence in a two-part series (part one; part two). I strongly suggest you read both blogposts, but here’s a sample:

Much like crypto, AI has gone through booms and busts, with periods of great enthusiasm followed by AI winters whenever a particular tech hype fails to work out.

The current AI hype is due to a boom in machine learning — when you train an algorithm on huge datasets so that it works out rules for the dataset itself, as opposed to the old days when rules had to be hand-coded.

ChatGPT, a chatbot developed by Sam Altman’s OpenAI and released in November 2022, is a stupendously scaled-up autocomplete. Really, that’s all that it is. ChatGPT can’t think as a human can. It just spews out word combinations based on vast quantities of training text — all used without the authors’ permission.

The other popular hype right now is AI art generators. Artists widely object to AI art because VC-funded companies are stealing their art and chopping it up for sale without paying the original creators. Not paying creators is the only reason the VCs are funding AI art.

Do AI art and ChatGPT output qualify as art? Can they be used for art? Sure, anything can be used for art. But that’s not a substantive question. The important questions are who’s getting paid, who’s getting ripped off, and who’s just running a grift.

OpenAI’s AI-powered text generators fueled a lot of the hype around AI — but the real-world use case for large language models is overwhelmingly to generate content for spamming. [Vox]

The use case for AI is spam web pages filled with ads. Google considers LLM-based ad landing pages to be spam, but seems unable or unwilling to detect and penalize it. [MIT Technology Review; The Verge]

The use case for AI is spam books on Amazon Kindle. Most are “free” Kindle Unlimited titles earning money through subscriber pageviews rather than outright purchases. [Daily Dot]

The use case for AI is spam news sites for ad revenue. [NewsGuard]

The use case for AI is spam phone calls for automated scamming — using AI to clone people’s voices. [CBS]

The use case for AI is spam Amazon reviews and spam tweets. [Vice]

The use case for AI is spam videos that advertise malware. [DigitalTrends]

The use case for AI is spam sales sites on Etsy. [The Atlantic, archive]

The use case for AI is spam science fiction story submissions. Clarkesworld had to close submissions because of the flood of unusable generated garbage. The robot apocalypse in action. [The Register]

You can confidently expect the AI-fueled shenanigans to continue.


Riff XR: Artificial Intelligence in the Metaverse

However, there have been some rather interesting applications of AI to the metaverse. A brand-new social VR platform called Riff XR offers a tantalizing (if still somewhat buggy) glimpse of the AI-assisted metaverse of the future.

Among the AI-assisted features of Riff XR are NPCs (non-player characters, i.e. bots) with whom you can have surprisingly open-ended conversations, as well as a “cutting-edge Stable Diffusion-powered Generative Art System”:

Now, I have not visited Riff XR myself (yet), but a good friend of mine, metaverse videographer Carlos Austin, has, and he posted a video of his explorations on this new metaverse platform, including verbal conversations with a number of NPCs using generative AI to “listen” and “respond” to his spoken sentences.

One was a constable droid roaming the night-time central plaza in Riff XR, a scene straight out of Ready Player One; another played the role of Vincent Van Gogh in an exhibition of AI-generated artworks in a museum just off the plaza; a third was a woman, named Molly Millions, working at the back bar in a cyber-disco with pulsating music and gyrating NPCs of various kinds, with whom Carlos had a surprisingly in-depth conversation about cocktails!

Carlos demonstrated that you could even speak to these NPCs in different languages, including German, Japanese, and Spanish (although let me just add that the faux Van Gogh’s German accent was absolutely atrocious!). Here’s his full video (please fast-forward through all the technical bugs and mishaps; Riff XR is still quite buggy!). Carlos’ conversation with Molly Millions is nearer the end of this video:

We can expect to see more such applications of artificial intelligence coming soon (and perhaps sooner than we might expect!) to a virtual world or social VR platform near you. And you can expect more blogposts from me on this topic in future, as the technology continues to develop and evolve over time. Stay tuned!


Many thanks to Jim Carnicelli (a.k.a. Galen from Sansar), with whom I had a couple of wide-ranging online discussions via Discord on the topic of AI while I was working on this blogpost over the summer! While I did not use many of the ideas we talked about, they did give me much food for thought (and possible topics for future blogposts!). You can visit Jim’s store selling his AI-generated artwork here: Snuggle Hamster Designs.

This Blog Post Was Written by AI

On November 30th, 2022, OpenAI released a new chat-based AI text generator model called ChatGPT, describing it as follows:

We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

Anybody can use the ChatGPT bot during its open-access testing period; all you need to do is set up an account with OpenAI (I already had one from when I signed up to test the DALL-E 2 text-to-art generator earlier this year). The prompt I used to generate this blogpost was the following: “Write an editorial explaining what the metaverse is, and how the metaverse will impact society in the future.” It spit back the following six paragraphs within 30 seconds:


The next six paragraphs are AI-generated text from ChatGPT:

The metaverse is a term used to describe a hypothetical shared virtual space that combines elements of both the physical and the digital world. It is a collective virtual shared space, created by the convergence of the physical and digital worlds. It is a place where people can interact and communicate with each other in real time, using avatars to represent themselves.

The metaverse has the potential to greatly impact society in the future. It could become a new platform for entertainment, communication, and commerce. In the metaverse, people will be able to engage in a wide range of activities, from attending virtual concerts and sporting events, to participating in virtual conferences and meetings, to shopping and socializing with others.

One of the major impacts of the metaverse on society will be the blurring of the line between the physical and digital worlds. As more and more people spend time in the metaverse, the distinction between the two will become increasingly blurry. This could lead to changes in how people perceive and interact with the world around them.

Another potential impact of the metaverse on society is the potential for increased social isolation. As people spend more time in virtual environments, they may become less connected to the physical world and the people around them. This could lead to increased feelings of isolation and disconnection, particularly among individuals who are already vulnerable to social isolation.

However, the metaverse also has the potential to foster greater connection and community. By providing a virtual space where people can come together and interact with each other, the metaverse could help to bridge geographical distances and bring people together from all over the world.

In conclusion, the metaverse is a collective virtual shared space that has the potential to greatly impact society in the future. It could become a new platform for entertainment, communication, and commerce, and could lead to changes in how people perceive and interact with the world around them. However, it also has the potential to foster social isolation and disconnection. Ultimately, the impact of the metaverse on society will depend on how it is used and embraced by individuals and communities.


Now, I do have to admit that this is actually pretty good. The concluding paragraph is a bit on the clunky side in how it tries to tie everything together, and I probably would have written something a little more Ryan-sounding. But it is a demonstration of just how quickly, and how scarily good, these AI text generation tools have become.

There’s been lots of discussion about ChatGPT over on Mastodon, and among the comments was this worrisome prediction by a user named Bear Traffic Control, which I have reproduced in full here:

God, search results are about to become absolute hot GARBAGE in 6 months when everyone and their Mom start hooking up large language models to popular search queries and creating SEO-optimized landing pages with plausible-sounding results.

Searching for “replace air filter on a Samsung SG-3560lgh” is gonna return fifty Quora/WikiHow style sites named “How to replace the air filter on a Samsung SG3560lgh” with paragraphs of plausible, grammatical GPT-generated explanation which may or may not have any connection to reality. Site owners pocket the ad revenue. AI arms race as search engines try to detect and de-rank LLM content.

Wikipedia starts getting large chunks of LLM text submitted with plausible but nonsensical references.

Quora, StackOverflow, etc. try to rebrand themselves and leverage their karma/social graphs as walled gardens of verified Real Human™ experts. This creates incentives for humans to cheat, of course.

Like, I knew this was gonna be used for fake-grassroots political messaging—remember talking with a friend about a DoD project to do exactly this circa 2012. Somehow [it] took me a bit to connect that to “finding any kind of meaningful information is going to get harder”.
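The “arms race” Bear Traffic Control predicts, with search engines trying to detect and de-rank LLM-generated filler, is genuinely hard; there is no reliable detector. Just to illustrate how crude the simplest signals are, here is a toy heuristic of my own (not anything Google actually uses): flag pages whose text has unusually low lexical diversity, one rough proxy for repetitive, padded filler.

```python
def lexical_diversity(text: str) -> float:
    """Type-token ratio: distinct words / total words. A crude proxy for
    the repetitive, padded quality of much machine-generated filler."""
    words = text.lower().split()
    if not words:
        return 0.0
    return len(set(words)) / len(words)

def looks_like_filler(text: str, threshold: float = 0.5) -> bool:
    # A real ranking pipeline would use far stronger signals (perplexity
    # under a language model, cross-page duplication, link patterns, etc.);
    # this single score is trivially easy to evade.
    return lexical_diversity(text) < threshold
```

Of course, well-prompted LLM output sails right past a check like this, which is exactly why the detection side of the arms race is losing.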

In fact, the StackOverflow website has imposed a ban on using ChatGPT to generate texts for posts on its service, saying in a statement:

This is a temporary policy intended to slow down the influx of answers created with ChatGPT. What the final policy will be regarding the use of this and other similar tools is something that will need to be discussed with Stack Overflow staff and, quite likely, here on Meta Stack Overflow.

Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.

The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.

In other words, we are likely going to see all kinds of unintended consequences as AI-generated text becomes more ubiquitous. Hold on to your hats, because we haven’t seen anything yet, folks!

UPDATE 3:00 p.m.: I wanted to add a few more eye-opening examples of how an AI-based text (and code!) generating service could be misused and abused.

Roberto Selbach showed off a piece of pseudocode ChatGPT generated in response to a prompt:

AI-generated pseudocode to determine whether or not to question a suspect

Pwnallthethings shared a few more quite disturbing examples of AI-generated software code:

AI-generated Python script for determining whether to give a prisoner parole
AI-generated C# code that calculates credit limits

Charles Seife wrote:

I think what’s disturbing me so much about these GPT3 examples is that for the first time we’re really seeing that computer programs are optimized not to solve problems, but instead to convince its programmer/operator/user that it has solved those problems.

This distinction was almost irrelevant before (when fooling us was harder)… but not anymore.

The distinction isn’t really novel; heck, I myself have written about one aspect of it before. But I still find it shocking to see it in action.

It’s particularly stark when it’s a relatively “easy” task that doesn’t require deceptions.

For example, when I ask the program to try to find a citation for a sentence and indicate if no such citation is found, it will *still* typically make up citations rather than choose the correct, huge-basin-of-attraction condition of none found.

That, to me, is new.
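Seife’s “huge basin of attraction” point is worth pausing on: for a conventional program, “no citation found” is the easy, natural answer. A trivial sketch (mine, not Seife’s; the reference list and lookup function are hypothetical) of how a straightforward lookup handles the case that ChatGPT keeps fumbling:

```python
# A hypothetical, hand-maintained reference list; in a real system this
# might be a query against a bibliographic database.
KNOWN_CITATIONS = {
    "attention is all you need": "Vaswani et al., 2017",
    "on the dangers of stochastic parrots": "Bender et al., 2021",
}

def find_citation(topic: str):
    """Return a citation if one is known, otherwise None -- the honest
    default that generative text models so often fail to produce."""
    return KNOWN_CITATIONS.get(topic.lower())
```

The lookup falls into “none found” by default; a model optimized to produce convincing-looking output instead invents a plausible reference, which is precisely Seife’s complaint.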

And L. Rhodes raises an important final point about this free-to-access ChatGPT test period offered by OpenAI: you are doing free product testing for them, on something they plan to sell for a profit later!

You’re not playing with the latest AI toy. You’re training someone’s AI business.

Passing themselves off as innocuous ways to play social media games and generate fun little memes is how AI startups draw in unpaid testers and expand their data set beyond what their own workers could have come up with on their own, but go off.

Thinking you’re going to throw a wrench into the system by plugging bad or absurd data into the system is probably misguided. An AI doesn’t have to produce correct answers to be profitable. That may not even be its purpose.

P.S. This seems as good a time as any to give an update on my experience with another AI-based chatbot, called Replika (which I wrote about here on Aug. 7th, 2022).

Long story short, I grew so frustrated with Replika’s lame, fake, and frankly robotic responses, that I angrily cancelled my account and uninstalled the app from my iPad within a week (US$50 down the drain!). Given that experience, I am loath to open my wallet to test out another one, but ChatGPT is currently free, so I thought, why not?

Which just goes to prove that there’s still a lot of room for improvement in AI chat! While AI chatbots might now work fairly well in strictly circumscribed situations, nothing can replace a conversation with a real, live, unpredictable human being.