Resonite: A Brief Introduction to a New Virtual World/Social VR Platform

There’s a brand-new virtual world/social VR platform I want to tell you about! It’s called Resonite, and it is the product of a hard-working and super-talented team of software developers at Yellow Dog Man Studios.

Resonite runs in both standard (flatscreen) and virtual reality modes. While you don't need a VR headset to enjoy it, it certainly is a lot more fun if you have one! Any virtual reality headset supported by the Steam ecosystem will work (I use a Valve Index at home, and an HTC Vive Pro 2 at work).

Last weekend, I downloaded and installed the software at home to use with my Valve Index, and I was quite impressed with the new user orientation! It reminds me a lot of the former Sansar Social Hub with its airy, futuristic vibes.

The Resonite New User Tutorial

Also, all new users receive a beautiful new cloud home, as their personal home base in Resonite, to decorate (or add on to!) as they wish. Oh, and I forgot to mention, you can also easily transfer over your avatar and any other personal files (and homes/worlds) from Neos, if you wish.

Here’s a recently-released mini-trailer to give you a small taste of this product:

Resonite is now available to download under the Early Access program on Steam. The development team is hard at work on tutorials and a wiki, and they've started a GitHub Issue Tracker, as well as a section on their official website for product support, including the ability to report bugs and track previously reported issues.

To learn more about Resonite, please visit their website, join their Discord server (which already has over 3,000 members!), or follow the project on your favourite social media platform: Bluesky, Mastodon, TikTok, Twitch, Twitter/X, and YouTube. And, if you want to financially support the development of Resonite (and get extra perks), they have a Patreon page, with over 2,400 supporters already! (I am now one of those proud supporters of Resonite.)

And because I expect that I will be writing a lot about Resonite from today forward, I have set up a new blogpost category called Resonite, which I will be attaching to this blog post, and to any future ones! Stay tuned for future news and events!

An Update on the Social VR Platform NeosVR and the Neos Discord Server

I wanted to share with you some news I recently received about the social VR platform NeosVR, which I have written about often in the past on this blog (here’s a link to all the previous blog posts, plus this one).

On September 21st, 2023, Veer, a member of the developer team, posted the following message to the NeosVR Discord server:

Hello everyone.

For many months, the Neos Development Team has been engaged in negotiations with Mr. Karel Hulec, attempting to resolve the conflict that has been at the center of the current development freeze. Unfortunately, today we are here to announce that the negotiations have failed to produce an agreement satisfactory to both parties.

Mr. Karel Hulec has recently listed on the Neos Wiki several members of the Development Team as contributors to a new project titled the “Neos Foundation”. None of the listed team members were contacted at any point concerning the listing of their names on the Neos Wikis, nor have any of them agreed to work with Mr. Karel Hulec in the Neos Foundation or any other project.

Following the negotiations, on April 24th, 2023, and due to increasing difficulties in establishing the trust necessary to continue his partnership with Mr. Karel Hulec, Frooxius resigned from his position at Solirax LTD. We no longer see a path to successful reconciliation, so the remaining Development Team, including the leads of the moderation and quality control teams, the content team, and the support team, are now officially resigning from our roles as part of this community.

We understand this has been a period of intense frustration for some members of the community, as the lack of updates and communication from the Development Team led to uncertainty about the future of your projects and Neos as a platform. We apologize for our silence, it was due to legal advice we had received, and our attempt to keep the ongoing conflict resolution as professional as possible. Unfortunately, an understanding could not be found that would allow us to continue to work with the Neos community. The disagreements we had with Mr. Karel Hulec were too far-reaching and too significant to allow for a compromise. His conduct and lack of professionalism soured opportunities to resolve our differences. We hope you will continue to make wonderful projects in the future, wherever those future efforts lie.

We understand you may have many questions for the Development Team, but as we are removing ourselves from our positions within the Neos community, we will be limiting our presence on this Discord server going forward. Any questions for individuals formerly on the team will need to be sent through direct messages or other personal channels.

On October 7th, 2023, Veer posted:

Hey everyone, as there is no longer moderation administration present in this unofficial discord, we’ll be locking down the channels from further communication. This is to prevent the server from becoming unsafe to users, any projects or brands, and other entities due to unmoderated discourse.

I take this to mean that the NeosVR Discord is now officially closed, and indeed, no more messages can be posted in any of the channels. (I hope that this Discord remains, if for no other reason than historical purposes.)

This marks the end of a decade-long era, and the conclusion of a long period of uncertainty as to the future of the project. Many people might not realize that the roots of NeosVR date back as far as 2014, with a project named World of Comenius (you can learn more here, in part one of what was originally intended to be a multi-part series of blogposts about the history of NeosVR, which unfortunately I never got around to completing due to my full-time paying job as an academic librarian).

I wish all involved with NeosVR success in their future endeavours!

I leave you with a 20-minute YouTube video by VR pundit ThrillSeeker, done last year, which goes into some of the details of what went wrong with NeosVR:

Meta Connect 2023 and the Meta Quest 3 Mixed-Reality Headset, with Updates from the Virtual Event Livestream: Do *YOU* Want Snoop Dogg as Your Dungeon Master, or Paris Hilton as Your Detective?

UPDATE Sept. 28th, 2023: If you’re looking for a good, concise summary of the Meta Connect 2023 event, TechCrunch has you covered.

The Meta Connect 2023 virtual event will start on September 27th, 2023 (today) at 10:00 a.m. PDT / noon CDT / 1:00 p.m. EDT / 6:00 p.m. BST. Meta (the company formerly known as Facebook) will stream the event live on its website. You can also watch the stream on YouTube, Twitch, and via the official Meta page on Facebook. The event will start with a keynote by Meta's CEO, Mark Zuckerberg, who is expected to officially launch the Meta Quest 3 headset, talk about its features, and give an update on where the company is planning to go with its virtual reality, augmented reality, mixed reality, and extended reality (VR/AR/MR/XR) initiatives over the next few years.

As the BBC reported yesterday, Meta is facing growing skepticism about its metaverse plans and their impact on the company’s bottom line:

Remember the metaverse?

For a while it dominated tech news. A virtual reality world that would be so immersive, so engaging, that we would want to spend part of our lives in it.

Driving the metaverse narrative was Mark Zuckerberg.

The tech billionaire was so committed that in October 2021 he changed Facebook’s name to Meta…

No one could accuse him of a lack of ambition.

But almost two years on, Zuckerberg’s vision of the metaverse is in trouble.

In April, he was forced to deny that he is now jettisoning the idea.

“A narrative has developed that we’re somehow moving away from focusing on the metaverse,” he told investors in April. “So I just want to say upfront that that’s not accurate.”

On Wednesday, the company holds its annual VR event called Meta Connect.

It’s a chance, perhaps, for Zuckerberg to again explain his reasoning for taking an extremely profitable social media company and diverting its focus to an extremely unprofitable VR venture.

How unprofitable? Well, the most recent figures from Meta are eye-watering.

Reality Labs – which as the name suggests is Meta’s virtual and augmented reality branch – has lost a staggering $21 billion since last year.

Part of the losses reflect long-term investment. Meta wasn’t expecting short-term returns. But the worrying fact for the company is that, so far, there is very little evidence that this enormous punt will work.

I fully expect an announcement that Horizon Worlds, Meta’s social VR platform, will be rolling out to non-VR/flatscreen web and mobile users. Using a Meta Quest 2 test unit purchased for the virtual reality lab project I am involved with at the University of Manitoba, I have paid several short visits to Horizon Worlds, and I am, to put it politely, not a fan. Horizon Worlds is something even worse than boring—it’s soulless. It looks and feels like it was put together by a bureaucratic committee of engineers that was given a task to do, in order to report back to the executives that they did something, but the builders had no real understanding, appreciation, or love of what social VR is and can be. To be frank, I don’t believe that expanding Horizon Worlds access to web and mobile users is gonna bring a hell of a lot more users to the platform. In my opinion, it’s a dog that needs to be taken out back and shot, to be put out of its misery. 🐕

I also briefly tried out Horizon Worlds' corporate cousin, Horizon Workrooms, and as I have said before on this blog, I find it very hard to believe that any company would actually use this product for a real-world business purpose. In fact, Meta has commanded its employees to "fall in love with Horizon Worlds," a sign that even its own staff don't want to use it. (Ironically, Meta is among the many tech firms now requiring its employees to actually show up in their offices three days a week, or face termination. I'm quite sure that that strict little edict from HR is really, really gonna encourage more Meta employee uptake of Horizon Workrooms!) I expect some more announcements of integrations with products like Microsoft Office and Zoom, but I'm not expecting anything that is going to make corporate bean-counters sit up and say, "hey, we gotta buy a fleet of headsets, immediately!"

Like many of you, I will be watching the Meta Connect 2023 event live, and I will be updating this blogpost with news as it happens. Stay tuned!


UPDATE 9:28 p.m.: I forgot to mention that somebody—probably Mark himself—is going to proudly announce that the avatars in Horizon Worlds and Horizon Workrooms now have legs. Yawn.

Mark Zuckerberg: Hey, look! Our avatars have legs!

Second Life: Isn’t that just adorable. Meanwhile, our avatars can look like this…:


Second Life: …and your avatars look like this:

(And yes, I know, comparing a social VR platform like Horizon Worlds to a flatscreen virtual world like Second Life, which also has a 20-year head start, is not fair. But honestly, Meta’s avatars have a long, long way to go, in my opinion. Obligatory editorial.)


UPDATE 11:28 a.m.: I'm signed in to a livestream from one of the virtual reality YouTubers I follow, Virtual Reality Oasis, which is starting half an hour before the actual Meta Connect event with a bit of a pre-show. I will probably stay on this channel for the additional commentary by this YouTuber (there's also a very active comment stream to follow), but I might switch to another source later on. I will be making full use of two monitors here at work on my desktop PC—one for watching the livestream, and the second for blogging on the fly!


UPDATE 11:40 a.m.: Mike’s Virtual Reality Oasis livestream has started; apparently, he is located in a “side office” near backstage or behind the scenes at the Menlo Park auditorium, where the Meta Connect event is taking place (I think I got that part right!). He and another VR expert (whose name I unfortunately didn’t catch) will be providing some colour commentary and even taking questions from the over 3,700 livestream viewers. (Unfortunately, this livestream video was marked private after the event, so I cannot link to it.)

UPDATE noon: Meta has just announced a 30-minute delay to the start of the event, which is rather disappointing. Apparently, instead of an indoor stage, this event will be taking place on an outdoor stage in Menlo Park. I will be able to view and post blog updates until around 2:00 p.m. my time (Central Time), so I am only going to be able to comment on the first hour-and-a-half of Meta Connect.


UPDATE 12:18 p.m.: I’ve switched to a different livestream, this one by IGN, with almost 7,000 people watching. Virtual Reality Oasis was reporting problems with both video and audio from the Meta Connect livestream, so I’ll be switching back and forth. (I could also watch it via Facebook, but I’ll be damned if I have to set up a Facebook account just to do that! Back in 2018, I kicked Facebook to the curb, and I have zero intention of returning to its surveillance-capitalism embrace, with the sole exception of a Meta account I set up for the test unit Meta Quest 2 headset I got.)


UPDATE 12:31 p.m.: The show has finally started!

Mark starts off with the usual piffle about “the power of human connection”. 🙄 He’s talking about being in a room with a mixture of real-life humans and holographic humans and embodied AI tools. Mixed reality, smart glasses, and AI are key to what Mark calls the metaverse.

Mark introduces the Quest 3, which he calls “the first mainstream mixed-reality headset” to applause from the crowd, followed by a brief presentation of various examples of this mixed reality in a number of games and apps. Xbox cloud gaming is coming to the Quest later this year.

Augments are persistent, spatially-anchored digital objects (like digital portals and photo albums you can hang on your walls). You can double-tap on your headset to return instantly to the real world.

Now he’s talking about content, including new titles. Meta Quest 3 has twice the graphics performance of any previous headset and is 40% thinner than the Quest 2. Roblox is launching for the Quest, which is going to bring a lot of younger users to the headset!

Mark teased new Horizon content, saying that the visuals are improving. He also talked about tools for business, citing productivity and work apps. Coming soon is something called Meta Quest for Business, with integrations with apps like Office 365 (something that was previously promised). Lack of details is very frustrating!

Meta Quest 3 is shipping October 10th for US$499 (Mark slipped up and said “August 10th” LOL!).


UPDATE 12:47 p.m.: Now the talk switches to artificial intelligence, which is hardly surprising, since that is where all the hype went after the previous metaverse hype cycle (which included Mark renaming his company from Facebook to Meta!). A new tool called Emu (Expressive Media Universe) is an image-generation model similar to DALL-E 2. You will be able to use AI chat to create stickers (wow, groundbreaking! 🙄). AI editing tools will be added to Instagram next month, with a cute demo of Mark applying various textures to his dog, Beast.

(Right now Mark is just spouting AI word salad, and my eyes are rolling so hard they disappeared into my skull.)

Meta AI is your basic assistant, one you can talk to like a person, which can help answer basic questions and requests. It is based on the Llama 2 large language model, with search provided through a partnership with Microsoft and Bing. Emu is built into Meta AI, with the "/imagine" prompt available in various apps.

Max is a sous-chef AI who will help you come up with a recipe; Lily is a personal editor AI who can help you brainstorm and improve your writing; and Lorena is a travel expert AI who can recommend a good national park to take the kids to. These are three of the many different types of AI chatbots Meta is dreaming up to answer queries and entertain you. Meta actually appears to have hired actors and celebrities to play these roles! (Honestly, this is kinda creeping me out.)

Oh, sweet minty Jesus, Snoop Dogg has been cast as your Dungeons & Dragons dungeonmaster. Nope, I’m out…NOBODY WANTS THIS, MARK. I never want to see that cursed image again!!! Who the fuck thought this was a great idea? Mark brought his keynote to a screeching halt as he fumbled with his cellphone to “chat” with Snoop Dogg (who I’m sure is being paid a pretty penny to give up his likeness for this ridiculous idea).

Snoop Dogg is your D&D dungeonmaster! (NOPE.)

Among the many other “experts” who signed on to be the face of a Meta AI chatbot is Paris Hilton, who role-plays your “detective” (I kid you not):

NOBODY ASKED FOR THIS, MARK!!!

Dear God, and there are plans to insert these and other AI chatbot avatars into Meta’s version of the metaverse. (I personally would pay good money to avoid any metaverse that has Snoop Dogg and Paris Hilton in it, kthxbai!) And this is not the first time Paris Hilton has tried to imprint herself upon a metaverse: click here to read all about the MATERIA.ONE/Staramba Spaces metaverse debacle, which offered Paris-Hilton-themed NFT metaverse parcels. (Hulk Hogan was another celebrity involved in that particular mess, too.)


Here comes the part where Mark pays lip service to safety and security, since there are some serious user privacy concerns associated with all this new, AI-powered tech (an area where Meta's track record has been notably egregious in the past). "I'm really optimistic about this," says Mark, and once again, my eyes rolled so far back I was staring at my brain. Yeah, sure, Mark, I really want to have my every conversation with Detective Paris Hilton strip-mined as yet another opportunity to provide data to sell to advertisers for the next Cambridge Analytica scandal. 🙄🙄🙄🙄🙄 As a commenter stated on the r/technews subreddit (source):

Does anyone else think AI chatbots are just another way to harvest data about people by engaging them in conversation?


Now Mark turns to the next generation of Ray-Ban smart glasses, which I must confess look a lot like regular glasses with slightly thicker arms. These new glasses will include Meta AI, so you can bring Snoop Dogg or Paris Hilton wherever you go (shudder). Next year, a software update will make these glasses multi-modal, so you can read signs in foreign languages, for example, which sounds kinda cool.

A brief video was shown where you will be able to livestream what you see from your own glasses to others, using as an example a racecar driver who is presenting what he sees to other viewers watching on their cellphones. These new glasses are available starting Oct. 17th for US$299.


UPDATE 1:16 p.m.: Mark has wrapped up his keynote, and is passing the torch to Meta’s Chief Technology Officer, Andrew “Boz” Bosworth, who in previous years has not shied away from speaking his mind and even criticizing what he sees as some missteps the company has made. He’s talking about the ability to double-tap on the side of your Meta Quest 3 to switch seamlessly between mixed-reality and pass-through of the real world.

You will no longer have to manually set up your play boundary in the Meta Quest 3, which will automatically map the room you are in, and the objects that are in that room, when you put the headset on:

(There are some livestream skips happening now, so I might miss something.)

Okay, I am taking a break, but if I have time later on today, I will add more.

UPDATE 2:02 p.m.: Here’s an article from Variety on the new line of Meta AI chatbots, which apparently also includes Kendall Jenner/Kardashian roleplaying as your “big sis” (gag). Here’s a quote from that article:

Meta CEO Mark Zuckerberg, in unveiling the new AI chatbots, said the company wanted to create AIs that have distinct personalities, opinions and interests. “This isn’t just gonna be about answering queries,” he said. “This is about entertainment and about helping you do things to connect with the people around you.”

For now, the celebrity chatbots respond in text — their avatars don’t actually speak their responses. Zuckerberg said voice for the AIs will come probably early next year.

The line-up of Meta AI celebrity chatbots includes Kendall Jenner of the Kardashian clan

UPDATE 5:44 p.m.: Wow, I thought I had been sarcastic in my remarks about these AI chatbots, but the people over at the celebrity subreddit r/Fauxmoi are savage! Here's just a sample of their comments (source):

Ah yes, all the people you’d regret starting a conversation with.

Lmao I hate this.

Also: “Kendall Jenner as Billie, no-BS, ride-or-die companion” 😂 So funny, coming from someone with even less personality than a robot.

It’s giving Black Mirror.

Sounds horrifying. Hopefully it flops hard enough to discourage more companies from doing shit like this.

What the hell is this? Like what is it supposed to be/do? Paris Hilton is ‘Amber’ who is your detective friend to help you solve whodunnits. So they’ve taken real people and turned them into avatars but then also they aren’t really THAT person, they’re someone else brand new who has a completely different personality? What’s even the point?
Please can someone explain??

Meta is embarrassingly out of touch with the world, in a very “hello, fellow teenagers!” kind of way…

So, as you can clearly see, I’m not the only one who thinks this is just weird. I’m left wondering how much of that $21 billion Meta Reality Labs spent this past year went to pay for all these celebrities to agree to be the faces of their chatbots. And I wonder how they’re going to feel when (as is almost inevitable) their chatbot starts to act up and spit out unacceptable or incorrect responses to user questions? What will Paris Hilton do when the chatbot who wears her face goes rogue? I’m quite sure she did not think through all the potential implications of signing away her likeness on the contract Meta dangled in front of her! It really is very Black Mirror.

UPDATE Sept. 28th, 2023 2:54 p.m.: I have gotten busy with my full-time paying job as a university librarian, so I haven’t had much of a chance to watch the rest of yesterday’s virtual event. Once I do, I expect that I will have more to comment on!

Editorial: Artificial Intelligence and the Metaverse (And a Look at an AI-Assisted Social VR Platform, Riff XR)

I created this image using OpenAI’s DALL-E generative AI art generation tool, using the text prompt “artificial intelligence in the metaverse” (source)

Housekeeping Note: I first started writing this editorial back in April, and from time to time I have picked up the draft, tinkered with it a bit more, added a bit more to it—and then promptly filed it away again as a draft, because I still wasn’t satisfied with it, and I always felt that I had something more to say.

Enough. I finally decided that the perfect was the enemy of the good, so today I am just going ahead and publishing what I already have, and I will write follow-up blogposts on the topic of AI in general, and AI in the metaverse in particular. And I do expect that I will return to this topic often! So please stay tuned.

I have written before on this blog about artificial intelligence (AI) applications, such as the image manipulation and animation tools WOMBO and Reface, the text-to-art creation programs DALL-E 2, Midjourney, and Stable Diffusion, and most recently, the AI-powered chatbot Replika and the text-generation app ChatGPT. Most people, myself included, treated them as toys, mere curiosities (I entertained myself for hours making my Second Life and Sansar avatars "come alive" using WOMBO). John Herrman, in a recent article for New York magazine titled The AI Magic Show (original; archived version), wrote:

In 2022, artificial-intelligence firms produced an overwhelming spectacle, a rolling carnival of new demonstrations. Curious people outside the tech industry could line up to interact with a variety of alluring and mysterious machine interfaces, and what they saw was dazzling.

The first major attraction was the image generators, which converted written commands into images, including illustrations mimicking specific styles, photorealistic renderings of described scenarios, as well as objects, characters, textures, or moods. Similar generators for video, music, and 3-D models are in development, and demos trickled out.

Soon, millions of people encountered ChatGPT, a conversational bot built on top of a large language model. It was by far the most convincing chatbot ever released to the public. It felt, in some contexts, and especially upon first contact, as though it could actually participate in something like conversation. What many users suggested felt truly magical, however, were the hints at the underlying model’s broader capabilities. You could ask it to explain things to you, and it would try — with confident and frequently persuasive results. You could ask it to write things for you — silly things, serious things, things that you might pass off as work product or school assignments — and it would.

As new users prompted these machines to show us what they could do, they repeatedly prompted us to do a little dirty extrapolation of our own: If AI can do this already, what will it be able to do next year?

As Charlie Warzel writes in The Atlantic, in a recent article titled What Have We Just Unleashed? (original; archived version), not even the AI experts know exactly what will come next:

Over the past few weeks, I’ve put questions like these to AI researchers, academics, entrepreneurs, and people who are currently building AI applications. I’ve become obsessive about trying to wrap my head around this moment, because I’ve rarely felt less oriented toward a piece of technology than I do toward generative AI. When reading headlines and academic papers or simply stumbling into discussions between researchers or boosters on Twitter, even the near future of an AI-infused world feels like a mirage or an optical illusion. Conversations about AI quickly veer into unfocused territory and become kaleidoscopic, broad, and vague. How could they not?

The more people I talked with, the more it became clear that there aren’t great answers to the big questions. Perhaps the best phrase I’ve heard to capture this feeling comes from Nathan Labenz, an entrepreneur who builds AI video technology at his company, Waymark: “Pretty radical uncertainty.”


You might be surprised to learn that we librarians are already preparing for the AI onslaught (perhaps not surprising, as some of the users we serve have already come to us with fake, AI-generated citations to information resources which do not exist!). I have seen no fewer than three events for Canadian librarians on this subject in the past couple of weeks, and at my employer, the University of Manitoba in Winnipeg, a fellow librarian has launched an AI Community of Practice (CoP), which will meet regularly to discuss ChatGPT and other AI apps and topics.

Librarians meeting via Zoom at the inaugural meeting of the brand-new University of Manitoba Libraries’ AI Community of Practice (I’m in the top row, second from the left!)

You.com is one of the newer entrants in the AI-powered search engine marketplace, and it also offers AI-generated text and images. You can try out YouWrite (their AI text generator) for free ten times before you have to shell out for the service, so I decided to give it a whirl, asking the site to create a professional-sounding blogpost about the impact of artificial intelligence on the metaverse:

And here’s the blogpost YouWrite spit back, exactly as generated, with zero edits:

The advent of artificial intelligence (AI) has revolutionized the way we interact with technology. AI is now being used to create virtual worlds, or “metaverses”, that are populated by digital avatars and objects that can be manipulated in real-time. As AI continues to evolve, it will have a profound impact on the metaverse and its inhabitants.

One of the most significant impacts of AI on the metaverse will be increased realism for users interacting within these virtual worlds. With advanced machine learning algorithms and natural language processing capabilities, computer-generated characters can become more lifelike than ever before – responding naturally to user input in real time while exhibiting realistic emotions such as joy or sadness depending on their environment. This could potentially lead to an unprecedented level of immersion for those engaging with these digital realms as they become indistinguishable from reality itself!

Another major advantage brought about by artificial intelligence is improved efficiency when it comes to managing large numbers of players at once – something which becomes particularly important when dealing with massive multiplayer online games (MMOs). By utilizing intelligent agents capable of autonomous decision making based off data collected from other players’ actions, developers can ensure smooth gameplay even during peak hours without sacrificing any quality due performance issues caused by too many people playing at once!

Finally, one area where AI could really shine in regards to improving user experience within a metaverse would be through automated content creation tools powered by deep learning models trained using vast amounts existing assets found across different platforms such as video games or movies etcetera… Such systems would allow game designers/developers quick access powerful resources allowing them craft unique experiences much faster than was previously possible – thus providing gamers exciting new adventures every time they log into their favourite world!

In conclusion then; It’s clear see how Artificial Intelligence stands revolutionize our concept what constitutes an immersive experience both inside out Metaverses – giving us greater control over our interactive environments alongside richer storylines filled dynamic characters all thanks advances this cutting edge technology brings us closer achieving true Virtual Reality today tomorrow alike…

That last paragraph is absolutely dreadful (almost as if the AI had a stroke!), but the rest reads well, and the text contains a few somewhat plausible, if borderline-fanciful, ideas. But if I could sum it all up in one word, that word would be: bullshit. This is computer-generated bullshit, produced by a large language model trained to sound like an actual human expert, but it's just parroting human writing, without any grasp of the knowledge it is talking about! (I can't remember who said it first, but somebody once memorably and hilariously referred to AI-generated text apps like ChatGPT as "mansplaining as a service." 😜 In fact, I would go so far as to state that generative AI tools like ChatGPT offer white, cisgender mansplaining as a service! All the biases in the mountains of data—scraped off the greater unwashed internet—used to train these tools sometimes come out in their responses, despite the best efforts of the companies building them to eradicate those biases.)

Despite appearances, ChatGPT doesn't really understand the world the way a human brain, with all of its lived experiences, does; it only understands how to generate plausible-sounding sentences and assemble them into coherent paragraphs! It solves a narrowly-defined problem; it is not general AI that is good at a variety of tasks, and certainly not a rival to humans.
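To make that "plausible-sounding sentences" point concrete, here is a toy sketch in Python of the core statistical trick: a bigram model that simply records which words tend to follow which, then chains random picks together. This is my own illustrative example, not anything from OpenAI; a large language model is (very loosely speaking) this idea scaled up by many orders of magnitude, with a vastly more sophisticated architecture:

```python
import random
from collections import defaultdict

def train_bigram_model(text: str) -> dict:
    """Record which words follow which word: autocomplete in miniature."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 15) -> str:
    """Repeatedly pick a statistically plausible next word; no understanding involved."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = ("the metaverse is a virtual world the metaverse is immersive "
          "a virtual world is a shared space the metaverse is the future")
print(generate(train_bigram_model(corpus), "the"))
# One possible output: "the metaverse is a shared space the metaverse is the future"
```

The output is locally fluent and globally vacuous, which is exactly the failure mode of that dreadful last paragraph above: each word follows plausibly from the one before it, but there is no model of the world behind any of it.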


Herrman, in his New York magazine article, paints a somewhat disquieting picture of what could happen in the future, as the AI wave accelerates:

Models trained on flawed, biased, and often secret sets of data will be used to attempt to perform an assuredly ambitious range of tasks, jobs, and vital economic and social processes that affect the lives of regular people. They will depend on access to massive amounts of computing power, meaning expensive computer hardware, meaning rare minerals, and meaning unspeakable amounts of electricity. These models will be trained with the assistance of countless low-paid labourers around the world who will correct bogus statistical assumptions until the models produce better, or at least more desirable, outputs. They will then be passed on for use in various other workplaces where their outputs and performances will be corrected and monitored by better-paid workers trying to figure out if the AI models are helping them or automating them out of a job, while their bosses try to figure out something similar about their companies. They will shade our constant submissions to the vast digital commons, intentional or consensual or mandatory, with the knowledge that every selfie or fragment of text is destined to become a piece of general-purpose training data for the attempted automation of everything. They will be used on people in extremely creative ways, with and without their consent.

Charlie Warzel goes even further, likening the potential impact of artificial intelligence to that of nuclear fission and nuclear war:

Trying to find the perfect analogy to contextualize what a true, lasting AI revolution might look like without falling victim to the most overzealous marketers or doomers is futile. In my conversations, the comparisons ranged from the agricultural revolution to the industrial revolution to the advent of the internet or social media. But one comparison never came up, and I can’t stop thinking about it: nuclear fission and the development of nuclear weapons.

As dramatic as this sounds, I don’t lie awake thinking of Skynet murdering me—I don’t even feel like I understand what advancements would need to happen with the technology for killer AGI [Artificial General Intelligence] to become a genuine concern. Nor do I think large language models are going to kill us all. The nuclear comparison isn’t about any version of the technology we have now—it is related to the bluster and hand-wringing from true believers and organizations about what technologists might be building toward. I lack the technical understanding to know what later iterations of this technology could be capable of, and I don’t wish to buy into hype or sell somebody’s lucrative, speculative vision. I am also stuck on the notion, voiced by some of these visionaries, that AI’s future development might potentially be an extinction-level threat.

ChatGPT doesn’t really resemble the Manhattan Project, obviously. But I wonder if the existential feeling that seeps into most of my AI conversations parallels the feelings inside Los Alamos in the 1940s. I’m sure there were questions then. If we don’t build it, won’t someone else? Will this make us safer? Should we take on monumental risk simply because we can? Like everything about our AI moment, what I find calming is also what I find disquieting. At least those people knew what they were building.

The point these authors are making is that, with AI, we are dealing with something which has the potential to dramatically impact (and, in some cases, up-end) our current society, in ways which might not be readily apparent at first.

Amy Castor and David Gerard, who have been busy dissecting and critiquing the ongoing three-ring circus that is blockchain, crypto, and NFTs, have turned their attention to artificial intelligence, in a two-part series (part one; part two). I strongly suggest you read both blogposts, but here's a sample:

Much like crypto, AI has gone through booms and busts, with periods of great enthusiasm followed by AI winters whenever a particular tech hype fails to work out.

The current AI hype is due to a boom in machine learning — when you train an algorithm on huge datasets so that it works out rules for the dataset itself, as opposed to the old days when rules had to be hand-coded.

ChatGPT, a chatbot developed by Sam Altman’s OpenAI and released in November 2022, is a stupendously scaled-up autocomplete. Really, that’s all that it is. ChatGPT can’t think as a human can. It just spews out word combinations based on vast quantities of training text — all used without the authors’ permission.

The other popular hype right now is AI art generators. Artists widely object to AI art because VC-funded companies are stealing their art and chopping it up for sale without paying the original creators. Not paying creators is the only reason the VCs are funding AI art.

Do AI art and ChatGPT output qualify as art? Can they be used for art? Sure, anything can be used for art. But that’s not a substantive question. The important questions are who’s getting paid, who’s getting ripped off, and who’s just running a grift.

OpenAI’s AI-powered text generators fueled a lot of the hype around AI — but the real-world use case for large language models is overwhelmingly to generate content for spamming. [Vox]

The use case for AI is spam web pages filled with ads. Google considers LLM-based ad landing pages to be spam, but seems unable or unwilling to detect and penalize it. [MIT Technology Review; The Verge]

The use case for AI is spam books on Amazon Kindle. Most are "free" Kindle Unlimited titles earning money through subscriber pageviews rather than outright purchases. [Daily Dot]

The use case for AI is spam news sites for ad revenue. [NewsGuard]

The use case for AI is spam phone calls for automated scamming — using AI to clone people’s voices. [CBS]

The use case for AI is spam Amazon reviews and spam tweets. [Vice]

The use case for AI is spam videos that advertise malware. [DigitalTrends]

The use case for AI is spam sales sites on Etsy. [The Atlantic, archive]

The use case for AI is spam science fiction story submissions. Clarkesworld had to close submissions because of the flood of unusable generated garbage. The robot apocalypse in action. [The Register]

You can confidently expect the AI-fueled shenanigans to continue.


Riff XR: Artificial Intelligence in the Metaverse

However, there have been some rather interesting applications of AI to the metaverse. A brand-new social VR platform called Riff XR offers a tantalizing (if still somewhat buggy) glimpse of the AI-assisted metaverse of the future.

Among the AI-assisted features of Riff XR are NPCs (non-player characters, i.e. bots) with whom you can have surprisingly open-ended conversations, as well as a "cutting-edge Stable Diffusion-powered Generative Art System."
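To give a sense of what sits behind a system like that, here is a minimal sketch of a Stable Diffusion image-generation call, using the open-source diffusers library and a commonly used public checkpoint. To be clear, this is an assumption for illustration only; Riff XR has not published the details of its implementation:

```python
# A minimal sketch of a Stable Diffusion generation call via the diffusers
# library. Illustration only: Riff XR's actual stack is not public.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a widely used public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # generation is impractically slow without a GPU

# Text prompt in, image out: the core loop that any "generative art
# system" wraps in a user interface.
image = pipe("a sunflower field in the style of Van Gogh").images[0]
image.save("generated_art.png")
```

In a platform like Riff XR, a call like this would presumably sit behind a user-facing prompt box, with the resulting image textured onto an in-world canvas.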

Now, I have not visited Riff XR myself (yet), but a good friend of mine, metaverse videographer Carlos Austin, has, and he posted a video of his explorations of this new metaverse platform, including verbal conversations with a number of NPCs, which use generative AI to "listen" and "respond" to his spoken sentences.

One was a constable droid roaming the night-time central plaza in Riff XR, a scene straight out of Ready Player One; another played the role of Vincent Van Gogh in an exhibition of AI-generated artworks in a museum just off the plaza; and a third was a woman named Molly Millions, working at the back bar in a cyber-disco with pulsating music and gyrating NPCs of various kinds, with whom Carlos had a surprisingly in-depth conversation about cocktails!
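Under the hood, a voice conversation like the one Carlos had with Molly Millions is typically a three-stage pipeline: speech-to-text, a persona-prompted chat model, and text-to-speech. Here is a minimal, hypothetical sketch of one conversational turn; Riff XR has not published its implementation, so every component below is a labeled stand-in rather than their actual code:

```python
# Hypothetical sketch of a voice-driven NPC turn: speech-to-text, a
# persona-prompted chat LLM, then text-to-speech. All stand-ins.

PERSONA = ("You are Molly Millions, a bartender in a cyber-disco. "
           "Stay in character and chat knowledgeably about cocktails.")

def speech_to_text(audio: bytes) -> str:
    """Stand-in for a speech-recognition model (something Whisper-like)."""
    return "What goes into a good negroni?"

def chat_reply(messages: list[dict]) -> str:
    """Stand-in for a call to any chat-tuned large language model."""
    return "Equal parts gin, Campari, and sweet vermouth, stirred over ice."

def text_to_speech(text: str) -> bytes:
    """Stand-in for a voice synthesizer that gives the NPC a spoken voice."""
    return text.encode("utf-8")

def npc_turn(audio: bytes, history: list[dict]) -> bytes:
    """One turn: transcribe the player, reply in persona, speak the reply."""
    history.append({"role": "user", "content": speech_to_text(audio)})
    reply = chat_reply([{"role": "system", "content": PERSONA}] + history)
    history.append({"role": "assistant", "content": reply})
    return text_to_speech(reply)

history: list[dict] = []
npc_turn(b"<microphone audio>", history)
```

The persona prompt is what keeps the bartender talking about cocktails instead of breaking character, and the running conversation history is what makes the exchange feel open-ended rather than scripted.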

Carlos demonstrated that you could even speak to these NPCs in different languages, including German, Japanese, and Spanish (although let me just add that the faux Van Gogh's German accent was absolutely atrocious!). Here's his full video (please fast-forward through all the technical bugs and mishaps; Riff XR is still quite buggy!). Carlos' conversation with Molly Millions is nearer the end of this video:

We can expect to see more such applications of artificial intelligence coming soon (and perhaps sooner than we might expect!) to a virtual world or social VR platform near you. And you can expect more blogposts from me on this topic in the future, as the technology continues to develop and evolve over time. Stay tuned!


Many thanks to Jim Carnicelli (a.k.a. Galen from Sansar), with whom I had a couple of wide-ranging online discussions via Discord on the topic of AI while I was working on this blogpost over the summer! While I did not use many of the ideas we talked about, they did give me much food for thought (and possible topics for future blog posts!). You can visit Jim's store selling his AI-generated artwork here: Snuggle Hamster Designs.