The Storyteller is Coming: Join Us for a Week of Events in Sinespace, October 2nd to 8th, 2023!

The Storyteller festival kicks off with Night Stories at Hakawati (“hakawati” is Arabic for “storyteller”) on Monday, October 2nd

Mimi Marie, the tireless event organizer in the virtual world of Sinespace, pinged me to tell me about an exciting week of events taking place from October 2nd to 8th, 2023. Here are the details from the official Sinespace blog:


Magic, Stories & Music. Carpet Rides & Genies. October 2nd – 8th

Stories: timeless knowledge-sharing tools from the caveman’s “How to Survive 101” to today’s “Culture for Dummies.” They kept early humans alive, shared values, and even sparked campfire karaoke. Today, they’re our wisdom-filled time capsules connecting past and present, shaping our history and our future. Have you talked to a friend today and told them something you did yesterday that amused you? Congratulations – that is the way we use storytelling every single day of our lives regardless of language, culture and location – we all do it – we all story-tell in our own way. 

So, if you ever doubted the power of storytelling, remember: it’s the original social network! October 2nd to 8th will be storytelling week and you are invited to join community events that celebrate authors, books, movies, or game stories.  You may even be invited to participate in story creation or share your own story if you feel inspired.

All times are in UTC (Coordinated Universal Time, essentially the same as Greenwich Mean Time); use this converter to convert UTC times to North American Pacific Time, and this converter to convert UTC times to Eastern Time. The special story events are marked *STORY EVENT* below.
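(For the programmers in the audience, here is a quick sketch of doing the conversion yourself with nothing but the Python standard library; the 10:30pm UTC example below is Monday’s Night Stories event, and the snippet is purely my own illustration, not anything official from Sinespace.)

```python
# Convert a UTC event time to North American Pacific and Eastern time, using only
# the Python 3.9+ standard library (zoneinfo handles daylight saving automatically).
from datetime import datetime
from zoneinfo import ZoneInfo

# Night Stories at Hakawati: Monday, October 2nd, 10:30pm UTC
event_utc = datetime(2023, 10, 2, 22, 30, tzinfo=ZoneInfo("UTC"))

for zone in ("America/Los_Angeles", "America/New_York"):
    local = event_utc.astimezone(ZoneInfo(zone))
    print(f"{zone}: {local:%A %I:%M %p}")

# America/Los_Angeles: Monday 03:30 PM
# America/New_York: Monday 06:30 PM
```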

Monday, October 2nd, 2023

4 – 6pm – Ziggies with Jay and Les

10:30pm – 1am – Night Stories at Hakawati *STORY EVENT* Magic & Stories. Tunes With DJ Unruly Blues. Carpet Rides & Genies. Random Gifts. Join us at Hakawati, where magic and captivating tales await. DJ Unruly Blues will play his enchanting tunes, while the evening promises carpet rides, genies, and delightful surprises for all.

Tuesday, October 3rd, 2023

10pm – Midnight  – The Sounds of Story With Ghaelen *STORY EVENT* Music, Dance, Chat and a Story Circle – Join us for “The Sounds of Story With Ghaelen,” a captivating blend of music, dance, chat, and a heartwarming story circle. Ghaelen’s music sets the stage for an enchanting event, and in the story circle, people from all walks of life share tales, personal narratives, and folklore, creating a vibrant tapestry of human experiences. Don’t miss this unique opportunity to be swept away by melodies and stories, celebrating the magic of connection and storytelling.

Midnight – 1am – Unruly Blues at Ice

Wednesday, October 4th, 2023

2 – 4pm – Buzzz at the Beautiful Region *STORY EVENT* “Island? Fun? Stories? We’ve got plenty!” – immerse yourself in Buzzz at the Beautiful Region, where island fun meets storytelling. Get ready for a dose of random gold and an abundance of captivating tales.

4 – 5pm  – Community Hour at localhost

10:30pm – Midnight – Movie Night with Spi at Spirals Fractal region

Thursday, October 5th, 2023

10pm – Midnight UTC – Bingo, Stories & Arabian Nights Trivia at Hakawati. At Hakawati, join us for an entertaining evening of Bingo, stories, and Arabian Nights trivia from 10 PM to midnight UTC. Win exciting gold and silver prizes while enjoying the company of friends and the delightful tunes spun by DJ Mizz, who will also be playing Timeless Tales. 

Midnight – 2am – Unruly Blues at the WC

Friday, October 6th, 2023

9 – 10pm – Embark on a captivating journey to “Treasure Island” narrated by the renowned real-life storyteller, Andy Jennings. *STORY EVENT* His enchanting storytelling prowess will transport you to the heart of adventure and intrigue. Don’t miss this opportunity to be spellbound by Andy’s storytelling magic. Discover more about him at https://www.andythestoryteller.com.

10pm – Midnight – Ziggies 

Midnight – 2am – Ambient Lounge Bingo

Saturday, October 7th, 2023

2 – 4am UTC – Hoedown with snowtracks.

10pm – Midnight UTC – 381 Club with Kat

Midnight – 2am UTC – Ziggies

Sunday, October 8th, 2023

10pm – Midnight UTC – 381 Club with Kat

Midnight – 1am UTC – Les in the WC

Monday 1 – 3 a.m. UTC – UnoSi & Arabian Nights Trivia at Hakawati *STORY EVENT* Join us at Hakawati for UnoSi and Arabian Nights Trivia. Win dazzling gold and silver prizes, enjoy a great time with your friends, and groove to DJ Mizz’s tunes.


Mimi tells me, “the event hosts are Ghaelen D’Lareh, Mizz and Mimi. Also a brand new feature. A magic carpet ride!”

She adds, “Sinespace is on a roll! if you ever get a free moment log into Sinespace and visit the region called Fireside Chat. Josh and others have created reactors [interactive elements] that are so cool!”

I am planning to attend as many of the Storytelling week events as possible, unless I have schedule conflicts with my full-time paying job as an academic librarian. Hope to see you there! For more information on Sinespace and to download a client, please visit their website.

Meta Connect 2023: The Post-Keynote Conversation Between Michael Abrash and Andrew “Boz” Bosworth

Andrew Bosworth and Michael Abrash

At yesterday’s Meta Connect 2023 event, after Mark Zuckerberg and a parade of Meta employees extolled the wonders of Meta products and services, there was a post-keynote conversation between Michael Abrash (the Chief Scientist of Meta’s Reality Labs) and Andrew “Boz” Bosworth (Meta’s CTO and Head of Reality Labs). I hadn’t had a chance to watch this yesterday because I was so busy, but I did have some time today to watch. Here are my notes.

First off, I was pissed off that the only way I could watch this conversation was via Facebook. This is so typical of the gatekeeping that Meta engages in: forcing you to connect to a Facebook website, where you aren’t given the option of disabling any cookies that are set. At least I was able to use Firefox browser’s Facebook Container plug-in to mitigate things somewhat, but yes, I was irritated. Believe me, if I could have found this conversation on YouTube or elsewhere, I would have watched it there!

To give another example, Meta has the gall to say that they’re embracing “open source”, yet they pull stunts like making several of their newly-announced games for the Meta Quest 2 and 3 exclusive to the Quest ecosystem, and unavailable to, say, Steam players. Not cool. But I digress; let’s get back to the topic at hand: the conversation between Boz and Michael.

Michael Abrash talked about codec avatars and how they’re not quite yet at the place where your brain is fooled that you are looking at a real person (in the same way that, perhaps, a well-designed virtual space feels “real” and immersive to your brain, and not just an image you are looking at, something many of us have experienced). Here’s a recent example to give you a sense of just how quickly the technology is evolving (this is, to my knowledge, at a research stage only, and not yet commercially available):

Michael considers codec avatars to be something that will help the concept of the metaverse reach its full potential: a way to put people together in a virtual space that feels fully real.

When Boz asked Michael to reflect on what he’s working on that’s most inspiring to him, he first gave a beauty-pageant-contestant answer that everything they’ve been working on is important. Michael then continued:

If I had to pick one thing, I would say that the personalized, contextualized, ultra-low-friction AI interface is the thing that I find most exciting, and the reason is…the way that humans interact with the digital world has only changed once ever, and that really was Doug Engelbart, Xerox PARC, the Mac, and since then, we’ve been living in that world. And as we move into this world of mixing the real and virtual freely, we need a new way of interacting. And so I feel that that has to be this contextualized AI approach, and getting that to happen is the thing that I find most exciting. It is a once-in-a-lifetime opportunity to change the way that everybody lives.

Meta is playing a long game here, betting that the research work they are doing now will lead to their dominating the virtual reality/augmented reality/mixed reality/extended reality marketplace, and it’s clear that they see their AI work as a key part of that. How that will play out remains to be seen, but it is fascinating to see two people talk about this in a public forum (even if it is on Facebook!).

Let’s just hope and pray that this “once-in-a-lifetime opportunity to change the way that everybody lives” does not become some corporate-run, surveillance-capitalism dystopia!

Following this was a segment called the “Developer State of the Union,” a promised deeper dive into tools, programs, and features for Meta ecosystem developers. Funny how “ecosystem” sounds so much friendlier than “walled garden.” 😉

But I am going to pause my cranky snark, hit publish on this blogpost, and call it a day.

Meta Connect 2023 and the Meta Quest 3 Mixed-Reality Headset, with Updates from the Virtual Event Livestream: Do *YOU* Want Snoop Dogg as Your Dungeon Master, or Paris Hilton as Your Detective?

UPDATE Sept. 28th, 2023: If you’re looking for a good, concise summary of the Meta Connect 2023 event, TechCrunch has you covered.

The Meta Connect 2023 virtual event will start on September 27th, 2023 (today) at 10:00 a.m. PDT / noon CDT / 1:00 p.m. EDT / 6:00 p.m. BST. Meta (the company formerly known as Facebook) will stream the event live on its website. You can also watch the stream on YouTube, Twitch, and via the official Meta page on Facebook. The event will start with a keynote by Meta’s CEO, Mark Zuckerberg, who is expected to officially launch the Meta Quest 3 headset, talk about its features, and give an update on where the company is planning to go with its virtual reality, augmented reality, mixed reality, and extended reality (VR/AR/MR/XR) initiatives over the next few years.

As the BBC reported yesterday, Meta is facing growing skepticism about its metaverse plans and their impact on the company’s bottom line:

Remember the metaverse?

For a while it dominated tech news. A virtual reality world that would be so immersive, so engaging, that we would want to spend part of our lives in it.

Driving the metaverse narrative was Mark Zuckerberg.

The tech billionaire was so committed that in October 2021 he changed Facebook’s name to Meta…

No one could accuse him of a lack of ambition.

But almost two years on, Zuckerberg’s vision of the metaverse is in trouble.

In April, he was forced to deny that he is now jettisoning the idea.

“A narrative has developed that we’re somehow moving away from focusing on the metaverse,” he told investors in April. “So I just want to say upfront that that’s not accurate.”

On Wednesday, the company holds its annual VR event called Meta Connect.

It’s a chance, perhaps, for Zuckerberg to again explain his reasoning for taking an extremely profitable social media company and diverting its focus to an extremely unprofitable VR venture.

How unprofitable? Well, the most recent figures from Meta are eye-watering.

Reality Labs – which as the name suggests is Meta’s virtual and augmented reality branch – has lost a staggering $21 billion since last year.

Part of the losses reflect long-term investment. Meta wasn’t expecting short-term returns. But the worrying fact for the company is that, so far, there is very little evidence that this enormous punt will work.

I fully expect an announcement that Horizon Worlds, Meta’s social VR platform, will be rolling out to non-VR/flatscreen web and mobile users. Using a Meta Quest 2 test unit purchased for the virtual reality lab project I am involved with at the University of Manitoba, I have paid several short visits to Horizon Worlds, and I am, to put it politely, not a fan. Horizon Worlds is something even worse than boring—it’s soulless. It looks and feels like it was put together by a bureaucratic committee of engineers that was given a task to do, in order to report back to the executives that they did something, but the builders had no real understanding, appreciation, or love of what social VR is and can be. To be frank, I don’t believe that expanding Horizon Worlds access to web and mobile users is gonna bring a hell of a lot more users to the platform. In my opinion, it’s a dog that needs to be taken out back and shot, to be put out of its misery. 🐕

I also briefly tried out Horizon Worlds’ corporate cousin, Horizon Workrooms, and as I have said before on this blog, I find it very hard to believe that any company would actually use this product for a real-world business purpose. In fact, Meta has commanded its employees to “fall in love with Horizon Worlds,” a sign that even their own staff don’t want to use it. (Ironically, Meta is among the many tech firms now requiring its employees to actually show up in their offices 3 days a week, or face termination. I’m quite sure that that strict little edict from HR is really, really gonna encourage more Meta employee uptake of Horizon Workrooms!) I expect some more announcements of integrations with products like Microsoft Office and Zoom, but I’m not expecting anything that is going to make corporate bean-counters sit up and say, “hey, we gotta buy a fleet of headsets, immediately!”

Like many of you, I will be watching the Meta Connect 2023 event live, and I will be updating this blogpost with news as it happens. Stay tuned!


UPDATE 9:28 p.m.: I forgot to mention that somebody—probably Mark himself—is going to proudly announce that the avatars in Horizon Worlds and Horizon Workrooms now have legs. Yawn.

Mark Zuckerberg: Hey, look! Our avatars have legs!

Second Life: Isn’t that just adorable. Meanwhile, our avatars can look like this…:


Second Life: …and your avatars look like this:

(And yes, I know, comparing a social VR platform like Horizon Worlds to a flatscreen virtual world like Second Life, which also has a 20-year head start, is not fair. But honestly, Meta’s avatars have a long, long way to go, in my opinion. Obligatory editorial.)


UPDATE 11:28 a.m.: I’m signed in to a livestream from one of the virtual reality YouTubers I follow, Virtual Reality Oasis, which apparently is starting half an hour before the actual Meta Connect event with a bit of a pre-show, perhaps. I will probably stay on this channel, for the additional commentary by this YouTuber (there’s also a very active comment stream to follow), but I might switch to another source later on. I will be making full use of two monitors here at work on my desktop PC—one for watching the livestream, and the second for blogging on the fly!


UPDATE 11:40 a.m.: Mike’s Virtual Reality Oasis livestream has started; apparently, he is located in a “side office” near backstage or behind the scenes at the Menlo Park auditorium, where the Meta Connect event is taking place (I think I got that part right!). He and another VR expert (whose name I unfortunately didn’t catch) will be providing some colour commentary and even taking questions from the over 3,700 livestream viewers. (Unfortunately, this livestream video was marked private after the event, so I cannot link to it.)

UPDATE noon: Meta has just announced a 30-minute delay to the start of the event, which is rather disappointing. Apparently, instead of an indoor stage, this event will be taking place on an outdoor stage in Menlo Park. I will be able to view and post blog updates until around 2:00 p.m. my time (Central Standard Time), so I am only going to be able to comment on the first hour-and-a-half of Meta Connect.


UPDATE 12:18 p.m.: I’ve switched to a different livestream, this one by IGN, with almost 7,000 people watching. Virtual Reality Oasis was reporting problems with both video and audio from the Meta Connect livestream, so I’ll be switching back and forth. (I could also watch it via Facebook, but I’ll be damned if I have to set up a Facebook account just to do that! Back in 2018, I kicked Facebook to the curb, and I have zero intention of returning to its surveillance-capitalism embrace, with the sole exception of a Meta account I set up for the test unit Meta Quest 2 headset I got.)


UPDATE 12:31 p.m.: The show has finally started!

Mark starts off with the usual piffle about “the power of human connection”. 🙄 He’s talking about being in a room with a mixture of real-life humans and holographic humans and embodied AI tools. Mixed reality, smart glasses, and AI are key to what Mark calls the metaverse.

Mark introduces the Quest 3, which he calls “the first mainstream mixed-reality headset” to applause from the crowd, followed by a brief presentation of various examples of this mixed reality in a number of games and apps. Xbox cloud gaming is coming to the Quest later this year.

Augments are persistent, spatially-anchored digital objects (like digital portals and photo albums you can hang on your walls). You can double-tap on your headset to return instantly to the real world.

Now he’s talking about content, including new titles. Meta Quest 3 has twice the graphics performance of any previous headset and is 40% thinner than the Quest 2. Roblox is launching for the Quest, which is going to bring a lot of younger users to the headset!

Mark teased new Horizon content, saying that the visuals are improving. He also talked about tools for business, citing productivity and work apps. Coming soon is something called Meta Quest for Business, with integrations with apps like Office 365 (something that was previously promised). Lack of details is very frustrating!

Meta Quest 3 is shipping October 10th for US$499 (Mark slipped up and said “August 10th” LOL!).


UPDATE 12:47 p.m.: Now the talk switches to artificial intelligence, which is hardly surprising, since that is where all the hype went after the previous metaverse hype cycle (which included Mark renaming his company from Facebook to Meta!). A new tool called Emu (Expressive Media Universe) is an image-generation tool similar to DALL-E 2. You will be able to use AI chat to create stickers (wow, groundbreaking! 🙄). AI editing tools will be added to Instagram next month, with a cute demo of Mark applying various textures to his dog, Beast.

(Right now Mark is just spouting AI word salad, and my eyes are rolling so hard they disappeared into my skull.)

Meta AI: a basic assistant you can talk to like a person, which can answer basic questions and requests. It is based on the Llama 2 large language model, with search provided through a partnership with Microsoft and Bing. Emu is built into Meta AI, with the “/imagine” prompt built into various apps.

Max, the sous-chef AI, will help you come up with a recipe; Lily, the personal editor AI, can help you brainstorm and improve your writing; Lorena, the travel expert AI, can recommend a good national park to take the kids to. These are three of the many different types of AI chatbots Meta is dreaming up to answer queries and entertain you. Meta actually appears to have hired actors and celebrities to play these roles! (Honestly, this is kinda creeping me out.)

Oh, sweet minty Jesus, Snoop Dogg has been cast as your Dungeons & Dragons dungeonmaster. Nope, I’m out…NOBODY WANTS THIS, MARK. I never want to see that cursed image again!!! Who the fuck thought this was a great idea? Mark brought his keynote to a screeching halt as he fumbled with his cellphone to “chat” with Snoop Dogg (who I’m sure is being paid a pretty penny to give up his likeness for this ridiculous idea).

Snoop Dogg is your D&D dungeonmaster! (NOPE.)

Among the many other “experts” who signed on to be the face of a Meta AI chatbot is Paris Hilton, who role-plays your “detective” (I kid you not):

NOBODY ASKED FOR THIS, MARK!!!

Dear God, and there are plans to insert these and other AI chatbot avatars into Meta’s version of the metaverse. (I personally would pay good money to avoid any metaverse that has Snoop Dogg and Paris Hilton in it, kthxbai!) And this is not the first time Paris Hilton has tried to imprint herself upon a metaverse: click here to read all about the MATERIA.ONE/Staramba Spaces metaverse debacle, which offered Paris-Hilton-themed NFT metaverse parcels. (Hulk Hogan was another celebrity involved in that particular mess, too.)


Here comes the part where Mark pays lip service to safety and security, since there are some serious user privacy concerns associated with all this new, AI-powered tech (an area in which Meta’s past behaviour has been notably egregious). “I’m really optimistic about this,” says Mark, and once again, my eyes rolled so far back I was staring at my brain. Yeah, sure, Mark, I really want to have my every conversation with Detective Paris Hilton strip-mined as yet another opportunity to provide data to sell to advertisers for the next Cambridge Analytica scandal. 🙄🙄🙄🙄🙄 As a commenter stated on the r/technews subreddit (source):

Does anyone else think AI chatbots are just another way to harvest data about people by engaging them in conversation?


Now Mark turns to the next generation of Ray-Ban smart glasses, which I must confess look a lot like regular glasses with slightly thicker arms. These new glasses will include Meta AI, so you can bring Snoop Dogg or Paris Hilton wherever you go (shudder). Next year, a software update will make these glasses multi-modal, so you can read signs in foreign languages, for example, which sounds kinda cool.

A brief video was shown where you will be able to livestream what you see from your own glasses to others, using as an example a racecar driver who is presenting what he sees to other viewers watching on their cellphones. These new glasses are available starting Oct. 17th for US$299.


UPDATE 1:16 p.m.: Mark has wrapped up his keynote, and is passing the torch to Meta’s Chief Technology Officer, Andrew “Boz” Bosworth, who in previous years has not shied away from speaking his mind and even criticizing what he sees as some missteps the company has made. He’s talking about the ability to double-tap on the side of your Meta Quest 3 to switch seamlessly between mixed-reality and pass-through of the real world.

You will no longer have to manually set up your play boundary in the Meta Quest 3, which will automatically map the room you are in, and the objects that are in that room, when you put the headset on:

(There are some livestream skips happening now, so I might miss something.)

Okay, I am taking a break, but if I have time later on today, I will add more.

UPDATE 2:02 p.m.: Here’s an article from Variety on the new line of Meta AI chatbots, which apparently also includes Kendall Jenner/Kardashian roleplaying as your “big sis” (gag). Here’s a quote from that article:

Meta CEO Mark Zuckerberg, in unveiling the new AI chatbots, said the company wanted to create AIs that have distinct personalities, opinions and interests. “This isn’t just gonna be about answering queries,” he said. “This is about entertainment and about helping you do things to connect with the people around you.”

For now, the celebrity chatbots respond in text — their avatars don’t actually speak their responses. Zuckerberg said voice for the AIs will come probably early next year.

The line-up of Meta AI celebrity chatbots includes Kendall Jenner of the Kardashian clan

UPDATE 5:44 p.m.: Wow, I thought I had been sarcastic in my remarks about these AI chatbots, but the people over at the celebrity subreddit r/Fauxmoi are savage! Here’s just a sample of their comments (source):

Ah yes, all the people you’d regret starting a conversation with.

Lmao I hate this.

Also: “Kendall Jenner as Billie, no-BS, ride-or-die companion” 😂 So funny, coming from someone with even less personality than a robot.

It’s giving Black Mirror.

Sounds horrifying. Hopefully it flops hard enough to discourage more companies from doing shit like this.

What the hell is this? Like what is it supposed to be/do? Paris Hilton is ‘Amber’ who is your detective friend to help you solve whodunnits. So they’ve taken real people and turned them into avatars but then also they aren’t really THAT person, they’re someone else brand new who has a completely different personality? What’s even the point?
Please can someone explain??

Meta is embarrassingly out of touch with the world, in a very “hello, fellow teenagers!” kind of way…

So, as you can clearly see, I’m not the only one who thinks this is just weird. I’m left wondering how much of that $21 billion Meta Reality Labs spent this past year went to pay for all these celebrities to agree to be the faces of their chatbots. And I wonder how they’re going to feel when (as is almost inevitable) their chatbot starts to act up and spit out unacceptable or incorrect responses to user questions? What will Paris Hilton do when the chatbot who wears her face goes rogue? I’m quite sure she did not think through all the potential implications of signing away her likeness on the contract Meta dangled in front of her! It really is very Black Mirror.

UPDATE Sept. 28th, 2023 2:54 p.m.: I have gotten busy with my full-time paying job as a university librarian, so I haven’t had much of a chance to watch the rest of yesterday’s virtual event. Once I do, I expect that I will have more to comment on!

Editorial: Artificial Intelligence and the Metaverse (And a Look at an AI-Assisted Social VR Platform, Riff XR)

I created this image using OpenAI’s DALL-E generative art tool, with the text prompt “artificial intelligence in the metaverse” (source)

Housekeeping Note: I first started writing this editorial back in April, and from time to time I have picked up the draft, tinkered with it a bit more, added a bit more to it—and then promptly filed it away again as a draft, because I still wasn’t satisfied with it, and I always felt that I had something more to say.

Enough. I finally decided that the perfect was the enemy of the good, and that I should just go ahead and publish what I already had, and then write follow-up blogposts on the topic of AI in general, and AI in the metaverse in particular. And I do expect that I will return to this topic often! So please stay tuned.

I have written before on this blog about artificial intelligence (AI) applications, such as the image manipulation and animation tools WOMBO and Reface, the text-to-art creation programs DALL-E 2, Midjourney, and Stable Diffusion, and most recently, the AI-powered chatbot Replika and the text-generation app ChatGPT. Most people, myself included, treated them as toys, mere curiosities (I entertained myself for hours making my Second Life and Sansar avatars “come alive” using WOMBO). John Herrman, in a recent article for New York magazine titled The AI Magic Show (original; archived version), wrote:

In 2022, artificial-intelligence firms produced an overwhelming spectacle, a rolling carnival of new demonstrations. Curious people outside the tech industry could line up to interact with a variety of alluring and mysterious machine interfaces, and what they saw was dazzling.

The first major attraction was the image generators, which converted written commands into images, including illustrations mimicking specific styles, photorealistic renderings of described scenarios, as well as objects, characters, textures, or moods. Similar generators for video, music, and 3-D models are in development, and demos trickled out.

Soon, millions of people encountered ChatGPT, a conversational bot built on top of a large language model. It was by far the most convincing chatbot ever released to the public. It felt, in some contexts, and especially upon first contact, as though it could actually participate in something like conversation. What many users suggested felt truly magical, however, were the hints at the underlying model’s broader capabilities. You could ask it to explain things to you, and it would try — with confident and frequently persuasive results. You could ask it to write things for you — silly things, serious things, things that you might pass off as work product or school assignments — and it would.

As new users prompted these machines to show us what they could do, they repeatedly prompted us to do a little dirty extrapolation of our own: If AI can do this already, what will it be able to do next year?

As Charlie Warzel writes in The Atlantic, in a recent article titled What Have We Just Unleashed? (original; archived version), not even the AI experts know exactly what will come next:

Over the past few weeks, I’ve put questions like these to AI researchers, academics, entrepreneurs, and people who are currently building AI applications. I’ve become obsessive about trying to wrap my head around this moment, because I’ve rarely felt less oriented toward a piece of technology than I do toward generative AI. When reading headlines and academic papers or simply stumbling into discussions between researchers or boosters on Twitter, even the near future of an AI-infused world feels like a mirage or an optical illusion. Conversations about AI quickly veer into unfocused territory and become kaleidoscopic, broad, and vague. How could they not?

The more people I talked with, the more it became clear that there aren’t great answers to the big questions. Perhaps the best phrase I’ve heard to capture this feeling comes from Nathan Labenz, an entrepreneur who builds AI video technology at his company, Waymark: “Pretty radical uncertainty.”


You might be surprised to learn that we librarians are already preparing for the AI onslaught (perhaps not surprising, as some of the users we serve have already come to us with fake, AI-generated citations to information resources which do not exist!). I have seen no fewer than three events for Canadian librarians on this subject in the past couple of weeks, and at my employer, the University of Manitoba in Winnipeg, a fellow librarian has launched an AI Community of Practice (CoP), which will meet regularly to discuss ChatGPT and other AI apps and topics.

Librarians meeting via Zoom at the inaugural meeting of the brand-new University of Manitoba Libraries’ AI Community of Practice (I’m in the top row, second from the left!)

You.com is one of the newer entrants in the AI-powered search engine marketplace, and it also offers AI-generated text and images. You can actually try out YouWrite (their AI text generator) for free ten times before you have to shell out for the service, so I decided to give it a whirl, asking the site to create a professional-sounding blogpost about the impact of artificial intelligence on the metaverse:

And here’s the blogpost YouWrite spit back, exactly as generated, with zero edits:

The advent of artificial intelligence (AI) has revolutionized the way we interact with technology. AI is now being used to create virtual worlds, or “metaverses”, that are populated by digital avatars and objects that can be manipulated in real-time. As AI continues to evolve, it will have a profound impact on the metaverse and its inhabitants.

One of the most significant impacts of AI on the metaverse will be increased realism for users interacting within these virtual worlds. With advanced machine learning algorithms and natural language processing capabilities, computer-generated characters can become more lifelike than ever before – responding naturally to user input in real time while exhibiting realistic emotions such as joy or sadness depending on their environment. This could potentially lead to an unprecedented level of immersion for those engaging with these digital realms as they become indistinguishable from reality itself!

Another major advantage brought about by artificial intelligence is improved efficiency when it comes to managing large numbers of players at once – something which becomes particularly important when dealing with massive multiplayer online games (MMOs). By utilizing intelligent agents capable of autonomous decision making based off data collected from other players’ actions, developers can ensure smooth gameplay even during peak hours without sacrificing any quality due performance issues caused by too many people playing at once!

Finally, one area where AI could really shine in regards to improving user experience within a metaverse would be through automated content creation tools powered by deep learning models trained using vast amounts existing assets found across different platforms such as video games or movies etcetera… Such systems would allow game designers/developers quick access powerful resources allowing them craft unique experiences much faster than was previously possible – thus providing gamers exciting new adventures every time they log into their favourite world!

In conclusion then; It’s clear see how Artificial Intelligence stands revolutionize our concept what constitutes an immersive experience both inside out Metaverses – giving us greater control over our interactive environments alongside richer storylines filled dynamic characters all thanks advances this cutting edge technology brings us closer achieving true Virtual Reality today tomorrow alike…

That last paragraph is absolutely dreadful (almost as if the AI had a stroke!), but the rest reads well, and the text contains a few somewhat plausible, if borderline-fanciful, ideas. Still, if I could sum it all up in one word, that word would be: bullshit. This is computer-generated bullshit, produced by a large language model trained to sound like an actual human expert, but it’s just parroting human writing, without any grasp of the knowledge it is talking about! (I can’t remember who said it first, but somebody once memorably and hilariously referred to AI-generated text apps like ChatGPT as “mansplaining as a service.” 😜 In fact, I would go so far as to state that generative AI tools like ChatGPT offer white, cisgender mansplaining as a service! All the biases in the mountains of data—scraped off the greater unwashed internet—used to train these tools sometimes come out in their responses, despite the best efforts of the companies building them to eradicate those biases.)

Despite appearances, ChatGPT doesn’t really understand the world the way a human brain, with all of its lived experiences, does; it only knows how to generate plausible-sounding sentences and assemble them into coherent paragraphs! That is a narrowly-defined problem; it is not general AI that is good at a variety of tasks, and certainly not a rival to humans.
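To make that point concrete, here is a tiny sketch (my own illustration, using the Hugging Face transformers library and the small, publicly available GPT-2 model rather than ChatGPT itself) of what “generating plausible-sounding sentences” amounts to: the model simply keeps predicting a likely next token, with no model of the world behind it.

```python
# A small language model continuing a prompt one likely token at a time.
# (GPT-2 via Hugging Face transformers, standing in for ChatGPT purely to
# illustrate the "plausible-sounding autocomplete" point.)
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will transform the metaverse because"
output = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.9)

# The continuation reads fluently, but nothing in the model checks it against reality.
print(output[0]["generated_text"])
```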


Herrman, in his New York magazine article, paints a somewhat disquieting picture of what could happen in the future, as the AI wave accelerates:

Models trained on flawed, biased, and often secret sets of data will be used to attempt to perform an assuredly ambitious range of tasks, jobs, and vital economic and social processes that affect the lives of regular people. They will depend on access to massive amounts of computing power, meaning expensive computer hardware, meaning rare minerals, and meaning unspeakable amounts of electricity. These models will be trained with the assistance of countless low-paid labourers around the world who will correct bogus statistical assumptions until the models produce better, or at least more desirable, outputs. They will then be passed on for use in various other workplaces where their outputs and performances will be corrected and monitored by better-paid workers trying to figure out if the AI models are helping them or automating them out of a job, while their bosses try to figure out something similar about their companies. They will shade our constant submissions to the vast digital commons, intentional or consensual or mandatory, with the knowledge that every selfie or fragment of text is destined to become a piece of general-purpose training data for the attempted automation of everything. They will be used on people in extremely creative ways, with and without their consent.

Charlie Warzel goes even further, likening the potential impact of artificial intelligence to that of nuclear fission and nuclear war:

Trying to find the perfect analogy to contextualize what a true, lasting AI revolution might look like without falling victim to the most overzealous marketers or doomers is futile. In my conversations, the comparisons ranged from the agricultural revolution to the industrial revolution to the advent of the internet or social media. But one comparison never came up, and I can’t stop thinking about it: nuclear fission and the development of nuclear weapons.

As dramatic as this sounds, I don’t lie awake thinking of Skynet murdering me—I don’t even feel like I understand what advancements would need to happen with the technology for killer AGI [Artificial General Intelligence] to become a genuine concern. Nor do I think large language models are going to kill us all. The nuclear comparison isn’t about any version of the technology we have now—it is related to the bluster and hand-wringing from true believers and organizations about what technologists might be building toward. I lack the technical understanding to know what later iterations of this technology could be capable of, and I don’t wish to buy into hype or sell somebody’s lucrative, speculative vision. I am also stuck on the notion, voiced by some of these visionaries, that AI’s future development might potentially be an extinction-level threat.

ChatGPT doesn’t really resemble the Manhattan Project, obviously. But I wonder if the existential feeling that seeps into most of my AI conversations parallels the feelings inside Los Alamos in the 1940s. I’m sure there were questions then. If we don’t build it, won’t someone else? Will this make us safer? Should we take on monumental risk simply because we can? Like everything about our AI moment, what I find calming is also what I find disquieting. At least those people knew what they were building.

The point these authors are making is that, with AI, we are dealing with something which has the potential to dramatically impact (and, in some cases, up-end) our current society, in ways which might not be readily apparent at first.

Amy Castor and David Gerard, who have been busy dissecting and critiquing the ongoing three-ring circus that is blockchain, crypto, and NFTs, have turned their attention to artificial intelligence, in a two-part series (part one; part two). I strongly suggest you read both blogposts, but here’s a sample:

Much like crypto, AI has gone through booms and busts, with periods of great enthusiasm followed by AI winters whenever a particular tech hype fails to work out.

The current AI hype is due to a boom in machine learning — when you train an algorithm on huge datasets so that it works out rules for the dataset itself, as opposed to the old days when rules had to be hand-coded.

ChatGPT, a chatbot developed by Sam Altman’s OpenAI and released in November 2022, is a stupendously scaled-up autocomplete. Really, that’s all that it is. ChatGPT can’t think as a human can. It just spews out word combinations based on vast quantities of training text — all used without the authors’ permission.

The other popular hype right now is AI art generators. Artists widely object to AI art because VC-funded companies are stealing their art and chopping it up for sale without paying the original creators. Not paying creators is the only reason the VCs are funding AI art.

Do AI art and ChatGPT output qualify as art? Can they be used for art? Sure, anything can be used for art. But that’s not a substantive question. The important questions are who’s getting paid, who’s getting ripped off, and who’s just running a grift.

OpenAI’s AI-powered text generators fueled a lot of the hype around AI — but the real-world use case for large language models is overwhelmingly to generate content for spamming. [Vox]

The use case for AI is spam web pages filled with ads. Google considers LLM-based ad landing pages to be spam, but seems unable or unwilling to detect and penalize it. [MIT Technology Review; The Verge]

The use case for AI is spam books on Amazon Kindle. Most are “free” Kindle Unlimited titles earning money through subscriber pageviews rather than outright purchases. [Daily Dot]

The use case for AI is spam news sites for ad revenue. [NewsGuard]

The use case for AI is spam phone calls for automated scamming — using AI to clone people’s voices. [CBS]

The use case for AI is spam Amazon reviews and spam tweets. [Vice]

The use case for AI is spam videos that advertise malware. [DigitalTrends]

The use case for AI is spam sales sites on Etsy. [The Atlantic, archive]

The use case for AI is spam science fiction story submissions. Clarkesworld had to close submissions because of the flood of unusable generated garbage. The robot apocalypse in action. [The Register]

You can confidently expect the AI-fueled shenanigans to continue.


Riff XR: Artificial Intelligence in the Metaverse

However, there have been some rather interesting specific applications of AI to the metaverse. A brand-new social VR platform called Riff XR offers a tantalizing (if still somewhat buggy) glimpse of the AI-assisted metaverse of the future.

Among the AI-assisted features of Riff XR are NPCs (non-player characters, i.e. bots) with whom you can have surprisingly open-ended conversations, as well as a “cutting-edge Stable Diffusion-powered Generative Art System”:

Now, I have not visited Riff XR myself (yet), but a good friend of mine, metaverse videographer Carlos Austin, has, and he posted a video of his explorations on this new metaverse platform, including verbal conversations with a number of NPCs using generative AI to “listen” and “respond” to his spoken sentences.

One was a constable droid roaming the night-time central plaza in Riff XR, a scene straight out of Ready Player One; another played the role of Vincent Van Gogh in an exhibition of AI-generated artworks in a museum just off the plaza; a third was a woman, named Molly Millions, working at the back bar in a cyber-disco with pulsating music and gyrating NPCs of various kinds, with whom Carlos had a surprisingly in-depth conversation about cocktails!
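I have no inside knowledge of how Riff XR actually wires up its NPCs, but for the technically curious, here is a rough, hypothetical sketch of how a voice-driven character like this is commonly assembled: speech-to-text, a large language model given a persona prompt, and text-to-speech. The OpenAI calls, model names, and the Molly Millions persona below are purely my own illustrative assumptions, not Riff XR’s implementation.

```python
# Hypothetical voice-NPC loop: transcribe the player, answer in character, speak the reply.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# 1. Transcribe what the player just said (captured to a WAV file by the game client).
with open("player_line.wav", "rb") as audio:
    heard = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Ask a language model to answer in character.
persona = (
    "You are Molly Millions, the bartender at a cyber-disco. "
    "Reply briefly, stay in character, and steer the conversation toward cocktails."
)
reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": heard.text},
    ],
)
npc_line = reply.choices[0].message.content

# 3. Speak the reply back to the player (any text-to-speech service would do here).
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=npc_line)
speech.stream_to_file("npc_reply.mp3")
```

The hard part in a real platform is presumably keeping the round-trip latency low enough that the exchange still feels like conversation.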

Carlos demonstrated that you could even speak to these NPCs in different languages, including German, Japanese, and Spanish (although let me just add that the faux Van Gogh’s German accent was absolutely atrocious!). Here’s his full video (please fast-forward through all the technical bugs and mishaps; Riff XR is still quite buggy!). Carlos’ conversation with Molly Millions is nearer the end of this video:

We can expect to see more such applications of artificial intelligence coming soon (and perhaps sooner than we might expect!) to a virtual world or social VR platform near you. And you can expect more blogposts from me on this topic in future, as the technology continues to develop and evolve over time. Stay tuned!


Many thanks to Jim Carnicelli (a.k.a. Galen from Sansar), with whom I had a couple of wide-ranging online discussions via Discord on the topic of AI while I was working on this blogpost over the summer! While I did not use many of the ideas we talked about, they did give me much food for thought (and possible topics for future blog posts!). You can visit Jim’s store selling his AI-generated artwork here: Snuggle Hamster Designs.