Heated Rivalry is Brokeback Mountain All Over Again (and How Coming Out Applies to Virtual Worlds, Social VR, and Other Forms of the Metaverse)

WARNING: This, my final post for 2025, is a long, meandering, and sometimes painfully personal blogpost. Consider yourself forewarned! 😉

Have you read this 2017 blogpost?
Sex and Gender Issues in Virtual Worlds: “The male/female dichotomy was viewed as binary and the technology (literally) codified that concept.”

Top picture: Heath Ledger as Ennis Del Mar and Jake Gyllenhaal as Jack Twist in the movie Brokeback Mountain. Bottom picture: Hudson Williams as Shane Hollander and Connor Storrie as Ilya Rozanov in the TV series Heated Rivalry (based on the novel of the same name, from the Game Changer series of novels written by Rachel Reid)

According to Google, the movie Brokeback Mountain was released in Canadian theatres on December 23rd, 2005, almost exactly 20 years ago as I write this blogpost:

I did not go to the movie theatre to see Brokeback Mountain until 2006, accompanied by a couple of gay friends. At the time, I was 42, and still somewhat new to being an out, gay man—at least, compared to those who came out in their teens and twenties. I went in cracking jokes, but by the end of that movie, I was sobbing in my theatre seat. Brokeback Mountain touched me, moved me, and spoke to me in a way few other movies ever have. (I later bought it on DVD, but I still cannot bear to rewatch it, even twenty years later. At times, I felt as though I was suffocating while watching Jack Twist and Ennis Del Mar navigate a clandestine gay relationship in rural Wyoming from the 1960s to the 1980s.)

You see, I grew up following the dictates and expectations of my family and my church, and I married a woman I knew from my Lutheran church youth group when I was 24 and still a virgin. We lived through a disastrous two-year marriage in Toronto, until we separated. I came home to Winnipeg, landed a job with the University of Manitoba, and continued to suppress my sexuality by throwing myself into my work, until I experienced my first serious job burnout and ended up in a psych ward for treatment of clinical depression.

Afterward, with the help of regular talk therapy with a psychiatrist, I finally faced the truth that I was gay, coming out to myself first, and then coming out to my friends and family in my early thirties. I had had what so many people in my age group experienced—a truly wrenching coming out experience, where I felt that something was wrong with me, that I had something shameful to hide. Watching Brokeback Mountain brought all that back to me, to work through again, perhaps on a deeper level the second time. Great art has that ability to awaken feelings inside of you that you never knew you had. (Brokeback Mountain should have won best picture at the 2006 Oscars, instead of Crash, and you absolutely cannot convince me otherwise.)

Fan Fiction: stories involving popular fictional characters that are written by fans and often posted on the Internet (called also fanfic).

Slash Fiction: a genre of fan fiction that focuses on romantic or sexual relationships between fictional characters of the same sex (also known as slashfic).

Brokeback Mountain ignited an absolute firestorm of fan fiction and, of course, the characters of Jack Twist and Ennis Del Mar were tailor-made for slash fiction writers and readers (ironically, many of whom were straight white women). The slashfic varied in quality from transcendent to abysmal, but the most popular stories (often posted to Livejournal, and running to dozens of chapters, even full-length novels at times) had thousands of passionate readers leaving comments, sparking long discussions.

During 2006 and 2007, before I had ever heard of a virtual world called Second Life, I dove deeply into the Brokeback Mountain fanfic community. While I was tempted to write my own slashfic, I knew that I could not compete against so many amazing, beautiful stories I had read on LiveJournal—stories that brought me to both chills and tears at times. Instead, armed with plentiful screen captures of movie stills, and a rudimentary knowledge of Photoshop, I turned my hand to creating tribute images (sometimes serious, sometimes funny). Below is one example; you can read this 2019 blogpost to see the rest.

Yes, I was obsessed. So, shortly after I first discovered Second Life (in a story I recount here), I found a service in SL that generated a classic system (i.e. non-mesh) avatar skin based on a single selfie, a full front-facing head shot. You can probably guess what happened next, right?…

Yep. I fed the best photos of actors Jake Gyllenhaal and Heath Ledger that I could find on the internet into this SL service to create Jack Twist and Ennis Del Mar avatars for role-play purposes! The results were pretty poor compared to modern, fully-mesh Second Life avatars, but more than sufficient for my purposes. Eventually, once the fanfic fever had passed, I deleted my Jack and Ennis avatars, but not before taking plenty of amusing pictures and engaging in some Brokeback Mountain role play with other Second Life users (one memorable highlight was encountering a French-speaking group of Brokeback fans who threw Jack and Ennis the wedding they never could have had in the movie!). I am sure that I still have the pictures from that crazy event tucked away somewhere, but I can't be bothered to dig through all my hard drives to find them and post them here. Just use your imagination; I sure did. 😉


I have often written before on this blog about how Second Life (and other metaverse platforms) tend to be havens for LGBTQIA+ people, particularly for those who have not completely come out of the closet, whether for personal safety or for other reasons. This is especially true in an era when trans people's rights in particular are under attack. Here are links to a few blog posts I have written in the past:

One of the interesting aspects of Second Life (and indeed, most virtual worlds, even games like Fortnite) is that your world (or game) persona can be completely divorced from who you are in real life. You play under a name that is different from your own, and often you choose a look for your avatar that is utterly different from how you look in real life. In Second Life, most players maintain a strict separation between SL (Second Life) and RL (real life), where the people you play with online never get to know aspects of who you are in reality: where you live, what you look like, what you do for a living, etc.

So “coming out” has multiple meanings in the metaverse. It’s not just about embracing your sexuality and whom you’re attracted to, and whom you fall in love with; it can also be about sharing aspects of your real life with people who only know you virtually, as avatars. Many lasting friendships and relationships have had their unlikely but powerful start in Second Life (or some other virtual world or game), as players slowly get to know each other, first only via words between avatars, then perhaps actually meeting up in person. In fact, there is a whole 12-video YouTube series by Draxtor Despres titled Love Made in Second Life, where Drax profiles couples who first met each other in Second Life, and went on to have real-life relationships!

In fact, I first “met” Drax (Bernhard Drax in real life, who lives in Germany) when he and I were both part of the social VR platform Sansar, where I participated in his regular Sunday morning explorations of various worlds built there, along with many other people. But although we have had many conversations, and each know who the other is in real life, we've never met in person, face-to-face. Sansar was just one of those places where it was not unusual for players to “come out” to each other and reveal the real person behind the avatar, unlike Second Life, where that is still relatively uncommon.

In fact, I very carefully kept SL and RL separate for most of my early years playing, only gradually beginning to associate certain avatars (like my main one, Vanity Fair) with the real-life Ryan Schultz after I started to write about Second Life on this blog. Is it a risk? Well, yes, of course; coming out always involves some element of risk. But at a certain point, I decided that it didn't make sense for me, as a metaverse blogger writing under my own name, to disassociate myself from the avatars I used to explore (and report on) those same virtual worlds and social VR platforms. In some cases, on some platforms (e.g. Sansar), I even took the same avatar name, Ryan Schultz, if it was available.

And so it was that my Second Life freebie fashionista friend and partner in crime, whom I only know by their avatar name, Dreamer Pixelmaid, messaged me via Discord and asked me if I had watched Heated Rivalry yet. I had not. (Tonight I am finishing my second rewatch. I am hooked.) Dreamer is one of those people about whom I actually know very little—and I am okay with that! Dreamer and I keep running into each other at Pride events in Second Life (so I assume they are a part of the LGBTQIA+ community as well), so I was not too surprised when they reached out to me about Heated Rivalry.

It turns out that Dreamer and I not only share an interest in ferreting out fabulous Second Life freebies and bargains (something we both happen to be very good at!); we also share an interest in same-sex romance fan fiction! Soon we were exchanging links to YouTube videos, Tumblr posts, and podcasts, all to do with the outsized reaction that a small Canadian TV show about a gay romance in the professional hockey world (actually, not one but two such romances) was causing, among both the gays and the straights. Dreamer shared the following powerful fan-created music video with me, featuring a new-to-me song that I have been listening to on repeat:

By the way, the song used in this fan-edited video is a remix/cover made with the assistance of AI, and it is so good that it actually gives me brain/body chills! This is a first for me: an AI-assisted song that I actually like to listen to, over and over again! Because this cover will be hard to find, here's the link to the original posted to YouTube by the remixer, YZRmusic: https://www.youtube.com/watch?v=I43A6HHF8Vw. Even if you don't watch the video, go and listen to this cover version; it is AMAZING!!

I cannot help but compare and contrast what is happening now with Heated Rivalry with what happened with Brokeback Mountain 20 years ago. Both were stories about a secret same-sex relationship developing over time in an unlikely and unfriendly place: 1960s Wyoming for BM and contemporary professional hockey for HR.

But what is different is this: the ripples from Brokeback Mountain were largely confined to the queer and slash/fanfic communities, and the movie became the punchline to a joke in the rest of the world, and in much of the mainstream media at the time. I don't see that happening with Heated Rivalry; if anything, it seems to be getting attention from the straight community as well as from LGBTQIA+ folk and the fanfic/slashfic writers. Even straight hockey podcasters are watching the series and commenting on it. The ripples seem to be stronger, and they are travelling further.

Another difference is that Brokeback Mountain ultimately ended in heartbreak, whereas Heated Rivalry ends on a much more hopeful note (SPOILER ALERT: even though the two hockey players are still in the closet, at least they are no longer lying to themselves and each other, they have a plan for the future where they can be together, and they have the full support of one player’s parents).

The show, in fact, has been so successful that a second season has already been greenlit by Crave/CTV, based on a second, already-written follow-up novel about the further relationship between these two hockey players. So yay, we are getting more Canadian-funded ice hockey yaoi! 🎉🏳️‍🌈🏒


As usual, it’s taken me a long time to get to the point, so here it is.

When you come down to it, virtual worlds, social VR, and other forms of the metaverse are all about identity and relationships: who you are on the inside, how you present yourself on the outside, and how you reconcile any tension between the two; whom you choose to be friends with and why, whom you choose to love, and why you love them; and how you navigate the network of relationships around you, both virtual and real. Revealing any of these things to another person (either virtual avatar or flesh-and-blood human being powering that avatar), is a form of coming out, where you might risk rejection—but also, risk gaining a deeper connection, possibly lifelong. Life’s too short. Take that chance.

Twenty years ago, Brokeback Mountain reinforced in me the pain and despair of a closeted life and its soul-killing compromises. Tonight, New Year’s Eve, Heated Rivalry teaches me that it is never too late to find deep, meaningful friendships and yes, perhaps even romantic love! It was a message which I needed to hear. I am no longer willing to cut down my life to fit other people’s comfort levels. In fact, it wasn’t until I finished watching the series that I realized that I had even been doing so, and to what extent. (Again, great art makes you realize things about yourself.)

I am going to ask all of you reading these words, my final words to you in 2025, to sit down and watch all six episodes of Heated Rivalry, on whatever television network or streaming service carries it in your country (Crave here in Canada; HBO Max in the USA; etc.). It's even more important that you do so if you're heterosexual and don't consider yourself part of the LGBTQIA+ community of which I am a part. In particular, episodes 5 and 6 might just be some of the best television ever made, in my opinion (and yes, I am heavily biased!).

If you’re straight, just think of it as a homework assignment. I want it to spark conversations with your friends and family about how the need for love and belonging is universal, regardless of the gender and sexual orientation of the lovers; what it means to be LGBTQIA+ in a still sometimes-unfriendly society; and what it means to navigate that coming-out process, both internal and external.

Have a happy new year! See you in 2026.

Sorry, but you’ll only get the joke if you’ve watched season one of the hit Crave/HBO Max television series Heated Rivalry (if you know, you know).

With many thanks to Dreamer Pixelmaid for introducing me to my new favourite TV show! One more fan-edited music video featuring scenes from Heated Rivalry:

Metaverse Bombshell: NETFLIX Acquires Ready Player Me—What Does This Mean for Metaverse Platforms Using Ready Player Me Avatars?

I somehow missed a major piece of news that dropped last Friday, which will definitely impact a lot of existing metaverse platforms (including big names like VRChat). On Dec. 19th, 2025, Sarah Perez wrote, in an article on the tech news website TechCrunch:

After shifting its gaming strategy to focus more on games played on the TV, Netflix announced it’s acquiring Ready Player Me, an avatar-creation platform based in Estonia. The streamer said Friday it plans to use the startup’s development tools and infrastructure to build avatars that will allow Netflix subscribers to carry their personas and fandom across different games.

Terms of the deal were not disclosed. Ready Player Me had raised $72 million in venture backing from investors, including a16z, Endeavor, Konvoy Ventures, Plural, and various angels, including the co-founders of companies like Roblox, Twitch, and King Games.

Netflix told TechCrunch the startup’s team of around 20 people will be joining the company. Of the four founders Rainer Selvet, Haver Järveoja, Kaspar Tiri, and Timmu Tõke, only CTO Rainer Selvet is moving to Netflix. It doesn’t have an estimate of how long it will be until avatars launch. Nor does it detail which games or types of games will be first to get avatars.

Following the acquisition, Ready Player Me will be winding down its services on January 31, 2026, including its online avatar creation tool, PlayerZero.

Scott Hayden, in an article written for The Road to VR website, adds:

“Our vision has always been to enable avatars and identities to travel across many games and virtual worlds,” Ready Player Me CEO Timmu Tõke said. “We’ve been on an independent path to make that vision a reality for a long time. I’m now very excited for the Ready Player Me team to join Netflix to scale our tech and expertise to a global audience and contribute to the exciting vision Netflix has for gaming.”

Avatar creation using Ready Player Me in the metaverse platform Spatial

Additionally, Ready Player Me announced it is taking its avatar creation services offline starting January 31st, 2026.

And, indeed, when I head over to the Ready Player Me website, the banner across the top of my screen declares:

Thank you for the chance to build together with you. Our services will become unavailable starting January 31, 2026. Please reach out to devs@readyplayer.me for any questions.

I pity the poor person on the receiving end of all those emails, because there are countless metaverse platforms which have relied on Ready Player Me as their avatar creation component, rather than trying to build their own avatar design systems in-house. All of these platforms now have a little over a month to come up with a replacement for the services provided up until now by Ready Player Me, which is shutting down on January 31st, 2026!

Ready Player Me’s avatar creation tools, which have been used by many virtual worlds and social VR platforms, will be shutting down on January 31st, 2026.
Among the tools affected by the NETFLIX acquisition of Ready Player Me are the Avatar Creator SDK and the newer PlayerZero SDK, which allowed users to create and sell avatar modifications and updates.

Ready Player Me has been the go-to solution for gaming and metaverse companies looking to outsource much of their avatar creation process. Among those companies is VRChat. Scott Hayden opines:

Netflix hasn’t intimated it’s getting into XR gaming yet, so it’s pretty safe to say the Ready Player Me acquisition and subsequent shutdown is more or less a blow to one specific group of people: namely, VRChat users.

VRChat beginners looking to make their own avatars over the years were almost always pointed to Ready Player Me, with the platform even allowing users to upload a personal photo and generate a cartoony persona that was easy to mix-and-match with a variety of parts.

And while they weren’t always the most original avatars out there, it’s difficult to argue with the platform’s ease of use, as the web-based tool basically got you a (mostly) unique avatar that was not only cross-platform, but also already rigged for VRChat.

I’m not too worried about the impact on VRChat; as Scott goes on to write in his article, there are alternatives, albeit ones requiring a bit more technical know-how on the part of the user. VRChat also has a thriving third-party avatar creation and sales ecosystem, including a very popular series of Virtual Market avatar shopping events. VRChat will be fine. But it’s the smaller metaverse platforms like Spatial.io, which rely wholly on Ready Player Me’s services, that are now going to have to scramble to find and implement a replacement in very little time.

NETFLIX’s acquisition of Ready Player Me reminds me, at first glance, of when the fledgling metaverse platform Cloud Party (which I have written about on my blog before) was acquired by Yahoo! back in early 2014, over a decade ago. The entire small company (only 3-4 people) was “acquihired” by Yahoo!, which shut down the Cloud Party platform (with a truly memorable sendoff as they shut down the servers, one that made me emotional; this link is from a former Blogger.com blog I used to write about Cloud Party, which is still up!). The staff were absorbed into Yahoo! to work on Yahoo! projects, and God only knows what happened to them, or to the projects they were hired to work on. (And, of course, Yahoo! is a shadow of its former self; does anybody still use it?)

It is very clear from this news that NETFLIX has big plans for its gaming service, and that it “acquihired” the staff (and assets) of Ready Player Me to use them for some future project. NETFLIX’s gain is the loss of the hundreds of virtual worlds, games, and social VR/AR platforms which relied on Ready Player Me.

The fallout from all this is going to be fascinating to watch.

UPDATE Dec. 23rd, 2025: Another thing that came to mind after I posted this blogpost: metaverse-building companies that choose to outsource aspects of their services to other companies like Ready Player Me have to be prepared for the possibility that the other company could be bought out, change its terms of service, or even shut down. While it might be more time-consuming and expensive to build something like an avatar system in-house, at least it's under your control, and you don't run the risk of having the rug pulled out from under you.
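For any developers reading this, the defensive design I'm describing is essentially the classic adapter pattern: code against your own interface, not the vendor's. Here is a minimal sketch in Python; every name in it (the interface, the providers, the URL) is hypothetical and illustrative, not any real platform's or vendor's API:

```python
from abc import ABC, abstractmethod

class AvatarProvider(ABC):
    """Hypothetical interface: the only avatar-creation surface
    the rest of the platform is allowed to depend on."""

    @abstractmethod
    def create_avatar(self, photo: bytes) -> str:
        """Return a URL (or asset ID) for a rigged avatar model."""

class ThirdPartyAvatarProvider(AvatarProvider):
    """Wraps an external vendor (a hosted avatar service)."""

    def create_avatar(self, photo: bytes) -> str:
        # Real code would call the vendor's REST API here; this
        # stub simulates what happens after a vendor shutdown.
        raise RuntimeError("vendor service is no longer available")

class InHouseAvatarProvider(AvatarProvider):
    """Fallback built in-house; far less capable, but under
    your own control and immune to acquisitions."""

    def create_avatar(self, photo: bytes) -> str:
        return "https://assets.example.com/avatars/default.glb"

def make_avatar(provider: AvatarProvider, photo: bytes) -> str:
    # The platform codes against AvatarProvider, not the vendor,
    # so swapping providers is a one-line configuration change.
    return provider.create_avatar(photo)
```

With this kind of seam in place, a platform facing a January 31st deadline swaps which `AvatarProvider` it constructs at startup, rather than rewriting every call site that touches avatars.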


Thank you to my metaverse friend Carlos Austin for the heads-up on this news.

An Introduction to Artificial Intelligence in General, and Generative AI in Particular

I have already written at length about my neck and shoulder pain, which I am working with my doctor, a physiotherapist, and a massage therapist to treat. I’ve also had an ergonomist come and do an assessment and adjustment of my workstations at my employer, the University of Manitoba (I’m still waiting for his final report, with a shopping list of equipment to be purchased to help me get through an eight-hour workday without pain). I am still very much in the process of learning which actions are detrimental to the couple of deteriorating cervical joints in my spine, and which are more beneficial!

For example, you would think that having the extra weight of a virtual reality headset on my noggin would make things worse. However, I have been astonished to discover that my neck does not become as sore, as quickly, when I am using the Mac Virtual Display feature on my Apple Vision Pro, along with my MacBook Pro at work!

Therefore, I have been working 3 to 4 hours a day like this, as opposed to just using my MacBook Pro with an external monitor attached. The ergonomist did set me up with a temporary notebook riser, adjusted so that I am not hunched over the keyboard, and aligned so that the tops of both the MacBook Pro screen and the external monitor are at eye level. I find that when working like this, without my AVP, my neck and shoulders still start to ache after about two hours, and I have to stop, take a break, go for a walk, and do some of my physiotherapy exercises. As I mentioned earlier, this is a learning process.

On Wednesday, at lunchtime, I got up from my MacBook Pro, unplugged my Apple Vision Pro from its battery charging cable (I tend to leave it plugged in when I am working seated) and, while still wearing my AVP, went to the washroom. My coworkers in the library are already well-used to seeing this strange person wandering around with a VR headset on, and my vision while wearing it is almost as good as it is when I wear my glasses, so I often do this if I have to make a short walk to the printer, or in this case, the washroom.

However, on my way back from the washroom, disaster struck. I accidentally got the cord between my Apple Vision Pro (on my head) and its battery (sitting in the front left pocket of my pants) caught on a metal part of the door to my office cubicle space. My AVP is okay, but I wrenched my already-painful neck badly, and as a result made a bad situation even worse. (Lesson learned: you need to take that damn power cord into account when moving around!)

As a result, I have been off sick from work for two and a half days this week, spending a lot of my time either lying in bed or lying on the sofa. On top of that, we had not one but two Alberta Clippers roar through Winnipeg on Wednesday, Thursday, and Friday, so I have been apartment-bound as well as largely bed-bound. I just find it ironic that the very thing that seems to make my pain more bearable (the Apple Vision Pro) can also make it more severe! This has just not been my week.

Anyway, this is my usual off-topic preamble to the real purpose of today’s blogpost. I had promised that I would share with you, my blog readers, the artificial intelligence presentation I had been researching since this summer, which I have recently delivered to three separate audiences: University of Manitoba graduate students, graduate student advisors, and the professors and instructors in the Faculty of Agricultural and Food Sciences (the latter being the group for which I am the liaison librarian, and from which the original request to create and give this talk came, via the chair of the agriculture library committee, many months ago). And while this talk was overall very well-received by my audiences, I did receive some negative feedback, and I wanted to talk a little bit about that as well. AI is a divisive topic in an already-divisive age.


I’m going to share an edited version of my PowerPoint slide presentation, with some University of Manitoba-specific bits removed, as well as any contact information removed (sorry, the UM faculty, staff, and students have the right to call on me with questions after my presentation, as I am their liaison librarian; you don’t 😉 ).

Also, I will be transparent about how I used generative AI tools in creating this PowerPoint presentation. I currently have paid-for (US$17-20 a month) accounts on three general-purpose generative AI tools: OpenAI’s ChatGPT; Anthropic’s Claude; and Google’s Gemini. These are the “top three” general-purpose generative AI tools currently recommended by Ethan Mollick (more on him later in this post). Do I plan to keep paying for all three? No. But I have found it highly instructive to enter the exact same text prompt into all three tools, and then compare the results!

In addition to conducting my own research into artificial intelligence in general and generative AI in particular, I used both ChatGPT and Claude to do additional research into this topic, some of which made it into this presentation. I also had a lot of text-heavy slides in the first draft of my PowerPoint presentation, so I asked Google Gemini to provide suggestions on how to reformat my slide presentation to have fewer bullet points per slide (which I think it did a pretty good job at).

I also did try to ask both ChatGPT and Gemini to redesign the theme and design aspects of my PowerPoint slides, but I was extremely unsatisfied with the results, despite several attempts, and I finally gave up on using AI for that task. So please keep in mind that generative AI (which I will refer to as GenAI from here on out) can still fail miserably at some tasks you put it to work on!

Here is my PowerPoint slide presentation, complete with my speaker notes, for you to download and use as you wish, with some stipulations. I am using the Creative Commons licence CC BY-NC-SA 4.0, which gives the following rights and restrictions:

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International

This license requires that reusers give credit to the creator. It allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, for noncommercial purposes only. If others modify or adapt the material, they must license the modified material under identical terms.

BY: Credit must be given to you, the creator.

NC: Only noncommercial use of your work is permitted. Noncommercial means not primarily intended for or directed towards commercial advantage or monetary compensation.

SA: Adaptations must be shared under the same terms.

(The tool I used to determine the appropriate Creative Commons licence can be found here: https://creativecommons.org/chooser/.)

So, with all that said, here is my PowerPoint presentation (please click on the Download link under the picture, not the picture):


In addition to sharing my slide presentation with you, I wanted to highlight a few resources which I discussed within it, which you might find useful. These are books and websites which I used as I worked my way up the learning curve associated with AI in general, and the new wave of GenAI tools in particular.

I start off with a bigger-picture look at the whole forest of artificial intelligence, later narrowing my focus to look at GenAI tools, a new subset of greater AI. First, a really good layperson’s guide to GenAI is a 2024 book by Ethan Mollick, titled Co-Intelligence (see image, right). One thing I want people to remember is that the new wave of GenAI tools only dates back to 2022, when the capabilities of these new tools (ChatGPT, DALL-E, Midjourney, Stable Diffusion, etc.) first captured the general public’s imagination, and stoked their fears. There are lots of published books about AI, but if they were published before 2022, they won’t cover the part of AI that is making the most noise right now. Also, keep in mind that any print/published book will soon be outdated, because the field of GenAI is evolving so rapidly!

Ethan does a good job of covering the territory, and I share with you his four rules of AI:

Principle 1: Always invite GenAI to the table. You should try inviting AI to help you in everything you do, barring any legal or ethical issues, to learn its capabilities and failures.

Principle 2: Be the human in the loop. GenAI works best with human help; always double-check its work.

Principle 3: Treat GenAI like a person (but tell it what kind of person it is). Give it a specific persona, context, and constraints for better results. For example, you’ll get better results from the detailed prompt “Act as a witty comedian and generate some slogans for my product that will make people laugh” instead of the more generic prompt “Generate some slogans for my product.”

Principle 4: Assume that this is the worst GenAI tool you will ever use. Generative AI tools are advancing and evolving rapidly.
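Principle 3 is easy to make concrete in code. The sketch below assembles the two prompts from the example above in the chat-message format that most GenAI APIs accept (a list of role/content messages); the helper function is my own illustration, not any particular vendor's SDK:

```python
from typing import Optional

def build_messages(task: str, persona: Optional[str] = None) -> list:
    """Assemble a chat-style prompt; the optional persona
    implements Mollick's Principle 3."""
    messages = []
    if persona:
        # A system message tells the model what kind of
        # "person" it is before it sees the user's request.
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": task})
    return messages

# Generic prompt: no persona, so results tend to be bland.
generic = build_messages("Generate some slogans for my product.")

# Detailed prompt: persona plus constraints, per Principle 3.
detailed = build_messages(
    "Generate some slogans for my product that will make people laugh.",
    persona="Act as a witty comedian.",
)
```

Either list can then be passed to whichever chat-completion endpoint you use; the only difference between the two requests is that leading system message, which is exactly where the persona, context, and constraints live.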


Second, I want to share with you an online course from Anthropic, the makers of the GenAI tool Claude. This course, which I worked through this summer, is called AI Fluency: Framework & Foundations, and you do not need to use Claude to work through the exercises—you can use any GenAI tool you wish. The focus of this 14-lecture course is to learn how to collaborate with GenAI systems effectively, efficiently, ethically, and safely.

One of the concepts taught in the AI Fluency course is what Anthropic calls the four D’s: the four key competencies of AI fluency (they seem to be big on alliteration!).

Delegation: deciding what work should be done by humans, what work should be done by AI, and how to distribute tasks between them.

Description: effectively communicating with AI tools, including clearly defining outputs, guiding AI processes, and specifying desired AI behaviours and interactions.

Discernment: thoughtfully and critically evaluating AI outputs, processes, behaviours, and interactions (assessing quality, accuracy, appropriateness, and areas for improvement).

Diligence: using AI responsibly and ethically (maintaining transparency and taking accountability for AI-assisted work; an example of this is when I described in detail which GenAI tools I used, and how I used them, in creating the PowerPoint slide presentation, earlier in this post.)


Third, I share with you what I found to be a very helpful guide prepared by a librarian, Nicole Hennig, about how to stay on top of the rapidly evolving and accelerating field of GenAI. You can obtain a copy of her 2025 guide here. This is as good a place as any to start working your way up the learning curve (as I first did, with the 2024 edition of her guide). Nicole offers a bounty of valuable tips, tricks, suggestions of people to follow, and advice on how best to keep up with the roiling sea of change currently taking place in GenAI!


Finally, I wanted to talk a bit about the divisive nature of GenAI. AI/GenAI seems to be a very polarizing topic, especially in the field of higher education! While I did try to present a balanced viewpoint on generative AI tools, talking about both the good and the bad, I did receive feedback from a few people who felt that my presentation was too… positive? And that, despite the warnings in my talk about some very serious problems with GenAI tools, I had neglected to portray GenAI’s more negative aspects forcefully enough.

For example, one agriculture professor, in an email after my talk, said this about the Anthropic online course in AI Fluency, a learning resource which I had mentioned in the previous section of this blogpost, as well as in my slide presentation:

…I know you were recommending the AI class that was created by Anthropic, and how it is agnostic to the AI used, and just a good introduction to use. I’ll admit that I have not taken the course  (I am now intrigued and will try to), but I couldn’t help thinking when you introduced it, of courses on appropriate opioid prescribing practices made by Purdue pharma.

Ouch. Fair point, but painful comparison (and I say that as someone who is now actually suffering from physical pain, as I stated up top). So I wanted to end this blogpost with a brief discussion of how some intelligent but more skeptical observers are responding to the tidal wave of GenAI tools washing over society, and share links to some of their criticism, as part of providing a larger perspective. I will be the first to admit that I am not an expert in this field, despite what I have learned since this summer! I am a librarian with a computer science degree, which makes it easier for me to comprehend some of the more technical aspects of what I read, but I am less well equipped for the philosophical side of the GenAI debate.

The professor who commented on the Anthropic course above shared with me a couple of links to recent critical articles, which I, in turn, will share with you. The first link is an Open Letter by 18 scholars, warning against blindly accepting GenAI tools in higher education (post-secondary education, i.e. colleges and universities, although obviously many of the same arguments could also be made about K-12 schooling):

Guest, O., Suarez, M., Müller, B., van Meerkerk, E., Oude Groote Beverborg, A., de Haan, R., Reyes Elizondo, A., Blokpoel, M., Scharfenberg, N., Kleinherenbrink, A., Camerino, I., Woensdregt, M., Monett, D., Brown, J., Avraamidou, L., Alenda-Demoutiez, J., Hermans, F., & van Rooij, I. (2025). Against the Uncritical Adoption of ‘AI’ Technologies in Academia. Zenodo. Retrieved Dec. 19th, 2025 from https://doi.org/10.5281/zenodo.17065099

Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

The second link is the text of a recent talk by the well-known intellectual, author, speaker, and gadfly Cory Doctorow, who gave his university audience a foretaste of his book on AI, which will be published in 2026:

Doctorow, C. (2025). Pluralistic: The Reverse-Centaur’s Guide to Criticizing AI. Retrieved Dec. 19th, 2025 from https://pluralistic.net/2025/12/05/pop-that-bubble/#u-washington

Over the summer I wrote a book about what I think about AI, which is really about what I think about AI criticism, and more specifically, how to be a good AI critic. By which I mean: “How to be a critic whose criticism inflicts maximum damage on the parts of AI that are doing the most harm.” I titled the book The Reverse Centaur’s Guide to Life After AI, and Farrar, Straus and Giroux will publish it in June, 2026.

But you don’t have to wait until then because I am going to break down the entire book’s thesis for you tonight, over the next 40 minutes. I am going to talk fast.

Both Cory Doctorow and Olivia Guest et al. make some seriously valid points about the negative consequences of a heedless, thoughtless, headlong rush into adopting GenAI tools. Now, you can decide, after reading all this, that you will have absolutely nothing to do with AI and GenAI, and that’s a valid position to take. But will it change the fact that GenAI is already being incorporated into software we use every day? Can the genie be pushed back into the bottle? Doubtful.

So what I am saying is: learn how the enemy (if you see it as “the enemy”) works. Spend a bit of time becoming familiar with GenAI tools, try them out on certain tasks, and see for yourself where and how they succeed at a particular task, and (more importantly) where and how they fail. I have had some amazing results from using GenAI tools over the past eight months, but I have also experienced situations where I walked away thinking, “this is garbage.” But may I gently suggest that the only way to gain the experience that informs your opinions is to actually use the tools, rather than stick your head in the sand and refuse to have anything to do with them.

Are we the unwitting and unwilling beta-testers for these products, as they are rolled out and stealthily embedded in products we already know and use? Absolutely. Will there be negative consequences, some foreseen and others not? Absolutely. Will there be some tasks which GenAI does, and does well? Also, yes, absolutely (it is already happening, based on my own experience). All three things can be true at the same time. Like all technology throughout human history, artificial intelligence is a double-edged sword. It can harm as well as heal.

I still think that the best stance on GenAI is to be a skeptical but informed user of the tools (even if you limit yourself to the lesser-powered, free versions). Also, you owe it to yourself to read a variety of viewpoints on the technology, from a range of sources (start with my fellow librarian Nicole Hennig’s excellent guide which I mentioned above, plus my skeptical professor’s two links, and work out from there).

Above all, even with how divisive AI can be as a topic, now is not the time to be locked into either a rigid AI-is-bad or AI-is-good perspective, because both are true at times, and we need to hold space for that unsettling and upsetting fact. And we need to brace ourselves, both personally and as a society, because (as I have stated before on this blog), things are about to get deeply, deeply weird before all this is over.

Image by Gerd Altmann from Pixabay

Editorial: Changing Gears, Letting Go, and Embracing Change

Photo by Zoltan Tasi on Unsplash

NOTICE: Except where explicitly stated in this blogpost, I have not used AI to write this editorial. This is me, Ryan, writing (and yes, I have been using em-dashes long, long before ChatGPT was a thing—and I will continue to do so!). See what I just did there? 😉

While my continuing neck and shoulder pain unfortunately limits the amount of time that I can spend sitting in front of a desktop computer (both at work and at home), I wanted to set aside some of my precious “good neck” time to talk a little bit about these past twelve months, and where I am planning on taking this blog in the future. Because, yes, I do have plans moving forward. (Update: as it turns out, because of my neck and shoulder pain, I had to split up the writing of this post over a couple of days, rather than one hours-long marathon session.)

As many of you know, I took a lengthy hiatus from blogging, starting late last year, up until very recently. Part of the reason was that I was juggling a lot of responsibilities at work, notably being part of a virtual reality lab which was being set up in one of the libraries of the university library system in which I have been working for the past 30-odd years (yes, it’s really been that long; I started in 1992!).

I am happy to report that, although I am no longer involved with that particular project, the virtual reality lab at my university library system has already had a successful soft opening, with a dedicated staff person hired to manage it (not me; as I said, I already have my hands full being a liaison librarian for both the faculty of agricultural and food sciences and the computer science department at my university!). In fact, I have been so busy at work that I haven’t even had time to sit down and use any of the equipment in the new lab, although I have chatted a few times with the new manager. Everything is moving along fine without me.

As part of my responsibilities as agriculture librarian, I had volunteered to give a presentation to an upcoming faculty council meeting about artificial intelligence in general, and generative AI in particular. I have only myself to blame for getting into this situation! You see, the Faculty of Agricultural and Food Sciences at the University of Manitoba still has an active library committee, and at a recent in-person meeting, I was talking about how I have had to add a few slides about AI to the PowerPoint presentation which I give to students about how to use the U of M Libraries. One thing led to another, and lo and behold, yesterday afternoon, I gave a half-hour presentation on that very topic to a room full of agriculture and food science professors!

I spent a significant chunk of my summer reading through books and websites, working through online courses, and essentially getting myself up to speed (it helps that this librarian has an undergraduate degree in computer science!). And I had the good fortune to be able to give a version of my presentation to a class of graduate student advisors, and to a class of graduate students, as part of a series of special courses targeted to U of M grad students, before yesterday afternoon’s talk. Both times it was well received, as it was yesterday. (I have already shared my slides and notes with my fellow librarians and agriculture professors, and I might decide to also share a version of them with you, my faithful blog readers, as I have done in the past with presentations about virtual reality in higher education, and the virtual world of Second Life. But I think I will make that a separate blogpost, perhaps my next one.)

At this point, I will draw your attention to the tagline of my blog in the upper left-hand corner of the screen if you are looking at this page on a desktop computer. You might notice that it has changed.

It used to read, pretty much since I began this blog in 2017:

News and Views on Social VR, Virtual Worlds, and the Metaverse

As of yesterday, it now says:

News and Views on Social VR, Virtual Worlds, and the Metaverse, plus Artificial Intelligence and Generative AI’s Impact on the Metaverse

Now, that’s rather a mouthful (and yes, I might need to edit it a bit), but essentially, it’s all a part of the “embracing change” which I mentioned in the title of this blogpost.

As a matter of fact, I was having a bit of a brain fart coming up with a suitable title, so to assist me with the wording of the title of this blog post (and only that), I pulled up Anthropic’s generative AI tool, Claude, for a little chat, asking it:

I need a way of saying “to add something new” to contrast with the opposite idea of “letting go of something.” What are some ways that I could say that?

And here are screen captures of the resulting conversation:

Now, could I have done this without generative AI? Absolutely; thesaurus websites have been around since the earliest days of the World Wide Web (trust me, I was around then!). But I doubt I could have actually had a back-and-forth conversation with a tool that presented the information in such a helpful, tabular way, prior to November 2022, when the first public version of ChatGPT was unleashed upon an unsuspecting public. I could pose my question in dozens of different ways, asking for countless ways of expressing the concept of “letting go of something,” and Claude never gets bored or impatient or irritated with me.

Simply put, I will now be writing about artificial intelligence in general, and the new wave of generative AI tools like ChatGPT and Claude in particular, as part of the RyanSchultz.com blog. In particular, I will talk about how these fast-developing and evolving tools will inevitably impact the metaverse.

I will give two quick examples of how GenAI is already impacting the metaverse. First, in my recent write-up of the virtual sessions I attended a couple of weeks ago as part of the Berlin-based Immersive X metaverse conference, there was a proof-of-concept working demonstration of a generative-AI-driven virtual diabetes counselor in a virtual world platform called Foretell Reality.

Second, were you aware that there is already a website called MeshZEUS, which will create a three-dimensional object for you from a text description, in a format ready to be uploaded to Second Life and sold on the SL Marketplace or an in-world store?

The MeshZEUS website

Yes, that’s right! You may choose, if you wish, to no longer work your way up the rather steep learning curve of Blender or Maya or 3ds Max to painstakingly create an object from scratch; instead, all you have to do is describe your desired 3D object in enough detail, and hey presto, it gets delivered to you! (Provided you buy enough credits, and have enough patience to go through multiple iterations of text prompting, that is. But we’ll leave that discussion, plus the whole enchilada of issues that using a GenAI tool like this raises, for another day, shall we? Trust me, there’s lots to talk about.)

It’s now pretty obvious to me that the current hype cycle of artificial intelligence, which was ignited by startling new leaps forward in the capabilities of AI tools since 2022, is going to have an impact on the metaverse. And, unlike the previous short-lived hype cycle of the metaverse itself (which, hello, I was around for—beginning, middle, and end!—documented on this very blog), this new, AI-powered hype cycle might actually have a more direct impact on society than the still-somewhat-nebulous concept of the metaverse, sooner than any of us might have expected. Buckle up, folks: I predict that things are about to get deeply, deeply weird.


So, I have talked about changing gears for the RyanSchultz.com blog, returning to blogging, and also about embracing change, i.e., adding the topic of AI and GenAI to the subjects I will write about. Now I come to the part where I talk about letting something go.

Unfortunately, because of my neck and shoulder pain, I regret that I must conserve the time that I can spend productively sitting in front of a desktop PC. Obviously, first priority goes to the paying job, which keeps the lights on, the internet bill paid, and puts food in my belly and gas in my car. Second priority will likely be writing this blog, now that I have decided to keep blogging. Between these two, that probably is the limit of what I can reasonably accomplish.

What I am choosing to let go of is writing about the virtual world of Second Life on this blog (in particular, reporting on fashionista freebies and bargains). I have made a similar announcement on Primfeed, which over the past year is where I have usually posted my freebie fashionista finds, rather than on my blog. Because my Primfeed account is deliberately set to private (i.e., you need to have a Second Life account to join Primfeed, follow me, and read what I post there), I have done a screen capture of that particular post, plus a transcription:

Every December, I try to juggle four tasks (not very successfully, mind you):

1. Drag my small army of alts through a curated selection of Advent and 12 Days of Christmas calendars to vacuum up some fabulous gifts, every day from December 1st to December 25th;

2. Do the same thing at the annual Holiday Shop and Hop event;

3. Pick up free heads and skins during the LeLutka December event; and

4. Navigate real-life Christmas events, shopping, and other obligations. (My family, God bless them, find #1-3 above to be very amusing, and last Christmas, they all chipped in to give me a cash-filled envelope marked “L$”, since they couldn’t actually buy me a gift card for Linden dollars. Second Life, you need to look into this! There’s an untapped market here.)

I’m sure some of you here on Primfeed can relate to this! Often I ask myself: why am I doing this? But I still do get a great deal of personal satisfaction and fulfillment from designing a complete avatar look from head to toe, looking great while doing it as inexpensively as possible. And in order to do that, you need to acquire the knowledge and expertise to sniff out freebies and bargains (which I have often shared with you, either here on Primfeed or via my blog). I’ve loved doing it for years!

But, as I said, something has to give. I can no longer spend extended hours sitting in front of a desktop PC without significant, and sometimes severe, neck and shoulder pain. Therefore, in addition to NOT doing as much of numbers 1 through 3 as in previous years, I have made the difficult decision to cut back on telling all of you about the great deals I find. It’s not a decision I take lightly, but I do need to listen to my body, and my body is telling me to rest. And I need to pay attention.

So if you don’t see me post as often here, that’s why. ❤️ I’m just trying to rebalance my life a little better, that’s all. I’ll still be around, reading, scrolling, liking posts, following people and stores, but not posting so much. Thanks for understanding.

Don’t get me wrong; I am not leaving Second Life! In fact, I need SL as a sort of counter-balance to deal with all the batshit-craziness happening in my real life. Second Life is my temporary escape from the hamster-wheel of worry, anxiety, and despair inside my head, where I can reliably get into a pleasant flow state for an hour or two, and escape from the real world (where I have little to no control over what is happening).

In fact, one of the reasons I love SL so much is that it is such a vast, three-dimensional creative canvas where I have so much control over what happens, where I choose to go, who I choose to interact with, and even what I look like to others! I still derive an inordinate amount of personal satisfaction from styling a complete avatar look from head to toe, as inexpensively as possible while still looking fabulous, darling! I call it “digital drag” 💅 (and yes, I do have a drag queen alt, whom I have written about numerous times on my blog, and who is about to embark on various antics, drama, and misadventures in a roleplaying region based on the U.S. Deep South). To my friends and acquaintances in Second Life: I am not going anywhere. I’m just not going to write about it here any more, that’s all. (I’m also cutting back on my Primfeed posting, but I’ll still be there, too.)


So, to sum up:

Yes, I am back.

Yes, I will be blogging about the metaverse in all its forms and manifestations again, but with the added wrinkle of AI/GenAI and its potential impact.

No, I will no longer be writing about Second Life, although yes, I still will be playing it.

Stick around, folks, this should be both entertaining and educational! As RuPaul herself said: