Editorial: Turning 60

It wasn’t until I received an early-morning text message from my brother in Alberta that I remembered: holy shit! This morning at 6:00 a.m., I officially turned sixty years old.

I consider myself a fortunate man on this milestone birthday: not only do I have power, water, sewer, and broadband, a roof over my head, food on my table, and a nice new car to drive (all paid for by my full-time job as a science/agriculture librarian), but I also get to work on a virtual reality lab project for my university library system (more on that below), and I even get to lecture about my beloved Second Life to a university class!

I mean, c’mon now. Having been through rough and even truly horrible times in the past, I am now wise enough to realize—and appreciate—when the good times do happen. Enjoy them while you can, because they are often fleeting! My single piece of wisdom to impart to all of you reading this is: Appreciate. The. Moment.

As I mentioned in a previous blogpost, I have been invited to give a lecture on Second Life for a graduate-level virtual worlds building and design course at my university this semester, co-taught by a computer science professor and an interior design professor. I have been quite busy trying to put the finishing touches on my presentation, which I will be delivering in Second Life to the students, all of whom will be represented by Second Life avatars.

Doing a test run of my presentation
My professional librarian avatar, named Notecard Writer, dates back to 2007 (and if you think that I actually look like this at age 60, I have a bridge in Brooklyn to sell you! 😉 )

For the course, I have rented an entire Homestead region (256 x 256 metres), with full build and terraform rights, where the students can practice their virtual world building. The rental is currently only for three weeks, but it can be extended if any students wish to work on a building project in Second Life for the course.

Here’s a sneak preview of the region; it came pre-landscaped by the landlord, and much of it I have kept as-is, but I have also worked to terraform a large, flat space for building projects (located in the foreground of this picture). I do have a small area where I will demonstrate just how much content creation has changed in Second Life over the 20 years of its history, with examples of houses and furniture from basic prim building to highly detailed mesh models! 

Some Second Life land with full build and terraform rights, for the use of the students taking the Virtual World Building class I am lecturing to (and yes, I made the spiky peak to the far right; I haven’t had an opportunity to play around with the Second Life terraforming tools in such a long time!)

As for the virtual reality lab project, it is moving ahead slowly but surely. I am one member of a committee working to move the project forward (my role is to specify VR and AR hardware and software for the lab, to support the use of VR/AR/MR/XR by faculty, staff, and students in their teaching, learning, and research). At the moment, we’re focused on renovations to the room chosen for the lab: walls, floors, lighting fixtures, etc.

Depending on when the Department of Computer Science sets up their VR lab (I’ve heard that they are ordering a bunch of Meta Quest 3s), the Libraries’ XR Lab will be either the second or third such lab on campus, after the School of Nursing, which is now regularly using VR to train student nurses (more info here in a Mastodon post I made about that project).

So, at the ripe old age of sixty, my passionate hobbies (virtual worlds, and virtual reality) are actually intersecting with my paying job! I know how lucky I am to have these opportunities, and I do not take them for granted.

Now, I have to go—I’ve got two different Libraries training sessions to deliver today, one this morning and another this afternoon—but I just wanted to write this blogpost to let you all know that I’m still here, still blogging, and once my work on the course winds down, I will once again be writing “News and Views on Social VR, Virtual Worlds, and the Metaverse,” as the tagline for my blog states. And not just about Second Life, but about all the metaverse platforms I can find!

Here’s to another sixty trips* around the sun! Woohoo! (Well, okay…maybe 20? 30?? 40?!?? 😜)

Photo by NIPYATA! on Unsplash

*Actually, it’s sixty-one trips around the sun, so far!

Editorial: Artificial Intelligence and the Metaverse (And a Look at an AI-Assisted Social VR Platform, Riff XR)

I created this image with OpenAI’s DALL-E generative AI art tool, using the text prompt “artificial intelligence in the metaverse” (source)

Housekeeping Note: I first started writing this editorial back in April, and from time to time I have picked up the draft, tinkered with it a bit more, added a bit more to it—and then promptly filed it away again as a draft, because I still wasn’t satisfied with it, and I always felt that I had something more to say.

Enough. The perfect is the enemy of the good, so today I decided to just go ahead and publish what I already had, and to write follow-up blogposts on the topic of AI in general, and AI in the metaverse in particular. And I do expect that I will return to this topic often! So please stay tuned.

I have written before on this blog about artificial intelligence (AI) applications, such as the image manipulation and animation tools WOMBO and Reface, the text-to-art creation programs DALL-E 2, Midjourney, and Stable Diffusion, and most recently, the AI-powered chatbot Replika and the text-generation app ChatGPT. Most people, myself included, treated them as toys, mere curiosities (I entertained myself for hours making my Second Life and Sansar avatars “come alive” using WOMBO). John Herrman, in a recent article for New York magazine titled The AI Magic Show (original; archived version), wrote:

In 2022, artificial-intelligence firms produced an overwhelming spectacle, a rolling carnival of new demonstrations. Curious people outside the tech industry could line up to interact with a variety of alluring and mysterious machine interfaces, and what they saw was dazzling.

The first major attraction was the image generators, which converted written commands into images, including illustrations mimicking specific styles, photorealistic renderings of described scenarios, as well as objects, characters, textures, or moods. Similar generators for video, music, and 3-D models are in development, and demos trickled out.

Soon, millions of people encountered ChatGPT, a conversational bot built on top of a large language model. It was by far the most convincing chatbot ever released to the public. It felt, in some contexts, and especially upon first contact, as though it could actually participate in something like conversation. What many users suggested felt truly magical, however, were the hints at the underlying model’s broader capabilities. You could ask it to explain things to you, and it would try — with confident and frequently persuasive results. You could ask it to write things for you — silly things, serious things, things that you might pass off as work product or school assignments — and it would.

As new users prompted these machines to show us what they could do, they repeatedly prompted us to do a little dirty extrapolation of our own: If AI can do this already, what will it be able to do next year?

As Charlie Warzel writes in The Atlantic, in a recent article titled What Have We Just Unleashed? (original; archived version), not even the AI experts know exactly what will come next:

Over the past few weeks, I’ve put questions like these to AI researchers, academics, entrepreneurs, and people who are currently building AI applications. I’ve become obsessive about trying to wrap my head around this moment, because I’ve rarely felt less oriented toward a piece of technology than I do toward generative AI. When reading headlines and academic papers or simply stumbling into discussions between researchers or boosters on Twitter, even the near future of an AI-infused world feels like a mirage or an optical illusion. Conversations about AI quickly veer into unfocused territory and become kaleidoscopic, broad, and vague. How could they not?

The more people I talked with, the more it became clear that there aren’t great answers to the big questions. Perhaps the best phrase I’ve heard to capture this feeling comes from Nathan Labenz, an entrepreneur who builds AI video technology at his company, Waymark: “Pretty radical uncertainty.”


You might be surprised to learn that we librarians are already preparing for the AI onslaught (or perhaps not so surprised, as some of the users we serve have already come to us with fake, AI-generated citations to information resources that do not exist!). I have seen no fewer than three events for Canadian librarians on this subject in the past couple of weeks, and at my employer, the University of Manitoba in Winnipeg, a fellow librarian has launched an AI Community of Practice (CoP), which will meet regularly to discuss ChatGPT and other AI apps and topics.

Librarians meeting via Zoom at the inaugural meeting of the brand-new University of Manitoba Libraries’ AI Community of Practice (I’m in the top row, second from the left!)

You.com is one of the newer entrants in the AI-powered search engine marketplace, and it also offers AI-generated text and images. You can try out YouWrite (its AI text generator) for free ten times before you have to shell out for the service, so I decided to give it a whirl, asking the site to create a professional-sounding blogpost about the impact of artificial intelligence on the metaverse:

And here’s the blogpost YouWrite spit back, exactly as generated, with zero edits:

The advent of artificial intelligence (AI) has revolutionized the way we interact with technology. AI is now being used to create virtual worlds, or “metaverses”, that are populated by digital avatars and objects that can be manipulated in real-time. As AI continues to evolve, it will have a profound impact on the metaverse and its inhabitants.

One of the most significant impacts of AI on the metaverse will be increased realism for users interacting within these virtual worlds. With advanced machine learning algorithms and natural language processing capabilities, computer-generated characters can become more lifelike than ever before – responding naturally to user input in real time while exhibiting realistic emotions such as joy or sadness depending on their environment. This could potentially lead to an unprecedented level of immersion for those engaging with these digital realms as they become indistinguishable from reality itself!

Another major advantage brought about by artificial intelligence is improved efficiency when it comes to managing large numbers of players at once – something which becomes particularly important when dealing with massive multiplayer online games (MMOs). By utilizing intelligent agents capable of autonomous decision making based off data collected from other players’ actions, developers can ensure smooth gameplay even during peak hours without sacrificing any quality due performance issues caused by too many people playing at once!

Finally, one area where AI could really shine in regards to improving user experience within a metaverse would be through automated content creation tools powered by deep learning models trained using vast amounts existing assets found across different platforms such as video games or movies etcetera… Such systems would allow game designers/developers quick access powerful resources allowing them craft unique experiences much faster than was previously possible – thus providing gamers exciting new adventures every time they log into their favourite world!

In conclusion then; It’s clear see how Artificial Intelligence stands revolutionize our concept what constitutes an immersive experience both inside out Metaverses – giving us greater control over our interactive environments alongside richer storylines filled dynamic characters all thanks advances this cutting edge technology brings us closer achieving true Virtual Reality today tomorrow alike…

That last paragraph is absolutely dreadful (almost as if the AI had a stroke!), but the rest reads well, and the text contains a few somewhat plausible, if borderline-fanciful, ideas. Still, if I could sum it all up in one word, that word would be: bullshit. This is computer-generated bullshit, produced by a large language model trained to sound like an actual human expert, but it’s just parroting human writing, without any grasp of the knowledge it is talking about! (I can’t remember who said it first, but somebody once memorably and hilariously referred to AI-generated text apps like ChatGPT as “mansplaining as a service.” 😜 In fact, I would go so far as to state that generative AI tools like ChatGPT offer white, cisgender mansplaining as a service! All the biases in the mountains of data—scraped off the greater unwashed internet—used to train these tools sometimes come out in their responses, despite the best efforts of the companies building them to eradicate those biases.)

Despite appearances, ChatGPT doesn’t really understand the world the way a human brain, with all of its lived experiences, does; it only knows how to generate plausible-sounding sentences and assemble them into coherent paragraphs! That is a narrowly defined problem: it is not general AI that is good at a wide variety of tasks, and it is certainly not a rival to humans.
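To make that last point a little more concrete, here is a deliberately tiny toy sketch in Python (my own illustration, which bears no resemblance to how OpenAI actually builds its models; real LLMs are neural networks trained on billions of words). It is a word-level Markov chain that “autocompletes” text purely from the statistics of whatever training text you feed it: pick the next word based on the previous one, with no understanding whatsoever.

```python
# Toy "autocomplete": a word-level Markov chain. This is NOT how ChatGPT works
# internally; it is only meant to illustrate the idea of generating
# plausible-looking text from statistics alone, with zero understanding.

import random
from collections import defaultdict

def train(text: str) -> dict:
    """Record which words were observed to follow which words."""
    words = text.split()
    table = defaultdict(list)
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table: dict, start: str, length: int = 15) -> str:
    """Chain together words one at a time, each chosen from the observed followers."""
    word, output = start, [start]
    for _ in range(length):
        followers = table.get(word)
        if not followers:  # no observed continuation; stop here
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = ("the metaverse is a network of virtual worlds "
          "the metaverse is powered by virtual reality "
          "virtual worlds are built by their residents")
model = train(corpus)
print(generate(model, "the"))
# Possible output: "the metaverse is powered by virtual worlds are built by their residents"
```

Scale that same basic idea up by many, many orders of magnitude (and swap the lookup table for a deep neural network), and you get something in the spirit of a large language model: far more fluent, but still pattern continuation rather than understanding.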


Herrman, in his New York magazine article, paints a somewhat disquieting picture of what could happen in the future, as the AI wave accelerates:

Models trained on flawed, biased, and often secret sets of data will be used to attempt to perform an assuredly ambitious range of tasks, jobs, and vital economic and social processes that affect the lives of regular people. They will depend on access to massive amounts of computing power, meaning expensive computer hardware, meaning rare minerals, and meaning unspeakable amounts of electricity. These models will be trained with the assistance of countless low-paid labourers around the world who will correct bogus statistical assumptions until the models produce better, or at least more desirable, outputs. They will then be passed on for use in various other workplaces where their outputs and performances will be corrected and monitored by better-paid workers trying to figure out if the AI models are helping them or automating them out of a job, while their bosses try to figure out something similar about their companies. They will shade our constant submissions to the vast digital commons, intentional or consensual or mandatory, with the knowledge that every selfie or fragment of text is destined to become a piece of general-purpose training data for the attempted automation of everything. They will be used on people in extremely creative ways, with and without their consent.

Charlie Warzel goes even further, likening the potential impact of artificial intelligence to that of nuclear fission and nuclear war:

Trying to find the perfect analogy to contextualize what a true, lasting AI revolution might look like without falling victim to the most overzealous marketers or doomers is futile. In my conversations, the comparisons ranged from the agricultural revolution to the industrial revolution to the advent of the internet or social media. But one comparison never came up, and I can’t stop thinking about it: nuclear fission and the development of nuclear weapons.

As dramatic as this sounds, I don’t lie awake thinking of Skynet murdering me—I don’t even feel like I understand what advancements would need to happen with the technology for killer AGI [Artificial General Intelligence] to become a genuine concern. Nor do I think large language models are going to kill us all. The nuclear comparison isn’t about any version of the technology we have now—it is related to the bluster and hand-wringing from true believers and organizations about what technologists might be building toward. I lack the technical understanding to know what later iterations of this technology could be capable of, and I don’t wish to buy into hype or sell somebody’s lucrative, speculative vision. I am also stuck on the notion, voiced by some of these visionaries, that AI’s future development might potentially be an extinction-level threat.

ChatGPT doesn’t really resemble the Manhattan Project, obviously. But I wonder if the existential feeling that seeps into most of my AI conversations parallels the feelings inside Los Alamos in the 1940s. I’m sure there were questions then. If we don’t build it, won’t someone else? Will this make us safer? Should we take on monumental risk simply because we can? Like everything about our AI moment, what I find calming is also what I find disquieting. At least those people knew what they were building.

The point these authors are making is that, with AI, we are dealing with something which has the potential to dramatically impact (and, in some cases, up-end) our current society, in ways which might not be readily apparent at first.

Amy Castor and David Gerard, who have been busy dissecting and critiquing the ongoing three-ring circus that is blockchain, crypto, and NFTs, have turned their attention to artificial intelligence in a two-part series (part one; part two). I strongly suggest you read both blogposts, but here’s a sample:

Much like crypto, AI has gone through booms and busts, with periods of great enthusiasm followed by AI winters whenever a particular tech hype fails to work out.

The current AI hype is due to a boom in machine learning — when you train an algorithm on huge datasets so that it works out rules for the dataset itself, as opposed to the old days when rules had to be hand-coded.

ChatGPT, a chatbot developed by Sam Altman’s OpenAI and released in November 2022, is a stupendously scaled-up autocomplete. Really, that’s all that it is. ChatGPT can’t think as a human can. It just spews out word combinations based on vast quantities of training text — all used without the authors’ permission.

The other popular hype right now is AI art generators. Artists widely object to AI art because VC-funded companies are stealing their art and chopping it up for sale without paying the original creators. Not paying creators is the only reason the VCs are funding AI art.

Do AI art and ChatGPT output qualify as art? Can they be used for art? Sure, anything can be used for art. But that’s not a substantive question. The important questions are who’s getting paid, who’s getting ripped off, and who’s just running a grift.

OpenAI’s AI-powered text generators fueled a lot of the hype around AI — but the real-world use case for large language models is overwhelmingly to generate content for spamming. [Vox]

The use case for AI is spam web pages filled with ads. Google considers LLM-based ad landing pages to be spam, but seems unable or unwilling to detect and penalize it. [MIT Technology Review; The Verge]

The use case for AI is spam books on Amazon Kindle. Most are “free” Kindle Unlimited titles earning money through subscriber pageviews rather than outright purchases. [Daily Dot]

The use case for AI is spam news sites for ad revenue. [NewsGuard]

The use case for AI is spam phone calls for automated scamming — using AI to clone people’s voices. [CBS]

The use case for AI is spam Amazon reviews and spam tweets. [Vice]

The use case for AI is spam videos that advertise malware. [DigitalTrends]

The use case for AI is spam sales sites on Etsy. [The Atlantic, archive]

The use case for AI is spam science fiction story submissions. Clarkesworld had to close submissions because of the flood of unusable generated garbage. The robot apocalypse in action. [The Register]

You can confidently expect the AI-fueled shenanigans to continue.


Riff XR: Artificial Intelligence in the Metaverse

However, there have been some rather interesting applications of AI specifically to the metaverse. A brand-new social VR platform called Riff XR offers a tantalizing (if still somewhat buggy) glimpse of the AI-assisted metaverse of the future.

Among the AI-assisted features of Riff XR are NPCs (non-player characters, i.e. bots) with whom you can have surprisingly open-ended conversations, as well as a “cutting-edge Stable Diffusion-powered Generative Art System”:

Now, I have not visited Riff XR myself (yet), but a good friend of mine, metaverse videographer Carlos Austin, has, and he posted a video of his explorations of this new metaverse platform, including verbal conversations with a number of NPCs that use generative AI to “listen” and “respond” to his spoken sentences.

One was a constable droid roaming the night-time central plaza in Riff XR, a scene straight out of Ready Player One; another played the role of Vincent van Gogh in an exhibition of AI-generated artworks in a museum just off the plaza; a third was a woman named Molly Millions, working the back bar in a cyber-disco with pulsating music and gyrating NPCs of various kinds, with whom Carlos had a surprisingly in-depth conversation about cocktails!

Carlos demonstrated that you could even speak to these NPCs in different languages, including German, Japanese, and Spanish (although let me just add that the faux Van Gogh’s German accent was absolutely atrocious!). Here’s his full video (please fast-forward through all the technical bugs and mishaps; Riff XR is still quite buggy!). Carlos’ conversation with Molly Millions is nearer the end of this video:

We can expect to see more such applications of artificial intelligence coming soon (and perhaps sooner than we might expect!) to a virtual world or social VR platform near you. And you can expect more blogposts from me on this topic in future, as the technology continues to develop and evolve over time. Stay tuned!
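For the technically curious, here is a rough, purely hypothetical sketch of how a conversational NPC like the ones Carlos chatted with might be stitched together behind the scenes: the visitor’s speech is transcribed to text, fed (along with a persona prompt and the conversation so far) to a large language model, and the model’s reply is converted back into speech for the avatar. To be clear, this is only my guess at the general pattern, not Riff XR’s actual implementation, and all of the function names below are made-up placeholders.

```python
# A purely hypothetical sketch of a voice-driven NPC pipeline. I have no inside
# knowledge of how Riff XR built theirs; every function here is a placeholder
# standing in for whatever speech-to-text, large language model, and
# text-to-speech services a platform might choose to use.

PERSONA = (
    "You are Molly Millions, a bartender in a neon-lit cyber-disco. "
    "Stay in character, keep your replies short, and answer in whatever "
    "language the visitor speaks to you in."
)

def transcribe(visitor_audio: str) -> str:
    """Placeholder for a speech-to-text service; here it just passes text through."""
    return visitor_audio

def generate_reply(persona: str, history: list[str], heard: str) -> str:
    """Placeholder for an LLM call: a real system would send the persona prompt
    plus the running conversation history and return the model's reply."""
    return f"(as the NPC) You asked about: {heard}"

def synthesize_speech(text: str) -> bytes:
    """Placeholder for a text-to-speech service that voices the avatar's reply."""
    return text.encode("utf-8")

def conversation_turn(visitor_audio: str, history: list[str]) -> bytes:
    """One turn of conversation: hear the visitor, compose a reply, speak it aloud."""
    heard = transcribe(visitor_audio)
    reply = generate_reply(PERSONA, history, heard)
    history.extend([heard, reply])
    return synthesize_speech(reply)

history: list[str] = []
audio_out = conversation_turn("What cocktails do you recommend tonight?", history)
print(audio_out.decode("utf-8"))
```

Swap the placeholder functions for real speech-to-text, language model, and text-to-speech services and you have the skeleton of a talking NPC; the persona prompt is what keeps the character in role, and it also suggests how such NPCs can reply in German, Japanese, or Spanish when spoken to in those languages.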


Many thanks to Jim Carnicelli (a.k.a. Galen from Sansar), with whom I had a couple of wide-ranging online discussions via Discord on the topic of AI while I was working on this blogpost over the summer! While I did not use many of the ideas we talked about, they did give me much food for thought (and possible topics for future blogposts!). You can visit Jim’s store selling his AI-generated artwork here: Snuggle Hamster Designs.

UPDATED! An Editorial: Climate Change, Civilization Collapse, and Looking at the Metaverse from a Broader Perspective

Well, now, there’s a blogpost title designed to attract attention, right? 😉

I was one of those people who, at the height of the bird flu scare, stocked up on rice, beans, and N95 masks to prepare for what I thought (at the time) was an imminent pandemic. Well, we had a pandemic, but I was wrong about the when and the why of it.

I have been doing a fair bit of reading lately, and following a number of intelligent, well-read people on Mastodon, and I feel as though I’m in a situation somewhat similar to the one I was in a decade and a half ago: I’m willing to bet that something bad will happen but, as with the pandemic, I’m probably going to be wrong about the when and the why. What is that something bad? Climate change (fuelled by growth-at-any-cost, maximize-the-value-to-shareholders capitalism) leading to widespread agricultural failures, leading in turn to famine, war, and civilization collapse.

Cheerful topic, eh?

But I am increasingly of the opinion that we, as a society, are going to continue to see more and more of the kind of things that would have been unthinkable even at the beginning of this year. A hurricane-driven wildfire leveling a Hawaiian city, Lahaina, burning 100 people to death before they could flee. A torrential downpour in a Libyan city, Derna, leading to dam failures and flooding, with 11,300 dead and 10,100 missing.

The Lahaina wildfires in Hawaii on August 8, 2023 burned so hot and fast that many people were trapped in their cars trying to flee; some only survived by jumping into the sea (image source).
Overturned cars lay among other debris caused by flash floods in Derna, eastern Libya, on September 11, 2023 (image source: CNN).

And my home country of Canada is not immune. This summer, almost every city and town in the Northwest Territories had to be evacuated due to an unprecedented wildfire season. A tropical storm named Lee rapidly intensified into a Category 5 hurricane due to off-the-charts ocean warming, finally coming ashore as a still-fierce storm battering the Maritime provinces as I write this.

My usual response to depressing events like this is pretty much summed up by the following picture:

But the truth of the matter is, there’s only so much you can stick your head in the sand, like an ostrich, when it’s becoming increasingly clear that things are not okay.

Recently, I read a Medium post by Jessica Wildfire, titled The World Has Already Ended. In it, she writes:

We talk a lot about saving the world or preventing the collapse of civilization, but we don’t talk about what it really means. We don’t talk about which world or which civilization we’re trying to save.

It can’t be this one.

This civilization is gone. This world is gone. It already ended for millions of people. Some of us just haven’t felt it yet. It was never an easy one for most of us. It was never fair, but there was a level of predictability. There was a level of comfort and convenience. That’s gone now. Things aren’t going to get better. They’re not going to get back to the way they were.

Now, when the pandemic hit, I bewildered and even irritated some of my regular readers by abruptly swerving my blog content from “news and views on social VR, virtual worlds, and the metaverse” (as the tagline for this blog states) to wall-to-wall coverage of the COVID-19 pandemic. So, you’ll all no doubt be quite pleased to hear that I’m not planning to turn this into a doomer blog about the end of the world.

However.

From now on, my writing about the metaverse will take place within the context that we might be in the earliest stages of a climate-change-driven societal upheaval, perhaps even collapse. For example, how will sales of virtual reality headsets fare when more and more people are struggling to put food on the table and keep a roof over their heads? And, as the metaverse evolves, how many people will consider it an acceptable alternative to airplane travel to conferences, given that plane travel adds to the overall toll of global warming? How can virtual worlds and social VR be used to help educate people about the challenges facing our world? What happens to VR/AR/MR/XR headset manufacturing when components from companies in India or China or the U.S. are impacted by supply-chain problems caused by climate change?

So, while this will continue to be, first and foremost, a blog about virtual reality and the metaverse, I expect that, from now on, I will be looking at everything that is currently happening from a wider perspective, encompassing climate change and its possible impacts. And, if I should stray from that perspective, please feel free to call me out on it.*

UPDATE Sept. 21st, 2023: One of the people whom I follow on Mastodon, who posts about this topic regularly under the pseudonym Bread and Circuses (profile link), recently wrote the following (post link):

Okay, so here’s where we are…

If trends continue on their present course, within 30 years or even less, human industry and commerce and land use will have altered Earth’s livable environment to such an extent that modern industrial society will collapse. This would be an utter disaster. Hundreds of millions of the most vulnerable among us, perhaps even billions, will die. Not to mention all the thousands of animal and plant species we will have driven to extinction.

Sounds bad, right? And that’s what happens on our present course, which includes the belated (and much applauded) shift of most electricity generation to “renewables.” Sorry, but that ain’t gonna be enough to stop what’s coming.

I’m not the only one who can see this, obviously. Many scientists are now raising a similar alarm.

The rich and powerful can see it too, which is why they’re quietly buying large tracts of land in New Zealand and constructing luxury bunkers, hoping to get through the apocalypse that way.

Government and industry leaders certainly can see what lies ahead. And if they use their brains (which not all of them have), they’ll realize that the only possible way to head off catastrophic collapse is to begin an immediate shift to a post-growth economy.

Except — that would mean the cessation of capitalism. Well, we can’t have that! So, instead, they’re now making plans to drastically alter the environment in order to overcome the problems caused by altering the environment.

Yes, you read that right. Capitalists want to use fossil-fueled technology to fix all the damage caused by fossil-fueled technology. 🥴

Our Mastodon friend Geoffrey Deihl (aka Sane Thinker, @gdeihl) recently published a thorough exploration and evaluation of those plans.

“Stratospheric Aerosol Injection: Earth’s Last Chance Dance?”

See https://geoffreydeihl.substack.com/p/stratospheric-aerosol-injection-earths


*True confession time. Hey, I still don’t bother to recycle my plastic. 😱 I recently bought a new gasoline-powered car instead of a hybrid or electric vehicle. 😱😱 And I just booked a plane ticket to go see the rest of my family. 😱😱😱 I’m just as much a climate-change hypocrite as the next person. All I can do is continue to educate myself, and try to do better.

Tumbling Tumbleweeds: My Blogging Drought

Ever go through a creative dry patch? (Photo by Juli Kosolapova on Unsplash)

I admit it: I’m having a dry spell. (Oh, like you haven’t noticed.)

Or, to be more accurate, a dry year. At the moment, I have no fewer than 31 draft blogposts in various stages of completion, waiting for me to finish and publish them (see image, right, from my WordPress blogging software). I keep waiting for inspiration to strike, and lately it just never seems to come. And, as somebody who used to push out as many as 6 or 7 blogposts in a day, this creative desert I am experiencing is something new and unexpected.

Frankly, I’ve been feeling a bit overwhelmed in my life these past six months, which is part of the reason why I am just not feeling that spark. I come home from my job as an academic librarian, and most evenings, the last thing I want to do is blog.

And it’s not as though there is any shortage of news in virtual worlds, social VR, and the metaverse to report on or editorialize over, either. If anything—even though last year’s metaverse craze has clearly been traded in for this year’s artificial intelligence hype cycle—the stream of things happening out there for me to react to, or pontificate on, is still quite steady. There is zero shortage of news and events.

I feel tired and burned out, and more than a little overwhelmed by it all, to be honest.

The virtual reality lab project my university library system is working on (an initiative I am involved with) is moving ahead, slowly but surely, but I’m having trouble feeling excited about it. (Or perhaps, it’s more accurate to say that I am 50% excited and 50% terrified!) Even though I have been working with virtual reality since December 2016/January 2017, I am struggling with a massive case of imposter syndrome. I remind myself that there’s literally nobody else in the University of Manitoba Libraries system who has the right kind of background and experience to work on this, but some days are a struggle nonetheless. Perhaps it’s the late summer doldrums; I don’t know.

I know that part of the problem is that I’m just not feeling that sense of wonder and magic that I used to, when I slipped on a VR headset. Perhaps what I really need to do is to go seek out that magic again, actively look for it. I know that there are some truly wonderful projects out there that people are working on, projects that I have started to write about, and then somehow they still sit in my WordPress drafts folder, unfinished.

It’s a mystery to me. Maybe it was inevitable, after five years of break-neck blogging, that there would be a dry spell, a pause.

I’m not sure when I will be back. I find myself escaping into Second Life a lot lately; it was my first love and my first introduction to the power and potential of virtual worlds. I feel a bit guilty that so much of my recent coverage has been about Second Life and its 20th anniversary celebrations, but I do consider that a significant milestone, worthy of the attention. It also bothers me that I have not paid nearly as much attention to VRChat, where, for example, I have recently had some wonderful experiences exploring the delightfully bizarre worlds of DrMorro, a talented Russian world-builder with a vivid imagination.

Perhaps the problem is one of format. I know a lot of people don’t follow blogs that much anymore; some consider them an outdated form of communication (which I disagree with). But maybe a blog just isn’t the best way to cover everything that’s happening in the metaverse. Maybe a switch to a newsletter, or a podcast, or perhaps a resurrection of the moribund Metaverse Newscast, is in order?

But I digress; I am rambling. I will end this post here.

See you on the other side of my dry spell!

Tumblin’ tumbleweeds… (Photo by Luismi Sánchez on Unsplash)