Yet Another One Bites the Dust: Meta’s Shutdown of Horizon Workrooms

In a recent blogpost about the shutdown of MeetinVR, I wrote:

Facebook (which had gone to all the trouble and expense of rebranding as Meta during this ridiculous hype cycle) has dropped literally hundreds of millions of dollars into acquiring Oculus and trying to build a business metaverse platform, and failed to even entice its own employees into using it (let alone anybody else)…

I predict that we are going to see a “metaverse winter,” much like the previous “AI winters,” when the initial promise and hype of the technology hits what the Gartner Group politely calls “the trough of disillusionment.” And I predict we are going to see a lot more shutdown announcements like this throughout 2026.

Well, guess what? Once again, I am late in reporting this, but Meta has finally shut down its Horizon Workrooms product, a social VR platform intended for business use. According to a Road to VR news report by Scott Hayden, Horizon Workrooms’ final day was Feb. 16th, 2026.

Scott Hayden’s article on the shuttering of Horizon Workrooms, Road to VR, Jan. 16th, 2026

This is hardly a surprise. As I said up top, I don’t think anybody was using Workrooms. I wrote about the launch of the open beta of Workrooms in August 2021, at a time when Facebook Horizon (as it was then called) was still in closed, invitation-only beta. One neat feature was that it allowed you to bring your physical keyboard into the virtual space via keyboard tracking (this only worked for certain models of keyboard, though). One month later, they announced a collaboration with Zoom, but I don’t know if that went anywhere.

By October 2022, rumours were swirling, with leaked internal memos stating that even Meta’s own employees were avoiding the use of Workrooms. Shortly thereafter, The Verge issued a savagely critical evaluation of Workrooms. The product was buggy, the avatars were cartoony, and compared to simpler solutions like Zoom and Microsoft Teams, there just seemed to be too high a cost of entry for its designated use case. Meta finally decided this year to take the dog out back and shoot it. I’m surprised it lasted as long as it did. Scott Hayden reported:

For existing users, Meta has not announced a direct replacement for Workrooms; the company suggests users look into third-party apps such as Arthur, Microsoft Teams Immersive and Zoom Workplace.

Oh, and Meta has also been shelving projects and laying off staff in its Reality Labs division, according to Scott’s article. So it would appear that our metaverse winter is in full swing.

Photo by Bob Canning on Unsplash

But keep in mind that winter is only one season out of four. And winter has its own special beauty, even if it doesn’t seem like there’s very much going on under all that ice and snow.

Yes, we are probably going to see more platforms shut down, like Workrooms, and more companies go out of business (not Meta of course, smaller ones). But those of us who have already been active in the metaverse for many years aren’t going anywhere during these lean, cold times. We’ve found our people, our communities, wherever we happen to meet up, whether it’s a flatscreen virtual world like Second Life or a meetup in VRChat. We hop from world to world as needed.

Yes, the current marketplace struggles will still impact us all in some way. We can expect moments of panic and chaos (e.g. when Ready Player Me was bought out by Netflix, and thousands of developers had to scramble to replace their avatar systems). But we will hunker down, use the downtime productively, and wait for the next season to arrive.

Generative AI Update: Comparing ChatGPT, Claude, and Gemini while Researching the Metaverse Characteristics of Social VR Platforms

NOTICE: In this blogpost, I go into sometimes great detail about how these three generative AI tools work, comparing them in two ways:

– comparing how these tools work with the exact same text prompt; and
– comparing how they worked in August 2025 versus February 2026.

There’s an executive summary (Section 4) at the very bottom of this long, loooong blogpost if you just want to skip to the highlights and my ranking.

If you need an introduction or a refresher, you might want to read this blogpost first: An Introduction to Artificial Intelligence in General, and Generative AI in Particular, which includes slides from lectures I gave on the topic in November and December of 2025.

SECTION 1: Introduction

In his 2024 book Co-Intelligence (still my go-to layperson’s guide to generative AI), Ethan Mollick says that one of the best ways to determine how well a particular generative AI tool works is to ask it questions about a subject that you already are an expert in. Why? Because it will be much easier for you, the human expert in the topic, to find errors and hallucinations in the answers.

Since last summer, I have been typing the exact same prompt into the “big three” general-purpose GenAI tools Ethan recommends: OpenAI’s ChatGPT, Anthropic’s Claude, and Google Gemini. I have been meaning to write a blogpost about my experiences with this first round of testing since September, but I have been too occupied with my paying job as an academic librarian to find an opportunity to do so—until now. (Please note that I have been using an em-dash, correctly, for many years before generative AI came along!)

So, today I decided to redo my original text prompt, using the latest versions of these three GenAI tools as outlined by Ethan in the latest edition of his AI Guide, which was posted to his Substack newsletter on Feb. 17th, 2026 (here’s a link).

I consider his advice to be quite valuable, as he seems to spend a lot of time working with the most popular and powerful GenAI tools, and keeping on top of the changes and advances in the technology. In this newest edition of his AI Guide, he discusses the shift from chatbots (where you have a conversation with the tool) to agents (where you give a specific, defined task with instructions to the tool, and it goes away and does the task and returns with results).

In all cases, the initial text prompt is the following:

What are some characteristics common to all metaverse platforms? How do these characteristics apply to social VR platforms? Please give me a chart comparing these characteristics for the most popular social VR platforms.

Please note that I have deliberately given the task of defining “popular,” and picking the social VR platforms, over to the generative AI tool (and I got some rather interesting results back!). Because I consider myself an expert on social VR and the metaverse, I should be able to spot inaccuracies, errors, or outright hallucinations in the responses I get back from these GenAI tools. In the next section (section 2), I compare and contrast the results I received from the above text prompt from:

  • Claude by Anthropic
  • ChatGPT by OpenAI
  • Gemini by Google

All three of these tools come with different versions. In all cases, I will use the most powerful version recommended by Ethan Mollick in his latest AI Guide I linked to above (but please note that in at least one case, I had made a mistake and not selected the correct option, as you will see below with Claude in Sections 2 and 3):

  • Claude Opus 4.6 Extended Thinking
  • ChatGPT 5.2 Thinking
  • Gemini 3.0 Pro Deep Research
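For this round of testing, I did everything by hand in each tool’s web or app interface, exactly as described in the sections below. But if you ever wanted to automate this kind of same-prompt comparison, all three vendors offer Python SDKs. Here is a rough, minimal sketch of what that could look like; note that the model IDs below are placeholders (not the product names above), you would need your own API keys, and plain API calls like these do not reproduce the Deep Research, Extended Thinking, or Cowork modes I discuss below:

```python
# Minimal sketch: send the identical prompt to Claude, ChatGPT, and Gemini via their
# Python SDKs. Model IDs are placeholders; API keys are read from environment variables.
import anthropic
import openai
from google import genai

PROMPT = (
    "What are some characteristics common to all metaverse platforms? "
    "How do these characteristics apply to social VR platforms? "
    "Please give me a chart comparing these characteristics for the most "
    "popular social VR platforms."
)

def ask_claude(prompt: str) -> str:
    client = anthropic.Anthropic()  # uses ANTHROPIC_API_KEY
    response = client.messages.create(
        model="claude-model-placeholder",  # placeholder model ID
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

def ask_chatgpt(prompt: str) -> str:
    client = openai.OpenAI()  # uses OPENAI_API_KEY
    response = client.chat.completions.create(
        model="gpt-model-placeholder",  # placeholder model ID
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def ask_gemini(prompt: str) -> str:
    client = genai.Client()  # uses GOOGLE_API_KEY
    response = client.models.generate_content(
        model="gemini-model-placeholder",  # placeholder model ID
        contents=prompt,
    )
    return response.text

if __name__ == "__main__":
    for name, ask in [("Claude", ask_claude), ("ChatGPT", ask_chatgpt), ("Gemini", ask_gemini)]:
        print(f"===== {name} =====")
        print(ask(PROMPT))
```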

In addition, in section 3 of this long blogpost, I will very briefly compare and contrast the results I received when I first ran this text prompt through all three GenAI tools on August 7th, 2025, with what I received when I ran them again on Feb. 18th, 2026.

All comparison charts in the February 2026 results in sections 2 and 3 will include some quick stats in a small table under each generative AI tool discussed, namely:

  • the number of characteristics common to all metaverse platforms (and their names); and
  • the number of social VR platforms in the comparison chart (and their names).

Section 4, the final section, contains my overall thoughts after spending a day working with these tools, and a ranking of how well I think these GenAI tools accomplished the given task.


SECTION 2: Comparing Searches Done Feb. 18th, 2026

Feb. 18th, 2026: Claude Opus 4.6 (and Cowork)

First up is Claude. I did this prompt two ways: once via the chatbot interface on the Claude website, and a second time using the Claude app and the new Cowork agent feature. (I was prompted to download and install the Claude app on my Mac, and authenticate using my email address.) First, the chatbot version:

This first report I got back compared eight platforms across eight metaverse characteristics:

  • 8 Metaverse Characteristics: Persistent Virtual Environments, Real-Time Interactivity, User Identity/Avatars, Social Presence & Co-Experience, User-Generated Content, Virtual Economy, Cross-Platform Accessibility, Interoperability
  • 8 Social VR Platforms: VRChat, Rec Room, Meta Horizon Worlds, Resonite, Second Life, Spatial, ChilloutVR, NeosVR

Well, right off the bat, I see some problems. First, Second Life is not social VR. Second, it included both Resonite and NeosVR (although Claude told me, “I included both since NeosVR still has historical relevance, but noted it as legacy since the core team transitioned to Resonite”). However, that isn’t a good enough reason to include it in the table.

Next, I turned to the Claude app (which was suggested to me when I did the first text prompt above, so I downloaded and installed it on my MacBook Pro). I selected the Cowork (agent) tab from the three tabs along the top, as suggested by Ethan, and I entered the exact same text prompt:

After beavering away for a few minutes, it gave me the following result:

And when I clicked on the Open in Firefox button, I got this neatly formatted table (I’m not crazy about the chosen colour scheme, but that’s a minor quibble). It looks good at first:

However, the output, which might look impressive at first, is only as good as the quality of the sources used in its research. If the good information is locked behind a paywall (and therefore not able to be scraped to add to its knowledge base), then the GenAI tool will use freely-available sources on the web, which can vary quite a bit in quality! There is an acronym in computer science called GIGO: Garbage In, Garbage Out, and I was reminded of this when I took a closer, more critical look at the six sources listed.

All of them were non-academic sources, mostly generic market overviews from websites that I had never heard of before. The six sources included my own list of metaverse platforms on this blog (which is just a list, and doesn’t give any details about the platforms). While I’m flattered they included me, I expected something…more. And I absolutely hated that they mentioned cryptocurrencies, blockchain, DAOs, and NFTs, and included Somnium Space and Decentraland in the resulting table. While Somnium Space is social VR, Decentraland absolutely is not, and I have made my opinions on blockchain-based metaverse platforms very clear in the past on this blog.

  • 8 Metaverse Characteristics: Persistence, Immersion & Presence, User-Generated Content, Built-In Economy, Social Interaction, Interoperability, Digital Ownership, Decentralized Governance
  • 6 Social VR Platforms: VRChat, Meta Horizon Worlds, Rec Room, Engage VR, Decentraland, Somnium Space

In fact, I was so dissatisfied with this report that I went back into the Claude Cowork app, added a qualifier, and made sure that I had turned on Extended Thinking! (I’m almost positive I did that the first time around, but maybe I forgot, and unfortunately, once you’ve done your prompt, the results don’t tell you what modes you used in asking the original question.)

Only to get pretty much the same result: a pretty table with only six websites listed as sources! So much for being more specific and asking for Extended Thinking.

  • 10 Metaverse Characteristics: Persistence, Immersive 3D Environments, User Identity & Avatars, Real-Time Social Interaction, User-Generated Content, Economy & Monetization, Cross-Platform Access, Scalability & Concurrency, Safety & Moderation, Interoperability
  • 6 Social VR Platforms: VRChat, Rec Room, Meta Horizon Worlds, Resonite, ChilloutVR, Engage VR

While these results are better than the previous round, I am actually disappointed in what I received from Claude Cowork. But read on; in section 3, I have an update on what I think went wrong here!

Feb. 18th, 2026: ChatGPT 5.2 Thinking

Next, I turned to OpenAI’s ChatGPT, using the ChatGPT 5.2 Thinking mode suggested by Ethan:

And I got back the following table, comparing six social VR platforms on ten metaverse characteristics:

While the resulting table might not be as pretty as the one produced by Claude Opus 4.6 Cowork, I appreciate that there are actual citations which you can hover over and click through to see the source material behind the comparison chart entries (and not just a list of websites checked, tacked on to the end). Also, ChatGPT seems to have checked a lot more sources than Claude, and made some sort of attempt to find authoritative sources (often, from the metaverse product’s own online documentation, as shown in this example).

  • 10 Metaverse Characteristics: Shared Multi-User Spaces, Avatars/Embodied Identity, Real-Time Voice/”Hangout” Core Loop, Persistence (Account, Inventory), User-Generated Worlds, In-World Creation Tools, Scripting, Economy & Monetization, Cross-Platform Access, Safety Governance
  • 6 Social VR Platforms: VRChat, Rec Room, Meta Horizon Worlds, Bigscreen Beta, Spatial, Resonite

Overall, I think that ChatGPT 5.2 Thinking gave me a better answer than Claude…but as we will see later on, it doesn’t compare to the best results I got from my day of testing and retesting. Let’s move on to the third of Ethan Mollick’s recommended, general-purpose GenAI tools, Google’s Gemini:

Feb. 18th, 2026: Gemini 3 Pro (first without, and then with, Deep Research)

The first go-round, I selected Gemini 3 Pro mode, as Ethan suggested:

And I got a resulting table comparing three social VR platforms across seven characteristics:

  • 7 Metaverse Characteristics: Core Philosophy, Visual Style, Creation Tools, Hardware Access, Target Audience, Economy, “Metaverse” Strength (?!)
  • 3 Social VR Platforms: VRChat, Rec Room, Meta Horizon Worlds

I was so unhappy with this first Gemini result that I redid the prompt, this time making sure that I turned on the Deep Research mode, just to see if I would get better results, or even some actual citations to sources used:

Wow, what a difference!!

This time around, the task took a lot longer than it did with either Claude or ChatGPT, and it included what appears to be extremely detailed feedback on what was happening behind the scenes (this seems to be turned on by default, and I’m not certain if this mode could have been enabled on Claude or ChatGPT):

And the report I got back was worth the longer wait:

And, at the end, not one but three comparison charts!

Here’s the quick stats, from all three tables in the final report (and notice how technical many of these “metaverse characteristics” are, compared to the other results!):

  • 12 Metaverse Characteristics: Engine Core, Scripting Language, Persistence Type, Asset Pipeline, Audio Engine, Economic Model, Currency, Identity System, Tracking Support, Instance Cap, Network Model, Culling Tech
  • 5 Social VR Platforms: VRChat, Rec Room, Roblox, Meta Horizon Worlds, Resonite (only mentioned in one table)

SECTION 3: Comparing August 2025 Prompt Results with the February 2026 Ones

I also wanted to compare the results I got when I did the testing last year (August 7th, 2025) with the results I got today (Feb. 18th, 2026) from all three GenAI tools. This was very enlightening.

Then Versus Now: Claude

You will understand why I was so disappointed with today’s results when you see what I got back when I ran the same prompt last year (on August 7th, 2025):

The report I got back was extremely detailed, with actual citations to sources! I still don’t understand why I got such dramatically different—and worse—results. The difference is so astounding to me that I began to wonder if I had done something wrong this time around.

It was then that I realized that I had literally forgotten to turn on Research mode in the left-hand drop-down menu (previously, I had only had Web Search mode turned on):

So I went to check the Claude app, to see if that option was available there, and, of course, it was—but under the Chat tab, not the Cowork tab!! So perhaps Cowork still has some user interface bugs to work out. Perhaps sending everything off to an agent isn’t always the better option; certainly not in this case!!

Once I had selected both Research and Web Search from the left drop-down menu, and Opus 4.6 Extended from the right drop-down menu, I hit send and waited…until I got a message that I had used up all my credits on my $20-a-month plan!!!

AAAAAAAAAAAAAARGH!!!!

By this point, I was so frustrated with Claude that I simply exited the app. I had had enough frustration for one day.

The next morning, February 19th, 2026 (my daily credits had reset at 6:00 p.m. the previous evening), I once again tried my prompt with Claude Opus 4.6 Extended Thinking, with both Research and Web Search turned on (using the Claude app I had installed on my Mac, as opposed to the web version; they appear to be identical in terms of features).

Right off the bat, I got a better response (and Claude even remembered that I was going to be working on an OER about the metaverse!):

Again, similar to Google Gemini, I had a bit of a wait while Claude did its thing. I actually preferred how Gemini described what it was doing while it was going about its task, as opposed to…well, no updates from Claude other than me sitting and staring at an animated cursor!

Ten minutes later, I got the detailed report I wanted in the first place, and which Claude Cowork stubbornly refused to give me:

The response back included a concise summary taken from the sources examined:

The final report included citations to the academic literature (which I could hover over and click on to go to the source, see the red arrow below), and it cited experts in the field such as Matthew Ball and Tim Sweeney. It’s pretty much all I wanted, and it compares quite favourably to the similarly detailed report from Google Gemini, in the previous section. I am happy.

And this was the only report which had a listing of metaverse characteristics, separate from the ones used in the social VR platforms comparison chart:

Here’s the quick stats from the comparison chart. As you can see, there are some problems here, with the inclusion of platforms which are clearly not social VR (e.g. Second Life) and platforms that no longer exist (AltspaceVR shut down on March 10th, 2023). These sorts of mistakes make me wonder about the accuracy and currency of the report overall.

  • 12 Metaverse Characteristics: Persistence, Synchronous Real-Time, Massive Scale/Concurrency, Cross-Platform Access, Virtual Economy, User-Generated Content, Interoperability, Avatar/Identity Systems, Immersive 3D/Spatial Computing, Open Standards/Decentralization, Spanning Physical-Digital, Ethical Governance/Accessibility
  • 9 Social VR Platforms: VRChat, Horizon Worlds (note: old name used), Rec Room, Resonite, ChilloutVR, AltspaceVR (was shut down), Second Life (not social VR!), Roblox, Fortnite (not social VR!)

Then Versus Now: ChatGPT

An interesting difference between the August 2025 report from ChatGPT and today’s report is this: in last year’s report, for whatever reason, the tool asked me a follow-up question to clarify what was wanted (I did use the Deep Research feature in the 2025 report, as well):

Based on that clarification requested by ChatGPT, I actually think I preferred the 2025 report format over this new one. So why didn’t ChatGPT 5.2 Thinking ask me any follow-up questions this time around? And that’s part of the frustration with these tools; the way that they operate is still very much a black box, where you don’t understand how the tool is processing what you ask of it.

Then Versus Now: Gemini

The last comparison is between the Google Gemini report I produced on August 7th, 2025, and today’s report. One thing I noticed about the Aug. 7th report was how hard it tried to shoehorn an overarching narrative into the final result, in a way that seemed a bit hamfisted, frankly. But the result was still a very detailed report with an extensive list of citations, comparable to today’s report. I prefer today’s version.


SECTION 4: Executive Summary and Ranking

This is going to be concise, I promise! Five points.

First, while we might be entering what Ethan Mollick calls “the agentic era,” my experience today shows that simply handing something off to an agent, as opposed to the back-and-forth conversation with a chatbot interface, does not always give the best result. In particular, Claude Cowork gave me terrible results, and eventually, I ran out of daily use credits to actually run the report I wanted in the first place.

Second, the user interface for these GenAI tools is awful and NON-intuitive. Hiding critical options like Deep Research under drop-down menus, and not making it clear what options have been selected when you do a text prompt, is a major problem. All three companies need to hire some good user interface/user experience staff. If I, with decades of computer experience and a goddamn computer science degree, can’t figure this shit out, God help the average non-technical user—and isn’t that supposed to be the point of generative AI, to make things easier for the user??

Third, when these tools work, they are astoundingly good (the Gemini 3.0 Pro report with Deep Research turned on, and the Claude Opus 4.6 report with Research, Web Search, and Extended Thinking turned on). But when they don’t, they can still fail spectacularly (Claude Cowork). So you still have to be the human in the loop here, to figure out when you get a good result versus a bad one. What is frustrating is that all these GenAI tools operate in a black box, with only Gemini making some attempt at explaining what it was doing, as it was doing it.

Fourth, as Ethan himself said in his latest AI Guide:

The top models are remarkably close in overall capability and are generally “smarter” and make fewer errors than ever. But, if you want to use an advanced AI seriously, you’ll need to pay at least $20 a month (though some areas of the world have alternate plans that charge less). Those $20 get you two things: a choice of which model to use and the ability to use the more advanced frontier models and apps. I wish I could tell you the free models currently available are as good as the paid models, but they are not.

In other words, you get what you pay for. And sometimes, even the $20-a-month level isn’t enough, as seen with my experience on Feb. 18th with Claude (and yes, using the cutting-edge features does eat into your usage limits pretty quickly, as I learned to my chagrin).

Finally, I have found that one of the best ways to see the strengths and weaknesses of these GenAI tools is to enter the exact same text prompt into each of them, and then compare and contrast the results you get back. However, that approach is gonna cost you at least US$60 a month, so it might not be worth it to you. (And will I be doing this forever? No; at some point, I will just pick one or perhaps two tools and cancel my subscriptions to the rest of them.)

So, in this current round of testing, I would rank the results as follows (separating the results from Claude into the chatbot-generated report and the Cowork report):

  1. Google Gemini 3.0 Pro (with Deep Research turned on) provided me with a very detailed report with citations, as well as giving me a detailed play-by-play on how it was answering my query, which I really appreciated.
  2. Claude Opus 4.6 report (with Research, Web Search, and Extended Thinking turned on) also gave me a detailed report with citations, but several errors in the comparison chart made me question the overall quality and currency of the report. I also really hated how I had to futz around to get the results I really wanted!
  3. ChatGPT 5.2 Thinking is in a clear third place, in my opinion. Not bad, but not as detailed a result as Gemini and Claude provided.
  4. Claude Opus 4.6 Cowork, with perhaps the prettiest output but easily the least substantial result, using lower-quality sources of information, clearly failed at this task. For those reasons, I ranked it in last place. Ethan’s “Agentic Era” might be true for some applications, but certainly not this one!

I have found these little excursions into generative AI to be quite enlightening, and they have definitely given me some new ideas of topics to explore when I begin my research and study leave to write an OER about the metaverse. Hopefully, you have found them enlightening, too. Please go subscribe to Ethan Mollick’s free Substack newsletter; he tends to update his AI Guide recommendations fairly regularly, and it’s really the best way to stay on top of a rapidly changing and evolving field!

Some Thoughts on the Apple Vision Pro, Two Years After Its Release

Photo by Sam Grozyan on Unsplash

I was surprised to discover, finger-swiping and pinching my way through the Apple Vision Pro subreddits I follow using the Pioneer for Reddit app (while in the Apple Vision Pro, of course!), that the Apple Vision Pro was already celebrating the two-year anniversary of its release in the United States. We Canadians and citizens of about a dozen other countries were only able to get our hot little hands on AVPs later, of course (I had a particularly tortured road until I finally was able to use mine, as explained here, including several frustrating and time-consuming incidents trying to communicate with both Apple’s and UPS’s AI-powered chatbots in efforts to speak with an actual live human being). But, as usual, I digress.

I have been thinking a lot lately about why I am so enamoured with my Apple Vision Pro, and how it compares to the many previous Windows PCVR and standalone VR/AR headsets I have used since January 2017 (Oculus Rift, Oculus Quest 1/2/3, Valve Index, Vive Pro 2). Also, I have been thinking a lot about how I have been using those different headsets, and again, why my use of the AVP has been such a radical departure from previous virtual reality gear. So this blogpost is my attempt to summarize all those thoughts, and get them down on—hmmm, well, not paper, exactly, but pixels?—to share them with you, my faithful blog readers. (By the way, I very much appreciate those of you who do actually take the time to read my ramblings!)


Any sufficiently advanced technology is indistinguishable from magic.
—Arthur C. Clarke

First, the technology of the Apple Vision Pro makes the device feel magical, and I still feel that sense of awe and appreciation while wearing it every day. Shortly after my first week of use, in a message I excitedly shared with my friends on Second Life (first quoted here on my blog), I stated:

The Apple Vision Pro makes every single VR headset I have used to date feel like one of those red plastic View-Masters I used to play with as a kid in the 1960s. The “screen door” effect so evident in earlier VR headsets (where you can see individual pixels, making everything slightly blurry) is COMPLETELY, UTTERLY gone.

The Apple Vision Pro’s display resolution is 50 times more dense than the iPhone’s, and it is such a startling leap forward that I often like to joke that it makes all the older VR/AR headsets I have ever worn feel like a cheap plastic View-Master toy!

After decades of working on Microsoft Windows computers, I used the Apple Vision Pro (and in particular, what I consider its killer feature, Mac Virtual Display) to switch almost completely to macOS and the Apple ecosystem. Let me walk you through a typical workday. I arrive at my cubicle in the librarian’s shared office space, turn on my MacBook Pro, and unpack and set up my Apple Vision Pro. I remove my prescription eyeglasses, put my AVP on, adjust the straps across the back and top of my head for a comfortable fit, and select my usual environment, Mount Hood, the tallest mountain in Oregon:

My preferred Apple Vision Pro Environment for work is Mount Hood, Oregon because I like to be surrounded by pine forest.

I can adjust how much my chosen Environment blends with my cubicle office space by twisting the knob on the upper right of my AVP. Most times, I like to have it set at around 90-95%, so that I feel I am surrounded by forest, with the lake and Mount Hood at my back, but enough of the real world still pokes through so I can, for example, easily grab my insulated Winnipeg Folk Festival coffee mug (with an environmentally-friendly metal straw, so I can take a sip more easily while wearing my AVP!). When I use my Apple Magic Keyboard, it automatically highlights itself as my hands hover over it, pulling itself out of the forested ground when I look down. Everything just works. It’s magic.

Usually, I have the Apple Music app pinned to my right side, and I select a playlist (usually instrumental new age music, but it can vary depending on my mood).

Sorry, any screen captures I take in my Apple Vision Pro always tend to be a bit lopsided! I need to learn how to angle my head correctly.

I pop in my Apple AirPods, and then look at my MacBook Pro. A virtual Connect button hovers over the MacBook Pro’s screen, I tap my finger and thumb together to select it, et voilà! A large, adjustable ultra-high-definition screen appears over my desk, a sharp, crystal-clear wide screen where I can rearrange my macOS windows to my heart’s content: Outlook for email, Word for whatever report I am working on, my latest PowerPoint presentation, my Firefox web browser, etc.

I now spend between four and six hours of my workday in this productivity cocoon. If I need to get up (say, to reheat my coffee in the microwave), I unplug the AVP battery from its power cable, place the battery in my left front pocket, and walk around the office. I step out of the Mount Hood environment, which remains in place like a virtual office partition. If, on my way to the microwave, I happen to look behind me, I can still see my huge Mac Virtual Display, and the Apple Music window, hanging in midair at my workstation.

This setup gives me two things: focus and pain relief.

First, the ability to isolate myself (literally, throwing an immersive, three-dimensional virtual environment around myself) gives me the ability to focus on the task at hand, and I find it helps with my overall productivity. I can even get into a much-desired flow state. (Interestingly, the second-edition Apple Vision Pro with the higher-end M5 processing chip seems to have completely alleviated a problem I had with the original-model AVP, which was that I would develop eyestrain at about the two-hour mark while using it with the Mac Virtual Display feature. The new dual-loop Dual Knit headband is also an improvement over the original, single-band knit headband.)

Second, I have a couple of deteriorating joints in the cervical part of my spine, which unfortunately limits how much time I can spend sitting in front of a desktop computer monitor and keyboard. I have noticed that I can work for longer periods of time, with less neck and shoulder pain, when using the Mac Virtual Display feature on my Apple Vision Pro with my MacBook Pro, than I can in any other workstation setup (including just my MacBook Pro with an external monitor). I am truly grateful that the technology is now sufficiently advanced to help alleviate my pain!

As far as I am concerned, the Mac Virtual Display feature is THE killer app on the Apple Vision Pro. While I have been browsing the AVP subreddits and downloading and installing various apps, I find I use the Virtual Display far more than any other app or program (at least, right now). No other VR headset can give me what the AVP offers, or even come close. The thousands of dollars I have spent on the first and now second editions of the Apple Vision Pro over the past two years have been worth every. single. penny. I cannot imagine living and working without this device.


With all the Windows PCVR and standalone VR/AR headsets I have used, I had always been hopping from one app to another (usually a metaverse platform like Sansar or VRChat, because that is my personal hobby and my research interest). I spent very little time in places like Steam VR Home, or the Meta Horizon Home, where you can see your library of installed VR/AR applications and games, launch them, and switch between apps. But in the Apple Vision Pro, with the Mac Virtual Display feature, I find I am using the device more like a filter or environment through which I am doing actual work with pre-existing programs like Microsoft Office, as opposed to loading and running virtual-reality-native apps. You can see immediately how this is a big difference. I would never for a second even think of using my Meta Quest 3 headset to edit a document in Microsoft Word, or fire off an email, yet I do those sorts of things without a second thought in my Apple Vision Pro.

Which leads me to my next important point: why the relative lack of AVP-native apps and programs is not as serious a problem as it would appear at first glance. When you use the device as a filter, or an environment, as you do with the Mac Virtual Display feature, you are using it with the much richer library of apps and programs available on macOS. Add to that the thousands of iOS apps you can run in flat-screen mode on the AVP (e.g. Firefox, my go-to web browser), and you can see why I am not too terribly concerned about this issue.

But it would appear that many consumers are concerned about how (relatively) slowly new, native-AVP apps and programs are being added to the Apple App Store. In a post made four days ago to the r/VisionPro subreddit, someone asked:

So I finally pulled the trigger and bought an Apple Vision Pro, and honestly… wow. The hardware is insane. The display, hand tracking, eye tracking, immersion – it genuinely feels like a glimpse into the future. Watching films, browsing the web, even basic spatial apps feel miles ahead of anything else I’ve tried.

That said, I can’t shake one big concern: developer support is thin.

Right now it feels like there are hardly any apps that are actually built for Vision Pro. Yes, iPad apps technically work, but that’s not the same as native spatial experiences that really show off what this thing can do. After the initial “this is amazing” honeymoon phase, you start noticing how limited the ecosystem still is.

My worry is this: if Vision Pro doesn’t gain real traction, Apple could quietly scale it back or pivot, and developers will have even less incentive to build for it. That becomes a vicious circle — fewer users → fewer apps → even fewer users.

I really want this platform to succeed because the tech absolutely deserves it. But at the moment it feels like we’re relying on Apple’s long-term commitment and patience more than anything else.

Curious what other Vision Pro owners (or devs) think. Are we just early and impatient, or is the lack of native apps a genuine red flag?

This question sparked some developers and other users to weigh in, with some very insightful commentary, which I wanted to share here with you:

I think Apple knew this going in and that’s why this device is almost like a prototype in a way. They need it in consumers hands to know what it will turn into. They knew the price point wasn’t for general consumption, but the only way to mold this thing into a future device for the masses that has better battery, less weight, and more importantly, costs less, was to get it into the hands of people and watch it do its thing.

Hi, Vision Pro developer here. Long response incoming (TLDR at bottom). You and other users have responded with what I think is a correct analysis that there’s an economics issue in that people won’t buy the Vision Pro until there’s sufficient app support, while developers can’t afford to make a dedicated Vision Pro app until there’s a sufficient user base. I can maybe provide some more perspective on some other aspects of Vision Pro development.

I truly believe that spatial computing is the future of computing, but it won’t be with the current version of Vision Pro. Essentially, I see this iteration of Vision Pro as a (very) cool device for media consumption and a dev tool. In the future, Apple (or some other company, but my money is almost always on Apple) will likely release the product that breaks through with consumers, whether it be the upcoming glasses or some vastly improved Vision Pro, and then developers will begin work making the apps for that eventual product. My personal development projects on Vision Pro are done with the certainty that they will be made at a financial loss to myself, but in the hope that learning how to build streamlined apps and leverage the capabilities of the current device will allow me to be better positioned to be a developer for the breakthrough model. As a developer, this is the time to be experimenting with 3D user experience, to learn what works and what doesn’t as an interaction model for experiences as immersive as Vision Pro allows. 

There are also problems with what Apple allows developers to do. In truth, there’s very little freedom to push the device to its limits and make something really imaginative and unique. Apple has set out strict privacy considerations (which are good broadly speaking, but might be overkill at this point) that lock developers into predefined paradigms that Apple approves of. Of course Apple’s own apps don’t have to obey these restrictions, which allows them to make apps that feel magical, like Experience Dinosaurs. Having attended the Vision Pro Developer conference for the past two years, I can tell you that there are significant frustrations among the developer community over the restrictions Apple has placed.

From where I’m sitting, I think the interest among developers for Vision Pro is reasonably high, but most can’t afford to build for it until there are some big changes in the market. I think in the near future there won’t be more than a smattering of new native apps, mostly made by the passionate developers who see the potential, but once Apple releases the product that clicks for consumers the dam will open up. This will probably result in a flood of apps for this current generation of Vision Pro, as I think Apple has nailed the software side of this, and just needs to work on building a physical frame that consumers want to put on their head.

TLDR: Be patient. At some point spatial computing will likely take off on a future Vision Pro-like model, and then the developers will come.

Developers aren’t going to invest heavily in the platform until there’s more users. Apple knows this. Apple is getting the OS and dev tools maturing while they work towards more consumer-friendly versions of their Vision line. They needed the hardware out and in user and developer’s hands to really start moving forward. Traction will come, I sincerely don’t think there’s anything to worry about there.

I agree wholeheartedly with the second commenter, the developer who stated that “people won’t buy the Vision Pro until there’s sufficient app support, while developers can’t afford to make a dedicated Vision Pro app until there’s a sufficient user base.” It’s a classic chicken or the egg problem, which is why what I said earlier is so important. The number of available apps and programs for the Apple Vision Pro doesn’t really matter at this point (at least, for me), because I am pretty much using it as an immersive environment through which I am running other programs. To date, the only native-AVP apps I have been running regularly have been the previously-mentioned Pioneer for Reddit app, InSpaze, and Explore POV! (I have, however, been avidly collecting dozens of free and inexpensive AVP apps based on recommendations posted to the r/AppleVisionPro and r/VisionPro subreddits! One day, probably when I am on my upcoming research and study leave, I will start to explore more AVP-native programs and apps. In fact, two days ago, Google finally released a version of its popular YouTube video-watching app for the Apple Vision Pro!)

As I said up top, Mac Virtual Display is the killer feature I use most often. And that is what makes my use of the Apple Vision Pro so dramatically and drastically different from previous VR/AR headsets. It’s a productivity tool first, and with my continuing neck and shoulder pain, it’s also been a pain management tool second, an unexpected but not unwelcome way to get through an eight-hour workday with as little discomfort as possible. I am eternally grateful that the technology has actually evolved enough, just in time, to help me still be productive despite my pain! And for those two reasons alone, it is worth every single penny I have spent on this device. As I said before, I am all in.

The upgraded Apple Vision Pro has been a Godsend, and worth every penny I have spent!

ANNOUNCEMENT: My One-Year Research and Study Leave Project

Photo by Jaredd Craig on Unsplash

Open Educational Resources (OER) are teaching, learning and research materials in any medium—digital or otherwise—that either reside in the public domain or have been released under an open license that permits no-cost access, use, adaptation and redistribution by others with no or limited restrictions. While many think of OER as referring predominantly to open textbooks, OER includes a vast variety of resources, such as videos, images, lesson plans, coding and software, and even entire courses. In order for a resource to be considered open, it must fulfill the following criteria:

Modifiable: The resource must be made available under an open license that allows for editing. Ideally, it should also be available in an editable format.

Openly-licensed: The resource must explicitly state that it is available for remixing and redistribution by others. Some open licences may include restrictions on how others may use the resource (see: Creative Commons).

Freely Available: The resource must be available online at zero cost.

—definition adapted from Introduction to Open Educational Resources, Open Education Alberta.

Not long ago, in my 62nd-birthday blogpost, I wrote:

…although it is not official official (and I really should wait until I get the official letter from university administration, which I was told should happen about the end of March), the University of Manitoba Libraries has approved my application to take a one-year Research and Study Leave (at full salary) to start later this year, where I am relieved of my regular academic librarian duties, and can work on a special project. Academic librarians at the University of Manitoba are members of the faculty union, and just like the professors, we have the right (and the opportunity) to pursue research. Again, more details later. I’ve only mentioned this to a couple of people so far, but I think I can share that much detail at this time.

Well, I am very happy to announce that it is now official official: I have formally been approved to take a one-year research and study leave, at full salary, from my employer, the University of Manitoba Libraries, to pursue a special project.

What is that special project, you may ask? Well, I’m just going to quote from my approved application form:

During my Research Leave, I will create a comprehensive Open Educational Resource (OER) addressing a critical gap in scholarly literature: a rigorous, pedagogically-sound introduction to virtual worlds, social virtual reality, and the metaverse, with particular emphasis on applications in higher education. This project builds directly on my expertise as the writer of a popular blog on the topic over the past eight years (https://ryanschultz.com), as well as the owner and moderator of an associated Discord server, representing over 700 members who are actively using various metaverse platforms. The research phase will involve a literature review, plus case study analysis of specific metaverse platforms. The OER will consist of several modules, including topics such as: the history of the concept of the metaverse; how the current wave of generative AI will impact the metaverse, etc. This project requires a dedicated research leave because the rapidly-evolving nature of the field requires intensive, concentrated research and focus. Released under a Creative Commons license, this resource will serve UM faculty and the global educational community, providing a freely-adaptable foundation for teaching, learning, and research.

Yep, that’s right folks, I am taking a full year off from my regular academic librarian duties to write a book about what I know best, and have been blogging about for many years now: virtual worlds, social VR, and the metaverse! (Throwing in a little bit about artificial intelligence and generative AI, as it applies to those topics.)

My leave runs from July 1st, 2026 through to June 30th, 2027, and the best part of it is, since it’s about the metaverse, I can literally work from anywhere: at home in Winnipeg, while visiting the rest of my family in Alberta, on the beach at Bora Bora (highly unlikely, although the Apple Vision Pro provides a suitable substitute in a pinch!), etc. The only rule is you have to vacate your current office at the university for whoever is filling in for you while you’re away on research leave, which seems pretty reasonable to me. However, I will be borrowing some of the VR/AR equipment which I had purchased on previous years’ travel and expense funds (T&E funds for short; essentially, extra money allocated to faculty and librarians for things like conference travel, books, computers, etc.):

Because part of this research work will involve social VR, I will have to move some virtual reality equipment purchased on previous years’ T&E funds from my office in Elizabeth Dafoe to my home. This equipment will be returned to my office after my Leave ends.

Oh, and I also have to promise that I will come back to my job at the University of Manitoba Libraries after my leave ends, which is fine, since I am planning to stay until I retire at age 65, in January 2029. This will, of course, be the last research leave I take before I do retire.

Best of all, after my OER is complete, anybody can use it for teaching, learning, and research purposes, including editing, remixing, and repurposing it (the exact rights will depend on which Creative Commons license I choose to publish it under).

Watch for updates on this project as I get closer to July 1st. Stay tuned!

Photo by Windows on Unsplash