Generative Artificial Intelligence Tools for Academic Research: AI Research Assistants, AI-Powered Document Analysis Tools, and A Look at Elicit, Undermind, and NotebookLM

I freely admit that this was not the next blogpost I was planning to write, but as a follow-up to my previous detailed discussion of what I have started to call the “Big Three” of good (sometimes, good enough) general-purpose generative AI (GenAI) tools—ChatGPT, Claude, and Gemini—I wanted to write a little bit more about two particular subsets of GenAI tools focused on the academic research process. And I have two things coming up on my calendar which necessitate academic research:

I figured, well, what better time to demonstrate some of these GenAI tools than with some real-world, real-life examples from my own use?

Some Definitions

Will new generative AI tools change how academic research is done? Photo by Dan Dimmock on Unsplash

These two categories of tools are:

  • AI research assistants: tools specifically designed to help researchers search, discover, synthesize, and analyze academic and scientific literature. Each of them uses large language models (LLMs, i.e. GenAI) combined with scholarly databases (e.g. PubMed for medicine; AGRICOLA for agriculture), to help users find relevant papers, extract key findings, and synthesize evidence across studies. Examples of such tools are Elicit, Undermind, Consensus, and Assistant by Scite. Keep in mind such tools are only as good as the scholarly databases they access! For example, while Consensus proudly announces partnerships with major academic publishers like Sage, Wiley, Taylor & Francis and ACS on their front page, Elicit only seems to use freely-accessible sources like Semantic Scholar and OpenAlex, as you will see below.
  • AI-powered document analysis tools: While AI research assistants search across published scholarly literature, GenAI-powered personal library/document analysis tools are built around the concept of “source-grounding” — you upload your own documents (e.g. PDFs of journal articles and conference papers, word processor documents, websites, YouTube videos, audio files, etc.) and then the GenAI tool works exclusively from those materials. They’re intended to help researchers make sense of a lot of information. The best-known of this relatively new category of GenAI tools is Google’s NotebookLM, but there are other products similar to it: Nouswise, and the open source tool Open Notebook.

To summarize the difference between the two: AI research assistants (Elicit, Consensus, etc.) help you discover literature, while AI-powered document analysis tools (NotebookLM, etc.) help you analyze and synthesize literature you’ve already collected. They occupy different stages of the research workflow.


Undermind

I currently have a Pro account with Undermind, at US$16 per month, which is one step up from their limited-use, free service. My initial question to Undermind was as follows:

I am researching the topic of the metaverse, both older virtual worlds (e.g. Second Life) and newer social VR/AR platforms (e.g. VRChat). I am interested in the history of the concept of the metaverse, and how the meaning of the term “metaverse” has evolved over time.

Undermind took this initial question, and asked a series of follow-up questions in order to clarify what I was looking for. Here’s part of that chat:

Eventually, I was able to come up with a more specific search, as follows:

The question I finally sent Undermind off to work on was as follows:

Find academic literature on the history of platforms and user practices associated with what is now discussed as the metaverse, staying broad across decades. Focus on the history of virtual world platforms and how people used them, including older virtual worlds such as Second Life and newer social VR/AR platforms such as VRChat, while also including adjacent predecessor platforms that predate the coining and later popularity of the term “metaverse.” Emphasize user practices broadly rather than narrowing to a single type of practice, and help trace how the meaning of the term “metaverse” evolved over time in relation to these platforms and practices.

My search results were 80 papers which Undermind determined were relevant to my final question, covering a publication date range of 1970 to 2024:

Note, at the bottom of this screen capture, how Undermind actually went through and sorted these papers into eight broad categories or subtopics, in essence giving me a nice overview of these 80 published academic papers. This kind of context/overview work is something at which GenAI tools tend to excel, and it can save an academic researcher hours of work (but, of course, you still have to be the human in the loop, and actually read and digest all the papers retrieved!).

But even more important to note is how GenAI tools like Undermind mark a dramatic change in information retrieval: a shift away from the sometimes-arduous task of using keyword searching, controlled thesaurus vocabulary, and Boolean logic to search traditional academic databases (e.g. PubMed and its MeSH, or Medical Subject Headings), towards actually having a conversation with the search tool: starting with a plain English statement, answering follow-up questions to clarify and refine that initial prompt into a final search question, and then submitting it.

If you like what you see (and I did), you can click on the Generate Report button to start a new process, which prompts you:

I’d like to write a report based on papers from the search “History of metaverse platforms and practices”.

Let’s briefly discuss the content before you start writing.

And again, Undermind asks a helpful series of clarifying questions to help you figure out what you want from a report on all this research data it dredged up:

The final report (which I could save as a PDF or markdown file, using one of three citation styles), looked like this:

The resulting report had 36 citations. However, unlike the Elicit report, the Undermind report did not have a section where it got into the nuts-and-bolts of what sources it used to discover the papers used in this report, nor the method by which it selected them. So, while the initial read looked good, it would take actually getting and reading the full-text of the papers cited in this report to determine exactly how good it was.


Elicit

I also decided to spring for a Plus-level account on a tool similar to Undermind, called Elicit (again, one step up from a free, Basic account, which offers a more limited service).

Having already done the Undermind search mentioned in the previous section of this blogpost, I decided to use the final search statement as my starting point, plugging it verbatim into Elicit to see what would happen…

…only to discover that Elicit doesn’t consider that a very concise search question at all! (Actually, I kind of agree here. But Undermind let me do it, anyway!) However, instead of asking a series of follow-up questions like Undermind did, Elicit offered a series of buttons which, when pressed, rewrote the question to be much more narrowly focused, for example:

So, I clicked on the offered “Temporal and conceptual scope” button, edited it a bit to include specific examples of what I was talking about, and hit the green Send button, using the default settings of research papers and asking for a general review. Elicit then asked me what level of detail I wanted in the answer (with the most detailed alternative greyed out unless I pony up more money for their Pro plan, one level up from my measly Plus plan):

I went with the Balanced report. However, I am not crazy about the limitations, especially when I could do a more traditional database search, using one of the over 650 databases offered by my university’s library service, without such petty limits as “the top 500 sources” (and, remember, that’s a ranking based on a newish GenAI computer algorithm, not keyword matching using a controlled thesaurus vocabulary and Boolean logic to construct a search strategy, the old-school way). Essentially, it’s a trade-off: a search using plain English language to start, with prompts to refine it, and a pre-limited number of sources examined—and with even more restrictions on the number of sources from which a comparative chart would be constructed (25). If you want more—and many users would want more—then you’ll have to pay extra for it.

However, for all of its limitations, the final report looked pretty good, at first. You can save a PDF version of the report, and you can even ask questions of it, via a chatbot interface (using the chat box located in the bottom-right corner of the screen capture below):

However, in doing a read of the PDF report, I was struck by several things:

  • Again, the hard limit of 25 papers from which data was extracted, which essentially makes Elicit useless to me at this level;
  • The fact that zero papers of the 500 selected were screened out by the selected criteria (see image below, taken from the report). To be honest, this screening technique probably would have worked much better for examining clinical research studies in, say, medicine, than for papers about metaverse platforms;
  • The search was performed against “over 138 million academic papers from the Elicit search engine, which includes all of Semantic Scholar and OpenAlex,” but again, my librarian mind kept thinking that there would be a lot of full-text content that was locked away behind academic publisher paywalls. And indeed, of the 25 sources picked for this report, only 15 sources had the full text of the article retrieved. For the other ten sources, Elicit likely relied only on the (freely-available) author-provided abstract. And indeed, many of these GenAI research tools tend to rely on scraping free sources such as Semantic Scholar and OpenAlex, rather than enter into potentially expensive agreements with academic publishers such as Elsevier and Wiley, which would give their users full access to content they own, and frankly, more complete data from which to write reports.
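Since Elicit’s corpus, per its own report, “includes all of Semantic Scholar and OpenAlex,” you can actually poke at that same free metadata yourself. Below is a minimal sketch (Python, standard library only) of querying OpenAlex’s public works-search endpoint; the query text and the fields I pull out are just illustrative, and the open-access flag hints at why so many of a report’s sources end up abstract-only.

```python
# Minimal sketch: searching the free OpenAlex API, one of the open
# metadata sources Elicit draws on. The query string below is just an
# example; adapt it to your own topic.
import json
import urllib.parse
import urllib.request

def build_openalex_url(query: str, per_page: int = 5) -> str:
    """Build a works-search URL for the public OpenAlex REST API."""
    params = urllib.parse.urlencode({"search": query, "per-page": per_page})
    return f"https://api.openalex.org/works?{params}"

def search_openalex(query: str, per_page: int = 5) -> list[dict]:
    """Fetch matching works; the is_oa flag shows whether a paper is
    openly accessible, or likely locked behind a publisher paywall."""
    with urllib.request.urlopen(build_openalex_url(query, per_page)) as resp:
        data = json.load(resp)
    return [
        {
            "title": w.get("title"),
            "year": w.get("publication_year"),
            "open_access": w.get("open_access", {}).get("is_oa"),
        }
        for w in data.get("results", [])
    ]

if __name__ == "__main__":
    for work in search_openalex("metaverse virtual worlds history"):
        print(work)
```

This is only a sketch of the kind of retrieval such tools do under the hood, of course; the tools then layer LLM-based relevance ranking and synthesis on top of results like these.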

I actually came away from reading this report more disturbed by its limitations than I was impressed by any conclusions it was able to draw. Again, I hasten to add that Elicit would probably have performed much better with a real-world use case that fit it better (a systematic review of clinical medical trials, for example). It might just be that my admittedly fuzzy subject area doesn’t fit the way Elicit works, at all. And that’s fine.

However, what bothered me most was that somebody without my 30-plus years of academic library experience would run this report, read it, nod, and think that this was a good response. Even worse, an in-depth response. When, in fact, a more traditional search against a library database (perhaps executed with the expertise of a professional librarian) would give much better and more thorough search results.

Even worse, how many of those Elicit users would stop here, and run with this summary, and not actually go and read the full text of the 25 papers that were selected for the report, not to mention the countless papers NOT included? I would suspect that it’s more than a few. So yeah, this academic librarian does have some reservations about where all this is headed. However, I can also confess that the report did give me a few new ideas to think about, and some possible new directions to follow in my own academic research, which I might not have found otherwise.


NotebookLM

Now, I turn to NotebookLM, Google’s Language Model (the “LM” in the product name), which tries to do for your personal research library the same sort of thing Google Gemini does with—well, with an infinitely larger library of millions and millions of documents. The idea is the same, though: you feed a (much smaller) set of documents, audio, video, etc. into a service which then allows you to use a chatbot-type interface to ask it questions, and (hopefully) get some useful answers back. But, again, how useful NotebookLM will be to you depends entirely upon what you feed it! In computer science we have a saying, with the acronym GIGO: Garbage In, Garbage Out. If you fill NotebookLM with crappy sources, don’t be too surprised if you get crappy answers back!

I have a Google AI Pro plan, with 2 Terabytes of storage, which includes access to Google Gemini 3.1 Pro. This costs me CA$26.99 per month (approximately US$20), and frankly, I’m pretty sure I am not getting my money’s worth out of it. With that, my NotebookLM service is rated at the Pro level, which means I can have up to 500 notebooks, with each notebook having up to 300 sources. (NotebookLM Standard, the free service, lets you have up to 100 notebooks, with up to 50 sources each. You can compare the various levels of plans here.)

I have uploaded 103 documents (mostly PDFs of journal articles from my personal Zotero research library) into NotebookLM. Again, some of them are probably of lower quality than others, so the GIGO rule applies. For example, the notebook summary it seems to have automatically created veers alarmingly close to gobbledygook, and there’s even a mention of (gasp!) blockchain, and the audacity to name it as a “primary pillar necessary to facilitate real-time, multisensory interactions between users.” (WHAT THE ACTUAL FUCK?!?? Okay, I take it back, it is gobbledygook, a Frankenstein-like creation stitched together from bits and pieces of documents I had uploaded. I actually created this monster.)

There’s absolutely no explanation of how or why this summary was generated. In fact, I found the whole user interface of NotebookLM to be extremely confusing. I had to dig through the product’s Frequently-Asked Questions list to find out why some things wouldn’t load: any uploaded file over 200MB in size, any source with over 500,000 words, and any copy-protected PDF files will not load, but you don’t get any sort of error message if you try. In my limited testing thus far, you get…no response.
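Given those silent failures, a simple pre-flight check before uploading can save some head-scratching. Here is a minimal sketch based on the two limits quoted above (200MB per file, 500,000 words per source); the check_source helper and its rough whitespace-based word count are my own hypothetical code, not anything NotebookLM itself provides, and copy-protected PDFs can’t be detected this way at all.

```python
# Sketch: pre-flight check against NotebookLM's documented upload limits
# (per its FAQ, oversized sources fail silently with no error message).
from pathlib import Path

MAX_BYTES = 200 * 1024 * 1024   # 200MB per-file size limit
MAX_WORDS = 500_000             # per-source word limit

def check_source(path: str, extracted_text: str = "") -> list[str]:
    """Return a list of reasons a source would silently fail to load.

    extracted_text is the text you pulled out of the file yourself
    (e.g. via a PDF library); the word count is a rough whitespace
    split, so treat it as an approximation.
    """
    problems = []
    size = Path(path).stat().st_size
    if size > MAX_BYTES:
        problems.append(f"{path}: {size / 1e6:.0f} MB exceeds the 200 MB limit")
    if extracted_text and len(extracted_text.split()) > MAX_WORDS:
        problems.append(f"{path}: over {MAX_WORDS:,} words")
    return problems
```

An empty list means the file passes both checks; anything else tells you why NotebookLM would likely just stay silent.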

Even worse, this feels like a product that Google has just sorta dropped on us, with only the previously-mentioned FAQ and an email address for product support (yes, even for Pro users). I shouldn’t be surprised, I suppose. Just like I wouldn’t be surprised if Google is silently compiling notes on how people use NotebookLM, or decides one day to yank it away, like so many other previous Google products and services.

Honestly, I need to spend some more time playing around with NotebookLM before I issue any final summary judgement on the product. In particular, I get the feeling that the GIGO rule really applies to NotebookLM! Google themselves, in their NotebookLM FAQ, state:

Sometimes NotebookLM can’t answer your question because of…

  • Information not in sources: NotebookLM answers questions based on the information provided in your uploaded sources. If the answer isn’t in the source material, it won’t provide a response.

I had a very interesting day playing with these GenAI tools, and I learned a few things. I’ll keep you posted on how things go!

Photo by Jaredd Craig on Unsplash

UPDATED! Generative AI Update, March 2026: My Updated Presentation on Artificial Intelligence and GenAI, Plus My First Thoughts on the Claude Add-In for PowerPoint, and Yet Another Head-to-Head Comparison Between Claude, Gemini, and ChatGPT

I am (as you can clearly tell by this absurdly long blogpost title) trying to do three related things here. If you want, you can skip to the very end, where there will be an executive summary, where I have some thoughts to share about (waves hands) all this.

First, I wanted to share an updated version of the original slide presentation on artificial intelligence and generative AI, which I shared in a December 2025 blogpost. I used to think that keeping track of the many metaverse platforms I blog about was a task similar to herding cats, but let me tell you, it was a breeze compared to trying to stay abreast of all the rapidly changing and accelerating developments in generative AI!

Keeping on top of developments in generative AI is like herding cats, where the cats are multiplying and mutating!
One of the updated comparison charts in my PowerPoint slide deck (see link below to download)

Below is my updated PowerPoint slide presentation, complete with my speaker notes, for you to download and use as you wish, with some stipulations. I am using the Creative Commons licence CC BY-NC-SA 4.0, which gives the following rights and restrictions):

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International

This license requires that reusers give credit to the creator. It allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, for noncommercial purposes only. If others modify or adapt the material, they must license the modified material under identical terms.

BY: Credit must be given to you, the creator.

NC: Only noncommercial use of your work is permitted. Noncommercial means not primarily intended for or directed towards commercial advantage or monetary compensation.

SA: Adaptations must be shared under the same terms.

(The tool I used to determine the appropriate Creative Commons licence can be found here: https://creativecommons.org/chooser/.)

So, with all that said, here is my PowerPoint presentation (please click on the text link or the black Download button under the picture, not the picture itself):


NEW: Claude just released add-ins for Microsoft Office

Second, today I installed a brand-new add-in from Anthropic’s Claude GenAI tool, which is supposed to work with Microsoft PowerPoint. This is an initial review of a very beta product.

And I have an actual real-world use case against which I will be trying out this new add-in: the design of an actual keynote presentation which I will be giving in a couple of weeks. (I am also using it in the third section, but in a different test of all three of ChatGPT, Claude, and Gemini.)

Now, before I get into this, I should explain that I have tried in the past with all three GenAI tools on which I currently have paid accounts (OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini) to create a PowerPoint slide deck presentation design—only to get highly disappointing and completely unusable results back. So I was not expecting much here, particularly as this is still a research beta version of the PowerPoint add-in.

My initial prompt to the Claude add-in to Microsoft PowerPoint was:

Please create a new PowerPoint slide presentation design with the title of the presentation being: “Your Metaverse Is Too Small: How the Biases and Preconceptions of Virtual Worlds Hinder Their Use in Education.” The theme of the talk is educational uses of virtual worlds, social VR, and the metaverse in general. I want to have some nice background images to use in some of my slides, as well as a visually pleasing title slide. I’d prefer blue as a colour in the slide deck theme, thanks!

And Claude chugged away on my request, keeping me posted on what it was doing:

And it even prompted me to be sure I wanted to delete the Claude add-in help slide!

The set-up for the title slide took a long, loooong time, much longer than it would have taken me to click on the Designer button in the PowerPoint toolbar and just select one of the default options and a colour scheme. Eventually, I just gave up on waiting and went off to work on another task, leaving Claude to beaver away. After fifteen minutes, I realized that I still had to explicitly okay the clearing of the original slide design (insert Homer Simpson “D’oh!”), which I did, so that the work could continue.

If I could summarize the result in one word, it would be: meh (again, shout-out to The Simpsons):

I mean, I could easily do better than this myself. And two dots do not make, as I asked for, “some nice background images to use in some of my slides, as well as a visually pleasing title slide.” Here’s my section title slide:

Again, extremely underwhelming, and frankly, not an improvement at all over my previous failed attempts to generate a PowerPoint slide presentation design using any of the GenAI tools (Claude, ChatGPT, or Gemini). Mind you, I have deliberately stayed away from using the image-generation tools in these products; I can spot a GenAI-produced image from a mile away by this point, having been playing around with these tools, off and on, since they first came out in 2022.

Claude continued to generate all the standard versions of PowerPoint slides in this theme, ending with a final slide that, I must confess, I kind of liked the look of (although, again, I would have preferred some sort of background image):

This is where the process got interesting, as I finally decided to stop having to manually okay each individual step, and just gave Claude carte blanche to do whatever it felt was best. (I mean, the worst that could happen was that it would come up with something I hated so much that I threw it away and started over.)

Claude was still working away while I took my lunch break, giving feedback along the lines of “Build stunning title slide design.” 🙄 (I’ll be the judge of what’s considered stunning, Claude. Calm the fuck down.)

Here’s the final result, my “stunning” title slide (insert RuPaul’s Drag Race shade death rattle):

All Claude added was three pieces of clip art in the upper right corner of the slide, plus a few more bubbles/dots. So, yes, this is, once again, a complete fail. I will probably still use this as a basic slide design, but obviously I will be locating and using my own images to illustrate it. This is now the second new tool in a week (first Claude Cowork, and now the Claude add-in for PowerPoint) which has utterly failed at the tasks given it. I am not impressed.


Third, and finally, thank God, I had much better luck in issuing all three general-purpose GenAI tools the exact same text prompt, a technique I had used before here (and one which I found very useful in comparing and contrasting the responses):

I am writing a keynote presentation on the mistakes companies make when creating, designing, and marketing the following product category: virtual worlds, social VR/AR, and metaverse platforms in general. Please give me a list of failed or shut down metaverse platforms, along with reasons why they might have failed. Please cite both academic and industry sources of information in your answer.

In all cases, I used the latest models as specified in Ethan Mollick’s latest AI Guide:

  • ChatGPT’s GPT 5.2 Thinking with the Extended Thinking option;
  • Claude Opus 4.2 Extended Thinking with the Research option; and
  • Gemini 3 Thinking with the Deep Research option.

Unlike the last comparison, I’m not going to go into great detail on the results (because I will be using some of these results, once they are double-checked against more authoritative sources, in an actual keynote presentation I will be delivering later this month). Instead, I will give my general overall impression of each report (and all three did provide a detailed report with citations).

Please note that I deliberately left it up to the specific GenAI tool to define what “failed” or “shut down” means, how far back and how thoroughly to search for failed platforms, and what metaverse platforms to include or exclude from its final report. As always, I find the differences between the reports to be an interesting way to compare and contrast the results, so below I will give some basic statistics:

GenAI Tool | # Failed Platforms Listed | Time Range of Failed Platforms | # Citations in Final Report
---------- | ------------------------- | ------------------------------ | ---------------------------
ChatGPT    | 15 | 2003 to 2026 | 23
Claude     | 13 | (start dates not given) to 2023; “effectively failed, still limping along” | 30
Gemini     | 9  | 2009 to 2024 (but some platforms had no timeline information given) | 33

While ChatGPT was the most thorough in listing failed metaverse platforms, and seems to have gone the furthest back in time (including There.com, which launched back in 2003!), it also had the fewest citations, and most of them were historical, platform-related announcements (e.g. a 2020 announcement of the shutdown of the then-social-VR platform High Fidelity by its CEO) rather than peer-reviewed academic journal articles (although there were a couple of those, too). While Claude had more citations, a review of those showed mostly blogs and news websites, with fewer references to actual academic research papers (probably because much of that content is locked behind academic publisher paywalls, although there were still quite a few academic references to free sources such as ResearchGate and PubMed Central/PMC; see the Claude report image below for one section which did focus on academic sources). Of the three, Gemini’s 33 citations included the most resources which I would consider academic, from a good range of different publishers (as well as more informal websites). Interestingly, Gemini also included a list of resources which it looked at, but chose not to include in the final report, something which neither ChatGPT nor Claude offered! I thought that was particularly valuable, in case something else caught my eye to follow up on. Gemini for the win here.

Gemini was also notable for the strong, overarching narrative structure to its report, something which I had also noticed in previous queries using this GenAI tool. Gemini has clearly been trained well in telling a cohesive story! However, Claude was also notable for listing, in a separate section of its report, what it called “cross-cutting failure themes” in its 13 examined metaverse failures (which is definitely a phrase I will be stealing for my final keynote presentation!). By comparison, the final report from ChatGPT, while thorough, was jargon-heavy, poorly-formatted, and seemed to lack the final polish of its competitors. For example, there were three separate sections titled “failure themes and comparative analysis,” “theme-to-platform mapping,” (?!??) and “top 10 failures by primary cause.” It was, in my opinion, the poorest of the three reports generated, just in terms of sheer (lack of) organization and narrative. Again, Gemini for the win!

Gemini’s report had a strong, overarching narrative structure—something which I have noticed seems to be a particular strength of this GenAI tool, a sort of final overall polish to the text that ChatGPT, in particular, was lacking in its report (see below).
Claude’s report had a summary section titled “cross-cutting failure themes,” which I am definitely stealing for my keynote presentation!
Compared to the Gemini report, the ChatGPT report was jargon-heavy and poorly-formatted.

EXECUTIVE SUMMARY: So, here are my final thoughts.

  • It is getting harder and harder (in fact, almost a full-time job) to keep on top of what is fast becoming an arms race between the top three general-purpose generative AI tools (ChatGPT, Claude, Gemini), not to mention an ever-growing legion of more narrowly-focused applications, which might be better at certain specific tasks, such as writing programming code or generating music.
  • While Claude seems to be good at putting new agentic (e.g. Claude Cowork) and add-in tools (Claude for PowerPoint) into the hands of its users first, my personal experience with these new tools has been very disappointing, even comically bad. However, Claude’s chatbot interface works well for generating detailed answers with citations (although slightly edged out by Gemini).
  • I am impressed by Gemini’s consistent ability to create a strong narrative structure within its generated reports, something in which ChatGPT in particular is noticeably lacking. It also came first in a key metric: actual citations to academic literature, not just freely-accessible websites (blogs and news articles).
  • If I were forced to rank the three GenAI tools by just this one head-to-head-to-head comparison (i.e. the third part of my blogpost), I would rank them as follows:
    • 1st: Google Gemini.
    • 2nd: Anthropic Claude.
    • 3rd. OpenAI ChatGPT.
  • Again, when these GenAI tools work, they work well (sometimes very well!), but when they fail, they fail spectacularly. Which, in my mind, is another reason why it is good to put these tools to the test regularly, and use them in real-life situations, so that you can learn what they are good and bad at!

HOUSEKEEPING NOTES: Going Off-Topic on My Blog, and Being Clear About How I Use AI in My Blogposts

One of the advantages of having a blog with your name in the title is that you can write a blogpost about literally anything, and it’s technically not off-topic! 😜 I have been sharing a lot of personal details about my life recently, and I wanted to talk about my tendency to go off-topic on this blog.

A classic example of this is when I correctly forecast, on January 25th, 2020, that we were going to face a global pandemic, which led to many of my blogposts after that point being about COVID-19. (The financial planner I had at my bank at that time, whom I shared my prediction with when discussing the financial impact of a pandemic, was convinced that I was psychic, but all I was doing was paying close attention to the news that was coming out of China about a mysterious new virus.) Many of my readers at that time were no doubt puzzled as to why I had so suddenly shifted focus, but obviously, everybody started paying attention by March 2020, as the world shut down. (I still cannot wrap my mind around the fact that over a million Americans died from COVID-19, some of them due to the misinformation, disinformation, and crazy conspiracy theories spread widely via social media.)

This will always be, first and foremost, a blog about the metaverse.

So, what I learned from that experience is that, while you can go off-topic from time to time, you probably shouldn’t go completely overboard, like I did during the pandemic. This will, at heart, remain a blog about my passionate hobby and my research interest: virtual worlds, social VR, and the metaverse. The only recent change I have made is to explicitly include, in my blog’s tagline, a mention of artificial intelligence and generative AI (GenAI):

News and Views on Social VR, Virtual Worlds, and the Metaverse, plus Artificial Intelligence and Generative AI’s Impact on the Metaverse

And, as my tagline states, I will try to keep my writing about AI focused on how this rapidly-evolving technology is now, and will in future, impact the metaverse. There are so many other people writing about AI during this new hype cycle, sparked in 2022 by the startling results being produced by a new crop of generative AI tools. And frankly, those other writers are doing such a good job, that the best I can do is refer you to them, and urge you to follow them! But I will share, as I did recently, my own experience in learning how to use GenAI tools effectively and efficiently.

Whether we like it or not, all of us are going to be interacting with AI in the future.

What I will start to do, is be transparent about how and when I do use GenAI tools in writing a particular blogpost. We are already awash in ChatGPT-generated slop passing for content on the internet, and frankly, I think I owe it to my blog readers to tell them when I use such tools in my writing. Therefore, from now on, you will see a purple box at the top of all my blogposts, which will be:

  • Either a statement, “EDITORIAL NOTE: No generative AI tools were used in the creation of this blogpost,” or
  • A statement “EDITORIAL NOTE: I used the following generative AI tools in creating this blogpost,” followed by a list of all such tools used, where I used them, and how I used them.

You can see an example at the very top of this post. Below is a screenshot of another example of what I’m talking about, from a recent post on my blog:

The last thing I wanted to say is that this is (as I said up top) a personal blog, and I will, from time to time, talk about off-topic things, such as the TV show Heated Rivalry and how it made me feel. I realize that in that blogpost I did try to add a bit about how the concept of “coming out” is different in the metaverse, in order to try and make the post fit the tagline of my blog. However, in reading it afterwards, I felt that I kind of shoehorned that part in, and not terribly successfully at that. So from now on, when I do go off-topic, I won’t twist myself into a pretzel to try to make it about the metaverse!! Like I said up top, it’s a blog with my name in the title, so whatever pops into my mind when I sit down in front of the WordPress editor window could become an off-topic blogpost. Fair warning!

For example, I just finished binge-watching all three seasons of the TV series Heartstopper, so you can definitely expect an off-topic blogpost about that sometime soon!! 🏳️‍🌈

While I get nowhere near the traffic I did during the heady heydays of the 2019-2022 metaverse hype cycle, I still do get enough traffic to indicate that it’s worthwhile to keep blogging. I find I enjoy writing!

Thank you to those of you who post comments on my blogposts, and leave messages on my Contact Me page. However, I am very bad at getting back to people who leave messages via the Contact Me page, so I have a huge, huuuge backlog to dig through!!

That’s it for now. Take care!

Yet Another One Bites the Dust: Meta’s Shutdown of Horizon Workrooms

In a recent blogpost about the shutdown of MeetinVR, I wrote:

Facebook (which had gone to all the trouble and expense of rebranding as Meta during this ridiculous hype cycle) has dropped literally hundreds of millions of dollars into acquiring Oculus and trying to build a business metaverse platform, and failed even to entice its own employees into using it (let alone anybody else)…

I predict that we are going to see a “metaverse winter,” much like the previous “AI winters,” when the initial promise and hype of the technology hits what the Gartner Group politely calls “the trough of disillusionment.” And I predict we are going to see a lot more shutdown announcements like this throughout 2026.

Well, guess what? Once again, I am late in reporting this, but Meta has finally shut down its Horizon Workrooms product, a social VR platform intended for business use. According to a Road to VR news report by Scott Hayden, Horizon Workrooms’ final day was Feb. 16th, 2026.

Scott Hayden’s article on the shuttering of Horizon Workrooms, Road to VR, Jan. 16th, 2026

This is hardly a surprise. As I said up top, I don’t think anybody was using Workrooms. I wrote about the launch of the open beta of Workrooms in August 2021, at a time when Facebook Horizon (as it was then called) was still in closed, invitation-only beta. One neat feature was that it allowed you to bring your physical keyboard into the virtual space via keyboard tracking (this only worked for certain models of keyboard, though). One month later, they announced a collaboration with Zoom, but I don’t know if that went anywhere.

By October 2022, rumours were rumbling, with leaks from internal memos stating that even Meta’s own employees were avoiding the use of Workrooms. Shortly thereafter, The Verge issued a savagely critical evaluation of Workrooms. The product was buggy, the avatars were cartoony, and compared to simpler solutions like Zoom and Microsoft Teams, there just seemed to be too high a cost of entry for its designated use case. Meta finally decided this year to take the ailing dog out back and shoot it. I’m surprised it lasted as long as it did. Scott Hayden reported:

For existing users, Meta has not announced a direct replacement for Workrooms; the company suggests users look into third-party apps such as Arthur, Microsoft Teams Immersive and Zoom Workplace.

Oh, and Meta has also been shelving projects, and laying off staff in its Reality Labs division, according to Scott’s article and CNBC. So it would appear that our metaverse winter is now in full swing.

Photo by Bob Canning on Unsplash

But keep in mind that winter is only one season out of four. And winter has its own special beauty, even if it doesn’t seem like there’s very much going on under all that ice and snow.

Yes, we are probably going to see more platforms shut down, like Workrooms, and more companies go out of business (not Meta of course, smaller ones). But those of us who have already been active in the metaverse for many years aren’t going anywhere during these lean, cold times. We’ve found our people, our communities, wherever we happen to meet up, whether it’s a flatscreen virtual world like Second Life or a meetup in social VR like VRChat. We hop from world to world as needed.

Yes, the current marketplace struggles will still impact us all in some way. We can expect moments of panic and chaos (e.g. when Ready Player Me was bought out by Netflix, and thousands of developers had to scramble to replace their avatar systems). But we will hunker down, use the downtime productively, and wait for the next season to arrive.