UPDATED! Generative Artificial Intelligence Tools for Academic Research: AI Research Assistants, AI-Powered Document Analysis Tools, and A Look at Elicit, Undermind, and NotebookLM

I freely admit that this was not the next blogpost I was planning to write, but as a follow-up to my previous detailed discussion of what I have started to call the “Big Three” of good (sometimes, good enough) general-purpose generative AI (GenAI) tools—ChatGPT, Claude, and Gemini—I wanted to write a little bit more about two particular subsets of GenAI tools which are focused on the academic research process. And, since I have two things coming up on my calendar which necessitate academic research, I figured, well, what better time to demonstrate some of these GenAI tools than with some real-world, real-life examples from my own use of them?

Some Definitions

Will new generative AI tools change how academic research is done? Photo by Dan Dimmock on Unsplash

These two categories of tools are:

  • AI research assistants: tools specifically designed to help researchers search, discover, synthesize, and analyze academic and scientific literature. Each of them uses large language models (LLMs, i.e. GenAI) combined with scholarly databases (e.g. PubMed for medicine; AGRICOLA for agriculture), to help users find relevant papers, extract key findings, and synthesize evidence across studies. Examples of such tools are Elicit, Undermind, Consensus, and Assistant by Scite. Keep in mind such tools are only as good as the scholarly databases they access! For example, while Consensus proudly announces partnerships with major academic publishers like Sage, Wiley, Taylor & Francis and ACS on their front page, Elicit only seems to use freely-accessible sources like Semantic Scholar and OpenAlex, as you will see below.
  • AI-powered document analysis tools: While AI research assistants search across published scholarly literature, GenAI-powered personal library/document analysis tools are built around the concept of “source-grounding” — you upload your own documents (e.g. PDFs of journal articles and conference papers, word processor documents, websites, YouTube videos, audio files, etc.) and then the GenAI tool works exclusively from those materials. They’re intended to help researchers make sense of a lot of information. The best-known of this relatively new category of GenAI tools is Google’s NotebookLM, but there are other products similar to it: Nouswise, and the open source tool Open Notebook.

To summarize the difference between the two: AI research assistants (Elicit, Consensus, etc.) help you discover literature, while AI-powered document analysis tools (NotebookLM, etc.) help you analyze and synthesize literature you’ve already collected. They occupy different stages of the research workflow.
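To make the “source-grounding” concept a little more concrete, here is a minimal sketch (in Python) of what tools like these are doing under the hood: pick the passages from a set of documents that look most relevant to your question, then build a prompt that orders the LLM to answer only from those passages. Everything here (the function names, the toy word-overlap scoring, the prompt wording) is my own illustrative assumption, not the actual code of NotebookLM, Elicit, or anybody else; real tools use vector embeddings and far more sophisticated retrieval:

```python
# A minimal sketch of "source-grounding": retrieve relevant passages from
# the user's own documents, then force the LLM to answer only from them.
# All names and the prompt wording are illustrative assumptions.

def chunk(text: str, size: int = 500) -> list[str]:
    """Split an uploaded document into fixed-size passages."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(passage: str, question: str) -> int:
    """Toy relevance score: count question words found in the passage.
    (Real tools use vector embeddings rather than word overlap.)"""
    question_words = set(question.lower().split())
    return sum(1 for word in passage.lower().split() if word in question_words)

def grounded_prompt(documents: list[str], question: str, top_k: int = 3) -> str:
    """Build a prompt that restricts the LLM to the user's own sources."""
    passages = [p for doc in documents for p in chunk(doc)]
    best = sorted(passages, key=lambda p: score(p, question), reverse=True)[:top_k]
    return ("Answer the question using ONLY the source passages below. "
            "If the answer is not in the passages, say you don't know.\n\n"
            "SOURCES:\n" + "\n---\n".join(best) +
            "\n\nQUESTION: " + question)

# Usage: your documents travel inside the prompt, not into the model's training.
docs = ["The term 'metaverse' was coined by Neal Stephenson in his 1992 novel Snow Crash."]
print(grounded_prompt(docs, "Who coined the term metaverse?"))
```

The key design point is that the model is told to refuse when the answer is not in your sources, which is exactly the behaviour Google describes in the NotebookLM FAQ quoted near the end of this post.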


Undermind

I currently have a Pro account with Undermind, at US$16 per month, which is one step up from their limited-use, free service. My initial question to Undermind was as follows:

I am researching the topic of the metaverse, both older virtual worlds (e.g. Second Life) and newer social VR/AR platforms (e.g. VRChat). I am interested in the history of the concept of the metaverse, and how the meaning of the term “metaverse” has evolved over time.

Undermind took this initial question, and asked a series of follow-up questions in order to clarify what I was looking for. Here’s part of that chat:

Eventually, I was able to come up with a more specific search, as follows:

The question I finally sent Undermind off to work on was as follows:

Find academic literature on the history of platforms and user practices associated with what is now discussed as the metaverse, staying broad across decades. Focus on the history of virtual world platforms and how people used them, including older virtual worlds such as Second Life and newer social VR/AR platforms such as VRChat, while also including adjacent predecessor platforms that predate the coining and later popularity of the term “metaverse.” Emphasize user practices broadly rather than narrowing to a single type of practice, and help trace how the meaning of the term “metaverse” evolved over time in relation to these platforms and practices.

My search turned up 80 papers which Undermind determined were relevant to my final question, covering a publication date range of 1970 to 2024:

Note, at the bottom of this screen capture, how Undermind actually went through and sorted these papers into eight broad categories or subtopics, in essence giving me a nice overview of these 80 published academic papers. This kind of context/overview work is something at which GenAI tools tend to excel, and it can save an academic researcher hours of work (but, of course, you still have to be the human in the loop, and actually read and digest all the papers retrieved!).

But even more important to note is how GenAI tools like Undermind mark a dramatic change in information retrieval: a shift away from the sometimes-arduous task of using keyword searching, controlled thesaurus vocabulary, and Boolean logic to search traditional academic databases (e.g. PubMed and its MeSH, or Medical Subject Headings), towards actually having a conversation with the search tool: starting with a plain English statement, answering follow-up questions to clarify and refine that initial prompt into a final search question, and then submitting it.
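For contrast, here is what the old-school way looks like in practice: a hedged Python sketch that submits a Boolean query, built with (what I believe are) valid MeSH headings, to PubMed's real E-utilities search endpoint. The query string itself is only an example I made up; a librarian would consult the MeSH thesaurus to pick the right headings:

```python
# The old-school way: Boolean logic plus controlled vocabulary, submitted to
# a traditional database API (PubMed's E-utilities). The query string itself
# is only an example; check the MeSH thesaurus for the correct headings.
import json
import urllib.parse
import urllib.request

boolean_query = ('("Virtual Reality"[MeSH Terms] OR metaverse[Title/Abstract]) '
                 'AND history[Title/Abstract]')
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urllib.parse.urlencode({"db": "pubmed", "term": boolean_query,
                                 "retmode": "json", "retmax": 20}))
with urllib.request.urlopen(url) as response:
    result = json.load(response)
print(result["esearchresult"]["count"], "matching PubMed records")
```

Compare that incantation with the plain-English paragraph I handed Undermind above; that, in a nutshell, is the shift.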

If you like what you see (and I did), you can click on the Generate Report button to start a new process, which prompts you:

I’d like to write a report based on papers from the search “History of metaverse platforms and practices”.

Let’s briefly discuss the content before you start writing.

And again, Undermind asks a helpful series of clarifying questions to help you figure out what you want from a report on all this research data it dredged up:

The final report (which I could save as a PDF or markdown file, using one of three citation styles), looked like this:

The resulting report had 36 citations. However, unlike the Elicit report, the Undermind report did not have a section getting into the nuts and bolts of which sources it used to discover the papers cited, nor the method by which it selected them. So, while the initial read looked good, it would take actually getting and reading the full text of the papers cited in this report to determine exactly how good it was.


Elicit

I also decided to spring for a Plus-level account on a tool similar to Undermind, called Elicit (again, one step up from a free, Basic account, which offers a more limited service).

Having already done the Undermind search mentioned in the previous section of this blogpost, I decided to use the final search statement as my starting point, plugging it verbatim into Elicit to see what would happen…

…only to discover that Elicit doesn’t consider that a very concise search question, at all! (Actually, I kind of agree here. But Undermind let me do it, anyway!) However, instead of asking a series of follow-up questions like Undermind did, Elicit offered a series of buttons which, when pressed, rewrote the question to be much more narrowly focused, for example:

So, I clicked on the offered “Temporal and conceptual scope” button, edited it a bit to include specific examples of what I was talking about, and hit the green Send button, using the default settings of research papers, and asking for a general review. Elicit then asked me what level of detail I wanted in the answer (with the most detailed alternative greyed out unless I pony up more money for their Pro plan, one level up from my measly Plus plan):

I went with the Balanced report. However, I am not crazy about the limitations, especially when I could do a more traditional database search, using one of the over 650 databases offered by my university’s library service, without such petty limits as “the top 500 sources” (and, remember, that’s a ranking based on a newish GenAI computer algorithm, not keyword matching using a controlled thesaurus vocabulary and Boolean logic to construct a search strategy, the old-school way). Essentially, it’s a trade-off: you get a search that starts in plain English, with prompts to refine it, but a pre-limited number of sources examined—and even tighter restrictions on the number of sources from which a comparative chart is constructed (25). If you want more—and many users would want more—then you’ll have to pay extra for it.
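For the curious, here is roughly what that “newish GenAI computer algorithm” does, as I understand it: both the query and each paper are converted into vectors (“embeddings”), and papers are ranked by how close their vectors are to the query’s. The sketch below is my own toy illustration of the general technique, with a deliberately crude stand-in for the embedding model; it is certainly not Elicit’s actual code:

```python
# A toy sketch of semantic (embedding-based) ranking. Papers and the query
# become vectors, and similarity of the vectors, not exact keyword matches,
# decides the "top 500". The embed() below is a crude stand-in: a real
# system calls a neural embedding model here, which is what captures meaning.
import math

def embed(text: str, dims: int = 64) -> list[float]:
    """Stand-in embedding: a hashed bag of words (NOT a real embedding)."""
    vector = [0.0] * dims
    for word in text.lower().split():
        vector[hash(word) % dims] += 1.0
    return vector

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

def top_k(query: str, abstracts: list[str], k: int = 500) -> list[str]:
    q = embed(query)
    return sorted(abstracts, key=lambda t: cosine(embed(t), q), reverse=True)[:k]

abstracts = [
    "Snow Crash and the origins of the metaverse concept",
    "Crop yield prediction using machine learning",
    "Social VR platforms: user practices and moderation in VRChat",
]
print(top_k("history of the metaverse", abstracts, k=2))
```

The important point for users: whatever ranks 501st simply never gets looked at, no matter how relevant it might actually be.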

However, for all of its limitations, the final report looked pretty good, at first. You can save a PDF version of the report, and you can even ask questions of it, via a chatbot interface (using the chat box located in the bottom-right corner of the screen capture below):

However, in doing a read of the PDF report, I was struck by several things:

  • Again, the hard limit of 25 papers from which data was extracted, which essentially makes Elicit useless to me at this level;
  • The fact that zero of the 500 selected papers were screened out by the screening criteria (see image below, taken from the report; although, to be honest, this technique probably would have worked much better for examining clinical research studies in, say, medicine, rather than looking for papers about metaverse platforms);
  • The search was performed against “over 138 million academic papers from the Elicit search engine, which includes all of Semantic Scholar and OpenAlex,” but again, my librarian mind kept thinking that there would be a lot of full-text content locked away behind academic publisher paywalls. And indeed, of the 25 sources picked for this report, only 15 had the full text of the article retrieved; for the other ten, Elicit likely relied only on the (freely-available) author-provided abstract. Many of these GenAI research tools tend to rely on scraping free sources such as Semantic Scholar and OpenAlex, rather than enter into potentially expensive agreements with academic publishers such as Elsevier and Wiley, which would give their users full access to the content they own and, frankly, more complete data from which to write reports (see the sketch just after this list for what that free access looks like in practice).
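To show what I mean about free sources, here is a hedged sketch of querying OpenAlex’s public API, one of the two sources Elicit names. The field names below match the OpenAlex documentation as I understand it at the time of writing, but treat the whole thing as illustrative: the point is that you get metadata and (often) an abstract, not the publisher’s paywalled full text.

```python
# What a "free" scholarly source looks like from the tool's side: OpenAlex's
# public API hands back rich metadata and often an abstract, but not the
# publisher's paywalled full text. Field names per OpenAlex's docs; sketch only.
import json
import urllib.parse
import urllib.request

url = ("https://api.openalex.org/works?"
       + urllib.parse.urlencode({"search": "metaverse virtual worlds history",
                                 "per-page": 5}))
with urllib.request.urlopen(url) as response:
    works = json.load(response)["results"]

for work in works:
    print(work["display_name"])
    print("  open access:", work.get("open_access", {}).get("is_oa"),
          "| abstract available:", work.get("abstract_inverted_index") is not None)
```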

I actually came away from reading this report more disturbed by its limitations than I was impressed by any conclusions it was able to draw. Again, I hasten to add that my test would probably have gone much better if I had a use case that fit Elicit better (like a systematic review of clinical medical trials, for example). It might just be that my admittedly fuzzy subject area didn’t fit the way Elicit works, at all. And that’s fine.

However, what bothered me most was that somebody without my 30-plus years of academic library experience would run this report, read it, nod, and think that this was a good response. Even worse, an in-depth response. When, in fact, a more traditional search against a library database (perhaps executed with the expertise of a professional librarian) would give much better and more thorough search results.

Even worse, how many of those Elicit users would stop here, and run with this summary, and not actually go and read the full text of the 25 papers that were selected for the report, not to mention the countless papers NOT included? I would suspect that it’s more than a few. So yeah, this academic librarian does have some reservations about where all this is headed. However, I can also confess that the report did give me a few new ideas to think about, and some possible new directions to follow in my own academic research, which I might not have found otherwise.

UPDATE March 11th, 2026: I’ve since gone back to Elicit and realized that what I probably should have done first was simply search for papers, instead of asking it to generate a report (see the red arrow on the image above). I tried searching for papers using the question, “What are the most effective techniques for dealing with trolling, griefing, and harassment on metaverse platforms?”

The next image is a screen capture of the search results. It gave me ten research papers in a chart, with brief citation details, a GenAI-generated summary of each paper, and another GenAI-generated overview of all ten papers in a couple of paragraphs on the right-hand side of the page, with the option to chat with the papers (i.e., ask questions and get answers from the content of these research articles). There’s a button at the bottom of the chart which you can click on to load another ten papers and keep retrieving information, although the right-hand-side overall summary does not seem to update as new articles are added to the chart.


NotebookLM

Now, I turn to NotebookLM, Google’s Language Model (the “LM” in the product name) which tries to do the same sort of thing to your personal research library that Google Gemini tries to do with—well, with an infinitely larger library of millions and millions of documents. The idea is the same, though: to feed a (much smaller) set of documents, audio, video, etc. into a service which then allows you to use a chatbot-type interface to ask it questions, and (hopefully) get some useful answers back. But, again, how useful NotebookLM will be to you depends entirely upon what you feed it! In computer science we have a saying, with the acronym GIGO: Garbage In, Garbage Out. If you fill NotebookLM with crappy sources, don’t be too surprised if you get crappy answers back!

I have a Google AI Pro plan, with 2 terabytes of storage, which includes access to Google Gemini 3.1 Pro. This costs me CA$26.99 per month (approximately US$20), and frankly, I’m pretty sure I am not getting my money’s worth out of it. With that, my NotebookLM service is rated at the Pro level, which means I can have up to 500 notebooks, with each notebook having up to 300 sources. (NotebookLM Standard, the free service, lets you have up to 100 notebooks, with up to 50 sources each. You can compare the various levels of plans here.)

I have uploaded 103 documents (mostly PDFs of journal articles from my personal Zotero research library) into NotebookLM. Again, some of them are probably of lower quality than others, so the GIGO rule applies. For example, the notebook summary it seems to have automatically created veers alarmingly close to gobbledygook, and there’s even a mention of (gasp!) blockchain, and the audacity to name it as a “primary pillar necessary to facilitate real-time, multisensory interactions between users.” (WHAT THE ACTUAL FUCK?!?? Okay, I take it back, it is gobbledygook, a Frankenstein-like creation stitched together from bits and pieces of documents I had uploaded. I actually created this monster.)

There’s absolutely no explanation of how or why this summary was generated. In fact, I found the whole user interface of NotebookLM to be extremely confusing. I had to dig through the product’s Frequently-Asked Questions list to find out why some things wouldn’t load: any uploaded file over 200MB in size, any source with over 500,000 words, and any copy-protected PDF files will not load, but you don’t get any sort of error message if you try. In my limited testing thus far, you get…no response.
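Since NotebookLM fails silently, one workaround is to screen your files yourself before uploading. Here is a hedged sketch of such a pre-flight check: the 200MB and 500,000-word ceilings are the ones the FAQ lists, while the use of the pypdf library (and its is_encrypted flag) to spot copy-protected PDFs, plus the folder name, are my own assumptions:

```python
# Pre-flight check for NotebookLM uploads, since the product itself fails
# silently. Limits are the ones listed in Google's FAQ (200 MB per file,
# 500,000 words per source); the pypdf-based checks are my own approach.
# Requires: pip install pypdf
import os
from pypdf import PdfReader

MAX_BYTES = 200 * 1024 * 1024   # 200 MB
MAX_WORDS = 500_000

def check_pdf(path: str) -> list[str]:
    """Return a list of reasons why NotebookLM might silently reject this file."""
    problems = []
    if os.path.getsize(path) > MAX_BYTES:
        problems.append("over 200 MB")
    reader = PdfReader(path)
    if reader.is_encrypted:
        problems.append("copy-protected (encrypted) PDF")
    else:
        words = sum(len((page.extract_text() or "").split())
                    for page in reader.pages)
        if words > MAX_WORDS:
            problems.append(f"~{words:,} words (limit 500,000)")
    return problems

for name in os.listdir("zotero_pdfs"):   # hypothetical folder of exported PDFs
    if name.lower().endswith(".pdf"):
        issues = check_pdf(os.path.join("zotero_pdfs", name))
        if issues:
            print(name, "->", "; ".join(issues))
```

Run something like this over your exported PDFs first, and at least you will know which sources NotebookLM is silently ignoring.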

Even worse, this feels like a product that Google has just sorta dropped on us, with only the previously-mentioned FAQ and an email address for product support (yes, even for Pro users). I shouldn’t be surprised, I suppose. Just like I wouldn’t be surprised if Google is silently compiling notes on how people use NotebookLM*, or decides to yank it away, like so many other previous Google products and services.

*UPDATE March 11th, 2026: It turns out that I was wrong; apparently (according to a quote from Steven Johnson, a member of the NotebookLM product team, in a slide presentation I watched today by fellow librarian and GenAI expert Nicole Hennig) anything you upload to NotebookLM is only stored in the model’s short-term context memory, and it is not used to train the underlying Gemini LLM:

As an author, Johnson clarifies that no material uploaded to the model is used to train NotebookLM or Google Gemini; it’s only sent to the model’s context window, or “short-term memory.” Johnson explains that if you “have the right to use [the material] under copyright, you can use it inside of Notebook.” (source)

Honestly, I do need to spend some more time playing around with NotebookLM before I issue any final summary judgement on the product. In particular, I get the feeling that the GIGO rule really applies to NotebookLM! Google themselves, in their NotebookLM FAQ, state:

Sometimes NotebookLM can’t answer your question because of…

  • Information not in sources: NotebookLM answers questions based on the information provided in your uploaded sources. If the answer isn’t in the source material, it won’t provide a response.

I had a very interesting day playing with these GenAI tools, and I learned a few things. I’ll keep you posted on how things go!

Photo by Jaredd Craig on Unsplash

Editorial: Changing Gears, Letting Go, and Embracing Change

Photo by Zoltan Tasi on Unsplash

NOTICE: Except where explicitly stated in this blogpost, I have not used AI to write this editorial. This is me, Ryan, writing (and yes, I have been using em-dashes long, long before ChatGPT was a thing—and I will continue to do so!). See what I just did there? 😉

While my continuing neck and shoulder pain unfortunately limits the amount of time that I can spend sitting in front of a desktop computer (both at work and at home), I wanted to set aside some of my precious “good neck” time to talk a little bit about these past twelve months, and where I am planning on taking this blog in the future. Because, yes, I do have plans moving forward. (Update: as it turns out, because of my neck and shoulder pain, I had to split up the writing of this post over a couple of days, rather than one hours-long marathon session.)

As many of you know, I took a lengthy hiatus from blogging, starting late last year, up until very recently. Part of the reason was that I was juggling a lot of responsibilities at work, notably being part of a virtual reality lab which was being set up in one of the libraries of the university library system in which I have been working for the past 30-odd years (yes, it’s really been that long; I started in 1992!).

I am happy to report that, although I am no longer involved with that particular project, the virtual reality lab at my university library system has already had a successful soft opening, with a dedicated staff person hired to manage it (not me; as I said, I already have my hands full being a liaison librarian for both the faculty of agricultural and food sciences and the computer science department at my university!). In fact, I have been so busy at work that I haven’t even had time to sit down and use any of the equipment in the new lab, although I have chatted a few times with the new manager. Everything is moving along fine without me.

As part of my responsibilities as agriculture librarian, I had volunteered to give a presentation to an upcoming faculty council meeting about artificial intelligence in general, and generative AI in particular. I have only myself to blame for getting myself into this situation! You see, the Faculty of Agricultural and Food Sciences at the University of Manitoba still has an active library committee, and at a recent in-person meeting, I was talking about how I have had to add a few slides to the PowerPoint presentation which I give to students about how to use the U of M Libraries, talking about AI. One thing led to another, and lo and behold, yesterday afternoon, I gave a half-hour presentation on artificial intelligence in general, and generative AI in particular, to a room full of agriculture and food science professors!

I spent a significant chunk of my summer reading through books and websites, working through online courses, and essentially getting myself up to speed (it helps that this librarian has an undergraduate degree in computer science!). And I had the good fortune to be able to give a version of my presentation to a class of graduate student advisors, and to a class of graduate students, as part of a series of special courses targeted to U of M grad students, before yesterday afternoon’s talk. Both times it was well received, as it was yesterday. (I have already shared my slides and notes with my fellow librarians and agriculture professors, and I might decide to also share a version of them with you, my faithful blog readers, as I have done in the past with presentations about virtual reality in higher education, and the virtual world of Second Life. But I think I will make that a separate blogpost, perhaps my next one.)

At this point, I will draw your attention to the tagline of my blog in the upper left-hand corner of the screen if you are looking at this page on a desktop computer. You might notice that it has changed.

It used to read, pretty much since I began this blog in 2017:

News and Views on Social VR, Virtual Worlds, and the Metaverse

As of yesterday, it now says:

News and Views on Social VR, Virtual Worlds, and the Metaverse, plus Artificial Intelligence and Generative AI’s Impact on the Metaverse

Now, that’s rather a mouthful (and yes, I might need to edit it a bit), but essentially, it’s all a part of the “embracing change” which I mentioned in the title of this blogpost.

As a matter of fact, I was having a bit of a brain fart coming up with a suitable title for this blogpost, so to assist me with the wording (and only that), I pulled up Anthropic’s generative AI tool, Claude, for a little chat, asking it:

I need a way of saying “to add something new” to contrast with the opposite idea of “letting go of something.” What are some ways that I could say that?

And here are screen captures of the resulting conversation:

Now, could I have done this without generative AI? Absolutely; thesaurus websites have been around since the earliest days of the World Wide Web (trust me, I was around then!). But I doubt I could have actually had a back-and-forth conversation with a tool that presented the information in such a helpful, tabular way, prior to November 2022, when the first public version of ChatGPT was unleashed upon an unsuspecting public. I could pose my question in dozens of different ways, asking for countless ways of expressing the concept of “letting go of something,” and the Claude GenAI tool never gets bored or impatient or irritated with me.

Simply put, I will now be writing about artificial intelligence in general, and the new wave of generative AI tools like ChatGPT and Claude in particular, as part of the RyanSchultz.com blog. In particular, I will talk about how these fast-developing and evolving tools will inevitably impact the metaverse.

I will give two quick examples of how GenAI is already impacting the metaverse. First, in my recent write-up of the virtual sessions I attended as part of the Berlin-based Immersive X metaverse conference a couple of weeks ago, there was a proof-of-concept working demonstration of a generative-AI-driven virtual diabetes counselor in a virtual world platform called Foretell Reality.

Second, were you aware that there is already a website called MeshZEUS, which will create a three-dimensional object for you from a text description, in a format ready to be uploaded to Second Life and sold on the SL Marketplace or an in-world store?

The MeshZEUS website

Yes, that’s right! You may choose, if you wish, to no longer work your way up the rather steep learning curve of Blender or Maya or 3ds Max to painstakingly create an object from scratch; instead, all you have to do is describe your desired 3D object in enough detail, and hey presto, it gets delivered to you! (Provided you buy enough credits, and have enough patience to go through multiple iterations of text prompting, that is. But we’ll also leave that discussion, plus the whole enchilada of issues that using a GenAI tool like this raises, for another day, shall we? Trust me, there’s lots to talk about.)

It’s now pretty obvious to me that the current hype cycle of artificial intelligence, which was ignited by startling new leaps forward in the capabilities of AI tools since 2022, is going to have an impact on the metaverse. And, unlike the previous short-lived hype cycle of the metaverse itself (which, hello, I was around for—beginning, middle, and end!—as documented on this very blog), this new, AI-powered hype cycle might actually have a more direct impact on society than the still-somewhat-nebulous concept of the metaverse, sooner than any of us might have expected. Buckle up, folks, I predict that things are about to get deeply, deeply weird.


So, I have talked about changing gears for the RyanSchultz.com blog, returning to blogging, and also about embracing change, i.e., adding the topic of AI and GenAI to the subjects I will write about. Now I come to the part where I talk about letting something go.

Unfortunately, because of my neck and shoulder pain, I regret that I must conserve the time that I can spend productively sitting in front of a desktop PC. Obviously, first priority goes to the paying job, which keeps the lights on, the internet bill paid, and puts food in my belly and gas in my car. Second priority will likely be writing this blog, now that I have decided to keep blogging. Between these two, that probably is the limit of what I can reasonably accomplish.

What I am choosing to let go of is writing about the virtual world of Second Life on this blog (in particular, reporting on fashionista freebies and bargains). I have made a similar announcement on Primfeed, which, over the past year, is where I have usually posted my freebie fashionista finds rather than on my blog. Because my Primfeed account is deliberately set to private (i.e., you need to have a Second Life account to join Primfeed, follow me, and read what I post there), I have done a screen capture of that particular post, plus a transcription:

Every December, I try to juggle four tasks (not very successfully, mind you):

1. Drag my small army of alts through a curated selection of Advent and 12 Days of Christmas calendars to vacuum up some fabulous gifts, every day from December 1st to December 25th;

2. Do the same thing at the annual Holiday Shop and Hop event;

3. Pick up free heads and skins during the LeLutka December event; and

4. Navigate real-life Christmas events, shopping, and other obligations. (My family, God bless them, finds #1-3 above to be very amusing, and last Christmas, they all chipped in to give me a cash-filled envelope marked “L$”, since they couldn’t actually buy me a gift card to buy Linden dollars. Second Life, you need to look into this! There’s an untapped market here.)

I’m sure some of you here on Primfeed can relate to this! Often I ask myself: why am I doing this? But I still do get a great deal of personal satisfaction and fulfillment from designing a complete avatar look from head to toe, looking great while doing it as inexpensively as possible. And in order to do that, you need to acquire the knowledge and expertise to sniff out freebies and bargains (which I have often shared with you, either here on Primfeed or via my blog). I’ve loved doing it for years!

But, as I said, something has to give. I can no longer spend extended hours sitting in front of a desktop PC without significant, and sometimes severe, neck and shoulder pain. Therefore, in addition to NOT doing as much of numbers 1 through 3 as in previous years, I have made the difficult decision to cut back on telling all of you about the great deals I find. It’s not a decision I take lightly, but I do need to listen to my body, and my body is telling me to rest. And I need to pay attention.

So if you don’t see me post as often here, that’s why. ❤️ I’m just trying to rebalance my life a little better, that’s all. I’ll still be around, reading, scrolling, liking posts, following people and stores, but not posting so much. Thanks for understanding.

Don’t get me wrong; I am not leaving Second Life! In fact, I need SL as a sort of counter-balance to deal with all the batshit-craziness happening in my real life. Second Life is my temporary escape from the hamster-wheel of worry, anxiety, and despair inside my head, where I can reliably get into a pleasant flow state for an hour or two, and escape from the real world (where I have little to no control over what is happening).

In fact, one of the reasons I love SL so much is that it is such a vast, three-dimensional creative canvas, where I have so much control over what happens, where I choose to go, who I choose to interact with, and even what I look like to others! I still derive an inordinate amount of personal satisfaction from styling a complete avatar look from head to toe, as inexpensively as possible while still looking fabulous, darling! I call it “digital drag” 💅 (and yes, I do have a drag queen alt, whom I have written about numerous times on my blog, and who is about to embark on various antics, drama, and misadventures in a roleplaying region based on the U.S. Deep South). To my friends and acquaintances in Second Life: I am not going anywhere. I’m just not going to write about it here any more, that’s all. (I’m also cutting back on my Primfeed posting, but I’ll still be there, too.)


So, to sum up:

Yes, I am back.

Yes, I will be blogging about the metaverse in all its forms and manifestations again, but with the added wrinkle of AI/GenAI and its potential impact.

No, I will no longer be writing about Second Life, although yes, I still will be playing it.

Stick around, folks, this should be both entertaining and educational! As RuPaul herself said: