An Introduction to Artificial Intelligence in General, and Generative AI in Particular

I have already written at length about my neck and shoulder pain, which I am working with my doctor, a physiotherapist, and a massage therapist to treat. I’ve also had an ergonomist come and assess and adjust my workstations at my employer, the University of Manitoba (I’m still waiting for his final report, with a shopping list of equipment to be purchased to help me get through an eight-hour workday without pain). I am still very much in the process of learning which actions are detrimental to the couple of deteriorating cervical joints in my spine, and which are more beneficial!

For example, you would think that having the extra weight of a virtual reality headset on my noggin would make things worse. However, I have been astonished to discover that my neck does not become as sore, as quickly, when I am using the Mac Virtual Display feature on my Apple Vision Pro, along with my MacBook Pro at work!

Therefore, I have been working 3 to 4 hours a day like this, as opposed to just using my MacBook Pro with an external monitor attached. The ergonomist did set me up with a temporary notebook riser, adjusted so that I am not hunched over the keyboard, and aligned so that the tops of both the MacBook Pro screen and the external monitor are at eye level. I find that when working like this, without my AVP, my neck and shoulders still start to ache after about two hours, and I have to stop, take a break, go for a walk, and do some of my physiotherapy exercises. As I mentioned earlier, this is a learning process.

On Wednesday, at lunchtime, I got up from my MacBook Pro, unplugged my Apple Vision Pro from its battery charging cable (I tend to leave it plugged in when I am working seated) and, while still wearing my AVP, went to the washroom. My coworkers in the library are already well-used to seeing this strange person wandering around with a VR headset on, and my vision while wearing it is almost as good as it is when I wear my glasses, so I often do this if I have to make a short walk to the printer, or in this case, the washroom.

However, on my way back from the washroom, disaster struck. I accidentally got the cord between my Apple Vision Pro (on my head) and its battery (sitting in the front left pocket of my pants) caught on a metal part of the door to my office cubicle space. My AVP is okay, but I wrenched my already-painful neck badly, and as a result made a bad situation even worse. (Lesson learned: you need to take that damn power cord into account when moving around!)

As a result, I have been off sick from work for two and a half days this week, spending a lot of my time either lying in bed or lying on the sofa. On top of that, we have had not one but two Alberta Clippers roar through Winnipeg on Wednesday, Thursday, and Friday, so I have been apartment-bound as well as largely bed-bound. I just find it ironic that the very thing that seems to make my pain more bearable (the Apple Vision Pro) can also make it more severe! This has just not been my week.

Anyway, this is my usual off-topic preamble to the real purpose of today’s blogpost. I had promised that I would share with you, my blog readers, the artificial intelligence presentation I had been researching since this summer, which I have recently delivered to three separate audiences: University of Manitoba graduate students, graduate student advisors, and the professors and instructors in the Faculty of Agriculture and Food Sciences (the latter being the group for whom I am the liaison librarian, and the source of the original request, made many months ago by the chair of the agriculture library committee, to create and give this talk). And while this talk was overall very well-received by my audiences, I did receive some negative feedback, and I wanted to talk a little bit about that as well. AI is a divisive topic in an already-divisive age.


I’m going to share an edited version of my PowerPoint slide presentation, with some University of Manitoba-specific bits removed, as well as any contact information removed (sorry, the UM faculty, staff, and students have the right to call on me with questions after my presentation, as I am their liaison librarian; you don’t 😉 ).

Also, I will be transparent about how I used generative AI tools in creating this PowerPoint presentation. I currently have paid-for (US$17–20 a month) accounts on three general-purpose generative AI tools: OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini. These are the “top three” general-purpose generative AI tools currently recommended by Ethan Mollick (more on him later in this post). Do I plan to keep paying for all three? No. But I have found it highly instructive to enter the exact same text prompt into all three tools, and then compare the results!

In addition to conducting my own research into artificial intelligence in general and generative AI in particular, I used both ChatGPT and Claude to do additional research into this topic, some of which made it into this presentation. I also had a lot of text-heavy slides in the first draft of my PowerPoint presentation, so I asked Google Gemini to provide suggestions on how to reformat my slide presentation to have fewer bullet points per slide (which I think it did a pretty good job at).

I also did try to ask both ChatGPT and Gemini to redesign the theme and design aspects of my PowerPoint slides, but I was extremely unsatisfied with the results, despite several attempts, and I finally gave up on using AI for that task. So please keep in mind that generative AI (which I will refer to as GenAI from here on out) can still fail miserably at some tasks you put it to work on!

Here is my PowerPoint slide presentation, complete with my speaker notes, for you to download and use as you wish, with some stipulations. I am using the Creative Commons licence CC BY-NC-SA 4.0, which grants the following rights and imposes the following restrictions:

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International

This license requires that reusers give credit to the creator. It allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, for noncommercial purposes only. If others modify or adapt the material, they must license the modified material under identical terms.

BY: Credit must be given to you, the creator.

NC: Only noncommercial use of your work is permitted. Noncommercial means not primarily intended for or directed towards commercial advantage or monetary compensation.

SA: Adaptations must be shared under the same terms.

(The tool I used to determine the appropriate Creative Commons licence can be found here: https://creativecommons.org/chooser/.)

So, with all that said, here is my PowerPoint presentation (please click on the Download link under the picture, not the picture):


In addition to sharing my slide presentation with you, I wanted to highlight a few resources which I discussed within it, which you might find useful. These are books and websites which I used as I worked my way up the learning curve associated with AI in general, and the new wave of GenAI tools in particular.

I start off with a bigger-picture look at the whole forest of artificial intelligence, later narrowing my focus to look at GenAI tools, a newer subset of the broader AI field. First, a really good layperson’s guide to GenAI is a 2024 book by Ethan Mollick, titled Co-Intelligence (see image, right). One thing I want people to remember is that the new wave of GenAI tools only dates back to 2022, when the capabilities of these new tools (ChatGPT, DALL-E, Midjourney, Stable Diffusion, etc.) first captured the general public’s imagination, and stoked their fears. There are lots of published books about AI, but if they were published before 2022, they won’t cover the part of AI that is making the most noise right now. Also, keep in mind that any print/published book will soon be outdated, because the field of GenAI is evolving so rapidly!

Ethan does a good job of covering the territory, and I share with you his four rules of AI:

Principle 1: Always invite GenAI to the table. You should try inviting AI to help you in everything you do, barring any legal or ethical issues, to learn its capabilities and failures.

Principle 2: Be the human in the loop. GenAI works best with human help; always double-check its work.

Principle 3: Treat GenAI like a person (but tell it what kind of person it is). Give it a specific persona, context, and constraints for better results. For example, you’ll get better results from the detailed prompt “Act as a witty comedian and generate some slogans for my product that will make people laugh” instead of the more generic prompt “Generate some slogans for my product.”

Principle 4: Assume that this is the worst GenAI tool you will ever use. Generative AI tools are advancing and evolving rapidly.
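Principle 3 is really about prompt construction. As a rough illustration (my own sketch, not something from Mollick’s book — the helper function below is hypothetical, and the “system + user” message format is simply the common chat convention many GenAI tools accept), here is how a persona-laden prompt differs structurally from a generic one:

```python
def build_prompt(task, persona=None, constraints=None):
    """Assemble a chat-style prompt, optionally prepending a persona.

    Returns a list of message dicts in the widely used
    {"role": ..., "content": ...} chat format.
    """
    messages = []
    if persona:
        # The persona and any constraints go into a system message,
        # which frames how the model should respond.
        system_text = f"Act as {persona}."
        if constraints:
            system_text += " " + constraints
        messages.append({"role": "system", "content": system_text})
    # The actual task is the user message.
    messages.append({"role": "user", "content": task})
    return messages

# Generic prompt: no persona, no context, no constraints.
generic = build_prompt("Generate some slogans for my product.")

# Detailed prompt: persona plus a constraint, per Principle 3.
detailed = build_prompt(
    "Generate some slogans for my product that will make people laugh.",
    persona="a witty comedian",
    constraints="Keep each slogan under ten words.",
)
```

The point is not the code itself, but the structure: the detailed version gives the model a role and constraints before it ever sees the task, which in my experience tends to produce noticeably better results.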


Second, I want to share with you an online course from Anthropic, the makers of the GenAI tool Claude. This course, which I worked through this summer, is called AI Fluency: Framework & Foundations, and you do not need to use Claude to work through the exercises—you can use any GenAI tool you wish. The focus of this 14-lecture course is to learn how to collaborate with GenAI systems effectively, efficiently, ethically, and safely.

One of the concepts taught in the AI Fluency course is what Anthropic calls the four D’s: the four key competencies of AI fluency (they seem to be big on alliteration!).

Delegation: deciding what work should be done by humans, what work should be done by AI, and how to distribute tasks between them.

Description: effectively communicating with AI tools, including clearly defining outputs, guiding AI processes, and specifying desired AI behaviours and interactions.

Discernment: thoughtfully and critically evaluating AI outputs, processes, behaviours, and interactions (assessing quality, accuracy, appropriateness, and areas for improvement).

Diligence: using AI responsibly and ethically (maintaining transparency and taking accountability for AI-assisted work; an example of this is when I described in detail which GenAI tools I used, and how I used them, in creating the PowerPoint slide presentation, earlier in this post.)


Finally, I share with you what I found to be a very helpful guide prepared by a librarian, Nicole Hennig, about how to stay on top of the rapidly evolving and accelerating field of GenAI. You can obtain a copy of her 2025 guide here. This is as good a place as any to start working your way up the learning curve (as I first did, with the 2024 edition of her guide). Nicole offers a bounty of valuable tips, tricks, suggestions of people to follow, and advice on how best to keep up with the roiling sea of change which is currently taking place in GenAI!


To wrap up, I wanted to talk a bit about the divisive nature of GenAI. AI/GenAI seems to be a very polarizing topic, especially in the field of higher education! While I did try to present a balanced viewpoint on generative AI tools, talking about both the good and the bad, I did receive some feedback from a few people who felt that my presentation was too…positive? And that, despite the warnings in my talk about some very serious problems with GenAI tools, I had neglected to portray GenAI’s more negative aspects in a more forceful way.

For example, one agriculture professor, in an email after my talk, said this about the Anthropic online course in AI Fluency, a learning resource which I had mentioned in the previous section of this blogpost, as well as in my slide presentation:

…I know you were recommending the AI class that was created by Anthropic, and how it is agnostic to the AI used, and just a good introduction to use. I’ll admit that I have not taken the course (I am now intrigued and will try to), but I couldn’t help thinking when you introduced it, of courses on appropriate opioid prescribing practices made by Purdue pharma.

Ouch. Fair point, but painful comparison (and I say that as someone who is now actually suffering from physical pain, as I stated up top). So I wanted to end this blogpost with a brief discussion about how some intelligent but more skeptical observers are responding to the tidal wave of GenAI tools washing over society as a whole, and share links to some criticism, as part of providing a larger perspective. I will be the first to admit that I am not an expert in this field, despite what I have learned since this summer! I am a librarian with a computer science degree, which made it easier for me to comprehend some of the more technical aspects of what I was reading, though I am less well-equipped for the philosophical side of the discussion about GenAI.

The professor who commented on the Anthropic course above shared with me a couple of links to recent critical articles which I, in turn, will share with you. The first link is an Open Letter by 17 scholars, warning about blindly accepting GenAI tools in higher education (post-secondary education, i.e. colleges and universities, although obviously many of the same arguments could also be made about K-12 schooling):

Guest, O., Suarez, M., Müller, B., van Meerkerk, E., Oude Groote Beverborg, A., de Haan, R., Reyes Elizondo, A., Blokpoel, M., Scharfenberg, N., Kleinherenbrink, A., Camerino, I., Woensdregt, M., Monett, D., Brown, J., Avraamidou, L., Alenda-Demoutiez, J., Hermans, F., & van Rooij, I. (2025). Against the Uncritical Adoption of ‘AI’ Technologies in Academia. Zenodo. Retrieved Dec. 19th, 2025 from https://doi.org/10.5281/zenodo.17065099

Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

The second link is the text of a recent talk by the well-known intellectual, author, speaker, and gadfly Cory Doctorow, who gave his university audience a foretaste of his book on AI, which will be published in 2026:

Doctorow, C. (2025). Pluralistic: The Reverse-Centaur’s Guide to Criticizing AI. Retrieved Dec. 19th, 2025 from https://pluralistic.net/2025/12/05/pop-that-bubble/#u-washington

Over the summer I wrote a book about what I think about AI, which is really about what I think about AI criticism, and more specifically, how to be a good AI critic. By which I mean: “How to be a critic whose criticism inflicts maximum damage on the parts of AI that are doing the most harm.” I titled the book The Reverse Centaur’s Guide to Life After AI, and Farrar, Straus and Giroux will publish it in June, 2026.

But you don’t have to wait until then because I am going to break down the entire book’s thesis for you tonight, over the next 40 minutes. I am going to talk fast.

Both Cory Doctorow and Olivia Guest et al. make some seriously valid points about the negative consequences of a heedless, thoughtless, headlong rush into adopting GenAI tools. Now, you can decide, after reading all this, that you will have absolutely nothing to do with AI and GenAI, and that’s a valid position to take. But will it change the fact that GenAI is already being incorporated into software we use every day? Can the genie be pushed back into the bottle? Doubtful.

So what I am saying is: learn how the enemy (if you see it as “the enemy”) works. Spend a bit of time becoming familiar with the GenAI tools, try them out on certain tasks, and see for yourself where and how they succeed at a particular task, and (more importantly) where and how they fail. I have had some amazing results from using GenAI tools over the past eight months, but I have also experienced situations where I walked away thinking, “this is garbage.” But may I gently suggest that the only way to gain the experience which informs your opinions is to actually use the tools, rather than sticking your head in the sand and refusing to have anything to do with them.

Are we the unwitting and unwilling beta-testers for these products, as they are rolled out and embedded stealthily in products we already know and use? Absolutely. Will there be negative consequences, some foreseen, and others unexpected and unanticipated? Absolutely. Will there be some tasks which GenAI does and does well? Also, yes, absolutely (and it is already happening based on my own experience). All three things can be true at the same time. Like all technology throughout human history, artificial intelligence is a double-edged sword. It can harm as well as heal.

I still think that the best stance on GenAI is to be a skeptical but informed user of the tools (even if you limit yourself to the lesser-powered, free versions). Also, you owe it to yourself to read a variety of viewpoints on the technology, from a range of sources (start with my fellow librarian Nicole Hennig’s excellent guide which I mentioned above, plus my skeptical professor’s two links, and work out from there).

Above all, even with how divisive AI can be as a topic, now is not the time to be locked into either a rigid AI-is-bad or AI-is-good perspective, because both are true at times, and we need to hold space for that unsettling and upsetting fact. And we need to brace ourselves, both personally and as a society, because (as I have stated before on this blog), things are about to get deeply, deeply weird before all this is over.

Image by Gerd Altmann from Pixabay

Entering the RadyVerse: A Look at Five VR and AI Projects for Training Healthcare Workers at the University of Manitoba’s Rady Faculty of Health Sciences

One of the virtual reality labs being used to train nursing students in the College of Nursing at the University of Manitoba

As many of my readers already well know, I am the computer science and agriculture librarian at the Jim Peebles Science and Technology Library at the University of Manitoba in Winnipeg, Manitoba, Canada, and I have been writing about “news and views on social VR, virtual worlds, and the metaverse” (as the tagline of the RyanSchultz.com blog states) since July 31st, 2017. I have now been actively and avidly reporting on this space on my blog for almost seven years, sharing news and events in the rapidly-evolving metaverse!

So it was that I had already written on my blog (albeit somewhat in passing) about the University of Manitoba’s College of Nursing, which has been training new nursing students using the UbiSim software since the Fall 2022 term. Here’s a one-minute YouTube video about that work:

However, today I wanted to give you all an update on some newer innovations in the use of VR (and AI!) in healthcare education at my employer, the University of Manitoba.

Yes, the RadyVerse launch even had a cake! Carbs take priority, people!!! 😉

One month ago, on Friday, March 15th, 2024, I attended a special afternoon event located at the University of Manitoba’s Bannatyne Campus (the downtown, health-sciences-focused campus, next door to Winnipeg’s main hospital complex, the Health Sciences Centre). This event was the official launch of a new initiative of the Max Rady Faculty of Health Sciences, called the RadyVerse. According to the announcement:

The RadyVerse is an exciting initiative of the Rady Faculty of Health Sciences that combines virtual reality (VR), artificial intelligence and machine learning to create immersive and controlled simulations for students, educators and clinicians. The integration aims to empower an interprofessional community, promote collaboration and enhance skill development in a risk-free setting.

Dr. Nicole Harder speaking at the RadyVerse launch event (with Dr. Lawrence Gillman, seated)

In an article published in UM Today, the University of Manitoba’s online newspaper, one of the speakers at the launch described the purpose of the event, and the benefits of using VR in the College of Nursing programs:

Dr. Nicole Harder, associate dean, undergraduate programs and professor in the College of Nursing, and Mindermar Professor in Human Simulation, Rady Faculty of Health Sciences, described the launch event as a “technology fair” that will give faculty, staff and students the opportunity to participate in interactive demonstrations.

“People will be able to try on the VR headsets and step into the immersive world. We’ll also have monitors where we can screencast and show others what they see in the VR, and how this will be used as an educational tool,” Harder said.

“VR has been used in other universities for some time, but not to the same extent. In the College of Nursing, it is embedded into our curriculum.”

The college recently expanded its VR simulation training to its programming in The Pas and Thompson through a partnership with the University College of the North. This allows students from different parts of the province to work together on a simulated clinical case in one virtual room.

As more disciplines become involved, interprofessional teams will not even need to be in the same physical space when collaborating, Harder said.

“VR is a great tool for learning clinical decision-making, problem solving, empathy and communication.”

One of my Libraries colleagues tries out the UbiSim nursing simulation software
Kimberly Workum of the College of Nursing, at the Bodyswaps demonstration workstation

The launch event had five stations intended to showcase how the faculty is using virtual reality and artificial intelligence to educate and train the next generation of healthcare professionals: doctors, nurses, pharmacists, rehabilitation therapists, etc. U of M faculty, staff, students, reporters, and the general public were invited to try out the technology for themselves, and get a taste of how it works. The five stations were:

  • The previously mentioned UbiSim VR software, used for training nurses in simulated but realistic nursing scenarios, where students can practice their skills within a safe and controlled environment;
  • Bodyswaps, another initiative of the College of Nursing, which provides experiential, soft-skills training (e.g. how to talk with patients and family members in various scenarios);
  • An artificial intelligence (AI) tool called OSCE GPT, which uses a specially-trained large language model (LLM) to simulate patients, in order to allow healthcare professionals to practice their patient interview skills, and give them feedback on how to improve those skills;
  • Lumeto, social-VR-based roleplay software for up to 4 users at once, used to train healthcare workers in interprofessional collaboration skills; and
  • Acadicus (a VR program for education which I had written about in 2019 on my blog), which is being used by Dr. Lawrence Gillman. According to the UM Today article:
People could try out the Acadicus software, being used by Dr. Gillman’s team to train doctors

One of the stations will be led by Dr. Lawrence Gillman, associate professor of surgery at the Max Rady College of Medicine and director of the Clinical Learning and Simulation Program at Bannatyne campus.

Gillman has a crisis-based simulation and trauma resuscitation program in development that he will soon be using to teach his residents. At the launch, he’ll demonstrate what trainers and learners will be able to do.

“This VR program is basically a playground where you can create your own sim lab in a virtual environment. You can create whatever scenarios or places you want, and people can participate together in person, or even from a distance,” Gillman said.

“Basically, we create medical crises that people can practice in and then make mistakes in simulation rather than real life.”

A user tries out Lumeto

I visited all five workstations, had ample opportunity to test out most of these applications first-hand, and spoke with my U of M coworkers about these projects. In fact, you can even catch a glimpse of me standing behind Dr. Gillman as he guides a user through the Acadicus software, in the video attached to this CTV News report of the RadyVerse event (see the red arrow in the screen capture I took from that video):

(I didn’t even know about this until a friend who watched CTV News told me!)

There’s just so much exciting stuff going on right now! There are so many VR initiatives taking place on campus, oftentimes in isolation, which is a shame. For example, I wonder how many of the healthcare professionals at the RadyVerse launch were aware that the UM Libraries is working on setting up a VR lab for faculty, staff, and student use (an initiative which is now well underway). And that the Department of Computer Science also has plans to set up a VR lab for its students. And I believe that the university’s Centre for the Advancement of Teaching and Learning is also working on something to do with VR…like I said, there’s a lot going on.

Therefore, I hope to be able to use some of my own “soft skills” and abilities to help set up improved communication channels and venues at the university, so we can all learn from each other as we beaver away on our separate projects and programs! I believe that there is so much in-house expertise and experience which we can share with each other. I know that I would benefit from this, and I suspect others would as well.

The RadyVerse event was a fantastic opportunity to learn more about some of the other virtual reality and artificial intelligence work taking place at the University of Manitoba, and I hope to report on future developments in this exciting edtech as it rolls out across campus. These are exciting times to be a VR and AI enthusiast at the University of Manitoba!

Editorial: Why Am I Buying a Meta Quest 2 Wireless VR Headset—After Swearing I Would Boycott Meta Hardware and Software Forever?

I will soon be the owner of a shiny new Meta Quest 2, as shown here in this screen capture from the Meta website

Longtime readers of this blog will know that I have, over the years, developed a well-founded aversion to Meta (the company formerly known as Facebook), its business practices based on surveillance capitalism, and its products and services.

For me, the final straw was when then-Facebook-now-Meta did an about-face, and insisted that users of its then-Oculus-now-Meta virtual reality hardware had to set up accounts on the Facebook social network in order to use the devices (more on that in a moment). I angrily responded by giving away my Oculus Quest 1 to my brother’s family, and upgrading my trusty Oculus Rift to a Valve Index headset using SteamVR. I was DONE with Meta, and I was willing to vote with my feet (and my wallet).

So, it might come as a surprise to some people, to learn that I have decided to purchase a shiny new Meta Quest 2 wireless virtual reality headset. Why did I do this? Several points, which I will take one at a time.


Well, first and foremost, Meta blinked and backtracked after much criticism; you no longer need to set up a Facebook account to use the Meta Quest 2 (although you still have the option to link your Facebook or Instagram account to your Meta account, if you so wish). Instead, you set up a new Meta account for your device, as explained in the following YouTube video from six months ago:

It is now possible to have up to four Meta accounts per device, with one as an admin account, and you will be able to share some (not all, some) apps between Meta accounts using a new app-sharing feature. Note that Meta is still dragging its feet in setting up systems for use in business and academic circles; its “Meta Quest for Business” program is still in beta test with a (U.S. only) waiting list, a rather mystifying decision given the push Meta is already trying to make with Horizon Workrooms for corporate users. Then again, Meta seems to be just generally flailing (and failing) with its still-recent pivot to the metaverse, so who knows?


Second, as you may remember, I am still working on a project to set up a virtual reality lab within the University of Manitoba Libraries. While my original proposal was to purchase and install four high-end PCVR workstations using HTC Vive Pro 2 tethered headsets, we are now looking at offering faculty, staff, and students a wider variety of headsets for use in their teaching, learning, and research activities.

It’s probably not wise to purchase only one kind of VR hardware, which leaves you vulnerable if a company decides to shut down (although this is highly unlikely in the case of both HTC and Meta!). Best not to put all our eggs into one basket; life tends to throw all kinds of unexpected curveballs at you!

One unintended consequence of the coronavirus pandemic is that I had several successive years’ worth of travel and expense funds carried over and built up, some of which had to be spent by a certain deadline, or I would lose the money. So part of that funding went towards a brand-new work PC with a good graphics card, and an HTC Vive Pro 2 Office Kit, which of course is one of the models we are looking at purchasing for the virtual reality lab. However, I still had some money left over that I had to spend soon, and I decided to also buy a Meta Quest 2 as another testing unit, since we are considering also using that device in the virtual reality lab.


Third: while hunting around for easy-to-use, introductory demonstrations of virtual reality for those coworkers who have never experienced VR before, like Felix & Paul Studios’ excellent Introduction to Virtual Reality, I discovered to my great dismay that many apps were only available for Meta devices, and not available on SteamVR at all!

Unfortunately, some VR apps are exclusive to Meta VR headsets

In other words, some of the programs which students might want to use force us to purchase headsets on which they can run. This “walled garden” approach is antithetical to setting up an academic VR lab, where ideally we should be able to run any app on any headset. However, we have little choice, given the way the marketplace is currently structured (and especially given Meta’s outsized influence, with a little under 20 million Quests of various kinds sold, which makes it by far the most popular VR headset).


The University of Manitoba’s School of Nursing recently opened the first virtual reality lab on campus, and they are only using Meta Quest 2 headsets. This lab is currently training nursing students using UbiSim software, with plans to expand its offerings over time (more info here on Mastodon). And the U of M’s Computer Science department is also planning to use Meta Quest 2s in its planned VR lab.

The VR lab at the University of Manitoba School of Nursing

In other words, you can choose not to dance with the 900-pound gorilla in the room (i.e., Meta), but it will severely limit your choice of dance partners! And that is why, despite my lingering antipathy towards Mark Zuckerberg and his company’s business practices, we will likely be buying a number of Meta Quest 2 headsets to add to our planned virtual reality laboratory at the University of Manitoba Libraries, starting with a single test unit purchased on my travel and expense funds for work.

Wish me luck; I am off on yet another adventure!

Teaching a Psychology Course at Mount Royal University using AltspaceVR

Dr. Tony Chaston in the virtual world he created for his psychology course in AltspaceVR

Psychology Professor Tony Chaston of Mount Royal University (in Calgary, Alberta, Canada) has developed a new psychology course that will teach students using the social VR platform AltspaceVR. The undergraduate-level course, which is called The Digital Frontier: Perception, AI and Virtual Reality in Psychology, is described as follows in the course calendar:

This course focuses on psychological theory and application relevant to interacting with current and emerging digital technologies. Topics will typically include interfacing and communicating with artificial intelligence, perception and cognition in digital spaces such as virtual and augmented reality and how we can feel “present” in our digital experiences. This course will be taught in a Virtual Reality Classroom. 

Note: This course requires students to have a Virtual Reality Head Mounted Display (HMD). 

According to a news article from Mount Royal:

The first of its kind in Canada, the class, which started Sept. 14, filled its 20 spots (standard for a fourth-year psych course) in a matter of days…

“Immersion in media is a topic that’s been around for a long time, but it takes on a whole different level when you talk about it in VR,” Chaston says, noting it will play a role in everything from work and play to shopping as retailers set up VR stores.

After diving deep into what VR is and how it works, the course will focus on Chaston’s research into using VR nature scenes to lower stress levels. The class is set up as a three-hour block and already students have been invited to a couple of VR “events” to ensure they are comfortable in the space. The first day was an introduction, including basic etiquette for behaviour in VR. While most class time will be in VR, there will be time for group work that uses other more traditional online formats like Google Meet so that students aren’t wearing headsets for three hours straight. As note-taking is tough in VR, those will be provided separately.

Chaston credits Anna Nuhn (who has since left MRU) and Erik Christiansen at the Riddell Library and Learning Centre and MRU psychology professor Dr. Evelyn Field, PhD, for their help over the past year in developing the course.

“This course is possible thanks to Tony’s willingness to immerse himself in the pedagogy of VR and best practices for designing virtual learning environments,” said Christiansen, an assistant professor and subject librarian at the MRU Library who has a background in information technology.

It’s wonderful to see more use cases of social VR in university teaching! For 15 more examples of the use of social VR in higher education, I can refer you to my recent half-hour presentation on the topic to the University of Manitoba Senate Committee on Academic Computing, as well as all my blogposts tagged Higher Education.


Thank you to Kari Kumar of the University of Manitoba for the heads up!