A Report from the IMMERSIVE X Conference, Nov. 12th and 13th, 2025

Because of my workload, I was only able to attend one session of the IMMERSIVE X metaverse conference on Wednesday, November 12th:

  • Conversational AI in Healthcare (held in Foretell Reality, which was a new-to-me platform).

However, I more than made up for it on Thursday, November 13th, attending the following five conference sessions:

  • Private, Present & Fully Heard: How Virtual Reality is Reclaiming the Power of Anonymous Peer Support (held in Foretell Reality)
  • Healing Beyond Walls: VR Social Support For Patients At SickKids (held in Foretell Reality)
  • Immersive Learning Beyond the Classroom (held in ENGAGE)
  • AI, WebXR and the Future of the Immersive Web (held in Hubs)
  • Will AR Be The Big Immersive Breakthrough? (held in VRChat)

So I will briefly report on each of these six sessions, one by one.

I accessed the three sessions held in Foretell Reality using the Meta Quest 3 wireless headset at my workplace, and I entered the sessions in ENGAGE and VRChat using my PCVR setup at work, a Vive Pro 2 VR headset tethered to a Windows desktop PC with a fairly decent NVIDIA graphics card.

As for the session held in Hubs (formerly Mozilla Hubs), I could have entered it via virtual reality, but instead I opted to pay a visit via the flatscreen monitor on my trusty MacBook Pro! By the end of the day, my neck and shoulders were aching, but I did make it through.

Conversational AI in Healthcare

While this was not the first time that I had seen artificial intelligence combined with social VR (the first time was a memorable conversation I had with an AI-enhanced toaster in the now-shuttered platform called Tivoli Cloud VR, back in January of 2021), this demo had a more practical purpose: to use generative AI to power a diabetes counselor (played by an NPC avatar) who could hold a conversation with a real-life person who has questions after being newly diagnosed with type 2 diabetes.

An initial discussion held in an open-air auditorium was followed by a group teleport to a lecture theatre where the embodied AI chatbot (a woman dressed in light blue, centre) held a conversation with a demonstrator (the woman named Ines MTX):

When I asked what generative AI system was being used to drive this demo, I was informed that Foretell Reality can actually use any of Google’s Gemini, OpenAI’s ChatGPT, or Anthropic’s Claude to generate responses. As somebody who was diagnosed with type 2 diabetes during the recent pandemic, and who never had an opportunity to meet with a real-life diabetes coach, I would really have appreciated having something like this available!
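Foretell Reality hasn’t published how its LLM integration actually works, so purely as a hedged illustration, here is a minimal TypeScript sketch of what “swap in any of Gemini, ChatGPT, or Claude” might look like architecturally: the counselor’s persona lives in a system prompt, and the avatar-facing code only talks to a provider-agnostic interface. The ChatProvider interface, counselorTurn function, and prompt wording are all hypothetical, not anything taken from the demo itself.

```typescript
// Hypothetical sketch only: illustrates how an embodied "diabetes counselor" NPC
// could sit behind a provider-agnostic interface, so that Gemini, ChatGPT, or
// Claude can be swapped in without changing the avatar-facing code.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatProvider {
  // Sends the running conversation and returns the counselor's next reply.
  reply(messages: ChatMessage[]): Promise<string>;
}

const COUNSELOR_SYSTEM_PROMPT = `You are a supportive diabetes educator speaking
with a person newly diagnosed with type 2 diabetes. Answer in plain language,
and remind them to confirm any medical decisions with their own care team.`;

async function counselorTurn(
  provider: ChatProvider,
  history: ChatMessage[],
  userUtterance: string
): Promise<string> {
  const messages: ChatMessage[] = [
    { role: "system", content: COUNSELOR_SYSTEM_PROMPT },
    ...history,
    { role: "user", content: userUtterance },
  ];
  // In a platform like Foretell Reality, the returned text would then be fed to
  // text-to-speech and lip-sync on the NPC avatar.
  return provider.reply(messages);
}
```

Whichever of the three LLMs is plugged in behind that interface, the NPC side of the conversation stays the same, which is presumably what makes the provider interchangeable.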

Unfortunately, the conference session description was frustratingly short on concrete details: who the speakers were, what company (or companies) they represented (other than Foretell Reality), and who the actual client was. It was also not clear to me if this was just a tech demo or an actual system used by real people. And, because I was in my Meta Quest 3 headset, I could not take any written notes as people were speaking. There was a company called MTX involved, as far as I can remember. This is an example of how an inadequate session description hampers my ability to report on the event itself, as impressive as the technology demo was.

Private, Present & Fully Heard: How Virtual Reality is Reclaiming the Power of Anonymous Peer Support

We started off in this open-air amphitheatre at dusk (I think they said it was based on Red Rocks in Colorado)

Unlike the previous day’s session, the two sessions I attended in Foretell Reality on Thursday were sterling examples of how social VR can be used effectively to address real-world problems and provide tangible benefits.

First up, here’s the conference blurb about the NorthStar project:

In traditional Alcohol & Substance Use Disorder treatment spaces, anonymity is often promised but rarely provided. NorthStar’s groundbreaking VR platform redefines what true anonymity can look like—and how it unlocks unparalleled honesty, vulnerability, and connection. This session explores how immersive, avatar-based peer support transforms treatment outcomes by allowing patients to show up fully without being seen, while feeling surrounded by a community. We’ll discuss how VR group therapy makes treatment more accessible, more private, and more powerful—meeting people where they are – literally – while protecting who they are.

Unfortunately, the representative from NorthStar was unable to be present at this session, but DJ from Foretell Reality still had plenty to show us, taking us on a sort of field trip through the various settings built by the company to facilitate NorthStar’s virtual group meetings (based on Alcoholics Anonymous principles), such as an urban park where you could toss a stick and have one of several virtual dogs fetch it back to you:

Foretell Reality’s dog park, where virtual AA meetings are sometimes held

Other locations included a chilly space station, where you could see your breath in front of you in the frosty air, and gravity could be turned off and on at will:

Foretell Reality’s space station

And finally, a newer addition: a competitive shooting game where you were part of a team trying to shoot down rubber ducks of various colours! (I’m not sure if this last one was actually used by NorthStar clients, though.)

Duck hunting in Foretell Reality

Overall, and especially when combined with the conference session I describe next, I came away with a very favourable impression of Foretell Reality. You can check out their website here.

Healing Beyond Walls: VR Social Support For Patients At SickKids

Shaindy the avatar presents a video of the real-life Shaindy, explaining the SickKids project

Another Foretell Reality client is Toronto, Ontario’s famous SickKids Hospital. Here’s the conference blurb:

Join us for a special fireside chat with Shaindy, Clinical Manager [of the] Child Life Program at SickKids Hospital in Canada and DJ Smith, Co-Founder and Chief Creative Officer of Foretell Reality. Together, they will share how virtual reality is transforming the way children facing serious illnesses connect, play, and support one another. Shaindy will discuss her groundbreaking program that allows kids to log in once a week to a virtual world for group sessions. DJ will highlight how Foretell Reality’s platform has powered successful clinical pilots and is now scaling to reach even more children. This conversation will explore the impact on patients and families, the power of hospital collaboration, and the future of immersive technology in pediatric care.  

By “kids,” Shaindy explained, she actually meant teenagers (aged 13 to 19) who were in hospital or hospice care, fighting serious health conditions such as cancer. Because of their illnesses, these teenagers often found it difficult to socialize, which is where social VR afforded them an opportunity to interact and have fun virtually. Shaindy explained that they would get groups of six or so patients together, and they would keep the sessions open and freeform so the “kids” could join or leave as they felt able to do so.

Among the many stories told was the delight of one patient who discovered a rubber ducky hiding in one of the virtual environments, which led to a quest to hide ducks (and pigs!) in as many environments as possible for others to find. DJ helpfully rezzed one such duck for show-and-tell (also a pig, but I didn’t take a picture of that!). I apologize for the lopsided aspect of some of these screenshots; keeping your head level in a VR headset while taking screenshots is a bit of a black art, at which I usually fail miserably!

Behold, a rubber duck! (Apologies for the awkward angle of this shot.)

The presentation ended with a group teleport to a meditation centre, where Shaindy led us through a box breathing exercise, helped along by the in-world painting tools installed by Foretell Reality!

We ended with a box breathing exercise in a meditation temple, assisted by a little art therapy. (Again, apologies for the sideways tilt!)

This was one of the most heartwarming conference sessions I have ever attended, and I wish this project every success as they hope to expand this service to more hospitals in future!

Immersive Learning Beyond the Classroom

This session, held in ENGAGE, had a capacity crowd of avatars present (in fact, there were so many avatars that my experience began to degrade to the point where I eventually had to bail out of my Vive Pro 2 VR headset or risk nausea!). Because of that, I missed roughly the final third of the talk. Here’s the blurb:

How can immersive environments transform teaching, learning, and cross-cultural connection? This panel brings together diverse perspectives from the fields of education and innovation. Chris Madsen empowers organizations worldwide through the ENGAGE XR platform. Wolf Arne Storm and his team at the Goethe-Institut created GoetheVRsum, which explores new formats in culture, language, and creativity. Marlene May researches and teaches in 3D virtual spaces at Karlshochschule International University and Birgit Giering is pioneering the large-scale adoption of XR in schools of North Rhine-Westphalia. Moderated by Prof. Dr. Dr. Björn Bohnenkamp, this session will explore the future of learning beyond traditional classrooms.

However, this time I was able to take some chicken-scratch handwritten notes! So here goes… Wolf-Arne spoke about the Goethe-Institut, Marlene spoke about Karlshochschule International University (in fact, the space where we met in ENGAGE was one of their creations), and Birgit spoke about her work in the schools of North Rhine-Westphalia.

The Goethe-Institut is Germany’s premier cultural institute, with locations around the world teaching German language and culture. The organization chose ENGAGE as its metaverse platform, creating a virtual space called the GoetheVRsum, whose design draws inspiration from the works of various Bauhaus artists.

It was a shame that technical glitches kinda marred the overall experience for me, but I am glad that I was able to make it in, and make it through most of it!

AI, WebXR and the Future of the Immersive Web

This session was held in Hubs (formerly Mozilla Hubs), and much like all the Hubs experiences I have ever had, it tended towards the spontaneous, the off-the-cuff, and the chaotic! Like the ENGAGE session, it was unfortunately plagued by technical issues. The presenter, Adam Filandr, talked about how he used open-source WebXR code and generative AI tools to create something called NeoFables, which delivers personalized worlds, characters, and storytelling (currently limited to 2D images, although he hopes to expand it over time to create 3D content).
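For readers who haven’t encountered WebXR before, here is a minimal, hedged sketch (in TypeScript, assuming the standard browser WebXR API and its type definitions, e.g. the @types/webxr package) of what entering an immersive session from a web page looks like. This is generic WebXR boilerplate to give a flavour of the kind of open-source plumbing a project like NeoFables builds on; it is not Adam’s actual code.

```typescript
// A minimal sketch of entering an immersive VR session with the browser WebXR API.
// Generic boilerplate for illustration only -- not NeoFables' actual code.
// Assumes WebXR TypeScript type definitions are available (e.g., @types/webxr).

async function enterVR(canvas: HTMLCanvasElement): Promise<void> {
  if (!navigator.xr) {
    console.warn("WebXR is not available in this browser.");
    return;
  }

  // Feature-detect immersive VR support before asking for a session.
  const supported = await navigator.xr.isSessionSupported("immersive-vr");
  if (!supported) {
    console.warn("Immersive VR sessions are not supported on this device.");
    return;
  }

  // Request the session; optional features are hints the browser may ignore.
  const session = await navigator.xr.requestSession("immersive-vr", {
    optionalFeatures: ["local-floor", "hand-tracking"],
  });

  // Hook the session up to a WebGL context so the page can render into the headset.
  const gl = canvas.getContext("webgl2", { xrCompatible: true });
  if (!gl) {
    await session.end();
    return;
  }
  session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });

  session.addEventListener("end", () => {
    console.log("VR session ended; back to the flat web page.");
  });
}
```

The appeal of this approach, as I understood Adam’s talk, is that the whole experience runs in an ordinary browser tab, with no app store or platform-specific build required.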


He discussed the advantages and disadvantages of using WebXR to create VR content, and gave a couple of examples of bigger-name projects built on WebXR (Wol, made by Google to provide information about the U.S. national parks system, and Raw Emotion Unites Us, about Paralympian athletes). Still, it was interesting to hear a developer’s perspective on using WebXR, mixed in with generative AI tools, to create content.

Will AR Be The Big Immersive Breakthrough? (Heather Dunaway Smith and Lien Tran)

My final session on Thursday, Nov. 13th was not what I expected. It was a panel discussion with two musicians and artists, Lien and Heather, who have worked extensively with augmented reality and mixed reality. They shared samples of their work, and the panel (moderated by Christopher Morrison) held a wide-ranging discussion on how AR/MR/XR (or, as Chris put it, “XR-poly”) is impacting and transforming creative expression. I’m not sure if a recording of this talk will be posted (I did not see Carlos and his video camera while I was there), so I will leave it at that, since (again) I did not take written notes.

The Lazy, Hazy, Crazy Days of Summer: AI, VR, and the Trade Wars

This summer, following my return to full-time work after my six-month, half-time sick leave for job burnout, has been interesting, in both positive and negative ways (remember the ancient Chinese curse, “May you live in interesting times”). I’ve already written at length about our unprecedented, climate-change-fuelled wildfire season here in Manitoba, but there have been other things on my mind as well: AI, VR, and the ongoing trade war with the United States.

Photo by Steve Johnson on Unsplash

I have been learning a lot more about artificial intelligence in general, and generative AI in particular, over the past few months. I am doing this to prepare myself for a couple of events this coming Fall term at my university.

Well, I have somehow talked myself into giving a 15-minute presentation on artificial intelligence and generative AI (GenAI) to the professors at an upcoming Faculty Council meeting in the Faculty of Agriculture and Food Sciences (as I am the liaison librarian serving that faculty). This all came out of an addition I made to my PowerPoint slides last year, warning the students I spoke to about the dangers of relying on GenAI tools like ChatGPT as search engines. I had been telling members of the Agriculture Library Committee about this work at one of our face-to-face meetings, and by the end of the discussion, I had agreed to give a presentation to Faculty Council. (Me and my big mouth!)

However, to my horror, I realized that the field of GenAI was now evolving so quickly that pretty much everything I had talked about last year was already way out of date! This necessitated a lot of reading (yes, actual books from the university’s collection) and a lot of web browsing, including taking some online courses, in order to work my way up the learning curve. It turns out that being asked to give an accessible presentation on a topic to an audience of professors (who are pretty smart people overall) is a very powerful motivator to learn new things!

So I have been spending much of the past couple of months learning more about AI. I already had a subscription to OpenAI’s ChatGPT, having been among the first million people to set up an account in 2022. To that, I have added a second subscription to Claude AI, made by Anthropic, a company founded by ex-OpenAI employees who had ethical concerns about the direction their former company was taking with its GenAI products.

I’m getting closer to the point where I feel comfortable pulling together this 15-minute talk. In addition, I have agreed to team-teach a course on GenAI to graduate students and student advisors this Fall term, along with a lawyer. The lawyer will discuss the legal and copyright issues associated with GenAI, and I will focus on the technical and practical aspects of GenAI tools (leaning heavily on the same content as my talk to the agriculture professors). I am slowly but surely becoming the in-house AI expert at the University of Manitoba Libraries, as well as the virtual reality expert!


Speaking of virtual reality, now that I am no longer officially involved with the ongoing virtual/augmented reality lab project at my university library system, all the VR equipment I had donated to the lab has been returned to me (the people working on the project have decided to purchase brand-new equipment).

I have had to drag a second desk into my open-office cubicle area to set up my Windows desktop PC and Vive Pro VR headset again, and I’ve had to find space to stash away my Meta Quest 2 and Meta Quest 3 wireless headsets when I am not using them! Between work and home, I have no fewer than five different headsets to deal with (my Valve Index at home sits unused because I need to reinstall its software after the recent hard drive crash of my personal computer, and then, of course, there is my Apple Vision Pro, about which I have written several blog posts over the past twelve months).

However, I must confess that I haven’t really used any of the Windows VR/AR headsets very much since I bought my Apple Vision Pro, which I still use a couple of hours a day at work in the large, clear (and now, ultra widescreen!) Virtual Display, with my MacBook Pro. Often, I lug my Apple Vision Pro home in my backpack, using it there to watch TV and movies, to browse Reddit news posted to the AVP subreddits, and to hang out and chat with folks from all over the world in InSpaze (still one of the killer apps, in my opinion). This device is worth every penny I paid for it, despite its high price tag, and I will be first in line for whatever Apple comes out with next in its line of spatial computing devices. I’m all in.

As many of you already know, I have completely given up on most corporate-run, algorithm-driven social media platforms, most of which have become toxic cesspools. I left Meta’s Facebook several years ago, and I quit Twitter/X when Apartheid Clyde took over. While I still have nominal accounts on Mastodon (from which I watched the Twitter dumpster fire from afar) and Bluesky (to follow public health experts and, more recently, AI experts), I find that I can now go weeks at a time without bothering to check either site. My mental and emotional health has greatly improved since I essentially discarded most social media, and I can highly recommend doing the same.

I have also been going through the long, slow, arduous process of disengaging from Google, replacing the Chrome web browser with Firefox, Google search with Qwant, YouTube Music with Apple Music*, and Gmail with the Swiss-owned, privacy-oriented Proton service. In particular, the switch from Gmail to Proton Mail has been lengthy and ongoing.


Photo by Praveen Kumar Nandagiri on Unsplash

I don’t think that most Americans (as uninterested as they tend to be in anything that goes on outside their borders) really understand just how royally pissed off Canadians are at the United States right now. As I write this, the latest word from Donald Trump is that he is planning to impose a 35% tariff on Canadian imports, which of course will kick off another round of tit-for-tat trade war and piss Canadians off even more than they already are. Elbows up!

I read an article last week in Maclean’s (the Canadian version of Time or Newsweek) that made that point quite well, so I am quoting it at length below:

Canadians define themselves in opposition to the United States because the country was founded by people who rejected the bloody American Revolution. We’ve kept rejecting it for almost three centuries.

The United States is an unpredictable and increasingly dysfunctional empire, an extended experiment in pushing everything to the extreme. Canadians, on the other hand, have a long but imperfect history of muddling along peaceably. We are not bound together by some intrinsic identity—by language, race, religion or a shared and glorious history of revolution or conquest. We become nationalistic only when it is necessary to protect ourselves against the aggression of the United States.

That negative, defensive definition has always been enough. It is kind of the point of Canada.

As Canada settled deeper into the winter of 2025, and Trump kept boorishly insisting that Canadians would be happier in his clutches, we got mad.

Canadians yanked U.S. liquor from store shelves, cancelled trips and hoisted flags, even in downtown Montreal. Pallets of U.S. produce spoiled in the supermarket aisles. Normally bustling American border towns that depended on shopping day trips were suddenly silent. The U.S. departure lounges at Pearson and Trudeau were empty.

Nova Scotia Premier Tim Houston removed interprovincial trade barriers for any province that would reciprocate and, post-election, Mark Carney went a step further and pledged to dismantle all interprovincial trade barriers by Canada Day. Manitoba Premier Wab Kinew announced he was planning to let some electricity contracts with the States lapse and use much of that excess power to boost his own province’s energy economy. Quebec Premier François Legault said Quebecers would consider east-west oil pipelines they had previously opposed.

People were soon speculating about a guerrilla war of resistance. The Americans might be able to take Canada, but could they hold it? How could they justify the casualties they would take? At the end of January, one of the most capable men I know texted me, out of the blue, that he had told his wife, the mother of his infant child, that he’d be “willing to die on the end of a rifle to make sure” the Americans could not take Canada.

It became clear how deep the feeling ran on February 1 at Ottawa’s Canadian Tire Centre, where the Senators played the Minnesota Wild. Because Ottawa is a government town, and there are often as many Leafs or Habs fans in attendance as Sens supporters, it can be a dull place to watch a game. But there was nothing sedate about the booing as “The Star-Spangled Banner” played. Fans booed it heartily from start to finish, drowning out the unfortunate singer.

Stephen Maher, “Never for sale.” Maclean’s, July 2025.

I honestly don’t know how all this is going to play out over the next four years, but I have slowly learned to tune out whatever batshit craziness is happening in the United States and its trade war with Canada (and the rest of the world), and to focus on what I can control. So I have been voting both with my feet and my wallet.

In particular, like many of my fellow Canadians, I refuse to visit the United States until Trump is out of office. No conferences, no vacations. Nothing. And I have already cancelled my subscriptions to Netflix and Amazon Prime, and most recently I added both Disney+ and Hayu (Bravo reality TV) to that list. I’m probably not done yet. I am pissed.

During the pandemic, I got into the habit of ordering my groceries online through the Walmart website, and then using their Pickup service early Saturday morning. Not anymore! I have used my librarian skill set to extensively research Canadian-made alternatives to American brands (Buh-bye, Campbell’s Chunky Soup! Hello, Tim Hortons soup!). I have swapped the Walmart website for the Real Canadian Superstore, still picking up my online-ordered (but now overwhelmingly Canadian-produced) groceries bright and early Sunday morning. Works just as well for me!

Finally, I have gone and joined the Red River Co-Op, a locally-owned co-operative grocery store and gas station that has been active here in Winnipeg since the 1930s. And I do plan to regularly shop at the St. Norbert farmers’ market, just south of where I live in Winnipeg, to support local farmers and artisans (it’s quite literally across the street from the Red River Co-Op store I now shop at!).

So, that’s my report from my lazy, hazy, crazy days of summer! Stay cool and stay sane in these trying times.


*I fully realize that Apple is an American company, but I associate Apple with California, and I am not averse to supporting liberal-leaning, Democratic-voting California! 😜

Taking a Moment to Catch My Breath and Figure Out Where I’m Going Next

So, as I have mentioned, I haven’t been blogging much lately, because I have been so busy with my full-time paying job as an academic librarian at my employer, the University of Manitoba in Winnipeg, Canada. Now that the annual rush of training hundreds of students on how to use the university libraries effectively and efficiently has ended, my attention turns to my other big project: specifying hardware and software for a virtual reality lab, which we are calling the XR Lab (the XR stands for eXtended Reality, a sort of umbrella term used for virtual reality, augmented reality, mixed reality, and what Apple is now calling spatial computing).

The purpose of this lab is to provide virtual reality and augmented reality hardware and software (both VR/AR experiences and content creation tools) to University of Manitoba faculty, staff, and students to support their teaching, learning and research. I have been working on this project for the past two and a half years, and it is a weird feeling to finally see the computers removed from the room which we have designated as the future home of the XR Lab, in preparation for the necessary room renovations (which are to start soon, and are supposed to be completed by spring of next year):

The former computer lab which will be renovated to create the XR Lab

In the meantime, I have been cross-training another Libraries staff member on the hardware and software which I am proposing for the XR Lab. In other words, if (God forbid!) I should get run over by a bus, the idea is that somebody will be able to give VR/AR demos in my place. There is a lot of information which has to be shared! For example, our last training session included a section on how to set the correct interpupillary distance (IPD) on both the Vive Pro 2 and Meta Quest 3 headsets (thankfully, the Apple Vision Pro scans your eyes and sets the IPD automatically!).

Just another day in the office: the Vive Pro 2 VR headset is sitting on the Windows desktop PC it is tethered to on the right, the Meta Quest 3 is to the left near the back of the table, and the Apple Vision Pro is sitting at the centre, near the front of the table.

There are a lot of balls to juggle, and I must confess that I often feel exhausted and even overwhelmed at times. When I come home from work, the last thing I want to do is write a blog post! So my formerly feverish blogging pace has unfortunately slowed to a crawl. Also, my blog post viewing stats are way, waaay down. Where I used to get 1,500 views a day, now I’m lucky to reach even one third of that:

Partly it’s because the metaverse hype cycle has crested and crashed (and everyone has jumped on the artificial intelligence bandwagon), and partly it’s because longform blogs seem to be an increasingly outdated—even quaint—means of communication in the current short-attention-span era of Instagram pictures and TikTok videos.

Which means I seriously need to pause and think about the direction in which I want to take this blog, and who I want my audience to be. One of the things that I have always said is that, in a blog that literally has my name in the URL, anything I want to talk about here is on topic! However, I am wondering if perhaps I have cast my net a little too broadly, and it might be time to narrow the focus of the RyanSchultz.com blog somewhat.

I don’t think that I will cease blogging completely; I still feel the need to write, but I need to reflect a bit on what I want to write about, and why. I still do get a sense of accomplishment when I craft a well-written blog post on a topic that I care about and, as always, I read and appreciate all the comments and feedback I receive on my blogposts!

So please bear with me as I figure out where I am going next with (gestures broadly) all this.

It can be difficult to choose the next direction in which to go (Image by Rama Krishna Karumanchi from Pixabay)

InSpaze: One of the First Social Apps for the Apple Vision Pro (Plus a Tantalizing Look at Apple’s New Spatial Personas)

The HelloSpace team (makers of InSpaze) met with Apple CEO Tim Cook in late March (source: Twitter)

Given Apple’s emphasis on the term spatial computing (instead of virtual reality or augmented reality), some observers have commented that there is a somewhat puzzling lack of social VR/AR apps for the Apple Vision Pro. Well, I recently learned (from the very active r/VisionPro community on Reddit) that there is a social app for the AVP, called InSpaze. Here’s a 15-minute YouTube video giving you an idea of what is possible now:

Please note several interesting things about this video: First, when you see the hands of the person capturing this video in his Apple Vision Pro (using the built-in video recording features), they are actually his real hands via pass-through, not an avatar’s hands!

Second, one of the features of InSpaze is real-time voice translation! One of the participants spoke a sentence in Chinese, which was translated into English and displayed as a subtitle under his Persona (at the 6:45 minute mark in this video).

There are people in this video participating around the table via their own Vision Pro headsets, where their avatars appear as the still-in-beta-testing Personas (which are based on a scan of each person’s real-life face, done as part of setup). While the Personas feature of the AVP can still be a bit unsettling, with uncanny valley vibes, and they currently appear in InSpaze only via a flatscreen view, Apple has just announced (and released) Spatial Personas, which look like this:

So, I expect it will only be a matter of time before Spatial Personas are added to InSpaze, replacing the locked-in-flatscreen look that current AVP participants have with a three-dimensional version. Mind-blowing! It’s certainly a refreshing change from a Zoom call!

Note that iPhone and iPad users running the InSpaze app can also participate in InSpaze rooms! These users get a cartoonified version of their real-life faces, which honestly kind of matches the cartoony look of the AVP Personas. I couldn’t help but notice that one of the iPhone participants was standing outside, and the wind was blowing his hair around, which looked really weird combined with his cartoony face! Another guy (the one speaking Chinese) was behind the wheel of his car (let’s hope he wasn’t driving!).

InSpaze is already available on the App Store for the Apple Vision Pro, iPhone, and iPad. It is made by a company called HelloSpace (website; Discord; Twitter/X). Apparently, it’s been quite a hit among AVP users, who seem to appreciate having a way to connect with each other in virtual space! In fact, in the first video up top, they talk about how Apple employees themselves like to use InSpaze to connect with their customers.

Things are happening so fast in this space that it’s been hard to keep on top of all the developments! I do find that a daily visit to the r/VisionPro subreddit is a very good way to stay abreast of everything that’s going on with this rapidly-evolving technology. I’m still patiently waiting for when we Canadians can pre-order the Apple Vision Pro (hopefully sometime this spring or summer). And I’m quite envious of the Americans who have already gotten their hot little hands on a unit!

I want one. I waaaant one!