A Report from the IMMERSIVE X Conference, Nov. 12th and 13th, 2025

Because of my workload, I was only able to attend one session of the IMMERSIVE X metaverse conference on Wednesday, November 12th:

  • Conversational AI in Healthcare (held in Foretell Reality, which was a new-to-me platform).

However, I more than made up for it on Thursday, November 13th, attending the following five conference sessions:

  • Private, Present & Fully Heard: How Virtual Reality is Reclaiming the Power of Anonymous Peer Support (held in Foretell Reality)
  • Healing Beyond Walls: VR Social Support For Patients At SickKids (held in Foretell Reality)
  • Immersive Learning Beyond the Classroom (held in ENGAGE)
  • AI, WebXR and the Future of the Immersive Web (held in Hubs)
  • Will AR Be The Big Immersive Breakthrough? (held in VRChat)

So I will briefly report on each of these six sessions, one by one.

I accessed the three sessions held in Foretell Reality using the Meta Quest 3 wireless headset at my workplace, and I entered the sessions in ENGAGE and VRChat using my PCVR setup at work, a Vive Pro 2 VR headset tethered to a Windows desktop PC with a fairly decent NVIDIA graphics card.

The final session, held in Hubs (formerly Mozilla Hubs), I could have entered via virtual reality, but instead I opted to pay a visit via the flatscreen monitor on my trusty MacBook Pro! By the end of the day, my neck and shoulders were aching, but I did make it through.

Conversational AI in Healthcare

While this was not the first time that I had seen artificial intelligence combined with social VR (the first time was a memorable conversation I had with an AI-enhanced toaster in the now-shuttered platform Tivoli Cloud VR, back in January of 2021), this demo had a more practical purpose: using generative AI to power a diabetes counselor (played by an NPC avatar) who could hold a conversation with a real-life person who has questions after being newly diagnosed with Type II diabetes.

An initial discussion held in an open-air auditorium was followed by a group teleport to a lecture theatre where the embodied AI chatbot (a woman dressed in light blue, centre) held a conversation with a demonstrator (the woman named Ines MTX):

When I asked what generative AI system was being used to drive this demo, I was informed that Foretell Reality actually can use any of Google’s Gemini, OpenAI’s ChatGPT, or Anthropic’s Claude AI to generate responses. As somebody who was actually diagnosed with Type II diabetes during the recent pandemic, and who never had an opportunity to meet with a real-life diabetic coach, I would really have appreciated having something like this available!

Unfortunately, the conference session description was frustratingly short on concrete details: who the speakers were, what company (or companies) they represented (other than Foretell Reality), and who the actual client was. It was also not clear to me whether this was just a tech demo or an actual system used by real people. And, because I was in my Meta Quest 3 headset, I could not take any written notes as people were speaking. There was a company called MTX involved, as far as I can remember. This is an example of how an inadequate session description hampers my ability to report on the event itself, as impressive as the technology demo was.

Private, Present & Fully Heard: How Virtual Reality is Reclaiming the Power of Anonymous Peer Support

We started off in this open-air amphitheatre at dusk (I think they said it was based on Red Rocks in Colorado).

Unlike the previous day’s session, both of the sessions I attended in Foretell Reality on Thursday were sterling examples of how social VR can be used as an effective solution to real-world problems and issues, providing tangible benefits.

First up, here’s the conference blurb about the NorthStar project:

In traditional Alcohol & Substance Use Disorder treatment spaces, anonymity is often promised but rarely provided. NorthStar’s groundbreaking VR platform redefines what true anonymity can look like—and how it unlocks unparalleled honesty, vulnerability, and connection. This session explores how immersive, avatar-based peer support transforms treatment outcomes by allowing patients to show up fully without being seen, while feeling surrounded by a community. We’ll discuss how VR group therapy makes treatment more accessible, more private, and more powerful—meeting people where they are – literally – while protecting who they are.

Unfortunately, the representative from NorthStar was unable to be present at this session, but DJ from Foretell Reality still had plenty to show us, taking us on a sort of field trip through the various settings built by the company to facilitate NorthStar’s virtual group meetings (based on Alcoholics Anonymous principles), such as an urban park where you could toss a stick and have one of several virtual dogs fetch it back to you:

Foretell Reality’s dog park, where virtual AA meetings are sometimes held

Other locations included a chilly space station, where you could see your breath in front of you in the frosty air, and gravity could be turned off and on at will:

Foretell Reality’s space station

And finally, a newer addition: a competitive shooting game where you were part of a team trying to shoot down rubber ducks of various colours! (I’m not sure if this last one is actually used by NorthStar clients, though.)

Duck hunting in Foretell Reality

Overall, and especially when combined with the following conference session I describe below, I came away with a very favourable impression of Foretell Reality. You can check out their website here.

Healing Beyond Walls: VR Social Support For Patients At SickKids

Shaindy the avatar presents a video of the real-life Shaindy, explaining the SickKids project

Another Foretell Reality client is Toronto, Ontario’s famous SickKids Hospital, about which the conference blurb states:

Join us for a special fireside chat with Shaindy, Clinical Manager [of the] Child Life Program at SickKids Hospital in Canada and DJ Smith, Co-Founder and Chief Creative Officer of Foretell Reality. Together, they will share how virtual reality is transforming the way children facing serious illnesses connect, play, and support one another. Shaindy will discuss her groundbreaking program that allows kids to log in once a week to a virtual world for group sessions. DJ will highlight how Foretell Reality’s platform has powered successful clinical pilots and is now scaling to reach even more children. This conversation will explore the impact on patients and families, the power of hospital collaboration, and the future of immersive technology in pediatric care.  

By “kids,” Shaindy explained that these were actually teenagers (aged 13 to 19) who were in hospital or a hospice, fighting various health-threatening conditions such as cancer. Because of their illnesses, these teenagers often found it difficult to socialize, which is where social VR afforded them an opportunity to interact and have fun virtually. Shaindy explained that they would get groups of six or so patients together, and they would keep it open and freeform so the “kids” could join or leave as they felt able to do so.

Among the many stories told was the delight of one patient who discovered a rubber ducky hiding in one of the virtual environments, which led to a quest to hide ducks (and pigs!) in as many environments as possible for others to find. DJ helpfully rezzed one such duck for show-and-tell (also a pig, but I didn’t take a picture of that!). I apologize for the lopsided angle of some of these screenshots; finding the right position for your head in a VR headset when taking screenshots is a bit of a black art, at which I usually fail miserably!

Behold, a rubber duck! (Apologies for the awkward angle of this shot.)

The presentation ended with a group teleport to a meditation centre, where Shaindy led us through a box breathing exercise, helped along by the in-world painting tools installed by Foretell Reality!

We ended with a box breathing exercise in a meditation temple, assisted by a little art therapy. (Again, apologies for the sideways tilt!)

This was one of the most heartwarming conference sessions I have ever attended, and I wish this project every success as they hope to expand this service to more hospitals in future!

Immersive Learning Beyond the Classroom

This session had a capacity crowd of avatars present, and was held in ENGAGE (in fact, there were so many avatars that my experience began to degrade to the point where I eventually had to bail out of my Vive Pro 2 VR headset or risk nausea!). Because of that, I missed about the final third of the talk. Here’s the blurb:

How can immersive environments transform teaching, learning, and cross-cultural connection? This panel brings together diverse perspectives from the fields of education and innovation.
Chris Madsen empowers organizations worldwide through the ENGAGE XR platform. Wolf Arne Storm and his team at the Goethe-Institut created GoetheVRsum, which explores new formats in culture, language, and creativity. Marlene May researches and teaches in 3D virtual spaces at Karlshochschule International University and Birgit Giering is pioneering the large-scale adoption of XR in schools of North Rhine-Westphalia. Moderated by Prof. Dr. Dr. Björn Bohnenkamp, this session will explore the future of learning beyond traditional classrooms.

However, this time I was able to take some chicken-scratch handwritten notes! So here goes… Wolf-Arne spoke about the Goethe-Institut, Marlene spoke about Karlshochschule International University (in fact, the space where we met in ENGAGE was one of their creations), and Birgit spoke about her work in the schools of North Rhine-Westphalia.

The Goethe-Institut is Germany’s premier cultural institute, with locations around the world teaching German language and culture. The organization chose ENGAGE as its metaverse platform, creating a virtual space called the GoetheVRsum, which uses the works of various Bauhaus artists as inspiration for its design.

It was a shame that technical glitches somewhat marred the overall experience for me, but I am glad that I was able to make it in, and make it through most of it!

AI, WebXR and the Future of the Immersive Web

This session was held in Hubs (formerly Mozilla Hubs), and much like all the Hubs experiences I have ever had, it tended towards the spontaneous, the off-the-cuff, and the chaotic! Like the ENGAGE session, it was unfortunately plagued by technical issues. The presenter, Adam Filandr, talked about how he used open-source WebXR code and generative AI tools to create something called NeoFables, which delivers personalized worlds, characters, and storytelling (currently limited to 2D images, although he hopes to expand it over time to create 3D content).


He discussed the advantages and disadvantages of using WebXR to create VR content, and gave a couple of examples of bigger-name projects based on WebXR: Wol, made by Google to provide information about the U.S. national parks system, and Raw Emotion Unites Us, about Paralympian athletes. It was interesting to hear a developer’s perspective on using WebXR, mixed with generative AI tools, to create content.
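For readers curious what targeting WebXR actually looks like in code, here is a minimal, hypothetical sketch (not taken from NeoFables) of the feature detection a WebXR app typically does first, using the standard `navigator.xr` API: ask whether an immersive VR session is supported, and fall back to an inline (flatscreen) view if not. The `pickSessionMode` helper name is my own invention for illustration.

```javascript
// Decide which WebXR session mode to request, given an XRSystem object
// (normally navigator.xr; it is undefined in non-WebXR browsers).
// isSessionSupported() is part of the WebXR Device API and returns a Promise<boolean>.
async function pickSessionMode(xr) {
  if (!xr) return "none"; // browser has no WebXR support at all
  if (await xr.isSessionSupported("immersive-vr")) return "immersive-vr";
  if (await xr.isSessionSupported("inline")) return "inline"; // flatscreen fallback
  return "none";
}
```

In a browser, you would call `pickSessionMode(navigator.xr)` and then start the chosen mode with `xr.requestSession(mode)`; this graceful-degradation pattern is part of why WebXR content (like a Hubs room) can be visited from a headset or a laptop alike.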

Will AR Be The Big Immersive Breakthrough? (Heather Dunaway Smith and Lien Tran)

My final session on Thursday, Nov. 13th was not what I expected. It was a panel discussion with two musicians and artists, Lien and Heather, who have worked extensively with augmented reality and mixed reality. They shared samples of their work, and the panel (moderated by Christopher Morrison) held a wide-ranging discussion on how AR/MR/XR (or, as Chris said it, “XR-poly”) is impacting and transforming creative expression. I’m not sure if there will be a livestream of this talk (I did not see Carlos and his video camera while I was there), so I will leave it at that, since (again), I did not take written notes.

Taking a Moment to Catch My Breath and Figure Out Where I’m Going Next

So, as I have mentioned, I haven’t been blogging much lately, because I have been so busy with my full-time paying job as an academic librarian at my employer, the University of Manitoba in Winnipeg, Canada. Now that the annual rush of training hundreds of students on how to use the university libraries effectively and efficiently has ended, my attention turns to my other big project: specifying hardware and software for a virtual reality lab, which we are calling the XR Lab (the XR stands for eXtended Reality, a sort of umbrella term used for virtual reality, augmented reality, mixed reality, and what Apple is now calling spatial computing).

The purpose of this lab is to provide virtual reality and augmented reality hardware and software (both VR/AR experiences and content creation tools) to University of Manitoba faculty, staff, and students to support their teaching, learning, and research. I have been working on this project for the past two and a half years, and it is a weird feeling to finally see the computers removed from the room which we have designated as the future home of the XR Lab, in preparation for the necessary room renovations (which are to start soon, and are supposed to be completed by spring next year):

The former computer lab which will be renovated to create the XR Lab

In the meantime, I have been cross-training another Libraries staff member on the hardware and software which I am proposing for the XR Lab. In other words, if (God forbid!) I should get run over by a bus, the idea is that somebody will be able to give VR/AR demos in my place. There is a lot of information which has to be shared! For example, our last training session included a section on how to set the correct interpupillary distance (IPD) on both the Vive Pro 2 and Meta Quest 3 headsets (thankfully, the Apple Vision Pro scans your eyes and sets the IPD automatically!).

Just another day in the office: the Vive Pro 2 VR headset is sitting on the Windows desktop PC it is tethered to on the right, the Meta Quest 3 is to the left near the back of the table, and the Apple Vision Pro is sitting at the centre, near the front of the table.

There are a lot of balls to juggle, and I must confess that I often feel exhausted and even overwhelmed at times. When I come home from work, the last thing I want to do is write a blog post! So my formerly feverish blogging pace has unfortunately slowed to a crawl. Also, my blogpost viewing stats are way, waaay down. Where I used to get 1,500 views a day, now I’m lucky to reach even one third of that:

Partly it’s because the metaverse hype cycle has crested and crashed (and everyone has jumped on the artificial intelligence bandwagon), and partly it’s because longform blogs seem to be an increasingly outdated—even quaint—means of communication in the current short-attention-span era of Instagram pictures and TikTok videos.

Which means I seriously need to pause and think about the direction in which I want to take this blog, and who I want my audience to be. One of the things that I have always said is that, in a blog that literally has my name in the URL, anything I want to talk about here is on topic! However, I am wondering if perhaps I have cast my net a little too broadly, and it might be time to narrow the focus of the RyanSchultz.com blog somewhat.

I don’t think that I will cease blogging completely; I still feel the need to write, but I need to reflect a bit on what I want to write about, and why. I still do get a sense of accomplishment when I craft a well-written blog post on a topic that I care about and, as always, I read and appreciate all the comments and feedback I receive on my blogposts!

So please bear with me as I figure out where I am going next with (gestures broadly) all this.

It can be difficult to choose the next direction in which to go (Image by Rama Krishna Karumanchi from Pixabay)

UPDATED! Meta Announces the Meta Horizon Operating System for Future Third-Party VR/AR/MR Headsets, and Partnerships with ASUS, Lenovo, and Xbox (Also: Reports of Slower-Than-Expected Sales for the Apple Vision Pro)

On April 22nd, 2024, Meta (the company formerly known as Facebook) made an announcement titled A New Era for Mixed Reality:

Today we’re taking the next step toward our vision for a more open computing platform for the metaverse. We’re opening up the operating system powering our Meta Quest devices to third-party hardware makers, giving more choice to consumers and a larger ecosystem for developers to build for. We’re working with leading global technology companies to bring this new ecosystem to life and making it even easier for developers to build apps and reach their audiences on the platform.

This new hardware ecosystem will run on Meta Horizon OS, the mixed reality operating system that powers our Meta Quest headsets. We chose this name to reflect our vision of a computing platform built around people and connection—and the shared social fabric that makes this possible. Meta Horizon OS combines the core technologies powering today’s mixed reality experiences with a suite of features that put social presence at the center of the platform.

Of course, this also includes the Meta Quest Store, which will apparently be renamed the Meta Horizon Store:

Developers and creators can take advantage of all these technologies using the custom frameworks and tooling we’ve built for creating mixed reality experiences, and they can reach their communities and grow their businesses through the content discovery and monetization platforms built into the OS. These include the Meta Quest Store, which contains the world’s best library of immersive apps and experiences—we’re renaming it to the Meta Horizon Store.

And, as you might expect with a company whose profits still largely derive from social media based on surveillance capitalism, you’d best believe that Meta wants to make sure that it inserts itself into all the social aspects of this technology, as it licenses the tech to other companies:

The Horizon social layer currently powering Meta Quest devices will extend across this new ecosystem. It enables people’s identities, avatars, and friend groups to move with them across virtual spaces and lets developers integrate rich social features into their apps. And because this social layer is made to bridge multiple platforms, people can spend time together in virtual worlds that exist across mixed reality, mobile, and desktop devices. Meta Horizon OS devices will also use the same mobile companion app that Meta Quest owners use today—we’ll rename this as the Meta Horizon app.

It looks very much as though the word Quest is going to be replaced by the word Horizon throughout (much as Oculus was replaced by Quest previously). I guess those Meta marketing people need to justify their paycheques by constant rebranding! Gotta keep it fresh! Personally, I think they should have stuck with Oculus… 😉

Also part of this announcement are three key partnerships with third-party hardware developers:

  • ASUS and its Republic of Gamers subsidiary “will use its expertise as a leader in gaming solutions to develop an all-new performance gaming headset.”
  • Lenovo will apparently focus on education and the workplace: “Lenovo will draw on its experience co-designing Oculus Rift S, as well as deep expertise in engineering leading devices like the ThinkPad laptop series, to develop mixed reality devices for productivity, learning, and entertainment.”
  • Meta will also be working with Xbox to create a limited-edition Meta Quest (Microsoft and Meta also worked together recently to bring Xbox cloud gaming to the Quest).

Reactions to this news on Reddit have varied. One person on the r/VisionPro subreddit (hardly an impartial source!) commented, “Feels more closed than Apple. And also less developer friendly.” (As if Apple doesn’t have its own walled-garden approach to its technology.)

Also mentioned in Meta’s announcement was that software developed through the Quest App Lab will be featured in the newly-renamed Horizon Store:

As we begin opening Meta Horizon OS to more device makers, we’re also expanding the ways app developers can reach their audiences. We’re beginning the process of removing the barriers between the Meta Horizon Store and App Lab, which lets any developer who meets basic technical and content requirements ship software on the platform. App Lab titles will soon be featured in a dedicated section of the Store on all our devices, making them more discoverable to larger audiences.

I think that this is good news for smaller developers, who often struggle to get word out about their products. (Of course, Meta will get a cut of any sales through its store!)

In an Engadget report, Devindra Hardawar writes:

Think of it like moving the Quest’s ecosystem from an Apple model, where one company builds both the hardware and software, to more of a hardware free-for-all like Android. The Quest OS is being rebranded to “Meta Horizon OS,” and at this point it seems to have found two early adopters. ASUS’s Republic of Gamers (ROG) brand is working on a new “performance gaming” headsets, while Lenovo is working on devices for “productivity, learning and entertainment.” (Don’t forget, Lenovo also built the poorly-received Oculus Rift S.)

As part of the news, Meta says it’s also working on a limited-edition Xbox “inspired” Quest headset. (Microsoft and Meta also worked together recently to bring Xbox cloud gaming to the Quest.) Meta is also calling on Google to bring over the Google Play 2D app store to Meta Horizon OS. And, in an effort to bring more content to the Horizon ecosystem, software developed through the Quest App Lab will be featured in the Horizon Store. The company is also developing a new spatial framework to let mobile developers created mixed reality apps.

Devindra does have a good point; Apple has long been opposed to opening up its hardware to third-parties (and it would appear, based on recent media reports, that sales of the eyewateringly-pricey Apple Vision Pro are not as brisk as the company had hoped):

Apple has dropped the number of Vision Pro units that it plans to ship in 2024, going from an expected 700k to 800k units to just 400k to 450k units, according to Apple analyst Ming-Chi Kuo.

Orders have been scaled back before the Vision Pro has launched in markets outside of the United States, which Kuo says is a sign that demand in the U.S. has “fallen sharply beyond expectations.” As a result, Apple is expected to take a “conservative view” of headset demand when the Vision Pro launches in additional countries.

Kuo previously said that Apple will introduce the Vision Pro in new markets before the June Worldwide Developers Conference, which suggests that we could see it available in additional areas in the next month or so.

Apple is expecting Vision Pro shipments to decline year-over-year in 2025 compared to 2024, and the company is said to be “reviewing and adjusting” its headset product roadmap. Kuo does not believe there will be a new Vision Pro model in 2025, an adjustment to a prior report suggesting a modified version of the Vision Pro would enter mass production late next year.

According to Apple industry analyst Ming-Chi Kuo, initial sales of the high-end Apple Vision Pro have “fallen sharply beyond expectations.”

I find it an absolutely fascinating time to be working in virtual reality, augmented reality, mixed reality, and spatial computing! While Apple has aimed for the high-end with its US$3,500 headset, Meta has focused its attention on the low end, with a wireless headset that is seven times cheaper than the Apple Vision Pro! (Of course, you could also use the Quest 3 as a PCVR headset, but most people don’t do that.)

I never would have predicted that we’d have two firmly-set goalposts at each end of the field, instead of companies releasing a mass of options in the middle of the field! This leaves a huge gap between the ultra-low-end Meta Quest 3 and the ultra-high-end Apple Vision Pro, and I do believe that there is certainly opportunity for companies to fill that gap, with existing hardware (e.g. the Valve Index, the Vive Pro 2, etc.), as well as some new devices which fall in between the two extremes.

I think that Meta is very smart to partner with third parties who already have some experience in this space (notably Lenovo), and from those partnerships, new products will spring up to address that gap. While it will likely not be until 2025 or 2026 before we see the fruit of these new partnerships, interesting times are ahead!


UPDATE April 26th, 2024: I sometimes post my blogposts to the various virtual world and virtual reality Discord servers I belong to, in order to drive a bit more traffic to my blog (I don’t do it nearly as often as I used to, though). And PK, on the MetaMovie Discord server, made the following insightful and thought-provoking comment on this announcement from Meta/Facebook:

I want someone to dig into what sort of access Meta would have to data on these third-party headsets, potentially, through various software that would be required. I think it’s existential that we need to keep metaverse data out of their hands.

Even now, having failed with five or six different social VR attempts so far, they still manage to collect 1/3 of every virtual transaction in VRChat, at least those using Quest headsets, which is the majority of users now. Their [i.e., VRChat’s] creator economy is only in beta so far, but thanks to Facebook and Steam, and Apple for pushing this model, we don’t have the thriving virtual economy we would have had by now, because even taking 1% of every transaction just for monopolizing app downloads, that would be too much. A third is robbery, but because [Meta CEO Mark] Zuckerberg could afford to make mobile headsets affordable without worrying about profits so far, they’re now cornering commerce in this space. I don’t think it’s safe to trust them with our future, and so I’m very skeptical about these sorts of initiatives.

PK is correct; it is troubling that the walled-garden gatekeepers like app store owners (Meta, Google/Android, and Apple) are each taking a cut of any in-world transactions. It has a chilling effect on anybody trying to make money within VRChat (of course, the social VR platform has long had a booming economy going on outside of VRChat, with places such as the Virtual Market series of avatar shopping events and the VRCMods Discord server, where avatar buyers and sellers can connect).

Linden Lab was luckily able to avoid this entire mess by creating its own in-world economy within Second Life well before the advent of Google Play and Apple’s App Store—but now that they are actively working on a new mobile Second Life app for Android and iOS, it will be interesting to see whether Second Life, too, will be impacted by other players like Meta wanting to take their cut. (Probably not, since you can do things like buy Linden dollars directly from the Second Life website.)

Interesting times lie ahead! As drag queen RuPaul likes to say on her hit reality TV show, RuPaul’s Drag Race (and my guilty pleasure!):

Mama Ru raises her opera glasses and says, “I can’t wait to see how this turns out.”

Thank you to PK of the MetaMovie Discord, for giving me permission to quote them directly!

InSpaze: One of the First Social Apps for the Apple Vision Pro (Plus a Tantalizing Look at Apple’s New Spatial Personas)

The HelloSpace team (makers of InSpaze) met with Apple CEO Tim Cook in late March (source: Twitter)

Given Apple’s emphasis on the term spatial computing (instead of virtual reality or augmented reality), some observers have commented that there is a somewhat puzzling lack of social VR/AR apps for the Apple Vision Pro. Well, I recently learned (from the very active r/VisionPro community on Reddit) that there is a social app for the AVP, called InSpaze. Here’s a 15-minute YouTube video giving you an idea of what is possible now:

Please note several interesting things about this video: First, when you see the hands of the person capturing this video in his Apple Vision Pro (using the built-in video recording features), they are actually his real hands via pass-through, not an avatar’s hands!

Second, one of the features of InSpaze is real-time voice translation! One of the participants spoke a sentence in Chinese, which was translated into English and displayed as a subtitle under his Persona (at the 6:45 minute mark in this video).

There are people in this video participating around the table via their own Vision Pro headsets, in which their avatars appear as the still-in-beta-testing Personas (which are based on a scan of their real-life face, done as part of setup). While the Personas feature of the AVP can still be a bit unsettling, with uncanny-valley vibes, and they currently appear in InSpaze only via a flatscreen view, Apple has just announced (and released) Spatial Personas, which look like this:

So, I expect it will only be a matter of time before Spatial Personas are added to InSpaze, replacing the locked-in-flatscreen look that current AVP participants have in InSpaze with a three-dimensional version. Mind blowing! It’s certainly a refreshing change from a Zoom call!

Also, note that iPhone and iPad users, running the InSpaze app, can also participate in InSpaze rooms! iPhone and iPad users actually have a cartoonified version of their real-life face, which honestly kind of matches the cartoony look of the AVP Personas. I couldn’t help but notice that one of the iPhone participants was standing outside, and the wind was blowing his hair around, which looked really weird combined with his cartoony face! Another guy (the one speaking Chinese) was behind the wheel of his car (let’s hope he wasn’t driving!).

InSpaze is already available on the App Store for the Apple Vision Pro, iPhone, and iPad. It is made by a company called HelloSpace (website; Discord; Twitter/X). Apparently, it’s been quite a hit among AVP users, who seem to appreciate having a way to connect with each other in virtual space! In fact, in the video up top, they talk about how Apple employees themselves like to use InSpaze to connect with their customers.

Things are happening so fast in this space that it’s been hard to keep on top of all the developments! I do find that a daily visit to the r/VisionPro subreddit is a very good way to stay abreast of everything that’s going on with this rapidly-evolving technology. I’m still patiently waiting for when we Canadians can pre-order the Apple Vision Pro (hopefully sometime this spring or summer). And I’m quite envious of the Americans who have already gotten their hot little hands on a unit!

I want one. I waaaant one!