The first successful metaverse stablecoin actually predates crypto! (photo by CoinWire Japan on Unsplash)
In a five-minute YouTube video which dropped today, Amy Jo Kim speaks with Linden Lab’s founding CEO, Philip Rosedale, about a stable digital currency that powers a vibrant metaverse economy—and has kept it running for almost two decades! Of course, I am talking about the Linden dollar.
As I often like to say on this blog, Second Life is the perfect example of a mature, fully evolved, working metaverse that newer entrants to the space would benefit from studying! And whether or not you are already familiar with Second Life, Philip is always a good interview: insightful, personable, understandable, and articulate. Highly recommended!
In related news, were you aware that Linden Lab’s financial subsidiary, Tilia, recently secured a strategic investment (amount undisclosed) from J.P. Morgan Payments? According to the official press release:
Tilia LLC, the all-in-one payments platform, today announced it has secured a strategic investment from J.P. Morgan Payments. Tilia’s solution, built for game, virtual world, and mobile application developers, handles payment processing, in-game transactions, as well as payouts to creators by converting in-world tokens to fiat currency including USD, which serves as the backbone of any functioning virtual economy. Drew Soinski, Senior Payments Executive, Managing Director, J.P. Morgan Payments said “We believe that contextualized commerce – such as virtual economies within games and virtual worlds – is an area perfectly positioned for innovative payments solutions to play a critical role in the coming years. We’re delighted to invest in Tilia LLC, a market leading provider of software gaming payments tools, to develop solutions for these new and exciting marketplaces.”
Tilia’s virtual payment system easily and securely converts in-game tokens and currency into fiat currency. Built from the ground up to power Second Life and its creator-based economy, Tilia was developed over several years to build its unique capabilities. Tilia has secured the required money transmitter licenses in the U.S. to support payouts, allowing for secure transactions on a large scale. Tilia provides developers with the tools to enable thriving, profitable in-world economies that empower their players and users to buy and sell virtual goods and services and facilitate robust play-to-earn programs.
“Virtual economies represent a huge financial opportunity particularly for game, app and virtual world developers,” said Brad Oberwager, Executive Chairman of Tilia LLC. “J.P. Morgan Payments, a worldwide leader and recognized innovator in payments, is the right partner as we continue to expand capabilities in line with these rapidly growing creator-based economies”.
Tilia has been running Second Life’s $650 million economy for the past seven years. Financing for the new company is coming from its strategic partner, J.P. Morgan. “It’s very important that virtual worlds have the instantaneous settlement Tilia provides,” said Brad Oberwager, Executive Chairman of Tilia and acting CEO of Linden Lab. “We can handle very high transaction volumes at very low dollar amounts; even with USDC, the systems aren’t built for that kind of stuff. We move one 250th of a dollar sometimes.” (For context, one 250th of a dollar is just US$0.004.)
In addition to the investment, Tilia is also working with J.P. Morgan Payments to increase payout methods and expand the number of pay-out currencies. Perhaps most importantly, partnering with the world’s largest bank will enable Tilia to scale to the potential size of the putative metaverse.
Oberwager sees his company as crucial for the metaverse.
“Tilia is money into the metaverse. It’s money moved into the metaverse and money moved out of the metaverse,” said Oberwager. “And why this is so important is because you cannot have this concept of the metaverse without a social economy. It is both the social aspect and the financial aspect. Those two things must work in harmony. To do money, you need some virtual token to make money work.”
He added, “Money has to be rock solid. That is J.P. Morgan. That’s the partnership. What’s the value of Tilia? You can’t build a metaverse without user-generated content. You can’t build a metaverse without social interaction. You can’t build a metaverse without some sort of financial token that allows people to build a world.”
The company will use the funds to expand its business and go into new markets.
“We are moving money in the metaverse,” Oberwager said. “It’s a real thing. That’s where the investment is going. We have a customer list, and people are coming to us.”
Tilia fuels commerce in Second Life, which generated $86 million in payments in the past 12 months. The Second Life economy is still measured at $650 million nearly 20 years after its founding. Tilia has about 48 employees.
Oberwager said the deal took about a year to work out with J.P. Morgan Payments. During that time, Tilia made sure that its systems were interoperable with J.P. Morgan’s.
Finance giants like J.P. Morgan make strategic investments like this on the expectation that they’ll be accessing a larger market down the road: in this case, burgeoning metaverse platforms with less experience than Linden Lab in handling international payments and virtual currencies.
On the other hand, Tilia has been a standalone company since 2019, and it counts only Second Life and the below-the-radar metaverse platform Upland as its major consumer-facing clients (despite a partnership with Unity in early 2022). But with J.P. Morgan as a backer, I’d expect other customers to come along soon.
I agree with Wagner; I’m pretty sure that this partnership will lead to more metaverse platforms using Tilia to implement their in-world economies! (By the way, this news has absolutely zero impact on Second Life. Everything stays the same.)
You might remember that I was one of the lucky few who received an invitation to be part of the closed beta test (or “research preview”, as they called it) of DALL-E 2, a new artificial intelligence tool from a company called OpenAI, which can create art from a natural-language text prompt. (I blogged about it, sharing some of the images I created, here and here.)
Here are a few more pictures I generated using DALL-E 2 since then (along with the prompt text in the captions):
Meanwhile, other DALL-E 2 users have generated much better results than I could, through skillful use of text prompts. Here are just a few examples from the r/dalle2 subreddit of AI-generated images which impressed and sometimes even stunned me, each with a direct link to the post in the caption underneath the picture:
As you can see by the last two images, you can get very detailed and technical in your text prompts, even including the model of camera used! (However, also note that in the fourth picture, DALL-E 2 ignored some specific details in the prompt.)
Yesterday, OpenAI sent me an email to announce that DALL-E 2 was moving into open beta:
Our goal is to invite 1 million people over the coming weeks. Here’s relevant info about the beta:
Every DALL·E user will receive 50 free credits during their first month of use, and 15 free credits every subsequent month. You can buy additional credits in 115-generation increments for $15.
You’ll continue to use one credit for one DALL·E prompt generation — returning four images — or an edit or variation prompt, which returns three images.
We welcome feedback, and plan to explore other pricing options that will align with users’ creative processes as we learn more.
As thanks for your support during the research preview, we’ve added an additional 100 credits to your account.
Before OpenAI announced this new credit system, I had spent most of one day’s free prompts during the research preview trying to generate repeating, seamless textures to apply to full-permissions mesh clothing I had purchased from the Second Life Marketplace. Most of my attempts were failures: pretty designs, but not 100% seamless. However, I did manage to create a couple of floral patterns that worked:
So, instead of purchasing texture packs from within and outside of Second Life, I could, theoretically, generate unique textile patterns, apply them to mesh garments, and sell them, because according to the DALL-E 2 beta announcement I received:
Starting today, you get full rights to commercialize the images you create with DALL·E, so long as you follow our content policy and terms. These rights include rights to reprint, sell, and merchandise the images.
You get these rights regardless of whether you used a free or paid credit to generate images, and this includes images you’ve created before today during the research preview.
Will I? Probably not, because it took me somewhere between 20 and 30 text prompts to generate only two useful seamless patterns, so it’s just not cost-effective. However, once AI art tools like DALL-E 2 learn how to generate seamless textures, they will probably have some sort of impact on the texture industry, both within and outside of Second Life! (I can certainly see some enterprising soul setting up a store and selling AI-generated art in a virtual world; SL is already full of galleries with human-generated art.)
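If you want to test your own results, a quick way to spot seams is to tile the texture and eyeball the joins. Here is a minimal sketch (assuming Python with the Pillow library installed; the filename is hypothetical):

```python
from PIL import Image

# Tile a generated texture 2x2: if the pattern is not truly
# seamless, the joins show up as visible lines running through
# the middle of the combined sheet.
tile = Image.open("floral_pattern.png")  # hypothetical filename
w, h = tile.size
sheet = Image.new(tile.mode, (w * 2, h * 2))
for x in (0, w):
    for y in (0, h):
        sheet.paste(tile, (x, y))
sheet.save("floral_pattern_tiled.png")
```

Open the saved sheet and look along the centre lines: any mismatch there means the pattern will also show seams when repeated across a mesh garment.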
Another cutting-edge AI art-generation program, called Midjourney (WARNING: ASCII art website!), has also announced an open beta. I had signed up to join the waiting list for an invitation several weeks ago, and when I checked my email, lo and behold, there it was!
Hi everyone,
We’re excited to have you as an early tester in the Midjourney Beta!
To expand the community sustainably, we’re giving everyone a limited trial (around 25 queries with the system), and then several options to buy a full membership.
Full memberships include: unlimited generations (or limited with a cheap tier), generous commercial terms, and beta invites to give to friends.
Although both DALL-E 2 and Midjourney use human text prompts to generate art, they operate differently. While DALL-E 2 uses a website, Midjourney uses a Discord server, where you enter your prompt as a command that generates four rough thumbnail images, which you can then choose to upscale to a full-size image, or use as the basis for variations.
I took some screen captures of the process, so you can see how it works. I typed in “/imagine a magnificent sailing ship on a stormy sea”, and got this back:
The U buttons will upscale one of the four thumbnails, adding more detail, while the V buttons generate variations, using one of the four thumbnails as a starting point. I chose thumbnail four and generated four variations of that picture:
Then, I went back and picked one of my original four images to upscale. You can actually watch as Midjourney slowly adds details to your image; it’s fascinating!
I then clicked on the Upscale to Max button, to receive the following image:
My first attempt at generating an image using Midjourney
Now, I am not exactly satisfied with this first attempt (that sailing ship looks rather spidery to me), but as with DALL-E 2, you get much better results with more specific, detailed text prompts. Here are a few examples I took from the Midjourney subreddit (with links back to the posts in the captions):
So, as you can see, you can get some pretty spectacular results, with incredible levels of detail! And unlike DALL-E 2, you can set the aspect ratio of your pictures (as was done in the fourth image above). You do this with a special “--ar” parameter in your text prompt to Midjourney, e.g. “--ar 16:9” (here’s the online documentation explaining the various parameters you can use).
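For example, adding that flag to the sailing-ship prompt I used earlier would look like this in the Discord window:

```
/imagine a magnificent sailing ship on a stormy sea --ar 16:9
```

All four of the resulting thumbnails then come back in widescreen 16:9 format instead of the default square.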
And one area in which Midjourney appears to excel is horror:
You can see many more examples of depictions of horror in the postings to the Midjourney subreddit; some are much creepier than these!
So, in comparing the two tools, I think that Midjourney offers more parameters to users (e.g. setting an aspect ratio), which DALL-E currently lacks. Midjourney also seems to produce much more detailed images than DALL-E 2 does, whereas DALL-E 2 is often astoundingly good at a much wider variety of tasks. For example, how about some angry bison logos for your football team?
I think these images are all very good! (Note that DALL-E 2 still struggles with rendering text! Midjourney does too, although it gets the text correct more often than DALL-E 2 does at present; that might change over time as both systems evolve.)
So, the good news is that both DALL-E 2 and Midjourney are now in open beta, which means that more people (artists and non-artists alike) will get an opportunity to try them out. The bad news is that both still have long waiting lists, and with the move to beta, both DALL-E 2 and Midjourney have put limits in place as to how many free images you can generate.
Midjourney gives you a very limited trial period (about 25 prompts), and then urges you to pay for a subscription, with two options:
Basic membership gives you around 200 images per month for US$10 monthly; standard membership gives you unlimited use of Midjourney for US$30 a month.
For now, OpenAI has decided to set DALL-E 2’s pricing based on a credit system (similar to their GPT-3 AI text-generation tool), as described in the first quote in this blogpost. There’s no option for unlimited use of DALL-E 2 at any price, just options for buying credits in different amounts (and there are no volume discounts for purchasing larger amounts of credits at one time, either). The most you can buy at once is 5,750 credits, for US$750. So, yes, it can get quite expensive! (As far as I am aware, your unused credits carry over from one month to the next.)
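To put those prices into perspective, here is the quick math (a sketch in Python; all of the figures come straight from OpenAI’s announcement quoted earlier):

```python
# DALL-E 2 pricing, per OpenAI's open beta announcement:
# US$15 buys 115 credits, and one credit returns four images.
pack_price_usd = 15
credits_per_pack = 115
images_per_credit = 4

cost_per_credit = pack_price_usd / credits_per_pack    # ~US$0.13
cost_per_image = cost_per_credit / images_per_credit   # ~US$0.033

# The largest purchase, 5,750 credits for US$750, works out to
# the same per-credit rate, i.e. there really is no volume discount.
assert abs(750 / 5750 - cost_per_credit) < 1e-9
print(f"${cost_per_image:.3f} per image")
```

In other words, roughly three US cents per generated image, no matter how many credits you buy at once.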
One commenter summed up the pricing complaints this way:
In my experience, using DALL-E 2 to generate concept art for our next project, it takes me between 10 and 20 attempts to get something close to what I want (and I never got exactly what I was asking for)…
DALL-E 2, at this point, is not a professional tool. It’s not viable as one, unless you produce exactly the type of content the AI can produce instantly, just the way you want it.
DALL-E 2, at this point, IS A TOY! And that’s OpenAI’s mistake right now. You can’t sell a toy the way you sell a professional service! I’m ready to pay for it because I’m experimenting with it. I’m having fun with it and, when it works, it provides me with images I can also use for professional projects. However, I won’t EVER spend hundreds of dollars on this just for fun, and I certainly won’t pay that amount for it as a tool until it can provide me with better and more consistent results!
OpenAI is going after the WRONG TARGET! OpenAI should be selling it at a much lower price for everyday people and enthusiasts who want to experiment with it, because these are literally the only people who can be 100% satisfied with it at this point, and these people won’t pay hundreds of dollars per month to keep playing when there are other shiny toys out there, cheaper and more open, either already existing or about to be.
Several commenters said that they will be moving from DALL-E 2 to Midjourney because of its more favourable pricing model, but of course it’s still early days. Also, there are any number of open-source AI art-generation projects in the works, and competition will likely mean more features (and better results!) at lower cost over time. One thing is certain: we can anticipate these tools improving at an accelerating pace.
The future looks to be both exciting and scary! Exciting in the ability to generate art in a new way, something which until now has been restricted to experienced artists and photographers; and scary in that we can no longer trust our eyes to tell whether a photograph is real or has been generated by artificial intelligence! Currently, both systems have rules in place to prevent the creation of deepfake images, but in future, things could get Black Mirror weird, and the implications for society could be substantial. (Perhaps now you will understand the first three DALL-E 2 text prompts I used, at the top of this blogpost!)
P.S. Fun fact: the founding CEO of Linden Lab (the makers of Second Life), Philip Rosedale, is one of the advisors to Midjourney, according to their website. Philip gets around! 😉
UPDATE July 22nd, 2022: Of course, the images generated by DALL-E 2 and Midjourney can then be used in other AI tools, such as WOMBO and Reface (please click the links to see all the blogposts I have written about these mobile apps).
What you see here is an AI-generated image, “animated” using another deep learning tool. This is a tantalizing glimpse into the future, where artificial intelligence can not only create still images, but eventually, video!
I’m constantly on the lookout for stories for the RyanSchultz.com blog, bookmarking anything and everything that I or my readers might find of interest—news and announcements about social VR, virtual worlds, and the metaverse (including the blockchain-based platforms).
At the moment, I’m so backlogged with my bookmarks that today I’ve decided simply to share many of them with you, in an effort to get caught up! Each would likely be the seed for a proper blogpost all on its own, but here each one will get just a sentence or two of brief annotation. Hope you don’t mind!
Medium: World War “M” and the curse of the Metaverse, by Avi Bar-Zeev (an editorial where Avi poses the question: If “The Metaverse” represents our digital future, who decides what “it” is?)
XR Today: Sensorium, Humanity 2.0 Launch Vatican City Art Metaverse (ultra-high-end social VR platform Sensorium Galaxy partners with the Humanity 2.0 Foundation to build a virtual gallery for Vatican City). “The company’s Sensorium Galaxy platform is currently in beta testing, with a launch date set for later in the year to expand its availability across devices, including VR headsets, PCs, and mobile devices.”
Now that I’ve shared some of my most interesting finds with you, I hope that this list will tide you over until I can whip up some fresh new content for you! Expect more blogposts soon. (If people find these news roundups useful, I might continue to write them, as well as my regular blogposts.)
Lab Gab host Strawberry Linden with Linden Lab founder Philip Rosedale (Philip Linden) and Linden Lab executive chairman Brad Oberwager (Oberwolf Linden); image is a screencapture from the YouTube video
Have you heard the news? Second Life founder Philip Rosedale is back! With today’s announcement about High Fidelity’s investment in Linden Lab, we’re excited to welcome back Philip Rosedale in the all-new role as Second Life strategic advisor. Philip is a recognized metaverse pioneer who led the early days of Second Life to help form and inform the now-mainstream concepts of virtual economies, cultures, and communities. In his new role, he will bring his vast virtual world experience and vision to help shape the future of Second Life.
And today Linden Lab released a pre-recorded hour-long episode of their popular talk show, Lab Gab, hosted by the ever-capable Strawberry Linden (formerly known as the SL blogger Strawberry Singh, and now a Linden Lab employee herself):
As I have often said before, Philip is a very articulate and highly informed speaker, with many years of experience in virtual reality and virtual worlds, and of course Brad is no stranger to the microphone himself! I only caught the last few minutes of the streaming video on YouTube, but I will be sure to go back later this evening and watch this episode in full! Enjoy.