Shared posts

20 Mar 21:07

We need camera access to unleash the full potential of Mixed Reality

by Skarredghost

These days I’m carrying out some experiments with XR and other technologies. I had some wonderful ideas for Mixed Reality applications I would like to prototype, but most of them are impossible right now because of a decision that almost all VR/MR headset manufacturers have taken: preventing developers from accessing camera data.

My early start with MR

As you may know, I got started with passthrough mixed reality in 2019, long before Quest enabled the use of passthrough. I was using the Vive Focus Plus, and I hacked one of its SDK samples to transform it into a mixed-reality device. In the weeks after, Max Ariani (my partner in crime at NTW) and I experimented a lot with this tech, and we managed to do some cool stuff, like:

  • Making objects “disappear” in an attempt at (a very rough) diminished reality
  • Applying a Predator-like filter to the environment
  • Detecting a QR code to perform a login
  • Detecting and tracking an ArUco marker to make a 3D object appear on it
The trailer of Beat Reality. It was pretty cool using it inside a discotheque

The tools we had were very limited: the Vive Focus had just a Snapdragon 835 processor, the image was black and white and low-resolution, we had to do everything at the Unity software level, and we had no environment understanding. Besides, at that time, AI was already around, but not growing as fast as today. Notwithstanding all this, we managed to do a lot of crazy tests, and we dreamt of the moment when powerful standalone headsets would support high-quality mixed reality, so we could bring these tests to the next level.

Quest and privacy

Those times we hoped for have arrived: the Quest 3 is a much more powerful machine than the Vive Focus, it has color passthrough with quite good resolution, and AI is now flourishing. But, paradoxically, I can now do far fewer experiments than before.

Meta Quest 3, the first truly mixed reality headset by Meta (Image taken during a Meta event)

The reason is that Meta is playing it extra safe and is preventing developers from accessing the camera feed seen by the user in MR applications, both as input (reading the image) and output (writing on the image). It is doing that for privacy reasons: if a malicious developer made a cute game and behind the curtains activated the cameras and streamed whatever they saw to their servers, that would be an enormous privacy violation. Evil developers could easily spy on our homes.

Meta has had a lot of privacy scandals, so to avoid a new one, or even just press complaints about a potential privacy issue, it has disabled camera access for developers. This camera lock cannot be circumvented in any way: as I explain in this post, when you develop an application in Unity for the Quest, the application “flags” part of the screen to be painted with the passthrough view, and then it is the operating system that does this “painting” operation. For the application, the background of the app is pure black; only the OS knows what data to put there. So unless you crack the Quest firmware and its SDK, you have literally no way to get the passthrough feed from inside your application.

After Meta started raising this privacy concern, all the other vendors slowly started to follow suit, and as far as I know, camera access is now also blocked on Pico and Vive headsets. It is only accessible on some enterprise headsets.

Why is this a limit for mixed reality?

You may wonder why access to camera images is so important. The reason is that mixed reality shines when it can bridge the real and the virtual worlds. But if your application has no understanding of the real world, how can this bridge be created? As a developer, you have no idea where the user is, what they are doing, or what they have in front of them. The only things you can do are show the camera feed, apply some lame filters, and detect planes and walls. It’s something, but in my opinion, it is not enough to make a whole MR ecosystem flourish.

AI Systems can now detect almost everything

We now live in an era where there are AI systems for everything, and one of the reasons why MR and AI are a match made in heaven is that AI can understand the context you are in (where you are, what you are doing, etc.) and provide you assistance in mixed reality. For instance, one classic example of our future in MR is a virtual assistant that gives you suggestions related to what you are doing. Another example could be an educational experience that trains the user in doing something (e.g. operating a machine) and verifies that the user is performing those actions correctly.

To do that, we should feed the camera stream into some AI system (running locally or in the cloud), but we cannot, because the operating systems of the headsets prevent us from doing that. So all the vibrant work that the AI community is doing cannot be applied to MR headsets.

Using markers in passthrough… I was able to do it by running the camera images through OpenCV. This is absolutely not possible on Quest

Another thing that would become possible is running computer vision algorithms. The easiest example is detecting QR codes and markers, which would enable many interesting applications (e.g. providing an easy login without a keyboard). We could also potentially run Vuforia on the Quest, and considering that Vuforia can track 3D objects, we could put a mixed-reality overlay on objects without needing any tracker.
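To give a concrete idea of what marker tracking involves, here is a toy sketch of just the decoding step: reading the bit grid of an ArUco-style square marker that has already been located and rectified in a camera frame. This is an illustrative simplification (the grid, the bit layout, and the function name are invented for the example); a real pipeline would use OpenCV’s `cv2.aruco` module, which also finds the marker corners in the raw image.

```python
# Toy decoder for an ArUco-style square marker, assuming the marker has
# already been found and rectified into an N x N grid of black/white cells
# (0 = black, 1 = white). Real pipelines do all of this with cv2.aruco.

def decode_marker(cells):
    """Return the marker id encoded by the inner bits, or None if the
    mandatory black border is broken."""
    n = len(cells)
    # ArUco markers are framed by a black border: every edge cell must be 0.
    for i in range(n):
        if cells[0][i] or cells[n - 1][i] or cells[i][0] or cells[i][n - 1]:
            return None
    # Read the inner (n-2) x (n-2) bits row by row into an integer id.
    marker_id = 0
    for row in cells[1:-1]:
        for bit in row[1:-1]:
            marker_id = (marker_id << 1) | bit
    return marker_id

# A 4x4-bit marker inside a 6x6 grid: black border, inner bits 1010 0000 0000 0001.
grid = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
print(decode_marker(grid))  # → 40961
```

In a real application, the decoded id would then be matched against a dictionary of known markers, and the marker’s corner positions would be used to estimate its 3D pose for anchoring virtual content.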

The ability to write on the image would be cool, too: now we can only apply a colored edge filter and a color mapping operation, but it would be great to unlock the possibility of adding filters of any kind to the image. Creators would love this opportunity.

Giving these powers to the community would unlock huge experimentation in mixed reality, letting everyone exploit its full potential. I’m pretty sure that people would come up with some amazing prototypes showing things we haven’t even thought about. Some very creative devs have already managed to create something cool with the limited tools we have now (think about Laser Dance or Starship Home), so imagine what they could do with the full power of AI and computer vision.

Laser Dance is a pretty cool concept, IMHO

We could unlock a new type of creativity and enthusiasm in our space, and make the whole technology evolve faster. If you remember that some of the most successful VR games (e.g. Beat Saber and Gorilla Tag) came from small and unknown indie studios, you realize how important it is to let everyone in the community experiment with new paradigms.

How to preserve privacy then?

I hope I have convinced you of how important it is for us creators and developers to have access to all the data we can about the experience the user is having. But at the same time, there are still concerns about the privacy risks of this operation: as I’ve said before, a malicious developer could harvest this data against your will. So, how do we empower developers without hurting users?

Of course, since I’m not a security expert, I do not have a definitive answer for you. But I have some ideas to inspire the decision-makers on this matter:

  • Most VR headsets are based on Android, and Android is an operating system that already cares a lot about these problems. We have cameras on our phones, and we take phones even into private places where we currently do not take our headsets (e.g. the toilet). But on phones, I can access the camera feed, so it’s a bit strange that I cannot do that on a headset. It would be ideal to copy the strategies that Android already employs on phones, where a popup asks if you want to grant certain permissions to the app you have just opened. If you do not trust the app creator, you can simply not grant the permission. Meta already does that for some features (e.g. spatial anchors), so it could do the same for passthrough
  • In general, as Alvin Graylin said during my interview with him, it’s important to give tools that let the user choose. Asking users whether they want to give an app camera access is a powerful feature. Another good idea could be asking the user WHERE he/she wants to give camera access: since the Quest can detect which room you are in, the user may decide to consent to camera access in his VR room, but not in his bedroom, for instance
  • Meta (or any other vendor… I talk about Meta because it has the most popular device) could use some AI magic to hide sensitive details in the images: for instance, the AI could detect whether there are faces or naked bodies in the frames, and those would appear censored in the images provided to the application. This would come at an additional computational cost, though
  • Meta could start by giving us developers the opportunity to develop “plugins” that use the camera images. For instance, the Meta SDK could allow the registration of a function that takes an image and returns a set of strings. This way, I would never directly manipulate the image (so I could not copy or stream it), because it is the OS that runs my algorithm over it without giving me direct access, but I could still get the results of the data analysis I wanted to perform
  • Alternatively, Meta could wire its SDK to many of its AI and computer vision services, so we could at least have a wide set of tools for tests and prototypes
  • Since Meta reviews every application that goes to its Store, every developer submitting an application requiring the camera feed could undergo heavy scrutiny, with checks on what data the app transmits and to what servers, the history of the company, etc. This would make life harder for malicious developers trying to get onto the Meta Quest Store (or any other store)
  • Meta could allow camera access only as a developer feature, available only in developer builds that can be distributed via SideQuest. While this is not ideal, it would at least let us developers start experimenting with it and share our work with other techie peers. Every user sideloading an application is most probably a skilled user, with enough technical expertise to decide whether to take the risk or not
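The “plugin” idea from the list above can be sketched as a tiny pseudo-API. All the names here are hypothetical (no such API exists in any headset SDK today): the point is simply that the OS owns the frame and hands the application only the strings the analyzer returns, so the developer’s code can never copy or stream the pixels.

```python
# Hypothetical sketch of an OS-mediated camera "plugin" API. The developer
# registers a function that maps a frame to a list of strings; the OS runs
# it and forwards only the strings, never the frame, back to the app.

from typing import Callable, List

Frame = List[List[int]]          # stand-in for a camera image
Analyzer = Callable[[Frame], List[str]]

class PassthroughOS:
    """Toy model of the headset OS, the only owner of the camera frames."""

    def __init__(self):
        self._analyzers: List[Analyzer] = []

    def register_analyzer(self, fn: Analyzer) -> None:
        self._analyzers.append(fn)

    def on_new_frame(self, frame: Frame) -> List[str]:
        # The OS calls every registered analyzer on the frame it owns and
        # collects only the string results for the application.
        results: List[str] = []
        for fn in self._analyzers:
            results.extend(fn(frame))
        return results

# A developer-supplied analyzer: e.g. report whether the frame is mostly dark.
def darkness_analyzer(frame: Frame) -> List[str]:
    pixels = [p for row in frame for p in row]
    return ["dark" if sum(pixels) / len(pixels) < 128 else "bright"]

os_model = PassthroughOS()
os_model.register_analyzer(darkness_analyzer)
print(os_model.on_new_frame([[10, 20], [30, 40]]))  # → ['dark']
```

The design choice worth noting is the direction of the calls: the application never asks for a frame, the OS pushes frames into sandboxed analyzers, which keeps the raw pixels out of application memory entirely.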

These are just suggestions. Probably my friends at XRSI have much better ideas for mitigating the privacy issues raised by opening up camera access. I care a lot about values like privacy and safety, so I’m all in for empowering developers in a responsible way. And I hope this article will help trigger a dialogue among all the parties involved (I will share it with both XRSI people and people from headset manufacturers and see what happens), because in my opinion it is crucial that we talk about this topic.

What to do if you need camera access now

Meta Augments are a nice tool, but I think we need more than this (Image taken during a Meta event)

What if you need camera access today? What if you want to experiment with AI and MR and you don’t want to wait for Meta/Pico/HTC to provide access to the camera feed? Well, there are some (not ideal) ways that let you at least do some experiments:

  • Use a headset that provides the access you want: some enterprise headsets give you access to the images the user sees. There are not many of them, but they exist. For instance, according to its documentation, the Lynx R-1 will allow the retrieval of camera images
  • Use a PC headset: on PC things are much more open than on Android, and usually it’s easier to “find a way”
  • Use additional hardware. If you use a Leap Motion controller, you should be able to grab the feed of its cameras, according to its docs. And recently, Leap Motion has become compatible with standalone headsets like Pico ones. Of course, you must be careful to calibrate the position of the Leap Motion’s cameras relative to the headset’s cameras
  • The poor man’s version of the point above is to stick a phone in front of your headset and stream the images from the phone to the headset via Wi-Fi. If you want to go the hard tech way, you can connect a USB camera to your HMD and try to retrieve the camera feed by starting from this open-source project and heavily modifying it, hoping that Meta lets you do this operation
  • You can also run ADB on a computer that is on the same network as your headset and have it stream the screen content of your headset to the computer (the ADB commands listed in this old post still apply), where you can grab the frames, analyze them, and then return the results via Wi-Fi to the headset application. This solution is complicated, adds latency, and requires a big part of the application to show the camera feed (because you stream the screen content, not the camera feed directly), but it could be used to start with some experiments.
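To make the last workaround more concrete, here is a minimal sketch of the computer-side relay loop. Everything in it is illustrative: `grab_screen_frame` stands in for whatever code actually decodes the mirrored ADB stream, the analysis is a placeholder, and sending the JSON back to the headset (e.g. over a Wi-Fi socket) is left out.

```python
# Illustrative computer-side relay for the ADB workaround: capture the
# headset's mirrored screen, analyze each frame, and serialize only the
# results, which is all that needs to travel back to the headset.

import json

def grab_screen_frame():
    # Placeholder: a real setup would decode the video stream mirrored
    # from the headset (e.g. via ADB screen capture) into pixel data.
    return [[0, 255], [255, 0]]

def analyze_frame(frame):
    # Placeholder analysis: count "bright" pixels; a real app would run
    # a computer-vision model or a marker detector here instead.
    bright = sum(1 for row in frame for p in row if p > 127)
    return {"bright_pixels": bright}

def relay_once():
    # One iteration of the loop: frame in, small JSON result out.
    frame = grab_screen_frame()
    return json.dumps(analyze_frame(frame))

print(relay_once())  # → {"bright_pixels": 2}
```

The important property is that only the small JSON result needs to cross the network back to the headset, while the heavy frames stay on the computer.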

UPDATE (2024.03.26): Leland Hedges, the Head of Enterprise Business at PICO XR EMEA, answered this post on LinkedIn, saying that Pico allows access to the camera stream data on a case-by-case basis on its Pico 4 Enterprise headset. Reach out to him on LinkedIn (or ask me for an introduction) if you are interested in this possibility


As I’ve said, I hope that this post will trigger a debate in our community about accessing camera data from MR applications. So please let me know your considerations in the comments of this post or on my social media channels. Let’s try to push our ecosystem together, as always.

(Header image by Meta)

The post We need camera access to unleash the full potential of Mixed Reality appeared first on The Ghost Howls.

09 Sep 20:33

Goliath Review: a deep experience about psychosis

by Skarredghost

At the latest Venice VR Expanded, one of the experiences all the magazines are talking about is Goliath by ANAGRAM, narrated by the beautiful voice of Tilda Swinton. ANAGRAM is the studio behind one of my favorite VR experiences, The Collider, so I really wanted to try its new creation, and thanks to the people there who provided me a key, I have been able to experience Goliath in preview. Here are my impressions of it.

[ATTENTION: THIS REVIEW CONTAINS SPOILERS ABOUT THE EXPERIENCE. This is not an experience with a real story to follow, so there’s no true ending to spoil… and I think you can appreciate it even knowing some parts of it. I will reveal various details about it (not everything, of course), so if you want to watch it without knowing beforehand how it is made, stop here. Otherwise, go on]

Goliath: Playing With Reality

Anagram’s official website describes the experience “Goliath: Playing With Reality” (this is its full name) this way:

Through mind-bending animation, GOLIATH: PLAYING WITH REALITY explores the limits of reality in this true story of so-called ‘schizophrenia’ and the power of gaming communities.

Echo (narrated by Academy Award-winning actress Tilda Swinton) guides you through the many realities of Goliath, a man who spent years isolated in psychiatric institutions but finds connection in multiplayer games. Combining heart-felt dialogue, mesmerising visuals and symbolic interactions, weave through multiple worlds to uncover Goliath’s poignant story.

In other words, it is an experience that tells you the story of a guy nicknamed Goliath, who goes through a complicated life marked by episodes of psychosis and then finds some relief in playing online multiplayer games. The experience not only tells you his story but also tries to make you understand how a person affected by schizophrenia feels every day.

The experience

Goliath is an experience that is difficult to categorize, and also difficult to describe. It’s crazy, it’s non-linear, it’s many things at once. In this sense, it is similar to The Collider, another experience that was outside all the schemes of other XR experiences. And exactly like The Collider, Goliath plays a lot with your mind and your feelings.

I can probably define Goliath as a “collection of scenes”, each different from the others. Some of them are interactive, others are not. Some of them are minigames (actually playable minigames, which create a parallel with Goliath’s life as a gamer), others are just narrated. Some have a cartoonish style, others are abstract, and one is even made with photogrammetry. Some scenes are driven by the narrating voice of Tilda Swinton; in others, she just says a few words. It is a continuous jump between different things, usually happening via the ending scene decomposing into tiny squares that then recompose into the new scene. The common thread is narrating Goliath’s story, from when he is young, through all his troubles, until his relief through online videogames, while also explaining to you what schizophrenia is. But every chapter of this experience looks stylistically disconnected from the others.

Just to give some examples of the available scenes:

  • In one scene, you are playing Goliath’s life as a game in an arcade room;
  • In another one, you see Goliath from the outside trying to integrate with society with all the reality around him represented by cubes;
  • Then there is one scene in photogrammetry where you are in the room where Goliath plays his computer games;
  • In one, you are together with Goliath in the sad room of his psychiatric hospital, and he is red while all the room around him is greyish.

All of them are different and appear loosely connected. But all of them are always weird, crude, disturbing… and all present some original and creative solutions for depicting the particular situation you are in.

Goliath in a reality made by cubes. His head is a cube too. (Image by ANAGRAM)

This is cool from a technical standpoint, because it means the creators had to work with a big team to create all these different scenes, but from an experience standpoint, I found it quite confusing. It looked to me as if there was not a coherent experience to follow, but just a collection of mini-episodes: yes, all of them were related to schizophrenia or the life of Goliath, but they didn’t even seem to belong to the same application. It is like a movie in which the first 10 minutes are a cartoon, the next 10 minutes are set in ancient Rome, the next 10 minutes are sci-fi, and so on, while still telling the same story. It would be original, but a bit weird, and that is how Goliath seemed to me.

I’m not a smart man [cit. Forrest Gump], and I’m sure there is some big picture behind this collection of different scenes that intellectual people understand and I do not… like those Fellini movies that I see as just a bunch of weirdness while movie critics find them fantastic. Probably the reasoning is that Goliath’s reality, because of his mental illness, is actually made of different realities, all different, all loosely connected, and so are the realities that you as the viewer are experiencing.

One trippy moment of the experience (Image by ANAGRAM)

This would be coherent with the other theme of the experience, which is making you understand what it is like to have schizophrenia. It is as if the experience doesn’t only want to tell you the story of Goliath, but also wants you to feel the same things that Goliath feels in the parts of the story it is telling you. This is the reason for the “playing with reality” part of the application’s name: from the beginning, the narrating voice questions your sense of reality. Reality is, in the end, what our brains want us to feel… we have no idea what the real reality is, we just know how we interpret it… and a person with psychosis sees many realities, some different from our “usual” one. The experience makes you go through all these different scenes, which have different styles but are all weird and disturbing. It does its best to make you feel disturbing sensations, for instance by:

  • Showing your hands becoming geometric shapes
  • Making the elements of the reality around you decompose and recompose
  • Making you see a scene from an unusual point of view (e.g. in the DJ-ing scene, you are like a little creature on the discs, and you see an enormous Goliath playing with the console where you are)
  • Making you hear disturbing voices (e.g. in the hospital scene, you hear the same word repeated many times by voices all around you, talking one on top of the other)
  • Putting you in scenes that have trippy visuals
  • Changing continuously the setting you are in

In fact, the experience warns you at the beginning that you may feel unsettling sensations. And honestly, in this, it failed with me. Goliath reminded me a bit of the sensations I had when trying Ayahuasca at the Sandbox Immersive Festival: I could understand that those trippy visuals were meant to make me feel the way people who use the drug feel, but they did not; I just saw weird visuals that were well made but had no meaning for me. Here it was the same: yes, I could feel that some things were “weird”, that they should have made me understand how a person with psychosis perceives reality, that they should have made me feel weird or disturbed… but they didn’t impress me at all, and didn’t give me the impression of having psychosis. I just saw them and thought “ah, this is to make me feel how that guy feels”, and that’s it. As someone who has watched 2 Girls 1 Cup without batting an eye, I need something more than some trippy visuals to be impressed. Probably I should have drunk two beers to appreciate them more.

And here comes another problem: the “unease tourism” (as I call it) that VR can offer. Some time ago, I shared with you an article arguing that VR is not the ultimate empathy machine, and one of the reasons is that it puts you in the difficult situation you want to generate empathy for only for a limited time. Last year I read an article that used the term “racism tourism” when talking about living in VR, for just some minutes, in the shoes of someone undergoing racist attacks. The point is: living through some bad experience from the comfort of your home for 30 minutes is not like living a whole life of psychosis, or racist attacks, or physical pain, or whatever distressing condition. It is just “seeing how it is”, and that’s it. It’s like taking a mental trip into some difficult situation, feeling something, and then returning to your usual life.

This is also how I felt when playing Goliath: in some parts, my brain was like “oh, look at these weird visuals, these weird sounds… this is similar to what people with psychosis may feel… interesting thing to discover”… knowing that soon I would return to my standard life. What the experience made me feel about having psychosis felt like a toy to me; it gave me no real feeling of having a psychotic attack and also gave me almost no real empathy for the condition. It was just as interesting as jumping from great heights in a VR game to see how falling feels. Only if I rationally think about the story that Goliath told me in the experience do I understand the black hole of pain he was in: a complicated family, addictions, psychotic attacks, years in mental hospitals… this is terrible, and it hurts me to think about it.

Big sentences, bright colors, weird visuals are meant to give you unsettling feelings (Image by ANAGRAM)

I’m not blaming Anagram for this: they tried to do their best, and it’s honorable that they have worked so hard to put the spotlight on the important, and often forgotten, theme of psychosis. They worked well on this. But the result, for me, was not the expected one. This is what happened with my perverted mind; maybe with your more sensitive one, the result would be different. The nice thing about art is that every one of us experiences it in a different way.

When I say that Anagram did a great job, I sincerely mean it: with Goliath, this studio confirms itself as a team able to develop high-quality and very original experiences. Goliath is different from everything else I’ve tried in VR, and it is incredibly well-made. Audio and visuals are great, and some parts are developed with creative genius. For instance, at a certain point, you see Goliath, and instead of his head you see a lot of symbols changing continuously (e.g. a middle finger, a smile, a question mark, etc.) to show his confusion about reality at that moment; then, when he takes his medicine, he returns to having a standard head, to symbolize that the medicine gives him some moments of peace of mind. There is a moment when you are in Goliath’s brain and hear flows of disturbing noises, and only when you move your controllers on top of them can you hear that they are real sentences talking about psychosis; this again makes you understand how a person with psychosis may perceive other people talking to him. There are many moments like these, and they show how this experience must be watched multiple times to be appreciated completely (I’ve watched it twice, indeed).

(Activate sound on this GIF to hear the flow of noises becoming sentences)

The narrating voice plays with your mind: her voice (the one of Tilda Swinton) is soothing and speaks like that of a hypnotist. She welcomes you and prepares you for the journey between realities, then narrates the story of Goliath throughout all the weird scenes, and, in the end, salutes you by questioning the whole sense of what “reality” really is. Anagram was already good at playing with my mind in The Collider, and it added these hypnotic hints to this experience too. But again, this “hypnosis” didn’t work so well with me: it wasn’t able to dig a deep hole in my mind, and I remained rational and lucid during the whole experience. Anyway, the sentences she says are smart and fascinating, and I appreciated all her questions about the sense and meaning of reality. She was my guide on my journey through the various realities of Goliath’s mind, and she was a good guide.

When I removed the headset, I had no weird sensation of having returned to my reality after this journey across realities, and this proved that the experience was not that effective on me. But I kept feeling a sensation of sadness, of discomfort: Goliath’s story is very sad, and it made me feel sad. The fact that this emotion was transferred to me is a win for Goliath anyway: VR is all about emotions for me, and if an experience has managed to make me feel emotions, it has been somehow successful. Goliath did this pretty well, so I can say it has been worth living.

It is a deep experience: like everything that tries to make you feel something, that tries to bend your mind, it keeps you thinking about it even hours after you have watched it. It is an experience that I will surely remember for a long time.

Multimedia Elements

As I’ve said, this is a high-quality experience, and all its audio and visual elements are carefully studied. The number of different styles the various scenes are made in, going from realistic to abstract to gamey, is also impressive. This has surely required an enormous effort: in fact, the final credits are pretty long. And everything is polished and curated in detail: for instance, I remember a scene where the world composed itself only in the direction I was looking and decomposed where I was not looking. I found it pretty fascinating.

Audio surprised me in a pleasant way, and it’s not common for me to say this, because I’m not an audio expert at all. Tilda Swinton’s voice is fantastic, and the soundtrack and audio effects are really immersive and put you in the mood of the experience. The only problem on this side for me was with Goliath’s voice: it is distorted to make the experience trippier, but this way, as a non-native English speaker, I was not able to completely understand what he was saying. I don’t know if this was an intended effect or a problem, actually.

User Input

In this scene, you play a minigame in an arcade using the thumbstick of your right controller (Image by ANAGRAM)

The user interacts with the experience with his/her gaze, or through simple interactions with the controllers using the triggers or the thumbstick. Every scene is different from the others, and each one has its own interaction scheme: in one scene you use the controllers as guns, in another you use the thumbstick to play an arcade machine, in another you move your controllers over some streams to uncover words, and so on. The experience shows explicit indicators to tell the user how to interact in every scene, so there’s no risk of the user not knowing how to go on. There is also a moment, at the beginning, where the user has to use his/her own voice, and this shows how varied the input schemes are.

Comfort

Goliath shouldn’t give you motion sickness, because it is a very static experience (you can play it seated). But it is also made to make you feel uncomfortable, to make you feel what it is like to have psychosis, so I can’t say it is a comfortable experience for your brain. It is the opposite.

Final impressions

While I totally loved The Collider, I have mixed feelings about Goliath. It’s a well-made experience, it is very original, it is deep, it talks about an important theme like mental illness, it features Tilda Swinton, it gave me feelings (sadness, to be exact). These are all enormous pros on its side.

But it is also true that I found it a confusing collection of scenes; I really didn’t feel like someone with psychosis, and I didn’t feel any kind of empathy from living inside it, even though I played it twice in a row. I came out with a feeling, but not with a deep bond with the main character or with his illness. It was just a journey inside the life of someone else, and that’s it… but I expected much more from this point of view, which I guess was one of the purposes of the experience.

It’s a kind of mixed bag, then. I agree with everyone else that this is a great experience, and I’m happy to have tried it, because it is worth a watch (and even two), but I haven’t fallen in love with it. At the same time, it is an experience I’ll probably never forget for its depth and originality.

If you are into VR storytelling, give it a try: Goliath is available on Oculus Quest Store starting today.

(Header image by ANAGRAM)

The post Goliath Review: a deep experience about psychosis appeared first on The Ghost Howls.

07 Jul 22:45

‘A Township Tale’ Gets Quest vs. PC Graphics Comparison

by Ben Lang

A Township Tale is soon to launch on Quest, and developer Alta has now offered up a graphical comparison to show how things differ between the Quest and PC versions of the game.

A Township Tale’s stylized art direction might make it look like it wouldn’t be too demanding to run on Quest, but the game’s large open world exists as a nearly seamless space, with sightlines that sometimes allow players to see across vast distances, not to mention tons of physics-based objects and interactions, all happening with up to eight simultaneous players.

Even though the game was built for PC well before the original Quest was even announced, developer Alta has managed to get A Township Tale’s open world running on the low-powered device. It’s definitely a downgrade from PC, but the studio clearly took care to bring the essence of the game to Quest without simply crushing the resolution.

A common technique used to get games to run on lower-end hardware is to employ a dense fog wall around the player in an effort to drastically cut down how much of the game world must be drawn at any given time.

As A Township Tale game director Boramy Unn explains, this wasn’t an option for the game, because the studio wanted to preserve distant landmarks in the world which help players navigate. While the distant landscape is significantly cut down in its level of detail, the core function of guiding players remains intact. Fortunately nearby buildings and objects render in solid quality, as we saw in our recent preview of the Quest version of A Township Tale.

Boramy also notes that shadows and transparency had to be removed from the Quest version because they were too expensive to run on the headset’s low-powered processor while still reaching the goal of a “perfect framerate” on the headset. For the game’s crucial torches, the studio says they’ve “done some wizardry to give the impression of lights on a shader-wide level.”

Going forward, the studio says it plans to focus on improving the game’s graphical presentation.

– – — – –

Priced at $10, A Township Tale launches on Quest on July 15th, or on July 13th for those who pre-ordered the game. On PC, the game is already available and free-to-play.

The post ‘A Township Tale’ Gets Quest vs. PC Graphics Comparison appeared first on Road to VR.

15 Dec 14:30

RFL Announcement: 4 Million US Dollars--1 Billion Lindens

by Bixyl Shuftan

 

Congratulations to everyone who has been involved with Relay For Life of SL over the years!

The 2020 Christmas Expo took us over $4 million USD total raised since 2004!

That is ONE BILLION LINDENS!

As we celebrate, take a few minutes and journey with us as we chronicle the history of Relay For Life of Second Life.

We titled this video “Jade Lilly - An Army of One” because it all started with her in 2004! One person turned into an army of thousands!

https://www.youtube.com/watch?v=CvGYDMLtry4

14 Apr 18:23

Origin of the first known interstellar object 'Oumuamua

Beijing, China (SPX) Apr 14, 2020
What is the origin of the famous interstellar object 'Oumuamua? How was it formed and where did it come from? An article published on April 13 in Nature Astronomy by ZHANG Yun from the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC) and Douglas N. C. Lin from the University of California, Santa Cruz, offers a first comprehensive answer to this mystery, which involves tidal forces.

11 Jul 22:15

Elon Musk says he will fund fixing Flint’s foul water

by Jonathan M. Gitlin

(credit: Aurich Lawson / Getty)

For around four years now, the water supply to the city of Flint, Michigan, has been contaminated with lead. Now, Tesla and SpaceX CEO Elon Musk has promised to help. Replying to a request on Twitter, Musk pledged to fund remediation work to houses with contaminated water supplies.

For some time now, people on Twitter and elsewhere have been calling on Musk to turn his attention to this domestic scandal; those calls escalated in response to his high-profile interest in the rescue of 12 children and their soccer coach from a cave network in Thailand.

As is usually the case with plans that are barely an hour old, the details are thin as of now. But Musk—tweeting from China—told people in Flint to reply to his tweet with test results showing contamination above the recommended limits, at which point he would arrange having a water filter fitted for them. (We should note that it's actually the EPA, not the FDA, that sets limits on environmental pollution exposure, and that the state of Michigan has already been supplying water filters to affected residents.)
