Gracia on Quest 3 review: a promising volumetric content platform

(Image by Gracia)

After I tried Meta Horizon Hyperscape and loved the realism of its reconstructed scenes, many people suggested I also try Gracia, a similar platform. And guess what? I’ve tried it, and here is my review.

Gracia

Gracia is a platform that hosts volumetric content, that is, content that has been scanned and then digitally reconstructed in full 3D. The reconstruction is made of Gaussian Splats, the cutting-edge technology that allows for high-quality 3D rendering of objects and scenes. The vision of Gracia is to become the YouTube of volumetric content: creators from all over the world scan scenes and objects with their phones and upload them to Gracia, while users go to the website/app, choose the content they want to enjoy, and immerse themselves in it. Content may be static or dynamic (i.e. volumetric videos), and people should be able to fully immerse themselves in these scenes and move around them as if they were real.

The vision is amazing, but of course, every long journey starts with a small step, and right now Gracia is an application that hosts a small amount of content, mostly static, mostly uploaded by the Gracia team itself. It is available both on Quest 3 and PC: the Quest 3 version has lower performance and features hardcoded content, while the PC version shows higher-quality Gaussian Splats and can also be used to view volumetric content made by independent creators. The fact that it can run environments reconstructed with Gaussian Splats in real time on Quest is a little technical marvel the team at Gracia is proud of. According to Upload VR, the Gracia team claims this is possible because its specific Gaussian splatting rendering implementation is faster than “any other technology on the market”.
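If you are wondering what “rendering Gaussian Splats” actually involves, here is a minimal toy sketch in Python of the core idea (my own illustration of the general technique, not Gracia’s actual implementation): the scene is a cloud of 3D Gaussians, each with a position, shape, color, and opacity, and each pixel’s color is obtained by blending the projected Gaussians that cover it, sorted front to back.

```python
import numpy as np

def render_pixel(splats, pixel_xy):
    """Composite one pixel's color from a list of projected splats.

    Each splat here is a dict (a made-up toy format for illustration):
      'mean'  : 2D screen-space center (the projected 3D Gaussian)
      'cov'   : 2x2 screen-space covariance matrix (the splat's shape)
      'color' : RGB color
      'alpha' : base opacity in [0, 1]
      'depth' : distance from the camera, used for sorting
    """
    color = np.zeros(3)
    transmittance = 1.0  # how much light still passes through this pixel

    # Blend front-to-back: closer splats occlude farther ones.
    for s in sorted(splats, key=lambda s: s['depth']):
        d = pixel_xy - s['mean']
        # Gaussian falloff: the splat is most opaque at its center.
        weight = np.exp(-0.5 * d @ np.linalg.inv(s['cov']) @ d)
        a = s['alpha'] * weight
        color += transmittance * a * np.array(s['color'])
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:  # pixel is fully covered, stop early
            break
    return color

# Example: a red splat in front of a blue one at the same pixel.
splats = [
    {'mean': np.zeros(2), 'cov': np.eye(2), 'color': [1, 0, 0], 'alpha': 0.6, 'depth': 1.0},
    {'mean': np.zeros(2), 'cov': np.eye(2), 'color': [0, 0, 1], 'alpha': 0.9, 'depth': 2.0},
]
print(render_pixel(splats, np.zeros(2)))  # mostly red, with some blue behind
```

Doing this for millions of splats, every frame, on a mobile chipset is exactly why a fast implementation matters so much.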

I have only been able to try the Quest 3 version because the PC one requires an RTX 3060 graphics card, which I don’t have (you have to make me more donations on Patreon so I can afford a more powerful computer, LOL).

Gracia vs Horizon Hyperscape

A part of a room rendered inside Meta Horizon Hyperscape

Both Gracia and Horizon Hyperscape provide volumetric content rendered via Gaussian Splats, and both aim at becoming platforms for user-generated content, so throughout this article I will make a few comparisons between the two pieces of software. But what are the differences between them?

First of all, the difference is in the companies behind them: the team at Gracia is a small startup, while Meta is… well, Meta. Then, Gracia performs all the rendering locally on the device, while Meta performs at least part of the rendering remotely on some powerful server and streams the result to the Quest using the Meta Avalanche cloud streaming service. This means that Gracia is more optimized and can work offline, while Horizon Hyperscape can provide higher-quality visuals because they are rendered in the cloud.

Gracia hands-on

Video with my first 10 minutes inside the experience

When I started Gracia on my Quest 3, I found myself in an empty space with a panel in front of me describing a piece of content, plus a button to download it. There were a few arrows I could use to scroll to the next panel, which described another piece of content. This initial menu basically let me choose which content I wanted to enter: for each piece of content, I could download it, and once the download was complete, I could enter it. As I’ve said before, this is different from Hyperscape: in Gracia, content must be downloaded locally, because it is also rendered locally.

The interface was usable both with my hands and my controllers, but I have to say, it was horrible. I don’t know who made it, and I don’t even know how this person could make it so bad, considering that game engines like Unity are now so full of drag-and-drop facilities that you can make point-and-click 2D UIs in seconds. No rays were coming out of my hands, so I had to guess where the cursor was going, and the cursor was shown only when it hit a UI canvas… so if I didn’t see it, good luck understanding how I should rotate my hands to aim at the UI. To make things even worse, the (invisible) cursor ray was tilted more downwards than usual, and when the cursor appeared on the UI, it moved in an incoherent way… it was very uncomfortable to use.

Anyway, with some patience, I managed to learn how to use my hands to operate the menus, so I downloaded all the pieces of content and entered every one of them. The first piece of content is their best one: it is called Embryo Of The Future, and it is composed of 11 scenes showing the transformation of a futuristic girl who at the beginning is trapped in a chrysalis, and then frees herself and covers herself with flowers. There is a voiceover of the girl speaking about her sensations during this evolution. I am a simple man who watches soccer and movies with Leslie Nielsen, so I have no idea what this meant, but I guess people more into artistic performances may explain it to me. The only thing I could appreciate is that, visually, it was beautiful.

The girl trapped in a web chrysalis (Image by Gracia)

While this first piece of content was more like a storytelling experience, the others were just showcases of static elements: there were a few environments, a Lego model, and also a collection of three pieces of food. In every one of the environments, I could use my hands or my controllers to move, zoom into the content, or rotate it. All these operations used the classic gestures: e.g., to zoom in, I had to pinch with both hands and then spread them apart. The zoom and rotation sometimes happen around a weird pivot point, but in general, the interactions with the elements are pretty usable.
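For the curious, here is how a two-hand “pinch and spread” zoom gesture is typically implemented (a minimal sketch of the general technique, assuming nothing about Gracia’s actual code): the scale factor applied to the scene is simply the ratio between the current distance between the hands and the distance when the gesture started.

```python
import numpy as np

class PinchZoom:
    """Toy two-hand pinch-to-zoom logic (illustrative, names are my own)."""

    def __init__(self):
        self.start_dist = None   # hand distance when the gesture began
        self.start_scale = 1.0   # scene scale when the gesture began
        self.scale = 1.0         # scale currently applied to the scene

    def update(self, left_pos, right_pos, both_pinching):
        dist = np.linalg.norm(np.asarray(right_pos) - np.asarray(left_pos))
        if both_pinching:
            if self.start_dist is None:
                # Gesture just started: remember the reference distance.
                self.start_dist = dist
                self.start_scale = self.scale
            else:
                # Gesture in progress: scale follows the distance ratio.
                self.scale = self.start_scale * dist / self.start_dist
        else:
            self.start_dist = None  # gesture released
        return self.scale

# Spreading the hands from 20 cm apart to 40 cm doubles the scene's scale.
zoom = PinchZoom()
zoom.update([0, 0, 0], [0.2, 0, 0], True)
print(zoom.update([0, 0, 0], [0.4, 0, 0], True))  # -> 2.0
```

The “weird pivot point” I mentioned usually comes from where the scaling is centered: scaling around the scene’s origin instead of the midpoint between the hands makes the content appear to drift while you zoom.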

The reconstructed scene of a cut tree inside a forest (Image by Gracia)

In the various scenes, there is always a system UI through which you can do some operations, the most relevant of which is switching to passthrough vision, so you can see the reconstructed object inside your room. This is a very cool feature that is not available in Meta Horizon Hyperscape.

Volumetric content quality

I have mixed opinions on the content I enjoyed on Gracia. The quality is overall good, but not as good as in Horizon Hyperscape. If I had tried this before Hyperscape, I would probably be here saying that this app features some amazing Gaussian Splats artworks, but after having seen the high bar set by Meta, it’s hard not to notice all the imperfections in the content on Gracia. As I’ve said before, this is also because Gracia renders everything locally on the limited chipset of the Quest 3, which requires hard compromises. On Gracia, I also experienced noticeable glitches and stutters: I wouldn’t suggest trying it to people who are sensitive to motion sickness (maybe they can try the PC version if they have a powerful enough graphics card).

Zoom on the girl’s neck. You can see all the imperfections that make her look like she has a bit of fur. This somewhat ruins the sense of immersion in the experience

As I’ve said, the content is good, but there are imperfections. The reconstruction of the chrysalis girl is impressive and very well made, but at the same time, the human brain is very good at spotting flaws in a human figure, so I could easily notice the imperfections in her. My brain kept telling me “she’s not real”. Meta’s choice to showcase only static environments in Hyperscape has been smart in this sense, because the brain is more easily tricked by this type of content.

Notice all the distortions and artifacts that are in the background (Image by Gracia)

Most of the scenes have a few parts that are rendered in high quality, while the background appears very fuzzy. The food items, instead, are scanned without any scene around them, and they were actually my favorite content. If I had to choose a highlight of my experience, I would absolutely pick the burger.

There is a burger shown in Gracia that is incredibly well scanned and rendered: it looks real. And since it has no other objects around it in its scene, you can activate AR and put the burger on the table in front of you: if it weren’t brighter than reality, I’m pretty sure it could easily trick you into believing that the burger is real. The burger is what showed me the potential of this app, especially in combination with augmented reality. The fact that someone could scan an object and I could put it persistently into my reality is a fascinating concept. For sure, there will be companies that will exploit this in the future.

I think watching this burger alone is worth downloading the app

Another thing that I found powerful about Gracia was the possibility of freely moving inside the environments: using the thumbstick of my controller, I could fly around the scenes, moving with all six degrees of freedom. This is in stark contrast with many other applications: even Horizon Hyperscape lets you move only through teleportation. With Gracia, you can move both with roomscale and smooth locomotion: it feels like you are really in the scene.

Final impressions

Gracia is a bit like the chrysalis woman: it has to free itself from all its imperfections and show its full potential (Image by Gracia)

I think that Gracia is a very promising experience: the idea of having an open platform where people can enjoy, in mixed reality, content scanned by other people is fascinating. And when the virtual elements are rendered well, as happened with the burger, it feels like a magical experience. The episode with the woman showed me that platforms like this one also have potential for art and storytelling (and also porn, let’s be honest). But currently, while it is remarkable that the experience is able to run on a limited device like the Quest 3, the application is still held back by problems with performance and user interaction.

This is the current situation of Gracia, but I hope its journey will be long and that all these issues will be addressed over time. I will for sure keep an eye on it, also because I’ve read they are going to release volumetric videos rendered with Gaussian Splats…
