CES 25: Hands-on Ray-Ban Meta and its experimental AI features
Yesterday at CES, in a Meta Showroom, I was finally able to try the Ray-Ban Meta glasses and especially their experimental AI features! Let me tell you everything about this experience.
(As usual, let me remind you that since I had just a few minutes to try the device, this can’t be considered an exhaustive review, but just a “first impression” article. Take everything written here with a grain of salt)
Design
One of the reasons why Ray-Ban Meta has been such a success is that it is made by Ray-Ban. With all due respect to the work of Meta’s engineers, one of the main reasons people buy Ray-Ban Meta glasses is that they have a Ray-Ban design, so they look good on the face. The glasses are very thin and stylish: when I wore them, I thought they didn’t exactly suit me, but they were good-looking anyway. Let me share with you some photos of the device, so you can see it from different points of view:
Even the case is pretty cool: the one I was shown was made of some sort of leather and had a classy design. On the bottom, it features a USB-C port for charging the glasses. The little button used to open the case also features a colored ring that is green when the glasses are fully charged and turns orange while the case is charging them: this is very handy for checking the battery status of the device. Let’s have a look at the case, too!
I loved the design of the glasses and the case. I guess it was to be expected: we Italians know how to design good-looking things!
Comfort
The Ray-Ban Meta glasses feel like standard glasses. I haven’t worn them for hours, so I can’t tell you whether the few extra grams over a standard frame make any difference in the long run. In the 10 minutes I used them, I had no problems at all: they felt very lightweight.
Photos and Videos
Meta people let me capture a few photos and videos with the device. To do that, you can use voice commands, saying “Hey Meta, take a picture/video” or something like that, or you can use the button on the right arm of the frame (short press for a photo, long press for a video). In either case, the glasses emit a short sound to signal that they understood your command and then turn on the white status LED to signal to the people around you that you are capturing images with the camera.
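If it helps to visualize the interaction model, here is a minimal Python sketch of the capture controls as I understood them: the same two actions (photo and video) can be triggered either by voice or by the button, and both paths end with the confirmation sound and the white status LED. This is purely illustrative pseudo-logic based on what I observed, not Meta’s actual firmware or API, and all the names in it are mine.

```python
# Illustrative sketch only: models the capture controls described above,
# not Meta's actual firmware or API.
from enum import Enum, auto


class Action(Enum):
    PHOTO = auto()
    VIDEO = auto()


def interpret_input(event: str) -> Action | None:
    """Map a user input to a capture action.

    "voice_photo" -> "Hey Meta, take a picture"
    "voice_video" -> "Hey Meta, take a video"
    "short_press" -> short press on the capture button
    "long_press"  -> long press on the capture button
    """
    mapping = {
        "voice_photo": Action.PHOTO,
        "short_press": Action.PHOTO,
        "voice_video": Action.VIDEO,
        "long_press": Action.VIDEO,
    }
    return mapping.get(event)


def capture(event: str) -> None:
    action = interpret_input(event)
    if action is None:
        return
    print("beep")                  # short sound confirming the command was understood
    print("white status LED on")   # warns bystanders that the camera is active
    print(f"capturing {action.name.lower()}")


capture("short_press")  # -> beep, LED on, capturing photo
```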
The camera is a 12MP one, so it can shoot photos up to 3024 x 4032 pixels and record videos up to 1080p. The videos are recorded with spatial audio thanks to the 5 microphones installed on the glasses. One of the Meta people walked around me while I was recording a video, and when we played it back on the phone, with the audio routed through the glasses, I could hear her voice moving from one ear to the other. That was pretty cool.
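As a quick sanity check of the numbers above: 3024 x 4032 pixels is indeed about 12 megapixels, and the photos come out in a 3:4 (portrait) aspect ratio.

```python
# Sanity check of the photo resolution quoted above.
width, height = 3024, 4032
megapixels = width * height / 1_000_000
print(f"{megapixels:.1f} MP")                # ~12.2 MP, consistent with a 12MP sensor
print(f"aspect ratio {width / height:.2f}")  # 0.75, i.e. 3:4 in portrait orientation
```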
The quality of the recorded videos is pretty nice: yes, you can obtain much better quality with the latest flagship smartphones, but it is still very good for something so small and lightweight. You can watch a video I recorded at the booth below and judge for yourself.
Photos and videos are shot and temporarily saved on the device. You can then transfer them to your phone via the companion app.
Audio
I’m not an audio expert, so I can’t tell you about the quality of the speakers in much detail. I have an average ear, and to me, the audio quality was good. The glasses are also definitely loud: I was in a noisy showroom, and I could hear perfectly what the glasses were telling me. In fact, I remember that when the glasses were speaking loudly, I had difficulty understanding what the Meta hostess, who was close to me, was saying. I was impressed on this front.
Privacy
The glasses turn on a small white light on the right temple when the camera is on. This is to warn the people around you that there is the possibility you are recording them. After reading so many complaints about it, I expected this light to be almost invisible, while actually it is noticeable if you are close to the device. The major problem, to me, is that no one outside our circle knows what that white dot means, so even if people see it, they don’t know it signals a privacy risk for them.
Then, of course, we have to mention the elephant in the room: Meta has not exactly been a champion of user privacy in the past. So if you wear a Meta device with cameras on your face, you have to consider the risk that some of the operations involving the cameras may send data about what you see to Meta.
Experimental AI features
At Meta Connect, Mark Zuckerberg announced upcoming AI features for the Ray-Ban Meta glasses that could transform them into assistants that help you in your everyday life. These features have not been released to the public yet (and I was not given a timeline), because they are still in private beta. They are of course available to Meta employees, who let me try them so that I could get a glimpse of the future.
AI assistant
Instead of asking the Meta AI a single question every time, it will be possible to start a continuous AI session by saying something like “Hey Meta, start an AI session”. From that moment on, until you say “End session”, the camera will always be on, and the AI will be in conversational mode. The glasses will always be aware of what is around you, so that the AI can support you in your activities. It is nice that you must explicitly start a session like that: this way, it is you who decides when the glasses are always on and start capturing your surroundings. Privacy-wise, this is the approach we need to avoid having glasses that can spy on us every moment of our lives.
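Here is a tiny conceptual sketch, in Python, of what this explicit start/stop model implies: the camera and the conversational AI are active only between the two voice commands. This is just my assumption of the flow described above, not Meta’s implementation, and every function and string in it is invented for illustration.

```python
# Conceptual sketch of the explicit session model (assumed flow, not Meta's code).
def run_ai_session(utterances):
    """Process user utterances; visual context is used only inside an explicit session."""
    in_session = False
    for text in utterances:
        if text.lower() == "hey meta, start an ai session":
            in_session = True
            print("camera on, conversational mode active")
        elif text.lower() == "end session":
            in_session = False
            print("camera off, session closed")
        elif in_session:
            # Inside a session, every question can use what the camera currently sees.
            print(f"answering with visual context: {text!r}")
        else:
            # Outside a session, nothing is captured.
            print(f"ignored (no active session): {text!r}")


run_ai_session([
    "Hey Meta, start an AI session",
    "Who is this person?",
    "End session",
    "What is this?",   # asked after the session ended, so no visual context is used
])
```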
I tried this feature in front of a big shelf full of objects, which also featured some example questions I could ask the AI. Needless to say, I didn’t follow the suggestions…
As soon as I started the AI session, I looked at the Meta hostess helping me and said, “Who is this person?”. Meta AI answered that it cannot provide details about people: this is good on the privacy side. So I looked at a toy car and asked what it was. The AI answered correctly, as expected. I did the same with some magazines, having a small conversation about them. Then I decided to see how well the AI could reason about the past: I asked what the first question I asked was, and it correctly remembered it was about the Meta hostess. So I decided to throw a curveball: I looked again at the hostess and asked if she was the same person I was framing during the first question. Meta AI answered that it can’t answer questions about people. I then looked at another toy car on the shelf and asked Meta AI if it was the same car I was looking at in the beginning (when I asked it to tell me what a specific toy car was). Meta AI answered “Yes, it is the same car”, but that was not true. So I corrected it by saying “No, it is not the same car”, and at that point the AI answered, “Ah no, it’s not the same car”. No shit, Sherlock, I’ve just said it, it’s too late to change your answer! I then asked Meta AI to remember where I was putting a specific object, but the AI answered that it could not do that during a live session, which sounded pretty weird to me.
The impression I had is that the AI system works pretty well for standard questions like “What is this?” and “How do you do that?”, but the moment you start asking it more complex questions that require it to be more context-aware, it still has problems. From the hands-on articles I’ve read, it seems that Gemini on Android XR is better in this sense. But let’s also remember that these Meta AI features are experimental and not released yet, so they may still improve.
AI translation
The Meta hostess also wanted to showcase the live AI translation features of the glasses. We wanted to have a session where she spoke Spanish and I spoke Italian, but it seems that, at the moment, at least one of the two languages must be English (which is a bit disappointing to me). So we decided that she would speak Spanish and I would speak English. She was speaking to me on the phone, and I could hear the English translation of what she was saying through the speakers of the glasses. I would then answer in English, and she could read the Spanish translation of what I was saying in the glasses’ companion app on the phone.
The system worked fairly well, but not without some hiccups. First of all, the LLM needs some time to do the translation, so the conversation cannot be very fluid: every time someone speaks, the other one has to wait 2-3 seconds to understand what has been said. Also, not everything we said was translated: some parts of the sentences were omitted, as if the glasses hadn’t heard them. The Meta person told me that this also happened because we were operating in a noisy environment, so some parts of the dialogue may have been lost.
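To summarize how the flow felt in practice, here is a small illustrative Python sketch: each utterance is picked up by the mics (and can be lost in a noisy room), then translated with the 2-3 second delay I noticed. Again, this is only my mental model of the experience, not Meta’s actual pipeline, and all the functions are hypothetical.

```python
# Illustrative model of the live translation experience (hypothetical, not Meta's pipeline).
import random
import time


def capture_speech(utterance: str, noisy: bool = False) -> str | None:
    """Simulate the mics picking up an utterance; in a noisy room, segments can be lost."""
    if noisy and random.random() < 0.3:
        return None  # the glasses simply miss this part of the sentence
    return utterance


def translate(text: str, source: str, target: str) -> str:
    """Stand-in for the LLM translation step, including the delay I noticed."""
    time.sleep(2.5)  # roughly the 2-3 seconds the other person has to wait
    return f"[{source}->{target}] {text}"


def live_translation(utterances, source="es", target="en", noisy=True):
    for utterance in utterances:
        heard = capture_speech(utterance, noisy=noisy)
        if heard is None:
            print("(segment lost in the noise)")
            continue
        print(translate(heard, source, target))


live_translation(["Hola, ¿cómo estás?", "Bienvenido al stand de Meta"])
```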
Final impressions
Ray-Ban Meta are very successful glasses for a reason: they are lightweight and stylish, and they offer only a few functionalities, but they do them pretty well. The upcoming AI features make them even more useful, letting us have an assistant that helps us whenever we need it. This is cool, but these features still need some fine-tuning before being released to the public. Of course, there is the problem of privacy to consider. But technology-wise, they are a pretty nice product.
Disclaimer: this blog contains advertisement and affiliate links to sustain itself. If you click on an affiliate link, I'll be very happy because I'll earn a small commission on your purchase. You can find my boring full disclosure here.