Mojo Vision smart contact lenses are now “feature complete”
We are one step closer to having smart contact lenses. Mojo Vision, one of the most interesting startups in the XR market, has just announced that its smart contact lens prototype is now “feature complete”, reaching a new milestone in its product development timeline.
In the remainder of the article, I will explain what this means, and I will also tell you about my experience with a live demo of this prototype. Are you as excited as I am?
Mojo Vision
This is not the first time I have talked about Mojo Vision on this blog, but if this is the first time you have read this name, let me give you a short recap. Long story short, it is a company that is working on building smart contact lenses. Yes, you’ve read that right: not smart glasses, but contact lenses. They want to cram all the hardware necessary for visual augmentations into a tiny lens that sits on your eye. And while this may seem like technology coming straight out of a science fiction movie, they are building it now, in our present.
Mojo already has internal prototypes of its technology, and a few of its executives have already worn them. Of course, there are many safety concerns in wearing an electronic device in a sensitive area like the eyes, so inside Mojo Vision there are not only people who work on the technology side of the lenses, as in all hardware companies (e.g. software developers, hardware designers, etc…), but also people who come from a medical background (e.g. optometrists, medical device professionals, etc…) and who work to guarantee that the lenses are comfortable and safe for their users.
If you want to discover more about this company, my suggestion is to read the interview I had with them, my hands-on with their demo at AWE 2021, and this long, detailed article on Fast Company.
Mojo Vision Advanced Prototype
The breaking news of the day is that Mojo Vision has just finalized a new prototype of its lenses, the first one that is “feature complete”. This means that this new prototype already contains all the modules that the final product will need: the architecture and the structure of both the hardware and the software are the final ones, and from now on it is only a matter of optimizing and improving the various parts. From a high-level standpoint the system is complete, and it just has to be refined to reach the final production stage.
From my personal experience, I can guarantee that going “feature complete” is a huge achievement, because you have a stable version of the skeleton of your product, and it’s much easier to work on improvements and perform tests from that point on. To make a comparison with my usual work as a software developer: when I was working on our VR fitness game, once we reached the point where we could see an initial menu, start a game, punch the targets, get a final score, and enter a leaderboard, we were feature complete. After that, we refined all the UI, added levels, improved the graphics, added sounds… but the overall structure remained the same. We had a stable structure to work on, and that was huge in giving us confidence and speeding up our development. From that moment on, everything became easier. And I’m sure it will be the same for Mojo Vision.
Keep in mind, anyway, that while going “feature complete” is a huge milestone, the road to the final polished product may still be quite long. So don’t expect a Mojo Vision contact lens to ship tomorrow. But this is a huge step in that direction, one that makes us much more confident in their work. Also because they had already forecast this milestone for a specific year of this decade, and they have respected that delivery date.
Mojo Vision Smart Lens structure
So, what’s inside a Mojo Vision Smart Lens? A set of components that you wouldn’t believe could fit inside such a small piece of hardware:
- A microdisplay to let the user see the augmentations;
- An image sensor to see the surroundings and process them through computer vision (e.g. edge detection);
- Eye-tracking sensors (accelerometer, gyroscope, magnetometer);
- A battery system;
- A 5Ghz radio communication antenna to make the lens communicate with an external unit;
- An ARM Core M0 processor that acts as a “traffic cop” for the data.
The lens doesn’t contain the processing unit inside; it communicates via radio waves with an external computational module. I asked Steve Sinclair, SVP of Product and Marketing at Mojo Vision, whether it will ever be possible to integrate the computational unit inside the lens as well, and he said that for now it is not possible, but of course we all hope that one day in the future this may happen. The lens already contains a small ARM Core M0 processor, so hopefully one day it will be able to incorporate a more powerful one. For the first product Mojo Vision is going to release, the plan is to ship an external computational unit, making sure that it is small and wearable.
The lens is not soft like the ones you can usually buy from your optician: it is a hard one that sits on your sclera. Scleral lenses, since they sit on the white part of your eye, which is less sensitive than the cornea, are known to be more comfortable to wear (you can read more about this topic here). I asked Steve why the lens can’t be a soft one, and he explained that they ran many tests when they started the process of building a smart contact lens, and this structure proved to be the best one. One of the advantages of the rigid material is that the lens can’t be squeezed or bent, and it doesn’t slip around on the eye, so the display stays fixed in place: this way its output always falls on the fovea of the eye, giving the user optimal visuals. Moreover, in a smart soft lens you can only embed flexible circuits, and this creates even bigger challenges in building a full AR system.
Mojo Vision Component Details
In its press release, Mojo Vision details some of the key innovations in the components it has built. Let me list (i.e. copy-paste) them here for you:
Smallest, Densest Display
At the heart of Mojo Lens is our 14,000 pixels per inch MicroLED display. Measuring less than 0.5mm in diameter with a pixel-pitch of 1.8 microns, it is the world’s smallest and densest display ever created for dynamic content. Paired with a Mojo-designed micro-optic and custom silicon backplane chip, the Mojo display can project bright text, rich graphics, and high-resolution video on the wearer’s retina that are visible indoors, outdoors, or even with eyes closed.
Low Latency Communication
Mojo Vision has custom-designed an ASIC for Mojo Lens that incorporates a 5GHz radio and ARM Core M0 processor that transmits sensor data of the lens and streams Augmented Reality (AR) content to the MicroLED display. The radio is capable of communicating with the ultra-low latency required by AR applications by using a Mojo-proprietary communication protocol that is more efficient and faster than Bluetooth LE.
Ultra Precise Eye Tracking
A key element of any AR experience is the ability to see and interact with digital content placed within the world around us. Mojo Lens has a custom-configured accelerometer, gyroscope, and magnetometer that continuously track eye movements so that AR imagery is held still as the eyes move. Coupled with proprietary motion-sensing algorithms, Mojo Lens’ eye-tracking is an order of magnitude more precise than today’s leading Augmented Reality/Virtual Reality optical eye-tracking systems and is a key enabler of Mojo’s unique eye-controlled user experience.
Medical-Grade Power System
Powering Mojo Lens is a proprietary power management system that includes medical-grade micro-batteries and Mojo-developed power management integrated circuit and wireless recharging componentry. The power system is a critical element of the Mojo Lens and allows Mojo to optimize the final product for all-day wear and to run reliability and safety tests in preparation for FDA clinical trials.
Software System
The “feature complete” status of the system doesn’t concern only the hardware, but also the software foundation. For this new prototype, the company has built “foundational operating system code and user experience (UX) components” for the first time. The new software will allow for further development and testing of important use cases for consumers and partners. This means that there is now a software layer on top of which to start building the prototypical applications these lenses can support. Mojo has already started to craft some initial demo applications.
Mojo Vision Innovation
I asked Steve Sinclair what has been the greatest difficulty in creating such a smart contact lens, and he answered that the biggest difficulty has been exactly that: creating a whole working computing system in such a small form factor, while at the same time guaranteeing that it could be safe for the user. Creating just a single component of this system (motion system, battery, etc…) within the hard constraints it must satisfy (e.g. small dimensions and low power consumption) is already difficult, and could be a company in itself… so designing and developing a whole system, made of many communicating parts that must be integrated into such a small space, is a hard task that required Mojo to make many breakthrough innovations.
Next Steps
At the beginning of this article, I mentioned that some of Mojo’s executives have already worn some prototypes in their eyes. But this is not the case for this latest prototype, yet: the system is theoretically safe to wear even when it is turned on, and it has already received some initial pre-approval for clinical tests from an Independent Review Board of medical experts, but many trials will be needed to prove its safety. The company cares a lot about the safety and comfort of its future users, and so it will very soon begin extensive user testing and analysis. Remember that since 2019, Mojo Vision has been working with the U.S. Food and Drug Administration (FDA) through its Breakthrough Devices Program to develop a discreet low-vision aid, so its work is tightly controlled.
In parallel with that, it will proceed with the overall optimization of the product and with the development of the software that can run on top of these lenses.
Use cases
The first use cases of the lenses are about offering better vision to people with visual impairments (e.g. highlighting the edges of the objects in the field of view of people who don’t see very well). But the company is also exploring other use cases: for instance, at CES this year, it announced new strategic partnerships aimed at the sports sector:
Mojo has identified initial compelling consumer uses of Invisible Computing for performance athletes, and recently announced strategic partnerships with leading sports and fitness brands, such as Adidas Running, to collaborate on eyes-up, hands-free experiences. Mojo has been working with its new partners to find unique ways to improve athletes’ access to in-the-moment or “during” data. Mojo Lens can give athletes a competitive edge, allowing them to stay focused on their workout or training and maximize their performance, without the distraction of traditional wearables.
Release date and price
I tried asking Steve again about the expected release date of the lenses, and he answered that they are just “a few years away”. While we all think of smart contact lenses as something that could become reality in 20 years, the expected release of the first product is actually within this decade. The exact timing will depend not only on the technical challenges that are still ahead, but also on the results of the tests with people wearing the device and on the regulatory approvals.
As for the price, the vision is still that of a high-end smartphone.
Hands-on Demo
All of this seems to come from a science fiction movie, and I know that some of you may still be skeptical, especially considering that some of the photos above are digital renderings and the videos use stock footage as their background. But here’s the twist: I have actually seen the lens in action.
Unfortunately, I am in Italy, so I could have the demo only via Zoom, but having tried the previous demo in person myself, seeing this update remotely was enough to convince me of the big step forward the company has made.
The lens
Steve Sinclair showed me the latest prototype of the lens: it was exactly like the pictures you see in this article: a rigid, transparent plastic lens with some green circuits inside and a tiny dot in the center. It was there, in his hands, exactly as in the pictures. And even if I had already seen something similar at AWE, seeing such a technological marvel, a tiny contact lens containing smart circuits, is always an emotional moment for me. It was so cool to see it, and to see that it had evolved since last time.
The demo
I had two demos of Mojo Vision at AWE: one was with the lens on a stick, which I could put close to my eye to see that the tiny display was working; the other was on a Vive Pro Eye, which I could wear to try the prototypical UX of the contact lens, running on a PC.
Now that the lens is feature complete, the two demos could finally become just one, and Steve could show me the actual interface of the lens working directly on the lens itself.
The demo setup was as follows: there was again the lens on a stick, because at this moment you can’t put it into your eyes yet. I could spot that the lens was different from the one of the previous setup, that is, the one depicted here below,
because it featured many more green circuits inside. The stick was connected via a cable to a small box on the desk, and the box was connected to a laptop, whose screen mirrored the content shown inside the lens. The box on the table is what will evolve into the computational unit of the lens, and in this demo it was connected to the lens via a wired connection for the sake of the stability of the demo. Remember that the lens features a radio antenna, so in production it will communicate wirelessly with the computational box (also because wearing a tethered contact lens would be pretty weird…). Notice that everything was working with just the box and the lens: the computer was there only to mirror the content for me to see, and was not contributing to the Mojo Vision system in any way.
There was a demo application running on the lens system (lens + box), and Steve put the lens very close to the camera of his phone to let me see again that the display of the device was showing the same content I could see mirrored on the laptop screen. It still amazes me every time that a tiny green dot as small as the tip of a pencil is a display! For the sake of the practicality of the remote demo, though, I saw most of the action happening on the laptop screen, while Steve handled the stick with the lens in his hands.
The application he showed me was very similar to the one that I already described for AWE:
Mojo employees told me to look at the periphery of my vision (at the very far left, right, etc…) and there I could actually see some green icons, all connected by a line forming a circle. By looking at one of those icons for a while, I could see a popup appearing inside the external circle, showing something related to that icon. For instance, if the icon was about music, the popup was about the music playing at that moment. Inside these popups, there were other interactive elements (usually tiny arrows), which I could stare at for a while to trigger some action (e.g. a button to pause the music currently playing) or to open some other internal menus. The external circle had more or less the dimensions of the FOV of my eyes, so I really had to look at the far left, right, up, and down to see the external icons. […] The internal “popups” were visible only if I triggered the related menu (e.g. the popup about music was visible only if I had previously stared at the “music” icon), and they were visible only one at a time.
The application he showed me had a similar interface, but first of all, I had the impression that the graphics had been slightly improved. And this time it was not only a matter of some buttons and popups: this initial menu could literally launch some mini demo applications. So the experience described above was basically the launcher, which triggered some simple demos showing information for specific use cases. It was impressive that all of this was running on the lens system.
Here below you can see one of the demo applications that I could enjoy, about an “assistant” that supports you in reaching the airport, then your gate, and then your seat on the plane:
Three things impressed me about the demo:
- The compass. To prove to me that the whole system was working, Steve launched an application showing a compass. At first, he was looking forward, and I could see an “N” on the screen, surrounded by some small lines marking the various orientation angles. He then started rotating the lens, and I could see the data of the compass changing on the screen, showing first “NW”, then “W”, then “SW”, and when he was at 180°, I started seeing an “S”. This meant that the lens was working and could clearly show an output depending on the data provided by its sensors (in this case, its magnetometer). It was a valuable demo because it was not just a showcase of the display: it showed that the full system was working, that the sensors gathered some data, that this data was analyzed, and that some visual information derived from it was shown on the lens display. This was the proof of the company’s “feature complete” claim;
- The fixed floating display. What is interesting to notice about Mojo Vision’s lenses is that their info is not locked onto your vision. I mean, you have a lens that is fixed onto your eye, but it doesn’t just show pieces of information locked to the center of your field of view. Using some eye-tracking magic based on the motion sensors, it shows the information as if it were on a screen attached to your face at a certain distance from your eyes. Look at the videos above: you can explore the virtual screen by moving your eyes, and you see only the portion of that virtual screen that falls inside your fovea. This means that the information is not obtrusive (imagine the annoyance of having text always attached to the center of your vision) and that you can navigate it naturally by exploring it with your eyes. It’s a bit like having a virtual screen created by some smartglasses… but with contacts in place of the glasses. Your eyes are like a spotlight on the virtual screen in front of you;
- Gaze interaction. With your eyes, you can explore all the information showcased on the virtual display, and also perform some simple interactions, like pressing buttons by staring at them (stare-to-click). There was also the possibility of scrolling some text by just looking at it, as happens on HoloLens: when you arrive at the end of the text shown in the textbox, the system detects that you are looking at the bottom of the box and scrolls the text for you automatically. This shows that Mojo is working on finding nice UX paradigms for interactions that use just your eyes.
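To give you a more concrete idea of how a stare-to-click mechanism can work, here is a minimal sketch of a dwell-based trigger in Python. Everything here (the `DwellDetector` name, the one-second threshold, the frame-by-frame update loop) is my own illustration of the general technique, not Mojo’s actual code:

```python
class DwellDetector:
    """Toy stare-to-click: fires when the gaze rests on the
    same UI element for longer than a dwell threshold."""

    def __init__(self, dwell_seconds=1.0):
        self.dwell_seconds = dwell_seconds
        self.current_target = None
        self.elapsed = 0.0
        self.fired = False

    def update(self, target, dt):
        """Call once per frame with the element under the gaze
        (or None) and the frame time in seconds.
        Returns the element to 'click', or None."""
        if target != self.current_target:
            # Gaze moved to a new element: restart the dwell timer.
            self.current_target = target
            self.elapsed = 0.0
            self.fired = False
            return None
        if target is None or self.fired:
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell_seconds:
            self.fired = True  # one click per stare
            return target
        return None


detector = DwellDetector(dwell_seconds=1.0)
clicks = []
# Simulate 90 frames at 30 fps: a short glance at "music",
# then a long stare at "pause".
for frame in range(90):
    target = "music" if frame < 20 else "pause"
    fired = detector.update(target, dt=1 / 30)
    if fired:
        clicks.append(fired)
# clicks is now ["pause"]: the glance at "music" was too short.
```

The key design point is the `fired` flag: without it, a user who keeps staring at a button would “click” it over and over, once per dwell period.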
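The compass demo, by the way, boils down to a classic computation: derive a heading angle from the magnetometer readings and map it to a cardinal label. As a rough sketch of the idea (my own toy version in Python, with an arbitrary axis convention and no tilt compensation, certainly not Mojo’s implementation):

```python
import math

def heading_from_magnetometer(mx, my):
    """Toy heading from the horizontal magnetometer components:
    0 deg = North, growing clockwise, so 90 deg = East. A real
    device would also tilt-compensate using the accelerometer."""
    return math.degrees(math.atan2(my, mx)) % 360

def cardinal_label(heading_degrees):
    """Map a heading in degrees to one of the 8 compass labels."""
    labels = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    return labels[round(heading_degrees / 45) % 8]

# Rotating from North through West to South, as in the demo:
# 0° → "N", 315° → "NW", 270° → "W", 225° → "SW", 180° → "S".
```

Seeing those labels change live on the lens display is exactly the kind of end-to-end proof (sensor → processing → display) that makes the “feature complete” claim credible.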
Steve showed me different menus and different sample applications, and all of them were about reading data directly in front of your eyes.
I think the demo was amazing because it proved to me that what they were claiming is true. He was also open to organizing a physical demo in the future to let me try the device in person. I really can’t wait!
Final considerations
Smart contact lenses are not a product of today or tomorrow, but Mojo Vision is making sure that they arrive much sooner than we thought. Thinking that such a product may come before 2030 is mind-blowing. And what I appreciate about this company is that it doesn’t make bold claims and doesn’t generate hype. They have worked hard for many years to make smart contact lenses happen. They have a product development roadmap and they are respecting it. They have public demos to show, and in these demos I have been able to verify that their solution is evolving.
I am very confident in their work, and I can’t wait to try one of their contact lenses in the next “few years”. For now, I rejoice in this “feature complete” milestone, hoping that it will become “future complete” soon 😉
PS: Register to my newsletter not to miss the next mindblowing articles about AR and VR straight in your inbox!
Disclaimer: this blog contains advertisement and affiliate links to sustain itself. If you click on an affiliate link, I'll be very happy because I'll earn a small commission on your purchase. You can find my boring full disclosure here.