You know how much I’ve loved the Kinect sensor: it has been a companion during my three-year virtual reality startup adventure, and in all this time I’ve appreciated its endless features and possibilities, especially in connection with virtual reality. Having full-body virtual reality without wearing any marker or suit is a dream for us all.
You surely also know how sad I was when I learned about its death: after years without updates, Microsoft finally killed Kinect and stopped manufacturing it.
But Kinect is hard to kill. First of all, it still survives inside HoloLens: not only is most of the team that worked on Kinect now working on HoloLens (Alex Kipman is the most famous example), but HoloLens also contains a depth camera that is an evolution of Kinect’s. So Kinect keeps living as the beating heart of HoloLens. And then, today, at the Build conference, Microsoft announced “Project Kinect for Azure”.
Hearing the name, I was like: “What? Kinect? Really? Where is it? Is it alive again??”. I was super-excited. Then, reading some online posts, I discovered that:
- It is not exactly Kinect;
- Microsoft has only teased the program and has not provided many details about it.
Anyway, the news is very interesting, and it is also interesting that Microsoft has decided to carry on the Kinect brand. But what exactly is this “Project Kinect for Azure”?
Basically, Microsoft is making available to interested developers and companies the depth sensor that will be embedded inside the next generation of HoloLens: a very tiny device with a 1024×1024 depth camera resolution. For comparison: the Kinect v2 had a 512×424 depth camera, so we have a massive improvement in resolution. And if you look at the above picture, we also have a dramatic reduction in size! This is like a super-Kinect.
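Just to put numbers on how big that resolution jump is, here is a quick back-of-the-envelope comparison of the depth-pixel counts of the two sensors:

```python
# Depth-pixel counts: new Project Kinect for Azure sensor vs Kinect v2
new_pixels = 1024 * 1024   # 1,048,576 depth pixels
old_pixels = 512 * 424     # 217,088 depth pixels

ratio = new_pixels / old_pixels
print(f"{new_pixels} vs {old_pixels}: about {ratio:.1f}x more depth pixels")
# about 4.8x more depth pixels per frame
```

So the new sensor captures almost five times as many depth samples per frame as the Kinect v2.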
But this device is not all we get: the company will release various other sensors, like an RGB camera, an accelerometer and a microphone array (all parts of the original Kinect, do you remember them?), so that you can attach them to a board and use them for some spatial-computing magic.
Now I can read in your head a very intelligent consideration: it is quite easy to make all these sensors… the great part of Kinect was its brain, the body-tracking technology, which was the best ever made for consumers. Well, this is where the “Azure” part of the program comes in. The idea is that all these sensors communicate with AI algorithms running in the cloud, which can analyze all the data streams the sensors produce. And in case no connection is available, according to Microsoft the devices will also be able to produce some results while offline.
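To make that hybrid pattern concrete, here is a minimal Python sketch of how a device could prefer cloud analysis and degrade to a local result when offline. Keep in mind that everything here is my own assumption: Microsoft has not published any API yet, so the endpoint URL, the payload shape and the fallback logic are purely illustrative.

```python
# Illustrative sketch of the cloud-first, offline-fallback pattern.
# The endpoint and payload are hypothetical placeholders, NOT a real Azure API.
import json

CLOUD_ENDPOINT = "https://example.cognitive.azure.com/analyze"  # placeholder URL


def analyze_frame(depth_frame, send_to_cloud=None):
    """Analyze one depth frame, preferring the cloud when it is reachable.

    depth_frame: 2D list of depth values (rows of floats).
    send_to_cloud: optional callable(url, body) that posts to the cloud
                   and returns its result; None means "no connection".
    """
    payload = {
        "width": len(depth_frame[0]),
        "height": len(depth_frame),
        "data": depth_frame,
    }
    if send_to_cloud is not None:
        try:
            return send_to_cloud(CLOUD_ENDPOINT, json.dumps(payload))
        except ConnectionError:
            pass  # network dropped: fall through to the offline path

    # Offline fallback: a trivial local statistic stands in for the full
    # cloud AI, just to show the device can still return *something*.
    values = [v for row in depth_frame for v in row]
    return {"mean_depth": sum(values) / len(values), "source": "local"}
```

The design choice the sketch illustrates is simply that the device never hard-fails: if the cloud call raises a connection error, the same function silently returns a cheaper, locally computed answer.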
Microsoft’s idea is very smart: Kinect was a terrible tool for gaming and a great tool for research centers, so the company has now transformed it into exactly that, a tool for enterprise use only. Microsoft’s AI algorithms are very powerful (they have announced a new upgrade to the Cognitive Services and Speech Services), and if they also add face tracking and body tracking among the Azure-powered features (and someone told me that they’re going to add them!), we could have our lovely Kinect back and use it for a lot of research purposes, maybe even more than before. And maybe we could also use it for augmented and virtual reality: on the Project Kinect for Azure website, Microsoft explicitly talks about its use for Holoportation… so I think that this integration is not only the dream of a crazy blogger.
I think this program has been a smart choice by Microsoft, answering the requests of research centers to have a powerful device to experiment with again. And the fact that they kept the name may be because people asked for a substitute for the Kinect, so Microsoft simply gave them a Kinect back.
I don’t know if this program will be successful, since there are too many unknowns (first of all: the cost), but seeing Kinect live again makes me a bit happy anyway 🙂