Google’s Project Tango was the first to bring 3D sensing to a consumer device, and others will follow very soon. 3D sensing is a critical component of the future of computing. It will not only enable facial recognition on a completely new level (e.g. you won’t need fingerprint readers anymore); the spatial awareness of devices will change pretty much everything. Think interaction. Think entertainment. Think mixed reality.
Image recognition won’t have to rely on a 2-dimensional photo but can use actual depth, which will in turn make recognition dramatically more reliable. Gesture recognition is another piece of the puzzle. The moment your device can tell what your hands are doing, you have a new user interface: touch the virtual computer screen hovering in front of your face and move it to the side, or type on a virtual keyboard that’s on your lap. 3D sensors could even be used for eye tracking. The moment your device knows what you are looking at, that is another major step into a completely new paradigm. Eyefluence had an impressive demo of what you can do in that area.
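To make the depth point concrete, here is a minimal sketch (Python with NumPy, entirely synthetic data, not tied to any particular sensor’s API) of why a depth map changes the game for something like hand segmentation: with real depth values, a hand held in front of the sensor separates from the background with a single distance threshold, no colour or texture analysis required.

```python
import numpy as np

def segment_foreground(depth_map, max_depth_mm=600):
    """Return a boolean mask of valid pixels closer than max_depth_mm.

    Depth cameras typically report 0 for invalid pixels, so we exclude
    those. A single threshold is often enough to isolate a near object
    (e.g. a hand) from the room behind it -- something that is hard to
    do robustly with a flat 2D photo.
    """
    return (depth_map > 0) & (depth_map < max_depth_mm)

def centroid(mask):
    """Pixel-coordinate centroid (row, col) of a boolean mask, or None."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.mean(), cols.mean()

# Synthetic 480x640 scene: background 2 m away, a "hand" patch at 40 cm.
depth = np.full((480, 640), 2000, dtype=np.uint16)  # background: 2000 mm
depth[200:280, 300:380] = 400                       # hand region: 400 mm

mask = segment_foreground(depth)
print(mask.sum())      # 6400 foreground pixels (the 80x80 hand patch)
print(centroid(mask))  # (239.5, 339.5), the centre of the hand region
```

Tracking that centroid frame to frame is the crudest possible gesture input, but it illustrates why depth data makes the problem so much more tractable than 2D imagery.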
Apple bought PrimeSense, the company whose depth-sensing technology powered the original Microsoft Kinect, in 2013. That technology must now be generations ahead of what it was back then, but we don’t know for sure, because for the last four years Apple has been developing it behind closed doors. What we do know is that Apple has spent billions on R&D, and it’s said that 600 engineers are working on the 3D sensor alone. Fitting it into a device the size of an iPhone is absolutely believable, so I’m a bit excited about the rumours about Apple’s iPhone 8 that are circulating right now.