According to The Daily, laptop manufacturers will be shipping Windows 8 machines equipped with Kinect motion-camera hardware instead of standard webcams. Of course, perceptions get formed off headlines, and I'd assume everyone is picturing the 360's Kinect. Fine. But before everyone assumes you'll have to flail your body in front of a laptop sitting on a public bench just to navigate the interface, let's look at the possibilities.
The Daily got to see some of this technology firsthand and put a positive spin on its possibilities, while CNET blasted the hell out of it. CNET claims there's no use for it.
So a CNET editor asks what he believes is the ultimate question: who’s going to use this?
Umm, everybody? You see, who said Kinect features have to be required? And what about manufacturing cost? If switching to Kinect costs little more than a standard webcam, why shouldn't manufacturers go for the upgrade? I doubt Windows 8 will require Kinect's motion-capturing camera; this is really more an added bonus than a handicap.
Kinect at the moment is still a separate module from the Xbox 360 and depends mostly (if not entirely) on the console's processing power to track full-body motion in 3D space. On a laptop, given the size of this camera, I'd expect accurate tracking within a more limited space. My hope is that its infrared sensor, camera, and mic would integrate seamlessly into the laptop at low cost. Current laptops already have a mic and a single camera built in; Kinect only differentiates itself with an additional infrared depth camera alongside the standard RGB one. My main concern is how gestures will be designed to navigate and control certain types of apps, and how responsive they'll be. That will also depend on whether applications support the laptop version of Kinect. For instance, photos. Navigating and zooming maps, drawing, writing signatures, etc. Even browsing the web.
Here’s a demo of Kinect being used on a desktop version of Windows.
If the laptop version of Kinect can do gesture detection in 3D space at a smaller scale, then we could see something practical: users gradually adopting Kinect gestures to browse the web, much like the mouse wheel was an evolution of scrolling, zooming, and so on.
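To make the mouse-wheel analogy concrete, here's a minimal sketch in Python of how vertical hand movement might be mapped to scroll events. Everything here is hypothetical: the position values stand in for whatever a depth camera would report, the thresholds are illustrative guesses, and no real Kinect SDK is assumed.

```python
# Hypothetical gesture-to-scroll mapping: vertical hand movement
# (in metres, as a depth camera might report it) becomes scroll ticks.
# A dead zone filters out the small jitter inherent in camera tracking.

DEAD_ZONE_M = 0.02      # ignore movements smaller than 2 cm (assumed value)
TICKS_PER_METRE = 100   # sensitivity: scroll ticks per metre of hand travel

def scroll_ticks(prev_y: float, curr_y: float) -> int:
    """Convert a change in hand height into signed scroll ticks."""
    delta = curr_y - prev_y
    if abs(delta) < DEAD_ZONE_M:
        return 0                      # jitter: no scroll
    return round(delta * TICKS_PER_METRE)

def track(positions):
    """Fold a stream of hand heights into a list of scroll events."""
    events = []
    for prev_y, curr_y in zip(positions, positions[1:]):
        ticks = scroll_ticks(prev_y, curr_y)
        if ticks:
            events.append(ticks)
    return events
```

The dead zone is the key design choice: without it, the natural tremor of a hand held in mid-air would translate into constant micro-scrolling, which is exactly the responsiveness problem discussed below.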
I doubt Microsoft wants people performing sign language at their laptops all day long, but Kinect could work best from an accessibility standpoint. Hell, app developers could even use Kinect to teach you sign language. Obviously, it doesn’t make sense to start using Minority Report-like gestures for complex tasks like video editing, but for video browsing it can work. Forwarding and rewinding video, as well as selecting different videos from a library, is useful and natural with Kinect on the 360. If Microsoft can take advantage of Kinect at a small scale, the only thing I think would need to be addressed is responsiveness to the gestures.
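To sketch what small-scale video control might look like, here's a hypothetical swipe classifier in Python: it decides whether a short window of hand x-positions counts as a seek-forward or seek-backward command. The threshold and coordinate convention are assumptions for illustration, not real Kinect SDK behavior.

```python
# Hypothetical swipe classifier for video seeking: a hand that travels
# far enough horizontally within a sampled window counts as a swipe.
# The 15 cm threshold is an illustrative guess, not a measured value.

SWIPE_DISTANCE_M = 0.15   # minimum net horizontal travel to count as a swipe

def classify_swipe(x_positions):
    """Return 'seek-forward', 'seek-backward', or None for a window
    of hand x-coordinates (metres, increasing left to right)."""
    if len(x_positions) < 2:
        return None
    travel = x_positions[-1] - x_positions[0]
    if travel >= SWIPE_DISTANCE_M:
        return "seek-forward"
    if travel <= -SWIPE_DISTANCE_M:
        return "seek-backward"
    return None               # too small a movement: likely just hovering
```

Requiring a large, deliberate travel distance is what would keep casual hand movements from accidentally scrubbing through a video, which is the kind of tuning that decides whether gestures feel natural or frustrating.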
One Kinect hack demonstrates the responsiveness of fingertip-based gestures in this video:
Most importantly, a Kinect-integrated laptop must be priced competitively with its forebears. If the sensor (along with its processing) is tightly integrated into the laptop, we could see some pretty good uses for it. If it can’t keep laptop prices low and/or isn’t responsive to gestures, those would be major drawbacks. What would you use Kinect on a laptop for, other than possibly games? Tons of things, really. I’m a future-minded person, and I’m having a hard time coming up with something that wouldn’t work with Kinect.