Tuesday, November 29, 2011

Gesture recognition
eyeSight Mobile Technologies is a company doing interesting things with machine vision. They’re using the camera integral to almost every mobile gizmo on the market as an input device. The video on their website shows how a simple sweep of the hand can be used to turn an electronic page, adjust volume, or select from on-screen icons.

What intrigues me is that they’re doing this without 3D. They use a single camera and ambient lighting: there’s no IR projector as with the Kinect, and no time-of-flight hardware either. So how does it work? No details are provided, but I suspect some kind of feature extraction to locate the hand, followed by optical flow analysis to work out how it’s moving. I may be oversimplifying, though. A quick Google threw up a link to this research on vision-based hand tracking.
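For what it’s worth, here’s a minimal sketch of how a single-camera swipe detector might work, using OpenCV in Python. To be clear, this is my guess, not eyeSight’s actual method: the skin-colour thresholds, the Farneback flow step, and the swipe threshold are all illustrative assumptions.

import cv2
import numpy as np

# Illustrative HSV skin-tone bounds -- a real system would calibrate
# these per user and per lighting condition.
SKIN_LO = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HI = np.array([25, 180, 255], dtype=np.uint8)

def swipe_direction(prev_gray, gray, mask, thresh=2.0):
    """Return 'left', 'right', or None from dense optical flow.

    thresh (mean horizontal motion in pixels/frame) is an assumed
    tuning constant.
    """
    # Farneback dense flow: args are pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dx = flow[..., 0][mask > 0]   # horizontal motion, hand pixels only
    if dx.size == 0:
        return None
    mean_dx = float(dx.mean())
    if mean_dx > thresh:
        return 'right'
    if mean_dx < -thresh:
        return 'left'
    return None

cap = cv2.VideoCapture(0)   # the device's single, ambient-light camera
prev_gray = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)   # crude hand extraction
    if prev_gray is not None:
        gesture = swipe_direction(prev_gray, gray, mask)
        if gesture:
            print('swipe', gesture)   # e.g. turn a page, nudge the volume
    prev_gray = gray
    cv2.imshow('camera', frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()

A production system would be far more sophisticated – tracking hand shape, rejecting background motion, coping with variable lighting – but averaged flow over a segmented region is enough to show why no depth sensor is strictly required.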

The eyeSight video, naturally enough, makes the system look very robust. I’d be interested to know just how much precision the gestures demand – I can imagine users needing significant practice, just as they did with the old handwriting recognition systems. Nor am I convinced by all the applications shown: for in-car use I think voice recognition is a better way to go. In my oh-so-humble opinion, the killer apps will be those where messy hands need to be kept away from keyboards and mice. The medical field springs to mind, and perhaps manufacturing, especially where gloves have to be worn.

In short, this looks like a technology with a future, and judging by the funding they’ve received, I am not alone in this view. Now, who will be first to introduce a shop-floor version?
