Sunday, November 25, 2012

An advance in vision-guided robotics


Georgia Tech is doing some interesting work on visual servoing. They’re using a time-of-flight camera to dramatically reduce the time it takes to move a robot end effector through a sequence of steps to complete a task.
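For anyone new to the term, visual servoing means closing the robot's motion loop directly on camera measurements rather than on pre-taught joint coordinates. Here's a minimal sketch of the classic image-based control law (this is the textbook formulation from the Chaumette and Hutchinson tutorials, not Georgia Tech's specific method). One reason a time-of-flight camera helps: the control law needs each feature's depth Z, and a ToF sensor measures that directly instead of having to estimate it.

```python
import numpy as np

def ibvs_velocity(features, targets, depths, gain=0.5):
    """One proportional IBVS step: returns the 6-DOF camera velocity
    (vx, vy, vz, wx, wy, wz) that drives the feature error toward zero."""
    rows = []
    for (x, y), Z in zip(features, depths):
        # 2x6 interaction matrix for a normalized image point (x, y) at depth Z
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    L = np.array(rows)                                         # stacked (2N, 6)
    e = (np.asarray(features) - np.asarray(targets)).ravel()   # feature error
    return -gain * (np.linalg.pinv(L) @ e)                     # v = -lambda * L^+ * e

# Toy usage: four tracked corners, desired positions, ToF depths in metres
current = [(0.10, 0.05), (-0.08, 0.04), (-0.09, -0.06), (0.11, -0.05)]
desired = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
print(ibvs_velocity(current, desired, depths=[0.8, 0.8, 0.9, 0.9]))
```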

If you watch the video below, you'll learn they see this as useful in bomb disposal and surgery. I can see it going further. Currently we teach a robot a task by driving it to a position and saving those coordinates. But how about showing the robot the part it needs to pick up, and then letting it find that part in 3D space? I already have an application waiting for such a capability.

1 comment:

J. Campbell said...

Since 2010, the HALCON library from MVTec has provided 3D surface-based (point cloud) matching, which can locate objects and determine their 3D pose using point cloud models created from CAD data or from actual 3D images (including those from time-of-flight sensors). So it is possible to show a vision-enabled robot a trained object and know where it is in 3D robot coordinate space. It is then up to the user to define the best/safest path for the robot arm to take.
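To make that pipeline concrete, here is a rough sketch of the same model-to-scene matching idea. To be clear, this is not HALCON's API; it uses the open-source Open3D library as a stand-in, and the file names, voxel size, and distance thresholds are placeholder assumptions to tune for a real sensor.

```python
import open3d as o3d

VOXEL = 0.005  # downsample resolution in metres; tune for your sensor's noise

def preprocess(pcd):
    """Downsample, estimate normals, and compute FPFH features for matching."""
    down = pcd.voxel_down_sample(VOXEL)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=VOXEL * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=VOXEL * 5, max_nn=100))
    return down, fpfh

# Placeholder files: a model cloud (e.g. sampled from CAD) and a ToF scene scan
model, model_fpfh = preprocess(o3d.io.read_point_cloud("model.pcd"))
scene, scene_fpfh = preprocess(o3d.io.read_point_cloud("scene.pcd"))

# Coarse global alignment: RANSAC over FPFH feature correspondences
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    model, scene, model_fpfh, scene_fpfh,
    mutual_filter=True,
    max_correspondence_distance=VOXEL * 1.5,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    ransac_n=3,
    checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(VOXEL * 1.5)],
    criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine refinement with point-to-plane ICP, starting from the coarse pose
refined = o3d.pipelines.registration.registration_icp(
    model, scene, VOXEL * 1.5, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

print(refined.transformation)  # 4x4 pose of the object in the camera frame
```

The resulting 4x4 transform is the object's pose in the camera frame; chain it with a hand-eye calibration to get a grasp pose in robot coordinates, which is the capability the post asks about.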