Edge detection is one of the most useful tools in machine vision. We use it to measure lengths, distances, spacing and so on, but getting a good image can be difficult. Whenever possible, the preferred technique is to backlight, but sometimes part geometry or the constraints of part presentation make that impossible.
An alternative is to front light – perhaps using a darkfield approach – to bring out edges, but if that doesn’t yield the required contrast you’re left with image processing – the Canny, Sobel, or Laplacian operators, for example. I’d like to hear what your experience has been, but I’ve never found these tools particularly effective: they tend to miss some of the edges I want while finding a lot of noise. As a result, I’ve never implemented a Canny edge extraction algorithm in a real-world application.
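To make the comparison concrete, here’s a minimal sketch of the gradient-based approach these operators share – a plain 3×3 Sobel gradient magnitude in NumPy. The image, kernel values, and step-edge test are standard; everything else (function name, the synthetic image) is just illustrative.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal gradient
    ky = kx.T                                  # vertical gradient
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Correlate by summing shifted copies -- no SciPy dependency.
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# A clean vertical step edge: the one case these filters handle well.
img = np.zeros((20, 20))
img[:, 10:] = 1.0
mag = sobel_magnitude(img)
print(mag[10, 8], mag[10, 2])   # strong response at the step, zero in flat regions
```

On clean synthetic steps like this the response is unambiguous; on a front-lit casting, the same filter fires on texture and glints just as readily, which is exactly the noise problem described above.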
But from MIT comes what may be the next big thing in finding edges. You’ll find details, and an excellent YouTube movie, at “Non-photorealistic Camera: Depth Edge Detection and Stylized Rendering using Multi-Flash Imaging”. This describes an approach where multiple images are acquired, each with illumination from a slightly different direction. This yields a set of images with slightly different shadows, which can be combined and processed to bring out the real edges. (You need to visit their web page to see what I mean.)
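The MIT paper describes the real algorithm; what follows is only my own toy NumPy sketch of the core idea, assuming four flashes (left, right, top, bottom) and pre-aligned grayscale frames. The trick is that a depth edge casts a shadow on the side opposite each light, so a sharp intensity drop in the ratio image, along the flash direction, marks a true depth edge rather than a texture edge.

```python
import numpy as np

def depth_edges(images, eps=1e-6, thresh=0.5):
    """Toy multi-flash depth-edge detector.

    images: four same-size grayscale frames, lit from the
    left, right, top, and bottom respectively (my assumed order).
    """
    stack = np.stack([im.astype(float) for im in images])
    i_max = stack.max(axis=0)        # shadow-free composite image
    edges = np.zeros(i_max.shape, dtype=bool)
    # Traversal step matching each flash: the shadow falls on the
    # side of the depth edge opposite the light source.
    steps = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    for im, (dy, dx) in zip(stack, steps):
        ratio = im / (i_max + eps)   # ~1 where lit, ~0 in shadow
        nxt = np.roll(ratio, (-dy, -dx), axis=(0, 1))
        # A sharp lit-to-shadow drop along the flash direction
        # marks a depth edge; texture edges cancel in the ratio.
        edges |= (ratio - nxt) > thresh
    return edges
```

As a quick check, imagine a raised square on a bright background: the left flash darkens a strip just right of the square, the right flash a strip just left of it, and so on; combining the four ratio images recovers the square’s silhouette while ignoring surface markings. That shadow-cancelling ratio step is what makes me think this could cope with the cluttered, textured scenes where plain gradient filters fall down.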
Now the MIT geniuses see this as a method of automating the production of figures and diagrams for technical reference manuals, but I see it as a new machine vision technique. I can visualize being able to deal with cluttered backgrounds and finding subtle edges on complex parts – castings for example. I can even see this as simplifying lighting configuration and enabling robust robot guidance in unstructured environments. Yes, I’m pretty excited about it.
Now I imagine some of you are saying, “Multiple images with light from different directions? Isn’t that shape from shading?” Well yes, and no. Shape from shading provides more detail about a surface – it’s a good way to find dents on a flat but textured plane – but it’s computationally intense and very few companies have succeeded with commercial implementations. This “Multi-Flash Imaging” technique is not exactly simple, but I’m thinking that, as there’s no requirement to calculate surface vectors, it’s perhaps less demanding and more robust.
So who will be first among the machine vision companies to pick it up?