Measurement is a complicated subject, and nowhere does it seem more complex than in the world of machine vision. Allow me, if I may, to outline the problem.
Imagine you wish to measure a box. You grab a steel rule, only to find that the smallest unit on the scale is an inch. Thus you can say the box measures between 10 and 11 inches along its side, but you cannot give a more precise number.
Flip the rule over and you find it is graduated in centimeters (I know the centimeter isn't a preferred SI unit, but I'm old-school). This allows you to say the box measures between 26 and 27 cm. That's better than before (because centimeters are smaller than inches), but it's still not very precise.
Let us first recognize that this might be good enough. If the purpose of the measurement is to cut a piece of wrapping paper that will cover the box, an uncertainty of +/- 0.5 cm is probably enough. No need to measure to three decimal places.
But what if you need a more precise measure? Well, by eye you can subdivide the gap between 26 and 27 cm. It's easy to estimate the midpoint, and you can probably get pretty close on the quarters. What you're doing is interpolating between 26 and 27, and that's exactly what sub-pixel interpolation does: it looks at the gray values of a series of pixels and mathematically estimates where an edge lies between them.
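To make the idea concrete, here's a minimal sketch of one common flavor of sub-pixel edge localization: linearly interpolating along a 1-D intensity profile to find where the gray values cross a threshold. The pixel values, the threshold, and the function name are all illustrative assumptions on my part, not taken from any particular vision library.

```python
def subpixel_edge(profile, threshold):
    """Return the fractional pixel position where the intensity
    profile first crosses `threshold`, by linear interpolation
    between the two bracketing pixels."""
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        # A crossing occurs when threshold lies between a and b.
        if (a - threshold) * (b - threshold) <= 0 and a != b:
            # Fraction of the way from pixel i toward pixel i+1.
            return i + (threshold - a) / (b - a)
    return None  # no crossing found

# A dark-to-bright edge sampled at whole-pixel positions:
profile = [10, 12, 15, 90, 200, 210]
edge = subpixel_edge(profile, threshold=105.0)
# The crossing lands a bit past pixel 3, i.e. a sub-pixel answer.
```

Of course, this is exactly where my skepticism comes in: the answer depends directly on those gray values, so any noise in them moves the reported edge.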
I'm not a fan of this approach because I've never seen a vision system produce two identical images. There's always some variation in pixel values, even if only because of noise in the system. But other machine vision folks, some with far more expertise than I have, will argue that sub-pixel interpolation is a valuable tool.
Readers who've been with me a long time might recall that I addressed this back in January 2010 under "A case study on sub-pixel interpolation". At the time I admitted that I'm a skeptic, but I did refer you to a paper by Ben Dawson of Dalsa. The link I gave then still works, so if you're curious, take a look.
4 comments:
I've been at this for 10 years and do not consider sub-pixels for any edge detection... I'm barely comfortable with an edge value down to a pixel for most industrial applications.
I agree (at least at the design-stage) that you shouldn't reckon on sub-pixel capability but if you can do more than a simple edge detection, then there's more sense in sub-pixel measurements. Start with multiple edge detections to get statistics on your side, or a pattern-match where sub-pixel values are also valid.
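[A small sketch of the "get statistics on your side" point above. The numbers here are hypothetical, chosen only to illustrate the principle: averaging N independent noisy edge readings tightens the estimate by roughly 1/sqrt(N).]

```python
import random

random.seed(0)
true_edge = 3.2      # hypothetical true edge position, in pixels
noise_sigma = 0.25   # assumed per-reading noise, in pixels

def measure_edge():
    # One noisy edge reading along a single scan line.
    return random.gauss(true_edge, noise_sigma)

# Average 100 scan lines: the standard error drops to about
# noise_sigma / sqrt(100) = 0.025 pixels, well below one pixel.
n = 100
readings = [measure_edge() for _ in range(n)]
mean_edge = sum(readings) / n
```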
There are definitely real world scenarios that make use of 0.05 pixel precision or better, but of course it depends on the application. If you are an OEM building a heavy, expensive, closed machine, for example in semiconductor industry, you can control much better for calibration, vibrations, thermal distortions, lighting etc. than if you just mount a camera over a conveyor belt.
I believe strongly in sub-pixel measurement - it is the foundation of Scorpion Vision Software - we do sub-pixel measurement in 3D with 2D tools :)