My post “Can a worse image produce better results?” on January 22nd prompted a record number of comments, several of which asked for details of my test. This gave me an opportunity to repeat what I’d done and so convince myself that I’m not talking gibberish, so here goes.
My objective was to determine if applying a low pass filter to blur the image would increase the measurement repeatability. I started by setting up a gap as in the image below.
To provide some natural variation I captured four images, about one second apart.
I then used edge tools to locate each of the vertical edges, and measured the gap.
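If you want to see the mechanics without a full vision package, here is a minimal sketch in Python with OpenCV of what edge tools of this kind do along a single scan line. The file names are hypothetical, it assumes a dark gap on a light background, and it reports whole-pixel edge positions, whereas real edge tools interpolate to sub-pixel values like the ones below:

```python
import cv2  # OpenCV; assumes 8-bit grayscale captures
import numpy as np

def measure_gap(image, row):
    """Crude stand-in for a vision tool's edge tools: along one scan
    line, find the strongest light-to-dark and dark-to-light
    transitions and return the distance between them in pixels."""
    line = image[row, :].astype(np.float64)
    grad = np.gradient(line)        # intensity rate of change across the row
    left = int(np.argmin(grad))     # light-to-dark: the gap's left edge
    right = int(np.argmax(grad))    # dark-to-light: the gap's right edge
    return float(right - left)

# Hypothetical file names for the four captures.
gaps = []
for name in ["capture1.png", "capture2.png", "capture3.png", "capture4.png"]:
    img = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    gaps.append(measure_gap(img, row=img.shape[0] // 2))
print("gap range over four captures:", max(gaps) - min(gaps))
```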
On the raw, unprocessed images the gap varied from 74.49 pixels to 83.05 pixels, a range of 8.56 pixels. (This is much greater than the variation of 4 pixels I saw when I first ran this test, but keep reading.)
I then added a low pass filter to the images, which made them look like this:
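For anyone reproducing this outside the vision software: a Gaussian blur is a typical low pass filter for this job, though I’m not claiming it’s the exact filter my tool applies, and the kernel size here is just an assumption.

```python
import cv2

# Hypothetical file name; the 5x5 kernel is an assumption, not the
# setting the original tool used.
img = cv2.imread("capture1.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (5, 5), 0)  # sigma 0: derived from kernel size
cv2.imwrite("capture1_blurred.png", blurred)
```

A larger kernel smooths more aggressively, but past a point it starts eroding the edge contrast the tools need.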
Repeating the measurements with nothing else changed, I found the gap varied from 74.76 pixels to 78.31 pixels, a range of 3.55 pixels.
My conclusion: a low pass filter improves the repeatability of edge detection.
1 comment:
In my opinion there are two different topics here. The first is the ability to discern objects of a certain size and how many pixels are needed to actually determine an edge. I have typically heard that you need at least two pixels to determine an edge, which is consistent with the statement by Mr. Singer. This is related to the physical and mathematical nature of digital imaging.
Your experiment with adding the low pass filter to soften the edges has more to do with the algorithm used to detect the edge. A common type of thresholding (differentiating between two things like light and dark) for edge detection uses gradients. Basically this means that the rate of change (i.e. the derivative, in calculus terms) from light to dark is used to determine the location of the edge based on the contrast chosen. The pixels gradually transition from some higher intensity to some lower intensity, and the more gradual that change, the smaller the slope of the line tangent to that curve. More data points defining that curve produce a more accurate representation of it, which in turn gives a better estimate of the slope of the gradient line. This is related more to the mathematical nature of the algorithm.
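To make that concrete, here is a minimal sketch of one common sub-pixel technique, a parabolic fit to the gradient peak; I’m not suggesting this is the exact algorithm any particular tool uses:

```python
import numpy as np

def subpixel_edge(line):
    """Locate an edge to sub-pixel precision: take the gradient of a
    1-D intensity profile, find its strongest sample, then fit a
    parabola through that sample and its two neighbours and return
    the interpolated peak position."""
    grad = np.abs(np.gradient(np.asarray(line, dtype=np.float64)))
    i = int(np.argmax(grad))
    i = max(1, min(i, len(grad) - 2))       # keep the 3-point window in bounds
    a, b, c = grad[i - 1], grad[i], grad[i + 1]
    denom = a - 2.0 * b + c
    offset = 0.0 if denom == 0.0 else 0.5 * (a - c) / denom
    return i + offset

# Synthetic soft edge with its true position at x = 17.3: the gradual
# transition supplies several samples that define the gradient curve.
x = np.arange(40, dtype=np.float64)
soft_edge = 255.0 / (1.0 + np.exp(-(x - 17.3)))
print(subpixel_edge(soft_edge))  # prints a value close to 17.3
```

With a hard step edge the gradient collapses to one or two samples and the fit has almost nothing to work with; softening the edge spreads the transition over more pixels, giving the curve more data points and a more stable interpolated position, which fits the smaller range reported above.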
While it seems that there is a direct relationship between the number of pixels needed to detect an object and low pass filtering for more repeatable edge detection, they actually have two different explanations. This is not to say that they are completely unrelated, but the relation is much more subtle.