Scanning my October ’11 edition of Vision Systems Design, I came across an interesting article by Stuart Singer of Schneider Optics. Nothing unusual about that – everything Stuart writes is interesting – but his articles do tend to be a little intense, so I set it aside for later reading.
Now most of what he says in “Filters Factor into Optical Imaging” goes over my head, although I think I get the main point: a lens filter will affect the MTF of your optical system. But it was a throwaway line about Nyquist theory and limiting resolution at the sensor that really caught my eye.
What Stuart said was that the smallest object that can be resolved by a pixel of 7 µm is 14 µm, according to the Nyquist theorem. This sent me off on a quest to learn more about Harry Nyquist and his work. I stumbled across a great definition here, but the most interesting find was buried in some notes from the Sensory Processes class at the University of California, Berkeley. On the last page of this six-pager about aliasing are four photos illustrating how blurring an image can have the effect of adding more information.
I’m oversimplifying, I know, but it set me wondering whether a low pass filter would improve the performance of a precision gauging application I’m working on. So I grabbed a set of images taken under slightly varying lighting (just like in the real world) and made some measurements. I got a variation between images of around 4 pixels.
Then I applied a low pass filter to the images and repeated the measurement. This time the variation was 0.4 pixels. In other words, and I’m speaking crudely, blurring the image, as in the Berkeley notes, produces a more stable result.
Math is not my strong suit, so I’m not going to attempt an elegant explanation of why a worse image produces a better result, but I would welcome comments on and explanations of my findings.
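For anyone who wants to try the same comparison, here is a rough sketch of the kind of test I ran, using OpenCV; the file names, blur kernel size and edge-finding approach are placeholders rather than my exact setup.

```python
import cv2
import numpy as np

def edge_position(gray):
    """Estimate a vertical edge's column by averaging, over all rows,
    the position of the strongest horizontal gradient."""
    grad = np.abs(np.diff(gray.astype(float), axis=1))
    return grad.argmax(axis=1).mean()

raw_positions, filtered_positions = [], []
for name in ["shot_01.png", "shot_02.png", "shot_03.png"]:  # placeholder file names
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    raw_positions.append(edge_position(gray))
    # The "low pass filter": a modest Gaussian blur (kernel size is a guess)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    filtered_positions.append(edge_position(blurred))

print("spread of raw measurements:     ", np.ptp(raw_positions), "pixels")
print("spread of filtered measurements:", np.ptp(filtered_positions), "pixels")
```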
8 comments:
Interesting stuff.
One thought I had, about "the smallest object that can be resolved by a pixel of 7 µm is 14 µm":
I think it should be clear that a "pixel of 7um" should be read as: "a point sample with a spacing of 7um to the next sample".
At first I confused it with: "a pixel that is 7um wide, and lies directly adjacent to the next pixel, with (next to) no space between", like in modern digital cameras.
The latter already has a form of smoothing, in that all the light falling on one pixel (however sharp) is integrated and output as one value.
But with the former, surely low pass filtering is needed to ensure that no information is lost if it falls between two sample points.
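A quick way to see the difference between those two readings, as a sketch (NumPy only; the 2um "hair" and 7um pitch are just illustrative numbers): point samples can miss a feature that falls between them, while a pixel that integrates over its full 7um width always responds to it.

```python
import numpy as np

# A 70 um "scene" at 1 um resolution: a white table with one 2 um dark hair
scene = np.ones(70)
scene[33:35] = 0.0          # the hair sits between two sample points

pitch = 7  # um between samples / pixel width

# Point sampling every 7 um: the hair is missed entirely
point_samples = scene[::pitch]

# Area sampling: each 7 um pixel integrates (averages) the light it receives,
# which is itself a form of low pass filtering
area_samples = scene.reshape(-1, pitch).mean(axis=1)

print(point_samples)  # all 1.0 -- no trace of the hair
print(area_samples)   # one pixel dips to ~0.71 -- the hair is detected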
another thought:
While it seems counterintuitive to need pixels of 7um to detect objects of 14um (a 7um object fits a 7um pixel, right?), I thought of it like this:
Say you have a bunch of hairs of 7um on a white table. You'd think you need one pixel to detect one hair. But if you have 10 hairs lying next to each other with some space between them, how many pixels do you need at least to detect the individual hairs? That would be 20 (minus one). So with pixels of 7um, you need at least 14um of space to detect an object. The hair could just as well be thinner, say 2um; you'd still need 14um of space to detect individual hairs.
Look at it like this: the hair + adjacent space = one period of the signal of the maximum detectable frequency.
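Putting numbers on that (a rough sketch; the square-wave phase is chosen to line up with the samples): one hair plus one gap is one period, and you need at least two samples per period, so a 7um sample pitch can only hold onto patterns with a period of 14um or more.

```python
import numpy as np

def surviving_transitions(period_um, pitch_um=7.0, length_um=140.0):
    """Point-sample a dark/bright square wave (hairs and gaps) and count how
    many transitions are still present in the sampled values."""
    x = np.arange(0.0, length_um, pitch_um)
    samples = (np.floor(x / (period_um / 2.0)) % 2).astype(int)
    return int(np.abs(np.diff(samples)).sum())

# 7 um hair + 7 um gap = 14 um period: every sample alternates, hairs resolved
print(surviving_transitions(14.0))
# 3.5 um hair + 3.5 um gap = 7 um period: the pattern aliases to a flat signal
print(surviving_transitions(7.0))
```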
Some edge detection tools have a built-in low pass filter.
I don't know how you did your measurements, but it could also have something to do with the fact that edge detection algorithms work better with an edge that is a little bit blurry. This might also have had a positive effect on the measurement errors.
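One way that plays out in practice (a sketch of a common sub-pixel technique, not necessarily the tool used here): fit a parabola to the gradient peak and its two neighbours. A perfectly sharp edge gives the fit nothing to interpolate, while a slightly spread edge does.

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge in a 1-D intensity profile by fitting a parabola to the
    gradient peak and its two neighbours (a common sub-pixel trick)."""
    g = np.abs(np.diff(np.asarray(profile, dtype=float)))
    i = int(np.argmax(g))
    if i == 0 or i == len(g) - 1:
        return float(i)                  # peak at the border, nothing to fit
    a, b, c = g[i - 1], g[i], g[i + 1]
    offset = 0.5 * (a - c) / (a - 2.0 * b + c)  # vertex of the fitted parabola
    return i + offset

# A razor-sharp edge concentrates all its gradient in one pixel, so the fitted
# offset is always zero; a slightly blurred edge spreads the gradient, and the
# asymmetry of the neighbouring values is what carries the sub-pixel information.
sharp  = [0, 0, 0, 255, 255, 255]
blurry = [0, 5, 60, 170, 240, 255]
print(subpixel_edge(sharp), subpixel_edge(blurry))
```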
I'm also interested in exactly how and what measurements were done.
Couldn't the drop in variation be due to noise being blurred out?
I believe there are two reasons why your repeatability was improved through optical filtering.
1. The focus of a lens is a function of wavelength. By eliminating wavelengths through filtering there is a narrower range of light the lens needs to focus, which effectively improves overall resolution or sharpness by correcting inherent chromatic aberrations.
2. You mentioned "varying" light conditions... Optical filters help control the variability of ambient light, which can interfere with image quality, by blocking unwanted light from entering the camera. I would suggest trying a monochromatic bandpass filter, which only passes a select band of light, instead of just the lowpass filter. A bandpass will help control ambient light and provide further chromatic aberration correction.
That is very interesting. Were you using monochromatic light? If not, perhaps the improvement is related to cutting out some of the chromatic aberrations in the lens.
I agree with the other post. Try two narrow bandpass filters, say one passing blue and one passing red. Now which is more repeatable?
With a low pass filter you could get better repeatability but a biased absolute measurement.