My last post discussed the presence of noise in images. Today let’s talk about the implications for engineering a robust machine vision system.
The issue we’re dealing with here is signal-to-noise ratio. A good rule of thumb is that the signal should be at least 10 times the background noise. In machine vision this tells us that if we want to find an edge repeatably, and our noise level is around 3 grey levels, then we should aim for an edge contrast of at least 30 grey levels.
A more mathematical approach would be to look at the RMS noise, and then set a minimum contrast difference based on this, but that’s probably more science than really needs to be brought to bear.
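For the curious, here’s roughly what that might look like in Python with NumPy. The function name, the patch coordinates and the factor of 10 are purely illustrative, not taken from any particular vision library; the idea is just to measure the RMS noise in a flat patch of the image and scale it up:

```python
import numpy as np

def min_edge_contrast(image, flat_region, factor=10.0):
    """Estimate RMS noise from a flat (featureless) patch and return the
    minimum edge contrast to aim for, in grey levels."""
    r0, r1, c0, c1 = flat_region               # hypothetical patch bounds
    patch = image[r0:r1, c0:c1].astype(float)
    rms_noise = patch.std()                    # RMS deviation about the patch mean
    return factor * rms_noise

# Synthetic check: uniform grey of 100 plus roughly 3 grey levels of noise.
rng = np.random.default_rng(0)
img = 100 + rng.normal(0.0, 3.0, size=(480, 640))
print(min_edge_contrast(img, (10, 60, 10, 60)))   # prints roughly 30
```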
What this means though is that the machine vision newbie who sets up his edge detection based on a contrast difference of 5, or even 10, grey levels is unlikely to end up with a robust system. My personal belief is that 20 grey levels is the absolute minimum.
Comments anyone?
1 comment:
I agree, if you are searching for an edge along just one pixel line and you need to find it really precisely.
Many times, however, you can use much more information than just one pixel line. The obvious example is searching for a straight edge across multiple pixel lines and then fitting a line through the (less precise) per-line results. In special cases, you can also use a priori information, such as "the edge slope will always be between 60 and 80 degrees", which allows for reliable results even when the signal-to-noise ratio is very low. There are several applications where the contrast difference had to be set as low as 7 grey levels, and they are still working reliably.
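To sketch the multi-line idea in Python/NumPy (the matched step filter, the synthetic image and all the numbers here are made up for illustration, not from any real system):

```python
import numpy as np

def fit_straight_edge(image, half_width=8):
    """Locate a rising edge on every pixel line, then fit a straight line
    through the per-line estimates."""
    image = image.astype(float)
    # Step-shaped matched filter: it responds most strongly at the edge,
    # and averaging over 2*half_width pixels suppresses per-pixel noise.
    kernel = np.concatenate([np.full(half_width, -1.0),
                             np.full(half_width, 1.0)]) / half_width
    rows = np.arange(image.shape[0])
    cols = np.array([np.argmax(np.correlate(row, kernel, mode='valid'))
                     for row in image]) + half_width
    # Least-squares fit: the individual row estimates jitter by a pixel or
    # two, but the fitted line averages that jitter out across all rows.
    slope, intercept = np.polyfit(rows, cols, 1)
    return slope, intercept

# Synthetic test: a 7 grey-level step, ~3 grey levels of noise, slight slant.
rng = np.random.default_rng(1)
img = np.full((200, 300), 100.0)
for r in range(200):
    img[r, 150 + r // 20:] += 7.0
img += rng.normal(0.0, 3.0, img.shape)
print(fit_straight_edge(img))   # roughly (0.05, 150)
```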
We are using NI's technology for our vision systems, and I can say their "advanced straight edge" tool gives some nice results even in difficult signal-to-noise situations.