If we could reduce the noise in an image, would we get more stable/repeatable results from our machine vision tools?
This is an idea I’ve been chewing over for a few weeks now, and if you read “Better measurements from a worse image?” and “Some crude image averaging” you’ll know that I’m making progress in convincing myself that this is possible. I also mentioned in “Dealing with noise in an image” that I’d found some commercial software that claimed to reduce image noise.
Well, I’ve now had a chance to put one of these to the test: Neat Image, which claims to offer the “best noise reduction for digital cameras and scanners.” A demo version is available as a free download.
I anticipated it would request a set of “identical” images and perform some averaging between them, but that wasn’t what happened. Instead, after loading an image, the software searches for a featureless region where it can take a noise sample. A correction factor is then calculated and applied to the whole image.
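As I understand it, the idea is to measure the noise statistics in that flat patch and then tune the strength of a denoising filter to match. Here’s a minimal sketch of that general approach in Python with OpenCV; the blur-and-residual sigma estimate and the choice of non-local means are my own illustration of the technique, not Neat Image’s actual algorithm:

```python
import cv2
import numpy as np

def estimate_noise_sigma(image, patch):
    """Estimate the noise standard deviation from a featureless patch.

    `patch` is an (x, y, w, h) rectangle over a flat region, where any
    pixel-to-pixel variation is assumed to be noise rather than detail.
    """
    x, y, w, h = patch
    region = image[y:y + h, x:x + w].astype(np.float64)
    # Subtract a heavy blur so slow illumination gradients don't
    # inflate the estimate; the residual is treated as pure noise.
    residual = region - cv2.GaussianBlur(region, (0, 0), 5)
    return residual.std()

def denoise_with_profile(image, patch):
    """Denoise the whole image, scaling filter strength to the
    noise level sampled from the flat patch."""
    sigma = estimate_noise_sigma(image, patch)
    # Non-local means on an 8-bit image; h controls filter strength.
    # Tying h to ~3x sigma is a common rule of thumb, nothing more.
    return cv2.fastNlMeansDenoising(image, h=float(3 * sigma))

# Usage sketch: sample a 64x64 patch from a blank corner of the frame.
# image = cv2.imread("capture.png", cv2.IMREAD_GRAYSCALE)
# cleaned = denoise_with_profile(image, patch=(0, 0, 64, 64))
```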
And the results? I’m impressed. It did a great job of cleaning up the noise in my images, and I can certainly imagine using it on my digital camera snaps at home. But would I use it in a machine vision application? I’d need to do a lot more testing to understand the effect it has on measurements. I’d also have to ask the minds behind Neat Image for some code to plug into my applications, as it’s not readily available in such a form.
To be honest, I think I still prefer my crude averaging technique, although Neat Image is definitely worth investigating.
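For comparison, the crude averaging from the earlier posts boils down to something like this (a sketch, assuming a set of aligned, nominally “identical” captures; the function name is my own):

```python
import numpy as np

def average_frames(frames):
    """Average a stack of aligned, nominally identical captures.

    Random pixel noise is uncorrelated from frame to frame, so
    averaging N frames cuts its standard deviation by about sqrt(N).
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# e.g. 16 frames should reduce random noise to roughly a quarter of
# its single-frame level, at the cost of longer capture time.
```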