Most of us work with 8-bit images. That means we have 256 levels of grey. So if we’re trying to distinguish between features – an edge or a blob – how big a difference in grey level do we need?
If your answer was “as much as possible,” go stand in the corner and think about it a little more. Theoretically, the most contrast you could have would run from zero (totally black) to 255 (completely saturated), but there’s a problem with that.
Once you saturate you lose control over just how much light is received. If you think of the individual pixels on the sensor as photon-catching buckets, each one has a finite capacity. Once a bucket is full to the brim, any extra photons just spill over the side. This is a particular problem with CCD sensors, where the “spilling over” phenomenon is termed “blooming” and leads to severe degradation of the edge of the feature you’re trying to see.
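The bucket analogy is easy to sketch numerically. This is a toy illustration (the ramp values and the NumPy clipping are my own, not taken from any particular camera):

```python
import numpy as np

# Hypothetical linear light ramp across an edge; the true signal
# exceeds what an 8-bit sensor can record.
light = np.linspace(0, 400, 9)

# Once a pixel's "bucket" is full (255), extra photons are simply lost,
# so the recorded edge profile flattens at the top.
recorded = np.clip(light, 0, 255).astype(np.uint8)

print(recorded)  # the brightest samples all read 255 - the gradient is gone
```

Everything above full well collapses to the same value, which is exactly why the edge you were hoping to locate turns into a featureless plateau.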
Fortunately, the latest generation of CCDs seem pretty bloom-resistant, but that doesn’t mean you can just crank up the light intensity. A regular bucket fills up in a linear manner, but as CCD pixels approach their capacity their electron-generating response becomes non-linear. Basically, this means that as you close in on a grey level of 255 you can’t rely on what the number is actually telling you. Going from 230 to 253 might mean the light has increased 10%, but it might not.
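To see why the numbers near the top stop being trustworthy, here is a toy saturating-response model. The exponential form is purely illustrative (an assumption of mine, not real sensor physics), but it captures the behaviour: equal increments of light produce ever-smaller increments of grey level as you approach full well.

```python
import numpy as np

# Toy saturating-response model (illustrative only, not sensor physics):
# recorded grey level approaches full well asymptotically with light.
def grey_level(light, full_well=255.0):
    return full_well * (1.0 - np.exp(-light / full_well))

# The same +10 units of light moves the grey level much less near the
# top of the range than it does in the middle.
step_mid = grey_level(110) - grey_level(100)
step_high = grey_level(510) - grey_level(500)
print(step_mid, step_high)
```

In a model like this, the mid-range step is several times larger than the near-saturation step, even though the change in light is identical – which is the sense in which a reading of 253 vs. 230 stops telling you anything reliable about the light.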
And to complicate matters, in my experience most edge-based tools work better a little way short of 255. In other words, stay away from saturation.
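A practical consequence: before running edge tools, it is worth checking how close an image sits to saturation. A minimal sketch, assuming NumPy and an arbitrary illustrative margin of 250 grey levels (the threshold and function name are my own choices):

```python
import numpy as np

def saturation_fraction(img, threshold=250):
    """Fraction of pixels at or above `threshold` grey levels.

    The default margin of 250 (a little short of 255) is arbitrary;
    it reflects the idea that the response may already be non-linear
    near full well, so values just under 255 are also suspect.
    """
    return float(np.mean(img >= threshold))

# Synthetic example: a mostly mid-grey image with a blown-out patch.
img = np.full((100, 100), 120, dtype=np.uint8)
img[:10, :10] = 255  # 1% of the pixels are saturated

print(saturation_fraction(img))
```

If the fraction is more than a sliver, the sensible fix is to reduce exposure or light intensity until the brightest features of interest land comfortably below 255.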
Tuesday, December 21, 2010