One of the less challenging tasks in machine vision is making measurements between features. All that’s required is to find two edges, and then calculate how many pixels separate the two. (Actually, to be pedantic, in machine vision we measure an image rather than the physical part.)
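In code, the basic idea is only a few lines. Here's a minimal sketch in Python/NumPy, using a made-up row of grayscale values and an arbitrary threshold (both are illustrative assumptions, not values from the images below):

```python
import numpy as np

# Hypothetical example: one row of 8-bit grayscale values across a backlit gap.
# Bright background (~200) on both sides, dark part (~40) in the middle.
row = np.array([200, 200, 199, 160, 80, 41, 40, 40, 40, 41, 82, 158, 199, 200, 200])

threshold = 120  # the grayscale value we (arbitrarily) call "the edge"

# Find where the row crosses below and then back above the threshold.
dark = row < threshold
edges = np.flatnonzero(np.diff(dark.astype(int)))  # indices just before each transition

left_edge, right_edge = edges[0], edges[-1]
print(f"Edges near pixels {left_edge} and {right_edge}; gap ≈ {right_edge - left_edge} px")
```

Find two edges, subtract, done. Or so it seems.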
Unfortunately though, it’s seldom possible to say exactly where the edge is.
Here are two images to illustrate what I mean.
The first shows a gap between two backlit edges. The second is a magnified view of one side. You'll notice that the edge is a little fuzzy. If we drew a horizontal line across it and graphed the grayscale values, we'd see a gradual slope from around 200 down to about 40 rather than a vertical drop.
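You can reproduce that shape without a camera at all. The sketch below (synthetic numbers, a crude blur kernel standing in for the real optics) starts from a perfectly sharp step and shows how it smears into a ramp spread over several pixels:

```python
import numpy as np

# Hypothetical illustration: an ideal step edge (bright ~200 -> dark ~40) gets
# spread over several pixels by the lens, focus, and sensor sampling.
ideal = np.array([200.0] * 10 + [40.0] * 10)       # perfectly sharp edge
kernel = np.array([0.05, 0.25, 0.40, 0.25, 0.05])  # crude blur kernel (sums to 1)
padded = np.pad(ideal, 2, mode="edge")             # avoid boundary artifacts
profile = np.convolve(padded, kernel, mode="valid")

# Quick text "graph" of the grayscale values along the row:
for i, v in enumerate(profile.round().astype(int)):
    print(f"pixel {i:2d}: {v:3d}  {'#' * (v // 10)}")
```

Instead of a vertical drop, the printout shows something like 200, 192, 152, 88, 48, 40 across the transition.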
Interestingly enough, even if we use a telecentric lens and a backlight we'll still see this effect, because the edge will never align exactly with the camera's pixel grid.
So where is the edge?
To tell the truth, we can't say with any certainty, because it depends on how we define "edge" in terms of grayscale. In other words, there is uncertainty in every measurement we make.
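To see how much the definition matters, here's a small sketch that locates the edge in the blurred profile from above using a sub-pixel threshold crossing, then repeats the measurement with a few different thresholds (the profile values and thresholds are assumptions for illustration only):

```python
import numpy as np

# The same kind of blurred profile as above; where "the edge" lands
# depends entirely on which grayscale value we decide to call the edge.
profile = np.array([200, 200, 200, 192, 152, 88, 48, 40, 40, 40], dtype=float)

def edge_position(profile, threshold):
    """Sub-pixel position where the profile first falls below `threshold`,
    using linear interpolation between the two straddling pixels."""
    i = np.flatnonzero(profile < threshold)[0]   # first pixel below the threshold
    a, b = profile[i - 1], profile[i]            # grayscale values either side
    return (i - 1) + (a - threshold) / (a - b)

for t in (160, 120, 80):
    print(f"threshold {t:3d} -> edge at pixel {edge_position(profile, t):.2f}")
```

With these numbers the reported edge moves from about pixel 3.8 to about pixel 5.2, well over a pixel of difference, just by changing our definition of the edge.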
To learn how to deal with that, check back soon for part 2.