Has your boss ever asked why it takes so long to develop a vision application? It’s happened to me, so I’m sure you’ve been there too. Being charitable, I think this comes about because of the way the human brain processes images: if something is missing or otherwise ‘wrong’ it’s just obvious. Unfortunately, it’s not obvious to a computer, hence all the laborious development and coding of algorithms.
Making matters worse, every time Product Engineering releases a change, the vision program has to be modified at best, and rewritten at worst, to suit the new feature. And all the while the boss is standing over you like a kid on a long car journey, asking, “Are we there yet?”
Perhaps what you need is the “PC-Eyebot” from Sightech Vision Systems of Santa Clara, California. This is essentially a neural network vision system that learns a feature set and then checks every acquired image to see if it matches what is expected. Intuitively, this is an obvious way to approach machine vision: no messing about with pattern matching, edge detection or blob analysis. Just a simple ‘does it look like what I’ve been trained to see?’
Sounds great, so why haven’t Cognex, Matrox and their ilk jumped on the neural network bandwagon? Frankly, because it’s only great in certain applications. As I understand it, the biggest problem is providing a truly representative sample set for training. Neural nets assign weights based on frequency of occurrence, so the training set has to mirror the distribution of image variation you expect to see in production. If you think you can provide that, then Art Gaffin, the founder of Sightech, has some great tutorials on his web site that will help you determine if the PC-Eyebot is for you.
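To see why the training set matters so much, here’s a toy sketch of the “does it look like what I’ve been trained to see?” idea. This is not Sightech’s algorithm, just an illustrative stand-in: images are reduced to feature vectors, the “training” learns the mean and spread of each feature from good samples, and inspection flags anything too far from what was seen. If the training set is narrower than real-world variation, the learned spread is too tight and perfectly good parts get rejected.

```python
import statistics

def train(good_samples):
    # Learn the per-feature mean and spread from examples of "good" parts.
    # A training set narrower than real production variation yields an
    # overly tight spread, and normal parts later get flagged as bad.
    means = [statistics.mean(f) for f in zip(*good_samples)]
    spreads = [statistics.pstdev(f) or 1e-9 for f in zip(*good_samples)]
    return means, spreads

def looks_familiar(model, sample, threshold=3.0):
    # "Does it look like what I've been trained to see?"
    # Accept only if every feature is within `threshold` spreads of its mean.
    means, spreads = model
    return all(abs(x - m) / s <= threshold
               for x, m, s in zip(sample, means, spreads))

# Trained on a narrow batch of good parts (two made-up features per image):
model = train([[10, 5], [11, 5], [10, 6]])
print(looks_familiar(model, [10, 5]))   # close to training set: accepted
print(looks_familiar(model, [12, 5]))   # normal part, unseen variation: rejected
```

The second part may be perfectly fine on the factory floor, but because the training batch never showed that much variation, the system rejects it. That, in miniature, is the representativeness problem.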
Tuesday, August 4, 2009