Monday, February 16, 2009

Wanted: a real Buyers’ Guide

If I’m shopping for a car, a household appliance, or even wine, there are plenty of places I can go for independent information. So why, when I’m shopping for machine vision hardware, must I rely on the manufacturers’ own information?

Andy Wilson, of Vision Systems Design, must have been musing on this same point last year, for in the November ’08 edition he published a lengthy article proposing a means of benchmarking the performance of a machine vision system. Taking as his starting point the “Abingdon Cross Benchmark Survey” (written by Kendall Preston back in ’89), he proposes a series of benchmarking tests that would facilitate comparison between different software and systems.

The original Abingdon Cross benchmark set an image processing challenge without dictating the algorithms to be used, making it a true test of the capability of the whole system rather than of any particular image processing tool. Its main emphasis, however, was on processing speed.
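To make the challenge concrete, here is a minimal sketch of one possible attack, written in Python with NumPy and scikit-image (tools that obviously postdate Preston’s survey). The synthetic test image, the noise level, and the smooth/threshold/thin pipeline are all my own illustrative assumptions; the whole point of the benchmark is that it leaves the method open.

```python
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.morphology import skeletonize

# Synthetic stand-in for the Abingdon Cross test image: a bright
# cross on a dark background, buried in heavy Gaussian noise.
rng = np.random.default_rng(0)
size, half_width = 256, 12
image = np.zeros((size, size))
image[size // 2 - half_width : size // 2 + half_width, :] = 1.0  # horizontal bar
image[:, size // 2 - half_width : size // 2 + half_width] = 1.0  # vertical bar
image += rng.normal(scale=0.8, size=image.shape)

# One of many possible pipelines: smooth away the noise, threshold,
# then thin the cross down to its medial axis (skeleton).
smoothed = gaussian(image, sigma=3)
binary = smoothed > threshold_otsu(smoothed)
skeleton = skeletonize(binary)

print(f"skeleton pixels recovered: {skeleton.sum()}")
```

A hard-wired pipeline processor, a DSP board, and a PC software library could each solve this however they liked, and the survey compared them on the result and the time taken.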

One of the many good points Andy raises is that few buyers today are concerned with execution speed: most systems are fast enough for most tasks. What does matter is accuracy and repeatability, and Andy suggests that an appropriate series of tests could be devised to measure them.
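As a sketch of the distinction, suppose we ask a vision tool to locate a step edge to sub-pixel precision, over and over, on noisy data. Accuracy is how close the average answer is to the truth; repeatability is how tightly the answers cluster. Everything below, including the hypothetical measure_edge routine and the noise model, is my own illustration rather than anything from Andy’s article.

```python
import numpy as np

def measure_edge(profile: np.ndarray) -> float:
    """Hypothetical tool under test: locate a rising edge to sub-pixel
    precision by interpolating where the profile crosses its midpoint."""
    mid = (profile.min() + profile.max()) / 2
    i = int(np.argmax(profile > mid))  # first sample above the midpoint
    return i - 1 + (mid - profile[i - 1]) / (profile[i] - profile[i - 1])

rng = np.random.default_rng(1)
true_edge = 50.3  # ground-truth edge position, in pixels
x = np.arange(100)

# Measure the same edge 200 times with fresh sensor noise each time.
estimates = np.array([
    measure_edge(1 / (1 + np.exp(-(x - true_edge)))
                 + rng.normal(scale=0.02, size=x.size))
    for _ in range(200)
])

print(f"accuracy (mean error): {estimates.mean() - true_edge:+.3f} px")
print(f"repeatability (std):   {estimates.std():.3f} px")
```

Two products could score identically on speed yet differ by an order of magnitude on numbers like these, which is exactly why a published benchmark would be worth having.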

So, just as car buyers can compare zero-to-sixty times, I’d like to see comparative data on how well various vision software and hardware products solve specific, reasonably real-world problems.

Wouldn’t developing such a benchmark be a great way to bring together the various machine vision trade bodies and help grow our industry?
