Thursday, May 7, 2009

Comparing software packages


Building on an article in Vision Systems Design (Andrew Wilson, November 2008), Wolfgang Eckstein of MVTec has suggested an approach to benchmarking machine vision software. It starts from the perspective that what matters to users is the ease with which an image processing task can be performed, and leads into a discussion of what suitable tests might look like.

To facilitate the discussion I’m posting the list of test applications here. My hope is that as a community we can develop this from the initial “straw man” into a framework that brings greater transparency to our industry. That in turn should stimulate improvement in product offerings (especially by those at the bottom of the “league tables”), which can only benefit users and, ultimately, the industry.
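To make the idea concrete, here is a minimal sketch of what one such test might look like in practice: timing a template-matching task over a set of test images and recording the runtime and best match score for each. The directory layout, template file, and the choice of OpenCV's normalized correlation matcher are my own assumptions for illustration, not part of Wolfgang's proposal.

    # Hypothetical benchmark entry: time a normalized cross-correlation
    # template search on every image in a test set. Paths and file names
    # are illustrative assumptions, not a proposed standard.
    import glob
    import time

    import cv2

    def benchmark_template_search(image_dir, template_path):
        template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
        results = []
        for path in sorted(glob.glob(image_dir + "/*.png")):
            image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            start = time.perf_counter()
            # Normalized correlation coefficient matching, as offered by many toolkits
            scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
            elapsed = time.perf_counter() - start
            _, best_score, _, best_loc = cv2.minMaxLoc(scores)
            results.append((path, elapsed, best_score, best_loc))
        return results

    if __name__ == "__main__":
        for path, elapsed, score, loc in benchmark_template_search("testbed", "template.png"):
            print("%s: %.1f ms, best score %.3f at %s" % (path, elapsed * 1000.0, score, loc))

Different packages could then be compared on the same image set simply by swapping in their own search call at the marked line, which is the kind of like-for-like comparison the framework would need to support.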

Wolfgang should be applauded for his initiative. The question now is: who will rise to the challenge?

1 comment:

David Dechow said...

Hi,

I'm strongly in favor of doing software benchmarking, but with some caveats.

I was involved in stringent benchmarking of search algorithms (geometric and normalized correlation) some ten years ago. One of the key obstacles is developing a competent test bed of images that 1) represent real-world usage, and 2) do not favor one algorithm implementation over another.

A second obstacle is "who does the research". Technical specs in machine vision are universally suspect because there is no standardization of methodology. Could a third party take this on (the AIA, for example)?

Finally, one thing that can't be benchmarked is usability. I find that most software is competently implemented for industrial use, but the level of exposed parameters and the overall usability account for most of the product differentiation. That is of course subjective, but it may be more important than minute differences in algorithm performance.

David