Sunday, November 23, 2008

Is execution speed a differentiator?


In their white paper “10 Things to Consider When Choosing Vision Software” (published as part of the “Vision Resources Kit”), National Instruments (NI) included a table comparing the speed of their image processing algorithms with those of a competitor. Since I’m promoting their products, I’m sure they won’t object to me reproducing it here.

I would have liked some information on how they produced these figures – for example, what size was the test image? But even without such details I find the table interesting, for two reasons. First, the NI algorithms appear to execute much faster than those of the unnamed competitor, which suggests that NI engineers code very efficiently. And second, NI appear to believe that, even in these days of quad-core processors running at a gazillion MIPS, execution speed is still an issue for some vision users.
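To show why those missing details matter, here is a minimal timing sketch of my own – not NI’s benchmark, and assuming Python with NumPy and OpenCV rather than any particular vision library – that makes the image size explicit:

```python
# Minimal timing harness (illustrative only, not NI's methodology):
# time one operation at two image sizes, since image size alone
# can swing the numbers dramatically.
import time
import numpy as np
import cv2

def time_blur(width, height, repeats=20):
    img = np.random.randint(0, 256, (height, width), dtype=np.uint8)
    cv2.GaussianBlur(img, (5, 5), 1.0)          # warm-up call
    start = time.perf_counter()
    for _ in range(repeats):
        cv2.GaussianBlur(img, (5, 5), 1.0)
    return (time.perf_counter() - start) / repeats * 1000.0  # ms per call

for w, h in [(640, 480), (2048, 2048)]:
    print(f"{w}x{h}: {time_blur(w, h):.2f} ms per 5x5 Gaussian blur")
```

Without knowing which end of that size range a published benchmark used, the absolute numbers tell us very little.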

So let me throw out a question: do you ever run into problems with algorithms taking too long to execute? If you do, would you be kind enough to share some details?

1 comment:

Anonymous said...

For certain applications, speed is important. But just posting results without the methodology behind the measurements is not conclusive. I would be interested to hear from NI which libraries they measured against – Sapera, MIL, Cognex, etc. Did they use routines from the competition that take advantage of hardware acceleration? Was it a like-for-like comparison?

For simple apps, the processing is so much faster than the acquisition of the data that it makes no difference. But if you are doing filtering in the Fourier domain on 4-megapixel frames at 15 fps, it might well matter.
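To put a rough number on that last example, here is a small sketch (assuming Python with NumPy; the 2048x2048 frame and the simple low-pass mask are my own choices, not the commenter’s) that times a full Fourier-domain filter against the 66.7 ms per-frame budget that 15 fps allows:

```python
# Rough sanity check: one Fourier-domain low-pass filter pass on a
# ~4-megapixel frame, compared with the 66.7 ms that 15 fps permits.
import time
import numpy as np

h, w = 2048, 2048                      # assumed ~4.2 MP frame size
frame = np.random.rand(h, w).astype(np.float32)

# Circular low-pass mask centred in the shifted spectrum.
yy, xx = np.ogrid[:h, :w]
mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) < (h / 8) ** 2

start = time.perf_counter()
spectrum = np.fft.fftshift(np.fft.fft2(frame))
filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
elapsed_ms = (time.perf_counter() - start) * 1000.0

print(f"FFT filter: {elapsed_ms:.1f} ms (budget at 15 fps: 66.7 ms)")
```

Whether that fits the budget depends entirely on the hardware and the library doing the FFT, which is exactly why library-to-library speed comparisons can still matter for this class of application.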