Newbies to the world of machine vision – and I count myself in their number – often find there is a mysterious inner circle of experts distinguished from us lesser mortals by their ability to use cryptic words.
Chief among these words are “jitter” and “latency”. You’ll see them in framegrabber specs and hear them bandied around by those who were doing machine vision when Bill Gates was a boy, but if you don’t know what they mean, it’s really difficult to find out.
But thanks to Vision Systems Design, that glass ceiling has been shattered. “Exposing jitter and latency myths in Camera Link and GigE Vision systems” (January 1st 2011) summarizes a Dalsa paper on the issue. One interesting point to note is that the author of the Dalsa paper concedes there are variations of the definitions, which is guaranteed to make life complicated.
If you want to understand jitter and latency, and I suggest you should, follow the link and become a guru of machine vision. Or at least be able to talk like one.
1 comment:
Thanks for the pointer. I measured the jitter of the Camera Link interface myself a while ago, but I used an FPGA instead of a PC. My measurement showed little jitter (at the 10-microsecond scale). That may imply that the jitter mentioned in that article does not come from the Camera Link protocol itself, but from the Camera Link framegrabber and its interface to the PC. The article is not actually comparing the jitter of the two protocols, but comparing the jitter of the IMPLEMENTATIONS of the two protocols. If implemented properly, the Camera Link protocol should have ZERO jitter. I do not know whether the GigE Vision protocol also has theoretical ZERO jitter, though.
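To make the commenter's point concrete, here is a minimal sketch (in Python; the function name, timestamps, and the peak-to-peak definition of jitter are my own illustrative choices, not from the article or the Dalsa paper, which notes that definitions vary) of how one might compute latency and jitter from trigger and frame-arrival timestamps:

```python
import statistics

def latency_and_jitter(trigger_times_us, frame_times_us):
    """Hypothetical helper: given matched trigger and frame-arrival
    timestamps (microseconds), return (latency, jitter).

    Here latency is the mean trigger-to-frame delay and jitter is the
    peak-to-peak spread of that delay; standard deviation is another
    common definition.
    """
    delays = [f - t for t, f in zip(trigger_times_us, frame_times_us)]
    latency = statistics.mean(delays)
    jitter = max(delays) - min(delays)
    return latency, jitter

# Illustrative numbers: triggers every 1000 us, frames arriving ~250 us later.
triggers = [0, 1000, 2000, 3000]
frames = [252, 1249, 2255, 3248]
lat, jit = latency_and_jitter(triggers, frames)
# lat is the average delay (251.0 us), jit the peak-to-peak variation (7 us)
```

On this definition, a protocol with "ZERO jitter" would show identical delays on every frame (jitter of 0), even if the latency itself were large; the two numbers measure different things.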