Thursday, May 29, 2008
Computer Vision Resource
The ‘visionbib’ site is an “annotated bibliography … of computer vision papers accumulated over a number of years,” put together and maintained by Keith Price, who, according to his site, is a retired computer vision researcher.
My brief perusal revealed this to be a real treasure trove of info, but note that the theme is computer vision and not machine vision. Yes, they overlap in that machine vision uses computer vision tools, but ‘visionbib’ is not a place to learn about implementing industrial vision systems (that’s what this blog is for!). ‘Visionbib’ will guide you to sources that will help you learn how image processing tools work. If you’re willing to do the legwork to access the referenced papers, you’ll gain a much deeper understanding of the science that underpins our practical applications.
Dip into it when you want to improve your grasp of the fundamentals, but think of it as your vision “Yoda” – you’ll need to figure out how to apply what you learn!
Wednesday, May 28, 2008
It’s not how many pixels you have …
What it really boils down to is how many line pairs per millimeter can you see?
Line pair?
Imagine a series of black and white stripes. One black and one white is a line pair. Now you know that if you put these really close together – say 12 pairs per inch – your camera will cease to show you solid black transitioning to perfect white (or 0 to 255 in grayscale values); in fact you’ll see some blurring. And as the line pairs get finer (more line pairs per inch), the blurring will increase. After a while you’ll notice the transitions are more like 30 to 220, and as you pack in ever more line pairs eventually it will all blur down to a grey stripe.
But this is just generalities. If you want to get specific I suggest you start by reading this excellent note by Jon Titus of Test & Measurement World.
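And if you’d like to put a number on that blurring yourself, here’s a minimal sketch in plain Python/NumPy of the Michelson contrast measure commonly used with line-pair targets. The grayscale values are invented for illustration, echoing the 30-to-220 example above:

```python
import numpy as np

# One hypothetical scan line across a blurred black/white line-pair target.
# A perfect camera would report 0s and 255s; blur squeezes the range.
scan = np.array([30, 110, 220, 110, 30, 110, 220], dtype=float)

# Michelson contrast: 1.0 means perfect black-to-white swings,
# 0.0 means everything has blurred down to a uniform grey stripe.
contrast = (scan.max() - scan.min()) / (scan.max() + scan.min())
print(f"contrast = {contrast:.2f}")  # 0.76 here, versus 1.0 for a 0-to-255 swing
```

Run that on scans taken at increasing line-pair frequencies and you’ve effectively plotted your system’s contrast roll-off.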
Cat fight in the Vision world!
Interesting how this comes just days after the MVTec webcast that discussed in some detail the strengths of the Halcon product. Maybe the folks in Natick were watching!
I’m eager to find out the specific details but so far my sources aren’t talking. However, Cognex has a reputation for aggressively protecting its patents, so it will be very interesting to see how this turns out.
Tuesday, May 27, 2008
Going deep
Three extremely knowledgeable gentlemen from MVTec Software, Carsten Steger, Markus Ulrich and Christian Wiedemann, discussed a number of applications, and in each case went into a lot of detail about the approach taken and the algorithms used. (If you’ve been paying attention you’ll recall that these three guys have a book out with the same title as the webcast.)
MVTec is the company behind Halcon, a world-class machine vision software package, but to their credit I don’t think there was a single commercial ‘plug’ for the product through the entire presentation. Frankly, the webcast sold Halcon beautifully, and more overt ‘salesmanship’ would just have been unnecessary.
If you missed out, use this link to access it (you’ll need to register first, but it is free), and be sure to have 45 minutes to an hour available.
Personally, I found the webcast just an appetizer. I’ll be buying the book very soon.
Monday, May 26, 2008
Computer vision rocks!
Trust me, you’ll like it.
Thursday, May 22, 2008
Doing “The Knowledge”
Machine vision professionals need to do the same – not memorizing the London “A to Z” but getting a thorough grasp of the technology basics. Andor, who specialize in “the development and manufacture of high performance digital cameras,” have an excellent section on their website that will help.
There’s a lot of very detailed info on subjects such as CCD Architectures, Binning, and Blooming. If you’re serious about mastering camera technology, click here to access the information provided by Andor.
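To give you a taste of one of those topics: binning simply combines blocks of neighbouring pixels into one ‘super-pixel’, trading resolution for sensitivity. Here’s a rough NumPy sketch of 2x2 binning – a conceptual illustration only, not tied to any particular Andor camera:

```python
import numpy as np

def bin_2x2(image):
    """Sum each 2x2 block of pixels into one value (2x2 binning).
    Resolution halves in each axis; signal per output pixel roughly quadruples."""
    h, w = image.shape
    image = image[:h - h % 2, :w - w % 2]  # trim odd rows/columns
    return image.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.random.randint(0, 256, size=(480, 640))  # stand-in for a sensor readout
binned = bin_2x2(frame)  # 240 x 320 result
```

On a real CCD the summing happens in the charge domain before readout, which is why on-chip binning improves signal-to-noise rather than just averaging it after the fact.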
Wednesday, May 21, 2008
In praise of monitors
But how do you know the darned thing is actually working? Yes, I know your quality system requires an hourly check, but even with that, isn’t it nice to see what your camera is seeing? Having a monitor set up on the line lets me see that the camera is alive and kicking. I want to walk past and know that the parts are still in the field of view, the lights are on, and the system is not in ‘bypass’ mode.
Banner and Keyence let you hook up a simple monitor to the camera. I wish other companies would do the same.
Tuesday, May 20, 2008
A crowded marketplace
Vision-Components is one such company, quietly slogging away through good years and bad, all the while growing their product offering. I’m not completely convinced by their claim to have invented the smart camera, but they were certainly amongst the early pioneers, and the very fact that they’re still here suggests you should give them a look.
If you check the specs, you’ll observe that V-C are wedded to the TI processor family. That means you’ll need to run the V-C software, so this is not an equivalent of the Sony Smart Camera, which seems content to host a variety of vision packages; but if you’ve no great investment in software that might not be a problem for you.
Give it a go – you just might like it.
Monday, May 19, 2008
Another conference opportunity
I’ve never had the chance to visit the land of the rising sun: perhaps it’s time I wangled myself a trip (business class, of course!)
Sunday, May 18, 2008
Synchrotech Support Blog: FireRepeater 800 Pro FireWire 800 IEEE 1394b Repeater Hub 4 Port
It appears to be an industrial FireWire 800 (IEEE 1394b) hub, which could be useful to have around.
Anyone got any direct experience with it?
Think like a vision system
However, regulars will also know that I can never resist pointing out an opportunity for improvement, so here it comes.
The weakness with this article is that it focuses (yes, that’s a deliberate pun) on hardware rather than on solving the application. The guys note that lighting is the “most critical and least understood factor,” but they don’t discuss what you’re trying to achieve with the lighting. Perhaps that needs a separate article, but I can sum it up in one word: contrast.
OK, maybe it needs a few more words, so here’s a slightly better explanation. Your vision system “sees” only grayscale values – all it knows about the scene is a matrix of numbers – it has no inherent ability to understand what it’s looking at, so you have to make it very obvious. This means you need to light your target to create a strong contrast between what you want to see and what you don’t. Think like a vision system and not a human.
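Here’s a toy illustration of the point, in NumPy with invented grey values. The same “feature” is imaged twice; only the well-lit version gives a simple threshold anything to work with:

```python
import numpy as np

# All a vision system ever receives: a matrix of grayscale values.
# Good lighting: the feature (right half) stands well clear of the background.
good = np.array([[ 20,  22, 205, 210],
                 [ 18,  25, 200, 208]])

# Poor lighting: same feature, but only a few grey levels above background.
poor = np.array([[120, 125, 133, 130],
                 [122, 126, 131, 127]])

threshold = 128
print((good > threshold).astype(int))  # clean feature/background separation
print((poor > threshold).astype(int))  # one pixel (127) is already misclassified
```

With strong contrast the threshold has a margin of nearly 200 grey levels; with weak contrast, a few counts of sensor noise or a drift in the lighting flips the result.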
Friday, May 16, 2008
Off-topic, but interesting
Research in Australia has shown that, in this regard, we’re significantly inferior to the mantis shrimp. Apparently this little beastie can see way more than you and I. (Thanks again to R&D magazine for another intriguing article.)
I wonder if there’s a way I can put a mantis shrimp to work in my lab?
Thursday, May 15, 2008
Light at the end of the tunnel?

But PPT Vision President Joe Christenson argues that volumes of the IMPACT A10 smart camera will continue to grow on the back of ongoing product development. Unfortunately, this is a lower-priced item than the rest of the range, which, when you stop to look, is remarkably broad. I suggest you take a minute to check it out.
So what should PPT do? Is their only option to slash costs, or are there ways to grow the business? Do you MBAs out there have any suggestions?
Here’s mine: marketing. Do lots more marketing. You have a good product but the machine vision users of the world don’t know about it. Spread the word, let people try IMPACT, and when they see its worth, sales will follow.
Wednesday, May 14, 2008
“Everything looks worse in black and white”
The main point Mr. Howison is trying to make is that color is often not necessary in a machine vision system, which I agree with, but I think more explanation would have been helpful.
First, we need to remember that the CCD or CMOS sensor does not sense wavelength; it just catches photons of light that pass through the lens of the system. So all it’s detecting is the quantity of light.
Second, a filter will let us trap photons of certain wavelengths, so preventing them from reaching our sensor. For example, if we use a green filter we are allowing photons with wavelengths in the region of 500 to 570 nm to pass through, while absorbing all other wavelengths. This is how we detect green light.
So, it doesn’t take too much imagination to see that combinations of colored light and colored filters can be used to help a monochrome camera detect light of a specific wavelength. Thus the complexity of color is often unnecessary. (I should add that color image processing can get really ‘hairy’, but any further discussion is beyond the scope of this blog.)
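To make the monochrome-plus-filter idea concrete, here’s a crude simulation in NumPy. The pixel values are invented, and I’m approximating the filtered monochrome sensor’s response as the green channel of an RGB scene – a simplification, but it shows the principle:

```python
import numpy as np

# Two invented pixels: one green-ish, one red-ish, chosen so that the
# TOTAL number of photons (summed over wavelengths) is identical.
rgb = np.array([[[ 30, 180,  30],     # green feature
                 [180,  30,  30]]])   # red background

# Unfiltered monochrome sensor: counts photons regardless of wavelength.
print(rgb.sum(axis=2))      # [[240 240]] - zero contrast, feature invisible

# Behind a green filter (~500-570 nm) only the green component gets through.
print(rgb[:, :, 1])         # [[180  30]] - strong contrast, no colour camera needed
```

That, in a nutshell, is why a monochrome camera plus the right filter (or the right coloured light) so often beats a colour camera on cost and simplicity.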
It’s possible that everything actually looks better in black and white (which is what Paul Simon sometimes sings, as opposed to the title of this piece, which is what he wrote).
Enjoy the article, but treat it as an appetizer.
Tuesday, May 13, 2008
Machine vision hits the streets
On a less sarcastic note, let me say that it’s good to see vision technology being applied to complex problems in everyday life, and not just to inspection automation.
Monday, May 12, 2008
Seeing 360
Machine Vision Consulting (MVC) offer a very similar product which they call CircumSpect™ (in fact I believe MVC were the developers of the OmniView™ system – please correct me if I’m wrong).
And now there’s a Matrox MIL-based competitor. CIVision have launched the ‘Lomax360’ system, which is discussed in detail in the April issue of Vision Systems Design magazine.
Competition really is a wonderful thing.
Sunday, May 11, 2008
What’s wrong with pattern matching?
I have several concerns about pattern matching. First, it’s computationally intensive, meaning that it’s slow. I realize that in these days of quad-core PC architectures that’s less of an issue than in days of yore, but it can still be an issue when the part rate is high.
Second, it’s just plain inefficient. Why use pattern matching when there’s a simpler, more elegant way? If the answer is that you need robustness, then work on your lighting and optics rather than just throwing processing “horsepower” at the problem.
Third, it’s expensive. In a number of the well known machine vision packages the pattern matching tools cost extra (and that’s disregarding the cost of the high-end PC needed to run the tools).
But, in the interest of balance, let me also mention one very good reason for going with pattern matching: speed of application development. In many cases the single most expensive part of putting a vision system on-line is the time of the developer. If pattern matching shaves a week off his time, that’s a big chunk of money saved.
So, should you pattern match? If it saves development time, makes your application more robust, or there’s no other way, then yes. But in all other cases it’s a sledgehammer to crack a nut.
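To show the difference in flavour, here’s a hedged sketch using OpenCV (the file names are placeholders; substitute your own part image and template). The first approach is the sledgehammer; the second often locates a well-lit, high-contrast part just as reliably at a fraction of the computational cost:

```python
import cv2

# Placeholder images - substitute your own captures.
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Pattern matching: normalized cross-correlation at every candidate position.
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)

# The simpler way: with good lighting, threshold and take the blob centroid.
_, binary = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY)
m = cv2.moments(binary)
if m["m00"] > 0:
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
```

Of course the centroid trick assumes one part in the field of view and decent contrast – which brings us right back to getting the lighting and optics right first.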
Thursday, May 8, 2008
One more source of help
Give it a try, and don’t forget to mention who sent you!
Wednesday, May 7, 2008
Deciphering the Value Proposition
But here’s my question: why?
Why should I buy this package as opposed to any of the others out there? What does this do for me that VisionPro, Sherlock, Halcon, Sapera, MIL, or any of the other packages can’t do? What problem does it solve for me, the customer?
Frankly, I just can’t see what’s special about “VisioMint”. Don’t get me wrong; I admire people who set out to develop new products and launch new businesses. The world needs that kind of innovation. But, unless I’ve seriously misunderstood something, this is just a “me too” product with no unique selling point.
If you’re going to go to the effort and expense of developing a new product it seems to me it’s essential to (a) identify a particular need that your product meets, and (b) make sure it’s evident to the world just why your product is special.
So, to the guys at JasVisio, the developers of “VisioMint”, I wish you well, but please tell me why I should buy your vision package.
Tuesday, May 6, 2008
CCD v. CMOS – Battle of the Acronyms
So does it really matter? Well sometimes, yes. This article from Dalsa gives a good overview of the pros and cons of each technology, so I’m not going to plagiarize it here. Suffice to say, lower cost and higher speed cameras often come with CMOS sensors and a prudent buyer makes sure he understands what he’s getting and whether or not it’s appropriate to his task.
Caveat emptor.
Monday, May 5, 2008
Just how does the clutch work?
If you have arrived at this page wanting to learn more about the clutch, let me refer you to these links:
I believe machine vision is much the same. You don’t have to know how edge detection works to get a result from the tool, but if you do have an appreciation of what’s happening under the hood you’ll probably be a smarter and more skillful user.
With that in mind, I’m going to direct you to this blog posting by Jon Titus of Test & Measurement World. Jon is reviewing a book on vision algorithms – possibly a good cure for insomnia – that apparently provides a good explanation of how many tools actually work. I haven’t read the book myself, so I can’t comment on its usefulness, but in principle I do believe anyone working with machine vision is well advised to learn what’s actually going on at the pixel level.
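In that spirit, here’s a minimal sketch of what an edge detection tool is actually doing down at the pixel level: sliding a small gradient kernel (Sobel, in this case) across the grey values. Plain NumPy, deliberately brute-force so every step is visible:

```python
import numpy as np

# Sobel kernel for horizontal gradient: responds to left-to-right brightness change.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def gradient_x(image):
    """Slide the kernel over every 3x3 neighbourhood and sum the products."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(image[y:y + 3, x:x + 3] * sobel_x)
    return out

# A dark-to-bright vertical edge: the response peaks exactly at the transition.
img = np.tile([0, 0, 0, 255, 255, 255], (6, 1)).astype(float)
print(gradient_x(img))
```

Everything a commercial edge tool adds – sub-pixel interpolation, noise filtering, edge polarity – is built on top of that simple idea.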
A little knowledge can’t hurt, can it?
Sunday, May 4, 2008
Thoughtful opinions
I’m not going to waste time reprinting excerpts from the interview. Much better that you use this link to read it all yourself.
Thursday, May 1, 2008
Got to admit, it’s getting better
First, let me say that I’m moving up from version 2.6. I liked the old version, but at times it was a bit unfriendly, so I’m glad to see that 3.5 is much improved. NI have made much of the ‘State Machine’ functionality, but for me that’s less important than the simple addition of a number of tools.
If you’re at all familiar with the earlier versions of VBAI you’ll know that to perform an image processing operation such as thresholding you had to drop out to Vision Assistant. Not a big deal, I know, but somewhat inelegant. Well, now filters and thresholding are available directly from the tools menu, so that’s an improvement. Other changes I spotted (and I don’t claim to have logged them all) are an “advanced” edge tool, a “golden template” tool, and a QR code reader. The range of I/O functions has also grown, as has the ‘other tools’ section with the inclusion of custom overlay and logical operator functions, to name just a few.
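For anyone who hasn’t met it, a golden template comparison is conceptually very simple – something along these lines (a NumPy sketch with an invented tolerance; the real tool adds the expensive parts, namely aligning the part to the template and handling edge effects):

```python
import numpy as np

def golden_template_defects(image, golden, tolerance=30):
    """Flag pixels deviating from a known-good reference image by more
    than `tolerance` grey levels. Assumes the part is already aligned."""
    diff = np.abs(image.astype(int) - golden.astype(int))
    return diff > tolerance  # boolean defect map
```

It’s the alignment step that eats the processing time, which is exactly the computational cost I worry about below.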
So, all in all, 3.5 is a significant advance over 2.6, but in the interests of being evenhanded, I feel I should also flag a reservation. There are now quite a few pattern matching-type tools available (like the golden template tool I mentioned earlier). I’m not sure this is a good thing. Pattern matching is computationally intensive, meaning that your inspections are going to run slowly, and this could be a problem if you plan to run an inspection on the Compact Vision System. But more fundamentally, pattern matching is an easy tool for the inexperienced user to grab hold of. In fact I think it’s too easy – a sledgehammer to crack a nut, in many cases – and having tools of this power available encourages a developer to take the quick route rather than the best route.
Is that a bad thing? I’d love to hear your comments.
