Thursday, May 29, 2008

Computer Vision Resource

I like to share links to ‘go-to’ web sites that house information of value to the machine vision community (if “community” doesn’t sound too pompous), so here’s one that I just found.

The ‘visionbib’ site is an “annotated bibliography … of computer vision papers accumulated over a number of years,” put together and maintained by Keith Price, who, according to his site, is a retired computer vision researcher.

My brief perusal revealed this to be a real treasure trove of info, but note that the theme is computer vision and not machine vision. Yes, they overlap in that machine vision uses computer vision tools, but ‘visionbib’ is not a place to learn about implementing industrial vision systems (that’s what this blog is for!). ‘Visionbib’ will guide you to sources that will help you learn how image processing tools work. If you’re willing to do the legwork to access the referenced papers, you’ll gain a much deeper understanding of the science that underpins our practical applications.

Dip into it when you want to improve your grasp of the fundamentals, but think of it as your vision “Yoda” – you’ll need to figure out how to apply what you learn!

Wednesday, May 28, 2008

It’s not how many pixels you have …

Resolution is a really complex topic. Sure, there’s the issue of how many pixels you have per inch (or millimeter) of your field of view, but there’s also the question of the resolving power of the lens and the actual size of the pixels on your sensor.

What it really boils down to is how many line pairs per millimeter can you see?

Line pair?

Imagine a series of black and white stripes. One black and one white is a line pair. Now you know that if you put these really close together – say 12 pairs per inch – your camera will cease to show you solid black transitioning to perfect white (or 0 to 255 in grayscale values); in fact you’ll see some blurring. And as the line pairs get finer (more line pairs per inch), the blurring will increase. After a while you’ll notice the transitions are more like 30 to 220, and as you pack in ever more line pairs eventually it will all blur down to a gray stripe.
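If you like numbers, here’s a minimal Python sketch of the two quantities at play: the modulation (contrast) between the stripes, using the gray values above, and the hard ceiling that sampling puts on resolvable line pairs. The pixels-per-millimeter figure is an assumed value, purely for illustration.

```python
def modulation(i_max, i_min):
    """Michelson contrast of a line-pair image: (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

print(modulation(255, 0))   # crisp black-to-white transitions -> 1.0
print(modulation(220, 30))  # the blurred case described above -> 0.76

# Sampling sets its own hard limit: you need at least two pixels per
# line pair (Nyquist), so a camera that gives you 10 pixels/mm of field
# of view (an assumed figure) can never resolve more than 5 lp/mm, no
# matter how good the lens is.
pixels_per_mm = 10
print(pixels_per_mm / 2)    # -> 5.0 line pairs per mm, at best
```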

Still, these are just generalities. If you want to get specific, I suggest you start by reading this excellent note by Jon Titus of Test & Measurement World.

Cat fight in the Vision world!

From www.boston.com (also reported on Yahoo) comes news that Cognex is suing MVTec for patent infringement. Specifically, they allege that Halcon “infringes the claims of at least seven Cognex patents.” (Click here to read more.)

Interesting how this comes just days after the MVTec webcast that discussed in some detail the strengths of the Halcon product. Maybe the folks in Natick were watching!

I’m eager to find out the specific details but so far my sources aren’t talking. However, Cognex has a reputation for aggressively protecting its patents, so it will be very interesting to see how this turns out.

Tuesday, May 27, 2008

Going deep

There’s a ton of light and fluffy promotional material out there that tells you machine vision will solve all your problems, but there’s precious little that gets into the details of how. That’s why the recent VSD webcast on the subject of “Machine Vision Algorithms and Applications” was such a pleasure to watch.

Three extremely knowledgeable gentlemen from MVTec Software – Carsten Steger, Markus Ulrich and Christian Wiedemann – discussed a number of applications, and in each case went into a lot of detail about the approach taken and the algorithms used. (If you’ve been paying attention you’ll recall that these three guys have a book out with the same title as the webcast.)

MVTec is the company behind Halcon, itself a world-class machine vision software package, but to their credit I don’t think there was a single commercial ‘plug’ for the product through the entire presentation. Frankly, the webcast sold Halcon beautifully, and more overt ‘salesmanship’ would just have been unnecessary.

If you missed out, use this link to access it (you’ll need to register first, but it is free), and be sure to have 45 minutes to an hour available.

Personally, I found the webcast just an appetizer. I’ll be buying the book very soon.

Monday, May 26, 2008

Computer vision rocks!

Just finished watching a cool video made by four computer science students at UC Berkeley. They’ve put together some clever image processing that lets a computer play “Rock Band” via a camcorder pointed at a TV screen. I don’t want to spoil your enjoyment, so I suggest you just click here to go to their blog, then let the movie play.

Trust me, you’ll like it.

Thursday, May 22, 2008

Doing “The Knowledge”

Before being set loose, London cab drivers (“cabbies”) have to master The Knowledge. This means learning their way through the maze of streets and complex one-way systems that constitutes the capital of the UK. It takes time, but once acquired, a “cabbie” can take you from A to B faster than any other mode of transport.

Machine vision professionals need to do the same – not memorizing the London “A to Z” but getting a thorough grasp of the technology basics. Andor, who specialize in “the development and manufacture of high performance digital cameras,” have an excellent section on their website that will help.

There’s a lot of very detailed info on subjects such as CCD Architectures, Binning, and Blooming. If you’re serious about mastering camera technology, click here to access the information provided by Andor.

Wednesday, May 21, 2008

In praise of monitors

Many smart cameras and vision sensors are designed to run headless. In other words, they stand guard over production like silent sentinels, ejecting nonconforming product whenever they see it. Or at least that’s the idea.

But how do you know the darned thing is actually working? Yes, I know your quality system requires an hourly check, but even with that, isn’t it nice to see what your camera is seeing? Having a monitor set up on the line lets me see that the camera is alive and kicking. I want to walk past and know that the parts are still in the field of view, the lights are on, and the system is not in ‘bypass’ mode.

Banner and Keyence let you hook up a simple monitor to the camera. I wish other companies would do the same.

Tuesday, May 20, 2008

A crowded marketplace

At times the machine vision world reminds me of one of those North African bazaar scenes that crop up in the Indiana Jones movies and others of their ilk – crowds of vendors hawking their wares in what appears to be a foreign tongue, and all getting in my way when I want to do some work. Inevitably, those who shout loudest get paid the most attention (which, I always thought, was a big factor in the success of DVT), but this means it’s possible to overlook the quiet guy with a product that might actually be different.

Vision-Components is one such company, quietly slogging away through good years and bad, all the while growing their product offering. I’m not completely convinced by their claim to have invented the smart camera, but they were certainly amongst the early pioneers, and the very fact that they’re still here suggests you should give them a look.

If you check the specs, you’ll observe that V-C are wedded to the TI processor family. That means you’ll need to run the V-C software, so this is not an equivalent of the Sony Smart Camera, which seems content to host a variety of vision packages; but if you’ve no great investment in software, that might not be a problem for you.

Give it a go – you just might like it.

Monday, May 19, 2008

Another conference opportunity

Still looking for that chance to travel to an interesting destination and tell the world of your expertise in machine vision? Then this conference in Japan may be just what you need.

I’ve never had the chance to visit the land of the rising sun: perhaps it’s time I wangled myself a trip (business class, of course!)

Sunday, May 18, 2008

Synchrotech Support Blog: FireRepeater 800 Pro FireWire IEEE 1394b Hub

I'm not big on plugging products I have no familiarity with, but this just popped up in my email and I thought it looked worth sharing ...

Synchrotech Support Blog: FireRepeater 800 Pro FireWire 800 IEEE 1394b Repeater Hub 4 Port

It appears to be an industrial FireWire 800 (IEEE 1394b) hub, which could be useful to have around.

Anyone got any direct experience with it?

Think like a vision system

Regular readers know that I’m a big believer in the value of education, so it will be no surprise that I’m recommending the first-rate machine vision primer from Greg Hollows and Glenn Archer, published in the May ’08 Assembly magazine. (Unfortunately I can’t find a way to link directly to the article, so you’ll need to open up the digital version of the May edition.)

However, regulars will also know that I can never resist pointing out an opportunity for improvement, so here it comes.

The weakness with this article is that it focuses (yes, that’s a deliberate pun) on hardware rather than on solving the application. The guys note that lighting is the “most critical and least understood factor,” but they don’t discuss what you’re trying to achieve with the lighting. Perhaps that needs a separate article, but I can sum it up in one word: contrast.

OK, maybe it needs a few more words, so here’s a slightly better explanation. Your vision system “sees” only grayscale values – all it knows about the scene is a matrix of numbers – it has no inherent ability to understand what it’s looking at, so you have to make it very obvious. This means you need to light your target to create a strong contrast between what you want to see and what you don’t. Think like a vision system and not a human.
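To make that concrete, here’s a tiny numpy sketch of the only thing your software will ever evaluate – the difference between the numbers in one patch of the matrix and the numbers in another. The gray values and ROI coordinates are made up for illustration.

```python
import numpy as np

# A synthetic 'scene': dim background with one brighter rectangular
# feature. All gray values and ROI coordinates are invented.
image = np.full((480, 640), 60, dtype=np.uint8)   # background at ~60 gray
image[200:240, 300:360] = 180                     # feature at ~180 gray

feature = image[200:240, 300:360].astype(float)     # ROI over the feature
background = image[100:140, 100:160].astype(float)  # ROI over plain background

contrast = feature.mean() - background.mean()
print(f"feature-to-background gray difference: {contrast:.0f}")  # -> 120
```

If that difference is small, fix the lighting before you reach for a cleverer algorithm.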

Friday, May 16, 2008

Off-topic, but interesting

Light is one of the keys to machine vision, and smart vision people recognize that it’s possible to harness both wavelength and polarization to solve some of our application challenges. However, humans are hampered in this by our inability to sense light in the UV or IR parts of the spectrum, or to perceive polarization.

Research in Australia has shown that, in this regard, we’re significantly inferior to the mantis shrimp. Apparently this little beastie can see way more than you and I. (Thanks again to R&D magazine for another intriguing article.)

I wonder if there’s a way I can put a mantis shrimp to work in my lab?

Thursday, May 15, 2008

Light at the end of the tunnel?


PPT is a company that’s been around a long time (in vision industry terms, anyway), but in recent years they’ve found the going tough. The last public figures (for the year ending 10/31/07) showed a continuing decline in revenue, addressed in part by selling some patents (which sounds like selling the family silver).

But PPT Vision President Joe Christenson argues that volumes of the IMPACT A10 smart camera will continue to grow on the back of ongoing product development. Unfortunately, this is a lower-priced item than the rest of the range, which, when you stop to look, is remarkably broad. I suggest you take a minute to check it out.

So what should PPT do? Is their only option to slash costs, or are there ways to grow the business? Do you MBAs out there have any suggestions?

Here’s mine: marketing. Do lots more marketing. You have a good product, but the machine vision users of the world don’t know about it. Spread the word, let people try IMPACT, and when they see its worth, sales will follow.

Wednesday, May 14, 2008

“Everything looks worse in black and white”

My May edition of Evaluation Engineering is hot off the press, and as always I turned straight to the machine vision article. This is an interesting piece by Robert Howison of Dalsa about the value of color in machine vision. It provides a reasonable overview of the issues but, to be quite frank, I thought it needed more detail.

The main point Mr. Howison is trying to make is that color is often not necessary in a machine vision system, which I agree with, but I think more explanation would have been helpful.

First, we need to remember that the CCD or CMOS sensor does not sense wavelength; it just catches photons of light that pass through the lens of the system. So all it’s detecting is the quantity of light.

Second, a filter will let us trap photons of certain wavelengths, so preventing them from reaching our sensor. For example, if we use a green filter we are allowing photons with wavelengths in the region of 500 to 570 nm to pass through, while absorbing all other wavelengths. This is how we detect green light.

So, it doesn’t take too much imagination to see that combinations of colored light and colored filters can be used to help a monochrome camera detect light of a specific wavelength. Thus the complexity of color is often unnecessary. (I should add that color image processing can get really ‘hairy’, but any further discussion is beyond the scope of this blog.)
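Here’s a crude numpy sketch of that idea. The three-element weights standing in for the scene colors and for the filter’s passband are invented for illustration – real spectral curves are continuous, and the actual response depends on your illumination, filter, and sensor together.

```python
import numpy as np

# Four patches described as rough RGB reflectances (invented values).
scene = np.zeros((2, 2, 3))
scene[0, 0] = [0.9, 0.1, 0.1]   # red feature
scene[0, 1] = [0.1, 0.9, 0.1]   # green feature
scene[1, 0] = [0.1, 0.1, 0.9]   # blue feature
scene[1, 1] = [0.8, 0.8, 0.8]   # near-white background

no_filter = scene.sum(axis=2)               # bare mono sensor: total light only
green_filter = np.array([0.05, 0.9, 0.05])  # stand-in for a ~500-570 nm pass band
with_filter = scene @ green_filter          # mono sensor behind the filter

print(no_filter)    # the red, green and blue patches all look about the same
print(with_filter)  # the green patch is now clearly brighter than red or blue
```

Unfiltered, the colored patches are indistinguishable to the monochrome camera; behind the green filter, green features jump out. That’s the whole trick.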

It’s possible that everything actually looks better in black and white (which is what Paul Simon sometimes sings, as opposed to the title of this piece, which is what he wrote).

Enjoy the article, but treat it as an appetizer.

Tuesday, May 13, 2008

Machine vision hits the streets

From the Business section of the Los Angeles Times comes news of an interesting machine vision application. Apparently the “experts” that oversee road traffic have realized that vision may have some application in traffic control. Imagine that!

On a less sarcastic note, let me say that it’s good to see vision technology being applied to complex problems in everyday life, and not just to inspection automation.

Monday, May 12, 2008

Seeing 360

It’s been around a year since Cognex launched their system for inspecting cylindrical surfaces. Called OmniView™, it takes advantage of their ‘Pat’ capabilities to unwrap and stitch together images from four cameras. It’s a neat way of avoiding the complexity of rotating a part in front of a linescan camera.
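For the curious, here’s a toy Python sketch of the geometry behind ‘unwrapping’ – emphatically not how Cognex or anyone else implements it, just the basic idea that equal steps along the cylinder’s surface map to image columns that get more and more compressed toward the edges of each camera’s view.

```python
import numpy as np

def unwrap_columns(image, radius_px, cx):
    """Resample columns so equal output steps are equal arc lengths.

    Orthographic approximation: arc position u on a cylinder of radius R
    appears at image column x = cx + R * sin(u / R).
    """
    h, w = image.shape
    quarter = int(np.pi / 2 * radius_px)   # arc pixels in a 90-degree segment
    u = np.arange(quarter) - quarter // 2  # arc coords, +/-45 deg of center
    x_src = cx + radius_px * np.sin(u / radius_px)
    cols = np.clip(np.round(x_src).astype(int), 0, w - 1)
    return image[:, cols]                  # nearest-neighbor column remap

rng = np.random.default_rng(0)
side_view = rng.random((100, 320))         # stand-in for one camera's image
flat = unwrap_columns(side_view, radius_px=150, cx=160)
print(flat.shape)   # height unchanged; width is ~90 degrees of unwrapped arc
```

Do that for four cameras spaced 90 degrees apart, stitch the strips, and you have the full 360.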

Machine Vision Consulting (MVC) offer a very similar product which they call CircumSpect™ (in fact, I believe MVC were the developers of the OmniView™ system – please correct me if I’m wrong).

And now there’s a Matrox MIL-based competitor. CIVision have launched the ‘Lomax360’ system, which is discussed in detail in the April issue of Vision Systems Design magazine.

Competition really is a wonderful thing.

Sunday, May 11, 2008

What’s wrong with pattern matching?

A few days back I commented on the number of pattern matching-type tools offered in the latest release of Vision Builder AI from National Instruments (thanks to Keyence for the link to their concise definition), and suggested that this might not be “a good thing”. It occurs to me that perhaps you, my loyal reader, deserve something of an explanation.

I have several concerns about pattern matching. First, it’s computationally intensive, meaning that it’s slow. I realize that in these days of quad-core PC architectures that’s less of a problem than in days of yore, but it can still bite when the part rate is high.
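To see where the time goes, here’s a brute-force normalized cross-correlation in plain Python/numpy. Commercial tools are far smarter than this (image pyramids, geometric features, and so on), but the nested search below shows the flavor of the work – roughly W×H×w×h multiply-adds, and that’s before you add rotation or scale to the search.

```python
import numpy as np

def ncc_search(image, template):
    """Find the best match by exhaustive normalized cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -1.0, (0, 0)
    for y in range(image.shape[0] - th + 1):        # every candidate row
        for x in range(image.shape[1] - tw + 1):    # every candidate column
            win = image[y:y+th, x:x+tw]
            w = win - win.mean()
            denom = np.sqrt((w * w).sum()) * t_norm
            if denom == 0:
                continue                            # skip perfectly flat regions
            score = (w * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

rng = np.random.default_rng(1)
img = rng.random((120, 160))
tmpl = img[40:56, 60:76].copy()   # plant a known target in the scene
print(ncc_search(img, tmpl))      # -> ((40, 60), ~1.0)
```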

Second, it’s just plain inefficient. Why use pattern matching when there’s a simpler, more elegant way? If the answer is that you need robustness, then work on your lighting and optics rather than just throwing processing “horsepower” at the problem.

Third, it’s expensive. In a number of the well-known machine vision packages the pattern matching tools cost extra (and that’s disregarding the cost of the high-end PC needed to run them).

But, in the interest of balance, let me also mention one very good reason for going with pattern matching: speed of application development. In many cases the single most expensive part of putting a vision system on-line is the time of the developer. If pattern matching shaves a week off his time, that’s a big chunk of money saved.

So, should you pattern match? If it saves development time, makes your application more robust, or there’s no other way, then yes. But in all other cases it’s a sledgehammer to crack a nut.

Thursday, May 8, 2008

One more source of help

Over the months I’ve been blogging away I’ve tried to share links to places where you can have your machine vision questions answered. Here’s another that I recently stumbled upon: Vision Systems Design (VSD) magazine has its own forum. To say it’s lightly used is an understatement (just two postings so far in 2008), but knowing the quality of work put out by VSD, and the caliber of their readership, I think you could expect an intelligent response to any questions posed.

Give it a try, and don’t forget to mention who sent you!

Wednesday, May 7, 2008

Deciphering the Value Proposition

I spotted an ad for the “VisioMint” package in April’s Vision Systems Design magazine and spent a few minutes checking out the related web site. It seems to be an interesting machine vision software product, strong on image processing although perhaps less so on vision tools.

But here’s my question: why?

Why should I buy this package as opposed to any of the others out there? What does this do for me that VisionPro, Sherlock, Halcon, Sapera, MIL, or any of the other packages can’t do? What problem does it solve for me, the customer?

Frankly, I just can’t see what’s special about “VisioMint”. Don’t get me wrong; I admire people who set out to develop new products and launch new businesses. The world needs that kind of innovation. But, unless I’ve seriously misunderstood something, this is just a “me too” product with no unique selling point.

If you’re going to go to the effort and expense of developing a new product it seems to me it’s essential to (a) identify a particular need that your product meets, and (b) make sure it’s evident to the world just why your product is special.

So, to the guys at JasVisio, the developers of “VisioMint”, I wish you well, but please tell me why I should buy your vision package.

Tuesday, May 6, 2008

CCD v. CMOS – Battle of the Acronyms

If you spend any time at all reading the vision-related press instead of working, at some point you’ll have come across the whole “which is the better sensor” debate. Now while this might be of great interest to a few of our – dare I say it – “geekier” brethren, I suggest that the vast majority of vision users don’t give a damn. All they want is a camera that produces an image they can work with, and doesn’t break the bank.

So does it really matter? Well, sometimes, yes. This article from Dalsa gives a good overview of the pros and cons of each technology, so I’m not going to plagiarize it here. Suffice to say, lower-cost and higher-speed cameras often come with CMOS sensors, and a prudent buyer makes sure he understands what he’s getting and whether or not it’s appropriate to his task.

Caveat emptor.

Monday, May 5, 2008

Just how does the clutch work?

When I was learning to drive a stick (‘manual transmission’, for all you European readers) I found mastering the clutch to be exceptionally difficult. Eventually my father found some diagrams (probably in a Haynes manual) that helped him explain what happened when I pushed down on the left-most pedal. Once I understood how the friction plate moved away from the flywheel it all made much more sense, and I have had no trouble in shifting up and down ever since.

If you have arrived at this page wanting to learn more about the clutch, let me refer you to these links:

I believe machine vision is much the same. You don’t have to know how edge detection works to get a result from the tool, but if you do have an appreciation of what’s happening under the hood you’ll probably be a smarter and more skillful user.
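By way of example, here’s what an edge tool boils down to at the pixel level – a minimal 1-D sketch with made-up gray values:

```python
import numpy as np

# A line of gray values sampled across a dark-to-light edge (invented data).
profile = np.array([30, 31, 29, 32, 120, 210, 221, 219, 220], dtype=float)

# Central-difference derivative: gradient[i] = (profile[i+2] - profile[i]) / 2
gradient = np.convolve(profile, [1, 0, -1], mode="valid") / 2.0
edge_index = int(np.argmax(np.abs(gradient))) + 1   # +1 re-centers 'valid' output
print(gradient)
print(f"edge located at pixel {edge_index}")        # -> pixel 4, the transition
```

Real edge tools refine that peak by interpolation to get subpixel positions, but the principle – differentiate, then find the extremum – is the same.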

With that in mind, I’m going to direct you to this blog posting by Jon Titus of Test & Measurement World. Jon is reviewing a book on vision algorithms – possibly a good cure for insomnia – that apparently provides a good explanation of how many tools actually work. I haven’t read the book myself, so I can’t comment on its usefulness, but in principle I do believe anyone working with machine vision is well advised to learn what’s actually going on at the pixel level.

A little knowledge can’t hurt, can it?

Sunday, May 4, 2008

Thoughtful opinions

Every month Vision Systems Design magazine publishes an interview with someone notable from the machine vision world. The latest is with Ned Lecky, of Lecky Integration. I didn’t know this, but Ned was the originator of the Sherlock package. He also has some very interesting thoughts on the direction machine vision needs to go in the future.

I’m not going to waste time reprinting excerpts from the interview. Much better that you use this link to read it all yourself.

Thursday, May 1, 2008

Got to admit, it’s getting better

This week is taking on a bit of a National Instruments theme, but I don’t think the guys in Austin will mind too much. I finally got around to spending some quality time with Vision Builder AI 3.5, so today’s homily is on what I found.

First, let me say that I’m moving up from version 2.6. I liked the old version, but at times it was a bit unfriendly, so I’m glad to see that 3.5 is much improved. NI have made much of the ‘State Machine’ functionality, but for me that’s less important than the simple addition of a number of tools.

If you’re at all familiar with the earlier versions of VBAI you’ll know that to perform an image processing operation such as thresholding you had to drop out to Vision Assistant. Not a big deal, I know, but somewhat inelegant. Well, now filters and thresholding are available directly from the tools menu, so that’s an improvement. Other changes I spotted (and I don’t claim to have logged them all) are an “advanced” edge tool, the inclusion of a “golden template” tool, and the addition of a QR code reader. The range of I/O functions has also grown, as has the ‘other tools’ section with the inclusion of custom overlay and logical operator functions, to name just a few.

So, all in all, 3.5 is a significant advance over 2.6, but in the interests of being evenhanded, I feel I should also flag a reservation. There are now quite a few pattern matching-type tools available (like the golden template tool I mentioned earlier). I’m not sure this is a good thing. Pattern matching is computationally intensive, meaning that your inspections are going to run slowly, and this could be a problem if you plan to run an inspection on the Compact Vision System. But more fundamentally, pattern matching is an easy tool for the inexperienced user to grab hold of. In fact I think it’s too easy – a sledgehammer to crack a nut, in many cases – and having tools of this power available encourages a developer to take the quick route rather than the best route.

Is that a bad thing? I’d love to hear your comments.