The end of one year and beginning of
the next is traditionally a time to take stock. Magazines, TV shows and yes,
even blogs like this, are full of such articles. But as I’m too pushed for time
to compose my own (yes, that’s code for “lazy”), I’m going to link to and
critique a great article of the looking-forward genre: “Machine
Vision: The Future of Machine Vision” written by Ben Dawson and published on the Vision &
Sensors site, November 28th, 2011.
Ben makes some great points, most of
which I fully endorse. I especially agree with his plea for kinder and gentler
algorithms, because I believe it’s the lack of ease of use that’s holding back
the industry. But to expand on Ben’s points, what I’d really like are software
tools that seamlessly account for or tolerate changes in lighting. I want the
software to understand the difference between an edge and a shadow, and I want
it to deal with lighting that changes when someone opens an enclosure door.
Readers might point to PatMax from
Cognex as such a tool, but I’m not convinced. Yes, it does some of what I want,
but I’m looking for it to be invisible to the user. I want the vision system to
just understand the difference between a part and its shadow. Is that too much
to ask?
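To show the kind of trick I have in mind (an assumption on my part, not anything PatMax or any vendor actually does): a shadow scales all three color channels roughly equally, so normalizing each pixel by its brightness suppresses shadow boundaries while real material edges survive. Here’s a toy illustration in Python/NumPy, with a made-up one-row “scene”:

```python
import numpy as np

def chromaticity(img):
    """Normalize each pixel by its brightness. Shadows, which scale
    all channels roughly equally, largely disappear; material edges,
    which change color, survive."""
    s = img.sum(axis=-1, keepdims=True)
    return img / np.maximum(s, 1e-6)

# Synthetic scan line: a grey background, the same background in
# shadow, and a red part. Values are invented for the example.
row = np.array([[0.6, 0.6, 0.6],   # grey background, lit
                [0.3, 0.3, 0.3],   # grey background, in shadow
                [0.8, 0.1, 0.1]])  # red part

c = chromaticity(row)
# At the shadow boundary the chromaticity is unchanged, so the
# false "edge" vanishes; at the part boundary it changes strongly.
shadow_step = np.abs(c[1] - c[0]).max()
part_step = np.abs(c[2] - c[1]).max()
print(shadow_step, part_step)
```

It’s a crude model, of course, since real shadows aren’t perfectly neutral, but it hints at how a vision tool could tell a part from its shadow without the user lifting a finger.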
One area missing from Ben’s paper is
optics. I would love to see smarter optics. I want a machine vision system that
can automatically focus on what I put in front of it, and can stay focused even
if the working distance changes. My feeling is that liquid lenses will give us
this ability within a year or two. It can’t happen soon enough!
Last, Ben has a great product idea.
He discusses a room painting app – just snap a picture of your room or house
and then recolor it in a paint of your choosing. I’ve actually tried this
myself using Photoshop, so I think it’s a great idea. What it needs, though, is a
database of the reflective properties of every surface in the image. It also needs
to identify what each surface is, and apply those properties. Then it has to
determine the light source and model how that light will scatter from the
surfaces and how much will reach the camera.
It’s all doable; it just needs
programming. Cut me in for 10% please, Ben!
Yogi Berra once said, “It’s tough to
make predictions, especially about the future,” and I’m sure Ben’s list, with
additions by me, will prove that true. Maybe, just maybe, there’s some
game-changing idea lurking around the corner that will shake things up in ways
we can’t even imagine.
1 comment:
I'm commenting a bit late, but there is a system with hybrid (fuzzy-neural) logic called VEDO, made by the Italian company VEA.
www.vea.it; maybe you could give it a try. They did a demo for us and it was quite impressive.
Ant