Tuesday, April 5, 2011

Machine vision for process control

I don’t see many vision systems that are implemented to reduce process variability, so an item on the R&D magazine website titled “Imaging system helps improve sandwich bun quality” (March 9, 2011) came as a breath of fresh air. In fact, I was so interested that I dug back to the source, the Georgia Tech Research Institute (GTRI), and tried to figure out what they are actually doing. But before sharing some links and idle speculation, let me take a minute to address why process variability is a very big deal.

Process variability often appears as differences between supposedly identical products. Take cars, for example. When you buy a Toyota Camry you can be pretty certain you’re going to get a well-assembled and highly reliable vehicle. But if you’d been shopping for, let’s say, a Chrysler from a few years back, it was a crapshoot. You might get a great car or van, but there was also a risk of ending up with a total dog.

The same applies to burgers, which is why McDonald’s works very hard to ensure that your customer experience is the same every time. They have rigorous standards for potatoes, beef patties, and so on. But this is where we start to see the manufacturing problem: a process like potato slicing can be engineered to work the same way every time, but when the input material is a natural product, variation is inevitable.

That variation creates unpredictability in the manufacturing process. In the case of potatoes, shape variation will produce some level of waste, but how much will change minute to minute. This means the poor guy trying to ensure just the right quantity of potatoes is delivered to the slicer each hour has his work cut out for him. Deliver too many and some will go bad before they hit the fryer. Deliver too few and the customer has to wait for his order.

The Food Processing Technology Division at GTRI understands this. That’s why they’ve been working on technology to monitor the size and appearance of sandwich buns as they cook, and use that data to adjust the baking process to ensure product consistency.

My assumption is that this is achieved through a combination of color linescan cameras and laser triangulation-based 3D imaging (well, how would you do it?), but I’ve been unable to find any specifics.
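Since GTRI hasn’t published the details, the laser triangulation part is pure speculation on my part, but the geometry behind it is standard: with a camera looking straight down and a laser line projected at an angle, any bump on the surface shifts the line sideways in the image, and the shift maps directly to height. A minimal sketch (all parameter names and values here are my own illustrative assumptions, not anything from GTRI):

```python
import math

def height_from_offset(pixel_offset, mm_per_pixel, laser_angle_deg):
    """Convert the lateral shift of a laser line in the image to a
    surface height, assuming the camera looks straight down and the
    laser sheet is projected at laser_angle_deg from vertical."""
    offset_mm = pixel_offset * mm_per_pixel
    # A surface raised by h shifts the line by h * tan(angle),
    # so invert that relationship to recover the height.
    return offset_mm / math.tan(math.radians(laser_angle_deg))

# e.g. a 30-pixel shift at 0.2 mm/pixel with a 45-degree laser
# corresponds to a 6 mm rise on the bun's surface
print(height_from_offset(30, 0.2, 45))  # → 6.0
```

Repeat that per image column as the conveyor moves the bun under the laser line and you get a full height map, which is presumably how you would measure bun volume and symmetry in-line.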

This article talks about how they look at the buns from beneath, using a slatted conveyor system. (Question: doesn’t it have to be slatted anyway to let the air circulate and achieve even baking?) And this one provides a little more detail on the use of color, 2D size estimation, and the process control algorithms, but I’m left wanting more.
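The articles don’t say what the control algorithms actually are, so here is one guess at the simplest possible closed loop: a proportional adjustment that nudges an oven setpoint based on the error between measured and target bun diameter, clamped to a safe band. Every name, gain, and temperature here is a hypothetical of mine, and in a real oven the sign and size of the gain would have to come from process knowledge:

```python
def adjust_setpoint(current_temp, measured_diameter_mm,
                    target_diameter_mm=100.0, gain=0.5,
                    min_temp=180.0, max_temp=230.0):
    """Proportional control sketch: nudge the oven temperature
    setpoint by gain * (diameter error), then clamp the result
    to a safe operating band."""
    error = target_diameter_mm - measured_diameter_mm
    new_temp = current_temp + gain * error
    return max(min_temp, min(max_temp, new_temp))

# Buns measuring 96 mm against a 100 mm target: nudge the
# setpoint up by 0.5 * 4 = 2 degrees
print(adjust_setpoint(200.0, 96.0))  # → 202.0
```

In practice you would average over many buns before acting, since the vision measurement is noisy and an oven responds slowly, but the point stands: once you can measure the product in-line, even a crude loop like this beats open-loop baking.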

Baking-type processes are widely used in manufacturing, so this kind of machine vision work has a great deal of potential, especially when used to control the process rather than just sort bad from good. I just wish I could get more details. Perhaps Vision Systems Design could help?
