OT: Presence/Absence Detection -- use machine vision?


This is a new area for us, and I'm looking for a place to start.

We want to detect the presence or absence of an object (industrial, relatively large, like a 1 m cube). When the object is sensed, it triggers the next state in the processing. Like an assembly line.

Currently photoeye sensors are the norm, and work OK. The price isn't too bad, and wiring is only needed to one point.

There is a situation with oddly-shaped objects that might be "present", but may not break the photoeye beam because the front extended edge may be above or below the nominal level of the photoeye.

So let's do more photoeyes or a light curtain. Lots more wiring; lots more expense. It essentially multiplies the cost.

The idea being pursued is to use a CCD camera of some type and essentially do a "motion detection" of some sort. In other words, have a reference frame of the "empty" state, and then detect the change to the "present" state. At this point we don't care about object identification or orientation verification, though these are obvious future extensions.

Where do I start for the most cost-effective solution? Looking at components, it seems that buying a complete package like a WebCam or the bargain CVS digital camera is cheaper than trying to get the CCD, lens, power supply, etc. separately, and the included or other inexpensive software could do the task. But then there would be a whole PC involved, a USB master, etc.

We certainly don't need 24-bit color. Monochrome would be enough. 320x240 at 1 bit per pixel is about 10kB per frame, so an AVR with 32k of external storage could hold a couple of reference frames and the active frame.
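
To make that concrete, the comparison step could be as dumb as the following (plain C, untested; it assumes the frames are already packed 1 bit per pixel, 8 pixels per byte, and the 5% "changed" threshold is just a number to tune on site):

#include <stdint.h>

#define FRAME_W      320UL
#define FRAME_H      240UL
#define FRAME_BYTES  ((FRAME_W * FRAME_H) / 8)   /* 9600 bytes at 1 bit per pixel */

/* Count how many pixels differ between the stored "empty" reference frame
 * and the frame just captured.  Frames are packed 8 pixels per byte. */
static uint32_t count_changed_pixels(const uint8_t *reference,
                                     const uint8_t *current)
{
    uint32_t changed = 0;
    for (uint16_t i = 0; i < FRAME_BYTES; i++) {
        uint8_t diff = reference[i] ^ current[i];   /* set bits = changed pixels */
        while (diff) {                              /* count the set bits */
            changed += diff & 1u;
            diff >>= 1u;
        }
    }
    return changed;
}

/* "Object present" if more than some fraction of the pixels changed.
 * The 5% figure is an arbitrary placeholder. */
static uint8_t object_present(const uint8_t *reference, const uint8_t *current)
{
    const uint32_t threshold = (FRAME_W * FRAME_H) / 20;   /* 5% of 76800 pixels */
    return count_changed_pixels(reference, current) > threshold;
}

The real work would obviously be in getting the frames in and tuning that threshold against lighting changes, noise, and so on.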

One thing that would certainly need to be considered is varying light levels. Does flooding the area with our own infrared light source make any sense?

Let's say our industrial photoeye is a US$100 package. I'd like to target the same cost point but have a >>better<< system than the single-beam photoeye.

Main question 1: Where to start the research. Machine vision? Robotics clubs? WebCam software?

Main question 2: Would an AVR have enough horses to do even simple presence/absence detection as I outlined above?

Main question 3: Are there other approaches that might be better suited? E.g. ultrasound, radar, laser, ???

A totally new area for us. I'm sure some of the regular posters will be able to provide direction.

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


You didn't say what type of material the object is. Given that, let me throw out two off-the-wall suggestions.

The first is an electric field proximity sensor. I believe this was based on some work done at MIT Labs. You can get more information at http://www.bik.com/.

Second suggestion - RFID. I don't know how close to the object your AVR is, but this is a technology I've been keeping an eye on for inventory tracking. Range for passives isn't that great (although they are getting pretty cheap) but I believe there are also active solutions.

Dave


Hi Lee,

Depending on your price point, you might look at LabVIEW and their vision packages. They do exactly what you are wanting to do, and more, in very short order. http://www.ni.com

But to do it on the cheap, I've seen security systems that do the image differencing thing with a stored frame to determine when to full-frame a certain camera.

To a very first approximation it should be pretty easy as long as lighting is consistent between frames, etc. But if you hook up a video camera to a frame grabber, you can grab images, maybe do some kind of equalization, maybe have hot spots/ignore areas, do the differencing, and then act on it.
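
For what it's worth, the core of the differencing really can be this small (plain C, untested; the decimated 64x48 working size, the ignore mask, and both thresholds are placeholders to tune):

#include <stdint.h>
#include <stdlib.h>

#define IMG_W  64                   /* decimated working resolution, not the full frame */
#define IMG_H  48
#define PIXELS (IMG_W * IMG_H)

/* ref/cur: 8-bit grayscale frames.  ignore: 1 = hot spot to skip (e.g. a
 * flickering lamp or a moving conveyor edge in the background). */
static int scene_changed(const uint8_t *ref, const uint8_t *cur,
                         const uint8_t *ignore)
{
    const uint8_t  pixel_delta   = 25;            /* per-pixel change that counts */
    const uint16_t pixels_needed = PIXELS / 10;   /* 10% of the frame must change */
    uint16_t changed = 0;

    for (uint16_t i = 0; i < PIXELS; i++) {
        if (ignore[i])
            continue;                             /* masked-out region */
        if (abs((int)cur[i] - (int)ref[i]) > pixel_delta)
            changed++;
    }
    return changed > pixels_needed;
}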

But depending on how much your time is worth, the NI software might save you a bunch of money.

http://sine.ni.com/apps/we/nioc.vp?cid=10419&lang=US

Please note - this post may not present all information available on a subject.


What about an RFID tag on the object?

The new "EPC" (UHF) are passive and "stick on" like a shipping label. Range is 3-8 feet, depending on the object and the environment. If the object is RF-transparent (ie, wood, plastic, etc), you can read through the object with one reader. Metallic and ionic fluid (ie, water-based) objects are more of a challenge.

There are also much lower frequency "tags" that run at a few hundred kHz or 13 MHz, but the read range is very short. Examples are the pet ID tags and anti-theft tags. And there are the active UHF "tags" (SpeedPass, etc).

UHF readers are available with RS232 ports. They take some intelligence (AVR is more than intelligent enough). Some readers are more highly integrated and offer WLAN connectivity, etc. There are fixed readers (called "portal readers" because they are designed to be used at warehouse dock doors) and there are hand-held readers (my company makes some).
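
For illustration only, the AVR side could be about as simple as the sketch below (avr-gcc style, untested, ATmega8-ish register names assumed). The 'Q' query and 'T' reply bytes are invented -- a real reader has its own command set, so check its manual:

#define F_CPU 8000000UL            /* assumed clock */
#define BAUD  9600UL

#include <avr/io.h>
#include <stdint.h>

static void uart_init(void)
{
    uint16_t ubrr = (F_CPU / (16UL * BAUD)) - 1;
    UBRRH = (uint8_t)(ubrr >> 8);
    UBRRL = (uint8_t)ubrr;
    UCSRB = (1 << RXEN) | (1 << TXEN);                   /* enable RX and TX */
    UCSRC = (1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0);  /* 8N1 */
}

static void uart_put(uint8_t c)
{
    while (!(UCSRA & (1 << UDRE)))
        ;                          /* wait for the transmit buffer */
    UDR = c;
}

static uint8_t uart_get(void)
{
    while (!(UCSRA & (1 << RXC)))
        ;                          /* block until a byte arrives */
    return UDR;
}

int main(void)
{
    uart_init();
    DDRB |= (1 << PB0);            /* "object present" output to the line controller */

    for (;;) {
        uart_put('Q');                          /* made-up "any tags in view?" query */
        uint8_t tag_seen = (uart_get() == 'T'); /* made-up "yes, tag seen" reply */

        if (tag_seen)
            PORTB |= (1 << PB0);
        else
            PORTB &= ~(1 << PB0);
    }
}

In practice you would also want a receive timeout instead of the blocking read.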

Just an idea.

Jim

 

Until Black Lives Matter, we do not have "All Lives Matter"!

 

 


Actually, building on Dave's post, the Quantum Research proximity sensors might work too.

Basically you have a sensor that is just one plate of a capacitor. Anything placed near the plate with a dielectric constant different from that of air and the surroundings could probably be detected. And they are programmable for sensitivity, hysteresis, drift, etc.
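
If you wanted to roll your own instead of buying the Quantum parts, the crude RC-charge-time version of the idea looks about like this (avr-gcc sketch, untested; the plate is assumed to hang on PC0 through a ~1 Mohm pull-up to Vcc, and the threshold is a placeholder):

#include <avr/io.h>
#include <stdint.h>

#define SENSE_BIT PC0      /* plate on PC0, pulled to Vcc through ~1 Mohm (assumed) */

/* Discharge the plate, release the pin, then count how long the external
 * pull-up takes to charge the plate back to a logic high.  More capacitance
 * (something near the plate) means a bigger count. */
static uint16_t charge_time(void)
{
    uint16_t t = 0;

    DDRC  |=  (1 << SENSE_BIT);        /* drive the plate low to discharge it */
    PORTC &= ~(1 << SENSE_BIT);
    for (volatile uint8_t i = 0; i < 100; i++)
        ;                              /* crude settling delay */

    DDRC &= ~(1 << SENSE_BIT);         /* release: pin becomes a high-Z input */

    while (!(PINC & (1 << SENSE_BIT)) && t < 0xFFFF)
        t++;                           /* count until the plate reads "1" */

    return t;
}

int main(void)
{
    const uint16_t threshold = 250;    /* placeholder: tune against the empty state */

    DDRB |= (1 << PB0);                /* "object present" output */

    for (;;) {
        if (charge_time() > threshold)
            PORTB |= (1 << PB0);
        else
            PORTB &= ~(1 << PB0);
    }
}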

Please note - this post may not present all information available on a subject.


And on the RFID tack, what about a plain old tuned and shorted antenna? These are used all the time in stores to protect against theft. Look for a tuned transmitter to be pulled down by the load in proximity and you have object detection, and the tags are very cheap. No identification, though.
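
The detection side would be little more than watching a rectified sample of the transmitter level on the ADC and flagging a sag (avr-gcc sketch, untested; the channel, prescaler, and the sag figure are placeholders):

#include <avr/io.h>
#include <stdint.h>

static void adc_init(void)
{
    ADMUX  = (1 << REFS0);                               /* AVcc reference, ADC0 */
    ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1);  /* enable, clk/64 */
}

static uint16_t adc_read(void)
{
    ADCSRA |= (1 << ADSC);              /* start one conversion */
    while (ADCSRA & (1 << ADSC))
        ;                               /* wait for it to finish */
    return ADC;                         /* 10-bit result */
}

int main(void)
{
    uint16_t baseline = 0;
    const uint16_t sag = 80;            /* placeholder: how much of a dip counts */

    adc_init();
    DDRB |= (1 << PB0);                 /* "something is loading the coil" output */

    for (;;) {
        uint16_t level = adc_read();

        /* Remember the highest (unloaded) level as the baseline; a real
         * unit would let this decay slowly to ride out drift. */
        if (level > baseline)
            baseline = level;

        if (level + sag < baseline)     /* level has sagged well below baseline */
            PORTB |= (1 << PB0);
        else
            PORTB &= ~(1 << PB0);
    }
}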

Please note - this post may not present all information available on a subject.


Back on the camera thing, providing consistent lighting would definitely help. Depending on the camera you use, IR LED illumination would work great, and a number of security cameras even come with rings of IR LEDs around the camera lens. If you filter to make the camera solar-blind, you would then have pretty consistent illumination without changing shadows, etc., that could confuse the software.

But CCDs used to collect color images have filters that block near IR so that the color balance isn't thrown off so easily. For IR illumination, you probably want a B/W camera.

If these are the same objects over and over that you are sensing, you could also possibly use reflective tape in strategic locations that could enhance contrast even more and eliminate false detections.

Please note - this post may not present all information available on a subject.


refields wrote:
...
But to do it on the cheap, I've seen security systems that do the image differencing thing with a stored frame to determine when to full-frame a certain camera.

To a very first approximation it should be pretty easy as long as lighting is consistent between frames, etc. But if you hook up a video camera to a frame grabber, you can grab images, maybe do some kind of equalization, maybe have hot spots/ignore areas, do the differencing, and then act on it.
...

That's exactly what I had in mind. This isn't a 1-off, so cost is a major issue.

I'll check out the LabView links. Even if not suitable for production, I'll probably try to mock-up a proof-of-concept using whatever tools are easiest.

To the other posts, object identification isn't really feasible. I cannot disclose the entire app, but think of it more as "raw goods" than "finished goods". Perhaps another good analogy would be tree branches--either end might come first; different sizes and shapes; and the leading edges are oddly shaped and at an arbitrary height from the floor & distance from the sensor.

Or another example: vehicles coming to a gate or airlock. A truck with a ladder on top may not be able to get close enough to trip a low-mounted sensor. That kind of idea.

But you have given me another idea. Just as objects break the photoeye beam, perhaps several RFID tags or similar could be mounted on the opposite side and continually scanned by the reader. If one or more "time out" and cannot respond, then there may be something between the tag & the reader? I know nothing about that reading technology. As you said, the tags should be cheap, and you'd only need one reader station.

Keep those ideas coming.

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


I think you are on the right track with the vision idea. The cost of an AVR and a webcam is low.

Although there might be some work to get the interface done to get the pixels into the AVR, that can be done with reasonable effort.

If an easy solution is the goal, I think it is important to have a light background, so the outline of the object is clearly visible. I also assume the object has optical properties such that sufficient contrast is obtained.

Then I remember talking to a bright guy 14-17 years ago. He had made a vision system for recognition of handwriting. The system was made to enter data about objects going into storage. The operator wrote some facts about the object on a piece of paper, placed it under the camera and pressed a button.

The system was implemented with a 4 MHz Z80 processor, the SW was written in assembler, and it was capable of reading 30 handwritten characters a second.

The algorithm was surprisingly simple. He did a circumference scan around the character outline at a constant distance. The x/y pixel values were then interpreted as though they were real-time samples of an analog signal. Then he did an FFT analysis to obtain the frequency coefficients of the signal. As I remember, he used only 20 frequency components.

For each character he had a table entry with upper and lower values of the frequency spectrum. The character in question was found by searching the character table. If the character was within the limits of a character table entry, bingo.

I saw it demonstrated and it worked almost 100% of the time, because the operator first wrote several instances of each character and number. From these the character table was generated.

This should also be insensitive to orientation, so the job is then to store a table entry for each stable placement of the object. And of course more objects can be recognised, just like the characters.
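
In C, the heart of that scheme is not much more than this (untested sketch; the sample count, the number of coefficients, and the table contents are placeholders to be filled in by a "training" pass like the one described above):

#include <math.h>
#include <stdint.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N_SAMPLES 64     /* points sampled around the outline */
#define N_COEFF   8      /* how many frequency components to keep */
#define N_CLASSES 4      /* how many known objects / stable placements */

/* Magnitudes of the first N_COEFF DFT coefficients of the contour signal
 * (for example, the distance from the centroid at each sample point).
 * Using magnitudes only makes the signature insensitive to where on the
 * outline the scan started, i.e. to rotation. */
static void contour_spectrum(const double signal[N_SAMPLES], double mag[N_COEFF])
{
    for (int k = 0; k < N_COEFF; k++) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N_SAMPLES; n++) {
            double phase = 2.0 * M_PI * k * n / N_SAMPLES;
            re += signal[n] * cos(phase);
            im -= signal[n] * sin(phase);
        }
        mag[k] = sqrt(re * re + im * im);
    }
}

/* Per-class acceptance bands, to be filled in by a training pass made from
 * several examples of each object, as in the handwriting system above. */
static const double band_lo[N_CLASSES][N_COEFF] = { {0} };
static const double band_hi[N_CLASSES][N_COEFF] = { {0} };

/* Return the first class whose band contains every coefficient, or -1 if
 * nothing matches (unknown object, or nothing there at all). */
static int classify(const double mag[N_COEFF])
{
    for (int c = 0; c < N_CLASSES; c++) {
        int inside = 1;
        for (int k = 0; k < N_COEFF; k++)
            if (mag[k] < band_lo[c][k] || mag[k] > band_hi[c][k]) {
                inside = 0;
                break;
            }
        if (inside)
            return c;
    }
    return -1;
}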

An AVR is much faster than a 4 MHz Z80, so this should be easier to do this time.

I hope this can be of some help; you are one of the most active in helping others on this forum.

Erik


SteveN wrote:
Hi Lee,

I am just wondering out loud here and I know nothing about the technology. What about ultrasound? Not sure how far away or how close the object to be detected needs to be when it is detected. The idea came to me when I remembered the Mercury Mountaineer ad I saw recently with the backup alert. I would think those sensors would have to scan from slightly above the bumper to just above the ground...dunno really. Anyway, just a thought.

Regards,
Steve

Yeah, I haven't looked at that but those detectors probably work like sonar? Send out a signal (probably limited to a "field of view" like a camera lens?) and then if you don't get back enough of a signal within the appropriate time frame there is nothing there. I'll bet those are cheap, and for that I'll bet robotics sites have all kinds of info.

Benefits: No problems with dirt (within reason); no ambient light levels needed; no fixed background needed; no worries working in a freezer (fogging & frosting); able to check stages of fetal development (future enhancements).

Drawbacks: Problems with false triggers from ambient noise? Problems if the background changes? [like a solid piece of metal in the field of "view"]
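
The AVR end of it is about this much code, I'd guess (avr-gcc style, untested; it assumes a ranging module with separate trigger and echo pins, and every pin and threshold here is made up):

#define F_CPU 8000000UL            /* assumed clock */
#include <avr/io.h>
#include <util/delay.h>
#include <stdint.h>

#define TRIG PB1                   /* made-up pin for the module's trigger input */
#define ECHO PB2                   /* made-up pin for the module's echo output  */

/* Fire one ping and return the echo pulse width in Timer1 ticks (Timer1
 * clocked at F_CPU/8), or 0xFFFF if the echo never started. */
static uint16_t ping(void)
{
    uint16_t guard = 0;

    PORTB |= (1 << TRIG);          /* ~10 us trigger pulse */
    _delay_us(10);
    PORTB &= ~(1 << TRIG);

    while (!(PINB & (1 << ECHO)))  /* wait for the echo line to go high */
        if (++guard == 0)
            return 0xFFFF;         /* gave up: no echo pulse at all */

    TCNT1  = 0;
    TCCR1B = (1 << CS11);          /* start Timer1 at F_CPU/8 */
    while (PINB & (1 << ECHO))     /* time how long the echo line stays high */
        if (TCNT1 > 60000)
            break;                 /* way out of range */
    TCCR1B = 0;                    /* stop the timer */

    return TCNT1;
}

int main(void)
{
    const uint16_t empty_ticks = 12000;  /* placeholder: round trip to the far wall */
    const uint16_t margin      = 1000;   /* placeholder: how much closer counts    */

    DDRB |= (1 << TRIG) | (1 << PB0);    /* trigger out, "object present" out */

    for (;;) {
        uint16_t t = ping();
        if (t < empty_ticks - margin)    /* echo came back noticeably early */
            PORTB |= (1 << PB0);
        else
            PORTB &= ~(1 << PB0);
        _delay_ms(30);                   /* ping rate */
    }
}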

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Have you read about the AVRcam?
https://www.avrfreaks.net/phpBB2/...
http://www.jrobot.net/RobotProje...
I think this would be very useful reading for your project with tips about camera, image recognition using an AVR etc.
If you send a PM to "ajo115" he could probably answer questions you might have about the subject.
The AVRcam won 2nd prize in Circuit Cellar's recently held Atmel AVR 2004 Design Contest.

Instead of LabView you could also use MATLAB: http://www.mathworks.com
I know this is a very expensive tool, but it's very powerful and you can use it for all sorts of things.
MATLAB is the most commonly used PC tool for digital image recognition and processing.

I can also recommend the #1 digital image processing book in the world; it's the most widely used book in engineering courses on digital image processing.
Title: Digital Image Processing, 2nd edition
Author: Rafael C. Gonzalez & Richard E. Woods
ISBN: 0-201-18075-8
Book website: http://www.imageprocessingbook.com

The same authors have also made a new book called "Digital Image Processing Using MATLAB". This book could be used as a supplement to "Digital Image Processing, 2nd edition"

Differences between Digital Image Processing, 2nd edition (DIP/2E) and Digital Image Processing Using MATLAB (DIPUM):
http://www.imageprocessingbook.c...

There's also a good reference for other publications about the subject:
http://www.imageprocessingbook.c...

A new book called Machine Vision might also be useful, but I don't know this book:
http://www.quantumbooks.com/Merc...
You can also find other books about Machine Vision, but the "bible" for Image Processing is the Gonzalez book I mentioned first.


Re: ultrasonics - Parallax has modules to slap on their robots.

http://www.parallax.com/detail.asp?product_id=28015

Please keep us posted.

Please note - this post may not present all information available on a subject.


Would it be feasible to have two mirrors placed on either side of the area where the object would pass?
At the top of the area a fairly narrow light beam would be projected so as to reflect many times between the mirrors before hitting a sensor at the bottom of the area.
The light source I was thinking of would be a Luxeon LED or similar, as they are long-lived and easy to modulate with an FET. The LED would be driven with pulses at an audio frequency. That would eliminate the effect of ambient lighting.
The detector firmware would just have to check for changes in the level of the received modulated light.
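
Something like this would do for the detector firmware (avr-gcc sketch, untested; the pins, the 100 us half-period, and the threshold are all placeholders):

#define F_CPU 8000000UL            /* assumed clock */
#include <avr/io.h>
#include <util/delay.h>
#include <stdint.h>

#define LED PB1                    /* gate drive to the FET switching the Luxeon */

static uint16_t adc_read(void)
{
    ADCSRA |= (1 << ADSC);                 /* start one conversion */
    while (ADCSRA & (1 << ADSC))
        ;                                  /* wait for it to finish */
    return ADC;                            /* 10-bit result, photodiode on ADC0 */
}

int main(void)
{
    const int16_t beam_level = 60;         /* placeholder: tune on real hardware */

    ADMUX  = (1 << REFS0);                 /* AVcc reference, channel ADC0 */
    ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1);   /* enable, clk/64 */
    DDRB  |= (1 << LED) | (1 << PB0);      /* LED drive out, "beam broken" out */

    for (;;) {
        int32_t acc = 0;

        /* Average a handful of on/off pairs: a crude synchronous detector,
         * so steady ambient light cancels out of the result. */
        for (uint8_t i = 0; i < 16; i++) {
            PORTB |= (1 << LED);
            _delay_us(100);                /* a few kHz of modulation */
            int16_t lit = (int16_t)adc_read();

            PORTB &= ~(1 << LED);
            _delay_us(100);
            int16_t dark = (int16_t)adc_read();

            acc += lit - dark;
        }

        /* Little or no modulated light reaching the sensor => path blocked. */
        if (acc / 16 < beam_level)
            PORTB |= (1 << PB0);
        else
            PORTB &= ~(1 << PB0);
    }
}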

Ralph Hilton


Hi,

If you use a digital CMOS image sensor like the one on the AVRcam you can do this on the cheap, because you never have to convert an analog video signal into digital form. You could probably even get away with just black-and-white video, which would make processing a lot easier if you do end up using an analog-output video camera.

If you want to do this with "real" analog video cameras it will get a bit more expensive. The advantage is that the video signal is much higher quality -- not important for simple go/no-go testing, but there may be a situation where you want to record photographs of the raw material for quality control or something.

In a case like that you could use one of the video digitizer chips (the TVP5150 from TI, for example) to convert the analog video to digital. I've done a project on object tracking that used an FPGA; I think it could do what you are looking for. http://www.newae.com/eagrack.html has the project information, and a much more robust Version 2.0 is actually coming out sometime....

Regards,

-Colin


I had a bit of a look around with Google.
Allen Bradley have a set of pdf documents at
http://www.ab.com/catalogs/
They describe their products but also give quite a detailed technical discussion of the merits of various sensing methods - photodiode, ultrasonic, capacitive, inductive, etc.
If you click on sensors then on "view full pdf" a selection of pdfs to download is shown.

A couple of ccd pages:
http://www.beyondlogic.org/imagi...
http://www.beyondlogic.org/imagi...

and some robot sensors that may be of interest
http://www.totalrobots.com/acces...

Ralph Hilton


"Optical Flow" is a well know algorithm used for motion detection.

It takes the differences between frames, and gives the direction of motion of an object. It takes into account the differeces due to light changes, and the difference due to the motion.

I think that a simplified version of this algorithm could be useful in your application.
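
A much-simplified relative is plain block matching (plain C sketch, untested; the image size, block size, and search range are placeholders). For presence/absence you mostly care whether any block reports a sizeable displacement or a poor match at all:

#include <stdint.h>
#include <stdlib.h>

#define IMG_W  160
#define IMG_H  120
#define BLOCK  8           /* block size in pixels */
#define RANGE  4           /* search +/- this many pixels in x and y */

/* Sum of absolute differences between the BLOCK x BLOCK patch of `prev` at
 * (bx,by) and the patch of `cur` displaced by (dx,dy). */
static uint32_t sad(const uint8_t *prev, const uint8_t *cur,
                    int bx, int by, int dx, int dy)
{
    uint32_t s = 0;
    for (int y = 0; y < BLOCK; y++)
        for (int x = 0; x < BLOCK; x++) {
            int a = prev[(by + y) * IMG_W + (bx + x)];
            int b = cur [(by + y + dy) * IMG_W + (bx + x + dx)];
            s += (uint32_t)abs(a - b);
        }
    return s;
}

/* Find the displacement of the block at (bx,by) with the best match and
 * write it to (*mx,*my).  The caller keeps the block, plus the search
 * range, inside the image bounds. */
static void block_motion(const uint8_t *prev, const uint8_t *cur,
                         int bx, int by, int *mx, int *my)
{
    uint32_t best = 0xFFFFFFFFu;
    *mx = 0;
    *my = 0;
    for (int dy = -RANGE; dy <= RANGE; dy++)
        for (int dx = -RANGE; dx <= RANGE; dx++) {
            uint32_t s = sad(prev, cur, bx, by, dx, dy);
            if (s < best) {
                best = s;
                *mx = dx;
                *my = dy;
            }
        }
}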

As usual, google is your friend.

Regards,
Alejandro.
http://www.ocam.cl


The solution most resembling your opto sensor is an opto line-sensor.
This can be done with pretty simple image analysis.
At one side you need a line of light. This can be a fluorescent tube or a line of LEDs. Infrared LEDs would give you better noise immunity.

You need a live-image option to be able to point the camera at the light line and do the focusing. A CCTV camera will do this job best. A VGA output, or a module for sending images to a PC, could help as well.
uCFG from Digital Creations Labs can digitize NTSC/PAL to a UART.

Then what you need to do to detect the object is to measure the area of the light line (which should have good contrast against the background). If the area is smaller, or the line is interrupted, an object is blocking the view and is therefore present.
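
The measurement itself is then trivial (plain C sketch, untested; the resolution, the "bright" level, and the 10% margin are placeholders):

#include <stdint.h>

#define IMG_W   320      /* pixels across the row being checked */
#define BRIGHT  200      /* 8-bit level that counts as "lit" */

/* Count the lit pixels along one scan line crossing the light line. */
static uint16_t lit_pixels(const uint8_t row[IMG_W])
{
    uint16_t n = 0;
    for (uint16_t x = 0; x < IMG_W; x++)
        if (row[x] >= BRIGHT)
            n++;
    return n;
}

/* Object present if the visible part of the line has shrunk noticeably
 * compared with the count measured on the empty scene. */
static int line_interrupted(const uint8_t row[IMG_W], uint16_t empty_count)
{
    return lit_pixels(row) < empty_count - empty_count / 10;   /* >10% missing */
}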


I have done some object detection investigation. Essentially, if you think of the entire spectrum -- from very low vibrations, to sound, to radio, to IR, to UV -- what you are wanting to do is possible from pretty much any point on the energy spectrum. What you are asking is which point along the spectrum is "the best."

Determining which area of the spectrum to work in involves a few questions, such as whether the objects you want to detect put out any signature emissions, and how much differentiation you have to do from the other things that are there, etc.

Also, from a cost perspective, there is usually a sweet spot as well.

Anyway, if it were me, I would draw a line representing the energy spectrum by frequency and then write down each solution that is possible at the different points.


Lee/All,

I have been away from my PC for a few days, and thus missed this thread...so I'm hopping in a little late.

If you are still interested in pursuing a vision-based approach to the solution, I believe that it would be entirely possible to make it work using a CMOS image sensor and a fairly small AVR (such as the mega8). As was mentioned earlier, the AVRcam I've been working on could be modified to perform frame differencing without too much difficulty, and the whole system will run in real time (the latest version I'm working on will do 30 frames/sec processing).

I know that cost is a major issue here, and an image-based solution may be overkill, but it opens up a much larger scope of capability if the system needs to be extended in the future.

Drop me an email at john@jrobot.net if you would like more info about how this system could be adapted. I've always appreciated the help you have provided to this group, and would be more than happy to see if a workable solution could be found here.

Regards,
John
www.jrobot.net


I wanted to do a quick post acknowledging all of the ideas and approaches. We are digesting them now for feasibility & mock-up testing. Especially valuable were the suggestions for the best references, and terms like "optical flow" that I hadn't heard of but that will give direction to further investigation.

For presence/absence, my preliminary conclusion is that ultrasonic will give better capability than the current single photoeye; it is not too expensive or complex, is easily handled by an AVR, and is not dependent on light levels, dust, and other environmental conditions (within reason).

The vision solution is affected by many of the items mentioned above: more complex; more processing power needed; more affected by environmental conditions. But it may have advantages that outweigh the drawbacks, such as better position identification etc.

I'll report back if anything useful comes out and describe the approach chosen. Take care.

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


From playing around with the CMUcam, I think it would work nicely for this.
It uses a serial port so it would be easy to hook up to something.
When an object comes into view, the image data would change from, say, a white background to something else, so you could detect an object very easily with minimal software development effort.
Of course you could get fancy, but what the heck.
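
Roughly, the decision logic could look like this (plain C sketch, untested; it assumes you poll the camera over the serial link for a mean colour of the scene -- the packet parsing is left out, the numbers are placeholders, and the real command/reply format is whatever the CMUcam manual says):

#include <stdint.h>
#include <stdlib.h>

/* Baseline mean colour of the empty (white-ish) background, captured once
 * at setup time with nothing in front of the camera. */
static uint8_t base_r, base_g, base_b;

static void store_baseline(uint8_t r, uint8_t g, uint8_t b)
{
    base_r = r;
    base_g = g;
    base_b = b;
}

/* Call this with each new mean-colour reading pulled off the serial port. */
static int object_present(uint8_t r, uint8_t g, uint8_t b)
{
    const int delta = 30;    /* placeholder: how big a shift counts as "changed" */
    return abs(r - base_r) > delta ||
           abs(g - base_g) > delta ||
           abs(b - base_b) > delta;
}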


Lee - I just saw something interesting this morning and remembered this thread from a while back. Last week's issue of EE Times had an article called "Novel approach to tracking shows its accuracy" (you can also read about it online). In a nutshell, the company (Q-Track) uses the fact that the electric and magnetic waves are offset by ninety degrees as they leave the antenna, but this offset drops to zero by the time they have traveled a half wavelength. They are able to use this to track objects with better than 1% accuracy. I don't know if they have any products yet, or even if this fits what you were trying to do, but it does look like an interesting technology.

Dave