Facial recognition using AVR and eigenfaces?

#1

Guys,
Does any of you have experience using eigenfaces to compare images from a camera?
thanks

#2

What is "eigenface"?

Leon Heller G1HSM

#3

leon_heller wrote:
What is "eigenface"?

http://en.wikipedia.org/wiki/Eigenface

#4

The clever students at Cornell did just that:
http://people.ece.cornell.edu/la...
(Those are fascinating projects for a one-term course; most use the Mega32/Mega324 family. Check out the Scanning Tunneling Microscope, among many others.)



#5

It says I must use Matlab... can it be done without Matlab? AVR standalone, or do I need a better MCU and more RAM?

#6

Quote:

avr standalone [...] ?

You're kidding, right?
Have you read up enough on this to know what resources are needed (memory and cycles)?
I had a wee peek at the Wikipedia article and immediately thought "never on an AVR". E.g. things like

Quote:
Performing PCA directly on the covariance matrix of the images is often computationally infeasible. If small, say 100 × 100, greyscale images are used, each image is a point in a 10,000-dimensional space and the covariance matrix S is a matrix of 10,000 × 10,000 = 10^8 elements. However [...]
Meaning that, if uᵢ is an eigenvector of TᵀT, then vᵢ = Tuᵢ is an eigenvector of S. If we have a training set of 300 images of 100 × 100 pixels, the matrix TᵀT is a 300 × 300 matrix, which is much more manageable than the 10,000 × 10,000 covariance matrix.

Do you at all understand the implications of this piece of text? No? (To be honest, I don't understand all the details there either.) But look at the sizes alone: a training set of 300 images of 100 × 100 pixels is 300 × 100 × 100 = 3,000,000 pixels. Each pixel a byte (greyscale)? Then that is 3 MB of training data before any of the matrix arithmetic even starts, against the few KB of SRAM on an AVR. Yes, you need more memory. A lot more. Perhaps not all of it at once if you can swap to secondary storage, but you would need a lot more CPU performance as well.

With that piece of help I have fulfilled the "only post if you can help" requirement of AVRfreaks. But somehow my fingers just keep typing anyway:

Let me guess... this is another attempt of yours to have others solve your problems. Either that, or we have actually been trolled very effectively for a long time now.


#7

Quote:

must use matlab...

What part of the Cornell project says "must use Matlab"? The "Background Math" section gives all of the mathematical steps used to transform an image capture into the eigenvalues. You can use pencil and paper if you wish. Or calculator. Or any other appropriate tool.

They chose an appropriate tool -- Matlab -- that they had in their toolbox. If they had soldered the project with a Weller-brand station, and you only have a Hakko, would you then have a similar problem?


#8

This is the world I work in professionally. Our research engineers generally do all their development work in Matlab, and we then implement the algorithms in C++ on DSPs or very fast ARM processors. We typically process four 1280x800 images at about 10-30 fps (depending on the algorithm), so we have roughly 33 ms to 100 ms to process a frame. The CPUs tend to be quad-core or better, with 1 GHz cores, so each camera effectively gets a 1 GHz CPU to itself and, like I say, can run two or three vision algorithms in parallel within (say) 50 ms.

You can scale this down if you like: drop to 640x480 or even 320x240, and drop the frame rate to 1 fps, or even seconds per frame if you think you have the time, and the CPU usage will scale accordingly. Eventually you will hit a frame resolution and frame-processing time within the bounds of what an AVR can achieve.
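
For a sense of scale (my arithmetic, not from any project write-up): one greyscale 320x240 frame is 76,800 bytes, and even 160x120 is 19,200 bytes, both beyond the few KB of SRAM on an ATmega (16 KB on the largest). So before the frame rate even enters into it, an AVR attempt needs external RAM or must process the image a few lines at a time as it streams from the camera.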

I only know the Cornell eigenface thing from a very brief overview read a while back, but my understanding is that they do the heavy lifting on a PC, reducing the known facial images to eigenvectors. That's where the Matlab comes in. Naturally they'd just use that (rather than a standalone C app or whatever), because if they want to apply a transformation matrix or something it's one command in Matlab, while it may be 100 lines spread over several functions in C.

Similarly, you will find most vision-processing engineers rely heavily on OpenCV for their work too. Again, you could write it all yourself: if you want to Gaussian-blur an image, you could eventually rewrite that from scratch in C, but on the whole you just call OpenCV's GaussianBlur() and it does it for you. When you get to the likes of Lucas-Kanade, you are bound to use OpenCV, as it would take man-months to do that in C yourself. OpenCV use has become so widespread in vision processing since Intel released it to the world that it is now ported to most vision-capable embedded CPUs: ARMs, TI DSPs and so on. So you call the same GaussianBlur() or Lucas-Kanade routine on your embedded platform, and the local implementation, making best use of the vision-processing silicon on the SoC, does the same job your 2-3 GHz PC did (sadly for Intel, this does not sell Intel chips into the embedded vision world!).
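
Just to make that concrete, here is roughly what the "call OpenCV instead of writing it yourself" version looks like against the 2.4-era C++ API (a minimal sketch; the file names are invented):

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"

int main()
{
    // load a greyscale image (hypothetical file name)
    cv::Mat src = cv::imread("face.png", CV_LOAD_IMAGE_GRAYSCALE);
    if (src.empty())
        return 1;

    // the one call that replaces a hand-written convolution:
    // 5x5 kernel, sigma derived automatically from the kernel size
    cv::Mat blurred;
    cv::GaussianBlur(src, blurred, cv::Size(5, 5), 0);

    cv::imwrite("face_blurred.png", blurred);
    return 0;
}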

So yes, small amounts of vision processing on an AVR, with limited resources and limited tools, are possible, and the Cornell project is perhaps the pinnacle of what can be achieved in this field (just as AtomicZombie's video-generation projects are perhaps the pinnacle of showing what an AVR can do in that area).

But it's a bit like digging the foundations of the Empire State Building and deciding whether you want to do it with this: [image] or this: [image]

#9

Quote:

So yes, small amounts of vision processing on an AVR, with limited resources and limited tools, are possible, and the Cornell project is perhaps the pinnacle of what can be achieved in this field...

Now, most of the Cornell projects aren't really "production ready". But put a few things in perspective: First, (AFAIK) it is a one-term project, so the timeframe is limited. Each group is a small team of one to a few people. Nearly all the projects involve both hardware and software. I'd guess that for most students it is their first micro "project" outside of perhaps class assignments in prerequisite courses.

So for them to come out with something usable (and some are "publishable") within those constraints is impressive.

Note that within the limited SRAM and processor of the ATmega used, it takes 15 seconds for the "match" process. Still, they got about 80% matching success in their test runs, and few (none?) false matches. Impressive.

Give 'em a Cortex-M4 with more SRAM and computational features, and with the same algorithms and pre-processed trained images I'd think the matching time would come down by at least a factor of 10, to a second or two.


#10

Note this though:

And when you look at the hardware with the camera in it, you will notice a "chest rest" where you plonk your face so that it's an exact distance from the camera and (hopefully) exactly parallel to the plane of the imager. I imagine they also have fairly fixed lighting.

In the real world, folks would be unlikely to accept such constraints for a facial-entry system. You've seen Tom Cruise movies: people just look up at a camera and say "open the pod bay doors" or whatever (well, not in Tom Cruise movies perhaps) and the thing magically opens to reveal the nation's nuclear arsenal or whatever. So a real-world system needs to work with variable lighting, faces presented at angles, faces at varying distances from the camera (filling more or less of the frame), and so on.

The vision maths then has to try to account for all of these things.

If you have ever used a fingerprint recognizer on a laptop, you will have faced some of these issues already. I gave up on mine after the millionth time it wouldn't recognize just one of ten fingers!

I've also got an Android tablet with a forward-facing camera. One of its "locks" is facial recognition. To use it, you have to tilt the tablet about and move it back and forth until your face fills an exact oval, then hold it there for about 10 seconds while it chews on the image. Nine times out of ten it won't let you in, because the light level is a bit different from last time or whatever.

So even with powerful systems, the challenges at present are right at the fringe of what powerful CPUs can do (my Samsung tablet has a quad-core ARM at about 1 GHz).

Yes, you can do a very weak and wishy-washy imitation of this on a 20 MHz AVR, but it's never going to recognize you as Tom Cruise!

#11

Quote:

Yes, you can do a very weak and wishy-washy imitation of this on a 20 MHz AVR, but it's never going to recognize you as Tom Cruise!

But don't you have to admit that within the constraints, it was excellent progress for a pair of novices and a one-term project? (And the discussion(s) we just had should be good food for thought for Bianchi.)

I mentioned the Scanning Tunneling Microscope as another of my favourites. The home-built RFID reader for the college's ID cards was another.


#12

clawson wrote:

But it's a bit like digging the foundations of the Empire State Building and deciding whether you want to do it with this: [image] or this: [image]


Are those images to the same scale?


#13

It doesn't sound like the Android has a very sophisticated algorithm in its present version.

One would expect pupil location to be "trivial" (e.g. all of the "red eye" photo-removal programs), and one could first scale the image to a given inter-pupillary distance, a size-normalizing step. Then look at the vector from one pupil to the other to correct for tilt (given a straight-on shot). Then locate your other markers and finally run the recognition algorithm against the database.
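
A rough sketch of that normalizing step with OpenCV's C++ API (the pupil coordinates are assumed to come from some earlier detection stage, which is not shown, and the target spacing is an arbitrary choice):

#include "opencv2/imgproc/imgproc.hpp"
#include <cmath>

// Rotate and scale 'face' so the pupils end up level and a fixed
// distance apart. leftEye/rightEye are pupil centres found earlier.
cv::Mat normalizeFace(const cv::Mat &face,
                      cv::Point2f leftEye, cv::Point2f rightEye)
{
    const double targetDist = 40.0;     // desired pupil spacing, pixels

    double dx = rightEye.x - leftEye.x;
    double dy = rightEye.y - leftEye.y;
    double angle = atan2(dy, dx) * 180.0 / CV_PI;   // tilt of the eye line
    double scale = targetDist / sqrt(dx * dx + dy * dy);

    // rotate about the midpoint between the pupils, scaling as we go
    cv::Point2f centre((leftEye.x + rightEye.x) * 0.5f,
                       (leftEye.y + rightEye.y) * 0.5f);
    cv::Mat M = cv::getRotationMatrix2D(centre, angle, scale);

    cv::Mat out;
    cv::warpAffine(face, out, M, face.size());
    return out;
}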

I use a fingerprint reader hundreds of times a day in the ER, but it isn't an inexpensive model.

JC

#14

To clawson, using this one?
http://opencv.org/downloads.html
and then combining it with Matlab, before using the coefficients in C?
http://docs.opencv.org/master/modules/contrib/doc/facerec/index.html
thanks

#15

I saw this one; is it the right starting point?
http://docs.opencv.org/master/modules/contrib/doc/facerec/index.html

#16

clawson wrote:
OpenCV use has become so widespread in vision processing since Intel released it to the world that it is now ported to most vision-capable embedded CPUs: ARMs, TI DSPs and so on. So you call the same GaussianBlur() or Lucas-Kanade routine on your embedded platform, and the local implementation, making best use of the vision-processing silicon on the SoC, does the same job your 2-3 GHz PC did (sadly for Intel, this does not sell Intel chips into the embedded vision world!).
Intel does have a dog in the hunt.
"If you can't run with the big dogs, you'd better stay on the porch" does not apply to Intel.
Gig 'em Intel!
Some do like the x86 architecture.

Galileo Getting Started Guide by Jimb0 (SparkFun Electronics, search for webcam)
Gig 'em (Texas A&M University, Aggie Traditions)
When the hardware’s got your back by Jim Turley (embedded.com; April 24, 2014)

"Dare to be naïve." - Buckminster Fuller

#17

Quote:

Intel does have a dog in the hunt.

Not in the world I work in (automotive). Try buying a fast CPU/DSP from them that is rated for an automotive environment ;-)

(Neither Microsoft nor Intel ever really "got" the embedded world. I remember flying to Seattle to learn about WinCE when it first appeared; they wanted $50/unit (just like desktop Windows) when the VxWorks or other RTOS we used at the time was about $1/unit.)

Quote:

To clawson, using this one?

Yes, that is the OpenCV I'm talking about. As for Matlab: it's a pretty expensive proposition, but there are two or three open-source work-alikes if you want to dabble.

(personally I look at Matlab scripts and my eyes go fuzzy just before my head explodes but, then, I only did maths to degree level!).

BTW I've mentioned OpenCV here before. Here's a very simple example of it in use:

https://www.avrfreaks.net/index.p...

#18

I tried to add OpenCV to Visual Studio 10, following:
http://docs.opencv.org/trunk/doc/tutorials/introduction/windows_visual_s...

I tried the test code, but I got:

IntelliSense: cannot open source file "opencv2/core.hpp"

I can't find core.hpp... where is it?

#19

I put the configuration in Visual Studio 10 like this:
http://i129.photobucket.com/albums/p231/picture_77/opencv_zpsf5ddfddd.jp... (screenshot)

Am I missing something here?
thanks

#20

That relies on OPENCV_DIR being defined in the environment that started VS. Was it?

I'd be tempted to simply hard code that to the location of your separate OpenCV installation.

In my case things were much simpler. I just built at the command line:

clawson@ws-czc1138h0b:~$ g++ -g opcvtest.cpp -lopencv_highgui -lopencv_core -lopencv_imgproc -o opcvtest 

that just invokes the host GNU C++ compiler and the libraries are in the "usual" place (/usr/lib). In the source I used:

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"

because the headers are presumably in /usr/include/opencv2.

Ah the joys of Linux!

BTW I read that tutorial you linked to - it refers to a "previous tutorial" at:

http://docs.opencv.org/trunk/doc/tutorials/introduction/windows_install/...

that is the place where the OPENCV_DIR env var was set - did you do that?

#21

I have read that; maybe I missed something there...

#22

I used:

setx -m OPENCV_DIR c:\opencv\Build\x64\vc10

and in the Path variable:
%OPENCV_DIR%\bin

My directory: C:\opencv\build\x64\vc10

Is it case-sensitive or not?

#23

Here's a VS2010 (Express) project that works for me. I just installed VS2010 and OpenCV 2.4.9 (Windows) into a virtual machine, with my installation of OpenCV at \\VBOXSVR\Windows\opencv-2.4.9; you might have it at c:\opcv249 or something like that. If so, replace occurrences of "\\VBOXSVR\windows\opencv-2.4.9" in the following with "c:\opcv249" or whatever.

In the project config under "debugging" I set the "command arguments" to the name of the .PNG file I wanted to load and process and under "Environment" I put:

PATH=\\VBOXSVR\windows\opencv-2.4.9\opencv\build\x86\vc10\bin

You just have to ensure that is set to the place where your .DLL files for OpenCV are.

Under "VC++ Directories" I edit the Include and Library entries to contain:

\\VBOXSVR\windows\opencv-2.4.9\opencv\build\include

and

\\VBOXSVR\windows\opencv-2.4.9\opencv\build\x86\vc10\lib

Set those appropriate to where you have the OpenCV headers and .lib files.

Finally under Linker-input I set the "additional dependencies" to hold:

opencv_highgui249d.lib
opencv_imgproc249d.lib
opencv_core249d.lib

And that's all there was to it.

(still think it's easier in Linux!)

Attachment(s): 

#24

This one is helping me:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

but I got another issue:

Error 1 error LNK1112: module machine type 'x64' conflicts with target machine type 'X86'

I tried to change it here:
http://i129.photobucket.com/albums/p231/picture_77/opencv2_zps17cab4ec.j... (screenshot)
but it's not working yet.

Any suggestions? Thanks

#25

Quote:

but I got another issue:

You'll notice(*) in my project I used x86 all the way.
Quote:
This one is helping me:

BTW, how is it that I used "..." and you changed to <...>? I used "..." for a good reason, you know (they aren't system headers!).

(*) or perhaps you won't because the download count is still 0 :-(

#26

BTW when you get OpenCV building I guess the next step is to explore:

http://docs.opencv.org/trunk/modules/contrib/doc/facerec/

Apparently there are three types of recogniser available: Eigenfaces, Fisherfaces and LBPH (local binary pattern histograms). The first is similar to the one used in that Cornell project.
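
Against the 2.4 contrib API, the eigenface variant boils down to something like this (a minimal sketch; file names and labels are invented, and real code needs error checks and equal-sized greyscale images):

#include "opencv2/contrib/contrib.hpp"   // FaceRecognizer lives here in 2.4
#include "opencv2/highgui/highgui.hpp"
#include <vector>

int main()
{
    // training data: equal-sized greyscale faces plus an integer
    // label per person (several images per person in practice)
    std::vector<cv::Mat> images;
    std::vector<int> labels;
    images.push_back(cv::imread("person0_a.png", CV_LOAD_IMAGE_GRAYSCALE));
    labels.push_back(0);
    images.push_back(cv::imread("person1_a.png", CV_LOAD_IMAGE_GRAYSCALE));
    labels.push_back(1);

    // createFisherFaceRecognizer() and createLBPHFaceRecognizer()
    // are the drop-in alternatives to the eigenface one used here
    cv::Ptr<cv::FaceRecognizer> model = cv::createEigenFaceRecognizer();
    model->train(images, labels);

    // predict() returns the label of the closest match
    cv::Mat probe = cv::imread("unknown.png", CV_LOAD_IMAGE_GRAYSCALE);
    return model->predict(probe);
}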

If you want all this in an "embedded device" then obviously something like Raspberry Pi or BeagleBone Black would be candidates as you could easily run OpenCV on any small ARM Linux machine.

#27

clawson wrote:
If you want all this in an "embedded device" then obviously something like Raspberry Pi or BeagleBone Black would be candidates

Or Atmel's new kid on the block:

http://www.at91.com/discussions/...

#28

From a nearby local distributor it'd be 30 USD less expensive to buy an Olimex OLinuXino LIME; a case for it is available as an option.
The SAMA5D3 Xplained, though, does have dual Ethernet MAC+PHY; that would make some infrastructure easier to implement.
https://www.olimex.com/wiki/A10-OLinuXino-LIME (see "WEB camera A4TECH")

"Dare to be naïve." - Buckminster Fuller

#29

I followed:

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>

The "Test it!" code at the bottom of this link uses <...> with no quotes:
http://docs.opencv.org/trunk/doc/tutorials/introduction/windows_visual_s...

#30

To clawson,

I have used your code and got the same error. With:
http://i129.photobucket.com/albums/p231/picture_77/opencv3_zpsab86fa4d.j... (screenshot)
I got:

Error	1	error LNK1112: module machine type 'x64' conflicts with target machine type 'X86'	\ocvtest\ocvtest\opencv_highgui249d.lib(opencv_highgui249d.dll)	ocvtest

If I switch to x64:
http://i129.photobucket.com/albums/p231/picture_77/opencv4_zps39bfdd27.j... (screenshot)
I got:

Error	1	error LNK1112: module machine type 'X86' conflicts with target machine type 'x64'	\ocvtest\ocvtest\Debug\ocvtest.obj	1	1	ocvtest

Which one do I have to choose, then?
thanks

#31

I used:

setx -m OPENCV_DIR D:\OpenCV\Build\x64\vc10     (suggested for Visual Studio 2010 - 64 bit Windows)

from:
http://docs.opencv.org/trunk/doc/tutorials/introduction/windows_install/...

and in the registry:
http://i129.photobucket.com/albums/p231/picture_77/opencv5_zpsd06b0d6b.j... (screenshot)

#32

All I can tell you is that the project works for me. The only things that might need changing are the locations of the OpenCV components; otherwise it's set up as an x86 project using x86 libs.

#33

I finally solved it: I had to create an x64 configuration in the Configuration Manager.

Anyway, thanks clawson for your suggestions.
http://i129.photobucket.com/albums/p231/picture_77/opencv6_zps34af2783.j... (screenshot)

#34

clawson wrote:
All I can tell you is that the project works for me. The only things that might need changing are the locations of the OpenCV components; otherwise it's set up as an x86 project using x86 libs.

How much memory (SRAM) do I need, at minimum, for a system running OpenCV on uClinux?
Would 16 MB be enough?

Thanks

#35

By SRAM, I think you mean RAM; with Linux, the main RAM is usually DRAM. Why would you want to run uClinux? There are plenty of solutions at low price points (BeagleBone, RPi) that run Linux and have sufficient RAM. With video processing, 16 MB won't go far - Linux itself will chew around 1 MB. If you want to run OpenCV on those cheapy webcams, then I'd suggest you are pushing your luck.

#36

Kartman wrote:
By SRAM, I think you mean RAM; with Linux, the main RAM is usually DRAM. Why would you want to run uClinux? There are plenty of solutions at low price points (BeagleBone, RPi) that run Linux and have sufficient RAM. With video processing, 16 MB won't go far - Linux itself will chew around 1 MB. If you want to run OpenCV on those cheapy webcams, then I'd suggest you are pushing your luck.

So better to use a BeagleBone or RPi... thanks for your suggestion, mate.
Yes, I'm thinking video processing will eat a lot of RAM.
Like this one?
http://www.aliexpress.com/item/Raspberry-Pi-Project-Board-Model-B-Rev2-0-512-ARM-Free-Shipping-Dropshipping/1128908170.html

#37

Any ideas on the lowest spec for a BeagleBone or RPi?
Will 256 MB be enough, eh?

#38

Why do you ask such questions? Considering the popularity of the RPi and BBB, you'd think someone has done it already and written it up. Guess what? They have, and they tell you exactly how to do it. Get used to googling first.
Google:
rpi opencv
beaglebone opencv

No shortage of info, eh?

You can most likely get an RPi from RS Components for much the same $$$, and they have a trade counter in Perth. There's also Element14, especially when they have a free-delivery deal and/or a 10% discount offer.

#39

To Kartman, thanks for the info.
I saw:
http://www.raspberrypi.org/facial-recognition-opencv-on-the-camera-board...
and:
http://australia.rs-online.com/web/generalDisplay.html?id=aboutRS&file=t...
but RS will cease trading on Saturday this June...

#40

Yes, I can Google.

#41

To Clawson,

Do you think this board, http://www.friendlyarm.net/products/mini6410, will be enough for face recognition?

Thanks

#42

Yes, but I'd suggest sticking to an RPi or BeagleBone Black - you're going to need the support.

#43

Yup I've used the Mini2440 in the past - it was a very pleasant experience - the Samsung BSP in the Linux kernel is very slick code.

However, like Kartman, I'd use an RPi or BB Black - the range of support for those is astronomically wider than for the FriendlyARM boards. The Friendly boards used to be the cheapest small ARM SoC Linux modules you could buy, and at the time I didn't believe the RPi could be made for the price they were promising. But here we are, a few years later, and the RPi has become (probably because of the price) the most widely used small Linux SBC you can buy, with the TI OMAP offerings very close behind (and in some ways technically superior, I believe).

Try a Google for "raspberry face recognition" or "beaglebone face recognition".

#44

Guys,
Thanks for the advice.
OK, how can I compile OpenCV in Linux? What's the toolchain for cross-compiling from Ubuntu Linux?
For example, do I use KDevelop?

#45

How can I cross-compile the code for the Raspberry Pi with KDevelop?

#46

I fear you may have missed the point of Linux. I'll bet that if I were to put either an RPi or a BeagleBone in an anonymous-looking box and simply attach a mouse, keyboard and monitor, you wouldn't easily be able to tell whether you were using an ARM, x86 or AMD64 CPU (well, OK, "uname -m" would be a start!).

But the point is that this is not like AVR development, where you cross-compile on a PC then "squirt" the code to the target to run it there.

A small Linux machine is just like a PC (well, it IS a PC!), so you keep your source files and your compiler on the board's own storage (probably an SD/MMC, but you could use SSD or HDD if you like) and you build natively on the target. This is one of the many things that make embedded Linux solutions much easier to develop for.

Of course, in the old days you had 1 GHz PCs but only 100-150 MHz Linux ARM boards, and it was a right pain to build the code on the Linux board itself as it could be 10 times slower than the PC. So in that case you would get an arm-gcc (in fact, probably arm-linux-gnueabi-gcc!) and run it on your desktop PC to build the code you would then run on the Linux board. You would use one of the joys of Linux, NFS, to have the place on the PC where you do your building simply appear (across Ethernet/TCP/IP) as part of the file system of the ARM board itself, so as soon as you had compiled on the PC you could run it (or, more likely, "gdb" it) on the ARM target.
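
A hypothetical session from those days might have looked something like this (paths, addresses and host names all invented for illustration):

# on the PC: cross-compile for the ARM target
arm-linux-gnueabi-g++ -g opcvtest.cpp -o opcvtest

# on the PC: export the build directory over NFS (a line in /etc/exports)
/home/me/build 192.168.1.0/24(ro,sync)

# on the ARM board: mount the PC's build directory and run the binary in place
mount -t nfs 192.168.1.10:/home/me/build /mnt/build
/mnt/build/opcvtest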

But these boards like Rpi and BB Black are 800-1GHz machines these days. There's no huge advantage in building on the PC anymore. You can build anything you need to on the ARM board itself.

Now, another issue with Linux SBCs is how you get the Linux components you want onto them. In the old days the board itself might only come with a very simple Linux and a C compiler (getting that bit was always the really tricky step!), but then if you wanted OpenCV you would just pull the C source package (you may have noticed the OpenCV package includes its full source?) and set the C compiler to work building it in situ for your SBC. However, the first stage of almost all Linux building is a step called ./configure, which runs a script that hunts around your Linux installation and makes sure you have all the bits the thing you are about to build relies on. I'll bet OpenCV relies on something like 50 other components (almost all lib*), so it would have a look round and make sure you already have all of those.

Half the time you only have 20 of the 50 needed. So you have to go to the sites for the 30 you haven't got and pull the source of each then ./configure and compile each of them in turn. Of course during their own ./configure one might say there's another 3 things it relies on and you need to get those first and so on.

Eventually you got all the lib* you need and could successfully ./configure and then build OpenCV.

But if you have ever used Red Hat or Debian or Ubuntu or one of those on a desktop, you will know that for x86 and AMD64 almost everything you could ever want has already been built for your particular Linux distribution. So if you want OpenCV you just "apt-get install opencv", or use a higher-level tool like Synaptic (or rpm or yum or whatever your Linux uses). That process sees details in the OpenCV package that tell of the 50 things it needs (and it already knows which of them you have installed), so it automatically pulls in the 30 prebuilt packages you haven't got. And when one of those 30 says it relies on 3 more things, it pulls those in too.

So in a modern desktop Linux you almost never need to build from source because the "repositories" that back up your Linux distribution already have everything built.

Now, like I say, until a few years ago the situation was the same for small embedded Linux SBCs like the FriendlyARM boards, but starting with an ARM Linux distribution called Angstrom (or maybe it was MontaVista?) the suppliers began providing prebuilt repositories even for these small boards.

Because there are so many RPis and BBs, the Linux distributions they typically run (Raspbian, which is a form of Debian, etc.) have been set up the same way, so I would be astonished if getting OpenCV on one was not just as simple as "please install OpenCV", and it is done.

Having said that I just googled "raspberry face recognition" which hit this:

https://learn.adafruit.com/raspberry-pi-face-recognition-treasure-box/ov...

Skipping ahead to the "software" page:

https://learn.adafruit.com/raspberry-pi-face-recognition-treasure-box/so...

It says that the repo version of OpenCV for the RPi is still at version 2.3 and does not have the face-recognizer stuff, so you do have to pull the source of 2.4 and build it yourself. At least your system should then have all (well, most) of the supporting libs if you have already done an "apt-get install opencv".
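
From memory, the source-build dance on the Pi goes roughly like this (treat it as a sketch; the Adafruit page linked above has the authoritative steps, and the exact dependency list and version will differ):

sudo apt-get install build-essential cmake libgtk2.0-dev libjpeg-dev libpng-dev
unzip opencv-2.4.9.zip && cd opencv-2.4.9        # source zip fetched from opencv.org
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
make                                             # expect this to take hours on a Pi
sudo make install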

#47

To clawson,
Thanks for the information.
So I'll put my development environment directly inside the Raspberry Pi. Yup, I know apt-get install (must be root); I've used it in Ubuntu.
And I need OpenCV 2.4, configured for face recognition.
I had been thinking I would create the software on the PC and then download it to the embedded Linux board, the way I did with the ATmega.

That's good news; it makes my life a bit easier if I can develop my code directly on the RPi.

Currently I have a Mini6410 board; I haven't bought a Raspberry Pi yet.

How can I make it boot to standard Ubuntu?
I have the image and installed Ubuntu, but it's different from the Ubuntu I see on my PC??

Thanks again.

#48

Quote:

That's good news; it makes my life a bit easier if I can develop my code directly on the RPi.

But why would you do that? This is the whole point of using OpenCV (and Linux in general): you do all your development work on your favourite desktop PC, Windows or Linux (OpenCV is available on both). Only when you have your solution finally honed do you take the work to the ARM target and compile it with the OpenCV on that. There's no point doing all the work on the ARM board itself.

As for Ubuntu: it's an x86/AMD64 distribution of Linux, so you can't just use it on ARM. You have to find an ARM-based Linux distribution. As I say, Angstrom is a popular choice (or it was a few years back, anyway). Clearly Raspbian shows that a variant of Debian (which is what Ubuntu is too) has been ported to ARM. It might even be possible to run Raspbian on things other than an RPi. But if you want a "distribution" for the Mini6410, I'd simply visit the FriendlyARM site to see what (if anything) they have arranged to provide.

EDIT: a quick Google suggests this for Mini6410:

http://www.timesys.com/supported...

Having said that the manual:

http://armbest.com/Mini6410_User...

Seems to suggest separate building of Kernel and a rootfs using Qtopia for a GUI. So that isn't really a "distribution" with a repository as such.

(personally I'd just blow the entire $35 and get an Rpi!).

EDIT2: actually this looks pretty interesting:

http://code.google.com/p/mini641...

It's interesting to read the tutorial about the differences between Debian for ARM and emDebian - I can't help thinking the latter might be a better bet. More about that code here:

http://www.friendlyarm.net/forum...

Oh and a related video:

http://mini6410.blogspot.co.uk/

(I'd still just get an Rpi or BB Black though!).

#49

I would've thought that if you were beginning with OpenCV you would use Python. In most cases there is no need to compile, as you just download the packages and run. If you already have a Mini6410, why haven't you downloaded these packages and tried it?
I cannot understand why you ask these questions here when there is a wealth of stuff on the interwebs.

#50

Quote:

I would've thought that if you were beginning with OpenCV you would use Python

But almost all prior art relating to OpenCV is plain ANSI C or, more recently, C++.

I just typed "opencv tutorial" into Google. Of the ten links on the first page, nine were C/C++ and one was Python.

As the general trend is towards C++ for vision work, I think I'd concentrate on that, though you can "get by" with C alone as (currently) all the APIs are provided in both C and C++ forms.
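
For illustration, the same blur in both flavours of the 2.4-era API (file name invented):

#include "opencv2/opencv.hpp"   // pulls in both the C and C++ APIs in 2.4

int main()
{
    /* the old C API: IplImage and cvSmooth */
    IplImage *img = cvLoadImage("face.png", CV_LOAD_IMAGE_GRAYSCALE);
    cvSmooth(img, img, CV_GAUSSIAN, 5, 5, 0, 0);
    cvReleaseImage(&img);

    // the same thing with the C++ API: cv::Mat and cv::GaussianBlur
    cv::Mat m = cv::imread("face.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::GaussianBlur(m, m, cv::Size(5, 5), 0);
    return 0;
}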

#51

I would not recommend C++ in this particular instance.

This reluctance has nothing to do with the language, and everything to do with the target of the recommendation.

