Strange behavior in acquisition data

#1

Hi all!

I need to acquire three analog signals and one digital signal. I have written a program to do this, but I get some strange behavior in the acquired data.

My system is a thrust stand, used to acquire current, voltage, thrust and speed data. These signals are generated by a brushless motor and a propeller.

To acquire this data I use an Arduino Mega 2560 board, without the Arduino framework. I sample the three analog signals on the first three ADC channels, and I read the square wave from the encoder using the ICP5 input capture pin of the ATmega2560. I acquire at 100 Hz, so every 10 ms I first sample the analog signals and then the digital signal. I store the data in buffers (one per signal) and, when the acquisition ends, I send the data from the ATmega2560 to the PC over the serial link.
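
To make the timing concrete, this is roughly how I generate the 10 ms tick (a simplified sketch, not my exact code; I use Timer1 here only as an example):

#include <avr/io.h>
#include <avr/interrupt.h>

volatile bool sampleDue = false;

ISR(TIMER1_COMPA_vect) { sampleDue = true; }	// fires every 10 ms

void initSampleTick() {
	TCCR1A = 0;
	TCCR1B = (1 << WGM12) | (1 << CS11) | (1 << CS10);	// CTC mode, clk/64
	OCR1A  = 2499;	// 16 MHz / 64 = 250 kHz -> 2500 counts = 10 ms
	TIMSK1 = (1 << OCIE1A);
	sei();
}

// in the main loop: when sampleDue is set, I sample the three ADC channels,
// read ICP5, store everything in the buffers and clear the flag.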

In particular, I see some strange spikes, and I don't understand whether this is a hardware problem or a software problem.

This is what I obtain from an acquisition session:

[image attachment: plots of the acquired signals]

What looks strange to me in this image is that the peaks appear at the same moment in all graphs. Note that only the second and fourth graphs show final values; the first and third are raw values in volts and still have to be multiplied by the respective sensitivities.

After the acquisition session, I save the data to a txt file, and this is what I obtain:

[image attachment: excerpt of the saved txt file]

The first column is the time, the following three columns are the current, voltage and thrust, and the last column is the speed.

My doubt is that the three values in the first row inside the red square are zero while the last value seems correct, but the second row in the same red square contains wrong values in every column except the time (remember that all these values are in volts, so I cannot measure a voltage greater than 5 V). The values in the second to fourth columns are identical; the last column differs, but its value is not valid either, because the speed of the motor is unchanged.

 

What I would like to know is: in your opinion, when can I obtain a zero value from the ADC? Is it possible that this is due to an overflow, maybe an ICP timer overflow or something similar?

 

Thanks!

Last Edited: Sat. Mar 4, 2017 - 01:05 PM
#2

Interesting project.

 

More information would be helpful.

 

Was the data shown a real test?

Is the data reproducible if you run the test several times?

(Does it always give a spike about 0.75 of a second into the recording?)

 

Does something else happen in the system at 0.75 s into the test?

(Igniter fires, Igniter turns off, fuel valves switch from start to run mode, etc.)

 

Right now you need to determine at what point in the signal processing the spike occurs.

Is it a real signal, generated by the sensors?

Can you put a digital O'scope on the sensors and record the signals?

 

Is it a software problem on the micro? 

If so is it a sampling problem, or a buffer problem?

(Atomic operations, buffer wrap around, stack overflow, other interrupts, PC was busy, etc.)

 

Several approaches you might take:

See if the process is reproducible, and if it always happens at the same time.

 

Digital scope the sensors, see if their signal is good, or also shows the spike.

 

Digital scope the power supply, good or spike?

 

Add a couple of lines to the micro's code to watch the current sample values as they are read from the ADC.  If the current values are high then turn on an LED and leave it on.

This will help to isolate the problem to the sensors, or to the initial ADC sampling, before any buffer operations.

 

For many projects it is worth the time and effort to build a simulator.

 

In this case the simulator would replace your Thrust Stand.

It would generate three analog signals and the digital signal that you expect a typical test to generate.

You would have a push button to start the signal output.

You could then easily verify the output signals of the simulator with an O'scope.

 

You could then use the simulator to feed the test signals into your project and use them to help debug the system.

In this case one would probably want to use a slowly increasing voltage for the current signal, etc, so that you could see the signal change, slightly, over the capture interval.

Another Arduino, several PWM channels, and a couple of op-amp filters, and you would have a simple signal generator (simulator) for your Thrust Stand, making it easy to run test after test while working on the system.
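
As one trivial slice of such a simulator: one PWM pin through an RC low-pass (say 10k and 10uF) gives a crude DAC for a slowly ramping "analog" test signal. A sketch only, untested, register names for a 328-class Arduino:

#define F_CPU 16000000UL
#include <avr/io.h>
#include <util/delay.h>

int main(void) {
	DDRD  |= (1 << PD6);	// OC0A (Uno pin 6) as output
	TCCR0A = (1 << COM0A1) | (1 << WGM01) | (1 << WGM00);	// non-inverting fast PWM
	TCCR0B = (1 << CS01);	// clk/8 -> ~7.8 kHz PWM at 16 MHz
	for (;;) {
		for (uint16_t d = 0; d < 256; ++d) {	// slow ramp, then repeat
			OCR0A = (uint8_t)d;
			_delay_ms(20);	// full ramp in about 5 seconds
		}
	}
}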

 

JC

#3

Hi JC, thanks a lot for the reply! ;)

I'll try to give you more information about my project.

These are the signals in my system:

  • The current is in the range ~0 to 8 A
  • The voltage is in the range 0 to ~8.4 V
  • The thrust is in the range 0 to ~1.5 N
  • The speed is in the range ~400 to ~2300 Hz

These values are close to the nominal values I expect from my system, taken from the datasheets of the various components.

Note that this system is a university version of a professional system, so the board is a homemade version of a professional one. I measure the thrust using a strain gauge with an instrumentation op-amp (INA125P); the circuit I built is the one from the INA125P datasheet. The other analog signals come from a resistor divider and an ACS712 sensor, for the voltage and current signals respectively.

The resistor divider and the ACS712 are connected to measure the power supplied to the motor, a LiPo battery at 8.4 V. The board has a pin out to drive the ESC and a pin to route the signal from the encoder to the micro. All these components share a common ground (ground plane).

 

Using these nominal values, some signals in the graphs above make sense: the second graph shows about 5 V, which is correct because the battery is at ~8.4 V, and the last graph shows a step signal near 2300 Hz. The other values, the first and the third, are also plausible, because the ACS712 gives ~2.5 V at ~0 A, and the strain gauge measures only the weight of the motor, so its value of 0.4 V (~16 g) is valid.

 

The software is my own actor framework. In practice I created two actors that communicate through a queue. One actor is responsible for the communication from the world to the micro and vice versa. The other actor contains the program logic: it handles the commands and the interrupts and sends data to the world. The runtime is a loop that receives a message and delivers it to the receiving actor.

 

I have done some experiments after reading your post, and the data isn't reproducible: sometimes the spike appears before 0.75 s and sometimes after. At the moment I don't have access to the laboratory; on Monday I'll have an oscilloscope to measure and record everything!

 

However, at that instant nothing is happening in the system. I give a step signal to the motor, and it starts at 0.5 s.

 

However, I did some other experiments before this post. I obtained this:

[image attachment: plots from the second experiment]


I did this experiment using the same software; the only thing I changed was to comment out this line in my C++ routine that runs when an ICP interrupt occurs:

bool
Icp5::isr() volatile{
	curr = (unsigned int)ICR5;   // capture the timer value at the encoder edge
	//period = curr-prev;        // <----- the line commented out for this experiment
	prev = curr;
	if( ++i == 2 ){              // after the second captured edge...
		TCCR5B &= ~(1 << CS51);  // ...stop the timer clock
		TIMSK5 &= ~(1 << ICIE5); // ...and disable the capture interrupt
		return true;
	}
	else return false;
}

and instead send a constant value of one for this quantity for the whole acquisition session. With this value and a timer resolution of 5e-7 s (16 MHz / 8), one tick corresponds to a frequency of 1 / 5e-7 s = 2 MHz.

The questions at the end of my previous post refer to this last experiment, so I'm very confused!

In this case the spike doesn't appear, but if I repeat the experiment a few times the spike appears near 0.75 s.

 

I have a homemade simulator that I use at home. It consists of an Arduino board that generates a pulse. I used this pulse to design and test the software for the encoder.

However, on Monday I'll try to build a more complete simulator in the laboratory.

 

Thanks a lot for the advice!

Last Edited: Sat. Mar 4, 2017 - 06:41 PM
#4

Hi!

I have tested the hardware in the laboratory and it seems to work properly, so the problem is the firmware.

I have now redesigned the firmware so that, at every sample tick, I synchronize all operations with the edges of the incoming encoder signal (which is a square wave).

So, at every sample tick, I wait for the ICP to detect an event. When an event is detected, I start the ADC sampling. I would like the ADC to complete all its conversions between this event and the next incoming event (that is, within one period of the incoming encoder square wave).

In this way, if the encoder signal has a minimum frequency of 2-3 times the sampling rate of the system (the system sampling rate is 100 Hz, the minimum encoder frequency is 200-300 Hz), I'm sure that in each sampling interval I catch at least two periods of the encoder square wave. My idea is to use one period to sample the analog signals and the next period to read the frequency of the digital (encoder) signal, as in the sketch below.
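
A rough sketch of what I mean (the names are mine and it is simplified, polling instead of interrupts):

#include <avr/io.h>

static uint16_t sampleBuf[3];

void onSampleTick() {	// called every 10 ms
	TIFR5 = (1 << ICF5);	// clear any stale capture flag (write 1 to clear)
	while (!(TIFR5 & (1 << ICF5))) {}	// wait for the next encoder edge
	for (uint8_t ch = 0; ch < 3; ++ch) {	// burst-sample ADC0..ADC2
		ADMUX = (ADMUX & 0xE0) | ch;	// select the channel, keep the reference bits
		ADCSRA |= (1 << ADSC);	// start one conversion
		while (ADCSRA & (1 << ADSC)) {}	// wait for it to complete
		sampleBuf[ch] = ADC;
	}
	// the following encoder period is then free for the frequency measurement
}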

To do this I need to speed up the ADC sampling rate. Right now I have configured the ADC with a prescaler of 64, which gives me a sampling rate of about 19 kHz.

This is fast enough for my purpose, but now I have a doubt: in this scheme, what is my real sampling rate? My 100 Hz or the ADC's 19 kHz?

I have read some posts, and I think the correct answer is that the real sampling rate is the ADC's 19 kHz, but in that case how do I get a real sampling rate of 100 Hz?

If I use a timer to start the ADC every 10 ms, and I configure the ADC with a prescaler of 64, do I still get a sampling rate of 19 kHz, or am I wrong?

Please, can someone clarify these questions?

#5

luca80 wrote:
I have read some post, and i think that the correct answer is that the real sampling rate is the adc 19 KHz,
To get 10 bits from the ADC, the datasheet suggests setting the ADC prescaler so that the ADC clock is at most 200kHz (you may need to switch F_CPU to be able to get close to this 200kHz!). With a 200kHz ADC clock, and it taking 13 clocks to make a conversion, the maximum rate is 200kHz/13 = 15.3kHz. If you want 10 bits you cannot go faster. The datasheet does say that for lower resolution you can go faster; if you run fADC at 1MHz, for example, you would get 1MHz/13 = 76.9kHz.
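
For example, at 16MHz F_CPU you would pick the /128 prescaler to stay under that limit (just a sketch):

ADMUX  = (1 << REFS0);	// AVcc reference, channel 0
ADCSRA = (1 << ADEN)	// enable the ADC
       | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0);	// prescaler /128: 16MHz/128 = 125kHz
// one conversion takes ~13 ADC clocks, so 125kHz/13 = ~9.6kHz maximum sample rate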

#6

Hi Clawson, thanks for the reply! At the moment I don't care about the ADC resolution. I know that if I increase the ADC clock I get a higher sample rate but lower resolution. The problem is that the datasheet doesn't say anything about how the sampling rate affects the ADC resolution. However, this isn't my problem. My problem is how to get the 100 Hz sampling rate correctly.

#7

Clawson, I'm sorry, but your answer doesn't help me and confuses me. I already know what you told me from the datasheet. So, if I have an fcpu of 16 MHz and use a prescaler of 64, my fadc is 16 MHz / 64 = 250 kHz, so the ADC sampling rate is 250 kHz / 13 = 19.2 kHz. Using this sampling rate I obtain a resolution lower than 10 bits. OK, this is fine for me. But now my signals are sampled at 19.2 kHz, right? And I 'resample' these signals every 10 ms. By 'resample' I mean that every 10 ms I start the ADC to sample the signals. When the ADC finishes, I wait for the next tick to start the ADC and acquire the next sample. Is this the correct way to acquire my signals at 100 Hz? This is my main question!
I realise that the ADC has a maximum and a minimum sampling rate, but even if I change fcpu so that the minimum fadc is 50 kHz, I obtain a sampling rate of 50 kHz / 13 = 3.8 kHz, and this is still far too fast compared to my 100 Hz. Again, if I need to acquire signals at 100 Hz, how do I do it with an AVR microcontroller?

#8

luca80 wrote:
But now I have my signals that are sampled at 19.2KHz, right?
Only if you do "back to back" conversions - that is set ADSC again as soon as each conversion completes and you have collected the "previous" result.
luca80 wrote:
And I 'resample' this signals every 10 ms.
No, if you are only triggering and collecting a result every 10ms then that is nothing but a 100Hz sample rate.

 

Put it this way. At 19.2kHz the conversion time is 52us. You could pick up a result every 52us and you would be sampling at 19.2kHz. But if you trigger on the 10,000us (10ms) boundaries then you will get a reading 52us later and then wait 9,948us doing nothing until you trigger again at the next 10,000us boundary. What you may want to do is fit 192 readings (each 52us long) into your 10,000us "window" and then average those at the end. You are then sampling every 10ms but taking the average over the entire window. You may even do things like throw away wildly high/low values, or go the whole hog and use something like a Kalman filter.
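
In code the averaging idea looks something like this (a sketch only; it assumes the ADC is already configured):

uint32_t sum = 0;
const uint8_t N = 64;	// any count that fits comfortably in the 10ms window
for (uint8_t i = 0; i < N; ++i) {
	ADCSRA |= (1 << ADSC);	// trigger one conversion
	while (ADCSRA & (1 << ADSC)) {}	// ADSC clears itself when the conversion completes
	sum += ADC;
}
uint16_t average = sum / N;	// this becomes your one 100Hz sample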

#9

Clawson wrote:
Only if you do "back to back" conversions - that is set ADSC again as soon as each conversion completes and you have collected the "previous" result.

Like free-running mode?

 

Clawson wrote:
But if you trigger on the 10,000us (10ms) boundaries then you will get a reading 52us later then you will wait 9,948us doing nothing until you trigger again at the next 10,000us boundary.

This is my conception of sampling at 100 Hz (this is what my prof suggested to me). Precisely, I would acquire exactly one sample of each channel (3 channels in total) every 10 ms.

After the acquisition session, I process the data on the PC. The purpose of the acquisition is to collect data for identifying some parameters of the dynamic model of the rotor. For this I use a specific MATLAB toolbox that handles all the mathematical operations: in practice, I give MATLAB the vectors of data and MATLAB, using an optimization algorithm, finds the unknown parameters of the given dynamic model.

For this, I need to collect the data at the 10 ms rate. It is not a problem exactly where within the 10 ms interval the sample is taken, because the theory says there is some loss of information in the sampling process anyway, etc. etc...

However, thank you so much Clawson, your answer helps me a lot to clarify this aspect of the AVR ADC. If I understand correctly, the ADC sampling rate is the "speed" at which the AVR samples an analog signal. It really is the sampling rate only if the sampling process is back-to-back; otherwise one can use the ADC clock as a parameter to speed up or slow down the conversion and decrease or increase its resolution at a given frequency, right?

 

However, I have now redesigned the firmware so that all operations are essentially sequential, not in a multithreading fashion. In this way, I think I minimize the synchronization problems due to the use of interrupts.

I have read a lot about volatile, memory barriers and compiler optimization options. For me these topics are very hard to learn and understand because there is a lack of learning material! However, I have read some of your interesting posts on these topics and I want to thank you so much for sharing your know-how! :)

On this subject, I noticed FAQ#4 in your signature: what does it really mean?

To develop my framework, I have used these settings in Atmel Studio:

  • optimization level = -O0
  • debug level = Maximum (-g3)
  • warnings as errors, -pedantic, and pedantic warnings as errors
  • -std=c++11

I obtained a big binary, about 32 kB, and it seems to work. However, I made a version of the firmware that doesn't run at 100 Hz with these settings, but if I change the optimization level (-O1 and others), the firmware does run at 100 Hz!

After this I understand the potential of the compiler optimizer, but I don't know how to use it correctly. Please, can you give me some info or advice on how to make the most of the optimizer?

#10

luca80 wrote:
optimization level = -O0

 

Well, if you use delay (busy waiting for a given time, http://www.nongnu.org/avr-libc/u...) there might be some trouble at -O0 (-O1...-O3 are very reliable, anyway, IIRC).

The only trouble with optimisation is that instructions or variables are removed if the optimizer "thinks" they are useless: ex., a is modified by a function that main() (which always exists) does not call directly or indirectly (an interrupt service routine is that kind of function). Declaring a volatile means it must not be optimized out (otherwise, if no variable could be volatile, any program with a good optimizer would do... nothing at all, and would therefore be very short and quick).
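
Ex., a minimal sketch of why volatile matters here:

#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t flag = 0;	// written by the ISR, read by main()

ISR(TIMER0_COMPA_vect) { flag = 1; }

int main(void) {
	// ... timer setup and sei() omitted ...
	while (flag == 0) {}	// without volatile, the optimizer may read flag
				// once and turn this into an infinite loop
	return 0;
}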

 

Pedantic warnings and -Werror: that is recommended (for further portability), on the PC too.

Last Edited: Mon. Mar 13, 2017 - 07:27 PM
#11

Thanks for the reply dbrion0606!

In my framework I tried to minimize the use of external libraries, so I can't use that kind of delay. When I need a delay, I send a delay message to the scheduler. In the scheduler loop, at every iteration I check whether there is a message to execute; if there is no message, the loop restarts, rechecks the message queue, and so on. In this way I obtain a delay, roughly as in the sketch below. :) Obviously this is not a hard real-time framework.
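
Roughly like this (hypothetical names, my real API is a bit different):

Message m(receiver, Command::DELAY_DONE);	// hypothetical constructor and command
m.setFireTime(now() + 50);	// hypothetical setter: due 50 ticks from now
TQ.push(m);	// TQ is my timed queue; popTop() will not release m before its fire time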

However, I understood what you told me, and I used every precaution that I know: declaring volatile the variables that can change in unexpected ways, and so on.

The problem is that I do everything in C++, so I use objects. With objects the volatile keyword has the same meaning as for non-object variables, but the semantics change a bit. Here there is a gap in the literature, because the C++ OOP books I have read don't give sufficient information about objects and the volatile keyword. There are some good articles on the internet that clarify some aspects of volatile objects in C++, but the gap in know-how remains, and only a very limited number of these articles are AVR-architecture friendly!

I have to read some more articles before I can tell you more about this topic.

#12

I know that this topic is hard, but I'm trying to understand how it works, so I need to ask some other questions about this project.

The problem is how the compiler produces the final code (hex, elf) from my source files, and how I can check that the final code really is what I want!

At the moment I compile my files using -O1 optimization and, using the debugger, I try to trace the execution of the program between the C++ code and the assembly, using the disassembly view with the "show source code" option enabled.

Using this approach I noticed a strange behavior in the execution flow. I have two methods from two different objects that do the same work with some differences.

One object is a priority queue and the second is a normal queue. The method under the magnifying glass is the pop method of each queue.

This is the code from the priority queue object:

template <class T, class W>
bool
PriorityQueue<T,W>::popTop(T& elem, W& ts) volatile {
	volatile Mutex m;                         // enter the critical section (RAII)
	if(elementCount == 0) return false;
	else{
	        if ( buffer[1].getFireTime() <= ts.nowForQueue() ){ // top message is due
			elem = buffer[1];                 // copy constructor
			buffer[1] = buffer[elementCount]; // operator =
			elementCount--;
			heapify(1);                       // restore the heap property
			return true;
		}
		return false;
	}
}                                                 // ~Mutex() releases the lock here

 

and this is the method from the queue object:

template <class T>
bool
Queue<T>::popFront(T& elem) volatile {
	volatile Mutex m;                     // enter the critical section (RAII)
	if( elementCount == 0 ) return false;
	else{
		elem = buffer[start];
		elementCount--;
		start++;
		if (start == size) start = 0; // wrap around the circular buffer
		return true;
	}
}

 

Both methods were written with an interface protected against race conditions, hence the volatile in the signature and some odd-looking operations in the body.

My intent is to make both methods atomic, so I use a mutex to create a critical section.

The mutex is this:

 

class Mutex 
{
public:
    Mutex():state(SREG){                            // save SREG (including the I flag)...
        __asm__ __volatile__("cli" : : : "memory"); // ...then disable interrupts; memory barrier
    }
    ~Mutex(){
        __asm__ __volatile__("" : : : "memory");    // memory barrier
        SREG = state;                               // restore the saved interrupt state
    }    
private:
     char state;
}; //Mutex

I wrote it using the C++ RAII idiom. And finally, this is the function where I call the pop methods above:

 

void
TimePriorityScheduler::run() { // thread safe version
    while (true) {             // forever loop

		Message* tmp = nullptr; // message to dispatch, if any

		if( TQ.popTop(dispatch,*this) ) tmp = &dispatch;   // a due timed message first
		else if ( MQ.popFront(dispatch) ) tmp = &dispatch; // otherwise a normal message

		if( tmp != nullptr ){ // execute the message
			Actor* receiver = dispatch.getReceiver();
			receiver->handler(dispatch);
		}

    }//while
}

The problem is that when I trace the execution of the program, the debugger steps through the first pop call ( TQ.popTop(dispatch,*this) ) two times, and only the second time do I trace the mutex associated with this method. On the other hand, when I trace the other pop call ( MQ.popFront(dispatch) ) I trace the execution flow correctly: it does what I expect!

 

If I am tracing the execution flow correctly, this strange behavior is a compiler problem. So, how do I find some information on how the AVR compiler thinks? The two methods above have roughly the same execution flow, but the compiler handles them differently, so how can I tell the compiler that they have the same execution flow?

Last Edited: Sat. Mar 18, 2017 - 06:16 PM
#13

...it was a LabVIEW problem, with some small problems on the AVR side. But now it works! :)

 

Or it seems to work!

The problem now is that when I change the acquisition time, for example to 7 s, the acquisition starts, but "the program" doesn't see any change in the reference signal.

The rotor runs at minimum velocity as I expect, but when I expect the rotor to follow the reference signal, it doesn't, and after a few seconds it stops running and the ESC emits a series of beeps indicating that the PWM signal is off (below the minimum) while the battery is connected. It seems that the reference signal has magically changed in some way.

I did some tests with the debugger, and it stops when, in the initialization phase, I create the buffers. I create the buffers, and some other variables, using the new operator.

To obtain the reference signal, I put all the values in an array. In this way I can create the reference signal in MATLAB and export all the values to the C language.

The buffer size is the length of the reference signal. So, for 7 s, acquiring 4 signals @ 100 Hz, I have 7 * 100 = 700 unsigned ints per signal.

I do not use the progmem attribute.

I think the problem is in the creation of these variables, but I'm not sure and I'm a little lost.

The application consumes the following resources on the ATmega2560 microcontroller:

				Program Memory Usage	:	28042 bytes	(10.7% full)
				Data Memory Usage	:	2398 bytes	(29.3% full)

 

Is it possible that the heap memory is full? With an acquisition time of 7 s, I have in the heap at minimum 4 arrays of 700 unsigned ints (4 * 700 * 2 bytes = 5600 bytes) plus some other variables and buffers; together with the 2398 bytes of static data, that is nearly all of the ATmega2560's 8192 bytes of SRAM, leaving almost nothing for the stack.

How can I see the heap and stack memory usage at compile time, or at runtime if necessary?
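
(Searching around, I found this commonly quoted avr-libc trick to estimate the free RAM between the heap and the stack at runtime; I am not sure it is the official way:)

extern char *__brkval;	// avr-libc: current end of the heap
extern char __heap_start;

int freeRam() {
	char top;	// a local, so &top is (about) the current stack pointer
	return &top - (__brkval ? __brkval : &__heap_start);
}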

If I use the progmem attribute, can I solve the problem?
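
(From what I have read, something like this should keep a constant reference signal in flash instead of SRAM; the names are only an example:)

#include <avr/pgmspace.h>

const uint16_t refSignal[700] PROGMEM = { /* values exported from MATLAB */ };

uint16_t readRef(uint16_t k) {
	return pgm_read_word(&refSignal[k]);	// copy one element from flash at runtime
}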

Any help is much appreciated! :)

Last Edited: Wed. May 3, 2017 - 09:48 PM