XMega128A4U ADC Calibration


Hi All,

I have some trouble with the ADC of the X128A4U: the absolute converted results are approx. 0.3 ~ 0.5 % higher than expected.

To reproduce this I took 5 pcs. of the X128A4U and attached the identical circuit to each of them in turn. The circuit creates a reference voltage of 2.5 V and a measurement voltage of 2.000 V; the measurement voltage is derived from the reference voltage through a precision voltage divider and is applied to ADC0 of the controllers. Since the same circuit was attached to all 5 controllers, the absolute difference of the applied voltages should be negligible. Yet the converted values do not correspond to 2.000 V but to 2.006 ~ 2.010 V.

I think I could compensate the error by doing gain calibration. But since I use single-ended mode with a gain of 1x, I'm not sure the error is really caused by gain. According to datasheet 8331 the gain is always 1 in single-ended mode, and Figure 28-1 shows the gain stage bypassed. The ADC characteristics in datasheet 8387 state in table 36-106 that the gain error applies only to differential mode, but table 36-107 refers to the gain stage in general. If the gain stage is active in single-ended mode, then obviously gain calibration must be performed; but if the gain stage is disabled, I have to search for another cause of the wrong values.

Does someone have more details? Should I do gain calibration in any case? Does someone have experience with what absolute accuracy I can expect? I hoped to reach 0.1% with 16-fold oversampling.

BTW, I am aware of AVR120, AVR127, and AVR1300. Decoupling of the reference and measurement voltages is not an issue, the measured values are stable within +/- 2 LSB, the ADC is properly clocked, and offset compensation was performed.

Thank you very much for your help
Klaus


For me, I'd like to see some code fragments showing ADC setup and conversion. In addition, I'd like to see the results in ADC counts, and also results for other voltages--including 0V.

There is no ripple anywhere in the system? And you have measured the signal right at the AVR pin (and the reference pin) with a meter with enough calibration and resolution to verify your <1% "error"?

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Of course the supply voltage is clean, and the voltages were measured with a precision voltmeter. The measurement-voltage source is buffered by an amplifier with max. 150 µV offset, so the output resistance of the analog part should not cause the error. I also reduced the ADC clock to rule out the output resistance, but the result did not change.

Please note that the values are absolutely stable over hundreds of readings, typically within +/- 2 LSB.

Some measurement results with VRef = 2.5000 V:

Device#    ADC (VIN = 0 V)    ADC (VIN = 2.000 V)    VInCalc
#11        192                3478                   ~2.0056 V
#13        194                3484                   ~2.0081 V
#17        193                3483                   ~2.0078 V
#20        199                3489                   ~2.0070 V

Example for #11:
VInCalc = 2.5V/2^12 * (3478 - 192) = 2.0056V
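
For reference, the same conversion written out in C (a minimal sketch; the offset parameter is the value read with the input tied to GND, as in the table above):

	#include <stdint.h>

	#define ADC_VREF      2.5f     /* external 2.5 V reference on AREFB */
	#define ADC_FULLSCALE 4096.0f  /* 2^12 for 12-bit results */

	/* Convert a raw 12-bit reading to volts after subtracting the
	 * zero offset (e.g. 192 counts for device #11). */
	static float adc_counts_to_volts(uint16_t raw, uint16_t offset)
	{
		return ADC_VREF / ADC_FULLSCALE * (float)(raw - offset);
	}

	/* adc_counts_to_volts(3478, 192) gives ~2.0056 V */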

Do you think the absolute values of the ADC should be correct without gain calibration?

Let's have a look at the code. I use ASF, and the controller is suspended (idle sleep) during the conversion.

Initialization:


	// Initialize configuration structures
	adc_read_configuration(&ADCA, &adc_conf);
	adcch_read_configuration(&ADCA, ADC_CH0, &adcch_conf);

	/*
	 * Configure ADC module:
	 * - unsigned, 12-bit results
	 * - PORT B voltage reference
	 * - 2 MHz maximum clock rate
	 * - manual conversion triggering
	 */

	adc_set_conversion_parameters(&adc_conf, ADC_SIGN_OFF, ADC_RES_12,
		ADC_REF_AREFB);

	adc_set_clock_rate(&adc_conf, 2000000UL );
	adc_set_conversion_trigger(&adc_conf, ADC_TRIG_MANUAL, 0, 0);

	adc_write_configuration(&ADCA, &adc_conf);


	/*
	 * Configure ADC Channel 0:
	 * - single-ended measurement from configured input pin
	 * - interrupt flag set on completed conversion
	 */
	adcch_set_input(&adcch_conf, ADCCH_POS_PIN0, ADCCH_NEG_NONE, 1);
	adcch_set_interrupt_mode(&adcch_conf, ADCCH_MODE_COMPLETE);
	adcch_disable_interrupt(&adcch_conf);

	adcch_write_configuration(&ADCA, ADC_CH0, &adcch_conf);


And here is the start of the conversion:


	uint8_t pos = ADCCH_POS_PIN0;
	uint8_t neg = ADCCH_NEG_NONE;

	adc_read_configuration(&ADCA, &adc_conf);
	adcch_read_configuration(&ADCA, ADC_CH0, &adcch_conf);

	switch (chPos)
	{
	case 0:
		pos = ADCCH_POS_PIN0;
		break;
	case 1:
		pos = ADCCH_POS_PIN1;
		break;
	case 2:
		pos = ADCCH_POS_PIN2;
		break;
	case 3:
		pos = ADCCH_POS_PIN3;
		break;
	default:
		return -1;
	}

	switch (chNeg)
	{
	case 0xFF:
		break;
	case 0:
		neg = ADCCH_NEG_PIN0;	// negative inputs use the ADCCH_NEG_* constants
		break;
	case 1:
		neg = ADCCH_NEG_PIN1;
		break;
	case 2:
		neg = ADCCH_NEG_PIN2;
		break;
	case 3:
		neg = ADCCH_NEG_PIN3;
		break;
	default:
		return -1;
	}

	adcch_set_input(&adcch_conf, pos, neg, 1);

	// Disable Modules
	PR.PRPA |= PR_DAC_bm;
	PR.PRPD |= PR_TC0_bm | PR_HIRES_bm;
	PR.PRPE |= PR_USART1_bm | PR_TC1_bm | PR_SPI_bm;

	// Setup Idle Sleep Mode
	SLEEP.CTRL |= SLEEP_SEN_bm;

	if (chNeg != 0xFF)
	{	// Use signed mode in differential mode
		adc_set_conversion_parameters(&adc_conf, ADC_SIGN_ON, ADC_RES_12,
			ADC_REF_AREFB);
	}
	else
	{
		adc_set_conversion_parameters(&adc_conf, ADC_SIGN_OFF, ADC_RES_12,
			ADC_REF_AREFB);
	}


	adcch_write_configuration(&ADCA, ADC_CH0, &adcch_conf);
	adc_write_configuration(&ADCA, &adc_conf);

	adc_start_conversion(&ADCA, ADC_CH0);

	// Enter Idle Sleep Mode
	do
	{
		__asm__ __volatile__ ( "sleep" "\n\t" :: );
	} while (0);

	// WOKE UP

	SLEEP.CTRL &= ~SLEEP_SEN_bm;

	// Reenable Modules
	PR.PRPA &= ~PR_DAC_bm;
	PR.PRPD &= ~(PR_TC0_bm | PR_HIRES_bm);
	PR.PRPE &= ~(PR_USART1_bm | PR_TC1_bm | PR_SPI_bm);

	return 0;
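
(One step the fragment above does not show is reading the result back after wake-up. With ASF that would look roughly like the sketch below; adc_wait_for_interrupt_flag() and adc_get_result() are the driver calls I would expect here, but verify them against your ASF version.)

	/* Sketch: fetch the conversion result after waking up. */
	uint16_t result;

	adc_wait_for_interrupt_flag(&ADCA, ADC_CH0);  /* make sure it finished */
	result = adc_get_result(&ADCA, ADC_CH0);      /* raw 12-bit value */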



Help is appreciated very much.

Klaus


I think I should keep it simple and concentrate on these questions:
- Is ADC gain calibration necessary in single-ended mode on the X128A4U?
- Is the gain stage bypassed in single-ended mode, so that the errors in datasheet 8387 table 36-107 do not apply?


I'd agree that you have addressed the situation in an orderly and complete manner.

I mentioned other voltages, such as 1.00V, to see if there is a constant offset or whether it is proportional. That would go to your questions about gain. With the numbers you posted it appears that it is about 10 counts too high when converting a 2.00V signal. How many counts is it "off" with a 1.00V signal? with a 0.50V signal? with a 0.00V signal--e.g., conversion on the resistor-divider ground when routed to an input channel?
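
To make that concrete, here is a small sketch (my own illustration, not from this thread) of how readings at two known voltages separate offset from gain; measured counts are modelled as gain * ideal + offset:

	#include <stdint.h>

	/* Split the conversion error into offset (counts at 0 V) and
	 * gain (1.0 = ideal) from readings at two known voltages. */
	struct adc_error { float offset; float gain; };

	static struct adc_error adc_two_point_error(float v_lo, uint16_t raw_lo,
	                                            float v_hi, uint16_t raw_hi,
	                                            float vref)
	{
		struct adc_error e;
		float ideal_lo = v_lo / vref * 4096.0f;  /* ideal 12-bit counts */
		float ideal_hi = v_hi / vref * 4096.0f;

		/* slope of measured counts vs. ideal counts */
		e.gain = ((float)raw_hi - (float)raw_lo) / (ideal_hi - ideal_lo);
		/* extrapolate the measured line back to 0 V */
		e.offset = (float)raw_lo - e.gain * ideal_lo;
		return e;
	}

A pure offset error gives gain = 1.0; a pure gain error gives offset = 0.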

[full disclosure: I'm not an Xmega expert. At all. But I've been following the ADC discussions here closely as we have two Xmega designs in the works that will use the ADC as an important part of the app.]

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Absolute accuracy might be tough with this ADC's pipeline architecture. A pipeline is good for medium power, medium resolution, and medium conversion rate. The best architectures for absolute accuracy are the very slow ones, such as dual-slope integration, but MCUs usually don't use those because they are not general-purpose enough.

Anyway, on to trying to get absolute accuracy with this architecture:
1) The reference voltage and how it is decoupled are very important. Is your precision voltage reference buffered and decoupled? Decoupling can be tricky because you might need an inductor. The reference voltage is sampled onto a capacitor by every stage in the pipeline, so it has to be able to supply all those capacitors and settle quickly enough.
2) Is your input buffered? You mention it has low output resistance, but this signal, like the reference, is sampled onto a capacitor through switches. It needs to be stable and settle very quickly.
3) How are your power supplies decoupled? Since you are doing single-ended conversions, the power supplies can inject noise through either the input or the reference.

Quote:
I think I should keep it simple and concentrate on these questions:
- Is ADC gain calibration necessary in single-ended mode on the X128A4U?
- Is the gain stage bypassed in single-ended mode, so that the errors in datasheet 8387 table 36-107 do not apply?

Gain calibration will only fix linear errors. The datasheet says integral nonlinearity should be within 2-3 LSB, so it should be pretty linear, but much of this depends on how they tested it.
From the diagram in the XMEGA AU manual it looks like the gain amp is bypassed, but that is only a diagram. Only Atmel knows exactly what the connections are.

The only way to check absolute accuracy and figure out what the problem is, especially since you don't have the circuit schematic of the internal ADC, would be to characterize the ADC by stepping through every code while slowly varying the input voltage. Then try modifying the above three items (input, reference, power) and characterize again.

Note also that many of these characteristics change if your input is AC.


Thank you for the explanations, especially for the distinction between pipeline and dual-slope architectures and their characteristics, which I didn't know about. I can live with a lower accuracy, that's not my problem, but I have to know where it comes from.
I used the ADC of the Mega128 in the past and thought it was better in linearity, but with only 10-bit resolution a 1% error is not detectable.
The reference should be decoupled quite decently but is not buffered. I will try buffering it now, but I think (or rather hope) that the analog circuit is working.
The analog input stage is buffered by a precision op-amp, and capacitors are present at the ADC inputs.
I'm aware of the analog issues, but they mostly cause unstable values, not stable values with a constant error; also, changing the ADC configuration, e.g. the clock, would then influence the result. The ADC also has settings to limit the current and a high-impedance mode. None of these settings changes the result in my case.
Of course I can only correct linear errors, but I can live with an INL of 2 LSB.

To keep it easy I will implement gain calibration first and see how accurate the ADC is then.


In single-ended mode there is always some offset because the internal GND reference used by the ADC is a little below external GND. This is deliberate, and the datasheet explains it as allowing single-ended mode to do zero-crossing detection and read slightly negative voltages.

You can calibrate it out easily by simply measuring GND on an ADC pin. You can do it once during manufacture, or just dedicate an ADC pin to GND and do it whenever you like.
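
In ASF terms that might look roughly like the following sketch; it assumes the adcch_conf/ADCA setup from the code earlier in the thread and an ADC pin (here pin 1) wired to GND:

	/* Sketch: measure the zero offset on a pin tied to GND,
	 * averaging 16 conversions to suppress noise. */
	static uint16_t adc_measure_offset(void)
	{
		uint32_t sum = 0;

		adcch_set_input(&adcch_conf, ADCCH_POS_PIN1, ADCCH_NEG_NONE, 1);
		adcch_write_configuration(&ADCA, ADC_CH0, &adcch_conf);

		for (uint8_t i = 0; i < 16; i++) {
			adc_start_conversion(&ADCA, ADC_CH0);
			adc_wait_for_interrupt_flag(&ADCA, ADC_CH0);
			sum += adc_get_result(&ADCA, ADC_CH0);
		}
		return (uint16_t)(sum / 16);
	}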

If you can live with only 11 bits of resolution, you can use differential mode with external GND as the negative input as well.

I did a lot of testing of the ADC and experimented with calibration. Gain error is extremely small and linear over the entire voltage range.


Thanks, the offset calibration was already done; it gives a value of approx. 190, which is subtracted from each ADC value before gain compensation is applied.

I have now tested 10 of my devices at 5 constant input voltages spread over the whole input range.

Without gain compensation the biggest absolute error with a 2.5 V reference was ~9 mV. There is a clear tendency that a constant factor corrects this over the whole range.

Recalculating with the gain correction factor reduces this error to max. 1.6 mV over the whole range. Now we are talking about a few LSB, which may be caused by INL or whatever. This is the accuracy I wanted to reach.

When you talk about an "extremely small" gain error, do you mean 10 LSB over the whole range, or much less?


Quote:

I can live with a lower accuracy, that's not my problem, but I have to know where it comes from.

Quote:

To keep it easy I will implement gain calibration first and see how accurate the ADC is then.


I thought my suggestion about testing with a few other signal values was a good one. With what you have presented, the single 2.00V value, you don't KNOW yet that it is a gain situation. It could be offset, or non-linearity. You can gain and gain and gain all week--and it might not do any good.

Is my suggestion about trying 1.00V and 0.50V and 0.00V as inputs really stupid or useless? Perhaps. It sounds good to me, though, but it >>is<< the morning after my poker night.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


You are right, and I had already measured more points between GND and VRef; I came to the conclusion that there is (at least partly) a constant factor.

In theory the gain correction reduces the error to nearly 1/10 of the original. Now, after correcting the gain for each device, I can see that the remaining error is approx. +/- 3 LSB for the tested cases. I don't think I can expect more. Even +/- 5 LSB would be quite decent.

Thank you for your help.

Klaus


kumme wrote:
When you are talking about "extremely small" gain error do you mean 10 LSB over the whole range or much less?

In my tests it has always been less, up to around 3 LSB, but keep in mind that I am using differential mode, so I only get an 11-bit result. Therefore for you 6 LSB is more likely, or about 3 mV.

I just got some new prototype boards, so I am going to do more extensive testing over a larger number of devices.

I'm glad you reached your goal. I think between a few of us here we have finally figured the XMEGA ADCs out and managed to get good accuracy. We could do with a wiki to collect all this knowledge and experience together.


I have worked a bit more on the accuracy of the ADC of my XMega128A4U and want to share my results regarding gain and offset calibration of the ADC in single-ended mode.

First, I verified the influence of the capacitors at the input pins of the ADC. Normally I use 100 nF capacitors to be on the safe side. Because of hardware restrictions I had to reduce them: the operational amplifiers driving the signal were not stable with such a high capacitive load. Reducing them to 10 nF is a good compromise; reducing them further (e.g. to 1 nF) gives slightly less accuracy.

It is easy to find out whether the result has settled by changing the ADC clock: for a stable signal the converted value should stay the same, otherwise it changes with the ADC clock. I also recommend a minimum ADC clock of ~200 kHz.
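
A rough sketch of that check with ASF (adc_read_once() stands in for whatever single-conversion routine you use):

	/* Convert the same DC input at two ADC clock rates; if the
	 * source settles properly, both results should agree within
	 * the usual +/- 2 LSB noise. */
	uint16_t r_fast, r_slow;

	adc_read_configuration(&ADCA, &adc_conf);
	adc_set_clock_rate(&adc_conf, 2000000UL);   /* 2 MHz */
	adc_write_configuration(&ADCA, &adc_conf);
	r_fast = adc_read_once();

	adc_set_clock_rate(&adc_conf, 200000UL);    /* ~200 kHz */
	adc_write_configuration(&ADCA, &adc_conf);
	r_slow = adc_read_once();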

As I wrote, the gain calibration helps a lot to increase accuracy over the whole range. Executing it during the production step is easy: take a very accurate voltage reference (I recommend 2.5 V, 0.05% if available) and a very accurate voltage divider with 3k and 12k (0.1% or better).
The voltage divider is important because the ADC signal must not exceed approx. VRef - 0.120 V (the offset voltage of the ADC); the resulting 2.5 V / 15k * 12k = 2.0 V is close to that upper limit. I assume VCC = 3.3 V; otherwise the 2.5 V VRef may be out of spec. Using a MOSFET allows connecting the ADC input alternately to GND and to 2.0 V.

Connecting the ADC input to GND allows measuring the offset of the ADC and should give a value of approx. 190. Connecting the ADC input to 2.0 V should ideally result in 2.0 / 2.5 * 2^12 + 190 = 3466. From the actually measured value and the calculated ideal value it is possible to compute a correction factor. Shift operations are your friend for not losing accuracy during the calculations; I used uint32_t values and shifted by 18.
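
A minimal fixed-point sketch of that calculation (my reading of the description above; the names are illustrative):

	#include <stdint.h>

	#define GAIN_SHIFT 18  /* correction factor scaled by 2^18 */

	/* Production step: factor = ideal / measured, in fixed point.
	 * Both arguments are counts after offset subtraction,
	 * e.g. ideal = 3466 - 190 = 3276. */
	static uint32_t adc_calc_gain_factor(uint16_t ideal_counts,
	                                     uint16_t measured_counts)
	{
		return ((uint32_t)ideal_counts << GAIN_SHIFT) / measured_counts;
	}

	/* Runtime: apply offset and gain correction to a raw reading.
	 * 12-bit counts times a ~2^18 factor still fits in uint32_t. */
	static uint16_t adc_correct(uint16_t raw, uint16_t offset, uint32_t factor)
	{
		uint32_t x = (raw > offset) ? (uint32_t)(raw - offset) : 0;
		return (uint16_t)((x * factor) >> GAIN_SHIFT);
	}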

Finally, store the gain correction factor and the offset in the EEPROM and correct the result of every single ADC conversion.
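
With avr-libc that could be done as follows (a sketch; the variable names are made up, and on XMEGA you may prefer ASF's nvm driver instead):

	#include <avr/eeprom.h>
	#include <stdint.h>

	/* Calibration constants kept in EEPROM. */
	static uint16_t EEMEM ee_adc_offset;
	static uint32_t EEMEM ee_adc_gain_factor;

	static void adc_cal_store(uint16_t offset, uint32_t factor)
	{
		eeprom_update_word(&ee_adc_offset, offset);
		eeprom_update_dword(&ee_adc_gain_factor, factor);
	}

	static void adc_cal_load(uint16_t *offset, uint32_t *factor)
	{
		*offset = eeprom_read_word(&ee_adc_offset);
		*factor = eeprom_read_dword(&ee_adc_gain_factor);
	}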

Please be careful when using the ADC with the DAC switched on! While the DAC operates normally inside its linear range it has no influence on the ADC values. But when the DAC's data register contains values < 0x0100 (default 0x0000), it disturbs about 5 bits of the ADC result. This happens as soon as DAC output channel 0 or 1 is switched on, even with no load connected to the pins. It took me hours to find this out. Maybe it has something to do with the reference; I source the ADC and the DAC from the same voltage reference.

I was able to reproduce and correct the gain error and the DAC influence on more than 50 different circuits in which I am now using the ATxmega128A4U. After these calibration steps the accuracy in single-ended mode increased a lot, and it can be increased further by oversampling. I myself use 16-fold oversampling, which gives me a reliable 14 bits of resolution.
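
The oversampling step itself is simple (a sketch; adc_read_corrected() stands in for one offset- and gain-corrected 12-bit conversion):

	/* 16-fold oversampling: sum 16 samples and decimate by shifting
	 * right 2 bits, turning 12-bit readings into one 14-bit result. */
	static uint16_t adc_read_14bit(void)
	{
		uint32_t sum = 0;

		for (uint8_t i = 0; i < 16; i++)
			sum += adc_read_corrected();

		return (uint16_t)(sum >> 2);
	}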

I hope my findings are useful for others who want to use the ADC for very accurate measurements.

Klaus


I have been testing my new prototype boards and have had some good results. These boards are not really designed for precision ADC measurements (those are coming later), but even so I'm happy with the results I am getting.

For external voltage measurements I use the ADC in signed single-ended mode with offset error correction. The offset error is measured by converting external GND and is typically 5 to 8 LSB above the XMEGA-internal GND used in signed mode. In hindsight I should have put the GND reference on one of the low-numbered ADC inputs so I could use it as the negative input of a differential measurement, but this method works fine as well.

I don't bother with gain error at all. It is too small to worry about in this application, but I plan to calibrate it for the associated test equipment I am developing.

For internal measurements I look at SCALEDVCC and TEMPSENSE. SCALEDVCC has a large offset error that has to be calibrated out during factory testing, and it doesn't seem to correspond to anything else. There is bound to be some gain error as well, but I have not done any tests on it because I find that accuracy over the working voltage range (2.5-3.6 V) is better than 10 mV, which is good enough.

I initially had a lot of problems with the temperature sensor. It needs somewhat complex calibration, which I have detailed elsewhere, but the good news is that it can be done at room temperature. Once calibrated, and with a carefully laid-out circuit, I get around +/- 0.2 C jitter and better than 0.5 C accuracy over a range of about -10 C to +35 C. Actually, the jitter is probably due to genuine fluctuations in the temperature of the IC.

For calibration I use an MLX90614 IR temperature sensor on a test rig. Low cost, pretty good accuracy and readings are instant.

Kumme, what you say about the DAC is interesting because it implies that trying to calibrate the DAC with the ADC isn't going to work very well. I wrote a calibration routine that does seem to work well, but it uses the internal DAC output rather than looping one of the pins back. Perhaps it is quieter when used on the internal channel.