## ADC's sample and hold capacitor


The ADCs on AVRs use a sample-and-hold circuit for their conversions. Does anyone know what the recommended/maximum source impedance for that sample-and-hold circuit is? I just noticed the other day, looking through a PIC PDF file (helping someone on another forum), that they list the recommended impedance at 2.5k ohms and the maximum impedance at 10k ohms, in order to fully charge the capacitor for the ADC reading. What would be the equivalent values on an AVR? This is in regard to using two 1 meg resistors as a voltage divider for a battery monitoring system.

-Curiosity may have killed the cat
-But that's why they have nine lives

I recall from reading the datasheet (don't know how I'd find out otherwise) that the max recommended impedance is 10k, in order to let the S/H cap charge in the 2 clocks allotted. It even gives the capacitance of the cap in pF, but I can't remember it right now.

Imagecraft compiler user

Since you are measuring a very slowly changing signal, you can connect a 100nF cap, which will effectively reduce the impedance. The S/H has a 14pF cap, and the 100nF cap will charge it. I have recently experimented with two 100k resistors and a 100nF cap, with exceptionally good results.

The datasheets recommend a drive impedance of 10K for the full AVR ADC bandwidth. Its analog input resistance is 100 MOhm, and its S/H capacitor is 14pF.

Since you are monitoring a battery voltage level, and you are not trying to catch brief voltage dips, you can use a 10...100nF capacitor in parallel with the resistor divider output to decrease its impedance. To charge the S/H ADC capacitor reliably, use an input capacitance of at least 100x the S/H value (> 1400pF).
Do not forget to take into account the slower resistor divider output response.

Also, "using 2 1 meg resistors as a voltage divider" seems to be good enough to drive the 100M ADC input. Try lowering their values, though.

I hope for nothing; I fear nothing; I am free. (Nikos Kazantzakis)

Thanks for the posts folks, that about covers everything I think.


Hi,

Usually, after a conversion, the S/H capacitor is left at a particular voltage (VRef, VCC, or GND, depending on the ADC). Thus, every time you start a conversion, the capacitor has to be charged. This results in a current with DC characteristics, creating a DC error in the ADC reading.
The current increases with sampling frequency.
Example:
1 MOhm input resistance
10 ksamples/s = 100us per conversion
14pF S/H capacitance
V(charge) = VRef = 2.56V
V(In) = 0V

On every conversion the capacitor has to be discharged from VRef to 0V.
14pF x 2.56V / 100us = 0.36uA
0.36uA x 1MOhm = 0.36V
--> 0.36V error with 0V input, but 0V error if you input VRef.
It behaves like a 7.1MOhm resistor connected from the ADC input to VRef.

Klaus
********************************
Look at: www.megausb.de (German)
********************************

That would explain why you lose the 2 LSBs on higher-speed conversions.


bobgardner wrote:
I recall from reading the datasheet (don't know how I'd find out otherwise) that the max recommended impedance is 10k, in order to let the S/H cap charge in the 2 clocks allotted. It even gives the capacitance of the cap in pF, but I can't remember it right now.

Everyone is focusing on the S/H capacitance in Fig. 23-8 of the ATMega328P datasheet (or similar) and ignoring IIH and IIL in that same figure, which are specified to be up to 1 uA. For many high-impedance inputs to the ADC (e.g. a megohm-ish divider), this leakage seems likely to be highly significant.
Even if it's actually much less, it's going to distort measurements coming off high-impedance dividers. It seems like it would be better to be safe and use an op-amp buffer to convert any high output impedance one wants to send to the ADC into a low output impedance in the range the ATMega datasheet officially advises.

Or perhaps I'm missing something.

This is an 8-year-old thread, but it's a good topic to discuss!

Clearly, if one is interested in a high-bandwidth, rapidly changing signal and is running the ADC at its maximum rate, using an op-amp to buffer a high-impedance source and drive the ADC is often a great approach. One often has the op-amp do dual duty as an anti-aliasing (Nyquist) filter as well.

In this case, however, the OP is likely using 1 M resistors in the input divider to minimize the current being drawn, as it is likely a battery application. Adding a "power hungry" Op-Amp might not be a good overall design approach.

For a low bandwidth measurement, such as monitoring a battery voltage, the cap across the divider is often a great approach.

JC

Those IIH and IIL currents, together with the S/H cap, set the longest allowable conversion time.

This is because those currents tend to charge/discharge the S/H cap after the sample switch is open. The result is capacitor voltage drift during the conversion.

You are safe so long as you do not set the ADC clock to be a lower frequency than the stated minimum.

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net

DocJC wrote:
This is an 8-year-old thread, but it's a good topic to discuss!
And one I'd not seen before. Though it's old, one of the last posts from eight years ago caught my eye:
MegaUSBFreak wrote:
Usually, after a conversion, the S/H capacitor is left at a particular voltage (VRef, VCC, or GND, depending on the ADC). Thus, every time you start a conversion, the capacitor has to be charged. This results in a current with DC characteristics, creating a DC error in the ADC reading.
The current increases with sampling frequency.
.
.
.
On every conversion the capacitor has to be discharged from VRef to 0V.
That's an intriguing claim.

I see no mention of this kind of behaviour in any of the datasheets I've read. Further, the bench tests I've done suggest this to be untrue.

So long as the ADC is enabled the S/H cap is connected to whichever input is selected by ADMUX. It will follow the voltage on that input, charging and discharging within the limits imposed by the TC of the RC filter formed by it and the analog input path resistance (see NOTE below).

The S/H cap is disconnected from the input only during a conversion (the 'hold' in sample-and-hold), and when the ADC is disabled. Even then, it is not 'discharged to 0V'. The other side of the S/H cap is tied not to GND, but to Vcc/2, so it will tend to leak towards that, but VERY slowly. My tests show an average leakage TC in the neighbourhood of 274 seconds. This corresponds to a leakage resistance of about 20 teraohms, and a maximum leakage current of 0.13 pA (Vcc = 5V).

The only circumstance when the S/H cap is 'discharged to 0V' is when the input is at 0V, such as when the selected MUX is the GND channel.

If the MUX is left alone and the ADC is left enabled, the S/H cap remains connected to the channel (following the input) except during a conversion.

ka7ehk wrote:
Those IIH and IIL currents, together with the S/H cap, set the longest allowable conversion time.

This is because those currents tend to charge/discharge the S/H cap after the sample switch is open. The result is capacitor voltage drift during the conversion.

Iih and Iil have no direct relevance to the ADC, the S/H cap, or the analog input path resistance. They represent the leakage current experienced by the I/O pin (all I/O pins, actually). That leakage current is due to the input buffer, the output drivers, and the protection diodes, all of which are outside the analog input path.

They will have an effect on a high-impedance signal source connected to an ADC input, but not any part of the ADC itself.

Quote:
You are safe so long as you do not set the ADC clock to be a lower frequency than the stated minimum.
Since the S/H cap is isolated from the I/O pin during conversion, Iih and Iil won't affect conversion accuracy. I expect the lower limit imposed on the ADC clock speed is due to worst-case leakage of the S/H cap into the ADC comparator and other parts of the device.

-----------------------
NOTE:

That resistance is specified in datasheets to be between 1K and 100K, and the S/H cap is 14 pF, so the TC is between 14 ns and 1.4 µs. With a TC of 14 ns, the -3 dB cut-off would occur at ~11.4 MHz. With a TC of 1.4 µs, the -3 dB knee occurs at ~114 kHz.

While the cut-off for the longer TC case might be 114 kHz, remember that the ADC itself is limited to about a 15.385 kHz sample rate when clocked at 200 kHz (the maximum for 10-bit results). The Nyquist frequency is half that, about 7.692 kHz. With a TC of 1.4 µs, a 7.692 kHz input will be attenuated (at the S/H cap) by less than 0.02 dB, and the phase shift will be less than 4 degrees.

Mind you, the ADC can be run at up to 1 MHz (for 8-bit results), giving a sample rate as high as 76.923 kHz and a Nyquist frequency of 38.462 kHz. With a TC of 1.4 µs, that still will be attenuated (at the S/H cap) by less than 0.5 dB, with a phase shift of less than 20 degrees.

Remember also that the figures above are based on the longer TC of 1.4 µs and the upper limit for the analog input path resistance of 100K. For the lower limit, the figures are very much more favourable.

Here's a chart for comparison:

```                     TC == 14 ns                 TC == 1.4 µs
Input        Attenuation  Phase shift    Attenuation  Phase shift     ADC resolution (1 LSB)
----------   -----------  -----------    -----------  -----------     ----------------------
7.692 kHz        ~2 µdB    ~0.04 deg        ~20 mdB       ~4 deg            ~2 mdB (10-bit)
38.462 kHz       ~50 µdB     ~0.2 deg       ~500 mdB      ~20 deg           ~34 mdB (8-bit)
```

For the best-case TC of 14 ns we see that the ADC resolution, and not the RC filter formed by the S/H cap, is the limiting factor.

Of course none of this accounts for any other real-world factors which would affect ADC conversion accuracy.

`============================================================================================`
 "Experience is what enables you to recognise a mistake the second time you make it." "Good judgement comes from experience.  Experience comes from bad judgement." "Wisdom is always wont to arrive late, and to be a little approximate on first possession." "When you hear hoofbeats, think horses, not unicorns." "Fast.  Cheap.  Good.  Pick two." "We see a lot of arses on handlebars around here." - [J Ekdahl]

DocJC wrote:
This is an 8-year-old thread, but it's a good topic to discuss!

In this case, however, the OP is likely using 1 M resistors in the input divider to minimize the current being drawn, as it is likely a battery application. Adding a "power hungry" Op-Amp might not be a good overall design approach.

For a low bandwidth measurement, such as monitoring a battery voltage, the cap across the divider is often a great approach.

JC

You can get op-amps that can be shut down (On Semi NCS2002 claims 1.9 uA, LTC6255, advertised as an ADC driver, claims 7 uA).

I don't think the high-impedance divider fed off a battery actually works, despite being advertised all over the internet (if you read carefully, all those answers appear to be speculation). The problem has nothing to do with the sampling frequency, and a parallel cap won't help; it's a DC issue. It consistently reads low for me, exactly as you would expect if the leakage is acting as a parallel path to ground and putting the divided voltage lower than expected.

The cap will charge up to the resistor divider's voltage.
The voltage will read low if you don't give it enough time to charge up.
With the large megohm resistors it can take a while.

Once the cap is charged, there will be a very small leakage current, which depends on the chemistry of the cap and its size (ceramic, aluminum electrolytic, tantalum, etc.).

The cap is not a DC parallel path to ground, unless one does a very detailed analysis of the cap including its leakage current.

Try taking serial readings with your setup and see if the divider voltage climbs, and plateaus. Or just give it a while to charge up.

JC

bkerin wrote:
a parallel cap won't help; it's a DC issue.
Nonsense.
Quote:
exactly as you would expect if the leakage is acting as a parallel path to ground and putting the divided voltage lower than expected.
You've misunderstood the meaning of Iil and Iih.

Iil is leakage current to source when the pin is logic low, and Iih is leakage current to sink when the pin is logic high.

In other words, when the pin is held low any leakage current will flow towards Vcc, and when the pin is held high any leakage current will flow towards GND. Signals with intermediate voltages might exhibit both leakage currents such that an input at Vcc/2 will see no net current flow, or they might see virtually no leakage at all.

The analog input resistance is typically 100M. This means that leakage current due to the ADC (separate from Iil and Iih) will be on the order of a couple of dozen nanoamps.

Bear in mind that the datasheet specifies that Iil and Iih have maximum values of 1 µA. Those are likely only to be seen at the device's operational temperature extremes. In practice you will find that the true leakage current is far lower.

Quote:
It consistently reads low for me,
Then you're not doing it correctly. I've used high-value (multi-megaohm) voltage dividers with output-side capacitors on AVR ADC in several apps and have never seen the effect you're reporting.

Have you calibrated for offset?
Have you disabled the input buffer for your selected ADC input via DIDRn?
Do you have a squeaky-clean analog layout on your PCB?
Do you have an inductor on AVcc?
Do you have a bypass cap on AREF?
Are you using precision resistors? Have you confirmed their resistance with a good DMM?

- Arrange a voltage source and voltage divider with whatever high-value resistors you've thus far found to result in your reported low ADC readings.
- Don't connect it to an AVR yet, and leave out the cap for now.
- Place a precision DMM on the output of the divider and get a reading.
- Now add the AVR. Make sure you've properly configured the ADC input as mentioned above, set ADEN, and configured ADMUX accordingly. Don't do any conversions yet.
- Now what does the DMM read?
- Now have the AVR do free-running conversions.
- Now what does the DMM read?
- Now use the DMM to measure current flow from the voltage divider to the AVR input pin. What does it read?

- Now add the cap and repeat all of the above. Be sure to allow it to reach equilibrium (with megaohm resistors and a 100 nF cap, at least several seconds) before each measurement.


Quote:
Have you disabled the input buffer for your selected ADC input via DIDRn?
The datasheet says it reduces power consumption, when set to 1.

Does setting this to 1 have any effect on ADC performance?

davef wrote:
The datasheet says it reduces power consumption, when set to 1.

Does setting this to 1 have any affect on ADC performance?

It doesn't reduce power consumption so much as prevent a condition whereby excessive current can flow through the input buffer.

The input buffer is basically a pair of MOSFETs in a totem-pole arrangement:

Under normal digital input conditions, the input will be either low (at or near GND) or high (at or near Vcc). When low, the low-side MOSFET turns on, tying the output low as well. When high, the high-side MOSFET turns on, tying the output high.

The input should be driven such that the time it spends between the two digital states is minimised, i.e. rise and fall times should be small. The reason is that there is a range of voltages which, when applied to the input of the buffer, will result in both MOSFETs being turned on (at least partially) at the same time. This causes current to flow directly from Vcc to GND. It's called shoot-through, and it is not desirable.

When sampling an analog voltage that spends any time near Vcc/2 on an I/O pin which also has an input buffer, shoot-through can occur. You can prevent this by disabling the input buffer on that pin altogether via DIDRn.

Shoot-through shouldn't have too large an impact on the performance of the ADC, at least not directly. I can imagine that the local heating it causes might introduce some offset, but I doubt it is by itself responsible for whatever you've observed.

Just a note: the tests I've run suggest that bits in DIDRn must be set while the PRADC bit in PRR is clear. If the ADC is powered down, setting bits in DIDRn won't disable the input buffers, even after you re-power the ADC, and even though those bits in DIDRn will read back as set.
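A minimal sketch of that ordering for an ATmega328P target (register and bit names from avr-libc; the ADC0 channel choice and prescaler are assumptions for illustration, and this fragment reflects only the behaviour described above, not a verified datasheet procedure):

```c
#include <avr/io.h>
#include <avr/power.h>

static void adc_input_init(void)
{
    /* Per the note above: make sure the ADC is powered (PRADC clear)
     * BEFORE writing DIDRn, or the buffer may not actually be disabled. */
    power_adc_enable();                 /* clears PRADC in PRR */

    DIDR0 |= (1 << ADC0D);              /* disable digital buffer on ADC0 */

    ADMUX  = (1 << REFS0);              /* AVcc reference, channel ADC0 */
    ADCSRA = (1 << ADEN)                /* enable ADC */
           | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0); /* clk/128 */
}
```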


I had not observed anything ... only the realisation that I had never even looked for this issue when using the ADC :)

Thanks for the explanation.

davef wrote:
I had not observed anything ... only the realisation that I had never even looked for this issue when using the ADC :)
Whoops, mistook you for someone else!


Quote:

Just a note: the tests I've run suggest that bits in DIDRn must be set while the PRADC bit in PRR is clear. If the ADC is powered down, setting bits in DIDRn won't disable the input buffers, even after you re-power the ADC, and even though those bits in DIDRn will read back as set.

Now >>that<< must have set you back a few hours.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

theusch wrote:
Now >>that<< must have set you back a few hours.
Surprisingly, no :)

I discovered this while running some tests on the ADC of a 328P, mostly 'for fun'... well, and to answer a question I had about S/H timing for a specific app on which I was working. My test app got 'out-of-hand' though ... and >>that<< set me back hours ;)

The conclusion re: DIDRn is based on circumstantial evidence, and I haven't confirmed it with an alternate test approach which would actually measure shoot-through current on an I/O pin under different input conditions and DIDRn states. For the time being though I make a point of wiggling bits in DIDRn 'cautiously'.

At some point I'll make a detailed post of my ADC tests and musings, and post a tidied-up test app. And probably tape a big 'kick me' sign on my back as I do ;) ... but I'll be interested to hear from real sparkies and better programmers.

... but not before 12pm PDT today...


joeymorin wrote:

bkerin wrote:

a parallel cap won't help; it's a DC issue.

Nonsense.

Well, OK, it may help, but it won't put you in spec for the part.

joeymorin wrote:

Quote:

exactly as you would expect if the leakage is acting as a parallel path to ground and putting the divided voltage lower than expected.

You've misunderstood the meaning of Iil and Iih.

I don't think so.  From the ATMega328P datasheet, section 23.6.1:

"An analog source applied to ADCn is subjected to the pin capacitance and input leakage of that pin, regardless of whether that channel is selected as input for the ADC."

Table 28-1 specifies the symbols for input leakage as IIL and IIH.

joeymorin wrote:

Iil is leakage current to source when the pin is logic low, and Iih is leakage current to sink when the pin is logic high.

In which case, you've got a parallel load besides the one in your divider, and a distorted measurement *at DC*.

joeymorin wrote:

In other words, when the pin is held low any leakage current will flow towards Vcc, and when the pin is held high any leakage current will flow towards GND. Signals with intermediate voltages might exhibit both leakage currents such that an input at Vcc/2 will see no net current flow, or they might see virtually no leakage at all.

The analog input resistance is typically 100M. This means that leakage current due to the ADC (separate from Iil and Iih) will be on the order of a couple of dozen nanoamps.

The 100M is a typical value, with neither min nor max specified.  The maximum leakages are specified for the analog comparator.  I see no similar spec for the ADC, and the above datasheet quote suggests there isn't one.

joeymorin wrote:

Bear in mind that the datasheet specifies that Iil and Iih have maximum values of 1 µA. Those are likely only to be seen at the device's operational temperature extremes. In practice you will find that the true leakage current is far lower.

Ok, but you want to assume worst case in good design, right?  My device runs at -40 degrees C...

joeymorin wrote:

Quote:

It consistently reads low for me,

Then you're not doing it correctly. I've used high-value (multi-megaohm) voltage dividers with output-side capacitors on AVR ADC in several apps and have never seen the effect you're reporting.

Have you calibrated for offset?
Have you disabled the input buffer for your selected ADC input via DIDRn?
Do you have a squeaky-clean analog layout on your PCB?
Do you have an inductor on AVcc?
Do you have a bypass cap on AREF?
Are you using precision resistors? Have you confirmed their resistance with a good DMM?

- Arrange a voltage source and voltage divider with whatever high-value resistors you've thus far found to result in your reported low ADC readings.
- Don't connect it to an AVR yet, and leave out the cap for now.
- Place a precision DMM on the output of the divider and get a reading.
- Now add the AVR. Make sure you've properly configured the ADC input as mentioned above, set ADEN, and configured ADMUX accordingly. Don't do any conversions yet.
- Now what does the DMM read?
- Now have the AVR do free-running conversions.
- Now what does the DMM read?
- Now use the DMM to measure current flow from the voltage divider to the AVR input pin. What does it read?

- Now add the cap and repeat all of the above. Be sure to allow it to reach equilibrium (with megaohm resistors and a 100 nF cap, at least several seconds) before each measurement.

Good advice.  Most of this I have done, but not offset calibration, and possibly I got the last item badly wrong by sampling many times in a row and averaging to try to beat noise.  That might have depressed the reading.  It still looks to me like a strict reading of the spec requires a buffer, however.

Last Edited: Sun. Aug 2, 2015 - 04:25 AM

'Freaks "NOTIFY" finally kicked in?

My device runs at -40 degrees C...

Indeed.  Mine only go down to -40 degrees F.
