Signal Conditioning for ADC - Input Impedance and Sample Time SAMD20

#1

I need to divide a DC analog voltage from a sensor or battery down to match the SAMD20 ADC input range, but I also want to minimize quiescent current. The datasheet implies that any source impedance will work so long as the ADC is given a long enough sample time.

I have two questions:

1. Can anyone clarify the maximum source impedance for the SAMD20 microcontroller ADC?

2. What is the maximum sample-and-hold time for the ADC?

Below I show calculations using the Thevenin equivalent of a resistor-divider network to determine the ADC sample-and-hold time. Does this make sense as an approach for calculating the source impedance of a resistor-divider voltage source? I am providing this analysis in case anyone else has the same questions; hopefully we can reach some consensus among other designers.

[Figure: dividing the sensor voltage down to the SAMD20 ADC input range]

VSEN is the analog output of a sensor or battery, and VIN feeds the ADC pin. RS is the source impedance of the sensor or battery; it is assumed to be insignificant compared to R1, so we ignore it. R1 and R2 form the divider network. The Thevenin equivalent circuit of the divider is given by the following formulas:

VTH = VSEN × R2 / (R1 + R2)

RTH = R1 × R2 / (R1 + R2)

Let's say I want to divide a battery with a maximum voltage of 17 V down to 3.3 V while sinking at most 10 µA. I choose R1 = 1.37 MΩ and R2 = 330 kΩ. For this circuit, RTH = RSOURCE = R1 ∥ R2 ≈ 266 kΩ, and the divider draws 17 V / 1.7 MΩ = 10 µA at full battery voltage.

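To sanity-check the divider numbers above, here is a small Python sketch (the component values are the ones chosen in the post; nothing here comes from a SAMD20-specific library):

```python
# Thevenin equivalent of the R1/R2 divider from the post
V_BATT_MAX = 17.0   # V, maximum battery voltage
R1 = 1.37e6         # ohms, top resistor
R2 = 330e3          # ohms, bottom resistor

v_th = V_BATT_MAX * R2 / (R1 + R2)   # open-circuit (Thevenin) voltage at the tap
r_th = (R1 * R2) / (R1 + R2)         # Thevenin source impedance seen by the ADC
i_q = V_BATT_MAX / (R1 + R2)         # quiescent current drawn from the battery

print(f"VTH = {v_th:.2f} V, RTH = {r_th / 1e3:.0f} kOhm, IQ = {i_q * 1e6:.1f} uA")
```

At the 17 V maximum this gives 3.30 V at the tap, RTH ≈ 266 kΩ, and exactly the 10 µA quiescent budget.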
On page 567, RSAMPLE = 3.5 kΩ and CSAMPLE = 3.5 pF. On page 571, the minimum sample time for a given source impedance (to charge the sampling capacitor to within ½ LSB at n bits of resolution) is calculated as follows:

tSAMPLEHOLD ≥ (RSAMPLE + RSOURCE) × CSAMPLE × (n + 1) × ln(2)

For full 12-bit resolution (n = 12), my tSAMPLEHOLD comes out to 8.50 µs. That doesn't seem unreasonable at all, though leakage current could still affect the system when optimizing for low power consumption.

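The 8.50 µs figure can be reproduced from the datasheet constants and the 266 kΩ Thevenin impedance with a minimal Python check:

```python
import math

R_SAMPLE = 3.5e3    # ohms, ADC internal sampling resistance (datasheet p. 567)
C_SAMPLE = 3.5e-12  # F, ADC sampling capacitance (datasheet p. 567)
R_SOURCE = 266e3    # ohms, Thevenin impedance of the divider
n = 12              # bits of resolution

# Minimum sample-and-hold time for the sampling cap to settle within 1/2 LSB
t_samplehold = (R_SAMPLE + R_SOURCE) * C_SAMPLE * (n + 1) * math.log(2)
print(f"t_samplehold = {t_samplehold * 1e6:.2f} us")  # ~8.50 us
```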
On page 562, the datasheet gives a typical input leakage current of 0.015 µA for a regular I/O pin, but it can max out at around 1 µA. Should this leakage current be used when calculating the maximum source impedance?

If the ADC pin sinks 1 µA of leakage current, VIN would see a voltage drop of about 0.27 V across RTH, around 8% of the full range (3.3 V). To stay in bounds, the added error must be less than 0.1% of the full input range for my project, yet I cannot relax the 10 µA quiescent budget by an order of magnitude to shrink RTH. I'm between a rock and a hard place, though perhaps relying on the "typical" leakage figure will be good enough.

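The leakage-induced error follows directly from Ohm's law across the Thevenin impedance; a quick Python check of both the worst-case and typical leakage figures from the post:

```python
R_TH = 266e3    # ohms, Thevenin impedance of the divider
V_RANGE = 3.3   # V, full ADC input range

for i_leak in (1e-6, 0.015e-6):   # worst-case and typical leakage (datasheet p. 562)
    drop = i_leak * R_TH          # voltage error at the pin
    pct = 100 * drop / V_RANGE    # error as a percentage of full range
    print(f"leakage {i_leak * 1e6:g} uA -> drop {drop * 1e3:.1f} mV ({pct:.2f}% of range)")
```

The worst-case 1 µA gives about 0.27 V (roughly 8% of range); the typical 0.015 µA gives about 4 mV, which at roughly 0.12% is still slightly above the 0.1% error target.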
I am using the SAMD20 datasheet revision DS60001504C:

http://ww1.microchip.com/downloads/en/DeviceDoc/SAM_D20_%20Family_Datasheet_DS60001504C.pdf

 

Thank you,

 biobuilder

Last Edited: Wed. Feb 12, 2020 - 02:12 AM
#2

Put a capacitor on the input. If its value is significantly larger than CSAMPLE, the problem disappears.

#3

Very true. The capacitor would form a low-pass filter and stabilize the DC voltage. It must be sized small enough that it charges up reasonably quickly after the sensor turns on.
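
To sketch that sizing trade-off (the 100 nF value is an assumed example, not from the thread): an external capacitor charges through the divider's Thevenin impedance toward VTH, so the turn-on settling time to within ½ LSB at 12 bits is roughly:

```python
import math

R_TH = 266e3    # ohms, Thevenin impedance of the divider
C_EXT = 100e-9  # F, assumed example external capacitor (>> 3.5 pF C_SAMPLE)
n = 12          # bits of resolution

# Time for the external cap, charging through R_TH, to settle within 1/2 LSB
t_settle = R_TH * C_EXT * (n + 1) * math.log(2)
print(f"t_settle = {t_settle * 1e3:.0f} ms")  # ~240 ms
```

So a larger capacitor buys a stiffer source for the ADC at the cost of a longer wait after power-up, which is exactly the trade-off described above.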

Last Edited: Wed. Feb 12, 2020 - 05:11 PM