I am looking at the possibility of using the xmega256A3 for a data acquisition device to measure a low voltage (8uV resolution, 0-15mV dynamic range) signal.
Despite all the problems and errata on the xmega, I cannot find anything that suggests this will not work in principle.
Resolution is okay with the 64x amp and the 11 bits left over for positive measurement. I also plan to oversample - a lot.
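For context, the oversampling I have in mind is the usual sum-and-decimate scheme: accumulate 4^n samples and shift right by n to gain n bits of resolution (assuming enough noise to dither the signal). A rough sketch, names mine:

```c
#include <stdint.h>

/* Decimate 4^extra_bits raw ADC samples into one higher-resolution
 * reading. With a 12-bit ADC and extra_bits = 4, this sums 256 samples
 * and yields a 16-bit result. */
uint32_t oversample_decimate(const uint16_t *samples, uint8_t extra_bits)
{
    uint32_t n = 1UL << (2 * extra_bits);   /* 4^extra_bits samples */
    uint32_t sum = 0;
    for (uint32_t i = 0; i < n; i++)
        sum += samples[i];
    return sum >> extra_bits;               /* decimate: gain extra_bits bits */
}
```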
But what about the ACCURACY of the ADC? Of concern are the values in table 34.5 of the xmega256A3 datasheet: gain error +/-10mV, offset error +/-2mV and INL +/-5 LSB.
1. What is the gain error in table 34.5, as opposed to the amp gain error in table 34.6? Is table 34.5 implying there is gain error in the ADC even when no gain stage is used?
2. If Atmel does a super duper factory calibration using precision instruments, why does appnote AVR1300 suggest using the "characterization and calibration of the ADC" procedure from appnote AVR120? Are you supposed to calibrate the ADC again?
3. Are the ratings in table 34.5 given before the factory calibration, after the factory calibration, or after the AVR120 calibration? i.e. can I improve on the table 34.5 gain and offset errors if I do the AVR120 calibration?
4. Similar question for the INL in table 34.5. Is this the value after the Atmel factory calibration? If so, some lookup table for linearity correction is probably required.
5. Finally, if the AVR120 calibration is done and a lookup table for the non-linearity derived, are the gain correction, offset correction and non-linearity characteristics constant for a given device over time and a reasonable temperature range? i.e. can I calibrate a single device once and expect the corrections to hold for the useful life of the product?
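To be concrete, the kind of per-device correction I'm imagining looks something like this. All the constants and names are placeholders I would derive from my own calibration run, not anything from the datasheet:

```c
#include <stdint.h>

#define INL_POINTS 17   /* INL breakpoints at codes 0, 256, ..., 4096 */

/* Apply assumed per-device calibration constants to a raw 12-bit code:
 * offset_lsb (additive error in LSB), gain_ppk (gain correction in
 * parts-per-1024, so 1024 = unity), then an INL correction table with
 * linear interpolation between 16 evenly spaced segments.
 * Assumes raw - offset_lsb stays non-negative. */
int32_t adc_calibrate(int32_t raw,
                      int32_t offset_lsb,
                      int32_t gain_ppk,
                      const int8_t inl_lsb[INL_POINTS])
{
    /* 1. offset then gain: v = (raw - offset) * gain / 1024 */
    int32_t v = ((raw - offset_lsb) * gain_ppk) >> 10;

    /* 2. interpolate the INL correction between neighbouring breakpoints */
    int32_t idx  = v >> 8;       /* which 256-code segment */
    int32_t frac = v & 0xFF;     /* position within the segment */
    if (idx < 0)               { idx = 0;              frac = 0;   }
    if (idx >= INL_POINTS - 1) { idx = INL_POINTS - 2; frac = 255; }
    int32_t inl = inl_lsb[idx]
                + ((inl_lsb[idx + 1] - inl_lsb[idx]) * frac) / 256;
    return v - inl;
}
```

With unity gain, zero offset and an all-zero INL table this is a no-op, which makes it easy to sanity-check before plugging in measured values.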
Thanks for any help!