I've been digging into the finer details of using the XMEGA ADC. At this point I'm looking at the factory "calibration bytes" and am wondering if anyone knows exactly what the significance of the cal data is. The claim is that it corrects offset and gain error somehow, but the datasheets and manuals do not specify how. My tests suggest the cal data has a very minor influence: the difference between loading 0x0000 and 0x0FFF into the CAL register is only about a two or three count shift in the result. That seems somewhat pointless when you would already be compensating for gain and offset in your application code and/or hardware. Interestingly, the factory cal data is the same for several different chips I've checked.
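
For reference, this is the usual pattern I use to copy the factory cal bytes from the production signature row into the ADC (a sketch based on avr-gcc/avr-libc; the signature-row field names `ADCACAL0`/`ADCACAL1` come from the XMEGA device headers, so verify them against your specific part's header and datasheet):

```c
#include <avr/io.h>
#include <avr/pgmspace.h>
#include <stddef.h>

/* Read one byte from the production signature (calibration) row.
   The row is mapped into program space while NVM.CMD selects the
   calibration-row read command, so a plain LPM (pgm_read_byte) works. */
static uint8_t read_cal_byte(uint8_t index)
{
    uint8_t result;
    NVM.CMD = NVM_CMD_READ_CALIB_ROW_gc;   /* select calibration row */
    result = pgm_read_byte(index);
    NVM.CMD = NVM_CMD_NO_OPERATION_gc;     /* restore default NVM command */
    return result;
}

/* Load the factory ADCA calibration into the 16-bit CAL register
   (split into CALL/CALH). Call this once during ADC init. */
void adca_load_factory_cal(void)
{
    ADCA.CALL = read_cal_byte(offsetof(NVM_PROD_SIGNATURES_t, ADCACAL0));
    ADCA.CALH = read_cal_byte(offsetof(NVM_PROD_SIGNATURES_t, ADCACAL1));
}
```

In my experiments I simply replaced the two `read_cal_byte` values with constants like 0x00/0x00 and 0xFF/0x0F to compare results, which is where the two-to-three-count difference came from.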