Magic way to get 12 bits from 10


I told this guy I know "you can't use the AVR A/D because it only has 10 bits" and he said "no prob, I just use decimation to get 12 bits". Sounded like snake oil to me, but evidently it's sort of like averaging: two 10-bit readings added up (but not divided) give an 11-bit reading, so 4 readings give 12 bits. I think the secret is you need to make sure the signal isn't moving during the 4 readings, so it might be useful for slow-moving signals etc. Anyone think this will work, or is it really snake oil?

Imagecraft compiler user
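The summing scheme described above can be sketched in a few lines. This is a minimal illustration, not the AVR register-level code: `adc_read10()` is a hypothetical stand-in for a real 10-bit ADC read, here simulated with canned samples that model a steady input with just enough noise to toggle the LSB.

```c
#include <assert.h>
#include <stdint.h>

/* Simulated 10-bit ADC (0..1023): cycles through canned samples that
   model a steady input of about 511.25 LSB with the LSB toggling.
   On real hardware this would read the ADC data register instead. */
static uint16_t adc_read10(void)
{
    static const uint16_t samples[4] = { 511, 511, 511, 512 };
    static uint8_t i = 0;
    return samples[i++ & 3];
}

/* Sum four 10-bit readings without dividing: the sum spans 0..4092,
   a 12-bit-wide result, as described in the post. */
static uint16_t adc_read12(void)
{
    uint16_t sum = 0;
    for (uint8_t n = 0; n < 4; n++)
        sum += adc_read10();
    return sum;
}
```

For the canned samples above, the 12-bit result is 511 + 511 + 511 + 512 = 2045, i.e. the quarter-LSB the 10-bit reading would have lost.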


Hi Bob,

It does work and works quite well.

One caveat is that the signal does need to be changing - sort of. You have to have enough noise on it to toggle the LSB at a minimum; otherwise you are just adding up identical numbers.

This was a trick used a lot in instrumentation circles to increase the effective number of bits of an ADC, back when high-resolution converters weren't as cheap as they are now.

But what I think you were saying is also correct - your signal can't be drifting or that screws things up too. So you need noise but not drift. ;)

Please note - this post may not present all information available on a subject.


Are you referring to dithering - adding some random noise and then averaging the quantised signal? It does indeed work well if done properly.


Yep - I didn't know it by that name, just the technique. And deliberately adding noise only matters if the inherent noise is less than your LSB; in that case plain averaging is just a waste of time, as you always get the same number.

Also, I hadn't specifically mentioned adding noise to get above the LSB - just that there has to be enough noise that the LSB toggles. But the technique of deliberately adding random noise to a signal to get more bits when averaging is real too - as you already know.

All too many years ago in an electronics class we did an experiment that illustrated this very thing. It was very cool, and at the time counterintuitive.



Hi!

There is of course a catch here... It is true that you get higher resolution by averaging several samples if there is enough (white) noise present. However, the quantization steps are not exactly the same size for all 2^10 codes (due to DNL, differential non-linearity), so you don't get much better absolute accuracy - just much smoother accuracy errors :(

Anyway, I used this technique myself with a 12-bit external ADC, averaging 64 samples to get 16 bits. The noise added to get the "extra resolution" will still be present in the averaged signal, but reduced by a factor of SQRT(64) = 8. So don't add too much noise - a few LSB (at the real ADC resolution) is enough.

/Björn


The idea is pretty easy to understand. If L is the smallest step of your A/D and the actual input sits, say, a quarter of an LSB (L/4) above a code, then with noise present the reading will come out one LSB higher in roughly 1 out of 4 measurements.

Is my explanation easy to understand?

If you don't know my whole story, keep your mouth shut.

If you know my whole story, you're an accomplice. Keep your mouth shut. 


This technique has a pretty good pedigree; a number of 'scopes employ the general principle to 'fake' extra resolution (Agilent and LeCroy included) if you're using a 'slow' timebase.

But most 'scope inputs don't need to worry too much about whether there's enough noise present; 60MHz+ of bandwidth tends to ensure that noise is a problem, not a solution.

Regards,

Colin