ATmega168PA Internal Temperature Sensor ... Variability

#1

There are about thirty posts in the forums regarding the internal temperature sensors in AVR processors. It's obviously a topic that has caused considerable pain and anguish in the past, and I have no desire to 'rake over the coals' again, so I'll keep this as specific and single-dimensioned as I can.

We have a volume production unit ... it's for the UK market and is fine. It's a radio-based product and we want to export it to the States. The FCC only approves the transmitter it's used with there down to -10 degrees Celsius.

The plan is/was to use the internal temperature sensor as a crude transmitter inhibit when the temperature dropped to the region of -7 or so.

The code is in place; the unit already uses analogue channels with the internal bandgap reference, so it's no great deal to read the internal channel for the temperature sensor. But the values we get back are way off from the typical values in the datasheet and vary greatly from processor to processor.
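
For reference, the read path is essentially the obvious single-conversion sequence. A minimal sketch of the shape of it (not the production code; the clock and prescaler figures here are illustrative):

#define F_CPU 1000000UL          /* illustrative; the real clock differs */
#include <avr/io.h>
#include <util/delay.h>

static uint16_t read_temp_raw(void)
{
    /* REFS1:0 = 11 -> internal 1.1 V bandgap, MUX3:0 = 1000 -> temp sensor */
    ADMUX  = (1 << REFS1) | (1 << REFS0) | (1 << MUX3);
    ADCSRA = (1 << ADEN) | (1 << ADPS1) | (1 << ADPS0); /* /8 -> 125 kHz at 1 MHz */
    _delay_us(500);                       /* let the reference and mux settle */

    ADCSRA |= (1 << ADSC);                /* start a single conversion */
    while (ADCSRA & (1 << ADSC))
        ;                                 /* conversion takes ~13 ADC clocks */
    return ADC;                           /* 10-bit result */
}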

It's not the +/- 10 degrees, or quantization error, or the bandgap reference ... it's like 60 degrees off on some units.

We've characterized a few of the units and the reading-versus-temperature relationship is pretty linear (typically 1.23 mV per degree), but even the gradient of the line seems to vary from unit to unit.

My specific question is this:

Has anyone had a reasonably sized batch of 168s that did perform as the datasheet says, i.e. 314 mV +/- 10 mV at 25 degrees?

(To put it into context, we're getting 350 mV at 20 degrees instead of the 309-ish that we would expect.)

We can calibrate the units at room temperature as the last stage of the production/test process, but the range of variability and the unpredictability of the line gradient is killing us on this one. We can't afford to do two-point calibration, and with the current variability a single-point calibration won't cut it. (Single-point here means offset-only, as sketched below.)
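
A sketch of the offset-only idea, assuming the datasheet-typical slope of ~1 mV per degree (with the 1.1 V reference that's ~1.074 mV/LSB, so each raw count is worth roughly 1.07 degrees; the names and stored values here are illustrative):

#include <stdint.h>

static uint16_t cal_raw_room;   /* raw ADC count captured on the bench     */
static int8_t   t_room = 20;    /* bench temperature at capture time, degC */

static int16_t temp_degC(uint16_t raw)
{
    int32_t delta = (int32_t)raw - cal_raw_room;
    return (int16_t)(t_room + (delta * 107) / 100);  /* ~1.07 degC per LSB */
}

The 107/100 is exactly the constant that stops being trustworthy once the gradient itself moves from unit to unit.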

#2

I've got one app with a smaller member of the same family, and the internal temperature sensor is "close enough" without calibration.

The BG variance will indeed cause big swings. Note that 350 versus 315 is roughly 10%.
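
To put numbers on that scaling (a toy sketch in plain C, with the bandgap deliberately set at the low datasheet edge):

#include <stdio.h>

int main(void)
{
    /* The firmware computes V_est = raw * assumed_bg / 1024, but the ADC
       produced raw = V_in / actual_bg * 1024, so V_est scales by
       assumed_bg / actual_bg: a 10% bandgap error shifts every reported
       millivolt figure by 10%. */
    double v_in = 0.315, actual_bg = 1.0, assumed_bg = 1.1;
    double raw = v_in / actual_bg * 1024.0;
    printf("reported: %.0f mV\n", raw * assumed_bg / 1024.0 * 1000.0);
    return 0;   /* prints ~346 mV for a true 315 mV */
}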

I don't see how you are going to use the internal temperature sensor for very cold temperature readings unless you only run the AVR very intermittently and deep-sleep for long periods. There is going to be quite a bit of self-heating.

Do you get the same results if you make several readings? 10? 100? Is this after you've switched to the BG reference from a higher reference? It can take many conversions for the reference to "die back" to the BG level.
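
One common pattern for the settling question (a sketch; the discard count of 8 is an arbitrary guess):

#include <avr/io.h>

static void adc_settle(void)
{
    /* After changing ADMUX, burn a few conversions so the reference can
       settle before keeping a result. */
    ADMUX = (1 << REFS1) | (1 << REFS0) | (1 << MUX3);
    for (uint8_t i = 0; i < 8; i++) {
        ADCSRA |= (1 << ADSC);
        while (ADCSRA & (1 << ADSC))
            ;
        (void)ADC;              /* dummy read, result thrown away */
    }
}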

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

#3

Lee,

Interesting ideas ...

BG variance ... the unit that reads 350 mV at 20 degrees has a BG of 1.098 V, pretty close to the 1.100 that I'm using in the calculation sequence to work out the temperature, and pretty close to the specified 1.1 V in the datasheet. Although the datasheet suggests a range of 1.0-1.2 V for the BG, my experience so far suggests they are usually much more consistent than that, and the temperature curves suggest a swing of only 1.129 V at 25 down to 1.119 V at -40 (under 1% over the whole range, nowhere near enough to explain the error we see). So we don't think that reading 350 instead of 315 at room temperature can be caused by variation in the BG.

We always use the BG as the reference for all readings ... we've tried switching to AVcc, but that doesn't produce different results (and AVcc is specifically proscribed for the internal temperature sensor anyway). So it can't be reference-switching time that's the problem.

Also, we've tried reading lots of times ... the current method is to set up the ADC and the multiplexer, wait 500 us (we tried 10 ms but it made no difference), and then read 128 times using an exponential filtering algorithm (shape sketched below). It's a lot of overkill, to be honest; reading just once after the 500 us gives much the same result, but we were grasping at straws.
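
The filter itself is nothing exotic, just a fixed-point exponential moving average along these lines (the 1/8 weight is illustrative, and read_temp_raw() stands in for the single-conversion helper):

#include <stdint.h>

extern uint16_t read_temp_raw(void);    /* one conversion, as sketched above */

static uint16_t read_temp_filtered(void)
{
    int32_t filt = read_temp_raw();     /* seed with the first sample */
    for (uint8_t i = 0; i < 128; i++) {
        int32_t sample = read_temp_raw();
        filt += (sample - filt) / 8;    /* EMA with weight 1/8 */
    }
    return (uint16_t)filt;
}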

On the error front ... the datasheet quotes +/- 10 degrees rather than +/- 10%, and 350 is over 13% higher than the expected value of 309.

On the self-heating front ... the application does spend most of its time asleep, runs at a slow clock rate, draws very little current (200 uA for the whole board) and drives no significant loads. In addition, we've actually characterized a couple of units right down to -20 and the result is very linear, with no sign of self-heating effects at the lower end.

Tis a mystery ...

When you say close enough, are you saying that you had a batch that actually gave you the numbers that you would expect from reading the data sheet?

#4

Quote:

When you say close enough, are you saying that you had a batch that actually gave you the numbers that you would expect from reading the data sheet?


Yes, but it wasn't critical. We take a battery-voltage reading under load for a low-battery indicator, and we found the readings had to be adjusted at cold temperatures, so we used the onboard sensor. (As in your app, it slept a lot, so we assumed the AVR was at the battery temperature.)

I didn't do the code so I don't know much more.

[edit] Oh, yeah, it was a P not a PA...

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.