## Maths


Two topics in one day! Tonight I will sleep like a baby.

I quite like maths; I do a lot of modelling with differential equations and suchlike.

I also use Matlab quite a lot to model systems with complex equations (pardon the pun!), and the basic fitting is very useful for reducing complicated systems to something much simpler.

Look at the attached pic

I have fitted a 5th-degree polynomial, which is higher order than I need, but it's a good demonstration: it maps the ADC reading on the x axis to an output on the y axis.

So I can use the ADC reading directly to see what a value might be. All good so far.

My question is how best to deal with equations like this and how I should program them. I need to be aware of overflow errors, and at the minute I just don't know the best way to program this sort of thing.

Here is a separate equation, from a different process:

```data = Read_ADC(Channel);
volatile double value = ((-6.732 * (float)data * (float)data * (float)data) / 10000000.0)
                        + 0.0006775 * ((float)data * (float)data)
                        - 0.367813 * (float)data
                        + 134.01;```

So I read an ADC value whose maximum is 1023, hence I have to be careful about overflow, as 1023^3.2 is almost 2^32.

So I am interested to see how to code the equation in the screenshot: how to raise a number to a power, how to divide by large numbers like 10^9, etc.

how I should deal with this stuff

One day I will get around to avoiding floating point, but for my work I am doubtful I can even do it.

I just don't see how I can calculate sin/cos etc. without float, but that's a discussion for another day.

## Attachment(s): Thermistor.png

In many cases, the range of solutions is finite, e.g. the ADC will only give 1024 possible results, a stepper motor can only achieve a given maximum speed, or the PWM will only accept 256 values. Thus you can do tricks like precalculating a table, or using a smaller table plus interpolation. An integer source maps to an integer result. There are other techniques that can be more computationally efficient, such as using CORDIC for transcendental functions; this is what HP used in their calculators.

A common technique for avoiding floating point is to use fixed point math - premultiply your inputs to get extra implied decimal (or radix) points. FP is handy for dealing with both very small and very large values.
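A minimal sketch of that premultiply idea, using a Q8.8 layout (the typedef and helper names are my own illustrative choices, not from any library):

```c
#include <stdint.h>

/* Q8.8 fixed point: the real value times 256, stored in an int16_t.
 * Names here are illustrative only. */
typedef int16_t q8_8;

#define Q8_8_ONE 256

static inline q8_8 q8_8_from_int(int16_t v) { return (q8_8)(v * 256); }

static inline q8_8 q8_8_mul(q8_8 a, q8_8 b)
{
    /* widen to 32 bits so the intermediate product cannot overflow,
       then shift the doubled 2^8 scale factor back out */
    return (q8_8)(((int32_t)a * (int32_t)b) >> 8);
}

static inline int16_t q8_8_to_int(q8_8 v) { return (int16_t)(v / 256); }
```

So 1.5 is stored as 384 and 2.25 as 576, and q8_8_mul(384, 576) gives 864, which is 3.375 in Q8.8.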

Quote:

In many cases, the range of solutions is finite eg: the adc will only give 1024 possible results,...

Indeed. Even with your well-fitted equation and a super-clean analog subsystem, the precision over a 150-degree range will only be useful to some fraction of one degree.

I thought that curve looked familiar, and indeed the picture is titled "thermistor". I've got a spreadsheet that I use to fiddle with the bias resistor value to pull the curve as close to linear as I can get it over my range of interest.

If the result with a linear fitted curve, y=mx+b, is close enough (say, +/-1 degree) over my area of interest, then I just use that.

If not, I usually use a small table of values and linear interpolate between them. That gives me good enough results, given the 10-bit nature of the input value (when perfect) and the needs of the application.

Either way is much faster and uses less code space than the floating point for your equation.
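A bare-bones sketch of that small-table-plus-interpolation idea (the table values below are invented purely for illustration; a real table would come from the linearisation spreadsheet):

```c
#include <stdint.h>

/* Hypothetical 5-entry table: ADC counts (descending, NTC-style) mapped
 * to temperature in tenths of a degree C.  Values are made up. */
static const uint16_t adc_pts[5]  = { 900, 700, 500, 300, 100 };
static const int16_t  temp_pts[5] = { -200, 0, 250, 500, 800 };

int16_t adc_to_temp(uint16_t adc)
{
    uint8_t i;
    if (adc >= adc_pts[0]) return temp_pts[0];   /* clamp off both ends */
    if (adc <= adc_pts[4]) return temp_pts[4];
    for (i = 0; adc < adc_pts[i + 1]; i++)       /* find the bracketing slot */
        ;
    /* linear interpolation between point i and point i+1 */
    return temp_pts[i] + (int32_t)(adc_pts[i] - adc)
                         * (temp_pts[i + 1] - temp_pts[i])
                         / (adc_pts[i] - adc_pts[i + 1]);
}
```

An ADC reading of 600 falls halfway between the 700 and 500 entries, so this returns 125, i.e. 12.5 degrees.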

Some here use the Steinhart-Hart equation in their AVR apps. I'm not good enough with my math to tell you what order that equation might be, with the ln() in there.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

Quote:

hence I have to be careful about overflow as 1023^3.2 is almost 2^32

Sorry but what makes you say that? You haven't shown what type "data" is (I sort of assume uint16_t as it's most appropriate for an ADC reading) but it hardly matters because at the point of usage you cast it up to float anyway. So where does your 2^32 limit come into this?
Quote:

I just don't see how I can calculate sin/cos etc without float but thats a discussion for another day

Well in principle:

```float sines[] = { 0, 0.174, 0.342, 0.5, 0.643, 0.766, 0.866, 0.940, 0.985, 1.0 };

float sin_val = sines[(int)angle_deg / 10];```

As I've used steps of 10 it's very "coarse". So say you used:

`sin_val = sines[(int)45 / 10];`

it would get 0.643, not 0.707, as 45/10 in "int" is 4, so it just takes entry 4 from the table. I guess you could "interpolate" between the 40- and 50-degree values: halfway between 0.643 and 0.766 is 0.7045.

The more room you have for samples, the more accurate it gets; that's the general idea. It removes the need for runtime calculation of sin() but eats flash space for the table, so it's a tricky call as to which is "better". I guess it depends on whether size or speed is more important?

Oh, and to drop float altogether:

`uint16_t sines[] = { 0, 174, 342, 500, 643, 766, 866, 940, 985, 1000 }; `

They're obviously the actual sin() values * 1000, so at some point you are going to need to divide by 1000 to compensate.
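Combining the scaled-integer table with the interpolation suggestion might look like this (the function name and step handling are my own choices; the table holds sin*1000 at 10-degree intervals):

```c
#include <stdint.h>

/* sin(x) * 1000 every 10 degrees, 0..90 */
static const uint16_t sines[] =
    { 0, 174, 342, 500, 643, 766, 866, 940, 985, 1000 };

/* whole degrees 0..90 in, sin*1000 out, with linear interpolation
   between the two surrounding table entries */
uint16_t sin_milli(uint8_t deg)
{
    uint8_t idx  = deg / 10;   /* which table slot we start from */
    uint8_t frac = deg % 10;   /* how far into the 10-degree step we are */
    if (idx >= 9)
        return 1000;
    return sines[idx] + ((sines[idx + 1] - sines[idx]) * frac) / 10;
}
```

sin_milli(45) then gives 704, close to the true 707, instead of the raw table's 643.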

To evaluate the polynomial you can use Horner's method, something like this:

```const float polynomial[] = {
    -1.203e-12, 3.279e-9, -3.598e-6, 0.002009, -0.6809, 146.8
};

// Evaluate a polynomial at point x with Horner's method.
// polynomial[] contains coefficients in decreasing order of power.
float polyval(const float polynomial[], uint8_t poly_len, float x)
{
    uint8_t i;
    float b = polynomial[0];
    for (i = 1; i < poly_len; i++)
        b = b * x + polynomial[i];
    return b;
}
```
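For anyone who wants to try that routine on a PC first, here is a self-contained version together with the coefficients above (the accumulator is seeded explicitly from the first, highest-order coefficient):

```c
#include <stdint.h>

static const float thermistor_poly[] = {
    -1.203e-12f, 3.279e-9f, -3.598e-6f, 0.002009f, -0.6809f, 146.8f
};

/* Horner's method: start from the highest-order coefficient and do one
   multiply-add per remaining coefficient. */
float polyval(const float p[], uint8_t len, float x)
{
    float acc = p[0];
    uint8_t i;
    for (i = 1; i < len; i++)
        acc = acc * x + p[i];
    return acc;
}
```

A call such as `polyval(thermistor_poly, 6, (float)adc)` then replaces the written-out polynomial.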

Kartman wrote:
In many cases, the range of solutions is finite, e.g. the ADC will only give 1024 possible results, a stepper motor can only achieve a given maximum speed, or the PWM will only accept 256 values. Thus you can do tricks like precalculating a table, or using a smaller table plus interpolation. An integer source maps to an integer result. There are other techniques that can be more computationally efficient, such as using CORDIC for transcendental functions; this is what HP used in their calculators.

A common technique for avoiding floating point is to use fixed point math - premultiply your inputs to get extra implied decimal (or radix) points. FP is handy for dealing with both very small and very large values.

I understand the limits of PWM registers, but it's the limits of maths on an AVR that I haven't fully got to grips with.

Interpolation is something I have never actually done, but I know what it is and how it works; it's simple enough. CORDIC, though, isn't something I had heard of before, and looking at the wiki it's right up my street! Thank you for this; I don't fully understand it yet, but it looks neat.
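Since CORDIC has come up: here is a rough sketch of rotation-mode CORDIC computing sin and cos with nothing but shifts, adds, and a small arctangent table. The Q16.16 scaling, iteration count, and names are my own choices for illustration, not from any particular implementation.

```c
#include <stdint.h>

/* CORDIC in rotation mode for angles in [-pi/2, pi/2].
 * All values are Q16.16 fixed point (real value * 65536). */

#define CORDIC_ITERS 16

/* atan(2^-i) in Q16.16 radians, i = 0..15 */
static const int32_t atan_tab[CORDIC_ITERS] = {
    51472, 30386, 16055, 8150, 4091, 2047, 1024, 512,
    256, 128, 64, 32, 16, 8, 4, 2
};

/* 1/K = product of cos(atan(2^-i)) ~= 0.607253, in Q16.16 */
#define CORDIC_GAIN_INV 39797

void cordic_sincos(int32_t angle, int32_t *s, int32_t *c)
{
    int32_t x = CORDIC_GAIN_INV, y = 0, z = angle;
    int i;
    for (i = 0; i < CORDIC_ITERS; i++) {
        int32_t xs = x >> i, ys = y >> i;
        if (z >= 0) { x -= ys; y += xs; z -= atan_tab[i]; }
        else        { x += ys; y -= xs; z += atan_tab[i]; }
    }
    *c = x;  /* cos(angle) in Q16.16 */
    *s = y;  /* sin(angle) in Q16.16 */
}
```

Each step rotates by plus or minus atan(2^-i) to drive the residual angle z to zero; starting x at 1/K pre-compensates the known CORDIC gain, so no multiplies are needed at all.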

Quote:

Indeed. Even with your well-fitted equation and a super-clean analog subsystem the precision over a 150-degree range will be only useful to some fraction of one degree.

Nothing in the real world is completely accurate; I am well aware of this. Hardware limitations aside, the real world is an extremely complicated system with an unlistable number of errors and deviations.

The modelling of engineering systems, as I am sure you know, is mostly based on linear time-invariant approximations, which can never be truly accurate, as nothing is ever totally linear.

In other words:

The coefficients of differential equations are never constants; they are in fact functions of other variables, so nothing is linear time-invariant and everything should really be modelled as a partial differential equation. Ordinary differential equations are just looking at a slice of the picture.

I mean, even the speed of light is finite, so everything you look at might not even be there anymore!

I work for a mechanical engineering company with a fair bit of chemical engineering and process control, so I come into contact with some complicated things, even things for which models don't exist. I have spent the last ten years studying maths as an aside to all the electrical stuff, and I still haven't had enough.

I get the chance to build hardware to try out some really interesting things where time periods are a lot longer than electrical engineers usually deal with, so the 8-bit AVR is the weapon of choice, at least for me.

Quote:
I thought that curve looked familiar, and indeed the picture is titled "thermistor". I've got a spreadsheet that I use to fiddle with the bias resistor value to pull the curve as close to linear as I can get it over my range of interest.

You are indeed right: this example is a thermistor, and I use Matlab to linearise the bridge just like you do, but it's just a nice example to present. I know there are some excellent methods for this job, but the point of this thread for me is to get better at doing maths on a micro; there are much more complicated curves I want to work with.

Quote:

If the result with a linear fitted curve, y=mx+b, is close enough (say, +/-1 degree) over my area of interest, then I just use that.

If not, I usually use a small table of values and linear interpolate between them. That gives me good enough results, given the 10-bit nature of the input value (when perfect) and the needs of the application.

+/-1 C is accurate enough for most things. However, a straight line will only approximate something as nonlinear as a thermistor across a narrow range. I have no argument that your way is perhaps the best way, but you are in a different position to me, as I have so much to learn, and everyone has to learn to walk before they can run. There's no doubt I will be starting threads about avoiding floats in the near future, as it's something I need to know how to do; it's another maths trick I've never tried, so there's something to learn, but I am not ready for that just yet.

Quote:
Either way is much faster and less code space than the floating-point for your equation.

Some here use the Steinhart-Hart equation in their AVR apps. I'm not good enough with my math to tell you what order that equation might be, with the ln() in there

Speed and code space are two things that do not trouble me (at the minute); a 1 MHz RC oscillator can do some serious work when time periods are measured in seconds. I could fit a 16 MHz crystal to speed things up should I ever hit timing issues, without touching the code; not that I think that's a good solution, but it's an option.

Code space is plentiful for me; my programs are simple.

Program memory 25.6%
Data memory 8.4%

That's one of my larger programs, and that's with printf and floats everywhere!

Surely if I have lots of space then using floats makes my program much easier to follow than neat programming tricks would; given the amount I don't know, it wouldn't make sense right now to complicate things.

I use the Steinhart equation a lot, and it can be modelled very accurately with a cubic equation; I have the plot on my laptop and will post it up shortly.

However, I am informed by the manufacturers of the thermistors I use that the Steinhart equation is only accurate across the 0-50 C range, and I need to measure from -40 C up to 80 C, so the polynomial is the simplest way for me to keep as much accuracy as is realistic.

Quote:

Sorry but what makes you say that? You haven't shown what type "data" is (I sort of assume uint16_t as it's most appropriate for an ADC reading) but it hardly matters because at the point of usage you cast it up to float anyway. So where does your 2^32 limit come into this?

Clawson, it was badly worded of me. What I was trying to explain is that, using an ADC value that can be as large as 1023 in an equation with high orders, at some point I will run into trouble: 1023^3 is almost up to 32 bits, and 1023^4 will overflow a 32-bit value. I need to know the limits. You know, compared to you guys I don't know anything about C, but I am keen, really keen in fact.

I can see a gaping hole in my knowledge, as I need to know how to deal with these things. I mean, I am such a noob I don't even know for certain what the biggest number I can use on an 8-bit AVR is!

Quote:
Exact-width integer types

Integer types having exactly the specified width
typedef signed char int8_t
typedef unsigned char uint8_t
typedef signed int int16_t
typedef unsigned int uint16_t
typedef signed long int int32_t
typedef unsigned long int uint32_t
typedef signed long long int int64_t
typedef unsigned long long int uint64_t

So it's looking like a long long: 64 bits.

1023^6.400902252 = 2^64

So if I had an equation with 1023^7 in there, I would overflow, and the whole goal of this thread is to learn how to get smarter when dealing with these high-order equations.

I might have

`x=1023^7/1000000`

So I could break this down into parts

`x=((1023^3)/1000)*((1023^4)/1000)`
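In floating point the staging isn't actually needed, because only the exponent grows. A small sketch of the same calculation (the function name is mine); note that 1023^7 is about 1.17e21, larger than even a uint64_t can hold (about 1.8e19), yet a double carries it without fuss:

```c
#include <stdint.h>

/* Compute base^n / divisor entirely in double.  Interim products can
 * exceed 2^64 because a double stores a mantissa plus a separate
 * exponent, so only precision (not magnitude) is at stake here. */
double scaled_power(double base, int n, double divisor)
{
    double r = 1.0;
    int i;
    for (i = 0; i < n; i++)
        r *= base;          /* may exceed any integer type; fine in FP */
    return r / divisor;
}
```

scaled_power(1023.0, 7, 1e6) gives about 1.1725e15 directly. (On avr-gcc, double is actually 32-bit, so the magnitude still fits but precision drops to a 24-bit mantissa.)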

Quote:
it hardly matters because at the point of usage you cast it up to float anyway. So where does your 2^32 limit come into this

I don't see how casting to a float makes a difference, and googling has left me wondering whether I even know my own name.

What's the smallest number I can possibly use on an ATmega328?

The whole reason for this thread was to see how you guys would write a function to deal with the equations I have presented: how do you break them down? I just want to see it!

Last Edited: Fri. Aug 8, 2014 - 05:37 PM

Lee, here's the Steinhart model.

Look at how nicely a cubic equation fits.

snigelen wrote:
To evaluate the polynomial you can use Horner's method, something like this:

```const float polynomial[] = {
    -1.203e-12, 3.279e-9, -3.598e-6, 0.002009, -0.6809, 146.8
};

// Evaluate a polynomial at point x with Horner's method.
// polynomial[] contains coefficients in decreasing order of power.
float polyval(const float polynomial[], uint8_t poly_len, float x)
{
    uint8_t i;
    float b = polynomial[0];
    for (i = 1; i < poly_len; i++)
        b = b * x + polynomial[i];
    return b;
}
```

I like the look of this. I can see it's evaluating each term at a specific point and summing the result, but I need to sit down and use it so it becomes clear.

I like it, thanks.

## Attachment(s): Steinhart.png

Quote:

a straight line will only approximate something as nonlinear as a thermistor across a narrow range

It depends on what the definition of "narrow" is to you. And you don't seem to have grasped what I said about using a bias resistor to flatten the curve.

I also don't get this comment:

Quote:

I get the chance to build hardware to try out some really interesting things where time periods are a lot longer than electrical engineers usually have so the 8bit AVR is the weapon of choice, at least for me

What "time periods"? And how long are they? Where do you think AVRs are applied? While indeed an electrical engineer might design the boards and/or do the programming, the application might well be for an ice-cream machine or dairy plant cleaning or agricultural chemical mixing or ...

"Limits of math on an AVR"? What limits? Pick a toolchain with the data types needed. Write the C statements. Run them on a PC or an AVR. Do you expect the results will differ? If so, why?

Quote:

So if I had an equation with 1023^7 in there


Quote:

Clawson, it was badly worded of me. What I was trying to explain is that, using an ADC value that can be as large as 1023 in an equation with high orders, at some point I will run into trouble: 1023^3 is almost up to 32 bits, and 1023^4 will overflow a 32-bit value. I need to know the limits. You know, compared to you guys I don't know anything about C, but I am keen, really keen in fact.

You completely missed the point I was making. You are dealing with float, not 32-bit ints, so you are not limited to 2^32 as an interim calculation result. In fact, float can represent up to about:

`#define __FLT_MAX__ 3.40282347e+38F`

which is considerably larger than 2^32.

Horner's rule is the traditional method for evaluating polynomials.
If it is fast enough, that is probably the way to go.

A float is 4 bytes, so a table of values would cost at most 4 KB.
If the range is really temperatures, I suspect 1 or 2 KB of possibly scaled integers would be enough.

The function is strictly decreasing, and there seem to be at most 150 or 300 values that need to be distinguished. A range-to-domain table would need at most 600 bytes, though it would be more complicated to access.

Note that avr-gcc has only 4-byte doubles, contrary to all C standards.
The constants are doubles, so there could be some differences on a PC.

I suspect that OP is having trouble wrapping his brain around floating point variables.

Iluvatar is the better part of Valar.

Quote:
It depends on what the definition of "narrow" is to you. And you don't seem to have grasped what I said about using a bias resistor to flatten the curve.

Sorry, but what makes you think that I haven't grasped how to linearise a thermistor bridge? I don't think you grasped that I use Matlab to do exactly that.

Look at the plots for different values of RB.

Choosing the resistor to have the same value as the thermistor in the middle of the measurement range is what I was taught (rightly or wrongly).

Now, choosing the most linear-looking model and doing a linear fit, we can see from the plot of residuals that the range of temperature over which the error is less than 1 C is not large.

Surely that counts as narrow by anyone's standards?

Please, Lee, teach me; show me I am wrong, as I think I may well be here, and I always want to learn from those with more experience.

Quote:
What "time periods"? And how long are they? Where do you think AVRs are applied? While indeed an electrical engineer might design the boards and/or do the programming, the application might well be for an ice-cream machine or dairy plant cleaning or agricultural chemical mixing or ...

Time periods for a process to start to change: the time constant. Everything dynamic has one, and in my work there is a large variation depending on what I am doing, so it's impossible to list them all. Plenty of things have a time constant under a second, like heating applications (depending on the mass to heat), but many chemical processes have very large 'pure delays', so it can be many seconds, even minutes, before a change starts.

I don't understand what you mean by where are AVRs applied? They are applied everywhere!

Well, I am an electrical engineer who does lots of hardware building: I make bespoke electronics, design prototypes, write the software, and do lots of research, so it's not just electronics and programming that I do.

Quote:

Nothing is infinite though, is it? There is clearly a limit, and I am so noob that I don't even understand why a float can be bigger than anything else. Is it bigger than 64-bit?

If you could explain, that would be great, as that comment on its own isn't even a fraction of the story for me.

## Attachment(s): Steinhart_Linearised_Multiplot.png

clawson wrote:

You completely missed the point I was making. You are dealing with float, not 32-bit ints, so you are not limited to 2^32 as an interim calculation result. In fact, float can represent up to about:

`#define __FLT_MAX__ 3.40282347e+38F`

which is considerably larger than 2^32.

Clawson, I (still) completely don't understand.

A micro has registers of a finite length, which restricts how big a number can be. Surely if it's a float then we have to give up some of the digits to go after the decimal point, so we lose magnitude for every point of precision we add.

But thinking about this, I really don't know how a micro works, do I? 10^38 is a crazy-sized number, ridiculously big, and yet the maximum is over three times that.

What's the smallest number?

http://en.wikipedia.org/wiki/IEE...

and specifically this:

http://en.wikipedia.org/wiki/IEE...

If you don't understand floats after that give up ;-)

The second page even tells you that the lower limit is 1.18x10^-38

clawson wrote:

http://en.wikipedia.org/wiki/IEE...

and specifically this:

http://en.wikipedia.org/wiki/IEE...

If you don't understand floats after that give up ;-)

The second page even tells you that the lower limit is 1.18x10^-38

Thanks clawson

I did discover the minimum size after a two-second google; it was easy once I knew the maximum.

It makes more sense already: a float's max is defined as part of the C language, whereas the maximum int is machine-dependent.

Thanks

Quote:
a float's max is defined as part of the C language whereas the maximum int is machine-dependent
What are you talking about? How a C compiler actually implements float is implementation-dependent; it's just that most C compilers choose IEEE 754 as it's the common method.

clawson wrote:
What are you talking about? How a C compiler actually implements float is implementation-dependent; it's just that most C compilers choose IEEE 754 as it's the common method.

Talking out of my ass
:oops:

The float isn't limited by the hardware the way an int is; have I got this bit right?

I think I will rethink life, to be honest; maybe I would make an excellent shelf stacker.

Quote:

Choosing the resistor to have the same value as the thermistor in the middle of the measurement range is what I was taught(rightly or wrongly)

And indeed that is probably the best starting point IME.

Now, there are applications and there are applications. Below is a plot of a linearized thermistor that is used to control a line of commercial soft-serve ice cream makers. My app's reported temperatures are within 1/2 degree F of the machine maker's tests with a calibrated thermometer. Well enough.

In the industrial world some of our controller lines must accommodate different thermistors with the same circuit board. The bias resistor is then a compromise and a lookup table is used as in the code fragments below. Linear interpolation between the points. Again the reported temperatures are within a degree or two of "actual". New tables can be loaded as parameters "in the field" into EEPROM so no "reprogramming" of the units is necessary.

And in that industrial environment that is plenty good enough. Unless you set up a very clean measurement system (translated as "expensive"), you are only getting a useful 9 bits or so from an AVR8 reading. Trying to report beyond that is misleading at best.

```eeprom	unsigned char	ee_table_temp_entries_30k	= 12;	// A/D counts to degrees F table
eeprom	unsigned char	ee_table_temp_entries_10k	= 12;	// A/D counts to degrees F table

// w/ 31.6k bias resistor; assumes 30k thermistor
//	Beta probe CTP3403, CTP1104
//
//	use Murata NTSxxWB203 as a pattern
eeprom	unsigned int	ee_table_temp_counts[MAX_TABLE_ENTRIES] =	// A/D counts, descending (table name assumed)
{
617,
557,
498,
441,
387,
338,
293,
252,
217,
187,
160,
137
};
eeprom	unsigned char	ee_table_temp_degf[MAX_TABLE_ENTRIES] =
{
59,
68,
77,
86,
95,
104,
113,
122,
131,
140,
149,
158
};
...
//
// **************************************************************************
// *
// *		C A L C T E M P
// *
// **************************************************************************
//
//	Common routine to calculate temperature (degrees F) value from A/D counts.
//
//	Interpolate between given values in the table.  Extend the table off both
//	ends using linear interpolation.
//
unsigned char	calctemp	(	int				ad_counts,
								unsigned char	table_entries,
								unsigned char	*table_temp)
{
unsigned char	slot;
int				value;

// Check for low temperature/high A/D counts "off the table"
if (ad_counts >= (int)ee_table_temp_counts[0])
	{
	slot = 0;					// use the first two slots
	}
// Check for high temperature/low A/D counts "off the table"
else if (ad_counts <= (int)ee_table_temp_counts[table_entries - 1])
	{
	slot = table_entries - 2;	// use the last two slots
	}
// "In" the table.  Find the correct slot.
else
	{
	slot = 0;
	while (ad_counts < (int)ee_table_temp_counts[slot + 1])
		{
		slot++;
		}
	}
//
//	Calculate the temperature using starting "slot" and linear interpolation
//	with the next value.
value = ((int)ee_table_temp_counts[slot] - ad_counts) *
		(*(table_temp + slot + 1) - *(table_temp + slot)) /
		((int)ee_table_temp_counts[slot] - (int)ee_table_temp_counts[slot + 1]) +
		*(table_temp + slot);
#if 1
if (value < TEMP_TOO_LOW)
	{
	value = TEMP_NA;
	}
if (value > TEMP_TOO_HIGH)
	{
	value = TEMP_NA;
	}
#endif
return ((unsigned char)value);
}

```

An interesting note is that I kept the table values small, so I was able to use vanilla 16-bit arithmetic.

Is my result good to three significant digits? Probably very near to that; at least as good, IMO, as the accuracy of the raw input signal. As good as your high-order equation? Dunno. I'd have to dig out my old Numerical Analysis text, look at your equation, and figure out what the error term would be after all that manipulation of floats.

## Attachment(s): linear.jpg


Quote:
The float isn't limited by the hardware like an int

Depends on what you actually mean by this. An int32 is the same regardless of the actual computing platform, but an int is implementation-dependent: on the AVR and many other 8- and 16-bit CPUs it is 16 bits; on other platforms it may be 32 or even 64 bits. Similarly with a float.

This is what a given compiler gives you; nevertheless, you are only limited by the available compute time and memory as to what precision of calculation you want to do. Many scientific calculators use only a four-bit processor and calculate in BCD.

For those of us who were taught math before the universal adoption of the electronic calculator, floating point is familiar, as we were always dealing with a mantissa and exponent.

Bignoob wrote:
how I should deal with this stuff

If that is really what you want to know, use Horner's rule
with float coefs = { -1.203E-12, 3.279E-9, -3.598E-6, 0.002009, -0.6809, 146.8 };
Bignoob wrote:
I might have

`x=1023^7/1000000`

So I could break this down into parts

`x=((1023^3)/1000)*((1023^4)/1000)`

Don't. There is no need.
This is the kind of thing floating point is for.
1023^7 < 2E21, and C requires float to be capable of representing at least 1E37 to one part in 1E7.


But all your figuring goes wrong 'cause 7*13 is 28.

The largest known prime number: 2^82589933 - 1

Without adult supervision. Just a side question for this kind of problem:
are there (in C or ASM) fast float libs that don't follow IEEE?
For this kind of calculation a simple int could be used as the fraction and 8 bits for the exponent.
That way the 5th-order solution could be done in something like 3000 clocks (10 muls and 5 adds).

Oops, a small error: a mul should take about 40 clocks and an add about the same, so about 600 clocks. (The other numbers were for a tiny.)

Quote:

Are there (in C or ASM) fast float libs that don't follow IEEE?

Generally, or for AVR?

https://gcc.gnu.org/onlinedocs/g...

But I don't know if that can be (or is?) available for AVR-GCC.

N1312 is here:

www.open-std.org/jtc1/sc22/wg14/...

In fact this is IEEE too. The base-2 floats we usually refer to as IEEE 754 are more correctly IEEE 754-1985; later, IEEE 754-2008 added this "decimal" (base-10) support alongside the existing base-2 formats.

BTW early work on decimal floats was done at IBM but they freed it to the public domain here:

http://speleotrove.com/decimal/

(the intention is not speed/performance but accuracy; however, as it is a "different" implementation, it may be more efficient too).

Yes, but I want to use 2's complement so it's faster, and I guess store the high bit of the fraction explicitly as 1 (lose one bit, but faster).

There must be at least a DSP lib out there; otherwise I'll have to make it.
I have no real need, but it could be fun :)

Is Bignoob still around?

I'm investigating an RTD interface for a critically-important project (my smoker ;) ) and came across this Analog Devices app note:
http://www.analog.com/static/imp...

Very interesting is that there are three approaches in a table, corresponding to the three mentioned here:

Quote:
Table 1. Comparison of Linearization Methods
Direct Mathematical Method ...

Single Linear Approximation Method ...

Piecewise Linear Approximation Method ...


Quote:

Yes but I want to use 2's complement, so it's faster,

You lost me there. I guess you mean that the pieces are normalized and such, so they cannot be used directly?

Quote:

Are there (in C or ASM) fast float libs that don't follow IEEE?
For this kind of calculation a simple int could be used as the fraction and 8 bits for the exponent.
That way the 5th-order solution could be done in something like 3000 clocks (10 muls and 5 adds).

Anyway, why don't you just do your "improved" FFP (Fast Floating Point ;) ) calculations and then convert to (and/or from) the standard format for storage?

And if your FFP is superior, then why aren't all packages based on it already?

While not trivial, extracting the pieces of a 32-bit "float" isn't going to take a lot of cycles. So now you've got (say) the exponent as a 2's-complement 8-bit value and the full mantissa as a signed 24-bit 2's-complement value.

Why didn't the people who already did AVR FP functionality, like Jack Tidwell, do it in the "best" way in the first place?


I wrote a floating-point package in the mid-70s for the 6800 we were using in the training simulators: 16-bit 2's-comp fraction and 8-bit 2's-comp exponent, so an FP multiply was a 16x16 multiply plus adding the exponents, etc.
This gives 15-bit resolution over a 2^127 range, about 10^38. I think the Steinhart equation is within 0.1 deg, and I don't think the range of temperatures is limited: if you take the R at 0 C, 25 C and 100 C and get the a, b, c coefficients from the US Sensors applet, the formula interpolates beyond those temps. To get better than that, you need to linearly interpolate between points that are close enough together that the max error in between is less than 0.1 C.
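That 6800-style format can be sketched in modern C like this: a toy version only (no rounding, no zero or exponent-overflow handling), with all names my own.

```c
#include <stdint.h>

/* Toy software float in the style described above:
 * value = (frac / 32768) * 2^exp, with a 16-bit two's-complement
 * fraction (Q1.15) and an 8-bit two's-complement exponent. */
typedef struct { int16_t frac; int8_t exp; } sfloat;

sfloat sf_mul(sfloat a, sfloat b)
{
    /* one 16x16 -> 32-bit multiply plus an exponent add */
    int32_t p = (int32_t)a.frac * (int32_t)b.frac;   /* Q1.15 * Q1.15 = Q2.30 */
    sfloat r;
    r.exp = (int8_t)(a.exp + b.exp);
    p >>= 15;                                        /* back to Q1.15 */
    while (p >= 32768 || p < -32768) {               /* renormalise if needed */
        p >>= 1;
        r.exp++;
    }
    r.frac = (int16_t)p;
    return r;
}
```

For example 1.5 is {24576, 1} (0.75 x 2^1) and 2.0 is {16384, 2}; multiplying them yields {12288, 3}, i.e. 0.375 x 8 = 3.0.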

Imagecraft compiler user

Quote:

get the a,b,c coeffs from the US sensors applet,

I don't think it is on the site anymore. Can you find it? You mentioned it in this thread a couple months ago:
https://www.avrfreaks.net/index.p...

Quote:

US Sensors has a good applet ...

https://www.avrfreaks.net/index.p... ... 69#1133769


@theusch
You need to look at the IEEE format. It has some good things, but it's a pain to implement.
1: It uses sign-magnitude so that it's symmetrical, which is a real pain on an AVR.
2: When it normalises, the high bit is known to be 1, so it's not stored; you gain a bit, but it's a pain.

Quote:
And if your FFP is superior, then why are all packages not based on it already?

I do believe my way is a straightforward way to do it on an AVR, if you don't need it too good.
And let's say in this case we make the output run from 10000 down to -5000 (100 times larger); then it fits an int as the input and output values. So there is no converting into and out of the format (other than denormalising): when the exponent has a certain value, the mantissa holds the integer value.

Quote:
Is Bignoob still around?

Yeah! I am always on this forum; I don't always have a lot of time to post, and the weekend was hectic, to say the least.

Quote:
Very interesting is that there are three approaches in a table, corresponding to the three mentioned here:

It is a very interesting read, it basically talks about everything we discussed in this thread and you would think we were all reading it as we posted!!

This has been another informative thread, I never intended to get into the float vs int debate, or even to discuss electronics techniques like linearising a bridge but it was all good relevant discussion

I know now just how big a float is and how my original fears were unjustified, for me using a function with floats in is my method of attack, theres no doubt an equation is the neatest and simplest code to follow

I use some thermistors were the manufacturer supplies the values accross the temperature range and they are guaranteed (so they say) to be within 1% of the data sheet

The Steinhart equation starts to drift below 0C this week I will be doing some experiments with them if I get chance I will repeat the ytests with the Steinhart and post some results up

Lee's method of interpolating is the best use of resources, no arguments from me
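For readers following along, the interpolation method being praised here can be sketched as a small table indexed by the top bits of the ADC reading, with linear interpolation on the remaining bits. The table values below are made up for illustration (tenths of a degree C, one entry per 64 ADC codes), not taken from any real thermistor:

```c
#include <stdint.h>

/* Hypothetical temperatures in tenths of a degree C, one entry
 * per 64 ADC codes (ADC = 0, 64, 128, ... 1024).  Values are
 * invented for illustration only. */
static const int16_t temp_table[17] = {
    -400, -300, -220, -150, -90, -40, 10, 60, 110,
     160,  210,  270,  340,  420, 520, 650, 850
};

/* Map a 10-bit ADC reading (0..1023) to tenths of a degree C
 * using integer linear interpolation between table entries.
 * All intermediates fit in an int32_t, so no overflow. */
int16_t adc_to_temp(uint16_t adc)
{
    uint8_t  idx  = (uint8_t)(adc >> 6);   /* which 64-wide segment */
    uint16_t frac = adc & 63u;             /* position inside it    */
    int16_t  lo   = temp_table[idx];
    int16_t  hi   = temp_table[idx + 1];
    return (int16_t)(lo + (((int32_t)(hi - lo) * frac) >> 6));
}
```

On an AVR this costs one shift, one mask, one 16x16 multiply, and one shift, with no floating point at all, which is why it is such a good use of resources.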

So we were all right in our own way, everyone's a winner!

Thanks to all who input to this thread

Quote:
get the a,b,c coeffs from the US sensors applet,

I don't think it is on the site anymore. Can you find it? You mentioned it in this thread a couple months ago:

Is this the app being referred to? You now have to ask for it.

[url] http://www.ussensor.com/steinhar... [/url]

Quote:

The Steinhart equation starts to drift below 0C

Interesting. I thought I saw some plots (in a scholarly-type article IIRC) where the variation from actual was a very small amount.

Also, what is the source of the coefficients?

As I opined in one of the threads I linked to, the manufacturer's R-T tables are so close to Steinhart-Hart that I speculate that it is the equation that is used to generate the tables.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

I haven't yet found the article I was thinking of, but a couple quotes from a treatise:
http://www.temperature.ie/Manual...

Quote:
It should be noted that the Steinhart-Hart equation
produces a good approximation to the relationship
between T and R for the complete range of a thermistor
based on data from just three calibration points.
...
The published R/T tables are based on actual measurements, but the difference between values calculated from the Steinhart-Hart equation and the published data should typically be less than +/- 0.01 °C.

From another document, it can be seen that the coefficients vary slightly depending on what temperature range they are derived from. The typical rule-of-thumb is to derive the coefficients from the top, bottom, and middle of the range of interest:
http://www.cornerstonesensors.co...

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

I have the self-extracting exe thermistor621.exe at home and at work, but we can't attach exes, can we?

Imagecraft compiler user

bobgardner wrote:
I have the self extracting exe thermistor621.exe at home and at work, but we cant attach exes can we?
You can wrap it in a .zip file.

Iluvatar is the better part of Valar.

It's only taken me a week to get back to this, but here I am! Not going away, am I?

Quote:
Interesting. I thought I saw some plots (in a scholarly-type article IIRC) where the variation from actual was a very small amount.

Interesting indeed. When I say the equation drifts below 0C, I am repeating what a manufacturer's technical department told me a few weeks ago. I wanted to measure -40 to +70C, and reading the links it seems to tally up, as they recommend changing the coefficients when the temperature range is over about 50C. This is cheating, though, as anything can be approximated piecewise linear...

Quote:
Also, what is the source of the coefficients?

Well, this is obviously calculated from the data sheet: plug in values from temperature points across the temp range and solve the equations, voila

Quote:
I speculate that it is the equation that is used to generate the tables

Maybe you are right, I don't know, but I doubt it: the manufacturers would just provide the equation and the coefficients, and surely their technical department wouldn't have given me the advice they did, which seems to be correct, as changing the coefficients over a 50C range counteracts the drift

Last week I set up a basic experiment with basic equipment

I set the freezer to -40(ish) and I had a calibrated sensor. I put both in the freezer and used the manufacturer's data and Matlab to generate a high-order equation

My thermistor and the calibrated sensor had quite different response rates, which I was aware of, but if the temperature changed slowly enough then the two sensors would match

When I checked the data the next day it was clear that someone had been in and wrecked the experiment. I will post up the plot soon, but it's clear the door got opened!

I was so busy solving other problems at work that I didn't get time to repeat the test, but I will certainly be doing this very soon; I am learning from it all the time

Lee, you are making me think here and it's teaching me things. The first thing is that I have been going around saying I use the Steinhart equation, but the reality is that I am using the B-parameter equation, which is a simplified version of Steinhart-Hart, so all this time I was not using the full version and it would never be as accurate

No harm done though, as I have never measured any errors and blamed them on the Steinhart equation, not yet! So thanks for making me think and check myself, I love learning

## Attachment(s): Thermistor plot.png