Hello! I'm trying to implement polynomial regression on an ATmega to calculate coefficients for a line, a quadratic and a cubic polynomial, all from a set of values obtained from the ADC. I'm using the least squares method to calculate the coefficients, and I have already found the formulas for the three equations. I've also written the code to calculate the coefficients, and it works fine on the computer (I've checked it against the same regression done in Excel). However, once I program the microcontroller and compute the coefficients from the same set of data with the same code, they come out very different from the ones I get on the computer.

I've read about the error that comes with floating point math and I'm trying to avoid it, but that's difficult because the values grow too big too fast and go above the limit of a 32-bit signed integer. So I was wondering if there's a way to avoid floating point math entirely, or a way of using it so that it doesn't affect the accuracy of the results so much.

To show what I'm referring to, I've added a snippet of the code I'm using. The x array holds the values I'd get from the ADC, and the values in the y array won't go higher than 200. As you can see, the variables s1 to s7 are the sums of products of the arrays, so they tend to surpass the limit of a 32-bit integer. The floating point math is only really needed when the coefficients a1 to a3 are calculated.

float x[16] = {448.0, 407.0, 374.0, 342.0, 321.0, 310.0, 289.0, 267.0,
               258.0, 241.0, 226.0, 212.0, 198.0, 179.0, 163.0, 134.0};
float y[16] = {10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 45.0,
               50.0, 55.0, 60.0, 65.0, 70.0, 75.0, 80.0, 85.0};
uint16_t n = 16;
uint16_t i;
float s1, s2, s3, s4, s5, s6, s7, denom, a1, a2, a3;

s1 = s2 = s3 = s4 = s5 = s6 = s7 = 0;
for (i = 0; i < n; i++) {
    float xi = x[i];
    float yi = y[i];
    s1 += xi;                 /* sum of x     */
    s2 += xi * xi;            /* sum of x^2   */
    s3 += xi * xi * xi;       /* sum of x^3   */
    s4 += xi * xi * xi * xi;  /* sum of x^4   */
    s5 += yi;                 /* sum of y     */
    s6 += xi * yi;            /* sum of x*y   */
    s7 += xi * xi * yi;       /* sum of x^2*y */
}

/* Cramer's rule on the 3x3 normal equations for y = a1 + a2*x + a3*x^2 */
denom = n * (s2 * s4 - s3 * s3) - s1 * (s1 * s4 - s2 * s3) + s2 * (s1 * s3 - s2 * s2);
if (denom != 0) {
    a1 = (s5 * (s2 * s4 - s3 * s3) - s6 * (s1 * s4 - s2 * s3) + s7 * (s1 * s3 - s2 * s2)) / denom;
    a2 = (n * (s6 * s4 - s3 * s7) - s1 * (s5 * s4 - s7 * s2) + s2 * (s5 * s3 - s6 * s2)) / denom;
    a3 = (n * (s2 * s7 - s6 * s3) - s1 * (s1 * s7 - s5 * s3) + s2 * (s1 * s6 - s5 * s2)) / denom;
}