Level: Hangaround

Joined: Thu. Nov 7, 2013

Posted by jtw_11:
Wed. Apr 14, 2021 - 09:44 PM

Evening all,

I suspect I'm just being stupid, and the answer to this is simple.

I've done plenty of integer maths in a variety of applications to replace floating point arithmetic on small embedded platforms, without FPU hardware etc. For example:

var * 237 / 100; // Replaces var * 2.37

So much so that I've always avoided FP math and always used integer math. However, I have an application right now where I am filtering analog data using a 4-pole Butterworth FIR filter - and the filter coefficients are all very small floating-point values, e.g.:


0.003284735
-0.00112346

So... in this case I can't just scale the values up until they can be truncated into suitable integers, because the intermediate values become enormous (e.g. 10^300). My input data is only between 0 and 4095, so if I multiply 0.003284735 by 10,000 to give 32.84, truncated to 33, that's fine on its own - but the equation for a 4-pole Butterworth digital filter has multiple terms, which ends up with me multiplying multiple large numbers by other large numbers, and arriving at 10^300-sized values.

So my question is: it's always been my school of thought that FP math should be avoided as far as possible - or is this one of those applications where that simply isn't true, and FP is the solution?


Last Edited: Thu. Apr 15, 2021 - 10:22 PM