
I am working with a 3-axis accelerometer. Each axis is output as two bytes forming a standard signed 16-bit value. The sensor is uncalibrated and the value needs to be converted to g units. Full scale is +/-2g and I have, by measurement, a reading, As, for +1g. The current question is: how to scale the readings into g units?

One COULD use standard floating-point division, A / As, where A is the uncalibrated reading corrected for offset. BUT, in my case, the combination of the sample rate and the number of channels to be sampled makes floating-point problematic; there are occasions when there is not enough remaining time to write data to a microSD card.

Another option is fixed-point arithmetic. There are at least two ways to do this: (1) a straight fixed-point division, and (2) multiply the readings by a precomputed reciprocal of As (1 / As). This posting describes a comparison of these two methods. For example purposes, we will use A = 1234 = 0x04d2 and As = 17039 = 0x428f. In this example, the calculator answer is 0.0724g.

METHOD 1 - FIXED POINT DIVISION

We start off by noting that a simple integer division of A by As will result in zero, because A is smaller than As. So, if we multiply A by some convenient value, say 0x10000, then the answer from the division will be multiplied by the same factor. This is equivalent to shifting A to the left by 16 bits. In effect, the numerator for the division problem becomes 0x04d20000. We can mentally think of this as 0x04d2.0000 (or, substitute your favorite radix point). Now, we are computing 0x04d20000/0x428f which is 0x0000128a (or, if you like, 0x0000.128a). Since this is 0x010000 = 65536 times the ratio, this is the same as 0.0724. Thus, we have an answer that matches the expected value to the number of significant digits shown.

How can this be implemented? Here is the code that was used (volatiles are used to prevent removal by optimizer):

```c
#include <avr/io.h>
#include <stdint.h>

// AS is assumed to be the 1g "full scale" value
#define AS ((int16_t)17039)
// Ao is the offset-corrected reading.
#define Ao ((int16_t)1234)

// The following union converts an int16_t into a fixed-point int32_t
// with the value in the two high bytes of the 4-byte "fixed".
// (This relies on the AVR being little-endian.)
volatile union axis
{
    int16_t input[2];
    int32_t fixed;
} axis_x;

int main(void)
{
    // put the value into the upper half of axis_x.fixed
    axis_x.input[1] = Ao;
    axis_x.input[0] = 0;

    volatile int32_t scaled1 = axis_x.fixed / AS;   // good! answer is 256*256 * fractional part

    while (1) {}
    return 0;
}
```

METHOD 2 - FIXED POINT RECIPROCAL MULTIPLICATION

With this method, it is first necessary to compute 1/As. This computation is not counted in the operation cycle count since it can be done just once before sampling begins. We might start by trying 1 * 65536, which effectively makes the numerator 0x010000. But this is not sufficient, because the result is only 0x000003, which does not have sufficient resolution. So, for testing purposes, a uint32_t scale of 0x01000000 (effectively 0x01.000000) will be used. Note that the radix point does NOT have to fall on byte boundaries, but it makes post-processing a LOT easier! Then 1/As = 0x01000000 / 0x428f = 0x000003d8 and more resolution has been provided. The result is 0x00128730, which is 0.0724 to the precision shown; what small error remains is due to the small number of significant bytes kept in 1/As.

Here is the code that was used to implement this (volatiles used to prevent removal by optimizer):

```c
#include <avr/io.h>
#include <stdint.h>

// AS is assumed to be the 1g "full scale" value
#define AS ((int16_t)17039)
// Ao is the offset-corrected reading.
#define Ao ((int16_t)1234)

int main(void)
{
    // precomputed reciprocal, in 0x01.000000 fixed point
    volatile int32_t mult = (int32_t)0x01000000 / AS;

    volatile int32_t scaled3 = Ao * mult;   // takes 57 clock cycles!

    while (1) {}
    return 0;
}
```

ANALYSIS

While the "normal" fixed-point division is a lot faster than a floating-point one, division by multiplying with the precomputed reciprocal of the divisor is more than 10 times faster than normal fixed-point. While an improvement was expected, it was not expected to be this large. Results may vary according to the numeric values used in the computation.

While it might seem that the multiplication method pays a penalty for its 32-bit intermediate value, the normal division, as implemented, requires a union. Thus, the variable-memory footprint is similar.

What is NOT shown here is how to turn these fixed-point values into ASCIIfied decimal strings. That is the next challenge. When there is something to report, a tutorial will be added.

Jim


Until Black Lives Matter, we do not have "All Lives Matter"!

Last Edited: Mon. Dec 21, 2020 - 03:52 AM

Just checking but you are aware of...
.
https://gcc.gnu.org/wiki/avr-gcc#Fixed-Point_Support

Yes, I was aware of that. I wanted to explore the basics a bit before delving into what the compiler might offer. And, I wanted something to show about using fixed point.

Also, not very clear how to use the information that is there. That will probably be the subject of another note.

Jim

Until Black Lives Matter, we do not have "All Lives Matter"!

Last Edited: Sat. Dec 19, 2020 - 07:43 PM

About 25 years ago a coworker came to me & said he needed some advice about a side project. Apparently he had been working for a week or two to scale & display a linear slider sensor value in inches or mm, and to do so needed to divide by 3.43 or similar, depending on the individual sensor's cal needs. He had been arm-wrestling with floating-point calculations, with no luck. I told him to forget about dividing, just multiply. This made him somewhat upset, like I didn't understand what he was telling me... I NEED to DIVIDE!!! I said: just form a multiplier, start at 1, & set two buttons to increase/decrease the multiplier by one count. Adjust the multiplier until the display reads the correct value (with the decimal forced to the left 3 places, so a result of 10000 is displayed as 10.000). Then it is cal'd.

He came back the next day shaking my hand like crazy...he had it up & running in an hour.

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!

This is something I need a whole lot more on.  I hated math in school

Just gettin' started, again....

If you have questions, please ask. It won't be REALLY complete until figuring out an easy (easier) way to make the decimal value more apparent.

Jim

Until Black Lives Matter, we do not have "All Lives Matter"!

I need to go through the basics first, and a better reason to be working in math.  So far, I don't have a problem that needs a solution.  Soon though, lots of stuff lined up.

I did some looking on line for some math "routines" or tutorials that worked around floating point.  Like, how to perform math without actually using floating point.  But I'm not sure I found much.  Have yet to decipher it.  Like I said, I need a better problem.   I plan on hitting the accelerometers soon, so that would be a good point.  Just about finished with that RV sign, so maybe move on to something else soon.

Kicking myself in the backend again for not getting back into this sooner.  Having fun and missing it for a number of years, should have got back into it much sooner.

But thanks for the assistance, you have been very helpful, as many here have been.

Just gettin' started, again....

Note for method 2 you generally don't calculate the multiplier in/by the program itself. This can be done with a spreadsheet & even optimized. The idea is that it is often known ahead of time.

Here's some fun stuff

https://homepage.divms.uiowa.edu...

https://www.drdobbs.com/parallel...

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!

Last Edited: Sat. Dec 19, 2020 - 10:35 PM

The comment was for the calibration, not for the correct "math" value ;)

In general multiplication is way faster than division, so using 1/As where As is (near) constant, seems it will always be a win.

(For example, on the Power PC 750 a double precision multiply is 2 cycles, a double precision division is 31 cycles.  That's a 15:1 difference.)

Be aware that the fixed-point support in avr-gcc is 16- or 32-bit "fract" types, while the fp mantissa is 24 bits. (I don't remember what the accumulators are.) So, I'm not sure 32-bit fixed point will necessarily be faster than 32-bit floating.

Indeed, the reciprocal value MIGHT be saved in the form of the scaling factor, from which the reciprocal can be determined OR it can be saved with the reciprocal predetermined.

Part of the issue will be that there are 3 scale factors for each sensitivity range and 6 sensitivity ranges. Saved as scale factors, they will take two bytes per value. Saved as precomputed reciprocals, they will take a number of bytes that depends on the fixed-point format selected. They will all have to be saved in EEPROM. I THINK there is enough space for either method, but I have not looked at it, yet.

Until Black Lives Matter, we do not have "All Lives Matter"!

avrcandies wrote:
Here's some fun stuff

That'll take some time to absorb.  But just what I need I think.

Thanks for posting that.

I saved some other UofIOWA stuff on programming embedded systems and the AVR.    Some basic stuff, and a lot of "go read the datasheet"....

Just gettin' started, again....

ka7ehk wrote:

METHOD 1 - FIXED POINT DIVISION

We start off by noting that a simple integer division of A by As will result in zero, because A is smaller than As. So, if we multiply A by some convenient value, say 0x10000, then the answer from the division will be multiplied by the same factor. [...]

How can this be implemented? Here is the code that was used (volatiles are used to prevent removal by optimizer)

Um... There's no objective need to use a "convenient" value as a scale factor in fixed point arithmetic. Absolutely any positive value can be used as a scale factor. You can choose anything (taking into account the precision you need) and just multiply by it. It can be 1234, 87435, 1000 or 42. Again: anything. Fixed-point arithmetic will work perfectly fine with such scale factors, providing you with the fixed precision determined by that scale factor. You just have to remember that after some calculations you will have to adjust the intermediate or final values (e.g. by dividing them by the very same scale factor).

In your example you decided to choose a scale factor that is a power of two (to replace ordinary multiplication with a shift - a fairly good optimization), and you chose that power to be 16 in order to force the boundary between the whole and fractional parts precisely onto the 16-bit word boundary (to replace the shift with a memory reinterpretation through a union - a rather questionable optimization). These are optimizations, and the problem is that these optimizations get in the way, obfuscate the code and thus obfuscate the topic of fixed-point arithmetic.

If I understood you correctly, you wanted to demonstrate how fixed-point arithmetic works. Involving the above optimizations so early makes your presentation less readable than it could have been.

Dessine-moi un mouton

Last Edited: Sun. Dec 20, 2020 - 06:22 AM

ka7ehk wrote:
there are occasions when there is not enough remaining time to write data to a microSD card.

Now there's an interesting question. Are you using an expensive industrial SDcard which uses Single Level Cell (SLC) flash or a consumer grade card which would use NAND flash ?

The maximum write time for a single 512B sector, Twr, varies hugely. Summarising info found here: https://jitter.company/blog/2019/07/31/microsd-performance-on-memory-constrained-devices/ I get:

Max Twr Consumer = 113ms

Max Twr Industrial = 15ms

I would have thought 15ms was plenty of time to perform a few floating point divides, and 113ms is an eternity.

What clock frequency are you running at ? We can hit the Atmel AVR Simulator and get some real numbers.

This reply has been marked as the solution.

Choosing the best scale factor may well depend on what other calculations you need to do.

For example, if all you wanted was to display the value in decimal (I guess this isn't all, but just as an example), then you could use 100000000 (100 million), so the 'reciprocal multiplication' method is multiply by 100000000 / As = 100000000 / 17039 = 5868

Then value 1234 scaled becomes 1234 * 5868 = 7241112

To display, now just move decimal point 8 places to left giving .0724...

EDIT: or rounding to closest gives (100000000 + (17039/2)) / 17039 = 5869, slightly better result 1234 * 5869 = 7242346

Last Edited: Sun. Dec 20, 2020 - 10:44 AM

MrKendo wrote:
Choosing the best scale factor may well depend on what other calculations you need to do.

Yes, I think this is a critical factor.

Another consideration is, what is the precision of the accelerometer output?  To put it another way, how many bits of good data are in the 16 bit output?  That will determine the minimum precision you need for the scaling factor.  If the scaling factor is less precise, you're throwing away good data.  More precise, and you run more risk of running out of calculation headroom.

I don't know what the specs are for the OP's device, but I did find a part that was spec'ed at 4096 counts/g, so a total range of 16k counts for +/- 2g.  That range could scale nicely into a 16-bit int, either going to the full int range, so a scale of 16384/g, or scaling to nice friendly decimal values of +/- 20000, so a scale of 10000/g.

To scale to 10000/g, for example (starting with the calibration value of 17039/g), you can set MULT to (65536*10000)/17039.  This will give you the scaled output in the top 16 bits of the 32-bit calculation.  This sets MULT to 38462.  For an input of 1234, this gives (1234*38462)/65536 = 724, or 0.0724g.  If you want to add the rounding the calculation is ((1234*38462)+(65536/2))/65536, which in this case still yields 724.

Note that we are scaling down from the calibration value of 17039/g to 10000/g without losing data because the hardware (in the example I looked up - may be different for the OP's hardware) is only good to 4096/g.

For fixed-point math that needs displaying in decimal, I use a decimal scaling factor (0x2710 works very nicely!) so that the insertion of the decimal point is merely a matter for display.  S.

ka7ehk wrote:
Part of the issue will be that there are 3 scale factors for each sensitivity range and 6 sensitivity ranges.
Significantly more than in the Opus codec.

fyi, C++23 may have a scaled_integer type.

https://gitlab.xiph.org/xiph/opus/-/blob/master/celt/fixed_generic.h#L149

CNL: cnl::scaled_integer< Rep, Scale > Class Template Reference

Opus Codec (Opus is in C89)

GitHub - johnmcfarlane/cnl: A Compositional Numeric Library for C++

"Dare to be naïve." - Buckminster Fuller

Then value 1234 scaled becomes 1234 * 5868 = 7241112

To display, now just move decimal point 8 places to left giving .0724..

Note also you might not even need such a large number... Here you are tuning the display to about 1 part in 5900 - that's quite a fine finesse. Say you only needed a result to within 0.5% (full scale), that is one part in 200, and a lower multiplier may be used, which of course might mean less storage and faster results. Of course a 16x16 mult is pretty quick, so 16x8 isn't a huge advantage.

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!

AndreyT - I made no claim about "optimization". I simply wanted to explore the statement made fairly often on Freaks that (approximately) "multiplication can be better than division". I wanted to find out how much better that might be in my application. I think that the choices I made in that example are probably adequate for my application. That is the only claim that I make.

Jim

Until Black Lives Matter, we do not have "All Lives Matter"!

OldMicroGuy wrote:
But I'm not sure I found much.
Try video game programming; a megaAVR is akin to an Intel 386 compute speed-wise (there are some megaAVR video game PCBA)

IIRC, fixed-point math is a part of a course in numerical analysis.

cites

Numerical analysis (1981 edition) | Open Library

"Dare to be naïve." - Buckminster Fuller

OldMicroGuy wrote:
I saved some other UofIOWA stuff on programming embedded systems and the AVR.
Cornell University and PIC32MX :

ECE4760 fixed point

"Dare to be naïve." - Buckminster Fuller

It seems like it would go without saying that fixed-point scaling via integer multiply would be far faster than any division, since the AVR (most) has a multiplier & multiply instructions... so you are already 50 steps ahead in the game compared to division.

On the AVR, a 16x16 multiply can be done in 17 cycles (only 13 instructions)...finished before a divide could even get things ready to get underway !

Less than 1us at 20MHz...that should be fast enough!

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!

Last Edited: Sun. Dec 20, 2020 - 07:43 PM

Sure, multiplication is faster than division. But, with possible differences in required data widths, I did not know how big the difference would be. I simply provided a concrete example. For my problem, it is about 10:1 or a bit more.

Nor had I thought about other constraints. The difference would not be nearly so great if the divisor is a true variable, rather than a "constant" for which the reciprocal can be pre-computed.

Jim

Until Black Lives Matter, we do not have "All Lives Matter"!

avrcandies wrote:
On the AVR, a 16x16 multiply can be done in 17 cycles (only 13 instructions)
a few more in an AVR DSP app note

AN2701 Digital Signal Processing Performance of the 8-bit AVR Core

unzip, ../SAME/SAME/mult16x16.h

ATMEGA4809 - 8-bit Microcontrollers

"Dare to be naïve." - Buckminster Fuller

The difference would not be nearly so great if the divisor is a true variable, rather than a "constant" for which the reciprocal can be pre-computed.

Very true, however, since in this case it is also a calibration, it is inherently part of a pre-computation numerical chain.   In that case, the cal is just a slightly tweaked multiplier

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!

Last Edited: Sun. Dec 20, 2020 - 08:58 PM

Yes, but this does point out the importance of understanding your "situation" before you start and recognizing that different algorithms may be more or less "optimum" in different situations.

Jim

Until Black Lives Matter, we do not have "All Lives Matter"!

Another point which I think MAY be true: Optimization of the fixed point arrangements may change when you take into account the need to convert the result into ASCII decimal strings. If that is true, then you need to optimize the entire system, from binary input to string output.

Jim

Until Black Lives Matter, we do not have "All Lives Matter"!

make floating-point problematic; there are occasions when there is not enough remaining time to write data to a microSD card.

What do you need the data for?

If this is like your treetop moves, then why not write the raw numbers together with a gain and offset value, and let the reader do the conversion?

In fact, that is my first choice. But, a new "market" may be opening to monitor shipping containers containing precision equipment (big CNC mills, laser cutters, etc) and they want recorded data in g units. It has to do with evidentiary standards in courts and manipulation of the data, post-event (it then being considered less reliable, even though I would write that software).

Jim

Until Black Lives Matter, we do not have "All Lives Matter"!

Last Edited: Mon. Dec 21, 2020 - 12:28 AM

MrKendo -

Thanks, you just hit the jackpot for me!

I had been fixated on a physical radix point, like a decimal point, BUT, what really helps is to not think of it as a radix point but a "scale factor". In fact, the whole idea of "fixed point" is a red herring.

Suppose that you have an operation (especially division, but it does not have to be) that does not produce a useful result in the binary integer domain. If scaling by some factor, K, does give a useful result, and you then unscale by 1/K, maybe there is a benefit. That scale factor DOES NOT have to be related to the radix. Here is a simple example.

Suppose that you need to compute (6/4) = 0x06/0x04 and convert the answer to decimal (maybe with itoa()). Now, as an integer division, you get 1, which MIGHT, in your application, not be very useful (it's off by 33%, after all). BUT, if you scale the starting value by 10 = 0x0A, you are now computing 0x3C/0x04 and get 0x0F. Does not look like much gain, does it? Think again, though, because when you do itoa(0x0F) - yes, the argument is bastardized - the result is "15". Now, by the original scaling, you know that this result is 10 times larger than the real answer, so you manually shift the decimal point left by 1 digit. Bingo, you get "1.5".

Now one can wax eloquent about linear transformations and such, but if F(x) is linear, then scaling x by k, operating with F, then unscaling by (1/k) preserves the result of the operation. k does NOT have to be related to the base/radix, or anything else. It is chosen in a way that produces the desired benefit. In this case, scaling the original binary-domain division by some multiple of 10, doing the division and a binary-to-decimal conversion (even though it is a string, not a number), then undoing that scale by inserting a decimal point in the corresponding place does TWO things for you: (1) improves the resolution of the computation and (2) simplifies the binary-decimal conversion. Note, also, that some problems may benefit from a scale factor LESS than 1.

kk6gm, Scroungre, and avrcandies all suggested this in various forms, above, but it took a bit of cooking the stew to make sense of it all.

To everyone, a BIG thanks for expanding the initial post.  I will try to put together a coherent tutorial, because there does not appear to be any and we do get a steady, but not voluminous, stream of questions about this.

Cheers and thanks

Jim

Until Black Lives Matter, we do not have "All Lives Matter"!

Last Edited: Mon. Dec 21, 2020 - 05:06 AM

kk6gm, Scroungre, and avrcandies all suggested this in various forms, above, but it took a bit of cooking the stew to make sense of it all.

Indeed, sometimes the secret is in the simmering sauce.  In #2 it is saying you don't even need to calculate or think too much about the multiplier...by simply doing the calibration, you get the correct result (the goal of calibration) & hence, the correct multiplier.

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!

Overall I think the main thing is to separate the concept from the implementation. AndreyT touched on this earlier.

The concept really is as simple as multiplying (or scaling) by some value. That's it. Can be any value you like.

Then it is just basic maths

eg. if you have x and y both scaled by some factor M then

Mx + My = M(x + y)

Mx * My = MMxy

Mx / My = x / y

When it comes to the implementation, it is common to choose a scale factor that is a power of 2, so then all your scaling/de-scaling operations become multiply and divide by some power of 2, which allows for optimisations like bit shifts. This is why you often see things like 'using format 8.8', which implies 8 bits for the whole part and 8 bits for the fractional part; this is simply using a scale factor of 256. Likewise a 12.4 format, using 12 bits for the whole part and 4 bits for the fractional part, is simply using a scale factor of 16. But it doesn't have to be a power of 2; if a power of 10 scale factor makes more sense in a given case then you can use that instead. Whatever value you use, the fiddly part is making sure that none of the calculations will overflow at any point.

ka7ehk wrote:
what really helps is to not think of it as a radix point but a "scale factor".

Indeed.

As I often suggest, instead of working in 'units', work in 'deci-units' (scale by 10), or 'centi-units' (scale by 100), or 'milli-units' (scale by 1000) or whatever gives a suitable scaling. Then scaling back to 'units' for display is simply a matter of inserting the decimal point at the appropriate position.

And, if it's not for "human display", then there may be no need to stick to decimal factors ...

One thing to note in this day & age - now that microcontrollers with floating-point are widely available (eg, Cortex-M4F) - is that using the floating point may well be "better" (sic) than messing about with fixed-point ...

Top Tips:

1. How to properly post source code - see: https://www.avrfreaks.net/comment... - also how to properly include images/pictures
2. "Garbage" characters on a serial terminal are (almost?) invariably due to wrong baud rate - see: https://learn.sparkfun.com/tutorials/serial-communication
3. Wrong baud rate is usually due to not running at the speed you thought; check by blinking a LED to see if you get the speed you expected
4. Difference between a crystal, and a crystal oscillator: https://www.avrfreaks.net/comment...
5. When your question is resolved, mark the solution: https://www.avrfreaks.net/comment...
6. Beginner's "Getting Started" tips: https://www.avrfreaks.net/comment...

awneil wrote:
As I often suggest, instead of working in 'units', work in 'deci-units' (scale by 10), or 'centi-units' (scale by 100), or 'milli-units' (scale by 1000) or whatever gives a suitable scaling. Then scaling back to 'units' for display is simply a matter of inserting the decimal point at the appropriate position.

Right, and the benefit goes even farther than this (displaying).  Taking the original goal of measuring and scaling acceleration, the numbers could be scaled to e.g. 1g = 2^14, so +/-2g fits within a 16-bit int.  But trying to write the code for that scaling could get confusing and produce errors.  Much better, IMO, to scale to 1g = 10000 (assuming the actual precision of the data is lower than this, so no data is lost in the scaling).  It is simply easier, I believe, for programmers to think in terms of scaled-by-tens values.  Same with ADC values - scale to millivolts or some power of ten of whatever the ADC reading represents (maybe tenths of a degree, or hundredths of a kilogram), whatever makes sense for keeping the original data precision while allowing for adequate calculation headroom.

This is a topic that does not get much discussion, and it is often given little thought in system design. But, for many projects, it's a make-or-break challenge.

As I have been learning over the last couple of years, sometimes painfully, issues of arithmetic overflow, loss of precision, and generation of human-readable results all need to be considered early in a design because they often have unexpected impacts. These impacts can range from (but are not limited to) execution time, to occupied code space, to RAM usage. Sometimes it is even whether or not the device works.

Jim

Until Black Lives Matter, we do not have "All Lives Matter"!

Unfortunately (or maybe for the sane, fortunately)...the advent of high level languages used everywhere makes it enticing to just "throw" equations at problems with little thinking through the details.

So a bit of the developed art gets lost in the sands of time.

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!

When you have gazillion-bit floating point, gigabyte system memories, and terabyte drives, who cares about minimizing the memory footprint?

Jim

Until Black Lives Matter, we do not have "All Lives Matter"!

When you have gazillion-bit floating point, gigabyte system memories. and terabyte drives, who cares about minimizing the memory foot print?

Yep, progress is a two-sided coin... I wonder what the cordic people who very carefully crafted their efficient algorithms think? The FFT was developed to speed up computation, with a lot of efficiency tricks... for graphics you have Bresenham's speedy line-drawing algorithm. Maybe some of those will be forgotten, or maybe they will linger "under the hood" of smarter compilers. Efficiency is allowed to take a back seat & rest up. It's like now having 300 TV channels, but nothing to watch.

https://www.versci.com/fft/index...

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!

I remember Nicolet's audio spectrum analyzers from that era!

Jim

Until Black Lives Matter, we do not have "All Lives Matter"!

ka7ehk wrote:

When you have gazillion-bit floating point, gigabyte system memories. and terabyte drives, who cares about minimizing the memory foot print?

People care a great deal when two sets of equations that should result in the same value don't, and your equality test fails.  Even an infinite number of bits is inadequate for infinitely repeating decimals*.

This is also why you never do financial math in floating-point, because nobody's going to cash a check for (e.g.) \$10.00000037... **

S.

* - With the exception of the repeating decimal being "0", for you mathematical pedants

** - People still try, though, and the results often appear on places like "Worse Than Failure" websites:  http://www.thedailywtf.com/

Scroungre wrote:

This is also why you never do financial math in floating-point, because nobody's going to cash a check for (e.g.) \$10.00000037... **

No idea if this is true or urban legend, but back in the 80s I heard a story of a banking programmer who wrote code that stripped off all values less than one cent and sent those values to his own account, and supposedly made millions.  Probably not true, but we got a kick out of the story.

kk6gm wrote:
No idea if this is true or urban legend, but back in the 80s I heard a story of a banking programmer who wrote code that stripped off all values less than one cent and sent those values to his own account, and supposedly made millions.  Probably not true, but we got a kick out of the story.
I heard the same story.

He got caught.

Moderation in all things. -- ancient proverb

Apropos, consider signed fixed point binary arithmetic... if you have two values, each of one sign bit, seven digit bits, and eight fractional bits, and multiply them together, what you end up with is a result of sixteen fractional bits, fourteen digit bits, and *two* sign bits... this is in fact the same as any signed multiplication, but it's kind of hidden from view in that case. The problems arise when you try to normalise the result.

The first time you come across this it can be a bit confusing, particularly if you're trying to make sense of efficient digital filtering algorithms.

Neil

skeeve wrote:
I heard the same story.

He got caught.

It's called penny shaving, an instance of salami slicing.

https://en.wikipedia.org/wiki/Salami_slicing

 "Experience is what enables you to recognise a mistake the second time you make it." "Good judgement comes from experience.  Experience comes from bad judgement." "Wisdom is always wont to arrive late, and to be a little approximate on first possession." "When you hear hoofbeats, think horses, not unicorns." "Fast.  Cheap.  Good.  Pick two." "We see a lot of arses on handlebars around here." - [J Ekdahl]

barnacle wrote:
and *two* sign bits...

Scaling can be additive as well as multiplicative.  Stop doing signed math...     S.

barnacle wrote:
*two* sign bits...

I remember a certain GPS chipset maker had a "fixed point" implementation where the integer & fraction parts were separate items in a struct; something like:

```c
struct {
    int16_t  integer;
    uint16_t fraction;
};
```

Can you spot the problem ... ?

I came across it  because I live near where longitude goes from -1.0000 to -0.9999


There is a reason that float (normally) is sign magnitude.

Not a math error, but a place I worked made chart plotters etc., and the first version just plotted after any compass direction it was told. So in a setup that gave info about both magnetic north and true north, it would flip direction all the time! That is not a problem in Denmark, where the difference is less than 0.3 deg, but most other places you have to filter one out :)

Can you spot the problem ... ?

well of course..... you need sign, integer, and fraction

now, let's not be negative about the situation

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!

Only if your compiler uses 2's complement; it would be fine in a sign-magnitude integer format (as part of a float where the exponent is a constant).

sparrow2 wrote:
only if your compiler uses 2's complement,

Which is (almost?) universally the case - and was certainly the case for the compiler for which this was supplied.
