I just found out that the dtostrf() function takes too long to execute. Does anyone have faster code for converting a float to a string?
Which compiler are you using?
What is "too long"?
Is (s)printf faster?
There is a project in the Academy that I used when I wanted to control the output format of float-to-string that is much smaller (code size) than printf(). Digging...
I have no idea about the speed vs. other library routines. IIRC it took ~700uS on 3.68MHz AT90S4433 (no MUL instruction) to take an A/D reading, adjust with slope/intercept for calibration values, convert to "user units", and prepare the results for display. I'm guessing an AVR with MUL would have cut down that time considerably.
I have no idea whether these methods are faster or slower than what you are doing. I know that for my particular use, project 38 was smaller and gave me more control over the output formatting.
You can put lipstick on a pig, but it is still a pig.
I've never met a pig I didn't like, as long as you have some salt and pepper.
Just a thought, but does the thing you are holding as a float and want to output as a string REALLY have to be a float in the first place? Is it not possible to use scaled integers for whatever quantity it is you are holding there? (It's a lot more code-space/speed efficient to output integers in human-readable form than floats, and they're also (potentially) more accurate.)
I've got an interesting experiment for you to do, Cliff--
I'll add on to the description of the app that I mentioned above. Imagine a pressure sensor, and the display is to be in various user-selectable units such as PSI, kPa, bar, etc. The desired units will have a wide scaling range: 1.23 psi might be 0.000123 in one of the other units.
I did the whole scaled-integer thing, and needed many paths through the "normalized input value to converted display string" routine to accommodate the wide variance in output scales. As mentioned, it took about 700uS on a '4433 typically, and was a pain to get correct and maintain.
As an experiment, I did:
--Converted normalized input value to float
--Single float multiply with the float scaling factor from a table
--Used modified project 38 to produce the display string
(this routine uses only a couple float primitives, and gave me control over the output scaling. A complicating factor was that the output device was a 7-segment display, so I wanted packed BCD as output vs. ASCII, so the modified routine gave me that control)
The result: Using the float as described was twice as fast, much cleaner, and no larger than the messy scaled integer solution.
As is often the case, the summary is "It depends". Careful use of float (trying not to pull in every primitive) can be very useful. So try the experiment sometime and see what the results are. YMMV depending on the FP implementation, etc.
Floats in and of themselves are often fine. Let's face it: an IEEE 754 float uses the same 32 bits as a long on an AVR, so fundamentally there's no reason they shouldn't be used for storing the numbers themselves. But it's usually when you want to get them back out of float form and into something humanly digestible that the problems start, as all of the library functions tend to have a heavy payload.
I wasn't aware of project 38, but I just looked at it and can see the advantages it gives over dumping in the printf() baggage. It still uses float math library functions, though, which could incur quite a lot of overhead; I need to actually use the code in an avr-gcc project to see what the impact is.
As you say, it has quite a bearing on how efficient the maths libs from the C compiler are. I guess the ultimate might be to code pared-down versions of the lib functions that cater for exactly the task in hand.
But at the end of the day I guess I'm just a bit of a Luddite, as I've actually written a few calculators in my time (one in a PDA that never entered production and one in a multi-function telephone that did), and I've always been disappointed by the inherent inaccuracy (8.5 digits) of 23-bit-mantissa floats (but have ended up using them in the end anyway, and then spent a lot of work trying to hide the inaccuracies from the user).
In fact, in the Win2K calculator just now I did 2 x^y 0.5 * = - 2 and the display shows: 4.231503478368152916468244968377e-38. This is better than the Win 3.x calculator and suggests that maybe they're using 64-bit floats rather than 32-bit, but it's still not right! I thought this $3,000 calculator on my desk would be able to do a better job than the $10 Casio next to it ... oh dear ... actually the Casio says -1E-09.
Just as well we're not programming missile guidance systems here or that missile just hit the building next door!
I pretty much agree with you on avoiding floats in general/when practical. The particular app that I cited was a problem for me >>because<< of the large dynamic range of the output, and that is what float operations give you over long, not more precision. Starting with a 10-bit A/D reading, there are only about 3 meaningful decimal digits in the result anyway.
IIRC, project 38 did not pull in too many primitives from the compiler's FP package. And I already had some from the convert & multiply in the previous steps, so the multiply-by-10 (or was it divide-by-10?) repeatedly applied did not "cost" me any additional code space.
It sure was a lot cleaner when it was done. About a dozen lines of C source for all the output units, including the call to the modified-project 38 "library" routine.
I have to try the same experiment on a modern AVR with MUL to see the results.
© 2020 Microchip Technology Inc.