Decimal floating point


Is there a decimal floating point library for 8-bit AVRs? I read that decimal support is buried somewhere in GCC 4, but I have no idea how to use it or if it's even supported by AVR-GCC.

I've seen a couple of fixed-point libraries, but I need at least 12 significant decimal digits. If no such library exists, I'm going to take a crack at porting the DFP library (http://dfp.sourceforge.net) from Java to C. DFP uses base 10000 (each "digit" is 16 bits and represents 4 decimal digits) but I figure that base 100 would work better on an 8-bit platform, and computation would be faster than with packed BCD. An 8-byte struct would store 6 bytes (12 digits) of mantissa, 1 byte of exponent (-128 to 127), and 1 byte for sign and other flags.
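To make the layout concrete, here's a rough sketch of the struct I have in mind (field names are placeholders, nothing is final):

#include <stdint.h>

/* 8-byte decimal float: 6 bytes of base-100 mantissa (12 decimal digits),
   a signed exponent byte, and a flags byte for the sign and special values.
   This is only a sketch of the layout described above. */
typedef struct {
    uint8_t mantissa[6];  /* each byte holds 0..99, i.e. two decimal digits */
    int8_t  exponent;     /* -128..127                                      */
    uint8_t flags;        /* bit 0: sign; other bits reserved (NaN, inf)    */
} dfp_t;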

If it's been done already, let me know.


libm, a.k.a. "floating point", is standard in C; fixed point would be relatively non-standard (unless you're talking about the latest ISO C spec). AVR-GCC implements the standard IEEE-754 32-bit floating-point format. (64-bit doubles are not supported; they are handled internally as 32 bits.)

For efficiency, you might want to look at fixed point, or scaled integers.

Writing code is like having sex.... make one little mistake, and you're supporting it for life.


I'm not talking about floats or fixed point. Accuracy is much more important than speed for me.

Decimal floating point (http://en.wikipedia.org/wiki/Dec...) is used to represent base-10 numbers exactly. Calculators, databases, etc. use it. Java's BigDecimal class uses decimal floating point.

Intel has a decimal floating point library called BID, but it seems very x86-centric and difficult to port.


glitch wrote:
libm, a.k.a. "floating point", is standard in C; fixed point would be relatively non-standard (unless you're talking about the latest ISO C spec)
AFAIR some freak implemented that standard supplement some time ago for GCC.

Stealing Proteus doesn't make you an engineer.


There is a decimal (packed-BCD) floating point library for the '51 which comes with Intel's BASIC52 sources. Google for FP52.SRC. It would not be entirely impossible to port it to AVR, if you are proficient enough in both assembly languages.

JW


The previous IBM Hursley research project now lives at:

http://speleotrove.com/decimal/

(not sure how big this is though!)


I worked on porting DFP tonight and so far I've been successful. I'm doing a straight port from Java to C on my Mac first just to make sure the algorithms work correctly, but I run it through avr-gcc from time to time to get a feel for the code size.

So far I've ported the four arithmetic functions, comparisons, negation, and rounding. The code works very well but it's extremely inefficient and huge at this point.

Based on what I've seen, the functions will probably be far too complex and slow for real-time applications, but I'm going to use it to make a 12-digit scientific calculator, probably with a 'mega328P.


autorelease wrote:
I'm going to use it to make a 12-digit scientific calculator, probably with a 'mega328P.
May I recommend the mega644P instead? You're going to need as much flash as possible to pull off a reasonable calculator.

In fact, I would have pushed you toward the mega1281 or mega2561.

Another site you may want to visit is the DIY Calculator which gives all kinds of hints on math and such. It's not AVR oriented, but the ideas should port well.

Stu

Engineering seems to boil down to: Cheap. Fast. Good. Choose two. Sometimes choose only one.

Newbie? Be sure to read the thread Newbie? Start here!


If packed BCD is used (as my TI-85 does, for example), multiplies are nothing more than a table lookup if you don't wish to actually multiply. For that you would need a 10x10 array, but to avoid a multiply by 10 when indexing it, you'd make it 16x10 and index by shifting by 4. That shift could be done one bit at a time, four times, but there may be an asm opcode to swap the nibbles in a byte.
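Roughly like this, in C (just a sketch of the idea; on an AVR the table would of course live in flash with PROGMEM, and the names are made up):

#include <stdint.h>

/* 16x10 table of packed-BCD digit products, indexed as (a << 4) | b so no
   multiply-by-10 is needed; the row stride is 16 to match the shift. */
static uint8_t bcd_mul_table[16 * 10];

static void bcd_mul_table_init(void)
{
    for (uint8_t a = 0; a < 10; a++)
        for (uint8_t b = 0; b < 10; b++) {
            uint8_t p = a * b;                          /* 0..81 */
            bcd_mul_table[(a << 4) | b] =
                (uint8_t)(((p / 10) << 4) | (p % 10));  /* packed BCD */
        }
}

/* Product of two BCD digits as a packed-BCD byte, e.g. 7*8 -> 0x56. */
static inline uint8_t bcd_digit_mul(uint8_t a, uint8_t b)
{
    return bcd_mul_table[(a << 4) | b];   /* shift by 4 instead of *10 */
}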


Quote:

Another site you may want to visit is the DIY Calculator

Oh! Very interesting! The book "How Computers Do Math" has one author named Clive "Max" Maxfield. I have his "Bebop to the Boolean Boogie". I highly recommend that book (although it is not on the subject of this thread) and based on that I also recommend "Max" as a writer. Knowledge and fun at the same time!


"Some questions have no answers."[C Baird] "There comes a point where the spoon-feeding has to stop and the independent thinking has to start." [C Lawson] "There are always ways to disagree, without being disagreeable."[E Weddington] "Words represent concepts. Use the wrong words, communicate the wrong concept." [J Morin] "Persistence only goes so far if you set yourself up for failure." [Kartman]


Quote:

You're going to need as much flash as possible to pull off a reasonable calculator.

??? Perhaps we need to explore what you consider "reasonable". Many commercial calculators, including "scientific" types, were (are?) done with simple 8-bit micros.

[We're getting back to the Intel 4004 roots...]

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


theusch wrote:
Quote:

You're going to need as much flash as possible to pull off a reasonable calculator.

??? Perhaps we need to explore what you consider "reasonable". Many commercial calculators, including "scientific" types, were (are?) done with simple 8-bit micros.

[We're getting back to the Intel 4004 roots...]


... which certainly could not address anywhere near 32 kB of program memory, and did not execute one instruction per 60 nanoseconds; yet it easily made it into a "reasonable" calculator and more than just that.

People get incredibly lazy these days. An arithmetic library is exactly the sort of task that MUST be written in asm, except for some really amateurish (or "software science school") undertaking.

And, Jepael, yes, the AVRs DO have a SWAP (swap nibbles) instruction.

JW


theusch wrote:
Quote:

You're going to need as much flash as possible to pull off a reasonable calculator.

??? Perhaps we need to explore what you consider "reasonable". Many commercial calculators, including "scientific" types, were (are?) done with simple 8-bit micros.

[We're getting back to the Intel 4004 roots...]

Lee

Most actually used 4-bit chips. IIRC, the HP41C had two 4-bit CPUs.

Leon

Leon Heller G1HSM


But calculators use special processors that are tailored to calculating in BCD and have special instructions to deal easily with them. The first scientific calculator, the HP-35, has a very interesting architecture: everything is done serially, e.g. the ALU can work on only one bit at a time :) and the code that makes it all go is just 768 10-bit words ;)

The HP-28S has 128K of code, and it is quite a complicated machine.


jayjay1974 wrote:
But calculators use special processors that are tailored to calculating in BCD and have special instructions to deal easily with them.
Oh, really?

Believe it or not, the AVR core DOES have special features just to allow for comfortable (packed) BCD arithmetic... It might not be as comfortable as an explicit or implicit decimal adjust, but it's not THAT complicated either...
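Roughly, the per-byte add with decimal adjust works like this (a plain C sketch of the general technique, not AVR code; in asm the half-carry (H) flag and the BRHS/BRHC branches help with the low-nibble test below):

#include <stdint.h>

/* Add two packed-BCD bytes (two decimal digits each); *carry is 0 or 1 on
   entry and holds the decimal carry-out on return. Name is made up. */
static uint8_t bcd_add8(uint8_t a, uint8_t b, uint8_t *carry)
{
    uint16_t sum = (uint16_t)a + b + *carry;

    if (((a & 0x0F) + (b & 0x0F) + *carry) > 9)  /* low digit passed 9   */
        sum += 0x06;                             /* decimal-adjust it    */
    if (sum > 0x99)                              /* high digit passed 9  */
        sum += 0x60;

    *carry = (sum > 0xFF);                       /* carry into next pair */
    return (uint8_t)sum;
}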

JW


Maybe this site is interesting for the algorithms used.

http://www.jacques-laporte.org/HP%2035%20Saga.htm


wek wrote:
People get incredibly lazy these days. An arithmetic library is exactly the sort of task that MUST be written in asm, except for some really amateurish (or "software science school") undertaking.

I agree. The library I'm looking at was written in *shudder* Java. Very inefficient Java.

I guess the only upside is that the code is easy to understand and porting to C isn't too hard. However, I'm going to rewrite significant portions of it in assembler afterward.

Jacques' page about the HP35 is extremely interesting. I'm sure I'll use some of the techniques described there to replace the more naive parts of the Java implementation.


Most processors should have opcodes dealing with BCD.

My TI-85 has an ASIC with a Z80 CPU core built in, and as far as I know, there are no special BCD instructions added, just the ones that a standard Z80 already has.


Is the calculator going to be RPN? And don't forget to check this link :) It uses an MCU from the company whose name cannot be mentioned here, but the code is in C.


jayjay1974 wrote:
Is the calculator going to be RPN? And don't forget to check this link :) It uses an MCU from the company whose name cannot be mentioned here, but the code is in C.

I looked at the code, and they just use doubles and libm. (cheaters!) An interesting project though. It would be nice in my project to switch between standard and RPN input, but I haven't given that much thought.

If we had 8-byte doubles in AVR-GCC, I wouldn't have to write a BCD math library. 4-byte floats are definitely unsuitable for a scientific calculator--pi to 11 decimal places in a float is 3.14159274101!
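(You can check that for yourself on a PC:)

#include <stdio.h>

/* A 32-bit float carries only about 7 significant decimal digits. */
int main(void)
{
    float pi = 3.14159265358979f;  /* nearest float to pi             */
    printf("%.11f\n", pi);         /* prints 3.14159274101 (IEEE-754) */
    return 0;
}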


Yes, I know the HP calculators used 4 bit processors (I worked for HP in their IC division designing ICs so I'm somewhat (key word) familiar with their processors). Yes, I know they had less ROM to use than the mega324P has (actually, most of the later calculators had far more and IIRC were 8-bit processors - could be wrong on the 8-bit stuff).

They were also programmed in assembly (let's not start the assembly-vs-C wars, but it made a difference back then) by lots of very smart folks who had months to do it. And the processors were custom and had all sorts of special instructions to help with the extended math format.

I am trying to make the OP as successful as possible, as soon as possible. He is one person, working by himself, and not worrying about flash space would make his life easier and the cost to upgrade is minimal.

Sheesh! :evil:

Stu

Engineering seems to boil down to: Cheap. Fast. Good. Choose two. Sometimes choose only one.

Newbie? Be sure to read the thread Newbie? Start here!


Thought I'd bump this to let people know that I've been working on a decimal floating point "calculator framework" written in assembly and it's coming along fairly well. I've implemented the basic four functions, square root, logarithm, and the inverse trig functions; the rest (exponential, power, nth root, trig functions, and hyperbolic functions) are in the works.

Speed is about what you'd expect (around 50 times slower than 32-bit float math) but it's accurate to an arbitrary number of digits. (It's currently set at 12) The current code size is about 2K and it shouldn't be much more than 4K when complete. I'll probably make it available after everything's done.


autorelease wrote:
Thought I'd bump this to let people know that I've been working on a decimal floating point "calculator framework" written in assembly and it's coming along fairly well. I've implemented the basic four functions, square root, logarithm, and the inverse trig functions; the rest (exponential, power, nth root, trig functions, and hyperbolic functions) are in the works.

I am truly impressed. Good job.

autorelease wrote:
Speed is about what you'd expect (around 50 times slower than 32-bit float math) but it's accurate to an arbitrary number of digits.

Are you using packed BCD? If so, wouldn't using unpacked BCD be an option to speed things up at some RAM cost?

Do you mind sharing a snippet of it at this stage of things - maybe the plus - so we can chew on it? ;-)

JW


What method did you use for the root function?


wek wrote:
Are you using packed BCD? If so, wouldn't using unpacked BCD be an option to speed things up at some RAM cost?

I am using unpacked BCD. Constants (from log and arctangent tables) are stored packed in ROM and are unpacked when needed. The main slowdown is probably due to the large amounts of copying, shifting, and iteration required.

The math routines are written in assembly but can be called from C. To simplify the assembly code, the functions don't operate on arbitrary variables. Instead, operands are first loaded into static intermediate "registers." For example, to divide 123.4 by -5.678 in C, you'd perform the following:

// DFPU is a macro for creating a decimal floating-point struct
dfp_unpacked arg1 = DFPU(POS 1,2,3,4,0,0,0,0,0,0,0,0 EXP 3);
dfp_unpacked arg2 = DFPU(NEG 5,6,7,8,0,0,0,0,0,0,0,0 EXP 1);
reg_load(REG_B, &arg1); // Load 123.4 into register B
reg_load(REG_C, &arg2); // Load -5.678 into register C
calc_divide();          // Computes B/C and stores result in register A

Having the operands in known locations in memory greatly reduces the number of indirect load/stores in the math routines. Of course, it would be possible to abstract the details of the intermediate registers away and write functions that take dfp_unpacked structs as arguments. However, in its current state it fits the "scientific calculator" paradigm very well.

Here are a couple assembly snippets. The first is the private "add" subroutine, used to add the mantissas of positive numbers with their decimal points aligned:

; mantissa add routine
; adds X to Z, both should be aligned and normalized
; Z and X should point to the start of the mantissa
; r22 should contain the number of digits to add
dfp_add_mantissas_x_z:
    clc                     ;make sure carry is clear
.dfp_add_mantissas_loop:
    ld r24,Z                ;get digit of Z
    ld r25,X+               ;get digit of X
    adc r25,r24             ;add with carry from previous digit
    cpi r25,10              ;did we overflow?
    clc
    brmi .dfp_add_mantissas_st
    ldi r24,246             ;if so, wrap around to valid digit and set carry
    add r25,r24             ;carry is now set
.dfp_add_mantissas_st:
    st Z+,r25               ;store new digit sum, advancing Z
    dec r22
    brne .dfp_add_mantissas_loop
    ret

The actual plus routine is more complicated, since it has to deal with unaligned and negative operands. However, this snippet is called often in many other functions that perform repeated addition.

Here's the common logarithm function, which performs ln(x)/ln(10). First, the natural log of the B register is taken and returned in register A. A is copied to B (the dividend register) and ln(10) is loaded from program space into C (the divisor register). The division routine then stores the quotient of B/C in A.

; void calc_log10()
; compute base 10 logarithm of B and store it in A
.global calc_log10
calc_log10:
;first take log(B) and save it in B
    rcall calc_log
    ldi16 Z,B
    ldi16 X,A
    rcall dfp_copy_x_to_z
;divide log(B) by log(10)
    ldi16 X,C
    ldi16 Z,LOG_10
    rcall dfp_load_constant_into_x ;loads mantissa of ln(10)
    ldi r22,1
    sts C+exp,r22           ;set exponent of ln(10) to 1
    rcall calc_divide
    ret

jayjay1974: Square root is computed with a method from a 1962 paper called "Pseudo Division and Pseudo Multiplication Processes." It describes how to calculate all the elementary functions with only adds, shifts, and lookups. These are the same algorithms that were used in the HP-35.

It might not be tiny or blazing fast; it's mainly an exercise for me to practice AVR assembly and learn the techniques used by the first calculators. However, it actually seems to be working, which is a nice surprise :)


Quote:

I am truly impressed.

Me too - do you intend to publish this when it's "complete" ?


Ah, you used CORDIC :) I used the simple Babylonian method; it works for most numbers, except for 144, where it won't return 12 :)
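(For reference, the iteration itself is just this, sketched in plain C with doubles rather than my fixed-point code:)

/* Babylonian / Newton iteration for the square root: keep averaging the
   guess with x divided by the guess until it settles. */
double babylonian_sqrt(double x)
{
    double guess = (x > 1.0) ? x / 2.0 : 1.0;  /* crude starting guess    */
    for (int i = 0; i < 40; i++)               /* converges quadratically */
        guess = 0.5 * (guess + x / guess);
    return guess;
}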

I'm actually doing more or less the same as you :) Years ago I reimplemented the hardware of the HP-45 in an FPGA from the descriptions in the patents, as accurately as possible. The original ROM source code is also available, plus a Java simulator. But I never got round to actually finishing it. After all, it's pretty pointless of course :D

The challenge I posted in this thread is the one I've taken up myself. Yes, pointless, useless, but fun.


autorelease wrote:
wek wrote:
Are you using packed BCD? If so, wouldn't using unpacked BCD be an option to speed things up at some RAM cost?

I am using unpacked BCD. Constants (from log and arctangent tables) are stored packed in ROM and are unpacked when needed. The main slowdown is probably due to the large amounts of copying, shifting, and iteration required.


I see, thanks.

You might consider maintaining an "accumulator" in the registers, reducing the SRAM moves to and fro. I see that it constrains the number of digits, but IMHO up to 16-20 it should be quite viable.

Quote:

; mantissa add routine
; adds X to Z, both should be aligned and normalized
; Z and X should point to the start of the mantissa
; r22 should contain the number of digits to add
dfp_add_mantissas_x_z:
    clc                     ;make sure carry is clear
.dfp_add_mantissas_loop:
    ld r24,Z                ;get digit of Z
    ld r25,X+               ;get digit of X
    adc r25,r24             ;add with carry from previous digit
    cpi r25,10              ;did we overflow?
    clc
    brmi .dfp_add_mantissas_st
    ldi r24,246             ;if so, wrap around to valid digit and set carry
    add r25,r24             ;carry is now set
.dfp_add_mantissas_st:
    st Z+,r25               ;store new digit sum, advancing Z
    dec r22
    brne .dfp_add_mantissas_loop
    ret


It pays off to optimise this often-used function. Perhaps the following might be slightly better (although untested):

dfp_add_mantissas_x_z:
    clc                     ;make sure carry is clear
.dfp_add_mantissas_loop:
    ld r24,Z                ;get digit of Z
    ld r25,X+               ;get digit of X
    adc r25,r24             ;add with carry from previous digit
    ldi r24,246
    add r24,r25             ;carry is set only if r25 > 9 (r24 then holds r25-10)
    sbrs r24,7              ;bit 7 clear means the digit needed adjusting
    mov  r25,r24            ;take the adjusted digit (carry stays set)
    st Z+,r25               ;store new digit sum, advancing Z
    dec r22
    brne .dfp_add_mantissas_loop
    ret    

JW


autorelease wrote:
If we had 8-byte doubles in AVR-GCC, I wouldn't have to write a BCD math library. 4-byte floats are definitely unsuitable for a scientific calculator--pi to 11 decimal places in a float is 3.14159274101!
You might try something like
struct Number {
    int32_t hi, lo;  // same sign, lo in 1-10**9..10**9-1
    int16_t ex;
} ;  // (hi*10**9 + lo)*10**ex

Some intermediate results will need to be int64_t.
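For example, multiplying two lo limbs already needs one (names here are just for illustration):

#include <stdint.h>

/* Each limb is below 10**9, so a limb product can reach ~10**18 and must
   be computed in 64 bits, then split back into hi/lo parts. */
static void mul_lo_limbs(int32_t a_lo, int32_t b_lo,
                         int32_t *hi, int32_t *lo)
{
    int64_t p = (int64_t)a_lo * b_lo;
    *hi = (int32_t)(p / 1000000000LL);   /* carries into the hi limb */
    *lo = (int32_t)(p % 1000000000LL);   /* remainder stays in lo    */
}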

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


There are commercial calculators that use binary floating point, like the latest toy calculators from HP (the HP-30s and HP-9g).

This might interest you: http://voidware.com/binarycalculators.htm

The main page is at voidware.com, and the guy has also written a BCD library ;) but it's C++, IIRC.


I also am impressed. Way cool!

I also hope to see the source when you get to a stopping point. (Code like this never seems to be "done" - we just decide that we've learned enough and move on. :lol:)

HMMmm... Add a keyboard to a Butterfly and you get a scientific calculator... Yeah, that's the ticket! :lol:

Stu

Engineering seems to boil down to: Cheap. Fast. Good. Choose two. Sometimes choose only one.

Newbie? Be sure to read the thread Newbie? Start here!


stu_san wrote:
Add a keyboard to a Butterfly and you get a scientific calculator... Yeah, that's the ticket! :lol:

I was thinking more along the lines of a big chunky keypad and a Nixie tube display, like a 1970s-era desk calculator. Certainly more fun than buying one used off eBay. :P Ted Johnson made one (http://users.rcn.com/ted.johnson/c1.htm) with a Panaplex display, but it was only four-function.

I'd like to publish it when it's complete; however, there's lots more work left to do, so everyone will have to be patient. At this point I haven't even tested anything on an actual chip yet! :)


autorelease wrote:
stu_san wrote:
Add a keyboard to a Butterfly and you get a scientific calculator... Yeah, that's the ticket! :lol:

I was thinking more along the lines of a big chunky keypad and a Nixie tube display, like a 1970s-era desk calculator.

8) Aw, cool! Maybe even one of those old adding machine interfaces with the ranks of number buttons and the hand crank on the side.

Have fun!

Stu

Engineering seems to boil down to: Cheap. Fast. Good. Choose two. Sometimes choose only one.

Newbie? Be sure to read the thread Newbie? Start here!


Quote:
It pays off to optimise this often used function. Perhaps the following might be slightly better

Slight is the operative word here. It saves one clock when the answer overflows. Since it will overflow 45% of the time, that gives you an average saving of 0.45 clocks. Hardly earth shaking.

Regards,
Steve A.

The Board helps those that help themselves.


Koshchi wrote:
Quote:
It pays off to optimise this often used function. Perhaps the following might be slightly better

Slight is the operative word here. It saves one clock when the answer overflows. Since it will overflow 45% of the time, that gives you an average saving of 0.45 clocks. Hardly earth shaking.
No it's not. But there is no tradeoff in doing so; and as it is a library with, presumably, multiple uses in the future, it was worth the 5 minutes of staring at it.

With a - usually acceptable - tradeoff of spending a few more bytes of code memory, the branch can use brcs and the two paths can then be split completely, saving one clock in both branches. That's 1 clock off the 14/15-clock loop, making the whole routine roughly 5% faster. Not a big feat either, but again, worth the 10 extra minutes.

It can hardly be made much faster than that. The (lack of) support for packed BCD appears to make that route (potentially adding two digits at a time) prohibitive; and, as autorelease aptly put it, the moves to and from memory eat up most of the time. Note that my first recommendation was along those lines; but that influences the general approach to the library as a whole, so there's no point in posting code snippets for it.

JW


Quote:
it was worth the 5 minutes of staring at it.

I disagree. The speed needed here is within human perception. In order for this change to make any difference in that, the routine would have to be run about a million times (assuming an F_CPU of 8MHz) before making any difference to the user. Since in reality this routine would be run at most a couple of dozen times in a calculation, any optimization of it would be insignificant and therefore not worth the effort.

The key to optimization is knowing when and where to apply it. Since this has not even been put on a chip, it is not really even known whether any optimization is needed at all. And when you do know that you need optimization, you analyze the code to see where optimizations are needed most. Looking for trivial optimizations at this point is totally pointless. The only optimizations that should be applied at this stage of a project are algorithmic ones.

Regards,
Steve A.

The Board helps those that help themselves.


autorelease wrote:

If we had 8-byte doubles in AVR-GCC, I wouldn't have to write a BCD math library. 4-byte floats are definitely unsuitable for a scientific calculator--pi to 11 decimal places in a float is 3.14159274101!

Shameless plug - http://code.google.com/p/calc152 - RPN programmable calculator with 45-bit mantissa floating point.


Koshchi wrote:
The speed needed here is within human perception. In order for this change to make any difference in that, the routine would have to be run about a million times (assuming an F_CPU of 8MHz) before making any difference to the user.

You can't know that.
For the "12-digit scientific calculator", you might be right (we haven't seen how mul is done, which can make a difference). But, if autorelease would release the library for general use, it could make it to an application where this might make sense.

Koshchi wrote:
The key to optimization is knowing when and where to apply it.

This is the approach when you do standard commercial programming (a.k.a. the boring stuff). Then you weigh your optimisation decisions in millicents.

A library, and hobby work, are the two cases where this differs.

JW


Hello again,

The routines are pretty much complete. The library includes the following functions, which operate on 12-digit numbers with two guard digits:
- zero, copy, negate, absolute value, integer part, fractional part, sign
- add, subtract, multiply, divide, square, reciprocal
- square root, nth root, power, log, log base 10, exponential
- sin, cos, tan, asin, acos, atan
- sinh, cosh, tanh, asinh, acosh, atanh

There is also a status byte that indicates if the last operation caused an error (divide by zero, out of domain, etc.)

The code weighs in at just over 3K. (This includes a 232-byte constant table in ROM; without the table, it's under 3K.)

90 bytes of RAM are used for five 18-byte, 16-digit unpacked BCD registers: A, B, C, M, and T. A is the accumulator/result register. B and C are operand registers. M is used during the CORDIC operations. T is a temporary register used by a couple of the higher-level math functions.
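(For the curious, the register layout amounts to something like this; the field names are placeholders, not the code I'll release:)

#include <stdint.h>

/* One 18-byte unpacked-BCD register: 16 digits plus sign and exponent. */
typedef struct {
    uint8_t sign;        /* 0 = positive, 1 = negative            */
    int8_t  exp;         /* decimal exponent                      */
    uint8_t digit[16];   /* one decimal digit per byte (unpacked) */
} dfp_reg;

static dfp_reg A, B, C, M, T;  /* 5 x 18 = 90 bytes of SRAM */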

I'll probably release the code in the future, after I build some hardware and get this running on an actual chip.


Cool :)

Is that 14 fractional digits + 2 exponent digits?


I came across this site, and I could not help wondering if this guy has not overcomplicated things a little.

http://www.lupinesystems.com/calc/lcdcalc.htm