Hi!

* Holden Glova; 2003-06-24, 13:55 UTC:
> I am curious what people use to represent decimals with no
> rounding error?

Suppose you want to multiply a and b where

a = 2.718281828
b = 3.141592654

a and b can be written in this way:

a = 2.718281828 = 2718281828 * 1E-9
b = 3.141592654 = 3141592654 * 1E-9

Multiplication is done in that way:

a * b = 2.718281828 * 3.141592654
= 2718281828  * 3141592654 * 1E-18
= 8539734222346491512 * 1E-18
= 8.539734222346491512

Any computation that involves decimals can be done that way. It does
not work for pi or 1/3, but that's a different story.
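The scheme above translates directly into code. A minimal Ruby sketch, carrying each decimal as an integer mantissa plus a count of fractional digits (the pair representation is just my notation for illustration):

```ruby
# Each decimal is kept as [mantissa, fractional_digits]:
# 2.718281828 == 2718281828 * 1E-9 -> [2718281828, 9]
a = [2718281828, 9]
b = [3141592654, 9]

# Multiply the mantissas, add the digit counts -- both exact,
# since Ruby integers have arbitrary precision.
mantissa = a[0] * b[0]
digits   = a[1] + b[1]

# Put the decimal point 'digits' places from the right.
s = mantissa.to_s
puts s.insert(s.length - digits, ".")   # -> 8.539734222346491512
```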

When doing multiplications by hand you actually use the above
scheme:

2718281828 * 3141592654 (*)

8154845484
 2718281828
 10873127312
   2718281828
   13591409140
    24464536452
      5436563656
      16309690968
       13591409140
        10873127312
-------------------
8539734222346491512

(*) Actually the decimal points are present but they don't fit into
this ASCII art.

After the multiplication you count the digits that follow the
decimal point in each factor and add those counts. The sum gives the
position of the decimal point in the result, counted from the right.
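Going the other way, from a decimal string to the mantissa/digit-count pair, is just string splitting. A sketch (the helper name `to_scaled` is made up, not standard Ruby):

```ruby
# Parse "2.718281828" into [2718281828, 9]: drop the point,
# remember how many digits followed it.
def to_scaled(str)
  int, frac = str.split(".")
  frac ||= ""                       # handle plain integers like "42"
  [(int + frac).to_i, frac.length]
end

p to_scaled("2.718281828")   # -> [2718281828, 9]
p to_scaled("3.141592654")   # -> [3141592654, 9]
```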

If you want more detailed documentation on that kind of arithmetic,
all you have to do is find a good FORTH tutorial.

(^_^) 355.0/113.0 = 3.14159292...

When it comes to division, the problem is slightly more involved.
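A sketch of one way to handle it, using the 355/113 approximation from above: scale the dividend up by as many powers of ten as you want fractional digits, then do an integer division (which truncates; proper rounding would need one more step):

```ruby
prec = 9                       # desired fractional digits
num, den = 355, 113

# Scale first, then divide: integer division drops the rest.
q = (num * 10**prec) / den
s = q.to_s
puts s.insert(s.length - prec, ".")   # -> 3.141592920
```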

If a computer does computations that need to be precise to a certain
number of *base 10* digits, they *must* be integer computations.

Why is that? Because there are decimal numbers with a finite number
of digits that cannot be represented exactly as a binary number with
a finite number of digits. Proof:

ruby -e 'puts 0.1 * 0.1 - 0.01'
1.734723476e-18

The value may differ from platform to platform (especially if the
platform predates IEEE 754).
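The same check done on scaled integers comes out exact, which is the whole point: 0.1 is [1, 1] in the mantissa/digit-count form, so 0.1 * 0.1 becomes [1 * 1, 1 + 1], i.e. exactly 0.01:

```ruby
puts 0.1 * 0.1 == 0.01          # false -- binary floats round
puts [1 * 1, 1 + 1] == [1, 2]   # true  -- scaled integers do not
```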

Gis,

Josef 'Jupp' Schugt
