On Nov 11, 2005, at 22:32, jwesley wrote:

> Is there a standard way to get a more precise floating point number?

Short answer: use BigDecimal (bigdecimal/rdoc/index.html).
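A minimal sketch of the difference, assuming nothing beyond the standard library (the variable names and the sample integer are mine):

```ruby
require 'bigdecimal'

n = 123456789012345678           # well above 2**53, so a Float can't hold it exactly

as_float  = n.to_f.to_i          # round-trips through a 64-bit double
as_bigdec = BigDecimal(n.to_s)   # BigDecimal keeps every decimal digit

puts as_float                    # a nearby representable value, not n
puts as_bigdec.to_i == n         # true
```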

Longer rambling answer, and the reason it's not a bug in anything,
just a design choice: IEEE 754 double-precision (64-bit) floating
point numbers use 1 bit for the sign, 11 bits for the exponent, and
52 bits for the stored significand.  The implicit leading bit brings
that to 53 significant bits, so every integer up to 2^53 is exact,
and the smallest positive integer that can't be exactly represented
as a float is 2^53 + 1.

123456789012345678 (the number from the test)
9007199254740993 (2^53 + 1)

So the number used for the test is fairly obviously larger than that
value, and there's going to be some sort of error if you round-trip
it through a float and back.  For instance:

irb(main):008:0> (2**53 + 1)
=> 9007199254740993
irb(main):009:0> (2**53 + 1).to_f.to_i
=> 9007199254740992

And look at *this* consequence of the inaccuracy:

irb(main):029:0> i = (2**53 - 1).to_f
=> 9.00719925474099e+15
irb(main):030:0> while i < (2**53 + 2)
irb(main):031:1>   puts "#{i.to_i}"
irb(main):032:1>   i += 1.0
irb(main):033:1> end
9007199254740991
9007199254740992
9007199254740992
9007199254740992
...

Fun stuff, eh?
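Ruby exposes the layout through Float's constants, by the way, so the 53-bit cutoff is easy to check for yourself:

```ruby
puts Float::MANT_DIG                      # 53: significand bits, implicit bit included

# Every integer up to 2**53 survives a round-trip through a Float...
puts (2**53).to_f.to_i == 2**53           # true
# ...but 2**53 + 1 is the first one that doesn't.
puts (2**53 + 1).to_f.to_i == 2**53 + 1   # false
```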

It's also worth noting that, depending on the hardware instructions
used (x87's 80-bit extended-precision registers versus 64-bit SSE,
for instance), you can get different results for the same
calculation.  And since we don't program in assembly, you can get
different results depending on the compiler, or on compiler options,
while still being entirely consistent with the spec.

matthew smillie
