Increasing the precision will not make any difference.

Values of the float type are a finite-precision subset of the rational
numbers, encoded as 64-bit binary floating-point values consisting of a sign
bit, an 11-bit exponent, and a 52-bit significand field (an implicit leading
bit brings the effective precision to 53 bits), as defined by the IEEE 754
floating-point standard.
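
If you want to see that layout for yourself, irb makes it easy.  Here's a
rough sketch (Array#pack's 'G' directive gives a big-endian double; output
trimmed for readability):

  bits = [1.5].pack('G').unpack('B*').first
  bits.length   #=> 64
  bits[0, 1]    #=> "0"             sign
  bits[1, 11]   #=> "01111111111"   biased exponent
  bits[12, 52]  #=> "1000000..."    fraction field; an implicit leading 1 precedes it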

There are 53 bits of binary precision. When converted to an equivalent
decimal or string representation, that works out to roughly 15-17 significant
decimal digits (17 digits are enough to round-trip any value).
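
Ruby exposes those limits as constants on Float, so you can check them at an
irb prompt (a quick sketch; exact output formatting can vary by platform and
Ruby version):

  Float::MANT_DIG   #=> 53   bits in the significand
  Float::DIG        #=> 15   decimal digits guaranteed to round-trip
  "%.17g" % 0.1     #=> "0.10000000000000001"  17 digits pin down the exact double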

Integral values in the range -9,007,199,254,740,992 through
+9,007,199,254,740,992 (-2^53 to +2^53), along with all of their power-of-two
multiples that fall within the float type's representable range, are
represented exactly.
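
You can watch exactness drop off right at 2^53.  A small sketch:

  (2**53).to_f == (2**53 + 1).to_f   #=> true, the odd neighbor isn't representable
  (2**53).to_f == (2**53 + 2).to_f   #=> false, but the next even integer is
  (2**53 + 2).to_f.to_i              #=> 9007199254740994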

Because floats are rational numbers, irrational numbers such as pi and e,
as well as rational numbers whose binary expansions don't terminate (such as
1/3, 1/10, etc.), must be approximated.  Since the significand of a float is
binary, each successive bit of the significand represents a fractional power
of two (e.g. 1/2, 1/4, 1/8, 1/16, 1/32, and so on).
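
So a value that happens to be a short sum of fractional powers of two is
stored exactly, while 1/10 never is, no matter how many terms of its binary
expansion you take.  A sketch:

  0.5 + 0.25 + 0.125                    #=> 0.875, exact
  # the binary expansion of 1/10 starts 1/16 + 1/32 + 1/256 + 1/512 + ...
  1.0/16 + 1.0/32                       #=> 0.09375
  1.0/16 + 1.0/32 + 1.0/256 + 1.0/512   #=> 0.099609375
  # no finite number of such terms ever lands exactly on 1/10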

In base 10, some rational numbers cannot be represented exactly because the
expansion is infinite.  For example, the fraction 1/3 in base 10 is
0.333333333..., an infinite string of digits.  No matter how much you
increase the precision, the result will never be exact.
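
You can see the same thing with bigdecimal, where you choose the decimal
precision yourself (a sketch; BigDecimal#div takes the number of significant
digits as its second argument):

  require 'bigdecimal'
  third = BigDecimal("1").div(BigDecimal("3"), 50)   # 1/3 to 50 significant digits
  third * 3 == 1   #=> false -- you get fifty 9s, and more precision just adds more 9s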

Similarly, in binary some rational numbers have an infinite expansion and
cannot be represented exactly. For example, when represented in binary, the
base 10 fraction 1/10 is 0.0001100110011001100110011..., an infinite number
of digits.  No matter how much you increase the precision, the result will
never be exact.
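
Asking printf for enough digits shows the value the double actually holds
(a sketch; seeing the full expansion assumes a printf that doesn't truncate
long conversions, as with glibc):

  "%.55f" % 0.1
  #=> "0.1000000000000000055511151231257827021181583404541015625"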

The set of values with terminating base 10 expansions is not the /same/ as
the set with terminating base 2 expansions: every terminating binary fraction
also terminates in decimal, but many terminating decimal fractions (0.1, 2.4)
do not terminate in binary.  So converting a decimal literal to a binary
float will often introduce a small error, because the conversion cannot be
made exact.
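
That conversion error is what the familiar examples are showing (a sketch;
the exact spelling of the results varies between Ruby versions):

  0.1 + 0.2 == 0.3   #=> false
  0.1 + 0.2 - 0.3    #=> roughly 5.55e-17
  "%.17g" % 2.4      #=> "2.3999999999999999", the number in this thread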

For more information, see "What Every Computer Scientist Should Know About
Floating-Point Arithmetic"

  http://docs.sun.com/source/806-3568/ncg_goldberg.html

regards,
gus


On 4/5/09 3:57 AM, "Charles Oliver Nutter" <charles.nutter / sun.com> wrote:

> brian ford wrote:
>> So, this decision takes a marginal case for which a perfectly good
>> mechanism already exists and promotes it to the common case. But
>> that's not all. The consequence for the common case is that 2.4 is
>> unnecessarily and uselessly echoed back to me as 2.3999999999999999.
>> 
>> It is very poor interface design to promote a marginal case above a
>> common case. There is nothing that this change in representation makes
>> better in the common case. It makes the common case hideous.
>> 
>> Floats are what they are. Use them as you will. Ruby used to have
>> nice, friendly representations of floats for humans. Nothing gained,
>> much lost. The decision should be reversed.
> 
> Except that it was all a lie.
> 
> If a float can't be represented accurately, Ruby should not mask that,
> because it further perpetuates the mistaken belief that floats are
> accurate in Ruby. Treating 2.39999999999999 as 2.4 accomplishes exactly
> one thing: it hides the true nature of floats.
> 
> I can appreciate the desire to have arbitrary-precision floating-point
> math as the default in Ruby, but that's not the case right now. What we
> have in Ruby 1.8 and 1.9 before this change is the horrible middle
> ground of imprecise floats *pretending* to be precise. And we have run
> into real-world bugs where JRuby's original lack of float-masking caused
> application failures; people believed they could expect 2.4 in all cases
> instead of 2.399999999999999. We should not have had to make our floats lie.
> 
> I would say either floats should always be arbitrary precision, or they
> should be honest about their imprecision. Anything else is doing the
> developer a disservice.
>