Hello Marc-Andre,

On 19.04.2010 00:14, Marc-Andre Lafortune wrote:
> I hope my dissent will not sound too harsh.

Not at all.

> Arguing that 0.1.to_r should be 3602879701896397/36028797018963968 is
> the same as arguing that 0.1.to_s should outputs these 55 decimals.

Right, that's my point. 0.1 as a Float has a precise meaning in binary as well as in decimal, so Float#to_s should keep those 55 decimals. That's why
I said that a Float#to_nearest_s (choose a better name, or make it an option to Float#to_s) should be created that does "what everyone expects" to_s
to do.

The same applies to Float#to_r. It should be as precise as possible, which it is currently. The function that does "what everyone expects" should be
Float#to_nearest_r, in the same way as for the string representation.
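To make the distinction concrete, here is a minimal Ruby sketch (Float#to_nearest_r above is a proposed, hypothetical method; the behavior shown is
that of the existing Float#to_r):

```ruby
# Float#to_r preserves the exact binary value of the IEEE 754 double:
r = 0.1.to_r
puts r                                                   # => 3602879701896397/36028797018963968
puts r == Rational(3602879701896397, 36028797018963968)  # true

# Converting back loses nothing -- the Float and its Rational denote
# exactly the same value:
puts r.to_f == 0.1                                       # true
```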

> For these reasons, the set S is of little interest to anybody.

The problem is that most people think that floating-point arithmetic is precise, which it is only for the cases I described in my last mail.
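As a minimal illustration of the kind of exact cases I mean (this is my sketch, not an exhaustive characterization): arithmetic on values that are
themselves exact binary fractions, well within range, involves no rounding at all:

```ruby
# Dyadic fractions within the 53-bit significand are represented exactly,
# and their sums incur no rounding error:
puts 0.5 + 0.25 == 0.75   # true: all three are exact binary fractions
puts 1.0 + 2.0 == 3.0     # true: small integers are exact as well

# As soon as a value like 1/10 (not a binary fraction) enters,
# exactness is lost:
puts 0.1 + 0.2 == 0.3     # false
```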

> What *is* interesting is the set of real numbers. Floating numbers are
> used to represent them *approximately*. To add to my voice, here are a
> couple of excerpts from the first links that come up on google
> (highlight mine):
> 
> "In computing, floating point describes a system for representing
> numbers that would be too large or too small to be represented as
> integers. Numbers are in general represented *approximately* to a
> fixed number of significant digits and scaled using an exponent."
> http://en.wikipedia.org/wiki/Floating_point
> 
> "Squeezing infinitely many real numbers into a finite number of bits
> requires an *approximate* representation.... Therefore the result of a
> floating-point calculation must often be rounded in order to fit back
> into its finite representation. This rounding error is the
> characteristic feature of floating-point computation."  source:
> http://docs.sun.com/source/806-3568/ncg_goldberg.html

That's where the problem starts. Everyone thinks he can do exact math on a computer and that the only problem is the approximation of the binary
representation of a real number, characterized by ±EPSILON/2. No, the _real_ issue is the approximation of calculations, which not only accumulates
error on the order of EPSILON with each operation but can also shift that error to any order of magnitude. Think of something trivial like
(1E-40+0.1-0.1) returning 0.0 vs. (1E-40+0.3-0.2-0.1) returning -2.7E-17. There is no real math in floats.
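Both expressions can be checked directly in irb:

```ruby
# The tiny addend 1e-40 is far below the precision of 0.1, so it is
# absorbed without a trace, and the final subtraction cancels exactly:
puts(1e-40 + 0.1 - 0.1)        # => 0.0

# Reordering the "same" arithmetic shifts the rounding error up to
# around 1e-17 -- many orders of magnitude above the original 1e-40:
puts(1e-40 + 0.3 - 0.2 - 0.1)  # about -2.8e-17, not 1e-40
```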

One could go as far as saying that the availability of math-like operators and math-like precedence in a programming language reinforces the
expectation of real-number-like behavior and precision. But this is slightly off-topic, and in fact method calls for simple math do readability no
good. Math-like operator precedence is a different matter and, IMHO, completely unnecessary in a programming language.

> Note that typing 0.1 in Ruby is a "calculation" which consists in
> finding the member of S closest to 1/10.
> 
> Your final question was: how do I know that the value someone is
> talking about is 0.1 and not
> 0.1000000000000000055511151231257827021181583404541015625 (or
> equivalently 3602879701896397/36028797018963968) ?
> 
> I call it common sense.

It looks so obvious when we are talking about 0.1. If we talk about any other number with 80 digits, my point may become clearer.

What do you do if it's not 0.1 a.k.a. 0.1000000000000000055511151231257827021181583404541015625 but
0.09999999999999997779553950749686919152736663818359375 (the result of (0.3-0.2))? What's the difference for your argument? Now we will not get back
the expected nearest 0.1 anyway without applying the actually required/expected rounding constraints.
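A sketch of what such a rounding constraint looks like in Ruby. Float#rationalize, which takes an explicit tolerance, is one possible realization of
the behavior I am describing (whether it was available at the time of writing is beside the point):

```ruby
x = 0.3 - 0.2
puts x == 0.1   # false: the two exact binary values differ
puts x.to_r     # the full-precision rational, not 1/10

# Only by stating an acceptable error do we get the "expected" 1/10 back:
puts x.rationalize(Rational(1, 1_000))  # => 1/10
```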

If it's just about 0.1.to_r, i.e. converting a decimal literal to a Rational, use String#to_r.
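String#to_r parses the decimal literal itself, so the binary approximation never enters the picture:

```ruby
# Parsing the literal directly yields the intended rational:
puts "0.1".to_r                      # => 1/10
puts "0.1".to_r == Rational(1, 10)   # true

# Contrast with going through the Float first:
puts 0.1.to_r == Rational(1, 10)     # false
```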

Bottom line: Floats are not exact in terms of math, but they are exact in terms of their computer-level implementation as defined by IEEE 754. We
should respect the latter and help people deal with the former.

– Matthias