On Mon, Jan 30, 2012 at 06:01:02PM +0900, Florian Gilcher wrote:
> 
> On Jan 30, 2012, at 9:32 AM, Tony Arcieri wrote:
> 
> > On Mon, Jan 30, 2012 at 12:22 AM, Robert Klemme
> > <shortcutter / googlemail.com>wrote:
> > 
> >> It seems many people use the "floating point mess" without major
> >> issues.  So it cannot be as bad as you make it sound.
> >> 
> > 
> > Floating points are a great choice for approximating continuous values and
> > thus working on things which require both high performance and
> > approximating real-world data sources. This includes things like games,
> > non-gaming related 3D applications, perceptual media including audio and
> > video codecs, and everything involved in working with perceptual media on
> > computers such as non-linear editing, speech synthesis, and speech
> > recognition.
> 
> Let me add "statistics gathering" to the list...
> 
> > People don't often do these things in Ruby. I'd say they're uncommon use
> > cases.
> 
> ... and suddenly, you have a _very_ common use case. I have no client
> where it doesn't happen.
> 
> Graphing is also not uncommon.
> 
> > 
> > Something people do quite often in Ruby: deal with money. Money isn't
> > continuous, it's discrete. A decimal type is a much better choice for
> > dealing with money than a floating point.
> 
> Yes, but a dedicated money type that encodes the currency is also a much better
> choice. Also the standard in handling monetary values is not using a decimal
> representation anyways: you just encode the smallest value as an Integer.

It's easy to focus on a single tree and ignore the forest.

Dealing with money is not the only use case -- and it becomes a lot more
complicated when you have to deal with exchange rates, completely blowing
the "just use cents" solution out of the running as a simple expedient.
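
To illustrate (the exchange rate here is made up): integer cents are fine
within one currency, but a rate multiplication drops you straight back into
binary floating point, while Rational keeps the arithmetic exact until you
explicitly round once at the end.

```ruby
# Integer cents work fine within one currency:
price_cents = 1999                       # $19.99

# ...until an exchange-rate multiplication (rate is hypothetical)
# reintroduces binary floating point and fractional cents:
rate = 0.8432
price_cents * rate                       # ~1685.5568, a Float again

# Rational keeps the arithmetic exact until one explicit rounding step:
exact = price_cents * Rational(8432, 10_000)
exact.round                              # => 1686 whole cents
```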

Money is an example of the kind of use case where having something a bit
more accuracy-predictable than Floats, without imposing a lot of
syntactic overhead, is a good idea.  It is not the *only* example.
Answering "For example, there's dealing with money . . ." with a
solution that only works for money is missing the point of an example.
In fact, I'd say the biggest "use case" is probably the vast range of
circumstances where there is no singular, systematic requirement for
unsurprising arithmetic -- just the everyday case of casual division,
where the math is not the point of the program but is still expected to
meet some definition of predictable accuracy.  Consider:

1. averaging (mean) small numbers from large samples for a rating system

2. simulating dice rolling and subsequent math for roleplaying games

3. using irb as a "desktop" calculator

4. figuring out distances for trip planning

5. percentages in myriad circumstances, such as offering comparisons of
relatively small differences between different customer groups' behaviors
as just one small part of a larger reporting program (or, for roughly the
same usage pattern, a ten-year-old learning to program doing some
simplistic statistical work involving his or her friends' trends in
favorite ice cream flavors)
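
Even the first case can bite.  A quick sketch of summing ten ratings of
0.1 each -- using plain sequential addition via inject, since newer Rubys'
Array#sum compensates for float error and would hide the effect:

```ruby
# Ten user ratings of 0.1 each; the mean "should" be exactly 0.1.
ratings = Array.new(10, 0.1)

total = ratings.inject(:+)    # plain sequential addition
total == 1.0                  # => false (accumulated binary rounding error)

# The same computation in Rational stays exact:
exact = Array.new(10, Rational(1, 10)).inject(:+)
exact == 1                    # => true
```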

Note that none of these is as rigorously and pervasively dependent on
painstakingly defined standards of accuracy for sub-1.0 positive numbers
as the work of computer scientists writing dissertations or practitioners
in specialized professional fields, but they can still produce bugs based
solely on the use of floating point numbers with Ruby's Float comparison
capabilities, thus mandating either the use of much more verbose
libraries with much more finicky syntax or the (re)invention of alternate
comparison methods for the Float class.

Then, of course, there's what may be the biggest use-case of all: people
who are unaware of the inconsistencies of decimal math under the
IEEE-standard floating point implementation, because when they look at
the documentation for Ruby's Float class all they see is Float#== with
no alternative comparison methods.
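
For the record, the sort of alternate comparison method people end up
(re)inventing looks something like this -- a hypothetical Float#approx?,
not anything in Ruby's actual API, with an epsilon that has to be chosen
per application:

```ruby
# Hypothetical helper -- NOT part of Ruby's Float API; the epsilon
# default is an application-specific assumption, not a universal constant:
class Float
  def approx?(other, epsilon = 1e-9)
    (self - other).abs < epsilon
  end
end

(0.1 + 0.2) == 0.3        # => false -- the surprise under discussion
(0.1 + 0.2).approx?(0.3)  # => true
```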


> 
> If money is the reason against float, its the wrong reason. It may be a tempting
> error, but it only shows that the developer in question did not read a single 
> line about handling money in software - which is another common problem, but
> not one that you can actually fix by the choice of literals.

It's not the only reason, and it's not a reason "against float", either.

It's just a single reason for offering *something* other than the very
limited options we currently have -- options that seem designed on the
assumption that any case not well served by the unadorned IEEE-standard
floating point type is an exceedingly rare edge case -- and for offering
it in a manner that makes it as close to an equal "citizen" as we
reasonably can.  As things currently stand, it seems difficult to claim
(with a straight face) that math on decimal numbers between 0 and 1,
free of inaccuracies that are nontrivial for the average coder to
predict, even rises to the level of second-class "citizen".

Realistically, I think the IEEE-standard floating point type is itself a
fairly rare edge case compared to other options like "truncation at the
Nth decimal place" and "just get me close, without a difference of 0.1
between two potential inputs to a given expression unexpectedly swapping
the precedence of two items thanks solely to rounding mismatches between
binary and decimal numbers."
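
A sketch of both of those options as they'd look in Ruby today --
rounding before comparing as an ad hoc fix, or reaching for the stdlib's
BigDecimal to get decimal semantics from the start:

```ruby
require "bigdecimal"

sum     = 1.1 + 2.2   # a Float slightly above 3.3
literal = 3.3
sum == literal                        # => false; sorts after 3.3 unexpectedly

# "truncation at the Nth decimal place" before comparing:
sum.round(10) == literal.round(10)    # => true

# decimal arithmetic from the start avoids the mismatch entirely:
BigDecimal("1.1") + BigDecimal("2.2") == BigDecimal("3.3")  # => true
```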

-- 
Chad Perrin [ original content licensed OWL: http://owl.apotheon.org ]