On Fri, Jan 27, 2012 at 11:02:52PM +0900, Adam Prescott wrote:
> On Fri, Jan 27, 2012 at 03:05, Josh Cheek <josh.cheek / gmail.com> wrote:
> 
> > They are unsuitable for more uses than they are suitable for, and they
> > contradict
> > the idea that abstractions shouldn't leak implementation.
> >
> 
> I think I understand what you're getting at here, but it's potentially
> misleading. There's not an abstraction leak when you keep in mind that the
> framework you're in is that 1.1 is a float, and therefore has a certain
> representation within the machine and is subject to manipulations within
> some specified system.

No . . . the abstraction is "1.1", and the literal reality is the ever so
slightly different value produced by the binary implementation beneath
it.  The fact that the binary implementation alters the value of float
"1.1" so that it is no longer equal to decimal 1.1, despite that being
what someone typed in, is a leak in the abstraction.  No amount of
knowledge of the leak makes the leak not exist.  It does, however, mean
you can account for its leakiness and avoid getting into trouble with it
by way of some extra effort.
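To make the leak concrete, here's a quick demonstration in standard Ruby
(no extra libraries needed):

```ruby
# The literal 1.1 is stored as the nearest binary double, which is not
# exactly the decimal 1.1 that was typed in.
puts format("%.20f", 1.1)    # => 1.10000000000000008882

# A familiar consequence: decimal identities we expect don't hold.
puts(0.1 + 0.2 == 0.3)       # => false
puts(0.1 + 0.2)              # => 0.30000000000000004
```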

If you think of 1.1 as notation for a much more complex floating point
number, which is not the same as 1.1, that doesn't mean the abstraction
doesn't exist: it means you're unravelling it in your head to accommodate
the implementation's divergence from decimal 1.1.  In essence, the fact
it looks like 1.1 (but isn't) is the abstraction itself.

The way abstractions are supposed to help us is by saving us the trouble
of thinking about the more complex reality beneath the abstraction.  If
the task of keeping track of which use cases violate the simplicity of
the abstraction is more work than saved by the abstraction, it ends up
being a poor abstraction.  This is where the special comparison method
proposals make sense: if such a method can guarantee that it is accurate
up to a known, "standard" precision, it's easy to think "Floats are as
they appear up to precision X," and just move on with your life, because
it works; without them, we only have something like == as currently
implemented for Float, whose primary value (as far as I can see) is to
provide a tool for learning about the implementation of the Float type,
because there's no simple rule of thumb for "accuracy up to precision X".
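For what it's worth, here's a sketch of the kind of comparison method I
have in mind.  The name approx_equal? and the default tolerance are made
up for illustration; nothing like this ships with Ruby's Float:

```ruby
class Float
  # Hypothetical comparison accurate up to a chosen precision.  The
  # tolerance scales with the magnitude of the operands so it behaves
  # sensibly for both large and small values.
  def approx_equal?(other, epsilon = 1e-9)
    (self - other).abs <= epsilon * [self.abs, other.abs, 1.0].max
  end
end

puts((0.1 + 0.2).approx_equal?(0.3))  # => true, where == returns false
```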

Of course, someone might have some other reason for which using
IEEE-standard floating point numbers with Float#== as currently
implemented is useful, but I don't know what that is off the top of my
head, and I'm pretty sure it's a relatively rare case.  The upshot,
then, is that instead of having either a decimal
implementation that has known precision, or a Float type with a
comparison method that is accurate up to a known precision (plus the
literal comparison method, with the ability to add different comparisons
for cases where other types of comparison might be more suitable to a
specific problem domain), what we have is the need to implement a
comparison method of our own individual choosing every single time we
want to be able to rely on accuracy of decimal math.

This ignores the workaround of additional decimal types with their more
cumbersome notations, because the floating point abstraction has already
claimed the literal decimal ground even though it doesn't work the way
the notation suggests.
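Ruby's standard library does in fact ship such a decimal type,
BigDecimal, and it behaves the way the decimal notation suggests -- but
only if you accept the heavier literal syntax:

```ruby
require "bigdecimal"

# Decimal arithmetic behaves as typed, at the cost of writing
# BigDecimal("1.1") where the float abstraction lets you write 1.1.
sum = BigDecimal("0.1") + BigDecimal("0.2")
puts(sum == BigDecimal("0.3"))   # => true
```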

Note that a decimal "up to precision X" is also an abstraction, but at
least it is an abstraction that would leak far, far less often; the
remaining leaks would be cases like rounding of values (such as 2/3)
that no finite decimal can represent exactly.  I think the only way
around that, given the fact there are limits to how much RAM we have
available, would be to store rational literals (e.g. 2/3 instead of
0.666 . . .) somewhere to provide a back-up method for rounding numbers.
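Ruby's built-in Rational already illustrates the idea: the exact ratio
is stored, arithmetic doesn't drift, and rounding happens only when you
explicitly format the number for display:

```ruby
# Rational stores the exact ratio 2/3, so repeated arithmetic
# accumulates no error; rounding is deferred to output time.
r = Rational(2, 3)
puts(r + r + r == 2)       # => true: exact, no accumulated error
puts format("%.3f", r)     # => 0.667: rounded only when displayed
```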

Someone tell me if I'm mistaken about some part of that -- preferably
without invective.

-- 
Chad Perrin [ original content licensed OWL: http://owl.apotheon.org ]