On Thu, Jan 26, 2012 at 3:33 AM, Intransition <transfire / gmail.com> wrote:

> The problem is you're looking at it only as an engineer might, dealing
> with very small deltas. But as an implementer of such a method as
> #close_to? or #approx? I would never assume there are no valid applications
> that are concerned with bulkier scales.
>

I remember reading a complaint about how programmers treat every
constraint number as a variable for which pretty much any value must be
valid. Why write something that works for an array of 10 elements? It
should work for ANY array. The array could have 1000 elements. Or 301039384!

...or maybe you really should consider the real problem space and see that
the array will only ever have up to 10 elements, tops.

There's an excellent example of this thinking in Cryptonomicon (which I
recently re-read), where some characters wonder why the "Pontifex"
algorithm specifically mentioned the number 54 instead of "X, where X could
be 29, or 54, or 128". As it turns out, there's a very good reason for the
54.


As Gavin clearly pointed out, there are two different concerns at play
here. One is the standard problem with IEEE floating-point representation.
The other is deciding whether one number is "close enough" to another. One
of these could be the concern of a programming language, while the other
would be the concern of a particular program.
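To make that distinction concrete, here's a minimal sketch of what such a comparison method might look like. The name `approx?` and its keyword defaults are purely illustrative, not the actual API being proposed on the list; the point is that the tolerances are knobs the *program* turns, while the representation error is a fact of the *language's* floats:

```ruby
# Hypothetical approx? helper -- names and defaults are illustrative.
def approx?(a, b, rel_tol: 1e-9, abs_tol: 0.0)
  # True when a and b differ by at most abs_tol, or by at most
  # rel_tol relative to the larger magnitude of the two.
  (a - b).abs <= [rel_tol * [a.abs, b.abs].max, abs_tol].max
end

# The IEEE representation concern: 0.1 + 0.2 is not exactly 0.3
# in binary floating point...
(0.1 + 0.2) == 0.3        # => false
approx?(0.1 + 0.2, 0.3)   # => true

# ...while "close enough" at bulkier scales is a per-program decision:
approx?(10_000.0, 10_001.0, abs_tol: 5.0)  # => true
```

A relative tolerance handles the representation-error case across magnitudes; an absolute tolerance is what lets an application declare its own, possibly much coarser, notion of "close enough".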

-- 
-yossef