On Wed, Jan 25, 2012 at 6:05 AM, Alex Chaffee <alexch / gmail.com> wrote:
>
> So how to test around this in unit tests? In RSpec, use be_within (not
> be_close) [1]; in Wrong (which works inside many test frameworks), use
> close_to? [2]

I'm really impressed with that Wrong library; well done!

Just throwing this out there: judging float equality based on
_difference_ is incorrect.  You should use _ratio_, or perhaps in some
cases a combination of both.

For instance, if my standard test library defines a default tolerance
of 0.00001, that seems pretty good, right?  OK, but what if the floats
I'm testing are actually really close to zero?

   assert { x.close_to? 0.0000000135 }

Well... x could have the value 0.0000000134999999999999994243561, in
that crazy way that floats behave.  That's clearly "close enough" to
the intended value, and it passes -- but so does x = 0.0000050, a value
roughly 370 times larger, because a tolerance of 0.00001 dwarfs every
number involved.  The assertion accepts wildly wrong values; the
default tolerance is inappropriate for this case.
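To make that concrete, here's a sketch of a difference-based check
(the helper name and the tolerance constant are mine, not Wrong's
actual defaults):

```ruby
TOLERANCE = 0.00001  # a typical fixed absolute tolerance

# Hypothetical difference-based comparison, for illustration only.
def close_by_difference?(actual, expected)
  (actual - expected).abs < TOLERANCE
end

# Near zero the check is meaninglessly loose:
puts close_by_difference?(0.0000050, 0.0000000135)
# => true, despite a ~370x discrepancy

# At large magnitudes the same check is absurdly strict:
puts close_by_difference?(1350000000.0001, 1350000000.0)
# => false, despite agreement to ~13 significant digits
```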

Of course, I can set a different tolerance for a given test, but the
deeper problem is this: numerate people use ratio instead of
difference to judge the proximity of one number to another, and that's
how we should implement tests for float pseudo-equality.  You
shouldn't need to tune the tolerance per test then; a single sensible
default works no matter the scale of the floats involved.
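A ratio-based check might look like this -- a sketch, not Wrong's
actual implementation; the name close_by_ratio? and the 1e-9 default
are my own:

```ruby
# Hypothetical ratio-based comparison: the relative error is judged
# against a scale-free default tolerance.
def close_by_ratio?(actual, expected, relative_tolerance = 1e-9)
  # Ratio is undefined at exact zero; fall back to strict equality there.
  return actual == expected if expected.zero?
  ((actual - expected) / expected).abs <= relative_tolerance
end

# One default now works at every scale:
puts close_by_ratio?(0.0000000134999999999999994, 0.0000000135)  # => true
puts close_by_ratio?(1350000000.0000001, 1350000000.0)           # => true
puts close_by_ratio?(0.0000050, 0.0000000135)                    # => false
```

The one scale where ratio breaks down is exact zero -- which is
presumably where your "combination of both" comes in.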

Discuss.