In my experience, using Float#== is always an error. Floats
should be compared within an epsilon because of rounding errors, etc.
But the question is: why keep Float#== if it is basically useless? Why
not define Float#== in the core as:

class Float; def ==(o); ((o - self).abs < 0.0000001); end; end

(Most likely with 0.0000001 being a parameter of the Float class so it
can be changed at runtime.) Is the current implementation of Float#==
just legacy from C (and pretty much every other language I can think of)?
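A runtime-configurable version might look something like this (just a
sketch; the Float.epsilon accessor name and the default value are my
own invention, not anything in core Ruby):

```ruby
class Float
  # Illustrative class-level tolerance; name and default are assumptions.
  @@epsilon = 1.0e-7

  def self.epsilon
    @@epsilon
  end

  def self.epsilon=(value)
    @@epsilon = value
  end

  def ==(other)
    return false unless other.is_a?(Numeric)
    (other - self).abs < @@epsilon
  end
end

# 0.1 + 0.2 is not exactly 0.3 as a double, but it is within epsilon:
puts((0.1 + 0.2) == 0.3)  # => true
```

Tightening the tolerance via Float.epsilon= then makes the same
comparison fail again, which is the runtime knob suggested above.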

It would make the behavior of Float more natural to humans and should 
not break much, given the uselessness of today's Float#==.
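For reference, this is the stock behavior that motivates the complaint:

```ruby
a = 0.1 + 0.2
puts a                          # the exact double value, not 0.3
puts(a == 0.3)                  # => false with the built-in exact comparison
puts((a - 0.3).abs < 1.0e-7)    # => true within a tolerance
```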

As an example, some unit tests failed today with this message:

   1) Failure:
test_read_coords(TestScaffoldReorder) 
[./test/test_scaffold_reorder.rb:50]:
<{"9876"=>{"8153"=>[19.1, [63.7, 580.0]], "8154"=>[15.0, [1612.5]]}}> 
expected but was
<{"9876"=>{"8153"=>[19.1, [63.7, 580.0]], "8154"=>[15.0, [1612.5]]}}>.


It looks stupid, doesn't it? It failed because the computed floats were
not exactly equal to the constants entered in the testing code, but
they display the same. I could drill down into the hashes and compare
with assert_within_epsilon (or whatever it is), but that makes the code
much uglier and more complicated than a simple assert_equal. Or I could
wrap the structure in a class and give it a proper equality operator. Or...
But all this seems too complicated for the quick program at hand. And 
redefining Float#== as shown above made the test pass with much less 
pain.
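For what it's worth, a small recursive helper (the name, tolerance, and
shape are my own, just a sketch) could keep such a test readable without
redefining Float#== globally:

```ruby
# Sketch: recursively compare nested Hashes/Arrays, treating Floats as
# equal when they are within eps of each other.
def approx_equal?(a, b, eps = 1.0e-7)
  case a
  when Float
    b.is_a?(Numeric) && (a - b).abs < eps
  when Array
    b.is_a?(Array) && a.size == b.size &&
      a.zip(b).all? { |x, y| approx_equal?(x, y, eps) }
  when Hash
    b.is_a?(Hash) && a.size == b.size &&
      a.all? { |k, v| b.key?(k) && approx_equal?(v, b[k], eps) }
  else
    a == b
  end
end

expected = {"9876"=>{"8153"=>[19.1, [63.7, 580.0]], "8154"=>[15.0, [1612.5]]}}
# Simulate a tiny computation error in one of the floats:
computed = {"9876"=>{"8153"=>[19.1 + 1.0e-10, [63.7, 580.0]],
                     "8154"=>[15.0, [1612.5]]}}

puts(expected == computed)               # => false (exact comparison)
puts approx_equal?(expected, computed)   # => true
```

A single `assert approx_equal?(expected, computed)` would then replace
the failing assert_equal.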

So, is there any good reason to keep Float#== the way it is? Or is
there any real danger of breaking existing libraries if I redefine
Float#== this way?
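One concrete danger worth weighing (a sketch, not an exhaustive answer):
an epsilon-based equality is no longer transitive, and code that assumes
a == b and b == c implies a == c can misbehave:

```ruby
EPS = 1.0e-7
approx = ->(x, y) { (x - y).abs < EPS }

a, b, c = 0.0, 0.6e-7, 1.2e-7

puts approx.call(a, b)  # true:  a "equals" b
puts approx.call(b, c)  # true:  b "equals" c
puts approx.call(a, c)  # false: yet a does not "equal" c
```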

Guillaume.