Martin DeMello wrote:

> Sean Russell <ser / germane-software.com> wrote:
> Well, simplifying a little you could say that integer::0 == 0, but
> float::0 is a number in the general region of 0 (no strict equality test

In computer science, this is the most /intuitive/ thing I have ever seen.  0
!= 0.0.

In the recent discussion about whether computer science is more science or
art, I should have used this as an example of why it is art.

When I say 0.0, I /mean/ 0.0, not some number in the general vicinity of
0.0.  If I want "almost" 0.0, I should be able to say ~0.0.

do(something) if x ~= 0.0
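
Ruby has no ~= operator, but the idea is easy to sketch with a helper method; approx_zero? and the epsilon default below are my own hypothetical names, not anything in the standard library:

```ruby
# Hypothetical approximate-equality helper: treats a float as "zero"
# if it lies within epsilon of 0.0.
def approx_zero?(x, epsilon = 1e-9)
  x.abs < epsilon
end

approx_zero?(0.1 + 0.2 - 0.3)  # true: the rounding error is tiny but nonzero
approx_zero?(0.1)              # false: well outside the tolerance
```

Note that (0.1 + 0.2 - 0.3) == 0.0 is false in Ruby, which is exactly the kind of surprise a tolerance-based comparison papers over.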

I understand that 0.0 != 0, and that 0/0 != 0.0/0.0 because of how floating
point numbers are implemented in the underlying logic.  However, I /still/
think it is funny that, after all this time, we are redefining math because
of limitations in processors.  Don't you?  Isn't it hilarious?  The idea
that computers can't do basic math /properly/?  Try explaining that to a
mathematician who's never dealt with a computer at that level, if you can
find one.

BTW, what I'm peeved about isn't that -1.0/0.0 = -Infinity, but that -1/0
isn't /also/ -Infinity.  If 0 == 0.0, then it should follow that 0/0 ==
0.0/0.0.
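
A quick irb-style check shows the asymmetry being complained about: float division by zero follows IEEE 754 and yields an infinity, while integer division by zero raises:

```ruby
# Float division by zero produces an IEEE 754 infinity...
-1.0 / 0.0   # => -Infinity

# ...but integer division by zero raises ZeroDivisionError instead.
begin
  -1 / 0
rescue ZeroDivisionError => e
  e.message  # => "divided by 0"
end
```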

--
|..  "The greatest security hazard is a false sense of security."
<|>   -- anon
/|\
/|
|
