Phillip Gawlowski wrote in post #963658:

> It cannot be infinity. It does, quite literally not compute. There's
> no room for interpretation, it's a fact of (mathematical) life that
> something divided by nothing has an undefined result. It doesn't
> matter if it's 0, 0.0, or -0.0. Undefined is undefined.
>
> That other languages have the same issue makes matters worse, not
> better (but at least it is consistent, so there's that).
>
> --
> Phillip Gawlowski

This is not even wrong.

From the definitive source:
http://en.wikipedia.org/wiki/Division_by_zero

The IEEE floating-point standard, supported by almost all modern
floating-point units, specifies that every floating point arithmetic
operation, including division by zero, has a well-defined result. The
standard supports signed zero, as well as infinity and NaN (not a
number). There are two zeroes, +0 (positive zero) and −0 (negative zero),
and this removes any ambiguity when dividing. In IEEE 754 arithmetic,
a ÷ +0 is positive infinity when a is positive, negative infinity when a
is negative, and NaN when a = 0. The infinity signs change when dividing
by −0 instead.
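
You can watch this behavior directly in Ruby, whose Float follows IEEE 754. (Note the contrast with Integer division, where 1 / 0 raises ZeroDivisionError rather than returning infinity; only the float operations below are the IEEE 754 behavior the quote describes.)

```ruby
# Float division follows IEEE 754: every result is well-defined.
puts 1.0 / 0.0          # => Infinity
puts -1.0 / 0.0         # => -Infinity
puts 1.0 / -0.0         # => -Infinity  (signed zero flips the sign)
puts (0.0 / 0.0).nan?   # => true       (0/0 is NaN, not an error)

# Integer division, by contrast, raises an exception:
begin
  1 / 0
rescue ZeroDivisionError => e
  puts e.message        # => "divided by 0"
end
```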