I haven't explained the reasoning behind the error estimation in
Range#step for Float:

      double n = (end - beg)/unit;
      double err = (fabs(beg) + fabs(end) + fabs(end-beg)) / fabs(unit) * epsilon;
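
For context, here is a minimal standalone sketch of how such an estimate
can be applied.  It assumes epsilon is DBL_EPSILON; the clamp to 0.5 and
the final floor() are illustrative assumptions of mine, not necessarily
the exact code in range.c.

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    /* Sketch: number of whole `unit'-sized steps that fit between beg and end,
     * using the error estimate to compensate for rounding in (end - beg)/unit. */
    static double step_count(double beg, double end, double unit)
    {
        double epsilon = DBL_EPSILON;
        double n   = (end - beg)/unit;
        double err = (fabs(beg) + fabs(end) + fabs(end-beg)) / fabs(unit) * epsilon;

        if (err > 0.5) err = 0.5;   /* never shift n by more than half a step */
        return floor(n + err);
    }

    int main(void)
    {
        /* (0.3 - 0.0)/0.1 evaluates to just below 3 in IEEE 754 doubles;
         * the correction is what lets the end point 0.3 still be reached. */
        printf("%.17g -> %g steps\n", (0.3 - 0.0)/0.1, step_count(0.0, 0.3, 0.1));
        return 0;
    }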

The reason is as follows (note that this explanation uses Unicode characters).
It is based on the theory of error propagation:
    http://en.wikipedia.org/wiki/Propagation_of_uncertainty

If f(x,y,z) is given as a function of x, y, z,
δf (the error of f) can be estimated as:

    δf^2 = |∂f/∂x|^2*δx^2 + |∂f/∂y|^2*δy^2 + |∂f/∂z|^2*δz^2

This is a kind of `statistical' error.  Instead, the `maximum' error
can be expressed as:

    δf = |∂f/∂x|*δx + |∂f/∂y|*δy + |∂f/∂z|*δz

I considered the latter sufficient for this case.
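
To see the difference between the two estimates concretely, here is a
small illustration (not part of the original argument) using the simple
product f(x,y) = x*y, whose partial derivatives are y and x:

    #include <stdio.h>
    #include <math.h>

    /* Compare the `statistical' and `maximum' error estimates for f(x,y) = x*y. */
    int main(void)
    {
        double x = 3.0, y = 4.0;
        double dx = 0.01, dy = 0.02;            /* assumed input errors */

        double stat = sqrt(pow(y*dx, 2) + pow(x*dy, 2));
        double max  = fabs(y)*dx + fabs(x)*dy;

        printf("statistical: %g\n", stat);      /* about 0.072 */
        printf("maximum:     %g\n", max);       /* 0.1, never smaller */
        return 0;
    }
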
Now, the target function here is:

    n = f(e,b,u) = (e-b)/u

The partial derivatives of f are:

    ∂f/∂e = 1/u
    ∂f/∂b = -1/u
    ∂f/∂u = -(e-b)/u^2
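
These can be verified numerically; the following quick sanity check (an
addition, not part of the original derivation) compares them with
central-difference approximations at an arbitrary sample point:

    #include <stdio.h>

    static double f(double e, double b, double u) { return (e - b)/u; }

    /* Analytic partial derivatives of f(e,b,u) = (e-b)/u vs. central differences. */
    int main(void)
    {
        double e = 1.0, b = 0.25, u = 0.1, h = 1e-6;

        printf("df/de = %g vs %g\n",  1.0/u,        (f(e+h,b,u) - f(e-h,b,u))/(2*h));
        printf("df/db = %g vs %g\n", -1.0/u,        (f(e,b+h,u) - f(e,b-h,u))/(2*h));
        printf("df/du = %g vs %g\n", -(e-b)/(u*u),  (f(e,b,u+h) - f(e,b,u-h))/(2*h));
        return 0;
    }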

The errors of floating point values are estimated as:

    δe = |e|*ε
    δb = |b|*ε
    δu = |u|*ε
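
Assuming ε here is the machine epsilon (DBL_EPSILON for double), this
bound can be checked directly: the gap between adjacent doubles never
exceeds |x|*ε for normal x, so the representation error of x is at most
|x|*ε (in fact at most half that).  A small added illustration:

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    /* Compare the spacing to the next representable double with |x| * DBL_EPSILON. */
    int main(void)
    {
        double xs[] = { 0.1, 0.3, 1.0, 1234.5678 };
        for (int i = 0; i < 4; i++) {
            double x   = xs[i];
            double ulp = nextafter(x, INFINITY) - x;
            printf("x = %-10g  ulp = %.3g  |x|*eps = %.3g\n",
                   x, ulp, fabs(x) * DBL_EPSILON);
        }
        return 0;
    }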

Finally, the error is derived as:

  δn = |∂n/∂e|*δe + |∂n/∂b|*δb + |∂n/∂u|*δu
     = |1/u|*|e|*ε + |1/u|*|b|*ε + |(e-b)/u^2|*|u|*ε
     = (|e| + |b| + |e-b|)/|u|*ε
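
As an added check (not in the original mail), the same 0.0..0.3 step 0.1
case from the first sketch shows the computed n drifting below the
intended value 3 by less than the estimated err on IEEE 754 doubles:

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    /* The decimal inputs 0.0, 0.3, 0.1 are meant to give exactly n = 3, but the
     * stored doubles are slightly off; err should cover the resulting drift. */
    int main(void)
    {
        double beg = 0.0, end = 0.3, unit = 0.1;

        double n   = (end - beg)/unit;
        double err = (fabs(beg) + fabs(end) + fabs(end-beg)) / fabs(unit) * DBL_EPSILON;

        printf("n       = %.17g\n", n);             /* just below 3 */
        printf("|n - 3| = %.3g\n", fabs(n - 3.0));  /* observed deviation */
        printf("err     = %.3g\n", err);            /* estimated bound */
        return 0;
    }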

Masahiro Tanaka