I haven't explained the reasoning behind the error estimation in
Range#step for Float:

      /* estimated number of steps from beg to end */
      double n = (end - beg)/unit;
      /* upper bound on the floating-point error of n */
      double err = (fabs(beg) + fabs(end) + fabs(end-beg)) / fabs(unit) * epsilon;

The reason is as follows.
This is based on the theory of error propagation:
    http://en.wikipedia.org/wiki/Propagation_of_uncertainty

If f(x,y,z) is a function of x, y, and z,
then Δf (the error of f) can be estimated as:

    Δf^2 = |∂f/∂x|^2*Δx^2 + |∂f/∂y|^2*Δy^2 + |∂f/∂z|^2*Δz^2

This is a kind of `statistical' error.  The `maximum' error,
on the other hand, can be expressed as:

    Δf = |∂f/∂x|*Δx + |∂f/∂y|*Δy + |∂f/∂z|*Δz
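
To illustrate the difference between the two estimates, here is a
small standalone C sketch.  The function f(x,y,z) = x*y + z and the
input values are my own example (they are not taken from the Range
code); its partial derivatives are y, x, and 1.  The maximum estimate
is always at least as large as the statistical one:

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void)
    {
        /* example function f(x,y,z) = x*y + z at an arbitrary point */
        double x = 3.0, y = 5.0, z = 7.0;
        double dx = fabs(x)*DBL_EPSILON;     /* assumed input errors  */
        double dy = fabs(y)*DBL_EPSILON;
        double dz = fabs(z)*DBL_EPSILON;
        double fx = y, fy = x, fz = 1.0;     /* partial derivatives   */

        /* statistical (root-sum-square) estimate */
        double err_stat = sqrt(fx*fx*dx*dx + fy*fy*dy*dy + fz*fz*dz*dz);
        /* maximum (worst-case) estimate */
        double err_max  = fabs(fx)*dx + fabs(fy)*dy + fabs(fz)*dz;

        printf("statistical: %g\nmaximum:     %g\n", err_stat, err_max);
        return 0;
    }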

I considered the maximum error sufficient for this case.
Now, the target function here is:

    n = f(e,b,u) = (e-b)/u

The partial derivatives of f are:

    ∂f/∂e = 1/u
    ∂f/∂b = -1/u
    ∂f/∂u = -(e-b)/u^2
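
These closed forms can be checked numerically.  Below is a standalone
C sketch; the sample point e=2.0, b=1.0, u=0.3 and the step h are
arbitrary choices of mine, used only to compare central finite
differences against the formulas above:

    #include <stdio.h>
    #include <math.h>

    static double f(double e, double b, double u) { return (e - b)/u; }

    int main(void)
    {
        double e = 2.0, b = 1.0, u = 0.3, h = 1e-6;  /* arbitrary point */

        /* central finite differences vs. the closed-form partials */
        printf("df/de: %.6f vs %.6f\n", (f(e+h,b,u)-f(e-h,b,u))/(2*h),  1.0/u);
        printf("df/db: %.6f vs %.6f\n", (f(e,b+h,u)-f(e,b-h,u))/(2*h), -1.0/u);
        printf("df/du: %.6f vs %.6f\n", (f(e,b,u+h)-f(e,b,u-h))/(2*h), -(e-b)/(u*u));
        return 0;
    }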

The errors of the floating-point values are estimated as:

    Δe = |e|*ε
    Δb = |b|*ε
    Δu = |u|*ε
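
Here ε is the machine epsilon (DBL_EPSILON in C, 2^-52 for IEEE 754
doubles).  For a normalized double x, |x|*ε is an upper bound on the
gap between x and the next representable value, and hence on the
rounding error made when a real number near x is stored.  A quick
standalone C check (the sample values are arbitrary):

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void)
    {
        /* arbitrary normalized sample values */
        double xs[] = {0.1, 1.0, 3.7, 123.456, 1e10};
        int i;
        for (i = 0; i < 5; i++) {
            double x   = xs[i];
            double ulp = nextafter(x, INFINITY) - x;  /* gap to next double */
            printf("x=%-10g ulp=%.3g |x|*eps=%.3g bound ok: %d\n",
                   x, ulp, fabs(x)*DBL_EPSILON, ulp <= fabs(x)*DBL_EPSILON);
        }
        return 0;
    }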

Finally, the error is derived as:

  Δn = |∂n/∂e|*Δe + |∂n/∂b|*Δb + |∂n/∂u|*Δu
     = |1/u|*|e|*ε + |1/u|*|b|*ε + |(e-b)/u^2|*|u|*ε
     = (|e| + |b| + |e-b|)/|u|*ε
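
Putting it together, here is a standalone C sketch of how such a
bound can make the step count robust.  This is not the actual
range.c implementation; the floor(n + err) + 1 rounding and the 0.5
clamp are my own illustration of how the bound can be applied:

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    /* count the steps beg, beg+unit, ..., up to end, tolerating
       floating-point error in the computed quotient n               */
    static long step_count(double beg, double end, double unit)
    {
        double n   = (end - beg)/unit;
        double err = (fabs(beg) + fabs(end) + fabs(end - beg))
                     / fabs(unit) * DBL_EPSILON;   /* the derived bound */
        if (err > 0.5) err = 0.5;                  /* assumed clamp     */
        return (long)floor(n + err) + 1;           /* include beg       */
    }

    int main(void)
    {
        /* 0.0, 0.1, 0.2, 0.3 -> 4 steps.  Without err, (0.3 - 0.0)/0.1
           evaluates to 2.999... and the last step would be lost.       */
        printf("%ld\n", step_count(0.0, 0.3, 0.1));
        return 0;
    }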

Masahiro Tanaka