On Jan 21, 2012 9:34 AM, "Gary Wright" <gwtmp01 / mac.com> wrote:
>
>
> On Jan 21, 2012, at 9:06 AM, Intransition wrote:
>
> > So simple...
> >
> > >   1.1 - 1.to_f == 0.1
> >   >> false
> >
> > (*rumble*) (*rumble*) Pathetic!
>
>
> Decimal literals (e.g. 1.1) can't always be represented exactly as binary
> floats:
>
> >> "%.40f" % 1.1
> "1.1000000000000000888178419700125232338905"
> >> "%.40f" % 1.0
> "1.0000000000000000000000000000000000000000"
> >>
>
> Use BigDecimal if you need exact precision:
>
> >> require 'bigdecimal'
> true
> >> BigDecimal.new("1.1") - BigDecimal.new("1.0")
> #<BigDecimal:7fc50ea75ff0,'0.1E0',9(36)>
> >> puts BigDecimal.new("1.1") - BigDecimal.new("1.0")
> 0.1

This problem comes up from time to time, sometimes from people I'd expect
to know better than to ask. The answer is always the same: that's how
floats work. And there's usually the same suggestion to use BigDecimal
(or something similar).
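
For anyone finding this thread later, the usual workarounds look roughly
like this (a quick sketch, untested as I write it; the epsilon value is
just an illustration, not a recommendation):

  require 'bigdecimal'

  # Compare floats against a small tolerance instead of with ==
  EPSILON = 1e-9
  ((1.1 - 1.to_f) - 0.1).abs < EPSILON                   # => true

  # Or keep decimal arithmetic exact with Rational...
  Rational(11, 10) - Rational(1, 1) == Rational(1, 10)   # => true

  # ...or with BigDecimal, as suggested above
  BigDecimal.new("1.1") - BigDecimal.new("1.0") == BigDecimal.new("0.1")  # => true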

Since this time I'm waiting for someone and am rather bored, I thought I'd
see how this goes. What's the real benefit of using floats? Why doesn't
Ruby actually use BigDecimal for things like this?
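
Here's roughly how I'd measure the cost I'm asking about (a sketch, not
something I've actually run; the timings will obviously depend on your
machine and Ruby version):

  require 'benchmark'
  require 'bigdecimal'

  N = 1_000_000
  f = 1.1
  b = BigDecimal.new("1.1")

  Benchmark.bm(12) do |x|
    x.report("Float:")      { N.times { f - 1.0 } }
    x.report("BigDecimal:") { N.times { b - BigDecimal.new("1.0") } }
  end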

-- 
-yossef
