Wilson Bilkovich wrote:
> On 11/12/06, bradjpeek <bradjpeek / gmail.com> wrote:
> >
> > The second edition of _The_Ruby_Way_ has an example similar to the
> > following:
> >
> > irb(main):001:0> puts 'not equal' unless (3.2 - 2.0) == 1.2
> > not equal
> > => nil
> >
> > The point is to illustrate why you might want to use BigDecimal (i.e.
> > so 3.2 - 2.0 would in fact equal 1.2).
> >
> > require 'bigdecimal'
> > x = BigDecimal("3.2")
> > y = BigDecimal("2.0")
> > z = BigDecimal("1.2")
> > p x - y == z ? "equal" : "not equal"   # prints "equal"
> >
> > I'm fairly new to Ruby and don't do much programming, but when I saw
> > this example I was surprised that the default behavior is that 3.2 -
> > 2.0 != 1.2
> >
> > To me, this violates the "Principle of least surprise", but I guess it
> > isn't a big deal because I don't remember it being discussed in the
> > Programming Ruby book (but it certainly may have been).
> >
> > Do other languages work this way?
> >
> >
>
> Yep. This is pretty standard.
> This article is tolerable, but rambles a bit:
> http://en.wikipedia.org/wiki/Floating_point
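
To see the gap, ask irb for more digits than Float's default
formatting shows:

irb(main):001:0> printf("%.17f\n", 3.2 - 2.0)
1.20000000000000018
=> nil
irb(main):002:0> printf("%.17f\n", 1.2)
1.19999999999999996
=> nil

Neither 3.2 nor 1.2 can be represented exactly in binary, so the
subtraction compares two slightly different approximations.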

There was a personal computer whose language in ROM used
binary-coded-decimal (BCD) floating point. In that language
3.2 - 2.0 yielded precisely 1.2, because the decimal digits
were stored directly rather than approximated in binary.
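
Ruby's BigDecimal gives the same decimal-exact arithmetic in
software. A quick sketch using only the standard bigdecimal
library, nothing assumed beyond the stdlib:

require 'bigdecimal'

d = BigDecimal("3.2") - BigDecimal("2.0")
d.to_s("F")              # => "1.2"  (the exact stored value)
d == BigDecimal("1.2")   # => true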