I've been doing some comparisons between two Windows ports of Ruby
1.8.1, mswin32 and bccwin32, and I've found that the BigDecimal library
is affected by the floating-point behaviour of VC++. Microsoft compilers
(VC++ 6 and VC++ 7; I haven't tried 7.1) disable the use of the 80-bit
internal floating-point precision that most PC processors have. GCC and
other compilers for Windows, such as Borland or Digital Mars, do use
that extra precision.

If you try the following in mswin32-1.8.1:

   require 'bigdecimal'
   puts (BigDecimal('355')/BigDecimal('226')).to_f - (355.0/226.0)

the result is not "0.0" but something like "-9.9120711638534e-013"
(it is 0.0 in bccwin32-1.8.1).
In general, you can find that BigDecimal#to_f produces too few
significant digits.
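To see why this matters: an IEEE 754 double needs up to 17 significant
decimal digits to round-trip exactly, so a conversion that carries only
16 can land on a neighbouring double. A quick check (plain Ruby, nothing
mswin32-specific; the variable names are mine):

   # 17 significant decimal digits always round-trip an IEEE 754 double
   x = 355.0 / 226.0
   s17 = format('%.17g', x)
   puts s17.to_f == x     # => true

   # Float::DIG (15) is the guarantee in the other direction only:
   # every 15-digit decimal survives a trip through a double, but a
   # double printed with 15 digits may not come back as the same double.
   puts Float::DIG        # => 15
   puts Float::EPSILON    # ~2.22e-16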

The reason for this is that under mswin32, BigDecimal.double_fig is
only 16, while it is 20 for bccwin32. The C variable DBLE_FIG, which is
used for BigDecimal.double_fig, is computed in the C code of the
library like this:

    v = 1.0;
    DBLE_FIG = 0;
    /* count decimal digits until adding v to 1.0 no longer changes it */
    while(v + 1.0 > 1.0) {
        ++DBLE_FIG;
        v /= 10;
    }

If you evaluate it with the precision of the double floating-point type
(64 bits), as VC++ does (and as Ruby code would), you get 16, which is
too few digits for an acceptable conversion to double. If you evaluate
it using the extra precision that other compilers offer, you get 20,
which is enough.
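For illustration, here is the same digit-counting loop transliterated
into Ruby (variable names are mine). Ruby's Float is a 64-bit IEEE 754
double with no 80-bit intermediates, so it reproduces the mswin32
result:

   # each v + 1.0 is rounded to 64-bit double precision, as under VC++
   v = 1.0
   dble_fig = 0
   while v + 1.0 > 1.0
     dble_fig += 1
     v /= 10
   end
   puts dble_fig   # => 16 in pure double precision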

I'm afraid that we may find many more nasty surprises in the mswin32
version of Ruby, due to the numerical precision of VC++ compared to
that of the compilers which most developers of Ruby extensions use.
For this reason I'd be much happier if the great Pragmatic Programmers
distribution would choose another compiler without these problems...

--Javier Goizueta