On 15.4.2005, at 00:55, Mel Bohince wrote:

> <1.6> expected but was
> <1.6>.

Floating point accuracy, the bane of mankind.
1.0 + 0.2 + 0.2 + 0.2 - 1.6
=> -2.22044604925031e-16

> Any clues to what I'm doing wrong? Is there a strategy to debug this 
> kind of thing? The debugger is not like I'm use too.

Either do fixed-point decimals with integers and a decimal-point
divisor (store 1.00 as 100; then 100 + 20 + 20 + 20 - 160 => 0)
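Ruby's stdlib also ships BigDecimal, which does exact decimal
arithmetic without you managing the divisor yourself. A quick sketch
of both approaches:

```ruby
require 'bigdecimal'

# Fixed-point by hand: keep everything as integer cents,
# divide by 100.0 only when displaying.
cents = 100 + 20 + 20 + 20 - 160   # pure integer math, exact
cents                               # => 0

# Or let BigDecimal do exact decimals from string literals.
sum = BigDecimal("1.0") + BigDecimal("0.2") * 3 - BigDecimal("1.6")
sum.zero?                           # => true
```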
or pick an error threshold and check that (a - b).abs < threshold
(1.0 + 0.2 + 0.2 + 0.2 - 1.6).abs < 0.01
=> true
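If that "<1.6> expected but was" message is coming from a Test::Unit
assert_equal, note that Test::Unit already has the threshold check
built in as assert_in_delta. A minimal sketch (test name and values
are made up for illustration):

```ruby
require 'test/unit'

class FloatSumTest < Test::Unit::TestCase
  def test_sum_within_delta
    sum = 1.0 + 0.2 + 0.2 + 0.2
    # assert_equal(1.6, sum) fails here because sum is off by ~2e-16;
    # assert_in_delta passes as long as |expected - actual| <= delta.
    assert_in_delta(1.6, sum, 0.001)
  end
end
```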