In article <987803036.405271 / rexx.com>,
 oinkoink+unet / MOOMOOre_xx.com (Bret) writes:

>Python says that 0, 0.0, the empty list, the empty dictionary...are
>false, but other creatures of the same types are true.  This takes a
>wart from C (where it may be sort of excusable because C is at a
>lower level), and inflates it to hideous and alarming proportions.

And how!  I haven't used Python in anger, but I have wrestled with the
same problem in Perl, generally when I'm assigning to variables from a
regexp match.  I assume that the value will be false (because
undefined) if no match happened, then wake up in a sweat in the middle
of the night screaming "Aargh, if it matches '0' my code will do the
wrong thing".  So I have to rewrite it *without* all of Perl's useful
shortcuts to remove the bug.

I'm not out to hammer Perl here: I'm amazed at how well it works.  But
in this case its "helpful" shortcuts make my life harder, not easier.
I was *so* glad to discover that Ruby gets this right!
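A minimal Ruby sketch of the contrast (the example strings here are made
up for illustration): only nil and false are falsy in Ruby, so a captured
"0" still tests as true, and only a genuinely failed match (nil) is false.

```ruby
# String#[] with a Regexp returns the matched text, or nil on no match.
port    = "port 0"[/\d+/]     # => "0"  -- a successful match
missing = "no digits"[/\d+/]  # => nil  -- no match

puts "matched" if port        # prints "matched": "0" is truthy in Ruby
puts "matched" if missing     # prints nothing: nil is falsy

# The Perl pitfall described above: a capture of "0" is a false value
# in Perl, so a bare truth test on it silently misfires.
```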

Cheers, 

Jeremy Henty