In message <5a81kmF28vc6vU1@mid.individual.net>, Robert Klemme writes:
>Because converting a boolean to a boolean by means of comparison with 
>boolean values
>
>a) is superfluous
>
>b) can do more harm than good (as I pointed out, there are often 
>multiple values for either "true" or "false" in computer languages, so 
>comparing with just one of these values will almost certainly make your 
>program fail at some point).

In particular, consider that none of the following are the boolean value
true:
	'true'
	1
	23
	[ true ]

Ruby's answer is just to say that an expression has truthiness if it's
not nil or false.  (Lua, I think, does the same thing; certainly, in Lua,
'if 0 then ... end' will execute its body.)  C's answer is to say that an
expression has truthiness if it doesn't compare equal to zero; this makes
null pointers and 0-valued ints "false".  I think some languages do it by
coercing everything to boolean.
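
A quick sketch of Ruby's rule, using the values listed above (standard
Ruby, nothing assumed):

	['true', 1, 23, [true]].each do |x|
	  puts "#{x.inspect} is truthy"  if x            # always prints
	  puts "#{x.inspect} == true"    if x == true    # never prints
	end

	zero = 0
	puts "0 is truthy too" if zero   # prints -- unlike C, 0 isn't false here
	puts "never printed"   if nil    # nil and false are the only falsy values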

That sounds really slick, at first!  Not only do you get a nice simple rule,
but you can define your own truth values!  AWESOME!

Two problems:
1.  Pretty high performance cost to coerce everything to boolean, given
that 99% of all boolean tests ever will be against objects that use the
default rule anyway -- especially in Ruby, where it'd mean method call
overhead on every single test.
2.  You can define your own truth values.  That means that any boolean test
on an object could conceivably be hijacked by some joker three cubicles
away with an idiotic hack that overrides a particular object's hypothetical
to_bool method (sketched below).
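
To make problem 2 concrete, here's what that hack might look like.  Keep in
mind this is entirely hypothetical: Ruby has no to_bool, and the Order class
and its items are made up for illustration.

	class Order                      # hypothetical domain class
	  def initialize(items)
	    @items = items
	  end

	  # The joker's "clever" override: an Order counts as false
	  # when it has no line items.
	  def to_bool
	    !@items.empty?
	  end
	end

	order = Order.new([])            # we really did get an object back...
	if order.to_bool                 # ...spelling out the call that a
	  puts "shipping"                # coerce-to-bool rule would make
	end                              # implicitly -- this never runs

Under coerce-everything semantics, a plain 'if order' -- which every caller
reads as "did I get an object?" -- would silently take the to_bool detour.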

As is, for the very easy cases (booleans, testing whether a value is set
at all) you get the results you want, and if you want a more specific test,
you can write it explicitly.
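
For instance, in plain Ruby (standard methods only, nothing assumed):

	x = 0

	puts "set"      if x           # fires: x is neither nil nor false
	puts "is nil"   if x.nil?      # doesn't fire
	puts "equals 0" if x == 0      # fires: the specific test, when you mean it
	puts "is true"  if x == true   # doesn't fire: 0 is not the boolean true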

Advantage:  Ruby.

(And I say this as someone who virtually NEVER writes 'if (ptr == NULL)' in C,
because I think it's gratuitously verbose.  'if (ptr)' seems clearer to me.)

-s