On 17 Dec 2001 04:56:07 -0800, truediogen / my-deja.com (Vladimir)
wrote:

>> Coverage of what? How do you *know* this to be the case?  If you (or
>> anyone else) haven't run a coverage analyzer, you're just guessing
>> about code coverage (toy problems excluded).

The people who have reported coverage used a coverage analyzer to
track coverage while running their tests. That's how we *know* it to
be the case.
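
For anyone who wants to check such numbers for themselves, here is a
minimal sketch of one way to make the measurement, using Python's
coverage.py and unittest as example tools (my illustration, not
necessarily what those teams used):

  import unittest
  import coverage

  # Start measuring before the test modules are imported, so the
  # code they exercise is tracked.
  cov = coverage.Coverage()
  cov.start()

  suite = unittest.defaultTestLoader.discover(".")  # find the tests
  unittest.TextTestRunner().run(suite)

  cov.stop()
  cov.report()   # prints statement coverage, file by file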

>> Why are you able to do
>> what Knuth and Kernighan (and many others) routinely fail to do?

Far be it from me to compare myself with these guys. But XP
programmers do something that is quite different from what K&K do:
when we're on our game, we never write a line of code without a
failing test that needs that line written. It should be clear that to
the extent that one does this, one gets perfect coverage. Naturally we
aren't perfect, but we do get very good coverage.

It's a _constructive_ way to get coverage. Interesting, I think.
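
Here's a rough sketch of that rhythm in Python's unittest (the Account
example is made up for illustration, not any team's actual code). The
test is written and run first, it fails, and only then is the
production code written, so every line of Account is there because a
test demanded it:

  import unittest

  class Account:
      # Written second, only after the test below had been seen to
      # fail. Every line here exists because the test needed it, so
      # the test covers it.
      def __init__(self):
          self.balance = 0

      def deposit(self, amount):
          self.balance += amount

  class AccountTest(unittest.TestCase):
      # Written first. It fails until deposit() exists and works.
      def test_deposit_increases_balance(self):
          account = Account()
          account.deposit(100)
          self.assertEqual(100, account.balance)

  if __name__ == "__main__":
      unittest.main()
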
>> 
>> There are several freeware coverage analyzers. You can download
>> coverage analyzers with a 30 day try-before-you-buy license from many
>> leading vendors.  Why don't you all get some *real* data to support
>> your claims? This wouldn't take more than a few hours: download and
>> install a tool, instrument your app, run your test suite, and see what
>> you get.  Who knows -- maybe you'll be able to prove you really can
>> walk on water.

There's no water-walking involved. A little consideration and inquiry
would have shown how it's done and why it works as well as it does.

I remind us all that coverage is not the be-all and end-all of
testing, but it is one good measure of testing quality. Only one
measure, but a useful one.
>
>Maybe they do not want to make their customers who buy XP mentoring and
>books from them _a bit_ nervous because of uncertainty about the
>quality embedded in the software XP produces.

XP teams seem to have customers who are happy with the quality. They
accomplish this with a continuous and very tight feedback loop, driven
by delivering features to the customer every two weeks, tested as the
customer (team) specifies. When the software passes the tests but is
still not satisfactory, the team learns how to write better tests.
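
To make "tested as the customer specifies" concrete, here is a made-up
example (my illustration only): the customer supplies the figures, and
the team turns them into an automated check that runs with every
release.

  import unittest

  def order_total(quantity, unit_price, discount_rate):
      # Hypothetical pricing rule the customer asked for.
      return quantity * unit_price * (1 - discount_rate)

  class CustomerAcceptanceTest(unittest.TestCase):
      def test_bulk_order_gets_ten_percent_off(self):
          # Figures chosen by the customer for this iteration.
          self.assertAlmostEqual(900.0, order_total(10, 100.0, 0.10))

  if __name__ == "__main__":
      unittest.main()
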
>
>Kinda blissful unawareness...

Again: the teams that report high coverage measured it. I have a
pretty good explanation of how they got it. I'm not sure why, other
than a general desire to object, one would call that unawareness.


Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com