Florian G. Pflug wrote in message
<20011109022938.A6076 / perception.phlo.org>...
>On Fri, Nov 09, 2001 at 08:45:56AM +0900, Sean Russell wrote:
>> Before you jump on me about the hidden benefits of "pure" XP code, a
>> couple of caveats: (1) I'm an XP convert -- we're better off with it
>> than without it, and (2) I'm not ignoring the benefits of clean code.
>> I know as well as you that clean code tends to be less buggy and is
>> certainly more maintainable. However, code speed is almost never
>> stressed by XP advocates, who tend to stress "never break the rules".
>
>Well, I consider "never breaking the rules" to be a bad guide - in
>almost any situation, especially in software design.


I know this is kind of off-topic for Ruby-Talk, but I had to jump in.

The deal with XP is that you are not supposed to break the rules, but you
are supposed to remember that they are just rules. What that means is
that you are supposed to reflect periodically on whether the rules are
still valid and applicable. If they are not, then you are supposed to
change them. Hence you never _break_ any rules; you are continually
changing the rules to match what is sensible for your circumstances.

As for optimizing code, remember that you have to have a failing test.
If you have a test that fails because the code is too slow, then you are
allowed to have duplicate logic, because "all tests pass" has the highest
precedence of all of the rules of simple design. So if the code looks
like it came from an obfuscation contest but it is the only way to pass
the test, that is OK. You will probably have to write a separate
Technical Memo to explain what the code really does, and everyone else
on the project will at one time or another try to find a cleaner but
just as fast design & implementation so that they can clean up the mess.
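For concreteness, here is the kind of test I mean -- just a sketch in
Ruby, where the Report class, the input size and the 0.5 second budget
are all invented for illustration, not taken from anyone's real project:

    require 'test/unit'
    require 'benchmark'

    # Stand-in class so the example runs on its own; imagine this is
    # the clean-but-slow production code.
    class Report
      def initialize(rows)
        @rows = rows
      end

      def total
        @rows.inject(0) { |sum, r| sum + r }
      end
    end

    class ReportPerformanceTest < Test::Unit::TestCase
      # The customer-specified budget. When the clean version blows it,
      # this failing test is what licenses the optimization work.
      def test_total_meets_time_budget
        rows = (1..100_000).to_a
        elapsed = Benchmark.realtime { Report.new(rows).total }
        assert(elapsed < 0.5,
               "Report#total took #{elapsed}s, budget is 0.5s")
      end
    end

Until a test like that goes red, the rules of simple design say you leave
the clean version alone.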

The deal, though, is that you have to get the customer to specify a
performance test that the clean version of the code cannot pass before
you get to tune the code. The rule is "make it run, make it right, make
it fast": optimization comes last unless slow means a failing test.
Although the XP community has revisited that rule many times, on
reflection it has always survived. Yes, it means that you write a clean,
slow version first and then, with the help of profilers, optimize it as
needed, but you always have the clean, slow version to test the
optimized version against and to enable everyone else to understand what
the optimized code is doing.
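
To make that last point concrete, here is another small sketch (the Fib
module and its method names are made up for the example) of keeping the
clean version around as the oracle for the tuned one:

    require 'test/unit'

    module Fib
      # Clean but exponential-time version: "make it right".
      def self.slow(n)
        n < 2 ? n : slow(n - 1) + slow(n - 2)
      end

      # Tuned iterative version, kept only because a performance test
      # demanded it.
      def self.fast(n)
        a, b = 0, 1
        n.times { a, b = b, a + b }
        a
      end
    end

    class FibEquivalenceTest < Test::Unit::TestCase
      # The slow version is the specification; the fast one must agree
      # with it on every input we can afford to check.
      def test_fast_matches_slow_on_small_inputs
        (0..20).each do |n|
          assert_equal(Fib.slow(n), Fib.fast(n), "disagree at n=#{n}")
        end
      end
    end

That way the tuned code is never the only statement of what the
behaviour is supposed to be.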

Cheers, Pete
----
Pete McBreen, McBreen.Consulting , Cochrane, AB
email: petemcbreen / acm.org    http://www.mcbreen.ab.ca/

Author, "Software Craftsmanship The New Imperative"
Addison-Wesley (C) 2002
http://www.amazon.com/exec/obidos/ASIN/0201733862