On Sat, Mar 29, 2008 at 9:18 PM, M. Edward (Ed) Borasky
<znmeb / cesmail.net> wrote:
>  In fact, Dijkstra, in _A Discipline of Programming_, explicitly calls out
> that one must pay attention to the twin concerns of correctness and
> efficiency. And, while the hardware is orders of magnitudes faster and
> cheaper than it was when Dijkstra wrote that, throwing hardware at an
> inefficient design is still not economically viable in the long run and
> probably never will be.

In my experience, this varies with the domain.

For the business apps I presently work on, I often do things that are
10x to 100x slower than they could be - because the hardware power
required to run them is so much cheaper than my time would be, and I
still get response times that are within my requirements.  I presently
run the stuff we have over a distributed cluster of about 10 machines.
This could almost certainly have been reduced to two and possibly one
- at the cost of much, much more development time.  That would have
been a bad tradeoff.

For the games and embedded systems development I've done before, the
millions of units produced have often made computer time much more
expensive than my programmer time, so optimizing made a lot of sense -
even at the cycle-by-cycle level.

You write that you're a performance engineer by profession; in that
profession, you'll mostly encounter the cases where performance IS an
issue, as nobody who has acceptable performance will throw a
performance engineer at the problem.  So your sample will be skewed,
even though you'll have hit a lot more performance cases than I have.

Eivind.