Francis Cianfrocca wrote:
> In general, the problem of architecting a high-performance web server that
> includes external dependencies like databases, legacy applications, SOAP,
> message-queueing systems, etc etc, is a very big problem with no simple
> answer. It's also been intensely studied, so there are resources for you all
> over the web.

In other words, Robert Heinlein's TANSTAAFL principle holds up in this 
domain, as it does in so many others: "There Ain't No Such Thing As A 
Free Lunch!"

Ironically, I was invited a couple of weeks ago to a seminar on 
concurrency titled "The Free Lunch Is Over". What was ironic about it 
was that I couldn't attend because I had a prior commitment -- a service 
anniversary cruise with my employer, at which I received a free lunch. :)

> I made that remark in relation to efforts to make network servers run faster
> by hosting them on multiprocessor or multicore machines. You'll usually find
> that a single computer with one big network pipe attached to it won't be
> able to process the I/O fast enough to keep all the cores busy. You might
> then be tempted to host the DBMS on the same machine, but that's rarely a
> good idea. Simpler is better.

Up to a point, yes, simpler is better. But the goal of system 
performance engineering of this type is to have, as much as possible, a 
balanced system -- network, disk and processor utilizations 
approximately equal and none of them saturated. That's the "sweet spot" 
where you get the highest throughput for the lowest cost.
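If you want to eyeball where you stand, here's a minimal sketch 
(Linux-only, Python) that diffs /proc counters over a few seconds. The 
device names ("sda", "eth0") and the 1 Gbit/s link capacity are 
assumptions -- substitute whatever your own host actually has:

#!/usr/bin/env python3
# Quick-and-dirty balance check for a Linux host: sample CPU, disk, and
# network utilization by diffing /proc counters over an interval.
# DISK, NIC, and LINK_BPS below are placeholders -- set them for your box.
import time

DISK = "sda"             # block device to watch (assumption)
NIC = "eth0"             # network interface to watch (assumption)
LINK_BPS = 125_000_000   # 1 Gbit/s expressed in bytes/sec (assumption)
INTERVAL = 5.0           # seconds between samples

def cpu_ticks():
    # First line of /proc/stat: aggregate CPU times in USER_HZ ticks.
    fields = open("/proc/stat").readline().split()[1:]
    ticks = list(map(int, fields))
    idle = ticks[3] + ticks[4]   # idle + iowait
    return idle, sum(ticks)

def disk_io_ms(dev):
    # /proc/diskstats field 13 is cumulative ms the device spent doing I/O.
    for line in open("/proc/diskstats"):
        parts = line.split()
        if parts[2] == dev:
            return int(parts[12])
    raise ValueError("no such device: " + dev)

def nic_bytes(iface):
    # /proc/net/dev: rx bytes is the first column after the colon,
    # tx bytes the ninth. Lumping rx+tx together is crude but serviceable.
    for line in open("/proc/net/dev"):
        if line.strip().startswith(iface + ":"):
            cols = line.split(":")[1].split()
            return int(cols[0]) + int(cols[8])
    raise ValueError("no such interface: " + iface)

idle0, total0 = cpu_ticks()
io0, net0 = disk_io_ms(DISK), nic_bytes(NIC)
time.sleep(INTERVAL)
idle1, total1 = cpu_ticks()
io1, net1 = disk_io_ms(DISK), nic_bytes(NIC)

cpu_util = 1.0 - (idle1 - idle0) / (total1 - total0)
disk_util = (io1 - io0) / (INTERVAL * 1000.0)
net_util = (net1 - net0) / (INTERVAL * LINK_BPS)

print(f"cpu {cpu_util:6.1%}  disk {disk_util:6.1%}  net {net_util:6.1%}")

If one of the three numbers is pinned near 100% while the others loaf, 
that's the bottleneck to attack first; if they all hover in the same 
neighborhood, none of them saturated, you're near the sweet spot.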

If your workload is well-behaved, you can sometimes get there "on 
average over a workday". But web server workloads are anything but 
well-behaved, even in the absence of deliberate denial-of-service 
attacks. :)