On Thu, Oct 04, 2007 at 02:39:28AM +0900, MenTaLguY wrote:
> On Thu, 4 Oct 2007 01:42:22 +0900, Chad Perrin <perrin / apotheon.com> wrote:
> >> > That's true. However, very roughly, compute resource can scale about
> >> > linearly with compute requirement.
> >>
> >> What about Amdahl's law?
> > 
> > What about it?  Unless you're writing software that doesn't scale with
> > the hardware, more hardware means linear scaling, assuming bandwidth
> > upgrades.  If bandwidth upgrades top out, you've got a bottleneck no
> > amount of hardware purchasing or programmer time will ever solve.
> 
> Amdahl's law is relevant because most software _can't_ be written to
> scale entirely linearly with the hardware, because most computational
> problems are limited in the amount of parallelism they admit.  You may
> have been fortunate enough to have been presented with a lot of
> embarrassingly parallel problems to solve, but that isn't the norm.

Maybe not "entirely", but certainly close enough for government (or
corporate) work.  I was under the impression we were talking about
massive-traffic server-based systems here, where throwing more hardware
at the problem (in the sense of extra blades, or whatever) is an option.
I did not think we were talking about something like a desktop app where
opportunities for parallelism are strictly limited -- in which case I'd
agree that throwing more hardware at the problem is a non-starter.  Of
course, I don't know anyone who thinks endlessly adding processors to
your desktop system is the correct answer to a slow word processor.
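For concreteness: Amdahl's law says that with a serial fraction s, the
best possible speedup on n processors is 1 / (s + (1 - s)/n).  Here's a
quick Python sketch; the serial fractions are made-up illustrations of
the two kinds of workload, not measurements:

    # Amdahl's law: overall speedup on n processors, given serial fraction s.
    def amdahl_speedup(s, n):
        return 1.0 / (s + (1.0 - s) / n)

    # Assumed serial fractions -- illustrative only.
    workloads = [("request-parallel serving", 0.001),
                 ("parallelism-limited desktop app", 0.5)]

    for label, s in workloads:
        for n in (2, 10, 100):
            print("%-32s n=%3d -> %5.1fx" % (label, n, amdahl_speedup(s, n)))

The request-parallel case stays near-linear out to 100 nodes (about
91x), while the desktop case never gets past 2x no matter how many
processors you throw at it.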


> 
> >> > Alternatively, you can reduce the compute requirement by having a more
> >> > complex software system.
> >>
> >> While it's true that very simple systems can perform badly because
> >> they use poor algorithms and/or do not make dynamic optimizations,
> >> more complex software generally means increased computational
> >> requirements.
> > 
> > I thought "complex" was a poor choice of term here, for the most part.
> > It was probably meant as a stand-in for "more work at streamlining
> > design, combined with greater code cleverness needs to scale without
> > throwing hardware at the problem."
> 
> No argument there, as long as it's understood that there are limits to
> what can be achieved.  I don't want to discourage anyone from seeking
> linear scalability as an ideal, but it's not a realistic thing to
> promise or assume.

It's close enough (again) to "realistic" for many purposes.  When you
can get roughly linear scaling up to 100 times the original load, as
opposed to trying to wring similar scaling out of throwing programmers
(or programmer time) at the problem, that's certainly "realistic" in my
estimation.

Obviously I'm not saying that you should write crap code and throw
hardware at it.  On the other hand, there's a sweet spot for effort spent
in developing good, performant code -- and beyond that point, you should
consider throwing hardware at the problem.  In such circumstances, one of
the primary measures of quality code is "Does it scale in a roughly
linear manner when you add compute resources?"

-- 
CCD CopyWrite Chad Perrin [ http://ccd.apotheon.org ]
MacUser, Nov. 1990: "There comes a time in the history of any project when
it becomes necessary to shoot the engineers and begin production."