2009/1/30 Peter Booth <pbooth / nocoincidences.com>:

> You have an answer, but it isn't the right answer.

I am not sure I understand what you mean by that, since you seem
rather to confirm what I wrote:

> An UltraSPARC T1 processor is a little more powerful than either of the
> Core 2 Duo CPUs you tested. You can see this from the published results for
> the SPECjbb2005 or the SPECweb benchmarks.

Colin was not interested in learning how much potential a SPARC CPU
has; he wanted to know why his SPARC was outperformed by an Intel box.

> The confusion is that, when using your application as a benchmark, you only
> make use of 5% of the CPU resources of the Sun box. The T1 processor only
> runs at a clock speed of 1 GHz but, being both multithreaded and multi-core,
> it gives you a total of 24 threads to push work through. If you used a
> benchmark that ran 24 or more instances of your application you could expect
> to see greater throughput from the Sun host than the Intel hosts.

As I said, there was just a single thread in the benchmark, and
single-threaded performance is where SPARC processors fail miserably.
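
Peter's suggestion is easy enough to try, by the way. A minimal Ruby
sketch, where run_benchmark is only a hypothetical stand-in for Colin's
actual single-threaded workload (I fork processes rather than create
Threads because on MRI that is what actually occupies several hardware
threads):

  N = 24  # hardware threads on the UltraSPARC T1 in question

  def run_benchmark
    # placeholder loop standing in for the real benchmark body
    1_000_000.times { |i| Math.sqrt(i) }
  end

  start = Time.now
  pids = Array.new(N) { fork { run_benchmark } }
  pids.each { |pid| Process.wait(pid) }
  puts "#{N} instances took #{Time.now - start} s"

With a single instance the T1 can use only one of its 24 hardware
threads; with 24 forked instances all of them are busy, and that is the
scenario where the SPARC box should pull ahead.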

> This is a great example of the "End of Moore's Law" issue. We are at the
> cusp of a change in hardware technology that could force all developers to
> learn about concurrency and parallelizing workloads. The difference between
> these two architectures is that Sun embraced the issue a little earlier than
> Intel.

Frankly, I am not too optimistic that concurrency will become
ubiquitous soon. There are several reasons for this: judging from what
I read in public forums, the concept seems to be difficult for many
people to grasp. Also, testing a multithreaded application is
significantly more complex than testing a single-threaded one. And,
lastly, there is a vast amount of existing software that scarcely uses
multithreading; in other words, the effort to convert it would be very
high.
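
To illustrate the testing problem with one contrived Ruby example (the
Thread.pass merely forces the unlucky interleaving that a scheduler may
or may not produce on its own):

  counter = 0
  threads = Array.new(10) do
    Thread.new do
      1_000.times do
        tmp = counter      # read shared state
        Thread.pass        # invite a context switch at the worst moment
        counter = tmp + 1  # write back, losing other threads' updates
      end
    end
  end
  threads.each(&:join)
  puts counter  # expected 10000, but almost always much less

A test suite can run code like this many times and still pass, because
the failure depends on scheduling and not on the inputs.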

Side note: when I was at university I picked a lecture on
communication in parallel computational models, because at that time my
university (Paderborn, Germany) had one of the largest multiprocessor
systems around and was recognized as strong in that area. I did not
follow that path further because it was easy to see that the theory was
still immature: the Big-O analysis came with large constants (i.e. an
algorithm would only start to pay off from a few million CPUs onward),
and network topologies had to be tailored to the algorithm.

> I agree with Robert that RISC has had its day in the sun (pun not intended),
> but I disagree with the suggestion that this is due to technical
> inferiority. The reality is that Sun sat pretty earning great margins for
> their hardware for more than a decade. No longer commercially relevant, it's
> ironic that they are now, perhaps for the first time, competitive on a
> performance vs price basis. But it's too late. It doesn't matter that Solaris
> on a Sun server is, in some ways, technically superior to a Linux on Intel
> platform.

IMHO Sun's good position is not attributable to fast CPU speeds but
rather to features that make Solaris systems good server systems:
reliability, fault tolerance, I/O performance, etc. I guess in practice
most applications that need to scale to large numbers of users require
large I/O bandwidth rather than CPU power (just think of typical web
applications like online shops).

> I would never choose Solaris today because it would be like buying a Beta
> VCR, or a NeXT cube in the 1980s.

Now you're getting more pessimistic about it than me. :-)

> It's sad that a company responsible for so
> many technical innovations isn't succeeding economically but that's life.
> Linux on Intel is the safe corporate choice today. Funny to remember that
> installing Slackware on a 386 in 1996 made me feel like a revolutionary.

Oh yes, I also remember those days when I copied Slackware onto 50+
floppy disks and installed it at home on my 386. Those were the days
when the phrase about the largest text adventure application
originated. :-)

Kind regards

robert

-- 
remember.guy do |as, often| as.you_can - without end