> Those show the statistics for a small application IMO.

Sure.

> Once our application serves 200 million page views each day, and generates
> 4T of data in storage. In that case, the language is really sensitive, so
> we go with C/C++.

*shrug*

Data storage is irrelevant, since that's all carried by whatever
technology one is using for the data store -- there are no Ruby
performance implications there.

200,000,000 page views a day, though....  So what?  Architect the
_application_ right, and the scaling will follow (though sometimes
getting that architecture right is a bitch).  The devil is in the
details. I can say with confidence, though, that I could spin up an
Engine Yard Cloud instance tomorrow, duplicate the app I referenced
onto it, and, using a 100% Ruby stack (i.e. without even the EY
default nginx as a front-end web server), drive 200,000,000 page
views through it in a day on a single instance.

However, it's not a pissing contest. Someone can always come up with
instances where more speed is absolutely vital, requiring one to make
specific language choices to realize the needed speed.  Nor am I
saying that we shouldn't be thinking about performance when working on
the internals of MRI or any other Ruby implementation; we should!  But
the assertion that I was responding to -- that Ruby's performance (MRI
Ruby's anyway) is somehow inadequate for delivering very fast web
applications -- is clearly false.

To come around to the original line of inquiry, though... People are
on the right track by looking at Ruby 1.9 performance. One might
also look at Rubinius and JRuby.  However, the method of timing is
suspect.  Timings should be taken from _inside_ the code, wrapped
around the specific items being timed, so that differences in startup
costs are eliminated from the final numbers.  One should also, when
looking at JRuby, collect timings over multiple iterations so that
the optimizer has a chance to do its thing.
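A minimal sketch of that measurement style, using the standard
Benchmark module (the `work` method is a hypothetical stand-in for
whatever is actually being measured):

```ruby
require 'benchmark'

# Hypothetical workload standing in for the code under test.
def work
  (1..10_000).reduce(:+)
end

# Warm-up iterations, so a JIT (e.g. JRuby's) has a chance to
# optimize the hot path before the measured run.
1_000.times { work }

# Time only the items of interest, from inside the running process,
# so interpreter startup cost never enters the numbers.
elapsed = Benchmark.realtime do
  10_000.times { work }
end

puts format('10000 iterations: %.4fs', elapsed)
```

Running that under MRI 1.8, 1.9, Rubinius, and JRuby gives numbers
that compare the implementations themselves rather than their
startup overhead.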


Kirk Haines