I was doing some little tests with various new 1.8 idioms, in
particular with Array.new(size) {block}.

Anyway, it seems that Ruby is *really* fast[1] up to 10^6
elements, while it dies at 10^7.

The problem was this: create an array of N elements and stuff it with 
random values.

Time goes up linearly with N, but with N=10000000 (ten million) I get
a time that is 160 times greater than with N=1000000 (one million).

Further investigation shows that the problem is also present with
different approaches, whenever I go over 8*10^6 elements in my array.
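
For instance, a sketch of the kind of alternative I mean (illustrative,
not the exact code I timed) is a plain push loop:

# build the array with an explicit push loop instead of the
# Array.new block form
n = ARGV[0].to_i
a = []
n.times { a.push(rand) }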

Not that I'd commonly use a ten-million-element array in Ruby, but I
just wonder whether this is a well-known fact, or even a conscious
choice, or just strange behaviour.

If this is related to some constant in the interpreter (e.g. the GC
starts when there are 8*10^6 objects in the object space), is there a
way to control it dynamically?
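
The only dynamic knob I know of from Ruby code itself is
GC.disable/GC.enable (both standard methods), so a crude experiment
along these lines could at least tell whether the collector is the
culprit:

# same test, but with the collector switched off for the duration;
# note this may simply trade time for a lot of memory
GC.disable
a = Array.new(ARGV[0].to_i) { rand }
GC.enable

(I also notice that in the N=10^7 timing below, real is far larger
than user+sys, which might point at paging rather than pure GC cost.)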


----------snippet
[nickel@UltimaThule tmp]$ cat p.rb
Array.new(ARGV[0].to_i) {rand()}

[nickel@UltimaThule tmp]$ time -p ruby p.rb 100000
real 0.06
user 0.05
sys 0.02
[nickel@UltimaThule tmp]$ time -p ruby p.rb 1000000
real 0.74
user 0.68
sys 0.07
[nickel@UltimaThule tmp]$ time -p ruby p.rb 10000000
real 119.07
user 6.94
sys 6.93
[nickel@UltimaThule tmp]$ ruby -v
ruby 1.8.0 (2003-08-04) [i686-linux]



[1]
I mean this (the first set of timings below is Python, the second is
Ruby):
$ cat p.py
import sys
from random import random
y = []
for x in xrange(int(sys.argv[1])):
    y.append(random())

$ time -p python p.py 1000000; time -p ruby p.rb 1000000
real 7.50
user 6.51
sys 0.19
real 0.76
user 0.67
sys 0.10