Chad Perrin wrote:
> On Thu, Jul 27, 2006 at 04:59:13AM +0900, Francis Cianfrocca wrote:
>> orders of magnitude faster. You can even try to break your computation 
>> up into multiple stages, and stream the intermediate results out to 
>> temporary files. As ugly as that sounds, it will be far faster.
> 
> One of these days, I'll actually know enough Ruby to be sure of what
> language constructs work for what purposes in terms of performance.  I
> rather suspect there are prettier AND better-performing options than
> using temporary files to store data during computation, however.

Ashley was talking about 1GB+ datasets, iirc. I'd love to see an 
in-memory data structure (Ruby or otherwise) that can sling a few of 
those around without breathing hard. And on most machines you're going 
through the disk anyway with a dataset that large, since it thrashes 
your virtual memory. So why not take advantage of the tuning that's 
built into the I/O channel?
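Something like the staged approach Francis described might look like this in Ruby. The transform and the reduction are hypothetical placeholders, just to show the shape: each stage streams line by line, so only one line is ever resident, and the intermediate result lives in a temp file rather than the heap.

```ruby
require 'tempfile'
require 'stringio'

# Stage 1: stream the input line by line, writing intermediate results
# to a temp file instead of building a giant in-memory structure.
def stage_one(input_io, tmp)
  input_io.each_line do |line|
    tmp.puts(line.strip.reverse)  # hypothetical per-line transform
  end
  tmp.flush
  tmp.rewind
end

# Stage 2: stream the intermediate file back and reduce it; here just a
# line count, standing in for whatever the next stage really computes.
def stage_two(tmp)
  count = 0
  tmp.each_line { count += 1 }
  count
end

# Tiny stand-in dataset; with a 1GB+ file, still only one line at a time
# is held in memory.
input = StringIO.new("alpha\nbeta\ngamma\n")
Tempfile.create('stage') do |tmp|
  stage_one(input, tmp)
  puts stage_two(tmp)  # => 3
end
```

Ugly next to a one-liner over an Array, sure, but the working set stays tiny no matter how big the input is.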

If I'm using C, I always handle datasets that big with the kernel VM 
functions (mmap and friends), which are generally faster than going 
through the stdio layer. I don't know how to do that portably in Ruby 
(yet).
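There's no mmap in the Ruby standard library (a third-party extension wraps the syscall, but that's not portable in the sense meant here). A rough stdlib approximation, not a real memory map, is windowed access with seek and read, which at least keeps the whole file out of the Ruby heap and lets the kernel's page cache do the heavy lifting:

```ruby
require 'tempfile'

# mmap-style random access in plain Ruby: seek to an offset and read
# only a fixed window, so the whole file never enters the Ruby process.
def read_window(path, offset, length)
  File.open(path, 'rb') do |f|
    f.seek(offset)
    f.read(length)
  end
end

# Demo on a tiny temp file; the same call works unchanged on a 1GB+ file.
Tempfile.create('demo') do |tmp|
  tmp.write('hello world')
  tmp.flush
  puts read_window(tmp.path, 6, 5)  # => world
end
```

You lose mmap's zero-copy pointer semantics, but you keep the "touch only what you need" access pattern.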

-- 
Posted via http://www.ruby-forum.com/.