Yep, that's pretty much what I've found. The suggestion from Depesz to
use cursors worked wonderfully, though. Basically you wrap the query in
a transaction on the server side, and the cursor hands you x rows at a
time when you ask for them. Slurp-reading 1000 rows is quite a bit less
resource-intensive than 2 million. ;)
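
In case it helps anyone else, here's roughly what the cursor approach
looks like. This is only a sketch: the connection params, table name
and process() handler are placeholders, and I'm using the pg-style
method names (the older ruby-postgres bindings spell a few of these
differently, e.g. PGconn.connect and num_tuples):

  require 'pg'

  conn = PG.connect(dbname: 'mydb')  # placeholder connection params
  conn.exec('BEGIN')                 # cursors only live inside a transaction
  conn.exec('DECLARE big_cur CURSOR FOR SELECT * FROM big_table')
  loop do
    res = conn.exec('FETCH 1000 FROM big_cur')  # 1000 rows per round trip
    break if res.ntuples.zero?
    res.each { |row| process(row) }  # process() is a hypothetical handler
  end
  conn.exec('CLOSE big_cur')
  conn.exec('COMMIT')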

Helge

-----Original Message-----
From: Francis Cianfrocca [mailto:garbagecat10 / gmail.com] 
Sent: 4 August 2006 16:01
To: ruby-talk ML
Subject: Re: [postgres] Is there a way to avoid having the library
slurp-read the whole result-set?

On 8/4/06, ara.t.howard / noaa.gov <ara.t.howard / noaa.gov> wrote:
>
> it's been a while since i used the ruby postgres bindings but, when i
> did, both 'query' and 'exec' took blocks to iterate over result sets.
>
> have you tried?

Not sure that solves his problem, Ara. He wants to keep the library
from yanking the whole result set into RAM before his code starts
iterating over it.
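
A sketch of what that means in practice (pg-style names again, and
process() is a placeholder): libpq's plain query path buffers the
server's entire response client-side before any of your code runs, so
even the block form has everything in memory at once.

  # The block is just scoping sugar; by the time it runs, the entire
  # result set has already been read into RAM.
  conn.exec('SELECT * FROM big_table') do |res|
    res.each { |row| process(row) }  # res already holds all 2M rows
  end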