> -----Original Message-----
> From: Ashley Moran [mailto:work / ashleymoran.me.uk] 
> Sent: Tuesday, August 29, 2006 12:17 PM
> To: ruby-talk ML
> Subject: Evading the limit of a pipe's standard input
> 
> Hi
> 
> I'm trying to write a tool that generates a really long SQL script  
> and passes it to psql, but I've hit a problem because my script is  
> too long to be sent to the stdin of psql.  This is the first time  
> I've run into the limit so it had me scratching my head for a 
> while.   
> I've tried a load of tricks, even putting the lines in an array, eg:
> 
>    MAX_SUCCESSFUL_TIMES = 3047
> 
>    query = ["BEGIN WORK;"]
>    (MAX_SUCCESSFUL_TIMES + 1).times do
>      query << "INSERT INTO people (name) VALUES ('Fred');"
>    end
>    query << "COMMIT;"
> 
>    IO.popen("psql -U test test","r+") do |pipe|
>      query.each { |statement| pipe.puts statement }
>    end
> 
> but it still fails when the total length of commands exceeds the  
> limit (which by experiment I've found to be 128K on Mac OS X, hence  
> the specific number of times above).
> 
> What's the best solution to this?  I would like to stick to inter-
> process communication, and avoid temporary files and rewriting it to  
> use DBD, if possible.  Or are those my only options?

Make sure the command you spawn in IO.popen is actually reading from
its stdin in parallel. The only limit a pipe has at the system level is
the amount of unread data its buffer can hold. As data is read out,
room is made for more. However, if the reader is stuck for some reason,
the writer will block too.
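A related pitfall in the original snippet: nothing ever reads psql's
stdout, so once the child's output fills its own pipe buffer, the child
stalls and then the writer stalls. A minimal sketch of draining the
child's output in a background thread while writing (using "cat" as a
stand-in for psql, so the example is self-contained):

```ruby
# Generate more data than a typical pipe buffer (64K on many systems)
# so the example would deadlock without a concurrent reader.
lines = Array.new(10_000) { "INSERT INTO people (name) VALUES ('Fred');" }

output = nil
IO.popen("cat", "r+") do |pipe|
  # Drain the child's stdout concurrently so neither side blocks
  # on a full pipe buffer.
  reader = Thread.new { pipe.read }
  lines.each { |l| pipe.puts l }
  pipe.close_write          # send EOF so the child can finish
  output = reader.value     # wait for all of the child's output
end

puts output.lines.count     # => 10000, every line made it through
```

The same pattern applies with psql in place of cat; the point is only
that writing and reading must overlap when both pipes can fill up.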

If psql really accepted only a limited amount of data on its stdin,
that would be psql's problem rather than the pipe's. To check, try
feeding the big file to psql via redirection, like in:

psql -U test test < "your big file"

Hope I am not too off from what you meant,
Gennady.