2013/3/30 Jason Gladish <jason / expectedbehavior.com>:
>
> I've found an issue where calling fork inside a thread, and passing a block
> to the fork, causes the forked process to continue after the block.  I've
> reproduced the issue on the following versions of ruby:

It seems the buffered IO data is flushed in several child processes.
Control itself doesn't continue after the block.

exit!(true) avoids the problem because it doesn't flush IO buffers.
Also, if you use f.syswrite instead of f.write, the problem disappears,
because syswrite bypasses Ruby's userspace buffering.
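A minimal sketch of the mechanism (the file name is an assumption for
illustration): the forked child inherits a copy of the parent's dirty IO
buffer, and both processes flush it, so the byte lands on disk twice.

```ruby
path = "/tmp/fork_flush_demo"     # hypothetical scratch file
File.open(path, "w") do |f|
  f.write "x"                     # one byte: stays in f's userspace buffer
  pid = fork do
    # the child inherits a copy of the dirty buffer;
    # a normal exit flushes it to the shared file description
  end
  Process.waitpid(pid)
  # the parent flushes the same "x" again when the block closes f,
  # so on MRI the file typically ends up containing "xx"
end
```

exit!(true) in the child, or f.syswrite in place of f.write, removes the
duplicate because neither path leaves data in the userspace buffer.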

The problem is reproduced more reliably by sleeping a second after f.write.

 1000.times do |j|
   puts "run #{j}"
   threads = []
   100.times do |i|
     threads << Thread.new(i) do |local_i|
       opid = fork do
         # exit!(true) # fixes the issue
         # exit(true) # doesn't fix the issue
         # no 'exit' also exhibits issue
       end
       ::Process.waitpid(opid, 0)
       File.open("/tmp/test_thread_fork_#{local_i}.pid", "w") {|f|
         f.write "1"
         sleep 1
         f.flush
        }
     end
   end
   threads.map { |t| t.join }

   borked = false
   100.times do |i|
     fn = "/tmp/test_thread_fork_#{i}.pid"
     contents = File.read(fn)
     if contents.length > 1
       puts "file #{fn} was written to many times (#{contents})"
       borked = true
     end
   end
   exit(false) if borked
 end

The problem could be solved if Ruby flushed all IO objects in the fork
method.  (Currently only STDOUT and STDERR are flushed.)
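In the meantime a caller can do the flushing by hand just before forking.
A sketch (flush_all_ios is a hypothetical helper, not part of Ruby; the
scratch path is also an assumption):

```ruby
# Hypothetical helper: flush every open IO object so a subsequent fork's
# child does not inherit dirty userspace buffers.
def flush_all_ios
  ObjectSpace.each_object(IO) do |io|
    begin
      io.flush unless io.closed?
    rescue IOError
      # some IO objects may refuse to flush; ignore them
    end
  end
end

f = File.open("/tmp/flush_all_demo", "w")   # assumed scratch path
f.write "1"                  # buffered, not yet on disk
flush_all_ios                # now on disk; fork would be safe here
on_disk = File.read(f.path)  # independent read sees the flushed byte
f.close
```

Note that this has exactly the cost mentioned below: ObjectSpace.each_object
scans every live object, which is why doing it inside fork itself is
questionable.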

I'm not sure we should do this, because scanning all objects would slow
down the fork method.
-- 
Tanaka Akira