On Aug 17, 2007, at 11:59 AM, Jos Backus wrote:

> Hi. In order to be able to run swiftiply_mongrel_rails under daemontools
> I patched swiftiply_mongrel_rails to use Ara's Slave library. The patch
> does essentially this:
>
>     slaves = []
>     3.times do |i|
>       require 'slave'
>       require "#{File.dirname(__FILE__)}/mongrel_rails"
>       slaves << Slave.object(:async => true) {
>         Mongrel::Runner.new.run(args) # args == similar to mongrel_rails
>                                       # command line args
>       }
>     end
>     slaves.each {|t| t.join}

what is this code supposed to do exactly?  slave.rb puts an object in
another process which cannot outlive its parent, but that object is meant
to be used: you are expected to keep a handle to it.  if
Mongrel::Runner.new.run never returns, then all those slaves are
essentially half-baked: the parent never gets a reference to their
objects - in particular, the lifelines (sockets) are never completely set
up.  so essentially all this code amounts to little more than

   fork { Mongrel .... }

without ever setting up collection of the child.  however, i'm not 100%
clear on how mongrel implements run, nor on what you are really trying to
do here.
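
for reference, the way slave.rb expects to be used is more along these
lines (just a rough sketch from memory - SomeServer is a made-up stand-in,
not anything in your code):

   require 'slave'

   class SomeServer
     def pid() Process.pid end
   end

   # the block is evaluated in the child and returns an object quickly;
   # the parent keeps the handle and talks to that object over drb
   slave  = Slave.new { SomeServer.new }
   server = slave.object
   p server.pid     # proxied call into the child process

the contrast with your loop is that the block returns almost immediately -
if it never returns, the handshake never completes and you never get a
usable handle back.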

>
> (See http://rubyforge.org/pipermail/swiftiply-users/2007-August/000054.html
> for the actual patch).
>
> Note that Mongrel::Runner.new.run never returns, hence the use of the
> :async option. It is just a wrapper around mongrel_rails.

all :async does is wrap your code in a thread.
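
i.e. roughly the difference between doing the setup inline and doing
something like this (do_the_setup is just a stand-in name here):

   # :async pushes the work into a thread so the caller isn't blocked...
   t = Thread.new { do_the_setup }
   # ...and you join on it later when you actually need the result
   t.join

it does not change what the block itself does: a block that never returns
still never returns, thread or no thread.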

>
> When a SIGTERM is sent to the swiftiply_mongrel_rails process, the
> following output is seen:

> http://pastie.caboo.se/88665


yeah - that makes sense: slave.rb is trying to exit because the parent
has died - the question is why SystemExit is being rescued.  perhaps a
blanket 'rescue Exception' somewhere in swiftiply?
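
if that guess is right, the general shape of the failure is something
like this (contrived - the at_exit stands in for the socket cleanup, and
the KILL at the end just stands in for whatever finally takes the process
down):

   at_exit { puts 'cleanup' }        # never printed below

   begin
     exit                            # what slave.rb does when the parent dies
   rescue Exception => e
     puts "swallowed #{e.class}"     # a blanket rescue catches SystemExit too
   end

   # the process limps on past the rescued exit; if it then dies hard,
   # the at_exit block never runs and the socket is left behind
   Process.kill('KILL', Process.pid)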


>
> This shows swiftiply_mongrel_rails as well as its slaves exiting, and
> daemontools subsequently restarting swiftiply_mongrel_rails.
>
> Now for the problem: after this, /tmp holds 3 slave_proc_* UNIX domain
> sockets that have not been unlinked. Subsequent restarts yield more
> slave_proc_* sockets, and so on.
>
> So my question is: what could cause these sockets not to be removed? Am
> I using Slave incorrectly? I instrumented slave.rb to make sure the
> Kernel.at_exit method is called and it is - it's just that the associated
> block isn't executed in the SIGTERM case. It works fine when a SIGUSR2
> signal is sent - the sockets are cleaned up as expected. A simple test
> script suggests that Kernel.at_exit works okay on the platform (ruby
> 1.8.5 from the CentOS testing repo on CentOS 4).
>
> Any help is appreciated...

for SIGTERM the socket should be cleaned up by the parent process -
however it seems like the normal exit chain is being interrupted by
swiftiply, since SystemExit is being rescued.  can you find/show the code
that is rescuing SystemExit and see what's happening there: are the normal
exit handlers getting called when SystemExit is raised?  in other words,
the code should look vaguely like this

cfp:~ > cat a.rb
begin
   exit 42
rescue Exception => e
   p e.class
   p e.status
   exit e.status if SystemExit === e
   p :something_else
end


cfp:~ > ruby a.rb
SystemExit
42
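
fwiw, if you drop an at_exit into that same pattern you can see that the
handlers - which is what the socket cleanup hangs off of - still fire as
long as the SystemExit is allowed to proceed.  a contrived example (b.rb
is made up just for illustration):

cfp:~ > cat b.rb
at_exit { puts 'at_exit handler ran' }   # stand-in for the socket cleanup

begin
   exit 42
rescue Exception => e
   exit e.status if SystemExit === e     # let the exit go through
   warn "unexpected: #{e.class}"
end

cfp:~ > ruby b.rb
at_exit handler ran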

regards.


a @ http://drawohara.com/
--
we can deny everything, except that we have the possibility of being  
better. simply reflect on that.
h.h. the 14th dalai lama