So while your process is still running, the connections are in TIME_WAIT,
and then they vanish as soon as your process dies? That means they are being
closed as you expect. A closed TCP connection spends a little time in the
TIME_WAIT state, but as I said above, on a localhost connection it's usually
very short or zero. Try your program on an actual network connection to
another machine. When your process ends, the TIME_WAITing connections should
NOT vanish, but will stick around for a short period of time (usually not as
long on Windows as on Unix).
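You can watch the mechanism directly: each connect/close cycle burns a fresh ephemeral local port, and that pool is what runs out. A minimal sketch (the throwaway loopback server here is invented for illustration, not part of your benchmark):

```ruby
require 'socket'

# Sketch: every short-lived client connection is assigned a new ephemeral
# local port by the OS, and the closed connection lingers in TIME_WAIT on
# whichever side closed first. Exhausting that port range is what produces
# the connect failures discussed in this thread.
server = TCPServer.new('127.0.0.1', 0)      # port 0: let the OS pick a free port
port = server.addr[1]
Thread.new { loop { server.accept.close } } # accept and immediately close

local_ports = 5.times.map do
  sock = TCPSocket.new('127.0.0.1', port)
  local_port = sock.addr[1]                 # the ephemeral port the OS assigned
  sock.close
  local_port
end
puts local_ports.inspect                    # five ports from the ephemeral range
```

On Windows XP the ephemeral range defaults to roughly 1026-5000, which is why netstat shows that window filling up.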

If that happens, it's a sign that you should probably rethink your design.
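The usual redesign is to stop opening a new TCP connection per call and reuse one session instead. A hedged sketch with plain Net::HTTP (the tiny in-process HTTP server below is made up purely so the example is self-contained; it is not your XML-RPC server):

```ruby
require 'net/http'
require 'socket'

# Toy keep-alive HTTP server, invented for illustration only.
server = TCPServer.new('127.0.0.1', 0)           # port 0: let the OS pick
port = server.addr[1]

Thread.new do
  conn = server.accept
  while conn.gets                                # request line; nil = client gone
    while (h = conn.gets) && h != "\r\n"; end    # discard request headers
    conn.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
  end
  conn.close
end

# One TCP connection for all 100 calls: Net::HTTP.start keeps the socket
# open for the duration of the block, so only one ephemeral port is used.
bodies = []
Net::HTTP.start('127.0.0.1', port) do |http|
  100.times { bodies << http.get('/').body }
end
puts bodies.uniq.inspect                         # => ["ok"]
```

With one persistent connection there is only a single socket to land in TIME_WAIT at the end, instead of thousands.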

On 6/4/06, Alex Young <alex / blackkettle.org> wrote:
>
> Francis Cianfrocca wrote:
> > Are you sure your connections are being closed after each call to
> > client.call(...)?
> A quick hunt through the Net::HTTP code seems to indicate so.  If I'm
> reading it right, there's a call to Net::HTTP::Post#end_request which
> closes the socket client-side.
>
> > You may be hitting a per-process limit on open
> > descriptors. Are you running netstat while your process is running or
> > after it ends with the EBADF error? If the latter, then try catching
> > EBADF and put in a long sleep, then look at netstat on a different
> > shell. Localhost connections don't usually need to spend much time in
> > the TIME_WAIT state.
> They are still alive, but vanish as soon as the process dies, which
> would seem to indicate that either the call to close the socket is
> wrong, or it's not being respected.
>
> --
> Alex
>
> >
> >
> > On 6/2/06, Alex Young <alex / blackkettle.org> wrote:
> >>
> >> Hi all,
> >>
> >> I'm trying to benchmark a few HTTP server types on Windows (XP Home,
> >> specifically - don't ask why), and I've hit a snag with this code:
> >>
> >> ------------------------------
> >>
> >> require 'xmlrpc/server'
> >> require 'xmlrpc/client'
> >> require 'benchmark'
> >>
> >> class ServerObject < XMLRPC::Server
> >>    def initialize
> >>      super(8080)
> >>      @server.config[:AccessLog] = [['', '']]
> >>      self.add_handler('benchmark.simple') do
> >>        test()
> >>      end
> >>    end
> >>    def test
> >>      'test'
> >>    end
> >> end
> >>
> >> test_obj = ServerObject.new
> >> serving_thread = Thread.new{ test_obj.serve }
> >>
> >> client = XMLRPC::Client.new('127.0.0.1', '/', '8080')
> >>
> >> n = 2000
> >> Benchmark.bmbm(20) do |b|
> >>    b.report('Direct RPC')  { for i in 1..n;
> >>             client.call('benchmark.simple'); end }
> >> end
> >>
> >> -------------------------
> >>
> >> The problem is that with n that high, I get an
> >>
> >>    c:/ruby/lib/ruby/1.8/net/http.rb:562:in `initialize': Bad file
> >> descriptor - connect(2) (Errno::EBADF)
> >>          from c:/ruby/lib/ruby/1.8/net/http.rb:562:in `connect'
> >>         ...
> >>         from c:/ruby/lib/ruby/1.8/xmlrpc/client.rb:535:in `do_rpc'
> >>
> >> error during the second round.  Looking at netstat -a afterwards, I see
> >> almost every local port in the range 1026-5000 in the TIME_WAIT state.
> >> That's a suspiciously round number, and I suspect there's a
> >> 'client_port_max=5000' setting somewhere.  That's not what bothers me.
> >> Why are these ports waiting, and how can I close them, or reduce their
> >> timeout value?  I'd rather not insert 30 second waits all over the
> >> place if that's enough of a delay...
> >>
> >> Any tips?  Moving to a different OS is not, unfortunately, an option,
> >> although shifting up to XP Pro might be in a pinch.
> >>
> >> --
> >> Alex
