Horacio Sanson wrote:
> I have a little web spider that scrapes several web pages. Sometimes the
> script gets a Bad File Descriptor error and the script bails out.
> 
> As far as I can understand, this error is an OS (Windows XP) error and
> there is nothing Ruby can do to avoid it (maybe XP cannot handle so many
> HTTP connections so rapidly). But I cannot find a way to recover from the
> error... I don't want the script to bail out, but simply continue with
> the next page.
> 
> I have tried to rescue and try/catch the error with all imaginable
> exception classes, but the script always bails out when this error
> occurs. I know this error is in the Ruby net/http library, since I have
> used Mechanize, http-access2, and http-access, and all of them suffer
> from this error.

Did you try this?

begin
  # Read stuff etc.
rescue Errno::EBADF
  # Handle the failure, e.g. log it and move on
end
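
As a rough sketch of how that could look in your loop (I'm assuming the
URLs live in an array and that get_page here is just a stand-in for
whatever Net::HTTP fetch you actually do), the idea is to put the rescue
around the per-page fetch so one bad descriptor only skips that page:

require 'net/http'
require 'uri'

# Hypothetical helper standing in for your get_page method.
def get_page(url)
  Net::HTTP.get_response(URI.parse(url)).body
end

urls = ['http://example.com/a', 'http://example.com/b']  # placeholder list

urls.each do |url|
  begin
    page = get_page(url)
    # ... process the page ...
  rescue Errno::EBADF => e
    # Skip this page and carry on instead of bailing out.
    warn "Skipping #{url}: #{e.message}"
    next
  end
end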

> Here are the details of the error
> 
> <snip />
> 
> This code is executed for each page (~100). Adding rescue or try/catch
> anywhere inside the get_page method, or around it when called, does not
> catch the error... the script always stops when the error occurs. Also,
> since this error is very sporadic, I cannot reproduce it, which makes it
> very difficult to debug.
> 
> Any tips are very appreciated.
> 
> Horacio


-- 
Posted via http://www.ruby-forum.com/.