On Aug 22, 2006, at 9:21 PM, Ben Johnson wrote:

> why the lucky stiff wrote:
>> On Wed, Aug 23, 2006 at 12:17:42PM +0900, Ben Johnson wrote:
>>> Also, the reason I am asking is that I did some tests and came to
>>> find out that curl is quite a bit faster than the HTTP library. Is
>>> this true, or were my tests distorted? curl seemed to be quite a
>>> bit faster in initializing the connection and downloading.
>>
>> The cURL library is indeed very fast, but it also suffers from a
>> problem that Net::HTTP suffers from: its DNS lookup is not
>> asynchronous and will block your process.  To overcome that, you'll
>> need c-ares[1], which will probably also need to be wrapped as an
>> extension.
>>
>> In my experience, Net::HTTP actually performs much better when you
>> use Ruby's non-blocking DNS resolver:
>>
>>   require 'resolv-replace'
>>
>> I wrote a cURL extension and benchmarked it against Net::HTTP with
>> resolv-replace and wasn't completely impressed with the speed
>> difference, so I abandoned the extension.
>>
>> _why
>>
>> [1] http://daniel.haxx.se/projects/c-ares/
>
> What do you mean by the DNS lookup is not asynchronous and will
> block my process? If I were to call curl directly from the command
> line using `curl` in Ruby, wouldn't that be much faster? In this
> instance it would get its own process and take better advantage of a
> dual-processor system. Am I correct? Because what I planned on doing
> was just using curl directly from the command line, unless there is
> a downside to this.

No, Kernel#` doesn't give the command its own concurrent process: it
blocks your current process and waits for the subprocess to return.
See Kernel#fork and Process.detach.
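As a minimal sketch of what I mean (Unix only, and with `sleep 1`
standing in for a long-running `curl` download), fork plus
Process.detach lets the parent carry on without waiting:

```ruby
# Minimal sketch, Unix only: run a slow command in a detached child so
# the current Ruby process never waits on it. `sleep 1` stands in for
# a long-running `curl` download.
pid = fork do
  exec("sleep", "1")          # child process is replaced by the command
end
Process.detach(pid)           # reaper thread collects the child; no zombie
puts "parent keeps running"   # printed immediately, not after one second
```

Compare that with `` `sleep 1` ``, which wouldn't return until the
whole second had elapsed.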

Also, there are some gems that could probably help you out. Ara T.  
Howard's slave[1] comes to mind.
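And for completeness, here's roughly what _why's resolv-replace
suggestion looks like in practice (the localhost lookup is just an
illustration):

```ruby
require 'resolv-replace'   # patch the socket classes to resolve names via Resolv
require 'resolv'

# With resolv-replace loaded, TCPSocket/Net::HTTP hostname lookups go
# through Ruby's pure-Ruby Resolv library instead of the blocking C
# resolver, so other Ruby threads keep running during a slow DNS query.
addr = Resolv.getaddress("localhost")
puts addr                  # the local loopback address
```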

Corey

1. http://codeforpeople.com/lib/ruby/slave/