Issue #4560 has been updated by Charles Nutter.


This is an interesting one. JRuby recently changed how we generate backtraces, using the Java backtrace as the master copy. This means our backtraces are as expensive to generate as a full Java backtrace for the entire stack (think of generating a backtrace covering every Ruby, C, and intermediate call in a Ruby program). As a result, any algorithm that generates backtraces as part of normal flow control took a big perf hit.

On JRuby master, I've made a change that does not generate backtraces for EAGAIN, to avoid the overhead of generating one for the expected case of read_nonblock having nothing available. But it's a bit of a band-aid. Even just *creating* an exception can weigh heavily in a tight loop over read_nonblock when there's nothing available, and of course the disabled backtrace could annoy someone if it leaked out (JRuby points them to a flag to turn the backtraces back on). I'm not sure what the best long-term solution is.
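To make the cost concrete, here's a minimal sketch of the pattern in question (a UNIXSocket pair standing in for a network socket): every empty read surfaces as an IO::WaitReadable exception, so a tight poll loop pays for exception creation, and previously a full backtrace, on each iteration.

```ruby
require 'socket'

r, w = UNIXSocket.pair

attempts = 0
data = nil
3.times do
  begin
    data = r.read_nonblock(4096)
  rescue IO::WaitReadable
    attempts += 1                      # expected case: nothing to read yet,
                                       # yet a whole exception was built
    w.write("hello") if attempts == 2  # simulate data arriving later
  end
end
# data  => "hello", after two exception-raising empty reads
```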

Also, the 1.9 practice of mixing WaitReadable into the raised exception is really dreadful. It's bad enough that JRuby has to construct a new singleton class for every raised exception, but the method-cache effects in 1.9 are really painful.
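The singleton-class problem can be seen directly (a simplified sketch; 1.9's actual raise site does the equivalent of this extend internally):

```ruby
# One exception object per "would block" event; each extend mints a
# brand-new singleton class for that one object.
e1 = Errno::EAGAIN.new.extend(IO::WaitReadable)
e2 = Errno::EAGAIN.new.extend(IO::WaitReadable)

e1.is_a?(IO::WaitReadable)               # => true
e1.singleton_class == e2.singleton_class # => false: two throwaway classes
```

Each of those throwaway classes participates in method-cache invalidation and must eventually be GC-ed, which is the cost described above.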
----------------------------------------
Feature #4560: [PATCH] lib/net/protocol.rb: avoid exceptions in rbuf_fill
http://redmine.ruby-lang.org/issues/4560

Author: Eric Wong
Status: Open
Priority: Low
Assignee: 
Category: lib
Target version: 1.9.x


Blindly hitting IO#read_nonblock() and raising is expensive due
to two factors:

1) the method cache being scanned/cleared when the
   IO::WaitReadable-extended singleton class is GC-ed
2) backtrace generation

This reduces the likelihood of an IO::WaitReadable exception,
but spurious wakeup can still occur due to bad TCP checksums.

This optimization only applies to non-OpenSSL sockets.  I am
using IO#wait here instead of IO.select() since IO#wait is not
available on OpenSSL sockets.
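The approach can be sketched as follows (fill_buffer is a hypothetical stand-in for rbuf_fill, simplified from the buffered-read logic in net/protocol): wait for readability first, so IO::WaitReadable is only raised on a spurious wakeup rather than on every empty read.

```ruby
require 'socket'
require 'io/wait'
require 'timeout'

# Hypothetical helper illustrating the patch's strategy: block in
# IO#wait until data should be readable, then read_nonblock.
def fill_buffer(io, timeout = 5)
  begin
    io.wait(timeout) or raise Timeout::Error, 'read timeout'
    io.read_nonblock(4096)
  rescue IO::WaitReadable
    retry # spurious wakeup (e.g. bad TCP checksum): readable but no data
  end
end

r, w = UNIXSocket.pair
w.write('response body')
data = fill_buffer(r) # => "response body", no exception in the common path
```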



-- 
http://redmine.ruby-lang.org