samuel / oriontransfer.net wrote:
> I've been playing around with my gem async and I've come to
> the conclusion that it is a great way to do IO, but it does
> have some cases that need to be considered carefully.

Right.

> In particular, when handling HTTP/2 with multiple streams,
> it's tricky to get good performance because utilising multiple
> threads is basically impossible (and this applies to Ruby in
> general). With HTTP/1, multiple "streams" could easily be
> multiplexed across multiple processes.

I'm no expert on HTTP/2, but I don't believe it was designed
with high throughput in mind.  By "high throughput", I mean
capable of maxing out the physical network or storage.

At least, multiplexing multiple streams over a single TCP
connection doesn't make any sense as a way to improve
throughput.  Rather, HTTP/2 was meant to reduce latency by
avoiding TCP connection setup overhead, and maybe avoiding
slow-start-after-idle (by having less idle time).  In other
words, HTTP/2 aims to make better use of a
heavy-in-memory-but-often-idle resource.

> What this means is that a single HTTP/2 connection, even with
> multiple streams, is limited to a single thread with the
> fiber-based/green-thread design.

I don't see that as a big deal because of what I wrote above.

> I actually see two sides to this: it limits bad connections to
> a single thread, which is actually a feature in some ways. On
> the other hand, you can't completely depend on multiplexing
> HTTP/2 streams to improve performance.

Right.

> On the other hand, any green-thread based design is probably
> going to suffer from this problem, unless a work pool is used
> for actually generating responses. In the case of
> `async-http`, it exposes streaming requests and responses, so
> this isn't very easy to achieve.

Exactly.  As I've been saying all along: use different concurrency
primitives for different things.  fork (or Guilds) for
CPU/memory-bound processing; green threads and/or nonblocking
I/O for low-throughput transfers (virtually all public Internet
stuff), native Threads for high-throughput transfers
(local/LAN/LFN).

So you could use a green thread to hand work off to the work
pool (forked processes), and still use a green thread to
serialize the low-throughput response back to the client.
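A rough sketch of that shape, using only stdlib fork and IO.pipe
(`handle` and `expensive_render` are made-up names here, and a real
fiber-based server would use nonblocking reads/writes where this
sketch blocks):

    require "socket"

    # The forked child does the CPU/memory-bound work; the calling
    # (green) thread only shuttles bytes back to the client.
    def handle(client)
      rd, wr = IO.pipe
      pid = fork do
        rd.close
        wr.write(expensive_render) # CPU/memory-bound work
        wr.close
      end
      wr.close

      # Low-throughput trickle back to the (possibly slow) client;
      # in a fiber-based reactor these reads/writes would yield to
      # other fibers instead of blocking the whole process.
      while chunk = rd.read(16_384)
        client.write(chunk)
      end
      rd.close
      Process.wait(pid)
    end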

This is also why it's desirable (but not a priority) to be able
to migrate green-threads to different Threads/Guilds for load
balancing.  Different stages of an application response will
shift from being CPU/memory-bound to low-throughput trickles.

> I've also been thinking about timeouts.
> 
> I've been thinking about adding a general timeout to all
> socket operations. The user can set some global default, (or
> even set it to nil). When the user calls `io.read` or
> `io.write` there is an implicit timeout. I'm not sure if this
> is a good approach, but I don't think it's stupid, since `io`
> operations are naturally temporal so some kind of default
> temporal limit makes sense.

Timeout-in-VM [Feature #14859] will be most optimized for apps
using the same timeout all around.  I'm not sure it's necessary
to add a new API for this if we already have what's in timeout.rb.

Also, adding a timeout arg to every single io.read/io.write call
is going to be worse for performance, because every timeout use
requires arming/disarming a timer, whereas a single
"Timeout.timeout" call arms and disarms the timer only once.
