samuel / oriontransfer.org wrote:
> I found an interesting summary of EPOLLET, which I think explains it better than I did: https://stackoverflow.com/a/46634185/29381 Basically, it minimises OS IPC.

Minimize syscalls, you mean.  I completely agree EPOLLET results
in the fewest syscalls.  But again, that falls down when you
have aggressive clients that pipeline requests while reading
large responses slowly.
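
To illustrate for bystanders: an ET consumer has to drain the
socket until EAGAIN or it risks losing the wakeup, so one
fast-writing client can pin a worker in that loop.  Rough sketch
(error handling elided; handle_bytes is a stand-in for app code):

  #include <errno.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* fd was registered with EPOLLIN|EPOLLET.  ET only reports
   * readiness *changes*, so we must read until EAGAIN or the
   * wakeup is lost.  handle_bytes() is a stand-in for app code. */
  static void drain_et(int fd, void (*handle_bytes)(const char *, size_t))
  {
      char buf[16384];

      for (;;) {
          ssize_t n = read(fd, buf, sizeof(buf));

          if (n > 0)
              handle_bytes(buf, n); /* a pipelining client can
                                       keep us spinning here */
          else if (n < 0 && errno == EAGAIN)
              break; /* drained; wait for the next edge */
          else
              break; /* EOF or real error; caller closes fd */
      }
  }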

> > According to Go user reports, being able to move goroutines
> > between native threads is a big feature to them. But I don't
> > think it's possible with current Ruby C API, anyways :<

> By definition Fibers shouldn't move between threads. If you
> can move the coroutine between threads, it's a green thread
> (user-scheduled thread).

I don't care for those rigid definitions.  They're all just
bytes scheduled in userland rather than by the kernel.
"Auto-fiber" and green thread are the same to me, so this
feature might become "green thread".

> deadlocks and other problems of multiple threads. And as you
> say, GVL is a big problem so there is little reason to use it
> anyway.

Again, native threads are still useful despite the GVL.

> > Fwiw, yahns makes large performance sacrifices(*) to avoid HOL
> > blocking.

> And yet it has 2x the latency of `async-http`. Can you tell me
> how to test it in a more favourable configuration?

yahns is designed to handle apps with both slow and fast
endpoints simultaneously.  Given N threads running, (N-1) of
them may be stuck servicing slow endpoints while the Nth one
remains free to service ANY other client.
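
Roughly, the design looks like this (sketch from memory, not
actual yahns code; serve_client is a made-up stand-in):

  #include <sys/epoll.h>

  extern void serve_client(int fd); /* stand-in for app logic */

  /* each worker thread calls epoll_wait on the SAME epoll fd;
   * EPOLLONESHOT delivers a client to at most one thread at a
   * time, so a thread stuck on a slow client never prevents its
   * siblings from picking up other ready clients */
  static void *worker(void *arg)
  {
      int epfd = *(int *)arg;
      struct epoll_event ev;

      for (;;) {
          if (epoll_wait(epfd, &ev, 1, -1) != 1)
              continue; /* EINTR and such */

          serve_client(ev.data.fd); /* may be slow; that's fine */

          /* rearm; the kernel disabled this fd when it fired */
          ev.events = EPOLLIN | EPOLLONESHOT;
          epoll_ctl(epfd, EPOLL_CTL_MOD, ev.data.fd, &ev);
      }
      return NULL;
  }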

Again, having max_events>1 as I mentioned in my previous email
might be worth a shot for benchmarking.  But I would never use
that for apps where different requests can have different
response times.

> > The main thing which bothers me about both ET and LT is you have
> > to remember to disable/reenable events (to avoid unfairness or DoS).
> 
> Fortunately C++ RAII takes care of this.

I'm not familiar with C++, but it looks like you're using
EPOLL_CTL_ADD/DEL and no EPOLL_CTL_MOD.  Using MOD to disable
events instead of ADD/DEL will save you some allocations and
possibly extra locking+checks inside Linux.

With oneshot, there's no need for an EPOLL_CTL_MOD to disable,
only one to rearm (that rearm syscall is what makes oneshot more
expensive than ET under ideal conditions).
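
In other words, something like this (sketch; assumes epfd/fd
were set up with EPOLL_CTL_ADD beforehand):

  #include <sys/epoll.h>

  /* LT/ET: disabling and re-enabling takes two MODs, but reuses
   * the kernel's existing epitem; ADD/DEL would free and
   * reallocate it each time */
  static void disable_then_enable(int epfd, int fd)
  {
      struct epoll_event ev = { .events = 0, .data.fd = fd };

      epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev); /* disable */
      ev.events = EPOLLIN;
      epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev); /* re-enable */
  }

  /* oneshot: the kernel disables the entry for us when the event
   * fires; the only syscall needed afterwards is this rearm */
  static void rearm_oneshot(int epfd, int fd)
  {
      struct epoll_event ev = { .events = EPOLLIN | EPOLLONESHOT,
                                .data.fd = fd };

      epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
  }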

> I just think it needs to be slightly more modular; but not in
> a way that detracts from becoming a ubiquitous solution for
> non-blocking IO.

> It needs to be possible for concurrency library authors to
> process blocking operations with their own selector/reactor
> design.

Really, I think it's a waste of time and resources to support
these things.  As I described earlier, the one-shot scheduler
design is far too different to be worth shoehorning into a
reactor with inverted control flow.

I also don't want to make the Ruby API too big; we can barely
come up with this API and semantics as-is...

> I would REALLY like to see something like this. So, we can
> explore different models of concurrency. Sometimes we would
> like to choose different selector implementation for pragmatic
> reasons: On macOS, kqueue doesn't work with `tty` devices. But
> `select` does work fine, with lower performance.

The correct thing to do in that case is to get somebody to fix macOS :)

Since that's unlikely to happen, we'll probably support more
quirks within the kqueue implementation, transparently to the
user.  There's already one quirk for dealing with the lack of
POLLPRI/exceptfds support in kevent, and I always expected more...
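
If we did grow a tty quirk, I'd expect it to look something like
this (hypothetical; poll_watch is a made-up fallback name):

  #include <sys/types.h>
  #include <sys/event.h>
  #include <sys/time.h>

  extern int poll_watch(int fd); /* made-up poll(2) fallback */

  /* try kevent first; if the kernel rejects the fd (as macOS
   * reportedly does for ttys), fall back to a poll(2)-based
   * path, transparently to the caller */
  static int watch_readable(int kq, int fd)
  {
      struct kevent kev;

      EV_SET(&kev, fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
      if (kevent(kq, &kev, 1, NULL, 0, NULL) == 0)
          return 0; /* kqueue handles this fd fine */

      return poll_watch(fd); /* slower, but works for ttys */
  }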

Curious if you know: if `select` works for ttys on macOS, does
`poll` work, too?

In Linux, select/poll/ppoll/epoll all share the same
notification internals (the ->poll callback); but from a cursory
reading of the FreeBSD source, the kern_event stuff is separate
and huge compared to epoll.

> In addition, such a design lets you easily tune parameters
> (like size of event queue, other details of the implementation
> that can significantly affect performance).

There's no need to tune anything.  The maxevents buffer passed
to epoll_wait/kevent is the only parameter, and it grows as
needed.  Everything else (including maximum queue size) is tied
to the number of fibers/FDs, which is already controlled by the
application code.
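
The growth logic amounts to something like this (sketch; error
checks elided, and the doubling policy is arbitrary):

  #include <stdlib.h>
  #include <sys/epoll.h>

  static struct epoll_event *events;
  static int maxevents = 64; /* initial size; grows, never tuned */

  static int wait_events(int epfd)
  {
      int n;

      if (!events)
          events = malloc(sizeof(*events) * maxevents);

      n = epoll_wait(epfd, events, maxevents, -1);
      if (n == maxevents) {
          /* buffer filled: more may be pending, so double it
           * for the next call */
          maxevents *= 2;
          events = realloc(events, sizeof(*events) * maxevents);
      }
      return n;
  }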
