Issue #18339 has been updated by mame (Yusuke Endoh).


Looks interesting.

I understood that the basic usage is to measure elapsed time between RUBY_INTERNAL_EVENT_GVL_ACQUIRE_ENTER and RUBY_INTERNAL_EVENT_GVL_ACQUIRE_EXIT, to increase the number of threads when the elapsed time is short, and to decrease the number when the time is long. Is my understanding right? I have no idea how to use RUBY_INTERNAL_EVENT_GVL_RELEASE.
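To check my understanding, the measurement could be sketched like this in Ruby (the real hooks would be C callbacks; the thread ids and timestamps below are made up for illustration):

```ruby
# Pair each ACQUIRE_ENTER timestamp with the matching ACQUIRE_EXIT for
# the same thread and accumulate the per-thread GVL wait time.
class GvlWaitRecorder
  attr_reader :total_wait_ns

  def initialize
    @pending = {}                  # thread_id => enter timestamp
    @total_wait_ns = Hash.new(0)   # thread_id => summed wait in ns
  end

  # Called on RUBY_INTERNAL_EVENT_GVL_ACQUIRE_ENTER
  def acquire_enter(thread_id, timestamp_ns)
    @pending[thread_id] = timestamp_ns
  end

  # Called on RUBY_INTERNAL_EVENT_GVL_ACQUIRE_EXIT
  def acquire_exit(thread_id, timestamp_ns)
    start = @pending.delete(thread_id)
    @total_wait_ns[thread_id] += timestamp_ns - start if start
  end
end

rec = GvlWaitRecorder.new
rec.acquire_enter(:t1, 1_000)
rec.acquire_exit(:t1, 1_750)   # t1 waited 750ns for the GVL
rec.acquire_enter(:t1, 5_000)
rec.acquire_exit(:t1, 5_100)   # then another 100ns
p rec.total_wait_ns[:t1]       # => 850
```
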

I have one concern. I think the hooks are called without the GVL held, but I wonder if the current hook implementation is robust enough to run without the GVL.

Another idea is to provide a method `Thread.gvl_waiting_thread_count`. Puma or an application monitor could call it periodically (every second, every request, or so), reduce the thread pool size when the count is high, and add a new thread when the count is low. This is less flexible, but maybe more robust.
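For instance, something like the following rough sketch (`Thread.gvl_waiting_thread_count` does not exist yet, the thresholds are arbitrary, and the pool API is hypothetical):

```ruby
# Pure decision helper: given the number of threads currently waiting on
# the GVL, decide whether the pool should shrink, grow, or stay as-is.
def pool_adjustment(waiting_count, high: 2, low: 0)
  if waiting_count > high
    :shrink   # heavy GVL contention: fewer threads would reduce latency
  elsif waiting_count <= low
    :grow     # threads rarely wait: there is room for more concurrency
  else
    :keep
  end
end

# A monitor thread in Puma or an APM agent could then run something like:
#
#   Thread.new do
#     loop do
#       case pool_adjustment(Thread.gvl_waiting_thread_count)
#       when :shrink then pool.shrink!   # hypothetical pool API
#       when :grow   then pool.grow!
#       end
#       sleep 1
#     end
#   end

p pool_adjustment(5)  # => :shrink
p pool_adjustment(0)  # => :grow
p pool_adjustment(1)  # => :keep
```
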

> require to saturate one process with fake workload

I couldn't understand this. Could you explain this?

----------------------------------------
Feature #18339: GVL instrumentation API
https://bugs.ruby-lang.org/issues/18339#change-94680

* Author: byroot (Jean Boussier)
* Status: Open
* Priority: Normal
----------------------------------------
# GVL instrumentation API

### Context

One of the most common if not the most common way to deploy Ruby application these days is through threaded runners (typically `puma`, `sidekiq`, etc).

While threaded runners can offer more throughput per RAM than forking runners, they require carefully setting the concurrency level (number of threads).
If you increase concurrency too much, you'll experience GVL contention and the latency will suffer.

The common way today is to start with a relatively low number of threads, and then increase it until CPU usage reaches an acceptably high level (generally `~75%`).
But this method really isn't precise, requires saturating one process with a fake workload, and doesn't tell how many threads are waiting on the GVL, just how busy the CPU is.

Because of this, lots of threaded applications are not that well tuned, even more so because the ideal configuration is very dependent on the workload and can vary over time. So a decent setting might not be so good six months later.

Ideally, application owners should be able to continuously see the impact of the GVL contention on their latency metric, so they can more accurately decide what throughput vs latency tradeoff is best for them and regularly adjust it.

### Existing instrumentation methods

Currently, if you want to measure how much GVL contention is happening, you have to use lower-level tracing tools
such as `bpftrace` or `dtrace`. These are quite advanced tools and require either `root` access or compiling Ruby with different configuration flags, etc.

They're also external, so common Application Performance Monitoring (APM) tools can't really report it.

### Proposal

I'd like to have a C-level hook API around the GVL, with 3 events:

  - `RUBY_INTERNAL_EVENT_GVL_ACQUIRE_ENTER`
  - `RUBY_INTERNAL_EVENT_GVL_ACQUIRE_EXIT`
  - `RUBY_INTERNAL_EVENT_GVL_RELEASE`

Such an API would allow implementing C extensions that collect various metrics about the GVL's impact, such as median / p90 / p99 wait time, or even per-thread total wait time.
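For illustration, here is a rough Ruby sketch of the kind of aggregation such an extension could do over collected wait samples (nearest-rank percentiles; the sample values below are fabricated, and a real extension would do this in C):

```ruby
# Nearest-rank percentile over an already sorted array of samples.
def percentile(sorted, pct)
  rank = (pct / 100.0 * sorted.size).ceil - 1
  sorted[rank.clamp(0, sorted.size - 1)]
end

# Fake per-acquisition GVL wait times in nanoseconds, as the
# ACQUIRE_ENTER/ACQUIRE_EXIT hooks might have recorded them.
wait_ns = [120, 80, 3_000, 95, 110, 40_000, 130, 70, 90, 100].sort

p percentile(wait_ns, 50)  # => 100    (median)
p percentile(wait_ns, 90)  # => 3000
p percentile(wait_ns, 99)  # => 40000
```
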

Additionally, it would be very useful if the hooks passed some metadata, most importantly the number of threads currently waiting.
People interested in a lower-overhead monitoring method that doesn't call `clock_gettime` could instrument that number of waiting threads instead. It would be less accurate, but enough to tell whether there might be a problem.

With such metrics, application owners would be able to tune their concurrency setting much more precisely, and deliberately choose their own tradeoff between throughput and latency.



