I have been thinking for a while about the pros and cons of relying on
operating system threads vs. the interpreter-controlled context switching
that Ruby does now.

From what I gather, Python's use of operating system threads results in an
ugly global interpreter lock, which means that at any given time only one
Python thread can actually run inside the interpreter!

On the other hand, simulating threads inside the interpreter, as Ruby does,
results in simpler code, but any blocking call in an extension module
freezes all Ruby threads at once.
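
To make that concrete, here is a toy example (plain C with ucontext, nothing
to do with the actual interpreter sources; the task names are made up) of
cooperative scheduling on a single OS thread: once one task enters a
blocking call, the other task simply cannot run until that call returns.

    /* Illustrative only: two cooperatively scheduled "green" tasks on a
     * single OS thread.  A blocking call (sleep) inside task_a stalls
     * task_b as well, because nothing can switch contexts until the
     * call returns. */
    #include <stdio.h>
    #include <unistd.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, ctx_a, ctx_b;
    static char stack_a[64 * 1024], stack_b[64 * 1024];

    static void task_a(void)
    {
        printf("task_a: about to block for 3 seconds\n");
        sleep(3);   /* blocks the only OS thread, so no other task runs */
        printf("task_a: done blocking\n");
    }

    static void task_b(void)
    {
        printf("task_b: I only get to run after task_a's blocking call\n");
    }

    int main(void)
    {
        getcontext(&ctx_a);
        ctx_a.uc_stack.ss_sp = stack_a;
        ctx_a.uc_stack.ss_size = sizeof stack_a;
        ctx_a.uc_link = &main_ctx;
        makecontext(&ctx_a, task_a, 0);

        getcontext(&ctx_b);
        ctx_b.uc_stack.ss_sp = stack_b;
        ctx_b.uc_stack.ss_size = sizeof stack_b;
        ctx_b.uc_link = &main_ctx;
        makecontext(&ctx_b, task_b, 0);

        /* "Scheduler": run task_a, then task_b, cooperatively. */
        swapcontext(&main_ctx, &ctx_a);
        swapcontext(&main_ctx, &ctx_b);
        return 0;
    }

Ruby's green threads are of course far more elaborate than this, but the
starvation effect is the same.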

Why not choose to get the best of both worlds:

* use one main thread which runs the Ruby interpreter
* use worker threads in which potentially blocking extension module calls
would run.

This way, the structure of the interpreter and the threading model could
remain under tight control while allowing blocking native calls to run in
parallel without freezing the Ruby threads.
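
As a very rough sketch of the shape this could take (again plain C with
pthreads and made-up names, not anything from the Ruby sources), the
interpreter thread hands the blocking call to a worker and keeps scheduling
its own threads until the result is ready:

    /* Sketch only: the interpreter stays on one OS thread and hands a
     * potentially blocking call to a worker thread, checking for the
     * result in between its own (green) thread switches. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    typedef struct {
        void *(*fn)(void *);     /* the blocking extension call */
        void *arg;
        void *result;
        int   done;
        pthread_mutex_t lock;
    } blocking_job;

    /* Worker thread: run the blocking call, then flag completion. */
    static void *worker(void *p)
    {
        blocking_job *job = p;
        void *r = job->fn(job->arg);
        pthread_mutex_lock(&job->lock);
        job->result = r;
        job->done = 1;
        pthread_mutex_unlock(&job->lock);
        return NULL;
    }

    /* Stand-in for a blocking call inside an extension module. */
    static void *slow_io(void *arg)
    {
        (void)arg;
        sleep(2);
        return "data";
    }

    int main(void)
    {
        blocking_job job = { slow_io, NULL, NULL, 0,
                             PTHREAD_MUTEX_INITIALIZER };
        pthread_t tid;
        pthread_create(&tid, NULL, worker, &job);

        /* Meanwhile the interpreter thread keeps scheduling its own
         * green threads instead of freezing on the call. */
        for (;;) {
            pthread_mutex_lock(&job.lock);
            int done = job.done;
            pthread_mutex_unlock(&job.lock);
            if (done)
                break;
            printf("interpreter thread: running other Ruby threads...\n");
            usleep(200 * 1000);  /* pretend to switch green threads */
        }

        pthread_join(tid, NULL);
        printf("blocking call returned: %s\n", (char *)job.result);
        return 0;
    }

A real implementation would probably wake the interpreter with a condition
variable or a pipe instead of polling, but the division of labour is the
point here.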

Of course, it is easier said than done!
I don't know to what extent the interpreter core would need to be modified
to support this model.
Anyone care to comment?