On Thu, Apr 21, 2011 at 6:28 AM, Robert Klemme
<shortcutter / googlemail.com> wrote:
> Please keep in mind that in a multithreaded environment there is
> synchronization overhead. One solution would use an AtomicBoolean
> stored somewhere as static final; every thread that needs to make
> the decision then has to go through it. Even if it has "only"
> volatile semantics (not full synchronization) and allows concurrent
> reads, there is a price to pay. Using a ThreadLocal initialized
> during thread construction or lazily would reduce that overhead, at
> the risk of the flag value becoming outdated - an issue which gets
> worse with thread lifetime. Applications which use a thread pool
> could suffer.
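
(For illustration, the shared-flag and thread-local patterns you
describe might look roughly like the sketch below; the class and
field names are hypothetical placeholders, not anything taken from
JRuby itself.)

    import java.util.concurrent.atomic.AtomicBoolean;

    public class ReopenedFlag {
        // Option 1: one shared flag with visibility guarantees. A
        // write in any thread is seen by every later reader, but each
        // read on the fast path pays for the volatile semantics.
        static final AtomicBoolean FIXNUM_REOPENED =
            new AtomicBoolean(false);

        // Option 2: a lazily initialized per-thread copy. Reads are
        // cheap after the first one, but a thread that cached "false"
        // before Fixnum was reopened keeps seeing the stale value for
        // as long as it lives - worse for long-lived pooled threads.
        static final ThreadLocal<Boolean> CACHED_REOPENED =
            new ThreadLocal<Boolean>() {
                protected Boolean initialValue() {
                    return Boolean.valueOf(FIXNUM_REOPENED.get());
                }
            };

        static boolean reopenedShared() {
            return FIXNUM_REOPENED.get();
        }

        static boolean reopenedCached() {
            return CACHED_REOPENED.get();
        }
    }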

In this case, I'm not using a synchronized, atomic, *or* volatile
field. Because Fixnum and Float are modified so rarely, and because
the check could have a heavy perf impact, I'm treating redefinition
of their methods in one thread while another thread is calling those
methods as somewhat undefined behavior. That's not perfect (the JVM
could optimize such that one thread's modifications are never seen
by another thread), but it's close enough.

It's also worth pointing out that usually modifications to Fixnum or
Float are done for DSL purposes, where there's less likelihood of
heavy threading effects.

You're right, though...if I made that field volatile (it doesn't need
to be an AtomicBoolean, since I only ever read *or* write it, never a
read-modify-write), the perf impact would be higher.
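
For comparison, the approach I'm describing amounts to something like
the hypothetical sketch below - a plain field with no volatile and no
locking (names made up purely for illustration):

    // A plain, non-volatile field: reads cost no more than any other
    // field load and the JIT is free to hoist or fold the check, but
    // there is no guarantee another thread ever observes the write -
    // exactly the "undefined" behavior being accepted for concurrent
    // Fixnum/Float reopening.
    public class FixnumReopenCheck {
        private static boolean reopened = false; // no volatile, no lock

        // called when someone (re)defines a method on Fixnum or Float
        static void setReopened() { reopened = true; }

        // called on the fast path; other threads may see a stale false
        static boolean isReopened() { return reopened; }
    }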

> But I agree, the effect vastly depends on the frequency of Hash
> accesses with a Fixnum key. Unfortunately I guess nobody has figures
> on this - and even if someone did, they would probably vary widely
> with the type of application.

I operate at too low a level to see the 10000-foot view of application
performance. In other words, I spend my time optimizing individual
core methods, individual Ruby language features, and runtime-level
operations like calls and constant lookup...rather than really looking
at full app performance. Once you get to the scale of a real app, the
performance bottlenecks from badly written code, slow I/O, excessive
object creation, slow libraries, and other userland issues almost
always trump runtime-level speed. As an example, I'd point to the
fact that Ruby 1.9 is almost always much faster than Ruby 1.8, yet
Rails under Ruby 1.9 is only marginally faster than Rails on Ruby 1.8
(or so I've seen when people try to measure it).

The benefit of an ever-faster runtime is often outweighed by the
benefit of simply writing better Ruby code in the first place. But I
don't live in the application world...I work on JRuby's low-level
performance. You have to do the rest :)

- Charlie