shyouhei / ruby-lang.org wrote:
> Plus I think you have more room for optimization by sticking to
> power-of-two sized bins.  When you rehash a hash, right now all
> elements are removed from the bins, the bin array is resized, and
> then they are inserted again one by one.  If the number of bins is
> always 2^n, rehashing resizes a 2^n array to 2^(n+1).  In that case
> the elements stored in new_bins[2^n+k] can only come from the old
> bins[k].  This fact does not change the algorithmic complexity, but
> it can reduce insertions.
> 
> https://github.com/shyouhei/ruby/commit/f7ca891

Thanks!  However, I wasn't able to measure a difference with
"make benchmark" [1].  Were you?

Perhaps rehash is not called often enough to matter, and I suspect
the branch inside the loop is difficult for the CPU to predict.  I
think the current dumb loop is very good for CPU pipelining and
prefetching.
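For reference, the power-of-two split from the quoted commit amounts to
something like the sketch below.  This is a simplified stand-alone
illustration, not st.c's actual code: the `entry` struct, the field
names, and `rehash_pow2` are all assumptions made up for this example.
The data-dependent test on `(hash & old_size)` is the branch I mean.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal chained-hash entry (hypothetical; not st.c's real layout). */
typedef struct entry { unsigned long hash; struct entry *next; } entry;

/* Grow a table of old_size == 2^n bins into 2^(n+1) bins.  Because the
 * bin index is hash & (size - 1), an entry in old bin k can only land
 * in new bin k or new bin k + old_size, decided by one bit of the
 * hash -- so no full reinsertion is needed. */
static void rehash_pow2(entry **old_bins, size_t old_size, entry **new_bins)
{
    for (size_t k = 0; k < old_size; k++) {
        entry *e = old_bins[k], *next;
        for (; e != NULL; e = next) {
            next = e->next;
            /* The branch in question: which half does this entry go to? */
            size_t idx = (e->hash & old_size) ? k + old_size : k;
            e->next = new_bins[idx];
            new_bins[idx] = e;
        }
    }
}
```

With 4 old bins, an entry with hash 5 in bin 1 moves to new bin 5
(since 5 & 4 != 0), while an entry with hash 1 stays in new bin 1.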


[1] I have applied my patches for improved benchmark consistency:
	https://bugs.ruby-lang.org/issues/5985#change-44442
	https://bugs.ruby-lang.org/issues/9430