Issue #12142 has been updated by Vladimir Makarov.


Koichi Sasada wrote:
> Thank you for your great contribution.
>

Thank you for your quick response.  I am not a Rubyist, but I like the
MRI code.

> Do you compare memory usages?
> 

Sorry, I did not, although I did estimate it.  By my estimate, in the
worst-case scenario memory usage will be about the same as for the
current hash tables, taking into account that the element size is now
half of the old size and the minimal element array usage is 50%.  This
is because the element array size is doubled when the hash table is
rebuilt, and rebuilding happens only when the element array is 100%
used.  If usage is lower, the new array size will be the same or
smaller.

This evaluation excludes the cases where the current hash table uses
packed elements (up to 6), but I consider that a pathological case.
The proposed hash tables can use the same approach.  It is even more
natural here, because the packed elements of the current hash tables
have exactly the same structure as the proposed table elements.

So the packed element approach could also be implemented for the
proposed tables; it means avoiding creation of the entries array for
small tables.  I don't see that it is necessary unless hash tables are
again used for method tables, where most tables are small.  Hash
tables would be faster than the binary search currently used there,
but that is not critical code (at least for the MRI benchmarks), as we
search the method table once per method and all subsequent calls of
the method skip this search.  I am sure you know this much better.

Speaking of measurements: could you recommend credible benchmarks for
them?  I have been in the benchmarking business for a long time, and I
know benchmarking can be an evil -- it is possible to create
benchmarks that prove opposite things.  In the compiler field we use
SPEC2000/SPEC2006, which represents a consensus of most parties
involved in the compiler business.  Does Ruby have something analogous?


> There are good and bad points.
> 
> * Good
>   * removing fwd/prev pointer for doubly linked list
>   * removing per bucket allocation
> * Bad
>   * it requires more "entries" array size. current st doesn't allocate big entries array for chain hash.
>   * (rare case) so many deletion can keep spaces (does it collected? i need to read code more)
> 

In the proposed implementation the table size can be decreased, so in
some way the space is collected.

Reading the responses (all of which I am going to answer), I see
people are worried about memory usage.  Smaller memory usage is
important for better data locality too (although better locality does
not automatically mean faster code -- the access pattern matters too).
But I consider speed the first priority these days, especially when
memory is cheap and will become much cheaper with upcoming memory
technologies.

In many cases speed is achieved by methods which require more memory.
For example, the Intel compiler generates much bigger code than GCC to
achieve better performance (this is the most important competitive
advantage of their compiler).

This is actually the seventh variant of hash tables I have tried in
MRI.  Only this variant achieved the best average improvement with no
benchmark performing worse.

> I think goods overcomes bads.
> 

Thanks, I really appreciate your opinion.  I'll work on the issues
found, although I am a bit busy right now with work on the GCC 6
release.  I'll have more time to work on this in April.


> We can generalize the last issue as "compaction".
> This is what I didn't touch this issue yet (maybe not a big problem).
> 
> 
> Trivial comments
> 
> * at first, you (or we) should introduce `st_num_entries()` (or something good name) to wrap to access `num_entries`/`num_elements` before your patch.
> * I'm not sure we should continue to use the name st. at least, st.c can be renamed.

OK, I'll think about the terminology.  Yura Sokolov wrote that
changing `entries` to `elements` can affect all rubygems; I did not
know that.  I was reckless in using terminology more familiar to me.

> * I always confuse about "open addressing" == "closed hashing" https://en.wikipedia.org/wiki/Open_addressing

Yes, the term is confusing, but it has been in use since 1957
according to Knuth.


----------------------------------------
Feature #12142: Hash tables with open addressing
https://bugs.ruby-lang.org/issues/12142#change-57286

* Author: Vladimir Makarov
* Status: Open
* Priority: Normal
* Assignee: 
----------------------------------------
~~~
Hello, the following patch contains a new implementation of hash
tables (major files: st.c and include/ruby/st.h).

Modern processors have several levels of cache.  Usually, the CPU
reads one or a few cache lines at a time from memory (or from another
cache level), so the CPU is much faster at reading data stored close
together.  The current implementation of Ruby hash tables does not
fit the cache organization of modern processors well, which requires
better data locality for faster program speed.

The new hash table implementation achieves better data locality
mainly by

  o switching to open addressing hash tables for access by keys.
    Removing hash collision lists lets us avoid *pointer chasing*, a
    common problem that produces bad data locality.  I see a tendency
    to move from chaining hash tables to open addressing hash tables
    due to their better fit to modern CPU memory organizations.
    CPython recently made such a switch
    (https://hg.python.org/cpython/file/ff1938d12240/Objects/dictobject.c).
    PHP did this a bit earlier
    (https://nikic.github.io/2014/12/22/PHPs-new-hashtable-implementation.html).
    GCC has widely used such hash tables
    (https://gcc.gnu.org/svn/gcc/trunk/libiberty/hashtab.c) internally
    for more than 15 years.

  o removing doubly linked lists and putting the elements into an array
    for access by their inclusion order.  That also removes the pointer
    chasing of the doubly linked lists used for traversing elements in
    inclusion order.

A more detailed description of the proposed implementation can be
found in the top comment of the file st.c.

The new implementation was benchmarked on 21 MRI hash table benchmarks
on the two most widely used targets: x86-64 (Intel 4.2GHz i7-4790K)
and ARM (Exynos 5410 - 1.6GHz Cortex-A15):

make benchmark-each ITEM=bm_hash OPTS='-r 3 -v' COMPARE_RUBY='<trunk ruby>'

Here are the results for x86-64:

hash_aref_dsym        1.094
hash_aref_dsym_long   1.383
hash_aref_fix         1.048
hash_aref_flo         1.860
hash_aref_miss        1.107
hash_aref_str         1.107
hash_aref_sym         1.191
hash_aref_sym_long    1.113
hash_flatten          1.258
hash_ident_flo        1.627
hash_ident_num        1.045
hash_ident_obj        1.143
hash_ident_str        1.127
hash_ident_sym        1.152
hash_keys             2.714
hash_shift            2.209
hash_shift_u16        1.442
hash_shift_u24        1.413
hash_shift_u32        1.396
hash_to_proc          2.831
hash_values           2.701

The average performance improvement is more than 50%.  The ARM results
are analogous -- no benchmark shows a performance degradation, and the
average improvement is about the same.

The patch can be seen at

https://github.com/vnmakarov/ruby/compare/trunk...hash_tables_with_open_addressing.patch

or, in a less convenient way, as pull request changes:

https://github.com/ruby/ruby/pull/1264/files


This is my first patch for MRI, and maybe my proposal and
implementation have pitfalls.  But I am keen to learn and to work on
the inclusion of this code into MRI.

~~~



-- 
https://bugs.ruby-lang.org/
