Issue #12142 has been updated by Yura Sokolov.


> I don't like lists (through pointer or indexes). This is a disperse data structure hurting locality and performance on modern CPUs for most frequently used access patterns. The lists were cool long ago when a gap between memory and CPU speed was small.

But you destroy cache locality with your secondary hash and by not storing the hash sum in the entries array.

Assume a 10000-element hash; let's count cache misses:

My hash:

- hit without collision
-- lookup position in bins +1
-- check st_table_entry +1
-- got 2
- hit after one collision
-- lookup position in bins +1
-- check st_table_entry +1
-- check second entry +1
-- got 3
- miss with empty bin
-- lookup position in bins +1
-- got 1
- miss after one collision
-- lookup position in bins +1
-- check st_table_entry +1
-- got 2
- miss after two collisions
-- lookup position in bins +1
-- check st_table_entry +1
-- check second entry +1
-- got 3

Your hash:

- hit without collision
-- lookup position in entries +1
-- check st_table_element +1
-- got 2
- hit after one collision
-- lookup position in entries +1
-- check st_table_element +1
-- lookup second position in entries +1
-- check second element +1
-- got 4
- miss with empty entry
-- lookup position in entries +1
-- got 1
- miss after one collision
-- lookup position in entries +1
-- check st_table_element +1
-- check second position in entries +1
-- got 3
- miss after two collisions
-- lookup position in entries +1
-- check st_table_element +1
-- check second position in entries +1
-- check second element +1
-- check third position in entries +1
-- got 5

So your implementation always generates more cache misses than mine. You completely destroy the whole idea of open addressing.

To overcome this issue you ought to use a fill factor of 0.5.
Provided you don't use 32-bit indices, you spend at least 24+8*2=40 bytes per element just before rebuilding.
And just after rebuilding the entries together with the table you spend 24*2+8*2*2=80 bytes per element!
That is why your implementation doesn't provide memory savings either.

My current implementation uses at least 32+4/1.5≈35 bytes, and at most 32*1.5+4=52 bytes.
And I'm looking into not allocating the doubly linked list until necessary, so it will be at most 24*1.5+4=40 bytes for most hashes.

Lists are slow when every element is allocated separately; then there is also a TLB miss on top of a cache miss for every element.
When elements are allocated from an array per hash, there are fewer cache misses and fewer TLB misses.

And I repeat again: you do not understand when and why open addressing may save cache misses.
For open addressing to be effective, one needs to store everything needed to check a hit in the array itself (so at least the hash sum ought to be stored).
And the second probe should be in the same cache line, which limits you:

- to simple schemes: linear probing, quadratic probing,
- or to custom schemes, where you explicitly check neighbours before a long jump,
- or to exotic schemes, like Robin Hood hashing.

You just break every best practice of open addressing.

----------------------------------------
Feature #12142: Hash tables with open addressing
https://bugs.ruby-lang.org/issues/12142#change-57359

* Author: Vladimir Makarov
* Status: Open
* Priority: Normal
* Assignee: 
----------------------------------------
~~~
 Hello, the following patch contains a new implementation of hash
tables (major files st.c and include/ruby/st.h).

  Modern processors have several levels of cache.  Usually, the CPU
reads one or a few cache lines from memory (or from another level of
cache).  So the CPU is much faster at reading data stored close to
each other.  The current implementation of Ruby hash tables does not
fit modern processor cache organization well, which requires better
data locality for faster program speed.

The new hash table implementation achieves a better data locality
mainly by

  o switching to open addressing hash tables for access by keys.
    Removing hash collision lists lets us avoid *pointer chasing*, a
    common problem that produces bad data locality.  I see a tendency
    to move from chaining hash tables to open addressing hash tables
    due to their better fit to modern CPU memory organizations.
    CPython recently made such switch
    (https://hg.python.org/cpython/file/ff1938d12240/Objects/dictobject.c).
    PHP did this a bit earlier
    https://nikic.github.io/2014/12/22/PHPs-new-hashtable-implementation.html.
    GCC has widely-used such hash tables
    (https://gcc.gnu.org/svn/gcc/trunk/libiberty/hashtab.c) internally
    for more than 15 years.

  o removing doubly linked lists and putting the elements into an array
    for accessing elements by their inclusion order.  That also
    removes pointer chasing on the doubly linked lists used for
    traversing elements by their inclusion order.

A more detailed description of the proposed implementation can be
found in the top comment of the file st.c.

The new implementation was benchmarked on 21 MRI hash table benchmarks
for two most widely used targets x86-64 (Intel 4.2GHz i7-4790K) and ARM
(Exynos 5410 - 1.6GHz Cortex-A15):

make benchmark-each ITEM=bm_hash OPTS='-r 3 -v' COMPARE_RUBY='<trunk ruby>'

Here the results for x86-64:

hash_aref_dsym       1.094
hash_aref_dsym_long          1.383
hash_aref_fix        1.048
hash_aref_flo        1.860
hash_aref_miss       1.107
hash_aref_str        1.107
hash_aref_sym        1.191
hash_aref_sym_long           1.113
hash_flatten         1.258
hash_ident_flo       1.627
hash_ident_num       1.045
hash_ident_obj       1.143
hash_ident_str       1.127
hash_ident_sym       1.152
hash_keys            2.714
hash_shift           2.209
hash_shift_u16       1.442
hash_shift_u24       1.413
hash_shift_u32       1.396
hash_to_proc         2.831
hash_values          2.701

The average performance improvement is more than 50%.  ARM results
are analogous -- no benchmark shows a performance degradation, and
the average improvement is about the same.

The patch can be seen at

https://github.com/vnmakarov/ruby/compare/trunk...hash_tables_with_open_addressing.patch

or in a less convenient way as pull request changes

https://github.com/ruby/ruby/pull/1264/files


This is my first patch for MRI, and maybe my proposal and
implementation have pitfalls.  But I am keen to learn and to work on
inclusion of this code into MRI.

~~~

---Files--------------------------------
0001-st.c-use-array-for-storing-st_table_entry.patch (46.7 KB)


-- 
https://bugs.ruby-lang.org/
