Issue #12142 has been updated by Vladimir Makarov.


Yura Sokolov wrote:
> Good day, everyone.
> 
> I'm presenting my (pre)final version of patch.

  Thanks.  I'll investigate your patch later.

Here is *my work update*.  This is far from the final version.  I'll
continue to work on it.  As I wrote, I can spend more time on MRI work
after mid-April (currently I'd like to inspect and optimize my code
and investigate hash functions further and tune them).

  I am glad that I submitted my patch earlier than I planned
originally.  The discussion was useful for me.

  The discussion revealed that I was using a wrong benchmarking
procedure (I picked it up from reading emails on the ruby developer
mailing lists):

make benchmark-each ITEM=bm_hash OPTS='-r 3 -v' COMPARE_RUBY=<an installed ruby>

  This command measures an installed ruby against the current
miniruby.  So the results I reported were much better than the
reality.  Thanks to Koichi Sasada, I am now using the right one:

ruby ../gruby/benchmark/driver.rb -p hash -r 3 -e <trunk miniruby> -e
     current::<my branch miniruby>

  So I realized that I should work more on improving the performance,
because an average 15% improvement is far from the 50% reported the
first time.

  Since submitting the first patch, I have done the following (I still
use 64-bit hashes and indexes):

* I changed terminology to keep the same API.  What I called elements
  is now called *entries* and what I called entries are now called
  *bins*.

* I added code for table consistency checking, which helps to debug
  the code (at least it helped me a lot to find a few bugs).

* I implemented compaction of the entries array and fixed the
  strategy for table size changes.  This fixes the reported memory leak.

* I made the entries array *cyclical* to avoid the overhead of table
  compaction and/or table size changes when a hash table is used as a
  queue.
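  The cyclical entries array can be sketched roughly as follows (a toy
  model with illustrative names, not the patch's actual code): logical
  indices grow monotonically and are reduced modulo a power-of-two
  capacity, so shifting the oldest entry off never forces moving or
  compacting the remaining entries.

```c
#include <assert.h>
#include <stddef.h>

/* Toy cyclical entries array.  `start` is the logical index of the
   oldest entry, `bound` is one past the newest; both only grow and
   are wrapped modulo the (power-of-two) capacity on each access. */
typedef struct {
    int entries[8];       /* capacity must be a power of two */
    size_t start, bound;  /* logical indices, wrapped on access */
} toy_table;

static size_t toy_pos(const toy_table *t, size_t logical) {
    return logical & (sizeof(t->entries) / sizeof(t->entries[0]) - 1);
}

static void toy_push(toy_table *t, int v) {
    t->entries[toy_pos(t, t->bound++)] = v;
}

/* Remove the oldest entry, queue-style, in O(1) with no compaction. */
static int toy_shift(toy_table *t) {
    return t->entries[toy_pos(t, t->start++)];
}
```

  With this layout, a Hash used as a queue (push at the back, shift
  from the front) keeps running in-place until the number of live
  entries actually exceeds the capacity.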

* I implemented a *compact representation of small tables* of up to 8
  elements.  I also added tests for small tables of sizes 2, 4, and 8
  to check small hash table performance.
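  A rough sketch of the small-table idea (illustrative only; the actual
  representation in the patch may differ): for up to 8 entries there is
  no bins array at all, and lookup is a plain linear scan, since the
  whole table fits in one or two cache lines.

```c
#include <assert.h>
#include <stddef.h>

#define SMALL_MAX 8  /* illustrative threshold from the description */

typedef struct { long key, value; } small_entry;

/* Compact small table: just an entries array and a count, no bins. */
typedef struct {
    small_entry entries[SMALL_MAX];
    size_t n;
} small_table;

/* Returns 1 and stores the value if the key is present, else 0. */
static int small_lookup(const small_table *t, long key, long *value) {
    for (size_t i = 0; i < t->n; i++)
        if (t->entries[i].key == key) {
            *value = t->entries[i].value;
            return 1;
        }
    return 0;
}

static void small_insert(small_table *t, long key, long value) {
    long dummy;
    assert(t->n < SMALL_MAX && !small_lookup(t, key, &dummy));
    t->entries[t->n].key = key;
    t->entries[t->n].value = value;
    t->n++;
}
```

  For so few elements a scan is typically faster than computing a bin
  index and probing, and it saves the memory of the bins array.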

* I also *tried to place hashes inside bins* to improve data locality in
  cases of collisions, as Yura wrote.  It did not help.  The average
  results were a bit worse.  I used the same number of elements in the
  bins, so the bins array became 2 times bigger, which probably
  worsened the locality.  I guess tuning the ratio `#bin
  elements/#entries` could improve the results, but I believe the
  improvement would not be worth the effort.  Also, implementing
  better hashing will probably eliminate the improvement entirely.

* While working on the experiment described above, I found that the
  MRI hash functions sometimes produce terrible hashes, and the
  collision rate reaches 100% on some benchmarks.  This is bad for open
  addressing hash tables, where a large number of collisions results in
  more cache line reads than for tables with chains.  Yura Sokolov
  already wrote about this.

  * I ran ruby's murmur and sip24 hashing through the *smhasher* test
    suite (https://github.com/aappleby/smhasher).  The MurmurHash in
    st.c is about 3 times faster than sip24 on the bulk speed test
    (6GB/s vs 2GB/s), but murmur performs poorly on practically all
    hashing quality tests except the Differential Tests and Keyset
    'Cyclic' Tests.  E.g. on the avalanche tests the worst bias is
    100% (vs 0.6% for sip24).  This is very strange because all the
    murmur hash functions shipped with smhasher behave well.

  * I did not try to figure out the reason because *I believe we should
    use the City64 hash* (distributed under the MIT license), currently
    the fastest high-quality non-crypto hash function.  Its speed
    reaches 12GB/s.  So I *replaced murmur with City64*.

  * I also *changed the specialized hash function* (rb_num_hash_start)
    used for the bm_hash_ident tests.  This reduces the collision rate
    on these tests, e.g. from 73% to 0.3% for hash_ident_num.

  * I believe using City64 will help to improve table performance for
    the most widely used case, when the keys are strings.
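  To illustrate why the specialized numeric hash matters (a hypothetical
  mixer using the splitmix64 finalizer constants, not MRI's actual
  rb_num_hash_start): consecutive or structurally similar numbers share
  their low bits, and a table that derives bin indexes from those bits
  collides heavily unless the hash mixes the high bits down.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical 64-bit mixer (splitmix64 finalizer constants).  Each
   step (xorshift, multiply by an odd constant) is invertible, so the
   whole function is a bijection: distinct inputs never collide, and
   the avalanche property spreads input differences into the low bits
   that a small table actually uses for bin indexes. */
static uint64_t mix64(uint64_t x) {
    x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
    x ^= x >> 27; x *= 0x94d049bb133111ebULL;
    x ^= x >> 31;
    return x;
}
```

  Without such mixing, keys like 0, 8, 16, ... all land in the same bin
  of a small power-of-two table, which is exactly the kind of collision
  pile-up seen on the hash_ident benchmarks.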

* Examining siphash24, I came to the conclusion that although using a
  fast crypto-level hash function is a good thing, *there is a simpler
  solution* to the problem of possible denial-of-service attacks based
  on hash collisions.

  * When a new table element is inserted, we just need to count
    collisions with entries that have the same hash (more accurately,
    the same part of the hash) but different keys, and when some
    threshold is reached, rebuild the table and start to use a
    crypto-level hash function.  In practice such a function will never
    be used unless someone actually attempts a denial-of-service attack.

  * Such an approach permits using faster non-crypto-level hash
    functions in the majority of cases.  It also makes it easy to
    switch to other, slower crypto-level functions (without losing
    speed in real-world scenarios), e.g. SHA-2 or SHA-3.  Siphash24 is
    a fairly new function and is not as time-tested as older ones.

  * So I implemented this approach.
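  The approach above could be sketched like this (the names and the
  threshold value are hypothetical, not taken from the patch): the
  insert path counts same-hash/different-key collisions and flips the
  table to the strong hash once, after which the counter is irrelevant.

```c
#include <assert.h>

#define COLLISION_THRESHOLD 16  /* illustrative value, not the patch's */

/* Per-table state for the collision-counting defense. */
typedef struct {
    unsigned collisions;
    int use_strong_hash;  /* set once an attack is suspected */
} collision_state;

/* Called when an insert probes an occupied slot whose stored hash
   matches but whose key differs.  Once the threshold is crossed we
   would rebuild the table with a crypto-level hash function. */
static void note_collision(collision_state *t) {
    if (++t->collisions > COLLISION_THRESHOLD && !t->use_strong_hash) {
        t->use_strong_hash = 1;
        /* ...rehash every entry with the crypto-level function here... */
        t->collisions = 0;
    }
}
```

  The cost in the normal case is one increment and one branch per
  colliding insert, so the fast hash pays essentially nothing for the
  protection.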

* I also *tried double probing*, as Yura proposed.  Although the
  performance of some benchmarks looks better, it makes the results
  worse on average (the average performance decrease is about 1%, and
  the geometric mean is about 14% worse because of a huge degradation
  on hash_aref_flo).  I guess this means that double probing can
  produce better results because of better data locality, but it also
  produces more collisions for small tables, as it always uses only a
  small portion of the hash, e.g. the 5-6 lower bits.  It might also
  mean that the specialized ruby hash function still behaves poorly on
  flonums, although using all 64 bits of the hash avoids collisions
  well.  A hybrid scheme that uses double probing for big tables and a
  secondary hash function using other hash bits for small tables might
  improve performance further.  But I am a bit skeptical about such a
  scheme because of the additional overhead code.
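  For reference, a double-hashing probe sequence could look like the
  sketch below (illustrative, not the code actually tried): the first
  probe uses the low hash bits, and later probes step by an odd
  increment taken from higher bits.  An odd step is coprime with a
  power-of-two table size, so the sequence visits every slot.  Note
  that the first probe still depends only on a few low bits, which is
  exactly what hurts small tables.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Double-hashing probe over a power-of-two table (mask = size - 1).
   Forcing the step odd makes it coprime with the table size, so the
   probe sequence cycles through all slots before repeating. */
static size_t probe(uint64_t hash, unsigned attempt, size_t mask) {
    size_t step = (size_t)((hash >> 32) | 1);  /* odd step => full cycle */
    return (size_t)(hash + (uint64_t)attempt * step) & mask;
}
```

  Compared with linear or quadratic probing, consecutive probes here
  land far apart, which reduces clustering but gives up the cache
  locality of scanning adjacent slots.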

All the described work has achieved about 35% and 53% better average
performance than the trunk (using the right measurements) on x86-64
(Intel i7-4790K) and ARM (Exynos 5410) respectively.

I've just submitted my changes to the github branch (again, it is far
from the final version of the patch).  The current version of the
patch can be seen at


https://github.com/vnmakarov/ruby/compare/trunk...hash_tables_with_open_addressing.patch




----------------------------------------
Feature #12142: Hash tables with open addressing
https://bugs.ruby-lang.org/issues/12142#change-57466

* Author: Vladimir Makarov
* Status: Open
* Priority: Normal
* Assignee: 
----------------------------------------
~~~
 Hello, the following patch contains a new implementation of hash
tables (major files st.c and include/ruby/st.h).

  Modern processors have several levels of cache.  Usually, the CPU
reads one or a few cache lines at a time from memory (or another level
of cache).  So the CPU is much faster at reading data stored close to
each other.  The current implementation of Ruby hash tables does not
fit modern processor cache organization well, which requires better
data locality for faster program speed.

The new hash table implementation achieves better data locality
mainly by

  o switching to open addressing hash tables for access by keys.
    Removing hash collision lists lets us avoid *pointer chasing*, a
    common problem that produces bad data locality.  I see a tendency
    to move from chaining hash tables to open addressing hash tables
    due to their better fit to modern CPU memory organizations.
    CPython recently made such a switch
    (https://hg.python.org/cpython/file/ff1938d12240/Objects/dictobject.c).
    PHP did this a bit earlier
    (https://nikic.github.io/2014/12/22/PHPs-new-hashtable-implementation.html).
    GCC has used such hash tables internally
    (https://gcc.gnu.org/svn/gcc/trunk/libiberty/hashtab.c)
    for more than 15 years.

  o removing the doubly linked lists and putting the elements into an
    array for accessing elements in their inclusion order.  That also
    removes the pointer chasing on the doubly linked lists used for
    traversing elements in their inclusion order.

A more detailed description of the proposed implementation can be
found in the top comment of the file st.c.

The new implementation was benchmarked on 21 MRI hash table benchmarks
for the two most widely used targets: x86-64 (Intel 4.2GHz i7-4790K)
and ARM (Exynos 5410 - 1.6GHz Cortex-A15):

make benchmark-each ITEM=bm_hash OPTS='-r 3 -v' COMPARE_RUBY='<trunk ruby>'

Here are the results for x86-64 (speedup relative to trunk):

hash_aref_dsym        1.094
hash_aref_dsym_long   1.383
hash_aref_fix         1.048
hash_aref_flo         1.860
hash_aref_miss        1.107
hash_aref_str         1.107
hash_aref_sym         1.191
hash_aref_sym_long    1.113
hash_flatten          1.258
hash_ident_flo        1.627
hash_ident_num        1.045
hash_ident_obj        1.143
hash_ident_str        1.127
hash_ident_sym        1.152
hash_keys             2.714
hash_shift            2.209
hash_shift_u16        1.442
hash_shift_u24        1.413
hash_shift_u32        1.396
hash_to_proc          2.831
hash_values           2.701

The average performance improvement is more than 50%.  The ARM results
are analogous: no benchmark shows a performance degradation, and the
average improvement is about the same.

The patch can be seen at

https://github.com/vnmakarov/ruby/compare/trunk...hash_tables_with_open_addressing.patch

or in a less convenient way as pull request changes

https://github.com/ruby/ruby/pull/1264/files


This is my first patch for MRI, and maybe my proposal and
implementation have pitfalls.  But I am keen to learn and to work on
the inclusion of this code into MRI.

~~~

---Files--------------------------------
0001-st.c-use-array-for-storing-st_table_entry.patch (46.7 KB)
0001-st.c-change-st_table-implementation.patch (59.4 KB)


-- 
https://bugs.ruby-lang.org/
