```
Issue #12142 has been updated by Vladimir Makarov.

Yura Sokolov wrote:
> Vladimir, you act as if I said rubbish or I'm trying to cheat you. It makes me angry.
>
> You wrote:
> > I believe your code above is incorrect for tables of sizes of power of 2.
> > The function should look like h(k,i) = (h(k) + c1 * i + c2 * i^2) mod m,
> > where "c1 = c2 = 1/2 is a good choice". You can not simplify it.
>
> And you cited Wikipedia
> > With the exception of the triangular number case for a power-of-two-sized hash table,
> > there is no guarantee of finding an empty cell once the table gets more than half full
>
> But couple of lines above you cited my cite from Wikipedia:
> > This leads to a probe sequence of h(k), h(k)+1, h(k)+3, h(k)+6, ...
> > where the values increase by 1, 2, 3, ...
>
> **It is** an implementation of the **triangular number** sequence: a single quadratic probing
> sequence that walks across all elements of a `2^n` table.
>
> You can even recall the arithmetic: https://en.wikipedia.org/wiki/Arithmetic_progression
> ````
> (1/2)*i + (1/2)*i*i = i*(i+1)/2 = 1 + 2 + 3 + ... + (i-1) + i
> ````
> Or use Ruby to check your claim:
> ````
> 2.1.5 :002 > p, d = 0, 1; 8.times.map{ a=p; p=(p+d)&7; d+=1; a}
>  => [0, 1, 3, 6, 2, 7, 5, 4]
> 2.1.5 :008 > p = 0; 8.times.map{|i| a=p+ 0.5*i + 0.5*i*i; a.to_i&7}
>  => [0, 1, 3, 6, 2, 7, 5, 4]
> ````
> If you still don't believe me, read this: https://en.wikipedia.org/wiki/Triangular_number
>
> Google Dense/Sparse hash uses this sequence with tables of size `2^n`.
>
> khash now uses quadratic probing with tables of size `2^n`:
> https://github.com/lh3/minimap/blob/master/khash.h#L51-L53
> https://github.com/lh3/minimap/blob/master/khash.h#L273
>
> Or at least state it with less confidence.
>

I am really sorry.  People make mistakes.  I should not have written this in a hurry while running some errands.

Still, I cannot see how using the function `(p + d) & h` instead of `(i << 2 + i + p + 1) & m` visibly speeds up hash tables (which are *memory-bound* code) on modern out-of-order super-scalar CPUs.  That is aside from the advantage (even if tiny, as you write below) of decreasing collisions by using the full hash.

Actually, I did an experiment.  I tried these two functions on an Intel Haswell with a memory access to the same address (so the value stays in the L1 cache) after each function calculation.  I used GCC with -O3 and ran each function 10^9 times.  The result is a bit strange: the code with the function `(i << 2 + i + p + 1) & m` is about 7% faster than the one with the simpler function `(p + d) & h` (14.5s vs. 15.7s).  It is sometimes hard to predict the outcome, as modern x86-64 processors are black boxes, effectively interpreters inside.  But even if the result were the opposite, a value missing from the cache, and the fact that this function is a small part of the code for access by a key, would probably make the difference insignificant.
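For what it is worth, the equivalence Yura points out above is easy to check in Ruby itself.  This standalone sketch (the function names are mine, not from st.c) confirms that the incremental probe and the closed-form quadratic probe generate the same triangular-number sequence, and that one pass is a permutation of all slots of a power-of-two table:

```ruby
# Standalone check (illustrative; not st.c code): the incremental probe
#   pos = (pos + d) & mask; d += 1
# generates the same positions as the closed form h + i*(i+1)/2, and a
# full pass visits every slot of a power-of-two table exactly once.
def triangular_probes(start, mask, count)
  pos, d = start, 1
  count.times.map { cur = pos; pos = (pos + d) & mask; d += 1; cur }
end

def closed_form_probes(start, mask, count)
  count.times.map { |i| (start + i * (i + 1) / 2) & mask }
end

mask = 15  # a table of size 16
incremental = triangular_probes(3, mask, mask + 1)
closed      = closed_form_probes(3, mask, mask + 1)

puts incremental == closed               # true: same sequence
puts incremental.sort == (0..mask).to_a  # true: every slot visited once
```

Both checks print `true`, so the cheap incremental update really is the triangular-number sequence and does not get stuck in a cycle on a `2^n` table.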

> > Also, as I wrote before, your proposal means just throwing away the biggest part of the hash value, even if it is a 32-bit hash.
> > I don't think ignoring the big part of the hash is a good idea, as it probably worsens collision avoidance.
>
> Yes, it will certainly increase the probability of full hash value collisions, but only for *very HUGE* hash tables.
> And it doesn't affect the length of a collision chain (because `2^n` tables use only the low bits).
> It just affects the probability of excess calls to the equality check on values, and not by much:
>
> http://math.stackexchange.com/a/35798
>
> ````
> > N = (2**32).to_f
>  => 4294967296.0
> > n = 100_000_000.0
>  => 100000000.0
> > collisions = n*(1-(1-1/N)**(n-1))
>  => 2301410.50385877
> > collisions / n
>  => 0.0230141050385877
> > n = 300_000_000.0
>  => 300000000.0
> > collisions = n*(1-(1-1/N)**(n-1))
>  => 20239667.356876057
> > collisions / n
>  => 0.06746555785625352
> ````
> In other words, only ~2% of keys suffer a full-hash collision in a Hash with 100_000_000 elements, and ~7% with 300_000_000 elements.
> Can you measure how much time insertion of 100_000_000 elements into a Hash takes (current or your implementation),
> and how much memory it consumes?  Int=>Int?  String=>String?
>
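As a sanity check, the collision estimates in the quote above can be reproduced with a short standalone script using the same formula, `n*(1 - (1 - 1/N)**(n-1))`, for a 32-bit hash space:

```ruby
# Expected number of keys whose full 32-bit hash collides with at least
# one other key, among n keys: n * (1 - (1 - 1/N)**(n - 1)).
HASH_SPACE = (2**32).to_f

def expected_colliding(n)
  n * (1 - (1 - 1.0 / HASH_SPACE)**(n - 1))
end

[100_000_000.0, 300_000_000.0].each do |n|
  c = expected_colliding(n)
  printf("n = %.0f: ~%.0f colliding keys (%.1f%%)\n", n, c, 100 * c / n)
end
```

This reproduces the ~2.3% and ~6.7% rates quoted above.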

On a machine currently available to me,

./ruby -e 'h = {}; 100_000_000.times {|n| h[n] = n }'

takes 7 minutes with my implementation.

The machine does not have enough memory for 300_000_000 elements, so I did not try.

In the GCC community, where I come from, we are happy if all of us together improve SPEC2000/SPEC2006 by 1%-2% during a year.  So if I can use the full hash without a visible slowdown, even if it decreases the number of collisions by only 1% on big tables, I'll take that chance.

> At my work we use huge in-memory hash tables (hundreds of millions of elements) in a custom in-memory DB (not Ruby),
> and it uses a 32-bit hashsum.  No problems at all.
>
> > Also, about storing only part of the hash: can it affect rubygems?  It may be part of the API, but I don't know anything about it.
>
> Gems would need to be recompiled, but no code changes.
>

> > I routinely use a few machines with 128GB of memory for my development.
>
> But you wouldn't run a Ruby process that keeps 100GB of memory in a Ruby Hash; otherwise you'd get into big trouble (with GC, for example).
> If you need to store that amount of data within a Ruby process, you'd better build your own data structure.
> I've made one for my needs:
> https://rubygems.org/gems/inmemory_kv
> https://github.com/funny-falcon/inmemory_kv
> It, too, can store only `2^31` elements, but I hardly believe you will ever store more inside a Ruby process.
>

IMHO, it is better to fix the problems which can occur with tables of more than 2^32 elements than to introduce a hard constraint.  There are machines today with enough memory to hold such tables.  You are right that it is probably not wise to use MRI for such big tables regularly, because MRI is currently slow for this, but it can be used occasionally for prototyping.  And who knows, maybe MRI will become much faster.

But my major argument is that using a 32-bit index does not speed up work with hash tables.  As I wrote, I tried it, and the 32-bit index did not improve performance.  So why should we create such a hard constraint?

> >> Could you imagine that a Hash with 1M elements starts to rebuild?
> > I can.  The current tables do it all the time already, and it means traversing all the elements, as in the proposed tables' case.
>
> The current st_table rebuilds only when its size grows.  Your table will rebuild even if the size does not change much but elements are inserted and deleted repeatedly (1 add, 1 delete, 1 add, 1 delete).
>
> >> Maybe it is better to keep st_index_t prev, next in struct st_table_entry (or struct st_table_elements, as you called it)?
> > Sorry, I cannot catch what you mean.  What should prev and next be used for?
> > How can it avoid table rebuilding, which always means traversing all elements to find a new entry or bucket for them?
>
> Yeah, it is inevitable to maintain a free list for finding a free element.
> But `prev,next` indices will allow inserting new elements in random places (deleted before),
> because iteration will follow these pseudo-pointers.
>
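The churn pattern Yura describes can be illustrated with a deliberately tiny toy model (entirely hypothetical; this is not the proposed st.c design, just a sketch of an append-only entries array with tombstones):

```ruby
# Toy model (hypothetical, not st.c): elements live in an append-only
# entries array; deletion leaves a tombstone; when the array is full,
# the table "rebuilds" by compacting the live entries.
class ToyOrderedTable
  def initialize(capacity)
    @capacity = capacity
    @entries  = []   # [key, tombstone_flag] pairs in insertion order
    @rebuilds = 0
  end

  attr_reader :rebuilds

  def insert(key)
    compact! if @entries.size == @capacity  # array full: rebuild
    @entries << [key, false]
  end

  def delete(key)
    entry = @entries.find { |k, dead| k == key && !dead }
    entry[1] = true if entry                # leave a tombstone
  end

  private

  def compact!
    @entries.reject! { |_, dead| dead }     # drop tombstones, keep order
    @rebuilds += 1
  end
end

t = ToyOrderedTable.new(8)
# 1 add, 1 delete, repeated: the live size never exceeds 1, yet the
# entries array keeps filling with tombstones, forcing compactions.
100.times { |i| t.insert(i); t.delete(i) }
puts "compactions forced by churn: #{t.rebuilds}"
```

With a capacity of 8 and 100 insert/delete pairs, this toy forces 12 compactions even though the table never holds more than one live element; `prev`/`next` indices per entry, as suggested above, would instead allow reusing deleted slots in place at the cost of pointer-style iteration.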

There is no rebuilding if you use hash tables as a stack.  The same can also be achieved for a queue with some minor changes.  As I wrote, there will always be a test where a new implementation behaves worse.  It is a decision about what is better: say, a 50% improvement on some access patterns or an n% improvement on other access patterns.  I don't know how big `n` is.  I tried an approach analogous to what you proposed.  According to my notes (at that time MRI had only 17 tests), the results were the following:

hash_aref_dsym       0.811
hash_aref_dsym_long          1.360
hash_aref_fix        0.744
hash_aref_flo        1.123
hash_aref_miss       0.811
hash_aref_str        0.836
hash_aref_sym        0.896
hash_aref_sym_long           0.847
hash_flatten         1.149
hash_ident_flo       0.730
hash_ident_num       0.812
hash_ident_obj       0.765
hash_ident_str       0.797
hash_ident_sym       0.807
hash_keys            1.456
hash_shift           0.038
hash_values          1.450

But unfortunately they are not representative, as I used prime numbers for the table sizes and plain `mod` for mapping hash values to entry indices.
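For contrast, here is what the two mappings look like side by side (a standalone illustration with hypothetical sizes; a power-of-two table masks, so only the low bits of the hash select the bucket):

```ruby
# Hypothetical sizes for illustration only.
PRIME_SIZE = 1_048_573     # a prime just below 2^20
POW2_MASK  = (1 << 20) - 1 # power-of-two table of size 2^20

def bucket_mod(h)
  h % PRIME_SIZE           # prime-sized table: division-based mapping
end

def bucket_mask(h)
  h & POW2_MASK            # 2^n table: cheap mask, low bits only
end

h1 = 0x12345
h2 = h1 | (1 << 24)        # differs from h1 only in a high bit

puts bucket_mask(h1) == bucket_mask(h2)  # true: mask ignores high bits
puts bucket_mod(h1)  == bucket_mod(h2)   # false: mod mixes them in
```

The mask is cheaper than a division, but it makes the bucket index depend only on the low hash bits, which is exactly why the quality of the low bits matters more for `2^n` tables.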

> Perhaps it is better to make a separate LRU hash structure in the standard library instead,
> and keep the Hash implementation as you suggest.
> I really like this case, but it means Ruby will have two hash table implementations - one for Hash and one for LRU.

I don't know.

In any case, it is not up to me to decide the size of the index and some other things discussed here.  That is probably why I should not have participated in this discussion.

We have spent a lot of time arguing, but what we should do is try.  Only real experiments can prove or disprove our speculations.  Maybe I'll try your ideas, but only after adding my code to MRI trunk.  Before that, I still have to solve the small-table problem people wrote to me about.

----------------------------------------
Feature #12142: Hash tables with open addressing
https://bugs.ruby-lang.org/issues/12142#change-57319

* Status: Open
* Priority: Normal
* Assignee:
----------------------------------------
~~~
Hello, the following patch contains a new implementation of hash
tables (major files st.c and include/ruby/st.h).

Modern processors have several levels of cache.  Usually, the CPU
reads one or a few cache lines from memory (or from another level of
cache), so the CPU is much faster at reading data stored close
together.  The current implementation of Ruby hash tables does not fit
modern processor cache organization well, which requires better
data locality for faster program speed.

The new hash table implementation achieves a better data locality
mainly by

o switching to open addressing hash tables for access by keys.
Removing the hash collision lists lets us avoid *pointer chasing*, a
common problem that produces bad data locality.  I see a tendency
to move from chaining hash tables to open addressing hash tables
due to their better fit to modern CPU memory organization
(https://hg.python.org/cpython/file/ff1938d12240/Objects/dictobject.c).
PHP did this a bit earlier:
https://nikic.github.io/2014/12/22/PHPs-new-hashtable-implementation.html.
GCC has been using such hash tables widely
(https://gcc.gnu.org/svn/gcc/trunk/libiberty/hashtab.c) internally
for more than 15 years.

o removing the doubly linked lists and putting the elements into an
array for accessing elements in their inclusion order.  That also
removes the pointer chasing along the doubly linked lists used for
traversing elements in their inclusion order.

A more detailed description of the proposed implementation can be
found in the top comment of the file st.c.

The new implementation was benchmarked on 21 MRI hash table benchmarks
on the two most widely used targets: x86-64 (Intel 4.2GHz i7-4790K) and ARM
(Exynos 5410 - 1.6GHz Cortex-A15):

make benchmark-each ITEM=bm_hash OPTS='-r 3 -v' COMPARE_RUBY='<trunk ruby>'

Here the results for x86-64:

hash_aref_dsym       1.094
hash_aref_dsym_long          1.383
hash_aref_fix        1.048
hash_aref_flo        1.860
hash_aref_miss       1.107
hash_aref_str        1.107
hash_aref_sym        1.191
hash_aref_sym_long           1.113
hash_flatten         1.258
hash_ident_flo       1.627
hash_ident_num       1.045
hash_ident_obj       1.143
hash_ident_str       1.127
hash_ident_sym       1.152
hash_keys            2.714
hash_shift           2.209
hash_shift_u16       1.442
hash_shift_u24       1.413
hash_shift_u32       1.396
hash_to_proc         2.831
hash_values          2.701

The average performance improvement is more than 50%.  The ARM results
show about the same average improvement.

The patch can be seen as

or in a less convenient way as pull request changes

https://github.com/ruby/ruby/pull/1264/files

This is my first patch for MRI, and maybe my proposal and
implementation have pitfalls.  But I am keen to learn and to work on
the inclusion of this code into MRI.

~~~

--
https://bugs.ruby-lang.org/

```