On Sat, Sep 04, 2004 at 12:40:42PM +0900, James Britt wrote:
> Is there a reason that ordering should be unpredictable?  I'm wondering if, 
> given a set of name/value pairs, using them to create a hash should 
> always give the same ordering, as the hash should be following some 
> specific algorithm to ensure, say, the fastest insertion/retrieval or 
  ==================
the normal algorithm with buckets/bins and collision lists; take a look
at st.c.
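
Roughly, and purely as an illustration (this is a toy sketch in Ruby,
not the actual st.c code):

  # Toy chained hash table -- just an illustration of the bucket /
  # collision-list idea, not the real st.c implementation.
  class ToyHash
    def initialize(nbins = 8)
      @nbins = nbins
      @bins  = Array.new(nbins) { [] }   # one collision list per bin
    end

    def []=(key, value)
      bin = @bins[key.hash % @nbins]
      if (pair = bin.assoc(key))
        pair[1] = value                  # key already there: overwrite
      else
        bin << [key, value]              # append: insertion order within the bin
      end
    end

    def [](key)
      pair = @bins[key.hash % @nbins].assoc(key)
      pair && pair[1]
    end

    # Iteration walks bin 0, bin 1, ... and each collision list in turn,
    # so the overall order depends on how the keys spread over the bins.
    def each
      @bins.each { |bin| bin.each { |k, v| yield k, v } }
    end
  end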

> some such thing.  Not that it just randomly decides how and where to 
> store something.

It's not as if it was doing rand() just for the sake of it :-)

When two hash values collide, the order in the collision list will
depend on the insertion order of the corresponding elements.
If you keep adding elements, the hash will have to use more bins and
rehash the items, so two elements that were in the same bin could end
up in different ones afterwards.
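
With the toy table above in mind (the hash values 3 and 11 below are
made up just for the example): with 8 bins the two values share a bin,
so their relative order is whatever the insertion order was; after
growing to 16 bins and rehashing, they land in different bins and an
in-order walk over the bins visits them differently.

  h1, h2 = 3, 11        # two made-up hash values
  p [h1 % 8,  h2 % 8]   # => [3, 3]   same bin: relative order = insertion order
  p [h1 % 16, h2 % 16]  # => [3, 11]  after growing to 16 bins they part ways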

> So, your example may be plausible, unless it breaks some fundamental 
> aspect of hashes.

Ordered iteration comes at a cost; if you're willing to pay it, you can
use rbtree...
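
Something along these lines, assuming the rbtree extension's Hash-like
interface (check its own docs for the exact API):

  require 'rbtree'      # keys kept sorted in a red-black tree

  t = RBTree.new
  t["banana"] = 2
  t["apple"]  = 1
  t["cherry"] = 3
  t.each { |k, v| puts "#{k} => #{v}" }   # apple, banana, cherry: sorted order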

-- 
Running Debian GNU/Linux Sid (unstable)
batsman dot geo at yahoo dot com