I should probably describe the context in which this situation arises.

Given a vector of vectors:

v = [ [... , ... , ... ... ]
      [... , ... , ... ... ]
      ...
    ]

I've got a routine #norm_v which normalizes each entry based on some
global information about the vector of vectors.
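
For concreteness, here's a stripped-down sketch of the shape of #norm_v
(scaling by the global maximum is just a stand-in for whatever the real
normalization does):

  # Sketch only: normalize each entry using some global property of the
  # whole vector of vectors -- here simply the overall maximum.
  def norm_v(vectors)
    max = vectors.flatten.max.to_f
    vectors.map { |vec| vec.map { |x| x / max } }
  end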

In another similar situation I have a hash table of tagged vectors:

h = { 't1' =>  [... , ... , ... ... ]
      't2' =>  [... , ... , ... ... ]
       ...
    }

A simple way to normalize the hash without duplicating functionality would
be to say:

  keys, values = h.keys, h.values    # disassemble
  new_vals = norm_v(values)          # normalize
  Hash.make(keys, new_vals)          # reassemble

Intuition and simple testing indicate that #keys and #values return hash
entries in the same order.  But I haven't seen this documented anywhere and
just wanted to double-check that I can count on this behavior.
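
A quick sanity check along those lines, for any given hash h:

  # Pairing keys with values by position should reproduce the
  # original associations if the two orderings really do match.
  vals = h.values
  h.keys.each_with_index { |k, i|
    raise "order mismatch" unless h[k] == vals[i]
  }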

Raja

  # Build a hash from parallel arrays of keys and values.
  def Hash.make(keys, vals)
    h = {}
    keys.each_with_index { |k, i|
      h[k] = vals[i]
    }
    h
  end
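
With that helper, the round trip from the quoted question below can be
checked directly:

  h2 = Hash.make(h.keys, h.values)
  p h2 == h      # should print true if #keys and #values line up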


raja / cs.indiana.edu (Raja S.) writes:

> Given a hash, h1, will the following always hold?

>   keys, values = h1.keys, h1.values
>   h2 = Hash.make (keys,values)         # Hash::make is user defined
>   h1 == h2

> More than equality, I'm interested in the sequence of values returned by
> Hash#keys and Hash#values. Is this sequence deterministic and the same?  Is
> this order the same as that obtained when one #inspects a hash? Simple
> testing seems to indicate so.

> Thanks,
> Raja

> p.s. is there a way to override a class method like Hash.new -without-
> sub-classing?  It appears that class methods can't be aliased, hence
> redefining something like Hash.new doesn't seem possible?