Hi,

From: "Tom Sawyer" <transami / transami.net>
>
> while we're at it i decided to run a simple speed compare just to see
> how they match up. they are all pretty close. these are the fastest
> reported times i got for each solution after running each one many many
> times:
> 
> Transami  AviBryant  BillKelly  DavidNaseby  DavidBlack
> .005810   .005827    .005831    .006141      .009374

I thought it might be interesting to try some timing with a
larger dataset...

The solutions, surprisingly to me, polarized into two groups,
one of which was two orders of magnitude faster than the other.
(However, two of the solutions in the "fast" group failed to
produce the correct answer.)

# (ruby 1.6.6 (2001-12-26) [i586-mswin32])
# dblack   (1000 iterations) 19.188 (0.019188 per iter)  [failed test]
# transami (1000 iterations) 20.469 (0.020469 per iter)  [failed test]
# dnaseby  (1000 iterations) 61.609 (0.061609 per iter)
# george   (10 iterations)   40.638 (4.0638 per iter)
# avi      (10 iterations)   68.238 (6.8238 per iter)
# bwk      (10 iterations)   78.503 (7.8503 per iter)

I'm surprised at the polarization, and at the disparity between
the fast and slow groups here... but I haven't tried to understand
it yet.
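
(One way to find out would be to time each solution over increasing
nitems and watch how the per-iteration cost scales.  A sketch --
where gen_data is a hypothetical helper that builds the items and
rows the same way test_big does below:)

# hypothetical scaling probe; gen_data(n) is assumed to build the
# dataset the same way test_big does below
[10, 20, 30, 40, 50, 60].each do |n|
  items, rows = gen_data(n)
  t1 = Time.now
  bwk_one_in_each(items, rows)
  t2 = Time.now
  puts "n=#{n}: #{t2 - t1} sec"
end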


For what it's worth, here's the code used to generate the
dataset and run the test.  (I removed the "assert" when generating
the times above, and changed 'niter' manually for the two
groups...)


class Array   # from http://www.rubygarden.org/ruby?OneLiners
  # in-place Fisher-Yates shuffle
  def shuffle!
    each_index {|j| i = rand(size-j); self[j], self[j+i] = self[j+i], self[j]}
  end
end
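
(As an aside, with srand fixed the shuffle is deterministic, which
is what makes the dataset below reproducible.  A quick check:)

# sanity check: same ordering on every run with the same seed
srand 314159
a = [*1..10]
a.shuffle!
p a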

require 'test/unit'
class TestMe < Test::Unit::TestCase
  def test_big
    puts "\n------------------"
    srand 314159
    nitems = 60  # if increased to just 70, the slow solutions take "forever" !
    big_items = [*1..nitems]
    big_items.shuffle!
    big_rows = []
    big_items.each_with_index do |elm,idx|
      big_rows[idx] = []
      5.times { big_rows[idx] << rand(1000) }
      big_rows[idx] << elm
      5.times { big_rows[idx] << rand(1000) }
    end
    big_items.shuffle!
    
    niter = 10
    auths = %w(dblack dnaseby transami avi bwk george)
    auths.each do |auth|
      eval <<-EOE
        print " #{auth} "
        t1 = Time.now
        niter.times { assert( #{auth}_one_in_each(big_items, big_rows) ) }
        t2 = Time.now
        
        puts "(\#{niter} iterations) \#{(t2-t1)} (\#{(t2-t1)/niter.to_f} per iter)"
      EOE
    end
  end
end
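
(Side note: the eval-of-a-heredoc above could probably be replaced
with send, which avoids building code in a string -- untested, but
the loop might read:)

auths.each do |auth|
  print " #{auth} "
  t1 = Time.now
  niter.times { assert( send("#{auth}_one_in_each", big_items, big_rows) ) }
  t2 = Time.now
  puts "(#{niter} iterations) #{t2 - t1} (#{(t2 - t1) / niter.to_f} per iter)"
end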


So, for instance, on my system, the code consistently generates
a big_items of:
 [27, 54, 31, 3, 26, 24, 15, 53, 39, 37, 6, 43, 1, 13, 18, 30,
  40, 14, 32, 22, 57, 17, 8, 48, 46, 36, 25, 34, 5, 7, 12, 38,
  42, 2, 11, 10, 50, 28, 49, 47, 51, 44, 35, 19, 59, 23, 55, 60,
  33, 52, 21, 56, 45, 9, 20, 58, 41, 16, 29, 4]

And a big_rows of:
 [[727, 441, 342, 179, 535, 19, 888, 996, 397, 843, 573],
  [626, 899, 823, 204, 951, 14, 498, 617, 104, 224, 260],
  [51, 228, 818, 872, 165, 8, 657, 354, 370, 637, 492],
  [653, 420, 583, 541, 292, 4, 773, 809, 721, 42, 72],
  [13, 555, 976, 661, 984, 57, 703, 518, 551, 332, 910],
  . . .
 ]


. . . Anyway, I'd be interested in anyone's thoughts on what's
going on here...  (Hope I didn't botch the tests somehow -
but I don't think so!)


Thanks & Regards,

Bill