Christoph wrote:

>> . . . Anyway, I'd be interested in anyone's thoughts on what's
>> going on here......  (Hope I didn't botch the tests somehow -
>> but I don't think so!)
>

> From the output of my little benchmarks script which generates
> ``one_in_each''  random square (matrix) examples,  I'd say that most
> published solutions (including yours) are on the average O(n^3)  (n x n
>  = size of the matrix) solutions, but one can write solutions which are
> on the average O(n^2).
>
>
> ---

Those tests weren't necessarily representative of the "average" case.  What is the
"average" case anyway?  Why should it be a returns-true case?  Given the array/matrix
sizes, there are more combinations that should return false than return true,
aren't there?  If you take (non-trivial) best and worst cases, you actually see quite
different patterns.  On the one hand:

------------------
Using SlowFailArgs, nitems = 7
dnaseby (10 iterations) 0.055341 (0.0055341 per iter)
george (10 iterations) 1.627122 (0.1627122 per iter)

------------------
Using SlowFailArgs, nitems = 7
dnaseby (10 iterations) 0.051418 (0.0051418 per iter)
george (10 iterations) 1.614257 (0.1614257 per iter)

------------------
Using SlowFailArgs, nitems = 7
dnaseby (10 iterations) 0.050687 (0.0050687 per iter)
george (10 iterations) 1.639815 (0.1639815 per iter)

.... and on the other:

------------------
Using QuickFailArgs, nitems = 300
dnaseby (10 iterations) 3.198134 (0.3198134 per iter)
george (10 iterations) 0.658672 (0.0658672 per iter)

------------------
Using QuickFailArgs, nitems = 300
dnaseby (10 iterations) 3.216936 (0.3216936 per iter)
george (10 iterations) 0.659457 (0.0659457 per iter)

------------------
Using QuickFailArgs, nitems = 300
dnaseby (10 iterations) 3.227569 (0.3227569 per iter)
george (10 iterations) 0.662572 (0.0662572 per iter)

(The script used to generate these is below.)

Yes, mine is choking on a mere 7 elements in the worst case (and starts turning purple at
double figures), but at least it's correct.  After looking into dnaseby's, I found he cut
one too many corners --- this should be false, right?:

one_in_each([1,2,3,4,5], [[1,2,3,4,5],[1,2,3,4,5],[1,2,0,0,0],[1,2,0,0,0],[1,2,0,0,0]])

To correct this, his worst-case complexity must increase dramatically, I think.
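For what it's worth, correctness needn't cost exponential worst-case time: the problem is just bipartite matching between items and rows.  Here's a sketch of my own (not any of the posted solutions, and the name is made up) using simple augmenting paths; worst case is roughly O(n^3) for an n x n input, and it gets the example above right:

```ruby
# Bipartite matching via augmenting paths: each item must be assigned
# to a distinct row that contains it.
def one_in_each_matching(items, rows)
  return false if items.length != rows.length
  # adj[i] = indices of rows that contain items[i]
  adj = items.map { |item| (0...rows.length).select { |j| rows[j].include?(item) } }
  match = Array.new(rows.length, nil)  # match[j] = item index assigned to row j

  # Try to assign item i, possibly re-assigning items along an augmenting path.
  augment = lambda do |i, seen|
    adj[i].each do |j|
      next if seen[j]
      seen[j] = true
      if match[j].nil? || augment.call(match[j], seen)
        match[j] = i
        return true
      end
    end
    false
  end

  items.each_index.all? { |i| augment.call(i, Array.new(rows.length, false)) }
end

p one_in_each_matching([1,2,3,4,5],
    [[1,2,3,4,5],[1,2,3,4,5],[1,2,0,0,0],[1,2,0,0,0],[1,2,0,0,0]])  # => false
```

It returns false on the counterexample because items 3, 4 and 5 all compete for the first two rows, so no perfect matching exists.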

-----

The test script (Bill's, slightly modified; I don't have the unit test stuff installed):

#!/usr/bin/ruby

### The algorithms

def one_in_each(items, rows)
  return false if items.length != rows.length
  return true if rows.empty?
  item_matches = []
  items.each_with_index do |item, index|
    item_matches << find_matches(item, rows)
    return false if item_matches[index].empty?
  end
  taken_indices = []
  rows.length.times do
    item_matches.each_with_index do |matches, index|
      if matches.length == 1
        return false if taken_indices.include? matches
        taken_indices << matches
        item_matches.delete_at(index)
      else
        matches.delete_if {|i| taken_indices.include?(i)}
      end
    end
    return true if taken_indices.length == rows.length
  end
  return item_matches.flatten.uniq.length + taken_indices.length == rows.length
end

def find_matches(item, rows)
  indexArr = []
  rows.each_with_index do |row, index|
    indexArr << index if row.detect {|toMatch| toMatch == item }
  end
  return indexArr
end

def one_in_each2(i, r)
  i == [] || r.find {|s|
    s.include?(i[0]) && one_in_each2(i[1..-1], r.reject {|e| s.object_id == e.object_id})
  } != nil
end

### Arg generators -- "Quick"/"Slow" correspond to one_in_each2

QuickFailArgs = proc {|n| [Array.new(n).map!{0}, Array.new(n).map!{Array.new(n).map!{1}}] }
SlowFailArgs = proc {|n|
  if n == 0
    [[], []]
  else
    i = Array.new(n, 0)
    r = Array.new(n).map {e = Array.new(n, 1); e[-1] = 0; e}
    r[-1][-1] = 1
    [i, r]
  end
}
QuickPassArgs = proc {|n|
  [(1..n).to_a, (1..n).map {|i| Array.new(n, i)}]
}
SlowPassArgs = proc {|n|
  [(1..n).to_a, (0...n).to_a.map! {|i| (1..(n - i)).to_a + Array.new(i, 0)}]
}

### BK's test (modified)

class Array   # from http://www.rubygarden.org/ruby?OneLiners
  def shuffle!
    each_index {|j| i = rand(size - j); self[j], self[j + i] = self[j + i], self[j]}
  end
end

#require 'test/unit'
#class TestMe < Test::Unit::TestCase
def test_big
  puts "\n------------------"
  srand 314159

  nitems = 300  # if increased to just 70, the slow solutions take "forever" !

  big_items = [*1..nitems]
  big_items.shuffle!
  big_rows = []
  big_items.each_with_index do |elm, idx|
    big_rows[idx] = []
    5.times { big_rows[idx] << rand(1000) }
    big_rows[idx] << elm
    5.times { big_rows[idx] << rand(1000) }
  end
  big_items.shuffle!
  argspec = 'Random pass'

  # Uncomment one.
  #big_items, big_rows = QuickPassArgs.call(nitems); argspec = 'QuickPassArgs'
  #big_items, big_rows = SlowPassArgs.call(nitems);  argspec = 'SlowPassArgs'
  #big_items, big_rows = QuickFailArgs.call(nitems); argspec = 'QuickFailArgs'
  #big_items, big_rows = SlowFailArgs.call(nitems);  argspec = 'SlowFailArgs'
  puts "Using #{argspec}, nitems = #{nitems}"

  niter = 10
  #auths = %w(dblack dnaseby transami avi bwk george)
  auths = %w(dnaseby george)
  auths.each do |auth|
    eval <<-EOE
      print " #{auth} "
      t1 = Time.now
      #niter.times { assert( #{auth}_one_in_each(big_items, big_rows) ) }
      niter.times {
        case auth
        when 'dnaseby'
          one_in_each(big_items, big_rows)
        when 'george'
          one_in_each2(big_items, big_rows)
        end
      }
      t2 = Time.now
      puts "(\#{niter} iterations) \#{(t2-t1)} (\#{(t2-t1)/niter.to_f} per iter)"
    EOE
  end
end
#end

3.times{test_big}