"George Ogata"  wrote
....

> Those tests weren't necessarily representative of the "average" case.
> What is the "average" case anyway?  Why should it be a returns-true case?
> There are more

Hm, I probably should have written the ``average behavior'' with respect to
my test setup.

> combinations which should return false than return true given the
> array/matrix sizes, aren't there?  If you take (non-trivial) best and
> worst cases, you actually see quite different patterns.  On the one hand:

To vary an old statistics saying: "there are lies, damned lies, and
benchmarks" ;-).  On the other hand, these kinds of benchmarks do have their
place.  If you positively have to verify that a given 1000 x 1000 matrix is
in a ``one_in_each'' relation to a 1000-element item array, or more likely
have to actually generate the corresponding bijection, you'll be damned
(i.e. you will never finish in a reasonable amount of time) if you use an
asymptotically (worst case or not) slower algorithm.
 ...
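
For what it's worth, here is a minimal sketch (my own illustration, not any
of the submitted solutions) of why the large case stays tractable: treat it
as bipartite matching between items and rows, where item i may be paired
with row r iff row r contains that item, and grow the matching with
augmenting paths.  The method name and structure are made up for this
example.

  require 'set'

  # Sketch only: decide whether the items can be matched one-to-one with
  # rows that contain them, via the classic augmenting-path algorithm for
  # bipartite matching (polynomial time, no permutation search).
  def one_in_each(items, matrix)
    return false unless items.size == matrix.size

    # Which rows may each item be assigned to?  (The bipartite graph.)
    row_sets = matrix.map { |row| Set.new(row) }
    rows_for = items.map do |item|
      (0...matrix.size).select { |r| row_sets[r].include?(item) }
    end

    row_owner = Array.new(matrix.size)   # row index -> item index using it

    # Depth-first search for an augmenting path starting at item i.
    augment = lambda do |i, seen|
      rows_for[i].each do |r|
        next if seen[r]
        seen[r] = true
        if row_owner[r].nil? || augment.call(row_owner[r], seen)
          row_owner[r] = i
          return true
        end
      end
      false
    end

    # A bijection exists iff every item can be matched to its own row.
    items.each_index.all? { |i| augment.call(i, Array.new(matrix.size, false)) }
  end

Something along these lines finishes the 1000 x 1000 case in polynomial
time, whereas a brute-force search over all permutations of the rows never
does.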

> (The script used to generate these is below.)
>
> Yes, mine is choking on a mere 7 elements in the worst case (and starts
> turning purple at double figures), but at least it's correct.  After
> looking into dnaseby's, I found he cut one too many corners --- this
> should be false, right?:
>
> one_in_each([1,2,3,4,5],
>             [[1,2,3,4,5],[1,2,3,4,5],[1,2,0,0,0],[1,2,0,0,0],[1,2,0,0,0]])

I think so ... the items 3, 4 and 5 only occur in the first two rows, so at
most two of them can be paired with distinct rows; no bijection exists and
the call should return false.

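A quick brute-force check (my own, not part of George's script, and only
feasible at this tiny size, which is exactly his point about the worst
case): a valid assignment would be a permutation p of the row indices with
items[i] contained in matrix[p[i]] for every i, and no such permutation
exists here.

  items  = [1, 2, 3, 4, 5]
  matrix = [[1,2,3,4,5], [1,2,3,4,5], [1,2,0,0,0], [1,2,0,0,0], [1,2,0,0,0]]

  found = (0...matrix.size).to_a.permutation.any? do |p|
    items.each_index.all? { |i| matrix[p[i]].include?(items[i]) }
  end
  p found   # => false (3, 4 and 5 would all need one of the first two rows)
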
>
> To correct this, his worst-case complexity must increase dramatically, I
> think.

In my ``benchmark'' his test was rather slow.

>
> ----
>
> The test script (Bill's, slightly modified; I don't have the unit test
> stuff installed):

Hm, maybe you want to benchmark my solution, Chr0002, as well?
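
In case it helps, this is roughly what I mean (a hedged sketch only, since
the actual test script is elided below; `one_in_each_chr0002` and
`random_case` are made-up names standing in for the Chr0002 solution and
whatever input generator the real script uses):

  require 'benchmark'

  # Hypothetical: build one test case and time the extra candidate with
  # Ruby's standard Benchmark module alongside the existing reports.
  items, matrix = random_case(500)

  Benchmark.bm(10) do |bench|
    bench.report("Chr0002") { one_in_each_chr0002(items, matrix) }
  end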

....

/Christoph