On 10/18/05, Ara.T.Howard <Ara.T.Howard / noaa.gov> wrote:
>
> a couple of observations: zach's method destroys the array at each iteration
> and is unique in this respect.  to be fair a 'dup' must be added to his
> approach.  your arrays are composed in a quite regular fashion which gives the
> algorithms that favour linear search a huge advantage - a random distribution
> of dups makes the task much harder.  i implemented changes for the above and
> now:

Thanks a lot for this, Ara; you make some excellent points. I did think
about how the linear nature of the test arrays might skew the results,
but I never got around to randomizing them. I also assumed that all
the algorithms worked correctly and non-destructively. I considered
adding some tests to ensure they all returned the same result, but
didn't bother.
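
For what it's worth, here's a rough sketch of the kind of check I had in
mind. The two lambdas are just stand-in dup-finders (not the actual
implementations from the thread), and the randomized input is along the
lines Ara suggested:

  # hypothetical sanity check: every candidate should return the same
  # duplicates and leave its input untouched
  candidates = {
    'hash count'  => lambda { |a|
      counts = Hash.new(0)
      a.each { |e| counts[e] += 1 }
      counts.reject { |_, n| n < 2 }.map { |e, _| e }.sort
    },
    'nested scan' => lambda { |a|
      a.select { |e| a.grep(e).size > 1 }.uniq.sort
    }
  }

  # random distribution of dups, rather than a regular layout
  input    = Array.new(5_000) { rand(500) }
  original = input.dup

  results = candidates.map do |name, algo|
    dups = algo.call(input)
    raise "#{name} modified its input!" unless input == original
    dups
  end

  raise "the candidates disagree!" unless results.uniq.size == 1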

> also, your concept of 'big' doesn't seem that, er, big - running with larger
> numbers is also revealing - i started a process with

I know it wasn't very big, but I figured it would be big enough to
show how the algorithms scaled. Plus I'm using my work laptop and have
to get some work done, so I can't wait all day while Ruby eats up all
my CPU :)
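
To be concrete about what "big enough to show how the algorithms scaled"
means, something like the following quick check is what I had in mind
(the hash-based dup-finder is just an illustrative stand-in, and the
sizes are arbitrary):

  require 'benchmark'

  # one illustrative dup-finder, timed at a few sizes; what matters is
  # how the runtime grows from row to row, not the absolute numbers
  find_dups = lambda { |a|
    counts = Hash.new(0)
    a.each { |e| counts[e] += 1 }
    counts.reject { |_, n| n < 2 }.map { |e, _| e }
  }

  [10_000, 50_000, 250_000].each do |n|
    data = Array.new(n) { rand(n / 2) }
    secs = Benchmark.realtime { find_dups.call(data) }
    printf("n = %7d  %.3fs\n", n, secs)
  end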

Like Paul, I've found this thread very enlightening, and it will make
me think twice before using a "cute" solution (though you don't always
need the highest performer). That said, James' solution is pretty cute
and fast, so good job.
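
Just to make the "cute vs. fast" trade-off concrete (these aren't the
exact solutions posted in the thread, only a nested-scan one-liner next
to a hash-of-counts version):

  data = Array.new(5_000) { rand(500) }

  # cute: a one-line nested scan, O(n^2) but easy to read
  cute = data.select { |e| data.grep(e).size > 1 }.uniq

  # faster: a single pass with a hash of counts, O(n)
  counts = Hash.new(0)
  data.each { |e| counts[e] += 1 }
  fast = counts.reject { |_, n| n < 2 }.map { |e, _| e }

  # both should agree on which elements are duplicated
  p cute.sort == fast.sort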

Either way, this thread was a very good demonstration of Ruby's TIMTOWTDI nature.

Ryan