Peter Hickman wrote:
> Actually this is a misreading.

What is a misreading?

> The whole point of the first post was
> to point out the speed-up you get from going from Perl (or any scripting
> language) to C. I used the same Perl code to precompute values to
> create the C header file because I already had code at this point that
> did the work, and the impact of doing the precompute was pretty much
> insignificant to the overall run time. I then used the same technique to
> get the Java implementation. I have never claimed that the C version was
> the best possible implementation, only that it delivered a massive
> performance boost. I think that I have proven that.

Now I'm puzzled - isn't that just conventional wisdom, that a program
transformed by an optimizing compiler will be faster than the same
program interpreted at runtime?

We can see that by running the same C programs through both a compiler
(gcc) and an interpreter (CINT):
http://shootout.alioth.debian.org/gp4sandbox/benchmark.php?test=all&lang=cint&lang2=gcc

CINT scripts can call compiled code, which takes us the next step: a
script runs faster when it glues together compiled functions, so it
pays to have a large library of highly-optimised compiled functions.
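
As a rough illustration of that glue effect - a sketch in Python rather
than CINT, and assuming CPython, where sum() and range() are
implemented in C:

import timeit

def slow_sum(n):
    # every addition here is executed by the interpreter
    total = 0
    for i in range(n):
        total += i
    return total

def fast_sum(n):
    # the same work delegated to a compiled library function
    return sum(range(n))

n = 1000000
print("interpreted loop :", timeit.timeit(lambda: slow_sum(n), number=10))
print("compiled sum()   :", timeit.timeit(lambda: fast_sum(n), number=10))

The second version wins not because the algorithm is different but
because the loop body runs in compiled code; the script is only doing
the gluing.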

We can figure that out without a programming language flame war :-)


> There are optimisations that could be applied to the C version, but they
> are only going to get percentage improvements and are getting much
> harder to measure reliably.
>
> As to the OCaml versions, well, they were submitted and proved to be faster
> than my C version. C is quite a fragile language to develop in, and so
> any high-level language that can deliver that performance is very
> interesting. To me at least.
>
> And all this Java stuff, well that comes from the post by Charles O Nutter:
>
>  > - Write it in C is as valid as write it in Java (as someone else
> mentioned).
>  > Java is at least as fast as C for most algorithms.
>
> It was the "at least as fast as" bit that I disputed, and the whole Java
> sub-thread comes from that.

Here's what Charles O Nutter said:
http://groups.google.com/group/ruby-talk-google/msg/1adf7bdc7ab8aae0

It's not obvious to me what he meant by "Write it in C is as valid as
write it in Java", but the points I'd take from "X is at least as fast
as Y for most algorithms" are:

- it can be true that, with the particular implementations of languages
X and Y on his computer, X is at least as fast as Y, while with the
same or different implementations on your computer it is not

- we don't care about "most algorithms", just the particular ones we
deal with


Now that I've read that old post, I understand why he called your Java
program bogus. With apologies, it must have seemed that you were
deliberately writing bad Java to prove your point. ;-)


> The only requirement is that I can get an implementation of the language
> in question on my Macintosh.

D and Clean are other languages you might look at, but using a
different language won't make what you're trying to do any more
feasible.

In your original post, you listed grid sizes with the number of
permutations, but you only listed N!, the permutations of a single row
1..N.

afaict your approach is to enumerate every distinct Latin square of
side N. From http://mathworld.wolfram.com/LatinSquare.html:

size : Latin squares = N! x (N-1)! x (reduced squares)
1 : 1! x 0! x 1
2 : 2! x 1! x 1
3 : 3! x 2! x 1
4 : 4! x 3! x 4
5 : 5! x 4! x 56
6 : 6! x 5! x 9408
7 : 7! x 6! x 16942080
8 : 8! x 7! x 535281401856
9 : 9! x 8! x 377597570964258816

I don't know much about algorithm analysis, but I'd guess that
generating 5524751496156892842531225600 Latin squares is a bad idea.
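
The arithmetic behind that number, as a quick Python check (the reduced
counts are the third factors in the table above):

import math

# reduced Latin square counts for N = 1..9, from the MathWorld table
reduced = [1, 1, 1, 4, 56, 9408, 16942080, 535281401856,
           377597570964258816]

for n, r in enumerate(reduced, start=1):
    total = math.factorial(n) * math.factorial(n - 1) * r
    print("%d : %d" % (n, total))

The last line it prints is 9 : 5524751496156892842531225600.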

Maybe you can use some of the tricks from the paper "Enumerating
possible Sudoku grids" - the authors' programs are available here:
http://www.afjarvis.staff.shef.ac.uk/sudoku/bertram.html
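
I haven't read their code, but one standard trick of that kind is to
count only "reduced" squares - first row and first column fixed - and
multiply back by N! x (N-1)! at the end, which is exactly where the
factors in the table above come from. A rough sketch in Python (my own
backtracking, not the authors' algorithm):

import math
from itertools import permutations

def count_reduced(n):
    # Latin squares of order n whose first row and first column are
    # both 0, 1, ..., n-1 ("reduced" squares), counted by backtracking
    rows = [tuple(range(n))]                  # first row is fixed

    def extend(r):
        if r == n:
            return 1
        total = 0
        for perm in permutations(range(n)):
            if perm[0] != r:                  # first column is fixed too
                continue
            # reject rows that repeat a symbol in any column
            if any(perm[c] == rows[i][c] for i in range(r) for c in range(n)):
                continue
            rows.append(perm)
            total += extend(r + 1)
            rows.pop()
        return total

    return extend(1)

for n in range(1, 6):
    r = count_reduced(n)
    print(n, r, r * math.factorial(n) * math.factorial(n - 1))

For N = 1..5 that reproduces the reduced counts 1, 1, 1, 4, 56 and the
totals 1, 2, 12, 576, 161280. It only postpones the explosion, though -
even the reduced squares are hopeless to generate at N = 9, which is
presumably where the paper's cleverer counting comes in.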