On Sat, 11 Dec 2004 05:23:17 +0900
James Edward Gray II <james / grayproductions.net> wrote:

> On Dec 10, 2004, at 2:15 PM, Brian Schröder wrote:
> 
> > Well, the evaluation routine -100 for lost, +100 for won, 0 otherwise 
> > is not
> > really an evaluation routine, and minimax with alpha-beta pruning can 
> > search the whole problem in 6-7s on my machine.
> 
> So what is your program learning at this point and thus how do you 
> justify it as a valid solution?  ;)
> 

Good question. At least I now have a perfect opponent for my learning
algorithm. The problem is that it is impossible to win against a perfect
opponent in TicTacToe. So I won't learn more than not to lose.
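For reference, here is a minimal Ruby sketch of that perfect opponent: minimax with alpha-beta pruning using the trivial evaluation from above (+100 won, -100 lost, 0 otherwise). The board representation and method names are my own illustration, not the actual code discussed in the thread.

```ruby
# Board: Array of 9 cells, each :x, :o, or nil.
WINS = [[0, 1, 2], [3, 4, 5], [6, 7, 8],
        [0, 3, 6], [1, 4, 7], [2, 5, 8],
        [0, 4, 8], [2, 4, 6]]

def winner(board)
  WINS.each do |a, b, c|
    return board[a] if board[a] && board[a] == board[b] && board[b] == board[c]
  end
  nil
end

# Score of `board` from `player`'s point of view, `to_move` to play.
# Trivial evaluation: +100 win, -100 loss, 0 draw / undecided.
def alphabeta(board, player, to_move, alpha = -1000, beta = 1000)
  w = winner(board)
  return (w == player ? 100 : -100) if w
  moves = (0..8).select { |i| board[i].nil? }
  return 0 if moves.empty?                     # draw
  moves.each do |i|
    board[i] = to_move
    score = alphabeta(board, player, to_move == :x ? :o : :x, alpha, beta)
    board[i] = nil
    if to_move == player
      alpha = [alpha, score].max               # maximizing player
    else
      beta = [beta, score].min                 # minimizing opponent
    end
    break if alpha >= beta                     # prune
  end
  to_move == player ? alpha : beta
end

# Pick the move with the best minimax score for `player`.
def best_move(board, player)
  (0..8).select { |i| board[i].nil? }.max_by do |i|
    board[i] = player
    score = alphabeta(board, player, player == :x ? :o : :x)
    board[i] = nil
    score
  end
end
```

Searching the full game tree from the empty board this way confirms the point above: perfect play on both sides scores 0, i.e. TicTacToe is a draw, so the best a learner facing this opponent can achieve is not losing.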

Regards,

Brian


-- 
Brian Schröder
http://ruby.brian-schroeder.de/