Jp Hastings-spital wrote:

> I'd like to be able to train a class to return (let's say) *how* happy
> a phrase is. So I could train it with 100 phrases that were between
> -100% happy (i.e. sad), 0% happy (neutral) and 100% happy, and then on
> entering a new phrase it would return the percentage happy that phrase
> was.
> 
> Am I looking for Bayesian analysis? Am I missing some feature of the
> Classifier class? Should I be looking elsewhere for this functionality?

Well, pretty much. This is how Bayesian spam filters work: by training 
against a set of messages (decomposed into words) to learn what junk 
email is made (and not made) of.

What you get out at the end, when you point the classifier at new 
texts, is a probability that each one belongs to class 'x'.
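For example, assuming you're using the Ruby classifier gem 
(Classifier::Bayes) - the class names and training strings below are 
just made up for illustration:

  require 'classifier'

  # two classes, exactly like a spam filter
  b = Classifier::Bayes.new 'Junk', 'Ham'

  b.train 'Junk', 'cheap pills, buy now, limited offer'
  b.train 'Ham',  'meeting moved to thursday, agenda attached'

  b.classify 'buy cheap pills now'   # => "Junk"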

But in the standard classifiers you don't do the training with 'scores', 
merely with the absence or presence of a class. How you would decide at 
the outset, for training, that a phrase is 57% and not 56% happy is hard 
to see - if you already know that, you already have your algorithm.

What you might do is train the classifier to know about a number of 
emotional classes, e.g.:

'ecstatic'
'cheerful'
'sad'
'despairing'

They would obviously overlap, but the resulting scores (probabilities) 
might then help you distinguish everyday happy from very very very 
happy.
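
Sketching that with the same gem (again, an assumption on my part - the 
class names and phrases here are mine, purely illustrative):

  require 'classifier'

  moods = Classifier::Bayes.new 'Ecstatic', 'Cheerful', 'Sad', 'Despairing'

  moods.train 'Ecstatic',   'best day ever, absolutely over the moon'
  moods.train 'Cheerful',   'quite pleased with how the morning went'
  moods.train 'Sad',        'feeling a bit down and disappointed today'
  moods.train 'Despairing', 'everything is ruined, utterly hopeless'

  # classifications gives a score per class (log scores in this gem,
  # I believe, so the value closest to zero wins) - you could map the
  # relative scores onto your -100%..100% scale yourself
  p moods.classifications 'what a lovely, lovely day'

You'd need a lot more than one training phrase per class before the 
scores mean much, of course.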

a