```
----- Original Message -----
From: "Mauricio Fernández" <batsman.geo / yahoo.com>

<snip very cool code>

batsman@kodos:~/germany2/src$ ruby markov2.rb
Trying to approximate NVector.float(6):  [ 0.4, 0.2, 0.2, 0.1, 0.001, 0.099 ]
Needed 112 iterations.
Got transition matrix: NMatrix.float(6,6):
[ [ 0.0, 0.339637, 0.339724, 0.161189, 0.0, 0.15945 ],
[ 0.749724, 0.0, 0.122931, 0.0633426, 0.00125589, 0.0627467 ],
[ 0.749559, 0.12309, 0.0, 0.0633408, 0.00126638, 0.0627433 ],
[ 0.503809, 0.196678, 0.196678, 0.0, 0.00252385, 0.100312 ],
[ 0.4004, 0.2002, 0.2002, 0.1001, 0.0, 0.0990991 ],
[ 0.498523, 0.198466, 0.198466, 0.101994, 0.00255094, 0.0 ] ]
Stationary distribution probabilities
0.3999909783 0.199990125 0.1999928201 0.1000090866 0.001009401954 0.09900758802

0.5888367575
0.1409106736
0.1409106736
0.0647154815
0.0006056404603
0.0640207732

which corresponds to the matrix
NMatrix.float(6,6):
[ [ 0.0, 0.342712, 0.342712, 0.157396, 0.00147299, 0.155706 ],
[ 0.68542, 0.0, 0.164023, 0.0753303, 0.00070498, 0.0745217 ],
[ 0.68542, 0.164023, 0.0, 0.0753303, 0.00070498, 0.0745217 ],
[ 0.62958, 0.150661, 0.150661, 0.0, 0.000647547, 0.0684506 ],
[ 0.589194, 0.140996, 0.140996, 0.0647547, 0.0, 0.0640596 ],
[ 0.629113, 0.150549, 0.150549, 0.069142, 0.000647066, 0.0 ] ]

Somewhat different, as you can see.
----------------------------

(Just to clarify, those numbers from my program are the weights that go into
the bonehead algorithm, *NOT* their probabilities, which are:

0.3999999998780646
0.2000000000439784
0.2000000000439784
0.100000000017005
0.001000000000165337
0.09900000001680012
)

(I will call the objects being weighted:  a, b, c, d, e, f.)

Yes, those transition matrices are quite different.  For example, suppose we
have just gotten an `a'.

Your matrix gives absolutely no chance of an `e' following it.  Also, the
chances of getting a `b' or a `c' are not quite equal.

On the other hand, suppose we have just gotten an `e'.  Your distribution
seems much more natural there.

And after a `b' or `c', you seem to weight `a' more heavily than I do,
making mine seem more natural in that (more common) case.

I still think I would prefer the matrix given by my own algorithm, because
it is more "evenly uneven":  each row is sort of weird *in the same way*.

----------------------------
> In any case, as I mentioned in #69742, I think this is simply the wrong (if
> more fun) approach.
========

That is the most important factor for my psychological satisfaction :-)
----------------------------

:)  Fair enough!  Most users care more about the output than the
algorithm... but not us!

Chris

```
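The stationary-distribution check being discussed in the thread can be sketched in plain Ruby, without the NArray extension the original script uses. This is an illustrative example with a hypothetical 3-state matrix, not one of the matrices quoted above: it estimates the stationary distribution of a row-stochastic transition matrix by power iteration, the same kind of "iterate until it settles" approach the quoted output ("Needed 112 iterations") suggests.

```ruby
# Sketch: stationary distribution of a Markov chain via power iteration.
# Plain arrays stand in for NMatrix/NVector; the matrix is hypothetical.

def stationary(p, iterations = 1000)
  n = p.size
  v = Array.new(n, 1.0 / n)           # start from the uniform distribution
  iterations.times do
    # one power-iteration step: v' = v * P (row vector times matrix)
    v = (0...n).map { |j| (0...n).sum { |i| v[i] * p[i][j] } }
  end
  v
end

# A small 3-state row-stochastic example (each row sums to 1, zero diagonal,
# mirroring the shape of the matrices in the thread).
p = [
  [0.0, 0.5, 0.5],
  [0.7, 0.0, 0.3],
  [0.4, 0.6, 0.0],
]

pi = stationary(p)
# For an irreducible, aperiodic chain this converges to the unique pi
# satisfying pi * P == pi (up to floating-point error).
```

Comparing the resulting `pi` against a target distribution is then just an element-wise check, which is essentially what the "Trying to approximate ... Needed N iterations" output above is reporting.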