Henning Thielemann wrote:
"Markov chain" means, that you have a sequence of random experiments,
where the outcome of each experiment depends exclusively on a fixed number
(the level) of experiments immediately before the current one.
Right. So a "Markov chain" is actually a technical way of describing
something that's intuitively pretty obvious? (E.g., PPM compression
works by assuming that the input data is some sort of Markov chain with
as-yet unknown transition probabilities.)
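To make that concrete, here's a minimal sketch of building such a model in
Haskell: slide a window over the training data and record, for every
length-k context, the symbols that followed it. (The names Model,
buildModel and windows are mine, purely for illustration; the only real
dependency is Data.Map from the containers package.)

import Data.List (tails)
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)

-- An order-k model: each length-k context maps to the list of
-- symbols observed immediately after it in the training data.
type Model a = Map [a] [a]

-- Slide a window of length (k + 1) over the input; the first k
-- elements are the context, the last is that context's successor.
buildModel :: Ord a => Int -> [a] -> Model a
buildModel k xs =
  Map.fromListWith (++)
    [ (take k w, [w !! k]) | w <- windows (k + 1) xs ]

-- All contiguous sublists of length n.
windows :: Int -> [a] -> [[a]]
windows n = takeWhile ((== n) . length) . map (take n) . tails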
> If the level is too high, you will just reproduce the training text.
Yeah, I can see that happening! ;-)
The key, I think, is for the training set to be much larger than what
you want to produce...
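That effect is easy to see if you bolt a generation step onto the sketch
above (generate and the file name are again just illustrative; randomRIO
comes from the random package): at each step, look up the current context
and pick one of its recorded successors at random. With a high order,
almost every context occurred only once in the training text, so each
lookup offers exactly one successor and the generator simply replays the
original. A much larger training set means most contexts have several
successors, which is where the novelty comes from.

import System.Random (randomRIO)

-- Emit up to n symbols, starting from a seed context of length k >= 1.
-- With a high order, most contexts have a single recorded successor,
-- so this degenerates into replaying the training text verbatim.
generate :: Ord a => Model a -> Int -> [a] -> IO [a]
generate _     0 _   = return []
generate model n ctx =
  case Map.lookup ctx model of
    Nothing    -> return []                  -- unseen context: stop
    Just succs -> do
      i <- randomRIO (0, length succs - 1)
      let x = succs !! i
      rest <- generate model (n - 1) (tail ctx ++ [x])
      return (x : rest)

main :: IO ()
main = do
  txt <- readFile "training.txt"   -- hypothetical training file
  let k = 3
  putStrLn =<< generate (buildModel k txt) 500 (take k txt)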