Hi all,

I am using a discrete Hidden Markov Model with discrete observations in
order to detect a sequence of integers. I am using the "hmm.discnp" package.

I am using the following code:

signature <- c(-89, -98, -90, -84, -77, -75, -64, -60, -58, -55, -56, -57,
-57, -63, -77, -81, -82, -91, -85, -89, -93)

quant <- length(-110:-6)

# Initialize and train the HMM with the observed sequence above.
# "yval" lists the possible observation values and K is the number of
# hidden states.
my_hmm <- hmm(y=signature, yval=c(-110:-6), K=5)

print(my_hmm)


The above shows that the HMM was trained on "signature", and the estimated
parameters look plausible.

My question is a more fundamental one about understanding HMMs. I know I
should train the HMM on more examples of the above sequence to make it more
robust. Assuming the HMM is trained well enough, I can use the Viterbi
algorithm to find the most probable sequence of hidden states. However, what
I really want to find out is whether a particular observed sequence is
modeled by my HMM (created above). There is a viterbi() function in
hmm.discnp and also mps(), but both of them give the most probable
hidden-state sequence, whereas I want the probability of a particular
observed sequence, that is, the likelihood of an arbitrary observed sequence
under the model. This is the classic "Evaluation Problem" for HMMs, but I do
not see a function in hmm.discnp for calculating this.
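For what it's worth, the Evaluation Problem is usually solved with the
forward algorithm, which is short enough to sketch in base R. The following
is a minimal sketch, not hmm.discnp's API: it assumes you can extract from
the fitted model a K x K transition matrix `tpm`, an emission matrix `Rho`
(one row per value in `yval`, one column per state), and an initial state
distribution `ispd` (these names and shapes are my assumption); the scaling
step keeps the recursion from underflowing on long sequences.

```r
# Forward algorithm: log-likelihood of an observed sequence y under a
# discrete HMM. Assumed inputs (not hmm.discnp's API):
#   yval : vector of possible observation values
#   tpm  : K x K transition probability matrix (rows sum to 1)
#   Rho  : emission matrix; Rho[m, k] = P(obs = yval[m] | state k)
#   ispd : length-K initial state distribution
forward_loglik <- function(y, yval, tpm, Rho, ispd) {
  idx <- match(y, yval)             # map observations to rows of Rho
  alpha <- ispd * Rho[idx[1], ]     # forward variable at time 1
  s <- sum(alpha)
  ll <- log(s)
  alpha <- alpha / s                # rescale to avoid underflow
  for (t in seq_along(idx)[-1]) {
    # propagate one step through the chain, then weight by the emission
    alpha <- as.vector(alpha %*% tpm) * Rho[idx[t], ]
    s <- sum(alpha)
    ll <- ll + log(s)
    alpha <- alpha / s
  }
  ll                                # log P(y | model)
}
```

As a sanity check, with everything uniform (two symbols, two states, all
probabilities 0.5), a length-3 sequence should come out at 3 * log(0.5).
It may also be worth scanning the package index, e.g.
library(help = "hmm.discnp"), in case a ready-made likelihood function is
already provided.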

Am I missing something?

Thanks for the help.

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.