Hi,

That depends on how you want to use this number.

The easiest thing to get is a model score (look at the 
ConstrainedDecoding feature for forced decoding), but that is not a 
probability and fairly useless on its own; perhaps you can use the 
scores of the individual models, though.

You could also generate an n-best list, re-normalize the model scores 
to probabilities (for a very relaxed definition of probability as 
'numbers that sum to 1') and look for your sentence there. This will 
quite likely give terrible results, as the exact sentence will most 
often not appear in the n-best list, even for large n.
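The re-normalization step could be sketched like this (assuming the usual Moses -n-best-list output format, where the last |||-separated field of each line is the total log score; the feature names and numbers in the toy lines below are made up):

```python
import math

def nbest_to_probs(nbest_lines):
    """Parse Moses n-best lines ('id ||| hypothesis ||| features ||| total score')
    and re-normalize the total log scores so they sum to 1 (a softmax)."""
    hyps, scores = [], []
    for line in nbest_lines:
        fields = [f.strip() for f in line.split("|||")]
        hyps.append(fields[1])
        scores.append(float(fields[-1]))  # total model score, log domain
    m = max(scores)                       # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    probs = {}
    for h, e in zip(hyps, exps):
        # the same surface string can appear several times with different
        # phrase segmentations, so sum rather than overwrite
        probs[h] = probs.get(h, 0.0) + e / z
    return probs

nbest = [
    "0 ||| das haus ||| tm= -1.2 lm= -2.0 ||| -3.2",
    "0 ||| das gebaeude ||| tm= -2.0 lm= -2.5 ||| -4.5",
]
probs = nbest_to_probs(nbest)
```

Then you simply look up your target sentence in the resulting dict (and, as said, most of the time it won't be there).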

On the other hand, you should be able to derive the probability under 
IBM Model 1 from the lexicon and language model probabilities, but its 
usefulness depends on your application.
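The Model 1 computation itself is a one-liner once you have lexical probabilities t(e|f): P(e|f) = eps/(l_f+1)^l_e * prod_j sum_i t(e_j|f_i), with a NULL source word. A toy sketch (the lexicon dict here is hand-made illustration data, not from a real model; you would have to fill it from your training output):

```python
def ibm1_prob(e_words, f_words, t, epsilon=1.0):
    """IBM Model 1: P(e|f) = eps / (lf+1)^le * prod_j sum_i t(e_j | f_i),
    where f_0 is the NULL source word."""
    f_ext = ["NULL"] + f_words                # prepend the NULL token
    prob = epsilon / (len(f_ext) ** len(e_words))
    for e in e_words:
        prob *= sum(t.get((e, f), 0.0) for f in f_ext)
    return prob

# toy lexicon probabilities t(e|f) -- made-up numbers for illustration
t = {("the", "la"): 0.4, ("the", "NULL"): 0.1, ("house", "maison"): 0.8}
p = ibm1_prob(["the", "house"], ["la", "maison"], t)
```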

Finally, you can compute word posterior probabilities from an n-best 
list and generalize them to the sentence level, again with the caveat 
that you need to decide how to handle unknown words.
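One simplistic, position-independent variant of this (many refinements exist; the zero posterior for unseen words below is exactly the unknown-word decision you have to make):

```python
import math

def word_posteriors(hyps_scores):
    """hyps_scores: list of (hypothesis string, total log score).
    Posterior of a word = sum of the normalized probabilities of all
    hypotheses containing it (ignoring word positions)."""
    m = max(s for _, s in hyps_scores)
    exps = [math.exp(s - m) for _, s in hyps_scores]
    z = sum(exps)
    post = {}
    for (hyp, _), p in zip(hyps_scores, exps):
        for w in set(hyp.split()):
            post[w] = post.get(w, 0.0) + p / z
    return post

def sentence_posterior(sentence, post):
    # words that never occur in the n-best list get posterior 0.0 here
    prob = 1.0
    for w in sentence.split():
        prob *= post.get(w, 0.0)
    return prob

hyps = [("das haus", -1.0), ("das gebaeude", -2.0)]
post = word_posteriors(hyps)
```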

cheers,
Christian

On Thu 24 Apr 2014 03:57:11 AM CEST, arpit gupta wrote:
> Hi,
>
> I am trying to get P(e|f) , I have both the english and french
> sentences and a phrase-based trained model, trained to translate from
> english to french. Is the functionality built in with Moses or I will
> have to calculate using tables produced in the model.
>
> Thanks
> Arpit
>
>
> _______________________________________________
> Moses-support mailing list
> [email protected]
> http://mailman.mit.edu/mailman/listinfo/moses-support

