Actually, the approximation I expect is:

p(e|f) = p(f|e)

Why would you expect this to give poor results if the TM is well trained? 
Surely the results of my filtering experiments prove otherwise.

James

________________________________________
From: moses-support-boun...@mit.edu <moses-support-boun...@mit.edu> on behalf 
of Rico Sennrich <rico.sennr...@gmx.ch>
Sent: Wednesday, June 17, 2015 5:32 PM
To: moses-support@mit.edu
Subject: Re: [Moses-support] Major bug found in Moses

Read, James C <jcread@...> writes:

> I have been unable to find a logical explanation for this behaviour other
than to conclude that there must be some kind of bug in Moses which causes a
TM only run of Moses to perform poorly in finding the most likely
translations according to the TM when
>  there are less likely phrase pairs included in the race.

I may have overlooked something, but you seem to have removed the language
model from your config and used default weights. Your default model will
thus (roughly) implement the following model:

p(e|f) = p(e|f)*p(f|e)

which is obviously wrong, and will give you poor results. This is not a bug
in the code, but a poor choice of models and weights. Standard steps in SMT
(like tuning the model weights on a development set, and including a
language model) will give you the desired results.
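To make the weight issue concrete, here is a minimal sketch (not actual Moses code; the feature values are made up) of how a log-linear model with default equal weights ends up multiplying both phrase-table directions together, whereas zeroing the direct feature's weight leaves only p(f|e):

```python
import math

# Hypothetical phrase-table features for one phrase pair (f, e):
# the inverse probability p(f|e) and the direct probability p(e|f).
features = {"p(f|e)": 0.4, "p(e|f)": 0.5}

def score(weights):
    """Log-linear model: product of feature values raised to their weights."""
    return math.prod(value ** weights[name] for name, value in features.items())

# Default (equal) weights multiply both directions: p(e|f) * p(f|e)
default_score = score({"p(f|e)": 1.0, "p(e|f)": 1.0})

# Weight of 0 on the direct feature leaves only p(f|e)
inverse_only_score = score({"p(f|e)": 1.0, "p(e|f)": 0.0})
```

With equal weights the decoder ranks hypotheses by the product of both probabilities, which is the combination Rico describes above; tuning on a development set is what would set these weights sensibly.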

_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support

