Which features would you like me to tune? The whole purpose of the exercise 
was to eliminate all variables except the TM, and to keep constant those that 
could not be eliminated, so that I could see which types of phrase pairs 
contribute most to increases in BLEU score in a TM-only setup.

Now you are saying I have to tune, but tuning won't work without an LM. So 
how do you expect a researcher to understand how well the TM component of the 
system is working if you insist that I must include an LM for tuning to work?

Clearly the system is broken. It is designed to work well with an LM and 
poorly without one, even though good results can clearly be obtained with a 
functional TM and well-chosen phrase pairs.

James

________________________________________
From: moses-support-boun...@mit.edu <moses-support-boun...@mit.edu> on behalf 
of Kenneth Heafield <mo...@kheafield.com>
Sent: Wednesday, June 17, 2015 7:13 PM
To: moses-support@mit.edu
Subject: Re: [Moses-support] Major bug found in Moses

I'll bite.

The moses.ini files ship with bogus feature weights.  You are required to
tune the system to discover good weights for your setup.  You did not tune.
The results of an untuned system are meaningless.

So for example if the feature weights are all zeros, then the scores are
all zero.  The system will arbitrarily pick some awful translation from
a large space of translations.
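
To make that concrete, here is a rough Python sketch (not Moses code; the
feature values and weights below are invented) of how a log-linear decoder
ranks hypotheses, and why all-zero weights make every hypothesis tie:

    # Sketch of log-linear scoring; feature values and weights are invented.
    def score(weights, features):
        # Linear model score: sum of weight * feature over all features.
        return sum(w * f for w, f in zip(weights, features))

    # Two candidate translations with made-up (log) feature values:
    # [p(t|s), p(s|t), lex(t|s), lex(s|t), word penalty]
    good  = [-0.5, -0.7, -1.0, -1.2, -5.0]
    awful = [-6.0, -0.1, -4.0, -0.3, -2.0]

    zero_weights = [0.0] * 5
    print(score(zero_weights, good), score(zero_weights, awful))
    # 0.0 0.0 -> every hypothesis ties; the decoder's pick is arbitrary

    tuned_weights = [0.3, 0.2, 0.1, 0.1, -0.1]  # hypothetical tuned values
    print(score(tuned_weights, good), score(tuned_weights, awful))
    # roughly -0.01 vs -2.05 -> the good hypothesis now wins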

The filter looks at a single feature, p(target | source).  So now you've
constrained the awful untuned model to a slightly better region of the
search space.

In other words, all you've done is a poor approximation to manually
setting the weight to 1.0 on p(target | source) and the rest to 0.
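
For illustration, a rough Python sketch of what that filtering amounts to
(the tiny phrase table and its probabilities are made up):

    # Sketch (not the actual Moses filter): keeping only the best phrase
    # pair per source phrase by p(target|source) behaves much like scoring
    # with weight 1.0 on p(target|source) and 0 on every other feature.
    phrase_table = [
        # (source, target, p(target|source)) -- invented numbers
        ("chat", "cat", 0.7),
        ("chat", "chat room", 0.2),
        ("chat", "kitty", 0.1),
    ]

    best = {}
    for src, tgt, p_t_given_s in phrase_table:
        if src not in best or p_t_given_s > best[src][1]:
            best[src] = (tgt, p_t_given_s)

    print(best)
    # {'chat': ('cat', 0.7)} -- no other feature ever gets a say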

The problem isn't that you are running without a language model (though
we generally do not care what happens without one).  The problem is that
you did not tune the feature weights.

Moreover, as Marcin is pointing out, I wouldn't necessarily expect
tuning to work without an LM.

On 06/17/15 11:56, Read, James C wrote:
> Actually, the approximation I expect is:
>
> p(e|f)=p(f|e)
>
> Why would you expect this to give poor results if the TM is well trained? 
> Surely the results of my filtering experiments prove otherwise.
>
> James
>
> ________________________________________
> From: moses-support-boun...@mit.edu <moses-support-boun...@mit.edu> on behalf 
> of Rico Sennrich <rico.sennr...@gmx.ch>
> Sent: Wednesday, June 17, 2015 5:32 PM
> To: moses-support@mit.edu
> Subject: Re: [Moses-support] Major bug found in Moses
>
> Read, James C <jcread@...> writes:
>
>> I have been unable to find a logical explanation for this behaviour other
>> than to conclude that there must be some kind of bug in Moses which causes
>> a TM-only run of Moses to perform poorly in finding the most likely
>> translations according to the TM when there are less likely phrase pairs
>> included in the race.
> I may have overlooked something, but you seem to have removed the language
> model from your config, and used default weights. Your default model will
> thus (roughly) implement the following model:
>
> p(e|f) = p(e|f)*p(f|e)
>
> which is obviously wrong, and will give you poor results. This is not a bug
> in the code, but a poor choice of models and weights. Standard steps in SMT
> (like tuning the model weights on a development set, and including a
> language model) will give you the desired results.
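
A rough numeric sketch, with made-up probabilities, of why scoring by the
product p(e|f)*p(f|e) above goes wrong:

    # Sketch: with (roughly) uniform default weights the model behaves like
    # p(e|f) * p(f|e), which is not the p(e|f) you actually want to maximise.
    candidates = {
        #      p(e|f), p(f|e) -- invented numbers for two translations of f
        "e1": (0.6, 0.01),  # the candidate that p(e|f) prefers
        "e2": (0.2, 0.90),  # rare phrase that nearly always back-translates to f
    }

    for e, (p_e_f, p_f_e) in candidates.items():
        print(e, p_e_f * p_f_e)
    # roughly e1 0.006, e2 0.18 -> the product picks e2 even though
    # p(e|f) prefers e1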

_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
