*The Apertium free/open-source machine translation project in Google Code-In 2013*
[Apologies for multiple postings]
The *Apertium project* [1], which develops a free/open-source rule-based machine translation platform, is, for the fourth year in a row, one of the 10 free/open-source organizations taking part in Google Code-In 2013.
Hi,
I'll throw in the anecdote that gappy phrases are currently not in use
at Stanford. My predecessor told me that it took a lot longer and only
improved BLEU slightly on Chinese-English. But it's also possible that
something didn't get passed down correctly from Michel to my predecessor.
Can you please send me the output search graph and the n-best list for the one sentence where you are seeing this error?
Also, please send me the script you use to check whether a sentence is in
the graph.
I don't think it is possible for a sentence to be in the n-best list but not in the search graph.
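
(For reference, here is a minimal sketch of how such a check might look on the n-best side, assuming the usual Moses n-best format of "id ||| hypothesis ||| feature scores ||| total score"; the file name, sentence id and target string are invented.)

# check_nbest.py -- hypothetical helper: does a given hypothesis string
# appear among the n-best entries for one source sentence?
# Assumes Moses n-best lines of the form: id ||| hypothesis ||| features ||| score
import sys

def nbest_hypotheses(path, sent_id):
    """Yield the hypothesis strings for one sentence id from a Moses n-best file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            fields = [x.strip() for x in line.split("|||")]
            if int(fields[0]) == sent_id:
                yield fields[1]

if __name__ == "__main__":
    nbest_file, sent_id, target = sys.argv[1], int(sys.argv[2]), sys.argv[3]
    hyps = set(nbest_hypotheses(nbest_file, sent_id))
    status = "found" if target in hyps else "not found"
    print("%s (%d distinct hypotheses for sentence %d)" % (status, len(hyps), sent_id))

(Checking against the search graph written out by -output-search-graph would need an actual walk over the graph's edges rather than a string match, which is a different and more involved check.)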
My understanding is that they used an approach similar to the grammar extraction
to extract the gappy phrases. Would it be a massive undertaking to get Moses to
support this?
James
From: Barry Haddow [bhad...@staffmail.ed.ac.uk]
Sent: 30 October 2013 09:26
Thanks.
So if you trained now and at a later date wanted to use a different LM with the already-trained TM, would it just be a simple case of manually editing moses.ini?
If I were to edit the training script to skip the check that the LM file exists (it doesn't exist yet), it wouldn't break anything, would it?
James
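
(For what it's worth: in the newer-style moses.ini the LM is just a feature line plus a weight, so swapping models later should indeed come down to editing a path, an order and perhaps the feature type by hand. A made-up example of the lines involved, assuming a KenLM model:

[feature]
KENLM name=LM0 factor=0 order=5 path=/path/to/new-lm.binlm

[weight]
LM0= 0.5

Older Moses versions keep the same information under [lmodel-file] and [weight-l] instead, but the idea is the same.)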
You are correct that the train-model.perl script does not use the -lm parameter in any of the word alignment or phrase scoring steps. The script's step 9 builds a template moses.ini configuration file and includes the values from the -lm parameter. At the beginning, the script checks that the LM file specified with -lm exists.
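
(For concreteness, the -lm argument is normally given as factor:order:filename:type, so a training call might look something like

train-model.perl ... --lm 0:5:/path/to/lm.arpa.gz:8

with an invented path, and 8 being the type code I have seen used for KenLM; as described above, only step 9 actually consumes it.)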
Hi,
Does anybody know what the effect of the -lm parameter to the training script is? Surely the LM used has no effect on typical training tasks like word alignment and phrase scoring?
thanks,
James