(apologies for duplicate posting)
Call for Papers
The 6th International Joint Conference on Natural Language Processing
(IJCNLP 2013) October 14-18, 2013 Nagoya, Japan
Website: http://www.ijcnlp2013.org
The 6th International Joint Conference on Natural Language Processing,
organized by the
You can give a tagged corpus to the EMS, using the format:

word1|POS1 word2|POS2 word3|POS3

I think you have to set the variable

factorized-stem = [filePath]

instead of

raw-stem = [filePath]

However, when you give the EMS raw-stem, it will tokenize, escape special
characters, and
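As a side note, the factored format above is easy to produce from typical tagger output. A minimal sketch, assuming the tagger emits `word/POS` tokens (the input separator is an assumption, not something the EMS prescribes):

```python
def to_factored(line, sep="/"):
    # Convert "word/POS" tokens (assumed tagger output) into the
    # "word|POS" factored format shown above.
    factored = []
    for token in line.split():
        word, pos = token.rsplit(sep, 1)  # rsplit: the word itself may contain "/"
        factored.append(word + "|" + pos)
    return " ".join(factored)

print(to_factored("the/DT cat/NN sat/VBD"))
# the|DT cat|NN sat|VBD
```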
Hi Zai,
In the pre-made models we released with version 1.0 of Moses
http://www.statmt.org/moses/RELEASE-1.0/models/fr-en/
the phrase 'vous êtes' appears to be aligned correctly. There are 330
translations of the phrase, but the most probable translation is
vous êtes ||| you are ||| 0.219726
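For anyone inspecting such tables programmatically, here is a minimal sketch of reading one entry, assuming the plain-text `source ||| target ||| scores` layout shown above (additional fields such as alignments are ignored here):

```python
def parse_entry(line):
    # Split a plain-text Moses phrase-table line on the " ||| " separator.
    # Only the first three fields (source, target, scores) are handled.
    fields = line.rstrip("\n").split(" ||| ")
    source, target = fields[0], fields[1]
    scores = [float(s) for s in fields[2].split()]
    return source, target, scores

source, target, scores = parse_entry("vous êtes ||| you are ||| 0.219726")
# source == "vous êtes", target == "you are", scores == [0.219726]
```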
Hi list,
I have added two phrase table types to Moses, internally called
MultiModel and MultiModelCounts.
These table types construct a virtual phrase table online from a vector
of component models. MultiModel so far only supports a linear
interpolation of the probabilities in the component models.
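The linear interpolation itself is straightforward; here is a toy sketch of the idea (this is not MultiModel's actual implementation, and the phrase pairs, probabilities, and weights below are invented):

```python
def interpolate(phrase_pair, tables, weights):
    # Linear interpolation of a phrase probability across component tables.
    # A pair absent from a component table contributes probability 0.
    return sum(w * t.get(phrase_pair, 0.0) for t, w in zip(tables, weights))

# Toy component tables mapping (source, target) -> p(target | source).
t1 = {("vous êtes", "you are"): 0.2}
t2 = {("vous êtes", "you are"): 0.4}

p = interpolate(("vous êtes", "you are"), [t1, t2], [0.5, 0.5])
# 0.5 * 0.2 + 0.5 * 0.4 = 0.3
```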
Hi,
Does anyone know of any published results that investigate the effect of
the size of the tuning data set? I'm primarily interested in MERT, but
other optimization methods would also be interesting.
Best,
Sara
___
Moses-support mailing list
Hi,
There is work by Marco Turchi where they look at the evolution of BLEU with
respect to increasing the data set size used for MERT. The investigation is
primarily for the Spanish-English language pair, so the findings might not
carry over to a more challenging language pair.
The draft
Hi,
it was not the main topic of the paper, but in
Log-linear weight optimisation via Bayesian adaptation in statistical
machine translation. Germán Sanchis-Trilles, Francisco Casacuberta.
CoLing 2010. http://www.aclweb.org/anthology/C/C10/C10-2124.pdf
I published some stability results for
The JHU summer workshop final report had some experiments on this:
http://www.learningace.com/doc/3098660/be148017730f3f3a7b45d656276b482a/jhu-summer-workshop-final-report
(See Fig. 6.7 and surrounding)
In general:
1) MERT works on so few features that you don't need much dev data to learn them
Hello!
I'm going to create a plug-in for my translator that, with the help of the
user, improves translation quality by feeding back the best translations
that my system produces.
So, I would like to know: is it possible to modify the phrase table?
Thank you,
Nelson.
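For context, the plain-text phrase table is a gzipped text file of ` ||| `-separated fields, so new entries can in principle be formatted and appended. A sketch only (the entry, scores, and path below are invented, and a binarized table cannot be edited this way):

```python
import gzip

def make_entry(source, target, scores):
    # Format one plain-text phrase-table line: source ||| target ||| scores.
    return source + " ||| " + target + " ||| " + \
        " ".join(str(s) for s in scores) + "\n"

line = make_entry("obrigado", "thank you", [0.5, 0.5, 0.5, 0.5])

# Appending to a (hypothetical) table file; depending on the setup, the
# table may need to be re-filtered or re-binarized before the decoder
# picks the new entry up.
# with gzip.open("phrase-table.gz", "at", encoding="utf-8") as f:
#     f.write(line)
```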
Hi all,
I used Moses 1.0 and ran moses-chart.
The training stage finished successfully, but I hit an error when starting
the tuning.
The output is as follows:
...
Start loading text SCFG phrase table. Moses format : [0.879] seconds
Reading ./hiero-tune/filtered/phrase-table.0-0.1.1.gz
10 matches