Hi,
I just want to confirm what the LGPL license for Moses means:
If some of us modify its code, or develop a new MT engine based on Moses,
can we use this new MT engine for commercial use (distribute it as a
commercial product)? Or at least use it internally in our company?
Thanks so much,
Wen
Hi Barry,
Thanks for your information.
I am still not sure what the 'tokenizing' and the 'detokenizing' are, I
mean, what they do and why those steps are needed.
Is 'tokenizing' the same thing as segmenting?
BTW, I am not familiar with Java.
Is there any such script written
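For anyone else wondering: tokenizing separates punctuation from words so the training tools see clean whitespace-delimited tokens, and detokenizing reattaches punctuation for human-readable output. A minimal illustration of the idea in Python (this is just a toy sketch, not the Moses tokenizer.perl script, whose rules are far more careful):

```python
import re

def tokenize(text):
    # Split words from punctuation: "Hello, world!" -> ["Hello", ",", "world", "!"]
    return re.findall(r"\w+|[^\w\s]", text, re.UNICODE)

def detokenize(tokens):
    # Reattach punctuation to the preceding token (simplified rule).
    out = ""
    for tok in tokens:
        if re.match(r"[^\w\s]$", tok) and out:
            out += tok
        else:
            out += (" " if out else "") + tok
    return out
```

For Chinese, "segmenting" is the harder analogue of this step, since there is no whitespace between words to begin with.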
Hi Francois,
Thanks for your information.
Could you let me know about, or send me, your scripts for Chinese tokenizing
and detokenizing, for my information?
It seems the default tokenizing script doesn't support the Chinese language
code.
Thanks so much,
Wenlong
2010/8/31
Hi,
Can anybody share their experience of how to train a Moses engine
from English to Simplified Chinese?
I know there are some differences, and some of the training/tuning scripts
do not support this language pair by default.
I intend to train one Moses engine for this language pair, c
Hi all,
From the training manual, we can see that there are nine steps in total for a
new engine's training:
Steps: (--first-step to --last-step)
(1) prepare corpus
(2) run GIZA
(3) align words
(4) learn lexical translation
(5) extract phrases
(6) score phrases
(7) learn reordering model
(8) learn g
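As the manual's flag names suggest, the training script lets you resume at any of these steps rather than rerunning everything. A small sketch of building such a command (the script path and root directory here are placeholders for your own install, not the actual layout):

```python
import shlex

def training_command(first_step, last_step,
                     script="tools/moses-scripts/training/train-model.perl",
                     root_dir="work"):
    # Build a train-model.perl invocation that runs only steps
    # first_step..last_step, e.g. to redo phrase extraction and scoring
    # without repeating corpus preparation and GIZA alignment.
    cmd = [
        script,
        "--first-step", str(first_step),
        "--last-step", str(last_step),
        "--root-dir", root_dir,
    ]
    return " ".join(shlex.quote(c) for c in cmd)

# e.g. redo only steps 5-6 (extract and score phrases)
print(training_command(5, 6))
```

This is handy when, say, only the reordering model settings change: earlier steps' outputs are reused from the root directory.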
Hi All,
We made a web service based on the information on this page:
http://www.statmt.org/moses/?n=Moses.WebTranslation
The web service works, but I can see some flaws in it:
It only calls the Moses engine itself to do the translation in the
daemon.pl script, but during our evaluation steps, the
into translation without a language model.
>
> I'd suggest simply training a correct language model and repeating the
> MERT and evaluation steps, unless you are interested in your setup as
> a researcher :)
>
> Best,
> Mark
>
> On Tue, Aug 10, 2010 at 5:04 PM, Wen
Hi,
I have trained a Moses engine for the language pair English to
Spanish, but for the training script, during the step below:
"nohup nice
tools/moses-scripts/scripts-20100203-0505/training/train-factored-phrase-model.perl
-scripts-root-dir tools/moses-scripts/scripts-20100203-0505 -root-dir
Work-SP
confidence intervals with bootstrap resampling, which
> will give you some indication how reliable a, say, 0.5, 1.0, or 2.0 point
> difference in BLEU is.
>
> Regarding test set sizes, it should be at least 1000 sentence pairs,
> 12,000 is certainly very large.
>
> -phi
>
> On S
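The bootstrap resampling Philipp mentions can be sketched as follows. This is a simplified illustration using per-sentence scores (real BLEU is a corpus-level statistic, so proper implementations resample sentences and recompute corpus BLEU on each resample, not just average sentence scores):

```python
import random

def bootstrap_ci(sent_scores, n_resamples=1000, alpha=0.05, seed=0):
    # Resample the sentence-level scores with replacement, collect the
    # mean of each resample, and report the percentile interval.
    rng = random.Random(seed)
    n = len(sent_scores)
    means = sorted(
        sum(rng.choices(sent_scores, k=n)) / n
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Toy per-sentence scores for two systems; if their intervals overlap
# heavily, a small BLEU difference may not be meaningful.
scores = [0.21, 0.35, 0.28, 0.31, 0.19, 0.40, 0.25, 0.33]
low, high = bootstrap_ci(scores)
```

With a real test set, the width of this interval is what tells you whether a 0.5-point BLEU difference is noise or signal.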
, Wenlong Yang :
> Hi all,
>
> can any of you help to provide some materials about how to select the
> sample
> data for BLEU/NIST evaluation?
> I mean, how many lines of data should I choose for the evaluation? And how
> can I choose the data so that it is more representative
Hi Guys,
I have a question here:
I want to train a Moses engine for domain A. Now I have some training data
for domain A (for example, 4 words) and more training data (for example,
20 words) which does not specifically belong to domain A, but is relevant.
How can I use the extra training da
Hi all,
can any of you help to provide some materials about how to select the sample
data for BLEU/NIST evaluation?
I mean, how many lines of data should I choose for the evaluation? And how
can I choose the data so that it is more representative for our
domain/use?
I have tried to generate B
> http://www.statmt.org/moses/?n=Moses.AdvancedFeatures#ntoc4
>
> best regards
> Barry
>
> On Tuesday 03 August 2010 07:48, Wenlong Yang wrote:
> > Hi,
> >
> > I have set up Moses, but when I used it to do the machine
> translation,
> > I can not find whet
Hi,
I have set up Moses, but when I used it to do the machine translation, I
cannot find whether there is somewhere I can set a dictionary (terminology
database) for it.
Does anyone know how to set this in Moses? Or do we need to modify
Moses's code?
Does anybody have the same experie
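The advanced-features page Barry links below covers Moses's XML markup input, which is the usual way to force terminology: you wrap a source span in a tag with a `translation` attribute and run the decoder with the `-xml-input` flag. A rough sketch of preprocessing input with a glossary (the glossary and the `<term>` tag name here are made-up examples; Moses accepts an arbitrary tag name, and the exact flag values are documented on that page):

```python
import html

def mark_terms(sentence, glossary):
    # Wrap each glossary term in XML markup so the decoder can use the
    # forced translation for that span.
    for term, translation in glossary.items():
        marked = '<term translation="%s">%s</term>' % (
            html.escape(translation, quote=True), term)
        sentence = sentence.replace(term, marked)
    return sentence

glossary = {"motherboard": "placa base"}
print(mark_terms("replace the motherboard first", glossary))
```

Note that naive `str.replace` can match inside longer words; a production version would match on token boundaries.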
Hello,
We have set up the Moses engine on my desktop, and have run the pilot
training from EN to FR by following the steps listed at
http://www.statmt.org/moses_steps.html, and got a BLEU score of 25.20.
Then we used more training data (maybe 343431 segments) for a specific
technical domain to tra