Hello there,
Quick Qn:
this is my first time tuning the decoder.
I use 2,500 parallel sentences for tuning; run1.mert.log shows 0.200397,
and, after taking about 7 hours, the 5th run's log (run5.mert.log) shows 0.224502.
- How many times should Moses MERT run to get better weights? Should I wait
further?
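If it helps to track whether the runs are still improving, a few lines of Python can pull the final objective score out of each run's log. This is only a sketch: the "=> <score>" line format is an assumption that varies between Moses versions, so check your own mert.log first.

```python
import re

def best_score(log_text):
    """Return the last objective score found in a MERT run log.

    Assumes (hypothetically) that the log reports candidate weight
    vectors on lines ending in '=> <score>'; the exact format
    differs between Moses versions, so adjust the pattern.
    """
    scores = [float(m) for m in re.findall(r"=>\s*([0-9.]+)", log_text)]
    return scores[-1] if scores else None

# Illustrative log fragments, not real Moses output:
run1 = "Best point: 0.1 0.2 0.05 => 0.200397"
run5 = "Best point: 0.2 0.1 0.07 => 0.224502"
print(round(best_score(run5) - best_score(run1), 6))  # gain so far: 0.024105
```

A common rule of thumb is to let MERT keep going until the score stops improving noticeably between consecutive runs; mert-moses.pl stops on its own when the weights converge, often somewhere between 5 and 15 runs.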
Hello Moses Team,
Quick Qns:
- Are there other mechanisms in phrase-based SMT that can perform better
than the distortion probability for reordering the target language?
- Can Moses work with tagged and morphologically analyzed data? And is it
more advantageous?
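On the first question, it may help to see how little the baseline distance-based distortion model actually knows: it only penalizes how far the decoder jumps between source phrases. A minimal sketch of that cost:

```python
def distortion_distance(prev_end, cur_start):
    """Distance-based distortion cost from phrase-based decoding:
    |start(current phrase) - end(previous phrase) - 1|.
    Monotone order (cur_start == prev_end + 1) costs nothing."""
    return abs(cur_start - prev_end - 1)

# Word positions are 1-based source indices:
print(distortion_distance(prev_end=3, cur_start=4))  # 0: monotone
print(distortion_distance(prev_end=3, cur_start=7))  # 3: jump forward
print(distortion_distance(prev_end=5, cur_start=2))  # 4: jump backward
```

Because this cost ignores the words themselves, Moses also offers lexicalized reordering models (e.g. the msd-bidirectional-fe configuration), which condition reordering on the actual phrase pairs and usually beat the plain distortion penalty on language pairs with systematic reordering.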
Thank you.
Yared.
Hello Moses Team;
I have been working on building a translation system for my thesis,
and it still has some reordering problems.
I want to put it on the web to translate given sentences, like the Moses
demo (translate an input sentence to the target), but there are not many
resources on how to do this.
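One lightweight approach is to wrap the decoder behind a small web handler that pipes each input sentence through the moses binary. The sketch below shows only the plumbing: the decoder command is a parameter (the moses path and moses.ini in the docstring are placeholders for your own), and the demo runs it with `cat` as a stand-in so it is self-contained.

```python
import subprocess

def translate(sentence, cmd):
    """Pipe one tokenized, lowercased sentence through a decoder
    command and return the first output line.

    `cmd` is the decoder invocation, e.g. (placeholder paths)
    ["moses-cmd/src/moses", "-f", "work/model/moses.ini"].
    """
    out = subprocess.run(cmd, input=sentence + "\n",
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

# Demonstrate the plumbing with `cat`, which echoes its input:
print(translate("c' est une petite maison .", ["cat"]))
```

Spawning the decoder per request reloads the models each time; the Moses distribution also ships an XML-RPC server (mosesserver, under contrib/server) that keeps the model in memory between requests, which is the usual choice for a demo site.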
Hello Moses Team,
Has anyone used a one-sided English parser like Enju
and applied the head-finalization rule to restructure the English order into
SOV structure?
Please guide me through.
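For intuition, the head-finalization rule itself is easy to sketch once you have a dependency analysis: move every head word after all of its dependents, which turns English SVO into SOV. A toy version (the dependency input here is hand-made, not Enju output):

```python
def head_finalize(words, heads):
    """Reorder so every head follows all of its dependents.
    `heads[i]` is the index of word i's head (-1 marks the root).
    A toy stand-in for the head-finalization rule; a real system
    derives the head indices from a parser such as Enju."""
    children = {i: [] for i in range(len(words))}
    root = 0
    for i, h in enumerate(heads):
        if h == -1:
            root = i
        else:
            children[h].append(i)

    def emit(i):
        out = []
        for c in children[i]:
            out.extend(emit(c))
        out.append(words[i])
        return out

    return emit(root)

# "john ate apples": both "john" and "apples" depend on "ate"
print(" ".join(head_finalize(["john", "ate", "apples"], [1, -1, 1])))
# -> "john apples ate" (SOV)
```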
thank you.
Yared.
Hello Everyone,
I posted this message before and did not get a reply;
any quick suggestions would help.
I read the page http://www.statmt.org/moses/?n=Moses.SyntaxTutorial
and tried to train the model by just adding
the following parameters to the command used to train the phrase-based model.
Hello Moses Team,
I read the page http://www.statmt.org/moses/?n=Moses.SyntaxTutorial
and tried to train the model using the following command:
nohup nice moses/moses-scripts/scripts-20120409-0748/training/train-model.perl
-scripts-root-dir moses/moses-scripts/scripts-20110409-0748/ -root-dir
work
We know that English has "subject verb object" (SVO) order,
while the language I want to translate English into (Amharic) has "subject
object verb" (SOV) structure.
- Which approach is preferred in this case: phrase-based, hierarchical, or
the syntax model?
- And what are the criteria for selecting one of these approaches?
Any suggestions?
Hello Moses Team,
As we all know, English has "subject verb object" order,
while the language I want to translate English into (Amharic) has "subject
object verb" structure.
My question is: how does GIZA++ align words from the corpus?
And is it necessary to use a supervised aligner like the Berkeley tool?
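On the GIZA++ question: it needs no supervision because it bootstraps alignments with the IBM models, starting from IBM Model 1, which is just EM over word-translation probabilities t(f|e) initialized uniformly. A miniature version on toy data (real GIZA++ continues with the HMM model and IBM Models 3-5, which add fertility and distortion):

```python
from collections import defaultdict
from itertools import product

def ibm_model1(pairs, iterations=10):
    """Miniature IBM Model 1 trainer: EM over t(f|e) on a list of
    (english_tokens, foreign_tokens) sentence pairs."""
    e_vocab = {e for es, _ in pairs for e in es}
    f_vocab = {f for _, fs in pairs for f in fs}
    # Start uniform: every foreign word equally likely for every English word.
    t = {(f, e): 1.0 / len(f_vocab) for f, e in product(f_vocab, e_vocab)}
    for _ in range(iterations):
        count = defaultdict(float)   # expected (f, e) co-occurrence counts
        total = defaultdict(float)   # expected counts per English word
        for es, fs in pairs:
            for f in fs:
                z = sum(t[(f, e)] for e in es)
                for e in es:
                    c = t[(f, e)] / z
                    count[(f, e)] += c
                    total[e] += c
        t = {fe: (count[fe] / total[fe[1]] if total[fe[1]] else t[fe])
             for fe in t}
    return t

# Two toy pairs; EM discovers that "das" goes with "the", etc.
pairs = [(["the", "house"], ["das", "haus"]),
         (["the", "book"], ["das", "buch"])]
t = ibm_model1(pairs)
print(round(t[("das", "the")], 3))  # high: "das" always co-occurs with "the"
```

Aligners like the Berkeley aligner can improve alignment quality, but they are not required: the standard Moses pipeline runs GIZA++ unsupervised in both directions and symmetrizes the result (e.g. grow-diag-final-and).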
On 4/29/12, Yared Mekuria wrote:
Hi Daniel,
I use mteval-v13a.pl from the recent generic folder of the released
scripts,
https://github.com/srush/transforest/blob/297ed0b0f473b09556912c9c1059468389ed02e4/mteval-v13a.pl
and I use the command:
mose/mteval-v13a.pl -s data/devtest/nc-test2007
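For intuition about what mteval-v13a.pl computes, here is a toy sentence-level BLEU (modified n-gram precision with a brevity penalty). It is only illustrative: the real script also handles its own tokenization, multiple references, document-level aggregation, and the NIST score.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Toy BLEU: geometric mean of modified n-gram precisions
    (n = 1..max_n) times a brevity penalty. Token lists in, score out.
    Sentences shorter than max_n tokens score 0 here."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n])
                       for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n])
                      for i in range(len(reference) - n + 1))
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(bleu("the house is small".split(),
           "the house is small".split()))  # identical sentences -> 1.0
```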
Hi Daniel,
Thank you for your reply.
I used the tokenized English corpus file "news-commentary.tok.en" as the
cased data, since the lowercased data was "news-commentary.lowercased.en",
and it works as you said.
- Am I right to use the tokenized cased data?
- Also, I can't find a NIST scoring tool.
>
> Today's Topics:
>
> 1. how to train the recaser (Yared Mekuria)
>
>
> ----------
>
> Message: 1
> Date: Mon, 23 Apr 2012 10:24:33 -0400
> From: Yared Mekuria
> Subject: [Moses-support] how to train the recaser
how to train the recaser
I use the following commands:
mkdir work/recaser
/home/admin1/mose/moses-scripts/scripts-20120409-0748/recaser/train-recaser.perl
-train-script
/home/admin1/mose/moses-scripts/scripts-20120409-0748/training/train-model.perl
-ngram-count mose/bin/irstlm/bin/i686/ngram-cou
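The intuition behind the recaser these commands train can be sketched in a few lines: learn each word's most frequent cased form from the cased side of the corpus, then restore it on lowercased output. (This frequency-table version is only the idea; the real train-recaser.perl builds a full lowercase-to-cased Moses translation model with its own language model, so it can also disambiguate by context.)

```python
from collections import Counter, defaultdict

def train_recaser(cased_sentences):
    """Map each lowercased word to its most frequent cased form."""
    forms = defaultdict(Counter)
    for sent in cased_sentences:
        for w in sent.split():
            forms[w.lower()][w] += 1
    return {lw: c.most_common(1)[0][0] for lw, c in forms.items()}

def recase(model, lowered):
    """Restore case on lowercased text; unknown words pass through."""
    return " ".join(model.get(w, w) for w in lowered.split())

model = train_recaser(["Moses is a toolkit .", "The toolkit is free ."])
print(recase(model, "moses is free ."))  # -> "Moses is free ."
```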
Hello There,
I am at the tuning stage, but what is the difference between the tuning
data and the training data?
And while I am trying to tune using the command on
http://www.statmt.org/moses_steps.html
nohup nice mose/moses-scripts/scripts-20120409-0748/training/mert-moses.pl
worked/tuning/nc-dev2007.lowercased.
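On the first question above: training and tuning data play different roles, and the tuning (dev) set must be held out from training. Training builds the phrase table, reordering table, and language model; tuning (MERT) only optimizes the dozen or so feature weights on the dev set, and if the dev sentences were also in the training data, MERT overfits to sentences the phrase table has effectively memorized. A sketch of the split:

```python
def split_corpus(pairs, dev_size):
    """Hold out the last `dev_size` sentence pairs for tuning and
    keep the rest for training; the two sets must be disjoint."""
    return pairs[:-dev_size], pairs[-dev_size:]

# Hypothetical corpus of 10,000 aligned sentence pairs:
corpus = [(f"source {i}", f"target {i}") for i in range(10000)]
train, dev = split_corpus(corpus, 2000)
print(len(train), len(dev))  # 8000 2000
```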
library
ERROR:no LM created. We probably don't have it compiled
On 4/22/12, Yared Mekuria wrote:
> the step to train the model end up with the moses.ini, phrase-table,
> reordering-table.
>
> I use the command below to check trained model
>
> echo "c' est une petite
The step to train the model ends with the moses.ini, phrase-table, and
reordering-table.
I use the command below to check the trained model:
echo "c' est une petite maison ." | TMP=/tmp
mose/moses/moses-cmd/src/moses -f works/model/moses-bin.ini
Defined parameters (per moses.ini or switch):
con
Hello there, I am using sentences in Welsh and English to train. I
tokenized, lowercased, and built the language model, but when I try to train
the phrase model using the command below
nohup nice
mose/moses-scripts/scripts-20120409-0748/training/train-factored-phrase-model.perl
-scripts-root-dir mose/moses
What are the differences between the SRILM and IRSTLM toolkits?
Which toolkit is recommended or better (SRILM or IRSTLM)?
___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
Hello there, I am using a Welsh and English parallel corpus to test Moses
translation.
I tokenized and lowercased the training data, and
I am using IRSTLM to build a trigram language model with the command:
export IRSTLM=/home/guest/tools/irstlmtools/irstlm
$IRSTLM/bin/build-lm.sh -t /tmp -i work/l
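For intuition about what build-lm.sh produces: at its core, a trigram language model is n-gram counts over the corpus turned into conditional probabilities p(w3 | w1 w2). A toy maximum-likelihood version (IRSTLM additionally applies smoothing and writes the compact iARPA/ARPA formats, which is why you use the toolkit rather than raw counts):

```python
from collections import Counter

def trigram_mle(sentences):
    """Collect trigram/bigram counts and return an MLE estimator
    p(w3 | w1, w2). Unsmoothed: unseen histories get probability 0,
    which is exactly what smoothing in a real LM toolkit fixes."""
    tri, bi = Counter(), Counter()
    for s in sentences:
        toks = ["<s>", "<s>"] + s.split() + ["</s>"]
        for i in range(len(toks) - 2):
            tri[tuple(toks[i:i + 3])] += 1
            bi[tuple(toks[i:i + 2])] += 1
    return lambda w1, w2, w3: (tri[(w1, w2, w3)] / bi[(w1, w2)]
                               if bi[(w1, w2)] else 0.0)

p = trigram_mle(["the house is small", "the house is big"])
print(p("the", "house", "is"))  # 1.0: "is" always follows "the house"
```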
Hello Moses support @mit,
I had a hard time installing Moses, and at last, while trying to test the
installation with the sample model,
I get the following error:
Loading lexical distortion models...have 0 models
Start loading LanguageModel lm/europarl.srilm.gz : [0.000] seconds
util/file.cc:33 in int util: