Re: [Moses-support] How to re-run tuning using EMS

2015-06-22 Thread Barry Haddow
Just remove steps/1/TUNING_tune.1.DONE (replacing 1 with your experiment id) and then re-run. It would be nice if EMS supported multiple tuning runs without intervention, but afaik it doesn't. On 22/06/15 16:15, Lane Schwartz wrote: Given a successful run of EMS, what do I need to do to
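In practice that amounts to something like the following (a sketch only; the working-directory path is a placeholder, and -continue is assumed to re-use the finished steps of run 1):
    cd /path/to/ems/working-dir                  # hypothetical EMS working directory
    rm steps/1/TUNING_tune.1.DONE                # tell EMS the tuning step is no longer done
    ~/mosesdecoder/scripts/ems/experiment.perl -continue 1 -exec   # re-runs only the missing steps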

[Moses-support] How to re-run tuning using EMS

2015-06-22 Thread Lane Schwartz
Given a successful run of EMS, what do I need to do to configure a new run that re-uses all of the training, but re-runs tuning? Thanks, Lane ___ Moses-support mailing list Moses-support@mit.edu http://mailman.mit.edu/mailman/listinfo/moses-support

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Hokage Sama
Hi, I delete all the files (I think) generated during a training job before rerunning the entire training. Do you think this could cause variation? Here are the commands I run to delete them: rm ~/corpus/train.tok.en rm ~/corpus/train.tok.sm rm ~/corpus/train.true.en rm ~/corpus/train.true.sm rm

Re: [Moses-support] Major bug found in Moses

2015-06-22 Thread Marcin Junczys-Dowmunt
That would make for very cool student projects. Also, that video is acing it; even the voice-over is synthetic :) On 23.06.2015 00:27, Ondrej Bojar wrote: ...and I wouldn't be surprised to find Moses also behind this Java-to-C# automatic translation: https://www.youtube.com/watch?v=CHDDNnRm-g8

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Marcin Junczys-Dowmunt
I don't think so. However, when you repeat those experiments, you might try to identify where two trainings are starting to diverge by pairwise comparisons of the same files between two runs. Maybe then we can deduce something. On 23.06.2015 00:25, Hokage Sama wrote: Hi I delete all the
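One way to do that pairwise comparison (a sketch, assuming the two trainings were kept in hypothetical side-by-side directories run1/ and run2/ with the same layout):
    diff -qr run1 run2                                                 # list every file that differs between the runs
    diff run1/corpus/train.true.en run2/corpus/train.true.en | head    # then inspect the first divergence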

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Hokage Sama
Ok, will do. On 22 June 2015 at 17:47, Marcin Junczys-Dowmunt junc...@amu.edu.pl wrote: I don't think so. However, when you repeat those experiments, you might try to identify where two trainings are starting to diverge by pairwise comparisons of the same files between two runs. Maybe then we

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Marcin Junczys-Dowmunt
Hi, I think the average is OK; your variance, however, is quite high. Did you retrain the entire system or just optimize parameters a couple of times? Two useful papers on the topic: https://www.cs.cmu.edu/~jhclark/pubs/significance.pdf http://www.mt-archive.info/MTS-2011-Cettolo.pdf On
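For the reported number, averaging over the runs can be done quickly on the command line (a sketch; the three scores below are placeholders for your own runs):
    echo "16.85 16.82 17.10" | tr ' ' '\n' \
      | awk '{s+=$1; ss+=$1*$1; n++} END {m=s/n; printf "mean %.2f  stdev %.2f\n", m, sqrt(ss/n - m*m)}'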

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Marcin Junczys-Dowmunt
Hm. That's interesting. The language should not matter. 1) Do not report results without tuning. They are meaningless. There is a whole thread on that; look for "Major bug found in Moses". If you ignore the trollish aspects, it contains many good descriptions of why this is a mistake. 2) Assuming it

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Hokage Sama
Thanks Marcin. It's for a new resource-poor language, so I only trained it with what I could collect so far (i.e. only 190,630 words of parallel data). I retrained the entire system each time without any tuning. On 22 June 2015 at 01:00, Marcin Junczys-Dowmunt junc...@amu.edu.pl wrote: Hi, I

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Marcin Junczys-Dowmunt
Difficult to tell with that little data. Once you get beyond 100,000 segments (or 50,000 at least) I would say 2000 each for the dev (for tuning) and test sets, and the rest for training. With that few segments it's hard to give you any recommendations since it might just not give meaningful results. It's
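As a concrete illustration of that kind of split (a sketch, assuming hypothetical parallel files corpus.sm and corpus.en with one segment per line and matching line counts):
    paste corpus.sm corpus.en | shuf > corpus.shuf        # shuffle source and target together
    head -n 2000 corpus.shuf        | cut -f1 > dev.sm
    head -n 2000 corpus.shuf        | cut -f2 > dev.en
    sed -n '2001,4000p' corpus.shuf | cut -f1 > test.sm
    sed -n '2001,4000p' corpus.shuf | cut -f2 > test.en
    tail -n +4001 corpus.shuf       | cut -f1 > train.sm
    tail -n +4001 corpus.shuf       | cut -f2 > train.en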

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Marcin Junczys-Dowmunt
You're welcome. Take another close look at those varying BLEU scores though. That would make me worry if it happened to me for the same data and the same weights. On 22.06.2015 10:31, Hokage Sama wrote: Ok thanks. Appreciate your help. On 22 June 2015 at 03:22, Marcin Junczys-Dowmunt

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Hokage Sama
Ok, thanks. Appreciate your help. On 22 June 2015 at 03:22, Marcin Junczys-Dowmunt junc...@amu.edu.pl wrote: Difficult to tell with that little data. Once you get beyond 100,000 segments (or 50,000 at least) I would say 2000 each for the dev (for tuning) and test sets, and the rest for training. With that few

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Hokage Sama
Yes, the language model was built earlier when I first went through the manual to build a French-English baseline system, so I just reused it for my Samoan-English system. And yes, for all three runs I used the same training and testing files. How can I determine how much parallel data I should set

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Hokage Sama
Ok, I will. On 22 June 2015 at 03:35, Marcin Junczys-Dowmunt junc...@amu.edu.pl wrote: You're welcome. Take another close look at those varying BLEU scores though. That would make me worry if it happened to me for the same data and the same weights. On 22.06.2015 10:31, Hokage Sama wrote:

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Hokage Sama
Wow, that was a long read. Still reading though :) but I see that tuning is essential. I am fairly new to Moses, so could you please check whether the commands I ran were correct (minus the tuning part)? I just modified the commands on the Moses website for building a baseline system. Below are the
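For reference, the baseline recipe on the Moses website boils down to roughly the following (a sketch only; the ~/mosesdecoder and ~/corpus paths, the sm/en file names, and the reuse of the earlier LM are assumptions based on this thread, and -l en is just a fallback since there are no Samoan-specific tokenizer rules):
    cd ~/corpus
    ~/mosesdecoder/scripts/tokenizer/tokenizer.perl -l en < train.en > train.tok.en
    ~/mosesdecoder/scripts/tokenizer/tokenizer.perl -l en < train.sm > train.tok.sm
    ~/mosesdecoder/scripts/recaser/train-truecaser.perl --model truecase-model.en --corpus train.tok.en
    ~/mosesdecoder/scripts/recaser/train-truecaser.perl --model truecase-model.sm --corpus train.tok.sm
    ~/mosesdecoder/scripts/recaser/truecase.perl --model truecase-model.en < train.tok.en > train.true.en
    ~/mosesdecoder/scripts/recaser/truecase.perl --model truecase-model.sm < train.tok.sm > train.true.sm
    ~/mosesdecoder/scripts/training/clean-corpus-n.perl train.true sm en train.clean 1 80
    cd ~/working
    ~/mosesdecoder/scripts/training/train-model.perl -root-dir train \
      -corpus ~/corpus/train.clean -f sm -e en \
      -alignment grow-diag-final-and -reordering msd-bidirectional-fe \
      -lm 0:3:$HOME/lm/news-commentary-v8.fr-en.blm.en:8 \
      -external-bin-dir ~/mosesdecoder/tools
    # the missing tuning step would then look something like:
    ~/mosesdecoder/scripts/training/mert-moses.pl ~/corpus/dev.true.sm ~/corpus/dev.true.en \
      ~/mosesdecoder/bin/moses train/model/moses.ini --mertdir ~/mosesdecoder/bin/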

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Marcin Junczys-Dowmunt
I don't see any reason for non-determinism here, unless mgiza is less stable for small data than I thought. The LM lm/news-commentary-v8.fr-en.blm.en was built somewhere earlier? And to be sure: for all three runs you used exactly the same data, training and test set? On 22.06.2015 09:34,

Re: [Moses-support] BLEU Score Variance: Which score to use?

2015-06-22 Thread Hokage Sama
Ok, my scores don't vary so much when I just run tokenisation, truecasing, and cleaning once. I found some differences beginning from the truecased files. Here are my results now: BLEU = 16.85, 48.7/21.0/11.7/6.7 (BP=1.000, ratio=1.089, hyp_len=3929, ref_len=3609) BLEU = 16.82, 48.6/21.1/11.6/6.7