You MUST use the tuned weights. You'll be in deep water if you don't:
https://www.mail-archive.com/moses-support%40mit.edu/msg12446.html
If they produce bad results, it indicates there's something wrong somewhere in your pipeline.

Hieu Hoang
http://www.hoang.co.uk/hieu

On 25 April 2016 at 14:32, Rajnath Patel <patelrajn...@gmail.com> wrote:
> Hi Jasneet,
>
> Thanks for the quick response. We are comparing the results on default
> weights (moses.ini) vs tuned weights, and with the default weights we are
> getting a higher BLEU on the test set than with the tuned weights.
>
> On Mon, Apr 25, 2016 at 3:17 PM, Jasneet Sabharwal <jasneet.sabhar...@sfu.ca> wrote:
>> Hi Rajnath,
>>
>> Against what test set are you comparing your BLEU scores? If you mean
>> that your BLEU score on the test set is lower than the BLEU on the
>> dev/tuning set, then that is fine. The BLEU score on the tuning set is
>> generally higher than the BLEU score on the test set, as the feature
>> parameters were tuned using the tuning set.
>>
>> Best,
>> Jasneet
>>
>>> On Apr 25, 2016, at 2:38 AM, Rajnath Patel <patelrajn...@gmail.com> wrote:
>>>
>>> Hi all,
>>>
>>> I am trying to tune a phrase-based model with the default tuning
>>> parameters (MERT, BLEU), but instead of an improvement I am getting a
>>> reduced BLEU on the test set. Kindly help me choose an appropriate
>>> algorithm and metric for English-French SMT.
>>>
>>> Thank you!
>>>
>>> --
>>> Regards,
>>> Raj Nath Patel
>>>
>>> _______________________________________________
>>> Moses-support mailing list
>>> Moses-support@mit.edu
>>> http://mailman.mit.edu/mailman/listinfo/moses-support
>
> --
> Regards:
> राज नाथ पटेल/Raj Nath Patel
> KBCS dept.
> CDAC Mumbai.
> http://kbcs.in/
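[Editor's note: the comparison discussed above — scoring default-weight vs tuned-weight output against a held-out test set — can be sketched with a minimal, from-scratch corpus BLEU. This is an illustrative single-reference implementation without smoothing, not Moses' own `multi-bleu.perl`; the tokenized sentences in the demo are hypothetical stand-ins for real decoder output.]

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of order n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus-level BLEU (single reference per segment, no smoothing)."""
    matches = [0] * max_n   # clipped n-gram matches, per order
    totals = [0] * max_n    # candidate n-gram counts, per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            h, r = ngrams(hyp, n), ngrams(ref, n)
            # Clip each candidate n-gram count by its count in the reference.
            matches[n - 1] += sum(min(c, r[g]) for g, c in h.items())
            totals[n - 1] += max(len(hyp) - n + 1, 0)
    if min(matches) == 0:
        return 0.0
    log_prec = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    # Brevity penalty: penalize hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return bp * math.exp(log_prec)

if __name__ == "__main__":
    # Hypothetical test set: score output from two weight settings
    # against the same references, as in the thread's comparison.
    refs = [["the", "cat", "sat", "on", "the", "mat"]]
    tuned = [["the", "cat", "sat", "on", "the", "mat"]]
    default = [["the", "cat", "sat", "on", "a", "mat"]]
    print("tuned:  ", corpus_bleu(tuned, refs))
    print("default:", corpus_bleu(default, refs))
```

If the tuned weights were produced by a healthy MERT run, the first score should come out higher on a held-out set; when it doesn't, as in the thread, the advice above is to look for a pipeline problem rather than to keep the default weights.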