It could be any number of things.

All that tuning does is improve BLEU scores with respect to the given tuning
set of sentences, so this could result in:

--better BLEU scores, but worse subjective translation (doing better at
BLEU does not always guarantee better subjective translation -- see our
EACL paper on the subject)

--better BLEU scores on a tuning set that is not representative of the
actual intended test set (this is a domain problem)

--your tuning set is far too small and you are overfitting with respect to
it

--bugs

etc etc
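As a quick sanity check for the overfitting and domain points above, you can compare corpus BLEU on the tuning set against a held-out set: a tuned system whose BLEU jumps on the tuning data but drops on held-out data is overfitting or domain-mismatched. A minimal, self-contained sketch of standard corpus BLEU (with brevity penalty and simple smoothing; not Moses's own scorer, and assuming one reference per sentence):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus-level BLEU; hypotheses/references are lists of token lists,
    one reference per hypothesis."""
    clipped = [0] * max_n   # clipped n-gram matches, per order
    total = [0] * max_n     # total hypothesis n-grams, per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            h, r = ngrams(hyp, n), ngrams(ref, n)
            clipped[n - 1] += sum(min(c, r[g]) for g, c in h.items())
            total[n - 1] += max(len(hyp) - n + 1, 0)
    # Smooth zero counts so log() is defined on small test sets.
    precisions = [(c or 0.5) / t for c, t in zip(clipped, total) if t > 0]
    # Brevity penalty: penalise hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / len(precisions))
```

Run it twice -- tuned output vs. untuned output -- on both the tuning set and a held-out set; if the tuned system only wins on the tuning set, that points at causes two or three above.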

Miles

On 01/02/2008, Panos <[EMAIL PROTECTED]> wrote:
>
> Hello,
>
> I built a baseline system for testing English to Greek translations. I
> used the
> bilingual corpus for the training process and the Greek translations of
> the same
> corpus for the language model (about 145000 sentences). Eveything seems ok
> and
> the system is able to produce some nice translations, domain-specific of
> course.
> However, the tuning process seems to create an .ini that produces pretty
> bad
> results. I tried the tuning process twice, one time with input and
> reference
> files of 2000 sentences and a second with 1000 sentences. Results are much
> worse
> than the ones I get with the untuned moses.ini. What am I doing wrong?
>
> Thanks in advance.
>
> Panos
>
> _______________________________________________
> Moses-support mailing list
> [email protected]
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>


-- 
The University of Edinburgh is a charitable body, registered in Scotland,
with registration number SC005336.