Philipp Koehn wrote:
> if the input is provided as a lattice, then it is not
> the case that all translations have the same number
> of unknown words.
Moreover, the penalty is certainly needed at weight-training time, correct?
There, different input sentences contain different numbers of OOVs.
Thanks a lot for helping.
Best regards,
Kaveh
On Mon, Oct 3, 2011 at 2:33 PM, Philipp Koehn wrote:
Hi,
if the input is provided as a lattice, then it is not
the case that all translations have the same number
of unknown words.
-phi
On Mon, Oct 3, 2011 at 11:28 AM, Kaveh Taghipour wrote:
Hi Christian,
Thanks. You are right. But I think there is no need for such a penalty,
since all candidates for a given source sentence contain the same number of
OOVs and so the penalty does not help at all. Do you know the reason?
Cheers,
Kaveh
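Kaveh's observation above can be checked in a couple of lines: for plain-text input, the OOV penalty is the same constant for every candidate of a given sentence, and adding a constant to every score never changes the ranking. The scores below are made-up numbers, purely for illustration:

```python
# Hypothesis scores for one source sentence (illustrative values only).
scores = [-12.3, -15.1, -11.8]

# Apply the hardcoded -100 OOV penalty once, assuming the input sentence
# contains exactly one unknown word; every candidate copies it through,
# so every candidate receives the same constant.
penalized = [s - 100.0 for s in scores]

# The best hypothesis is the same before and after the penalty.
assert scores.index(max(scores)) == penalized.index(max(penalized))
```

This is why the penalty only matters when candidates can differ in their OOV counts, e.g. with lattice input or across different sentences during tuning.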
On Sun, Oct 2, 2011 at 9:53 PM, Christian Hardme
I think you're on the right track. For some reason, Moses doesn't report the
OOV penalty feature in the n-best list: it adds a hardcoded penalty of -100 to the total score
for each input word that was copied to the output because no suitable
translation was found in the phrase table. Your test sentence probably contains such an unknown word.
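To make the arithmetic concrete, here is a minimal sketch of the combination described above. The weights, feature values, and the number of features are made up for illustration; only the -100-per-OOV constant comes from the thread:

```python
def total_score(feature_values, weights, num_oov_words):
    """Log-linear model score: dot product of the feature values from the
    n-best list with the weights from moses.ini, plus a hardcoded -100
    for each input word copied through untranslated (not reported as a
    feature in the n-best list)."""
    assert len(feature_values) == len(weights)
    dot = sum(w * h for w, h in zip(weights, feature_values))
    return dot - 100.0 * num_oov_words

# Illustrative numbers only: e.g. distortion, LM, phrase-table, word penalty.
weights = [0.120662, 0.5, 0.3, -0.2]
features = [-4.0, -52.1, -10.7, -9.0]
print(total_score(features, weights, num_oov_words=1))
```

If the dot product of the reported features and weights comes out exactly 100 (or a multiple of 100) higher than the score Moses prints, the missing OOV penalty is the likely explanation.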
Hi,
I have generated an n-best list with Moses, but I do not know how to compute
the final score. I tried a log-linear combination of the features, but the number I get does not match.
For example:
moses.ini:
# distortion (reordering) weight
[weight-d]
0.120662
# language model weight