musa ghurab wrote:

> I trained a Chinese-Arabic system, but many of the alignments
> are wrong.
> The same goes for the lexical model, where many words are
> wrongly aligned.
> Here is an example of the lexical model (lex.e2f):

The point of Moses is not to get good alignments, but to get good  
translation output.  The target language model will help the decoder  
to pick good translations, even if the translation probabilities that  
come out of the alignment do not appear to be ideal.  A great deal of  
research effort has been wasted (in my opinion) on getting better  
alignments, without actually achieving better translation.
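To make this concrete: the decoder scores each candidate translation e
of a source sentence f with a log-linear combination of feature
functions, with the lambda weights tuned on a development set (e.g.
with MERT).  Schematically:

    \hat{e} = \arg\max_e \left[ \lambda_{TM} \log p(f \mid e)
              + \lambda_{LM} \log p_{LM}(e) + \cdots \right]

So a strong LM term can outvote noisy translation probabilities.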

Have you run the resulting models on a test set?  What was the score?   
How big is your language model?  More LM data is probably the easiest  
way to make up for what might appear to be poor alignments.
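As a toy illustration (plain Python, not Moses internals; every number
below is invented), here is that reranking effect in miniature: the LM
score overrides a noisy translation-model score.

    import math

    # Invented candidates: (hypothesis, p_TM, p_LM).  None of these
    # numbers come from a real model.
    candidates = [
        ("hypothesis favored by the noisy TM", 0.30, 0.0001),
        ("fluent, correct hypothesis",         0.20, 0.0100),
    ]

    lambda_tm, lambda_lm = 1.0, 1.0  # hypothetical tuned weights

    def score(p_tm, p_lm):
        # log-linear combination of the two feature functions
        return lambda_tm * math.log(p_tm) + lambda_lm * math.log(p_lm)

    # The LM term dominates: the fluent hypothesis wins despite its
    # lower translation-model probability.
    best = max(candidates, key=lambda c: score(c[1], c[2]))
    print(best[0])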

- John D. Burger
   MITRE

