Catalin Braescu wrote:

> I wonder what would be required for Moses to handle more than 1
> language pair, 1-way? As in, the same instance of Moses to translate
> both from Lang1 to Lang2 and from Lang2 to Lang1. Ideally to translate
> to/from more than only 1 language pair.

What would be the benefit of this, over having two separate sets of  
models?

Would you want language ID on the front, to figure out which models to  
use?

If you really need to do something like this, I would try training the  
system on a doubled data set, with the translation direction reversed  
in the second copy, and then concatenating the two sets of language  
modeling data as well.  I have no idea whether the result would "code  
switch" too often to be useful, but it might work.
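To make the doubling concrete, here is a rough sketch in shell.  The  
file names (corpus.l1, corpus.l2, train.src, etc.) are made up for  
illustration; substitute whatever your Moses training setup uses.

```shell
# Toy sentence-aligned parallel data, one sentence per line
# (stand-ins for your real corpus files).
printf 'hello\nworld\n' > corpus.l1
printf 'bonjour\nmonde\n' > corpus.l2

# Source side: Lang1 sentences followed by Lang2 sentences.
cat corpus.l1 corpus.l2 > train.src

# Target side: the matching translations, in the same order,
# so line i of train.src still aligns with line i of train.tgt.
cat corpus.l2 corpus.l1 > train.tgt

# Language model data: since the target side now mixes both
# languages, train a single LM over the concatenation.
cat corpus.l2 corpus.l1 > lm-data.txt
```

The key invariant is that the two cat commands reverse the order of  
the files relative to each other, so every sentence pair appears once  
in each direction while line alignment is preserved.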

- John D. Burger
   MITRE

_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
