Hey guys, I've got a very large training set (on the order of billions of parallel training examples). I've been doing a literature review of existing work on SMT or NMT with very large training corpora, but haven't found much.
What's the largest amount of training data that you have trained on (or seen mentioned in a paper)? Either SMT or NMT. Thanks, Lane
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support