Yes, here are some data:

Average source sentence length: 29 tokens
Phrase-table size, probingPT: 11G
Phrase-table size, compact phrase-table: 2.1G

Translation time Moses2 with 32 threads: 1m36.511s
Translation time Moses with 32 threads: 6m14.248s
Translation time Moses2 with 32 threads in server mode: 16m30.137s
Translation time Moses with 32 threads in server mode: 62m33.208s

RAM consumption during decoding: 4 GB for Moses2, 5 GB for Moses

So Moses2 is about 3.9 times faster in batch mode, and about 3.8 times faster in server mode.
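For reference, a quick sketch of how those speedup ratios fall out of the timings above (just the wall-clock times converted to seconds):

```python
# Wall-clock times reported above, converted to seconds.
moses2_batch = 1 * 60 + 36.511    # 1m36.511s
moses_batch = 6 * 60 + 14.248     # 6m14.248s
moses2_server = 16 * 60 + 30.137  # 16m30.137s
moses_server = 62 * 60 + 33.208   # 62m33.208s

print(f"batch speedup:  {moses_batch / moses2_batch:.1f}x")    # ~3.9x
print(f"server speedup: {moses_server / moses2_server:.1f}x")  # ~3.8x
```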

Do you know why server mode is so much slower than batch mode, for both
Moses and Moses2?

Best regards,
Vito

2016-09-28 18:52 GMT+02:00 Hieu Hoang <hieuho...@gmail.com>:

> cool. do you have any indications of speed, especially when using multiple
> threads? model sizes and average input sentence length are also relevant.
>
>
>


-- 
M. Vito MANDORINO -- Chief Scientist



Lingua Custodia - The Translation Trustee

1, Place Charles de Gaulle, 78180 Montigny-le-Bretonneux

Tel : +33 1 30 44 04 23   Mobile : +33 6 84 65 68 89

Email : vito.mandor...@linguacustodia.com

Website : www.linguacustodia.finance
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support