You will get another big speedup from integrating the lexicalised reordering model into the phrase-table.

Hieu Hoang
http://www.hoang.co.uk/hieu
On 29 September 2016 at 15:03, Vito Mandorino <vito.mandor...@linguacustodia.com> wrote:

> Yes, the model includes a lexicalised reordering model, but it is not
> integrated into the probingPT. The size of the LM is 1.8G.
>
> 2016-09-29 15:59 GMT+02:00 Hieu Hoang <hieuho...@gmail.com>:
>
>> ps. How big is your LM?
>>
>> On 29 September 2016 at 14:58, Hieu Hoang <hieuho...@gmail.com> wrote:
>>
>>> Great, thanks. Do you use the lexicalised reordering model, and is it
>>> integrated into the phrase-table in Moses2?
>>>
>>> There is latency in communicating with the server. As Moses2 is much
>>> faster now, the client can't feed it fast enough. You should see that
>>> the moses2 command line will max out the CPU, whereas the server won't.
>>> I'm thinking of extending the server to process multiple sentences at
>>> a time to speed it up.
>>>
>>> On 29 September 2016 at 14:49, Vito Mandorino <vito.mandor...@linguacustodia.com> wrote:
>>>
>>>> Yes, here are some data:
>>>>
>>>> Average source sentence length: 29 tokens
>>>> Phrase-table size, probingPT: 11G
>>>> Phrase-table size, compact phrase-table: 2.1G
>>>>
>>>> Translation time, Moses2 with 32 threads: 1m36.511s
>>>> Translation time, Moses with 32 threads: 6m14.248s
>>>> Translation time, Moses2 with 32 threads in server mode: 16m30.137s
>>>> Translation time, Moses with 32 threads in server mode: 62m33.208s
>>>>
>>>> RAM consumption during decoding: 4G for Moses2, 5G for Moses.
>>>>
>>>> So Moses2 is 4 times faster, and 3 times faster in server mode.
>>>>
>>>> Do you know why in server mode the speed is so much slower with
>>>> respect to batch mode, for both Moses and Moses2?
>>>>
>>>> Best regards,
>>>> Vito
>>>>
>>>> 2016-09-28 18:52 GMT+02:00 Hieu Hoang <hieuho...@gmail.com>:
>>>>
>>>>> Cool. Do you have any indications of speed, especially when using
>>>>> multiple threads? Model sizes and average input sentence length are
>>>>> also relevant.
>
> --
> M. Vito MANDORINO -- Chief Scientist
> Lingua Custodia -- The Translation Trustee
> 1, Place Charles de Gaulle, 78180 Montigny-le-Bretonneux
> Tel: +33 1 30 44 04 23  Mobile: +33 6 84 65 68 89
> Email: vito.mandor...@linguacustodia.com
> Website: www.linguacustodia.finance
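As a quick sanity check (not part of the original thread), the quoted speedups can be recomputed from the raw timings above; the exact ratios work out to roughly 3.9x in batch mode and 3.8x in server mode:

```python
def seconds(minutes, secs):
    """Convert an `XmY.Zs` timing into seconds."""
    return 60 * minutes + secs

# Timings as reported in the thread
moses2_batch  = seconds(1, 36.511)
moses_batch   = seconds(6, 14.248)
moses2_server = seconds(16, 30.137)
moses_server  = seconds(62, 33.208)

print(f"batch speedup:  {moses_batch / moses2_batch:.1f}x")    # ~3.9x
print(f"server speedup: {moses_server / moses2_server:.1f}x")  # ~3.8x
```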
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support