Github user mjpost commented on the pull request:

    https://github.com/apache/incubator-joshua/pull/1#issuecomment-205633596
  
    For reference, I ran this across my timing suite, running start to finish,
single-threaded (including model loading time). For two packed models:
    
    Phrase-based es-en model (3k sentences): 495 seconds → 368 seconds (1.3x faster)
    Hiero zh-en model (1,357 sentences): 586 seconds → 259 seconds (2.2x faster)
    
    Decoding with a RAM-loaded (unpacked) model is unchanged. This merge means
that packed grammars are no longer any slower than grammars loaded entirely
into memory, and they also avoid the startup cost of that loading. This is
obviously really awesome.
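
    (As a sanity check, the speedup multipliers above are just before/after
ratios of the quoted wall-clock times. The short Python snippet below
re-derives them; it is purely illustrative and not part of Joshua.)

        # Re-derive the speedup multipliers from the quoted wall-clock times.
        timings = {
            "Phrase-based es-en (3k sentences)": (495, 368),
            "Hiero zh-en (1,357 sentences)": (586, 259),
        }
        for name, (before, after) in timings.items():
            print(f"{name}: {before}s -> {after}s ({before / after:.2f}x faster)")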

