My decoding options are the defaults. I tried tuning cube-pruning-limit and
the stack size, but it is still not fast enough.
On Wed, Apr 6, 2016 at 8:02 PM, Ayah ElMaghraby
wrote:
> Hello
>
> I am trying to create an SMT system using GHKM extraction, but it is very slow
> during translation: it translated 53 sentences
When running mosesserver with --output-search-graph, I don't get a search
graph file created. Is this the expected behavior, or is something else
going on?
Thanks,
Lane
Hello
I am trying to create an SMT system using GHKM extraction, but it is very slow
during translation: it translated 53 sentences in 24 hrs, on Ubuntu 64-bit 14.04
with 4 GB RAM.
I changed the rule table to be on disk using the createOnDiskPt executable.
Is there anything I can do to speed it up a little bit?
I
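For context, decoding speed in Moses is usually tuned through the search
algorithm, cube pruning pop limit, stack size, and thread count. A sketch of
the relevant decoder flags; the values here are illustrative placeholders, not
recommendations:

```shell
# Cube pruning (-search-algorithm 1) is typically much faster than the
# default stack decoding; smaller pop limits and stacks trade some
# translation quality for speed.
moses -f moses.ini \
      -search-algorithm 1 \
      -cube-pruning-pop-limit 400 \
      -s 500 \
      -threads 4 \
      < input.txt > output.txt
```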
Hello Vincent,
In my experience, your option number 3 is the best one.
Cheers,
Christophe
From: moses-support-boun...@mit.edu [mailto:moses-support-boun...@mit.edu] On
Behalf Of Vincent Nguyen
Sent: Wednesday, April 6, 2016 17:11
To: Philipp Koehn
Cc: moses-support
Sorry Philipp, I did not ask my question properly.
I was not talking about the phrase table.
I was talking about the language model options that we have. When I said
"corpus" I was referring to the data for the LM itself.
And in terms of "performance" I was more talking about the impact on
qua
Hi,
the number of phrase tables should not matter much, but the number of
language models has a significant impact on speed. There are no general
hard numbers on this, since it depends on a lot of other settings, but
adding a second language model will slow the decoder down by around 30-50%.
The size of
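In moses.ini terms, each additional language model is an extra feature line
with its own weight, which is why each one adds to the per-hypothesis scoring
cost. A sketch of such a config fragment; the paths, order, and weights below
are placeholders:

```ini
# moses.ini fragment: two KenLM language models scored separately,
# each with its own tuned weight.
[feature]
KENLM name=LM0 factor=0 path=/path/to/lm0.binary order=5
KENLM name=LM1 factor=0 path=/path/to/lm1.binary order=5

[weight]
LM0= 0.5
LM1= 0.5
```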
Hi,
What are the differences (in terms of performance) between the three
following setups?
1. 2 corpora, 2 LMs, 2 weights calculated at tuning time
2. 2 corpora merged into one, 1 LM
3. 2 corpora, 2 LMs interpolated into 1 LM with tuning
Will the results be different in the end?
thanks.
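The third setup (interpolating the two LMs into one) is commonly done with
SRILM's ngram tool. A sketch, assuming SRILM is installed and the mixture
weight has already been estimated on a dev set; filenames and the lambda value
are placeholders:

```shell
# Interpolate two ARPA-format LMs into one, with weight lambda on lm1.
# SRILM's compute-best-mix script can estimate lambda from dev-set
# perplexities before this step.
ngram -order 5 \
      -lm lm1.arpa -lambda 0.6 \
      -mix-lm lm2.arpa \
      -write-lm interpolated.arpa
```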
Probing-format models can't be filtered because they only retain hashes
of n-grams.
Trie format models can be filtered and dumped, but only with the very
hacky and undocumented dump_trie program in the bounded-noquant branch.
Hasn't been a priority to make it release quality; volunteers?
Kenneth
Dear Matthias and Kenneth,
Thank you for the note on the --static options!
Regards,
Liling
Dear Moses devs/users,
The filter tool in KenLM can filter an LM against a dev set (
https://kheafield.com/code/kenlm/filter/), but it only accepts raw|arpa files.
Is there another tool that filters binarized LMs? Given a binarized LM, is
there a way to "debinarize" the LM?
Thanks in advance
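As noted elsewhere in the thread, binarized models generally can't be filtered
or dumped directly, so the usual workaround is to keep the original ARPA file,
filter that, and re-binarize. A sketch, assuming KenLM's filter and
build_binary binaries are on PATH; the exact filter argument form is an
assumption here, so check the filter page linked above:

```shell
# Filter the original ARPA model to the vocabulary of the dev set
# (read from stdin), then binarize the filtered model again.
filter union full.arpa filtered.arpa < dev.txt
build_binary trie filtered.arpa filtered.binary
```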