Hi,
I have a factored phrase-based model and I get this error when decoding some
input that looks correct to me:
Exception: moses/Phrase.cpp:214 in void Moses::Phrase::CreateFromString(Moses::FactorDirection, const std::vector<Moses::FactorType>&, const StringPiece&, Moses::Word**) threw util::Exception becau
Hi folks,
For those who work on the Moses code and need to generate
(pseudo-)random numbers sometimes, there's a new module that takes away
some of the drudgery: util/random.cc and util/random.hh.
It has simple templated wrappers for the built-in random()/srandom() and
rand()/srand(), to let you ge
There's currently no wrapper program to convert the Stanford output to the
Moses factored representation.
You will have to write your own; you can look at similar wrapper scripts in
scripts/training/wrappers.
Please share your program with us if it works.
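A minimal sketch of such a wrapper, assuming the Stanford tool can emit one token per line as tab-separated word/lemma/POS columns with a blank line between sentences (the filenames tagged.txt and factored.txt, and the tags shown, are placeholders for illustration):

```shell
# Convert tab-separated "word<TAB>lemma<TAB>POS" lines (blank line =
# sentence boundary) into Moses factored format: word|lemma|POS tokens
# separated by spaces, one sentence per line.
awk -F'\t' '
NF == 0 { print out; out = ""; next }                 # end of sentence
{ out = (out == "" ? "" : out " ") $1 "|" $2 "|" $3 } # append factored token
END { if (out != "") print out }                      # flush last sentence
' tagged.txt > factored.txt
```

The `|` separator here matches the factor delimiter Moses expects in factored input.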
On 27/04/2015 02:01, Marwa Refaie wrote:
I st
I'm still stuck here!
I need to use word, lemma & POS. Here is an example of the Stanford output for the
lemmatizer & POS tagger:
Artificial
intelligence
Artificial
intelligence
-LRB-
Does anyone know how to convert this to the Moses factored format?
Artificial | artificial | NN intelli
The size of the trie looks unusually small unless you did quantization as
well. I usually see non-quantized KenLM tries being about 50-70% the
filesize of probing hash tables. The trie binarization time sounds normal
relative to the hash time.
Best,
-Jon
On Sun, Apr 26, 2015 at 9:20 PM, lilin
Hello
I had the same problem once, and I was told to recompile Moses with
the "-a" option; it worked just fine. Just note that, from what I've
seen, the "-a" option recompiles Moses from scratch, so if you compiled
Moses with some other libraries, like for example the Boost library, you
shou
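The full rebuild described above might look like this (a sketch; the Boost path is a placeholder for whatever external libraries your original build used):

```shell
# -a tells bjam to rebuild every target from scratch, so any flags for
# external libraries (e.g. a custom Boost) must be repeated on this line.
cd mosesdecoder
./bjam -a -j8 --with-boost=/path/to/boost
```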
Size reduction and binarizing time are normal. At WIPO we use only
quantized models, with no quality loss so far. Your speed issues were
caused by insufficient RAM, then. Interesting, since I was able to use
98GB models on my 128GB server with several Moses instances running in
parallel, but may
Dear Moses devs/users,
@Ken, I'm working with 128GB RAM; the default binarized LM works, but it's
kind of slow when tuning.
I've tried the trie and it's wonderful! Indeed, it brought down the
size of the LM:
Text: 16GB
ARPA: 38GB
Binary (no trie): 71GB
Trie binary: 17GB
*Does the small trie b
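For anyone who wants to try the same reduction: quantization is applied at binarization time with KenLM's build_binary. A sketch (the model filenames are placeholders; 8 bits is a commonly used setting for both quantization flags):

```shell
# Build a quantized KenLM trie:
#   -q  bits for quantizing probabilities
#   -b  bits for quantizing backoff weights
bin/build_binary -q 8 -b 8 trie model.arpa model.quant.trie
```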
Dear Sir/Madam,
I was trying to compile IRSTLM with Moses; I am doing my project on the IRSTLM
language model with Moses. The syntax I have used is
./bjam --with-irstlm=/home/dawit/irstlm-5.80.03 -j5
but it fails to build.
I have properly followed the instructions on
http://www.statmt.org/moses/?n=D
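In case it helps: a common cause of this failure is pointing --with-irstlm at a source tree that was never compiled, since the flag expects a directory containing IRSTLM's built lib/ and include/. A sketch of the usual sequence, assuming the autotools build that IRSTLM 5.80.x ships (paths here mirror the command above):

```shell
# Build and install IRSTLM into its own tree first, then point bjam at it.
cd /home/dawit/irstlm-5.80.03
./regenerate-makefiles.sh          # autotools helper shipped with IRSTLM 5.80.x
./configure --prefix=$PWD
make && make install
cd /path/to/mosesdecoder
./bjam --with-irstlm=/home/dawit/irstlm-5.80.03 -j5
```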