Dear Moses community,
I have a question regarding filtered and non-filtered translation models:
is it common practice to always filter the models prior to testing?
Is it right that filtering is meant to reduce the phrase-table and
reordering-table size so that decoding is faster? Given the a
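In case a concrete picture helps: filtering keeps only the phrase-table entries whose source side can actually occur in the test input. Below is a minimal Python sketch of that idea; Moses' own filter-model-given-input.pl script does the real job (it also filters the reordering table and rewrites moses.ini), and the standard `src ||| tgt ||| scores` line format is assumed:

```python
def filter_phrase_table(phrase_table_lines, test_sentences, max_phrase_len=7):
    """Toy sketch of test-set filtering: keep only phrase-table entries
    whose source phrase appears as an n-gram in the test input."""
    # Collect every source n-gram (up to max_phrase_len tokens) in the test set.
    needed = set()
    for sent in test_sentences:
        toks = sent.split()
        for i in range(len(toks)):
            for j in range(i + 1, min(i + max_phrase_len, len(toks)) + 1):
                needed.add(" ".join(toks[i:j]))
    # Keep only entries whose source side was seen in the test input.
    return [line for line in phrase_table_lines
            if line.split("|||")[0].strip() in needed]

table = [
    "das ist ||| this is ||| 0.8 0.9",
    "ein kleines haus ||| a small house ||| 0.6 0.7",
    "grosses auto ||| big car ||| 0.5 0.4",
]
filtered = filter_phrase_table(table, ["das ist ein kleines haus"])
# filtered keeps the first two entries; "grosses auto" never occurs.
```

The real script is invoked roughly as `perl scripts/training/filter-model-given-input.pl <filtered-dir> <moses.ini> <input-file>` (from memory; check the scripts/training directory of your Moses checkout).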
Thanks Hieu!!
Exactly what I needed =)
Regards,
Nat
On Tue, Oct 11, 2016 at 5:23 AM, Hieu Hoang wrote:
> I'm not sure what you mean. Try running
>
> ./moses ... -T [filename]
>
> or
>
> ./moses ... -t
>
> On 10/10/2016 10:54, Nat Gillin wrote:
>
> Dear Moses community,
>
> Is it possible for moses to generate some sort of phrase-boundaries
> together with the decoded output? If so, how?
I'm not sure what you mean. Try running
./moses ... -T [filename]
or
./moses ... -t
On 10/10/2016 10:54, Nat Gillin wrote:
> Dear Moses community,
> Is it possible for moses to generate some sort of phrase-boundaries
> together with the decoded output? If so, how?
> Thank you in advance for the tips =)
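For reference, the -t option makes the decoder annotate each output phrase with its source span, something like `this is |0-1| a small house |2-4|`. A small sketch that recovers phrase boundaries from such a line (the exact `words |i-j|` format is an assumption here; check it against your own decoder output):

```python
import re

def parse_segmentation(line):
    """Split a decoder output line with source-span annotations into
    (target_phrase, (start, end)) pairs."""
    pairs = []
    # Each segment is some target words followed by a |start-end| span marker.
    for m in re.finditer(r"(.+?)\s*\|(\d+)-(\d+)\|\s*", line):
        phrase = m.group(1).strip()
        pairs.append((phrase, (int(m.group(2)), int(m.group(3)))))
    return pairs

segments = parse_segmentation("this is |0-1| a small house |2-4|")
# segments == [('this is', (0, 1)), ('a small house', (2, 4))]
```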
That's what I meant when I said I do not like the evaluation part. Since
their baseline NMT model has no mechanism to deal with unknown words, it
is quite likely that the effect is mainly due to that (although I might
be totally wrong on that). Add, for instance, subword units, and the effect
might
Dear Marcin and Moses,
That looks interesting!
It's rather alarming that vanilla RNN encoder/decoder scored close to Moses
and the proposed phrasenet scored +3 =(
Regards,
Nat
___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/
Dear Moses community,
Is it possible for moses to generate some sort of phrase-boundaries
together with the decoded output? If so, how?
Thank you in advance for the tips =)
Regards,
Liling
Hi,
There is this work:
https://arxiv.org/abs/1606.01792
The model is interesting, but the evaluation part is a bit weak. For
some reason this group of authors restricts their findings to Chinese
only. There is also no other attempt to deal with unknown words, so the
impact of the phrase mode
Dear Moses community,
Does anyone know of an existing list of work that tries to combine PBMT and
NMT?
More specifically, NMT currently does word-to-word
translation/generation/prediction; has anyone tried NMT at some sort of
phrasal level? I.e. the lookup table would contain phrases instead