Hi all,
I'm wondering how to do hypergraph decoding using the
-search-algorithm 5
feature in the Moses decoder. What format should the hypergraph be written
in? (Is it the same as what https://github.com/kpu/lazy requires?) And what
language model format does it support?
Thanks,
Angli
___
Hi, I was using lattice MBR to decode the source sentences; the model was
tuned using MERT. However, while other decoding methods
such as maximum-probability decoding and consensus decoding produce
results without a problem, MBR decoding with the -lmbr flag makes the
decoder outpu
___
Hi Moses community,
What is the -nscores parameter of
mosesdecoder/bin/processPhraseTableMin used for?
(In the baseline system at http://www.statmt.org/moses/?n=Moses.Baseline,
this parameter was set to 4.)
Thanks!
Angli
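For context on why -nscores would be 4 (my reading, not stated in the thread): a standard Moses phrase table carries four feature scores per phrase pair — inverse phrase probability, inverse lexical weighting, direct phrase probability, and direct lexical weighting. A minimal sketch of that layout, with a made-up example line:

```python
# Sketch of the standard Moses phrase-table line layout (assumption:
# four scores -- inverse phrase prob, inverse lexical weight,
# direct phrase prob, direct lexical weight), which is what
# -nscores 4 tells processPhraseTableMin to expect.

def parse_phrase_table_line(line):
    """Split a 'src ||| tgt ||| scores' line into its fields."""
    fields = [f.strip() for f in line.split("|||")]
    source, target = fields[0], fields[1]
    scores = [float(s) for s in fields[2].split()]
    return source, target, scores

# Hypothetical example entry, not from a real model.
line = "das Haus ||| the house ||| 0.8 0.5 0.7 0.6"
src, tgt, scores = parse_phrase_table_line(line)
print(len(scores))  # → 4, matching -nscores 4
```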
___
Moses-support mailing list
cube pruning and MBR together. As mentioned above, the
> "decision rule" (MBR vs. max-prob) is applied after search is finished.
>
> -phi
>
> On Fri, Mar 24, 2017 at 11:50 AM, Angli Liu
> wrote:
>
>> Thanks!
>>
>> Furthermore, does "output-search-g
mt.org/moses/?n=Advanced.Search for details.
>
> The source code is in $MOSES/moses-cmd and $MOSES/moses
>
> -phi
>
>
>
> On Thu, Mar 23, 2017 at 6:30 PM, Angli Liu
> wrote:
>
>> Hi Moses community,
>>
>> In decoding, is it possible to have Moses output
Hi Moses community,
In decoding, is it possible to have Moses output a confusion network (CN)
or a word lattice (WL), instead of the decoded text for each sentence? I'm
aware that one parameter of the decoder is "-inputtype", so the question is
what parameter of the decoder should be used to deter
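On the input side, Moses accepts word lattices in PLF (Python Lattice Format) — a nested Python tuple literal — when -inputtype is set to the lattice mode (check the Moses manual for the exact value for your version; treat the flag value as an assumption here). A tiny illustrative lattice and how it decomposes:

```python
# Sketch of PLF (Python Lattice Format), the word-lattice input format
# Moses reads; the example lattice below is made up for illustration.
import ast

# Each outer tuple is a lattice column; each inner tuple is
# (word, score, distance-to-next-column).
plf = "((('the', 1.0, 1),), (('cat', 0.6, 1), ('hat', 0.4, 1),),)"

# PLF is valid Python literal syntax, so it can be parsed directly.
lattice = ast.literal_eval(plf)
for column in lattice:
    for word, score, dist in column:
        print(word, score, dist)
```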
Hi all,
Is there a way to do lattice decoding with BLEU in Moses? I.e., given a
word lattice, find the path with the highest BLEU score? If so,
which function should I call, and in what format should I feed in the lattice?
Thanks!
Angli
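I can't vouch for a specific Moses entry point for this, but to illustrate the idea being asked about (oracle extraction: the lattice path closest to a reference), here is a brute-force sketch over a toy lattice, using clipped unigram overlap as a crude stand-in for BLEU. All names are mine; real oracle-BLEU extraction uses dynamic programming over n-gram counts, and nothing here is Moses API:

```python
# Illustrative only: brute-force search for the lattice path that best
# matches a reference, with unigram overlap standing in for BLEU.
from collections import Counter
from itertools import product

def overlap_score(hyp, ref):
    """Clipped unigram matches / hypothesis length (crude BLEU-1 proxy)."""
    if not hyp:
        return 0.0
    matches = sum((Counter(hyp) & Counter(ref)).values())
    return matches / len(hyp)

def best_path(lattice_columns, reference):
    """Enumerate all paths (one word per column) and keep the best one."""
    best, best_score = None, -1.0
    for path in product(*lattice_columns):
        score = overlap_score(list(path), reference)
        if score > best_score:
            best, best_score = list(path), score
    return best, best_score

# Toy lattice: each column lists alternative words.
columns = [["the"], ["cat", "hat"], ["sat", "sang"]]
ref = "the cat sat".split()
path, score = best_path(columns, ref)
print(path, score)  # → ['the', 'cat', 'sat'] 1.0
```

Brute-force enumeration is exponential in lattice depth, which is exactly why real implementations do this with dynamic programming over the lattice.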
___
after all, you don't care if the
> output surface form is correct but the other factors are wrong.
>
> Will the results be compatible with tuning done with a factored tuning
> corpus?
>
> yes
>
> Best regards,
>
> Sašo
>
> 2016-12-04 1:37 GMT+01:00 Hieu Hoang :
Hi, what's the major difference between the tuning process for a factored
phrase-based system (i.e., surface+pos data) and a simple baseline
phrase-based system? Do I need to organize the dev set the same way as the
training set (i.e., surface|pos)? Is there a tutorial on the Moses website
on this?
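For reference (and matching the reply quoted above), factored Moses data joins factors with "|" per token, so a dev set for tuning a factored model would normally use the same surface|pos layout as the training data. A tiny sketch of that format, with a made-up line:

```python
# Sketch: factored Moses corpora join factors with '|' per token,
# e.g. surface|pos. The dev set should use the same factor layout
# as the training data (my reading of the thread, not official docs).

def split_factors(line):
    """Return a list of (surface, pos) pairs from a factored line."""
    return [tuple(tok.split("|")) for tok in line.split()]

line = "the|DT cat|NN sat|VBD"
print(split_factors(line))  # → [('the', 'DT'), ('cat', 'NN'), ('sat', 'VBD')]
```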
Hi - I trained a phrase-based system from a low-resource language to
English, and got *13.6633* as the BLEU score. However, when I tested on the
same dev set and computed BLEU against the English corpus in the dev set, I
only got *3.69*. Then I did a manual grid search over the parameter space
in m