Hello all,
I know that there is a provision for multiple decoding paths which can use
multiple phrase tables.
For example if I have 2 phrase tables for en-fr:
0 T 0
1 T 1
This specifies 2 decoding paths.
What about when I have multiple reordering tables?
How can I specify something like:
0 R 0
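For reference, the decoding-path lines above go in the [mapping] section of moses.ini, with one feature line per phrase table; a minimal sketch of the two-phrase-table case (feature names, paths, and feature counts here are illustrative):

```ini
# moses.ini fragment: two decoding paths over two phrase tables
[mapping]
0 T 0
1 T 1

[feature]
PhraseDictionaryMemory name=TranslationModel0 num-features=4 path=/path/to/phrase-table.0 input-factor=0 output-factor=0
PhraseDictionaryMemory name=TranslationModel1 num-features=4 path=/path/to/phrase-table.1 input-factor=0 output-factor=0
```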
Hi Prashant
We had to do this kind of dynamic weight update for online MIRA. The
code is still there, although it might have rotted; start by looking at the
weight update methods in StaticData.
cheers - Barry
On 13/11/14 17:05, Prashant Mathur wrote:
But in a CAT scenario we do it like this:
Apropos MIRA, what's the current best practice tuner for sparse
features? What are you guys using now for say WMT-grade systems?
On 14.11.2014 at 10:39, Barry Haddow wrote:
Hi Prashant
We had to do this kind of dynamic weight update for online MIRA. The
code is still there, although
Ken - should we add encoding on open to all Python scripts, rather than set
the PYTHONIOENCODING env variable? That's basically what happens with the
Perl scripts.
What python/Linux version are you using? I don't see it on my version
(Python 2.7.3, Ubuntu 12.04)
Qin - Thanks. I've added you as
Hi Marcin
Our default option would be kbmira (kbest batch mira). It seems to be
the most stable,
cheers - Barry
On 14/11/14 09:43, Marcin Junczys-Dowmunt wrote:
Apropos MIRA, what's the current best practice tuner for sparse
features? What are you guys using now for say WMT-grade systems?
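For concreteness, kbmira is usually selected via the tuning wrapper mert-moses.pl; a typical invocation looks something like the following (all paths are placeholders):

```
# Tune with k-best batch MIRA (kbmira) instead of MERT; paths are illustrative.
scripts/training/mert-moses.pl dev.input dev.reference \
    bin/moses model/moses.ini \
    --batch-mira --return-best-dev \
    --batch-mira-args '-J 300' \
    --decoder-flags '-threads 8'
```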
Thanks. For some reason I usually have quite weak results with kbmira.
What happened to that interesting Online MIRA idea? Died due to lack of
maintenance?
On 14.11.2014 at 10:54, Barry Haddow wrote:
Hi Marcin
Our default option would be kbmira (kbest batch mira). It seems to be
the
Let's say there was a bit of disillusionment about the advantages of online
vs batch mira. Online mira was slow in comparison, but that's also because
the implementation was still in a kind of development state and not
optimised
On Fri, Nov 14, 2014 at 9:59 AM, Marcin Junczys-Dowmunt
Speed aside, quality did not improve significantly?
On 14.11.2014 at 11:11, Eva Hasler wrote:
Let's say there was a bit of disillusionment about the advantages of
online vs batch mira. Online mira was slow in comparison, but that's
also because the implementation was still in a kind of
In comparison to MERT? Not really. We compared English-French and
German-English at IWSLT 2012, and the baseline scores were a bit higher for
En-Fr, a bit lower for De-En.
But of course the point is that you can use more features, so you have to
define useful feature sets that are sparse but still
Hi Marcin
I think if you look at the situations where sparse features are
successful, you often find they are tuning with multiple references. This
paper lends support to the idea that multiple references are important:
http://www.statmt.org/wmt14/pdf/W14-3360.pdf.
cheers - Barry
On 14/11/14
Hi,
Eva: And in a sparse-feature scenario compared to PRO or kbmira?
Barry: Thanks for the pointer. I understand the main problem is
evidence-sparsity for sparse features. I am currently trying to counter
that by using huge devsets (up to 50,000 sentences, divided into pieces
of 5,000, then
For what it's worth the server is running Python 3.2.2. Rico seems to
know what he's doing with Python much more than I do.
Kenneth
On 11/14/14 04:55, Hieu Hoang wrote:
Ken - should we add encoding on open to all python scripts, rather than
set the PYTHONIOENCODING env variable? That's
Ah. I've rolled back Ken's change 'cos I need it to work with Python 2.7.
I've set the env variable in train-model.perl just before the call to
merge-alignment.py. That should patch Ken's problem for now.
https://github.com/moses-smt/mosesdecoder/commit/acd3ac964a7df646e15e3c4210853e7b70bebcbf
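As a sketch of the "encoding on open" alternative (the function names below are illustrative, not taken from merge-alignment.py): io.open accepts an encoding argument on both Python 2.7 and 3.x, so the scripts could pin UTF-8 explicitly instead of relying on PYTHONIOENCODING:

```python
# -*- coding: utf-8 -*-
# io.open works the same on Python 2.7 and 3.x, so opening files with an
# explicit encoding removes the dependence on the PYTHONIOENCODING env
# variable or the locale. Function names here are illustrative.
import io

def read_alignment_lines(path):
    # Decode as UTF-8 regardless of the environment's default encoding.
    with io.open(path, "r", encoding="utf-8") as f:
        return [line.rstrip(u"\n") for line in f]

def write_alignment_lines(path, lines):
    # Encode as UTF-8 on write, symmetrically.
    with io.open(path, "w", encoding="utf-8") as f:
        for line in lines:
            f.write(line + u"\n")
```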
this email may explain it for you
https://www.mail-archive.com/moses-support%40mit.edu/msg00901.html
On 13/11/14 10:46, Maria Marpaung wrote:
Hi please help me,
I have been able to run Moses and get a BLEU score: BLEU = 74.25,
86.2/77.1/70.7/64.8 (BP=1.000, ratio=1.000, hyp_len=59134,
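As an aside on how that summary line fits together: the headline BLEU is the brevity penalty times the geometric mean of the four n-gram precisions, which can be checked directly (the small discrepancy comes from the precisions being rounded to one decimal in the printout):

```python
import math

# n-gram precisions (percent) and brevity penalty from the reported score line
precisions = [86.2, 77.1, 70.7, 64.8]
bp = 1.000  # ratio = 1.000, so no brevity penalty applies

# BLEU = BP * geometric mean of the precisions
#      = BP * exp(mean of log precisions)
bleu = 100 * bp * math.exp(
    sum(math.log(p / 100.0) for p in precisions) / len(precisions)
)
print(round(bleu, 2))  # close to the reported 74.25
```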
Hi,
it is not possible to make selective use of reordering tables. If there are
two reordering tables, then each translation option is scored with both of
them.
Be aware that if there is no entry in a reordering table for a phrase
pair, then there is no reordering cost, which is probably not what you
Hello Hieu,
I am a final-year student studying computer science at King's College London. I
am working on my final year project, a piece of translation software. I was
initially going to translate using a database dictionary and query the database to
get the
Thank you for your guidance Sir.
Just a question, though: will the selective choice of reordering tables be
incorporated in the Moses decoder in the future?
Regards.
On Sat, Nov 15, 2014 at 4:21 AM, Philipp Koehn pko...@inf.ed.ac.uk wrote:
Hi,
it is not possible to make selective use of
Hello,
I would be interested to try this out.
After I go through the code I think I will try to incorporate this.
Regards.
On Sat, Nov 15, 2014 at 11:09 AM, Philipp Koehn pko...@inf.ed.ac.uk wrote:
Hi,
there is no plan for this - if you think this would be a good feature,
you can look at