Hi, 

Eva: And in a sparse-feature scenario compared to PRO or kbmira? 

Barry: Thanks for the pointer. I understand the main problem is
evidence sparsity for sparse features. I am currently trying to counter
that by using huge devsets (up to 50,000 sentences, divided into pieces
of 5,000, then averaging weights; cross-validation, basically), which
seems to help, but I am always suspicious that the optimization method
is not doing as well as it could. So I was hoping you might have
something new :) I remember Colin Cherry talking about lattice MIRA; we
don't have this in Moses, do we?
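
In case it helps, this is roughly the averaging I mean (just a sketch;
parse_weights and the file layout are my assumptions here, in reality
the per-shard weights come out of each shard's tuned moses.ini):

def parse_weights(path):
    # Assumed layout: one "FeatureName= v1 v2 ..." line per feature
    # function (dense and sparse alike).
    weights = {}
    with open(path) as f:
        for line in f:
            name, _, values = line.partition("=")
            if values.strip():
                weights[name.strip()] = [float(v) for v in values.split()]
    return weights

def average_weights(paths):
    # Sum weight vectors feature by feature; a sparse feature absent
    # from some shard simply contributes zero there.
    sums = {}
    for path in paths:
        for name, values in parse_weights(path).items():
            acc = sums.setdefault(name, [0.0] * len(values))
            for i, v in enumerate(values):
                acc[i] += v
    n = float(len(paths))
    return {name: [s / n for s in acc] for name, acc in sums.items()}

# Ten 5,000-sentence shards tuned separately, then averaged:
averaged = average_weights(["shard%02d/weights.txt" % i for i in range(10)])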

On 2014-11-14 11:27, Barry Haddow wrote:

> Hi Marcin
> 
> I think if you look at the situations where sparse features are 
> successful, you often find they are tuned with multiple references. This 
> paper lends support to the idea that multiple references are important: 
> http://www.statmt.org/wmt14/pdf/W14-3360.pdf
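
That matches my intuition. A toy check of how much evidence extra
references add, using NLTK's sentence_bleu, which accepts several
references (sentences made up): n-gram matches are counted against the
union of the references, so a correct alternative wording still gets
credit.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1
hyp = "the committee adopted the proposal".split()
ref1 = "the committee approved the proposal".split()
ref2 = "the committee adopted the motion".split()

print(sentence_bleu([ref1], hyp, smoothing_function=smooth))        # low
print(sentence_bleu([ref1, ref2], hyp, smoothing_function=smooth))  # higher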
> 
> cheers - Barry
> 
> On 14/11/14 10:24, Eva Hasler wrote:
> 
>> In comparison to MERT? Not really; we compared English-French and 
>> German-English at IWSLT 2012, and the baseline scores were a bit higher 
>> for En-Fr and a bit lower for De-En. But of course the point is that you 
>> can use more features, so you have to define useful feature sets that are 
>> sparse but still able to generalise.
>> 
>> On Fri, Nov 14, 2014 at 10:16 AM, Marcin Junczys-Dowmunt 
>> <junc...@amu.edu.pl> wrote:
>> 
>>> Speed aside, quality did not improve significantly?
>>> 
>>> On 14.11.2014 at 11:11, Eva Hasler wrote:
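
Eva: just so I understand the "sparse but still able to generalise"
part, is it roughly this kind of thing, i.e. lexicalised features with
a frequency cutoff and a pooled backoff for rare pairs? (Toy sketch,
all names made up.)

from collections import Counter

def word_pair_features(aligned_pairs, pair_counts, min_count=10):
    # Lexicalised word-translation features only for pairs seen often
    # enough in training; everything else shares one pooled backoff
    # feature, so scarce evidence is not spread over one-off weights.
    feats = Counter()
    for src, tgt in aligned_pairs:
        if pair_counts.get((src, tgt), 0) >= min_count:
            feats["wt_%s~%s" % (src, tgt)] += 1
        else:
            feats["wt_<rare>"] += 1
    return feats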

 
