[Moses-support] Multiple reordering models in moses

2014-11-14 Thread Raj Dabre
Hello all,

I know that there is a provision for multiple decoding paths which can use
multiple phrase tables.
For example, if I have 2 phrase tables for en-fr:
0 T 0
1 T 1
This specifies 2 decoding paths.
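
For concreteness, this is roughly how that setup looks in my moses.ini (a
minimal sketch in the v2-style configuration format; the paths and feature
names below are placeholders):

[mapping]
0 T 0
1 T 1

[feature]
PhraseDictionaryMemory name=TranslationModel0 num-features=4 path=/path/to/pt0 input-factor=0 output-factor=0
PhraseDictionaryMemory name=TranslationModel1 num-features=4 path=/path/to/pt1 input-factor=0 output-factor=0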

What about when I have multiple reordering tables?
How can I specify something like:
0 R 0
1 R 1

Does my question make sense?

Or do I just resort to interpolation and merge both reordering tables?

Any help will be appreciated.

Regards.

-- 
Raj Dabre.
Research Student,
Graduate School of Informatics,
Kyoto University.
CSE MTech, IITB., 2011-2014
___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


Re: [Moses-support] using sparse features

2014-11-14 Thread Barry Haddow
Hi Prashant

We had to do these kinds of dynamic weight updates for online MIRA. The
code is still there, although it might have rotted; start by looking at the
weight update methods in StaticData.

cheers - Barry

On 13/11/14 17:05, Prashant Mathur wrote:
> But in a CAT scenario we do it like this:
>
> translate: sentence 1
> tune: sentence 1, post-edit 1
> translate: sentence 2
> tune: sentence 2, post-edit 2
> ...
>
> In this case, I don't have any features generated or tuned before I
> start translating the first sentence.
>
> The old version is complicated; I am coding on the latest version now.
>
> --Prashant
>
>
> On Thu, Nov 13, 2014 at 5:26 PM, Philipp Koehn wrote:
>
>     Hi,
>
>     Typically you want to learn these feature weights when tuning. The
>     current setup supports and produces a sparse feature file.
>
>     -phi
>
>     On Nov 13, 2014 11:18 AM, "Prashant Mathur" wrote:
>
>         what if I don't know the feature names beforehand?
>         In that case, can I set the weights directly during decoding?
>
>         On Thu, Nov 13, 2014 at 4:59 PM, Barry Haddow wrote:
>
>             Hi Prashant
>
>             You add something like this to your moses.ini:
>
>             [weight-file]
>             /path/to/sparse/weights/file
>
>             The sparse weights file has the form:
>
>             name1 weight1
>             name2 weight2
>             name3 weight3
>             ...
>
>             At least that's how it works in Moses v2.
>
>             cheers - Barry
>
>             On 13/11/14 15:42, Prashant Mathur wrote:
>
>                 Thanks a lot Barry for your answers.
>
>                 I have another question.
>                 When I print these sparse features at the end of
>                 decoding, all sparse features are assigned a weight of
>                 0 because all of them were initialized during decoding.
>                 How can I set these weights for sparse features before
>                 they are evaluated?
>
>                 Thanks Hieu for the link.
>                 I am going to update the code as soon as I can, but
>                 it will take some time; I will get back to you when I
>                 do that.
>
>                 --Prashant
>
>                 On Thu, Nov 13, 2014 at 2:34 PM, Hieu Hoang wrote:
>
>                     Re-iterating what Barry said, you should use the
>                     github moses if you want to create your own
>                     feature functions, especially with sparse
>                     features. The reasons:
>                       1. Adding new feature functions is a pain in
>                          v0.91. It's trivial now. You can watch my
>                          talk to find out why:
>                          http://lectures.ms.mff.cuni.cz/video/recordshow/index/44/184
>                       2. It's confusing exactly when the feature
>                          functions are computed. It's clear now
>                          (hopefully!)
>                       3. I think you had to set special flags
>                          somewhere to use sparse features. Now, all
>                          feature functions can use sparse features as
>                          well as dense features.
>                       4. I don't remember the 0.91 code very well, so
>                          I can't help you if you get stuck.
>
>                     On 13 November 2014 11:06, Barry Haddow wrote:
>
>                         Hi Prashant
>
>                         I tried to answer your questions inline:
>
>                         On 12/11/14 20:27, Prashant Mathur wrote:
>                         > Hi All,
>                         >
>                         > I have a question about implementing a sparse
>                         > feature function. I went through the details
>                         > on its implementation; still some things are
>                         > not clear.
>                         > FYI, I am using an old version of moses which
>                         > dates back to Release 0.91 I guess. So, I am
>                         > sorry if my questions don't relate to the
>                         > latest implementation.
>
>                         This is a bad idea. The FF interface has
>                         changed a lot since 0.91.
>
>                         >
>                         > 1. I was looking at the Targe

Re: [Moses-support] using sparse features

2014-11-14 Thread Marcin Junczys-Dowmunt
Apropos MIRA, what's the current best-practice tuner for sparse
features? What are you guys using now for, say, WMT-grade systems?

On 14.11.2014 at 10:39, Barry Haddow wrote:
> Hi Prashant
>
> We had to do these kinds of dynamic weight updates for online MIRA. The
> code is still there, although it might have rotted; start by looking at the
> weight update methods in StaticData.
>
> cheers - Barry
Re: [Moses-support] Encoding in MGIZA

2014-11-14 Thread Hieu Hoang
Ken - should we add encoding on open to all python scripts, rather than set
the PYTHONIOENCODING env variable? That's basically what happens with the
perl scripts.

What python/Linux version are you using? I don't see it on my version
(Python 2.7.3, Ubuntu 12.04)

Qin - Thanks. I've added you as an admin for moses on github. We may change
this if it doesn't suit you. mgiza is a sister project of moses:
   https://github.com/moses-smt/mgiza
So everyone who has commit access to moses also has access to mgiza, which
is quite a lot!

We monitor all commits to mgiza on the same mailing list as moses in case
people mess around, e.g.:

http://lists.inf.ed.ac.uk/pipermail/moses-commits/2014-November/001826.html

On 14 November 2014 09:42, Gao Qin wrote:

> Good idea. I am not yet an admin of the new repo; Hieu will add me and I can
> make changes then.
>
> --Q
>
> On Thu, Nov 13, 2014 at 8:54 AM, Kenneth Heafield wrote:
>
>> Hi,
>>
>> MGIZA has some Python programs that process raw text:
>> https://github.com/moses-smt/mgiza/tree/master/mgizapp/scripts .
>>
>> Since those scripts were released, Python messed up file encoding and
>> made the default ASCII. Should we just change every open call to have
>> encoding='utf-8'?
>>
>> Kenneth
>>
>
>


-- 
Hieu Hoang
Research Associate
University of Edinburgh
http://www.hoang.co.uk/hieu


Re: [Moses-support] using sparse features

2014-11-14 Thread Barry Haddow
Hi Marcin

Our default option would be kbmira (k-best batch MIRA). It seems to be
the most stable.
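
For reference, a typical invocation sketch (the dev file names and paths are
placeholders; --batch-mira switches the mert-moses.pl wrapper from MERT to
kbmira):

  perl $MOSES/scripts/training/mert-moses.pl \
      dev.src dev.ref $MOSES/bin/moses model/moses.ini \
      --batch-mira --return-best-dev --working-dir mert-work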

cheers - Barry

On 14/11/14 09:43, Marcin Junczys-Dowmunt wrote:
> Apropos MIRA, what's the current best-practice tuner for sparse
> features? What are you guys using now for, say, WMT-grade systems?

Re: [Moses-support] using sparse features

2014-11-14 Thread Marcin Junczys-Dowmunt
Thanks. For some reason I usually get quite weak results with kbmira.
What happened to that interesting online MIRA idea? Did it die due to
lack of maintenance?

On 14.11.2014 at 10:54, Barry Haddow wrote:
> Hi Marcin
>
> Our default option would be kbmira (k-best batch MIRA). It seems to be
> the most stable.
>
> cheers - Barry

Re: [Moses-support] using sparse features

2014-11-14 Thread Eva Hasler
Let's say there was a bit of disillusionment about the advantages of online
vs. batch MIRA. Online MIRA was slow in comparison, but that's also because
the implementation was still in a kind of development state and not
optimised.


On Fri, Nov 14, 2014 at 9:59 AM, Marcin Junczys-Dowmunt wrote:

> Thanks. For some reason I usually get quite weak results with kbmira.
> What happened to that interesting online MIRA idea? Did it die due to
> lack of maintenance?

Re: [Moses-support] using sparse features

2014-11-14 Thread Marcin Junczys-Dowmunt

Speed aside, quality did not improve significantly?

On 14.11.2014 at 11:11, Eva Hasler wrote:
Let's say there was a bit of disillusionment about the advantages of
online vs. batch MIRA. Online MIRA was slow in comparison, but that's
also because the implementation was still in a kind of development
state and not optimised.



Re: [Moses-support] using sparse features

2014-11-14 Thread Eva Hasler
In comparison to MERT? Not really; we compared English-French and
German-English at IWSLT 2012, and the baseline scores were a bit higher for
En-Fr and a bit lower for De-En.
But of course the point is that you can use more features, so you have to
define useful feature sets that are sparse but still able to generalise.

On Fri, Nov 14, 2014 at 10:16 AM, Marcin Junczys-Dowmunt wrote:

>  Speed aside, quality did not improve significantly?

Re: [Moses-support] using sparse features

2014-11-14 Thread Barry Haddow
Hi Marcin

I think if you look at the situations where sparse features are
successful, you often find they are tuning with multiple references. This
paper lends support to the idea that multiple references are important:
http://www.statmt.org/wmt14/pdf/W14-3360.pdf

cheers - Barry

On 14/11/14 10:24, Eva Hasler wrote:
> In comparison to MERT? Not really; we compared English-French and
> German-English at IWSLT 2012, and the baseline scores were a bit higher
> for En-Fr and a bit lower for De-En.
> But of course the point is that you can use more features, so you have
> to define useful feature sets that are sparse but still able to
> generalise.

Re: [Moses-support] using sparse features

2014-11-14 Thread Marcin Junczys-Dowmunt
Hi,

Eva: And in a sparse-feature scenario, compared to PRO or kbmira?

Barry: Thanks for the pointer. I understand the main problem is
evidence sparsity for sparse features. I am currently trying to counter
that by using huge devsets (up to 50,000 sentences, divided into pieces
of 5,000, then averaging weights; cross-validation, basically), which
seems to help, but I am always suspicious that the optimization method
is not doing as well as it could. So I was hoping you might have
something new :) I remember Colin Cherry talking about lattice MIRA; we
don't have this in Moses, do we?
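
For what it's worth, a minimal sketch of the averaging step I mean, assuming
each tuning run wrote a Moses-style sparse weights file of "name weight"
lines (file names come from the command line; everything here is
illustrative):

import sys
from collections import defaultdict

def average_weight_files(paths):
    # Sum each feature's weight over all files; a feature absent from a
    # file implicitly contributes 0. Divide by the number of files.
    sums = defaultdict(float)
    for path in paths:
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) != 2:
                    continue  # skip blank or malformed lines
                name, weight = fields
                sums[name] += float(weight)
    return dict((name, total / len(paths)) for name, total in sums.items())

for name, weight in sorted(average_weight_files(sys.argv[1:]).items()):
    print('%s %f' % (name, weight))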

On 2014-11-14 11:27, Barry Haddow wrote:

> Hi Marcin
>
> I think if you look at the situations where sparse features are
> successful, you often find they are tuning with multiple references. This
> paper lends support to the idea that multiple references are important:
> http://www.statmt.org/wmt14/pdf/W14-3360.pdf
>
> cheers - Barry


Re: [Moses-support] using sparse features

2014-11-14 Thread Barry Haddow
Hi Marcin

One practical problem that online MIRA faced is that it is not a
drop-in replacement for MERT in the way that PRO and kbmira are, so it
requires people to change their pipeline a bit. This means that it has a
higher bar for acceptance (in the sense of offering consistent
improvements) than the other methods.

On lattice MIRA: yes, it is implemented. In my tests so far I did not
find it to be clearly different from kbmira, but I have not tested it
extensively with sparse features. It's quite a bit slower (I can't
remember exactly how much), but probably we can at least optimise the
number of iterations, and hopefully optimise the code. I won't have a
chance to look at it again for a while, so if anyone else wants to pick
it up ...

cheers - Barry

On 14/11/14 12:41, Marcin Junczys-Dowmunt wrote:
>
> [...] I remember Colin Cherry talking about lattice MIRA; we don't have
> this in Moses, do we?


-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.



Re: [Moses-support] Encoding in MGIZA

2014-11-14 Thread Rico Sennrich
Hieu Hoang writes:

> Ken - should we add encoding on open to all python scripts, rather than
> set the PYTHONIOENCODING env variable? That's basically what happens with
> the perl scripts.
>
> What python/Linux version are you using? I don't see it on my version
> (Python 2.7.3, Ubuntu 12.04)

Hi all,

It's kinda tricky to have consistent encoding between Python 2.X and Python
3. The patch to merge_alignment.py will fail under 2.X. I suggest using
io.open instead, which works with all versions from 2.6 up. And if any
string processing is done, I suggest using 'from __future__ import
unicode_literals' to ensure that all string literals are interpreted as
unicode, and making sure that all input/output is UTF-8 (including
stdin/stdout/stderr). I usually do this with the following code block:

import sys
import codecs

# On Python 2, wrap the standard streams so all console I/O is UTF-8;
# Python 3 handles this via PYTHONIOENCODING or the locale instead.
if sys.version_info < (3, 0, 0):
    sys.stdin = codecs.getreader('UTF-8')(sys.stdin)
    sys.stdout = codecs.getwriter('UTF-8')(sys.stdout)
    sys.stderr = codecs.getwriter('UTF-8')(sys.stderr)
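
For file access, a minimal sketch of the io.open variant (the file name is
just a placeholder):

import io

# io.open exists from Python 2.6 on and is the built-in open in Python 3,
# so the same code runs under both, independently of the locale settings.
with io.open('corpus.txt', encoding='utf-8') as f:
    n_tokens = sum(len(line.split()) for line in f)
print(n_tokens)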

best,
Rico



[Moses-support] Moses Build Error: Failed gcc.link

2014-11-14 Thread Rajen Chatterjee
Hi Everyone,

When I build moses with the following command, it works:
./bjam --with-boost=/home/chatterjee/Public/SMT/boost_1_55_0 -j4


but when I try to build with SRILM using the following command, it shows
the error "failed gcc.link" (please find the log file attached):
./bjam --with-boost=/home/chatterjee/Public/SMT/boost_1_55_0
--with-srilm=/home/chatterjee/Public/SMT/srilm-1.7.1 -j4

Did anyone face a similar problem, and is there a solution to it?


PS: SRILM is installed successfully and all test cases produced identical
results, so I guess there is no problem with the SRILM installation.
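
In case it helps diagnose this, the exact link command that fails can be
seen by re-running the build with command echoing (assuming Moses's bundled
bjam accepts the standard Boost.Build -d2 flag):

./bjam --with-boost=/home/chatterjee/Public/SMT/boost_1_55_0 \
    --with-srilm=/home/chatterjee/Public/SMT/srilm-1.7.1 -j4 -d2 2>&1 | tee build.out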

-- 
-Regards,
 Rajen Chatterjee.




Re: [Moses-support] Encoding in MGIZA

2014-11-14 Thread Kenneth Heafield
For what it's worth, the server is running Python 3.2.2. Rico seems to
know what he's doing with Python much more than I do.

Kenneth

On 11/14/14 04:55, Hieu Hoang wrote:
> Ken - should we add encoding on open to all python scripts, rather than
> set the PYTHONIOENCODING env variable? That's basically what happens
> with the perl scripts.
> 
> What python/Linux version are you using? I don't see it on my version
> (Python 2.7.3, Ubuntu 12.04)


Re: [Moses-support] Encoding in MGIZA

2014-11-14 Thread Hieu Hoang
ah. I've rolled back Ken's change 'cos I need it to work with Python 2.7.

I've set the env variable in train-model.perl just before the call to
merge_alignment.py. That should patch Ken's problem for now:

https://github.com/moses-smt/mosesdecoder/commit/acd3ac964a7df646e15e3c4210853e7b70bebcbf

But the better way is adding Rico's code to all python scripts.
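
Roughly, the effect of that patch is equivalent to the following shell
invocation (the part-file arguments are just placeholders):

PYTHONIOENCODING=utf-8 python merge_alignment.py part1.A3.final part2.A3.final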


On 14 November 2014 13:20, Rico Sennrich wrote:

> [...] I suggest using io.open instead, which works with all versions
> from 2.6 up.



-- 
Hieu Hoang
Research Associate
University of Edinburgh
http://www.hoang.co.uk/hieu


Re: [Moses-support] BLEU score Help

2014-11-14 Thread Hieu Hoang

This email may explain it for you:
https://www.mail-archive.com/moses-support%40mit.edu/msg00901.html
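
In brief: 74.25 is the overall BLEU score (as a percentage), the four
numbers 86.2/77.1/70.7/64.8 are the 1-gram to 4-gram precisions, BP is the
brevity penalty, and ratio, hyp_len and ref_len compare the total length of
your output against the reference. As a sanity check, the standard BLEU
formula reproduces your score from those precisions:

  BLEU = BP * exp((ln 0.862 + ln 0.771 + ln 0.707 + ln 0.648) / 4)
       = 1.000 * exp(-0.297)
       = 0.743

which matches the reported 74.25 up to rounding of the printed precisions.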

On 13/11/14 10:46, Maria Marpaung wrote:

Hi, please help me,

I have been able to run Moses and get a BLEU score: BLEU = 74.25,
86.2/77.1/70.7/64.8 (BP=1.000, ratio=1.000, hyp_len=59134, ref_len=59144)

but I don't understand these values. Can you explain the meaning of
each of them?

I have to explain them in my upcoming thesis presentation.

Please, I need your help.


Thank you,

Maria




Re: [Moses-support] Multiple reordering models in moses

2014-11-14 Thread Philipp Koehn
Hi,

it is not possible to make selective use of reordering tables. If there are
two reordering tables, then each translation option is scored with both of
them.

Be aware that if there is no entry in a reordering table for a phrase
pair, then there is no reordering cost, which is probably not what you
want. To prevent this, you can specify default scores by adding the
parameter "default-scores" to the reordering table specification, as
in the following example:

LexicalReordering name=LexicalReordering1 num-features=8
type=hier-mslr-bidirectional-fe-allff input-factor=0 output-factor=0
path=... default-scores=0.5,0.3,0.1,0.1,0.5,0.3,0.1,0.1

-phi

On Fri, Nov 14, 2014 at 4:11 AM, Raj Dabre wrote:
> What about when I have multiple reordering tables?
> How can I specify something like:
> 0 R 0
> 1 R 1
>
> Or do I just resort to interpolation and merge both reordering tables?


[Moses-support] Moses Installation Problems

2014-11-14 Thread elijah hezekiah
Hello Hieu,

I am a final-year student studying computer science at King's College
London, and I am working on my final-year project, a translation program. I
was initially going to translate using a database dictionary, querying the
database to get the corresponding target words, but then I found out about
machine translation through my curiosity to understand how Google Translate
works.

I am trying to use your Moses decoder, but I have encountered a lot of
problems right from trying to install it. I tried the command ./bjam -j8
but got an error, and when I checked the build.log file I found that the
Moses compile could not access Boost. I decided to download Boost from
SourceForge, but that also produced errors, so I ran brew install boost,
which downloaded boost 1.56.0, but I can't find the location of the new
Boost on my Mac. I am going to look for other ways to download Boost.
Kindly update the tutorial on the Moses website to include a working
version of Boost.

I will also need your assistance in understanding the concepts of machine
translation, i.e. everything from using the decoder to aligning with GIZA++
and creating a language model.

I look forward to your reply.

Yours sincerely,
Elijah



Re: [Moses-support] Multiple reordering models in moses

2014-11-14 Thread Raj Dabre
Thank you for your guidance, Sir.

Just a question, though: will the selective choice of reordering tables be
incorporated into the moses decoder in the future?

Regards.

On Sat, Nov 15, 2014 at 4:21 AM, Philipp Koehn wrote:

> it is not possible to make selective use of reordering tables. If there
> are two reordering tables, then each translation option is scored with
> both of them. [...]
>



-- 
Raj Dabre.
Research Student,
Graduate School of Informatics,
Kyoto University.
CSE MTech, IITB., 2011-2014


Re: [Moses-support] Multiple reordering models in moses

2014-11-14 Thread Philipp Koehn
Hi,

there is no plan for this - if you think it would be a good feature,
you can look at the code to see how to implement it.

-phi

On Fri, Nov 14, 2014 at 9:06 PM, Raj Dabre wrote:
> Thank you for your guidance, Sir.
>
> Just a question, though: will the selective choice of reordering tables be
> incorporated into the moses decoder in the future?
>
> Regards.


Re: [Moses-support] Multiple reordering models in moses

2014-11-14 Thread Raj Dabre
Hello,

I would be interested in trying this out.
After I go through the code, I think I will try to incorporate it.

Regards.

On Sat, Nov 15, 2014 at 11:09 AM, Philipp Koehn wrote:

> Hi,
>
> there is no plan for this - if you think it would be a good feature,
> you can look at the code to see how to implement it.
>
> -phi



-- 
Raj Dabre.
Research Student,
Graduate School of Informatics,
Kyoto University.
CSE MTech, IITB., 2011-2014