Re: [Moses-support] different tune set, different tuned parameters!

2010-05-02 Thread Hieu Hoang
I think in theory the parameters don't have to be scaled, but in practice 
they have to be, otherwise the beam settings will screw up decoding. MERT 
normalises the weights to unit L1 norm, e.g. these are my weights for a 
tuned (syntax) model:

weight     L1      L2
 0.11      0.11    0.01
 0.12      0.12    0.02
-0.03      0.03    0.00
 0.03      0.03    0.00
-0.71      0.71    0.50
           1.00    0.53
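
For illustration, here is a minimal sketch (plain Python, not Moses code) of that L1 normalisation, using the weight column above; the absolute values sum to 1.00 and the squares to 0.53, matching the table:

    # Minimal sketch (not Moses code): L1-normalise a weight vector so that
    # the absolute values sum to 1, as in the tuned weights above.
    weights = [0.11, 0.12, -0.03, 0.03, -0.71]

    l1 = sum(abs(w) for w in weights)       # 1.00 for the table above
    normalised = [w / l1 for w in weights]  # unchanged here, since l1 == 1.0

    l2 = sum(w * w for w in weights)        # ~0.53, the sum of the L2 column
    print(l1, l2, normalised)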



On 02/05/2010 10:11, Miles Osborne wrote:
there is a large amount of randomness involved with parameter tuning. 
each time you run it (using the same language resources) you might 
get different parameters.


also, the parameters are not scaled.  this means that one run might 
give you these values:


10 20 30

and the next run might give you these ones:

0.1 0.2 0.3

Miles

On 2 May 2010 09:34, Somayeh Bakhshaei wrote:



Hi All,

A problem:
Isn't it true that parameter tuning should capture the structure of
the language, so I should get the same set of tuned parameters
with different tuning sets?
So why do I get different values for the parameters when I change
the tuning set?


Another awful result:

I changed my test set, and the BLEU score dropped from 19 to 3!
How can this happen when there is no overlap between either of the
test sets and the training set?
--
Best Regards,
S.Bakhshaei



___
Moses-support mailing list
Moses-support@mit.edu 
http://mailman.mit.edu/mailman/listinfo/moses-support




--
The University of Edinburgh is a charitable body, registered in 
Scotland, with registration number SC005336.



___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
   


Re: [Moses-support] different tune set, different tuned parameters!

2010-05-02 Thread Miles Osborne
there is a large amount of randomness involved with parameter tuning.  each
time you run it (using the same language resources) you might get different
parameters.

also, the parameters are not scaled.  this means that one run might give you
these values:

10 20 30

and the next run might give you these ones:

0.1 0.2 0.3
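
As an aside, a minimal sketch (not Moses code, with made-up feature scores) of why weights on very different scales can describe essentially the same model: multiplying all weights by the same positive constant does not change which hypothesis scores highest.

    # Minimal sketch (hypothetical feature scores): the best hypothesis under
    # a linear model is unchanged when every weight is multiplied by the same
    # positive constant.
    hypotheses = {
        "hyp_a": [1.0, -2.0, 0.5],   # feature scores per hypothesis (made up)
        "hyp_b": [0.5, -1.0, 2.0],
    }

    def best(weights):
        score = lambda feats: sum(w * f for w, f in zip(weights, feats))
        return max(hypotheses, key=lambda h: score(hypotheses[h]))

    print(best([10, 20, 30]))     # same winner ...
    print(best([0.1, 0.2, 0.3]))  # ... as with the rescaled weights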

Miles

On 2 May 2010 09:34, Somayeh Bakhshaei  wrote:

>
> Hi All,
>
> A problem:
> Isn't it true that parameter tuning should capture the structure of the
> language, so I should get the same set of tuned parameters with different
> tuning sets?
> So why do I get different values for the parameters when I change the tuning set?
>
>
> Another awful result:
>
> I changed my test set, and the BLEU score dropped from 19 to 3!
> How can this happen when there is no overlap between either of the test
> sets and the training set?
> --
> Best Regards,
> S.Bakhshaei
>
> ___
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>


-- 
The University of Edinburgh is a charitable body, registered in Scotland,
with registration number SC005336.
___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


[Moses-support] different tune set, different tuned parameters!

2010-05-02 Thread Somayeh Bakhshaei

Hi All,

A problem: 
Isn't it true that parameter tuning should capture the structure of the language, 
so I should get the same set of tuned parameters with different tuning sets?
So why do I get different values for the parameters when I change the tuning set?


Another awful result:

I changed my test set, and the BLEU score dropped from 19 to 3!
How can this happen when there is no overlap between either of the test sets 
and the training set?
--

Best Regards,

S.Bakhshaei


  ___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support