Hi,
I am wondering about this piece of code in the get_weights_from_mert
function of mert-moses.pl:

     my $sum = 0.0;
     while (<$fh>) {
       if (/^F(\d+) ([\-\.\de]+)/) {     # regular features
         $WEIGHT[$1] = $2;
         $sum += abs($2);
       } elsif (/^M(\d+_\d+) ([\-\.\de]+)/) {     # mix weights
         push @$mix_weights,$2;
       } elsif (/^(.+_.+) ([\-\.\de]+)/) { # sparse features
         $$sparse_weights{$1} = $2;
       }
     }
     close $fh;
     die "It seems feature values are invalid or unable to read $outfile." if $sum < 1e-09;

     $devbleu = "unknown";
     foreach (@WEIGHT) { $_ /= $sum; }
     foreach (keys %{$sparse_weights}) { $$sparse_weights{$_} /= $sum; }

I understand that the division by "$sum" is meant as a normalization,
but I notice that the sparse feature values are never added to $sum,
yet they are normalized by the sum of the dense features' absolute
values. Does that actually make sense? Also, kbmira often produces
several sets of weights during one run, of which only the last set is
kept (earlier F-entries in @WEIGHT are simply overwritten), while $sum
accumulates over all sets. Looks kinda fishy :)
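To make this concrete, here is a tiny Python sketch of both effects,
with made-up numbers and a made-up sparse feature name purely for
illustration (the real script is Perl, and real mert output would look
different):

```python
# First concern: sparse weights never enter $sum, yet are divided by it.
dense = [1.0, -0.5, 0.5]            # F0, F1, F2 ("regular" features)
sparse = {"tm_word_pair": 0.5}      # hypothetical sparse feature

s = sum(abs(w) for w in dense)      # 2.0 -- dense features only
dense_norm = [w / s for w in dense]                  # [0.5, -0.25, 0.25]
sparse_norm = {k: v / s for k, v in sparse.items()}  # rescaled by a sum
                                                     # it never joined

# Second concern: with several weight sets in one file, the F-lines of
# earlier sets are overwritten, but the sum keeps accumulating.
weight = {}
s2 = 0.0
for line in ["F0 0.4", "F1 -0.6",             # first set (overwritten)
             "F0 1.0", "F1 -0.5", "F2 0.5"]:  # last set (the one kept)
    idx, val = line.split()
    weight[idx] = float(val)
    s2 += abs(float(val))
# The surviving set has absolute mass 2.0, but s2 is 3.0, so the kept
# weights get shrunk by the discarded set's mass as well.
```

So in both cases a weight ends up divided by a sum it did not (fully)
contribute to.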
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support