Hi all,

after experimenting with the perceptron sequence training for
the name finder I found an issue with the normalization of
the perceptron model.

The perceptron model's eval method outputs scores which
indicate how likely an event is; after normalization
the scores should be between zero and one.

I observed that the scores can also be Infinity, which does
not work well for beam search; depending on the scores output,
it may not be able to find a sequence at all.

Why is a score Infinity? The scores are normalized with the exponential
function, which overflows to Infinity for large inputs, e.g. Math.exp(850).
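One common fix would be the max-subtraction (log-sum-exp) trick: subtracting the largest score before exponentiating leaves the normalized probabilities unchanged but keeps every exponent at or below zero, so Math.exp can never overflow. A minimal standalone sketch (the class and method names here are hypothetical, not OpenNLP's actual API):

```java
// Numerically stable normalization of perceptron scores.
// Hypothetical sketch, not the existing OpenNLP implementation.
public class StableNormalize {

    static double[] normalize(double[] scores) {
        // Find the maximum score.
        double max = Double.NEGATIVE_INFINITY;
        for (double s : scores) {
            if (s > max) {
                max = s;
            }
        }

        // Subtracting the max keeps every exponent <= 0,
        // so Math.exp never returns Infinity. The subtraction
        // cancels out in the division below, so the resulting
        // probabilities are mathematically identical.
        double sum = 0;
        double[] out = new double[scores.length];
        for (int i = 0; i < scores.length; i++) {
            out[i] = Math.exp(scores[i] - max);
            sum += out[i];
        }
        for (int i = 0; i < out.length; i++) {
            out[i] /= sum;
        }
        return out;
    }

    public static void main(String[] args) {
        // Scores around 850 would overflow a naive Math.exp call.
        double[] p = normalize(new double[] {850.0, 848.0});
        System.out.println(p[0] + " " + p[1]);
    }
}
```

With this, even scores like 850 normalize to finite values that sum to one, which beam search can consume normally.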

Any suggestions on how we should fix the normalization?

Thanks,
Jörn
