[
https://issues.apache.org/jira/browse/SOLR-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318012#comment-16318012
]
Yuki Yano commented on SOLR-11597:
----------------------------------
Thanks for your reply, [~malcorn_redhat].
bq. "Nonlinearity" and "activation function" are used more or less
interchangeably when talking about neural networks. See, e.g., this Stanford
course, "In other words, each neuron performs a dot product with the input and
its weights, adds the bias and applies the non-linearity (or activation
function)". Because the two terms are interchangeable, I'm OK with either being
used.
I see. Then I think it is better to keep the name "nonlinearity" for simplicity.
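For readers less familiar with the terminology, the quoted definition (dot product with the weights, plus a bias, passed through the nonlinearity) can be sketched in a few lines. This is an illustrative, framework-free sketch, not code from the Solr LTR contrib; the function name and the choice of tanh as the activation are mine:

```python
import math

def neuron_output(inputs, weights, bias):
    # Dot product of the input vector and the neuron's weights, plus the bias.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Apply the nonlinearity (activation function); tanh is one common choice.
    return math.tanh(z)

# A single neuron with two inputs.
print(neuron_output([1.0, 2.0], [0.5, -0.25], 0.1))
```

Whether one calls `math.tanh` here the "nonlinearity" or the "activation function" changes nothing about the computation, which is the point of the quoted passage.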
bq. In my opinion, if this is a route Solr eventually wants to go, I think a
better strategy would be to just add a dependency on Deeplearning4j.
That's a great idea :)
> Implement RankNet.
> ------------------
>
> Key: SOLR-11597
> URL: https://issues.apache.org/jira/browse/SOLR-11597
> Project: Solr
> Issue Type: New Feature
> Security Level: Public(Default Security Level. Issues are Public)
> Components: contrib - LTR
> Reporter: Michael A. Alcorn
>
> Implement RankNet as described in [this
> tutorial|https://github.com/airalcorn2/Solr-LTR].
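For context on the feature being requested: the core of RankNet is a pairwise cross-entropy loss over score differences, where the modeled probability that document i ranks above document j is the sigmoid of their score difference. A minimal sketch of that loss (my own illustration, not code from the linked tutorial or the Solr LTR contrib):

```python
import math

def ranknet_loss(s_i, s_j, target=1.0):
    """Pairwise RankNet loss. s_i and s_j are the model's scores for
    documents i and j; target is 1.0 when i should rank above j.
    The predicted probability that i outranks j is sigmoid(s_i - s_j)."""
    p_ij = 1.0 / (1.0 + math.exp(-(s_i - s_j)))
    # Binary cross-entropy between the target and the predicted probability.
    return -(target * math.log(p_ij) + (1.0 - target) * math.log(1.0 - p_ij))

# When the model already scores the preferred document higher, the loss
# is small; when it scores it lower, the loss grows.
print(ranknet_loss(2.0, 0.5))
print(ranknet_loss(0.5, 2.0))
```

Training then backpropagates this loss through the scoring network (the neurons discussed above).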
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)