My team has a custom optimization routine that we would like to plug in
as a replacement for the default LBFGS/OWLQN used by some of the
ml/mllib algorithms.

However, it seems the choice of optimizer is hard-coded in every algorithm
except LDA, and even there it is only a choice between the internally
defined online and batch (EM) versions.

Any suggestions on how we might be able to incorporate our own optimizer?
Or do we need to roll our own algorithms from top to bottom, essentially
sidestepping ml/mllib?

Thanks,
Stephen
