This is out of the scope of scikit-learn, which is a toolkit meant to
make machine learning easier. Optimization is a component of machine
learning, but not one that is readily usable by itself.
Gaël
On Tue, Sep 04, 2018 at 12:45:09PM -0600, Touqir Sajed wrote:
Hi Andreas,
Is there a particular reason why there is no general-purpose
optimization module? Most of the optimizers (at least the first-order
methods) are general purpose, since you just need to feed in the
gradient. In some special cases you probably need a problem-specific
formulation for better performance.
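To illustrate what I mean (a toy sketch only, not a proposed API; the
names and step size here are made up), a first-order method needs
nothing from the problem except a gradient callable:

import numpy as np

def sgd(grad, w0, eta=0.01, n_steps=1000):
    # Plain SGD: grad(w) returns a (possibly stochastic) gradient at w,
    # so the same loop works for any differentiable objective.
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(n_steps):
        w -= eta * grad(w)
    return w

# Minimize f(w) = ||w - 3||^2 via its gradient 2 * (w - 3).
print(sgd(lambda w: 2 * (w - 3.0), np.zeros(2)))  # ~ [3. 3.]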
Hi Touqir.
We don't usually implement general-purpose optimizers in
scikit-learn, in particular because different optimizers apply to
different kinds of problems.
For linear models we have SAG and SAGA; for neural nets we have Adam.
I don't think the authors claim to be faster than SAG.
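For what it's worth, those solvers are exposed through the estimators
rather than as a standalone module (the parameters below are just
illustrative):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)

# SAGA as the solver for a linear model:
clf = LogisticRegression(solver='saga', max_iter=1000).fit(X, y)
# Adam as the solver for a small neural net:
net = MLPClassifier(solver='adam', max_iter=500, random_state=0).fit(X, y)
print(clf.score(X, y), net.score(X, y))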
Hi,
I have been looking for stochastic optimization algorithms in
scikit-learn that are faster than SGD, and so far I have come across
Adam and momentum. Are there other methods implemented in
scikit-learn? In particular, the variance-reduction methods such as
SVRG (https://papers.nips.cc/paper/493
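For reference, here is a minimal NumPy sketch of the variance-reduction
idea I mean, SVRG on least squares (my own toy code, not something in
scikit-learn; the step size and loop counts are arbitrary):

import numpy as np

def svrg(X, y, eta=0.02, n_epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        # Full gradient at the snapshot (the variance-reduction anchor).
        w_snap = w.copy()
        full_grad = X.T @ (X @ w_snap - y) / n
        for _ in range(n):
            i = rng.integers(n)
            # Per-example least-squares gradients at w and at the snapshot.
            g_w = (X[i] @ w - y[i]) * X[i]
            g_snap = (X[i] @ w_snap - y[i]) * X[i]
            # Unbiased, variance-reduced stochastic gradient step.
            w -= eta * (g_w - g_snap + full_grad)
    return w

# Toy usage: recover w_true on noiseless synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = np.arange(5.0)
print(svrg(X, y=X @ w_true))  # ~ [0. 1. 2. 3. 4.]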