Absolutely, I will read through. The idea is first to fix the learning
rate update equation in OLR.
I believe this code in OnlineLogisticRegression is the current equation:

@Override
public double currentLearningRate() {
  return mu0 * Math.pow(decayFactor, getStep())
      * Math.pow(getStep() + stepOffset, forgettingExponent);
}
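
In other words, the rate at step t is
mu0 * decayFactor^t * (t + stepOffset)^forgettingExponent: an exponential
decay multiplied by a polynomial decay, with one global rate shared by
every feature.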


I presume you would like an AdaGrad-like solution to replace the above?
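
If so, here is a minimal sketch of what I have in mind: per-feature rates
scaled by the accumulated squared gradients. The class and method names
below are hypothetical, not the existing Mahout API.

  // AdaGrad-like rate: eta_i(t) = baseRate / sqrt(epsilon + sum of g_i^2).
  // Each feature's rate decays at its own pace, so the hand-tuned
  // decayFactor / forgettingExponent constants go away.
  public class AdagradLearningRate {
    private static final double EPSILON = 1e-8; // guards against divide-by-zero
    private final double baseRate;              // eta_0
    private final double[] sumSquaredGradients; // one accumulator per feature

    public AdagradLearningRate(double baseRate, int numFeatures) {
      this.baseRate = baseRate;
      this.sumSquaredGradients = new double[numFeatures];
    }

    // record the gradient observed for feature i at this step
    public void update(int i, double gradient) {
      sumSquaredGradients[i] += gradient * gradient;
    }

    // per-feature rate, replacing the single global currentLearningRate()
    public double currentLearningRate(int i) {
      return baseRate / Math.sqrt(EPSILON + sumSquaredGradients[i]);
    }
  }

The appeal is that frequently updated features cool off quickly while rare
features keep a high rate, which is the behavior the fixed schedule above
approximates with hand-tuned constants.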

On Wed, Nov 27, 2013 at 8:18 PM, Ted Dunning <ted.dunn...@gmail.com> wrote:

> On Wed, Nov 27, 2013 at 7:07 AM, Vishal Santoshi <
> vishal.santo...@gmail.com> wrote:
>
> >
> >
> > Are we to assume that SGD is still a work in progress and the
> > implementations (Cross Fold, Online, Adaptive) are too flawed to be
> > realistically used?
> >
>
> They are too raw to be accepted uncritically, for sure.  They have been
> used successfully in production.
>
>
> > The evolutionary algorithm seems to be the core of
> > OnlineLogisticRegression,
> > which in turn builds up to Adaptive/Cross Fold.
> >
> > >> b) for truly on-line learning where no repeated passes through the
> > data..
> >
> > What would it take to get to an implementation? How can anyone help?
> >
>
> Would you like to help on this?  The amount of work required to get a
> distributed asynchronous learner up is moderate, but definitely not huge.
>
> I think that OnlineLogisticRegression is basically sound, but should get a
> better learning rate update equation.  That would largely make the
> Adaptive* stuff unnecessary, especially if OLR could be used in the
> distributed asynchronous learner.
>
