I think Sachin wants to provide something similar to the LossFunction, but
for the convergence criterion. This would mean that the user can specify a
convergence calculator, for example for the optimization framework, which is
then used from within an iterateWithTermination call.

I think this is a good idea and it would be a nice addition to the
optimization framework, to begin with. I think that with (data,
previousModel, currentModel) one can already model a lot of different
convergence criteria. @Sachin, do you want to take the lead?

However, I don’t think that a convergence criterion makes sense for the
Predictor, because there are also algorithms which are non-iterative, e.g.
some linear regression implementations. Those algorithms which are
iterative, however, can take a convergence calculator as a parameter value.
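
To make the shape of this concrete, here is a rough sketch of how such a
convergence calculator could look. Note that ConvergenceCriterion,
terminationSet, IterativeSolver and setConvergenceCriterion are placeholder
names for illustration, not existing FlinkML API:

import org.apache.flink.api.scala.DataSet

// Hypothetical sketch: a convergence calculator analogous to LossFunction.
// Given the data and the models before and after an iteration, it produces
// the termination DataSet for iterateWithTermination (empty = converged).
trait ConvergenceCriterion[D, M] extends Serializable {
  def terminationSet(
      data: DataSet[D],
      previousModel: DataSet[M],
      currentModel: DataSet[M]): DataSet[_]
}

// Iterative algorithms could accept it as an ordinary parameter value,
// while non-iterative Predictors simply never expose such a parameter.
class IterativeSolver[D, M] {
  private var criterion: Option[ConvergenceCriterion[D, M]] = None

  def setConvergenceCriterion(c: ConvergenceCriterion[D, M]): this.type = {
    criterion = Some(c)
    this
  }
}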

Cheers,
Till

On Mon, Jul 6, 2015 at 6:17 PM, Theodore Vasiloudis <
theodoros.vasilou...@gmail.com> wrote:

> >
> > The point is to provide user with the solution before an iteration and
> >
>
> Am I correct to assume that by "user" you mean library developers here?
> Regular users who just use the API are unlikely to write their own
> convergence criterion function, yes? They would just set a value, for
> example the relative error change in gradient descent, perhaps after
> choosing the criterion from a few available options.
>
> > We can very well employ the iterateWithTermination
> > semantics even under this by setting the second term in the return value
> > to originalSolution.filter(x => !converged)
>
>
> Yes, we use this approach in the GradientDescent code, where we check for
> convergence using the relative change in loss between iterations.
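
A minimal, self-contained sketch of the pattern described above (a toy
update rule, not the actual GradientDescent code), showing how
iterateWithTermination stops once the termination DataSet becomes empty:

import org.apache.flink.api.scala._

object ConvergenceSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Toy "model": a single value that is halved in every iteration.
    val initialModel: DataSet[Double] = env.fromElements(1.0)
    val threshold = 1e-3

    val result = initialModel.iterateWithTermination(100) { model =>
      val updated = model.map(_ / 2.0)

      // Termination criterion: keep an element as long as the change is
      // still above the threshold; an empty DataSet stops the iteration.
      val notConverged = model
        .cross(updated)
        .filter { case (prev, curr) => math.abs(prev - curr) > threshold }

      (updated, notConverged)
    }

    result.print()
  }
}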
>
> So assuming that this is aimed at developers and checking for convergence
> can be done quite efficiently using the above technique, what extra
> functionality would these proposed functions provide?
>
> I expect any kind of syntactic sugar aimed at developers will still have
> to use iterateWithTermination underneath.
>
> On Mon, Jul 6, 2015 at 4:47 PM, Sachin Goel <sachingoel0...@gmail.com>
> wrote:
>
> > Sure.
> > Usually, the convergence criterion can be user defined. For example, for
> > a linear regression problem, the user might want to run the training
> > until the relative change in the squared error falls below a specific
> > threshold, or the weights stop shifting by more than a relative or
> > absolute percentage.
> > Similarly, for example, in the k-means problem, we again have several
> > different convergence criteria, based on the change in the WCSS value or
> > the relative change in the centroids.
> >
> > The point is to provide the user with the solution before an iteration
> > and the solution after an iteration, and let them decide whether it's
> > time to be done with iterating. We can very well employ the
> > iterateWithTermination semantics even under this by setting the second
> > term in the return value to originalSolution.filter(x => !converged),
> > where converged is determined by the user defined convergence criteria.
> > Of course, we're free to use our own convergence criteria in case the
> > user doesn't specify any.
> >
> > This achieves the desired effect.
> >
> > This way the user has more fine-grained control over the training phase.
> > Of course, to aid the user in defining their own convergence criteria,
> > we can provide some generic functions in the Predictor itself, for
> > example, to calculate the current value of the objective function. After
> > this, the rest is up to the imagination of the user.
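
As an illustration of the kind of generic helper mentioned here
(illustrative names, not existing FlinkML code): the current value of a
squared-error objective for a linear model could be computed like this, so
that a user defined criterion can compare it across iterations:

import org.apache.flink.api.scala._

object ObjectiveHelpers {
  // `data` holds (features, label) pairs as plain arrays; purely a sketch.
  def squaredError(
      data: DataSet[(Array[Double], Double)],
      weights: Array[Double]): DataSet[Double] = {
    data
      .map { case (features, label) =>
        // Squared residual of the linear model for one example.
        val prediction = features.zip(weights).map { case (x, w) => x * w }.sum
        val residual = prediction - label
        residual * residual
      }
      .reduce(_ + _)
  }
}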
> >
> > Thinking more about this, I'd actually like to drop the idea of
> > providing an iteration state to the user. That only makes things more
> > complicated and further requires the user to know what exactly goes on
> > inside the algorithm. Usually, the solutions before and after an
> > iteration should suffice. I got too hung up on my decision tree
> > implementation and wanted to incorporate the convergence criteria used
> > there too.
> >
> > Cheers!
> > Sachin
> >
> > [Written from a mobile device. Might contain some typos or grammatical
> > errors]
> > On Jul 6, 2015 1:31 PM, "Theodore Vasiloudis" <
> > theodoros.vasilou...@gmail.com> wrote:
> >
> > > Hello Sachin,
> > >
> > > could you share the motivation behind this? The iterateWithTermination
> > > function provides us with a means of checking for convergence during
> > > iterations, and checking for convergence depends highly on the
> > > algorithm being implemented. It could be the relative change in error,
> > > it could depend on the state (error + weights) history, or on a
> > > relative or absolute change in the model, etc.
> > >
> > > Could you provide an example where having this function makes
> > > development easier? My concern is that this is a hard problem to
> > > generalize properly, given the dependence on the specific algorithm,
> > > model, and data.
> > >
> > > Regards,
> > > Theodore
> > >
> > > On Wed, Jul 1, 2015 at 9:23 PM, Sachin Goel <sachingoel0...@gmail.com>
> > > wrote:
> > >
> > > > Hi all
> > > > I'm trying to work out a general convergence framework for Machine
> > > > Learning algorithms which utilize iterations for optimization. For
> > > > now, I can think of three kinds of convergence functions which might
> > > > be useful.
> > > > 1. converge(data, modelBeforeIteration, modelAfterIteration)
> > > > 2. converge(data, modelAfterIteration)
> > > > 3. converge(data, modelBeforeIteration, iterationState, modelAfterIteration)
> > > >
> > > > where iterationState is some state computed while performing the
> > > > iteration.
> > > >
> > > > The algorithm implementation would have to support all three of
> > > > these, if possible. While specifying the {{Predictor}}, the user
> > > > would implement the Convergence class and override these methods
> > > > with their own implementation.
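
Transcribed into a single (hypothetical) trait, with placeholder type
parameters and return type that are not part of the proposal itself, the
three suggested signatures would look roughly like this:

import org.apache.flink.api.scala.DataSet

// Hypothetical transcription of the three proposed signatures;
// D = data element type, M = model type, S = iteration state type.
trait Convergence[D, M, S] extends Serializable {
  // 1. based on the models before and after an iteration
  def converge(data: DataSet[D], modelBefore: M, modelAfter: M): Boolean

  // 2. based only on the model after an iteration
  def converge(data: DataSet[D], modelAfter: M): Boolean

  // 3. additionally using some state computed during the iteration
  def converge(data: DataSet[D], modelBefore: M, iterationState: S,
               modelAfter: M): Boolean
}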
> > > >
> > > > Any feedback and design suggestions are welcome.
> > > >
> > > > Regards
> > > > Sachin Goel
> > > >
> > >
> >
>
