I always forget that marking patch available doesn't actually make the patch
available.

Patch will be there very shortly.

On Wed, Dec 23, 2009 at 1:20 PM, Jake Mannix <jake.man...@gmail.com> wrote:

> Wait, I thought there was a patch, is there no code on this yet?  The JIRA
> ticket
> says "patch available", but there's no files attached?
>
>  -jake
>
> On Wed, Dec 23, 2009 at 1:15 PM, Jake Mannix <jake.man...@gmail.com>
> wrote:
>
> > Hey Ted,
> >
> >   I'll try out the patch, but I doubt it duplicates any of the stuff I've
> > got coming in. I've been meaning to put together an SGD impl, and while it
> > overlaps conceptually with some of my decomposition stuff (the current
> > in-memory SVD in Taste is actually of the SGD variety, so there may be
> > some overlap with that), any scalable impl of it would be awesome.
> >
> >   But this patch is for SGD for logistic regression, right?  How
> > customizable is it for solving different plugged-in optimization
> > functions?  I guess I could just try it out and see, eh?
> >
> >   -jake
> >
> >
> > On Wed, Dec 23, 2009 at 12:52 PM, Ted Dunning <ted.dunn...@gmail.com>
> > wrote:
> >
> >> Jake,
> >>
> >> I would appreciate your comments on this, especially in light of any
> >> duplication.
> >>
> >> David,
> >>
> >> If you have any time, your comments are always very welcome as well.
> >>
> >> On Wed, Dec 23, 2009 at 12:50 PM, Ted Dunning (JIRA) <j...@apache.org>
> >> wrote:
> >>
> >> >
> >> >     [
> >> >
> >>
> https://issues.apache.org/jira/browse/MAHOUT-228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
> >> ]
> >> >
> >> > Ted Dunning updated MAHOUT-228:
> >> > -------------------------------
> >> >
> >> >    Fix Version/s: 0.3
> >> >           Status: Patch Available  (was: Open)
> >> >
> >> > Here is an early implementation.  The learning has been implemented,
> >> > but not tested.  Most other aspects are reasonably well tested.
> >> >
> >> > > Need sequential logistic regression implementation using SGD techniques
> >> > > -------------------------------------------------------------------------
> >> > >
> >> > >                 Key: MAHOUT-228
> >> > >                 URL: https://issues.apache.org/jira/browse/MAHOUT-228
> >> > >             Project: Mahout
> >> > >          Issue Type: New Feature
> >> > >          Components: Classification
> >> > >            Reporter: Ted Dunning
> >> > >             Fix For: 0.3
> >> > >
> >> > >
> >> > > Stochastic gradient descent (SGD) is often fast enough for highly
> >> > > scalable learning (see Vowpal Wabbit, http://hunch.net/~vw/).
> >> > > I often need to have a logistic regression in Java as well, so that
> >> > > is a reasonable place to start.
> >> >
> >> > --
> >> > This message is automatically generated by JIRA.
> >> > -
> >> > You can reply to this email to add a comment to the issue online.
> >> >
> >> >
> >>
> >>
> >> --
> >> Ted Dunning, CTO
> >> DeepDyve
> >>
> >
> >
>



-- 
Ted Dunning, CTO
DeepDyve
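
For readers following along, the SGD-for-logistic-regression idea described in
the MAHOUT-228 issue can be sketched roughly as below. This is a hypothetical
illustration, not the actual patch; the class name, learning rate, and toy
data are all made up for the example.

```java
import java.util.Random;

// Minimal sketch of stochastic gradient descent for binary logistic
// regression: one weight update per training example, no regularization.
public class SgdLogistic {
    final double[] w;          // weight vector (no separate bias term here)
    final double learningRate;

    SgdLogistic(int dim, double learningRate) {
        this.w = new double[dim];
        this.learningRate = learningRate;
    }

    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // predicted probability of the positive class
    double predict(double[] x) {
        double z = 0.0;
        for (int i = 0; i < w.length; i++) {
            z += w[i] * x[i];
        }
        return sigmoid(z);
    }

    // one SGD step on a single example: w += rate * (y - p) * x
    void train(double[] x, int y) {
        double err = y - predict(x);
        for (int i = 0; i < w.length; i++) {
            w[i] += learningRate * err * x[i];
        }
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        SgdLogistic model = new SgdLogistic(2, 0.1);
        // toy data: label is 1 exactly when x0 > x1
        for (int pass = 0; pass < 2000; pass++) {
            double a = rnd.nextDouble();
            double b = rnd.nextDouble();
            model.train(new double[] {a, b}, a > b ? 1 : 0);
        }
        double pPos = model.predict(new double[] {0.9, 0.1}); // should be well above 0.5
        double pNeg = model.predict(new double[] {0.1, 0.9}); // should be well below 0.5
        System.out.println(pPos + " " + pNeg);
    }
}
```

The appeal of this style, and the reason it scales (as with Vowpal Wabbit),
is that each example is touched once with O(dim) work and then discarded, so
the training set never has to fit in memory.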
