Yes - in fact my real goal is ultimately to implement RGF, though I had
considered building on (or forking) the current GradientBoostingRegressor
as a starting point, (a) because I'm new to developing for scikit-learn,
and (b) to maintain as much consistency as possible with the rest of the
package.
Upon further review, though, I don't think there's much point in adding
fully corrective updates to GBR on their own: without the regularization
(the rest of RGF) they are probably useless and lead to overfitting, as per
http://stat.rutgers.edu/home/tzhang/software/rgf/tpami14-rgf.pdf.
So it would likely be more useful to go ahead and create RGF as an entirely
separate module. But that could take some time, of course.
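To make the "fully corrective" idea concrete, here is a minimal sketch for squared loss: unlike standard gradient boosting, which freezes earlier trees and adds each new one with a fixed learning rate, a fully corrective step re-solves the weights of *all* trees jointly at every iteration. This is only an illustration (the function name and structure are mine, not scikit-learn API), and note it has none of RGF's regularization, which is exactly why it would overfit:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fully_corrective_gbr(X, y, n_trees=10, max_depth=3):
    """Illustrative fully corrective boosting for squared loss (no regularization)."""
    trees, preds = [], []          # fitted trees and their raw training outputs
    weights = np.zeros(0)
    F = np.zeros(len(y))           # current ensemble prediction on the training set
    for _ in range(n_trees):
        residual = y - F           # negative gradient of squared loss
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        trees.append(tree)
        preds.append(tree.predict(X))
        # Fully corrective step: refit the weights of ALL trees so far
        # by unregularized least squares (this is where overfitting creeps in).
        H = np.column_stack(preds)                      # (n_samples, n_trees_so_far)
        weights, *_ = np.linalg.lstsq(H, y, rcond=None)
        F = H @ weights
    return trees, weights
```

RGF's answer, as I understand the paper, is to keep this joint re-optimization but penalize it with an explicit regularizer over the forest.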
Is anyone working on RGF for sklearn?
Arnaud, thanks for directing me to your sparse implementation; I'll have a
look! It would certainly be excellent to have this work for all decision
tree algorithms.
Ken
On Tue, Sep 16, 2014 at 11:16 AM, Peter Prettenhofer <
[email protected]> wrote:
> The only reference I know is the Regularized Greedy Forest paper by
> Johnson and Zhang [1]
> I haven't read the primary source (also by Zhang).
>
> [1] http://arxiv.org/abs/1109.0887
>
> 2014-09-16 15:15 GMT+02:00 Mathieu Blondel <[email protected]>:
>
>> Could you give a reference for gradient boosting with fully corrective
>> updates?
>>
>> Since the philosophy of gradient boosting is to fit each tree against the
>> residuals (or negative gradient) so far, I am wondering how such fully
>> corrective update would work...
>>
>> Mathieu
>>
>> On Tue, Sep 16, 2014 at 9:16 AM, c TAKES <[email protected]> wrote:
>>
>>> Is anyone working on making Gradient Boosting Regressor work with sparse
>>> matrices?
>>>
>>> Or is anyone working on adding an option for fully corrective gradient
>>> boosting, i.e., re-weighting all trees in the ensemble at each iteration?
>>>
>>> These are things I would like to see and may be able to help with if no
>>> one is currently working on them.
>>>
>>>
>>> ------------------------------------------------------------------------------
>>> Want excitement?
>>> Manually upgrade your production database.
>>> When you want reliability, choose Perforce.
>>> Perforce version control. Predictably reliable.
>>>
>>> http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clktrk
>>> _______________________________________________
>>> Scikit-learn-general mailing list
>>> [email protected]
>>> https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
>>>
>>>
>>
>>
>
>
> --
> Peter Prettenhofer
>
>