On Tue, Dec 6, 2011 at 10:37 AM, Peter Prettenhofer
<peter.prettenho...@gmail.com> wrote:
> 2011/12/6 James Bergstra <james.bergs...@gmail.com>:
>> On Fri, Dec 2, 2011 at 12:54 PM, Peter Prettenhofer
>> <peter.prettenho...@gmail.com> wrote:
>>> [...]
>>>
>>
>> How does the current tree implementation support boosting? I don't see
>> anything in the code about weighted samples.
>>
>> - James
>
> You're right - we don't support sample weights at the moment, but one
> might emulate them by sampling with replacement in proportion to the
> weights, and implement e.g. AdaBoost that way.
>
> Gradient boosting [1], on the other hand, does not need sample weights:
> it fits a series of regression trees, each to the residuals (more
> generally, the negative gradient of the loss) of its predecessors. You
> can think of gradient boosting as a generalization of boosting (forward
> stage-wise additive modelling) to arbitrary differentiable loss
> functions (e.g. with exponential loss you recover AdaBoost).
>
> [1] http://en.wikipedia.org/wiki/Gradient_boosting
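For the archive: the resampling idea Peter describes can be sketched in a few
lines of NumPy. This is a hypothetical helper, not anything in scikit-learn -
the point is just that a learner with no sample_weight argument sees the
weights implicitly through duplication in the bootstrap sample.

```python
import numpy as np

def weighted_resample(X, y, sample_weight, rng):
    """Draw a bootstrap sample where each point's selection probability
    is proportional to its weight. A tree fit on (Xr, yr) then behaves
    approximately as if it had been fit with those sample weights."""
    p = sample_weight / sample_weight.sum()
    idx = rng.choice(len(y), size=len(y), replace=True, p=p)
    return X[idx], y[idx]

# Toy check: a point with overwhelming weight should dominate the sample.
X = np.arange(5).reshape(-1, 1)
y = np.arange(5)
w = np.array([1000.0, 1.0, 1.0, 1.0, 1.0])
Xr, yr = weighted_resample(X, y, w, np.random.default_rng(0))
```

In a boosting loop you would recompute the weights after each round and
resample again before fitting the next tree.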

Thanks for pointing that out! That will be fine in my application.
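To make the residual-fitting idea concrete, here is a toy sketch with squared
loss, using hand-rolled regression stumps instead of scikit-learn's trees (all
names here are made up for illustration). With squared loss the negative
gradient is exactly the residual, so each stage simply fits what the ensemble
so far still misses.

```python
import numpy as np

def fit_stump(x, r):
    """Exhaustive regression stump on 1-D input x: pick the threshold
    minimizing the squared error of the residuals r; each leaf predicts
    the mean residual on its side of the split."""
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda z: np.where(z <= t, lv, rv)

def gradient_boost(x, y, n_stages=50, lr=0.1):
    """Forward stage-wise additive modelling with squared loss: start
    from the mean, then repeatedly fit a stump to the current residuals
    and add a shrunken copy of it to the ensemble."""
    f0 = y.mean()
    pred = np.full(len(y), f0)
    stumps = []
    for _ in range(n_stages):
        stump = fit_stump(x, y - pred)   # residual = negative gradient
        pred = pred + lr * stump(x)
        stumps.append(stump)
    return lambda z: f0 + lr * sum(s(z) for s in stumps)

# The ensemble recovers a step function that a single stump only roughs out.
x = np.linspace(0.0, 1.0, 50)
y = (x > 0.5).astype(float)
model = gradient_boost(x, y)
```

Swapping the residual for the gradient of another loss (and adding a line
search per stage) gives the general algorithm in [1].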

- James

_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general