Gael - I tried separating out the lambda function into a standalone
function. Unfortunately no luck - same result :(.
Olivier - I'm using the default, which is n_jobs=1, so hopefully that
shouldn't be a problem.
The reason I'm using threads is that I want to be able to train models
using a v
I'm trying to use cross_val_score inside a lambda function to take full
advantage of my processors - especially because previously I was having
problems when setting cross_val_score's built-in n_jobs > 1. I have
successfully done something similar before, though in a slightly
different way, so I
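For illustration, a minimal sketch of the thread-based setup described above (the dataset, estimator, parameter grid, and use of ThreadPoolExecutor are my assumptions for illustration, not details from this thread; the import path is the modern sklearn.model_selection one, and cross_val_score keeps its own n_jobs at 1):

```python
# Hypothetical sketch: calling cross_val_score from worker threads,
# while leaving cross_val_score's own n_jobs at its default of 1.
from concurrent.futures import ThreadPoolExecutor

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

def evaluate(C):
    # One estimator per task; the CV itself stays single-process.
    model = LogisticRegression(C=C, max_iter=1000)
    return C, cross_val_score(model, X, y, cv=3, n_jobs=1).mean()

with ThreadPoolExecutor(max_workers=4) as pool:
    scores = dict(pool.map(evaluate, [0.01, 0.1, 1.0, 10.0]))
```

Note that because estimator fitting is largely NumPy/Cython work that can release the GIL, threads may or may not give a real speedup; this only shows the structure of the approach.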
C\redist\Debug_NonRedist\x86\Microsoft.VC90.DebugCRT\msvcr90d.dll"
"C:\Program Files (x86)\Microsoft Visual Studio
9.0\VC\redist\Debug_NonRedist\amd64\Microsoft.VC90.DebugCRT\msvcr90d.dll"
On Mon, Sep 22, 2014 at 4:35 AM, Lars Buitinck wrote:
> 2014-09-22 11:30 GMT+02:00 Lars Buitinck wrote:
> 2014-09-20 21:29 GMT+02:00 c TAKES :
> > Exception MemoryError: MemoryError() in 'sklearn.tree._tree.Tree._resize'
> > ignored
> >
> > Anyone recognize this error?
>
> All too well, but I thought it was fixed for good last time we went
> throu
ctive updates is also present in the
> matching pursuit (-> orthogonal matching pursuit) and frank-wolfe
> literatures.
>
> Mathieu
>
> [1] "Boosting Algorithms: Regularization, Prediction and Model Fitting",
> Peter Bühlmann and Torsten Hothorn (thanks to Peter for telling me about
> this paper)
>
> [2]
> https://github.com/mblondel/ivalice/blob/master/ivalice/
st the
>> residuals (or negative gradient) so far, I am wondering how such fully
>> corrective update would work...
>>
>> Mathieu
>>
>> On Tue, Sep 16, 2014 at 9:16 AM, c TAKES wrote:
>>
>>> Is anyone working on making Gradient Boosting Regressor work with
>>> sparse matrices?
Is anyone working on making Gradient Boosting Regressor work with sparse
matrices?
Or is anyone working on adding an option for fully corrective gradient
boosting, i.e. all trees in the ensemble are re-weighted at each iteration?
These are things I would like to see and may be able to help with i
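To make the "fully corrective" idea concrete, here is a minimal sketch (my own illustration, not code from this thread): each new tree is still fit on the residuals as in ordinary gradient boosting, but after adding it, the weights of *all* trees so far are re-solved jointly. The synthetic data, tree depth, and the use of a least-squares solve for the re-weighting step are all assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.randn(200)

trees, preds = [], []
F = np.zeros_like(y)  # current ensemble prediction
for it in range(10):
    # Standard boosting step: fit a new tree on the residuals.
    tree = DecisionTreeRegressor(max_depth=2, random_state=it)
    tree.fit(X, y - F)
    trees.append(tree)
    preds.append(tree.predict(X))
    # Fully corrective step: re-solve the weights of ALL trees at once
    # by least squares, instead of keeping earlier weights fixed.
    P = np.column_stack(preds)            # (n_samples, n_trees)
    w, *_ = np.linalg.lstsq(P, y, rcond=None)
    F = P @ w                             # re-weighted ensemble prediction

mse = np.mean((y - F) ** 2)
```

For squared loss the corrective step is just this linear solve; for other losses it becomes a small convex optimization over the tree weights, which is where the matching-pursuit and Frank-Wolfe connections mentioned above come in.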