+1
On Tue, 19 Feb 2019 at 20:40, Fabian Pedregosa wrote:
>
> +1 (not sure if my previous email went through)
>
> On Tue, Feb 19, 2019 at 11:31 AM Andreas Mueller wrote:
>>
>>
>>
>> On 2/19/19 10:55 AM, Paolo Losi wrote:
>> > +1 if my opinion matters
>> >
>> Thank you and it does :)
>>
>>
>>
Hi Andy,
I read through the document. Even though I have not been very active
these past months/years, I think it summarizes our governance model
well.
+1.
Gilles
On Sat, 9 Feb 2019 at 12:01, Adrin wrote:
>
> +1
>
> Thanks for the work you've put into it!
>
> On Sat, Feb 9, 2019, 03:00 Andreas Mueller wrote:
> , though.
> Probably not even the ExtraTrees.
> I really need to get around to reading your thesis :-/
> Do you recommend using max_features=1 with ExtraTrees?
> On 05/05/2018 05:21 AM, Gilles Louppe wrote:
> > Hi,
> >
> > See also chapters 6 and 7 of http://arxiv.org/abs/1407.7502
Hi,
See also chapters 6 and 7 of http://arxiv.org/abs/1407.7502 for another
point of view regarding the "issue" with feature importances. TLDR: Feature
importances as we have them in scikit-learn (i.e. MDI) are provably **not**
biased, provided trees are built totally at random (as in ExtraTrees with
max_features=1).
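For concreteness, a minimal sketch of that setting (the dataset and
parameter values below are illustrative, not from this thread):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import ExtraTreesClassifier

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

    # Totally randomized trees: max_features=1 picks the candidate feature
    # at random, and ExtraTrees also draws the split threshold at random.
    clf = ExtraTreesClassifier(n_estimators=500, max_features=1,
                               random_state=0).fit(X, y)

    # MDI importances, the quantity discussed above.
    print(clf.feature_importances_)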
Hi Javier,
In the particular case of tree-based models, you can use the soft
labels to create a multi-output regression problem, which would yield
an equivalent classifier (one can show that variance reduction and
the Gini index would yield the same trees).
So basically,
reg = RandomForestRegressor()
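A fuller sketch of the idea (the data and parameter values here are my
own illustration, not from the original message):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.RandomState(0)
    X = rng.rand(100, 5)
    # Soft labels: one column per class, each row summing to 1.
    y_soft = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=100)

    # Fit the soft labels as a multi-output regression problem.
    reg = RandomForestRegressor(n_estimators=100, random_state=0)
    reg.fit(X, y_soft)

    # Each predicted row approximates a class-probability vector;
    # argmax recovers hard class predictions.
    proba = reg.predict(X)
    y_pred = proba.argmax(axis=1)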