On Jun 7, 2013, at 11:57 PM, Ken Geis wrote:
> On May 23, 2013, at 5:03 AM, Gilles Louppe wrote:
>> So I'd like to contribute a simple MAE criterion that would be efficient for
>> random splits (i.e. O(n) given a single batch update.) Is the direction
>> forward for something like this to hard-code more criteria in _tree.pyx, or
>> would it
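For concreteness, here is a rough pure-Python sketch of what an MAE criterion evaluated on a single random (Extra-Trees-style) threshold could look like. This is my own illustration, not the `_tree.pyx` implementation: the helper names are made up, and `np.median` (introselect, O(n) on average) stands in for whatever incremental bookkeeping the Cython code would actually do.

```python
import numpy as np

def mae_impurity(y):
    """MAE impurity: mean absolute deviation from the node's median.
    np.median uses introselect, so this is O(n) on average."""
    return float(np.mean(np.abs(y - np.median(y))))

def random_split_mae(x_col, y, rng):
    """Score one random threshold on a single feature, Extra-Trees style:
    draw the threshold uniformly between the feature's min and max, then
    evaluate the weighted MAE of the two children in a single batch pass."""
    t = rng.uniform(x_col.min(), x_col.max())
    left = x_col <= t
    n, n_left = len(y), int(left.sum())
    if n_left == 0 or n_left == n:
        return t, np.inf  # degenerate split, nothing to score
    score = (n_left * mae_impurity(y[left])
             + (n - n_left) * mae_impurity(y[~left])) / n
    return t, score
```

Because each child's median minimizes its own absolute deviations, the weighted child impurity never exceeds the parent's, so the impurity decrease of any non-degenerate split is non-negative.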
Hi Ken,
I share and understand your concerns about the rigidity of the current
implementation.
> I like using Extremely Randomized Trees, but I'm looking for more flexibility
> in generating them. In particular, I'd like to be able to specify my own
> criterion and split finding algorithm. I'm
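To make the kind of flexibility Ken is asking for concrete, here is a purely hypothetical sketch of what a pluggable criterion interface might look like. Nothing like this exists in scikit-learn, where criteria are hard-coded in Cython for speed; the class and function names below are invented for illustration only.

```python
from typing import Protocol
import numpy as np

# Hypothetical plug-in point -- NOT scikit-learn's actual API.
# A pure-Python hook like this would be far slower than the Cython
# criteria in _tree.pyx, which is one argument against exposing it.
class SplitCriterion(Protocol):
    def node_impurity(self, y: np.ndarray) -> float: ...

class MADCriterion:
    """Mean absolute deviation from the median, as a drop-in criterion."""
    def node_impurity(self, y: np.ndarray) -> float:
        return float(np.mean(np.abs(y - np.median(y))))

def impurity_decrease(crit: SplitCriterion, y: np.ndarray,
                      mask: np.ndarray) -> float:
    """Weighted impurity decrease of splitting y by the boolean mask."""
    n, n_left = len(y), int(mask.sum())
    parent = crit.node_impurity(y)
    child = (n_left * crit.node_impurity(y[mask])
             + (n - n_left) * crit.node_impurity(y[~mask])) / n
    return parent - child
```

A user-supplied criterion would then only need to implement `node_impurity`, and a custom split finder could rank candidate splits by `impurity_decrease`.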
Hi. I am fairly new to scikit-learn. I first tried it out for a Kaggle
competition a few months ago. I didn't have enough time to put into that, but
it gave me an appreciation of the powerful yet simple design of the library.
Also, I've used Python on and off for about a decade for small things,