Hi -
is there an explicit mathematical statement somewhere of what the
'learning_rate' parameter does in AdaBoost? I can't find a formula that
makes clear what actually changes when the parameter is varied.
Or failing that, where in the code does it get applied?
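My best guess so far, from skimming sklearn/ensemble/weight_boosting.py, is
that it shrinks the weight given to each boosting stage, roughly as in the
sketch below (the function name and the SAMME-style update are my reading of
the code, not a verbatim copy):

import numpy as np

def samme_boost_step(sample_weight, incorrect, estimator_error,
                     n_classes, learning_rate):
    # incorrect: array with 1 where this stage's prediction is wrong, else 0.
    # The stage weight is shrunk by learning_rate ...
    estimator_weight = learning_rate * (
        np.log((1.0 - estimator_error) / estimator_error)
        + np.log(n_classes - 1.0))
    # ... which also damps how strongly misclassified samples are
    # upweighted before the next stage is fit.
    sample_weight = sample_weight * np.exp(estimator_weight * incorrect)
    return sample_weight / sample_weight.sum(), estimator_weight

If that reading is right, learning_rate < 1 makes each stage contribute less,
so one typically needs more estimators to compensate.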
Thanks, Thomas
Hi Gilles -
thanks for the reply. I think changing the relative class weights does more or
less what we want, which is to optimize the classification at very low false
alarm probability.
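For the archive, this is roughly what I mean by reweighting: upweighting the
background class through the sample_weight argument of fit (toy data, and the
factor of 10 is an arbitrary illustration):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.randn(1000, 5)
y = (X[:, 0] > 1.5).astype(int)   # rare 'signal' class

# Make false alarms expensive: background (y == 0) samples count 10x.
w = np.where(y == 0, 10.0, 1.0)
clf = DecisionTreeClassifier().fit(X, y, sample_weight=w)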
Another question on DecisionTreeClassifier: does the argument
splitter='best'
actually do anything?
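If I read the docs right, the alternative value is 'random', so a comparison
along these lines should show whether the flag matters (my understanding, not
gospel: 'best' searches all candidate thresholds per feature, while 'random'
draws thresholds at random):

from sklearn.tree import DecisionTreeClassifier

tree_best = DecisionTreeClassifier(splitter='best', random_state=0)
tree_rand = DecisionTreeClassifier(splitter='random', random_state=0)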
Hi,
the only current options for the split criterion in trees / forests are
'entropy' and 'gini'. Two questions on this:
- is anyone planning on implementing others?
- how feasible would it be to have the option of passing a custom function to
the tree or forest to use in splitting? (See the sketch below for the kind of
thing I mean.)
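The callable criterion here is purely hypothetical, not an existing sklearn
API; it is just to make the question concrete:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# What exists today:
clf = DecisionTreeClassifier(criterion='entropy')   # or criterion='gini'

# Hypothetical: an impurity function mapping per-node class counts to a
# score the splitter would minimise (NOT supported by sklearn).
def my_impurity(class_counts):
    p = class_counts / class_counts.sum()
    return 1.0 - p.max()   # misclassification error, as an example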
W
--
Institute for Gravitational Physics
(Albert Einstein Institute)
Callinstr. 38
D-30167 Hannover, Germany
On Jan 21, 2013, at 8:57 PM, Willi Richert wrote:
> Yes, it is the parameter p in the Minkowski distance:
> http://en.wikipedia.org/wiki/Minkowski_distance.
>
> I think the master branch should have this information in the docs.
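For reference, a minimal illustration of that parameter on the classifier
(p=1 gives Manhattan distance, p=2 the usual Euclidean):

from sklearn.neighbors import KNeighborsClassifier

knn_l1 = KNeighborsClassifier(n_neighbors=5, p=1)
knn_l2 = KNeighborsClassifier(n_neighbors=5, p=2)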
Hi all,
I just started using sklearn nearest-neighbors for classification & would like
to apply my own distance weighting function.
To do this I need to know exactly what the 'distance' that is fed to the
function represents. (The current documentation doesn't give me an immediate
answer.)
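My working assumption, from the weights docstring, is that the callable
receives an array of the raw neighbour distances, measured in whatever metric
the estimator was built with (Euclidean by default), and must return an array
of the same shape containing the weights, e.g.:

from sklearn.neighbors import KNeighborsClassifier

def inverse_distance(dist):
    # dist: distances from each query point to its k nearest neighbours
    return 1.0 / (dist + 1e-9)   # epsilon guards the zero-distance case

knn = KNeighborsClassifier(n_neighbors=5, weights=inverse_distance)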
Fo