Hi,
Is it possible for me to contribute a library introducing SVMs with tree
kernels (like the one currently available in SVM-light), which are currently
missing in scikit-learn?
Best,
Shreyas
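(For reference: scikit-learn's SVC already supports user-defined kernels,
either as a callable or as a precomputed Gram matrix, which is the usual way
to prototype structured kernels such as tree kernels before proposing a
dedicated implementation. A minimal sketch; the kernel below is a stand-in
polynomial, not an actual tree kernel:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Any function mapping two sample matrices to a Gram matrix works;
# a real tree kernel would compare tree structures instead.
def my_kernel(A, B):
    return np.dot(A, B.T) ** 2

clf = SVC(kernel=my_kernel).fit(X, y)
print(clf.score(X, y))
)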
On 5 Mar 2017 11:03 a.m., "Andreas Mueller" wrote:
> There was a PR here:
> https://github.com/scikit-learn/s
> On 13 Mar 2017, at 21:18, Andreas Mueller wrote:
>
> No, if all the samples are normalized and your aggregation function is sane
> (like the mean), the output will also be normalized.
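A quick numpy sanity check of the quoted claim, with simulated per-tree
probability outputs (the shapes here are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n_trees, n_samples, n_classes = 10, 5, 3
# Each tree predicts a probability vector per sample (rows sum to 1).
per_tree = rng.dirichlet(np.ones(n_classes), size=(n_trees, n_samples))
ensemble = per_tree.mean(axis=0)  # aggregate by averaging over trees
print(np.allclose(ensemble.sum(axis=1), 1.0))  # True: still normalized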
You are completely right, I hadn't checked this for random forests.
Still, my purpose is to reduce model complexity.
> You could use a regression model with a logistic sigmoid in the output layer.
When training a regression network with a logistic output activation, the
outputs do not sum to 1.
I just checked on a minimal example on the iris dataset.
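For anyone who wants to reproduce that check, a minimal sketch along the same
lines (scikit-learn's MLPRegressor uses an identity output activation rather
than a logistic one, so this only shows that unconstrained regression outputs
need not sum to 1):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPRegressor

X, y = load_iris(return_X_y=True)
Y = np.eye(3)[y]  # one-hot targets standing in for soft labels

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                   random_state=0).fit(X, Y)
row_sums = net.predict(X).sum(axis=1)
print(row_sums.min(), row_sums.max())  # generally not exactly 1.0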
On 03/13/2017 08:35 AM, Javier López Peña wrote:
Training a regression tree would require sticking some kind of
probability normalizer at the end to ensure proper probabilities;
this might somehow hurt sharpness or calibration.
No, if all the samples are normalized and your aggregation function is sane
(like the mean), the output will also be normalized.
On 03/12/2017 03:11 PM, Javier López Peña wrote:
The purpose is two-fold: on the one hand, to use the probabilities
generated by a very complex model (e.g. a massive ensemble) to train a
simpler one that achieves comparable performance at a fraction of the
cost. Any universal classifier will do (
Hi, Stuart,
I think the only way to do that right now would be through the SGD classifier,
e.g.,
sklearn.linear_model.SGDClassifier(loss='log', penalty='elasticnet' …)
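A slightly fuller sketch of that suggestion (recent scikit-learn releases
spell the loss 'log_loss' rather than 'log'; the alpha and l1_ratio values
here are arbitrary placeholders):

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# loss='log' gives logistic regression; penalty='elasticnet'
# mixes L1 and L2 regularization, weighted by l1_ratio.
clf = make_pipeline(
    StandardScaler(),
    SGDClassifier(loss='log', penalty='elasticnet',
                  alpha=1e-4, l1_ratio=0.5, random_state=0),
)
clf.fit(X, y)
print(clf.score(X, y))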
Best,
Sebastian
> On Mar 13, 2017, at 12:57 PM, Stuart Reynolds wrote:
>
> Is there an implementation of logistic regression with elastic net
> regularization in scikit?
Perfect. Thanks -- will give it a go.
On Mon, Mar 13, 2017 at 10:04 AM, Jacob Schreiber wrote:
> Hi Stuart
>
> Take a look at this issue:
> https://github.com/scikit-learn/scikit-learn/issues/2968
>
> On Mon, Mar 13, 2017 at 9:57 AM, Stuart Reynolds
> <stu...@stuartreynolds.net> wrote:
>
>> Is there an implementation of logistic regression with elastic net
>> regularization in scikit?
Recently, there have been some issues/PRs tackling this topic:
https://github.com/scikit-learn/scikit-learn/issues/8288
https://github.com/scikit-learn/scikit-learn/issues/8446
Both libraries are heavily parameterized. You should check what the
defaults are for both.
Some ideas:
- What regularization is being used: L1, L2, or both?
- Does the regularization parameter have the same interpretation, i.e.
  1/C = lambda? Some libraries use C, some use lambda (see the sketch
  after this list).
- Also, some libraries regularize the intercept and others do not.
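A sketch of the 1/C = lambda mapping in scikit-learn terms, assuming the
other library minimizes mean log-loss plus an L2 penalty: LogisticRegression
minimizes C * sum(log-loss) + ||w||^2 / 2, while SGDClassifier minimizes
roughly mean(log-loss) + alpha * ||w||^2 / 2, so alpha = 1 / (C * n_samples).
The fits only match approximately, since the optimizers differ:

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
n_samples = X.shape[0]

C = 1.0
lr = LogisticRegression(C=C, max_iter=1000).fit(X, y)
# Equivalent lambda for a mean-loss formulation: alpha = 1 / (C * n).
sgd = SGDClassifier(loss='log', alpha=1.0 / (C * n_samples),
                    max_iter=5000, tol=1e-6, random_state=0).fit(X, y)
print(np.abs(lr.coef_ - sgd.coef_).max())  # small-ish, not exact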
Hi Stuart
Take a look at this issue:
https://github.com/scikit-learn/scikit-learn/issues/2968
On Mon, Mar 13, 2017 at 9:57 AM, Stuart Reynolds wrote:
> Is there an implementation of logistic regression with elastic net
> regularization in scikit?
> (or pointers on implementing this - it seems non-convex and so you might
> expect poor behavior with some optimizers)
Is there an implementation of logistic regression with elastic net
regularization in scikit?
(or pointers on implementing this - it seems non-convex and so you might
expect poor behavior with some optimizers)
- Stuart
Hi Gilles,
thanks for the suggestion!
Training a regression tree would require sticking some kind of
probability normalizer at the end to ensure proper probabilities;
this might somehow hurt sharpness or calibration.
Unfortunately, one of the things I am trying to do
with this is moving away from
Hi Javier,
In the particular case of tree-based models, you can use the soft
labels to create a multi-output regression problem, which would yield
an equivalent classifier (one can show that reduction of variance and
the Gini index would yield the same trees).
So basically:
reg = RandomForestRegressor(...)
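Filling in the rest as a hedged sketch (the teacher model here is just a
stand-in for the "very complex model"; the key point is that a multi-output
forest fitted on rows that sum to 1 predicts rows that sum to 1):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier, RandomForestRegressor

X, y = load_iris(return_X_y=True)

# Stand-in "complex" teacher; its predict_proba gives the soft labels.
teacher = GradientBoostingClassifier(random_state=0).fit(X, y)
soft_labels = teacher.predict_proba(X)  # each row sums to 1

# Student: multi-output regression on the soft labels.
reg = RandomForestRegressor(n_estimators=100, random_state=0)
reg.fit(X, soft_labels)

student_probs = reg.predict(X)
print(np.allclose(student_probs.sum(axis=1), 1.0))  # True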