Congratulations!
Thank you very much for everyone's hard work!
Raga
On Wed, Sep 26, 2018, 2:57 PM Andreas Mueller wrote:
> Hey everybody!
> I'm happy to (finally) announce scikit-learn 0.20.0.
> This release is dedicated to the memory of Raghav Rajagopalan.
>
> You can upgrade now with pip or co
Great! Thank you very much!
Best,
Raga
On Oct 23, 2017 11:44 AM, "Gael Varoquaux" wrote:
> Hurray! Great job; thanks to all involved!
>
> Gaël
>
> On Mon, Oct 23, 2017 at 12:23:11PM -0400, Andreas Mueller wrote:
> > Hey everybody.
>
> > We just released 0.19.1, fixing some issues and bugs in th
No worries.. your answer is helpful for me too.. I was actually exploring
different ways to get the coefficients, what I can and can't get :)..
Thanks!
On Aug 28, 2017 8:24 PM, "Joel Nothman" wrote:
> Sorry if I misunderstood your question.
>
> On 29 August 2017 at 06
Sounds good.. tried it and it works.. thank you!
On Mon, Aug 28, 2017 at 3:20 PM, Andreas Mueller wrote:
> you can also use grid.best_estimator_ (and then all the rest)
>
> On 08/28/2017 03:07 PM, Raga Markely wrote:
>
> Ah.. got it :D..
>
> The pipeline was run in gridsearch
Ah.. got it :D..
The pipeline was run in gridsearchcv..
It works now after calling fit..
Thanks!
Raga
On Mon, Aug 28, 2017 at 2:55 PM, Andreas Mueller wrote:
> Have you called "fit" on the pipeline?
>
>
> On 08/28/2017 02:12 PM, Raga Markely wrote:
>
> Thank
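Once the search has been fit, the refit pipeline is exposed as `best_estimator_`, as suggested above. A minimal sketch of the pattern (the dataset, grid, and step names here are illustrative, not taken from the thread):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

pipe = Pipeline([('sc', StandardScaler()),
                 ('clf', LogisticRegression(penalty='l1', solver='liblinear'))])

# By default GridSearchCV refits the best pipeline on the whole dataset.
grid = GridSearchCV(pipe, {'clf__C': [0.1, 1.0]}, cv=3)
grid.fit(X, y)

# best_estimator_ is that refit Pipeline; drill into its steps as usual.
coef = grid.best_estimator_.named_steps['clf'].coef_
```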
On Mon, Aug 28, 2017 at 12:01 PM, Andreas Mueller wrote:
> You can get the coefficients on the scaled data with
> pipeline_lr.named_steps['clf'].coef_
> though
>
>
> On 08/28/2017 12:08 AM, Raga Markely wrote:
>
> No problem, thank you!
>
> Best,
> Raga
>
>
No problem, thank you!
Best,
Raga
On Mon, Aug 28, 2017 at 12:01 AM, Joel Nothman wrote:
> No, we do not have a way to get the coefficients with respect to the input
> (pre-scaling) space.
>
> On 28 August 2017 at 13:20, Raga Markely wrote:
>
>> Hello,
>>
>> I
Hello,
I am wondering if it's possible to get the weight coefficients of logistic
regression from a pipeline?
For instance, I have the followings:
> clf_lr = LogisticRegression(penalty='l1', C=0.1)
> pipe_lr = Pipeline([('sc', StandardScaler()), ('clf', clf_lr)])
> pipe_lr.fit(X, y)
Does pipe_
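For completeness, a self-contained sketch of the pattern discussed in this thread (the data is synthetic, and `solver='liblinear'` is added because recent scikit-learn versions require an l1-capable solver):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data, for illustration only.
rng = np.random.RandomState(0)
X = rng.randn(100, 4)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf_lr = LogisticRegression(penalty='l1', C=0.1, solver='liblinear')
pipe_lr = Pipeline([('sc', StandardScaler()), ('clf', clf_lr)])
pipe_lr.fit(X, y)

# After fitting, the coefficients (in the *scaled* feature space) sit on
# the estimator inside the pipeline; note `named_steps`, no underscore.
coef = pipe_lr.named_steps['clf'].coef_
```

As noted later in the thread, these coefficients are with respect to the scaled features; scikit-learn has no built-in way to map them back to the pre-scaling input space.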
Thanks a lot for all the hard work and congrats!
Best,
Raga
On Aug 12, 2017 1:21 AM, "Sebastian Raschka" wrote:
> Yay, as an avid user, thanks to all the developers! This is a great
> release indeed -- no breaking changes (at least for my code base) and so
> many improvements and additions (tha
ill be out later this year).
> Also, I saw an interesting poster on a Set Covering Machine algorithm
> once, which they benchmarked against SVMs, random forests and the like for
> categorical (genomics data). Looked promising.
>
> Best,
> Sebastian
>
>
> > On Jul 21, 20
l likely perform worse
> than those that treat them appropriately.
>
> On Fri, Jul 21, 2017 at 8:11 AM, Raga Markely
> wrote:
>
>> Hello,
>>
>> I am wondering if there are some classifiers that perform better for
>> datasets with categorical features (converted i
Hello,
I am wondering if there are some classifiers that perform better for
datasets with categorical features (converted into sparse input matrix with
pd.get_dummies())? The data for the categorical features are nominal (order
doesn't matter, e.g. country, occupation, etc).
If you could provide
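As a rough way to compare candidate classifiers on one-hot-encoded nominal data, something like the following can be used (the toy data and classifier choices are illustrative only):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy nominal data, for illustration only.
df = pd.DataFrame({
    'country': ['US', 'FR', 'US', 'DE', 'FR', 'DE'] * 20,
    'occupation': ['eng', 'doc', 'doc', 'eng', 'eng', 'doc'] * 20,
})
y = (df['country'] == 'US').astype(int)

# One-hot encode the nominal columns, as in the question.
X = pd.get_dummies(df)

# Cross-validate a few candidates on the encoded matrix.
for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(n_estimators=50, random_state=0)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, scores.mean())
```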
Will definitely acknowledge the scikit-learn, scipy, etc. communities in
papers, posters, talks, etc.. I also saw the suggested citations on the
scikit-learn website.. I will include these as well.. if there is anything
else that will be helpful, please let us know..
Sincerely hope that all of your contributions
nce … :) https://www.crcpress.com/An-
> Introduction-to-the-Bootstrap/Efron-Tibshirani/p/book/9780412042317)
>
> > On Mar 1, 2017, at 10:07 PM, Raga Markely
> wrote:
> >
> > No worries, Sebastian :) .. thank you very much for your help.. I
> learned a lot of new things from yo
d from weighted ACC_h,i and ACC_r,i
>
> > 2. For regression algorithms, is there a recommended equation for the
> no-information rate gamma?
>
>
> Sorry, can’t be of much help here; I am not sure what the equivalent of
> the no-information rate for regression would be ...
>
016/model-evaluation-selection-part2.html#the-bootstrap-method-and-
> empirical-confidence-intervals) if it helps.
>
> Best,
> Sebastian
>
> > On Mar 1, 2017, at 3:07 PM, Raga Markely wrote:
> >
> > Hi everyone,
> >
> > I wonder if you could prov
Hi everyone,
I wonder if you could provide me with some suggestions on how to determine
the confidence and prediction intervals of SVR? If you have suggestions for
any machine learning algorithms in general, that would be fine too (doesn't
have to be specific for SVR).
So far, I have found:
1. Bo
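One bootstrap-based sketch along the lines being discussed (all constants and data here are illustrative; note this yields a confidence band for the fitted function, not a full prediction interval, since it ignores the noise term):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.utils import resample

# Toy 1-D regression data, for illustration only.
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 5, 80)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=80)

# Resample the training set with replacement, refit SVR each time, and
# collect predictions; percentiles across replicates give a rough band.
preds = []
for i in range(50):
    Xb, yb = resample(X, y, random_state=i)
    preds.append(SVR(C=1.0, gamma='scale').fit(Xb, yb).predict(X))
preds = np.array(preds)

lower = np.percentile(preds, 2.5, axis=0)
upper = np.percentile(preds, 97.5, axis=0)
```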
Hello,
I am planning to buy an office desktop PC for machine learning work. I
wonder if you could provide some recommendations on computer specs and
brand? I don't need cloud capacity, just a standalone but powerful
desktop.. to simplify, let's ignore the price.. I can scale down according to bud
Hello,
I ran LDA for dimensionality reduction, and got the following message on
the command prompt (not on the Jupyter Notebook):
"The priors do not sum to 1. Renormalizing", UserWarning
If I understand correctly, the priors = np.bincount(y) / len(y)? So, does
it mean I am getting this message d
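For reference, a small sketch of how LinearDiscriminantAnalysis handles priors (toy data): with `priors=None` they are estimated as class frequencies and always sum to 1, so the warning can only come from explicitly passed, unnormalized priors.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy data: two well-separated classes of three points each.
X = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.], [3., 2.], [2., 3.]])
y = np.array([0, 0, 0, 1, 1, 1])

# priors=None: estimated as np.bincount(y) / len(y), always sums to 1.
lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.priors_)  # [0.5 0.5]

# The "priors do not sum to 1. Renormalizing" warning appears only when
# unnormalized priors are passed explicitly; LDA then renormalizes them.
lda2 = LinearDiscriminantAnalysis(priors=[0.3, 0.3]).fit(X, y)
```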
klearn 0.18?
>
> Best,
> Sebastian
>
> > On Jan 30, 2017, at 2:48 PM, Raga Markely
> wrote:
> >
> > Hi Sebastian,
> >
> > Following up on the original question on repeated Grid Search CV, I
> tried to do repeated nested loop using the followings:
&g
, I get the following error: TypeError: 'StratifiedKFold' object is not
iterable
I did some trials, and the error is gone when I remove cv=k_fold_inner from
gs = ...
Could you give me some tips on what I can do?
Thank you!
Raga
On Fri, Jan 27, 2017 at 1:16 PM, Raga Markely
wr
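For context, this error typically comes from mixing the pre-0.18 and post-0.18 APIs: the new `model_selection.StratifiedKFold` object is not itself iterable (splits come from its `split` method), so it must be passed to the new `model_selection.GridSearchCV`, which knows how to consume it. A sketch under that assumption (data and grid are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Post-0.18 style: the splitter is built without y and is consumed by
# GridSearchCV itself. Passing it to the old sklearn.grid_search version
# raises "TypeError: 'StratifiedKFold' object is not iterable".
k_fold_inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
gs = GridSearchCV(SVC(), {'C': [0.1, 1.0]}, cv=k_fold_inner)
gs.fit(X, y)
print(gs.best_params_)
```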
whole training set
> > 4.) evaluate on test set
> > 5.) fit classifier to whole dataset, done
> >
> > Best,
> > Sebastian
> >
> >> On Jan 27, 2017, at 10:23 AM, Raga Markely
> wrote:
> >>
> >> Sounds good, Sebastian.. than
sting, I’d train on
> train/validation splits and evaluate on the test set. And to compare e.g.,
> two networks against each other on large test sets, you could do a McNemar
> test.
>
> Best,
> Sebastian
>
> > On Jan 26, 2017, at 8:09 PM, Raga Markely
> wrote:
&
t,
> Sebastian
>
> > On Jan 26, 2017, at 5:39 PM, Raga Markely
> wrote:
> >
> > Hello,
> >
> > I was trying to do repeated Grid Search CV (20 repeats). I thought that
> each time I call GridSearchCV, the training and test sets separated in
> different splits
g with large datasets and for early stopping on
>> neural nets.
>>
>> Best,
>> Sebastian
>>
>>
>> > On Jan 26, 2017, at 1:19 PM, Raga Markely
>> wrote:
>> >
>> > Thank you, Guillaume.
>> >
>> > 1. I agree wi
Hello,
I was trying to do repeated Grid Search CV (20 repeats). I thought that
each time I call GridSearchCV, the training and test splits would be
different.
However, I got the same best_params_ and best_scores_ for all 20 repeats.
It looks like the training and test
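The usual cause is that the CV splitter is deterministic by default, so every repeat sees identical folds; giving each repeat its own shuffled splitter makes them differ. A sketch (data, grid, and repeat count are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
param_grid = {'C': [0.1, 1.0, 10.0]}

best_scores = []
for repeat in range(5):
    # A fresh random_state per repeat gives different shuffled splits;
    # without shuffling, every repeat would use identical folds.
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=repeat)
    gs = GridSearchCV(SVC(), param_grid, cv=cv)
    gs.fit(X, y)
    best_scores.append(gs.best_score_)

print(best_scores)
```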
for your problems.
2. The function is called in _fit_and_score, at l. 260 and 263 for instance.
On 26 January 2017 at 17:02, Raga Markely wrote:
> Hello,
>
> I have 2 questions regarding cross_val_score.
> 1. Do the scores re
Hello,
I have 2 questions regarding cross_val_score.
1. Do the scores returned by cross_val_score correspond to only the test
set or the whole data set (training and test sets)?
I tried to look at the source code, and it looks like it returns the score
of only the test set (line 145: "return_train
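That reading matches current behaviour: cross_val_score reports only the held-out-fold scores, while cross_validate (available since 0.19) can additionally return training-fold scores. A quick check (dataset and estimator are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, cross_validate

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# cross_val_score: one score per fold, computed on the held-out test fold.
test_scores = cross_val_score(clf, X, y, cv=5)

# cross_validate can also report the training-fold scores on request.
res = cross_validate(clf, X, y, cv=5, return_train_score=True)
print(test_scores)
print(res['train_score'])
```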
ransformed
> feature vectors. If you choose dimensionality of the target space
> (n_components) large enough (depending on your kernel and data),
> Nystroem approximator should provide sufficiently good kernel
> approximation for such combination to approximate GDA.
>
&
Hello,
I wonder if scikit-learn has implementation for generalized discriminant
analysis using kernel approach?
http://www.kernel-machines.org/papers/upload_21840_GDA.pdf
I did some search, but couldn't find.
Thank you,
Raga
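There is no dedicated GDA estimator in scikit-learn, but per the suggestion quoted above, a Nystroem kernel approximation followed by ordinary LDA approximates it (the kernel, gamma, n_components, and dataset below are illustrative choices):

```python
from sklearn.datasets import make_moons
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline

# Toy non-linearly-separable data, for illustration only.
X, y = make_moons(n_samples=300, noise=0.1, random_state=0)

# Map the data into an approximate RBF kernel feature space, then run
# ordinary LDA there; with enough components this approximates kernel GDA.
gda_approx = make_pipeline(
    Nystroem(kernel='rbf', gamma=1.0, n_components=100, random_state=0),
    LinearDiscriminantAnalysis(),
)
gda_approx.fit(X, y)
acc = gda_approx.score(X, y)
```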