> Outside the framework, everything is possible by calling the "fit",
> "transform", and "predict" methods of the various objects.
>
> Gaël
>
> On Fri, Jan 15, 2016 at 07:55:45PM +0100, Fabrizio Fasano wrote:
>> Thanks a lot, Andreas,
>
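As an illustration of Gaël's point, here is a minimal sketch that calls fit,
transform, and predict by hand per CV fold, outside a Pipeline. The data,
splitter, and estimators are placeholder choices (not from this thread),
written against the current sklearn.model_selection API:

    import numpy as np
    from sklearn.feature_selection import SelectPercentile, f_classif
    from sklearn.model_selection import KFold
    from sklearn.svm import SVC

    rng = np.random.RandomState(0)
    X = rng.rand(40, 20)                  # placeholder data
    y = rng.randint(0, 2, 40)             # placeholder labels

    scores = []
    for train, test in KFold(n_splits=5).split(X):
        # fit the transformer on the training fold only ...
        selector = SelectPercentile(f_classif, percentile=50).fit(X[train], y[train])
        # ... then transform both folds, fit the classifier, and predict
        clf = SVC(kernel="linear").fit(selector.transform(X[train]), y[train])
        pred = clf.predict(selector.transform(X[test]))
        scores.append(np.mean(pred == y[test]))
    print(np.mean(scores))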
…an a-priori? Or are there other
(scikit-learn supported) CV methods?
Thank you very much again,
Best,
Fabrizio
On Jan 15, 2016, at 7:44 PM, Andreas Mueller wrote:
>
> On 01/15/2016 01:16 PM, Fabrizio Fasano wrote:
Dear community,
I would like to use an ANOVA + SVM pipeline to check two-group classification
performance on neuroimaging datasets.
My questions are:
1) In the pipeline approach implemented by scikit-learn
(http://scikit-learn.org/stable/auto_examples/svm/plot_svm_anova.html), is the
cross-validation…
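For reference, a minimal sketch of such a pipeline, with the ANOVA feature
selection nested inside cross-validation so it is refit on each training fold.
The synthetic data and all parameter values are placeholders, and the code
uses the current sklearn.model_selection API:

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectPercentile, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC

    # synthetic stand-in for the neuroimaging data (placeholder only)
    X, y = make_classification(n_samples=40, n_features=112, random_state=0)

    # ANOVA F-test feature selection followed by a linear SVM
    anova_svm = Pipeline([
        ("anova", SelectPercentile(f_classif, percentile=10)),
        ("svc", SVC(kernel="linear")),
    ])

    # the selector is refit on each training fold, so the held-out fold
    # never influences which features are kept
    scores = cross_val_score(anova_svm, X, y, cv=5)
    print(scores.mean())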
Dear community,
I was wondering if it is possible to combine permutation_test_score with
LeaveOneOut cross-validation.
My code:

    >>> loo = cross_validation.LeaveOneOut(len(age))
    >>> score, permutation_scores, pvalue = cross_validation.permutation_test_score(
    ...     svr_rbf, ALL, age, scoring='mea…
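For what it's worth, permutation_test_score accepts any CV splitter, including
LeaveOneOut. A minimal sketch with placeholder data, written against the
current model_selection API (the post uses the older cross_validation module),
and with an assumed scoring string since the original is cut off:

    import numpy as np
    from sklearn.model_selection import LeaveOneOut, permutation_test_score
    from sklearn.svm import SVR

    rng = np.random.RandomState(0)
    X = rng.rand(16, 5)                   # placeholder features
    age = rng.rand(16) * 50 + 20          # placeholder regression target

    svr_rbf = SVR(kernel="rbf")
    # each test fold holds a single sample, so a per-fold score such as
    # mean absolute error is well defined (unlike, e.g., R^2)
    score, permutation_scores, pvalue = permutation_test_score(
        svr_rbf, X, age, cv=LeaveOneOut(),
        scoring="neg_mean_absolute_error", n_permutations=100)
    print(score, pvalue)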
> …mean
> and sdev on the training set and standardize the test set using those
> estimated values. If this method worsens your results, there may be an
> unaccounted-for trend in your data.
>
> Michael
>
>
> On Thu, Apr 30, 2015 at 10:32 AM, Fabrizio Fasano wrote:
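A minimal sketch of what Michael describes: estimate the mean and standard
deviation on the training data only, then standardize the test data with those
same estimates. All names and shapes here are placeholders:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.RandomState(0)
    X = rng.rand(16, 112)                 # placeholder features
    y = np.array([0, 1] * 8)              # placeholder balanced labels

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)

    # mean and sdev come from the training set only
    scaler = StandardScaler().fit(X_train)
    clf = SVC(kernel="linear").fit(scaler.transform(X_train), y_train)
    # the test set is standardized with the training-set estimates
    print(clf.score(scaler.transform(X_test), y_test))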
> …not valid as that includes
> the test data.
>
> It is hard to say whether 100% is believable or not, but you should
> probably only take scaling over training data.
>
> On Wed, Apr 29, 2015 at 11:13 AM, Fabrizio Fasano wrote:
>> Dear experts,
>>
>> I’m experiencing a dramatic improvement in cross-validation when data are
>> standardised
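One way to guarantee that scaling only ever sees the training folds is to put
the scaler inside a Pipeline, so cross-validation refits it per fold. A
minimal sketch with placeholder data:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.RandomState(0)
    X = rng.rand(16, 112)                 # placeholder features
    y = np.array([0, 1] * 8)              # placeholder balanced labels

    pipe = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    # the scaler is refit on each training fold; test folds never leak in
    print(cross_val_score(pipe, X, y, cv=4).mean())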
> …implementations.
>
> Best,
> Sebastian
>
>> On Apr 29, 2015, at 11:13 AM, Fabrizio Fasano wrote:
Dear experts,
I’m experiencing a dramatic improvement in cross-validation when data are
standardised.
I mean that accuracy increased from 48% to 100% when I switched from X to
X_scaled = preprocessing.scale(X).
Does it make sense in your opinion?
Thank you a lot for any suggestion,
Fabrizio
my code: …
> …that is where that comes from.
> If you repeat over different assignments, you will get 50/50.
>
> On 04/27/2015 11:33 AM, Fabrizio Fasano wrote:
>> Dear Andy,
>>
>> Yes, the classes have the same size, 8 and 8
>>
>> this is one example of the code I used to cr…
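A quick sketch of the sanity check Andy describes: with featureless random
data and repeated random label assignments, cross-validated accuracy should
average out around chance (about 50%). Everything here is placeholder data:

    import numpy as np
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.RandomState(0)
    X = rng.rand(16, 112)                 # placeholder features, no real signal
    accs = []
    for _ in range(50):                   # repeat over different assignments
        y = rng.permutation([0, 1] * 8)   # balanced random labels, 8 and 8
        accs.append(cross_val_score(SVC(kernel="linear"), X, y,
                                    cv=StratifiedKFold(n_splits=4)).mean())
    print(np.mean(accs))                  # hovers around 0.5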
> …tell.
> Are you sure the classes have the same size?
>
> On 04/26/2015 11:22 AM, Fabrizio Fasano wrote:
>> …
Hi Sebastian,
Thank you for your answer,
What I mean is that by using a cross-validation test I get 100% accuracy (on
the testing set, not on the training set).
It seemed too good a result, so I changed the y labels (I mean, I
replaced the true labels with false ones) to check that, a…
Dear Andreas,
Thanks a lot for your help,
About the random assignment of values to my labels y: what I mean is that,
being suspicious about the too-good performance, I changed the labels manually,
retaining the 50/50 split of 1s and 0s but in different orders, and the labels
were always predicted very well, wi…
Dear community,
I'm performing a binary classification on a very small data set:
details:
- binary classification (y = 0, 1)
- small dataset (16 samples)
- large feature set (112 features)
- balanced labels (y=0 and y=1 occur 8 times each)
- linear SVM classifier.
Accuracy was 100% when tested on the test set…
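Given the setup above (16 samples, 112 features), a permutation test is one
way to check whether a perfect cross-validated score is meaningful. A minimal
sketch with placeholder data standing in for the real 16 x 112 matrix, and
scaling nested inside the pipeline so it never sees the test folds:

    import numpy as np
    from sklearn.model_selection import StratifiedKFold, permutation_test_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.RandomState(0)
    X = rng.rand(16, 112)                 # placeholder for the real data
    y = np.array([0, 1] * 8)              # balanced labels, 8 and 8

    pipe = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    score, perm_scores, pvalue = permutation_test_score(
        pipe, X, y, cv=StratifiedKFold(n_splits=4), n_permutations=200,
        random_state=0)
    # a high score with a large p-value suggests the "perfect" accuracy
    # is not distinguishable from chance on so few samples
    print(score, pvalue)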