Thanks Gaël, and a big thank-you to the entire dev team!
I'm responding rather late, as I have just finished my PhD and moved from
Vancouver, Canada to Melbourne, Australia.
It has been a huge pleasure working with you all over the past few years.
Thank you for welcoming me as a new core contributor.
Hi Andy,
Thanks for your help. Is there something in the scikit-learn documentation (or
any other resource) that explains why the kernel matrix at test time needs to
be the kernel between the test data and the training data? I am quite new to
machine learning. What is the reason we do this?
Hey Guys,
While working on a temporary fix for this dimension issue for my package
(PyMKS), I found that the mse metric from sklearn.metrics has changed since the
summer and now requires the same dimension check. Was this also fixed by #3987
(https://github.com/scikit-learn/scikit-learn/pull/3987)?
I am a bit confused about why your code doesn't crash on the call to the
scaler.
What is the shape of train_gram_matrix and test_gram_matrix?
On 01/06/2015 12:27 PM, Morgan Hoffman wrote:
Hi,
I am trying to do a k-fold cross validation with a precomputed kernel.
However, I end up with an error message.
The kernel matrix at test time needs to be the kernel between the test
data and the training data.
Which I guess is not what get_gram_matrix does.
Why are you applying the MinMaxScaler to the gram matrix? I'm not sure
that makes sense...
Without the scaler you could just do
print(cross_val_score(...))
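(For concreteness, here is a minimal sketch of the kind of call that truncated suggestion points at, assuming an SVC with a precomputed kernel; the RBF kernel, the random data, and the names X/labels are placeholders, not the code from the original post. If I remember correctly, cross_val_score can be given the full square kernel matrix and will slice rows and columns per fold for a precomputed-kernel estimator.)

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score  # sklearn.cross_validation in older releases

    # Toy stand-in data (the real code would build K from its own samples).
    rng = np.random.RandomState(0)
    X = rng.rand(100, 5)
    labels = rng.randint(0, 2, size=100)

    # One square kernel matrix over all samples; the CV machinery slices
    # K[test][:, train] itself for a precomputed-kernel estimator.
    K = rbf_kernel(X, X)
    clf = SVC(kernel='precomputed')
    print(cross_val_score(clf, K, labels, cv=5))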
Hi,
I am trying to do a k-fold cross validation with a precomputed kernel. However,
I end up with an error message that looks like this:
Traceback (most recent call last):
  File "kfold_simple_data.py", line 64, in
    score = clf.score(test_gram_matrix, test_labels)
  File "/usr/local/lib/python2
Hi Timothy.
Without seeing the actual code, it is hard to guess what is happening.
Often there is some slight error in the processing that results in
strange outcomes.
Firstly, you could perform the splitting and scoring using the
cross_val_score function and a RandomizedSplitCV with ju
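(A rough sketch of that kind of sanity check is below. I am assuming ShuffleSplit is the splitter meant here, and a default SVC on random placeholder data stands in for the actual model and features; treat it as illustration only.)

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score, ShuffleSplit

    rng = np.random.RandomState(0)
    X = rng.rand(200, 50)                 # placeholder for the high-dimensional data
    y = rng.randint(0, 2, size=200)       # placeholder labels

    # Ten random train/test splits, scored with the same estimator each time.
    cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
    scores = cross_val_score(SVC(), X, y, cv=cv)
    print(scores.mean(), scores.std())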
I am getting some confusing results when carrying out a permutation exercise
on my data using SVC from the sklearn.svm module. The
data I am using is quite large and has very high dimensionality, but I will try
to explain it briefly.
The dataset represents risk scores for
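(The message is cut off here, but for what it's worth, a permutation exercise of this kind is often written with permutation_test_score, which shuffles the labels and compares the real cross-validated score against the permuted ones. The sketch below uses random placeholder data and a default SVC; it is only a guess at the shape of the actual analysis.)

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import permutation_test_score

    rng = np.random.RandomState(0)
    X = rng.rand(100, 500)               # placeholder: many features, few samples
    y = rng.randint(0, 2, size=100)      # placeholder binary labels

    # Score with the true labels, then with 100 label permutations.
    score, perm_scores, pvalue = permutation_test_score(
        SVC(), X, y, cv=5, n_permutations=100, random_state=0)
    print(score, pvalue)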