Dear Luca,
If I understand correctly, your approach is deflationary PCA that uses
the l1 proximal operator to enforce sparsity.
I am not sure how this compares to the LARS-based implementation in the
scikit (the non-convexity of the problem makes it hard to compare the
two algorithms).
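For concreteness, here is how I read that scheme; a hypothetical sketch
only (the function name, lam, and the stopping rule are my own choices),
essentially a soft-thresholded power iteration with projection deflation:

import numpy as np

def soft_threshold(z, lam):
    # l1 proximal operator: shrink each coordinate toward zero by lam.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_pca_deflation(X, n_components=2, lam=0.1, n_iter=200):
    X = X - X.mean(axis=0)              # center once
    components = []
    for _ in range(n_components):
        v = np.random.randn(X.shape[1])
        v /= np.linalg.norm(v)
        for _ in range(n_iter):
            v = soft_threshold(X.T @ (X @ v), lam)
            norm = np.linalg.norm(v)
            if norm == 0.0:             # lam too aggressive: all zeros
                break
            v /= norm
        components.append(v)
        X = X - np.outer(X @ v, v)      # projection deflation
    return np.array(components)

Is that roughly what you are doing?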
Moreover, I have run you
Hello everyone,
I have blogged about my progress this week,
http://manojbits.wordpress.com/2014/05/31/gsoc-2nd-week-roadblocks-and-progress/
On Fri, May 23, 2014 at 6:00 PM, Olivier Grisel
wrote:
> Thanks Manoj!
>
> BTW, if you use the Rackspace Cloud account for your next benchmarking
> session
On Sat, May 31, 2014 at 9:22 PM, Mathieu Blondel
wrote:
> K_test = pairwise_kernels(X_train, X_test, metric="sigmoid")
>
This line should read:
K_test = pairwise_kernels(X_test, X_train, metric="sigmoid")
(the test kernel matrix must have shape (n_test_samples,
n_train_samples): rows indexed by the test samples, columns by the
training samples the model was fitted on).
Mathieu
You can always train a LinearSVC directly on the kernel matrix. This won't
be exactly the same as a kernel SVC [*], but it doesn't make any PSD
assumption.
from sklearn.metrics.pairwise import pairwise_kernels
from sklearn.svm import LinearSVC

K_train = pairwise_kernels(X_train, metric="sigmoid")
clf = LinearSVC()
clf.fit(K_train, y_train)
K_test = pairwise_kernels(X_train, X_test, metric="sigmoid")
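From there, prediction would presumably use the test kernel matrix,
along the lines of (my guess, not part of the original snippet):

y_pred = clf.predict(K_test)  # rows of K_test index the test samples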
On Sat, May 31, 2014 at 06:15:44PM +0800, Benjamin Li wrote:
> In [1], Lin suggests an implementation of SVM for non-PSD kernels.
> So my question is: does scikit-learn handle this case?
No.
Gaël
Dear Nelle,
Thanks for your reply.
I do understand that the majority of kernels are PSD.
Yet I am currently dealing with non-PSD kernels, such as the sigmoid
kernel in [1] and the optimal assignment kernel in [2, 3].
In [1], Lin suggests an implementation of SVM for non-PSD kernels.
So my question is: does scikit-learn handle this case?
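For what it's worth, the indefiniteness is easy to check numerically. A
small sketch (the gamma/coef0 values and the random data are arbitrary
choices of mine):

import numpy as np
from sklearn.metrics.pairwise import pairwise_kernels

rng = np.random.RandomState(0)
X = rng.randn(50, 5)

# The sigmoid (tanh) kernel is not PSD in general; for many
# gamma/coef0 settings its Gram matrix has negative eigenvalues.
K = pairwise_kernels(X, metric="sigmoid", gamma=1.0, coef0=-1.0)
print("smallest eigenvalue:", np.linalg.eigvalsh(K).min())  # typically < 0 here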
Hi Gilles,
On 23 May 2014 15:06, Gilles Louppe wrote:
> Hi Tim,
>
> In principle, what you describe corresponds exactly to the decision tree
> algorithm. You partition the input space into smaller subspaces, on which
> you recursively build sub-decision trees.
>
Exactly. What I was wondering wa
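For reference, a minimal sketch of that recursion (hypothetical: the
dict representation, the misclassification-based split score, and the
depth cutoff are my own simplifications, not what the scikit's trees do
internally):

import numpy as np

def impurity(y):
    # Fraction of labels disagreeing with the majority label.
    _, counts = np.unique(y, return_counts=True)
    return 1.0 - counts.max() / len(y)

def build_tree(X, y, depth=0, max_depth=3):
    values, counts = np.unique(y, return_counts=True)
    # Stop on pure nodes or when the depth budget is spent.
    if depth == max_depth or len(values) == 1:
        return {"leaf": values[counts.argmax()]}
    best = None
    for j in range(X.shape[1]):            # candidate feature
        for t in np.unique(X[:, j])[:-1]:  # candidate threshold
            mask = X[:, j] <= t
            score = (mask.sum() * impurity(y[mask])
                     + (~mask).sum() * impurity(y[~mask]))
            if best is None or score < best[0]:
                best = (score, j, t, mask)
    if best is None:                       # no valid split left
        return {"leaf": values[counts.argmax()]}
    _, j, t, mask = best
    # Recurse: each side of the partition gets its own sub-tree.
    return {"feature": j, "threshold": t,
            "left": build_tree(X[mask], y[mask], depth + 1, max_depth),
            "right": build_tree(X[~mask], y[~mask], depth + 1, max_depth)}

def predict(tree, x):
    while "leaf" not in tree:
        side = "left" if x[tree["feature"]] <= tree["threshold"] else "right"
        tree = tree[side]
    return tree["leaf"]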