Hi:
There is a new "Sparse- and low-rank approximation wiki", created by
Stephen Becker:
http://ugcs.caltech.edu/~srbecker/wiki/Main_Page
I added a link to the scikit-learn OMP solver:
http://ugcs.caltech.edu/~srbecker/wiki/Category:Greedy_Solvers
And created a scikit-learn page:
http://ugcs.
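For anyone landing here from the wiki, a minimal sketch of what the scikit-learn OMP solver does (using `sklearn.linear_model.OrthogonalMatchingPursuit`; the dictionary and sparsity level below are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Build a signal that is an exact 3-atom combination of dictionary columns.
rng = np.random.RandomState(0)
D = rng.randn(50, 100)           # dictionary: 100 atoms in 50 dimensions
D /= np.linalg.norm(D, axis=0)   # normalize the atoms
true_coef = np.zeros(100)
true_coef[[5, 20, 70]] = [1.5, -2.0, 0.8]
y = D @ true_coef

# OMP greedily selects atoms until the requested sparsity is reached.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3)
omp.fit(D, y)
support = np.flatnonzero(omp.coef_)
print(support)  # with this setup, OMP typically recovers the true support
```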
Hi:
Some searches on the website return results that point to old versions
of scikit-learn. For instance, if I search for "expectation maximization"

I get
http://scikit-learn.org/0.5/modules/gmm.html
http://scikit-learn.org/0.5/modules/generated/scikits.learn.gmm.GMM.html
Note that it points to vers
On Sun, Jan 15, 2012 at 07:39:00PM +0100, Philipp Singer wrote:
> The problem is that my representation is very sparse so I have a huge
> amount of zeros.
That's actually good: some of our estimators are able to use a sparse
representation to speed up computation.
> Furthermore the dataset is ske
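To illustrate the point about sparse representations: several scikit-learn estimators accept a `scipy.sparse` matrix directly, so only the nonzeros are stored and processed. A minimal sketch (the estimator choice, `SGDClassifier`, and the toy data are my assumptions, not from the original thread):

```python
import numpy as np
import scipy.sparse as sp
from sklearn.linear_model import SGDClassifier

# A mostly-zero feature matrix stored in CSR format: only the nonzero
# entries are kept, and sparse-aware estimators exploit that directly.
rng = np.random.RandomState(0)
X_dense = rng.rand(200, 5000)
X_dense[X_dense < 0.99] = 0.0      # roughly 99% zeros
X = sp.csr_matrix(X_dense)
y = rng.randint(0, 2, size=200)

clf = SGDClassifier(random_state=0)
clf.fit(X, y)                      # no densification needed
print(clf.score(X, y))
```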
Hey guys!
I am currently trying to use the best possible classifier for my task.
In my case I regularly have slightly more features than training
examples, and about 5000 features overall. The problem is that my
representation is very sparse so I have a huge amount of zeros. The
labels range fr
sklearn.svm.SVC has a parameter probability, which enables
'probability' estimates.
How are these estimates calculated? (Percentage correct during
training? Similarity to the training data?)
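As far as I understand from the underlying libsvm documentation, the estimates come from Platt scaling: a sigmoid is fit to the SVM's decision values using an internal cross-validation, rather than from training accuracy. A minimal sketch of using them (the toy dataset here is just for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=10, random_state=0)

# probability=True triggers the extra Platt-scaling fit at training time.
clf = SVC(probability=True, random_state=0)
clf.fit(X, y)

proba = clf.predict_proba(X[:5])
print(proba)  # one row per sample; each row sums to 1
# Note: these calibrated probabilities can occasionally disagree with
# clf.predict(), which uses the raw decision function instead.
```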
Thanks,
Steven