Hi,
It was a cosmetic variant: 'weights' and 'coefficients' mean the same thing
for linear models. And as the iterations run over a logarithmic grid of
regularization values, the x axis was correct up to a scaling.
Nonetheless, I fixed the example to be less confusing:
https://github.com/scikit-learn/scikit-learn/commit/f924d3e46
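For reference, a minimal sketch of how such a coefficient path is computed with `lasso_path` (the toy data and parameters here are illustrative, not taken from the example):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lasso_path

# toy regression problem (illustrative, not the example's data)
X, y = make_regression(n_samples=50, n_features=5, random_state=0)

# the alphas are generated on a logarithmic grid, which is why the
# x axis is only meaningful up to a scaling / log transform
alphas, coefs, _ = lasso_path(X, y)

# coefs has one row per feature: these are the "weights"/"coefficients"
print(coefs.shape)  # (n_features, n_alphas)
```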
I'm looking at the example page for Lasso and ElasticNet (
http://scikit-learn.org/0.11/auto_examples/linear_model/plot_lasso_coordinate_descent_path.html
). The code is clearly plotting coefficients, but the ylabel says "weights".
Similarly, the xlabel is "-log(Lambda)", but it looks like it
It looks like an out-of-date Cython-generated file, maybe. Did you
install using the windows binaries? It would indeed be helpful to
describe the installation procedure that you followed.
Vlad
On Mon, Apr 29, 2013 at 6:58 PM, Andreas Mueller
wrote:
> Hey Chandrika.
> Sorry for the late reply.
>
> Why don't you use a kernel SVM (SVC)?
> There is no kernel Logistic Regression in sklearn. But there are some
> kernel-approximation
> methods that you could use together with various kernels and then use
> the standard LogisticRegression.
I don't know how to combine these methods, so I will hav
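To make the combination concrete, here is one way it could look: a kernel-approximation transformer feeding a plain LogisticRegression in a pipeline (a sketch only; the choice of Nystroem and the toy data are mine, not from the thread):

```python
from sklearn.datasets import make_circles
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy data that is not linearly separable in the input space
X, y = make_circles(n_samples=200, noise=0.05, factor=0.5, random_state=0)

# approximate an RBF kernel feature map, then fit a standard
# (linear) LogisticRegression in the approximated feature space
clf = make_pipeline(
    Nystroem(kernel="rbf", gamma=1.0, n_components=100, random_state=0),
    LogisticRegression(),
)
clf.fit(X, y)
```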
On Mon, Apr 29, 2013 at 10:53:21AM +0200, Andreas Mueller wrote:
> > You might try totally random trees embedding for this purpose:
> > http://scikit-learn.org/stable/modules/ensemble.html#totally-random-trees-embedding
> > and
> > http://scikit-learn.org/stable/auto_examples/ensemble/plot_random_f
>> So is there any method within scikit, that could help me finding a
>> feature mapping?
>
> I am not sure what you mean by feature mapping? Do you mean a nonlinear
> mapping to a feature space in which the classes should be
> separable?
Yes, sorry for being imprecise.
> You might try tot
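One concrete option along these lines: RandomTreesEmbedding as the (unsupervised, nonlinear) feature mapping, followed by a linear classifier (a sketch; the dataset and parameters are illustrative):

```python
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomTreesEmbedding
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)

# unsupervised mapping into a sparse, high-dimensional binary space
# (one indicator per leaf), then a linear model in that space
clf = make_pipeline(
    RandomTreesEmbedding(n_estimators=50, random_state=0),
    LogisticRegression(),
)
clf.fit(X, y)
```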
I know there are a bunch of PhD candidates on the mailing list so I thought
I would share this:
-
The 30th International Conference on Machine Learning (ICML 2013) seeks
student vol
Hi Gael,
> followed by Cheng and Church probably.
I agree that including an algorithm that optimizes mean-square residue
would be useful. Another option would be to implement FLOC (Yang 2003; 223
citations), which enhances Cheng and Church in a number of ways: it is faster,
it finds multiple biclust
Thank you Andreas!
On Sat, Apr 27, 2013 at 2:03 PM, Andreas Mueller
wrote:
> Hi Youssef.
> I would strongly advise you to use an image-specific random forest
> implementation.
> There is a very good implementation by some other MSRC people:
>
> http://research.microsoft.com/en-us/downloads/03e0c
On Mon, Apr 29, 2013 at 09:20:24AM +0200, Kemal Eren wrote:
> The Spectral coclustering algorithm from 2001 with 888 citations is a very
> similar model to the Kluger paper from 2003, which applied the same concepts
> to
> microarray data. I originally cited the Kluger paper only because it is mor
I can't find the code you quoted.
Which example is this?
There is this affinity propagation example:
http://scikit-learn.org/stable/auto_examples/cluster/plot_affinity_propagation.html#example-cluster-plot-affinity-propagation-py
which doesn't contain the lines.
On 04/28/2013 12:05 AM, ap wrote:
Using 0.13.1 example code I found the following difference in the Affinity
call, and I am proposing a possible fix/change for that example program
below.
WAS: (throws warnings)
af = AffinityPropagation().fit(S, p)
Traceback (most recent call last):
File "E:\p\cr\rho\plot_affinity_propag
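For what it's worth, in current scikit-learn versions the preference and the precomputed-similarity option are constructor parameters rather than arguments to `fit`; a minimal sketch (the toy data here is mine, not from the report):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# toy 1-D points and a precomputed similarity matrix
# (negative squared distances, as is conventional for this estimator)
X = np.array([[0.0], [0.1], [5.0], [5.1]])
S = -np.square(X - X.T)
p = np.median(S)

# preference and affinity go to the constructor, not to fit()
af = AffinityPropagation(affinity="precomputed", preference=p).fit(S)
print(af.labels_)
```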
Hey Chandrika.
Sorry for the late reply.
These errors look pretty weird. Which version have you installed and how
did you run the tests?
Cheers,
Andy
On 02/24/2011 05:05 PM, Chandrika Bhardwaj wrote:
Hey all,
I wanted to use scikits. After completing the installation of scikits, I
tried testin
Thanks for the feedback.
Actually, the implementation and testing of association rule learning can be
finished sooner than I thought, two weeks in total, because of its
simplicity, so the updated schedule would be:
• Getting familiar with scikit-learn, API structure etc. (1 week)
• Gene
On 04/28/2013 08:06 PM, Richard Cubek wrote:
> Hello everyone,
>
> 2) Playing around with LR, the results "look interesting"
> (https://dl.dropboxusercontent.com/u/95888530/logreg_1.png), but I was
> not able to reproduce a model adopting/"overfitting" to every single
> data point, as in the SVM ex
On 04/28/2013 11:19 PM, Gael Varoquaux wrote:
> On Sun, Apr 28, 2013 at 08:06:11PM +0200, Richard Cubek wrote:
>> how stable the python binding is regarding the website issue mentioned
>> above.
> Fairly stable, I would say. The remarks applied years ago.
>
>> So is there any method within scikit, th
On Mon, Apr 29, 2013 at 01:28:09AM +0300, Şükrü Bezen wrote:
> • Getting familiar with scikit-learn, API structure etc. (1 week)
> • Generating, finding datasets for future use. (1-3 days)
> • Implementing association rule learning, (1 week)
> • Testing, documenting (1 week)
> • Implement
Hi all,
Thanks for your comments. I have made the suggested revisions to my
proposal. A few comments and questions:
Since nsNMF is out, there is still some time available. Any other
algorithms that you would be interested in?
The Spectral coclustering algorithm from 2001 with 888 citations is a