Re: [Scikit-learn-general] Contributing to scikit-learn

2012-06-06 Thread xinfan meng
The deep learning literature says that the more layers you have, the fewer hidden nodes per layer you need. But I agree one hidden layer would be sufficient for now. On Thu, Jun 7, 2012 at 11:12 AM, David Warde-Farley < warde...@iro.umontreal.ca> wrote: > On 2012-06-05, at 1:51 PM, David Marek wrote:

Re: [Scikit-learn-general] Contributing to scikit-learn

2012-06-06 Thread David Warde-Farley
On 2012-06-05, at 1:51 PM, David Marek wrote: > 1) Afaik all you need is one hidden layer. The universal approximation theorem says that any continuous function can be approximated arbitrarily well if you have one hidden layer with enough hidden units, but it says nothing about the ease of find

Re: [Scikit-learn-general] Error function in the output layer of MLP

2012-06-06 Thread xinfan meng
Thank you. I see the differences now. Your explanation should be put into the MLP docs :-) On Thu, Jun 7, 2012 at 2:27 AM, David Warde-Farley < warde...@iro.umontreal.ca> wrote: > On Wed, Jun 06, 2012 at 04:38:16PM +0800, xinfan meng wrote: > > Hi, all. I post this question to the list, since it

Re: [Scikit-learn-general] penalty factor with misclassification of positive class?

2012-06-06 Thread LI Wei
It seems you want to train with different per-class costs? Maybe you can set a different class_weight in SVC training. You may refer to http://scikit-learn.org/stable/modules/svm.html for unbalanced class training. You should tune the class weight and the penalty factor together to satisfy y
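The suggestion above can be sketched in code. This is a minimal illustration on toy data of my own (not from the thread), assuming scikit-learn is installed; the weight value 10 and C=1.0 are arbitrary choices one would tune, e.g. with a grid search:

```python
# Sketch: penalize misclassification of the positive class more heavily
# by giving it a larger class_weight in SVC. Data and parameter values
# are illustrative assumptions, not taken from the original thread.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Imbalanced toy data: 90 negatives around (-1, -1), 10 positives around (+1, +1)
X = np.r_[rng.randn(90, 2) - 1, rng.randn(10, 2) + 1]
y = np.r_[np.zeros(90), np.ones(10)]

# Errors on class 1 cost 10x as much as errors on class 0.
# C is the overall penalty factor; tune it together with the class weights.
clf = SVC(kernel="linear", C=1.0, class_weight={1: 10.0})
clf.fit(X, y)

print(clf.predict([[1.0, 1.0]]))
```

Without the class_weight, the minority positive class tends to be swallowed by the majority; the per-class weight rescales C for that class so its errors count more in the optimization.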

Re: [Scikit-learn-general] Error function in the output layer of MLP

2012-06-06 Thread David Warde-Farley
On Wed, Jun 06, 2012 at 04:38:16PM +0800, xinfan meng wrote: > Hi, all. I post this question to the list, since it might be related to the > MLP being developed. > > I found two versions of the error function for output layer of MLP are used > in the literature. > > >1. \delta_o = (y-a) f'(z

Re: [Scikit-learn-general] Scipy 2012 Austin Sprint?

2012-06-06 Thread Olivier Grisel
2012/6/6 Jacob VanderPlas : > Alejandro, > Newbies are certainly welcome! > I'll get things organized for the sprint.  Try to find me during the > conference - I'll be giving a talk at the Astronomy mini-symposium.  We > can chat about how you can best get involved. Reading this is always a good p

Re: [Scikit-learn-general] Scipy 2012 Austin Sprint?

2012-06-06 Thread Jacob VanderPlas
Alejandro, Newbies are certainly welcome! I'll get things organized for the sprint. Try to find me during the conference - I'll be giving a talk at the Astronomy mini-symposium. We can chat about how you can best get involved. Jake Alejandro Weinstein wrote: > On Tue, Jun 5, 2012 at 4:36 PM

Re: [Scikit-learn-general] Scipy 2012 Austin Sprint?

2012-06-06 Thread Alejandro Weinstein
On Tue, Jun 5, 2012 at 4:36 PM, Jacob VanderPlas wrote: > Hi all, > Is there any interest to do a scikit-learn sprint at Scipy in Austin > next month?  I will be there, and I have a few ideas brewing that I'd > love to work on... > I'd be happy to be the contact person for the conference organizer

Re: [Scikit-learn-general] Scikit-learn-general Digest, Vol 29, Issue 13

2012-06-06 Thread 张鹏
Great, I want to meet you. On 2012-6-6 9:39 PM, wrote: > Send Scikit-learn-general mailing list submissions to >scikit-learn-general@lists.sourceforge.net > > To subscribe or unsubscribe via the World Wide Web, visit >https://lists.sourceforge.net/lists/listinfo/scikit-learn-general > o

Re: [Scikit-learn-general] Scipy 2012 Austin Sprint?

2012-06-06 Thread Vlad Niculae
On Jun 6, 2012, at 12:12 , Olivier Grisel wrote: > 2012/6/6 Vlad Niculae : >> I won't open a new thread, but is anybody planning to go to Europython 2012? >> I might get a sponsorship to attend and I was wondering what is the >> community overlap. > > I will go for the Europython conference on

Re: [Scikit-learn-general] Error function in the output layer of MLP

2012-06-06 Thread xinfan meng
Yes, I think your explanation is correct. Thanks. Those notation differences really confuse me, given that the MLP is much more complex than the Perceptron. :-( On Wed, Jun 6, 2012 at 8:59 PM, David Marek wrote: > > On Wed, Jun 6, 2012 at 1:50 PM, xinfan meng wrote: >> >> I think these two delta

Re: [Scikit-learn-general] Error function in the output layer of MLP

2012-06-06 Thread David Marek
On Wed, Jun 6, 2012 at 1:50 PM, xinfan meng wrote: > > I think these two delta_o have the same meaning. If you have "Pattern > Recognition and Machine Learning" by Bishop, you can find that Bishop use > exactly the second formula in the back propagation algorithm. I suspect > these two formulae le

Re: [Scikit-learn-general] In Beijing next week

2012-06-06 Thread xinfan meng
That would be cool! It would be my pleasure to meet you. And it would also be great to meet other sklearn users in Beijing. On Wed, Jun 6, 2012 at 7:59 PM, Gael Varoquaux < gael.varoqu...@normalesup.org> wrote: > Hey, > > Seeing a mail from Xinfan, I just realized that we have a few Chinese > cont

Re: [Scikit-learn-general] In Beijing next week

2012-06-06 Thread LI Wei
It is such a pity that I am leaving Beijing for NYC on 9 June. Maybe we will meet at the airport by chance :-) However, I am not a contributor yet but am looking for a chance to contribute :-( LI, Wei On Wed, Jun 6, 2012 at 11:59 AM, Gael Varoquaux < gael.varoqu...@normalesup.org> wrote: > Hey, > > S

[Scikit-learn-general] In Beijing next week

2012-06-06 Thread Gael Varoquaux
Hey, Seeing a mail from Xinfan, I just realized that we have a few Chinese contributors (and maybe users that I don't know about) that I'd love to meet. I am in Beijing next week for a conference. I am arriving on the 9th and leaving on the 15th, although I'll have a busy schedule in the mean tim

Re: [Scikit-learn-general] Error function in the output layer of MLP

2012-06-06 Thread xinfan meng
Thanks for your reply. I think these two delta_o have the same meaning. If you have "Pattern Recognition and Machine Learning" by Bishop, you can find that Bishop uses exactly the second formula in the back-propagation algorithm. I suspect these two formulae lead to the same update iterations, but

Re: [Scikit-learn-general] Error function in the output layer of MLP

2012-06-06 Thread David Marek
Hi On Wed, Jun 6, 2012 at 10:38 AM, xinfan meng wrote: > Hi, all. I post this question to the list, since it might be related to > the MLP being developed. > > I found two versions of the error function for output layer of MLP are > used in the literature. > > >1. \delta_o = (y-a) f'(z) >

Re: [Scikit-learn-general] Scipy 2012 Austin Sprint?

2012-06-06 Thread Olivier Grisel
2012/6/6 Vlad Niculae : > I won't open a new thread, but is anybody planning to go to Europython 2012? > I might get a sponsorship to attend and I was wondering what is the community > overlap. I will go for the Europython conference on the weekend and might extend for an additional day or two a

Re: [Scikit-learn-general] Scipy 2012 Austin Sprint?

2012-06-06 Thread Vlad Niculae
I won't open a new thread, but is anybody planning to go to Europython 2012? I might get a sponsorship to attend and I was wondering what is the community overlap. Best, Vlad On Jun 6, 2012, at 09:38 , Fernando Perez wrote: > On Tue, Jun 5, 2012 at 9:52 PM, Gael Varoquaux > wrote: >> I won't

[Scikit-learn-general] Error function in the output layer of MLP

2012-06-06 Thread xinfan meng
Hi, all. I'm posting this question to the list, since it might be related to the MLP being developed. I found that two versions of the error function for the output layer of an MLP are used in the literature. 1. \delta_o = (y-a) f'(z) http://ufldl.stanford.edu/wiki/index.php/Backpropagation_Algorithm 2.
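The difference between the two deltas discussed in this thread can be shown numerically. A common resolution (hedged: this is my reading, not a quote from the replies) is that version 1, (y-a) f'(z), arises from a squared-error loss, while the second version, (y-a), arises from a cross-entropy loss with a sigmoid output, where the f'(z) factor cancels analytically. A small NumPy sketch with made-up values:

```python
# Sketch of the two output-layer deltas; all values are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-1.0, 0.5, 2.0])    # pre-activations at the output layer
a = sigmoid(z)                    # output activations, a = f(z)
y = np.array([0.0, 1.0, 1.0])     # targets

# Version 1: squared-error loss keeps the derivative factor f'(z),
# which for a sigmoid is a * (1 - a).
delta_sq = (y - a) * a * (1 - a)

# Version 2: cross-entropy loss with a sigmoid output; the f'(z)
# factor cancels, leaving just the residual (y - a).
delta_ce = y - a

print(delta_sq)
print(delta_ce)
```

The two deltas differ exactly by the factor a(1-a), so in general they do not produce identical weight updates; the cross-entropy delta avoids the vanishing a(1-a) factor when the output saturates.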

Re: [Scikit-learn-general] SVM: link between predict_proba() and predict() functions

2012-06-06 Thread Emeline Landemaine
Thanks! 2012/6/6 Alexandre Gramfort > hi Emeline, > > svc.predict and svc.predict_proba > 0.5 may not match > > the predict_proba uses a recalibration using Platt's method. > > Alex > > On Wed, Jun 6, 2012 at 9:42 AM, Emeline Landemaine > wrote: > > Hey! > > > > I'm training a SVM and would lik

Re: [Scikit-learn-general] SVM: link between predict_proba() and predict() functions

2012-06-06 Thread Alexandre Gramfort
hi Emeline, svc.predict and svc.predict_proba > 0.5 may not match; predict_proba uses a recalibration based on Platt's method. Alex On Wed, Jun 6, 2012 at 9:42 AM, Emeline Landemaine wrote: > Hey! > > I'm training a SVM and would like to use predict_proba in order to know with > which confiden
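The mismatch described above can be observed directly. This is a minimal sketch on toy data of my own (not from the thread), assuming scikit-learn is installed: predict uses the SVM decision function, while predict_proba comes from a separately fitted Platt-scaling model, so thresholding the probability at 0.5 is not guaranteed to reproduce predict:

```python
# Sketch: compare SVC's hard predictions against thresholded probabilities.
# Data is a made-up illustration, not from the original question.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(100, 2)
y = (X[:, 0] + 0.3 * rng.randn(100) > 0).astype(int)

# probability=True fits an internal Platt-scaling calibration
clf = SVC(kernel="linear", probability=True, random_state=0)
clf.fit(X, y)

labels_from_predict = clf.predict(X)
labels_from_proba = (clf.predict_proba(X)[:, 1] > 0.5).astype(int)

# The two label sets usually agree on most points, but because the
# probabilities come from a separate calibration fit, nothing
# guarantees they agree everywhere, especially near the boundary.
print(np.mean(labels_from_predict == labels_from_proba))
```

So a picture with predict_proba around 0.3-0.35 for the positive class can still occasionally be labeled positive by predict (or vice versa near 0.5); the decision function and the calibrated probability are two different models of the same classifier.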

[Scikit-learn-general] SVM: link beween predict_proba() and predict() functions

2012-06-06 Thread Emeline Landemaine
Hey! I'm training a SVM and would like to use predict_proba in order to know with which confidence a label is given. If predict_proba gives more than 50% for a picture to be in-class, it would mean that the picture will be classified as such. However, some pictures that have a percentage of 30-35%