The deep learning literature says that the more layers you have, the fewer
hidden nodes you need in each layer. But I agree that one hidden layer would
be sufficient for now.
On Thu, Jun 7, 2012 at 11:12 AM, David Warde-Farley <
warde...@iro.umontreal.ca> wrote:
> On 2012-06-05, at 1:51 PM, David Marek wrote:
On 2012-06-05, at 1:51 PM, David Marek wrote:
> 1) Afaik all you need is one hidden layer,
The universal approximation theorem says that any continuous function can be
approximated arbitrarily well if you have one hidden layer with enough hidden
units, but it says nothing about the ease of finding such an approximation.
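As a rough, self-contained illustration of the "one hidden layer with enough
units" part (a toy numpy sketch, not anyone's proposed MLP implementation;
all names are made up):

    import numpy as np

    rng = np.random.RandomState(0)
    X = np.linspace(-3, 3, 200)[:, None]   # inputs
    y = np.sin(X).ravel()                  # a continuous target to approximate

    n_hidden = 50                          # "enough" hidden units
    W = rng.randn(1, n_hidden)             # random input-to-hidden weights
    b = rng.randn(n_hidden)                # hidden biases
    H = np.tanh(X.dot(W) + b)              # hidden layer activations

    # Fit only the hidden-to-output weights by least squares; with more
    # hidden units the approximation error can be driven arbitrarily low.
    w_out = np.linalg.lstsq(H, y, rcond=None)[0]
    print("max abs error:", np.abs(H.dot(w_out) - y).max())

Finding good weights for all the layers by gradient descent is the hard part
that the theorem says nothing about.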
Thank you. I see the differences now. Your explanation should be put into
the MLP docs :-)
On Thu, Jun 7, 2012 at 2:27 AM, David Warde-Farley <
warde...@iro.umontreal.ca> wrote:
> On Wed, Jun 06, 2012 at 04:38:16PM +0800, xinfan meng wrote:
> > Hi, all. I am posting this question to the list, since it
It seems you want to train with different class costs? Maybe you can try
setting different class_weight values when training the SVC.
You may refer to http://scikit-learn.org/stable/modules/svm.html for
unbalanced class training. You should tune both the class weight and the
penalty factor together to satisfy your requirements.
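A minimal sketch of that idea (toy data and made-up weights, just to show
which parameters are involved):

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.RandomState(0)
    X = rng.randn(200, 2)
    y = (X[:, 0] + X[:, 1] > 1.5).astype(int)   # class 1 is rare

    # class_weight rescales the penalty factor C per class, so here an
    # error on the rare class 1 costs ten times more than one on class 0.
    clf = SVC(kernel='rbf', C=1.0, class_weight={0: 1, 1: 10})
    clf.fit(X, y)

Since both C and the per-class weights move the same trade-off, they are
best tuned together, e.g. by grid search.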
On Wed, Jun 06, 2012 at 04:38:16PM +0800, xinfan meng wrote:
> Hi, all. I am posting this question to the list, since it might be related
> to the MLP being developed.
>
> I found that two versions of the error term for the output layer of an MLP
> are used in the literature.
>
> 1. \delta_o = (y - a) f'(z)
2012/6/6 Jacob VanderPlas :
> Alejandro,
> Newbies are certainly welcome!
> I'll get things organized for the sprint. Try to find me during the
> conference - I'll be giving a talk at the Astronomy mini-symposium. We
> can chat about how you can best get involved.
Reading this is always a good p
Alejandro,
Newbies are certainly welcome!
I'll get things organized for the sprint. Try to find me during the
conference - I'll be giving a talk at the Astronomy mini-symposium. We
can chat about how you can best get involved.
Jake
Alejandro Weinstein wrote:
> On Tue, Jun 5, 2012 at 4:36 PM
On Tue, Jun 5, 2012 at 4:36 PM, Jacob VanderPlas
wrote:
> Hi all,
> Is there any interest in doing a scikit-learn sprint at SciPy in Austin
> next month? I will be there, and I have a few ideas brewing that I'd
> love to work on...
> I'd be happy to be the contact person for the conference organizer
Great, I want to meet you.
On 2012-6-6 9:39 PM, wrote:
On Jun 6, 2012, at 12:12, Olivier Grisel wrote:
> 2012/6/6 Vlad Niculae :
>> I won't open a new thread, but is anybody planning to go to Europython 2012?
>> I might get a sponsorship to attend and I was wondering what the community
>> overlap is.
>
> I will go to the Europython conference on
Yes, I think your explanation is correct. Thanks.
Those notation differences really confused me, given that the MLP is much
more complex than the Perceptron. :-(
On Wed, Jun 6, 2012 at 8:59 PM, David Marek wrote:
>
> On Wed, Jun 6, 2012 at 1:50 PM, xinfan meng wrote:
>>
>> I think these two delta
On Wed, Jun 6, 2012 at 1:50 PM, xinfan meng wrote:
>
> I think these two delta_o have the same meaning. If you have "Pattern
> Recognition and Machine Learning" by Bishop, you can find that Bishop uses
> exactly the second formula in the back-propagation algorithm. I suspect
> these two formulae le
That would be cool! It would be my pleasure to meet you. And it would also
be great to meet other sklearn users in Beijing.
On Wed, Jun 6, 2012 at 7:59 PM, Gael Varoquaux <
gael.varoqu...@normalesup.org> wrote:
> Hey,
>
> Seeing a mail from Xinfan, I just realized that we have a few Chinese
> cont
It is such a pity that I am leaving Beijing for NYC on 9 June. Maybe we
will meet at the airport by chance :-)
However, I am not a contributor yet, but I am looking for a chance to contribute :-(
LI, Wei
On Wed, Jun 6, 2012 at 11:59 AM, Gael Varoquaux <
gael.varoqu...@normalesup.org> wrote:
> Hey,
>
> S
Hey,
Seeing a mail from Xinfan, I just realized that we have a few Chinese
contributors (and maybe users that I don't know about) that I'd love to
meet.
I am in Beijing next week for a conference. I am arriving on the 9th and
leaving on the 15th, although I'll have a busy schedule in the meantime.
Thanks for your reply.
I think these two delta_o have the same meaning. If you have "Pattern
Recognition and Machine Learning" by Bishop, you can find that Bishop uses
exactly the second formula in the back-propagation algorithm. I suspect
these two formulae lead to the same update iterations, but I am not sure.
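A quick numerical check of where each version comes from (a numpy sketch,
taking delta as the gradient of the error w.r.t. z; some texts define it with
the opposite sign): version 1 is the gradient of the squared error through a
sigmoid, and version 2 is the gradient of the cross-entropy, where the f'(z)
factor cancels.

    import numpy as np

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    z = np.array([0.3, -1.2, 2.0])   # output pre-activations
    y = np.array([1.0, 0.0, 1.0])    # targets
    a = sigmoid(z)
    fprime = a * (1 - a)             # f'(z) for the sigmoid

    se = lambda z: 0.5 * np.sum((sigmoid(z) - y) ** 2)
    ce = lambda z: -np.sum(y * np.log(sigmoid(z))
                           + (1 - y) * np.log(1 - sigmoid(z)))

    def grad(loss, z, eps=1e-6):     # central finite differences
        g = np.zeros_like(z)
        for i in range(len(z)):
            d = np.zeros_like(z); d[i] = eps
            g[i] = (loss(z + d) - loss(z - d)) / (2 * eps)
        return g

    print(np.allclose(grad(se, z), (a - y) * fprime))  # True: (a - y) f'(z)
    print(np.allclose(grad(ce, z), a - y))             # True: (a - y)

So the two formulae belong to different error functions; they coincide up to
the f'(z) factor because cross-entropy with a sigmoid output cancels it.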
Hi
On Wed, Jun 6, 2012 at 10:38 AM, xinfan meng wrote:
> Hi, all. I am posting this question to the list, since it might be related
> to the MLP being developed.
>
> I found that two versions of the error term for the output layer of an MLP
> are used in the literature.
>
> 1. \delta_o = (y - a) f'(z)
>
2012/6/6 Vlad Niculae :
> I won't open a new thread, but is anybody planning to go to Europython 2012?
> I might get a sponsorship to attend and I was wondering what the community
> overlap is.
I will go to the Europython conference on the weekend and might
extend for an additional day or two.
I won't open a new thread, but is anybody planning to go to Europython 2012? I
might get a sponsorship to attend and I was wondering what the community
overlap is.
Best,
Vlad
On Jun 6, 2012, at 09:38, Fernando Perez wrote:
> On Tue, Jun 5, 2012 at 9:52 PM, Gael Varoquaux
> wrote:
>> I won't
Hi, all. I am posting this question to the list, since it might be related to
the MLP being developed.
I found that two versions of the error term for the output layer of an MLP
are used in the literature.
1. \delta_o = (y - a) f'(z)
   http://ufldl.stanford.edu/wiki/index.php/Backpropagation_Algorithm
2. \delta_o = (y - a)
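For later readers, a short derivation of where each version comes from (a
sketch in the thread's notation, with target y and activation a = f(z),
assuming a sigmoid output unit so that f'(z) = a(1 - a)):

    E_{sq} = \tfrac{1}{2} (y - a)^2
        \quad\Rightarrow\quad
        \frac{\partial E_{sq}}{\partial z} = -(y - a)\, f'(z)

    E_{ce} = -\bigl[ y \log a + (1 - y) \log(1 - a) \bigr]
        \quad\Rightarrow\quad
        \frac{\partial E_{ce}}{\partial z}
            = -\frac{y - a}{a(1 - a)}\, a(1 - a) = -(y - a)

So version 1 is the delta for the squared error and version 2 is the delta
for the cross-entropy, where the f'(z) factor cancels; this is why Bishop's
back-propagation equations can drop it.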
Thanks!
2012/6/6 Alexandre Gramfort
> hi Emeline,
>
> svc.predict and svc.predict_proba > 0.5 may not match
>
> predict_proba applies a recalibration based on Platt's method.
>
> Alex
>
> On Wed, Jun 6, 2012 at 9:42 AM, Emeline Landemaine
> wrote:
> > Hey!
> >
> > I'm training an SVM and would lik
hi Emeline,
svc.predict and svc.predict_proba > 0.5 may not match:
predict_proba applies a recalibration based on Platt's method.
Alex
On Wed, Jun 6, 2012 at 9:42 AM, Emeline Landemaine
wrote:
> Hey!
>
> I'm training an SVM and would like to use predict_proba in order to know with
> which confiden
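A minimal way to see the mismatch Alex describes (a toy sketch; the number of
disagreements will vary with the data):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, random_state=0)
    clf = SVC(kernel='rbf', probability=True, random_state=0).fit(X, y)

    # predict thresholds the decision function directly, while predict_proba
    # goes through an internal cross-validated sigmoid (Platt) fit, so the
    # two labelings can disagree near the boundary.
    pred = clf.predict(X)
    proba_pred = (clf.predict_proba(X)[:, 1] > 0.5).astype(int)
    print((pred != proba_pred).sum(), "disagreements out of", len(y))

If you need the probabilities and the labels to agree, threshold
predict_proba yourself instead of mixing the two.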
Hey!
I'm training an SVM and would like to use predict_proba in order to know
with what confidence a label is given.
If predict_proba gives more than 50% for a picture to be in-class, it would
mean that the picture is classified as such. However, some pictures
that have a probability of 30-35%