> If we manage to have at least the same performance, supporting both L1 and
> L1+L2 regularization, without penalizing the intercept
Yes, this would make me happy too!
On 6 February 2014 07:45, Alexandre Gramfort <
alexandre.gramf...@telecom-paristech.fr> wrote:
the idea of dropping LibLinear for Logistic Regression has been
around for some time now. If we manage to have at least the same
performance, supporting both L1 and L1+L2 regularization, without
penalizing the intercept ... it would be great.
@mblondel any thoughts wrt lightning?
Alex
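For reference, a minimal sketch of what is already possible with
SGDClassifier, which supports L1 and elastic-net penalties and leaves the
intercept unpenalized; the data and parameter values below are illustrative
only:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.RandomState(0)
    X = rng.randn(100, 5)
    y = (X[:, 0] + 0.5 * rng.randn(100) > 0).astype(int)

    # Logistic loss with an elastic-net penalty; the intercept is
    # fitted separately and is not included in the penalty term.
    # (loss="log" is spelled "log_loss" in newer scikit-learn releases.)
    clf = SGDClassifier(loss="log", penalty="elasticnet",
                        alpha=1e-3, l1_ratio=0.5, random_state=0)
    clf.fit(X, y)
    print(clf.coef_, clf.intercept_)

The open question is getting at least the same accuracy and speed out of a
batch solver rather than SGD.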
Hi, (Sorry to be spamming this list)
I just created a wiki page for the project discussion, so that I can dump
my ideas.
https://github.com/scikit-learn/scikit-learn/wiki/Linear-models-project-discussion,-GSoC-2014
I went through the gist and had a quick look at the research paper and I
understood ...
I can say that pylearn2 does NOT (in the main branch, at least) have an
implementation of DropConnect - only dropout as Nick mentioned. A tutorial
on using DropConnect is here:
http://fastml.com/regularizing-neural-networks-with-dropout-and-with-dropconnect/
Kyle
On Wed, Feb 5, 2014 at 1:32 PM,
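For anyone who just wants the idea rather than a framework: the two
techniques differ only in what gets masked. Dropout zeroes activations,
DropConnect zeroes individual weights. A rough numpy sketch of a
training-time forward pass (my own illustration; the layer shapes and the
0.5 keep probability are arbitrary):

    import numpy as np

    rng = np.random.RandomState(0)
    x = rng.randn(4, 10)   # batch of 4 examples, 10 features
    W = rng.randn(10, 8)   # weights of one layer
    p = 0.5                # keep probability

    # Dropout: mask the activations, rescaling by 1/p so the
    # expected value matches the unmasked test-time pass.
    h = np.maximum(0, x.dot(W))
    h_dropout = h * (rng.rand(*h.shape) < p) / p

    # DropConnect: mask individual weights instead of activations.
    mask = rng.rand(*W.shape) < p
    h_dropconnect = np.maximum(0, x.dot(W * mask) / p)

At test time the simple approximation is to use the full weights with no
mask (the DropConnect paper does something fancier).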
hi,
We are looking for a Scikit-Learn/Python fan interested in helping us
to implement native persistence of Scikit-Learn estimators and data.
The technology we plan to use is called NEO (http://www.neoppod.org/).
It is a distributed object database that can store serialized Python
objects on a re
Hi, Thomas:
Pylearn2 supports dropout:
https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/costs/mlp/dropout.py
Regards,
Nick
On Wed, Feb 5, 2014 at 12:17 PM, Thomas Johnson wrote:
> Apologies if this is slightly off-topic, but is there a high-quality Python
> implementation of DropOut / DropConnect available somewhere?
Hi all,
As this is the topic for neural networks extension in scikit-learn for
GSoC, I'd like to ask whether GSoC projects can be done in groups of two, as
I'm interested in developing extensions and it would be great to have some
help from @issam.
Regards,
Abhishek
On Feb 5, 2014 8:19 PM, "Thomas
Apologies if this is slightly off-topic, but is there a high-quality Python
implementation of DropOut / DropConnect available somewhere?
On Wed, Feb 5, 2014 at 12:58 PM, Andy wrote:
> On 02/05/2014 04:30 PM, Gael Varoquaux wrote:
> > On Wed, Feb 05, 2014 at 03:02:24PM +0300, Issam wrote:
> >> I
On 02/05/2014 04:30 PM, Gael Varoquaux wrote:
> On Wed, Feb 05, 2014 at 03:02:24PM +0300, Issam wrote:
>> I have been working with scikit-learn for three pull requests - namely,
>> Multi-layer Perceptron (MLP), Sparse Auto-encoders, and Gaussian
>> Restricted Boltzmann Machines.
> Yes, you have been doing good work here!
On 02/05/2014 06:40 PM, Kyle Kastner wrote:
> Not to bandwagon extra things on this particular effort, but one
> future consideration is that if scikit-learn supported multilayer
> neural networks, and eventually multilayer convolutional neural
> networks, it would become feasible to load pretrained nets ...
Not to bandwagon extra things on this particular effort, but one future
consideration is that if scikit-learn supported multilayer neural networks,
and eventually multilayer convolutional neural networks, it would become
feasible to load pretrained nets a la OverFeat, DeCAF (recent papers with
sweet
GSoC is on!
We are going to need mentors for this year, and ideally someone to
volunteer to coordinate the whole process.
Cheers,
Gaël
- Forwarded message from Terri Oda -
Date: Tue, 04 Feb 2014 22:08:38 -0800
From: Terri Oda
To: "soc2013-ment...@python.org"
Subject: [Soc2013-mentor
On Wed, Feb 05, 2014 at 03:02:24PM +0300, Issam wrote:
> I have been working with scikit-learn for three pull requests - namely,
> Multi-layer Perceptron (MLP), Sparse Auto-encoders, and Gaussian
> Restricted Boltzmann Machines.
Yes, you have been doing good work here!
> For the upcoming GSoC,
Joel,
Thanks so much for the quick response! I built from that PR and it is
working great!
Hope that PR can get merged into master soon; I am sure others would love
the feature too!
Thanks again,
Cory
On Tue, Feb 4, 2014 at 8:05 PM, Joel Nothman wrote:
> Two options:
> 1. Use SGDClassifier
I finally worked it out myself from this research paper:
http://www.stanford.edu/~hastie/Papers/glmnet.pdf
There were two mistakes that I made:
1. We need to add a w_j * X[:, j] term when updating the residuals (Eqn. 7).
2. When X is not standardised, there should be a sum-of-squares term in
the denominator.
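In case it helps anyone hitting the same two issues, here is a rough
pure-Python version of that inner loop for the lasso; this is my own toy
sketch of the usual soft-thresholding update, not code from scikit-learn or
glmnet:

    import numpy as np

    def soft_threshold(z, gamma):
        return np.sign(z) * max(abs(z) - gamma, 0.0)

    def lasso_cd(X, y, alpha, n_iter=100):
        n_samples, n_features = X.shape
        w = np.zeros(n_features)
        r = y - X.dot(w)  # residuals
        for _ in range(n_iter):
            for j in range(n_features):
                xj = X[:, j]
                # Point 1: put w_j * X[:, j] back into the residuals
                # before re-estimating coordinate j.
                rho = xj.dot(r + w[j] * xj) / n_samples
                # Point 2: divide by the column's sum of squares,
                # which is 1 only when X is standardised.
                w_new = soft_threshold(rho, alpha) / (xj.dot(xj) / n_samples)
                r += (w[j] - w_new) * xj  # incremental residual update
                w[j] = w_new
        return w

This minimises (1/2n) * ||y - Xw||^2 + alpha * ||w||_1, matching the lasso
objective in the paper.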
I think that I would go for the option that minimizes the amount of code
duplication.
I would probably start with 2. Since we don't pickle the Splitter and
Criterion anymore, the constructor arguments could be used to pass the
X and y matrices.
Cheers,
Arnaud
On 04 Feb 2014, at 17:38, Feli
Hi Scikit reviewers,
I have been working with scikit-learn for three pull requests - namely,
Multi-layer Perceptron (MLP), Sparse Auto-encoders, and Gaussian
Restricted Boltzmann Machines.
For the upcoming GSoC, I propose to complete these three pull
requests. I would also develop Gr
A quick comment about scaling.
The correct way to scale your data is to use your training set to calculate
the mean and standard deviation, and then use the mean and standard
deviation of the training set to scale both the training and the test set.
If you use the full dataset (training+test) to compute these statistics,
information from the test set leaks into training and your test error
estimate becomes optimistic.
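Concretely, with scikit-learn this is just (a minimal sketch; the random
data and the split are only for illustration):

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    rng = np.random.RandomState(0)
    X = rng.randn(100, 3) * [1.0, 10.0, 100.0]  # features on very different scales
    X_train, X_test = X[:80], X[80:]

    scaler = StandardScaler().fit(X_train)  # mean and std from the training set only
    X_train_s = scaler.transform(X_train)
    X_test_s = scaler.transform(X_test)     # same statistics, reused on the test set

A Pipeline with cross-validation takes care of refitting the scaler on each
training fold.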
On Wed, 05 Feb 2014 00:36:45 +0100, wrote:
> Date: Tue, 4 Feb 2014 17:36:37 -0600
> From: Kyle Kastner
> Subject: Re: [Scikit-learn-general] Strange Error Message
> To: scikit-learn-general@lists.sourceforge.net
> Sorry -