Hi Arnaud,
You are right, I was supposed to finish my MLP PR before the summer, but
my thesis took over my time, which, fortunately, I am completing this
semester :).
Anyhow, I would start with the multi-layer perceptron and deep networks
before delving into other algorithms.
For th
No, that's not what I meant. Maybe we can chat about this off-list
after I read more about ELMs, e.g. the reference you gave below. And
maybe your Master's thesis too?
Sure thing, I will give you the thesis as soon as I finish the first
write-up. :)
This is the first paper in Extreme Learni
On Fri, Mar 21, 2014 at 11:19 AM, Issam wrote:
> >
> > Also: new topic: Did you mention earlier in the thread that you need
> > derivatives to implement a regularized ELM? Why don't you just use
> > some of the existing linear (or even non-linear?) regression models in
> > sklearn to classify the
>
> I get that if you have 10,000 samples and 150 features, then your
> system is over-determined.
> Where I think you go wrong is in worrying about a large number of
> unique solutions. Over-determined typically means 0 solutions! (Have
> another look at that page you linked, it's the under-d
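For reference, a minimal sketch of the approach James suggests: fit the ELM output layer with an existing scikit-learn regressor. The random hidden layer, the tanh activation, and the choice of Ridge are illustrative assumptions here, not code from the PR.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import LabelBinarizer

rng = np.random.RandomState(0)
X = rng.randn(10000, 150)              # many more samples than features: over-determined
y = rng.randint(0, 3, size=10000)      # toy 3-class labels

W = rng.randn(150, 500)                # random input-to-hidden weights, never trained
b = rng.randn(500)
H = np.tanh(np.dot(X, W) + b)          # hidden-layer activations

T = LabelBinarizer().fit_transform(y)  # one-hot targets for a least-squares classifier
ridge = Ridge(alpha=1.0).fit(H, T)     # an existing linear model solves the output layer
y_pred = ridge.predict(H).argmax(axis=1)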
On Fri, Mar 21, 2014 at 8:50 AM, Issam wrote:
> On 3/21/2014 4:25 PM, James Bergstra wrote:
>
> The proposal looks good to me! A few small comments:
>
> 1. I'm confused by the paragraph on regularized ELMs: I think you mean
> that in cases where the hidden weights (the classifier?) are
> *under
On 3/21/2014 4:25 PM, James Bergstra wrote:
The proposal looks good to me! A few small comments:
1. I'm confused by the paragraph on regularized ELMs: I think you mean
that in cases where the hidden weights (the classifier?) are
*underdetermined* because there are far more *unknowns* than *sam
The proposal looks good to me! A few small comments:
1. I'm confused by the paragraph on regularized ELMs: I think you mean that
in cases where the hidden weights (the classifier?) are *underdetermined*
because there are far more *unknowns* than *samples*, then you need to
regularize somehow. (Righ
Hi all,
I updated the Neural Network proposal on Melange:
http://www.google-melange.com/gsoc/proposal/public/google/gsoc2014/issamou/5668600916475904
Thank you.
~Issam
On Fri, Mar 21, 2014 at 5:27 PM, Issam wrote:
> How about assigning the first week to finalizing the PR? Because the
> documentation hasn't been thoroughly reviewed yet.
>
> Thanks
>
Good enough.
>
> On 3/21/2014 2:28 PM, Jaidev Deshpande wrote:
> >
> > I'm sorry, I did not intend to +1 tha
How about assigning the first week to finalizing the PR? Because the
documentation hasn't been thoroughly reviewed yet.
Thanks
On 3/21/2014 2:28 PM, Jaidev Deshpande wrote:
>
> I'm sorry, I did not intend to +1 that. Does it make sense for Issam
> to allocate two weeks of GSoC coding time to f
On Fri, Mar 21, 2014 at 4:51 PM, Jaidev Deshpande <
deshpande.jai...@gmail.com> wrote:
>
>
>
> On Fri, Mar 21, 2014 at 4:38 PM, Mathieu Blondel wrote:
>
>> Hi Issam,
>>
>> Thanks for the clarification. Another remark. It seems to me that it
>> would be nice if you allocated, say, two weeks at the
On Fri, Mar 21, 2014 at 4:38 PM, Mathieu Blondel wrote:
> Hi Issam,
>
> Thanks for the clarification. Another remark. It seems to me that it would
> be nice if you allocated, say, two weeks at the beginning of your GSOC to
> finish your on-going PRs, especially the MLP one.
>
+1
>
> Mathieu
>
>
Hi Issam,
Thanks for the clarification. Another remark. It seems to me that it would
be nice if you allocated, say, two weeks at the beginning of your GSOC to
finish your on-going PRs, especially the MLP one.
Mathieu
On Fri, Mar 21, 2014 at 7:16 PM, Gael Varoquaux <
gael.varoqu...@normalesup.or
On Fri, Mar 21, 2014 at 01:38:46PM +0300, Issam wrote:
> The thesis is due April 30, 2014, which is 19 days before GSoC starts :).
Good luck! I hope you finish on time, to be able to enjoy some rest.
Gaël
The thesis is due April 30, 2014, which is 19 days before GSoC starts :).
> On Fri, Mar 21, 2014 at 01:26:00PM +0300, Issam wrote:
>> You are right, I was supposed to finish my MLP PR before the summer, but
>> my thesis took over my time, which, fortunately, I am completing this
>> semester :).
>
On Fri, Mar 21, 2014 at 01:26:00PM +0300, Issam wrote:
> You are right, I was supposed to finish my MLP PR before the summer, but
> my thesis took over my time, which, fortunately, I am completing this
> semester :).
Out of curiosity, when is it due?
> On the other hand, I have already worked o
Hi Arnaud,
You are right, I was supposed to finish my MLP PR before the summer, but
my thesis took over my time, which, fortunately, I am completing this
semester :).
Anyhow, I would start with the multi-layer perceptron and deep networks
before delving into other algorithms.
For the l
On Fri, Mar 21, 2014 at 11:14:18AM +0100, Arnaud Joly wrote:
> To the neural network experts: would it be interesting to have layer
> configuration à la Torch,
> https://github.com/torch/nn/blob/master/README.md ?
I think that this should be done only after implementing a base set of
useful algorithms. I am
Hi Issam,
Why not start by improving the multilayer neural network before adding new
algorithms?
To the neural network experts: would it be interesting to have layer
configuration à la Torch,
https://github.com/torch/nn/blob/master/README.md ?
Best,
Arnaud
On 21 Mar 2014, at 10:18, Issam wrote:
> Hi
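Purely as a hypothetical illustration of the composition idea behind Torch-style layer configuration, here is a toy sketch; none of these classes exist in scikit-learn, they only show what "configuring layers" means.

import numpy as np

class Linear:
    def __init__(self, n_in, n_out, rng=np.random):
        self.W = rng.randn(n_in, n_out) * 0.01
        self.b = np.zeros(n_out)
    def forward(self, X):
        return np.dot(X, self.W) + self.b

class Tanh:
    def forward(self, X):
        return np.tanh(X)

class Sequential:
    def __init__(self, *layers):
        self.layers = layers
    def forward(self, X):
        for layer in self.layers:   # apply each configured layer in order
            X = layer.forward(X)
        return X

net = Sequential(Linear(150, 100), Tanh(), Linear(100, 3))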
Hi Mathieu,
The regularized version is fundamentally different from the
non-regularized one, in that it takes the derivative of the objective
function and solves for the weights with a least-squares solution. The
basic ELM version is the classic one, so it wouldn't hurt to have both :).
Further, it is
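For context, a hedged numpy sketch of the regularized solve Issam describes: setting the derivative of the ridge-regularized least-squares objective to zero gives the standard closed form beta = (H^T H + I/C)^-1 H^T T. Variable names are illustrative, not taken from the PR.

import numpy as np

def elm_output_weights(H, T, C=1.0):
    # Solve min_beta ||H beta - T||^2 + ||beta||^2 / C in closed form;
    # the zero-gradient condition is (H^T H + I/C) beta = H^T T.
    n_hidden = H.shape[1]
    A = np.dot(H.T, H) + np.eye(n_hidden) / C
    return np.linalg.solve(A, np.dot(H.T, T))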
Naive questions from someone who knows nothing about ELM. What's the
motivation for implementing both non-regularized and regularized ELM? If
the former tends to overfit, I would keep only the latter. And why do you
need 2 weeks for implementing the regularized variant? Is the algorithm
fundamental
Hi all,
I uploaded the proposal for Neural Networks to Melange; here is the
public link.
http://www.google-melange.com/gsoc/proposal/public/google/gsoc2014/issamou/5668600916475904
Thank you.
Regards,
~Issam
On 3/20/2014 10:47 AM, Jaidev Deshpande wrote:
On Thu, Mar 20, 2014 at 5:54 AM
True that; I uploaded the proposal as a gist to this link,
https://gist.github.com/IssamLaradji/9660324
Thank you.
~Issam
Hi Issam,
Looks OK at first glance, but please add it as a gist. Those are much
easier to comment on.
Thanks
--
JD
On Thu, Mar 20, 2014 at 5:54 AM, Issam wrote:
> Hi all,
>
> I uploaded the Neural Network proposal to this link,
>
>
> https://github.com/scikit-learn/scikit-learn/wiki/GSoC-2014:-Extending-Neural-Networks-Module-for-Scikit-learn
>
> Please see if it is detailed enough as a promising proposal.
>
Hi all,
I uploaded the Neural Network proposal to this link,
https://github.com/scikit-learn/scikit-learn/wiki/GSoC-2014:-Extending-Neural-Networks-Module-for-Scikit-learn
Please see if it is detailed enough as a promising proposal.
Thank you.
~Issam
On 3/19/2014 9:00 PM, Jaidev Deshpande wro
Thank you for the reminder; I didn't realize the deadline was so close :(. I
got sidetracked by a quiz I had.
Anyhow, I will finish and upload the proposal today, and hopefully we
can review it thoroughly tomorrow.
Sorry for the delay.
Thanks.
On 3/19/2014 9:00 PM, Jaidev Deshpande wrote:
On Sat, Mar 15, 2014 at 6:59 PM, Issam wrote:
> Thanks Olivier, I will upload the proposal very soon.
>
> While doing so, I will strengthen my proposal by implementing a basic
> version of each of the proposed algorithms, which I will cite in my
> proposal.
>
> Cheers. :)
>
Hi Issam,
What's the
Thanks Olivier, I will upload the proposal very soon.
While doing so, I will strengthen my proposal by implementing a basic
version of each of the proposed algorithms, which I will cite in my
proposal.
Cheers. :)
On 3/14/2014 5:38 PM, Olivier Grisel wrote:
> Issam if I am not mistaken you have
Issam, if I am not mistaken, you have not written an official proposal
for this GSoC application.
If you are still interested, there is an official template to follow
for PSF affiliated sub-projects (such as scikit-learn):
https://wiki.python.org/moin/SummerOfCode/ApplicationTemplate2014
You can a
> I'd be happy to help define and mentor this PR, if a mentor is needed.
> I'd really like to see this nnet work merged into sklearn, and some of
> the other ideas that have been mentioned here too (e.g. docs & code
> for ELM).
>
Hi James, that would be more than great.
@Gael has sent the messa
I'd be happy to help define and mentor this PR, if a mentor is needed. I'd
really like to see this nnet work merged into sklearn, and some of the
other ideas that have been mentioned here too (e.g. docs & code for ELM).
On Wed, Feb 26, 2014 at 10:56 AM, federico vaggi
wrote:
> As an aside Lars -
As an aside, Lars - I'd actually love to see the recipe, if you don't mind
putting up a gist or notebook.
On Wed, Feb 26, 2014 at 1:29 PM, Lars Buitinck wrote:
> 2014-02-25 7:52 GMT+01:00 Gael Varoquaux :
> >> Extreme learning machine: theory and applications has 1285 citations
> >> and it got p
+1 for an RBF network transformer (with an option to choose between k-means
and random sampling).
Mathieu
On Wed, Feb 26, 2014 at 9:40 PM, Vlad Niculae wrote:
> On Wed Feb 26 13:32:08 2014, Gael Varoquaux wrote:
> > documentation and example
>
> This was exactly my thought. Many such (near-)e
On Wed, Feb 26, 2014 at 01:55:11PM +0100, Lars Buitinck wrote:
> > I'd rather avoid special pipelines. For me, that would mean that we have
> > an API problem with the pipeline that needs to be identified and
> > solved.
> Well, for deep learning, you'd want a generalized backprop on the
> final
2014-02-26 13:51 GMT+01:00 Gael Varoquaux :
> On Wed, Feb 26, 2014 at 03:42:50PM +0300, Issam wrote:
>> Or perhaps special pipelines to simplify such common tasks.
>
> I'd rather avoid special pipelines. For me, that would mean that we have
> an API problem with the pipeline that needs to be ident
On Wed, Feb 26, 2014 at 03:42:50PM +0300, Issam wrote:
> Or perhaps special pipelines to simplify such common tasks.
I'd rather avoid special pipelines. For me, that would mean that we have
an API problem with the pipeline that needs to be identified and
solved.
G
2014-02-26 13:40 GMT+01:00 Vlad Niculae :
> This was exactly my thought. Many such (near-)equivalences are not
> obvious, especially
> for beginners. If Lars's hinge ELM and RBF network would work well (or
> provide
> interesting feature visualisations) on some sklearn.dataset, an example
> would
On 2/26/2014 3:29 PM, Lars Buitinck wrote:
> We have a PR that implements them, but in too convoluted a way. My
> personal choice for implementing these would be a transformer doing a
> random projection + nonlinear activation. That way, you can stack any
> linear model on top (think SGDClassifier
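A rough sketch of the transformer Lars describes: a random projection followed by a nonlinear activation, with any linear model stacked on top. The class name, the tanh activation, and the use of GaussianRandomProjection are my assumptions, not the PR's implementation.

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection

class RandomNonlinearFeatures(BaseEstimator, TransformerMixin):
    def __init__(self, n_components=100, random_state=None):
        self.n_components = n_components
        self.random_state = random_state

    def fit(self, X, y=None):
        # random, untrained projection to the hidden representation
        self.projection_ = GaussianRandomProjection(
            n_components=self.n_components,
            random_state=self.random_state).fit(X)
        return self

    def transform(self, X):
        return np.tanh(self.projection_.transform(X))   # nonlinear activation

clf = make_pipeline(RandomNonlinearFeatures(n_components=200, random_state=0),
                    SGDClassifier())
# clf.fit(X_train, y_train); clf.predict(X_test)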
On Wed Feb 26 13:32:08 2014, Gael Varoquaux wrote:
> documentation and example
This was exactly my thought. Many such (near-)equivalences are not
obvious, especially
for beginners. If Lars's hinge ELM and RBF network would work well (or
provide
interesting feature visualisations) on some sklea
On Wed, Feb 26, 2014 at 01:29:43PM +0100, Lars Buitinck wrote:
> I recently implemented baseline RBF networks in pretty much the same
> way: k-means + RBF kernel + linear classifier. I didn't submit a PR
> because it's just a pipeline of existing components.
All your points about transformers and
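And a sketch of the baseline RBF network Lars mentions: k-means centers, RBF kernel features against those centers, and a linear classifier on top. The gamma value and the classifier choice are illustrative; Mathieu's random-sampling option could replace the k-means step.

from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.pipeline import make_pipeline

class RBFCenters(BaseEstimator, TransformerMixin):
    def __init__(self, n_centers=50, gamma=1.0, random_state=None):
        self.n_centers = n_centers
        self.gamma = gamma
        self.random_state = random_state

    def fit(self, X, y=None):
        km = KMeans(n_clusters=self.n_centers,
                    random_state=self.random_state).fit(X)
        self.centers_ = km.cluster_centers_      # k-means prototypes
        return self

    def transform(self, X):
        return rbf_kernel(X, self.centers_, gamma=self.gamma)

rbf_net = make_pipeline(RBFCenters(n_centers=50, random_state=0),
                        LogisticRegression())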
2014-02-25 7:52 GMT+01:00 Gael Varoquaux :
>> Extreme learning machine: theory and applications has 1285 citations
>> and it got published in 2006; a large number of citations for a fairly
>> recent article. I believe scikit-learn could add such an interesting
>> learning algorithm along with its
On Mon, Feb 24, 2014 at 11:47:07PM +0300, Issam wrote:
> I am working on extending the Extreme Learning Machine in my thesis; I
> think that would be a good addition. It differs from backpropagation in
> that, instead of running Newton's gradient descent to find the weights
> minimizing the objective functio
I'm also still hopeful there.
Unfortunately I will definitely be unable to mentor.
About pretraining: that is really out of style now ;)
Afaik "everybody" is now doing purely supervised training using drop-out.
Implementing pretrained deep nets should be fairly easy for a user if we
support mo
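For reference, a minimal sketch of (inverted) dropout applied to a layer's activations during training; this is just the general idea, not any existing scikit-learn code.

import numpy as np

def dropout(activations, p=0.5, rng=np.random):
    # zero each unit with probability p and rescale the survivors so the
    # expected activation is unchanged; at test time use the activations as-is
    mask = rng.binomial(1, 1.0 - p, size=activations.shape)
    return activations * mask / (1.0 - p)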
I can say that pylearn2 does NOT (in the main branch, at least) have an
implementation of DropConnect - only dropout as Nick mentioned. A tutorial
on using DropConnect is here:
http://fastml.com/regularizing-neural-networks-with-dropout-and-with-dropconnect/
Kyle
On Wed, Feb 5, 2014 at 1:32 PM,
Hi, Thomas:
Pylearn2 supports dropout:
https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/costs/mlp/dropout.py
Regards,
Nick
On Wed, Feb 5, 2014 at 12:17 PM, Thomas Johnson
wrote:
> Apologies if this is slightly offtopic, but is there a high-quality Python
> implementation of DropOut
Hi all,
As this is the topic for neural networks extension in scikit-learn for
GSoC, I'd like to ask if the GSoC projects can be done in groups of two, as
I'm interested in developing extensions, but it would be great to have some
help from @issam.
Regards,
Abhishek
On Feb 5, 2014 8:19 PM, "Thomas
Apologies if this is slightly offtopic, but is there a high-quality Python
implementation of DropOut / DropConnect available somewhere?
On Wed, Feb 5, 2014 at 12:58 PM, Andy wrote:
> On 02/05/2014 04:30 PM, Gael Varoquaux wrote:
> > On Wed, Feb 05, 2014 at 03:02:24PM +0300, Issam wrote:
> >> I
On 02/05/2014 04:30 PM, Gael Varoquaux wrote:
> On Wed, Feb 05, 2014 at 03:02:24PM +0300, Issam wrote:
>> I have been working with scikit-learn for three pull requests - namely,
>> Multi-layer Perceptron (MLP), Sparse Auto-encoders, and Gaussian
>> Restricted Boltzmann Machines.
> Yes, you have be
On 02/05/2014 06:40 PM, Kyle Kastner wrote:
> Not to bandwagon extra things on this particular effort, but one
> future consideration is that if scikit-learn supported multilayer
> neural networks, and eventually multilayer convolutional neural
> networks, it would become feasible to load pretra
Not to bandwagon extra things on this particular effort, but one future
consideration is that if scikit-learn supported multilayer neural networks,
and eventually multilayer convolutional neural networks, it would become
feasible to load pretrained nets à la OverFeat, DeCAF (recent papers with
sweet
On Wed, Feb 05, 2014 at 03:02:24PM +0300, Issam wrote:
> I have been working with scikit-learn for three pull requests - namely,
> Multi-layer Perceptron (MLP), Sparse Auto-encoders, and Gaussian
> Restricted Boltzmann Machines.
Yes, you have been doing good work here!
> For the upcoming GSoC,
Hi Scikit reviewers,
I have been working with scikit-learn for three pull requests - namely,
Multi-layer Perceptron (MLP), Sparse Auto-encoders, and Gaussian
Restricted Boltzmann Machines.
For the upcoming GSoC, I propose to complete these three pull
requests. I would also develop Gr