Re: [Pharo-users] Neural Networks in Pharo

2017-04-29 Thread Oleks
I have also added a PolyMath dependency to the configuration file, so the
Metacello script now updates you to the latest stable versions of everything
automatically.
Thanks for pointing it out.


Re: [Pharo-users] Neural Networks in Pharo

2017-04-28 Thread francesco agati
Everything now works fine with the latest version of PolyMath.
Thanks :-)


Re: [Pharo-users] Neural Networks in Pharo

2017-04-27 Thread Oleks
Hello,

I have finally added a configuration to the NeuralNetwork project. Now you
can use this Metacello script to load it into your Pharo image:

Metacello new
  repository: 'http://smalltalkhub.com/mc/Oleks/NeuralNetwork/main';
  configuration: 'MLNeuralNetwork';
  version: #development;
  load.

Sorry for the delay.

Oleks


Re: [Pharo-users] Neural Networks in Pharo

2017-04-25 Thread francesco agati
thanks ;-)



Re: [Pharo-users] Neural Networks in Pharo

2017-04-25 Thread Oleks
Hello,

There isn't one yet, but I will try to create it today. I will let you know.

Cheers,
Oleks


Re: [Pharo-users] Neural Networks in Pharo

2017-04-25 Thread francesco agati
Hi Oleks,
Is there a way to install the neural network from Metacello?



Re: [Pharo-users] Neural Networks in Pharo

2017-04-25 Thread Alexandre Bergel
Continue to push that topic, Oleks. You are on the right track!

Alexandre


-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.






Re: [Pharo-users] Neural Networks in Pharo

2017-04-23 Thread Oleks
Hello,

Thanks a lot for your advice! It was very helpful and educational (for
example, I thought that we store biases in the weight matrix and prepend 1
to the input to make it faster, but now I see why it's actually slower that
way).

I've implemented a multi-layer neural network as a linked list of layers
that propagate the input and error from one to another, similar to the Chain
of Responsibility pattern. Also, I now represent biases as separate vectors.
The LearningAlgorithm is a separate class with Backpropagation as its
subclass (at this point the network can only learn through backpropagation,
but I'm planning to change that). I'm trying to figure out how the
activation and cost functions should be connected. For example,
cross-entropy works best with logistic sigmoid activation, etc. I would like
to give users the freedom to use whatever they want (plug in whatever you
like and see what happens), but that can be very inefficient, because some
time-consuming parts of the activation and cost derivatives cancel each
other out.
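
To make that cancellation concrete (this is the standard textbook
derivation, nothing specific to my implementation): with logistic sigmoid
activation a = \sigma(z) and cross-entropy cost

    C = -\left[ y \ln a + (1 - y) \ln(1 - a) \right]

we have

    \frac{\partial C}{\partial a} = \frac{a - y}{a(1 - a)}, \qquad
    \sigma'(z) = a(1 - a), \qquad
    \frac{\partial C}{\partial z} = \frac{\partial C}{\partial a}\,\sigma'(z) = a - y

so the expensive \sigma' factor disappears entirely -- which is exactly
what a naive plug-in-any-combination design would end up computing anyway.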

Also, there is an interface for setting the learning rate for the whole
network, which can be used to choose the learning rate prior to learning, as
well as to change it after each iteration. I am planning to implement some
optimization algorithms that would automate the process of choosing a
learning rate (Adagrad, for example), but this would require a somewhat
different design (maybe I will implement the Optimizer, as you suggested).
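
To sketch the direction I mean (all class and selector names below are
invented for illustration, not the project's actual API), an Adagrad-style
Optimizer could keep a per-weight accumulator of squared gradients and
shrink each step as the accumulator grows:

    Object subclass: #AdagradOptimizer
        instanceVariableNames: 'learningRate accumulator epsilon'
        classVariableNames: ''
        category: 'NeuralNetwork-Sketch'.

    AdagradOptimizer >> deltaFor: gradient at: index
        "Answer the update for one weight: the raw step divided by the
        root of the accumulated squared gradient. accumulator starts as
        an array of zeros; epsilon is a small constant (e.g. 1e-8)."
        | acc |
        acc := (accumulator at: index) + (gradient * gradient).
        accumulator at: index put: acc.
        ^ learningRate * gradient / (acc + epsilon) sqrt

The network would then ask such an optimizer for each step instead of
applying one fixed learning rate itself.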

I'm attaching two UML diagrams that describe my current implementation.
Could you please tell me what you think about this design? The first image
is a class diagram that shows the whole architecture, and the second one is
a sequence diagram of backpropagation.

mlnn.png
backprop.png

Sincerely yours,
Oleksandr






Re: [Pharo-users] Neural Networks in Pharo

2017-04-13 Thread Johann Hibschman
I definitely agree with this. Performance-wise, I expect it to be terrible
to model each individual neuron as an object. The logical unit (IMHO)
should be a layer of neurons, with matrix weights, vector biases, and
vector output.

Similarly, I think you'd be better off keeping the bias as a separate
value, rather than concatenating a 1 to the input vector. I know it's what
they do when presenting the math, but it means you'll be allocating a new
vector each time through.
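
Something like this, say -- plain Pharo collections, all names hypothetical,
just to show the shape, with the bias kept as its own vector instead of
being folded into the weight matrix:

    Object subclass: #Layer
        instanceVariableNames: 'weights biases'
        classVariableNames: ''
        category: 'NeuralNetwork-Sketch'.

    Layer >> forward: input
        "Answer W*x + b: one weighted sum per neuron. weights is an
        array of rows, biases an array of the same size; no 1-extended
        copy of the input is ever allocated."
        ^ weights with: biases collect: [ :row :bias |
            bias + (row with: input collect: [ :w :x | w * x ]) sum ]

A matrix library (PolyMath, say) could later replace the inner loops
without changing that interface.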

Finally, I suspect that you'll eventually want to move the learning rate
(and maybe even the learn methods) out of the neuron and into a dedicated
"training"/"learning"/"optimizing" object. Perhaps overkill for the
perceptron, but for a multilayer network, you definitely want to control
the learning rate from the outside.

I've been working in TensorFlow, so my perceptions may be a bit colored by
that framework.

Cheers,
Johann



Re: [Pharo-users] Neural Networks in Pharo

2017-04-12 Thread Ben Coman

Hi Oleks,

(Sorry for the delayed response. I saw your other post in pharo-dev and
found this sitting in my drafts from last week.)

Nice article and an interesting read. I only did neural networks in my
undergrad 25 years ago, so my knowledge is a bit vague.
I think your design reasoning is fine for this stage. Down the track you
might consider that OO is about hiding implementation details. So, just a
vague idea: you might have SLPerceptron storing neuron data internally in
arrays that a GPU can process efficiently, but when "SLPerceptron>>#at:" is
asked for a neuron, it constructs a "real" neuron object whose methods
forward to the arrays in the SLPerceptron.
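
Very roughly, something like this (every name in the sketch is invented;
the point is only the read-through facade):

    Object subclass: #NeuronView
        instanceVariableNames: 'perceptron index'
        classVariableNames: ''
        category: 'NeuralNetwork-Sketch'.

    NeuronView >> setPerceptron: aPerceptron index: anInteger
        perceptron := aPerceptron.
        index := anInteger

    NeuronView >> weights
        "Read through to the flat storage owned by the perceptron."
        ^ perceptron weightsForNeuron: index

    NeuronView >> weights: aCollection
        perceptron weightsForNeuron: index put: aCollection

    SLPerceptron >> at: index
        "Answer a lightweight neuron object backed by my flat arrays."
        ^ NeuronView new setPerceptron: self index: index

The GPU-friendly arrays stay where they are; the neuron object is just a
view onto them.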

cheers -ben


[Pharo-users] Neural Networks in Pharo

2017-04-04 Thread Oleksandr Zaytsev
Hello!

Several weeks ago I announced my NeuralNetworks project. Thank you very
much for your ideas and feedback. As suggested, I wrote examples for every
class and tested my perceptron on linearly separable logical functions.

I have just completed a post about my implementation of a single-layer
perceptron:
https://medium.com/@i.oleks/single-layer-perceptron-in-pharo-5b13246a041d.
It has a detailed explanation of every part of the design and illustrates
different approaches to implementation.

Please tell me what you think.

Are my class diagrams correct or did I mess something up?
Is there a design pattern that I should consider?
Do you think that I should do something differently?
Should I improve the quality of my code?

Yours sincerely,
Oleksandr


Re: [Pharo-users] Neural Networks in Pharo

2017-03-29 Thread Alidra Abdelghani via Pharo-users

Hi Oleksandr,

Since you are interested in the implementation aspects of neural networks,
maybe you should take a look at HeuristicLab
(http://dev.heuristiclab.com/trac.fcgi/wiki), a general framework for
developing heuristic algorithms (not only neural networks, actually).

The interesting part is that the focus is on the implementation aspects too.
The developers adopted a Software Product Line approach, which seems to
significantly improve how the code can be reused and adapted to different
or new problems.

Have a nice journey with Pharo ;)
Abdelghani



Re: [Pharo-users] Neural Networks in Pharo

2017-03-21 Thread Alexandre Bergel
Excellent, Guillermo!
I also wanted to play with the MNIST dataset. I will try your code.

Alexandre
-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.






Re: [Pharo-users] Neural Networks in Pharo

2017-03-21 Thread Alexandre Bergel
Having a neuron as an object is exactly what I have in my implementation.
Sounds exciting!

Share your code when ready! Eager to try it!

Alexandre
-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.







Re: [Pharo-users] Neural Networks in Pharo

2017-03-21 Thread Offray Vladimir Luna Cárdenas
Nice to see this development. On the examples issue raised by Alexandre,
maybe Grafoscopio [1] could be useful for combining prose with code. I have
written a new user manual [2] and I'm going to work on Grafoscopio during
this Summer of Code [3].


[1] http://mutabit.com/grafoscopio/index.en.html
[2] http://mutabit.com/repos.fossil/grafoscopio/doc/tip/Docs/En/Books/Manual/manual.pdf
[3] http://gsoc.pharo.org/#topic-grafoscopio-literate-computing-and-reproducible-research-for-pharo


Cheers,

Offray







Re: [Pharo-users] Neural Networks in Pharo

2017-03-21 Thread Serge Stinckwich
On Tue, Mar 21, 2017 at 2:38 PM, Alexandre Bergel wrote:
> Hi Oleksandr!

Hi all,

> I had a look at your code a couple of weeks ago, as I am also interested in
> neural networks and genetic algorithms (I will soon start a lecture on this
> topic at my university).

This is great!

> I think that your code needs examples. [...]

It would be nice to have something similar to scikit-learn in Python:
http://scikit-learn.org/stable/

> Push push your code! We need it!

Yes, we definitely need something like that!
Have a look at PolyMath, which already provides a lot of math libraries:
https://github.com/PolyMathOrg/PolyMath

I just released v0.85 of PolyMath.

Regards,
-- 
Serge Stinckwich
UCN & UMI UMMISCO 209 (IRD/UPMC)
Every DSL ends up being Smalltalk
http://www.doesnotunderstand.org/



Re: [Pharo-users] Neural Networks in Pharo

2017-03-21 Thread Alexandre Bergel
Hi Oleksandr!

I had a look at your code a couple of weeks ago, as I am also interested in
neural networks and genetic algorithms (I will soon start a lecture on this
topic at my university).

I think that your code needs examples. Maybe you can add some simple ones,
such as learning boolean expressions, or a more complete example on
recognizing handwriting. There is Python code that does exactly that here:
http://neuralnetworksanddeeplearning.com/chap1.html
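
For instance, a boolean-function example could be as small as this (I am
inventing the selectors here -- adapt them to whatever your SLPerceptron
actually understands):

    | p examples |
    p := SLPerceptron new inputSize: 2.
    examples := { #(0 0) -> 0. #(0 1) -> 0. #(1 0) -> 0. #(1 1) -> 1 }. "AND"
    100 timesRepeat: [
        examples do: [ :each | p learn: each key target: each value ] ].
    p value: #(1 1) "should answer 1"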


My code is available here: http://smalltalkhub.com/#!/~abergel/NeuralNetworks

I wrote this code to support some aspects of my lecture. Do not take it as
an absolute answer. Having concrete and relevant examples is important, and
my code does not have such examples.

Push push your code! We need it!

Cheers,
Alexandre

-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.






Re: [Pharo-users] Neural Networks in Pharo

2017-03-21 Thread Guillermo Polito
Hi Oleksandr,

I'm working half-time on a team doing simulations of spiking neural
networks (in Scala). Since the topic is "new" to me, I started following a
MOOC on traditional machine learning and putting some of my code here:

https://github.com/guillep/neural-experiments

Also, I wanted to experiment on hand-written characters with the Mnist
dataset (yann.lecun.com/exdb/mnist/) so I wrote a reader for the IDX format
to load the dataset.

https://github.com/guillep/idx-reader
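
The format itself is tiny, by the way: a 4-byte magic number (two zero
bytes, a type code, and the number of dimensions), then one unsigned 32-bit
big-endian integer per dimension, then the raw data. A header-parsing
sketch -- not my reader's actual code, and assuming an image where
FileReference understands #binaryReadStream:

    | s magic nDims dims |
    s := 'train-images-idx3-ubyte' asFileReference binaryReadStream.
    magic := s next: 4. "bytes: 0, 0, data type, number of dimensions"
    nDims := magic last.
    dims := (1 to: nDims) collect: [ :i |
        (s next: 4) inject: 0 into: [ :acc :byte | (acc bitShift: 8) + byte ] ].
    s close.
    dims "e.g. #(60000 28 28) for the training images"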

If you want I can take a look and we can discuss further :)

Guille




Re: [Pharo-users] Neural Networks in Pharo

2017-03-21 Thread Oleksandr Zaytsev
I started by implementing some simple threshold neurons. The current goal
is a multilayer perceptron (similar to the one in scikit-learn), and maybe
other kinds of networks, such as self-organizing maps or radial basis
networks.

I could try to implement deep learning algorithms, but the big issue with
them is time complexity. They would probably require the use of a GPU, or
some advanced "tricks", so I should start with something smaller.

Also, I want to try different kinds of design approaches, including some
that are not based on highly optimized vector algebra (I know that it might
not be the best idea, but I want to try it and see what happens). For
example, a network where each neuron is an object (normally the whole
network is represented as a collection of weight matrices). It might turn
out to be very slow, but more object-friendly. For now it's just an idea,
but to try something like that I would only need a small network with
1-100 neurons.
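
For example, a single threshold neuron as an object could be as small as
this sketch (names and selectors are provisional, not a final design):

    Object subclass: #ThresholdNeuron
        instanceVariableNames: 'weights bias learningRate'
        classVariableNames: ''
        category: 'NeuralNetwork-Sketch'.

    ThresholdNeuron >> value: input
        "Fire (answer 1) iff the weighted sum plus bias is positive."
        | sum |
        sum := bias + (weights with: input collect: [ :w :x | w * x ]) sum.
        ^ sum > 0 ifTrue: [ 1 ] ifFalse: [ 0 ]

    ThresholdNeuron >> learn: input target: target
        "Classic perceptron rule: nudge weights toward the target output."
        | error |
        error := target - (self value: input).
        weights := weights with: input collect: [ :w :x |
            w + (learningRate * error * x) ].
        bias := bias + (learningRate * error)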

Yours sincerely,
Oleksandr


Re: [Pharo-users] Neural Networks in Pharo

2017-03-21 Thread Serge Stinckwich
On Tue, Mar 21, 2017 at 10:09 AM, Oleksandr Zaytsev wrote:
> I'm implementing Neural Networks in Pharo as part of my thesis
> (object-oriented approaches to neural network implementation). [...]
> Here is the repository: http://smalltalkhub.com/#!/~Oleks/NeuralNetwork.

You should talk to Guillermo Polito. He is working on neural networks in
Pharo. Are you implementing deep learning algorithms?

Regards,

-- 
Serge Stinckwich
UCN & UMI UMMISCO 209 (IRD/UPMC)
Every DSL ends up being Smalltalk
http://www.doesnotunderstand.org/



[Pharo-users] Neural Networks in Pharo

2017-03-21 Thread Oleksandr Zaytsev
Hello.

I'm implementing Neural Networks in Pharo as part of my thesis
(object-oriented approaches to neural network implementation). It would be
really nice to receive some feedback, so if you have any ideas,
recommendations, or critiques, please write to me. What are the existing
projects or common pitfalls that I should consider? Maybe you can recommend
a nice book or paper on a related topic. I would be grateful for any kind
of feedback.

Here is the repository: http://smalltalkhub.com/#!/~Oleks/NeuralNetwork.
It's not much, but I'm working on it.

Yours sincerely,
Oleksandr