Hi,
I don't get it - what is the idea of parallelising an algorithm that is
quite fast and in general relies only on simple arithmetic operations? The
algorithm itself is difficult to parallelise, as it requires a series of
sequential weight updates, so I believe the MapReduce overhead would be
bigger than the profit from parallelisation. At most, in the case of
extremely big networks, you could gain something, but it would be rather
"ars pro arte", as extremely big networks are not that efficient. Moreover,
the algorithm tends to get stuck in local minima, so you would gain almost
nothing... Looking further, I see that you are using sigmoid perceptrons -
you know the derivative a priori... The only case where I could imagine
using a parallel approach to calculate the gradient of a neural network is
when you have to learn a big, unknown network (you do not know the
structure or the activation functions). But it is pointless to approach
such a case with a NN.
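Just to illustrate the "derivative known a priori" point, a minimal sketch
in plain Python/numpy; the function names are my own illustration, not
taken from the paper or from Mahout:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_derivative(y):
        # For y = sigmoid(x), dy/dx = y * (1 - y) -- no numerical
        # differentiation is needed during backpropagation.
        return y * (1.0 - y)

    def weight_update(w, x, target, eta=0.1):
        # One gradient-descent step for a single sigmoidal perceptron
        # with squared error E = 0.5 * (target - y)^2.
        y = sigmoid(np.dot(w, x))
        delta = (target - y) * sigmoid_derivative(y)
        return w + eta * delta * x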

Finally, the stop condition. Usually this is the training error falling
below a threshold (if you have a lot of faith) or the error descent, i.e.
the change in error between epochs dropping below a tolerance (if you are
a bit smarter).
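A minimal illustration of both stop conditions; the thresholds here are
arbitrary placeholders, not values from any reference:

    def should_stop(epoch_errors, max_error=1e-3, min_descent=1e-6):
        # epoch_errors: training error recorded after each epoch.
        if not epoch_errors:
            return False
        if epoch_errors[-1] < max_error:
            return True                  # error itself is small enough
        if len(epoch_errors) >= 2:
            descent = epoch_errors[-2] - epoch_errors[-1]
            if abs(descent) < min_descent:
                return True              # error has stopped descending
        return False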


2015-02-12 11:14 GMT+01:00 unmesha sreeveni <unmeshab...@gmail.com>:

> I am trying to implement a Neural Network in MapReduce. Apache Mahout is
> referring to this paper
> <http://www.cs.stanford.edu/people/ang/papers/nips06-mapreducemulticore.pdf>
>
> Neural Network (NN): We focus on backpropagation. By defining a network
> structure (we use a three-layer network with two output neurons classifying
> the data into two categories), each mapper propagates its set of data
> through the network. For each training example, the error is back-propagated
> to calculate the partial gradient for each of the weights in the network.
> The reducer then sums the partial gradients from each mapper and does a
> batch gradient descent to update the weights of the network.
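To make the scheme quoted above concrete, a rough sketch of one iteration
in plain Python - not actual Hadoop or Mahout code, and the network is
reduced to a single sigmoidal unit with weights (w0, w1, w2), w0 acting as
the bias, purely as an assumption borrowed from the linked perceptron
example:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def map_partial_gradient(weights, split):
        # One mapper: propagate its split of (x, target) pairs through the
        # network (here a single sigmoidal unit, weights[0] as bias) and
        # accumulate the gradient of the squared error over the split.
        grad = np.zeros_like(weights)
        for x, target in split:
            x = np.concatenate(([1.0], x))   # prepend the bias input
            y = sigmoid(np.dot(weights, x))
            grad += -(target - y) * y * (1.0 - y) * x
        return grad

    def reduce_batch_update(weights, partial_grads, eta=0.1):
        # The reducer sums the partial gradients from all mappers and does
        # one batch gradient-descent step on the shared weights.
        return weights - eta * np.sum(partial_grads, axis=0)

    # One MapReduce iteration: every mapper starts from the SAME weights;
    # only the reducer changes them.
    # weights = reduce_batch_update(weights,
    #                               [map_partial_gradient(weights, s)
    #                                for s in splits])

Note that in this scheme the mappers never update the weights themselves:
every mapper reads the same weights for the iteration and emits only a
partial gradient, and the single batch update happens in the reducer.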
>
> Here <http://homepages.gold.ac.uk/nikolaev/311sperc.htm> is a worked-out
> example of the gradient descent algorithm.
>
> Gradient Descent Learning Algorithm for Sigmoidal Perceptrons
> <http://pastebin.com/6gAQv5vb>
>
>    1. Which is the better way to parallelize a neural network algorithm
>    from a MapReduce perspective? In the mapper: each record owns a partial
>    weight (from the above example: w0, w1, w2); I suspect w0 is the bias.
>    Random weights are assigned initially, and the first record calculates
>    the output (o) and the weights get updated; the second record also finds
>    the output, and deltaW gets updated with the previous deltaW value. In
>    the reducer the sum of the gradients is calculated, i.e. if we have 3
>    mappers we get 3 sets of w0, w1, w2. These are summed, and using batch
>    gradient descent we update the weights of the network.
>    2. In the above method, how can we ensure which previous weight is taken
>    when there is more than one map task? Each map task has its own updated
>    weights. How can this be accurate?
>    3. Where can I find the backward propagation in the above-mentioned
>    gradient descent neural network algorithm? Or is it fine with this
>    implementation?
>    4. What is the termination condition mentioned in the algorithm?
>
> Please help me with some pointers.
>
> Thanks in advance.
>
> --
> *Thanks & Regards *
>
>
> *Unmesha Sreeveni U.B*
> *Hadoop, Bigdata Developer*
> *Centre for Cyber Security | Amrita Vishwa Vidyapeetham*
> http://www.unmeshasreeveni.blogspot.in/
>



-- 
Regards,
Grzegorz Ewald

<mailto:grzegorz.ew...@gmail.com>
