I haven't been reading my mail for a couple of days, but I have been working
on convolutional neural nets for image processing myself.  When I get some
time, I'd like to catch up on the recent traffic and perhaps contribute how
I'm going about it.


On Wed, Apr 17, 2019 at 10:36 PM Brian Schott <[email protected]>
wrote:

> I have renamed this message because the topic has changed, but considered
> moving it to jchat as well. However I settled on jprogramming because there
> are definitely some j programming issues to discuss.
>
> Jon,
>
> Your script code is beautifully commented and very valuable, imho. The lack
> of an example has slowed down my study of the script, but now I have some
> questions and comments.
>
> I gather from your comments that the word tensor is used to designate a
> 4-dimensional array. That's new to me, but it is very logical.
>
> Your definition convFunc=: +/@:,@:* works very well. However, I wish I
> could think of a way to define convFunc in terms of dot=: +/ . * .
>
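> One way that does work (my own sketch, not something from Jon's script) is
> to ravel both arguments before taking the dot product:
>
>    dot =: +/ . *
>    convFunc2 =: dot&,  NB. x convFunc2 y  <->  (,x) dot (,y)
>    NB. for same-shape x and y this agrees with +/@:,@:*
>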
> The main insight I have gained from your code is that (x u;.+_3 y)  can be
> used with x of shape 2 n where n>2 (and not just 2 2). This is great
> information. And that you built the convFunc directly into cf is also very
> enlightening.
>
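> To check that insight, here is a tiny self-contained experiment (mine, not
> from the script) with x of shape 2 3:
>
>    a =: i. 3 4 4
>    NB. movement 1 1 1, window shape 3 2 2; ;._3 keeps only complete windows
>    (1 1 1,:3 2 2) <;._3 a
>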
> I have created a couple of examples of the use of your function `cf` to
> better understand how it works. [The data is borrowed from the fine example
> at http://cs231n.github.io/convolutional-networks/#conv . Beware that the
> dynamic example seen at the link changes every time the page is refreshed,
> so you will not see the exact data I present, but the shapes of the data
> are constant.]
>
> Notice that in my first experiments both `filter` and the right-hand
> argument (RHA) of cf"3 are arrays and not tensors. Consequently(?) the
> result is an array, not a tensor, either.
>
>    i=: _7]\".;._2 (0 : 0)
> 0 0 0 0 0 0 0
> 0 0 0 1 2 2 0
> 0 0 0 2 1 0 0
> 0 0 0 1 2 2 0
> 0 0 0 0 2 0 0
> 0 0 0 2 2 2 0
> 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0
> 0 2 1 2 2 2 0
> 0 0 1 0 2 0 0
> 0 1 1 1 1 1 0
> 0 2 0 0 0 2 0
> 0 0 0 2 2 2 0
> 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0
> 0 0 0 1 2 1 0
> 0 1 1 0 0 0 0
> 0 2 1 2 0 2 0
> 0 1 0 0 2 2 0
> 0 1 0 1 2 2 0
> 0 0 0 0 0 0 0
> )
>
>    k =: _3]\".;._2(0 :0)
> 1  0 0
> 1 _1 0
> _1 _1 1
> 0 _1 1
> 0  0 1
> 0 _1 1
> 1  0 1
> 0 _1 0
> 0 _1 0
> )
>
>    $i NB. 3 7 7
>    $k NB.  3 3 3
>
>    filter =: k
>    convFunc=: +/@:,@:*
>
>    cf=: 4 :  '|:"2 |: +/ x filter&(convFunc"3 3);._3 y'
>    (1 2 2,:3 3 3) cf"3 i NB. 3 3$1 1 _2 _2 3 _7 _3 1  0
>
> My next example makes both the `filter` and the RHA into tensors. And
> notice the shape of the result shows it is a tensor, also.
>
>    filter2 =: filter,:_1+filter
>    cf2=: 4 :  '|:"2 |: +/ x filter2&(convFunc"3 3);._3 y'
>    $ (1 2 2,:3 3 3) cf2"3 i,:5+i NB. 2 2 3 3
>
> Much of my effort regarding CNNs has been studying the literature that
> discusses efficient ways of computing these convolutions by translating the
> filters and the image data into flattened (and somewhat sparse) forms that
> can be restated in matrix formats. These matrices accomplish the
> convolution and deconvolution as *efficient* matrix products. Your
> demonstration of the way that J's ;._3 can be so effective challenges the
> need for such efficiencies.
>
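> For a single 2-d channel, that flattening ("im2col") trick is itself nearly
> a one-liner with ;._3 (my own sketch, using a hypothetical 7 7 image im and
> 3 3 filter k2, not code from your script):
>
>    p =: (1 1,:3 3) (,;._3) im  NB. each cell of the 5 5 frame is a raveled 3 3 patch
>    p +/ . * , k2               NB. the whole convolution as one generalized matrix product
>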
> On the other hand, I could use some help understanding how the 1 0 2 3 |:
> transpose you apply to `filter` is effective in the backpropagation stage.
> Part of my confusion is that I would have thought the transpose would have
> been 0 1 3 2 |:, instead. Can you say more about that?
>
> I have yet to try to understand your verbs `forward` and `backward`, but I
> look forward to doing so.
>
> I could not find definitions for the following functions and wonder if you
> can say more about them, please?
>
> bmt_jLearnUtil_
> setSolver
>
> I noticed that your definitions of relu and derivRelu were more complicated
> than mine, so I attempted to test yours out against mine as follows.
>
>    relu     =: 0&>.
>    derivRelu =: 0&<
>    (relu -: 0:`[@.>&0) i: 4
> 1
>    (derivRelu -: 0:`1:@.>&0) i: 4
> 1
>
> On Sun, Apr 14, 2019 at 8:31 AM jonghough via Programming <
> [email protected]> wrote:
>
> >  I had a go writing conv nets in J.
> > See
> > https://github.com/jonghough/jlearn/blob/master/adv/conv2d.ijs
> >
> > This uses ;._3 to do the convolutions. Using a version of this, with a
> > couple of fixes, I managed to get 88% accuracy on the cifar-10 imageset.
> > Took several days to run, as my algorithms are not optimized in any way,
> > and no gpu was used.
> > If you look at the references in the above link, you may get some ideas.
> >
> > the convolution verb is defined as:
> > cf=: 4 : 0
> > |:"2 |: +/ x filter&(convFunc"3 3);._3 y
> > )
> >
> > Note that since the input is a batch of images, each 3-d (width, height,
> > channels), we are actually doing the whole forward pass over a 4d array,
> > and outputting another 4d array of different shape, depending on output
> > channels, filter width, and filter height.
> >
> > Thanks,
> > Jon
> >
>
> Thank you,
>
> --
> (B=)
> ----------------------------------------------------------------------
> For information about J forums see http://www.jsoftware.com/forums.htm



-- 

Devon McCormick, CFA

Quantitative Consultant
