Has anybody written a quadratic optimization solver in J? Or is there one in
any of the packages?
Examples: https://en.m.wikipedia.org/wiki/Quadratic_programming
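For what it's worth, the equality-constrained special case, min 1/2 x'Qx + c'x subject to Ax = b, reduces to a single linear KKT system, which %. solves directly. A sketch (qpeq is my own name, not from any package; general inequality constraints need an active-set or interior-point method on top of this):

qpeq =: 3 : 0
'Q c A b' =. y                        NB. min 1/2 x'Qx + c'x  s.t.  Ax = b
n =. # c
K =. (Q ,. |: A) , A ,. 0 $~ 2 # # b  NB. KKT matrix: (Q,A') over (A,0)
n {. ((-c) , b) %. K                  NB. first n entries of the solution are x
)

Q =: 2 2 $ 4 1 1 2
qpeq Q ; 1 1 ; (1 2 $ 1 1) ; 1
0.25 0.75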
A. 0 1 5 is the same as
A. 2 3 4 0 1 5
so the "missing" items seem to be implicitly placed before the specified items,
in order.
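If that reading is right, prefixing the missing items in ascending order should reproduce the anagram index; checking it against the examples quoted below:

(A. 1 3 5) = A. 0 2 4 1 3 5
1
(A. 5 1 3) = A. 0 2 4 5 1 3
1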
On Sat, Apr 30, 2016 at 8:32 AM -0700, "'Pascal Jasmin' via Programming"
wrote:
A. 1 3 5
36
A. 5 1 3
40
A. 1 5 3
37
Hi,
I don't have my PC at hand now, but what is the problem with your calculation?
I had never heard of Niven's constant, but looking at Wikipedia there is a
formula for it containing the zeta function.
Why not use this formula?
Zeta of 2, 3, 4, … are well-known constants, and you can calculate or hard-code them.
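For reference, the formula in question is C = 1 + sum over j>=2 of (1 - 1/zeta(j)), about 1.7052111401. A rough sketch in J (my own; zeta here is a truncated Dirichlet series, so only about six digits are trustworthy):

zeta =: 3 : '+/ % (>: i. 1e6) ^ y'             NB. truncated series, needs y > 1
niven =: 3 : '1 + +/ 1 - % zeta"0 (2 + i. y)'  NB. terms decay like 2^-j
niven 50
1.70521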
Yes, Raul is absolutely correct. And the flaw (in my solution, at least) is
obvious now. I'll try for a correct solution again tomorrow.
From: 'Mike Day' via Programming
Sent: Saturday, September 3, 00:26
Subject: Re: [Jprogramming] Greatest Increasing Subsequence
To: 'Mike Day' via Programming
I had a go writing conv nets in J.
See
https://github.com/jonghough/jlearn/blob/master/adv/conv2d.ijs
This uses ;.3 to do the convolutions. Using a version of this, with a couple
of fixes, I managed to get 88% accuracy on the CIFAR-10 image set. It took several
days to run, as my algorithms are not optimized.
My next example makes both the `filter` and the RHA into tensors. And
notice the shape of the result shows it is a tensor, also.
filter2 =: filter ,: _1 + filter
cf2 =: 4 : '|:"2 |: +/ x filter2&(convFunc"3 3);._3 y'
$ (1 2 2 ,: 3 3 3) cf2"3 i ,: 5+i   NB. 2 2 3 3
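For anyone puzzling over the left argument of ;._3 here: the first row is the movement (stride) per axis and the second row is the window shape, so 1 2 2 ,: 3 3 3 slides a 3 3 3 window by 1 along the leading axis and by 2 along the others. A standalone shape check (my example, separate from the thread's cf2):

$ (1 2 2 ,: 3 3 3) ];._3 i. 3 7 7
1 3 3 3 3 3

That is one position along the leading axis and a 3 by 3 grid of positions along the trailing axes, each cell a full 3 3 3 window (;._3 keeps only complete windows).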
Much of my effort regarding CNNs has been studying the literature at this stage.
Part of my confusion is that I would have thought the transpose would be
0 1 3 2 |: instead. Can you say more about that?
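They do differ; a quick shape check on a rank-4 argument (my example, not from the thread):

$ |:"2 |: i. 2 3 4 5
5 4 2 3
$ 0 1 3 2 |: i. 2 3 4 5
2 3 5 4

Monadic |: reverses all the axes and |:"2 then transposes each rank-2 cell, while 0 1 3 2 |: swaps only the last two axes.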
I have yet to try to understand your verbs `forward` and `backward`, but I
look forward to doing so.
I could not find definitions for the following functions and w…
|assertion failure: create__w
|   4 = #shape
I am pretty sure you are using different `create`s and are using them in
unstated `cocurrent` environments. Would you mind providing the J environment
setup at the start of this example?
This most recent example with 5 3 8 8 shaped tensors is likely to be
exactly what I need.
|:"2 |: +/ x filter&(convFunc"3 3);._3 y'
> > (1 2 2,:3 3 3) cf"3 i NB. 3 3$1 1 _2 _2 3 _7 _3 1 0
> >
> > My next example makes both the `filter` and the RHA into tensors. And
> > notice the shape of the result shows it is a tensor, also.
Sorry, as I said in a previous email, the example I gave with runConv will not
work, as it was made for a much older version of the project. Please try this
as is, in jqt.
NB. ==
A1 =: 3 8 8 $ 1 1 1 1 1 1 1 1, 0 0 0 0 0 0 0 0, 0 0 0 0 0 0 0 0, 1 1 1 1 1 1 1 1, 0 0 0 …
load'/Users/brian/j64-807-user/projects/jlearn/init.ijs'
1
Test success Simple GMM test, diagonal covariance
...
load jpath '~temp/simple_conv_test.ijs'
not found: /users/brian/j64-807-user/temp/simple_conv_test.ijs
> so I should run `OUTPUT fit__pipe INPUT` 2 or 3 more times.
Yes, I think so. After two or three more times, you should get everything
correct: 100% accuracy.
> What does the other output mean? For example, what is the alternating 1 and 2,
> what is 1...20, what is 10?
There are 15 images. When we constructed … different.
>
> Thanks,
>
> --
> Raul
>
> On Thu, Apr 18, 2019 at 8:13 PM jonghough via Programming
> wrote:
> >
> > The convolution kernel function is just a straight-up elementwise
> > multiply and then sum all; it is not a dot product or matrix product.
>
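In J that kernel is a one-liner. A minimal sketch (mulsum, conv2d, and the kernel k are names I made up here; presumably the thread's convFunc plays the mulsum role):

mulsum =: [: +/ [: , *                          NB. elementwise multiply, sum all
conv2d =: 4 : '(1 1 ,: $x) (x&mulsum);._3 y'    NB. stride-1 "valid" correlation

k =: 3 3 $ 0 1 0 1 _4 1 0 1 0                   NB. a Laplacian-like kernel
$ k conv2d i. 8 8
6 6

Strictly speaking this is cross-correlation (no kernel flip), which is what most deep-learning code calls convolution anyway.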
> …07-user\projects\jlearn\init.ijs
> | 0!:0 y[4!:55<'y'
> |script[0]
> |fn[0]
> | fn fl
> |load[:7]
> | 0 load y
> |load[0]
> |
> load'c:\Users\devon_mccormick\j64-807-user\projects\jlearn\init.ijs'
>
> The arguments to "dot&quo
I think you may be right. Thanks for pointing this out. However, since my
networks mostly work, I am going to assume that having too many biases doesn't
negatively impact the results, except for adding "useless" calculations. If you
are correct, I should fix this.
I have edited the source on a…
…under the assumption that this is "wd" defined in JQt and that
this is some sort of progress message. Is this correct?
Thanks,
Devon
On Sun, Apr 28, 2019 at 9:20 AM jonghough via Programming <
[email protected]> wrote:
> I think you may be right. …
The locales may be a bit confusing, and if they are slowing down the training,
then I will definitely rethink them. The main idea is that
every layer is its own object and conducts its own forward and backward passes
during training and prediction.
Every layer, including Conv2D, LSTM, SimpleLayer, …
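As a sketch of that pattern (my own minimal example, not jlearn's actual classes or method signatures): each layer lives in its own locale with create, forward, and backward verbs, and a network is just a list of such objects.

coclass 'SketchLayer'
mp =: +/ . *                      NB. matrix product

create =: 3 : 0
W =: 0.01 * <: +: ? y $ 0         NB. y is (inputs,outputs); small random weights
)

forward =: 3 : 0
X =: y                            NB. cache the input for the backward pass
X mp W
)

backward =: 3 : 0                 NB. y is the gradient from the next layer
gW =: (|: X) mp y                 NB. weight gradient (an optimizer would use it)
y mp |: W                         NB. gradient handed to the previous layer
)

cocurrent 'base'

l =: (4 3) conew 'SketchLayer'    NB. a 4-in, 3-out layer object
$ forward__l ? 2 4 $ 0            NB. push a batch of 2 through it
2 3
$ backward__l ? 2 3 $ 0           NB. and push a gradient back
2 4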
This looks very interesting. Sorry, I am traveling until next week so cannot
give it much more than a quick look through at the moment. Next week I will try
to run it.
By the way, following your advice and the issues you discovered with my convnet
(bias shape in particular), I am refactoring my source…