I tested with J calling lapack for matrix multiplication with the
following script,
NB. extern dgemm_(char * transa, char * transb, int * m, int * n, int * k,
NB. double * alpha, double * A, int * lda,
NB. double * B, int * ldb, double * beta,
NB. double * C, int * ldc)
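The script itself did not survive in the archive; for context, a sketch of how the corresponding cd binding might look (assuming the standard dll library and a BLAS shared object named libblas.so, which is system-dependent; argument marshalling details are illustrative only):

```j
require 'dll'
NB. ASSUMPTION: library name 'libblas.so'; on other systems it may differ.
NB. Fortran dgemm_ takes every scalar by reference, hence the * on each type.
dgemm =: 'libblas.so dgemm_ n *c *c *i *i *i *d *d *i *d *i *d *d *i'&cd

NB. A call would pass a boxed argument list, e.g.
NB.   dgemm (,'n');(,'n');m;n;k;alpha;A;lda;B;ldb;beta;C;ldc
NB. Note BLAS is column-major while J is row-major, so operands must be
NB. transposed or swapped; the updated C comes back in cd's boxed result.
```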
name =: verb define
smoutput 'time at point 1: ' , ": 6!:1''
NB. do something
smoutput 'time at point 2: ' , ": 6!:1''
NB. etc.
)
untested
Henry Rich
On 4/20/2017 8:15 PM, Michael Goodrich wrote:
Henry
You could save me some time if you could post a code example
For information about J forums see http://www.jsoftware.com/forums.htm
Roger that - I'll give it a whirl.
Any other reports of perf issues?
You can use 6!:1'' to get the session time (number of seconds in the
current session) and type that out in different places. See where the
difference between 8.05 and 8.06 shows up.
If you can narrow it down to a small snippet that runs slower, I will
look into it.
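In addition to 6!:1, the sentence timer 6!:2 (seconds to execute a sentence) can help isolate which primitive slowed down; a sketch exercising the areas mentioned elsewhere in the thread (the i.-family, sort/grade, and +/ . *), with array sizes chosen arbitrarily:

```j
a =: 1e6 ?@$ 0               NB. a million random floats
smoutput 6!:2 'a i. a'       NB. i.-family: self index-of
smoutput 6!:2 '/:~ a'        NB. sort/grade
m =: 500 500 ?@$ 0
smoutput 6!:2 'm +/ . * m'   NB. matrix product
```

Running the same sentences under 8.05 and 8.06 should show where the difference appears.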
Henry Rich
On 4/20/2017
Not quite sure how to do that but I posted the app in weirdness #2
Sent from my iPhone
> On Apr 20, 2017, at 6:58 PM, Henry Rich wrote:
>
> If you can isolate what's slower that would be helpful. The changes are in
> the i.-family, sort/grade, and +/ . * .
>
> Henry Rich
>
>> On 4/20/2017
On 4/20/2017 6:54 PM, Michael Goodrich wrote:
First test of my neural net app is much slower - 20 sec vs. 8 sec for 805
stable 😩.
Sent from my iPhone
> On Apr 17, 2017, at 1:38 PM, David Mitchell wrote:
>
> Here are my results with beta-3:
>
> 2017 4 16 4 41
> Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz
> j805/j64/windows/beta-12/comm
Thanks. Have you seen
https://www.astro.umd.edu/~jph/J_page.html ?
It has lots of number-crunching code.
On Thu, Apr 20, 2017 at 11:23 AM, Raul Miller wrote:
You probably do not need a readtensor verb for your application.
Those capabilities are waiting there in J for you to use them*,
however most (maybe all?) applications tend to need only a small
fraction of the general capabilities implemented in J.
* As an example use of higher dimensioned arrays
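The footnote is cut off in the archive; for flavor, a minimal sketch of working with a higher-dimensioned (rank 3) array in J:

```j
t =: i. 2 3 4    NB. rank 3 array: 2 planes, each a 3 x 4 table
$ t              NB. shape vector: 2 3 4
# $ t            NB. rank: 3
+/ t             NB. sum over the leading axis -> a 3 x 4 table
+/"1 t           NB. sum within each row -> a 2 x 3 table
```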
Xiao,
One more timing result: the compute-pattern-optimized R version takes
about 30 sec, meaning it is 3-4X slower than J, about 50% slower than
non-compute-pattern-optimized C, and 10X slower than compute-pattern-optimized C.
On Tue, Apr 18, 2017 at 8:21 PM, Michael Goodrich <
michael.goodr.
I dug out the pre-compute-pattern-optimized C version; it is about twice
as slow as the J version. OTOH the compute-pattern-optimized C version is
about 2-3X faster than the J version; it took some analysis and refactoring
to achieve this, however, and it would be nice to focus on the application
ra
Thanks much, Henry. I see your point. I notice that coercing a list (a rank
1 data object) into an Nx1 "rank 2" data object implies that J considers
it rank 2, since #$ will evidently return 2, whereas the data would seem to
actually be a rank 1 tensor (a column vector), and similarly a 1xN row
vec
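The point can be checked directly at the session prompt (a sketch; ,. ravels items, turning a list into an Nx1 table, and ,: itemizes it into a 1xN table):

```j
v =: 1 2 3    NB. a list: rank 1
$ v           NB. shape 3
# $ v         NB. rank: 1
m =: ,. v     NB. an N x 1 table (column)
$ m           NB. shape 3 1
# $ m         NB. rank: 2
r =: ,: v     NB. a 1 x N table (row)
$ r           NB. shape 1 3
```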
Raul,
I see your point. I was expecting too much from 'readtable', and I
apparently need a 'readtensor' verb for my app.
On Sun, Apr 16, 2017 at 4:02 PM, Raul Miller wrote:
> On Sun, Apr 16, 2017 at 2:26 PM, Michael Goodrich
> wrote:
> > Why does J not treat a column of numbers as a N by 1 'm