If any of this is implemented, it would make sense to me to have u b. n
query the attributes.
Henry Rich
On 1/10/2023 6:18 PM, 'Pascal Jasmin' via Programming wrote:
u/ P. 'Associative' and P:
would let you/us describe/annotate verbs and invoke associative optimizations
on them.
The Associative adverb or attribute applies only to functions (it doesn't make
sense for data).
It also applies only to insert operations, and then as an adverb it could all
be built in.
Exactly,
e=. &.>
u=. ('('"_ , 'u '"_ , ] , ')'"_) :('('"_ , [ , ' u '"_ , ] , ')'"_)
YS=. <;._1 ' Y0 Y1 .. YN'
u e/\YS
┌──┬─────────┬────────────────┬───────────────────────┐
│Y0│(Y0 u Y1)│(Y0 u (Y1 u ..))│(Y0 u (Y1 u (.. u YN)))│
└──┴─────────┴────────────────┴───────────────────────┘
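(An added illustration, reusing the u, e and YS just defined together with the
scan adverb that appears elsewhere in this thread: the linear suffix-scan idiom
produces the left-to-right grouping, which is exactly the grouping an
associativity declaration would let the interpreter substitute for u/\.)
scan=. ((~/)\.)(&.|.)
u e scan YS
┌──┬─────────┬────────────────┬───────────────────────┐
│Y0│(Y0 u Y1)│((Y0 u Y1) u ..)│(((Y0 u Y1) u ..) u YN)│
└──┴─────────┴────────────────┴───────────────────────┘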
Intuitively, it would make sense to me to have a conjunction, e.g. P., to
set properties, provided as literal characters, on a verb (or, if it
became desirable for other use cases, on nouns as well), and a
corresponding adverb (in line with b.) to query them. I bet there are other
properties
scan=. ((~/)\.)(&.|.)
-scan p:i.5
2 _1 _6 _13 _24
-/\ p:i.5
2 _1 4 _3 8
--
Raul
On Tue, Jan 10, 2023 at 3:18 PM Jose Mario Quintana
wrote:
Indeed,
scan=. ((~/)\.)(&.|.)
%&:{./"_1 +/ .* /\ (1 0,:~,&1)"0 ]10$1
1 2 1.5 1.66667 1.6 1.625 1.61538 1.61905 1.61765 1.61818
%&:{./"_1 +/ .* scan (1 0,:~,&1)"0 ]10$1
1 2 1.5 1.66667 1.6 1.625 1.61538 1.61905 1.61765 1.61818
stp=. ] (([ ((<;._1 '|Sentence|Space|Time|Space * Time') ,
> S.
I was going to suggest some foreign equivalent to S. But this approach is
sufficient. A: (for (A)ssociative declaration) would be another candidate. If
A: (or a foreign adverb) were used for this, u S. could be a declaration/hint
that the y argument to u is in sorted order.
plus =: + A
J uses +/!.0 for accurate sums.
+/!.0 ] 1 1e_30 _1
1e_30
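(For comparison, an added check: forcing a plain element-by-element sum with
the ([+])/ spelling mentioned elsewhere in the thread loses the small term
entirely.)
([ + ])/ 1 1e_30 _1
0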
Henry Rich
On 1/9/2023 9:38 PM, Marshall Lochbaum wrote:
I'm aware but I think consistent results are more important than speed.
BQN's approach is to provide •math.Sum for fast sums, which is a little
over 10x faster for floats on AVX2 as you say.
:) for more, see guy blelloch's classic little book of scans
https://www.cs.cmu.edu/~guyb/papers/Ble93.pdf
On Mon, 9 Jan 2023, Omar Antolín Camarena wrote:
For those who, like me, were unaware of efficient parallel scan algorithms for
associative operations, Wikipedia has the basics, at least
My preference is to allow the user to specify what transformations they would
like to permit the implementation to perform in what contexts, as recommended
by IEEE 754 (sec. 10.4). Perhaps an adverb S., such that [x] u S. y applies u
with strict fp semantics. Or perhaps a function attribute, sp
Like I said, with a little care, I think it is possible to come up with an
ordering that works everywhere and can be committed to. I think this basically
looks like folding the array over on top of itself repeatedly; that requires
log n space, with a constant factor corresponding to the degree of parallelism.
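(An added sketch of that combination order, with a hypothetical name pairwise;
it only illustrates the pairing, not the log n space streaming version. Each
pass adds the second half of the array onto the first, halving the length.)
pairwise=. 3 : 0
 s=. y
 while. 1 < # s do.
  k=. >. -: # s
  s=. (k {. s) + k {. k }. s  NB. fold tail onto head; a short tail is 0-padded
 end.
 {. s
)
pairwise 1 1e_30 _1           NB. the same terms as the +/!.0 example above
1e_30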
I'm aware but I think consistent results are more important than speed.
BQN's approach is to provide •math.Sum for fast sums, which is a little
over 10x faster for floats on AVX2 as you say.
Marshall
On Mon, Jan 09, 2023 at 09:26:12PM -0500, Henry Rich wrote:
It's impossible NOT to 'rearrange' +/, if you want speed. With 1 ALU,
you add 0+1+2... except: suppose your add latency is 4? Then you have 4
accumulators and interleave the addition of 0+4+8+12, 1+5+9+13, etc.
But now AVX comes along and you add 4 at a time, so with 4 accumulators
you are a
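(A small added illustration of that accumulator pattern, using integers so the
value is unaffected: lane j collects elements j, j+4, j+8, ...; with floats
each lane rounds independently, so the combined result can differ from a
single running sum.)
y=. i. 10
+/ _4 ]\ y      NB. the 4 lane sums: 0+4+8, 1+5+9, 2+6, 3+7
12 15 8 10
+/ +/ _4 ]\ y   NB. lanes combined; 45 either way for integer data
45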
You will be assured of no imprecision if all terms are in the range that can
be faithfully represented by fp. Since these matrices are positive, and
therefore monotonically increasing, it suffices to check the final result.
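(A quick added check of that criterion for the Fibonacci-matrix example used
elsewhere in the thread: the largest entry of the final product stays far
below 2^53, the bound up to which integers are exactly representable in IEEE
doubles, so the floating-point matrix products here are exact.)
(2^53) > >./ , +/ .* / (1 0,:~,&1)"0 ] 10#1
1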
On Mon, 9 Jan 2023, Marshall Lochbaum wrote:
Yes, my point was that I don't think you can reliably determine in
advance whether a matrix product scan will invoke floating point
imprecision.
Well, true, I'm not in favor of rearranging +/ either. The dangers of
floating point don't include nondeterminism, unless you make them.
However, I also think matrix products have it worse. Numbers with widely
varying exponents are a bit of an edge case. But when you're multiplying
a few large mat
(I forgot about that. It would have to go too, in such a mode. Fortunately
AVX-512 has an (almost) full-width integer multiply. Or do it with extendeds,
which should be the same speed once I'm done with them.)
On Mon, 9 Jan 2023, Henry Rich wrote:
The implementation is floating point.
Henry Rich
Yes, my point was that I don't think you can reliably determine in
advance whether a matrix product scan will invoke floating point
imprecision.
Marshall
On Mon, Jan 09, 2023 at 08:25:43PM -0500, Henry Rich wrote:
The implementation is floating point.
Henry Rich
On Mon, Jan 9, 2023, 8:24 PM Marshall Lochbaum wrote:
The matrices in my example are integral.
Marshall
On Mon, Jan 09, 2023 at 05:20:49PM -0800, Elijah Stone wrote:
I've thought in the past that it would be nice to have a mode for reproducible
fp, but there is none yet (try +/ vs ([+])/). But it is moot in this case, as
the matrices in question were integral.
(I don't think strict interpretation per se is necessary; just something
reliable and consistent.)
But that's just normal floating-point non-associativity. It happens even for
addition of "integers":
1 + (_1e19 + 1e19)
1
(1 + _1e19) + 1e19
0
People using floating point are probably aware of the dangers or at least
should be.
--
Omar
I don't think you'd want to reorder matrix multiplications as that can
change the results by an arbitrary amount. Here's an example where
multiplication on one side cancels but not on the other side.
(+/ .*&.>/ , +/ .*&.>~/@:|.) 1e6 1e7;(1e13+=i.2);1 _1
┌────────┬──────────┐
│_9000000│_8.99482e6│
└────────┴──────────┘
If f is known to be associative, f/\ proceeds from start to end in
linear time. Elijah has thoughts about how you could inform the
interpreter that a function is associative. Until that is implemented,
you can use f/\.&.|., which runs in linear time, or you can use Fold.
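(A quick added check of the idiom with an associative verb: both scans give
the same result.)
*/\ 1 + i. 6
1 2 6 24 120 720
*/\.&.|. 1 + i. 6
1 2 6 24 120 720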
Henry Rich
On 1/9/2023 1:2
Am I right in thinking that the J interpreter does use a linear time algorithm
for u/\ for some u including + and *? If so, maybe the case of +/ . * is worth
adding.
--
Omar
f/\ is quadratic but f/\. shouldn't be.
Henry Rich
On 1/9/2023 11:20 AM, Marshall Lochbaum wrote:
It's not just Fibonacci numbers. This is equivalent to a general method
for computing continued fraction convergents in linear time, using
matrix multiplication. There's a nice explanation of why it works
here:
https://perl.plover.com/classes/cftalk/INFO/gosper.txt
And a J version, with phi's
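(An added sketch of that general method, with hypothetical names cfm and conv:
each partial quotient a becomes the 2x2 matrix a,1 ,: 1,0, and the convergents
are the ratios of the first-column entries of the running matrix products,
read off as in the %&:{. examples elsewhere in the thread.)
cfm=. (1 0 ,:~ ,&1)"0
conv=. %&:{./"_1 @: (+/ .* /\) @: cfm
conv 1 2 2 2 2   NB. convergents of sqrt 2 = [1; 2 2 2 ...]
1 1.5 1.4 1.41667 1.41379
conv 10 # 1      NB. all-ones quotients give the phi convergents
1 2 1.5 1.66667 1.6 1.625 1.61538 1.61905 1.61765 1.61818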
For those who, like me, were unaware of efficient parallel scan algorithms for
associative operations, Wikipedia has the basics, at least enough to convince
me these algorithms exist and are actually not that difficult! :) I'm very glad
I learned this, thanks Elijah.
https://en.wikipedia.org/wi
Can we generate phi, the golden ratio, in parallel?
Of course we can! What follows is an exposition of a classic method for doing
it, which may be of interest; I have not seen it satisfactorily described
elsewhere.
The classic method for generating phi in J uses a continued fraction:
(+%)/10#1
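(An added check: the serial continued fraction and the matrix-product form
used elsewhere in the thread agree on the 10-term approximation.)
(+%)/ 10 # 1
1.61818
%/ {. +/ .* / (1 0,:~,&1)"0 ] 10#1
1.61818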