On Fri, 20 May 2022, Raul Miller wrote:

A space-efficient representation of a relatively sparse directed graph would be a two-column matrix, with one column representing connections and the other column representing node ids.

Yes--that was one of the representations I was thinking of, but it makes random access hard. One alternative is a pair of parallel vectors which can be fed to i.. This has the advantage that, as with adjacency matrices, it is easy to flip the arrows when that is convenient.
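For concreteness, a minimal sketch of what I mean (the names are made up): edge k runs from k{FROM to k{TO, successors come from filtering against FROM, and flipping the arrows is just swapping the roles of the two vectors.

   NB. hypothetical names; edge k goes from (k{FROM) to (k{TO)
   FROM =: 0 0 1 2 3
   TO   =: 1 2 3 3 0
   succ =: {{ (FROM = y) # TO }}    NB. successors of node y
   pred =: {{ (TO = y) # FROM }}    NB. arrows flipped: predecessors of y
   succ 0
1 2
   pred 3
1 2
   FROM i. 2      NB. first edge out of node 2 (here FROM happens to be sorted)
3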


But for highly optimized micro-modules you want a compiler which automatically detects and removes as many unnecessary mechanisms as possible from the implementation.

And, for us to have that in J, we would probably want a J compiler for some constrained subset of the language, and a systematic and well-documented way of calling between J and these compiled artifacts (along with callbacks for compiled adverbs and conjunctions).

I have tentative plans for something like this. But I would like to point out two things:

1. The calling convention should be effortless, because verbs are transparent. It should be possible for a user to define an adverb 'compile' such that [x] u compile y behaves as [x] u y but is faster, for a subset of verbs u (see the sketch after this list).

2. A restricted subset of J is not necessary to get good performance, as demonstrated e.g. by http://www.softwarepreservation.org/projects/apl/Papers/DYNAMICINCREMENTAL. This sort of thing does require support from the underlying implementation, though.
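To make (1) concrete, here is a do-nothing placeholder with the interface I have in mind. The trivial body is mine: it just hands back u unchanged, where a real version would substitute a compiled artifact. The point is that the call site already looks the way it should.

   compile =: {{ u }}      NB. placeholder adverb: returns u as-is
   mean =: +/ % #
   mean compile 1 2 3 4
2.5
   2 ^ compile 10          NB. dyadic case: x u compile y behaves as x u y
1024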


The original design of APL (inherited by J) was rooted in the specification of machine architecture. So it shouldn't be too surprising that there's a good match between machine operations and language primitives.

APL was originally designed as a notation for _humans_ to use on blackboards.


Pointer chasing maps onto arrays as indexing.

'Sort of'. If i. is the prototype of a hash table, and I. is the prototype of a tree search, then { and } are the prototypes of random access (vs. stream processing). But the language does not give you good tools for _chasing_ pointers. If you are using { and }, you had better be giving them a large list of indices--this is the sort of thing the GPU loves, because it has enough cores to hide the latency of such an operation--and that's just stream processing with bells on.
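A concrete (made-up) illustration of those three roles:

   keys =: 10 30 20 50 40
   keys i. 20 50             NB. i. as hash lookup: index-of
2 3
   (/:~ keys) I. 25 45       NB. I. as tree search: interval index into sorted keys
2 4
   (2 0 4) { 100 * i. 5      NB. { as random access, fed a whole list of indices
200 0 400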

I speak of _chasing_ pointers, not just following them; that means deep dependency chains. Coming back to the issue of BVHs (bounding volume hierarchies): I can imagine an efficient BVH lookup for many keys implemented in pure J using index-following. But for an efficient search for just one key, it is necessary to leverage primitives, and I don't know of a good way to do that. (I recognise this is moving the goalposts a bit; sorry about that. But this is the problem I was trying to solve.)

To give an example, here is a compacting GC I wrote in just a couple of lines of J for a simple Lisp implementation: https://github.com/moon-chilled/j/blob/master/lisp/lisp.ijs#L73. The actual 'compacting' part is rather small and neat. That's because it can treat memory as an essentially flat structure. But the _tracing_ part does a lot of extraneous work. I'm sure somebody cleverer than I could do a better job. But, I would argue, this is the sort of solution the language directed me towards.
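(Not the linked code, just a generic illustration of why the compacting step stays small when memory is flat: given mark bits from a trace, the compacted heap and the forwarding table are each a single expression.)

   mem  =: 3 1 4 1 5 9 2     NB. pretend heap cells
   live =: 1 0 1 1 0 0 1     NB. mark bits, assumed to come from the trace
   live # mem                NB. compacted heap: keep only the live cells
3 4 1 2
   <: +/\ live               NB. forwarding: old index -> new index (meaningful at live cells)
0 0 1 2 2 2 3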

 -E
