Hi Elijah,

Thanks for your suggestions; they've been really helpful. The idea of
having an apply node is exactly what I was missing.
With that, verbs can be leaves as well as nouns, so both can coexist
in the same egraph (a tiny illustration below).
My hunch is that this is desirable, as it allows:
- keeping semantics out of the graph construction;
- treating applied verbs as monads, dyads or both;
- working at the verb-noun level and at the adverb/conjunction-verb
level at the same time.
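
As a small aside on the verbs-as-leaves point: gerunds already let
verbs sit in noun position as boxed ARs, which is essentially the same
trick. A throwaway example:

   g =. +`-`*:       NB. a gerund: three verbs stored as nouns (their ARs)
   # g
3
   (g @. 1) 5        NB. recover the second verb and apply it
_5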

Whether all of this is a good thing, time will tell; I still have to
see whether it works out.

Jan-Pieter

On Sat, 8 Oct 2022 at 00:22, Elijah Stone <elro...@elronnd.net> wrote:
>
> Distinguish:
>
> 1. Notional function (f+)
>
> 2. Monadic application of a function (m+, conjugate)
>
> 3. Dyadic application of a function (d+, add)
>
> Semantically, 2 and 3 are distinct from each other; but j puns them.  (And 1
> needs to be distinguished to bridge the gap with j semantics.)
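>
> A throwaway illustration of the pun, in plain J:
>
>    + 3j4       NB. 2: monadic application (conjugate)
> 3j_4
>    3 + 4       NB. 3: dyadic application (add)
> 7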
>
> The j -@:% is an application of d@: to f- and f%.  If you take the derived
> verb and apply it (I have a special 'apply' node), cf -@:% y, you can have a
> rewrite rule matching (apply (d@: fu fv) y) -> (mu (mv y)).  At some point
> this needs to be more sophisticated than simple rewrite rules, when you go to
> substitute and parse in an explicit definition (or even to substitute du for
> fu), but that is par for the course; it can still be treated uniformly.
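>
> A quick plain-J check of that rule's target for the monadic case,
> i.e. that (-@:%) y is - applied to % y:
>
>    y =. 2 3 4
>    ((-@:%) y) -: - % y
> 1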
>
> Re ambivalent verbs: usually the user will have a particular valence in mind
> and can specify that.  But if you care: split the verb into monadic/dyadic
> cases, apply to symbolic arguments, simplify, and then restore if necessary to
> round-trip.
>
> > in + + + + +, all +'s would collapse to a single enode
>
> Why?
>
> On Fri, 7 Oct 2022, Jan-Pieter Jacobs wrote:
>
> > Dear all,
> >
> > After recent discussion of the math/calculus addon, I started playing
> > around with what could become a symbolic toolkit for J, which could in turn
> > serve as a backend for other projects, like future versions of the
> > math/calculus addon, explicit-to-tacit conversion and vice versa, etc. It
> > should thus
> > remain as agnostic of verb meaning as possible.
> >
> > Current status
> > --------------------
> > I started out implementing egraphs as it looked like a good idea (you can
> > find my progress here: https://github.com/jpjacobs/general_sym). It is
> > based on this colab notebook I've found:
> > https://colab.research.google.com/drive/1tNOQijJqe5tw-Pk9iqd6HHb2abC5aRid?usp=sharing#scrollTo=4YJ14dN1awEA.
> > I've gotten far enough that I can convert ARs of forks and hooks into an
> > egraph structure, with place-holder arguments. For now, only named and
> > primitive components are allowed, no derived verbs resulting from applying
> > adverbs/conjunctions. I chose to work on ARs since all parts of speech
> > have an AR, and they are already conveniently parsed for recursive
> > treatment (discovered through my dabblings in math/calculus). My idea was
> > that the egraph should ignore the meaning of each individual part of
> > speech, only taking into account syntax. The meaning and equivalences would
> > then be handled by the rewrite rules applied to this egraph structure.
> >
> > Perhaps easier to show than to explain:
> >
> >  install'github:jpjacobs/general_sym'
> > installed: jpjacobs/general_sym master into folder: general/sym
> >  load'general/sym'
> >  eg=:sym''
> >  3 add__eg ((!+#*%+) arofu__eg)
> > 5
> >  en__eg
> > ┌───┬───┬─┐
> > │y::│   │0│
> > ├───┼───┼─┤
> > │+  │0  │1│
> > ├───┼───┼─┤
> > │*  │0  │2│
> > ├───┼───┼─┤
> > │%  │2 1│3│
> > ├───┼───┼─┤
> > │#  │1 3│4│
> > ├───┼───┼─┤
> > │!  │0 4│5│
> > └───┴───┴─┘
> > The above creates an empty graph, adds the (nonsensical) train
> > (!+#*%+) as a monad (x=3) and shows the resulting enodes in the
> > egraph, with y:: being what I came up with for the implied argument.
> > You can see that + is added only once: when it was found the second
> > time, adden noticed it was already present and applied to the same
> > arguments. I thought x::, y::, u:: and v:: were nice, since they are
> > syntactically recognised as J words (by e.g. ;:) but not yet used
> > (others I have in mind for later are d:: and D:: for deriv and
> > pderiv). The first column shows the arguments and verbs, the second
> > column their argument references; the third I added so you can easily
> > see where the argument references point to. For instance, ! takes as
> > arguments eclass id 0, i.e. y::, and eclass id 4, with 4 being the
> > result of # applied between eclasses 1 and 3, and so on. The same
> > works for dyads (try x=4), NVV forks and capped forks. The interface
> > should be made more user-friendly, but it's just a POC at the moment.
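> >
> > As a hand check of the table (plain J, picking an arbitrary scalar
> > argument), evaluating the enodes bottom-up reproduces the train:
> >
> >    y =. 3
> >    e0 =. y          NB. eclass 0: y::
> >    e1 =. + e0       NB. eclass 1: monadic +
> >    e2 =. * e0       NB. eclass 2: monadic *
> >    e3 =. e2 % e1    NB. eclass 3: % on eclasses 2 1
> >    e4 =. e1 # e3    NB. eclass 4: # on eclasses 1 3
> >    e5 =. e0 ! e4    NB. eclass 5: ! on eclasses 0 4
> >    e5 -: (!+#*%+) y
> > 1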
> >
> > At the moment, enodes (in en__eg) without arguments (i.e. leaves) are
> > nouns; verbs are enodes with 1 (monads) or 2 (dyads) arguments. The
> > numbers listed as arguments are, in egraph terminology, eclass ids,
> > and point to classes of enodes that all represent the same result
> > when executed. For now, eclasses are all singleton sets of enodes, as
> > no rewrites can be done yet.
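> >
> > A toy model of the deduplication, just to make the idea concrete (this
> > is not the addon's actual adden; ents and addnode are made-up names):
> >
> >    ents =: 0 2 $ a:     NB. enode table: op ; eclass ids of its arguments
> >    addnode =: 3 : 0
> >      i =. ents i. y     NB. row index of an identical enode, or #ents
> >      if. i = # ents do. ents =: ents , y end.   NB. unseen: append it
> >      i
> >    )
> >    addnode 'y::' ; ''   NB. gives 0
> >    addnode '+' ; 0      NB. gives 1
> >    addnode '+' ; 0      NB. gives 1 again: the duplicate collapses
> >
> > In the real egraph the returned id would go through a union-find to
> > get the eclass, but with only singleton eclasses the two coincide.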
> >
> > Now I tried adding conjunctions and adverbs to the mix and came to
> > the conclusion that my approach runs into trouble: to know how
> > arguments are applied to verbs, I would need to interpret the
> > conjunctions, something I would have preferred to leave to the
> > rewrite rules. The options I have considered so far:
> > - Treating applied conjunctions as opaque, monolithic blocks is
> > clearly not an option, since you then can't influence anything inside
> > their arguments; e.g. -@-@-@- could not be reduced to ].
> > - Giving operators up to 4 arguments (up to 2 for the operands of the
> > operator, u/v, and up to 2 for the derived verb) seems to make sense
> > at first sight, but breaks down on operators that generate operators
> > (e.g. 2 : '') or nouns (e.g. b. 0).
> > - Treating the verbs at a higher level, i.e. without caring (yet)
> > about the nouns serving as verb arguments, crossed my mind. However,
> > this does not let one distinguish which occurrence in a compound verb
> > an enode refers to: in + + + + +, all +'s would collapse to a single
> > enode, and it gets worse when they are not all at the same level.
> > - Interpreting adverbs and conjunctions and applying arguments to
> > verbs directly, thereby removing the need to store the conjunctions
> > and adverbs, sounds fine for simple operators like @, & and family,
> > but soon gets quite difficult to imagine for operators like /. or ;. .
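> >
> > A quick plain-J check that the -@-@-@- reduction mentioned above is
> > indeed the behaviour a rewrite rule should reproduce:
> >
> >    y =. 2 3 5
> >    (-@-@-@- y) -: ] y
> > 1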
> >
> > A last thing I wonder about is how to handle ambivalent verbs. At the
> > moment I add either monads or dyads, but have no way of indicating
> > that x:: might or might not be present. For instance, (#~ {."1) is
> > useful for selecting rows by the 0/1 values in the first column of y,
> > but can also be used to filter x based on y. The same holds for
> > (/: {."1).
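> >
> > To make the ambivalence concrete (a small throwaway session):
> >
> >    ] t =. 3 2 $ 1 10 0 20 1 30
> > 1 10
> > 0 20
> > 1 30
> >    (#~ {."1) t          NB. monadic: keep rows of y flagged by its first column
> > 1 10
> > 1 30
> >    'abc' (#~ {."1) t    NB. dyadic: filter x by the first column of y
> > ac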
> >
> > So, does anyone have a suggestion as to how to represent operators in
> > this kind of framework? Any guidance would be welcome. I could go on
> > implementing all the other machinery to do rewrites, but I feel I
> > need to get the basic representation right first.
> >
> > Thanks,
> > Jan-Pieter