Full support for general, first-class dependent types requires
some changes to the Axiom design, which raises several
language design questions.

Given that mathematics is such a general subject, with a lot of
"local" notation and ideas (witness logical judgment notation),
careful thought is needed to design a language able to handle
a wide range of notations and ideas.

Normally language design is a two-level process. The language
designer creates a language and then an implementation. Various
design choices affect the final language.

There is "The Metaobject Protocol" (MOP)
https://www.amazon.com/Art-Metaobject-Protocol-Gregor-Kiczales/dp/0262610744
which encourages a three-level process. The language designer
works at a Metalevel to design a family of languages, then the
language specializations, then the implementation. A MOP design
allows the language user to optimize the language to their problem.
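
As a small illustration of the idea, here is a minimal MOP sketch
in CLOS (assuming SBCL's sb-mop package; the class names are
invented for the example). A metaclass changes what it means to
create an instance, without touching the base language:

  (defclass counted-class (standard-class)
    ((instances :initform 0 :accessor instances-of))
    (:documentation "A metaclass that counts instance creation."))

  (defmethod sb-mop:validate-superclass
      ((class counted-class) (super standard-class))
    t)

  (defmethod make-instance :after ((class counted-class) &rest initargs)
    (declare (ignore initargs))
    (incf (instances-of class)))

  (defclass point () ((x) (y)) (:metaclass counted-class))

  ;; (make-instance 'point)
  ;; (instances-of (find-class 'point)) => 1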

A simple paper on the subject is "Metaobject Protocols"
https://users.cs.duke.edu/~vahdat/ps/mop.pdf

Tim


On Mon, Oct 25, 2021 at 7:42 PM Tim Daly <axiom...@gmail.com> wrote:

> I have a separate thread of research on Self-Replicating Systems
> (ref: Kinematic Self-Replicating Machines
> http://www.molecularassembler.com/KSRM.htm)
>
> which led to watching "Strange Dreams of Stranger Loops" by Will Byrd
> https://www.youtube.com/watch?v=AffW-7ika0E
>
> Will referenced a PhD Thesis by Jon Doyle
> "A Model for Deliberation, Action, and Introspection"
>
> I also read the thesis by J.C.G. Sturdy
> "A Lisp through the Looking Glass"
>
> Self-replication requires the ability to manipulate your own
> representation in such a way that changes to that representation
> will change behavior.
>
> This leads to two thoughts in the SANE research.
>
> First, "Declarative Representation". That is, most of the things
> about the representation should be declarative rather than
> procedural. Applying this idea as much as possible makes it
> easier to understand and manipulate.
>
> Second, "Explicit Call Stack". Function calls form an implicit
> call stack. This can usually be displayed in a running lisp system.
> However, having the call stack explicitly available would mean
> that a system could "introspect" at the first-class level.
>
> These two ideas would make it easy, for example, to let the
> system "show the work". A common complaint is that a system
> presents an answer but there is no way to know how that answer
> was derived. These two ideas make it possible to understand,
> display, and even manipulate the intermediate steps after the
> answer is produced (see the sketch below).
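>
> As a tiny sketch of "showing the work" (plain Lisp; *steps* is an
> invented name), a function can record its own intermediate steps
> for later display and manipulation:
>
>   (defvar *steps* nil)
>
>   (defun gcd-traced (a b)
>     "Euclid's algorithm, recording each step for later display."
>     (push (list 'gcd a b) *steps*)
>     (if (zerop b) a (gcd-traced b (mod a b))))
>
>   ;; (gcd-traced 48 30) => 6
>   ;; (reverse *steps*)  => ((GCD 48 30) (GCD 30 18) (GCD 18 12)
>   ;;                        (GCD 12 6) (GCD 6 0))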
>
> Having the intermediate steps also allows proofs to be
> inserted in a step-by-step fashion. This aids the effort to
> have proofs run in parallel with computation at the hardware
> level.
>
> Tim
>
>
>
>
>
>
> On Thu, Oct 21, 2021 at 9:50 AM Tim Daly <axiom...@gmail.com> wrote:
>
>> So the current struggle involves the categories in Axiom.
>>
>> The categories and domains constructed using categories
>> are dependent types. When are dependent types "equal"?
>> Well, hummmm, that depends on the arguments to the
>> constructor.
>>
>> But in order to decide(?) equality we have to evaluate
>> the arguments (which themselves can be dependent types).
>> Indeed, we may, and in general we must, evaluate the
>> arguments at compile time (well, "construction time", as
>> there isn't really a compiler / interpreter separation anymore).
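>>
>> A one-line Lean sketch of the point (assuming Lean 4 with
>> Mathlib's Vector): the two types below are only seen to be
>> equal after the index argument is evaluated.
>>
>>   -- deciding type equality forces evaluating the argument
>>   example : Vector Nat (2 + 2) = Vector Nat 4 := rfl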
>>
>> That raises the question of what "equality" means. This
>> is not simply a "set equality" relation. It falls into the
>> infinite-groupoid structure of homotopy type theory. In general
>> it appears that deciding category / domain equivalence
>> might force us to climb the type hierarchy.
>>
>> Beyond that, there is the question of "which proof"
>> applies to the resulting object. Proofs depend on their
>> assumptions which might be different for different
>> constructions. As yet I have no clue how to "index"
>> proofs based on their assumptions, nor how to
>> connect these assumptions to the groupoid structure.
>>
>> My brain hurts.
>>
>> Tim
>>
>>
>> On Mon, Oct 18, 2021 at 2:00 AM Tim Daly <axiom...@gmail.com> wrote:
>>
>>> "Birthing Computational Mathematics"
>>>
>>> The Axiom SANE project is difficult at a very fundamental
>>> level. The title "SANE" was chosen due to the various
>>> words found in a thesaurus... "rational", "coherent",
>>> "judicious" and "sound".
>>>
>>> These are very high level, amorphous ideas. But so is
>>> the design of SANE. Breaking away from tradition in
>>> computer algebra, type theory, and proof assistants
>>> is very difficult. Ideas tend to fall into standard jargon
>>> which limits both the frame of thinking (e.g. dependent
>>> types) and the content (e.g. notation).
>>>
>>> Questioning both frame and content is very difficult.
>>> It is hard to even recognize when they are accepted
>>> "by default" rather than "by choice". What does the idea
>>> "power tools" mean in a primitive, hand labor culture?
>>>
>>> Christopher Alexander [0] addresses this problem in
>>> a lot of his writing. Specifically, in his book "Notes on
>>> the Synthesis of Form", in his chapter 5 "The Selfconscious
>>> Process", he addresses this problem directly. This is a
>>> "must read" book.
>>>
>>> Unlike building design and construction, however, there
>>> are almost no constraints to use as guides. Alexander
>>> quotes Plato's Phaedrus:
>>>
>>>   "First, the taking in of scattered particulars under
>>>    one Idea, so that everyone understands what is being
>>>    talked about ... Second, the separation of the Idea
>>>    into parts, by dividing it at the joints, as nature
>>>    directs, not breaking any limb in half as a bad
>>>    carver might."
>>>
>>> Lisp, which has been called "clay for the mind" can
>>> build virtually anything that can be thought. The
>>> "joints" are also "of one's choosing" so one is
>>> both carver and "nature".
>>>
>>> Clearly the problem is no longer "the tools".
>>> *I* am the problem constraining the solution.
>>> Birthing this "new thing" is slow, difficult, and
>>> uncertain at best.
>>>
>>> Tim
>>>
>>> [0] Alexander, Christopher "Notes on the Synthesis
>>> of Form" Harvard University Press 1964
>>> ISBN 0-674-62751-2
>>>
>>>
>>> On Sun, Oct 10, 2021 at 4:40 PM Tim Daly <axiom...@gmail.com> wrote:
>>>
>>>> Re: writing a paper... I'm not connected to Academia
>>>> so anything I'd write would never make it into print.
>>>>
>>>> "Language level parsing" is still a long way off. The talk
>>>> by Guy Steele [2] highlights some of the problems we
>>>> currently face using mathematical metanotation.
>>>>
>>>> For example, a professor I know at CCNY (City College
>>>> of New York) didn't understand Platzer's "funny
>>>> fraction notation" (proof judgements) despite being
>>>> an expert in Platzer's differential equations area.
>>>>
>>>> Notation matters and is not universally understood.
>>>>
>>>> I spoke to Professor Black (in LTI) about using natural
>>>> language in the limited task of a human-robot cooperation
>>>> in changing a car tire. I looked at the current machine
>>>> learning efforts. They are nowhere near anything but
>>>> toy systems; they take too long to train and are too fragile.
>>>>
>>>> Instead I ended up using a combination of AIML [3]
>>>> (Artificial Intelligence Markup Language), the ALICE
>>>> Chatbot [4], Forgy's OPS5 rule based program [5],
>>>> and Fahlman's SCONE [6] knowledge base. It was
>>>> much less fragile in my limited domain problem.
>>>>
>>>> I have no idea how to extend any system to deal with
>>>> even undergraduate mathematics parsing.
>>>>
>>>> Nor do I have any idea how I would embed LEAN
>>>> knowledge into a SCONE database, although I
>>>> think the combination would be useful and interesting.
>>>>
>>>> I do believe that, in the limited area of computational
>>>> mathematics, we are capable of building robust, proven
>>>> systems that are quite general and extensible. As you
>>>> might have guessed I've given it a lot of thought over
>>>> the years :-)
>>>>
>>>> A mathematical language seems to need at least seven components:
>>>>
>>>> 1) We need some sort of a specification language, possibly
>>>> somewhat 'propositional' that introduces the assumptions
>>>> you mentioned (ref. your discussion of numbers being
>>>> abstract and ref. your discussion of relevant choice of
>>>> assumptions related to a problem).
>>>>
>>>> This is starting to show up in the hardware area (e.g.
>>>> Lamport's TLA+ [0]).
>>>>
>>>> Of course, specifications relate to proving programs
>>>> and, as you recall, I got a cold reception from the
>>>> LEAN community about using LEAN for program proofs.
>>>>
>>>> 2) We need "scaffolding". That is, we need a theory
>>>> that can be reduced to some implementable form
>>>> that provides concept-level structure.
>>>>
>>>> Axiom uses group theory for this. Axiom's "category"
>>>> structure has "Category" things like Ring. Claiming
>>>> to be a Ring brings in a lot of "Signatures" of functions
>>>> you have to implement to properly be a Ring.
>>>>
>>>> Scaffolding provides a firm mathematical basis for
>>>> design. It provides a link between the concept of a
>>>> Ring and the expectations you can assume when
>>>> you claim your "Domain" "is a Ring". Category
>>>> theory might provide similar structural scaffolding
>>>> (eventually... I'm still working on that thought garden)
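>>>>
>>>> As a toy sketch of such scaffolding (plain Lisp; the
>>>> 'defcategory' layout is invented, not Axiom's actual code), a
>>>> category records its parents plus the signatures and theorems
>>>> that claiming membership brings in:
>>>>
>>>>   (defmacro defcategory (name parents &key signatures theorems)
>>>>     "Record a category: parents, required signatures, theorems."
>>>>     `(setf (get ',name 'category)
>>>>            (list :parents ',parents
>>>>                  :signatures ',signatures
>>>>                  :theorems ',theorems)))
>>>>
>>>>   (defcategory Ring (AbelianGroup Monoid)
>>>>     :signatures ((+ (% %) %) (* (% %) %))
>>>>     :theorems   ((distributivity
>>>>                   (= (* a (+ b c)) (+ (* a b) (* a c))))))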
>>>>
>>>> LEAN ought to have a textbook(s?) that structures
>>>> the world around some form of mathematics. It isn't
>>>> sufficient to say "undergraduate math" is the goal.
>>>> There needs to be some coherent organization so
>>>> people can bring ideas like Group Theory to the
>>>> organization. Which brings me to ...
>>>>
>>>> 3) We need "spreading". That is, we need to take
>>>> the various definitions and theorems in LEAN and
>>>> place them in their proper place in the scaffold.
>>>>
>>>> For example, the Ring category needs the definitions
>>>> and theorems for a Ring included in the code for the
>>>> Ring category. Similarly, the Commutative category
>>>> needs the definitions and theorems that underlie
>>>> "commutative" included in the code.
>>>>
>>>> That way, when you claim to be a "Commutative Ring"
>>>> you get both sets of definitions and theorems. That is,
>>>> the inheritance mechanism will collect up all of the
>>>> definitions and theorems and make them available
>>>> for proofs.
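>>>>
>>>> A sketch of that collection step, reusing the hypothetical
>>>> 'defcategory' layout above:
>>>>
>>>>   (defun inherited-theorems (category)
>>>>     "Collect theorems from a category and everything it inherits."
>>>>     (let ((info (get category 'category)))
>>>>       (append (copy-list (getf info :theorems))
>>>>               (loop for parent in (getf info :parents)
>>>>                     append (inherited-theorems parent)))))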
>>>>
>>>> I am looking at LEAN's definitions and theorems with
>>>> an eye to "spreading" them into the group scaffold of
>>>> Axiom.
>>>>
>>>> 4) We need "carriers" (Axiom calls them representations,
>>>> aka "REP"). REPs allow data structures to be defined
>>>> independent of the implementation.
>>>>
>>>> For example, Axiom can construct Polynomials that
>>>> have their coefficients in various forms of representation.
>>>> You can define "dense" (all coefficients in a list),
>>>> "sparse" (only non-zero coefficients), "recursive", etc.
>>>>
>>>> A "dense polynomial" and a "sparse polynomial" work
>>>> exactly the same way as far as the user is concerned.
>>>> They both implement the same set of functions. There
>>>> is only a difference of representation for efficiency and
>>>> this only affects the implementation of the functions,
>>>> not their use.
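>>>>
>>>> A minimal sketch of the idea (plain Lisp; the 'coeff' protocol
>>>> is invented): one operation, two carriers:
>>>>
>>>>   (defgeneric coeff (rep poly n)
>>>>     (:documentation "Coefficient of x^n, independent of the REP."))
>>>>
>>>>   ;; dense: a list of all coefficients, lowest degree first
>>>>   (defmethod coeff ((rep (eql :dense)) poly n)
>>>>     (or (nth n poly) 0))
>>>>
>>>>   ;; sparse: an alist of (degree . coefficient), non-zero only
>>>>   (defmethod coeff ((rep (eql :sparse)) poly n)
>>>>     (or (cdr (assoc n poly)) 0))
>>>>
>>>>   ;; (coeff :dense  '(1 0 3) 2)           => 3
>>>>   ;; (coeff :sparse '((0 . 1) (2 . 3)) 2) => 3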
>>>>
>>>> Axiom "got this wrong" because it didn't sufficiently
>>>> separate the REP from the "Domain". I plan to fix this.
>>>>
>>>> LEAN ought to have a "data structures" subtree that
>>>> has all of the definitions and axioms for all of the
>>>> existing data structures (e.g. Red-Black trees). This
>>>> would be a good undergraduate project.
>>>>
>>>> 5) We need "Domains" (in Axiom speak). That is, we
>>>> need a box that holds all of the functions that implement
>>>> a "Domain". For example, a "Polynomial Domain" would
>>>> hold all of the functions for manipulating polynomials
>>>> (e.g. polynomial multiplication). The "Domain" box
>>>> is a dependent type (a toy Lisp sketch follows item I below) that:
>>>>
>>>>   A) has an argument list of "Categories" that this "Domain"
>>>>       box inherits. Thus, the "Integer Domain" inherits
>>>>       the definitions and axioms from "Commutative"
>>>>
>>>>      Functions in the "Domain" box can now assume
>>>>      and use the properties of being commutative. Proofs
>>>>      of functions in this domain can use the definitions
>>>>      and proofs about being commutative.
>>>>
>>>>   B) contains an argument that specifies the "REP"
>>>>        (aka, the carrier). That way you get all of the
>>>>        functions associated with the data structure
>>>>       available for use in the implementation.
>>>>
>>>>       Functions in the Domain box can use all of
>>>>       the definitions and axioms about the representation
>>>>       (e.g. NonNegativeIntegers are always positive)
>>>>
>>>>   C) contains local "spread" definitions and axioms
>>>>        that can be used in function proofs.
>>>>
>>>>       For example, a "Square Matrix" domain would
>>>>       have local axioms that state that the matrix is
>>>>       always square. Thus, functions in that box could
>>>>       use these additional definitions and axioms in
>>>>       function proofs.
>>>>
>>>>   D) contains local state. A "Square Matrix" domain
>>>>        would be constructed as a dependent type that
>>>>        specified the size of the square (e.g. a 2x2
>>>>        matrix would have '2' as a dependent parameter).
>>>>
>>>>   E) contains implementations of inherited functions.
>>>>
>>>>        A "Category" could have a signature for a GCD
>>>>        function and the "Category" could have a default
>>>>        implementation. However, the "Domain" could
>>>>        have a locally more efficient implementation which
>>>>        overrides the inherited implementation.
>>>>
>>>>       Axiom has about 20 GCD implementations that
>>>>       differ locally from the default in the category. They
>>>>       use properties known locally to be more efficient.
>>>>
>>>>   F) contains local function signatures.
>>>>
>>>>       A "Domain" gives the user more and more unique
>>>>       functions. The signature have associated
>>>>       "pre- and post- conditions" that can be used
>>>>       as assumptions in the function proofs.
>>>>
>>>>       Some of the user-available functions are only
>>>>       visible if the dependent type would allow them
>>>>       to exist. For example, a general Matrix domain
>>>>       would have fewer user functions than a Square
>>>>       Matrix domain.
>>>>
>>>>       In addition, local "helper" functions need their
>>>>       own signatures that are not user visible.
>>>>
>>>>   G) the function implementation for each signature.
>>>>
>>>>        This is obviously where all the magic happens.
>>>>
>>>>   H) the proof of each function.
>>>>
>>>>        This is where I'm using LEAN.
>>>>
>>>>        Every function has a proof. That proof can use
>>>>        all of the definitions and axioms inherited from
>>>>        the "Category", "Representation", the "Domain
>>>>        Local", and the signature pre- and post-
>>>>        conditions.
>>>>
>>>>    I) literature links. Algorithms must contain a link
>>>>       to at least one literature reference. Of course,
>>>>       since everything I do is a Literate Program
>>>>       this is obviously required. Knuth said so :-)
>>>>
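>>>> Pulling A) through D) together, a toy Lisp sketch of the
>>>> "Domain" box (invented names, not Axiom's actual structure):
>>>>
>>>>   (defstruct domain
>>>>     name categories rep local-axioms state
>>>>     signatures functions proofs)
>>>>
>>>>   (defun make-square-matrix-domain (n coefficient-domain)
>>>>     "Construct SquareMatrix(n, R) on the fly; n is the
>>>>      dependent parameter."
>>>>     (make-domain
>>>>      :name (list 'SquareMatrix n coefficient-domain)
>>>>      :categories '(Ring)                 ; A) inherited categories
>>>>      :rep :row-major-array               ; B) the carrier / REP
>>>>      :local-axioms `((square (= rows cols ,n))) ; C) local axioms
>>>>      :state (list :size n)))             ; D) local state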
>>>>
>>>> LEAN ought to have "books" or "pamphlets" that
>>>> bring together all of this information for a domain
>>>> such as Square Matrices. That way a user can
>>>> find all of the related ideas, available functions,
>>>> and their corresponding proofs in one place.
>>>>
>>>> 6) User level presentation.
>>>>
>>>>     This is where the systems can differ significantly.
>>>>     Axiom and LEAN both have GCD but they use
>>>>     that for different purposes.
>>>>
>>>>     I'm trying to connect LEAN's GCD and Axiom's GCD
>>>>     so there is a "computational mathematics" idea that
>>>>     allows the user to connect proofs and implementations.
>>>>
>>>> 7) Trust
>>>>
>>>> Unlike everything else, computational mathematics
>>>> can have proven code that gives various guarantees.
>>>>
>>>> I have been working on this aspect for a while.
>>>> I refer to it as trust "down to the metal". The idea is
>>>> that a proof of the GCD function and the implementation
>>>> of the GCD function get packaged into the ELF format
>>>> (proof-carrying code). When the GCD algorithm executes
>>>> on the CPU, the GCD proof is run through the LEAN
>>>> proof checker on an FPGA in parallel.
>>>>
>>>> (I just recently got a PYNQ Xilinx board [1] with a CPU
>>>> and FPGA together. I'm trying to implement the LEAN
>>>> proof checker on the FPGA).
>>>>
>>>> We are on the cusp of a revolution in computational
>>>> mathematics. But the two pillars (proof and computer
>>>> algebra) need to get to know each other.
>>>>
>>>> Tim
>>>>
>>>>
>>>>
>>>> [0] Lamport, Leslie "Chapter on TLA+"
>>>> in "Software Specification Methods"
>>>> https://www.springer.com/gp/book/9781852333539
>>>> (I no longer have CMU library access or I'd send you
>>>> the book PDF)
>>>>
>>>> [1] https://www.tul.com.tw/productspynq-z2.html
>>>>
>>>> [2] https://www.youtube.com/watch?v=dCuZkaaou0Q
>>>>
>>>> [3] "ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE"
>>>> https://arxiv.org/pdf/1307.3091.pdf
>>>>
>>>> [4] ALICE Chatbot
>>>> http://www.scielo.org.mx/pdf/cys/v19n4/1405-5546-cys-19-04-00625.pdf
>>>>
>>>> [5] OPS5 User Manual
>>>>
>>>> https://kilthub.cmu.edu/articles/journal_contribution/OPS5_user_s_manual/6608090/1
>>>>
>>>> [6] Scott Fahlman "SCONE"
>>>> http://www.cs.cmu.edu/~sef/scone/
>>>>
>>>> On 9/27/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> > I have tried to maintain a list of names of people who have
>>>> > helped Axiom, going all the way back to the pre-Scratchpad
>>>> > days. The names are listed at the beginning of each book.
>>>> > I also maintain a bibliography of publications I've read or
>>>> > that have had an indirect influence on Axiom.
>>>> >
>>>> > Credit is "the coin of the realm". It is easy to share and wrong
>>>> > to ignore. It is especially damaging to those in Academia who
>>>> > are affected by credit and citations in publications.
>>>> >
>>>> > Apparently I'm not the only person who feels that way. The ACM
>>>> > Turing award seems to have ignored a lot of work:
>>>> >
>>>> > Scientific Integrity, the 2021 Turing Lecture, and the 2018 Turing
>>>> > Award for Deep Learning
>>>> >
>>>> > https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html
>>>> >
>>>> > I worked on an AI problem at IBM Research called Ketazolam.
>>>> > (https://en.wikipedia.org/wiki/Ketazolam). The idea was to recognize
>>>> > and associate 3D chemical drawings with their drug counterparts.
>>>> > I used Rumelhart and McClelland's books. These books contained
>>>> > quite a few ideas that seem to be "new and innovative" among the
>>>> > machine learning crowd... but the books are from 1987. I don't believe
>>>> > I've seen these books mentioned in any recent bibliography.
>>>> >
>>>> > https://mitpress.mit.edu/books/parallel-distributed-processing-volume-1
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > On 9/27/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> >> Greg Wilson asked "How Reliable is Scientific Software?"
>>>> >>
>>>> >> https://neverworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html
>>>> >>
>>>> >> which is a really interesting read. For example:
>>>> >>
>>>> >>  [Hatton1994], is now a quarter of a century old, but its conclusions
>>>> >> are still fresh. The authors fed the same data into nine commercial
>>>> >> geophysical software packages and compared the results; they found
>>>> >> that, "numerical disagreement grows at around the rate of 1% in
>>>> >> average absolute difference per 4000 lines of implemented code, and,
>>>> >> even worse, the nature of the disagreement is nonrandom" (i.e., the
>>>> >> authors of different packages make similar mistakes).
>>>> >>
>>>> >>
>>>> >> On 9/26/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> >>> I should note that the latest board I've just unboxed
>>>> >>> (a PYNQ-Z2) uses a Zynq Z-7020 chip from Xilinx (AMD).
>>>> >>>
>>>> >>> What makes it interesting is that it contains 2 hard
>>>> >>> core processors and an FPGA, connected by 9 paths
>>>> >>> for communication. The processors can be run
>>>> >>> independently so there is the possibility of a parallel
>>>> >>> version of some Axiom algorithms (assuming I had
>>>> >>> the time, which I don't).
>>>> >>>
>>>> >>> Previously either the hard (physical) processor was
>>>> >>> separate from the FPGA with minimal communication
>>>> >>> or the soft core processor had to be created in the FPGA
>>>> >>> and was much slower.
>>>> >>>
>>>> >>> Now the two have been combined in a single chip.
>>>> >>> That means that my effort to run a proof checker on
>>>> >>> the FPGA and the algorithm on the CPU just got to
>>>> >>> the point where coordination is much easier.
>>>> >>>
>>>> >>> Now all I have to do is figure out how to program this
>>>> >>> beast.
>>>> >>>
>>>> >>> There is no such thing as a simple job.
>>>> >>>
>>>> >>> Tim
>>>> >>>
>>>> >>>
>>>> >>> On 9/26/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> >>>> I'm familiar with most of the traditional approaches
>>>> >>>> like Theorema. The bibliography contains most of the
>>>> >>>> more interesting sources. [0]
>>>> >>>>
>>>> >>>> There is a difference between traditional approaches to
>>>> >>>> connecting computer algebra and proofs and my approach.
>>>> >>>>
>>>> >>>> Proving an algorithm, like the GCD, in Axiom is hard.
>>>> >>>> There are many GCDs (e.g. NNI vs POLY) and there
>>>> >>>> are theorems and proofs passed at runtime in the
>>>> >>>> arguments of the newly constructed domains. This
>>>> >>>> involves a lot of dependent type theory and issues of
>>>> >>>> compile time / runtime argument evaluation. The issues
>>>> >>>> that arise are difficult and still being debated in the type
>>>> >>>> theory community.
>>>> >>>>
>>>> >>>> I am putting the definitions, theorems, and proofs (DTP)
>>>> >>>> directly into the category/domain hierarchy. Each category
>>>> >>>> will have the DTP specific to it. That way a commutative
>>>> >>>> domain will inherit a commutative theorem and a
>>>> >>>> non-commutative domain will not.
>>>> >>>>
>>>> >>>> Each domain will have additional DTPs associated with
>>>> >>>> the domain (e.g. NNI vs Integer) as well as any DTPs
>>>> >>>> it inherits from the category hierarchy. Functions in the
>>>> >>>> domain will have associated DTPs.
>>>> >>>>
>>>> >>>> A function to be proven will then inherit all of the relevant
>>>> >>>> DTPs. The proof will be attached to the function and
>>>> >>>> both will be sent to the hardware (proof-carrying code).
>>>> >>>>
>>>> >>>> The proof checker, running on a field programmable
>>>> >>>> gate array (FPGA), will be checked at runtime in
>>>> >>>> parallel with the algorithm running on the CPU
>>>> >>>> (aka "trust down to the metal"). (Note that Intel
>>>> >>>> and AMD have built CPU/FPGA combined chips,
>>>> >>>> currently only available in the cloud.)
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> I am (slowly) making progress on the research.
>>>> >>>>
>>>> >>>> I have the hardware and nearly have the proof
>>>> >>>> checker from LEAN running on my FPGA.
>>>> >>>>
>>>> >>>> I'm in the process of spreading the DTPs from
>>>> >>>> LEAN across the category/domain hierarchy.
>>>> >>>>
>>>> >>>> The current Axiom build extracts all of the functions
>>>> >>>> but does not yet have the DTPs.
>>>> >>>>
>>>> >>>> I have to restructure the system, including the compiler
>>>> >>>> and interpreter, to parse and inherit the DTPs. I have
>>>> >>>> some of that code, but only part has been pushed to the
>>>> >>>> repository (volume 15), and that part is rather trivial,
>>>> >>>> out of date, and incomplete.
>>>> >>>>
>>>> >>>> I'm clearly not smart enough to prove the Risch
>>>> >>>> algorithm and its associated machinery but the needed
>>>> >>>> definitions and theorems will be available to someone
>>>> >>>> who wants to try.
>>>> >>>>
>>>> >>>> [0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf
>>>> >>>>
>>>> >>>>
>>>> >>>> On 8/19/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> >>>>> =========================================
>>>> >>>>>
>>>> >>>>> REVIEW (Axiom on WSL2 Windows)
>>>> >>>>>
>>>> >>>>>
>>>> >>>>> So the steps to run Axiom from a Windows desktop
>>>> >>>>>
>>>> >>>>> 1 Windows) install XMing on Windows for X11 server
>>>> >>>>>
>>>> >>>>> http://www.straightrunning.com/XmingNotes/
>>>> >>>>>
>>>> >>>>> 2 WSL2) Install Axiom in WSL2
>>>> >>>>>
>>>> >>>>> sudo apt install axiom
>>>> >>>>>
>>>> >>>>> 3 WSL2) modify /usr/bin/axiom to fix the bug:
>>>> >>>>> (someone changed the axiom startup script.
>>>> >>>>> It won't work on WSL2. I don't know who or
>>>> >>>>> how to get it fixed).
>>>> >>>>>
>>>> >>>>> sudo emacs /usr/bin/axiom
>>>> >>>>>
>>>> >>>>> (split the line into 3 and add quote marks)
>>>> >>>>>
>>>> >>>>> export SPADDEFAULT=/usr/local/axiom/mnt/linux
>>>> >>>>> export AXIOM=/usr/lib/axiom-20170501
>>>> >>>>> export "PATH=/usr/lib/axiom-20170501/bin:$PATH"
>>>> >>>>>
>>>> >>>>> 4 WSL2) create a .axiom.input file to include startup cmds:
>>>> >>>>>
>>>> >>>>> emacs .axiom.input
>>>> >>>>>
>>>> >>>>> )cd "/mnt/c/yourpath"
>>>> >>>>> )sys pwd
>>>> >>>>>
>>>> >>>>> 5 WSL2) create a "myaxiom" command that sets the
>>>> >>>>>     DISPLAY variable and starts axiom
>>>> >>>>>
>>>> >>>>> emacs myaxiom
>>>> >>>>>
>>>> >>>>> #! /bin/bash
>>>> >>>>> export DISPLAY=:0.0
>>>> >>>>> axiom
>>>> >>>>>
>>>> >>>>> 6 WSL2) put it in the /usr/bin directory
>>>> >>>>>
>>>> >>>>> chmod +x myaxiom
>>>> >>>>> sudo cp myaxiom /usr/bin/myaxiom
>>>> >>>>>
>>>> >>>>> 7 WINDOWS) start the X11 server
>>>> >>>>>
>>>> >>>>> (XMing XLaunch Icon on your desktop)
>>>> >>>>>
>>>> >>>>> 8 WINDOWS) run myaxiom from PowerShell
>>>> >>>>> (this should start axiom with graphics available)
>>>> >>>>>
>>>> >>>>> wsl myaxiom
>>>> >>>>>
>>>> >>>>> 9 WINDOWS) make a PowerShell desktop shortcut
>>>> >>>>>
>>>> >>>>> https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell
>>>> >>>>>
>>>> >>>>> Tim
>>>> >>>>>
>>>> >>>>> On 8/13/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> >>>>>> A great deal of thought is directed toward making the SANE
>>>> >>>>>> version of Axiom as flexible as possible, decoupling mechanism
>>>> >>>>>> from theory.
>>>> >>>>>>
>>>> >>>>>> An interesting publication by Brian Cantwell Smith [0],
>>>> >>>>>> "Reflection and Semantics in LISP", seems to contain interesting
>>>> >>>>>> ideas related to our goal. Of particular interest is the ability
>>>> >>>>>> to reason about and perform self-referential manipulations. In a
>>>> >>>>>> dependently-typed system it seems interesting to be able to
>>>> >>>>>> "adapt" code to handle run-time computed arguments to dependent
>>>> >>>>>> functions. The abstract:
>>>> >>>>>>
>>>> >>>>>>    "We show how a computational system can be constructed to
>>>> >>>>>> "reason",
>>>> >>>>>> effectively
>>>> >>>>>>    and consequentially, about its own inferential processes. The
>>>> >>>>>> analysis proceeds in two
>>>> >>>>>>    parts. First, we consider the general question of
>>>> computational
>>>> >>>>>> semantics, rejecting
>>>> >>>>>>    traditional approaches, and arguing that the declarative and
>>>> >>>>>> procedural aspects of
>>>> >>>>>>    computational symbols (what they stand for, and what behaviour
>>>> >>>>>> they
>>>> >>>>>> engender) should be
>>>> >>>>>>    analysed independently, in order that they may be coherently
>>>> >>>>>> related. Second, we
>>>> >>>>>>    investigate self-referential behavior in computational
>>>> processes,
>>>> >>>>>> and show how to embed an
>>>> >>>>>>    effective procedural model of a computational calculus within
>>>> that
>>>> >>>>>> calculus (a model not
>>>> >>>>>>    unlike a meta-circular interpreter, but connected to the
>>>> >>>>>> fundamental operations of the
>>>> >>>>>>    machine in such a way as to provide, at any point in a
>>>> >>>>>> computation,
>>>> >>>>>> fully articulated
>>>> >>>>>>    descriptions of the state of that computation, for inspection
>>>> and
>>>> >>>>>> possible modification). In
>>>> >>>>>>    terms of the theories that result from these investigations,
>>>> we
>>>> >>>>>> present a general architecture
>>>> >>>>>>    for procedurally reflective processes, able to shift smoothly
>>>> >>>>>> between dealing with a given
>>>> >>>>>>    subject domain, and dealing with their own reasoning processes
>>>> >>>>>> over
>>>> >>>>>> that domain.
>>>> >>>>>>
>>>> >>>>>>    An instance of the general solution is worked out in the
>>>> context
>>>> >>>>>> of
>>>> >>>>>> an applicative
>>>> >>>>>>    language. Specifically, we present three successive dialects
>>>> of
>>>> >>>>>> LISP: 1-LISP, a distillation of
>>>> >>>>>>    current practice, for comparison purposes; 2-LISP, a dialect
>>>> >>>>>> constructed in terms of our
>>>> >>>>>>    rationalised semantics, in which the concept of evaluation is
>>>> >>>>>> rejected in favour of
>>>> >>>>>>    independent notions of simplification and reference, and in
>>>> which
>>>> >>>>>> the respective categories
>>>> >>>>>>    of notation, structure, semantics, and behaviour are strictly
>>>> >>>>>> aligned; and 3-LISP, an
>>>> >>>>>>    extension of 2-LISP endowed with reflective powers."
>>>> >>>>>>
>>>> >>>>>> Axiom SANE builds dependent types on the fly. The ability to
>>>> >>>>>> access both the reflection of the tower of algebra and the
>>>> >>>>>> reflection of the tower of proofs at the time of construction
>>>> >>>>>> makes the construction of a new domain or specific algorithm
>>>> >>>>>> easier and more general.
>>>> >>>>>>
>>>> >>>>>> This is of particular interest because one of the efforts is
>>>> >>>>>> to build "all the way down to the metal". If each layer is
>>>> >>>>>> constructed on top of previous proven layers and the new layer
>>>> >>>>>> can "reach below" to lower layers then the tower of layers can
>>>> >>>>>> be built without duplication.
>>>> >>>>>>
>>>> >>>>>> Tim
>>>> >>>>>>
>>>> >>>>>> [0] Smith, Brian Cantwell "Reflection and Semantics in LISP"
>>>> >>>>>> POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN Symposium
>>>> >>>>>> on Principles of Programming Languages, January 1984,
>>>> >>>>>> pages 23-35, https://doi.org/10.1145/800017.800513
>>>> >>>>>>
>>>> >>>>>> On 6/29/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> >>>>>>> Having spent time playing with hardware it is perfectly clear
>>>> >>>>>>> that future computational mathematics efforts need to adapt
>>>> >>>>>>> to using parallel processing.
>>>> >>>>>>>
>>>> >>>>>>> I've spent a fair bit of time thinking about structuring Axiom
>>>> >>>>>>> to be parallel. Most past efforts have tried to focus on making
>>>> >>>>>>> a particular algorithm parallel, such as a matrix multiply.
>>>> >>>>>>>
>>>> >>>>>>> But I think that it might be more effective to make each domain
>>>> >>>>>>> run in parallel. A computation crosses multiple domains so a
>>>> >>>>>>> particular computation could involve multiple parallel copies.
>>>> >>>>>>>
>>>> >>>>>>> For example, computing the Cylindrical Algebraic Decomposition
>>>> >>>>>>> could recursively decompose the plane. Indeed, any tree-recursive
>>>> >>>>>>> algorithm could be run in parallel "in the large" by creating
>>>> >>>>>>> new running copies of the domain for each sub-problem.
>>>> >>>>>>>
>>>> >>>>>>> So the question becomes, how does one manage this?
>>>> >>>>>>>
>>>> >>>>>>> A similar problem occurs in robotics where one could have
>>>> >>>>>>> multiple wheels, arms, propellers, etc. that need to act
>>>> >>>>>>> independently but in coordination.
>>>> >>>>>>>
>>>> >>>>>>> The robot solution uses ROS2. The three ideas are ROSCORE,
>>>> >>>>>>> TOPICS with publish/subscribe, and SERVICES with request/response.
>>>> >>>>>>> These are communication paths defined between processes.
>>>> >>>>>>>
>>>> >>>>>>> ROS2 has a "roscore" which is basically a phonebook of "topics".
>>>> >>>>>>> Any process can create or look up the current active topics, e.g.:
>>>> >>>>>>>
>>>> >>>>>>>    rosnode list
>>>> >>>>>>>
>>>> >>>>>>> TOPICS:
>>>> >>>>>>>
>>>> >>>>>>> Any process can PUBLISH a topic (which is basically a typed data
>>>> >>>>>>> structure), e.g. the topic /hw with the String data "Hello World":
>>>> >>>>>>>
>>>> >>>>>>>    rostopic pub /hw std_msgs/String "Hello, World"
>>>> >>>>>>>
>>>> >>>>>>> Any process can SUBSCRIBE to a topic, such as /hw, and get a
>>>> >>>>>>> copy of the data, e.g.:
>>>> >>>>>>>
>>>> >>>>>>>    rostopic echo /hw   ==> "Hello, World"
>>>> >>>>>>>
>>>> >>>>>>> Publishers talk, subscribers listen.
>>>> >>>>>>>
>>>> >>>>>>>
>>>> >>>>>>> SERVICES:
>>>> >>>>>>>
>>>> >>>>>>> Any process can make a REQUEST of a SERVICE and get a RESPONSE.
>>>> >>>>>>> This is basically a remote function call.
>>>> >>>>>>>
>>>> >>>>>>>
>>>> >>>>>>>
>>>> >>>>>>> Axiom in parallel?
>>>> >>>>>>>
>>>> >>>>>>> So domains could run, each in its own process. Each could provide
>>>> >>>>>>> services, one for each function. Any other process could request
>>>> >>>>>>> a computation and get the result as a response. Domains could
>>>> >>>>>>> request services from other domains, either waiting for responses
>>>> >>>>>>> or continuing while the response is being computed.
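>>>> >>>>>>>
>>>> >>>>>>> As a minimal, in-process sketch of the topic idea (plain Lisp,
>>>> >>>>>>> hypothetical names; a real version would cross process
>>>> >>>>>>> boundaries the way ROS2 does):
>>>> >>>>>>>
>>>> >>>>>>>    (defvar *topics* (make-hash-table :test #'equal))
>>>> >>>>>>>
>>>> >>>>>>>    (defun subscribe (topic fn)
>>>> >>>>>>>      (push fn (gethash topic *topics*)))
>>>> >>>>>>>
>>>> >>>>>>>    (defun publish (topic data)
>>>> >>>>>>>      (dolist (fn (gethash topic *topics*))
>>>> >>>>>>>        (funcall fn data)))
>>>> >>>>>>>
>>>> >>>>>>>    ;; a "domain" offers gcd as a service
>>>> >>>>>>>    (subscribe "/gcd/request"
>>>> >>>>>>>               (lambda (args)
>>>> >>>>>>>                 (publish "/gcd/response" (apply #'gcd args))))
>>>> >>>>>>>
>>>> >>>>>>>    (subscribe "/gcd/response" #'print)
>>>> >>>>>>>    (publish "/gcd/request" '(12 18))   ; prints 6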
>>>> >>>>>>>
>>>> >>>>>>> The output could be sent anywhere, to a terminal, to a browser,
>>>> >>>>>>> to a network, or to another process using the publish/subscribe
>>>> >>>>>>> protocol, potentially all at the same time since there can be
>>>> >>>>>>> many subscribers to a topic.
>>>> >>>>>>>
>>>> >>>>>>> Available domains could be dynamically added by announcing
>>>> >>>>>>> themselves as new "topics" and could be dynamically looked up
>>>> >>>>>>> at runtime.
>>>> >>>>>>>
>>>> >>>>>>> This structure allows function-level / domain-level parallelism.
>>>> >>>>>>> It is very effective in the robot world and I think it might be
>>>> >>>>>>> a good structuring mechanism to allow computational mathematics
>>>> >>>>>>> to take advantage of multiple processors in a disciplined fashion.
>>>> >>>>>>>
>>>> >>>>>>> Axiom has a thousand domains and each could run on its own core.
>>>> >>>>>>>
>>>> >>>>>>> In addition, notice that each domain is independent of the others.
>>>> >>>>>>> So if we want to use BLAS Fortran code, it could just be another
>>>> >>>>>>> service node. In fact, any "foreign function" could transparently
>>>> >>>>>>> cooperate in a distributed Axiom.
>>>> >>>>>>>
>>>> >>>>>>> Another key feature is that proofs can be "by node".
>>>> >>>>>>>
>>>> >>>>>>> Tim
>>>> >>>>>>>
>>>> >>>>>>>
>>>> >>>>>>>
>>>> >>>>>>>
>>>> >>>>>>> On 6/5/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> >>>>>>>> Axiom is based on first-class dependent types. Deciding when
>>>> >>>>>>>> two types are equivalent may involve computation. See
>>>> >>>>>>>> Christiansen, David Thrane "Checking Dependent Types with
>>>> >>>>>>>> Normalization by Evaluation" (2019)
>>>> >>>>>>>>
>>>> >>>>>>>> This puts an interesting constraint on building types. The
>>>> >>>>>>>> constructed type has to export a function to decide if a
>>>> >>>>>>>> given type is "equivalent" to itself.
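>>>> >>>>>>>>
>>>> >>>>>>>> A sketch of that constraint in Lisp (hypothetical layout):
>>>> >>>>>>>> each constructed type carries its own decision procedure,
>>>> >>>>>>>> which may evaluate (normalize) the constructor arguments.
>>>> >>>>>>>>
>>>> >>>>>>>>    (defstruct (dependent-type (:conc-name dt-))
>>>> >>>>>>>>      constructor args equiv-fn)  ; the exported decision function
>>>> >>>>>>>>
>>>> >>>>>>>>    (defun dt-equivalent-p (t1 t2)
>>>> >>>>>>>>      "Delegate equivalence to the type's own exported test."
>>>> >>>>>>>>      (funcall (dt-equiv-fn t1) t1 t2))
>>>> >>>>>>>>
>>>> >>>>>>>>    ;; e.g. SquareMatrix(2+2, R) and SquareMatrix(4, R) compare
>>>> >>>>>>>>    ;; equal only because equiv-fn evaluates the arguments first.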
>>>> >>>>>>>>
>>>> >>>>>>>> The notion of "equivalence" might involve category ideas
>>>> >>>>>>>> of natural transformation and univalence. Sigh.
>>>> >>>>>>>>
>>>> >>>>>>>> That's an interesting design point.
>>>> >>>>>>>>
>>>> >>>>>>>> Tim
>>>> >>>>>>>>
>>>> >>>>>>>>
>>>> >>>>>>>> On 5/5/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> >>>>>>>>> It is interesting that programmers' eyes and expectations
>>>> >>>>>>>>> adapt to the tools they use. For instance, I use emacs and
>>>> >>>>>>>>> expect to work directly in files and multiple buffers. When I
>>>> >>>>>>>>> try to use one of the many IDE tools I find they tend to "get
>>>> >>>>>>>>> in the way". I already know or can quickly find whatever they
>>>> >>>>>>>>> try to tell me. If you use an IDE you probably find emacs
>>>> >>>>>>>>> "too sparse" for programming.
>>>> >>>>>>>>>
>>>> >>>>>>>>> Recently I've been working in a sparse programming environment.
>>>> >>>>>>>>> I'm exploring the question of running a proof checker in an
>>>> >>>>>>>>> FPGA. The FPGA development tools are painful at best and not
>>>> >>>>>>>>> intuitive since you SEEM to be programming but you're actually
>>>> >>>>>>>>> describing hardware gates, connections, and timing. This is an
>>>> >>>>>>>>> environment where everything happens all-at-once and
>>>> >>>>>>>>> all-the-time (like the circuits in your computer). It is the
>>>> >>>>>>>>> "assembly language of circuits". Naturally, my eyes have
>>>> >>>>>>>>> adapted to this rather raw level.
>>>> >>>>>>>>>
>>>> >>>>>>>>> That said, I'm normally doing literate programming all the
>>>> >>>>>>>>> time. My typical file is a document which is a mixture of
>>>> >>>>>>>>> latex and lisp. It is something of a shock to return to that
>>>> >>>>>>>>> world. It is clear why people who program in Python find lisp
>>>> >>>>>>>>> to be a "sea of parens". Yet as a lisp programmer, I don't
>>>> >>>>>>>>> even see the parens, just code.
>>>> >>>>>>>>>
>>>> >>>>>>>>> It takes a few minutes in a literate document to adapt vision
>>>> >>>>>>>>> to see the latex / lisp combination as natural. The latex
>>>> >>>>>>>>> markup, like the lisp parens, eventually just disappears.
>>>> >>>>>>>>> What remains is just lisp and natural language text.
>>>> >>>>>>>>>
>>>> >>>>>>>>> This seems painful at first but eyes quickly adapt. The upside
>>>> >>>>>>>>> is that there is always a "finished" document that describes
>>>> >>>>>>>>> the state of the code. The overhead of writing a paragraph to
>>>> >>>>>>>>> describe a new function, or changing a paragraph to describe
>>>> >>>>>>>>> the changed function, is very small.
>>>> >>>>>>>>>
>>>> >>>>>>>>> Using a Makefile I latex the document to generate a current
>>>> >>>>>>>>> PDF and then I extract, load, and execute the code. This loop
>>>> >>>>>>>>> catches errors in both the latex and the source code. Keeping
>>>> >>>>>>>>> an open file in my pdf viewer shows all of the changes in the
>>>> >>>>>>>>> document after every run of make. That way I can edit the book
>>>> >>>>>>>>> as easily as the code.
>>>> >>>>>>>>>
>>>> >>>>>>>>> Ultimately I find that writing the book while writing the code
>>>> >>>>>>>>> is more productive. I don't have to remember why I wrote
>>>> >>>>>>>>> something since the explanation is already there.
>>>> >>>>>>>>>
>>>> >>>>>>>>> We all have our own way of programming and our own tools.
>>>> >>>>>>>>> But I find literate programming to be a real advance over
>>>> >>>>>>>>> IDE-style programming and "raw code" programming.
>>>> >>>>>>>>>
>>>> >>>>>>>>> Tim
>>>> >>>>>>>>>
>>>> >>>>>>>>>
>>>> >>>>>>>>> On 2/27/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> >>>>>>>>>> The systems I use have the interesting property of
>>>> >>>>>>>>>> "Living within the compiler".
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> Lisp, Forth, Emacs, and other systems that present themselves
>>>> >>>>>>>>>> through the Read-Eval-Print-Loop (REPL) allow the ability
>>>> >>>>>>>>>> to deeply interact with the system, shaping it to your need.
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> My current thread of study is software architecture. See
>>>> >>>>>>>>>> https://www.youtube.com/watch?v=W2hagw1VhhI&feature=youtu.be
>>>> >>>>>>>>>> and https://www.georgefairbanks.com/videos/
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> My current thinking on SANE involves the ability to
>>>> >>>>>>>>>> dynamically define categories, representations, and functions
>>>> >>>>>>>>>> along with "composition functions" that permit choosing a
>>>> >>>>>>>>>> combination at the time of use.
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> You might want a domain for handling polynomials. There are
>>>> >>>>>>>>>> a lot of choices, depending on your use case. You might want
>>>> >>>>>>>>>> different representations. For example, you might want dense,
>>>> >>>>>>>>>> sparse, recursive, or "machine compatible fixnums" (e.g. to
>>>> >>>>>>>>>> interface with C code). If these don't exist it ought to be
>>>> >>>>>>>>>> possible to create them. Such "lego-like" building blocks
>>>> >>>>>>>>>> require careful thought about creating "fully factored" objects.
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> Given that goal, the traditional barrier of "compiler" vs
>>>> >>>>>>>>>> "interpreter" does not seem useful. It is better to "live
>>>> >>>>>>>>>> within the compiler", which gives the ability to define new
>>>> >>>>>>>>>> things "on the fly".
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> Of course, the SANE compiler is going to want an associated
>>>> >>>>>>>>>> proof of the functions you create along with the other parts
>>>> >>>>>>>>>> such as its category hierarchy and representation properties.
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> There is no such thing as a simple job. :-)
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> Tim
>>>> >>>>>>>>>>
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> On 2/18/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> >>>>>>>>>>> The Axiom SANE compiler / interpreter has a few design points.
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>> 1) It needs to mix interpreted and compiled code in the same
>>>> >>>>>>>>>>> function. SANE allows dynamic construction of code as well
>>>> >>>>>>>>>>> as dynamic type construction at runtime. Both of these can
>>>> >>>>>>>>>>> occur in a runtime object. So there is potentially a mixture
>>>> >>>>>>>>>>> of interpreted and compiled code.
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>> 2) It needs to perform type resolution at compile time
>>>> >>>>>>>>>>> without overhead where possible. Since this is not always
>>>> >>>>>>>>>>> possible there needs to be a "prefix thunk" that will
>>>> >>>>>>>>>>> perform the resolution. Trivially, for example, if we have
>>>> >>>>>>>>>>> a + function we need to type-resolve the arguments.
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>> However, if we can prove at compile time that the types are
>>>> >>>>>>>>>>> both bounded-NNI and the result is bounded-NNI (i.e. fixnum
>>>> >>>>>>>>>>> in lisp) then we can inline a call to + at runtime. If not,
>>>> >>>>>>>>>>> we might have + applied to NNI and POLY(FLOAT), which
>>>> >>>>>>>>>>> requires a thunk to resolve types. The thunk could even
>>>> >>>>>>>>>>> "specialize and compile" the code before executing it, as
>>>> >>>>>>>>>>> in the sketch below.
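>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>> A minimal sketch of such a self-replacing prefix thunk
>>>> >>>>>>>>>>> (plain Lisp; *add-thunk* is an invented name): the first
>>>> >>>>>>>>>>> call resolves the argument types, compiles a specialized
>>>> >>>>>>>>>>> version, and installs it so later calls skip resolution.
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>>    (defvar *add-thunk*
>>>> >>>>>>>>>>>      (lambda (a b)
>>>> >>>>>>>>>>>        (setf *add-thunk*
>>>> >>>>>>>>>>>              (if (and (typep a 'fixnum) (typep b 'fixnum))
>>>> >>>>>>>>>>>                  (compile nil '(lambda (a b)
>>>> >>>>>>>>>>>                                  (declare (fixnum a b))
>>>> >>>>>>>>>>>                                  (+ a b)))
>>>> >>>>>>>>>>>                  #'+))   ; fall back to the general +
>>>> >>>>>>>>>>>        (funcall *add-thunk* a b)))
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>>    ;; (funcall *add-thunk* 2 3) => 5, specializing on first use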
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>> It turns out that the "threaded interpreted" language model
>>>> >>>>>>>>>>> used in Forth implementations provides an efficient and
>>>> >>>>>>>>>>> effective way to do this. [0] Type resolution can be
>>>> >>>>>>>>>>> "inserted" in intermediate thunks. The model also supports
>>>> >>>>>>>>>>> dynamic overloading and tail recursion.
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>> Combining high-level CLOS code with low-level threading
>>>> >>>>>>>>>>> gives an easy-to-understand and robust design.
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>> Tim
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>> [0] Loeliger, R.G. "Threaded Interpretive Languages" (1981)
>>>> >>>>>>>>>>> ISBN 0-07-038360-X
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>> On 2/5/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> >>>>>>>>>>>> I've worked hard to make Axiom depend on almost no other
>>>> >>>>>>>>>>>> tools so that it would not get caught by "code rot" of
>>>> >>>>>>>>>>>> libraries.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> However, I'm also trying to make the new SANE version much
>>>> >>>>>>>>>>>> easier to understand and debug. To that end I've been
>>>> >>>>>>>>>>>> experimenting with some ideas.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> It should be possible to view source code, of course. But
>>>> >>>>>>>>>>>> the source code is not the only, nor possibly the best,
>>>> >>>>>>>>>>>> representation of the ideas. In particular, source code
>>>> >>>>>>>>>>>> gets compiled into data structures. In Axiom these data
>>>> >>>>>>>>>>>> structures really are a graph of related structures.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> For example, looking at the gcd function from NNI, there
>>>> >>>>>>>>>>>> is the representation of the gcd function itself. But there
>>>> >>>>>>>>>>>> is also a structure that is the REP (and, in the new
>>>> >>>>>>>>>>>> system, is separate from the domain).
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> Further, there are associated specification and proof
>>>> >>>>>>>>>>>> structures. Even further, the domain inherits the category
>>>> >>>>>>>>>>>> structures, and from those it inherits logical axioms and
>>>> >>>>>>>>>>>> definitions through the proof structure.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> Clearly the gcd function is a node in a much larger graph
>>>> >>>>>>>>>>>> structure.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> When trying to decide why code won't compile it would be
>>>> >>>>>>>>>>>> useful to be able to see and walk these structures. I've
>>>> >>>>>>>>>>>> thought about using the browser but browsers are too weak.
>>>> >>>>>>>>>>>> Either everything has to be "in a single tab to show the
>>>> >>>>>>>>>>>> graph" or "the nodes of the graph are in different tabs".
>>>> >>>>>>>>>>>> Plus, constructing dynamic graphs that change as the
>>>> >>>>>>>>>>>> software changes (e.g. by loading a new spad file or
>>>> >>>>>>>>>>>> creating a new function) represents the huge problem of
>>>> >>>>>>>>>>>> keeping the browser "in sync with the Axiom workspace".
>>>> >>>>>>>>>>>> So something more dynamic and embedded is needed.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> Axiom source gets compiled into CLOS data structures. Each
>>>> >>>>>>>>>>>> of these new SANE structures has an associated surface
>>>> >>>>>>>>>>>> representation, so they can be presented in user-friendly
>>>> >>>>>>>>>>>> form.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> Also, since Axiom is literate software, it should be
>>>> >>>>>>>>>>>> possible to look at the code in its literate form with the
>>>> >>>>>>>>>>>> surrounding explanation.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> Essentially we'd like to have the ability to "deep dive"
>>>> >>>>>>>>>>>> into the Axiom workspace, not only for debugging, but also
>>>> >>>>>>>>>>>> for understanding what functions are used, where they come
>>>> >>>>>>>>>>>> from, what they inherit, and how they are used in a
>>>> >>>>>>>>>>>> computation.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> To that end I'm looking at using McClim, a lisp windowing
>>>> >>>>>>>>>>>> system. Since the McClim windows would be part of the lisp
>>>> >>>>>>>>>>>> image, they have access to display (and modify) the Axiom
>>>> >>>>>>>>>>>> workspace at all times.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> The only hesitation is that McClim uses quicklisp and drags
>>>> >>>>>>>>>>>> in a lot of other subsystems. It's all lisp, of course.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> These ideas aren't new. They were available on Symbolics
>>>> >>>>>>>>>>>> machines, a truly productive platform and one I sorely miss.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> Tim
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> On 1/19/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> >>>>>>>>>>>>> Also of interest is the talk
>>>> >>>>>>>>>>>>> "The Unreasonable Effectiveness of Dynamic Typing for
>>>> >>>>>>>>>>>>> Practical
>>>> >>>>>>>>>>>>> Programs"
>>>> >>>>>>>>>>>>> https://vimeo.com/74354480
>>>> >>>>>>>>>>>>> which questions whether static typing really has any benefit.
>>>> >>>>>>>>>>>>>
>>>> >>>>>>>>>>>>> Tim
>>>> >>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>
>>>> >>>>>>>>>>>>> On 1/19/21, Tim Daly <axiom...@gmail.com> wrote:
>>>> >>>>>>>>>>>>>> Peter Naur wrote an article of interest:
>>>> >>>>>>>>>>>>>> http://pages.cs.wisc.edu/~remzi/Naur.pdf
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> In particular, it mirrors my notion that Axiom needs
>>>> >>>>>>>>>>>>>> to embrace literate programming so that the "theory
>>>> >>>>>>>>>>>>>> of the problem" is presented as well as the "theory
>>>> >>>>>>>>>>>>>> of the solution". I quote the introduction:
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> This article is, to my mind, the most accurate account
>>>> >>>>>>>>>>>>>> of what goes on in designing and coding a program.
>>>> >>>>>>>>>>>>>> I refer to it regularly when discussing how much
>>>> >>>>>>>>>>>>>> documentation to create, how to pass along tacit
>>>> >>>>>>>>>>>>>> knowledge, and the value of the XP's metaphor-setting
>>>> >>>>>>>>>>>>>> exercise. It also provides a way to examine a
>>>> >>>>>>>>>>>>>> methodology's economic structure.
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> In the article, which follows, note that the quality of
>>>> >>>>>>>>>>>>>> the designing programmer's work is related to the quality
>>>> >>>>>>>>>>>>>> of the match between his theory of the problem and his
>>>> >>>>>>>>>>>>>> theory of the solution. Note that the quality of a later
>>>> >>>>>>>>>>>>>> programmer's work is related to the match between his
>>>> >>>>>>>>>>>>>> theories and the previous programmer's theories.
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> Using Naur's ideas, the designer's job is not to pass
>>>> >>>>>>>>>>>>>> along "the design" but to pass along "the theories"
>>>> >>>>>>>>>>>>>> driving the design. The latter goal is more useful and
>>>> >>>>>>>>>>>>>> more appropriate. It also highlights that knowledge of
>>>> >>>>>>>>>>>>>> the theory is tacit in the owning, and so passing along
>>>> >>>>>>>>>>>>>> the theory requires passing along both explicit and
>>>> >>>>>>>>>>>>>> tacit knowledge.
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> Tim
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>
>>>> >>>>>>>>>
>>>> >>>>>>>>
>>>> >>>>>>>
>>>> >>>>>>
>>>> >>>>>
>>>> >>>>
>>>> >>>
>>>> >>
>>>> >
>>>>
>>>
