Re: Axiom musings ... runtime computation of first-class dependent types

2022-05-30 Thread Martin Baker

Tim,

Thanks for your reply; lots to think about in this subject, isn't there?
Like you I'm attempting the daunting (but interesting) path of trying to 
understand this subject better.



Dependent types allow functions to exist in the type definition. The
functions can be executed "at run time". Current languages that have
support for dependent types such as Agda and Idris use very, very
limited forms of this facility. They only extend the type in certain
carefully defined ways that allow the compiler to type check the code.


Yes, as you say, natural numbers (and related structures such as lists) 
have very nice properties. Idris does a good job of seamlessly 
converting between the Peano form (for proofs) and the usual form (for 
efficient algebra).
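
(For reference, the Peano form in Idris is just

data Nat = Z | S Nat

and the compiler stores Nat values as machine/GMP integers at run
time, which is the seamless conversion mentioned above.)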


I like this idea that the representation holds the full information 
about the domain, so it's fully defined for proofs as well as holding 
the data for algebra.


As a thought experiment, I am trying to work out what it would take for 
this to be done on all mathematical structures.


As a first step, could this be extended to integers?

We could say that Integer = Nat + Nat. In spad terms that would be:
Rep := Union(negative : NNI, positive : NNI)

Or Integer = 2*Nat. In spad terms that would be:
Rep := Record(sign : Bool, value : NNI)

This would provide a structure to store an integer, so it would work 
for the algebra, but it doesn't seem to fully define an integer for 
proofs, in that it doesn't glue the negative and positive Nats 
together. For that we seem to need equations like:


S (-1) = 0
P 0 = -1
S P = P S = Id

where:
S is successor
P is predecessor

As soon as we add equations to the representation we lose some of 
the nice properties: more complexity, no canonical form.
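
To see the canonical form problem concretely, here is a minimal sketch 
in Idris of the Record form above (SInt and MkSInt are hypothetical 
names): both MkSInt True 0 and MkSInt False 0 denote zero, so 
structural equality is wrong until we normalise.

data SInt : Type where
  MkSInt : (negative : Bool) -> (magnitude : Nat) -> SInt

normalise : SInt -> SInt
normalise (MkSInt _ Z) = MkSInt False Z   -- collapse "-0" to "+0"
normalise x = x

eqSInt : SInt -> SInt -> Bool
eqSInt x y = case (normalise x, normalise y) of
  (MkSInt s1 m1, MkSInt s2 m2) => s1 == s2 && m1 == m2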


At each step we seem to lose nice properties. When we get to fractions:

Rep := Record(num : IntegralDomain, den : IntegralDomain)

Then we seem to need a normalisation algorithm to put them in normal 
form so we can test for equality.
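
A sketch of that normalisation in Idris, assuming Integer entries and 
a nonzero denominator: divide out the gcd and force the denominator 
positive, after which equality is just structural equality. (Frac and 
gcdI are hypothetical names.)

record Frac where
  constructor MkFrac
  num : Integer
  den : Integer

gcdI : Integer -> Integer -> Integer   -- Euclid, on non-negative args
gcdI a 0 = a
gcdI a b = gcdI b (a `mod` b)

normalise : Frac -> Frac
normalise (MkFrac n d) =
  let g = gcdI (abs n) (abs d)
      s = if d < 0 then -1 else 1
  in MkFrac ((s * n) `div` g) ((s * d) `div` g)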


I still think there is something very powerful about combining algebra 
and proofs in this way so I would be interested to know if you have 
found a path through all this?


Martin



Re: Axiom musings ... runtime computation of first-class dependent types

2022-05-29 Thread Tim Daly
Martin,

>In this series of videos Robert Harper gives a provocative take on
>these subjects:
>https://www.youtube.com/watch?v=LE0SSLizYUI


I watched Harper's videos years ago. I will revisit them. If you haven't
seen them I highly recommend them. Indeed, the whole series of
lectures from professors at OPLSS is well worth the time.



>At about 37 min into this video he says: "what is true has unbounded
>quantifier complexity whereas any formalism is always relentlessly
>recursively enumerable" - Gödel's theorem.

Yes, in general, the quantifier complexity is unbounded. But for a given
particular case it is not. Can a system always shape itself to the
particular problem using only bounded quantifiers?

I previously made an argument that while Godel used self-reference
he did not use self-modification. I conjectured that self-modification could
weaken his conclusion. The response was that self-modification could
be modeled in a type system and would have no effect.

I'm still unconvinced, but that's the conclusion from people who study
types for a living, so I'm almost certainly wrong.

The topic is still open in my mind and I'm spending a bit of time reading
everything I can about the Godel proof. [3,4,5] Self-modifying code
involves time. I need to study time-related type theory.

There is also the question of embedding self-modifying code in some
Godel-like numbering system (or something equally clever I'm probably
not capable of inventing).

My current thinking is to find some way to write Godel's proof in a
"Game of Life" automata. There are whole computers that have been
embedded in these systems. If I could embed Godel and a computer in the
same space with some clever encoding I'd have a way of executing programs
that reference themselves. [0,1,2]




>Do you think that dynamic types and/or the sort of self modifying
>programs you are suggesting are necessary to model all of mathematics?

A model of all of mathematics seems to be the goal of homotopy
type theory. Despite numerous online courses and various readings
it is still beyond my understanding.

Dependent types allow functions to exist in the type definition. The
functions can be executed "at run time". Current languages that have
support for dependent types such as Agda and Idris use very, very
limited forms of this facility. They only extend the type in certain
carefully defined ways that allow the compiler to type check the code.

These "bottom up" limited extensions will never reach high enough to
type check Axiom code. One of the reasons is the artificial split between
compile-time and run-time. What the compiler can't type check in general,
the runtime can type check in particular.

I think that type erasure is one of the goals but Axiom carries types
all the time and can create new types "on the fly" so type erasure
is a non-issue for me. I'm not even sure that type erasure is meaningful
in a dependent type system where types are first-class.

I'm merging the compiler-interpreter into a single thing, trying to do as
much type checking as possible at any point, some of which has to be
pushed to the last instant.

Further, I'm able to invoke the entire system (it is all in Lisp) as
"the function" evident in some type declarations. The call gives Lisp
the type being defined, the program being typed, and the entire
environment containing definitions and axioms.

In order for this to work at all, the various Axiom functions need
really good specifications. The specifications focus the proof search.
For instance, GCD is a triple-case test such as

Given x:Nat and y:Nat and updated copies x' and y' repeat case
(x > y) and (x' = x - y) and (y' = y)
(y > x) and (y' = y - x) and (x' = x)
(x = y) done

we now have essentially a state machine with monotonically decreasing
values (a sketch of this subtraction form appears below). Note that
there are about 20 domain-specific GCD algorithms in Axiom, such as
the polynomial GCD. Is there a general specification for all of the
algorithms, or are additional definitions needed in scope? That's
still to be researched. Is there a way to "factor" the specifications?
Again, another research question I can't yet answer.
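
A minimal sketch in Idris of the triple-case form above, purely for
illustration (gcdSpec is a hypothetical name, and zero cases are added
so the function covers all of Nat). The sum x + y strictly decreases
on every recursive call, which is the monotone measure mentioned above
(a measure the structural totality checker cannot see directly).

gcdSpec : Nat -> Nat -> Nat
gcdSpec Z y = y
gcdSpec x Z = x
gcdSpec x y = case compare x y of
  GT => gcdSpec (minus x y) y   -- (x > y) and (x' = x - y) and (y' = y)
  LT => gcdSpec x (minus y x)   -- (y > x) and (y' = y - x) and (x' = x)
  EQ => x                       -- (x = y) done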

In Axiom-speak, NAT is NonNegativeInteger. Axiom conflates the data
structure, known in logic as the carrier, with the domain, known in logic
as the type. In SANE the data structure and domain are separate so it
should be possible to write specifications over multiple "carriers".

Another item on the "ponder list" is how to dynamically construct CLOS
types that carry the definitions, axioms, and type structure. I'm still
at the stage of embedding LEAN axioms into a hierarchy similar to the
existing Axiom hierarchy.

As for modeling all of mathematics, my goal is only computer algebra.
Rumor has it that programs are proofs. If that's assumed, I have
several hundred programs (proofs) in Axiom. I'm trying to unify the
programs with their more formal proofs.


>These videos are 4 years old, do you 

Re: Axiom musings ... runtime computation of first-class dependent types

2022-05-29 Thread Martin Baker

Tim

In this series of videos Robert Harper gives a provocative take on 
these subjects:

https://www.youtube.com/watch?v=LE0SSLizYUI

At about 37 min into this video he says: "what is true has unbounded 
quantifier complexity whereas any formalism is always relentlessly 
recursively enumerable" - Gödel's theorem.


"there is a distinction between proof and truth"
(by formalism here I think he means formal logic?)

It looks to me like the programming style he is putting forward is 
dynamic typing? I can't find it now, in these videos, but I think he 
might have suggested that this style of programming is required to model 
all of mathematics (be foundational)? Even cubical type theory might not 
be foundational?


Do you think that dynamic types and/or the sort of self modifying 
programs you are suggesting are necessary to model all of mathematics?


These videos are 4 years old, do you know if this research has come to 
any conclusions on these issues?


MJB

On 29/05/2022 08:50, Tim Daly wrote:

Well, SANE is certainly turning into a journey.

When we look at the idea of first-class dependent types
as implemented in languages like Agda and Idris we find
what I call "Peano types". For example, Fin is a type
of "finite numbers", usually defined as

data Fin : Nat -> Type where
   FZ : Fin (S n)             -- the base case
   FS : Fin n -> Fin (S n)    -- the recursive case

Fin 7 describes 7 numbers 0..6

This construction is popular because the compiler can
compute that the type construct is total. Thus the
type construct is its own proof.
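
One payoff (a sketch, using index : Fin n -> Vect n a -> a from
Idris's Data.Vect library): an index of type Fin n can never run off
the end of a Vect n, so no run-time bounds check is needed.

import Data.Vect

third : Vect 7 Int -> Int
third v = index (FS (FS FZ)) v   -- FS (FS FZ) : Fin 7, i.e. index 2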

A dependent type allows computation to occur during
its construction. The canonical example is defining
a list of length m, which depends on the value of m.
Another list of length n is a separate type. An
append function of these lists computes a new typed
list of length (m + n).
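
A minimal sketch of that append in Idris (essentially Data.Vect.(++)
from the standard library); the compiler checks that the result length
really is m + n:

import Data.Vect

append : Vect m a -> Vect n a -> Vect (m + n) a
append []        ys = ys
append (x :: xs) ys = x :: append xs ys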

The current approach, Agda, Idris, etc., is "bottom up"
where restrictions are placed which will allow proofs
in all contexts.

There are excellent reasons for such carefully
styled dependent types.

Reading the literature you come across the famous
Dan Friedman admonition "Recursion is not allowed",
usually in reference to his Pie language [0]. While
Pie does not allow it, he admits 2 kinds of recursion
(pp 358-359).

The general case of first-class dependent types I'm
considering is capable of much broader and more
ill-founded behavior. This is deliberate because some
of the mathematics in Axiom can only be proven correct
under limited cases. These cases have to be constructed
"on the fly" from more general dependent type definitions.

For example, a type might not be total in general but
it may be possible to compute a bounded version of the
type when used in context. This can only be decided at
run time with the given parameters in the given context.
In general, this means computing that "in this context
with these bindings the result can be proven".

This general case occurs because a SANE type G is able to
invoke all of lisp with an environment that contains the
type G under construction as well as its environment
which contains the program. This allows the type G to self
reference and self modify based on context.

The general type G:List(G,REPL) in the context of a statement
M:G := (append '(1 2 3) '(4 5 6 7)) is effectively typed as
M:List(7).

The SANE type G is neither sound nor complete but List(7) is.
The game is to construct a contextually dependent type and
the associated proof "on the fly".

An intuitive Gedankenexperiment is defining an arm for a
general robot. One could have a 2 arm robot defined as
Robot : TwoArm(Robot,Repl).

The problem, based on context, might generate each of the
two arms separately, one with 7 degrees of freedom to
reach the whole of the workspace (in context) and a second
4-degree-of-freedom arm that can position and hold a part in
the reach of the first arm. The same TwoArm type might resolve
to a different configuration in a different context.

Self-reference is already a metamathematical problem
(witness Godel's proof). Self-modification is, as Monty
Python famously says, "right out". One might argue with
Godel but contradicting Python is heresy.

The SANE game is to create the general case of first-class
dependent types with such abilities and then find certain
restrictions as necessary to try to construct the proof
in a given context. This SOUNDs COMPLETEly crazy, of course.

One has to construct a lisp program "on the fly" in the
dependent type context, prove it correct, and return the
new type with the type-program and proof. Values, during the
run-time computation, may have to be dynamically substituted
in the type-program, re-running the proof with those values.

At best this ranges from the nearly impossible to horribly
inefficient. We all know this can never work.

This seems necessary as Axiom's computer algebra algorithms
were never developed with proof in mind. It seems necessary
to start "from the top" and work downward to the instance
of 

Re: Axiom musings ... Hamming and SANE

2022-05-05 Thread Tim Daly
... [snip] ...

I've been scratching at Wittgenstein, the Vienna Circle,
and Godel. It got me thinking about the notion of Type
hierarchies and the current view of Type theory, specifically
its use to eliminate paradox.

Which leads my thoughts to the current fad of types and
functional programming.

I think the whole notion of types and functional programming
eliminates the "interesting" part of computing. Specifically,
it attempts to eliminate self-reference and its most interesting
aspect of self-modification.

Systems that can't self-reference and self-modify are sterile.

I'm beginning to think that the key flaw in Godel's 2 theorems
is that they don't take self-reference and self-modification into
account. At this point it's just not clear to me how to express
self-reference and self-modification in Godel numbering.

... [snip] ...

Tim

On Wed, Apr 27, 2022 at 9:22 AM Tim Daly  wrote:

> ... [snip] ...
>
> I guess that's where we differ. Axiom is a piece of research software.
>
> Axiom was never intended to be a product. IBM gave us two choices. Either
> find someone to market it (NAG) or throw it away. FOSS software did
> not exist at that time and we were not allowed to work on it in any
> capacity once it was sold. That stopped the research but not the desire.
>
> On the "little problem" front I have many things I would like to do with
> Axiom. For example, I have a hierarchical graph of about 50 different
> kinds of matrices arranged so Axiom's inheritance would fit perfectly.
> Within that set are things like unitary matrices which would have just
> the algorithms you need to do quantum computers. The DHMATRIX
> domain would be usefully extended to drive the several robots I have
> in my office. Fourier and Laplace transform algorithms would be very
> useful for robot dynamics. Graphics algorithms should be extended
> to show Bode plots. The graphics world could be extended to do 3D solid
> models to use with my 3D printer. Quantum-safe encryption uses
> lattices which would fit well with skew matrix algorithms. There are
> a lot of "little problems" I wish I had time to work on.
>
> Richard Hamming (of Hamming distance fame) has a youtube video [0].
> He used to ask his Bell Labs colleagues "What are the fundamental
> problems in your area and why aren't you working on them?"
>
> So what are the "fundamental problems" of combining computers and math?
> Why aren't you working on them?
>
> I see two streams of computer mathematics. The computer algebra
> stream started in the 60s/70s and uses the "It works for me" approach.
> The proof stream started in the 90s and uses the "Let's prove my
> current theorem". The two streams have only James Davenport
> in common.
>
> "Computational Mathematics" would be a combination of both of these
> streams. Proofs would make Axiom less ad-hoc. Proven computer
> algebra algorithms would vastly expand proof assistants with trusted
> results.
>
> I view "Computational Mathematics", combining computer algebra
> and proofs as one of the "fundamental problems" of this era. The
> SANE version of Axiom is my attack on a fundamental problem.
>
> Axiom is the best research platform available to show both camps
> what is possible in future mathematics. Axiom showed the power that
> comes from using mathematics as a scaffold architecture guiding
> the organization. Grounding the group theory scaffold with proofs
> will make Axiom a much more solid structure for future mathematics.
>
> That said, my real motivation is that this research is fun.
> I am pretty sure nobody but me will ever care.
> I'm a "research rat" by nature.
>
> Tim
>
> [0] You and Your Research
> https://www.youtube.com/watch?v=a1zDuOPkMSw
>
>
>


Re: Axiom musings...

2021-11-13 Thread Tim Daly
Full support for general, first-class dependent types requires
some changes to the Axiom design. That implies some language
design questions.

Given that mathematics is such a general subject with a lot of
"local" notation and ideas (witness logical judgment notation)
careful thought is needed to design a language that is able to
handle a wide range of notations.

Normally language design is a two-level process. The language
designer creates a language and then an implementation. Various
design choices affect the final language.

There is "The Metaobject Protocol" (MOP)
https://www.amazon.com/Art-Metaobject-Protocol-Gregor-Kiczales/dp/0262610744
which encourages a three-level process. The language designer
works at a Metalevel to design a family of languages, then the
language specializations, then the implementation. A MOP design
allows the language user to optimize the language to their problem.

A simple paper on the subject is "Metaobject Protocols"
https://users.cs.duke.edu/~vahdat/ps/mop.pdf

Tim


On Mon, Oct 25, 2021 at 7:42 PM Tim Daly  wrote:

> I have a separate thread of research on Self-Replicating Systems
> (ref: Kinematics of Self Reproducing Machines
> http://www.molecularassembler.com/KSRM.htm)
>
> which led to watching "Strange Dreams of Stranger Loops" by Will Byrd
> https://www.youtube.com/watch?v=AffW-7ika0E
>
> Will referenced a PhD Thesis by Jon Doyle
> "A Model for Deliberation, Action, and Introspection"
>
> I also read the thesis by J.C.G. Sturdy
> "A Lisp through the Looking Glass"
>
> Self-replication requires the ability to manipulate your own
> representation in such a way that changes to that representation
> will change behavior.
>
> This leads to two thoughts in the SANE research.
>
> First, "Declarative Representation". That is, most of the things
> about the representation should be declarative rather than
> procedural. Applying this idea as much as possible makes it
> easier to understand and manipulate.
>
> Second, "Explicit Call Stack". Function calls form an implicit
> call stack. This can usually be displayed in a running lisp system.
> However, having the call stack explicitly available would mean
> that a system could "introspect" at the first-class level.
>
> These two ideas would make it easy, for example, to let the
> system "show the work". One of the normal complaints is that
> a system presents an answer but there is no way to know how
> that answer was derived. These two ideas make it possible to
> understand, display, and even post-answer manipulate
> the intermediate steps.
>
> Having the intermediate steps also allows proofs to be
> inserted in a step-by-step fashion. This aids the effort to
> have proofs run in parallel with computation at the hardware
> level.
>
> Tim
>
>
>
>
>
>
> On Thu, Oct 21, 2021 at 9:50 AM Tim Daly  wrote:
>
>> So the current struggle involves the categories in Axiom.
>>
>> The categories and domains constructed using categories
>> are dependent types. When are dependent types "equal"?
>> Well, hu, that depends on the arguments to the
>> constructor.
>>
>> But in order to decide(?) equality we have to evaluate
>> the arguments (which themselves can be dependent types).
>> Indeed, we may, and in general, we must evaluate the
>> arguments at compile time (well, "construction time" as
>> there isn't really a compiler / interpreter separation anymore.)
>>
>> That raises the question of what "equality" means. This
>> is not simply a "set equality" relation. It falls into the
>> infinite-groupoid of homotopy type theory. In general
>> it appears that deciding category / domain equivalence
>> might force us to climb the type hierarchy.
>>
>> Beyond that, there is the question of "which proof"
>> applies to the resulting object. Proofs depend on their
>> assumptions which might be different for different
>> constructions. As yet I have no clue how to "index"
>> proofs based on their assumptions, nor how to
>> connect these assumptions to the groupoid structure.
>>
>> My brain hurts.
>>
>> Tim
>>
>>
>> On Mon, Oct 18, 2021 at 2:00 AM Tim Daly  wrote:
>>
>>> "Birthing Computational Mathematics"
>>>
>>> The Axiom SANE project is difficult at a very fundamental
>>> level. The title "SANE" was chosen due to the various
>>> words found in a thesaurus... "rational", "coherent",
>>> "judicious" and "sound".
>>>
>>> These are very high level, amorphous ideas. But so is
>>> the design of SANE. Breaking away from tradition in
>>> computer algebra, type theory, and proof assistants
>>> is very difficult. Ideas tend to fall into standard jargon
>>> which limits both the frame of thinking (e.g. dependent
>>> types) and the content (e.g. notation).
>>>
>>> Questioning both frame and content is very difficult.
>>> It is hard to even recognize when they are accepted
>>> "by default" rather than "by choice". What does the idea
>>> "power tools" mean in a primitive, hand labor culture?
>>>
>>> Christopher Alexander [0] addresses this problem in
>>> a lot of his 

Re: Axiom musings...

2021-10-25 Thread Tim Daly
I have a separate thread of research on Self-Replicating Systems
(ref: Kinematics of Self Reproducing Machines
http://www.molecularassembler.com/KSRM.htm)

which led to watching "Strange Dreams of Stranger Loops" by Will Byrd
https://www.youtube.com/watch?v=AffW-7ika0E

Will referenced a PhD Thesis by Jon Doyle
"A Model for Deliberation, Action, and Introspection"

I also read the thesis by J.C.G. Sturdy
"A Lisp through the Looking Glass"

Self-replication requires the ability to manipulate your own
representation in such a way that changes to that representation
will change behavior.

This leads to two thoughts in the SANE research.

First, "Declarative Representation". That is, most of the things
about the representation should be declarative rather than
procedural. Applying this idea as much as possible makes it
easier to understand and manipulate.

Second, "Explicit Call Stack". Function calls form an implicit
call stack. This can usually be displayed in a running lisp system.
However, having the call stack explicitly available would mean
that a system could "introspect" at the first-class level.

These two ideas would make it easy, for example, to let the
system "show the work". One of the normal complaints is that
a system presents an answer but there is no way to know how
that answer was derived. These two ideas make it possible to
understand, display, and even post-answer manipulate
the intermediate steps.
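
A toy sketch of the explicit-call-stack idea (written in Idris-style
notation rather than Lisp, purely for illustration; Frame and
factTrace are hypothetical names). Each call pushes a frame that the
system can later inspect to "show the work":

record Frame where
  constructor MkFrame
  name : String
  args : List Nat

factTrace : Nat -> List Frame -> (Nat, List Frame)
factTrace Z     stack = (1, MkFrame "fact" [Z] :: stack)
factTrace (S k) stack =
  let (r, stack') = factTrace k (MkFrame "fact" [S k] :: stack)
  in (S k * r, stack')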

Having the intermediate steps also allows proofs to be
inserted in a step-by-step fashion. This aids the effort to
have proofs run in parallel with computation at the hardware
level.

Tim






On Thu, Oct 21, 2021 at 9:50 AM Tim Daly  wrote:

> So the current struggle involves the categories in Axiom.
>
> The categories and domains constructed using categories
> are dependent types. When are dependent types "equal"?
> Well, hu, that depends on the arguments to the
> constructor.
>
> But in order to decide(?) equality we have to evaluate
> the arguments (which themselves can be dependent types).
> Indeed, we may, and in general, we must evaluate the
> arguments at compile time (well, "construction time" as
> there isn't really a compiler / interpreter separation anymore.)
>
> That raises the question of what "equality" means. This
> is not simply a "set equality" relation. It falls into the
> infinite-groupoid of homotopy type theory. In general
> it appears that deciding category / domain equivalence
> might force us to climb the type hierarchy.
>
> Beyond that, there is the question of "which proof"
> applies to the resulting object. Proofs depend on their
> assumptions which might be different for different
> constructions. As yet I have no clue how to "index"
> proofs based on their assumptions, nor how to
> connect these assumptions to the groupoid structure.
>
> My brain hurts.
>
> Tim
>
>
> On Mon, Oct 18, 2021 at 2:00 AM Tim Daly  wrote:
>
>> "Birthing Computational Mathematics"
>>
>> The Axiom SANE project is difficult at a very fundamental
>> level. The title "SANE" was chosen due to the various
>> words found in a thesaurus... "rational", "coherent",
>> "judicious" and "sound".
>>
>> These are very high level, amorphous ideas. But so is
>> the design of SANE. Breaking away from tradition in
>> computer algebra, type theory, and proof assistants
>> is very difficult. Ideas tend to fall into standard jargon
>> which limits both the frame of thinking (e.g. dependent
>> types) and the content (e.g. notation).
>>
>> Questioning both frame and content is very difficult.
>> It is hard to even recognize when they are accepted
>> "by default" rather than "by choice". What does the idea
>> "power tools" mean in a primitive, hand labor culture?
>>
>> Christopher Alexander [0] addresses this problem in
>> a lot of his writing. Specifically, in his book "Notes on
>>> the Synthesis of Form", in his chapter 5 "The Selfconscious
>> Process", he addresses this problem directly. This is a
>> "must read" book.
>>
>>> Unlike building design and construction, however, there
>> are almost no constraints to use as guides. Alexander
>> quotes Plato's Phaedrus:
>>
>>   "First, the taking in of scattered particulars under
>>one Idea, so that everyone understands what is being
>>talked about ... Second, the separation of the Idea
>>into parts, by dividing it at the joints, as nature
>>directs, not breaking any limb in half as a bad
>>carver might."
>>
>> Lisp, which has been called "clay for the mind" can
>> build virtually anything that can be thought. The
>> "joints" are also "of one's choosing" so one is
>> both carver and "nature".
>>
>> Clearly the problem is no longer "the tools".
>> *I* am the problem constraining the solution.
>> Birthing this "new thing" is slow, difficult, and
>> uncertain at best.
>>
>> Tim
>>
>> [0] Alexander, Christopher "Notes on the Synthesis
>> of Form" Harvard University Press 1964
>> ISBN 0-674-62751-2
>>
>>
>> On Sun, Oct 10, 2021 at 4:40 PM Tim Daly  

Re: Axiom musings...

2021-10-21 Thread Tim Daly
So the current struggle involves the categories in Axiom.

The categories and domains constructed using categories
are dependent types. When are dependent types "equal"?
Well, hu, that depends on the arguments to the
constructor.

But in order to decide(?) equality we have to evaluate
the arguments (which themselves can be dependent types).
Indeed, we may, and in general, we must evaluate the
arguments at compile time (well, "construction time" as
there isn't really a compiler / interpreter separation anymore.)

That raises the question of what "equality" means. This
is not simply a "set equality" relation. It falls into the
infinite-groupoid of homotopy type theory. In general
it appears that deciding category / domain equivalence
might force us to climb the type hierarchy.

Beyond that, there is the question of "which proof"
applies to the resulting object. Proofs depend on their
assumptions which might be different for different
constructions. As yet I have no clue how to "index"
proofs based on their assumptions, nor how to
connect these assumptions to the groupoid structure.

My brain hurts.

Tim


On Mon, Oct 18, 2021 at 2:00 AM Tim Daly  wrote:

> "Birthing Computational Mathematics"
>
> The Axiom SANE project is difficult at a very fundamental
> level. The title "SANE" was chosen due to the various
> words found in a thesaurus... "rational", "coherent",
> "judicious" and "sound".
>
> These are very high level, amorphous ideas. But so is
> the design of SANE. Breaking away from tradition in
> computer algebra, type theory, and proof assistants
> is very difficult. Ideas tend to fall into standard jargon
> which limits both the frame of thinking (e.g. dependent
> types) and the content (e.g. notation).
>
> Questioning both frame and content is very difficult.
> It is hard to even recognize when they are accepted
> "by default" rather than "by choice". What does the idea
> "power tools" mean in a primitive, hand labor culture?
>
> Christopher Alexander [0] addresses this problem in
> a lot of his writing. Specifically, in his book "Notes on
> the Synthesis of Form", in his chapter 5 "The Selfconscious
> Process", he addresses this problem directly. This is a
> "must read" book.
>
> Unlike building design and construction, however, there
> are almost no constraints to use as guides. Alexander
> quotes Plato's Phaedrus:
>
>   "First, the taking in of scattered particulars under
>one Idea, so that everyone understands what is being
>talked about ... Second, the separation of the Idea
>into parts, by dividing it at the joints, as nature
>directs, not breaking any limb in half as a bad
>carver might."
>
> Lisp, which has been called "clay for the mind" can
> build virtually anything that can be thought. The
> "joints" are also "of one's choosing" so one is
> both carver and "nature".
>
> Clearly the problem is no longer "the tools".
> *I* am the problem constraining the solution.
> Birthing this "new thing" is slow, difficult, and
> uncertain at best.
>
> Tim
>
> [0] Alexander, Christopher "Notes on the Synthesis
> of Form" Harvard University Press 1964
> ISBN 0-674-62751-2
>
>
> On Sun, Oct 10, 2021 at 4:40 PM Tim Daly  wrote:
>
>> Re: writing a paper... I'm not connected to Academia
>> so anything I'd write would never make it into print.
>>
>> "Language level parsing" is still a long way off. The talk
>> by Guy Steele [2] highlights some of the problems we
>> currently face using mathematical metanotation.
>>
>> For example, a professor I know at CCNY (City College
>> of New York) didn't understand Platzer's "funny
>> fraction notation" (proof judgements) despite being
>> an expert in Platzer's differential equations area.
>>
>> Notation matters and is not widely common.
>>
>> I spoke to Professor Black (in LTI) about using natural
>> language in the limited task of a human-robot cooperation
>> in changing a car tire.  I looked at the current machine
>> learning efforts. They are nowhere near anything but
>> toy systems, taking too long to train and are too fragile.
>>
>> Instead I ended up using a combination of AIML [3]
>> (Artificial Intelligence Markup Language), the ALICE
>> Chatbot [4], Forgy's OPS5 rule based program [5],
>> and Fahlman's SCONE [6] knowledge base. It was
>> much less fragile in my limited domain problem.
>>
>> I have no idea how to extend any system to deal with
>> even undergraduate mathematics parsing.
>>
>> Nor do I have any idea how I would embed LEAN
>> knowledge into a SCONE database, although I
>> think the combination would be useful and interesting.
>>
>> I do believe that, in the limited area of computational
>> mathematics, we are capable of building robust, proven
>> systems that are quite general and extensible. As you
>> might have guessed I've given it a lot of thought over
>> the years :-)
>>
>> A mathematical language seems to need >6 components
>>
>> 1) We need some sort of a specification language, possibly
>> somewhat 'propositional' 

Re: Axiom musings...

2021-10-18 Thread Tim Daly
"Birthing Computational Mathematics"

The Axiom SANE project is difficult at a very fundamental
level. The title "SANE" was chosen due to the various
words found in a thesaurus... "rational", "coherent",
"judicious" and "sound".

These are very high level, amorphous ideas. But so is
the design of SANE. Breaking away from tradition in
computer algebra, type theory, and proof assistants
is very difficult. Ideas tend to fall into standard jargon
which limits both the frame of thinking (e.g. dependent
types) and the content (e.g. notation).

Questioning both frame and content is very difficult.
It is hard to even recognize when they are accepted
"by default" rather than "by choice". What does the idea
"power tools" mean in a primitive, hand labor culture?

Christopher Alexander [0] addresses this problem in
a lot of his writing. Specifically, in his book "Notes on
the Synthesis of Form", in his chapter 5 "The Selfconscious
Process", he addresses this problem directly. This is a
"must read" book.

Unlike building design and construction, however, there
are almost no constraints to use as guides. Alexander
quotes Plato's Phaedrus:

  "First, the taking in of scattered particulars under
   one Idea, so that everyone understands what is being
   talked about ... Second, the separation of the Idea
   into parts, by dividing it at the joints, as nature
   directs, not breaking any limb in half as a bad
   carver might."

Lisp, which has been called "clay for the mind" can
build virtually anything that can be thought. The
"joints" are also "of one's choosing" so one is
both carver and "nature".

Clearly the problem is no longer "the tools".
*I* am the problem constraining the solution.
Birthing this "new thing" is slow, difficult, and
uncertain at best.

Tim

[0] Alexander, Christopher "Notes on the Synthesis
of Form" Harvard University Press 1964
ISBN 0-674-62751-2


On Sun, Oct 10, 2021 at 4:40 PM Tim Daly  wrote:

> Re: writing a paper... I'm not connected to Academia
> so anything I'd write would never make it into print.
>
> "Language level parsing" is still a long way off. The talk
> by Guy Steele [2] highlights some of the problems we
> currently face using mathematical metanotation.
>
> For example, a professor I know at CCNY (City College
> of New York) didn't understand Platzer's "funny
> fraction notation" (proof judgements) despite being
> an expert in Platzer's differential equations area.
>
> Notation matters and is not widely common.
>
> I spoke to Professor Black (in LTI) about using natural
> language in the limited task of a human-robot cooperation
> in changing a car tire.  I looked at the current machine
> learning efforts. They are nowhere near anything but
> toy systems, taking too long to train and are too fragile.
>
> Instead I ended up using a combination of AIML [3]
> (Artificial Intelligence Markup Language), the ALICE
> Chatbot [4], Forgy's OPS5 rule based program [5],
> and Fahlman's SCONE [6] knowledge base. It was
> much less fragile in my limited domain problem.
>
> I have no idea how to extend any system to deal with
> even undergraduate mathematics parsing.
>
> Nor do I have any idea how I would embed LEAN
> knowledge into a SCONE database, although I
> think the combination would be useful and interesting.
>
> I do believe that, in the limited area of computational
> mathematics, we are capable of building robust, proven
> systems that are quite general and extensible. As you
> might have guessed I've given it a lot of thought over
> the years :-)
>
> A mathematical language seems to need >6 components
>
> 1) We need some sort of a specification language, possibly
> somewhat 'propositional' that introduces the assumptions
> you mentioned (ref. your discussion of numbers being
> abstract and ref. your discussion of relevant choice of
> assumptions related to a problem).
>
> This is starting to show up in the hardware area (e.g.
> Lamport's TLC[0])
>
> Of course, specifications relate to proving programs
> and, as you recall, I got a cold reception from the
> LEAN community about using LEAN for program proofs.
>
> 2) We need "scaffolding". That is, we need a theory
> that can be reduced to some implementable form
> that provides concept-level structure.
>
> Axiom uses group theory for this. Axiom's "category"
> structure has "Category" things like Ring. Claiming
> to be a Ring brings in a lot of "Signatures" of functions
> you have to implement to properly be a Ring.
>
> Scaffolding provides a firm mathematical basis for
> design. It provides a link between the concept of a
> Ring and the expectations you can assume when
> you claim your "Domain" "is a Ring". Category
> theory might provide similar structural scaffolding
> (eventually... I'm still working on that thought garden)
>
> LEAN ought to have a textbook(s?) that structures
> the world around some form of mathematics. It isn't
> sufficient to say "undergraduate math" is the goal.
> There needs to be some coherent organization 

Re: Axiom musings...

2021-09-27 Thread Tim Daly
I have tried to maintain a list of names of people who have
helped Axiom, going all the way back to the pre-Scratchpad
days. The names are listed at the beginning of each book.
I also maintain a bibliography of publications I've read or
that have had an indirect influence on Axiom.

Credit is "the coin of the realm". It is easy to share and wrong
to ignore. It is especially damaging to those in Academia who
are affected by credit and citations in publications.

Apparently I'm not the only person who feels that way. The ACM
Turing award seems to have ignored a lot of work:

Scientific Integrity, the 2021 Turing Lecture, and the 2018 Turing
Award for Deep Learning
https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html

I worked on an AI problem at IBM Research called Ketazolam.
(https://en.wikipedia.org/wiki/Ketazolam). The idea was to recognize
and associate 3D chemical drawings with their drug counterparts.
I used Rumelhart and McClelland's books. These books contained
quite a few ideas that seem to be "new and innovative" among the
machine learning crowd... but the books are from 1987. I don't believe
I've seen these books mentioned in any recent bibliography.
https://mitpress.mit.edu/books/parallel-distributed-processing-volume-1




On 9/27/21, Tim Daly  wrote:
> Greg Wilson asked "How Reliable is Scientific Software?"
> https://neverworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html
>
> which is a really interesting read. For example:
>
>  [Hatton1994], is now a quarter of a century old, but its conclusions
> are still fresh. The authors fed the same data into nine commercial
> geophysical software packages and compared the results; they found
> that, "numerical disagreement grows at around the rate of 1% in
> average absolute difference per 4000 lines of implemented code, and,
> even worse, the nature of the disagreement is nonrandom" (i.e., the
> authors of different packages make similar mistakes).
>
>
> On 9/26/21, Tim Daly  wrote:
>> I should note that the latest board I've just unboxed
>> (a PYNQ-Z2) is a Zynq Z-7020 chip from Xilinx (AMD).
>>
>> What makes it interesting is that it contains 2 hard
>> core processors and an FPGA, connected by 9 paths
>> for communication. The processors can be run
>> independently so there is the possibility of a parallel
>> version of some Axiom algorithms (assuming I had
>> the time, which I don't).
>>
>> Previously either the hard (physical) processor was
>> separate from the FPGA with minimal communication
>> or the soft core processor had to be created in the FPGA
>> and was much slower.
>>
>> Now the two have been combined in a single chip.
>> That means that my effort to run a proof checker on
>> the FPGA and the algorithm on the CPU just got to
>> the point where coordination is much easier.
>>
>> Now all I have to do is figure out how to program this
>> beast.
>>
>> There is no such thing as a simple job.
>>
>> Tim
>>
>>
>> On 9/26/21, Tim Daly  wrote:
>>> I'm familiar with most of the traditional approaches
>>> like Theorema. The bibliography contains most of the
>>> more interesting sources. [0]
>>>
>>> There is a difference between traditional approaches to
>>> connecting computer algebra and proofs and my approach.
>>>
>>> Proving an algorithm, like the GCD, in Axiom is hard.
>>> There are many GCDs (e.g. NNI vs POLY) and there
>>> are theorems and proofs passed at runtime in the
>>> arguments of the newly constructed domains. This
>>> involves a lot of dependent type theory and issues of
>>> compile time / runtime argument evaluation. The issues
>>> that arise are difficult and still being debated in the type
>>> theory community.
>>>
>>> I am putting the definitions, theorems, and proofs (DTP)
>>> directly into the category/domain hierarchy. Each category
>>> will have the DTP specific to it. That way a commutative
>>> domain will inherit a commutative theorem and a
>>> non-commutative domain will not.
>>>
>>> Each domain will have additional DTPs associated with
>>> the domain (e.g. NNI vs Integer) as well as any DTPs
>>> it inherits from the category hierarchy. Functions in the
>>> domain will have associated DTPs.
>>>
>>> A function to be proven will then inherit all of the relevant
>>> DTPs. The proof will be attached to the function and
>>> both will be sent to the hardware (proof-carrying code).
>>>
>>> The proof checker, running on a field programmable
>>> gate array (FPGA), will be checked at runtime in
>>> parallel with the algorithm running on the CPU
>>> (aka "trust down to the metal"). (Note that Intel
>>> and AMD have built CPU/FPGA combined chips,
>>> currently only available in the cloud.)
>>>
>>>
>>>
>>> I am (slowly) making progress on the research.
>>>
>>> I have the hardware and nearly have the proof
>>> checker from LEAN running on my FPGA.
>>>
>>> I'm in the process of spreading the DTPs from
>>> LEAN across the category/domain hierarchy.
>>>
>>> The current Axiom build 

Re: Axiom musings...

2021-09-27 Thread Tim Daly
Greg Wilson asked "How Reliable is Scientific Software?"
https://neverworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html

which is a really interesting read. For example:

 [Hatton1994], is now a quarter of a century old, but its conclusions
are still fresh. The authors fed the same data into nine commercial
geophysical software packages and compared the results; they found
that, "numerical disagreement grows at around the rate of 1% in
average absolute difference per 4000 lines of implemented code, and,
even worse, the nature of the disagreement is nonrandom" (i.e., the
authors of different packages make similar mistakes).


On 9/26/21, Tim Daly  wrote:
> I should note that the latest board I've just unboxed
> (a PYNQ-Z2) is a Zynq Z-7020 chip from Xilinx (AMD).
>
> What makes it interesting is that it contains 2 hard
> core processors and an FPGA, connected by 9 paths
> for communication. The processors can be run
> independently so there is the possibility of a parallel
> version of some Axiom algorithms (assuming I had
> the time, which I don't).
>
> Previously either the hard (physical) processor was
> separate from the FPGA with minimal communication
> or the soft core processor had to be created in the FPGA
> and was much slower.
>
> Now the two have been combined in a single chip.
> That means that my effort to run a proof checker on
> the FPGA and the algorithm on the CPU just got to
> the point where coordination is much easier.
>
> Now all I have to do is figure out how to program this
> beast.
>
> There is no such thing as a simple job.
>
> Tim
>
>
> On 9/26/21, Tim Daly  wrote:
>> I'm familiar with most of the traditional approaches
>> like Theorema. The bibliography contains most of the
>> more interesting sources. [0]
>>
>> There is a difference between traditional approaches to
>> connecting computer algebra and proofs and my approach.
>>
>> Proving an algorithm, like the GCD, in Axiom is hard.
>> There are many GCDs (e.g. NNI vs POLY) and there
>> are theorems and proofs passed at runtime in the
>> arguments of the newly constructed domains. This
>> involves a lot of dependent type theory and issues of
>> compile time / runtime argument evaluation. The issues
>> that arise are difficult and still being debated in the type
>> theory community.
>>
>> I am putting the definitions, theorems, and proofs (DTP)
>> directly into the category/domain hierarchy. Each category
>> will have the DTP specific to it. That way a commutative
>> domain will inherit a commutative theorem and a
>> non-commutative domain will not.
>>
>> Each domain will have additional DTPs associated with
>> the domain (e.g. NNI vs Integer) as well as any DTPs
>> it inherits from the category hierarchy. Functions in the
>> domain will have associated DTPs.
>>
>> A function to be proven will then inherit all of the relevant
>> DTPs. The proof will be attached to the function and
>> both will be sent to the hardware (proof-carrying code).
>>
>> The proof checker, running on a field programmable
>> gate array (FPGA), will be checked at runtime in
>> parallel with the algorithm running on the CPU
>> (aka "trust down to the metal"). (Note that Intel
>> and AMD have built CPU/FPGA combined chips,
>> currently only available in the cloud.)
>>
>>
>>
>> I am (slowly) making progress on the research.
>>
>> I have the hardware and nearly have the proof
>> checker from LEAN running on my FPGA.
>>
>> I'm in the process of spreading the DTPs from
>> LEAN across the category/domain hierarchy.
>>
>> The current Axiom build extracts all of the functions
>> but does not yet have the DTPs.
>>
>> I have to restructure the system, including the compiler
>> and interpreter to parse and inherit the DTPs. I
>> have some of that code but only some of the code
>> has been pushed to the repository (volume 15) but
>> that is rather trivial, out of date, and incomplete.
>>
>> I'm clearly not smart enough to prove the Risch
>> algorithm and its associated machinery but the needed
>> definitions and theorems will be available to someone
>> who wants to try.
>>
>> [0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf
>>
>>
>> On 8/19/21, Tim Daly  wrote:
>>> =
>>>
>>> REVIEW (Axiom on WSL2 Windows)
>>>
>>>
>>> So the steps to run Axiom from a Windows desktop
>>>
>>> 1 Windows) install XMing on Windows for X11 server
>>>
>>> http://www.straightrunning.com/XmingNotes/
>>>
>>> 2 WSL2) Install Axiom in WSL2
>>>
>>> sudo apt install axiom
>>>
>>> 3 WSL2) modify /usr/bin/axiom to fix the bug:
>>> (someone changed the axiom startup script.
>>> It won't work on WSL2. I don't know who or
>>> how to get it fixed).
>>>
>>> sudo emacs /usr/bin/axiom
>>>
>>> (split the line into 3 and add quote marks)
>>>
>>> export SPADDEFAULT=/usr/local/axiom/mnt/linux
>>> export AXIOM=/usr/lib/axiom-20170501
>>> export "PATH=/usr/lib/axiom-20170501/bin:$PATH"
>>>
>>> 4 WSL2) create a .axiom.input file to include startup 

Re: Axiom musings...

2021-09-26 Thread Tim Daly
I should note that the latest board I've just unboxed
(a PYNQ-Z2) is a Zynq Z-7020 chip from Xilinx (AMD).

What makes it interesting is that it contains 2 hard
core processors and an FPGA, connected by 9 paths
for communication. The processors can be run
independently so there is the possibility of a parallel
version of some Axiom algorithms (assuming I had
the time, which I don't).

Previously either the hard (physical) processor was
separate from the FPGA with minimal communication
or the soft core processor had to be created in the FPGA
and was much slower.

Now the two have been combined in a single chip.
That means that my effort to run a proof checker on
the FPGA and the algorithm on the CPU just got to
the point where coordination is much easier.

Now all I have to do is figure out how to program this
beast.

There is no such thing as a simple job.

Tim


On 9/26/21, Tim Daly  wrote:
> I'm familiar with most of the traditional approaches
> like Theorema. The bibliography contains most of the
> more interesting sources. [0]
>
> There is a difference between traditional approaches to
> connecting computer algebra and proofs and my approach.
>
> Proving an algorithm, like the GCD, in Axiom is hard.
> There are many GCDs (e.g. NNI vs POLY) and there
> are theorems and proofs passed at runtime in the
> arguments of the newly constructed domains. This
> involves a lot of dependent type theory and issues of
> compile time / runtime argument evaluation. The issues
> that arise are difficult and still being debated in the type
> theory community.
>
> I am putting the definitions, theorems, and proofs (DTP)
> directly into the category/domain hierarchy. Each category
> will have the DTP specific to it. That way a commutative
> domain will inherit a commutative theorem and a
> non-commutative domain will not.
>
> Each domain will have additional DTPs associated with
> the domain (e.g. NNI vs Integer) as well as any DTPs
> it inherits from the category hierarchy. Functions in the
> domain will have associated DTPs.
>
> A function to be proven will then inherit all of the relevant
> DTPs. The proof will be attached to the function and
> both will be sent to the hardware (proof-carrying code).
>
> The proof checker, running on a field programmable
> gate array (FPGA), will be checked at runtime in
> parallel with the algorithm running on the CPU
> (aka "trust down to the metal"). (Note that Intel
> and AMD have built CPU/FPGA combined chips,
> currently only available in the cloud.)
>
>
>
> I am (slowly) making progress on the research.
>
> I have the hardware and nearly have the proof
> checker from LEAN running on my FPGA.
>
> I'm in the process of spreading the DTPs from
> LEAN across the category/domain hierarchy.
>
> The current Axiom build extracts all of the functions
> but does not yet have the DTPs.
>
> I have to restructure the system, including the compiler
> and interpreter to parse and inherit the DTPs. I
> have some of that code but only some of the code
> has been pushed to the repository (volume 15) but
> that is rather trivial, out of date, and incomplete.
>
> I'm clearly not smart enough to prove the Risch
> algorithm and its associated machinery but the needed
> definitions and theorems will be available to someone
> who wants to try.
>
> [0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf
>
>
> On 8/19/21, Tim Daly  wrote:
>> =
>>
>> REVIEW (Axiom on WSL2 Windows)
>>
>>
>> So the steps to run Axiom from a Windows desktop
>>
>> 1 Windows) install XMing on Windows for X11 server
>>
>> http://www.straightrunning.com/XmingNotes/
>>
>> 2 WSL2) Install Axiom in WSL2
>>
>> sudo apt install axiom
>>
>> 3 WSL2) modify /usr/bin/axiom to fix the bug:
>> (someone changed the axiom startup script.
>> It won't work on WSL2. I don't know who or
>> how to get it fixed).
>>
>> sudo emacs /usr/bin/axiom
>>
>> (split the line into 3 and add quote marks)
>>
>> export SPADDEFAULT=/usr/local/axiom/mnt/linux
>> export AXIOM=/usr/lib/axiom-20170501
>> export "PATH=/usr/lib/axiom-20170501/bin:$PATH"
>>
>> 4 WSL2) create a .axiom.input file to include startup cmds:
>>
>> emacs .axiom.input
>>
>> )cd "/mnt/c/yourpath"
>> )sys pwd
>>
>> 5 WSL2) create a "myaxiom" command that sets the
>> DISPLAY variable and starts axiom
>>
>> emacs myaxiom
>>
>> #! /bin/bash
>> export DISPLAY=:0.0
>> axiom
>>
>> 6 WSL2) put it in the /usr/bin directory
>>
>> chmod +x myaxiom
>> sudo cp myaxiom /usr/bin/myaxiom
>>
>> 7 WINDOWS) start the X11 server
>>
>> (XMing XLaunch Icon on your desktop)
>>
>> 8 WINDOWS) run myaxiom from PowerShell
>> (this should start axiom with graphics available)
>>
>> wsl myaxiom
>>
>> 8 WINDOWS) make a PowerShell desktop
>>
>> https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell
>>
>> Tim
>>
>> On 8/13/21, Tim Daly  wrote:
>>> A great deal of thought is directed toward making the SANE version
>>> of Axiom as flexible as 

Re: Axiom musings...

2021-09-26 Thread Tim Daly
I'm familiar with most of the traditional approaches
like Theorema. The bibliography contains most of the
more interesting sources. [0]

There is a difference between traditional approaches to
connecting computer algebra and proofs and my approach.

Proving an algorithm, like the GCD, in Axiom is hard.
There are many GCDs (e.g. NNI vs POLY) and there
are theorems and proofs passed at runtime in the
arguments of the newly constructed domains. This
involves a lot of dependent type theory and issues of
compile time / runtime argument evaluation. The issues
that arise are difficult and still being debated in the type
theory community.

I am putting the definitions, theorems, and proofs (DTP)
directly into the category/domain hierarchy. Each category
will have the DTP specific to it. That way a commutative
domain will inherit a commutative theorem and a
non-commutative domain will not.
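
A small sketch of how a theorem can ride along with the hierarchy,
using Idris notation purely for illustration (Commutative and its
members are hypothetical names; plusCommutative is from Idris's base
library): a domain that claims the interface must supply the proof,
and only such domains inherit it.

interface Commutative ty where
  op : ty -> ty -> ty
  commutes : (a : ty) -> (b : ty) -> op a b = op b a

Commutative Nat where
  op = plus
  commutes = plusCommutative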

Each domain will have additional DTPs associated with
the domain (e.g. NNI vs Integer) as well as any DTPs
it inherits from the category hierarchy. Functions in the
domain will have associated DTPs.

A function to be proven will then inherit all of the relevant
DTPs. The proof will be attached to the function and
both will be sent to the hardware (proof-carrying code).

The proof checker, running on a field programmable
gate array (FPGA), will be checked at runtime in
parallel with the algorithm running on the CPU
(aka "trust down to the metal"). (Note that Intel
and AMD have built CPU/FPGA combined chips,
currently only available in the cloud.)



I am (slowly) making progress on the research.

I have the hardware and nearly have the proof
checker from LEAN running on my FPGA.

I'm in the process of spreading the DTPs from
LEAN across the category/domain hierarchy.

The current Axiom build extracts all of the functions
but does not yet have the DTPs.

I have to restructure the system, including the compiler
and interpreter to parse and inherit the DTPs. I
have some of that code but only some of the code
has been pushed to the repository (volume 15) but
that is rather trivial, out of date, and incomplete.

I'm clearly not smart enough to prove the Risch
algorithm and its associated machinery but the needed
definitions and theorems will be available to someone
who wants to try.

[0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf


On 8/19/21, Tim Daly  wrote:
> =
>
> REVIEW (Axiom on WSL2 Windows)
>
>
> So the steps to run Axiom from a Windows desktop
>
> 1 Windows) install XMing on Windows for X11 server
>
> http://www.straightrunning.com/XmingNotes/
>
> 2 WSL2) Install Axiom in WSL2
>
> sudo apt install axiom
>
> 3 WSL2) modify /usr/bin/axiom to fix the bug:
> (someone changed the axiom startup script.
> It won't work on WSL2. I don't know who or
> how to get it fixed).
>
> sudo emacs /usr/bin/axiom
>
> (split the line into 3 and add quote marks)
>
> export SPADDEFAULT=/usr/local/axiom/mnt/linux
> export AXIOM=/usr/lib/axiom-20170501
> export "PATH=/usr/lib/axiom-20170501/bin:$PATH"
>
> 4 WSL2) create a .axiom.input file to include startup cmds:
>
> emacs .axiom.input
>
> )cd "/mnt/c/yourpath"
> )sys pwd
>
> 5 WSL2) create a "myaxiom" command that sets the
> DISPLAY variable and starts axiom
>
> emacs myaxiom
>
> #! /bin/bash
> export DISPLAY=:0.0
> axiom
>
> 6 WSL2) put it in the /usr/bin directory
>
> chmod +x myaxiom
> sudo cp myaxiom /usr/bin/myaxiom
>
> 7 WINDOWS) start the X11 server
>
> (XMing XLaunch Icon on your desktop)
>
> 8 WINDOWS) run myaxiom from PowerShell
> (this should start axiom with graphics available)
>
> wsl myaxiom
>
> 8 WINDOWS) make a PowerShell desktop
>
> https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell
>
> Tim
>
> On 8/13/21, Tim Daly  wrote:
>> A great deal of thought is directed toward making the SANE version
>> of Axiom as flexible as possible, decoupling mechanism from theory.
>>
>> An interesting publication by Brian Cantwell Smith [0], "Reflection
>> and Semantics in LISP" seems to contain interesting ideas related
>> to our goal. Of particular interest is the ability to reason about and
>> perform self-referential manipulations. In a dependently-typed
>> system it seems interesting to be able to "adapt" code to handle
>> run-time computed arguments to dependent functions. The abstract:
>>
>>"We show how a computational system can be constructed to "reason",
>> effectively
>>and consequentially, about its own inferential processes. The
>> analysis proceeds in two
>>parts. First, we consider the general question of computational
>> semantics, rejecting
>>traditional approaches, and arguing that the declarative and
>> procedural aspects of
>>computational symbols (what they stand for, and what behaviour they
>> engender) should be
>>analysed independently, in order that they may be coherently
>> related. Second, we
>>investigate self-referential behavior in computational processes,

Re: Axiom musings...

2021-08-19 Thread Tim Daly
=

REVIEW (Axiom on WSL2 Windows)


So the steps to run Axiom from a Windows desktop

1 Windows) install XMing on Windows for X11 server

http://www.straightrunning.com/XmingNotes/

2 WSL2) Install Axiom in WSL2

sudo apt install axiom

3 WSL2) modify /usr/bin/axiom to fix the bug:
(someone changed the axiom startup script.
It won't work on WSL2. I don't know who or
how to get it fixed).

sudo emacs /usr/bin/axiom

(split the line into 3 and add quote marks)

export SPADDEFAULT=/usr/local/axiom/mnt/linux
export AXIOM=/usr/lib/axiom-20170501
export "PATH=/usr/lib/axiom-20170501/bin:$PATH"

4 WSL2) create a .axiom.input file to include startup cmds:

emacs .axiom.input

)cd "/mnt/c/yourpath"
)sys pwd

5 WSL2) create a "myaxiom" command that sets the
DISPLAY variable and starts axiom

emacs myaxiom

#! /bin/bash
export DISPLAY=:0.0
axiom

6 WSL2) put it in the /usr/bin directory

chmod +x myaxiom
sudo cp myaxiom /usr/bin/myaxiom

7 WINDOWS) start the X11 server

(XMing XLaunch Icon on your desktop)

8 WINDOWS) run myaxiom from PowerShell
(this should start axiom with graphics available)

wsl myaxiom

9 WINDOWS) make a PowerShell desktop

https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell

Tim

On 8/13/21, Tim Daly  wrote:
> A great deal of thought is directed toward making the SANE version
> of Axiom as flexible as possible, decoupling mechanism from theory.
>
> An interesting publication by Brian Cantwell Smith [0], "Reflection
> and Semantics in LISP" seems to contain interesting ideas related
> to our goal. Of particular interest is the ability to reason about and
> perform self-referential manipulations. In a dependently-typed
> system it seems interesting to be able to "adapt" code to handle
> run-time computed arguments to dependent functions. The abstract:
>
>"We show how a computational system can be constructed to "reason",
> effectively
>and consequentially, about its own inferential processes. The
> analysis proceeds in two
>parts. First, we consider the general question of computational
> semantics, rejecting
>traditional approaches, and arguing that the declarative and
> procedural aspects of
>computational symbols (what they stand for, and what behaviour they
> engender) should be
>analysed independently, in order that they may be coherently
> related. Second, we
>investigate self-referential behavior in computational processes,
> and show how to embed an
>effective procedural model of a computational calculus within that
> calculus (a model not
>unlike a meta-circular interpreter, but connected to the
> fundamental operations of the
>machine in such a way as to provide, at any point in a computation,
> fully articulated
>descriptions of the state of that computation, for inspection and
> possible modification). In
>terms of the theories that result from these investigations, we
> present a general architecture
>for procedurally reflective processes, able to shift smoothly
> between dealing with a given
>subject domain, and dealing with their own reasoning processes over
> that domain.
>
>An instance of the general solution is worked out in the context of
> an applicative
>language. Specifically, we present three successive dialects of
> LISP: 1-LISP, a distillation of
>current practice, for comparison purposes; 2-LISP, a dialect
> constructed in terms of our
>rationalised semantics, in which the concept of evaluation is
> rejected in favour of
>independent notions of simplification and reference, and in which
> the respective categories
>of notation, structure, semantics, and behaviour are strictly
> aligned; and 3-LISP, an
>extension of 2-LISP endowed with reflective powers."
>
> Axiom SANE builds dependent types on the fly. The ability to access
> both the reflection
> of the tower of algebra and the reflection of the tower of proofs at
> the time of construction
> makes the construction of a new domain or specific algorithm easier
> and more general.
>
> This is of particular interest because one of the efforts is to build
> "all the way down to the
> metal". If each layer is constructed on top of previous proven layers
> and the new layer
> can "reach below" to lower layers then the tower of layers can be
> built without duplication.
>
> Tim
>
> [0] Smith, Brian Cantwell "Reflection and Semantics in LISP"
> POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN
> Symposium on Principles of Programming Languages, January
> 1984, Pages 23–35, https://doi.org/10.1145/800017.800513
>
> On 6/29/21, Tim Daly  wrote:
>> Having spent time playing with hardware it is perfectly clear that
>> future computational mathematics efforts need to adapt to using
>> parallel processing.
>>
>> I've spent a fair bit of time thinking about structuring Axiom to
>> be parallel. Most past efforts have tried to focus on making a
>> particular algorithm parallel, 

Re: Axiom musings...

2021-08-13 Thread Tim Daly
A great deal of thought is directed toward making the SANE version
of Axiom as flexible as possible, decoupling mechanism from theory.

An interesting publication by Brian Cantwell Smith [0], "Reflection
and Semantics in LISP" seems to contain interesting ideas related
to our goal. Of particular interest is the ability to reason about and
perform self-referential manipulations. In a dependently-typed
system it seems interesting to be able to "adapt" code to handle
run-time computed arguments to dependent functions. The abstract:

   "We show how a computational system can be constructed to "reason",
effectively
   and consequentially, about its own inferential processes. The
analysis proceeds in two
   parts. First, we consider the general question of computational
semantics, rejecting
   traditional approaches, and arguing that the declarative and
procedural aspects of
   computational symbols (what they stand for, and what behaviour they
engender) should be
   analysed independently, in order that they may be coherently
related. Second, we
   investigate self-referential behavior in computational processes,
and show how to embed an
   effective procedural model of a computational calculus within that
calculus (a model not
   unlike a meta-circular interpreter, but connected to the
fundamental operations of the
   machine in such a way as to provide, at any point in a computation,
fully articulated
   descriptions of the state of that computation, for inspection and
possible modification). In
   terms of the theories that result from these investigations, we
present a general architecture
   for procedurally reflective processes, able to shift smoothly
between dealing with a given
   subject domain, and dealing with their own reasoning processes over
that domain.

   An instance of the general solution is worked out in the context of
an applicative
   language. Specifically, we present three successive dialects of
LISP: 1-LISP, a distillation of
   current practice, for comparison purposes; 2-LISP, a dialect
constructed in terms of our
   rationalised semantics, in which the concept of evaluation is
rejected in favour of
   independent notions of simplification and reference, and in which
the respective categories
   of notation, structure, semantics, and behaviour are strictly
aligned; and 3-LISP, an
   extension of 2-LISP endowed with reflective powers."

Axiom SANE builds dependent types on the fly. The ability to access
both the reflection
of the tower of algebra and the reflection of the tower of proofs at
the time of construction
makes the construction of a new domain or specific algorithm easier
and more general.

This is of particular interest because one of the efforts is to build
"all the way down to the
metal". If each layer is constructed on top of previous proven layers
and the new layer
can "reach below" to lower layers then the tower of layers can be
built without duplication.
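
As a crude sketch of "types on the fly" (illustration only; the
names are invented and real SANE domains carry far more structure),
Common Lisp can already build and inspect a class at run time:

   ;; Construct a new CLOS class at run time; EVAL of a DEFCLASS
   ;; form is a blunt instrument but portable.
   (defun make-domain (name supers)
     (eval `(defclass ,name ,supers ())))

   (make-domain 'my-monoid '())
   (make-domain 'my-group  '(my-monoid))

   ;; Reflection: walk the freshly built inheritance tower.
   (class-of (make-instance 'my-group))  ; => #<STANDARD-CLASS MY-GROUP>
   (find-class 'my-monoid)               ; => #<STANDARD-CLASS MY-MONOID>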

Tim

[0] Smith, Brian Cantwell "Reflection and Semantics in LISP"
POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN
Symposium on Principles of Programming Languages, January
1984, Pages 23–35, https://doi.org/10.1145/800017.800513

On 6/29/21, Tim Daly  wrote:
> Having spent time playing with hardware it is perfectly clear that
> future computational mathematics efforts need to adapt to using
> parallel processing.
>
> I've spent a fair bit of time thinking about structuring Axiom to
> be parallel. Most past efforts have tried to focus on making a
> particular algorithm parallel, such as a matrix multiply.
>
> But I think that it might be more effective to make each domain
> run in parallel. A computation crosses multiple domains so a
> particular computation could involve multiple parallel copies.
>
> For example, computing the Cylindrical Algebraic Decomposition
> could recursively decompose the plane. Indeed, any tree-recursive
> algorithm could be run in parallel "in the large" by creating new
> running copies of the domain for each sub-problem.
>
> So the question becomes, how does one manage this?
>
> A similar problem occurs in robotics where one could have multiple
> wheels, arms, propellers, etc. that need to act independently but
> in coordination.
>
> The robot solution uses ROS2. The three ideas are ROSCORE,
> TOPICS with publish/subscribe, and SERVICES with request/response.
> These are communication paths defined between processes.
>
> ROS2 has a "roscore" which is basically a phonebook of "topics".
> Any process can create or look up the current active topics. e.g.:
>
>rosnode list
>
> TOPICS:
>
> Any process can PUBLISH a topic (which is basically a typed data
> structure), e.g. the topic /hw with the String data "Hello World". e.g.:
>
>rostopic pub /hw std_msgs/String "Hello, World"
>
> Any process can SUBSCRIBE to a topic, such as /hw, and get a
> copy of the data.  eg:
>
>rostopic echo /hw   ==> "Hello, World"
>
> Publishers talk, subscribers listen.
>
>
> SERVICES:
>
> Any process can make a 

Re: Axiom musings...

2021-06-29 Thread Tim Daly
Having spent time playing with hardware it is perfectly clear that
future computational mathematics efforts need to adapt to using
parallel processing.

I've spent a fair bit of time thinking about structuring Axiom to
be parallel. Most past efforts have tried to focus on making a
particular algorithm parallel, such as a matrix multiply.

But I think that it might be more effective to make each domain
run in parallel. A computation crosses multiple domains so a
particular computation could involve multiple parallel copies.

For example, computing the Cylindrical Algebraic Decomposition
could recursively decompose the plane. Indeed, any tree-recursive
algorithm could be run in parallel "in the large" by creating new
running copies of the domain for each sub-problem.

So the question becomes, how does one manage this?

A similar problem occurs in robotics where one could have multiple
wheels, arms, propellers, etc. that need to act independently but
in coordination.

The robot solution uses ROS2. The three ideas are ROSCORE,
TOPICS with publish/subscribe, and SERVICES with request/response.
These are communication paths defined between processes.

ROS2 has a "roscore" which is basically a phonebook of "topics".
Any process can create or look up the current active topics. e.g.:

   rosnode list

TOPICS:

Any process can PUBLISH a topic (which is basically a typed data
structure), e.g. the topic /hw with the String data "Hello World". e.g.:

   rostopic pub /hw std_msgs/String "Hello, World"

Any process can SUBSCRIBE to a topic, such as /hw, and get a
copy of the data.  eg:

   rostopic echo /hw   ==> "Hello, World"

Publishers talk, subscribers listen.


SERVICES:

Any process can make a REQUEST of a SERVICE and get a RESPONSE.
This is basically a remote function call.



Axiom in parallel?

So domains could run, each in its own process. It could provide
services, one for each function. Any other process could request
a computation and get the result as a response. Domains could
request services from other domains, either waiting for responses
or continuing while the response is being computed.

The output could be sent anywhere, to a terminal, to a browser,
to a network, or to another process using the publish/subscribe
protocol, potentially all at the same time since there can be many
subscribers to a topic.

Available domains could be dynamically added by announcing
themselves as new "topics" and could be dynamically looked-up
at runtime.

This structure allows function-level / domain-level parallelism.
It is very effective in the robot world and I think it might be a
good structuring mechanism to allow computational mathematics
to take advantage of multiple processors in a disciplined fashion.

Axiom has a thousand domains and each could run on its own core.

In addition. notice that each domain is independent of the others.
So if we want to use BLAS Fortran code, it could just be another
service node. In fact, any "foreign function" could transparently
cooperate in a distributed Axiom.
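
As a toy sketch (Common Lisp, every name invented for illustration,
nothing like a real ROS transport), the topic registry amounts to:

   ;; Topics are keys in a hash table; each topic holds a list of
   ;; subscriber functions. Publishers talk, subscribers listen.
   (defvar *topics* (make-hash-table :test #'equal))

   (defun subscribe (topic fn)
     "Call FN with every datum later published on TOPIC."
     (push fn (gethash topic *topics*)))

   (defun publish (topic datum)
     "Send DATUM to every subscriber of TOPIC."
     (dolist (fn (gethash topic *topics*))
       (funcall fn datum)))

   (subscribe "/hw" (lambda (s) (format t "got: ~a~%" s)))
   (publish "/hw" "Hello, World")   ; prints: got: Hello, World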

Another key feature is that proofs can be "by node".

Tim




On 6/5/21, Tim Daly  wrote:
> Axiom is based on first-class dependent types. Deciding when
> two types are equivalent may involve computation. See
> Christiansen, David Thrane "Checking Dependent Types with
> Normalization by Evaluation" (2019)
>
> This puts an interesting constraint on building types. Each
> constructed type has to export a function to decide if a
> given type is "equivalent" to it.
>
> The notion of "equivalence" might involve category ideas
> of natural transformation and univalence. Sigh.
>
> That's an interesting design point.
>
> Tim
>
>
> On 5/5/21, Tim Daly  wrote:
>> It is interesting that programmers' eyes and expectations adapt
>> to the tools they use. For instance, I use emacs and expect to
>> work directly in files and multiple buffers. When I try to use one
>> of the many IDE tools I find they tend to "get in the way". I already
>> know or can quickly find whatever they try to tell me. If you use an
>> IDE you probably find emacs "too sparse" for programming.
>>
>> Recently I've been working in a sparse programming environment.
>> I'm exploring the question of running a proof checker in an FPGA.
>> The FPGA development tools are painful at best and not intuitive
>> since you SEEM to be programming but you're actually describing
>> hardware gates, connections, and timing. This is an environment
>> where everything happens all-at-once and all-the-time (like the
>> circuits in your computer). It is the "assembly language of circuits".
>> Naturally, my eyes have adapted to this rather raw level.
>>
>> That said, I'm normally doing literate programming all the time.
>> My typical file is a document which is a mixture of latex and lisp.
>> It is something of a shock to return to that world. It is clear why
>> people who program in Python find lisp to be a "sea of parens".
>> Yet as a lisp programmer, I don't even see the parens, just 

Re: Axiom musings...

2021-06-05 Thread Tim Daly
Axiom is based on first-class dependent types. Deciding when
two types are equivalent may involve computation. See
Christiansen, David Thrane "Checking Dependent Types with
Normalization by Evaluation" (2019)

This puts an interesting constraint on building types. Each
constructed type has to export a function to decide if a
given type is "equivalent" to it.

The notion of "equivalence" might involve category ideas
of natural transformation and univalence. Sigh.
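
A minimal sketch of that exported function (names invented; a real
domain would compare much richer structure, possibly by normalization):

   ;; Every constructed type must specialize TYPE-EQUIV.
   (defgeneric type-equiv (t1 t2)
     (:documentation "Decide, possibly by computation, whether T1 and T2 denote the same type."))

   (defclass poly-domain ()
     ((coeff-ring :initarg :coeff-ring :reader coeff-ring)))

   (defmethod type-equiv ((a poly-domain) (b poly-domain))
     ;; may itself require evaluation, e.g. normalizing the
     ;; coefficient-ring descriptions before comparing them
     (equal (coeff-ring a) (coeff-ring b)))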

That's an interesting design point.

Tim


On 5/5/21, Tim Daly  wrote:
> It is interesting that programmers' eyes and expectations adapt
> to the tools they use. For instance, I use emacs and expect to
> work directly in files and multiple buffers. When I try to use one
> of the many IDE tools I find they tend to "get in the way". I already
> know or can quickly find whatever they try to tell me. If you use an
> IDE you probably find emacs "too sparse" for programming.
>
> Recently I've been working in a sparse programming environment.
> I'm exploring the question of running a proof checker in an FPGA.
> The FPGA development tools are painful at best and not intuitive
> since you SEEM to be programming but you're actually describing
> hardware gates, connections, and timing. This is an environment
> where everything happens all-at-once and all-the-time (like the
> circuits in your computer). It is the "assembly language of circuits".
> Naturally, my eyes have adapted to this rather raw level.
>
> That said, I'm normally doing literate programming all the time.
> My typical file is a document which is a mixture of latex and lisp.
> It is something of a shock to return to that world. It is clear why
> people who program in Python find lisp to be a "sea of parens".
> Yet as a lisp programmer, I don't even see the parens, just code.
>
> It takes a few minutes in a literate document to adapt vision to
> see the latex / lisp combination as natural. The latex markup,
> like the lisp parens, eventually just disappears. What remains
> is just lisp and natural language text.
>
> This seems painful at first but eyes quickly adapt. The upside
> is that there is always a "finished" document that describes the
> state of the code. The overhead of writing a paragraph to
> describe a new function or change a paragraph to describe the
> changed function is very small.
>
> Using a Makefile I latex the document to generate a current PDF
> and then I extract, load, and execute the code. This loop catches
> errors in both the latex and the source code. Keeping an open file in
> my pdf viewer shows all of the changes in the document after every
> run of make. That way I can edit the book as easily as the code.
>
> Ultimately I find that writing the book while writing the code is
> more productive. I don't have to remember why I wrote something
> since the explanation is already there.
>
> We all have our own way of programming and our own tools.
> But I find literate programming to be a real advance over IDE
> style programming and "raw code" programming.
>
> Tim
>
>
> On 2/27/21, Tim Daly  wrote:
>> The systems I use have the interesting property of
>> "Living within the compiler".
>>
>> Lisp, Forth, Emacs, and other systems that present themselves
>> through the Read-Eval-Print-Loop (REPL) allow the
>> ability to deeply interact with the system, shaping it to your need.
>>
>> My current thread of study is software architecture. See
>> https://www.youtube.com/watch?v=W2hagw1VhhI
>> and https://www.georgefairbanks.com/videos/
>>
>> My current thinking on SANE involves the ability to
>> dynamically define categories, representations, and functions
>> along with "composition functions" that permits choosing a
>> combination at the time of use.
>>
>> You might want a domain for handling polynomials. There are
>> a lot of choices, depending on your use case. You might want
>> different representations. For example, you might want dense,
>> sparse, recursive, or "machine compatible fixnums" (e.g. to
>> interface with C code). If these don't exist it ought to be possible
>> to create them. Such "lego-like" building blocks require careful
>> thought about creating "fully factored" objects.
>>
>> Given that goal, the traditional barrier of "compiler" vs "interpreter"
>> does not seem useful. It is better to "live within the compiler" which
>> gives the ability to define new things "on the fly".
>>
>> Of course, the SANE compiler is going to want an associated
>> proof of the functions you create along with the other parts
>> such as its category hierarchy and representation properties.
>>
>> There is no such thing as a simple job. :-)
>>
>> Tim
>>
>>
>> On 2/18/21, Tim Daly  wrote:
>>> The Axiom SANE compiler / interpreter has a few design points.
>>>
>>> 1) It needs to mix interpreted and compiled code in the same function.
>>> SANE allows dynamic construction of code as well as dynamic type
>>> construction at runtime. Both of these can occur in a runtime 

Re: Axiom musings...

2021-05-04 Thread Tim Daly
It is interesting that programmers' eyes and expectations adapt
to the tools they use. For instance, I use emacs and expect to
work directly in files and multiple buffers. When I try to use one
of the many IDE tools I find they tend to "get in the way". I already
know or can quickly find whatever they try to tell me. If you use an
IDE you probably find emacs "too sparse" for programming.

Recently I've been working in a sparse programming environment.
I'm exploring the question of running a proof checker in an FPGA.
The FPGA development tools are painful at best and not intuitive
since you SEEM to be programming but you're actually describing
hardware gates, connections, and timing. This is an environment
where everything happens all-at-once and all-the-time (like the
circuits in your computer). It is the "assembly language of circuits".
Naturally, my eyes have adapted to this rather raw level.

That said, I'm normally doing literate programming all the time.
My typical file is a document which is a mixture of latex and lisp.
It is something of a shock to return to that world. It is clear why
people who program in Python find lisp to be a "sea of parens".
Yet as a lisp programmer, I don't even see the parens, just code.

It takes a few minutes in a literate document to adapt vision to
see the latex / lisp combination as natural. The latex markup,
like the lisp parens, eventually just disappears. What remains
is just lisp and natural language text.

This seems painful at first but eyes quickly adapt. The upside
is that there is always a "finished" document that describes the
state of the code. The overhead of writing a paragraph to
describe a new function or change a paragraph to describe the
changed function is very small.

Using a Makefile I latex the document to generate a current PDF
and then I extract, load, and execute the code. This loop catches
errors in both the latex and the source code. Keeping an open file in
my pdf viewer shows all of the changes in the document after every
run of make. That way I can edit the book as easily as the code.
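
The "extract" step is just a tangle. A minimal sketch (assuming
Axiom's \begin{chunk}{name} ... \end{chunk} pamphlet convention):

   ;; Print every chunk body from a literate file, ignoring the
   ;; surrounding latex.
   (defun tangle (pathname)
     (with-open-file (in pathname)
       (loop with in-chunk = nil
             for line = (read-line in nil)
             while line
             do (cond ((search "\\begin{chunk}" line) (setf in-chunk t))
                      ((search "\\end{chunk}" line)   (setf in-chunk nil))
                      (in-chunk (write-line line))))))

In a setup like this the Makefile would pipe the tangled output into
the lisp image only after the latex step succeeds.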

Ultimately I find that writing the book while writing the code is
more productive. I don't have to remember why I wrote something
since the explanation is already there.

We all have our own way of programming and our own tools.
But I find literate programming to be a real advance over IDE
style programming and "raw code" programming.

Tim


On 2/27/21, Tim Daly  wrote:
> The systems I use have the interesting property of
> "Living within the compiler".
>
> Lisp, Forth, Emacs, and other systems that present themselves
> through the Read-Eval-Print-Loop (REPL) allow the
> ability to deeply interact with the system, shaping it to your need.
>
> My current thread of study is software architecture. See
> https://www.youtube.com/watch?v=W2hagw1VhhI
> and https://www.georgefairbanks.com/videos/
>
> My current thinking on SANE involves the ability to
> dynamically define categories, representations, and functions
> along with "composition functions" that permits choosing a
> combination at the time of use.
>
> You might want a domain for handling polynomials. There are
> a lot of choices, depending on your use case. You might want
> different representations. For example, you might want dense,
> sparse, recursive, or "machine compatible fixnums" (e.g. to
> interface with C code). If these don't exist it ought to be possible
> to create them. Such "lego-like" building blocks require careful
> thought about creating "fully factored" objects.
>
> Given that goal, the traditional barrier of "compiler" vs "interpreter"
> does not seem useful. It is better to "live within the compiler" which
> gives the ability to define new things "on the fly".
>
> Of course, the SANE compiler is going to want an associated
> proof of the functions you create along with the other parts
> such as its category hierarchy and representation properties.
>
> There is no such thing as a simple job. :-)
>
> Tim
>
>
> On 2/18/21, Tim Daly  wrote:
>> The Axiom SANE compiler / interpreter has a few design points.
>>
>> 1) It needs to mix interpreted and compiled code in the same function.
>> SANE allows dynamic construction of code as well as dynamic type
>> construction at runtime. Both of these can occur in a runtime object.
>> So there is potentially a mixture of interpreted and compiled code.
>>
>> 2) It needs to perform type resolution at compile time without overhead
>> where possible. Since this is not always possible there needs to be
>> a "prefix thunk" that will perform the resolution. Trivially, for
>> example,
>> if we have a + function we need to type-resolve the arguments.
>>
>> However, if we can prove at compile time that the types are both
>> bounded-NNI and the result is bounded-NNI (i.e. fixnum in lisp)
>> then we can inline a call to + at runtime. If not, we might have
>> + applied to NNI and POLY(FLOAT), which requires a thunk to
>> resolve types. The thunk could even 

Re: Axiom musings...

2021-02-27 Thread Tim Daly
The systems I use have the interesting property of
"Living within the compiler".

Lisp, Forth, Emacs, and other systems that present themselves
through the Read-Eval-Print-Loop (REPL) allow the
ability to deeply interact with the system, shaping it to your need.

My current thread of study is software architecture. See
https://www.youtube.com/watch?v=W2hagw1VhhI
and https://www.georgefairbanks.com/videos/

My current thinking on SANE involves the ability to
dynamically define categories, representations, and functions
along with "composition functions" that permits choosing a
combination at the time of use.

You might want a domain for handling polynomials. There are
a lot of choices, depending on your use case. You might want
different representations. For example, you might want dense,
sparse, recursive, or "machine compatible fixnums" (e.g. to
interface with C code). If these don't exist it ought to be possible
to create them. Such "lego-like" building blocks require careful
thought about creating "fully factored" objects.
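
A sketch of such a composition function (all names invented):

   ;; Compose a polynomial domain from lego-like parts, choosing
   ;; the representation at the time of use.
   (defun make-poly-domain (&key (rep :sparse) (coeffs :integer))
     (list :ops    'polynomial-category
           :rep    (ecase rep
                     (:dense  'dense-poly-rep)
                     (:sparse 'sparse-poly-rep)
                     (:fixnum 'machine-fixnum-rep))
           :coeffs coeffs))

   (make-poly-domain :rep :dense :coeffs :float)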

Given that goal, the traditional barrier of "compiler" vs "interpreter"
does not seem useful. It is better to "live within the compiler" which
gives the ability to define new things "on the fly".

Of course, the SANE compiler is going to want an associated
proof of the functions you create along with the other parts
such as its category hierarchy and representation properties.

There is no such thing as a simple job. :-)

Tim


On 2/18/21, Tim Daly  wrote:
> The Axiom SANE compiler / interpreter has a few design points.
>
> 1) It needs to mix interpreted and compiled code in the same function.
> SANE allows dynamic construction of code as well as dynamic type
> construction at runtime. Both of these can occur in a runtime object.
> So there is potentially a mixture of interpreted and compiled code.
>
> 2) It needs to perform type resolution at compile time without overhead
> where possible. Since this is not always possible there needs to be
> a "prefix thunk" that will perform the resolution. Trivially, for example,
> if we have a + function we need to type-resolve the arguments.
>
> However, if we can prove at compile time that the types are both
> bounded-NNI and the result is bounded-NNI (i.e. fixnum in lisp)
> then we can inline a call to + at runtime. If not, we might have
> + applied to NNI and POLY(FLOAT), which requires a thunk to
> resolve types. The thunk could even "specialize and compile"
> the code before executing it.
>
> It turns out that the Forth implementation of "threaded-interpreted"
> languages model provides an efficient and effective way to do this.[0]
> Type resolution can be "inserted" in intermediate thunks.
> The model also supports dynamic overloading and tail recursion.
>
> Combining high-level CLOS code with low-level threading gives an
> easy to understand and robust design.
>
> Tim
>
> [0] Loeliger, R.G. "Threaded Interpretive Languages" (1981)
> ISBN 0-07-038360-X
>
>
>
>
> On 2/5/21, Tim Daly  wrote:
>> I've worked hard to make Axiom depend on almost no other
>> tools so that it would not get caught by "code rot" of libraries.
>>
>> However, I'm also trying to make the new SANE version much
>> easier to understand and debug. To that end I've been experimenting
>> with some ideas.
>>
>> It should be possible to view source code, of course. But the source
>> code is not the only, nor possibly the best, representation of the ideas.
>> In particular, source code gets compiled into data structures. In Axiom
>> these data structures really are a graph of related structures.
>>
>> For example, looking at the gcd function from NNI, there is the
>> representation of the gcd function itself. But there is also a structure
>> that is the REP (and, in the new system, is separate from the domain).
>>
>> Further, there are associated specification and proof structures. Even
>> further, the domain inherits the category structures, and from those it
>> inherits logical axioms and definitions through the proof structure.
>>
>> Clearly the gcd function is a node in a much larger graph structure.
>>
>> When trying to decide why code won't compile it would be useful to
>> be able to see and walk these structures. I've thought about using the
>> browser but browsers are too weak. Either everything has to be "in a
>> single tab to show the graph" or "the nodes of the graph are in different
>> tabs". Plus, constructing dynamic graphs that change as the software
>> changes (e.g. by loading a new spad file or creating a new function)
>> represents the huge problem of keeping the browser "in sync with the
>> Axiom workspace". So something more dynamic and embedded is needed.
>>
>> Axiom source gets compiled into CLOS data structures. Each of these
>> new SANE structures has an associated surface representation, so they
>> can be presented in user-friendly form.
>>
>> Also, since Axiom is literate software, it should be possible to look at
>> the code in its literate 

Re: Axiom musings...

2021-02-18 Thread Tim Daly
The Axiom SANE compiler / interpreter has a few design points.

1) It needs to mix interpreted and compiled code in the same function.
SANE allows dynamic construction of code as well as dynamic type
construction at runtime. Both of these can occur in a runtime object.
So there is potentially a mixture of interpreted and compiled code.

2) It needs to perform type resolution at compile time without overhead
where possible. Since this is not always possible there needs to be
a "prefix thunk" that will perform the resolution. Trivially, for example,
if we have a + function we need to type-resolve the arguments.

However, if we can prove at compile time that the types are both
bounded-NNI and the result is bounded-NNI (i.e. fixnum in lisp)
then we can inline a call to + at runtime. If not, we might have
+ applied to NNI and POLY(FLOAT), which requires a thunk to
resolve types. The thunk could even "specialize and compile"
the code before executing it.
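
A minimal sketch of such a prefix thunk (names invented; real SANE
dispatch would carry type evidence rather than raw TYPEP checks):

   (defgeneric resolve-plus (a b))   ; slow, fully general dispatch

   (defun plus-thunk (a b)
     ;; fast path when both arguments fit in a fixnum,
     ;; otherwise fall back to full type resolution
     (if (and (typep a 'fixnum) (typep b 'fixnum))
         (the fixnum (+ a b))
         (resolve-plus a b)))        ; e.g. NNI + POLY(FLOAT)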

It turns out that the Forth implementation of "threaded-interpreted"
languages model provides an efficient and effective way to do this.[0]
Type resolution can be "inserted" in intermediate thunks.
The model also supports dynamic overloading and tail recursion.

Combining high-level CLOS code with low-level threading gives an
easy to understand and robust design.

Tim

[0] Loeliger, R.G. "Threaded Interpretive Languages" (1981)
ISBN 0-07-038360-X




On 2/5/21, Tim Daly  wrote:
> I've worked hard to make Axiom depend on almost no other
> tools so that it would not get caught by "code rot" of libraries.
>
> However, I'm also trying to make the new SANE version much
> easier to understand and debug. To that end I've been experimenting
> with some ideas.
>
> It should be possible to view source code, of course. But the source
> code is not the only, nor possibly the best, representation of the ideas.
> In particular, source code gets compiled into data structures. In Axiom
> these data structures really are a graph of related structures.
>
> For example, looking at the gcd function from NNI, there is the
> representation of the gcd function itself. But there is also a structure
> that is the REP (and, in the new system, is separate from the domain).
>
> Further, there are associated specification and proof structures. Even
> further, the domain inherits the category structures, and from those it
> inherits logical axioms and definitions through the proof structure.
>
> Clearly the gcd function is a node in a much larger graph structure.
>
> When trying to decide why code won't compile it would be useful to
> be able to see and walk these structures. I've thought about using the
> browser but browsers are too weak. Either everything has to be "in a
> single tab to show the graph" or "the nodes of the graph are in different
> tabs". Plus, constructing dynamic graphs that change as the software
> changes (e.g. by loading a new spad file or creating a new function)
> represents the huge problem of keeping the browser "in sync with the
> Axiom workspace". So something more dynamic and embedded is needed.
>
> Axiom source gets compiled into CLOS data structures. Each of these
> new SANE structures has an associated surface representation, so they
> can be presented in user-friendly form.
>
> Also, since Axiom is literate software, it should be possible to look at
> the code in its literate form with the surrounding explanation.
>
> Essentially we'd like to have the ability to "deep dive" into the Axiom
> workspace, not only for debugging, but also for understanding what
> functions are used, where they come from, what they inherit, and
> how they are used in a computation.
>
> To that end I'm looking at using McCLIM, a lisp windowing system.
> Since the McCLIM windows would be part of the lisp image, they have
> access to display (and modify) the Axiom workspace at all times.
>
> The only hesitation is that McCLIM uses Quicklisp and drags in a lot
> of other subsystems. It's all lisp, of course.
>
> These ideas aren't new. They were available on Symbolics machines,
> a truly productive platform and one I sorely miss.
>
> Tim
>
>
>
> On 1/19/21, Tim Daly  wrote:
>> Also of interest is the talk
>> "The Unreasonable Effectiveness of Dynamic Typing for Practical Programs"
>> https://vimeo.com/74354480
>> which questions whether static typing really has any benefit.
>>
>> Tim
>>
>>
>> On 1/19/21, Tim Daly  wrote:
>>> Peter Naur wrote an article of interest:
>>> http://pages.cs.wisc.edu/~remzi/Naur.pdf
>>>
>>> In particular, it mirrors my notion that Axiom needs
>>> to embrace literate programming so that the "theory
>>> of the problem" is presented as well as the "theory
>>> of the solution". I quote the introduction:
>>>
>>>
>>>
>>> This article is, to my mind, the most accurate account
>>> of what goes on in designing and coding a program.
>>> I refer to it regularly when discussing how much
>>> documentation to create, how to pass along tacit
>>> knowledge, and the value 

Re: Axiom musings...

2021-02-05 Thread Tim Daly
I've worked hard to make Axiom depend on almost no other
tools so that it would not get caught by "code rot" of libraries.

However, I'm also trying to make the new SANE version much
easier to understand and debug. To that end I've been experimenting
with some ideas.

It should be possible to view source code, of course. But the source
code is not the only, nor possibly the best, representation of the ideas.
In particular, source code gets compiled into data structures. In Axiom
these data structures really are a graph of related structures.

For example, looking at the gcd function from NNI, there is the
representation of the gcd function itself. But there is also a structure
that is the REP (and, in the new system, is separate from the domain).

Further, there are associated specification and proof structures. Even
further, the domain inherits the category structures, and from those it
inherits logical axioms and definitions through the proof structure.

Clearly the gcd function is a node in a much larger graph structure.
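
A toy sketch of that graph (structure and relation names invented):

   ;; Each workspace object is a node; edges name the relation.
   (defstruct node name kind edges)  ; edges: list of (relation . node)

   (defvar *nni-rep*  (make-node :name "NNI-REP"  :kind :rep))
   (defvar *gcd-spec* (make-node :name "GCD-SPEC" :kind :spec))
   (defvar *gcd*
     (make-node :name "gcd" :kind :function
                :edges (list (cons :rep  *nni-rep*)
                             (cons :spec *gcd-spec*))))

   (defun walk (node &optional (depth 0))
     "Print NODE and everything reachable from it."
     (format t "~v@T~a (~a)~%" (* 2 depth) (node-name node) (node-kind node))
     (dolist (e (node-edges node)) (walk (cdr e) (1+ depth))))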

When trying to decide why code won't compile it would be useful to
be able to see and walk these structures. I've thought about using the
browser but browsers are too weak. Either everything has to be "in a
single tab to show the graph" or "the nodes of the graph are in different
tabs". Plus, constructing dynamic graphs that change as the software
changes (e.g. by loading a new spad file or creating a new function)
represents the huge problem of keeping the browser "in sync with the
Axiom workspace". So something more dynamic and embedded is needed.

Axiom source gets compiled into CLOS data structures. Each of these
new SANE structures has an associated surface representation, so they
can be presented in user-friendly form.

Also, since Axiom is literate software, it should be possible to look at
the code in its literate form with the surrounding explanation.

Essentially we'd like to have the ability to "deep dive" into the Axiom
workspace, not only for debugging, but also for understanding what
functions are used, where they come from, what they inherit, and
how they are used in a computation.

To that end I'm looking at using McCLIM, a lisp windowing system.
Since the McCLIM windows would be part of the lisp image, they have
access to display (and modify) the Axiom workspace at all times.

The only hesitation is that McCLIM uses Quicklisp and drags in a lot
of other subsystems. It's all lisp, of course.

These ideas aren't new. They were available on Symbolics machines,
a truly productive platform and one I sorely miss.

Tim



On 1/19/21, Tim Daly  wrote:
> Also of interest is the talk
> "The Unreasonable Effectiveness of Dynamic Typing for Practical Programs"
> https://vimeo.com/74354480
> which questions whether static typing really has any benefit.
>
> Tim
>
>
> On 1/19/21, Tim Daly  wrote:
>> Peter Naur wrote an article of interest:
>> http://pages.cs.wisc.edu/~remzi/Naur.pdf
>>
>> In particular, it mirrors my notion that Axiom needs
>> to embrace literate programming so that the "theory
>> of the problem" is presented as well as the "theory
>> of the solution". I quote the introduction:
>>
>>
>>
>> This article is, to my mind, the most accurate account
>> of what goes on in designing and coding a program.
>> I refer to it regularly when discussing how much
>> documentation to create, how to pass along tacit
>> knowledge, and the value of the XP's metaphor-setting
>> exercise. It also provides a way to examine a methodology's
>> economic structure.
>>
>> In the article, which follows, note that the quality of the
>> designing programmer's work is related to the quality of
>> the match between his theory of the problem and his theory
>> of the solution. Note that the quality of a later programmer's
>> work is related to the match between his theories and the
>> previous programmer's theories.
>>
>> Using Naur's ideas, the designer's job is not to pass along
>> "the design" but to pass along "the theories" driving the design.
>> The latter goal is more useful and more appropriate. It also
>> highlights that knowledge of the theory is tacit in the owning, and
>> so passing along the theory requires passing along both explicit
>> and tacit knowledge.
>>
>> Tim
>>
>



Re: Axiom musings...

2021-01-19 Thread Tim Daly
Also of interest is the talk
"The Unreasonable Effectiveness of Dynamic Typing for Practical Programs"
https://vimeo.com/74354480
which questions whether static typing really has any benefit.

Tim


On 1/19/21, Tim Daly  wrote:
> Peter Naur wrote an article of interest:
> http://pages.cs.wisc.edu/~remzi/Naur.pdf
>
> In particular, it mirrors my notion that Axiom needs
> to embrace literate programming so that the "theory
> of the problem" is presented as well as the "theory
> of the solution". I quote the introduction:
>
>
>
> This article is, to my mind, the most accurate account
> of what goes on in designing and coding a program.
> I refer to it regularly when discussing how much
> documentation to create, how to pass along tacit
> knowledge, and the value of the XP's metaphor-setting
> exercise. It also provides a way to examine a methodology's
> economic structure.
>
> In the article, which follows, note that the quality of the
> designing programmer's work is related to the quality of
> the match between his theory of the problem and his theory
> of the solution. Note that the quality of a later programmer's
> work is related to the match between his theories and the
> previous programmer's theories.
>
> Using Naur's ideas, the designer's job is not to pass along
> "the design" but to pass along "the theories" driving the design.
> The latter goal is more useful and more appropriate. It also
> highlights that knowledge of the theory is tacit in the owning, and
> so passing along the theory requires passing along both explicit
> and tacit knowledge.
>
> Tim
>



Re: Axiom musings...

2021-01-19 Thread Tim Daly
Peter Naur wrote an article of interest:
http://pages.cs.wisc.edu/~remzi/Naur.pdf

In particular, it mirrors my notion that Axiom needs
to embrace literate programming so that the "theory
of the problem" is presented as well as the "theory
of the solution". I quote the introduction:



This article is, to my mind, the most accurate account
of what goes on in designing and coding a program.
I refer to it regularly when discussing how much
documentation to create, how to pass along tacit
knowledge, and the value of the XP's metaphor-setting
exercise. It also provides a way to examine a methodology's
economic structure.

In the article, which follows, note that the quality of the
designing programmer's work is related to the quality of
the match between his theory of the problem and his theory
of the solution. Note that the quality of a later programmer's
work is related to the match between his theories and the
previous programmer's theories.

Using Naur's ideas, the designer's job is not to pass along
"the design" but to pass along "the theories" driving the design.
The latter goal is more useful and more appropriate. It also
highlights that knowledge of the theory is tacit in the owning, and
so passing along the theory requires passing along both explicit
and tacit knowledge.

Tim



Re: Axiom musings...

2021-01-01 Thread Tim Daly
Another thought is that Axiom is "not fully factored". There is
a logic and category theory concept called "the carrier". In
Axiom terms, this is the Rep aka the representation. Axiom
bundles the representation with the domain, which is reasonable.

However, representations are likely data structures with their
own specification, proof, and associated code. So it seems
reasonable to re-factor Axiom to separate the Rep from the
Domain and make it a parameter.

This has the effect of separating data structure proofs from
domain proofs. It also allows the axioms and theorems from
the Rep data structure to be used in domain function proofs.

The category is, as implemented, a "dictionary" of operations.
The SANE version has much more information, including
logical axioms and definitions.

So a domain is actually composed of (at least) 3 parts:

OPS: operations (from the category)
REP: representations (from data structures, aka Rep)
IMP: implementations (in the domain itself)

A domain is OPS x REP x IMP.
A package is OPS x IMP.

Note that, as mentioned above, REP can itself be a domain.

The CLOS implementation needs to factor these into
class / instance objects that exist both at compile time
and at run time (since types, including new categories,
can be constructed at run time).
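
A skeletal CLOS factoring (a sketch only; the slot contents are
invented for illustration):

   ;; A domain as OPS x REP x IMP: three separate objects glued
   ;; together, so the Rep can be swapped or proved independently.
   (defclass ops ()  ; from the category: signatures plus axioms
     ((signatures :initarg :signatures)
      (axioms     :initarg :axioms)))

   (defclass rep ()  ; the carrier: itself possibly a domain
     ((carrier :initarg :carrier)))

   (defclass domain ()
     ((ops :initarg :ops) (rep :initarg :rep) (imp :initarg :imp)))

   (defclass pkg ()  ; a package is OPS x IMP, with no carrier
     ((ops :initarg :ops) (imp :initarg :imp)))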

Tim


On 1/1/21, Tim Daly  wrote:
> One of the struggles of working with first-class dependent types
> is that they can be created "on the fly". Ideally, when used, the
> type information can be "factored out" to yield a specialized
> version that is more efficient. The idea is best described in a
> book, called Partial Evaluation, which used the type information
> to collapse and specialize a function.
>
> Jones, Gomard, and Sestoft "Partial Evaluation and Automatic
> Program Generation" (1993) Prentice Hall
>
>
> On 12/31/20, Tim Daly  wrote:
>> It's been a long, slow effort to "get up to speed" (or more
>> accurately "a slow crawl") on the various subjects one needs
>> to know to work on "computational mathematics" aka SANE.
>>
>> A reading knowledge of a lot of areas is needed, including
>> things like
>>
>> abstract algebra (to understand Axiom's category/domain)
>> type theory (to understand first-order dependent types)
>> proof theory (to understand Gentzen, sequents, etc.)
>> category theory (to understand catamorphs, functors, etc.)
>> metaobject protocol (to understand CLOS extensions)
>> number theory (to understand finite fields)
>> calculus (to understand field extensions for integration)
>> programming (to understand typeclasses, CLOS, etc)
>> philosophy (to understand polymorphic sets, etc)
>> CPU hardware (to understand "down to the metal", FPGA, etc.)
>> Specification (to understand consistency, completeness, etc)
>> Verification (to understand decision procedures, etc)
>> Proof tools (like Coq, Lean, etc.)
>>
>> and a few other subjects that won't come to mind at the
>> moment.
>>
>> The usual academic approach, that creates a special
>> "course of study", requiring classes in areas is probably
>> best. There are a lot of good online courses, such as
>> the UOregon summer school, that introduce the basic
>> ideas clearly. There are some good introductory books
>> like Troelstra and Schwichtenberg's "Basic Proof Theory"
>> that cover Gentzen, Hilbert, and Natural Deduction ideas.
>>
>> It seems worthwhile to put together a full course of study
>> with links to videos and books, so one can get up to speed
>> on required knowledge. I'll start an outline with links to
>> these on the axiom-developer.org website in the coming
>> weeks. It's sort-of an "Axiom University" major in
>> computational mathematics.
>>
>> (There are no degrees :-) But one does mathematics and
>> programming for the joy of it anyway so who cares?)
>>
>> I welcome suggestions and online links.
>>
>> Tim
>>
>



Re: Axiom musings...

2021-01-01 Thread Tim Daly
One of the struggles of working with first-class dependent types
is that they can be created "on the fly". Ideally, when used, the
type information can be "factored out" to yield a specialized
version that is more efficient. The idea is best described in a
book, called Partial Evaluation, which used the type information
to collapse and specialize a function.

Jones, Gomard, and Sestoft "Partial Evaluation and Automatic
Program Generation" (1993) Prentice Hall


On 12/31/20, Tim Daly  wrote:
> It's been a long, slow effort to "get up to speed" (or more
> accurately "a slow crawl") on the various subjects one needs
> to know to work on "computational mathematics" aka SANE.
>
> A reading knowledge of a lot of areas is needed, including
> things like
>
> abstract algebra (to understand Axiom's category/domain)
> type theory (to understand first-order dependent types)
> proof theory (to understand Gentzen, sequents, etc.)
> category theory (to understand catamorphs, functors, etc.)
> metaobject protocol (to understand CLOS extensions)
> number theory (to understand finite fields)
> calculus (to understand field extensions for integration)
> programming (to understand typeclasses, CLOS, etc)
> philosophy (to understand polymorphic sets, etc)
> CPU hardware (to understand "down to the metal", FPGA, etc.)
> Specification (to understand consistency, completeness, etc)
> Verification (to understand decision procedures, etc)
> Proof tools (like Coq, Lean, etc.)
>
> and a few other subjects that won't come to mind at the
> moment.
>
> The usual academic approach, that creates a special
> "course of study", requiring classes in areas is probably
> best. There are a lot of good online courses, such as
> the UOregon summer school, that introduce the basic
> ideas clearly. There are some good introductory books
> like Troelstra and Schwichtenberg's "Basic Proof Theory"
> that cover Gentzen, Hilbert, and Natural Deduction ideas.
>
> It seems worthwhile to put together a full course of study
> with links to videos and books, so one can get up to speed
> on required knowledge. I'll start an outline with links to
> these on the axiom-developer.org website in the coming
> weeks. It's sort-of an "Axiom University" major in
> computational mathematics.
>
> (There are no degrees :-) But one does mathematics and
> programming for the joy of it anyway so who cares?)
>
> I welcome suggestions and online links.
>
> Tim
>



Re: Axiom musings...

2020-12-31 Thread Tim Daly
It's been a long, slow effort to "get up to speed" (or more
accurately "a slow crawl") on the various subjects one needs
to know to work on "computational mathematics" aka SANE.

A reading knowledge of a lot of areas is needed, including
things like

abstract algebra (to understand Axiom's category/domain)
type theory (to understand first-order dependent types)
proof theory (to understand Gentzen, sequents, etc.)
category theory (to understand catamorphs, functors, etc.)
metaobject protocol (to understand CLOS extensions)
number theory (to understand finite fields)
calculus (to understand field extensions for integration)
programming (to understand typeclasses, CLOS, etc)
philosophy (to understand polymorphic sets, etc)
CPU hardware (to understand "down to the metal", FPGA, etc.)
Specification (to understand consistency, completeness, etc)
Verification (to understand decision procedures, etc)
Proof tools (like Coq, Lean, etc.)

and a few other subjects that won't come to mind at the
moment.

The usual academic approach, that creates a special
"course of study", requiring classes in areas is probably
best. There are a lot of good online courses, such as
the UOregon summer school, that introduce the basic
ideas clearly. There are some good introductory books
like Troelstra and Schwichtenberg's "Basic Proof Theory"
that cover Gentzen, Hilbert, and Natural Deduction ideas.

It seems worthwhile to put together a full course of study
with links to videos and books, so one can get up to speed
on required knowledge. I'll start an outline with links to
these on the axiom-developer.org website in the coming
weeks. It's sort-of an "Axiom University" major in
computational mathematics.

(There are no degrees :-) But one does mathematics and
programming for the joy of it anyway so who cares?)

I welcome suggestions and online links.

Tim



Re: [EXTERNAL] Re: Axiom musings...

2020-12-21 Thread Tim Daly
The goal is to prove Axiom algorithms, such as the GCD
over the natural numbers, correct using something like Lean.

I've studied traditional approaches like Hoare triples
but I think there might be a different approach. It depends
on Harper's Trinity.

Suppose I had a set of verification conditions V.
Suppose I had a functor that carries V to a program P.
Suppose I had a functor that carries V to a proof  Q.
By the category definition of functors this implies a
natural transformation between P and Q ... and I'm done.

Except for some annoying details, of course.

1) What is a functor from V to P? Well, I have all of
group theory already available because of the way
Axiom is constructed. So the "elements" of the functor
require things like "carrying identity". Exactly what
this implies in the constructed program P needs to
be worked out in detail.

2) What is the functor from V to Q? Well, if I construct
a sequent calculus version of Lean's Type Theory then
I have a "complete" set of low-level tactics. It isn't clear
how those tactics can be applied to construct the functor
(at least not yet).

3) The two functors require certain constraints on the
verification conditions V. In dealing with the GCD over
the natural numbers (known as NonNegativeInteger or
NNI in Axiom) we need to make sure that V contains
all of the conditions that the correct program requires
(such as deriving the recursive call).

4) V also needs constraints in dealing with the GCD
so that the proof can use the Lean sequent rules
to derive a proof of the GCD. So I'd have to construct
a sequent proof of the GCD from the sequent-derived
tactics.

5) I don't know of a language V that covers both the
needs of the functors for P and Q so that will take
some omphaloskepsis. I might have to write a
"linter" program.

This category theory approach, using Harper's Trinity,
has the potential of proving programs correct without
dealing with things like Hoare triples.

This all works in theory but there is a lot of practical
work, such as deriving a sequent form of Lean's Type
Theory, deriving functors that fulfill all the definitions
using things like Axiom's group theory structure, and
inventing the verification language V rich enough
for both needs.

It's gonna be a long winter :-)

Tim

On 12/18/20, Tim Daly  wrote:
> Quoting Dijkstra (1988, EWD1036):
>
> ... a program is no more than half a conjecture. The other
> half of a conjecture is the functional specification the
> program is supposed to satisfy. The programmer's task
> is to present such complete conjectures as proven theorems.
>
>
> Axiom's SANE research project is trying to build a system
> where such "complete conjectures as proven theorems"
> can be implemented, at least for mathematical algorithms.
>
> After 50 years of programming it is humbling to realize
> that I've neglected the most important half of programming.
>
> The amount of effort and progress in proof theory, category
> theory, and interactive theorem proving is amazing; most
> of it only occurring in this century. I've spent the last 8 years
> trying to "catch up" with what future programmers will take
> as normal and expected.
>
> Imagine a Google whiteboard interview where you are
> asked to provide an algorithm and an associated proof.
> Will you get hired?
>
> Dijkstra saw this in 1988 and it's taken me decades to
> hear and understand.
>
> Tim
>
>
> On 12/18/20, Tim Daly  wrote:
>> I occasionally get grief because Axiom is implemented
>> in Common Lisp. People don't seem to like that for some
>> odd reason. Lisp, one of the easiest languages to learn,
>> seems to be difficult to accept.
>>
>> I'm working with John Harrison's book "Practical Logic
>> and Automated Reasoning" from 2009. It uses OCaml
>> to implement the programs.
>>
>> I've downloaded the OCaml sources from the website.
>> Apparently OCaml has changed significantly from 2009
>> since the code no longer compiles, spitting out "Alert
>> deprecated" all over the place and simply failing in places.
>>
>> C++ code has a "standard", well, several "standards".
>> When someone asks "Do you know C++" you really
>> need to qualify that question. What you knew about
>> "standard C++" seems to change every 2 years.
>>
>> I recently pulled out my KROPS code, used as the
>> basis for IBM's Expert System FAME. It was written
>> in Common Lisp (on a Symbolics machine). It still runs
>> unmodified despite being written about 40 years ago
>> on hardware and a software stack that no longer exists.
>>
>> Axiom, written in the last century, still compiles and
>> runs. Say what you want about Lisp code, it still is the
>> most portable and long-lasting language I've ever used.
>>
>> Tim
>>
>



Re: [EXTERNAL] Re: Axiom musings...

2020-12-18 Thread Tim Daly
Quoting Dijkstra (1988, EWD1036):

... a program is no more than half a conjecture. The other
half of a conjecture is the functional specification the
program is supposed to satisfy. The programmer's task
is to present such complete conjectures as proven theorems.


Axiom's SANE research project is trying to build a system
where such "complete conjectures as proven theorems"
can be implemented, at least for mathematical algorithms.

After 50 years of programming it is humbling to realize
that I've neglected the most important half of programming.

The amount of effort and progress in proof theory, category
theory, and interactive theorem proving is amazing; most
of it only occurring in this century. I've spent the last 8 years
trying to "catch up" with what future programmers will take
as normal and expected.

Imagine a Google whiteboard interview where you are
asked to provide an algorithm and an associated proof.
Will you get hired?

Dijkstra saw this in 1988 and it's taken me decades to
hear and understand.

Tim


On 12/18/20, Tim Daly  wrote:
> I occasionally get grief because Axiom is implemented
> in Common Lisp. People don't seem to like that for some
> odd reason. Lisp, one of the easiest languages to learn,
> seems to be difficult to accept.
>
> I'm working with John Harrison's book "Practical Logic
> and Automated Reasoning" from 2009. It uses OCaml
> to implement the programs.
>
> I've downloaded the OCaml sources from the website.
> Apparently OCaml has changed significantly from 2009
> since the code no longer compiles, spitting out "Alert
> deprecated" all over the place and simply failing in places.
>
> C++ code has a "standard", well, several "standards".
> When someone asks "Do you know C++" you really
> need to qualify that question. What you knew about
> "standard C++" seems to change every 2 years.
>
> I recently pulled out my KROPS code, used as the
> basis for IBM's Expert System FAME. It was written
> in Common Lisp (on a Symbolics machine). It still runs
> unmodified despite being written about 40 years ago
> on hardware and a software stack that no longer exists.
>
> Axiom, written in the last century, still compiles and
> runs. Say what you want about Lisp code, it still is the
> most portable and long-lasting language I've ever used.
>
> Tim
>



Re: [EXTERNAL] Re: Axiom musings...

2020-12-18 Thread Tim Daly
I occasionally get grief because Axiom is implemented
in Common Lisp. People don't seem to like that for some
odd reason. Lisp, one of the easiest languages to learn,
seems to be difficult to accept.

I'm working with John Harrison's book "Practical Logic
and Automated Reasoning" from 2009. It uses OCaml
to implement the programs.

I've downloaded the OCaml sources from the website.
Apparently OCaml has changed significantly from 2009
since the code no longer compiles, spitting out "Alert
deprecated" all over the place and simply failing in places.

C++ code has a "standard", well, several "standards".
When someone asks "Do you know C++" you really
need to qualify that question. What you knew about
"standard C++" seems to change every 2 years.

I recently pulled out my KROPS code, used as the
basis for IBM's Expert System FAME. It was written
in Common Lisp (on a Symbolics machine). It still runs
unmodified despite being written about 40 years ago
on hardware and a software stack that no longer exists.

Axiom, written in the last century, still compiles and
runs. Say what you want about Lisp code, it still is the
most portable and long-lasting language I've ever used.

Tim



Re: Axiom musings...

2020-12-09 Thread Tim Daly
Robert Harper's "Holy Trinity" of Computer Science
(aka "Computational Trinitarianism")

            Proof Theory
              (Logic)
              /     \
             /       \
   Category Theory   Type Theory
   (Mathematics)    (Programming)

There is a correspondence between Proof Theory,
Category Theory, and Type Theory or equivalently
a correspondence of Logic, Mathematics, and
Programming.

A concept may arise in, say, programming, and
the same concept can arise in mathematics or
logic.

"You don't know what you are talking about until
you understand it all three ways."

(https://www.cs.uoregon.edu/research/summerschool/summer12/videos/Harper1_0.mp4)
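
To make the correspondence concrete, here is a toy Lean
sketch (my example, not Harper's): one and the same term can
be read as a proof that conjunction commutes (logic), as a
pair-swapping program (programming), and as the swap
morphism of a product (category theory).

-- one term, three readings
theorem and_swap {A B : Prop} : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩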




This is essentially the research goal of the SANE version
of Axiom. The goal is to unify computational mathematics
in one system. The current effort is cleaning up things like
proper functors (category theory), adding proofs for
programs (proof theory), and implementing fully first-class
dependent types (type theory).

Harper's Holy Trinity gives a sound way of thinking about
various design decisions in the SANE version of Axiom.
Progress, albeit glacial, is being made.

Tim




On 12/3/20, Tim Daly  wrote:
> HILL CLIMBING
>
> I really encourage you to read the slides:
> Vladimir Voevodsky. Univalent foundations, March 2014. Presentation at
> IAS, http://www.math.ias.edu/~vladimir/Site3/Univalent_Foundations_files/2014_IAS.pdf
>
> The mathematical references likely won't make sense but the
> most important point of the article will. He found mistakes in
> work he and others did. Since these people were famous their
> work was never really checked (fortunately that's not my issue).
>
> quoting from slide 28:
>
> I now do my mathematics with a proof assistant and do not have to worry
> all the time about mistakes in my arguments or about how to convince
> others that my arguments are correct.
>
> But I think that the sense of urgency that pushed me to hurry with the
> program remains. Sooner or later computer proof assistants will become
> the norm, but the longer this process takes the more misery associated
> with mistakes and with unnecessary self-verification the
> practitioners of the field will have to endure.
>
>
>
>
> Of all of the software created, computer algebra seems like the most
> reasonable place to apply proof techniques. My feeling is that computer
> algebra is still 19th century, pre-formal mathematics. We can do better.
> We must. And as a consequence we elevate "computer algebra" to
> "computational mathematics", surely a worthwhile research goal.
>
> Tim
>
>
> On 12/3/20, Tim Daly  wrote:
>> I just ran across an interesting example about the difference
>> between testing and proving (not that I need to be convinced).
>> http://www.lix.polytechnique.fr/Labo/Samuel.Mimram/teaching/INF551/course.pdf
>> (page 10):
>>
>>
>> On day one, our mathematician finds out using a formal computation
>> software
>> that
>>
>> \int_0^\infty \frac{\sin(t)}{t}\,dt = \frac{\pi}{2}
>>
>> On day two, he tries to play around a bit with such formulas and finds
>> out
>> that
>>
>> \int_0^\infty \frac{\sin(t)}{t}\cdot\frac{\sin(t/101)}{t/101}\,dt = \frac{\pi}{2}
>>
>> On day three, he thinks maybe a pattern could emerge and discovers that
>>
>> \int_0^\infty \frac{\sin(t)}{t}\cdot\frac{\sin(t/101)}{t/101}\cdot\frac{\sin(t/201)}{t/201}\,dt = \frac{\pi}{2}
>>
>> On day four, he gets quite confident and conjectures that, for every n∈N,
>>
>> \int_0^\infty \prod_{k=0}^{n} \frac{\sin(t/(100k+1))}{t/(100k+1)}\,dt = \frac{\pi}{2}
>>
>>
>> In fact, the conjecture breaks starting at
>>
>> n= 15 341 178 777 673 149 429 167 740 440 969 249 338 310 889
>>
>> and none of the usual tests would have found this out.
>>
>> Tim
>>
>>
>>
>> On 11/29/20, Tim Daly  wrote:
>>> Axiom, as it was released to me long ago, is being
>>> overtaken by current technology.
>>>
>>> I'm taking an MIT course on "Underactuated Robots" [0]
>>> (I spent a large portion of my life doing robotics).
>>>
>>> The instructor is using Jupyter notebooks to illustrate
>>> control algorithms. He is doing computation and
>>> showing the graphics in a single notebook. Axiom
>>> could "almost" do that in the 1980s.
>>>
>>> It is clear that, with a sufficiently rich set of algorithms,
>>> such as one finds in CoCalc, Axiom is showing its age.
>>>
>>> This is the primary reason why I consider the SANE
>>> research effort vital. An Axiom system that has
>>> proven algorithms, follows knowledge from leading
>>> mathematics (e.g. Category Theory), and pushes the
>>> state of the art will keep Axiom alive and relevant.
>>>
>>> Axiom was, and is, research software. Just doing
>>> "more of the same" goes nowhere slowly. Following
>>> that path leads to the dustbin of history.
>>>
>>> Tim
>>>
>>> [0]
>>> https://www.youtube.com/watch?v=GPvw92IKO44&list=UUhfUOAhz7ynELF-s_1LPpWg&index=28
>>>
>>>
>>>
>>> On 11/27/20, Tim Daly  wrote:
 The current mental struggle involves properly re-constructing
 Axiom so Category Theory (e.g. objects, morphisms, identity,
 and composition) is respected between the domains.

Re: Axiom musings...

2020-12-02 Thread Tim Daly
HILL CLIMBING

I really encourage you to read the slides:
Vladimir Voevodsky. Univalent foundations, March 2014. Presentation at
IAS, http://www.math.ias.edu/~vladimir/Site3/Univalent_Foundations_files/2014_IAS.pdf

The mathematical references likely won't make sense but the
most important point of the article will. He found mistakes in
work he and others did. Since these people were famous their
work was never really checked (fortunately that's not my issue).

quoting from slide 28:

I now do my mathematics with a proof assistant and do not have to worry
all the time about mistakes in my arguments or about how to convince
others that my arguments are correct.

But I think that the sense of urgency that pushed me to hurry with the
program remains. Sooner or later computer proof assistants will become
the norm, but the longer this process takes the more misery associated
with mistakes and with unnecessary self-verification the practitioners of the
field will have to endure.




Of all of the software created, computer algebra seems like the most
reasonable place to apply proof techniques. My feeling is that computer
algebra is still 19th century, pre-formal mathematics. We can do better.
We must. And as a consequence we elevate "computer algebra" to
"computational mathematics", surely a worthwhile research goal.

Tim


On 12/3/20, Tim Daly  wrote:
> I just ran across an interesting example about the difference
> between testing and proving (not that I need to be convinced).
> http://www.lix.polytechnique.fr/Labo/Samuel.Mimram/teaching/INF551/course.pdf
> (page 10):
>
>
> On day one, our mathematician finds out using a formal computation software
> that
>
> \int_0^\infty \frac{\sin(t)}{t}\,dt = \frac{\pi}{2}
>
> On day two, he tries to play around a bit with such formulas and finds out
> that
>
> \int_0^\infty \frac{\sin(t)}{t}\cdot\frac{\sin(t/101)}{t/101}\,dt = \frac{\pi}{2}
>
> On day three, he thinks maybe a pattern could emerge and discovers that
>
> \int_0^\infty \frac{\sin(t)}{t}\cdot\frac{\sin(t/101)}{t/101}\cdot\frac{\sin(t/201)}{t/201}\,dt = \frac{\pi}{2}
>
> On day four, he gets quite confident and conjectures that, for every n∈N,
>
> \int_0^\infty \prod_{k=0}^{n} \frac{\sin(t/(100k+1))}{t/(100k+1)}\,dt = \frac{\pi}{2}
>
>
> In fact, the conjecture breaks starting at
>
> n= 15 341 178 777 673 149 429 167 740 440 969 249 338 310 889
>
> and none of the usual tests would have found this out.
>
> Tim
>
>
>
> On 11/29/20, Tim Daly  wrote:
>> Axiom, as it was released to me long ago, is being
>> overtaken by current technology.
>>
>> I'm taking an MIT course on "Underactuated Robots" [0]
>> (I spent a large portion of my life doing robotics).
>>
>> The instructor is using Jupyter notebooks to illustrate
>> control algorithms. He is doing computation and
>> showing the graphics in a single notebook. Axiom
>> could "almost" do that in the 1980s.
>>
>> It is clear that, with a sufficiently rich set of algorithms,
>> such as one finds in CoCalc, Axiom is showing its age.
>>
>> This is the primary reason why I consider the SANE
>> research effort vital. An Axiom system that has
>> proven algorithms, follows knowledge from leading
>> mathematics (e.g. Category Theory), and pushes the
>> state of the art will keep Axiom alive and relevant.
>>
>> Axiom was, and is, research software. Just doing
>> "more of the same" goes nowhere slowly. Following
>> that path leads to the dustbin of history.
>>
>> Tim
>>
>> [0]
>> https://www.youtube.com/watch?v=GPvw92IKO44&list=UUhfUOAhz7ynELF-s_1LPpWg&index=28
>>
>>
>>
>> On 11/27/20, Tim Daly  wrote:
>>> The current mental struggle involves properly re-constructing
>>> Axiom so Category Theory (e.g. objects, morphisms, identity,
>>> and composition) is respected between the domains. Done
>>> properly, this should make resolving type coercion and
>>> conversion less ad-hoc. This is especially important when
>>> dealing with first-class dependent types.
>>>
>>> Axiom categories and domains, in the SANE model, compile
>>> to lisp classes. These classes are "objects" in the Category
>>> Theory sense. Within a given category or domain, the
>>> representation (aka, the "carrier" in type theory) is an
>>> "object" (in Category Theory) and most functions are
>>> morphisms from Rep to Rep. Coercions are functors,
>>> which need to conform to the Category Theory properties.
>>>
>>> It also raises the interesting question of making Rep
>>> be its own class. This allows, for example, constructing
>>> the polynomial domain with the Rep as a parameter.
>>> Thus you get sparse, recursive, or dense polynomials
>>> by specifying different Rep classes. This is even more
>>> interesting when playing with Rep for things like
>>> Clifford algebras.
>>>
>>> It is quite a struggle to unify Category Theory, Type
>>> Theory, Programming, Computer Algebra, and Proof
>>> Theory so "it all just works as expected". Progress is
>>> being made, although not quickly.
>>>
>>> For those who want to play along I can recommend the
>>> MIT course in Category Theory [0] and the paper on
>>> The Type Theory of Lean [1]. For the programming
>>> aspects, look at The Art of the Metaobject Protocol [2].

Re: Axiom musings...

2020-12-02 Thread Tim Daly
I just ran across an interesting example about the difference
between testing and proving (not that I need to be convinced).
http://www.lix.polytechnique.fr/Labo/Samuel.Mimram/teaching/INF551/course.pdf
(page 10):


On day one, our mathematician finds out using a formal computation software
that

\int_0^\infty \frac{\sin(t)}{t}\,dt = \frac{\pi}{2}

On day two, he tries to play around a bit with such formulas and finds out that

\int_0^\infty \frac{\sin(t)}{t}\cdot\frac{\sin(t/101)}{t/101}\,dt = \frac{\pi}{2}

On day three, he thinks maybe a pattern could emerge and discovers that

\int_0^\infty \frac{\sin(t)}{t}\cdot\frac{\sin(t/101)}{t/101}\cdot\frac{\sin(t/201)}{t/201}\,dt = \frac{\pi}{2}

On day four, he gets quite confident and conjectures that, for every n∈N,

\int_0^\infty \prod_{k=0}^{n} \frac{\sin(t/(100k+1))}{t/(100k+1)}\,dt = \frac{\pi}{2}


In fact, the conjecture breaks starting at

n= 15 341 178 777 673 149 429 167 740 440 969 249 338 310 889

and none of the usual tests would have found this out.
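
As a quick illustration of what "testing" buys here, a tiny
Common Lisp sketch (mine, not from the course notes) that
tests the day-one integral numerically:

;; "Test" the day-one integral with the composite midpoint
;; rule on [0, 10000]. It agrees with pi/2 to a few digits --
;; and says nothing about where the conjecture fails.
(defun sinc (x) (if (zerop x) 1.0d0 (/ (sin x) x)))

(defun midpoint-integral (f a b n)
  (let ((h (/ (- b a) n))
        (sum 0.0d0))
    (dotimes (i n sum)
      (incf sum (* h (funcall f (+ a (* h (+ i 0.5d0)))))))))

;; (midpoint-integral #'sinc 0.0d0 1.0d4 1000000) => 1.5707...
;; (/ pi 2) => 1.5707963267948966d0

No finite battery of such tests could ever reach the n near
1.5e43 where the pattern finally breaks.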

Tim



On 11/29/20, Tim Daly  wrote:
> Axiom, as it was released to me long ago, is being
> overtaken by current technology.
>
> I'm taking an MIT course on "Underactuated Robots" [0]
> (I spent a large portion of my life doing robotics).
>
> The instructor is using Jupyter notebooks to illustrate
> control algorithms. He is doing computation and
> showing the graphics in a single notebook. Axiom
> could "almost" do that in the 1980s.
>
> It is clear that, with a sufficiently rich set of algorithms,
> such as one finds in CoCalc, Axiom is showing its age.
>
> This is the primary reason why I consider the SANE
> research effort vital. An Axiom system that has
> proven algorithms, follows knowledge from leading
> mathematics (e.g. Category Theory), and pushes the
> state of the art will keep Axiom alive and relevant.
>
> Axiom was, and is, research software. Just doing
> "more of the same" goes nowhere slowly. Following
> that path leads to the dustbin of history.
>
> Tim
>
> [0]
> https://www.youtube.com/watch?v=GPvw92IKO44&list=UUhfUOAhz7ynELF-s_1LPpWg&index=28
>
>
>
> On 11/27/20, Tim Daly  wrote:
>> The current mental struggle involves properly re-constructing
>> Axiom so Category Theory (e.g. objects, morphisms, identity,
>> and composition) is respected between the domains. Done
>> properly, this should make resolving type coercion and
>> conversion less ad-hoc. This is especially important when
>> dealing with first-class dependent types.
>>
>> Axiom categories and domains, in the SANE model, compile
>> to lisp classes. These classes are "objects" in the Category
>> Theory sense. Within a given category or domain, the
>> representation (aka, the "carrier" in type theory) is an
>> "object" (in Category Theory) and most functions are
>> morphisms from Rep to Rep. Coercions are functors,
>> which need to conform to the Category Theory properties.
>>
>> It also raises the interesting question of making Rep
>> be its own class. This allows, for example, constructing
>> the polynomial domain with the Rep as a parameter.
>> Thus you get sparse, recursive, or dense polynomials
>> by specifying different Rep classes. This is even more
>> interesting when playing with Rep for things like
>> Clifford algebras.
>>
>> It is quite a struggle to unify Category Theory, Type
>> Theory, Programming, Computer Algebra, and Proof
>> Theory so "it all just works as expected". Progress is
>> being made, although not quickly.
>>
>> For those who want to play along I can recommend the
>> MIT course in Category Theory [0] and the paper on
>> The Type Theory of Lean [1]. For the programming
>> aspects, look at The Art of the Metaobject Protocol [2].
>>
>> [0] https://www.youtube.com/user/youdsp/playlists
>> [1]
>> https://github.com/digama0/lean-type-theory/releases/download/v1.0/main.pdf
>> [2] https://mitpress.mit.edu/books/art-metaobject-protocol
>>
>> Tim
>>
>>
>> On 11/10/20, Tim Daly  wrote:
>>> Another area that is taking a great deal of time and effort
>>> is choosing a code architecture that has a solid background
>>> in research.
>>>
>>> Axiom used CLU and Group Theory as two scaffolds to
>>> guide design choices, making it possible to argue whether
>>> and where a domain should be structured and where it
>>> should occur in the system.
>>>
>>> Axiom uses a Pratt-parser scheme, which makes it possible
>>> to define and change syntax by adjusting the LED and NUD
>>> numeric values. It works, but it is neither clear nor easy
>>> to maintain.
>>>
>>> An alternative under consideration is Hutton and Meijer's
>>> Monadic Parser Combinators. This approach builds parser
>>> functions and combines them with higher-order functions.
>>> It is strongly typed, and it provides a hierarchy of
>>> functions that would allow the parser to be abstracted at
>>> many levels, making explanations clearer.
>>>
>>> Ideally the structure of the new parser should be clear,
>>> easy to maintain, and robust under changes. Since SANE
>>> is a research effort, maximum flexibility is ideal.
>>>
>>> Since SANE is intended to be easily maintained, it is
>>> important that the compiler / interpreter structure be
>>> clear and consistent.

Re: Axiom musings...

2020-11-28 Thread Tim Daly
Axiom, as it was released to me long ago, is being
overtaken by current technology.

I'm taking an MIT course on "Underactuated Robots" [0]
(I spent a large portion of my life doing robotics).

The instructor is using Jupyter notebooks to illustrate
control algorithms. He is doing computation and
showing the graphics in a single notebook. Axiom
could "almost" do that in the 1980s.

It is clear that, with a sufficiently rich set of algorithms,
such as one finds in CoCalc, Axiom is showing its age.

This is the primary reason why I consider the SANE
research effort vital. An Axiom system that has
proven algorithms, follows knowledge from leading
mathematics (e.g. Category Theory), and pushes the
state of the art will keep Axiom alive and relevant.

Axiom was, and is, research software. Just doing
"more of the same" goes nowhere slowly. Following
that path leads to the dustbin of history.

Tim

[0] 
https://www.youtube.com/watch?v=GPvw92IKO44&list=UUhfUOAhz7ynELF-s_1LPpWg&index=28



On 11/27/20, Tim Daly  wrote:
> The current mental struggle involves properly re-constructing
> Axiom so Category Theory (e.g. objects, morphisms, identity,
> and composition) is respected between the domains. Done
> properly, this should make resolving type coercion and
> conversion less ad-hoc. This is especially important when
> dealing with first-class dependent types.
>
> Axiom categories and domains, in the SANE model, compile
> to lisp classes. These classes are "objects" in the Category
> Theory sense. Within a given category or domain, the
> representation (aka, the "carrier" in type theory) is an
> "object" (in Category Theory) and most functions are
> morphisms from Rep to Rep. Coercions are functors,
> which need to conform to the Category Theory properties.
>
> It also raises the interesting question of making Rep
> be its own class. This allows, for example, constructing
> the polynomial domain with the Rep as a parameter.
> Thus you get sparse, recursive, or dense polynomials
> by specifying different Rep classes. This is even more
> interesting when playing with Rep for things like
> Clifford algebras.
>
> It is quite a struggle to unify Category Theory, Type
> Theory, Programming, Computer Algebra, and Proof
> Theory so "it all just works as expected". Progress is
> being made, although not quickly.
>
> For those who want to play along I can recommend the
> MIT course in Category Theory [0] and the paper on
> The Type Theory of Lean [1]. For the programming
> aspects, look at The Art of the Metaobject Protocol [2].
>
> [0] https://www.youtube.com/user/youdsp/playlists
> [1]
> https://github.com/digama0/lean-type-theory/releases/download/v1.0/main.pdf
> [2] https://mitpress.mit.edu/books/art-metaobject-protocol
>
> Tim
>
>
> On 11/10/20, Tim Daly  wrote:
>> Another area that is taking a great deal of time and effort
>> is choosing a code architecture that has a solid background
>> in research.
>>
>> Axiom used CLU and Group Theory as two scaffolds to
>> guide design choices, making it possible to argue whether
>> and where a domain should be structured and where it
>> should occur in the system.
>>
>> Axiom uses a Pratt-parser scheme, which makes it possible
>> to define and change syntax by adjusting the LED and NUD
>> numeric values. It works, but it is neither clear nor easy
>> to maintain.
>>
>> An alternative under consideration is Hutton and Meijer's
>> Monadic Parser Combinators. This approach builds parser
>> functions and combines them with higher-order functions.
>> It is strongly typed, and it provides a hierarchy of
>> functions that would allow the parser to be abstracted at
>> many levels, making explanations clearer.
>>
>> Ideally the structure of the new parser should be clear,
>> easy to maintain, and robust under changes. Since SANE
>> is a research effort, maximum flexibility is ideal.
>>
>> Since SANE is intended to be easily maintained, it is
>> important that the compiler / interpreter structure be
>> clear and consistent. This is especially important as
>> the choice of proof language syntax and specification
>> language syntax has not yet been decided.
>>
>> Tim
>>
>>
>>
>>
>> On 10/9/20, Tim Daly  wrote:
>>> A great deal of thought has gone into the representation
>>> of functions in the SANE version.
>>>
>>> It is important that a function lives within an environment.
>>> There are no "stand alone" functions. Given a function
>>> its environment contains the representation which itself
>>> is a function type with accessor functions. Wrapped
>>> around these is the domain type containing other
>>> functions. The domain type lives within several
>>> preorders of environments, some dictated by the
>>> structure of group theory, some dictated by the structure
>>> of the logic.
>>>
>>> Lisp has the ability to construct arbitrary structures
>>> which can be nested or non-nested, and even potentially
>>> circular.
>>>
>>> Even more interesting is that these structures can be
>>> "self modified" so that one could 

Re: Axiom musings...

2020-11-27 Thread Tim Daly
The current mental struggle involves properly re-constructing
Axiom so Category Theory (e.g. objects, morphisms, identity,
and composition) is respected between the domains. Done
properly, this should make resolving type coercion and
conversion less ad-hoc. This is especially important when
dealing with first-class dependent types.

Axiom categories and domains, in the SANE model, compile
to lisp classes. These classes are "objects" in the Category
Theory sense. Within a given category or domain, the
representation (aka, the "carrier" in type theory) is an
"object" (in Category Theory) and most functions are
morphisms from Rep to Rep. Coercions are functors,
which need to conform to the Category Theory properties.

It also raises the interesting question of making Rep
be its own class. This allows, for example, constructing
the polynomial domain with the Rep as a parameter.
Thus you get sparse, recursive, or dense polynomials
by specifying different Rep classes. This is even more
interesting when playing with Rep for things like
Clifford algebras.
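
A toy CLOS sketch of the idea (class and function names are
mine, purely illustrative, not SANE's actual design):

;; Two Rep classes and a polynomial constructor that
;; dispatches on them: sparse vs. dense becomes a
;; construction-time parameter, not a separate domain.
(defclass dense-rep () ()
  (:documentation "Coefficients as a vector indexed by degree."))
(defclass sparse-rep () ()
  (:documentation "Coefficients as (degree . coeff) pairs."))

(defgeneric make-poly (rep coeffs))

(defmethod make-poly ((r dense-rep) coeffs)
  (coerce coeffs 'vector))

(defmethod make-poly ((r sparse-rep) coeffs)
  (loop for c in coeffs
        for d from 0
        unless (zerop c) collect (cons d c)))

;; (make-poly (make-instance 'dense-rep) '(1 0 3))  => #(1 0 3)
;; (make-poly (make-instance 'sparse-rep) '(1 0 3)) => ((0 . 1) (2 . 3))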

It is quite a struggle to unify Category Theory, Type
Theory, Programming, Computer Algebra, and Proof
Theory so "it all just works as expected". Progress is
being made, although not quickly.

For those who want to play along I can recommend the
MIT course in Category Theory [0] and the paper on
The Type Theory of Lean [1]. For the programming
aspects, look at The Art of the Metaobject Protocol [2].

[0] https://www.youtube.com/user/youdsp/playlists
[1] https://github.com/digama0/lean-type-theory/releases/download/v1.0/main.pdf
[2] https://mitpress.mit.edu/books/art-metaobject-protocol

Tim


On 11/10/20, Tim Daly  wrote:
> Another area that is taking a great deal of time and effort
> is choosing a code architecture that has a solid background
> in research.
>
> Axiom used CLU and Group Theory as two scaffolds to
> guide design choices, making it possible to argue whether
> and where a domain should be structured and where it
> should occur in the system.
>
> Axiom uses a Pratt-parser scheme, which makes it possible
> to define and change syntax by adjusting the LED and NUD
> numeric values. It works, but it is neither clear nor easy
> to maintain.
>
> An alternative under consideration is Hutton and Meijer's
> Monadic Parser Combinators. This approach builds parser
> functions and combines them with higher-order functions.
> It is strongly typed, and it provides a hierarchy of
> functions that would allow the parser to be abstracted at
> many levels, making explanations clearer.
>
> Ideally the structure of the new parser should be clear,
> easy to maintain, and robust under changes. Since SANE
> is a research effort, maximum flexibility is ideal.
>
> Since SANE is intended to be easily maintained, it is
> important that the compiler / interpreter structure be
> clear and consistent. This is especially important as
> the choice of proof language syntax and specification
> language syntax has not yet been decided.
>
> Tim
>
>
>
>
> On 10/9/20, Tim Daly  wrote:
>> A great deal of thought has gone into the representation
>> of functions in the SANE version.
>>
>> It is important that a function lives within an environment.
>> There are no "stand alone" functions. Given a function
>> its environment contains the representation which itself
>> is a function type with accessor functions. Wrapped
>> around these is the domain type containing other
>> functions. The domain type lives within several
>> preorders of environments, some dictated by the
>> structure of group theory, some dictated by the structure
>> of the logic.
>>
>> Lisp has the ability to construct arbitrary structures
>> which can be nested or non-nested, and even potentially
>> circular.
>>
>> Even more interesting is that these structures can be
>> "self modified" so that one could have a domain (e.g.
>> genetic algorithms) that self-adapt, based on input.
>> Or "AI" domains, representated as weight matrices,
>> that self-adapt to input by changing weights.
>>
>> Ultimately the goal of the representation should be that,
>> given a function, it should be possible to traverse the
>> whole of the environment using a small, well-defined
>> set of well-typed structure walkers.
>>
>> All of this is easy to write down in Lisp.
>> The puzzle is how to reflect these ideas in Spad.
>>
>> Currently the compiler translates all of the Spad
>> code to Lisp objects so the Spad->Lisp conversion
>> is easy. Lisp->Spad is also easy for things that have
>> a surface representation in Spad. But, in general,
>> Lisp->Spad is only a partial function, and somewhat
>> limited at that.
>>
>> I suspect that the solution will allow some domains
>> to be pure Lisp and others will allow in-line Lisp.
>>
>> Most of these ideas are nearly impossible to even
>> think about in other languages or, if attempted,
>> fall afoul of Greenspun's Tenth Rule, to wit:
>>
>>   "Any sufficiently complicated C or Fortran program
>>    contains an ad hoc, informally-specified, bug-ridden,
>>    slow implementation of half of Common Lisp."

Re: Axiom musings...

2020-11-09 Thread Tim Daly
Another area that is taking a great deal of time and effort
is choosing a code architecture that has a solid background
in research.

Axiom used CLU and Group Theory as two scaffolds to
guide design choices, making it possible to argue whether
and where a domain should be structured and where it
should occur in the system.

Axiom uses a Pratt-parser scheme, which makes it possible
to define and change syntax by adjusting the LED and NUD
numeric values. It works, but it is neither clear nor easy
to maintain.

An alternative under consideration is Hutton and Meijer's
Monadic Parser Combinators. This approach builds parser
functions and combines them with higher-order functions.
It is strongly typed, and it provides a hierarchy of
functions that would allow the parser to be abstracted at
many levels, making explanations clearer.

Ideally the structure of the new parser should be clear,
easy to maintain, and robust under changes. Since SANE
is a research effort, maximum flexibility is ideal.

Since SANE is intended to be easily maintained, it is
important that the compiler / interpreter structure be
clear and consistent. This is especially important as
the choice of proof language syntax and specification
language syntax has not yet been decided.
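
For flavor, a toy Common Lisp encoding of the combinator
style (my own sketch; none of this is settled SANE design).
A parser is a function from a string to a value plus the
remaining input, or NIL on failure:

;; the primitive parser: consume one character
(defun item ()
  (lambda (s)
    (unless (zerop (length s))
      (values (char s 0) (subseq s 1)))))

(defun unit-p (x)                     ; monadic return
  (lambda (s) (values x s)))

(defun fail-p ()                      ; always fails
  (lambda (s) (declare (ignore s)) nil))

(defun bind-p (p f)                   ; monadic bind: sequencing
  (lambda (s)
    (multiple-value-bind (r rest) (funcall p s)
      (when rest (funcall (funcall f r) rest)))))

(defun sat (pred)                     ; one char satisfying PRED
  (bind-p (item)
          (lambda (c)
            (if (funcall pred c) (unit-p c) (fail-p)))))

;; (funcall (sat #'digit-char-p) "42") => #\4, "2"

Higher-level combinators (alternation, repetition) stack on
top of BIND-P in the same way, which is what makes the
layered, explainable structure possible.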

Tim




On 10/9/20, Tim Daly  wrote:
> A great deal of thought has gone into the representation
> of functions in the SANE version.
>
> It is important that a function lives within an environment.
> There are no "stand alone" functions. Given a function
> its environment contains the representation which itself
> is a function type with accessor functions. Wrapped
> around these is the domain type containing other
> functions. The domain type lives within several
> preorders of environments, some dictated by the
> structure of group theory, some dictated by the structure
> of the logic.
>
> Lisp has the ability to construct arbitrary structures
> which can be nested or non-nested, and even potentially
> circular.
>
> Even more interesting is that these structures can be
> "self modified" so that one could have a domain (e.g.
> genetic algorithms) that self-adapt, based on input.
> Or "AI" domains, representated as weight matrices,
> that self-adapt to input by changing weights.
>
> Ultimately the goal of the representation should be that,
> given a function, it should be possible to traverse the
> whole of the environment using a small, well-defined
> set of well-typed structure walkers.
>
> All of this is easy to write down in Lisp.
> The puzzle is how to reflect these ideas in Spad.
>
> Currently the compiler translates all of the Spad
> code to Lisp objects so the Spad->Lisp conversion
> is easy. Lisp->Spad is also easy for things that have
> a surface representation in Spad. But, in general,
> Lisp->Spad is only a partial function, and somewhat
> limited at that.
>
> I suspect that the solution will allow some domains
> to be pure Lisp and others will allow in-line Lisp.
>
> Most of these ideas are nearly impossible to even
> think about in other languages or, if attempted,
> fall afoul of Greenspun's Tenth Rule, to wit:
>
>   "Any sufficiently complicated C or Fortran program
>contains an ad hoc, informally-specified, bug-ridden,
>slow implementation of half of Common Lisp."
>
> Fortunately Spad is only a domain specific language
> on top of Lisp. Once past the syntax level it is Lisp
> all the way down.
>
> Tim
>



Re: Axiom musings...

2020-10-09 Thread Tim Daly
A great deal of thought has gone into the representation
of functions in the SANE version.

It is important that a function lives within an environment.
There are no "stand alone" functions. Given a function
its environment contains the representation which itself
is a function type with accessor functions. Wrapped
around these is the domain type containing other
functions. The domain type lives within several
preorders of environments, some dictated by the
structure of group theory, some dictated by the structure
of the logic.

Lisp has the ability to construct arbitrary structures
which can be nested or non-nested, and even potentially
circular.

Even more interesting is that these structures can be
"self modified" so that one could have a domain (e.g.
genetic algorithms) that self-adapt, based on input.
Or "AI" domains, representated as weight matrices,
that self-adapt to input by changing weights.

Ultimately the goal of the representation should be that,
given a function, it should be possible to traverse the
whole of the environment using a small, well-defined
set of well-typed structure walkers.
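
A small Common Lisp sketch of that point (structure names
are mine, illustrative only):

;; An environment that nests via PARENT, plus one generic
;; structure walker over the chain. Circular structures are
;; also expressible, as the *PRINT-CIRCLE* example shows.
(defstruct env name parent table)

(defun walk-env (env fn)
  "Apply FN to ENV and each enclosing environment in turn."
  (loop for e = env then (env-parent e)
        while e
        do (funcall fn e)))

(let ((rep    (make-env :name 'rep))
      (domain (make-env :name 'polynomial)))
  (setf (env-parent rep) domain)
  (walk-env rep (lambda (e) (print (env-name e)))))

(let ((*print-circle* t))
  (let ((e (make-env :name 'self)))
    (setf (env-table e) (list e))   ; a structure holding itself
    (print e)))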

All of this is easy to write down in Lisp.
The puzzle is how to reflect these ideas in Spad.

Currently the compiler translates all of the Spad
code to Lisp objects so the Spad->Lisp conversion
is easy. Lisp->Spad is also easy for things that have
a surface representation in Spad. But, in general,
Lisp->Spad is only a partial function, and somewhat
limited at that.

I suspect that the solution will allow some domains
to be pure Lisp and others will allow in-line Lisp.

Most of these ideas are nearly impossible to even
think about in other languages or, if attempted,
fall afoul of Greenspun's Tenth Rule, to wit:

  "Any sufficiently complicated C or Fortran program
   contains an ad hoc, informally-specified, bug-ridden,
   slow implementation of half of Common Lisp."

Fortunately Spad is only a domain specific language
on top of Lisp. Once past the syntax level it is Lisp
all the way down.

Tim



Re: Axiom musings...

2020-10-07 Thread Tim Daly
I'm writing a book on the history of Axiom. As part of
that effort I'm reviewing all of the emails posted to the
mailing list. I came across this posting by Scott Morrison,
the author of the interpreter:

Date: Thu, 3 Nov 2005 08:34:55 -0800
From: Scott Morrison
To: list
Subject: Re: BAD tim

Hello everybody,

This is my first posting to this list. I worked at IBM Research off and on
for 9 years. I'm the primary author/designer of the interpreter and
HyperDoc, and I wrote the first code generator and optimizer for Aldor (then
called A#).

I found Tim's posting very interesting, and I just had to comment. (Hi Tim!)

>>POINT 3: Boot is a dead language.
>>
>>   There are approximately 10 people still living who have coded in
>>   boot and every one of them is doing something else with their
>>   life.


Being one of those 10 people, I have to agree with Tim and his opinion of
the Boot language. Dick Jenks told me that boot was to be the first step in
implementing the entire system in the Spad language. Once the syntax looked
like Spad, the next step would be to convert it to real Spad. Of course that
never happened, so the main purpose behind Boot is gone.

In my opinion Boot can be a convenient language to code in, but it has two
main problems. The use of indentation for grouping is just a bad idea. It
works well for one-page programs, but for anything of reasonable
complexity, it falls down. Just adding braces to Boot for grouping would go
a long way to making it more usable. The other problem is the lack of a way
to structure data. I fully agree with Tim on the problems with that
language.

>>I'm the only person likely to be hacking
>>   the interpreter for the near future, mostly because it is so big,
>>   ugly, unstructured, and undocumented.

Ouch. The truth hurts. Believe me, when I started writing the interpreter
some 21 years ago, I never imagined it would have such a long life. This was
my first major programming project out of college. After 20 years of
experience, I would do a lot of things differently today. These days I'm too
busy with my current job (graphics programming at Autodesk) to be able to
devote any time to the Axiom project.

>>There is no reason (other than historical) why there are
>>   a dozen entry points into the interpreter that set weird, undocumented,
>>   stateful flags. All that cruft must go away.

Good luck Tim. As the author of much of this cruft, I think you have your
work cut out for you. Maintaining functionality while refactoring the code
will be a major challenge. Your two year estimate seems about right if you
are planning on working on this full time. If not, I think two years is
optimistic.

I think the transition from Boot to Lisp could be done in an incremental
way, converting modules gradually, thus maintaining the integrity of the
system during the conversion. However, I agree that converting to Lisp
ultimately is the right thing to do.

I'm on the axiom-developer list now, so I can answer the occasional
question, but I won't have much time for this project, unfortunately.



Re: Axiom musings...

2020-10-01 Thread Tim Daly
Hill Climbing

Almost every possible step along the path to creating
the SANE version of Axiom involves a LOT of hill
climbing; by that I mean there is a LOT to learn
just to get started.

Axiom was built with a scaffold of group theory which
makes it easier to reason about and understand. Going
forward we need to use the theories we have learned
since the 1980s to ground the system properly.

One hill is that there is almost no documentation, or even
a literature reference, for many of Axiom's algorithms. Some
were done as PhD research; many were ad-hoc "it works
for me", etc. Axiom has already decorated some of the
algorithms with literature references. More needs to be done.

As for hills, there is the large area of logic which has
many different branches. One has to crawl from one
corner of the lambda cube to the other in order to know
what dependent types are and how to use them.
https://www.cs.uoregon.edu/research/summerschool/summer12/curriculum.html
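
For a taste of the far corner of that cube, the standard
length-indexed vector in Lean (illustrative only, not Axiom
code): the length lives in the type, so append carries its
length property by construction.

-- the Nat index in the type is the vector's length
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- the checker enforces that appending an m-vector to an
-- n-vector yields an (n + m)-vector
def Vec.append : Vec α m → Vec α n → Vec α (n + m)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (Vec.append xs ys)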

Then there is the category theory area since we would
like to have things like definitionally correct functors.
https://www.youtube.com/watch?v=Y5YCE_mVjvg&list=PLhgq-BqyZ7i7MTGhUROZy3BOICnVixETS

Then there is the whole area of proof. Currently Axiom
is focusing on Lean as the underlying machinery as it
seems to be the leading edge in constructive type theory
and mathematics.
https://leanprover.github.io

There are many different approaches to specification
ranging from "hand waving" (UML) to strong theory.
But transitioning from the specification to the code is
still a huge gap.
https://formal.iti.kit.edu/~beckert/teaching/Spezifikation-SS04/11Z.pdf

Programming has also advanced. Clojure has raised
some significant issues of concurrent and scalable
programming. It would be great if matrix algorithms
could be run in parallel.
https://clojure.org/about/concurrent_programming

Symbolic-Numeric programming has been an issue.
There is every reason for numeric programs to have a
"place at the table" in a computational mathematics
system. There are interesting issues that arise.
https://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868
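
Not unums, but a crude interval sketch in Common Lisp (mine)
of Gustafson's underlying theme: results should carry
rigorous bounds instead of silently rounded values.

;; Closed-interval arithmetic: every result brackets the
;; true value, so error cannot accumulate unseen.
(defstruct (interval (:constructor iv (lo hi))) lo hi)

(defun iv+ (a b)
  (iv (+ (interval-lo a) (interval-lo b))
      (+ (interval-hi a) (interval-hi b))))

(defun iv* (a b)
  (let ((p (list (* (interval-lo a) (interval-lo b))
                 (* (interval-lo a) (interval-hi b))
                 (* (interval-hi a) (interval-lo b))
                 (* (interval-hi a) (interval-hi b)))))
    (iv (reduce #'min p) (reduce #'max p))))

;; (iv* (iv -1 2) (iv 3 4)) => #S(INTERVAL :LO -4 :HI 8)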

Of course, none of this gets anywhere near the issues
of proving programs correct, that being the central
question.

Then there is the question of creating a compiler that
uses and integrates all of the above as well as being
able to handle the current Axiom algorithms. But the
compiler needs to also recognize and propagate the
logical definitions and axioms along the inheritance
paths.

Of course, the compiler and interpreter should have the
same semantics and work on the same data structures
so that what compiles will interpret and what interprets
compiles.

That doesn't even mention the supposedly trivial issue
of replacing the front end, hyperdoc, and graphics with
a portable browser interface. CSS seems to be a whole
career skill at this point. Plus it really should work well
in the whole "workbook" environment like CoCalc.

Progress is being made in all of these areas but, as
you can imagine, it takes a lot of thought to combine
them cleanly.

Tim



Re: Axiom musings...

2020-09-28 Thread Tim Daly
My apologies. I reacted poorly to what I
interpreted as a criticism of the developers of
Scratchpad. I consider them among the most
clever and intelligent people I've ever met. I was
lucky to work with them.

Tim



Re: [EXTERNAL] Re: Axiom musings...

2020-09-25 Thread Tim Daly
No, the name 'Axiom" is used by many different companies.

Trademarks only cover infringement "in the same area"
(although the Lexus car company sued a diner named
Lexus and "won" because the diner couldn't compete
against big corporate lawyer money).

One company in New Jersey named their language "Axiom"
and I keep getting calls from recruiters because that company
will pay big bucks to hire an "Axiom Developer". Sigh.

I just thought the space one was "over the top".

I'd sign up to go to space in a heartbeat.
I applied to be an astronaut in the 1990s. NASA
wanted someone with at least a Master's degree,
a background in Robotics, and less than 5'9" tall.
I fit the criteria but never heard back.

Tim







On 9/25/20, William Sit  wrote:
> Axiom Space, a Houston-based company, is not related to Axiom the scientific
> computation system. Or is it?
>
> William
>
> William Sit
> Professor Emeritus
> Department of Mathematics
> The City College of The City University of New York
> New York, NY 10031
> homepage: wsit.ccny.cuny.edu
>
> 
> From: Axiom-developer
>  on behalf of
> Tim Daly 
> Sent: Thursday, September 24, 2020 4:26 AM
> To: Ricardo Corral C.
> Cc: axiom-developer@nongnu.org
> Subject: [EXTERNAL] Re: Axiom musings...
>
> Today's Headline:
>
> Axiom finalizing agreements for private astronaut mission to space station
>
> https://spaceflightnow.com/2020/09/23/axiom-finalizing-agreements-for-private-astronaut-mission-to-space-station/
>
> Wow. I HAVE been busy :-)
>
> Tim
>
>
> On 9/24/20, Tim Daly  wrote:
>> Ricardo,
>>
>> Yes, I'm familiar with Sage. Axiom was originally connected
>> back around 2006 / 2007. William Stein showed me that it
>> runs fine in the latest version.
>>
>> Unfortunately it doesn't do all of the things Axiom supports.
>>
>> I will look at Elixir LiveView. Thanks.
>>
>> Tim
>>
>>
>> On 9/24/20, Ricardo Corral C.  wrote:
>>> Elixir LiveView offers a nice way to interact with the browser. I’ve been
>>> playing with rendering OpenAI Atari frames from their Python objects (using
>>> erlport), so it seems like a plausible option for interacting with Axiom
>>> too. Note that sagemath.org already interacts with Axiom, so maybe
>>> connecting through it could serve as a bridge.
>>>
>>> On Thu 24 Sep 2020 at 0:06 Tim Daly  wrote:
>>>
>>>> The new Axiom version needs to have a better user interface.
>>>>
>>>>
>>>>
>>>> I'm experimenting with a browser front end that has an Axiom
>>>>
>>>> editor that runs Axiom in the background. This isn't really
>>>>
>>>> a new idea. Maxima has been doing it for years.
>>>>
>>>>
>>>>
>>>> Using the browser has the advantage of integrating the
>>>>
>>>> compiler, interpreter, graphics, and documentation in one
>>>>
>>>> interface.
>>>>
>>>>
>>>>
>>>> I managed to get the editor-in-browser working.
>>>>
>>>>
>>>>
>>>> Axiom already has a browser connection (book volume 11)
>>>>
>>>> designed to replace hyperdoc and working as an
>>>>
>>>> interpreter I/O so this editor would be an extension.
>>>>
>>>>
>>>>
>>>> Since almost all of Axiom is already in hyperlinked PDF
>>>>
>>>> files it will be possible to jump directly to related sections
>>>>
>>>> in the various books.
>>>>
>>>>
>>>>
>>>> Tim
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On 9/23/20, Tim Daly  wrote:
>>>>
>>>> > Rich Hickey gave a keynote:
>>>>
>>>> > https://www.youtube.com/watch?v=oyLBGkS5ICk
>>>>
>>>> > which, like all of Hickey's talks, is worth watching.
>>>>
>>>> >
>>>

Re: [EXTERNAL] Re: Axiom musings...

2020-09-25 Thread William Sit
Axiom Space, a Houston-based company, is not related to Axiom the scientific 
computation system. Or is it?

William

William Sit
Professor Emeritus
Department of Mathematics
The City College of The City University of New York
New York, NY 10031
homepage: wsit.ccny.cuny.edu


From: Axiom-developer 
 on behalf of Tim 
Daly 
Sent: Thursday, September 24, 2020 4:26 AM
To: Ricardo Corral C.
Cc: axiom-developer@nongnu.org
Subject: [EXTERNAL] Re: Axiom musings...

Today's Headline:

Axiom finalizing agreements for private astronaut mission to space station

https://spaceflightnow.com/2020/09/23/axiom-finalizing-agreements-for-private-astronaut-mission-to-space-station/

Wow. I HAVE been busy :-)

Tim


On 9/24/20, Tim Daly  wrote:
> Ricardo,
>
> Yes, I'm familiar with Sage. Axiom was originally connected
> back around 2006 / 2007. William Stein showed me that it
> runs fine in the latest version.
>
> Unfortunately it doesn't do all of the things Axiom supports.
>
> I will look at Elixir LiveView. Thanks.
>
> Tim
>
>
> On 9/24/20, Ricardo Corral C.  wrote:
>> Elixir LiveView offers a nice way to interact with the browser. I’ve been
>> playing with rendering OpenAI Atari frames from their Python objects (using
>> erlport), so it seems like a plausible option for interacting with Axiom
>> too. Note that sagemath.org already interacts with Axiom, so maybe
>> connecting through it could serve as a bridge.
>>
>> On Thu 24 Sep 2020 at 0:06 Tim Daly  wrote:
>>
>>> The new Axiom version needs to have a better user interface.
>>>
>>>
>>>
>>> I'm experimenting with a browser front end that has an Axiom
>>>
>>> editor that runs Axiom in the background. This isn't really
>>>
>>> a new idea. Maxima has been doing it for years.
>>>
>>>
>>>
>>> Using the browser has the advantage of integrating the
>>>
>>> compiler, interpreter, graphics, and documentation in one
>>>
>>> interface.
>>>
>>>
>>>
>>> I managed to get the editor-in-browser working.
>>>
>>>
>>>
>>> Axiom already has a browser connection (book volume 11)
>>>
>>> designed to replace hyperdoc and working as an
>>>
>>> interpreter I/O so this editor would be an extension.
>>>
>>>
>>>
>>> Since almost all of Axiom is already in hyperlinked PDF
>>>
>>> files it will be possible to jump directly to related sections
>>>
>>> in the various books.
>>>
>>>
>>>
>>> Tim
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 9/23/20, Tim Daly  wrote:
>>>
>>> > Rich Hickey gave a keynote:
>>>
>>> > https://www.youtube.com/watch?v=oyLBGkS5ICk
>>>
>>> > which, like all of Hickey's talks, is worth watching.
>>>
>>> >
>>>
>>> > He talks about programs breaking due to things like
>>>
>>> > library changes. Around minute 30 he started to talk
>>>
>>> > about why "semantic versioning" (e.g. version 1.2.3)
>>>
>>> > is meaningless.
>>>
>>> >
>>>
>>> > I realized this years ago and changed Axiom to use
>>>
>>> > the date of the release. It provides the same sort of
>>>
>>> > "non-information" but it is easy to find in the changelog.
>>>
>>> >
>>>
>>> > Tim
>>>
>>> >
>>>
>>> >
>>>
>>> > On 9/5/20, Tim Daly  wrote:
>>>
>>> >> Geometric algebra also affects another "in-process" goal.
>>>
>>> >>
>>>
>>> >> I have BLAS and LAPACK in the Axiom sources (volume 10.5).
>>>
>>> >>
>>>
>>> >> I've spent some time on the question of changing BLAS to use
>>>
>>> >> John Gustafson's UNUM representation, which eliminates a lot
>>>
> >> of code because various "standard errors" cannot occur.

Re: Axiom musings...

2020-09-24 Thread Tim Daly
Today's Headline:

Axiom finalizing agreements for private astronaut mission to space station

https://spaceflightnow.com/2020/09/23/axiom-finalizing-agreements-for-private-astronaut-mission-to-space-station/

Wow. I HAVE been busy :-)

Tim


On 9/24/20, Tim Daly  wrote:
> Ricardo,
>
> Yes, I'm familiar with Sage. Axiom was originally connected
> back around 2006 / 2007. William Stein showed me that it
> runs fine in the latest version.
>
> Unfortunately it doesn't do all of the things Axiom supports.
>
> I will look at Elixir LiveView. Thanks.
>
> Tim
>
>
> On 9/24/20, Ricardo Corral C.  wrote:
>> Elixir LiveView offers a nice way to interact with the browser. I’ve been
>> playing with rendering OpenAI Atari frames from their Python objects (using
>> erlport), so it seems like a plausible option for interacting with Axiom
>> too. Note that sagemath.org already interacts with Axiom, so maybe
>> connecting through it could serve as a bridge.
>>
>> On Thu 24 Sep 2020 at 0:06 Tim Daly  wrote:
>>
>>> The new Axiom version needs to have a better user interface.
>>>
>>>
>>>
>>> I'm experimenting with a browser front end that has an Axiom
>>>
>>> editor that runs Axiom in the background. This isn't really
>>>
>>> a new idea. Maxima has been doing it for years.
>>>
>>>
>>>
>>> Using the browser has the advantage of integrating the
>>>
>>> compiler, interpreter, graphics, and documentation in one
>>>
>>> interface.
>>>
>>>
>>>
>>> I managed to get the editor-in-browser working.
>>>
>>>
>>>
>>> Axiom already has a browser connection (book volume 11)
>>>
>>> designed to replace hyperdoc and working as an
>>>
>>> interpreter I/O so this editor would be an extension.
>>>
>>>
>>>
>>> Since almost all of Axiom is already in hyperlinked PDF
>>>
>>> files it will be possible to jump directly to related sections
>>>
>>> in the various books.
>>>
>>>
>>>
>>> Tim
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 9/23/20, Tim Daly  wrote:
>>>
>>> > Rich Hickey gave a keynote:
>>>
>>> > https://www.youtube.com/watch?v=oyLBGkS5ICk
>>>
>>> > which, like all of Hickey's talks, is worth watching.
>>>
>>> >
>>>
>>> > He talks about programs breaking due to things like
>>>
>>> > library changes. Around minute 30 he started to talk
>>>
>>> > about why "semantic versioning" (e.g. version 1.2.3)
>>>
>>> > is meaningless.
>>>
>>> >
>>>
>>> > I realized this years ago and changed Axiom to use
>>>
>>> > the date of the release. It provides the same sort of
>>>
>>> > "non-information" but it is easy to find in the changelog.
>>>
>>> >
>>>
>>> > Tim
>>>
>>> >
>>>
>>> >
>>>
>>> > On 9/5/20, Tim Daly  wrote:
>>>
>>> >> Geometric algebra also affects another "in-process" goal.
>>>
>>> >>
>>>
>>> >> I have BLAS and LAPACK in the Axiom sources (volume 10.5).
>>>
>>> >>
>>>
>>> >> I've spent some time on the question of changing BLAS to use
>>>
>>> >> John Gustafson's UNUM representation, which eliminates a lot
>>>
>>> >> of code because various "standard errors" cannot occur. See
>>>
>>> >> his book "The End Of Error"
>>>
>>> >>
>>> https://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868
>>>
>>> >>
>>>
>>> >> But since Geometric algebra is coordinate free, many of the
>>>
>>> >> computations can be done symbolically and then evaluated
>>>
>>> >> in final form.
>>>
>>> >>
>>>
>>> >> BLAS recast in Geometric Algebra means that some of the
>>>
>>> >> errors, such as roundoff, cannot occur in the symbolic form.
>>>
>>> >>
>>>
>>> >> This has the potential to make Axiom's BLAS and LAPACK
>>>
>>> >> computations faster and more accurate.
>>>
>>> >>
>>>
>>> >> Tim
>>>
>>> >>
>>>
>>> >>
>>>
>>> >> On 9/5/20, Tim Daly  wrote:
>>>
>>> >>> I'm in the process of re-architecting Axiom, of course.
>>>
>>> >>>
>>>
>>> >>> The primary research effort, as you know, is incorporating
>>>
>>> >>> proof technology.
>>>
>>> >>>
>>>
>>> >>> But in the process of re-architecting there are more things
>>>
>>> >>> to consider. Two of them are "front and center" at the moment.
>>>
>>> >>>
>>>
>>> >>> One concern is "Geometric Algebra". See
>>>
>>> >>> http://geometricalgebra.net/
>>>
>>> >>>
>>> https://www.youtube.com/watch?v=0fF2xToQmgs&list=PLsSPBzvBkYjzcQ4eCVAntETNNVD2d5S79
>>>
>>> >>>
>>>
>>> >>> Geometric algebra unifies a lot of mathematics. In particular,
>>>
>>> >>> it "cleans up" linear algebra, creating a "coordinate-free"
>>>
>>> >>> representation. This greatly simplifies and unifies a lot of
>>>
>>> >>> mathematics.
>>>
>>> >>>
>>>
>>> >>> So the question becomes, can this be used to "re-represent"
>>>
>>> >>> Axiom mathematics dependent on linear algebra? I don't
>>>
>>> >>> know but the idea has a lot of potential for simplification.
>>>
>>> >>>
>>>
>>> >>>
>>>
>>> >>> The second concern is "Category Theory". This theory
>>>
>>> >>> provides a simplification and a generalization of various
>>>
>>> >>> ideas in Axiom. It also puts constraints on things like an
>>>
>>> >>> Axiom "category" to Axiom "category" functors so that the
>>>
>> >>> conversion preserves the mathematical "Category"
>> >>> structure and properties.

Re: Axiom musings...

2020-09-24 Thread Tim Daly
Ricardo,

Yes, I'm familiar with Sage. Axiom was originally connected
back around 2006 / 2007. William Stein showed me that it
runs fine in the latest version.

Unfortunately it doesn't do all of the things Axiom supports.

I will look at Elixir LiveView. Thanks.

Tim


On 9/24/20, Ricardo Corral C.  wrote:
> Elixir LiveView offers a nice way to interact with the browser. I’ve been
> playing with rendering OpenAI Atari frames from their Python objects (using
> erlport), so it seems like a plausible option for interacting with Axiom
> too. Note that sagemath.org already interacts with Axiom, so maybe
> connecting through it could serve as a bridge.
>
> On Thu 24 Sep 2020 at 0:06 Tim Daly  wrote:
>
>> The new Axiom version needs to have a better user interface.
>>
>>
>>
>> I'm experimenting with a browser front end that has an Axiom
>>
>> editor that runs Axiom in the background. This isn't really
>>
>> a new idea. Maxima has been doing it for years.
>>
>>
>>
>> Using the browser has the advantage of integrating the
>>
>> compiler, interpreter, graphics, and documentation in one
>>
>> interface.
>>
>>
>>
>> I managed to get the editor-in-browser working.
>>
>>
>>
>> Axiom already has a browser connection (book volume 11)
>>
>> designed to replace hyperdoc and working as an
>>
>> interpreter I/O so this editor would be an extension.
>>
>>
>>
>> Since almost all of Axiom is already in hyperlinked PDF
>>
>> files it will be possible to jump directly to related sections
>>
>> in the various books.
>>
>>
>>
>> Tim
>>
>>
>>
>>
>>
>>
>>
>> On 9/23/20, Tim Daly  wrote:
>>
>> > Rich Hickey gave a keynote:
>>
>> > https://www.youtube.com/watch?v=oyLBGkS5ICk
>>
>> > which, like all of Hickey's talks, is worth watching.
>>
>> >
>>
>> > He talks about programs breaking due to things like
>>
>> > library changes. Around minute 30 he started to talk
>>
>> > about why "semantic versioning" (e.g. version 1.2.3)
>>
>> > is meaningless.
>>
>> >
>>
>> > I realized this years ago and changed Axiom to use
>>
>> > the date of the release. It provides the same sort of
>>
>> > "non-information" but it is easy to find in the changelog.
>>
>> >
>>
>> > Tim
>>
>> >
>>
>> >
>>
>> > On 9/5/20, Tim Daly  wrote:
>>
>> >> Geometric algebra also affects another "in-process" goal.
>>
>> >>
>>
>> >> I have BLAS and LAPACK in the Axiom sources (volume 10.5).
>>
>> >>
>>
>> >> I've spent some time on the question of changing BLAS to use
>>
>> >> John Gustafson's UNUM representation, which eliminates a lot
>>
>> >> of code because various "standard errors" cannot occur. See
>>
>> >> his book "The End Of Error"
>>
>> >>
>> https://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868
>>
>> >>
>>
>> >> But since Geometric algebra is coordinate free, many of the
>>
>> >> computations can be done symbolically and then evaluated
>>
>> >> in final form.
>>
>> >>
>>
>> >> BLAS recast in Geometric Algebra means that some of the
>>
>> >> errors, such as roundoff, cannot occur in the symbolic form.
>>
>> >>
>>
>> >> This has the potential to make Axiom's BLAS and LAPACK
>>
>> >> computations faster and more accurate.
>>
>> >>
>>
>> >> Tim
>>
>> >>
>>
>> >>
>>
>> >> On 9/5/20, Tim Daly  wrote:
>>
>> >>> I'm in the process of re-architecting Axiom, of course.
>>
>> >>>
>>
>> >>> The primary research effort, as you know, is incorporating
>>
>> >>> proof technology.
>>
>> >>>
>>
>> >>> But in the process of re-architecting there are more things
>>
>> >>> to consider. Two of them are "front and center" at the moment.
>>
>> >>>
>>
>> >>> One concern is "Geometric Algebra". See
>>
>> >>> http://geometricalgebra.net/
>>
>> >>>
>> https://www.youtube.com/watch?v=0fF2xToQmgs&list=PLsSPBzvBkYjzcQ4eCVAntETNNVD2d5S79
>>
>> >>>
>>
>> >>> Geometric algebra unifies a lot of mathematics. In particular,
>>
>> >>> it "cleans up" linear algebra, creating a "coordinate-free"
>>
>> >>> representation. This greatly simplifies and unifies a lot of
>>
>> >>> mathematics.
>>
>> >>>
>>
>> >>> So the question becomes, can this be used to "re-represent"
>>
>> >>> Axiom mathematics dependent on linear algebra? I don't
>>
>> >>> know but the idea has a lot of potential for simplification.
>>
>> >>>
>>
>> >>>
>>
>> >>> The second concern is "Category Theory". This theory
>>
>> >>> provides a simplification and a generalization of various
>>
>> >>> ideas in Axiom. It also puts constraints on things like an
>>
>> >>> Axiom "category" to Axiom "category" functors so that the
>>
>> >>> conversion preserves the mathematical "Category"
>>
>> >>> structure and properties.
>>
>> >>>
>>
>> >>> MIT has a "course" on "Programming with Categories"
>>
>> >>>
>> https://www.youtube.com/playlist?list=PLhgq-BqyZ7i7MTGhUROZy3BOICnVixETS
>>
>> >>> which makes things rather more understandable.
>>
>> >>>
>>
>> >>> So one question is how to re-represent Axiom's type
>>
>> >>> structure so that it has a correct mathematical "Category"
>>
>> >>> structure. This, of course, raises the question of Group

Re: Axiom musings...

2020-09-23 Thread Ricardo Corral C.
Elixir LiveView offers a nice way to interact with the browser. I've been
playing with rendering OpenAI Atari frames from their Python objects (using
erlport), so it seems like a plausible option for interacting with Axiom
too. Note that sagemath.org already interacts with Axiom, so maybe
connecting through it could serve as a bridge.


Re: Axiom musings...

2020-09-23 Thread Tim Daly
The new Axiom version needs to have a better user interface.

I'm experimenting with a browser front end that has an Axiom
editor that runs Axiom in the background. This isn't really
a new idea. Maxima has been doing it for years.

Using the browser has the advantage of integrating the
compiler, interpreter, graphics, and documentation in one
interface.

I managed to get the editor-in-browser working.

Axiom already has a browser connection (book volume 11)
designed to replace hyperdoc and to work as the
interpreter's I/O, so this editor would be an extension.

Since almost all of Axiom is already in hyperlinked PDF
files it will be possible to jump directly to related sections
in the various books.

Tim




Re: Axiom musings...

2020-09-22 Thread Tim Daly
Rich Hickey gave a keynote:
https://www.youtube.com/watch?v=oyLBGkS5ICk
which, like all of Hickey's talks, is worth watching.

He talks about programs breaking due to things like
library changes. Around minute 30 he started to talk
about why "semantic versioning" (e.g. version 1.2.3)
is meaningless.

I realized this years ago and changed Axiom to use
the date of the release. It provides the same sort of
"non-information" but it is easy to find in the changelog.

Tim



Re: Axiom musings...

2020-09-04 Thread Tim Daly
Geometric algebra also affects another "in-process" goal.

I have BLAS and LAPACK in the Axiom sources (volume 10.5).

I've spent some time on the question of changing BLAS to use
John Gustafson's UNUM representation, which eliminates a lot
of code because various "standard errors" cannot occur. See
his book "The End Of Error"
https://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868

But since Geometric algebra is coordinate free, many of the
computations can be done symbolically and then evaluated
in final form.

BLAS recast in Geometric Algebra means that some of the
errors, such as roundoff, cannot occur in the symbolic form.
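
A rough sketch of the "simplify symbolically, evaluate once" point, in
Python with sympy standing in for the symbolic engine (illustrative
only, not Axiom code):

    import sympy as sp

    x = sp.symbols('x')

    # Simplify symbolically first: the identity is exact, no roundoff exists.
    expr = sp.expand((x + 1)**2 - (x - 1)**2)
    print(expr)                                     # 4*x

    # Evaluate the simplified form once: a single final rounding.
    print(float(expr.subs(x, sp.Rational(1, 10))))  # 0.4

    # The naive float evaluation rounds at every step and typically drifts.
    v = 0.1
    print((v + 1)**2 - (v - 1)**2)                  # close to, but not exactly, 0.4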

This has the potential to make Axiom's BLAS and LAPACK
computations faster and more accurate.

Tim




Re: Axiom musings...

2020-09-04 Thread Tim Daly
I'm in the process of re-architecting Axiom, of course.

The primary research effort, as you know, is incorporating
proof technology.

But in the process of re-architecting there are more things
to consider. Two of them are "front and center" at the moment.

One concern is "Geometric Algebra". See
http://geometricalgebra.net/
https://www.youtube.com/watch?v=0fF2xToQmgs&list=PLsSPBzvBkYjzcQ4eCVAntETNNVD2d5S79

Geometric algebra unifies a lot of mathematics. In particular,
it "cleans up" linear algebra, creating a "coordinate-free"
representation. This greatly simplifies and unifies a lot of
mathematics.

So the question becomes, can this be used to "re-represent"
Axiom mathematics dependent on linear algebra? I don't
know but the idea has a lot of potential for simplification.


The second concern is "Category Theory". This theory
provides a simplification and a generalization of various
ideas in Axiom. It also puts constraints on things like an
Axiom "category" to Axiom "category" functors so that the
conversion preserves the mathematical "Category"
structure and properties.
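
A toy illustration of that constraint (Python; the list container plays
the role of a functor, and the two laws below are what "preserves the
structure" means; names are illustrative):

    from typing import Callable, List, TypeVar

    A, B, C = TypeVar('A'), TypeVar('B'), TypeVar('C')

    def fmap(f: Callable[[A], B], xs: List[A]) -> List[B]:
        # The list functor's action on morphisms (plain functions).
        return [f(item) for item in xs]

    def compose(g: Callable[[B], C], f: Callable[[A], B]) -> Callable[[A], C]:
        return lambda v: g(f(v))

    f = lambda n: n + 1
    g = lambda n: n * 2
    xs = [1, 2, 3]

    # The laws a structure-preserving functor must satisfy:
    assert fmap(lambda v: v, xs) == xs                        # identity
    assert fmap(compose(g, f), xs) == fmap(g, fmap(f, xs))    # composition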

MIT has a "course" on "Programming with Categories"
https://www.youtube.com/playlist?list=PLhgq-BqyZ7i7MTGhUROZy3BOICnVixETS
which makes things rather more understandable.

So one question is how to re-represent Axiom's type
structure so that it has a correct mathematical "Category"
structure. This, of course, raises the question of Group
Theory with Type Theory with Proof Theory with Category
Theory.

Getting all of this "aligned" (and hopefully reasonably
correct) will give Axiom a solid mathematical foundation.

Mathematics has changed a lot since Axiom was created
and many of those changes have shown that we need a
much stronger basis for ad-hoc things like coercion, etc.


Tim





Re: Axiom musings...

2020-08-21 Thread Tim Daly
A brilliant essay:

In exactly the same way a small change in axioms
(of which we cannot be completely sure) is capable,
generally speaking, of leading to completely different
conclusions than those that are obtained from theorems
which have been deduced from the accepted axioms.
The longer and fancier is the chain of deductions
("proofs"), the less reliable is the final result.

https://www.uni-muenster.de/Physik.TP/~munsteg/arnold.html





Re: Axiom musings...

2020-08-08 Thread Tim Daly
Mark,

You're right, of course. The problem is too large.
So what is the plan to achieve a research result?

There are 3 major restrictions on the effort (so far).

First, the focus is on the GCD in NonNegativeInteger.
Volume 13 is basically a collection of published thoughts
by various authors on the GCD, a background literature
search. Build a limited system with essentially one user
visible function (the NNI GCD) and implement all of the
ideas there. This demonstrates inheritance of axioms,
specification of functions, pre- and post-conditions,
proof integration, provisos, the new compiler, etc.
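
A minimal sketch of what pre- and post-conditions on the NNI GCD could
look like (Python asserts standing in for real specifications;
illustrative only, not the SANE design):

    def nni_gcd(a: int, b: int) -> int:
        # Precondition: non-negative integers, not both zero.
        assert a >= 0 and b >= 0 and (a, b) != (0, 0)
        x, y = a, b
        while y != 0:
            x, y = y, x % y   # Euclid's step preserves the common divisors
        # Postcondition: the result divides both inputs.
        assert a % x == 0 and b % x == 0
        return x

    assert nni_gcd(12, 18) == 6
    assert nni_gcd(7, 0) == 7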

Second, make the SANE GCD work in the current Axiom
system by generating compatible code. This gives a
stepping-stone approach where things can be grounded.
Obviously none of the new proof ideas will be expected
to work in the current system but it "gives a place to stand".

Third, develop a lattice of functions. The idea is to attack the
functions that depend on almost nothing, prove them correct,
and use them to prove functions that only depend on the
prior layer. I did this with the category structure when I first
got the system since it was necessary to bootstrap Axiom
without a running system (something that was not possible
with the IBM/NAG version). That effort took several months
so I expect that function-lattice to take about the same time.
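
A sketch of the lattice idea (Python; the function names and the
dependency graph are hypothetical):

    from graphlib import TopologicalSorter   # Python 3.9+

    # Hypothetical dependency lattice: each function maps to what it calls.
    deps = {
        "gcd":   {"rem", "zero?"},
        "rem":   {"sub", "less?"},
        "sub":   set(),
        "less?": set(),
        "zero?": set(),
    }

    # static_order() yields dependencies before dependents, so each proof
    # may assume only the proofs already completed.
    print(list(TopologicalSorter(deps).static_order()))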

This makes the research "incremental" so that a result can
be achieved in one lifetime. Like a PhD thesis, it is initially
intended as a small step forward but still a valid instance
of "computational mathematics", deeply combining proof and
computer algebra.

Tim



Re: Axiom musings...

2020-07-30 Thread Tim Daly
This is the video related to the Deep Learning for Symbolic
Mathematics paper.

https://www.youtube.com/watch?v=O_sHHG5_lr8&list=PLoROMvodv4rMWw6rRoeSpkiseTHzWj6vu&index=5

I've spent a fair amount of time running some of the
generated problems through Axiom for fuzz testing.

Sometimes we get exact results, or exact up to a constant.
More interesting is that it uncovers some bugs.
I expect to push out some of the results in August.
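
The fuzz check itself is simple to sketch (Python with sympy standing
in for Axiom; illustrative only): integrate, differentiate the result,
and the difference should simplify to zero, since antiderivatives agree
up to an additive constant.

    import sympy as sp

    x = sp.symbols('x')
    f = sp.exp(x) * sp.sin(x)

    # Integrate, differentiate back, and compare.
    F = sp.integrate(f, x)
    assert sp.simplify(sp.diff(F, x) - f) == 0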

Tim



Re: [EXTERNAL] Re: Axiom musings...

2020-07-26 Thread William Sit
The question of equality in computation is very different from equality in 
mathematics and it is basically due to data representations in computation (and 
for this question, we are not even  concerned with equality between two 
implementations of the same domain but different representations (types), when 
coercion is needed). As you pointed out, the key question is: "what should be 
the definition of equality?" even within one single domain and one data 
representation. 
Arithmetic in any floating point number system does not satisfy many of the 
usual laws. For example, a times (b divided by a) is NOT b in such a system 
unless the system is FP(2,1,c).
Here FP(r,p,c) is the floating point system with radix r, precision p, and c 
for chopping (truncation). 
 The associative laws of addition and multiplication do NOT hold in FP(r,p,c). 
Strict inequality (less than) between two FP(r,p,c) numbers can be weakened to 
(less than or equal to) after say adding a third number or multiplied by a 
positive number. Ref: Pat H. Sterbenz, Floating Point Computation,
Sections 1.6, 1.7.
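
These failures are easy to exhibit in IEEE binary64, which behaves like
FP(2,53) with rounding. A small Python illustration:

    # Addition is not associative in floating point.
    lhs = (0.1 + 0.2) + 0.3
    rhs = 0.1 + (0.2 + 0.3)
    print(lhs == rhs)   # False: 0.6000000000000001 vs 0.6

    # a * (1 / a) does not always round-trip to 1.
    failures = [a for a in range(1, 100) if a * (1.0 / a) != 1.0]
    print(failures)     # non-empty on IEEE doubles (49 is a commonly cited case)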

So to prove correctness of implementation for an algorithm, say like taking 
square roots, will be far more difficult than the same for integers. How can 
one be convinced that the computation gives the most accurate square root for 
all inputs in the system FP(r,p,c)? In fact, is there such an algorithm other 
than one based on case considerations for every number in FP(r,p,c)---you may 
as well use a lookup table in that case.

In the case of polynomial equality between two domains, one can always do a 
subtraction in the domain of the target of coercion to verify equality. For the 
specific example you gave, I would agree with Cubical type theory (whatever 
that is!) that they should be considered equal (unless there is a problem with 
computing the LCM of denominators of the coefficients).

William

William Sit
Professor Emeritus
Department of Mathematics
The City College of The City University of New York
New York, NY 10031
homepage: wsit.ccny.cuny.edu



Re: [EXTERNAL] Re: Axiom musings...

2020-07-25 Thread Tim Daly
The further I get into this game, the harder it becomes.

For example, equality. In the easiest case Axiom has
propositional equality. That is, the equality function is
defined by the type, e.g. for the type Integer, 2 = 2.

However, there is also judgmental equality, which
would apply at the type level. If floating point numbers
match the usual IEEE definition and UNUMs match
Gustafson's Type I definition, is there an equality
between the types? Does FLOAT = UNUM?
https://en.wikipedia.org/wiki/Unum_(number_format)

Axiom's coerce provides a (poorly) defined kind of
judgmental equality, but it's not actually defined as an
equality at the type level. You can coerce from
POLY(FRAC(INT)) to FRAC(POLY(INT)) but are they
equal? Cubical type theory would seem to say yes.
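
A small illustration of the two representations (Python with sympy
standing in for Axiom's domains; illustrative only):

    import sympy as sp

    x = sp.symbols('x')

    # A polynomial with fraction coefficients: think POLY(FRAC(INT)).
    p = sp.Rational(1, 2) * x**2 + sp.Rational(1, 3) * x

    # The same value as a fraction of integer polynomials: FRAC(POLY(INT)).
    q = sp.together(p)
    print(q)            # a single fraction over the common denominator 6

    # Mathematically equal, representationally different.
    assert sp.simplify(p - q) == 0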

And don't get me started on the Univalent axiom.

There are lots of interesting equality questions. If there
is a proof of a program and you can regenerate the
program from the proof, are the proof and program equal?

Presumably the chap behind the SANE effort will be
able to handle both kinds of equality (as well as giving
universes for types).

But I suspect he's not that clever.



>>

Re: [EXTERNAL] Re: Axiom musings...

2020-07-21 Thread Tim Daly
> Yes, I do remember we worked on the "symbolic integer"
> (or "symbolic polynomial", etc.) domains.

Sadly only the idea survives.

I spent almost all of my time at that time trying to get a
version of Axiom to bootstrap. Starting Axiom, at the time,
required an already running version of Axiom (the NAG
version) and NAG would not let me distribute their code.

Building a self-booting version involved (among a lot of
other things) breaking circular definitions in the category
and domain hierarchy. That took several months and all
of my time (much to Baumslag's annoyance).

That effort also involved re-writing code from Norman's
Lisp back to Common Lisp (much to Norman's annoyance).

As a result, the free version of Axiom wasn't released until
about 2003 even though I got a copy in 2001. As part of my
employment agreement with CCNY they wouldn't make any
claim on Axiom and I would only work on it in my free time
unless Gilbert said otherwise. My work time was spent as
the Magnus lead developer and open sourcing it. Gilbert
eventually wanted Axiom and asked me to do it for work
also. That's what led to the grant proposal.

My thoughts on the subject of symbolic integers were, as a
consequence, only "paper experiments". I have several
thousand technical papers stacked in my office and library.
The CCNY notes are in there somewhere. Unfortunately,
every time I pick up a pile to search I find other things I
"need to know now" and get lost in that subject. It is like
trying to get a glass of water when the Three Gorges Dam
bursts.

It seems feasible to create a SYMINT Symbolic Integer
domain that used symbols rather than numbers for the
arithmetic, creating a ring that could be used by POLY.
Using SYMINT in SYMFRAC Symbolic Fraction would
provide a field that could be used by POLY.
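
A rough sketch of the idea (Python with sympy; SYMINT itself is
hypothetical, and integer-valued symbols stand in for its elements):

    import sympy as sp

    # "Symbolic integers": symbols declared integer-valued, used as
    # ring elements rather than concrete numbers.
    a, b = sp.symbols('a b', integer=True)
    x = sp.symbols('x')

    p = sp.Poly(a * x**2 + b * x + 1, x)   # a polynomial over ZZ[a, b]
    q = sp.Poly(x - b, x)
    print(p * q)                            # ring arithmetic carries a, b through
    print((p * q).eval(0))                  # constant term stays symbolic: -b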

The idea is still worth the effort but, as usual, I am deep
into rebuilding Axiom from the ground up. The issues I'm
struggling with now make the bootstrap effort look like a
weekend hack. Bootstrapping took several months.
The current effort has taken 6 years so far. There is SO
much to know and I am a slow learner.

This is where I wish I had graduate students :-)

Tim



Re: [EXTERNAL] Re: Axiom musings...

2020-07-21 Thread William Sit
Dear Tim:

You have expanded on the issues I raised. You are right that a 
computational algorithm can be proved if the specifications are completely and 
precisely coded for proof systems like Coq; the catch is not the "precisely" 
part but the "completely" part.

Yes, I do remember we worked on the "symbolic integer" (or "symbolic 
polynomial", etc.) domains. I think I might actually have some notes and ideas 
and perhaps code, but it would take me more searching than I can do now from 
obsoleted and perhaps non-functional computers. A few days ago, I was trying to 
recover tex files from a Raspberry Pi that I used while in a hotel in China 
(2015), but nothing (meaning ftp and other transfer methods or even mailing 
programs) works because of security. I have also forgotten all my knowledge on 
Linux.

Too bad we did not continue that effort. Does such a domain (in any computer 
algebra system) exist these days? After all, nearly three decades have passed.

William

William Sit
Professor Emeritus
Department of Mathematics
The City College of The City University of New York
New York, NY 10031
homepage: wsit.ccny.cuny.edu


From: Tim Daly 
Sent: Monday, July 20, 2020 4:44 PM
To: William Sit
Cc: axiom-dev
Subject: Re: [EXTERNAL] Re: Axiom musings...

> So there are two kinds of algorithms, one that is purely mathematical
> and one that is computational, the latter including a particular (class of)
> data representation(s) (perhaps even the computer language and
> system of the implementation). It is proofs for the latter type of
> algorithms that is lacking.

There are, as you point out, several kinds of proofs to consider.

One is the proof of the algorithm. An example is Buchberger's
Groebner Basis algorithm which was proven in Coq:
https://www.ricam.oeaw.ac.at/specsem/srs/groeb/download/coq-linz.pdf

The Coq proof establishes that the formal algorithm is correct.

Even in proof one runs into limits of what can be proved. For example,
if the convergence is non-uniform one can, at best, do a proof that
assumes bounds on the non-uniform behavior. So this isn't strictly
a computer algorithm issue.

> Since data representations (like REP in Axiom) are built recursively,
> a computational algorithm (in the sense above) for Groebner basis
> may have to be designed to take care of just a few of the ways
> integers can be represented. Axiom is built with that in mind (that's
> where type theory comes in), but I bet no one SPECIFIES their
> computational algorithms with the limitations of data representation
> in mind, much less proves the algorithm anew for each new
> representation.

If you remember, while we were both at CCNY, I worked on a
grant project to construct a "symbolic integer" domain so that
computations could occur over non-numeric integers. The
symbolic form did not have a numeric limitation. Unfortunately
the current Axiom has no way to support such a domain.

I'm glad you brought this up. I will have to give some thought
to representing and computing with symbolic integers again.

> So if a computation of a Groebner basis halts
> because of an intermediate LCM computation (say of two integer
> coefficients), should we consider the implementation as proven
> correct? What if the overflow condition was not detected and the
> computation continues? Indeed, since there may be different
> implementations of the integer domain, we must be sure that
> every implementation of the LCM algorithm handles overflows
> correctly AND specified in the documentation.

Establishing that the algorithm is correct (e.g. Groebner) is
clearly in the proof side of computational math.
Establishing that there is an implementation of Groebner is
clearly in the computer algebra side of computational math.

The game is to unite the two.

Such a proof becomes a PART of the specification of a program
that implements the algorithm, such as in Axiom.

The definitions and axioms of the proof have to be put into
the system both at the category and domain levels. They
need to be available at the implementation code.

On the other hand, there are non-logical 'axioms' of
categories and domains, such as limits to the size of a
floating point number. One could have many FLOAT
domains, such as Gustafson's UNUMs.
http://www.johngustafson.net/pdfs/BeatingFloatingPoint.pdf

There are non-logical limits to the implementation, such as
intermediate exp
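
A small instance of the object in question (Python, with sympy's
groebner standing in for Buchberger's algorithm, the procedure the Coq
development cited above certifies; illustrative only):

    from sympy import groebner, symbols

    x, y = symbols('x y')

    # A Groebner basis for the ideal <x**2 + y, x*y - 1> in lex order.
    G = groebner([x**2 + y, x*y - 1], x, y, order='lex')
    print(G)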

Re: [EXTERNAL] Re: Axiom musings...

2020-07-20 Thread Tim Daly
> 
> From: Tim Daly 
> Sent: Sunday, July 19, 2020 5:33 PM
> To: William Sit
> Cc: axiom-dev
> Subject: Re: [EXTERNAL] Re: Axiom musings...
>
> There are several "problems" with proving programs correct that
> I don't quite know how to solve, or even approach. But that's the
> fun of "research", right?
>
> For the data representation question I've been looking at types.
> I took 10 courses at CMU. I am eyebrow deep in type theory.
> I'm looking at category theory and homotopy type theory. So
> far I haven't seen anyone looking at the data problem. Most of
> the focus is on strict code typing.
>
> There is an old MIT course by Abelson and Sussman "Structure
> and Interpretation of Computer Programs" (SICP). They rewrite
> data as programs which, in Lisp, is trivial to do. Dan Friedman
> seems to have some interesting ideas too.
>
> All of Axiom's SANE types are now CLOS objects which gives
> two benefits. First, they can be inherited. But second, they
> are basically Lisp data structures with associated code.
>
> I'm thinking of associating "data axioms" with the representation
> (REP) object of a domain as well as with the functions.
>
> For example, DenavitHartenbergMatrix encodes 4x4 matrices
> used in graphics and robotics. They are 4x4 matrices where
> the upper left 3x3 encodes rotations, the right column encodes
> translations, and the lower row includes scaling, skewing, etc.
>
> (As an aside, DHMATRIX matrices have an associated
> Jacobian which encodes the dynamics in things like robots.
> Since I'm also programming a robot I'm tempted to work on
> extending the domain with related functions... but, as
> Hamming said, new algebra code isn't "the most important
> problem in computational mathematics").
>
> Axioms associated with the REP can assume that they are
> 4x4, that they can be inverted, that they have a "space" of
> rotations, etc. The axioms provide "facts" known to be true
> about the REP. (I also need to think about a "specification"
> for the REP but I'm not there yet).
>
> Since every category and domain is a CLOS data structure
> the DHMATRIX data structure inherits REP axioms from its
> inheritance graph (e.g. SQMATRIX axioms). But DHMATRIX
> adds domain-specific REP axioms (as w

Re: [EXTERNAL] Re: Axiom musings...

2020-07-20 Thread William Sit
Hi Tim:

Perhaps I did not make myself clear in the short comment.
What I wanted to say is that a data representation is not the same as the 
abstract mathematical objects because there are finite bounds on the 
representation. Take for example, an algorithm to compute the LCM of two 
integers. The LCM can cause overflow and not be representable. Of course, you 
can change the data representation to have "infinite precision", but that would 
still be bounded by actual physical memory of the machine. The careful 
programmer of the LCM algorithm would add throws and catches to handle the 
"error",but the implementation will have to add code that is not considered in 
the theoretical LCM algorithm (unless the LCM algorithm is meant for bounded 
integers of a fixed data representation and not abstract integers). So there 
are two kinds of algorithms, one that is purely mathematical and one that is 
computational, the latter including a particular (class of) data 
representation(s) (perhaps even the computer language and system of the 
implementation). It is proofs for the latter type of algorithms that are 
lacking. Since data representations (like REP in Axiom) are built recursively, 
a computational algorithm (in the sense above) for Groebner basis may have to 
be designed to take care of just a few of the ways integers can be represented. 
Axiom is built with that in mind (that's where type theory comes in), but I bet 
no one SPECIFIES their computational algorithms with the limitations of data 
representation in mind, much less proves the algorithm anew for each new 
representation. So if a computation of a Groebner basis halts because of an 
intermediate LCM computation (say of two integer coefficients), should we 
consider the implementation as proven correct? What if the overflow condition 
was not detected and the computation continues? Indeed, since there may be 
different implementations of the integer domain, we must be sure that every 
implementation of the LCM algorithm handles overflows correctly AND that this 
is specified in the documentation.
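
To make the boundary condition concrete, here is a minimal Common Lisp
sketch (CHECKED-LCM and +WORD-BOUND+ are made-up names, and since Lisp
integers are unbounded the fixed-width representation is simulated).
The check in the last clause is exactly the code that the abstract
LCM algorithm never mentions:

(defconstant +word-bound+ (expt 2 63)
  "Assumed bound of a fixed-width integer representation.")

(defun checked-lcm (a b)
  "LCM over a bounded representation: return the algebraic result
only when it is representable, otherwise surface the boundary
condition that the mathematical algorithm ignores."
  (cond ((or (zerop a) (zerop b)) 0)
        (t (let ((result (/ (abs (* a b)) (gcd a b))))
             (if (< result +word-bound+)
                 result
                 (error "LCM overflow: ~A exceeds the bound" result))))))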

I am sure I am just being ignorant to pose these questions, because they must 
have been considered and perhaps solved. In that case, please ignore them and 
just tell me so.

William

William Sit
Professor Emeritus
Department of Mathematics
The City College of The City University of New York
New York, NY 10031
homepage: wsit.ccny.cuny.edu


From: Tim Daly 
Sent: Sunday, July 19, 2020 5:33 PM
To: William Sit
Cc: axiom-dev
Subject: Re: [EXTERNAL] Re: Axiom musings...

There are several "problems" with proving programs correct that
I don't quite know how to solve, or even approach. But that's the
fun of "research", right?

For the data representation question I've been looking at types.
I took 10 courses at CMU. I am eyebrow deep in type theory.
I'm looking at category theory and homotopy type theory. So
far I haven't seen anyone looking at the data problem. Most of
the focus is on strict code typing.

There is an old MIT course by Abelson and Sussman "Structure
and Interpretation of Computer Programs" (SICP). They rewrite
data as programs which, in Lisp, is trivial to do. Dan Friedman
seems to have some interesting ideas too.

All of Axiom's SANE types are now CLOS objects which gives
two benefits. First, they can be inherited. But second, they
are basically Lisp data structures with associated code.

I'm thinking of associating "data axioms" with the representation
(REP) object of a domain as well as with the functions.

For example, DenavitHartenbergMatrix encodes 4x4 matrices
used in graphics and robotics. They are 4x4 matrices where
the upper left 3x3 encodes rotations, the right column encodes
translations, and the lower row includes scaling, skewing, etc.

(As an aside, DHMATRIX matrices have an associated
Jacobian which encodes the dynamics in things like robots.
Since I'm also programming a robot I'm tempted to work on
extending the domain with related functions... but, as
Hamming said, new algebra code isn't "the most important
problem in computational mathematics").

Axioms associated with the REP can assume that they are
4x4, that they can be inverted, that they have a "space" of
rotations, etc. The axioms provide "facts" known to be true
about the REP. (I also need to think about a "specification"
for the REP but I'm not there yet).

Since every category and domain is a CLOS data structure
the DHMATRIX data structure inherits REP axioms from its
inheritance graph (e.g. SQMATRIX axioms). But DHMATRIX
adds domain-specific REP axioms (as well as domain-specific
function axioms). Thus a DHMATRIX rotate function can
base its proof on the fact that it only affects the upper 3x3
and lives in a space of rotations, all of which can be assumed
by the proof.

If I use the SICP "trick" of representing data as code I can

Re: [EXTERNAL] Re: Axiom musings...

2020-07-19 Thread Tim Daly
There are several "problems" with proving programs correct that
I don't quite know how to solve, or even approach. But that's the
fun of "research", right?

For the data representation question I've been looking at types.
I took 10 courses at CMU. I am eyebrow deep in type theory.
I'm looking at category theory and homotopy type theory. So
far I haven't seen anyone looking at the data problem. Most of
the focus is on strict code typing.

There is an old MIT course by Abelson and Sussman "Structure
and Interpretation of Computer Programs" (SICP). They rewrite
data as programs which, in Lisp, is trivial to do. Dan Friedman
seems to have some interesting ideas too.

All of Axiom's SANE types are now CLOS objects which gives
two benefits. First, they can be inherited. But second, they
are basically Lisp data structures with associated code.

I'm thinking of associating "data axioms" with the representation
(REP) object of a domain as well as with the functions.

For example, DenavitHartenbergMatrix encodes 4x4 matrices
used in graphics and robotics. They are 4x4 matrices where
the upper left 3x3 encodes rotations, the right column encodes
translations, and the lower row includes scaling, skewing, etc.

(As an aside, DHMATRIX matrices have an associated
Jacobian which encodes the dynamics in things like robots.
Since I'm also programming a robot I'm tempted to work on
extending the domain with related functions... but, as
Hamming said, new algebra code isn't "the most important
problem in computational mathematics").

Axioms associated with the REP can assume that they are
4x4, that they can be inverted, that they have a "space" of
rotations, etc. The axioms provide "facts" known to be true
about the REP. (I also need to think about a "specification"
for the REP but I'm not there yet).

Since every category and domain is a CLOS data structure
the DHMATRIX data structure inherits REP axioms from its
inheritance graph (e.g. SQMATRIX axioms). But DHMATRIX
adds domain-specific REP axioms (as well as domain-specific
function axioms). Thus a DHMATRIX rotate function can
base its proof on the fact that it only affects the upper 3x3
and lives in a space of rotations, all of which can be assumed
by the proof.
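
A minimal CLOS sketch of that inheritance (REP-AXIOMS, SQMATRIX, and
the keyword "facts" are hypothetical names, not SANE code): each class
contributes its REP axioms, and the APPEND method combination collects
them along the inheritance graph:

(defgeneric rep-axioms (domain)
  (:method-combination append)
  (:documentation "Facts known to hold of DOMAIN's representation."))

(defclass sqmatrix () ())          ; square matrices
(defclass dhmatrix (sqmatrix) ())  ; Denavit-Hartenberg matrices

(defmethod rep-axioms append ((d sqmatrix))
  '(:square :invertible))

(defmethod rep-axioms append ((d dhmatrix))
  '(:4x4 :upper-3x3-is-rotation :right-column-is-translation))

;; (rep-axioms (make-instance 'dhmatrix))
;; => (:4X4 :UPPER-3X3-IS-ROTATION :RIGHT-COLUMN-IS-TRANSLATION
;;     :SQUARE :INVERTIBLE)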

If I use the SICP "trick" of representing data as code I can
"expand" the data as part of the program proof.

It is all Omphaloskepsis (navel gazing) at this point though.
I'm still writing the new SANE compiler (which is wildly
different from the compiler course I taught).

I did give a talk at Notre Dame but I haven't attempted to
publish. All of my work shows up in literate programming
Axiom books on github.
(https://github.com/daly/PDFS)

It is all pretty pointless since nobody cares about computer
algebra, proving math programs correct, or Axiom itself.
Wolfram is taking up all the oxygen in the discussions.

But I have to say, this research is great fun. It reminds me
of the Scratchpad days, although I miss the give-and-take
of the group. It is hard to recreate my role as the dumbest
guy in the room when I'm stuck here by myself :-)

Hope you and your family are safe and healthy.

Tim

PS. I think we should redefine the "Hamming Distance" as
the distance between an idea and its implementation.



On 7/19/20, William Sit  wrote:
> Hi Tim:
>
> Glad to hear from you now and then, promoting and working towards your ideas
> and ideals.
>
>  >>We need proven algorithms.
>
> Just one short comment: it is often possible to prove algorithms (that is,
> providing the theoretical foundation for the algorithm), but it is much
> harder to prove that an implementation of the algorithm is correct. As you
> well know, the distinction lies in that implementation involves data
> representations whereas proofs of algorithms normally ignore them.
> Introducing (finite) data representations means introducing boundary
> situations that a programmer implementing an algorithm must deal with. So
> perhaps what we need to prove should include the correctness of
> implementations (to the bare metal, as you often say) and we should have a
> different set of analytic tools that can deal with the correctness (or
> completeness) of data representations. Of course, these tools must also be
> proven with the same rigor since behind every program is an algorithm.
>
> William
>
> William Sit
> Professor Emeritus
> Department of Mathematics
> The City College of The City University of New York
> New York, NY 10031
> homepage: wsit.ccny.cuny.edu
>
> 
> From: Axiom-developer
>  on behalf of
> Tim Daly 
> Sent: Saturday, July 18, 2020 6:28 PM
> To: axiom-dev; Tim Daly
> Subject: [EXTERNAL] Re: Axiom musings...
>
> Richard Hamming gave a great talk. "You and Your Research

Re: [EXTERNAL] Re: Axiom musings...

2020-07-19 Thread William Sit
Hi Tim:

Glad to hear from you now and then, promoting and working towards your ideas 
and ideals.

 >>We need proven algorithms.

Just one short comment: it is often possible to prove algorithms (that is, 
providing the theoretical foundation for the algorithm), but it is much harder 
to prove that an implementation of the algorithm is correct. As you well know, 
the distinction lies in that implementation involves data representations 
whereas proofs of algorithms normally ignore them. Introducing (finite) data 
representations means introducing boundary situations that a programmer 
implementing an algorithm must deal with. So perhaps what we need to prove 
should include the correctness of implementations (to the bare metal, as you 
often say) and we should have a different set of analytic tools that can deal 
with the correctness (or completeness) of data representations. Of course, 
these tools must also be proven with the same rigor since behind every program 
is an algorithm.

William

William Sit
Professor Emeritus
Department of Mathematics
The City College of The City University of New York
New York, NY 10031
homepage: wsit.ccny.cuny.edu


From: Axiom-developer 
 on behalf of Tim 
Daly 
Sent: Saturday, July 18, 2020 6:28 PM
To: axiom-dev; Tim Daly
Subject: [EXTERNAL] Re: Axiom musings...

Richard Hamming gave a great talk. "You and Your Research"
https://www.youtube.com/watch?v=a1zDuOPkMSw

His big question is:

"What is the most important problem in your field
and why aren't you working on it?"

To my mind, the most important problem in the field of
computational mathematics is grounding computer
algebra in proofs.

Computer mathematical algorithms that "maybe,
possibly, give correct answers sometimes" is a problem.
Indeed, for computer algebra, it is the most important
problem. We need proven algorithms.

New algorithms, better graphics, better documentation,
are all "nice to have" but, as Hamming would say,
they are not "the most important problem".

Tim



On 7/2/20, Tim Daly  wrote:
> Time for another update.
>
> The latest Intel processors, available only to data centers
> so far, have a built-in FPGA. This allows you to design
> your own circuits and have them loaded "on the fly",
> running in parallel with the CPU.
>
> I bought a Lattice ICEstick FPGA development board. For
> the first time there are open source tools that support it so
> it is a great test bench for ideas and development. It is a
> USB drive so it can be easily ported to any PC.
> (https://www.latticesemi.com/products/developmentboardsandkits/icestick)
>
> I also bought a large Intel Cyclone FPGA development board.
> (http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&No=836)
> which has 2 embedded ARM processors. Unfortunately
> the tools (which are freely available) are not open source.
> It has sufficient size and power to do anything.
>
>
> I've got 2 threads of work in progress, both of which
> involve FPGAs (Field Programmable Gate Arrays).
>
> Thread 1
>
> The first thread involves proving programs correct. Once
> a proof has been made it is rather easier to check the proof.
> If code is shipped with a proof, the proof can be loaded into
> an FPGA running a proof-checker which verifies the program
> in parallel with running the code on the CPU.
>
> I am researching the question of writing a proof checker that
> runs on an FPGA, thus verifying the code "down to the metal".
> The Lean proof checker is the current target.
>
> The idea is to make "Oracle" algorithms that, because they
> are proven correct and verified at runtime, can be trusted
> by other mathematical software (e.g. Lean, Coq, Agda)
> when used in proofs.
>
> Thread 2
>
>
> The second thread involves arithmetic. Axiom currently ships
> with numeric routines (BLAS and LAPACK, see bookvol10.5).
> These routines have a known set of numeric failures such as
> cancellation, underflow, and scaling.
>
> John Gustafson has designed a 'unum' numeric format that can

Re: Axiom musings...

2020-07-18 Thread Tim Daly
Richard Hamming gave a great talk. "You and Your Research"
https://www.youtube.com/watch?v=a1zDuOPkMSw

His big question is:

"What is the most important problem in your field
and why aren't you working on it?"

To my mind, the most important problem in the field of
computational mathematics is grounding computer
algebra in proofs.

Computer mathematical algorithms that "maybe,
possibly, give correct answers sometimes" is a problem.
Indeed, for computer algebra, it is the most important
problem. We need proven algorithms.

New algorithms, better graphics, better documentation,
are all "nice to have" but, as Hamming would say,
they are not "the most important problem".

Tim



On 7/2/20, Tim Daly  wrote:
> Time for another update.
>
> The latest Intel processors, available only to data centers
> so far, have a built-in FPGA. This allows you to design
> your own circuits and have them loaded "on the fly",
> running in parallel with the CPU.
>
> I bought a Lattice ICEstick FPGA development board. For
> the first time there are open source tools that support it so
> it is a great test bench for ideas and development. It is a
> USB drive so it can be easily ported to any PC.
> (https://www.latticesemi.com/products/developmentboardsandkits/icestick)
>
> I also bought a large Intel Cyclone FPGA development board.
> (http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&No=836)
> which has 2 embedded ARM processors. Unfortunately
> the tools (which are freely available) are not open source.
> It has sufficient size and power to do anything.
>
>
> I've got 2 threads of work in progress, both of which
> involve FPGAs (Field Programmable Gate Arrays).
>
> Thread 1
>
> The first thread involves proving programs correct. Once
> a proof has been made it is rather easier to check the proof.
> If code is shipped with a proof, the proof can be loaded into
> an FPGA running a proof-checker which verifies the program
> in parallel with running the code on the CPU.
>
> I am researching the question of writing a proof checker that
> runs on an FPGA, thus verifying the code "down to the metal".
> The Lean proof checker is the current target.
>
> The idea is to make "Oracle" algorithms that, because they
> are proven correct and verified at runtime, can be trusted
> by other mathematical software (e.g. Lean, Coq, Agda)
> when used in proofs.
>
> Thread 2
>
>
> The second thread involves arithmetic. Axiom currently ships
> with numeric routines (BLAS and LAPACK, see bookvol10.5).
> These routines have a known set of numeric failures such as
> cancellation, underflow, and scaling.
>
> John Gustafson has designed a 'unum' numeric format that can
> eliminate many of these errors. (See
> Gustafson, John "The End of Error" CRC Press 2015
> https://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868/ref=sr_1_1?dchild=1&keywords=gustafson+the+end+of+error&qid=1593685423&sr=8-1)
>
> The research goal is to implement Axiom's floating-point
> arithmetic that can be offloaded onto an FPGA implementing
> the unum format. Such a system would radically simplify
> the implementation of BLAS and LAPACK as most of the
> errors can't occur. The impact would be similar to using
> multi-precision integer arithmetic, only now it's floating-point.
>
> SANE, the greater goal.
>
> The Axiom SANE compiler / interpreter can use both of
> these tools to implement trusted mathematical software.
> It's a long, ambitious research effort but even if only pieces
> of it succeed, it changes computational mathematics.
>
> Tim
>
> "A person's reach should exceed their grasp,
> or what's a computer for?"  (misquoting Robert Browning)
>
> (https://www.quotetab.com/quote/by-robert-browning/ah-but-a-mans-reach-should-exceed-his-grasp-or-whats-a-heaven-for)
>
>
>
>
> On 6/16/20, Tim Daly  wrote:
>> WHY PROVE AXIOM CORRECT (SANE)?
>>
>> Historically, Axiom credits CLU, the Cluster language by
>> Barbara Liskov, with the essential ideas behind the Spad
>> language. Barbara gave a talk (a partial transcript below)
>> that gives the rationale behind the ``where clause'' used by
>> Spad.
>>
>> She talks about the limits of the compile time capability.
>> In particular, she says:
>>
>>To go further, where we would say that T,
>>in addition, has to be an equality relation, that requires
>>much more sophisticated techniques that, even today, are
>>beyond the capabilities of the compiler.
>>
>> Showing that the ``equal'' function satisfies the equality
>> relation is no longer ``beyond the capabilities of the compiler''.
>> We have the required formalisms and mechanisms to
>> prove properties at compile time.
>>
>> The SANE effort is essentially trying to push compile
>> time checking into proving that, for categories that use
>> ``equal'', we prove that the equal function implements
>> equality.
>>
>> I strongly encourage you to watch her video.
>>
>> Tim
>>
>> ===
>> Barbara Liskov
>> May 2012
>> MIT CSAIL
>> 

Re: Axiom musings...

2020-07-02 Thread Tim Daly
Time for another update.

The latest Intel processors, available only to data centers
so far, have a built-in FPGA. This allows you to design
your own circuits and have them loaded "on the fly",
running in parallel with the CPU.

I bought a Lattice ICEstick FPGA development board. For
the first time there are open source tools that support it so
it is a great test bench for ideas and development. It is a
USB drive so it can be easily ported to any PC.
(https://www.latticesemi.com/products/developmentboardsandkits/icestick)

I also bought a large Intel Cyclone FPGA development board.
(http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&No=836)
which has 2 embedded ARM processors. Unfortunately
the tools (which are freely available) are not open source.
It has sufficient size and power to do anything.


I've got 2 threads of work in progress, both of which
involve FPGAs (Field Programmable Gate Arrays).

Thread 1

The first thread involves proving programs correct. Once
a proof has been made it is rather easier to check the proof.
If code is shipped with a proof, the proof can be loaded into
an FPGA running a proof-checker which verifies the program
in parallel with running the code on the CPU.

I am researching the question of writing a proof checker that
runs on an FPGA, thus verifying the code "down to the metal".
The Lean proof checker is the current target.

The idea is to make "Oracle" algorithms that, because they
are proven correct and verified at runtime, can be trusted
by other mathematical software (e.g. Lean, Coq, Agda)
when used in proofs.
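
The shape of such an oracle, as a Lisp sketch (ORACLE-ALGORITHM and
RUN-TRUSTED are hypothetical names; a real proof object would be a
Lean term, not an opaque value):

(defstruct oracle-algorithm
  code    ; the executable algorithm
  proof)  ; the proof object shipped with it

(defun run-trusted (alg checker &rest args)
  "Run ALG only after CHECKER (e.g. the FPGA proof checker)
accepts the attached proof."
  (if (funcall checker (oracle-algorithm-proof alg))
      (apply (oracle-algorithm-code alg) args)
      (error "Proof check failed; refusing to run the code")))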

Thread 2


The second thread involves arithmetic. Axiom currently ships
with numeric routines (BLAS and LAPACK, see bookvol10.5).
These routines have a known set of numeric failures such as
cancellation, underflow, and scaling.

John Gustafson has designed a 'unum' numeric format that can
eliminate many of these errors. (See
Gustafson, John "The End of Error" CRC Press 2015
https://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868/ref=sr_1_1?dchild=1&keywords=gustafson+the+end+of+error&qid=1593685423&sr=8-1)

The research goal is to implement Axiom's floating-point
arithmetic that can be offloaded onto an FPGA implementing
the unum format. Such a system would radically simplify
the implementation of BLAS and LAPACK as most of the
errors can't occur. The impact would be similar to using
multi-precision integer arithmetic, only now it's floating-point.
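
A two-line illustration of the kind of failure meant here (plain
Common Lisp, nothing unum-specific): subtracting nearly equal
single-floats exposes representation error as catastrophic
cancellation:

;; 1.000001 is not representable in single-float; the nearest
;; value is 1 + 8 * 2^-23, and the subtraction exposes it.
(- 1.000001f0 1.0f0)  ; => 9.5367432e-7, not 1.0e-6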

SANE, the greater goal.

The Axiom SANE compiler / interpreter can use both of
these tools to implement trusted mathematical software.
It's a long, ambitious research effort but even if only pieces
of it succeed, it changes computational mathematics.

Tim

"A person's reach should exceed their grasp,
or what's a computer for?"  (misquoting Robert Browning)

(https://www.quotetab.com/quote/by-robert-browning/ah-but-a-mans-reach-should-exceed-his-grasp-or-whats-a-heaven-for)




On 6/16/20, Tim Daly  wrote:
> WHY PROVE AXIOM CORRECT (SANE)?
>
> Historically, Axiom credits CLU, the Cluster language by
> Barbara Liskov, with the essential ideas behind the Spad
> language. Barbara gave a talk (a partial transcript below)
> that gives the rationale behind the ``where clause'' used by
> Spad.
>
> She talks about the limits of the compile time capability.
> In particular, she says:
>
>To go further, where we would say that T,
>in addition, has to be an equality relation, that requires
>much more sophisticated techniques that, even today, are
>beyond the capabilities of the compiler.
>
> Showing that the ``equal'' function satisfies the equality
> relation is no longer ``beyond the capabilities of the compiler''.
> We have the required formalisms and mechanisms to
> prove properties at compile time.
>
> The SANE effort is essentially trying to push compile
> time checking into proving that, for categories that use
> ``equal'', we prove that the equal function implements
> equality.
>
> I strongly encourage you to watch her video.
>
> Tim
>
> ===
> Barbara Liskov
> May 2012
> MIT CSAIL
> Programming the Turing Machine
> https://www.youtube.com/watch?v=ibRar7sWulM
>
> POLYMORPHISM
>
> We don't just want a set, we want polymorphism or
> generics, as they are called today. We wanted to
> have a generic set which was parameterized by type
> so you could instantiate it as:
>
> Set = [T:type] create, insert,...
>   % representation for Set object
>   % implementation of Set operations
>   Set
>
> Set[int] s := Set[int]$create()
> Set[int]$insert(s,3)
>
> We wanted a static solution to this problem. The
> problem is, not every type makes sense as a parameter
> to Set of T. For sets, per se, you need an equality
> relation. If it had been a sorted set we would have
> some ordering relation. And a type that didn't have
> one of those things would not have been a legitimate
> parameter. We needed a way of expressing that in a
> compile-time, checkable manner. 

Re: Axiom musings...

2020-06-16 Thread Tim Daly
WHY PROVE AXIOM CORRECT (SANE)?

Historically, Axiom credits CLU, the Cluster language by
Barbara Liskov, with the essential ideas behind the Spad
language. Barbara gave a talk (a partial transcript below)
that gives the rationale behind the ``where clause'' used by
Spad.

She talks about the limits of the compile time capability.
In particular, she says:

   To go further, where we would say that T,
   in addition, has to be an equality relation, that requires
   much more sophisticated techniques that, even today, are
   beyond the capabilities of the compiler.

Showing that the ``equal'' function satisfies the equality
relation is no longer ``beyond the capabilities of the compiler''.
We have the required formalisms and mechanisms to
prove properties at compile time.

The SANE effort is essentially trying to push compile
time checking into proving that, for categories that use
``equal'', we prove that the equal function implements
equality.
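
Stated as a proof obligation, in a Lean 4 style sketch (EqWitness is
a made-up name, not SANE or mathlib code), the compiler would demand
a term of this type for each category's equal:

structure EqWitness (T : Type) (eq : T → T → Bool) : Prop where
  refl  : ∀ a, eq a a = true
  symm  : ∀ a b, eq a b = true → eq b a = true
  trans : ∀ a b c, eq a b = true → eq b c = true → eq a c = true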

I strongly encourage you to watch her video.

Tim

===
Barbara Liskov
May 2012
MIT CSAIL
Programming the Turing Machine
https://www.youtube.com/watch?v=ibRar7sWulM

POLYMORPHISM

We don't just want a set, we want polymorphism or
generics, as they are called today. We wanted to
have a generic set which was parameterized by type
so you could instantiate it as:

Set = [T:type] create, insert,...
  % representation for Set object
  % implementation of Set operations
  Set

Set[int] s := Set[int]$create()
Set[int]$insert(s,3)

We wanted a static solution to this problem. The
problem is, not every type makes sense as a parameter
to Set of T. For sets, per se, you need an equality
relation. If it had been a sorted set we would have
some ordering relation. And a type that didn't have
one of those things would not have been a legitimate
parameter. We needed a way of expressing that in a
compile-time, checkable manner. Otherwise we would
have had to resort to runtime checking.

Our solution was

Set = [T:  ] create, insert,...
  T equal: (T,T) (bool)


Our solution, what we call the ``where clause''. So we
added this to the header. The ``where clause'' tells you
what operations the parameter type has to have.

If you have the ``where'' clause you can do the static
checking because when you instantiate, when you provide
an actual type, the compiler can check that the type has
the operations that are required. And then, when you write
the implementation of Set the compiler knows it's ok to
call those operations because you can guarantee they are
actually there when you get around to running.

Of course, you notice that there's just syntax here; there's
no semantics.

As I'm sure you all know, compile-time type checking is
basically a proof technique of a very limited sort and
this was about as far as we can push what you could get out of the
static analysis. To go further, where we would say that T,
in addition, has to be an equality relation, that requires
much more sophisticated techniques that, even today, are
beyond the capabilities of the compiler.




On 3/24/20, Tim Daly  wrote:
> I've spent entirely too much time studying the legal issues
> of free and open source software. There are copyright,
> trademark, and intellectual property laws. I have read
> several books, listened to lectures, and read papers on
> the subject. I've spoken to lawyers about it. I've even
> been required, by law, to coerce people I respect.
> You would think it was all perfectly clear. It isn't.
>
> The most entertaining and enlightening lectures were
> by Robert Lefkowitz at OSCON 2004. His talk is
> "The Semasiology of Open Source", which sounds
> horrible but I assure you, this is a real treat.
>
> THE THESIS
>
> Semasiology, n. The science of meanings or
> sense development (of words); the explanation
> of the development and changes of the meanings
> of words. Source: Webster's Revised Unabridged
> Dictionary, © 1996, 1998 MICRA, Inc.
>
> "Open source doesn't just mean access to the
> source code." So begins the Open Source Definition.
> What then, does access to the source code mean?
> Seen through the lens of an Enterprise user, what
> does open source mean? When is (or isn't) it
> significant? And a catalogue of open source
> related arbitrage opportunities.
>
> http://origin.conversationsnetwork.org/Robert%20Lefkowitz%20-%20The%20Semasiology%20of%20Open%20Source.mp3
>
> Computer source code has words and sentence
> structure like actual prose or even poetry. Writing
> code for the computer is like writing an essay. It
> should be written for other people to read,
> understand and modify. These are some of the
> thoughts behind literate programming proposed
> by Donald Knuth. This is also one of the ideas
> behind Open Source.
>
>  THE ANTITHESIS
>
> "Open Source" is a phrase like "Object Oriented"
> - weird at first, but when it became popular, the
> meaning began to depend on the context of the
> speaker or listener. "Object Oriented" meant that

Re: Axiom musings...

2020-03-24 Thread Tim Daly
I've spent entirely too much time studying the legal issues
of free and open source software. There are copyright,
trademark, and intellectual property laws. I have read
several books, listened to lectures, and read papers on
the subject. I've spoken to lawyers about it. I've even
been required, by law, to coerce people I respect.
You would think it was all perfectly clear. It isn't.

The most entertaining and enlightening lectures were
by Robert Lefkowitz at OSCON 2004. His talk is
"The Semasiology of Open Source", which sounds
horrible but I assure you, this is a real treat.

THE THESIS

Semasiology, n. The science of meanings or
sense development (of words); the explanation
of the development and changes of the meanings
of words. Source: Webster's Revised Unabridged
Dictionary, © 1996, 1998 MICRA, Inc.

"Open source doesn't just mean access to the
source code." So begins the Open Source Definition.
What then, does access to the source code mean?
Seen through the lens of an Enterprise user, what
does open source mean? When is (or isn't) it
significant? And a catalogue of open source
related arbitrage opportunities.

http://origin.conversationsnetwork.org/Robert%20Lefkowitz%20-%20The%20Semasiology%20of%20Open%20Source.mp3

Computer source code has words and sentence
structure like actual prose or even poetry. Writing
code for the computer is like writing an essay. It
should be written for other people to read,
understand and modify. These are some of the
thoughts behind literate programming proposed
by Donald Knuth. This is also one of the ideas
behind Open Source.

 THE ANTITHESIS

"Open Source" is a phrase like "Object Oriented"
- weird at first, but when it became popular, the
meaning began to depend on the context of the
speaker or listener. "Object Oriented" meant that
PERL, C++, Java, Smalltalk, Basic and the newest
version of Cobol are all "Object Oriented" - for some
specific definition of "Object Oriented". Similar is
the case of the phrase "Open Source".

In Part I, Lefkowitz talked about the shift of the
meaning of "Open Source" away from any reference
to the actual "source code," and more towards other
phases of the software development life cycle. In
Part II, he returns to the consideration of the
relationship between "open source" and the actual
"source code," and reflects upon both the way
forward and the road behind, drawing inspiration
from Charlemagne, King Louis XIV, Donald Knuth,
and others.

http://origin.conversationsnetwork.org/ITC.OSCON05-RobertLefkowitz-2005.08.03.mp3

THE SYNTHESIS

In a fascinating synthesis, Robert “r0ml” Lefkowitz
polishes up his exposition on the evolving meaning
of the term ‘open source’. This intellectual joy-ride
draws on some of the key ideas in artificial intelligence
to probe the role of language, meaning and context
in computing and the software development process.
Like Wittgenstein’s famous thought experiment, the
open source ‘beetle in a box’ can represent different
things to different people, bearing interesting fruit for
philosophers and software creators alike.

http://itc.conversationsnetwork.org/audio/download/itconversations-1502.mp3


On 3/7/20, Tim Daly  wrote:
> I've pushed the lastest version of Axiom. The plan, followed
> so far, is to push once a month on the 7th.
>
> After some chat room interactions it was brought home
> again that the proof world really does not seem to like the
> idea of proving programs correct. And, given that it was is
> of the main Axiom goals and a point of friction during the fork,
> the computer algebra world does not like the idea of proving
> programs correct either.
>
> So the idea of "computational mathematics", which includes
> both disciplines (as well as type theory) seems still a long
> way off.
>
> Nevertheless, the primary change in these past and future
> updates is focused on merging proof and computer algebra.
>
> Proof systems are able to split the problem of creating a
> proof and the problem of verifying a proof, which is much
> cheaper. Ideally the proof checker would run on verified
> hardware so the proof is checked "down to the metal".
>
> I have a background in Field Programmable Gate Arrays
> (FPGAs) as I tried to do a startup using them. So now I'm
> looking at creating a hardware proof checker using a
> dedicated instruction set, one designed to be verifed.
> New CPUs used in data centers (not yet available to us
> mortals) have built-in FPGAs so it would be possible to
> "side-load" a proof of a program to be checked while the
> program is run. I have the FPGA and am doing a gate-level
> special instruction design for such a proof checker.
>
>
> On 2/7/20, Tim Daly  wrote:
>> As a mathematician, it is difficult to use a system like Axiom,
>> mostly because it keeps muttering about Types. If you're not
>> familiar with type theory (most mathematicians aren't) then it
>> seems pointless and painful.
>>
>> So Axiom has a steep learning curve.
>>
>> As a mathematician with an algorithmic 

Re: Axiom musings...

2020-03-07 Thread Tim Daly
I've pushed the lastest version of Axiom. The plan, followed
so far, is to push once a month on the 7th.

After some chat room interactions it was brought home
again that the proof world really does not seem to like the
idea of proving programs correct. And, given that it was one
of the main Axiom goals and a point of friction during the fork,
the computer algebra world does not like the idea of proving
programs correct either.

So the idea of "computational mathematics", which includes
both disciplines (as well as type theory) seems still a long
way off.

Nevertheless, the primary change in these past and future
updates is focused on merging proof and computer algebra.

Proof systems are able to split the problem of creating a
proof and the problem of verifying a proof, which is much
cheaper. Ideally the proof checker would run on verified
hardware so the proof is checked "down to the metal".

I have a background in Field Programmable Gate Arrays
(FPGAs) as I tried to do a startup using them. So now I'm
looking at creating a hardware proof checker using a
dedicated instruction set, one designed to be verifed.
New CPUs used in data centers (not yet available to us
mortals) have built-in FPGAs so it would be possible to
"side-load" a proof of a program to be checked while the
program is run. I have the FPGA and am doing a gate-level
special instruction design for such a proof checker.


On 2/7/20, Tim Daly  wrote:
> As a mathematician, it is difficult to use a system like Axiom,
> mostly because it keeps muttering about Types. If you're not
> familiar with type theory (most mathematicians aren't) then it
> seems pointless and painful.
>
> So Axiom has a steep learning curve.
>
> As a mathematician with an algorithmic approach, it is difficult
> to use a system like Axiom, mostly because you have to find
> or create "domains" or "packages", understand categories
> with their inheritance model, and learn a new language with
> a painful compiler always complaining about types.
>
> So Axiom has a steep learning curve.
>
> The Sane version of Axiom requires knowing the mathematics.
> It also assumes a background in type theory, inductive logic,
> homotopy type theory, ML (meta-language, not machine
> learning (yet)), interactive theorem proving, kernels, tactics,
> and tacticals. Also assumed is knowledge of specification languages,
> Hoare triples, proof techniques, soundness, and completeness.
> Oh, and there is a whole new syntax and semantics added to
> specify definitions, axioms, and theorems, not to mention whole
> libraries of the same.
>
> So Axiom Sane has a steep learning curve.
>
> I've taken 10 courses at CMU and spent the last 4-5 years
> learning to read the leading edge literature (also known
> as "greek studies", since every paper has pages of greek).
>
> I'm trying to unify computer algebra and proof theory into a
> "computational mathematics" framework. I suspect that the only
> way this system will ever be useful is after Universities have a
> "Computational Mathematics" major course of study and degree.
>
> Creating a new department is harder than creating Axiom Sane
> because, you know, ... people.
>
> I think such a department is inevitable given the deep and wide
> impact of computers, just not in my lifetime. That's ok. When I
> started programming there was no computer science degree.
>
> Somebody has to be the first lemming over the cliff.
>
> Tim
>
> On 1/9/20, Tim Daly  wrote:
>> When Axiom Sane is paired with a proof checker (e.g. with Lean)
>> there is a certain amount of verification that is involved.
>>
>> Axiom will provide proofs (presumably validated by Lean) for its
>> algorithms. Ideally, when a computation is requested from Lean
>> for a GCD, the result as well as a proof of the GCD algorithm is
>> returned. Lean can then verify that the proof is valid. But it is
>> computationally more efficient if Axiom and Lean use a cryptographic
>> hash, such as SHA1. That way the proof doesn't need to be
>> 'reproven', only a hash computation over the proof text needs to
>> be performed. Hashes are blazingly fast. This allows proofs to be
>> exchanged without re-running the proof mechanism. Since a large
>> computation request from Lean might involve many algorithms
>> there would be considerable overhead to recompute each proof.
>> A hash simplifies the issue yet provides proof integrity.
>>
>> Tim
>>
>>
>> On 1/9/20, Tim Daly  wrote:
>>> Provisos, that is, 'formula SUCH pre/post-conditions'
>>>
>>> A computer algebra system ought to know and ought to provide
>>> information about the domain and range of a resulting formula.
>>> I've been pushing this effort since the 1980s (hence the
>>> SuchThat domain).
>>>
>>> It turns out that computing with, carrying, and combining this
>>> information is difficult if not impossible in the current system.
>>> The information isn't available and isn't computed. In that sense,
>>> the original Axiom system is 'showing its age'.
>>>
>>> In the 

Re: Axiom musings...

2020-02-06 Thread Tim Daly
As a mathematician, it is difficult to use a system like Axiom,
mostly because it keeps muttering about Types. If you're not
familiar with type theory (most mathematicians aren't) then it
seems pointless and painful.

So Axiom has a steep learning curve.

As a mathematician with an algorithmic approach, it is difficult
to use a system like Axiom, mostly because you have to find
or create "domains" or "packages", understand categories
with their inheritance model, and learn a new language with
a painful compiler always complaining about types.

So Axiom has a steep learning curve.

The Sane version of Axiom requires knowing the mathematics.
It also assumes a background in type theory, inductive logic,
homotopy type theory, ML (meta-language, not machine
learning (yet)), interactive theorem proving, kernels, tactics,
and tacticals. Also assumed is knowledge of specification languages,
Hoare triples, proof techniques, soundness, and completeness.
Oh, and there is a whole new syntax and semantics added to
specify definitions, axioms, and theorems, not to mention whole
libraries of the same.

So Axiom Sane has a steep learning curve.

I've taken 10 courses at CMU and spent the last 4-5 years
learning to read the leading edge literature (also known
as "greek studies", since every paper has pages of greek).

I'm trying to unify computer algebra and proof theory into a
"computational mathematics" framework. I suspect that the only
way this system will ever be useful is after Universities have a
"Computational Mathematics" major course of study and degree.

Creating a new department is harder than creating Axiom Sane
because, you know, ... people.

I think such a department is inevitable given the deep and wide
impact of computers, just not in my lifetime. That's ok. When I
started programming there was no computer science degree.

Somebody has to be the first lemming over the cliff.

Tim

On 1/9/20, Tim Daly  wrote:
> When Axiom Sane is paired with a proof checker (e.g. with Lean)
> there is a certain amount of verification that is involved.
>
> Axiom will provide proofs (presumably validated by Lean) for its
> algorithms. Ideally, when a computation is requested from Lean
> for a GCD, the result as well as a proof of the GCD algorithm is
> returned. Lean can then verify that the proof is valid. But it is
> computationally more efficient if Axiom and Lean use a cryptographic
> hash, such as SHA1. That way the proof doesn't need to be
> 'reproven', only a hash computation over the proof text needs to
> be performed. Hashes are blazingly fast. This allows proofs to be
> exchanged without re-running the proof mechanism. Since a large
> computation request from Lean might involve many algorithms
> there would be considerable overhead to recompute each proof.
> A hash simplifies the issue yet provides proof integrity.
>
> Tim
>
>
> On 1/9/20, Tim Daly  wrote:
>> Provisos, that is, 'formula SUCH pre/post-conditions'
>>
>> A computer algebra system ought to know and ought to provide
>> information about the domain and range of a resulting formula.
>> I've been pushing this effort since the 1980s (hence the
>> SuchThat domain).
>>
>> It turns out that computing with, carrying, and combining this
>> information is difficult if not impossible in the current system.
>> The information isn't available and isn't computed. In that sense,
>> the original Axiom system is 'showing its age'.
>>
>> In the Sane implementation the information is available. It is
>> part of the specification and part of the proof steps. With a
>> careful design it will be possible to provide provisos for each
>> given result that are carried with the result for use in further
>> computation.
>>
>> This raises interesting questions to be explored. For example,
>> if the formula is defined over an interval, how is the interval
>> arithmetic handled?
>>
>> Exciting research ahead!
>>
>> Tim
>>
>>
>>
>> On 1/3/20, Tim Daly  wrote:
>>> Trusted Kernel... all the way to the metal.
>>>
>>> While building a trusted computer algebra system, the
>>> SANE version of Axiom, I've been looking at questions of
>>> trust at all levels.
>>>
>>> One of the key tenets (the de Bruijn principle) calls for a
>>> trusted kernel through which all other computations must
>>> pass. Coq, Lean, and other systems do this. They base
>>> their kernel  on logic like the Calculus of Construction or
>>> something similar.
>>>
>>> Andrej Bauer has been working on a smaller kernel (a
>>> nucleus) that separates the trust from the logic. The rules
>>> for the logic can be specified as needed but checked by
>>> the nucleus code.
>>>
>>> I've been studying Field Programmable Gate Arrays (FPGA)
>>> that allow you to create your own hardware in a C-like
>>> language (Verilog). It allows you to check the chip you build
>>> all the way down to the transistor states. You can create
>>> things as complex as a whole CPU or as simple as a trusted
>>> nucleus. (youtube: Building a CPU on an FPGA). 

Re: Axiom musings...

2020-01-09 Thread Tim Daly
When Axiom Sane is paired with a proof checker (e.g. with Lean)
there is a certain amount of verification that is involved.

Axiom will provide proofs (presumably validated by Lean) for its
algorithms. Ideally, when a computation is requested from Lean
for a GCD, the result as well as a proof of the GCD algorithm is
returned. Lean can then verify that the proof is valid. But it is
computationally more efficient if Axiom and Lean use a cryptographic
hash, such as SHA1. That way the proof doesn't need to be
'reproven', only a hash computation over the proof text needs to
be performed. Hashes are blazingly fast. This allows proofs to be
exchanged without re-running the proof mechanism. Since a large
computation request from Lean might involve many algorithms
there would be considerable overhead to recompute each proof.
A hash simplifies the issue yet provides proof integrity.
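
The bookkeeping is a few lines of Lisp (a sketch using the Ironclad
library for SHA-1; REGISTER-PROOF and PROOF-ALREADY-CHECKED-P are
made-up names): check a proof once, remember its hash, and afterwards
trust any proof text with the same hash:

(defvar *checked-proofs* (make-hash-table :test #'equal))

(defun proof-hash (proof-text)
  "SHA-1 digest of the proof text, as a hex string."
  (ironclad:byte-array-to-hex-string
   (ironclad:digest-sequence
    :sha1 (ironclad:ascii-string-to-byte-array proof-text))))

(defun register-proof (proof-text)
  "Record a proof that the (expensive) checker has accepted."
  (setf (gethash (proof-hash proof-text) *checked-proofs*) t))

(defun proof-already-checked-p (proof-text)
  "A cheap hash lookup instead of re-running the proof checker."
  (gethash (proof-hash proof-text) *checked-proofs*))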

Tim


On 1/9/20, Tim Daly  wrote:
> Provisos, that is, 'formula SUCH pre/post-conditions'
>
> A computer algebra system ought to know and ought to provide
> information about the domain and range of a resulting formula.
> I've been pushing this effort since the 1980s (hence the
> SuchThat domain).
>
> It turns out that computing with, carrying, and combining this
> information is difficult if not impossible in the current system.
> The information isn't available and isn't computed. In that sense,
> the original Axiom system is 'showing its age'.
>
> In the Sane implementation the information is available. It is
> part of the specification and part of the proof steps. With a
> careful design it will be possible to provide provisos for each
> given result that are carried with the result for use in further
> computation.
>
> This raises interesting questions to be explored. For example,
> if the formula is defined over an interval, how is the interval
> arithmetic handled?
>
> Exciting research ahead!
>
> Tim
>
>
>
> On 1/3/20, Tim Daly  wrote:
>> Trusted Kernel... all the way to the metal.
>>
>> While building a trusted computer algebra system, the
>> SANE version of Axiom, I've been looking at questions of
>> trust at all levels.
>>
>> One of the key tenets (the de Bruijn principle) calls for a
>> trusted kernel through which all other computations must
>> pass. Coq, Lean, and other systems do this. They base
>> their kernel  on logic like the Calculus of Construction or
>> something similar.
>>
>> Andrej Bauer has been working on a smaller kernel (a
>> nucleus) that separates the trust from the logic. The rules
>> for the logic can be specified as needed but checked by
>> the nucleus code.
>>
>> I've been studying Field Programmable Gate Arrays (FPGA)
>> that allow you to create your own hardware in a C-like
>> language (Verilog). It allows you to check the chip you build
>> all the way down to the transistor states. You can create
>> things as complex as a whole CPU or as simple as a trusted
>> nucleus. (youtube: Building a CPU on an FPGA). ACL2 has a
>> history of verifying hardware logic.
>>
>> It appears that, assuming I can understand Bauer's
>> Andromeda system, it would be possible and not that hard
>> to implement a trusted kernel on an FPGA the size and
>> form factor of a USB stick.
>>
>> Trust "down to the metal".
>>
>> Tim
>>
>>
>>
>> On 12/15/19, Tim Daly  wrote:
>>> Progress is happening on the new Sane Axiom compiler.
>>>
>>> Recently I've been musing about methods to insert axioms
>>> into categories so they could be inherited like signatures.
>>> At the moment I've been thinking about adding axioms in
>>> the same way that signatures are written, adding them to
>>> the appropriate categories.
>>>
>>> But this is an interesting design question.
>>>
>>> Axiom already has a mechanism for inheriting signatures
>>> from categories. That is, we can get a plus signature from,
>>> say, the Integer category.
>>>
>>> Suppose we follow the same pattern. Currently Axiom
>>> inherits certain so-called "attributes", such as ApproximateAttribute,
>>> which implies that the results are only approximate.
>>>
>>> We could adapt the same mechanism to inherit the Transitive
>>> property by defining it in its own category. In fact, if we
>>> follow Carette and Farmer's "tiny theories" architecture,
>>> where each property has its own inheritable category,
>>> we can "mix and match" the axioms at will.
>>>
>>> An "axiom" category would also export a function. This function
>>> would essentially be a "tactic" used in a proof. It would modify
>>> the proof step by applying the function to the step.
>>>
>>> Theorems would have the same structure.
>>>
>>> This allows theorems to be constructed at run time (since
>>> Axiom supports "First Class Dynamic Types").
>>>
>>> In addition, this design can be "pushed down" into the Spad
>>> language so that Spad statements (e.g. assignment) had
>>> proof-related properties. A range such as [1..10] would
>>> provide explicit bounds in a proof "by language definition".
>>> Defining the logical properties 

Re: Axiom musings...

2020-01-02 Thread Tim Daly
Trusted Kernel... all the way to the metal.

While building a trusted computer algebra system, the
SANE version of Axiom, I've been looking at questions of
trust at all levels.

One of the key tenets (the de Bruijn principle) calls for a
trusted kernel through which all other computations must
pass. Coq, Lean, and other systems do this. They base
their kernel  on logic like the Calculus of Construction or
something similar.

Andrej Bauer has been working on a smaller kernel (a
nucleus) that separates the trust from the logic. The rules
for the logic can be specified as needed but checked by
the nucleus code.
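
The de Bruijn principle itself fits in a few lines of Lisp (an
LCF-style sketch, not Bauer's Andromeda): THEOREM objects can only be
minted by the kernel's inference functions, so holding one is
evidence the formula passed through the kernel:

(defstruct (theorem (:constructor %theorem (formula)))
  (formula nil :read-only t))
;; %THEOREM stays private to the kernel; clients get only these:

(defun axiom (formula)
  "Trusted entry point: admit FORMULA outright."
  (%theorem formula))

(defun modus-ponens (th-imp th-ant)
  "From theorems (IMPLIES A B) and A, conclude B."
  (let ((f (theorem-formula th-imp)))
    (if (and (consp f) (eq (first f) 'implies)
             (equal (second f) (theorem-formula th-ant)))
        (%theorem (third f))
        (error "modus ponens does not apply"))))

;; (modus-ponens (axiom '(implies p q)) (axiom 'p))
;; => a theorem whose formula is Q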

I've been studying Field Programmable Gate Arrays (FPGA)
that allow you to create your own hardware in a C-like
language (Verilog). It allows you to check the chip you build
all the way down to the transistor states. You can create
things as complex as a whole CPU or as simple as a trusted
nucleus. (youtube: Building a CPU on an FPGA). ACL2 has a
history of verifying hardware logic.

It appears that, assuming I can understand Bauer's
Andromeda system, it would be possible and not that hard
to implement a trusted kernel on an FPGA the size and
form factor of a USB stick.

Trust "down to the metal".

Tim



On 12/15/19, Tim Daly  wrote:
> Progress is happening on the new Sane Axiom compiler.
>
> Recently I've been musing about methods to insert axioms
> into categories so they could be inherited like signatures.
> At the moment I've been thinking about adding axioms in
> the same way that signatures are written, adding them to
> the appropriate categories.
>
> But this is an interesting design question.
>
> Axiom already has a mechanism for inheriting signatures
> from categories. That is, we can get a plus signature from,
> say, the Integer category.
>
> Suppose we follow the same pattern. Currently Axiom
> inherits certain so-called "attributes", such as ApproximateAttribute,
> which implies that the results are only approximate.
>
> We could adapt the same mechanism to inherit the Transitive
> property by defining it in its own category. In fact, if we
> follow Carette and Farmer's "tiny theories" architecture,
> where each property has its own inheritable category,
> we can "mix and match" the axioms at will.
>
> An "axiom" category would also export a function. This function
> would essentially be a "tactic" used in a proof. It would modify
> the proof step by applying the function to the step.
>
> Theorems would have the same structure.
>
> This allows theorems to be constructed at run time (since
> Axiom supports "First Class Dynamic Types").
>
> In addition, this design can be "pushed down" into the Spad
> language so that Spad statements (e.g. assignment) had
> proof-related properties. A range such as [1..10] would
> provide explicit bounds in a proof "by language definition".
> Defining the logical properties of language statements in
> this way would make it easier to construct proofs since the
> invariants would be partially constructed already.
>
> This design merges the computer algebra inheritance
> structure with the proof of algorithms structure, all under
> the same mechanism.
>
> Tim
>
> On 12/11/19, Tim Daly  wrote:
>> I've been reading Stephen Kell's (Univ of Kent
>> https://www.cs.kent.ac.uk/people/staff/srk21/) essay on
>> Seven deadly sins of talking about “types”
>> https://www.cs.kent.ac.uk/people/staff/srk21//blog/2014/10/07/
>>
>> He raised an interesting idea toward the end of the essay
>> that type-checking could be done outside the compiler.
>>
>> I can see a way to do this in Axiom's Sane compiler.
>> It would be possible to run a program over the source code
>> to collect the information and write a stand-alone type
>> checker. This "unbundles" type checking and compiling.
>>
>> Taken further I can think of several other kinds of checkers
>> (aka 'linters') that could be unbundled.
>>
>> It is certainly something to explore.
>>
>> Tim
>>
>>
>> On 12/8/19, Tim Daly  wrote:
>>> The Axiom Sane compiler is being "shaped by the hammer
>>> blows of reality", to coin a phrase.
>>>
>>> There are many goals. One of the primary goals is creating a
>>> compiler that can be understood, maintained, and modified.
>>>
>>> So the latest changes involved adding multiple index files.
>>> These are documentation (links to where terms are mentioned
>>> in the text), code (links to the implementation of things),
>>> error (links to where errors are defined), signatures (links to
>>> the signatures of lisp functions), figures (links to figures),
>>> and separate category, domain, and package indexes.
>>>
>>> The tikz package is now used to create "railroad diagrams"
>>> of syntax (ala, the PASCAL report). The implementation of
>>> those diagrams follows immediately. Collectively these will
>>> eventually define at least the syntax of the language. In the
>>> ideal, changing the diagram would change the code but I'm
>>> not that clever.
>>>
>>> Reality shows up with the current constraint that the

Re: Axiom musings...

2019-12-16 Thread Martin Baker

Tim,

Would this also be compatible with 'propositions as types' as in Idris 
and various proof assistants? That is, could you have both Curry–Howard 
and Carette-Farmer?


I ask this because types (and constructors for types) could also be 
inherited by categories in the way you describe. So an equation (axiom 
or identity) like a=a+0 has a constructor, Refl, when it holds, and 
that constructor can be inherited.
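
In Lean 4, for instance, that identity really is inhabited by the
reflexivity constructor (a one-line sketch; a + 0 reduces to a
definitionally for Nat):

theorem a_eq_a_add_zero (a : Nat) : a = a + 0 := rfl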


Or do you see computer algebra/deduction/proof programs forking into 
constructive or axiomatic approaches?


Martin

On 16/12/2019 00:52, Tim Daly wrote:

Progress is happening on the new Sane Axiom compiler.

Recently I've been musing about methods to insert axioms
into categories so they could be inherited like signatures.
At the moment I've been thinking about adding axioms in
the same way that signatures are written, adding them to
the appropriate categories.

But this is an interesting design question.

Axiom already has a mechanism for inheriting signatures
from categories. That is, we can get a plus signature from,
say, the Integer category.

Suppose we follow the same pattern. Currently Axiom
inherits certain so-called "attributes", such as ApproximateAttribute,
which implies that the results are only approximate.

We could adapt the same mechanism to inherit the Transitive
property by defining it in its own category. In fact, if we
follow Carette and Farmer's "tiny theories" architecture,
where each property has its own inheritable category,
we can "mix and match" the axioms at will.

An "axiom" category would also export a function. This function
would essentially be a "tactic" used in a proof. It would modify
the proof step by applying the function to the step.

Theorems would have the same structure.

This allows theorems to be constructed at run time (since
Axiom supports "First Class Dynamic Types").

In addition, this design can be "pushed down" into the Spad
language so that Spad statements (e.g. assignment) had
proof-related properties. A range such as [1..10] would
provide explicit bounds in a proof "by language definition".
Defining the logical properties of language statements in
this way would make it easier to construct proofs since the
invariants would be partially constructed already.

This design merges the computer algebra inheritance
structure with the proof of algorithms structure, all under
the same mechanism.

Tim




Re: Axiom musings...

2019-12-15 Thread Tim Daly
Progress is happening on the new Sane Axiom compiler.

Recently I've been musing about methods to insert axioms
into categories so they could be inherited like signatures.
At the moment I've been thinking about adding axioms in
the same way that signatures are written, adding them to
the appropriate categories.

But this is an interesting design question.

Axiom already has a mechanism for inheriting signatures
from categories. That is, we can get a plus signature from,
say, the Integer category.

Suppose we follow the same pattern. Currently Axiom
inherits certain so-called "attributes", such as ApproximateAttribute,
which implies that the results are only approximate.

We could adapt the same mechanism to inherit the Transitive
property by defining it in its own category. In fact, if we
follow Carette and Farmer's "tiny theories" architecture,
where each property has its own inheritable category,
we can "mix and match" the axioms at will.

An "axiom" category would also export a function. This function
would essentially be a "tactic" used in a proof. It would modify
the proof step by applying the function to the step.
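
Continuing the hypothetical sketch above, the exported function is 
just the trans field; applying it to two hypotheses performs one 
proof step:

  -- the inherited property used as a single step in a proof
  example (a b c : Nat) (h1 : a ≤ b) (h2 : b ≤ c) : a ≤ c :=
    Transitive.trans h1 h2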

Theorems would have the same structure.

This allows theorems to be constructed at run time (since
Axiom supports "First Class Dynamic Types").

In addition, this design can be "pushed down" into the Spad
language so that Spad statements (e.g. assignment) have
proof-related properties. A range such as [1..10] would
provide explicit bounds in a proof "by language definition".
Defining the logical properties of language statements in
this way would make it easier to construct proofs since the
invariants would be partially constructed already.
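
As a hedged sketch of that last point (again Lean 4, invented names): 
a value drawn from the range [1..10] can carry its bounds in its 
type, so every later proof gets the invariant without restating it:

  -- the range [1..10] as a type: the bounds travel with the value
  abbrev Range1to10 := { n : Nat // 1 ≤ n ∧ n ≤ 10 }

  -- constructing an element discharges the bounds proof exactly once
  def five : Range1to10 := ⟨5, by decide⟩

  -- consumers extract the invariant "by language definition"
  theorem upper_bound (x : Range1to10) : x.val ≤ 10 := x.property.2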

This design merges the computer algebra inheritance
structure with the proof of algorithms structure, all under
the same mechanism.

Tim

On 12/11/19, Tim Daly  wrote:
> I've been reading Stephen Kell's (Univ of Kent
> https://www.cs.kent.ac.uk/people/staff/srk21/) essay on
> Seven deadly sins of talking about “types”
> https://www.cs.kent.ac.uk/people/staff/srk21//blog/2014/10/07/
>
> He raised an interesting idea toward the end of the essay
> that type-checking could be done outside the compiler.
>
> I can see a way to do this in Axiom's Sane compiler.
> It would be possible to run a program over the source code
> to collect the information and write a stand-alone type
> checker. This "unbundles" type checking and compiling.
>
> Taken further I can think of several other kinds of checkers
> (aka 'linters') that could be unbundled.
>
> It is certainly something to explore.
>
> Tim
>
>
> On 12/8/19, Tim Daly  wrote:
>> The Axiom Sane compiler is being "shaped by the hammer
>> blows of reality", to coin a phrase.
>>
>> There are many goals. One of the primary goals is creating a
>> compiler that can be understood, maintained, and modified.
>>
>> So the latest changes involved adding multiple index files.
>> These are documentation (links to where terms are mentioned
>> in the text), code (links to the implementation of things),
>> error (links to where errors are defined), signatures (links to
>> the signatures of lisp functions), figures (links to figures),
>> and separate category, domain, and package indexes.
>>
>> The tikz package is now used to create "railroad diagrams"
>> of syntax (à la the PASCAL report). The implementation of
>> those diagrams follows immediately. Collectively these will
>> eventually define at least the syntax of the language. In the
>> ideal, changing the diagram would change the code but I'm
>> not that clever.
>>
>> Reality shows up with the current constraint that the
>> compiler should accept the current Spad language as
>> closely as possible. Of course, plans are to include new
>> constructs (e.g. hypothesis, axiom, specification, etc)
>> but these are being postponed until "syntax complete".
>>
>> All parse information is stored in a parse object, which
>> is a CLOS object (and therefore a Common Lisp type).
>> Fields within the parse object, e.g. variables, are also
>> CLOS objects (and therefore a Common Lisp type).
>> It's types all the way down.
>>
>> These types are being used as 'signatures' for the
>> lisp functions. The goal is to be able to type-check the
>> compiler implementation as well as the Sane language.
>>
>> The parser is designed to "wrap around" so that the
>> user-level output of a parse should be the user-level
>> input (albeit in a "canonical" form). This "mirror effect"
>> should make it easy to see that the parser properly
>> parsed the user input.
>>
>> The parser is "first class" so it will be available at
>> runtime as a domain allowing Spad code to construct
>> Spad code.
>>
>> One plan, not near implementation, is to "unify" some
>> CLOS types with the Axiom types (e.g. String). How
>> this will happen is still in the land of design. This would
>> "ground" Spad in lisp, making them co-equal.
>>
>> Making lisp "co-equal" is a feature, especially as Spad is
>> really just a domain-specific language 

Re: Axiom musings...

2019-12-11 Thread Tim Daly
I've been reading Stephen Kell's (Univ of Kent
https://www.cs.kent.ac.uk/people/staff/srk21/) essay on
Seven deadly sins of talking about “types”
https://www.cs.kent.ac.uk/people/staff/srk21//blog/2014/10/07/

He raised an interesting idea toward the end of the essay
that type-checking could be done outside the compiler.

I can see a way to do this in Axiom's Sane compiler.
It would be possible to run a program over the source code
to collect the information and write a stand-alone type
checker. This "unbundles" type checking and compiling.

Taken further I can think of several other kinds of checkers
(aka 'linters') that could be unbundled.

It is certainly something to explore.
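
To make the idea concrete, here is a deliberately naive Common Lisp 
sketch (none of this is actual Sane code; the functions and the 
arity-only check are invented) of a checker that runs entirely 
outside the compiler:

  ;; pass 1: record the arity of every top-level defun
  (defun collect-signatures (forms)
    (let ((table (make-hash-table :test #'eq)))
      (dolist (form forms table)
        (when (and (consp form) (eq (first form) 'defun))
          (setf (gethash (second form) table)
                (length (third form)))))))

  ;; pass 2: walk each form, flagging calls whose argument count
  ;; disagrees with the recorded arity (ignores &optional and friends)
  (defun check-calls (form table)
    (when (consp form)
      (let ((arity (gethash (first form) table)))
        (when (and arity (/= arity (length (rest form))))
          (format t "arity mismatch: ~S~%" form)))
      (dolist (sub (rest form))
        (check-calls sub table))))

  ;; driver: read a file of forms and check it, no compiler involved
  (defun check-file (path)
    (with-open-file (in path)
      (let* ((forms (loop for f = (read in nil nil) while f collect f))
             (table (collect-signatures forms)))
        (dolist (form forms)
          (check-calls form table)))))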

Tim


On 12/8/19, Tim Daly  wrote:
> The Axiom Sane compiler is being "shaped by the hammer
> blows of reality", to coin a phrase.
>
> There are many goals. One of the primary goals is creating a
> compiler that can be understood, maintained, and modified.
>
> So the latest changes involved adding multiple index files.
> These are documentation (links to where terms are mentioned
> in the text), code (links to the implementation of things),
> error (links to where errors are defined), signatures (links to
> the signatures of lisp functions), figures (links to figures),
> and separate category, domain, and package indexes.
>
> The tikz package is now used to create "railroad diagrams"
> of syntax (à la the PASCAL report). The implementation of
> those diagrams follows immediately. Collectively these will
> eventually define at least the syntax of the language. In the
> ideal, changing the diagram would change the code but I'm
> not that clever.
>
> Reality shows up with the current constraint that the
> compiler should accept the current Spad language as
> closely as possible. Of course, plans are to include new
> constructs (e.g. hypothesis, axiom, specification, etc)
> but these are being postponed until "syntax complete".
>
> All parse information is stored in a parse object, which
> is a CLOS object (and therefore a Common Lisp type).
> Fields within the parse object, e.g. variables, are also
> CLOS objects (and therefore a Common Lisp type).
> It's types all the way down.
>
> These types are being used as 'signatures' for the
> lisp functions. The goal is to be able to type-check the
> compiler implementation as well as the Sane language.
>
> The parser is designed to "wrap around" so that the
> user-level output of a parse should be the user-level
> input (albeit in a "canonical" form). This "mirror effect"
> should make it easy to see that the parser properly
> parsed the user input.
>
> The parser is "first class" so it will be available at
> runtime as a domain allowing Spad code to construct
> Spad code.
>
> One plan, not near implementation, is to "unify" some
> CLOS types with the Axiom types (e.g. String). How
> this will happen is still in the land of design. This would
> "ground" Spad in lisp, making them co-equal.
>
> Making lisp "co-equal" is a feature, especially as Spad is
> really just a domain-specific language in lisp. Lisp
> functions (with CLOS types as signatures) would be
> available for implementing Spad functions. This not
> only improves the efficiency, it would make the
> BLAS/LAPACK (see volume 10.5) code "native" to Axiom.
>
> On the theory front I plan to attend the Formal Methods
> in Mathematics / Lean Together conference, mostly to
> learn how little I know, especially what I need to know.
> http://www.andrew.cmu.edu/user/avigad/meetings/fomm2020/
>
> Tim
>
>
>
> On 11/28/19, Jacques Carette  wrote:
>> The underlying technology to use for building such an algebra library is
>> documented in the paper "Building on the Diamonds between Theories:
>> Theory Presentation Combinators"
>> http://www.cas.mcmaster.ca/~carette/publications/tpcj.pdf [which will
>> also be on the arxiv by Monday, and has been submitted to a journal].
>>
>> There is a rather full-fledged prototype, very well documented at
>> https://alhassy.github.io/next-700-module-systems/prototype/package-former.html
>>
>> (source at https://github.com/alhassy/next-700-module-systems). It is
>> literate source.
>>
>> The old prototype was hard to find - it is now at
>> https://github.com/JacquesCarette/MathScheme.
>>
>> There is also a third prototype in the MMT system, but it does not quite
>> function properly today, it is under repair.
>>
>> The paper "A Language Feature to Unbundle Data at Will"
>> (https://alhassy.github.io/next-700-module-systems/papers/gpce19_a_language_feature_to_unbundle_data_at_will.pdf)
>>
>> is also relevant, as it solves a problem with parametrized theories
>> (parametrized Categories in Axiom terminology) that all current systems
>> suffer from.
>>
>> Jacques
>>
>> On 2019-11-27 11:47 p.m., Tim Daly wrote:
>>> The new Sane compiler is also being tested with the Fricas
>>> algebra code. The compiler knows about the language but
>>> does not depend on the algebra library (so far). It should be
>>> possible, by 

Re: Axiom musings...

2019-12-08 Thread Tim Daly
The Axiom Sane compiler is being "shaped by the hammer
blows of reality", to coin a phrase.

There are many goals. One of the primary goals is creating a
compiler that can be understood, maintained, and modified.

So the latest changes involved adding multiple index files.
These are documentation (links to where terms are mentioned
in the text), code (links to the implementation of things),
error (links to where errors are defined), signatures (links to
the signatures of lisp functions), figures (links to figures),
and separate category, domain, and package indexes.

The tikz package is now used to create "railroad diagrams"
of syntax (à la the PASCAL report). The implementation of
those diagrams follows immediately. Collectively these will
eventually define at least the syntax of the language. In the
ideal, changing the diagram would change the code but I'm
not that clever.

Reality shows up with the current constraint that the
compiler should accept the current Spad language as
closely as possible. Of course, plans are to include new
constructs (e.g. hypothesis, axiom, specification, etc)
but these are being postponed until "syntax complete".

All parse information is stored in a parse object, which
is a CLOS object (and therefore a Common Lisp type).
Fields within the parse object, e.g. variables, are also
CLOS objects (and therefore a Common Lisp type).
It's types all the way down.

These types are being used as 'signatures' for the
lisp functions. The goal is to be able to type-check the
compiler implementation as well as the Sane language.
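
By way of illustration only (the class and slot names here are 
invented, not the actual Sane sources), the shape is roughly:

  ;; a field of the parse object is itself a typed CLOS object
  (defclass parse-variable ()
    ((name    :initarg :name    :type symbol :reader variable-name)
     (vartype :initarg :vartype :type symbol :reader variable-type)))

  ;; the parse object: types all the way down
  (defclass parse-object ()
    ((variables :initarg :variables :type list   :reader parse-variables)
     (source    :initarg :source    :type string :reader parse-source)))

  ;; the CLOS type doubles as a signature: dispatch rejects anything
  ;; that is not a parse-object at method-selection time
  (defgeneric show-parse (p))
  (defmethod show-parse ((p parse-object))
    (format t "parsed ~S with ~D variable(s)~%"
            (parse-source p) (length (parse-variables p))))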

The parser is designed to "wrap around" so that the
user-level output of a parse should be the user-level
input (albeit in a "canonical" form). This "mirror effect"
should make it easy to see that the parser properly
parsed the user input.

The parser is "first class" so it will be available at
runtime as a domain allowing Spad code to construct
Spad code.

One plan, not near implementation, is to "unify" some
CLOS types with the Axiom types (e.g. String). How
this will happen is still in the land of design. This would
"ground" Spad in lisp, making them co-equal.

Making lisp "co-equal" is a feature, especially as Spad is
really just a domain-specific language in lisp. Lisp
functions (with CLOS types as signatures) would be
available for implementing Spad functions. This not
only improves the efficiency, it would make the
BLAS/LAPACK (see volume 10.5) code "native" to Axiom.

On the theory front I plan to attend the Formal Methods
in Mathematics / Lean Together conference, mostly to
learn how little I know, especially what I need to know.
http://www.andrew.cmu.edu/user/avigad/meetings/fomm2020/

Tim



On 11/28/19, Jacques Carette  wrote:
> The underlying technology to use for building such an algebra library is
> documented in the paper "Building on the Diamonds between Theories:
> Theory Presentation Combinators"
> http://www.cas.mcmaster.ca/~carette/publications/tpcj.pdf [which will
> also be on the arxiv by Monday, and has been submitted to a journal].
>
> There is a rather full-fledged prototype, very well documented at
> https://alhassy.github.io/next-700-module-systems/prototype/package-former.html
>
> (source at https://github.com/alhassy/next-700-module-systems). It is
> literate source.
>
> The old prototype was hard to find - it is now at
> https://github.com/JacquesCarette/MathScheme.
>
> There is also a third prototype in the MMT system, but it does not quite
> function properly today, it is under repair.
>
> The paper "A Language Feature to Unbundle Data at Will"
> (https://alhassy.github.io/next-700-module-systems/papers/gpce19_a_language_feature_to_unbundle_data_at_will.pdf)
>
> is also relevant, as it solves a problem with parametrized theories
> (parametrized Categories in Axiom terminology) that all current systems
> suffer from.
>
> Jacques
>
> On 2019-11-27 11:47 p.m., Tim Daly wrote:
>> The new Sane compiler is also being tested with the Fricas
>> algebra code. The compiler knows about the language but
>> does not depend on the algebra library (so far). It should be
>> possible, by design, to load different algebra towers.
>>
>> In particular, one idea is to support the "tiny theories"
>> algebra from Carette and Farmer. This would allow much
>> finer grain separation of algebra and axioms.
>>
>> This "flexible algebra" design would allow things like the
>> Lean theorem prover effort in a more natural style.
>>
>> Tim
>>
>>
>> On 11/26/19, Tim Daly  wrote:
>>> The current design and code base (in bookvol15) supports
>>> multiple back ends. One will clearly be a common lisp.
>>>
>>> Another possible design choice is to target the GNU
>>> GCC intermediate representation, making Axiom "just
>>> another front-end language" supported by GCC.
>>>
>>> The current intermediate representation does not (yet)
>>> make any decision about the runtime implementation.
>>>
>>> Tim
>>>
>>>
>>> On 11/26/19, Tim Daly  wrote:
 Jason Gross and Adam Chlipala 

Re: Axiom musings...

2019-11-28 Thread Jacques Carette
The underlying technology to use for building such an algebra library is 
documented in the paper "Building on the Diamonds between Theories: 
Theory Presentation Combinators" 
http://www.cas.mcmaster.ca/~carette/publications/tpcj.pdf [which will 
also be on the arxiv by Monday, and has been submitted to a journal].


There is a rather full-fledged prototype, very well documented at 
https://alhassy.github.io/next-700-module-systems/prototype/package-former.html 
(source at https://github.com/alhassy/next-700-module-systems). It is 
literate source.


The old prototype was hard to find - it is now at 
https://github.com/JacquesCarette/MathScheme.


There is also a third prototype in the MMT system, but it does not quite 
function properly today, it is under repair.


The paper "A Language Feature to Unbundle Data at Will" 
(https://alhassy.github.io/next-700-module-systems/papers/gpce19_a_language_feature_to_unbundle_data_at_will.pdf) 
is also relevant, as it solves a problem with parametrized theories 
(parametrized Categories in Axiom terminology) that all current systems 
suffer from.


Jacques

On 2019-11-27 11:47 p.m., Tim Daly wrote:

The new Sane compiler is also being tested with the Fricas
algebra code. The compiler knows about the language but
does not depend on the algebra library (so far). It should be
possible, by design, to load different algebra towers.

In particular, one idea is to support the "tiny theories"
algebra from Carette and Farmer. This would allow much
finer grain separation of algebra and axioms.

This "flexible algebra" design would allow things like the
Lean theorem prover effort in a more natural style.

Tim


On 11/26/19, Tim Daly  wrote:

The current design and code base (in bookvol15) supports
multiple back ends. One will clearly be a common lisp.

Another possible design choice is to target the GNU
GCC intermediate representation, making Axiom "just
another front-end language" supported by GCC.

The current intermediate representation does not (yet)
make any decision about the runtime implementation.

Tim


On 11/26/19, Tim Daly  wrote:

Jason Gross and Adam Chlipala ("Parsing Parses") developed
a dependently typed general parser for context-free grammars
in Coq.

They used the parser to prove its own completeness.

Unfortunately Spad does not have a context-free grammar.
But it is an interesting thought exercise to consider
an "Axiom on Coq" implementation.

Tim





Re: Axiom musings...

2019-11-27 Thread Tim Daly
I'm still undecided about accepting unicode on input as opposed
to latex-style input (e.g. \pi vs the unicode pi character).

The logic syntax would really benefit from using things like
\forall as a unicode character on input.

It makes the math I/O much prettier but it makes parsing
much harder. What does one do with a function
  (code-char 958)(x:INT) ==

aka (greek small letter xi)
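
One cheap experiment would be to canonicalize at the lexer: map 
latex-style names to their unicode characters on input, so both 
spellings reach the parser as the same token. A purely hypothetical 
Common Lisp sketch (the table and function are mine):

  ;; latex-style name -> unicode code point (a tiny sample table)
  (defparameter *latex-names*
    '(("\\pi"     . 960)      ; greek small letter pi
      ("\\xi"     . 958)      ; greek small letter xi
      ("\\forall" . 8704)))   ; for all

  ;; naive textual substitution; a real lexer would respect token
  ;; boundaries so \pi does not fire inside a longer name
  (defun expand-latex-names (line)
    (loop for (name . code) in *latex-names*
          do (loop for pos = (search name line)
                   while pos
                   do (setf line
                            (concatenate 'string
                                         (subseq line 0 pos)
                                         (string (code-char code))
                                         (subseq line (+ pos (length name)))))))
    line)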

Some experiments are "in the thought pipeline".

Tim


On 11/27/19, Tim Daly  wrote:
> The new Sane compiler is also being tested with the Fricas
> algebra code. The compiler knows about the language but
> does not depend on the algebra library (so far). It should be
> possible, by design, to load different algebra towers.
>
> In particular, one idea is to support the "tiny theories"
> algebra from Carette and Farmer. This would allow much
> finer grain separation of algebra and axioms.
>
> This "flexible algebra" design would allow things like the
> Lean theorem prover effort in a more natural style.
>
> Tim
>
>
> On 11/26/19, Tim Daly  wrote:
>> The current design and code base (in bookvol15) supports
>> multiple back ends. One will clearly be a common lisp.
>>
>> Another possible design choice is to target the GNU
>> GCC intermediate representation, making Axiom "just
>> another front-end language" supported by GCC.
>>
>> The current intermediate representation does not (yet)
>> make any decision about the runtime implementation.
>>
>> Tim
>>
>>
>> On 11/26/19, Tim Daly  wrote:
>>> Jason Gross and Adam Chlipala ("Parsing Parses") developed
>>> a dependently typed general parser for context-free grammars
>>> in Coq.
>>>
>>> They used the parser to prove its own completeness.
>>>
>>> Unfortunately Spad does not have a context-free grammar.
>>> But it is an interesting thought exercise to consider
>>> an "Axiom on Coq" implementation.
>>>
>>> Tim
>>>
>>
>



Re: Axiom musings...

2019-11-27 Thread Tim Daly
The new Sane compiler is also being tested with the Fricas
algebra code. The compiler knows about the language but
does not depend on the algebra library (so far). It should be
possible, by design, to load different algebra towers.

In particular, one idea is to support the "tiny theories"
algebra from Carette and Farmer. This would allow much
finer grain separation of algebra and axioms.

This "flexible algebra" design would allow things like the
Lean theorem prover effort in a more natural style.

Tim


On 11/26/19, Tim Daly  wrote:
> The current design and code base (in bookvol15) supports
> multiple back ends. One will clearly be a common lisp.
>
> Another possible design choice is to target the GNU
> GCC intermediate representation, making Axiom "just
> another front-end language" supported by GCC.
>
> The current intermediate representation does not (yet)
> make any decision about the runtime implementation.
>
> Tim
>
>
> On 11/26/19, Tim Daly  wrote:
>> Jason Gross and Adam Chlipala ("Parsing Parses") developed
>> a dependently typed general parser for context-free grammars
>> in Coq.
>>
>> They used the parser to prove its own completeness.
>>
>> Unfortunately Spad does not have a context-free grammar.
>> But it is an interesting thought exercise to consider
>> an "Axiom on Coq" implementation.
>>
>> Tim
>>
>



Re: Axiom musings...

2019-11-26 Thread Tim Daly
The current design and code base (in bookvol15) supports
multiple back ends. One will clearly be a common lisp.

Another possible design choice is to target the GNU
GCC intermediate representation, making Axiom "just
another front-end language" supported by GCC.

The current intermediate representation does not (yet)
make any decision about the runtime implementation.

Tim


On 11/26/19, Tim Daly  wrote:
> Jason Gross and Adam Chlipala ("Parsing Parses") developed
> a dependently typed general parser for context-free grammars
> in Coq.
>
> They used the parser to prove its own completeness.
>
> Unfortunately Spad does not have a context-free grammar.
> But it is an interesting thought exercise to consider
> an "Axiom on Coq" implementation.
>
> Tim
>