I've spent entirely too much time studying the legal issues
of free and open source software. There are copyright,
trademark, and other intellectual property laws. I have read
several books, listened to lectures, and read papers on
the subject. I've spoken to lawyers about it. I've even
been required, by law, to coerce people I respect.
You would think it was all perfectly clear. It isn't.

The most entertaining and enlightening lectures were
by Robert Lefkowitz at OSCON 2004. His talk is
"The Semasiology of Open Source", which sounds
horrible, but I assure you it is a real treat.

THE THESIS

Semasiology, n. The science of meanings or
sense development (of words); the explanation
of the development and changes of the meanings
of words. Source: Webster's Revised Unabridged
Dictionary, © 1996, 1998 MICRA, Inc.

"Open source doesn't just mean access to the
source code." So begins the Open Source Definition.
What, then, does access to the source code mean?
Seen through the lens of an Enterprise user, what
does open source mean? When is (or isn't) it
significant? And a catalogue of open-source-related
arbitrage opportunities.

http://origin.conversationsnetwork.org/Robert%20Lefkowitz%20-%20The%20Semasiology%20of%20Open%20Source.mp3

Computer source code has words and sentence
structure like actual prose or even poetry. Writing
code for the computer is like writing an essay. It
should be written for other people to read,
understand and modify. These are some of the
thoughts behind literate programming proposed
by Donald Knuth. This is also one of the ideas
behind Open Source.

THE ANTITHESIS

"Open Source" is a phrase like "Object Oriented"
- weird at first, but when it became popular, the
meaning began to depend on the context of the
speaker or listener. "Object Oriented" meant that
PERL, C++, Java, Smalltalk, Basic and the newest
version of Cobol are all "Object Oriented" - for some
specific definition of "Object Oriented". Similar is
the case of the phrase "Open Source".

In Part I, Lefkowitz talked about the shift of the
meaning of "Open Source" away from any reference
to the actual "source code," and more towards other
phases of the software development life cycle. In
Part II, he returns to the consideration of the
relationship between "open source" and the actual
"source code," and reflects upon both the way
forward and the road behind, drawing inspiration
from Charlemagne, King Louis XIV, Donald Knuth,
and others.

http://origin.conversationsnetwork.org/ITC.OSCON05-RobertLefkowitz-2005.08.03.mp3

THE SYNTHESIS

In a fascinating synthesis, Robert “r0ml” Lefkowitz
polishes up his exposition on the evolving meaning
of the term ‘open source’. This intellectual joy-ride
draws on some of the key ideas in artificial intelligence
to probe the role of language, meaning and context
in computing and the software development process.
Like Wittgenstein’s famous thought experiment, the
open source ‘beetle in a box’ can represent different
things to different people, bearing interesting fruit for
philosophers and software creators alike.

http://itc.conversationsnetwork.org/audio/download/itconversations-1502.mp3


On 3/7/20, Tim Daly <axiom...@gmail.com> wrote:
> I've pushed the latest version of Axiom. The plan, followed
> so far, is to push once a month on the 7th.
>
> After some chat room interactions it was brought home
> again that the proof world really does not seem to like the
> idea of proving programs correct. And, given that it was one
> of the main Axiom goals and a point of friction during the fork,
> the computer algebra world does not like the idea of proving
> programs correct either.
>
> So the idea of "computational mathematics", which includes
> both disciplines (as well as type theory) seems still a long
> way off.
>
> Nevertheless, the primary change in these past and future
> updates is focused on merging proof and computer algebra.
>
> Proof systems are able to split the problem of creating a
> proof from the problem of verifying one; verification is much
> cheaper. Ideally the proof checker would run on verified
> hardware so the proof is checked "down to the metal".
>
> I have a background in Field Programmable Gate Arrays
> (FPGAs) as I tried to do a startup using them. So now I'm
> looking at creating a hardware proof checker using a
> dedicated instruction set, one designed to be verified.
> New CPUs used in data centers (not yet available to us
> mortals) have built-in FPGAs so it would be possible to
> "side-load" a proof of a program to be checked while the
> program is run. I have the FPGA and am doing a gate-level
> special instruction design for such a proof checker.
>
>
> On 2/7/20, Tim Daly <axiom...@gmail.com> wrote:
>> As a mathematician, it is difficult to use a system like Axiom,
>> mostly because it keeps muttering about Types. If you're not
>> familiar with type theory (most mathematicians aren't) then it
>> seems pointless and painful.
>>
>> So Axiom has a steep learning curve.
>>
>> As a mathematician with an algorithmic approach, it is difficult
>> to use a system like Axiom, mostly because you have to find
>> or create "domains" or "packages", understand categories
>> with their inheritance model, and learn a new language with
>> a painful compiler always complaining about types.
>>
>> So Axiom has a steep learning curve.
>>
>> The Sane version of Axiom requires knowing the mathematics.
>> It also assumes a background in type theory, inductive logic,
>> homotopy type theory, ML (meta-language, not machine
>> learning (yet)), interactive theorem proving, kernels, tactics,
>> and tacticals. Also assumed is knowledge of specification languages,
>> Hoare triples, proof techniques, soundness, and completeness.
>> Oh, and there is a whole new syntax and semantics added to
>> specify definitions, axioms, and theorems, not to mention whole
>> libraries of the same.
>>
>> So Axiom Sane has a steep learning curve.
>>
>> I've taken 10 courses at CMU and spent the last 4-5 years
>> learning to read the leading-edge literature (also known
>> as "Greek studies", since every paper has pages of Greek).
>>
>> I'm trying to unify computer algebra and proof theory into a
>> "computational mathematics" framework. I suspect that the only
>> way this system will ever be useful is after Universities have a
>> "Computational Mathematics" major course of study and degree.
>>
>> Creating a new department is harder than creating Axiom Sane
>> because, you know, ... people.
>>
>> I think such a department is inevitable given the deep and wide
>> impact of computers, just not in my lifetime. That's ok. When I
>> started programming there was no computer science degree.
>>
>> Somebody has to be the first lemming over the cliff.
>>
>> Tim
>>
>> On 1/9/20, Tim Daly <axiom...@gmail.com> wrote:
>>> When Axiom Sane is paired with a proof checker (e.g. Lean)
>>> there is a certain amount of verification involved.
>>>
>>> Axiom will provide proofs (presumably validated by Lean) for its
>>> algorithms. Ideally, when Lean requests a computation such as
>>> a GCD, the result as well as a proof of the GCD algorithm is
>>> returned. Lean can then verify that the proof is valid. But it is
>>> computationally more efficient if Axiom and Lean use a cryptographic
>>> hash, such as SHA1. That way the proof doesn't need to be
>>> 'reproven', only a hash computation over the proof text needs to
>>> be performed. Hashes are blazingly fast. This allows proofs to be
>>> exchanged without re-running the proof mechanism. Since a large
>>> computation request from Lean might involve many algorithms
>>> there would be considerable overhead to recompute each proof.
>>> A hash simplifies the issue yet provides proof integrity.
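>>>
>>> A minimal sketch of that caching idea in Common Lisp, assuming
>>> the Ironclad library for the digest (the function names here
>>> are hypothetical, not Axiom code):
>>>
>>>   ;; Ironclad is a Common Lisp cryptography library.
>>>   (ql:quickload :ironclad)
>>>
>>>   (defun proof-digest (proof-text)
>>>     "Return a hex SHA1 digest of the proof text."
>>>     (ironclad:byte-array-to-hex-string
>>>      (ironclad:digest-sequence
>>>       :sha1 (ironclad:ascii-string-to-byte-array proof-text))))
>>>
>>>   ;; Digests of proofs the checker has already validated.
>>>   (defvar *verified-proofs* (make-hash-table :test #'equal))
>>>
>>>   (defun remember-verified (proof-text)
>>>     "Record that PROOF-TEXT was checked by the proof kernel."
>>>     (setf (gethash (proof-digest proof-text) *verified-proofs*) t))
>>>
>>>   (defun proof-known-p (proof-text)
>>>     "True when this exact proof text was verified before, so
>>>     the expensive re-check can be skipped."
>>>     (gethash (proof-digest proof-text) *verified-proofs*))
>>>
>>> The point is only that a hash lookup replaces a proof replay;
>>> the actual exchange protocol between Axiom and Lean is still
>>> a design question.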
>>>
>>> Tim
>>>
>>>
>>> On 1/9/20, Tim Daly <axiom...@gmail.com> wrote:
>>>> Provisos.... that is, 'formula SUCH THAT pre/post-conditions'
>>>>
>>>> A computer algebra system ought to know and ought to provide
>>>> information about the domain and range of a resulting formula.
>>>> I've been pushing this effort since the 1980s (hence the
>>>> SuchThat domain).
>>>>
>>>> It turns out that computing with, carrying, and combining this
>>>> information is difficult if not impossible in the current system.
>>>> The information isn't available and isn't computed. In that sense,
>>>> the original Axiom system is 'showing its age'.
>>>>
>>>> In the Sane implementation the information is available. It is
>>>> part of the specification and part of the proof steps. With a
>>>> careful design it will be possible to provide provisos for each
>>>> given result that are carried with the result for use in further
>>>> computation.
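>>>>
>>>> As a rough sketch (hypothetical names, not the SuchThat
>>>> domain itself), a result could be a value paired with its
>>>> provisos, and combining results would conjoin them:
>>>>
>>>>   ;; A result is a formula together with the conditions
>>>>   ;; under which it is valid.
>>>>   (defstruct proviso-result
>>>>     formula      ; the computed expression
>>>>     conditions)  ; pre/post-conditions carried along
>>>>
>>>>   (defun combine (op r1 r2)
>>>>     "Combine two results, carrying both sets of provisos."
>>>>     (make-proviso-result
>>>>      :formula (list op (proviso-result-formula r1)
>>>>                        (proviso-result-formula r2))
>>>>      :conditions (union (proviso-result-conditions r1)
>>>>                         (proviso-result-conditions r2)
>>>>                         :test #'equal)))
>>>>
>>>>   ;; Example: (1/x) + (1/y) carries both nonzero provisos.
>>>>   (combine '+ (make-proviso-result :formula '(/ 1 x)
>>>>                                    :conditions '((/= x 0)))
>>>>               (make-proviso-result :formula '(/ 1 y)
>>>>                                    :conditions '((/= y 0))))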
>>>>
>>>> This raises interesting questions to be explored. For example,
>>>> if the formula is defined over an interval, how is the interval
>>>> arithmetic handled?
>>>>
>>>> Exciting research ahead!
>>>>
>>>> Tim
>>>>
>>>>
>>>>
>>>> On 1/3/20, Tim Daly <axiom...@gmail.com> wrote:
>>>>> Trusted Kernel... all the way to the metal.
>>>>>
>>>>> While building a trusted computer algebra system, the
>>>>> SANE version of Axiom, I've been looking at questions of
>>>>> trust at all levels.
>>>>>
>>>>> One of the key tenets (the de Bruijn principle) calls for a
>>>>> trusted kernel through which all other computations must
>>>>> pass. Coq, Lean, and other systems do this. They base
>>>>> their kernel on a logic like the Calculus of Constructions
>>>>> or something similar.
>>>>>
>>>>> Andrej Bauer has been working on a smaller kernel (a
>>>>> nucleus) that separates the trust from the logic. The rules
>>>>> for the logic can be specified as needed but checked by
>>>>> the nucleus code.
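>>>>>
>>>>> A toy illustration of the de Bruijn idea in Common Lisp (a
>>>>> sketch, not Bauer's Andromeda): theorems can only be built
>>>>> by the few trusted functions below, so every derivation
>>>>> must pass through them.
>>>>>
>>>>>   ;; In a real kernel %make-theorem would be hidden in
>>>>>   ;; its own package; only the nucleus may call it.
>>>>>   (defstruct (theorem (:constructor %make-theorem (prop)))
>>>>>     prop)
>>>>>
>>>>>   (defun axiom (prop)
>>>>>     "Trusted entry point: accept PROP as an axiom."
>>>>>     (%make-theorem prop))
>>>>>
>>>>>   (defun modus-ponens (thm-imp thm-ant)
>>>>>     "From |- (implies A B) and |- A, conclude |- B."
>>>>>     (let ((imp (theorem-prop thm-imp)))
>>>>>       (assert (and (eq (first imp) 'implies)
>>>>>                    (equal (second imp)
>>>>>                           (theorem-prop thm-ant))))
>>>>>       (%make-theorem (third imp))))
>>>>>
>>>>>   ;; Example: from (implies p q) and p, derive q.
>>>>>   (modus-ponens (axiom '(implies p q)) (axiom 'p))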
>>>>>
>>>>> I've been studying Field Programmable Gate Arrays (FPGA)
>>>>> that allow you to create your own hardware in a C-like
>>>>> language (Verilog). It allows you to check the chip you build
>>>>> all the way down to the transistor states. You can create
>>>>> things as complex as a whole CPU or as simple as a trusted
>>>>> nucleus. (youtube: Building a CPU on an FPGA). ACL2 has a
>>>>> history of verifying hardware logic.
>>>>>
>>>>> It appears that, assuming I can understand Bauer's
>>>>> Andromeda system, it would be possible and not that hard
>>>>> to implement a trusted kernel on an FPGA the size and
>>>>> form factor of a USB stick.
>>>>>
>>>>> Trust "down to the metal".
>>>>>
>>>>> Tim
>>>>>
>>>>>
>>>>>
>>>>> On 12/15/19, Tim Daly <axiom...@gmail.com> wrote:
>>>>>> Progress is happening on the new Sane Axiom compiler.
>>>>>>
>>>>>> Recently I've been musing about methods to insert axioms
>>>>>> into categories so they could be inherited like signatures.
>>>>>> At the moment I've been thinking about adding axioms in
>>>>>> the same way that signatures are written, adding them to
>>>>>> the appropriate categories.
>>>>>>
>>>>>> But this is an interesting design question.
>>>>>>
>>>>>> Axiom already has a mechanism for inheriting signatures
>>>>>> from categories. That is, we can get a plus signature from,
>>>>>> say, the Integer category.
>>>>>>
>>>>>> Suppose we follow the same pattern. Currently Axiom
>>>>>> inherits certain so-called "attributes", such as
>>>>>> ApproximateAttribute,
>>>>>> which implies that the results are only approximate.
>>>>>>
>>>>>> We could adapt the same mechanism to inherit the Transitive
>>>>>> property by defining it in its own category. In fact, if we
>>>>>> follow Carette and Farmer's "tiny theories" architecture,
>>>>>> where each property has its own inheritable category,
>>>>>> we can "mix and match" the axioms at will.
>>>>>>
>>>>>> An "axiom" category would also export a function. This function
>>>>>> would essentially be a "tactic" used in a proof. It would modify
>>>>>> the proof step by applying the function to the step.
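>>>>>>
>>>>>> In the CLOS terms the compiler already uses, a sketch of
>>>>>> that design (hypothetical names) might make each axiom a
>>>>>> mixin class that exports its tactic:
>>>>>>
>>>>>>   ;; Each axiom is its own tiny class, so domains can
>>>>>>   ;; mix and match properties, as with signatures.
>>>>>>   (defclass transitive () ())
>>>>>>   (defclass commutative () ())
>>>>>>
>>>>>>   (defgeneric apply-tactic (axiom proof-step)
>>>>>>     (:documentation "Modify a proof step via this axiom."))
>>>>>>
>>>>>>   (defmethod apply-tactic ((ax transitive) step)
>>>>>>     ;; e.g. rewrite (rel a b) and (rel b c) to (rel a c)
>>>>>>     (list 'by-transitivity step))
>>>>>>
>>>>>>   ;; A domain inherits axioms as it inherits signatures.
>>>>>>   (defclass integer-domain (transitive commutative) ())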
>>>>>>
>>>>>> Theorems would have the same structure.
>>>>>>
>>>>>> This allows theorems to be constructed at run time (since
>>>>>> Axiom supports "First Class Dynamic Types").
>>>>>>
>>>>>> In addition, this design can be "pushed down" into the Spad
>>>>>> language so that Spad statements (e.g. assignment) have
>>>>>> proof-related properties. A range such as [1..10] would
>>>>>> provide explicit bounds in a proof "by language definition".
>>>>>> Defining the logical properties of language statements in
>>>>>> this way would make it easier to construct proofs since the
>>>>>> invariants would be partially constructed already.
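>>>>>>
>>>>>> For instance (a sketch, not actual compiler code), the
>>>>>> parse of a range could yield its bounds as a ready-made
>>>>>> proposition:
>>>>>>
>>>>>>   (defun range-invariant (var lo hi)
>>>>>>     "The proof obligation contributed 'by language
>>>>>>     definition' when VAR ranges over [lo..hi]."
>>>>>>     `(and (<= ,lo ,var) (<= ,var ,hi)))
>>>>>>
>>>>>>   ;; i := [1..10] would contribute:
>>>>>>   (range-invariant 'i 1 10)
>>>>>>   ;; => (AND (<= 1 I) (<= I 10))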
>>>>>>
>>>>>> This design merges the computer algebra inheritance
>>>>>> structure with the proof of algorithms structure, all under
>>>>>> the same mechanism.
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>> On 12/11/19, Tim Daly <axiom...@gmail.com> wrote:
>>>>>>> I've been reading Stephen Kell's (Univ of Kent,
>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21/) essay
>>>>>>> "Seven deadly sins of talking about “types”"
>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21//blog/2014/10/07/
>>>>>>>
>>>>>>> He raised an interesting idea toward the end of the essay
>>>>>>> that type-checking could be done outside the compiler.
>>>>>>>
>>>>>>> I can see a way to do this in Axiom's Sane compiler.
>>>>>>> It would be possible to run a program over the source code
>>>>>>> to collect the information and write a stand-alone type
>>>>>>> checker. This "unbundles" type checking and compiling.
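>>>>>>>
>>>>>>> A minimal sketch of such an unbundled pass in Common
>>>>>>> Lisp (hypothetical, not Sane code): walk the source forms
>>>>>>> and collect declared types for a separate tool to check:
>>>>>>>
>>>>>>>   (defun collect-declarations (forms)
>>>>>>>     "Record (name -> type) pairs from declarations,
>>>>>>>     without compiling anything."
>>>>>>>     (let ((table (make-hash-table)))
>>>>>>>       (dolist (form forms table)
>>>>>>>         (when (and (consp form)
>>>>>>>                    (eq (first form) 'declare-type))
>>>>>>>           (destructuring-bind (op name type) form
>>>>>>>             (declare (ignore op))
>>>>>>>             (setf (gethash name table) type))))))
>>>>>>>
>>>>>>>   ;; A stand-alone checker consumes the table later,
>>>>>>>   ;; entirely outside the compiler.
>>>>>>>   (collect-declarations '((declare-type x integer)
>>>>>>>                           (declare-type s string)))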
>>>>>>>
>>>>>>> Taken further I can think of several other kinds of checkers
>>>>>>> (aka 'linters') that could be unbundled.
>>>>>>>
>>>>>>> It is certainly something to explore.
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>>
>>>>>>> On 12/8/19, Tim Daly <axiom...@gmail.com> wrote:
>>>>>>>> The Axiom Sane compiler is being "shaped by the hammer
>>>>>>>> blows of reality", to coin a phrase.
>>>>>>>>
>>>>>>>> There are many goals. One of the primary goals is creating a
>>>>>>>> compiler that can be understood, maintained, and modified.
>>>>>>>>
>>>>>>>> So the latest changes involved adding multiple index files.
>>>>>>>> These are documentation (links to where terms are mentioned
>>>>>>>> in the text), code (links to the implementation of things),
>>>>>>>> error (links to where errors are defined), signatures (links to
>>>>>>>> the signatures of lisp functions), figures (links to figures),
>>>>>>>> and separate category, domain, and package indexes.
>>>>>>>>
>>>>>>>> The tikz package is now used to create "railroad diagrams"
>>>>>>>> of syntax (à la the PASCAL report). The implementation of
>>>>>>>> those diagrams follows immediately. Collectively these will
>>>>>>>> eventually define at least the syntax of the language. In the
>>>>>>>> ideal, changing the diagram would change the code but I'm
>>>>>>>> not that clever.
>>>>>>>>
>>>>>>>> Reality shows up with the current constraint that the
>>>>>>>> compiler should accept the current Spad language as
>>>>>>>> closely as possible. Of course, plans are to include new
>>>>>>>> constructs (e.g. hypothesis, axiom, specification, etc)
>>>>>>>> but these are being postponed until "syntax complete".
>>>>>>>>
>>>>>>>> All parse information is stored in a parse object, which
>>>>>>>> is a CLOS object (and therefore a Common Lisp type).
>>>>>>>> Fields within the parse object, e.g. variables, are also
>>>>>>>> CLOS objects (and therefore Common Lisp types).
>>>>>>>> It's types all the way down.
>>>>>>>>
>>>>>>>> These types are being used as 'signatures' for the
>>>>>>>> lisp functions. The goal is to be able to type-check the
>>>>>>>> compiler implementation as well as the Sane language.
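>>>>>>>>
>>>>>>>> A sketch of that structure (hypothetical class names):
>>>>>>>> parse nodes as CLOS classes, with ftype declamations
>>>>>>>> serving as the lisp-level signatures:
>>>>>>>>
>>>>>>>>   ;; Every parse result is a CLOS object, hence a
>>>>>>>>   ;; Common Lisp type.
>>>>>>>>   (defclass parse-object () ())
>>>>>>>>
>>>>>>>>   (defclass variable-node (parse-object)
>>>>>>>>     ((name :initarg :name :type symbol
>>>>>>>>            :reader node-name)))
>>>>>>>>
>>>>>>>>   ;; The lisp-level 'signature' of a compiler function,
>>>>>>>>   ;; checkable by the implementation itself.
>>>>>>>>   (declaim (ftype (function (string) parse-object)
>>>>>>>>                   parse-expression))
>>>>>>>>
>>>>>>>>   (defun parse-expression (source)
>>>>>>>>     "Stub: parse SOURCE into a parse object."
>>>>>>>>     (declare (ignorable source))
>>>>>>>>     (make-instance 'variable-node :name 'x))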
>>>>>>>>
>>>>>>>> The parser is designed to "wrap around" so that the
>>>>>>>> user-level output of a parse should be the user-level
>>>>>>>> input (albeit in a 'canonical' form). This "mirror effect"
>>>>>>>> should make it easy to see that the parser properly
>>>>>>>> parsed the user input.
>>>>>>>>
>>>>>>>> The parser is "first class" so it will be available at
>>>>>>>> runtime as a domain allowing Spad code to construct
>>>>>>>> Spad code.
>>>>>>>>
>>>>>>>> One plan, not near implementation, is to "unify" some
>>>>>>>> CLOS types with the Axiom types (e.g. String). How
>>>>>>>> this will happen is still in the land of design. This would
>>>>>>>> "ground" Spad in lisp, making them co-equal.
>>>>>>>>
>>>>>>>> Making lisp "co-equal" is a feature, especially as Spad is
>>>>>>>> really just a domain-specific language in lisp. Lisp
>>>>>>>> functions (with CLOS types as signatures) would be
>>>>>>>> available for implementing Spad functions. This not
>>>>>>>> only improves the efficiency, it would make the
>>>>>>>> BLAS/LAPACK (see volume 10.5) code "native" to Axiom.
>>>>>>>>
>>>>>>>> On the theory front I plan to attend the Formal Methods
>>>>>>>> in Mathematics / Lean Together conference, mostly to
>>>>>>>> know how little I know, especially that I need to know.
>>>>>>>> http://www.andrew.cmu.edu/user/avigad/meetings/fomm2020/
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 11/28/19, Jacques Carette <care...@mcmaster.ca> wrote:
>>>>>>>>> The underlying technology to use for building such an
>>>>>>>>> algebra library is documented in the paper "Building on
>>>>>>>>> the Diamonds between Theories: Theory Presentation
>>>>>>>>> Combinators"
>>>>>>>>> http://www.cas.mcmaster.ca/~carette/publications/tpcj.pdf
>>>>>>>>> [which will also be on the arxiv by Monday, and has been
>>>>>>>>> submitted to a journal].
>>>>>>>>>
>>>>>>>>> There is a rather full-fledged prototype, very well
>>>>>>>>> documented at
>>>>>>>>> https://alhassy.github.io/next-700-module-systems/prototype/package-former.html
>>>>>>>>> (source at https://github.com/alhassy/next-700-module-systems).
>>>>>>>>> It is literate source.
>>>>>>>>>
>>>>>>>>> The old prototype was hard to find - it is now at
>>>>>>>>> https://github.com/JacquesCarette/MathScheme.
>>>>>>>>>
>>>>>>>>> There is also a third prototype in the MMT system, but it
>>>>>>>>> does not quite function properly today; it is under repair.
>>>>>>>>>
>>>>>>>>> The paper "A Language Feature to Unbundle Data at Will"
>>>>>>>>> (https://alhassy.github.io/next-700-module-systems/papers/gpce19_a_language_feature_to_unbundle_data_at_will.pdf)
>>>>>>>>> is also relevant, as it solves a problem with parametrized
>>>>>>>>> theories (parametrized Categories in Axiom terminology)
>>>>>>>>> that all current systems suffer from.
>>>>>>>>>
>>>>>>>>> Jacques
>>>>>>>>>
>>>>>>>>> On 2019-11-27 11:47 p.m., Tim Daly wrote:
>>>>>>>>>> The new Sane compiler is also being tested with the FriCAS
>>>>>>>>>> algebra code. The compiler knows about the language but
>>>>>>>>>> does not depend on the algebra library (so far). It should be
>>>>>>>>>> possible, by design, to load different algebra towers.
>>>>>>>>>>
>>>>>>>>>> In particular, one idea is to support the "tiny theories"
>>>>>>>>>> algebra from Carette and Farmer. This would allow much
>>>>>>>>>> finer grain separation of algebra and axioms.
>>>>>>>>>>
>>>>>>>>>> This "flexible algebra" design would allow things like the
>>>>>>>>>> Lean theorem prover effort in a more natural style.
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 11/26/19, Tim Daly <axiom...@gmail.com> wrote:
>>>>>>>>>>> The current design and code base (in bookvol15) supports
>>>>>>>>>>> multiple back ends. One will clearly be a common lisp.
>>>>>>>>>>>
>>>>>>>>>>> Another possible design choice is to target the GNU
>>>>>>>>>>> GCC intermediate representation, making Axiom "just
>>>>>>>>>>> another front-end language" supported by GCC.
>>>>>>>>>>>
>>>>>>>>>>> The current intermediate representation does not (yet)
>>>>>>>>>>> make any decision about the runtime implementation.
>>>>>>>>>>>
>>>>>>>>>>> Tim
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 11/26/19, Tim Daly <axiom...@gmail.com> wrote:
>>>>>>>>>>>> Jason Gross and Adam Chlipala ("Parsing Parses") developed
>>>>>>>>>>>> a dependently typed general parser for context-free grammars
>>>>>>>>>>>> in Coq.
>>>>>>>>>>>>
>>>>>>>>>>>> They used the parser to prove its own completeness.
>>>>>>>>>>>>
>>>>>>>>>>>> Unfortunately, Spad is not context-free.
>>>>>>>>>>>> But it is an interesting thought exercise to consider
>>>>>>>>>>>> an "Axiom on Coq" implementation.
>>>>>>>>>>>>
>>>>>>>>>>>> Tim
>>>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
