I'm trying to figure out the most elegant way to put Spad on top of a
well-constructed set of theories. This would lead to some compiler
and interpreter changes to narrow the gap between theory and practice.

There is a theory called the polymorphic lambda calculus (aka System F).
Whereas the lambda calculus allows us to make functions, the polymorphic
lambda calculus allows us to use types as parameters. We do this all the
time in Axiom; we just don't ground it in theory.
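
A minimal OCaml sketch of the distinction (illustrative only; this is not
Spad): in System F the type parameter is explicit, and OCaml can express
that explicit universal quantification with a polymorphic record field.

    (* The field's type is universally quantified over 'a: the System F
       type abstraction ("big lambda") made concrete. *)
    type poly_id = { id : 'a. 'a -> 'a }

    let identity = { id = fun x -> x }

    (* The same value instantiated at two different types. *)
    let _ = identity.id 42
    let _ = identity.id "hello"

Axiom's parameterized constructors (a domain taking, say, a Ring as an
argument) are doing the same kind of type-as-parameter abstraction, just
without this formal grounding.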

One possible path might be to re-host the Axiom compiler on an ML
backend (that is, with ML serving as the intermediate backend). However,
it is not yet clear to me
that ML can handle Axiom's coercions and conversions. So far I have
not found anything that can. I might need to write a second survey paper.
At the moment I'm reading the ML compiler source code.

There is work by Dolan (Dola17) on Algebraic Subtyping that might
handle part of it, but I'm still working my way through his thesis.
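
To make the coercion issue concrete, here is a hedged OCaml sketch (the
type names are invented stand-ins, not actual Axiom domains or ML library
types). Axiom's interpreter will silently treat an Integer as a Fraction
Integer when an operation needs it; in ML the corresponding coercion must
be written and applied explicitly, and plain Hindley-Milner inference will
not insert it for you.

    (* Invented stand-ins for Axiom's Integer and Fraction Integer. *)
    type integer = int
    type fraction = { num : int; den : int }

    (* The coercion Axiom's interpreter would insert automatically. *)
    let coerce (n : integer) : fraction = { num = n; den = 1 }

    let add_frac a b =
      { num = (a.num * b.den) + (b.num * a.den); den = a.den * b.den }

    (* In ML the coercion has to appear explicitly in the source text. *)
    let _ = add_frac (coerce 3) { num = 1; den = 2 }

That is part of why Dolan's work looks relevant: algebraic subtyping builds
subtyping directly into type inference, which might cover a class of these
implicit conversions.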

Tim


On Fri, Apr 6, 2018 at 5:44 PM, Tim Daly <axiom...@gmail.com> wrote:

> I'd also include this, from Noam Zeilberger's PhD thesis:
>
>     "The proofs-as-programs analogy really has demonstrated remarkable
> utility
>     in driving progress in programming languages. Over the past few
> decades,
>     many different ideas from logic have permeated into the field,
> reinterpreted
>     from a computational perspective and collectively organized as type
> theory.
>     In the 1980s, type theory dramatically improved the theoretical
> understanding
>     of difficult language concepts such as absract data types and
> polymorphism,
>     and led directly to the development of groundbreaking new languages
> such as
>     ML and Haskell. In the other direction, type theory has also been
> applied back
>     towards the mechanization of mathematics, and the Curry-Howard
> correspondence
>     forms the basis for successful proof assistants such as Coq and Agda.
> Not least,
>     the analogy between proving and programming has the social effect of
> linking
>     two different communities of researchers: although people who
> write/study proofs
>     and people who write/study programs often have very different
> motivations, the
>     Curry-Howard correspondence says that in some ways they are doing very
>     similar things."
>
> The smart (of which I do not have enough) and elegant (of which I am not
> capable) way to prove Axiom correct would be to fully exploit Curry-Howard
> in a way that would make the programs BE the proofs. That's the correct
> way to approach this problem. At my current state of understanding, this
> solution amounts to the old blackboard joke of "insert a miracle here".
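>
> As a tiny, purely illustrative OCaml sketch of "programs BE the proofs"
> (this is not Axiom/Spad code): read the type below as the proposition
> "if A implies B and B implies C, then A implies C"; the function that
> inhabits it is a proof of that proposition.
>
>     (* Under Curry-Howard the type is a proposition (transitivity of
>        implication) and the function is its proof: it builds evidence
>        for A -> C from evidence for A -> B and for B -> C. *)
>     let compose (f : 'a -> 'b) (g : 'b -> 'c) : 'a -> 'c =
>       fun x -> g (f x)
>
> Scaling that idea from toy propositions up to Axiom's algorithms is
> exactly where the "miracle" currently sits.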
>
> Tim
>
>
> On Fri, Apr 6, 2018 at 4:50 PM, Tim Daly <axiom...@gmail.com> wrote:
>
>> Apropos of the coercion issue, Henri Tuhola just pointed me at a recent
>> PhD thesis by Stephen Dolan on Algebraic Subtyping:
>> https://www.cl.cam.ac.uk/~sd601/thesis.pdf
>>
>> Tim
>>
>>
>> On Fri, Apr 6, 2018 at 4:04 PM, Tim Daly <axiom...@gmail.com> wrote:
>>
>>> >My reason for not so much trusting theorem-proving isn't because I don't
>>> >understand much of it (although that is a fact), but because of its
>>> >formalism
>>> >which you obviously love.
>>>
>>> Actually, I find the formalism to be painful and very non-intuitive.
>>> Experience has shown me that it takes about 18 months to climb that
>>> curve.
>>>
>>> But consider that there are two VERY large areas of computational
>>> mathematics that have grown up side-by-side for the last half-century.
>>> CAS and Proof exist as essentially separate bodies of computational
>>> mathematics.
>>>
>>> My research effort is fundamentally about joining these two areas.
>>>
>>> Proof systems can't use CAS results because they have no basis for trust.
>>> They take stabs at algorithmic proofs but have no good structure to
>>> combine them (which Axiom provides).
>>>
>>> CAS systems don't do proofs and, in fact, are nearly impossible to use
>>> as a basis for proof (MMA uses rewriting, Maple and others use ad hoc
>>> tree-like structures, etc.). Axiom provides a good basis for proofs.
>>>
>>> Axiom is ideally structured to be a bridge.
>>>
>>> Since I'm coming from the CAS world my view of bridge-building
>>> involves applying Proof technology to Axiom. If I were coming from
>>> the Proof world I'd be trying to structure a proof system along the
>>> lines of Axiom so I could organize the proven algorithms. These are
>>> two ends of the same bridge.
>>>
>>>
>>>
>>>
>>> >You consider Axiom code (and by implication, mathematics on which it is
>>> >based) as "hand-waving", which in my opinion, does not necessarily mean
>>> >non-rigorous.
>>>
>>> I don't consider the mathematics to be "hand-waving". But I do consider
>>> the code to be hand-waving. After 47 years of programming I'm well aware
>>> that "it works for my test cases" is a very poor measure of correctness.
>>> There is a half-century of research that exists, is automated, and
>>> attacks that question in a mathematically sound way.
>>>
>>> Axiom code is rather opaque. It is, in general, excellent code. Barry
>>> and James created an excellent structure for its time. But time has
>>> also shown me the cracks. One example is the coercion code, a lot of
>>> which is ad hoc, implemented as special cases in the interpreter. There
>>> are reasonable theories about that which ought to be implemented.
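>>>
>>> To illustrate what "ad hoc" means here (a hedged OCaml caricature, not
>>> the actual interpreter code): special-case coercions tend to accumulate
>>> as an ever-growing list of hand-written cases, whereas a principled
>>> design would derive them from one declared coercion relation. The
>>> sketch shows only the ad hoc side.
>>>
>>>     (* Caricature: every allowed coercion is a separate special case. *)
>>>     type domain = Integer | FractionInteger | Polynomial | Expression
>>>
>>>     let coercible src dst =
>>>       match (src, dst) with
>>>       | (Integer, FractionInteger) -> true
>>>       | (Integer, Polynomial)      -> true
>>>       | (Polynomial, Expression)   -> true
>>>       (* ... and many more hand-written cases ... *)
>>>       | _ -> false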
>>>
>>> I'm not trying to do anything new or innovative. I'm just trying to
>>> combine what everyone does (on the proof side) with what everyone does
>>> (on the CAS side). The end result should be of benefit to all of
>>> computational mathematics.
>>>
>>> Tim
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Fri, Apr 6, 2018 at 1:48 PM, William Sit <w...@ccny.cuny.edu> wrote:
>>>
>>>> ​Dear Tim:
>>>>
>>>>
>>>> I never said, nor implied you are wasting your time.
>>>>
>>>>
>>>> If there is any difference (and similarity) between our views, it is
>>>> about trust. You do not trust Axiom code but you trust theorem-proving
>>>> while I prefer to trust Axiom code and not so much on theorem-proving. Both
>>>> research areas are mathematically based. Clearly, no one can in a
>>>> lifetime understand all the mathematics behind these theories, and
>>>> honestly, the theories in theorem-proving (including rewriting systems,
>>>> type theory, lambda calculus, Coq, etc.) are branches of mathematics
>>>> much like group theory, model theory, number theory, geometries, etc.
>>>>
>>>>
>>>> My reason for not so much trusting theorem-proving isn't because I
>>>> don't understand much of it (although that is a fact), but because of its
>>>> formalism, which you obviously love. You consider most Axiom code (and by
>>>> implication, mathematics on which it is based) as "hand-waving", which in
>>>> my opinion, does not necessarily mean non-rigorous. Mathematicians
>>>> frequently use "hand-waving" for results or methods or processes that are
>>>> well-known (to experts, perhaps) so as not to make the argument too long
>>>> and distract from the main one. So they use "it is easy to see", or "by
>>>> induction", or "play the same game", etc.
>>>>
>>>>
>>>> Believe it or not, theorem-proving uses the same language and
>>>> "hand-waving". Even Coq does the same if you look at its "proofs". See
>>>> the proofs of Lemmas 22 and 23, for example: "Both proofs go over easily
>>>> by induction over the structure of the derivation."
>>>> http://www.lix.polytechnique.fr/Labo/Bruno.Barras/publi/coqincoq.pdf
>>>>
>>>>
>>>> There is ​one exception: Whitehead and Russell's *Principia
>>>> Mathematica*. Check this out (Wikipedia):
>>>>
>>>>
>>>> "Famously, several hundred pages of *PM* precede the proof of the
>>>> validity of the proposition 1+1=2."
>>>>
>>>>
>>>> Now that is another form of "proving to the bare metal". Should we say,
>>>> if we don't trust 1+1=2, then all is lost? No, ..., but ... (see Wiki):
>>>>
>>>>
>>>> *"PM* was an attempt to describe a set of axioms
>>>> <https://en.wikipedia.org/wiki/Axiom> and inference rules
>>>> <https://en.wikipedia.org/wiki/Inference_rule> in symbolic logic
>>>> <https://en.wikipedia.org/wiki/Mathematical_logic> from which all
>>>> mathematical truths could in principle be proven. As such, this ambitious
>>>> project is of great importance in the history of mathematics and 
>>>> philosophy,
>>>> [1]
>>>> <https://en.wikipedia.org/wiki/Principia_Mathematica#cite_note-SEP-1> being
>>>> one of the foremost products of the belief that such an undertaking may be
>>>> achievable. However, in 1931, Gödel's incompleteness theorem
>>>> <https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorem> 
>>>> proved
>>>> definitively that PM, and in fact any other attempt, could never achieve
>>>> this lofty goal; that is, for any set of axioms and inference rules
>>>> proposed to encapsulate mathematics, either the system must be
>>>> inconsistent, or there must in fact be some truths of mathematics which
>>>> could not be deduced from them."​
>>>>
>>>> "The *Principia* covered only set theory
>>>> <https://en.wikipedia.org/wiki/Set_theory>, cardinal numbers
>>>> <https://en.wikipedia.org/wiki/Cardinal_numbers>, ordinal numbers
>>>> <https://en.wikipedia.org/wiki/Ordinal_numbers>, and real numbers
>>>> <https://en.wikipedia.org/wiki/Real_numbers>. Deeper theorems from real
>>>> analysis <https://en.wikipedia.org/wiki/Real_analysis> were not
>>>> included, but by the end of the third volume it was clear to experts that a
>>>> large amount of known mathematics could *in principle* be developed in
>>>> the adopted formalism. It was also clear how lengthy such a development
>>>> would be.
>>>>
>>>> A fourth volume on the foundations of geometry
>>>> <https://en.wikipedia.org/wiki/Geometry> had been planned, but the
>>>> authors admitted to intellectual exhaustion upon completion of the third."
>>>>
>>>>
>>>> Perhaps someday, a more powerful computer system than Coq can reproduce
>>>> PM (and prove its correctness, much like Coq proves the Four Color
>>>> Theorem) and continue further.
>>>>
>>>> Nonetheless, even computers have their limits.
>>>>
>>>>
>>>> That is why I suggest "good enough" is good enough. It is also why I
>>>> admire your tenacity in following your goal. Despite Gödel's
>>>> incompleteness theorem, we need to prove correctness (of Axiom) as
>>>> deeply and widely as we can, and there are many ways to do that.
>>>>
>>>>
>>>> William
>>>>
>>>>
>>>> William Sit
>>>> Professor Emeritus
>>>> Department of Mathematics
>>>> The City College of The City University of New York
>>>> New York, NY 10031
>>>> homepage: wsit.ccny.cuny.edu
>>>> ------------------------------
>>>> *From:* Tim Daly <axiom...@gmail.com>
>>>> *Sent:* Friday, April 6, 2018 6:34 AM
>>>>
>>>> *To:* William Sit
>>>> *Cc:* axiom-dev; Tim Daly
>>>> *Subject:* Re: [Axiom-developer] Proving Axiom Correct
>>>>
>>>> One lesson I have learned over all my years is that you can't ever
>>>> change people's minds by argument or discussion.
>>>>
>>>> I've spent a lot of time and study on the subject of understanding
>>>> better and less error-prone ways of programming. That has led me
>>>> to applying mathematics (a la Floyd/Hoare/Dijkstra). Given that
>>>> Axiom is about computational mathematics, there is a natural goal of
>>>> trying to make Axiom better and less error-prone.
>>>>
>>>> Proving Axiom correct is a very challenging and not very popular idea.
>>>> Writing Spad code is hard. Proving the code correct is beyond the
>>>> skill of most programmers. Sadly, even writing words to explain how
>>>> the code works seems beyond the skill of most programmers.
>>>>
>>>> My point of view is that writing Spad code that way is "pre-proof,
>>>> 19th century 'hand-waving' mathematics". We can do better.
>>>>
>>>> You obviously believe this is a waste of time. You are probably right.
>>>> But I've only got a few years left to waste and this seems to me to be
>>>> an important thing on which to waste them.
>>>>
>>>> Tim
>>>>
>>>>
>>>> On Fri, Apr 6, 2018 at 1:23 AM, William Sit <w...@ccny.cuny.edu> wrote:
>>>>
>>>>> Dear Tim:
>>>>>
>>>>>
>>>>> Thanks again for taking the time to explain your efforts and to
>>>>> further my understanding on the issue of proving "down to the metal". By
>>>>> following all the leads you gave, I had a quick course.
>>>>>
>>>>>
>>>>> Unfortunately, despite the tremendous efforts in the computing
>>>>> industry to assure us of correctness ("proven" by formalism), at least 
>>>>> from
>>>>> what you wrote (which I understand was not meant to be comprehensive), not
>>>>> only are those efforts piecewise, they also concentrate on quite limited
>>>>> aspects.
>>>>>
>>>>>
>>>>> My comments are in regular font; italicized paragraphs are quoted
>>>>> passages; isolated italics and highlights are mine. Itemized headings
>>>>> are from your email.
>>>>>
>>>>>
>>>>> 1. BLAS/LAPACK: they use a lot of coding tricks to avoid
>>>>> overflow/underflow/significance loss/etc​.
>>>>>
>>>>>
>>>>> Coding tricks are adverse to proofs by formal logics, or at least such
>>>>> code makes it practically impossible to assure correctness. Most of the
>>>>> time, these tricks are patches to deal with post-implementation revealed
>>>>> bugs (whose discoveries are more from real-life usage than from program
>>>>> proving).
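>>>>>
>>>>> One classic instance of such a trick (an OCaml sketch of the idea
>>>>> behind routines like SNRM2, not the actual Fortran): computing
>>>>> sqrt(x^2 + y^2) naively overflows when x or y is large, even though
>>>>> the true result is representable, so the code rescales first.
>>>>>
>>>>>     (* Naive version: x *. x overflows for large x. *)
>>>>>     let hypot_naive x y = sqrt ((x *. x) +. (y *. y))
>>>>>
>>>>>     (* Scaled version: factor out the larger magnitude so the
>>>>>        intermediate ratio stays in [0, 1]. *)
>>>>>     let hypot_scaled x y =
>>>>>       let ax = abs_float x and ay = abs_float y in
>>>>>       let m = max ax ay in
>>>>>       if m = 0.0 then 0.0
>>>>>       else
>>>>>         let r = min ax ay /. m in
>>>>>         m *. sqrt (1.0 +. (r *. r))
>>>>>
>>>>> It is exactly this kind of rescaling, scattered through the code,
>>>>> that makes formal verification of numerical libraries so hard.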
>>>>>
>>>>>
>>>>> 2. Field Programmable Gate Array (FPGA)
>>>>>
>>>>>
>>>>> These are great at the gate level and of course, theoretically, they
>>>>> are the basic blocks in building Turing machines (practically, finite 
>>>>> state
>>>>> machines or FSMs). Mealy/Moore state machines are just two ways to
>>>>> look at FSMs; I read
>>>>> www-inst.eecs.berkeley.edu/~cs150/fa05/Lectures/07-SeqLogicIIIx2.pdf
>>>>> and there are nice examples illustrating the steps to construct FSMs
>>>>> (a bit of a nostalgic trip to revisit the Karnaugh maps I learned in
>>>>> the 1970s). I assume these applications can all be automated and proven
>>>>> correct once *the set of specifications* for the finite state machine
>>>>> to perform a task is given, but the final correctness still depends on
>>>>> a *proven* set of specifications! As far as I can discern,
>>>>> specifications are done manually since they are task dependent.
>>>>> As an example, before proving that a compiler is correctly implemented,
>>>>> one needs to specify the language and the compiling algorithm (which,
>>>>> of course, can be and has been done, as with YACC). Despite all the
>>>>> verification and our trust in the proof algorithms and implementations,
>>>>> there remains a small probability that something may still be amiss in
>>>>> the specifications, such as an unanticipated but grammatically correct
>>>>> input being diagnosed as an error. We have all seen compiler error
>>>>> messages that do not pinpoint where the error originated.
>>>>>
>>>>> 3. ProvenVisor on ARMs: http://www.provenrun.com/products/provenvisor/nxp/
>>>>>
>>>>> I read that, and my understanding is that these proven microkernels
>>>>> are concerned with security (both from external and from internal threats)
>>>>> in multicore architectures (which are prevalent in all CPU designs
>>>>> nowadays) and multi- and coexisting OSes. Even under such a general yet
>>>>> limited aspect of "proven correctness" (by formalism no less), there is no
>>>>> guarantee (paragraph under: Formally Proven Security):
>>>>>
>>>>> *In order to achieve the highest level of security, ProvenVisor uses a
>>>>> microkernel architecture implemented using formally proven code to get as
>>>>> close as possible to zero-defects, to guarantee the expected security
>>>>> properties and to ease the path to any required certifications. This
>>>>> architecture and the formal proofs insure the sustainability of the
>>>>> maintenance process of systems based on ProvenVisor. ... *
>>>>>
>>>>> This may be legalese, but the highlighted phrases clearly show
>>>>> that the goal is not "proven and complete specifications" on security. 
>>>>> Even
>>>>> the formally proven code does not guarantee zero-defects on expected
>>>>> violations.  It is only a "best effort" (which still is commendable). The
>>>>> scope is also limited:
>>>>>
>>>>> *Prove & Run’s formal software development toolchain. This means that
>>>>> it is mathematically proven that virtual machines (VMs) hosted by
>>>>> ProvenVisor will always retain their integrity (no other VM can tamper 
>>>>> with
>>>>> their internal data) and confidentiality (no other VM can read their
>>>>> internal data). A misbehaving or malicious OS has no way to modify another
>>>>> OS or to spy on another OS.*
>>>>>
>>>>> A malicious program need not *run* in a *hosted* OS or VM if it
>>>>> gains access to the microkernel, say with external hardware (and
>>>>> external software) that can modify it. After all, there has to be
>>>>> such equipment to test whether the microkernel is working or not and to
>>>>> apply patches if need be.
>>>>>
>>>>> *And a major "professional service" offered is:*
>>>>> *Revamping existing architectures for security with ad-hoc solutions:
>>>>> Secure Boot, secure Over-the-Air firmware update, firewalling, intrusion
>>>>> detection/protection solutions, authentication, secure storage, etc…*
>>>>>
>>>>> Does "ad-hoc solutions" mean patches?
>>>>>
>>>>> 4. The issue of Trust: If you can't trust the hardware gates to
>>>>> compute a valid AND/OR/NOT then all is lost.​
>>>>>
>>>>> Actually, I not only trust, but also believe in the correctness, or
>>>>> proof thereof, of gate-level logic or a microkernel, but that is a far far
>>>>> cry from, say, my trust in the correctness of an implementation of
>>>>> the Risch algorithm or Kovacic's algorithm. The complexity of coding high
>>>>> level symbolic algebraic methods, when traced down to the metal, as you
>>>>> say, is beyond current proof technology (nor is there sufficient interest
>>>>> in the hardware industry to provide that level of research). Note that the
>>>>> industry is still satisfied with "ad-hoc solutions" (and that might mean
>>>>> patches, and we all know how error-prone those are---so much so that 
>>>>> people
>>>>> learn and re-invent the wheel over and over again for a better wheel).
>>>>>
>>>>> Can proof technology catch up, ever?
>>>>>
>>>>> I know *I* can't catch up. Still ignorant and biased.
>>>>>
>>>>> William
>>>>>
>>>>>
>>>>> William Sit
>>>>> Professor Emeritus
>>>>> Department of Mathematics
>>>>> The City College of The City University of New York
>>>>> New York, NY 10031
>>>>> homepage: wsit.ccny.cuny.edu
>>>>> ------------------------------
>>>>> *From:* Tim Daly <axiom...@gmail.com>
>>>>> *Sent:* Thursday, April 5, 2018 2:59 AM
>>>>> *To:* William Sit
>>>>> *Cc:* axiom-dev; Tim Daly
>>>>> *Subject:* Re: [Axiom-developer] Proving Axiom Correct
>>>>>
>>>>> William,
>>>>>
>>>>> I understand the issue of proving "down to the metal".
>>>>>
>>>>> Axiom's Volume 10.5 has my implementation of the BLAS / LAPACK
>>>>> libraries in Common Lisp. Those libraries have a lot of coding tricks
>>>>> to avoid overflow/underflow/significance loss/etc.
>>>>> http://axiom-developer.org/axiom-website/bookvol10.5.pdf
>>>>>
>>>>> Two years ago I got Gustafson's "End of Error" book. His new floating
>>>>> point format promises to eliminate these kinds of errors. Unfortunately
>>>>> no current processor implements his instructions.
>>>>>
>>>>> So I bought an Altera Cyclone Field Programmable Gate Array (FPGA)
>>>>> in order to implement the hardware instructions. This is my setup at
>>>>> home:
>>>>> http://daly.axiom-developer.org/FPGA1.jpg
>>>>> http://daly.axiom-developer.org/FPGA2.jpg
>>>>> http://daly.axiom-developer.org/FPGA3.jpg
>>>>> http://daly.axiom-developre.org/FPGA4.jpg
>>>>> This is not yet published work.
>>>>>
>>>>> The game is to implement the instructions at the hardware gate level
>>>>> using Mealy/Moore state machines. Since this is a clocked logic design
>>>>> the state machines can be modelled as Turing machines.
>>>>>
>>>>> This allows Gustafson's arithmetic to be a real hardware processor.
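>>>>>
>>>>> To show the flavor of the Moore-machine view (a hedged OCaml sketch of
>>>>> the modelling style, not the FPGA design itself): a clocked state
>>>>> machine is a transition function on states plus an output function of
>>>>> the current state, stepped once per clock tick.
>>>>>
>>>>>     (* A Moore machine: the output depends only on the current state. *)
>>>>>     type ('state, 'input, 'output) moore = {
>>>>>       init   : 'state;
>>>>>       step   : 'state -> 'input -> 'state;   (* next-state logic *)
>>>>>       output : 'state -> 'output;            (* output logic     *)
>>>>>     }
>>>>>
>>>>>     (* Run the machine over a list of inputs, one per clock tick. *)
>>>>>     let run m inputs =
>>>>>       let (_, outs) =
>>>>>         List.fold_left
>>>>>           (fun (s, outs) i ->
>>>>>             let s' = m.step s i in
>>>>>             (s', m.output s' :: outs))
>>>>>           (m.init, []) inputs
>>>>>       in
>>>>>       List.rev outs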
>>>>>
>>>>> It turns out that shortly after I bought the FPGA from Altera (2 years
>>>>> ago)
>>>>> Intel bought Altera. They have recently released a new chip that
>>>>> combines
>>>>> the CPU and FPGA
>>>>> https://www.intel.com/content/www/us/en/fpga/devices.html
>>>>>
>>>>> Unfortunately the new chip is only available to data centers in server
>>>>> machines and I can't buy one (nor can I afford it). But this would
>>>>> allow
>>>>> Gustafson arithmetic at the hardware level.
>>>>>
>>>>> My Altera Cyclone has 2 ARM processors built into the chip. ProvenVisor
>>>>> has a verified hypervisor running on the ARM core
>>>>> http://www.provenrun.com/products/provenvisor/nxp/
>>>>>
>>>>> So I've looked at the issue all the way down to the gate-level hardware,
>>>>> which is Boolean logic and Turing-machine-level proofs.
>>>>>
>>>>> It all eventually comes down to trust but I'm not sure what else I can
>>>>> do
>>>>> to ensure that the proofs are trustworthy. If you can't trust the
>>>>> hardware
>>>>> gates to compute a valid AND/OR/NOT then all is lost.
>>>>>
>>>>> Tim
>>>>>
>>>>>
>>>>> On Wed, Apr 4, 2018 at 6:03 PM, William Sit <w...@ccny.cuny.edu>
>>>>> wrote:
>>>>>
>>>>>> ...
>>>>>>
>>>>>> [Message clipped]
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
_______________________________________________
Axiom-developer mailing list
Axiom-developer@nongnu.org
https://lists.nongnu.org/mailman/listinfo/axiom-developer
