I've been examining formal proving in linear logic because I am tired of
my programs crashing without a good way to plan the cleanup.

I have an idea brewing about building a programming language around the
proof presentation of programs. E.g. you'd encode addition with:
(n && n (+);n). The system would render it as a proof, with the semicolon
replaced by a horizontal line.
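A minimal sketch of that rendering idea, in Python: the function name and
layout are my own assumptions, not part of the proposed language. It prints
the parts before the semicolon above a horizontal line and the part after it
below, in the usual inference-rule shape.

```python
# Hypothetical sketch: render "premises ; conclusion" as an inference
# step with a horizontal line, as described above.

def render_rule(premises, conclusion):
    """Print premises over a horizontal line, with the conclusion under it."""
    top = "   ".join(premises)
    width = max(len(top), len(conclusion))
    return "\n".join([top.center(width), "-" * width, conclusion.center(width)])

# addition, (n && n (+);n): two premises n, conclusion n
print(render_rule(["n", "n"], "n"))
```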

There seem to be a few benefits to doing it this way. It helps you focus
on what you prove with the program. You also get more proof irrelevance,
so the programming environment can help more in writing the programs.

Did a little experiment with proving and modeling the Hanoi puzzle on
paper. It ended up describing movement like this:

forall a, b, c:disc. forall i, j:peg,
(top(a, i), top(c, j), above(a, b), !(a<c)) -o
(top(b, i), top(a, j), above(a, c))

Next you'd compose this rule with itself to get more proofs, and end up
with an induction scheme to move any stack anywhere. It also holds for
pegs with no items, as the empty marker is not above anything else.
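The movement rule above can be read as multiset rewriting on facts. Here is a
sketch of one such step in Python, under my own reading of the formula: facts
are tuples, discs are integers (smaller number = smaller disc), and the
persistent !(a<c) becomes a plain comparison. All names are assumptions.

```python
# Facts: ("top", disc, peg) and ("above", upper, lower).
# move consumes top(a,i), top(c,j), above(a,b) and, when a < c,
# produces top(b,i), top(a,j), above(a,c) -- disc a moves from
# peg i (off disc b) onto disc c on peg j.

def move(facts, a, b, c, i, j):
    """Apply the movement rule once, or return None if it does not fire."""
    needed = {("top", a, i), ("top", c, j), ("above", a, b)}
    if not needed <= facts or not a < c:
        return None
    return (facts - needed) | {("top", b, i), ("top", a, j), ("above", a, c)}

# one step: disc 1 sits on disc 2 on peg 0; disc 3 tops peg 1
state = {("top", 1, 0), ("above", 1, 2), ("top", 3, 1)}
state = move(state, a=1, b=2, c=3, i=0, j=1)
print(sorted(state))
# [('above', 1, 3), ('top', 1, 1), ('top', 2, 0)]
```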

Induction would also be used to describe the stacks of discs in the
puzzle. It'd be:

exists a:disc. stack(a) =
  (exists b:disc. above(a, b), !(a<b), stack(b)) | !(forall b:disc. b<a)

E.g. stack(ø) = !(forall a:disc. a<ø)
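To check my reading of this recursion, here is a small Python sketch (names
and the integer encoding are my assumptions): a disc forms a stack if it
rests on some larger disc that itself forms a stack, and the empty marker ø
is the base case.

```python
# Discs are ints (bigger int = bigger disc); BOTTOM plays the role of ø,
# larger than every disc, so stack(ø) holds vacuously.

BOTTOM = float("inf")

def stack_ok(a, above):
    """stack(a): a rests on some larger b with stack(b), or a is the base ø."""
    if a == BOTTOM:
        return True                 # stack(ø): forall b. b < ø
    b = above.get(a)                # the disc directly below a, if any
    return b is not None and a < b and stack_ok(b, above)

above = {1: 2, 2: 3, 3: BOTTOM}     # 1 on 2 on 3 on the empty base
print(stack_ok(1, above))            # True
print(stack_ok(2, {2: 1, 1: BOTTOM}))  # False: 2 is not smaller than 1
```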

The type notation would work sort of like rewriting/space-saving rules. I
haven't decided on how types are represented yet. I think this recursion
scheme is not good, but I haven't figured out a better one yet.

Some of these ideas remind me of the Axiom CAS. The types seem to pin
down the meaning here as well, and operations seem to be conveniently
defined on types.

I also have ideas for a logic programming environment that interleaves
with this and uses the same system of logic. It could be used to write
programs that only run if they succeed in satisfying the goal.
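One way to picture "only run if the goal is satisfied" (my framing, not
necessarily the intended design): search for a complete plan first, and
execute nothing unless the search closes the goal. A depth-bounded sketch in
Python, with all rule names invented for illustration:

```python
# States and goals are frozensets of atoms; rules map a name to a
# (preconditions, effects) pair. solve returns a plan (list of rule
# names) reaching the goal, or None -- so nothing "runs" on failure.

def solve(state, goal, rules, depth=4):
    """Depth-bounded forward search for a rule sequence reaching goal."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for name, (pre, post) in rules.items():
        if pre <= state:
            rest = solve((state - pre) | post, goal, rules, depth - 1)
            if rest is not None:
                return [name] + rest
    return None

rules = {"step1": (frozenset({"a"}), frozenset({"b"})),
         "step2": (frozenset({"b"}), frozenset({"c"}))}
print(solve(frozenset({"a"}), frozenset({"c"}), rules))  # ['step1', 'step2']
print(solve(frozenset({"a"}), frozenset({"z"}), rules))  # None
```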

It takes a bit of effort to build a programming environment to test these
ideas. I've been working on this for a while now.

-- Henri Tuhola

On Tue, 11 Jun 2019 at 15:06, Tim Daly <axiom...@gmail.com> wrote:

> You are probably not old enough to remember 'core' memory.
> It used to be the case that you could turn off a computer and
> then, when you turned it on again, the memory image was
> exactly remembered (in "magnetic core") and your program
> would continue running exactly as though nothing happened.
> Of course, a BIG memory machine had 8,192 bytes.
>
> NVRAM (non-volatile RAM) is coming, in near-terabyte sizes.
> Unlike DRAM (dynamic RAM) it remembers everything even
> though it has no power.
>
> That changes everything we know about computing.
> https://www.youtube.com/watch?v=kBXbump35dg
> https://www.youtube.com/watch?v=X8nMsabFD2w
>
> For example, data structures no longer need a "disk
> representation" since they never leave memory. On the
> other hand, neither does a mangled data structure or a
> memory leak.
>
> Lisp is well suited for this environment. Lisp with a form of
> persistent data structures (Okasaki)  would be even more
> interesting.
>
> This has interesting implications for the new "Sane" Axiom
> compiler / interpreter. For example, an "input" file actually
> resides in interpreter memory at all times.
>
> If nothing else, these videos are an interesting glimpse of
> near-future computing issues.
>
> Tim
>
> _______________________________________________
> Axiom-developer mailing list
> Axiom-developer@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/axiom-developer
>