Hi all, here are the miniKanren things I'm working on:
- What I call a "miliKanren" in TypeScript:
- Unlike miniKanren it has an explicit conjunction operator
("and(...)")
- Unlike microKanren, 'and' and 'or' take any number of arguments,
and 'fresh' takes a function of any arity.
- The resulting look-and-feel of programming in miliKanren is very
much like miniKanren but with explicit and()s.
- Note that JavaScript is eager and has no macros, so you'd better put
all recursive calls inside fresh(() => { ... }) or you'll loop.
- Unlike most kanrens I've seen it does array unification (x.length
=== y.length and equal(x[i], y[i]) for all i)
- It's not *that* different from MK; a cons cell is just a
length-2 array.
- For indefinite-length data structures I end up building cons
lists.
- You can do [key1, value1, [key2, value2, cddr_of_alist]]; not
sure how big the value-add is here.
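To make the array unification concrete, here is a minimal stand-alone sketch in Python (names are mine, not the actual miliKanren source): x unifies with y iff x.length === y.length and the elements unify pairwise.

```python
class Var:
    """A fresh logic variable; distinct by identity."""

def walk(t, s):
    # follow bindings until we hit a non-variable or an unbound variable
    while isinstance(t, Var) and t in s:
        t = s[t]
    return t

def unify(x, y, s):
    """Return an extended substitution, or None on failure. Arrays unify
    iff their lengths match and their elements unify pairwise.
    (No occurs check, as in most kanrens.)"""
    x, y = walk(x, s), walk(y, s)
    if isinstance(x, Var) and x is y:
        return s
    if isinstance(x, Var):
        return {**s, x: y}
    if isinstance(y, Var):
        return {**s, y: x}
    if isinstance(x, list) and isinstance(y, list):
        if len(x) != len(y):
            return None
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return s if x == y else None
```

A cons cell is then just a length-2 array, and the alist encoding above unifies as, e.g., `unify(["k1", "a", rest], ["k1", "a", ["k2", "b", []]], {})`, binding `rest` to the tail.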
- What could perhaps make CLP(variadic-tree) interestingly different
would be constraints of the form "element <some int> of <some array> must
equal <any expr>"
- This puts a lower bound on the length of the array, plus a
unification constraint on the element vs. the expression
- Maybe finite domain natural-number variables would be useful, I
dunno.
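A tiny sketch of what such a constraint could look like (hypothetical API, not implemented anywhere): while the array variable is unbound, the constraint only records a lower bound on the length; once the variable is bound to a concrete array, the bound and the element equality are checked.

```python
def make_elem_constraint(i, expected):
    """Constraint "element i of <array> must equal <expected>".
    Returns (min_length, check): min_length is the lower bound the
    constraint puts on the array's length; check(arr) enforces the
    constraint once the array variable is bound to a concrete list.
    (In a real kanren the equality check would be a unification.)"""
    def check(arr):
        # lower bound on the length, plus element-vs-expression equality
        return len(arr) > i and arr[i] == expected
    return (i + 1, check)
```
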
- BUG: the current implementation of and() takes incoming
environments and threads them through the complete chain of (essentially)
flatmaps one by one, rendering the search incomplete.
- I've attached a gzipped copy of the source. It says "run(3, q =>
nats(q, []));" at the bottom. Change the 3 to a 2 and it terminates.
- (Uh, license: let's say GPLv2+ and I'll 99% probably say yes if you
ask for an MIT-'ish license. Unless you want to operate a nuclear power
plant off my kanren code ^^)
- Theorem proving (in Scheme with miniKanren):
- Logic programming is inherently search, and proof search is search,
and fresh+unification ~= uniform substitution, so this is a good fit,
right? So far I'm 80% positive on that.
- I find it very compelling that I can transcribe the axioms from one
of
https://en.wikipedia.org/wiki/List_of_logic_systems#Implication_and_negation
and then:
- Search for proofs of given theorems
- Enumerate theorems with proofs
- Check proofs
- Reconstruct theorems from their proofs
- (The miliKanren code mentioned above has such a transcription,
but see my above comment about its `and' not giving a complete search;
what I say below is about my Scheme code. I'll share it on request,
but it's not essentially different from the TypeScript code; the
differences are in and due to the underlying MK implementation.)
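To make the "check proofs" mode concrete, here is a small stand-alone sketch in Python (tuple-encoded formulas; names and encoding are mine, not the attached code). The axiom schemas are Łukasiewicz's A1/A2, and the demo proof of a → a is the standard five-step one.

```python
def match(pat, term, s):
    """One-way match: schema metavariables (strings) in pat bind to subterms."""
    if isinstance(pat, str):
        if pat in s:
            return s if s[pat] == term else None
        return {**s, pat: term}
    if not isinstance(term, tuple) or len(term) != len(pat) or pat[0] != term[0]:
        return None
    for p, t in zip(pat[1:], term[1:]):
        s = match(p, t, s)
        if s is None:
            return None
    return s

# Axiom schemas over metavariables p, q, r (A3 omitted: a -> a needs only A1/A2).
A1 = ('->', 'p', ('->', 'q', 'p'))
A2 = ('->', ('->', 'p', ('->', 'q', 'r')),
      ('->', ('->', 'p', 'q'), ('->', 'p', 'r')))

def check_proof(proof, axioms):
    """proof: list of (formula, justification); justification is ('ax', i)
    for an instance of axioms[i], or ('mp', j, k) where line j is X -> Y
    and line k is X."""
    lines = []
    for formula, just in proof:
        if just[0] == 'ax':
            assert match(axioms[just[1]], formula, {}) is not None, formula
        else:
            _, j, k = just
            assert lines[j] == ('->', lines[k], formula), formula
        lines.append(formula)
    return lines[-1]  # the theorem proved

# The classic five-step proof of a -> a:
I = lambda x, y: ('->', x, y)
a, aa = 'a', I('a', 'a')
proof = [
    (I(I(a, I(aa, a)), I(I(a, aa), aa)), ('ax', 1)),
    (I(a, I(aa, a)),                     ('ax', 0)),
    (I(I(a, aa), aa),                    ('mp', 0, 1)),
    (I(a, aa),                           ('ax', 0)),
    (aa,                                 ('mp', 2, 3)),
]
```

The relational version gets the other three modes from the same code: run the checker "backwards" with the proof fresh (search), the theorem fresh (reconstruction), or both fresh (enumeration).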
- It's much faster for (¬¬x → x) and (x → ¬¬x) than for (x → y) → ((y
→ z) → (x → z)), even though the proofs have comparable sizes (21/23/19
IIRC).
- (This is in Łukasiewicz's third axiom system).
- Just typing in the axioms (plus modus ponens) produces a very naive
search:
- Given (a → (b → a)) as an axiom, one way of proving 'a' is:
- prove a
- apply modus ponens to get b → a
- prove b
- apply modus ponens to get a
- The naive transcribe-and-press-play algorithm attempts this!
- Perhaps this could be mitigated in a general way by keeping
a list of all the `theorem' arguments up the call stack and adding
non-member constraints;
- That is, 'foo' should never be a lemma in any proof of 'foo'
(since that's redundant).
- Also, it probably tries to prove many things that are not
tautologies.
- ... which is a giant waste of time if your axioms are
tautologies and your deduction rules preserve tautology-ness.
- Adding in a hypothetical-syllogism rule (~= making use of the
deduction metatheorem) massively speeds up some proofs and dramatically
slows down others
- The naive approach will add fresh logic variables as
assumptions, which I think amounts to inferring lemmas. My guess is
that this eats up all the search time.
- Maybe this approach has good costs and benefits:
- Let 'prove' be a search algorithm that doesn't use the
deduction metatheorem
- To prove a formula ',x → ,y', either prove it using "prove",
or prove y-given-x recursively with this approach. That is, try to
prove e.g. all of:
- ⊢ (x → y) → ((y → z) → (x → z))
- (x → y) ⊢ ((y → z) → (x → z))
- (x → y), (y → z) ⊢ (x → z)
- (x → y), (y → z), x ⊢ z
- ... using the naive search, but no more than these four.
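Generating these sequents is mechanical: peel antecedents off the right-hand spine of the formula. A sketch (tuple-encoded formulas, names mine):

```python
def spine_sequents(formula):
    """For h1 -> (h2 -> ... -> c), yield the sequents
    ([], whole), ([h1], rest), ..., ([h1..hn], c)."""
    hyps, seqs = [], [([], formula)]
    while isinstance(formula, tuple) and formula[0] == '->':
        hyps = hyps + [formula[1]]   # assume the antecedent
        formula = formula[2]         # descend the right-hand spine
        seqs.append((hyps, formula))
    return seqs
```

Applied to (x → y) → ((y → z) → (x → z)) this produces exactly the four sequents above, and no more.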
- I tried running Prover9 to infer each of Łukasiewicz's axiom
systems from each of the others.
- There was one of the axioms which I gave up on; I readily found
a proof of it by hand using the deduction metatheorem and a few simple
lemmas.
- Prover9 takes a bottom-up approach:
- For any pair of theorems (,x → ,y) and ,z there is a unique
most general theorem that can be deduced from them, found essentially
by doing unification.
- Then, let theorems = [axiom_1, ..., axiom_n]; for i in nats:
for j in [0..i] (inclusive): if-let t =
deduce-something-new-from(theorems[i], theorems[j]):
theorems.push(t)
- Terminate when you have proven all the requested theorems.
- In general this works well, but as I said, the deduction
metatheorem helped me find a proof faster than Prover9.
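That unique most-general deduction is what the literature calls condensed detachment: rename the two theorems apart, unify the antecedent of the implication with the second theorem, and take the substituted consequent. A self-contained sketch (my own names and encoding; string variables, tuple formulas; not Prover9's internals):

```python
def rename(f, suffix):
    """Rename every variable in f by appending suffix (renaming apart)."""
    if isinstance(f, str):
        return f + suffix
    return (f[0],) + tuple(rename(x, suffix) for x in f[1:])

def walk(t, s):
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if isinstance(t, str):
        return t == v
    return any(occurs(v, x, s) for x in t[1:])

def unify(x, y, s):
    """Full two-way unification with occurs check."""
    x, y = walk(x, s), walk(y, s)
    if x == y:
        return s
    if isinstance(x, str):
        return None if occurs(x, y, s) else {**s, x: y}
    if isinstance(y, str):
        return None if occurs(y, x, s) else {**s, y: x}
    if x[0] != y[0] or len(x) != len(y):
        return None
    for a, b in zip(x[1:], y[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

def subst(t, s):
    t = walk(t, s)
    if isinstance(t, str):
        return t
    return (t[0],) + tuple(subst(x, s) for x in t[1:])

def detach(imp, z, n):
    """Condensed detachment: from (x -> y) and z, the most general y·σ
    where σ unifies x (renamed apart) with z; None if they don't unify."""
    imp, z = rename(imp, f"_{n}a"), rename(z, f"_{n}b")
    if not (isinstance(imp, tuple) and imp[0] == '->'):
        return None
    s = unify(imp[1], z, {})
    return None if s is None else subst(imp[2], s)

def bottom_up(axioms, steps):
    """Naive bottom-up closure: repeatedly detach pairs of known theorems."""
    theorems, n = list(axioms), 0
    for _ in range(steps):
        new = []
        for imp in theorems:
            for z in theorems:
                t = detach(imp, z, n)
                n += 1
                if t is not None:
                    new.append(t)
        theorems += new
    return theorems
```

For example, detaching A1 = p → (q → p) against a renamed copy of itself yields the most general theorem q → (p' → (q' → p')).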
- It's not obvious to me that miniKanren is a good fit for this
approach:
- once you unify (,x → ,y) and ,z you want to freeze the
involved logic variables, i.e. reify them; later on you want to
replace symbols with logic variables so you can unify again.
- maybe using the unification and environment part of MK as
a library, but without running any goals, would be useful.
- Maybe a mixed approach can perform better:
- Run the bottom-up search for theorems
- Assume all the left-hand sides of implications down the
right-hand spine of the theorem's expression tree, i.e. try to prove
e.g. (x → y), (y → z), x ⊢ z, using the theorems found in the
bottom-up search as lemmas.
- Improving the top-down search beyond the naive transcription
would be useful.
- Using a similar bottom-up technique I've built a tautology
enumerator (in python)
- It could probably be done top-down in MK just fine
- Technique: recursively construct pairs of (formula, its column
in an n-variable truth table)
- eval your k-ary operations over the zip of the k
truth-table-columns of its k sub-formulae.
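A sketch of that technique (my reconstruction in Python, not the original code): pair every formula with its column in the 2^n-row truth table, build compound formulas by evaluating each operator over the zip of its sub-formulae's columns, and keep the formulas whose column is all true.

```python
from itertools import product

def enumerate_tautologies(n_vars, depth):
    """Recursively construct (formula, truth-table column) pairs over
    n_vars variables, up to the given nesting depth; a tautology is a
    formula whose column is all True. (Naive: no deduplication.)"""
    rows = list(product([False, True], repeat=n_vars))
    # seed: the variables, paired with their own columns
    pairs = [(f"v{i}", tuple(r[i] for r in rows)) for i in range(n_vars)]
    for _ in range(depth):
        new = []
        for f, col in pairs:
            # unary 'not': negate the column elementwise
            new.append((('not', f), tuple(not b for b in col)))
        for f1, c1 in pairs:
            for f2, c2 in pairs:
                # binary '->': evaluate over the zip of the two columns
                new.append((('->', f1, f2),
                            tuple((not a) or b for a, b in zip(c1, c2))))
        pairs = pairs + new
    return [f for f, col in pairs if all(col)]
```

Deduplicating on the column (keeping one minimal formula per column) is the obvious next step, and is also where a "non-triviality" filter would plug in.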
- I've also made a half-automation of the independence-proving
technique of
http://www.cogsci.rpi.edu/~heuveb/teaching/Logic/CompLogic/Web/Handouts/Axioms.pdf
- Summary: construct a three-valued truth table for 'not and
'implies such that:
- Your axioms always evaluate to true
- Your deduction rules preserve true-valuations
- There is at least one variable assignment such that your
query formula evaluates to either false or <the third value>
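A sketch of the valuation half of this, using Łukasiewicz's three-valued tables as a concrete stand-in (the handout constructs tables tailored to each independence claim; this one happens to show Peirce's law independent of axioms that are Ł3-valid, since modus ponens preserves the designated value in Ł3):

```python
from itertools import product

VALS = (0, 1, 2)  # 0 = false, 1 = <the third value>, 2 = true (designated)

def ev(f, env):
    """Evaluate a tuple-encoded formula under Łukasiewicz's 3-valued
    tables: not(x) = 2 - x, implies(x, y) = min(2, 2 - x + y)."""
    if isinstance(f, str):
        return env[f]
    if f[0] == 'not':
        return 2 - ev(f[1], env)
    x, y = ev(f[1], env), ev(f[2], env)
    return min(2, 2 - x + y)

def always_designated(f, variables):
    """True iff f evaluates to the designated value under every
    variable assignment."""
    return all(ev(f, dict(zip(variables, vs))) == 2
               for vs in product(VALS, repeat=len(variables)))
```

E.g. a → (b → a) is always designated under these tables, while Peirce's law ((a → b) → a) → a is not, so the latter is underivable from the former (plus modus ponens).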
- Here's how this could be combined to construct a new sound and
complete axiom system:
- Use the tautology enumerator to generate an axiom
- Use the theorem prover to prove all the axioms of some other
sound and complete axiom system
- Simultaneously, use the independence checker to prove at least
one tautology independent (start e.g. with the axioms of the known
good axiom systems)
- If you have completeness, output the axiom system; otherwise,
use the tautology enumerator to add another axiom and repeat.
- Maybe prove it independent (or prove it using already added
axioms) before adding it.
- Note that axiom-system completeness is semi-decidable but not
decidable. If tautology independence were semi-decidable, then I think
incompleteness would be semi-decidable too (dovetail independence
checks over enumerated tautologies), rendering completeness decidable.
- The tautology enumeration and the bottom-up proof search generate
formulae with a lot of redundancy.
- e.g. ¬¬¬¬x → x when ¬¬x → x is already known. Also, v1 → v2 →
... → vn → v1 from (a → (b → a)).
- Technically they're not redundant, but once you get how you
can apply a particular construction n+1 times you don't learn a lot
from seeing it done.
- It would be nice to filter this list on non-triviality, where I
have yet to exactly define triviality.
- This sounds like the kind of job for which datalog is the
right tool.
- Prover9 fails to prove (a → (b → a)) in Meredith's single-axiom
system. I don't know why.
- Assembling all my pieces in a single language is on my TODO list.
- Would it be nice to have a better UI than using a DSL?
- Currently I'm focused on propositional calculus over negation and
implication; should I expand into more things? How different is
Gentzen-style natural deduction? Can both be done well in a single
framework?
- Very much at the drawing board: some kind of search-tree visualization.
- Here are some questions that I think might be useful to answer (in
some visual way):
- What's the structure of the search tree? (How do we deal with
infinity?)
- Given a particular output, what path through the search tree
gave rise to it?
- Given a particular input-environment to a goal, how does it
develop as it goes through the search tree?
- For a particular node, what are its inputs and outputs? (How do
we deal with non-termination?)
- Starting by inputting the empty environment to the root node, in
what order is the tree traversed?
- The big uncertainty in my head is time and non-termination:
- How valuable is a rendering of a step-by-step execution of the
search? Also, what do I mean by "the"? There are e.g. multiple places
you could put or not put a zzz; should I just render one particular
algorithm? Which one?
- Is it valuable to see the state of the search at particular
points in time, or can all the user's questions be answered by
simultaneously showing all the (query-relevant) things that happened
in the past?
- Second big question: what's a good way of visualizing this
information? What's a useful tree navigation UI like?
- Also, mundane question of input and implementation language:
- Should I build it on top of (or into) veneer, which is excellent
but which I don't know?
- Should I build it on top of my own miliKanren, which I know, but
which executes JavaScript rather than Scheme kanren-programs?
- I have no code, just ideas, questions and a desire to make this
work.
- I suspect it would help me tune my top-down theorem search, by
discovering and eliminating stupidities.
Along the way, I've found one addition to MK that might be useful:
call-fresh-vector. It takes a natural number n and a function f and calls
f with one argument, an n-element vector containing distinct fresh
variables. It's probably useful for solving sudoku or, in my case,
constructing truth tables in n-valued logic. I think some finite domain
solver would be even more useful, though.
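In Python terms, call-fresh-vector is just this (hypothetical miliKanren-style API; in the real thing f's return value would be a goal):

```python
class Var:
    """A fresh logic variable; distinct by identity."""

def call_fresh_vector(n, f):
    """Call f with one argument: an n-element vector (list) of
    distinct fresh variables."""
    return f([Var() for _ in range(n)])
```

E.g. `call_fresh_vector(81, sudoku_goal)` would hand a hypothetical `sudoku_goal` its 81 cell variables in one go.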
--
You received this message because you are subscribed to the Google Groups
"minikanren" group.
kanren-polished.ts.gz
