Thanks David!

> I'm really hoping that you'll eventually publish 1+ papers about this,
> or even better, the models & supporting code that generates the proofs.
> I'd *really* like to see the models :-) !!

This is obviously the goal. The timing is still a work in progress,
but we will most definitely share our results formally with the
community.

> Work to minimize existing proofs is great, but I'm hoping that this can
> increasingly be used to create new proofs of claims that are not (yet) 
> formally proven.

This is obviously the plan :+1:

> Well, I think you know the answer. It's useful, with stars and exclamation 
> marks :-).

Your feedback, as well as everyone else's, is powerful validation of our
work, both internally and externally, so expect us to continue pushing!
Thank you.

> In addition, I *suspect* that shorter proofs would be helpful for future ML
> algorithms, because they'd be shown "shorter" ways to do it.
> That latter claim might not be correct; if "shorter" ways are always shown,
> an ML model might only learn "specialized" techniques instead of general
> ones. It might be wise to keep around a few of the older proofs that show
> more general techniques, even if minimized, so that humans and programs
> will have examples of more general approaches.

Agreed.

> If you do that, I urge you to do it in batches to start with (instead of
> all at once). That will let us review the results in part before you
> completely "turn it loose" :-).

Roger that.

-stan

On Sat, Mar 28, 2020 at 11:51 PM David A. Wheeler <[email protected]> wrote:
>
> On Sat, 28 Mar 2020 14:25:41 -0700 (PDT), "'Alexander van der Vekens' via 
> Metamath" <[email protected]> wrote:
> > Hi Stan,
> > I had a look at the proofs - very impressive results! Especially because we
> > had a global minimization recently, and your method found much shorter
> > proofs nevertheless.
>
> I agree. Any ML-based system is impressive if it can find many
> *shorter* proofs than the ones we already have. Nice work.
>
> I compared elfzmlbm - the biggest shortening - and I think the new proof
> is quite readable. Its use of Z>= for example seemed quite clear.
>
> We already run minimization runs and (usually) just accept what is shortest
> (as long as it doesn't add axioms or other discouraged statements).
> So this OpenAI minimization process easily fits into what we're already doing.
>
> As a general rule, if someone thinks that a proof's readability requires
> reaching certain steps, then those steps should be expressed as separate
> lemmas.
>
>
> > On Thursday, March 26, 2020 at 5:24:02 PM UTC+1, Stanislas Polu wrote:
> > > At OpenAI we've been exploring the use of deep learning for formal
> > > math for the past couple months. We've worked a lot with Metamath
> > > whose simplicity makes it very amenable to deep learning.
>
> I'm really hoping that you'll eventually publish 1+ papers about this,
> or even better, the models & supporting code that generates the proofs.
> I'd *really* like to see the models :-) !!
>
> Work to minimize existing proofs is great, but I'm hoping that this can
> increasingly be used to create new proofs of claims that are not (yet) 
> formally proven.
>
> > > We're very curious to hear from the community on whether this is
> > > useful. Additionally, if the authors of the original proofs shortened
> > > by this PR have qualitative feedback on the proposed versions, we'd be
> > > keen on hearing from them as well, obviously.
>
> Well, I think you know the answer. It's useful, with stars and exclamation 
> marks :-).
>
> This is also revealing that some of our conventions aren't written down
> and should be. Here are the documented conventions:
> http://us.metamath.org/mpeuni/conventions.html
> For example, we mention "OLD" and "ALT" but not why they might be used
> and that they usually follow the "main" proof. I'll make a PR to fix that.
>
> > > As mentioned in the PR the models were run on a sample of ~1k proofs
> > > from set.mm so we could probably get 45 times as many shorter proofs
> > > if there is interest for it!
>
> Yes, I would certainly like to see that. Shorter proofs tend to be easier to 
> understand.
>
> In addition, I *suspect* that shorter proofs would be helpful for future ML
> algorithms, because they'd be shown "shorter" ways to do it.
> That latter claim might not be correct; if "shorter" ways are always shown,
> an ML model might only learn "specialized" techniques instead of general
> ones. It might be wise to keep around a few of the older proofs that show
> more general techniques, even if minimized, so that humans and programs
> will have examples of more general approaches.
>
> If you do that, I urge you to do it in batches to start with (instead of
> all at once). That will let us review the results in part before you
> completely "turn it loose" :-).
>
> --- David A. Wheeler
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Metamath" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/metamath/E1jIKIt-0002k1-EZ%40rmmprod07.runbox.
