Interesting stuff. I've noticed some possible shortenings which were also 
caught by this tool, or at least are of the same sort. For example, negdii 
could use negdi but doesn't, perhaps because negdi didn't exist when negdii 
was first added. Having an automated way of finding such cases (beyond what 
the minimizer can do) could make those shortenings more likely to happen.
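As a rough illustration of what such an automated check might look like (this is a toy sketch, not the tool discussed here, and the theorem statements below are illustrative stand-ins, not the actual set.mm statements):

```python
# Toy sketch: flag proof steps whose intermediate result exactly matches
# another theorem's conclusion, suggesting that theorem could be cited
# directly instead of being re-derived inline. All statements here are
# hypothetical placeholders for illustration only.

theorems = {
    "negdi":  "-(A + B) = -A + -B",                   # placeholder statement
    "negdii": "-(A + B) = -A + -B (inference form)",  # placeholder statement
}

# Each proof is a list of (step label, intermediate statement) pairs.
proofs = {
    "negdii": [
        ("step1", "A e. CC"),
        ("step2", "-(A + B) = -A + -B"),  # re-derives negdi's conclusion
        ("step3", "-(A + B) = -A + -B (inference form)"),
    ],
}

def find_shortenings(theorems, proofs):
    """Return (theorem, step label, citable theorem) triples where a
    proof step re-proves the full conclusion of some other theorem."""
    conclusions = {stmt: name for name, stmt in theorems.items()}
    hits = []
    for thm, steps in proofs.items():
        for label, stmt in steps:
            cited = conclusions.get(stmt)
            if cited and cited != thm:
                hits.append((thm, label, cited))
    return hits

print(find_shortenings(theorems, proofs))
# prints [('negdii', 'step2', 'negdi')]
```

A real implementation would of course have to compare statements up to variable renaming and check hypotheses, not just match strings, which is roughly where the minimizer stops and a smarter search could go further.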

I didn't look at this in great detail, but based on a first glance, it seems to 
be one of the more promising automated systems for metamath proofs (and it 
isn't the only one we have seen).

On March 26, 2020 9:23:44 AM PDT, 'Stanislas Polu' via Metamath 
<[email protected]> wrote:
>Hi!
>
>At OpenAI we've been exploring the use of deep learning for formal
>math for the past couple months. We've worked a lot with Metamath
>whose simplicity makes it very amenable to deep learning.
>
>Today, I submitted a PR containing a sample of proofs that were found
>by our models and that are substantially shorter than their human
>curated counterparts:
>
>https://github.com/metamath/set.mm/pull/1547
>
>We're very curious to hear from the community on whether this is
>useful. Additionally, if the authors of the original proofs shortened
>by this PR have qualitative feedback on the proposed versions, we'd be
>keen on hearing from them as well, obviously.
>
>As mentioned in the PR the models were run on a sample of ~1k proofs
>from set.mm so we could probably get 45 times as many shorter proofs
>if there is interest for it!
>
>Looking forward to your feedback and comments.
>
>-stan
