Hi!

At OpenAI we've been exploring the use of deep learning for formal
math for the past couple of months. We've worked a lot with Metamath,
whose simplicity makes it very amenable to deep learning.

Today, I submitted a PR containing a sample of proofs that were found
by our models and that are substantially shorter than their
human-curated counterparts:

https://github.com/metamath/set.mm/pull/1547

We're very curious to hear from the community on whether this is
useful. Additionally, if the authors of the original proofs shortened
by this PR have qualitative feedback on the proposed versions, we'd
obviously be keen to hear from them as well.

As mentioned in the PR, the models were run on a sample of ~1k proofs
from set.mm, so we could probably find 45 times as many shorter proofs
if there is interest in it!

Looking forward to your feedback and comments.

-stan
