Nice work! I really hope that a publication from such a credible AI company 
will bring more attention to Metamath.

I have a few small questions; I hope you can shed some light on some of 
them.

1) Maybe you mention it in the article and I missed it, but why is the 
model called "GPT-f"? Does the 'f' stand for "formal"?

2) Have you tried augmenting the set.mm database by adding theorems with 
renamed variables (e.g., a version of id with |- ( ps -> ps ) instead of 
|- ( ph -> ph ))? Some variables are used much more often in set.mm than 
others, so it seems plausible that the model "learns less" about the rare 
ones. Does GPT-f have less luck proving statements that use rare variables 
than ones that use the common variables?
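To make the suggestion concrete, here is a minimal sketch (in Python) of the 
kind of renaming augmentation I have in mind. The statement string, the 
variable list, and the rename_vars helper are all just illustrative; a real 
pass over set.mm would also have to rewrite the matching $f hypotheses and 
respect any $d (distinct variable) conditions.

```python
# Illustrative sketch: rewrite the wff metavariables in a Metamath
# statement under a renaming. Metamath statements are whitespace-
# delimited tokens, so a simple split/join suffices here.

WFF_VARS = ["ph", "ps", "ch", "th", "ta"]  # common set.mm wff variables

def rename_vars(statement: str, mapping: dict) -> str:
    """Apply a variable renaming token-by-token, e.g. {'ph': 'ps'}."""
    tokens = statement.split()
    return " ".join(mapping.get(tok, tok) for tok in tokens)

# Example: turn id's conclusion |- ( ph -> ph ) into |- ( ps -> ps )
print(rename_vars("|- ( ph -> ph )", {"ph": "ps"}))
# prints: |- ( ps -> ps )
```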

3) Do you have any plans to release GPT-f in the future, as OpenAI did with 
GPT-2?
