Helfgott's main paper says that the analytic (paper) proof covers C >= 10^27,
so numerical verification of ternary Goldbach is required up to 10^27.
However, according to the first page of https://arxiv.org/pdf/1305.3062.pdf
(the reference for the computation of the low part), to get this bound it
suffices to verify the Riemann Hypothesis up to height at least 3 x 10^10.
The cited reference https://people.maths.bris.ac.uk/~madjp/thesis5.pdf
doesn't explicitly mention this number (it mentions a smaller number for GRH
that might be convertible to RH), but the results there suggest that about
10^9 low- to high-precision integrals (mostly low, a few high) were needed
to establish the main result. This puts it within the range of modern
computers, which is why we have the result in the first place.
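
Just to get a feel for the scale (the per-integral cost below is purely my
assumption, not a figure from Platt's thesis, so take the answer with
several orders of magnitude of salt):

    # Back-of-the-envelope scaling only; ops_per_integral is a guess.
    n_integrals = 10**9         # roughly 10^9 integrals, mostly low precision
    ops_per_integral = 10**5    # ASSUMED average cost in floating-point ops
    flops_per_sec = 10**10      # ~10 Gflop/s for one modern core
    core_hours = n_integrals * ops_per_integral / flops_per_sec / 3600
    print(core_hours)           # a few core-hours under these assumptions

The total is linear in the per-integral cost, so the feasibility question
really comes down to how expensive the rare high-precision cases are.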

Of course, if it is only *just* within the range of modern computers to
prove this, then it is out of range for formal verification, at least when
encoded naively as a metamath proof, because there is no way to do that
without some overhead. However, in the original computation it is not known
in advance, for example, which integrals need low precision and which need
high, and an oracle for that could cut the verification time, both by
lowering the low-precision baseline even further and by stating exactly how
much precision the high-precision integrals require.
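
To make the "oracle" idea concrete, here is a hypothetical certificate
format (the names and structure are my invention, purely illustrative;
nothing like this exists in Helfgott's or Platt's code):

    # Hypothetical precision oracle: one entry per integral, recorded by an
    # untrusted search phase, so the checker never over-computes "just in case".
    certificate = [
        {"integral_id": 0, "precision_bits": 64},   # typical low-precision case
        {"integral_id": 1, "precision_bits": 64},
        {"integral_id": 2, "precision_bits": 512},  # one of the rare hard cases
        # ...
    ]

    def check_all(certificate, evaluate, verify_bound):
        # evaluate(i, bits) and verify_bound(i, value) stand in for the real
        # interval-arithmetic evaluation and the inequality being certified.
        return all(verify_bound(c["integral_id"],
                                evaluate(c["integral_id"], c["precision_bits"]))
                   for c in certificate)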

However, I think that most of Helfgott's proof is well out of reach of
metamath at the moment, not because of the computations but because there
is a *significant* gap in the analytic number theory required to do the
proof for large C, as well as in the reduction from TGB to RH and from RH
to some concrete calculations. It seems the key step is the
https://en.wikipedia.org/wiki/Hardy%E2%80%93Littlewood_circle_method ,
which might be in range given its similarities to the method used to
calculate the integrals in the prime number theorem. This then goes through
a bunch of precise estimates, which probably differ significantly in
difficulty. The hard part here, I think, is just getting through all of
them - Helfgott has hundreds of pages of integrals. But I think this part
is easier than the RH verification in terms of computational power.
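
For reference, the unsmoothed shape of the circle-method identity in the
ternary case (Helfgott actually works with weighted/smoothed sums, but the
skeleton is the same):

    r(N) = \int_0^1 S(\alpha)^3 e(-N\alpha) d\alpha,
    where S(\alpha) = \sum_{p \le N} e(\alpha p) and e(x) = e^{2\pi i x}.

Here r(N) counts ordered representations of N as a sum of three primes, and
the work is in splitting [0,1] into major arcs (where S is approximated
using information about zeros of L-functions - this is where the RH/GRH
computation enters) and minor arcs (where one only needs to show enough
cancellation), then showing that the major-arc main term dominates for odd
N >= C.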

But this is the sort of proof that I would really like to have a reflective
verifier for. It makes a lot of sense, and is a much better use of
hardware, to prove the correctness of a program and then run it, and all
that is needed to make this just as good a proof as the raw expression
proof is a proof that the reflective verifier is conservative over a
standard verifier, which can be performed in a verified metalogic (a.k.a.
MM0's strategy). (I'm not sure exactly how to calculate the cutoff point at
which standard proof loses to reflective proof, because it involves a
time-space tradeoff: the standard proof will probably take less time to
check, but the reflective verifier does not need massive proofs on disk.
Actually, considering that memory and disk are much slower than computation
at the moment, and that computation in metamath involves relatively low
bases and lots of memory accesses, I am pretty sure that standard proof is
at least a hundred times slower than reflective proof when it comes to pure
number crunching, so there isn't really any tradeoff at that level. When
the verified program becomes more "searchy", the standard proof starts to
win because it comes with an oracle giving all the right answers, but a
reflective proof can have the best of both worlds here, by verifying a
certificate.)
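
As a toy illustration of the "verify a certificate" point (ordinary Python,
not metamath, and Pratt primality certificates are just a stand-in for
whatever witnesses the real computation would emit): the searchy part -
finding the witness and the factorization - is done once, untrusted, and
the verified program only has to do the cheap deterministic check.

    # Checking a Pratt certificate: cert maps each claimed prime p > 2 to
    # (witness a, factorization of p-1 as a list of (prime, exponent) pairs).
    def check_pratt(n, cert):
        if n == 2:
            return True
        a, factors = cert[n]
        if pow(a, n - 1, n) != 1:              # Fermat condition
            return False
        prod = 1
        for q, e in factors:
            if not check_pratt(q, cert):       # each factor is itself certified
                return False
            if pow(a, (n - 1) // q, n) == 1:   # a must have full order n-1
                return False
            prod *= q ** e
        return prod == n - 1                   # the factorization is complete

    cert = {7: (3, [(2, 1), (3, 1)]), 3: (2, [(2, 1)])}
    print(check_pratt(7, cert))  # True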

Mario


On Fri, Aug 7, 2020 at 5:45 AM savask <[email protected]> wrote:

>> The main problem will be to provide means to express the results from
>> checking "small" numbers (performed with a computer): numbers up to about 4
>> x 10^18 for the binary Goldbach conjecture resp. about 9 x 10^30 for the
>> ternary Goldbach conjecture.  Maybe each of the results must be provided as
>> theorem, like ~ 6gbe , which would be quite a lot...
>
>
> The proof of ~6gbe takes about 500 bytes in set.mm, so recording Goldbach
> partitions of all even numbers up to 4x10^18 will require at least 10^21
> bytes, i.e. a zettabyte. For comparison, that is more or less on the same
> order of magnitude as the amount of data generated worldwide (see
> https://en.wikipedia.org/wiki/Zettabyte#Comparisons_for_scale). At this
> point AI would have a better chance trying to lower this bound to something
> more feasible :-)
>
