Sorry, this:

The log of this expression is never more than 0.1 of the log of n!.

should read:

The log of this expression is always within 0.1 of the log of n!.

Bill.

On 3 May, 13:12, Bill Hart <[EMAIL PROTECTED]> wrote:
> I think I nearly understand what Pari does.
>
> The value of B_n (in absolute value, for even n) is given by
> zeta(n)*(2*n!)/(2^n*pi^n). However, zeta(n) is *very* close to 1 for
> large n. So one starts by computing zeta to a precision given by the
> size of (2*n!)/(2^n*pi^n) (which is basically the size of B_n), with
> 3 added to the precision to take care of small values of n.
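This formula is easy to sanity-check numerically. A minimal plain-Python sketch (not Pari's code; the direct-sum `zeta` helper and the test value n = 10 are illustrative assumptions):

```python
import math

def zeta(n, terms=200):
    # Crude direct-sum approximation of zeta(n) for integer n >= 2;
    # the tail beyond `terms` is negligible for the n used here.
    return sum(1.0 / k**n for k in range(1, terms + 1))

# |B_n| = zeta(n) * (2 * n!) / (2*pi)^n; check against B_10 = 5/66.
n = 10
approx = zeta(n) * 2 * math.factorial(n) / (2 * math.pi)**n
print(approx, 5 / 66)  # the two values agree to about machine precision

# And zeta(n) really is very close to 1 for large n:
print(zeta(50) - 1)    # about 8.9e-16
```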
>
> To compute how large (2*n!)/(2^n pi^n) can be, one uses Stirling's
> formula for n!, i.e. n! ~ sqrt(2n*pi)*n^n*e^(-n). The log of this
> expression is never more than 0.1 of the log of n!. Taking the log of
> the expression one gets when plugging in Stirling's approximation, one
> gets that the log of B_n does not exceed (n + 0.5) * log(n) -
> n*(1+log2PI) + log(2*sqrt(2*Pi)) + 0.1 except for very small n (for
> which it is sufficient to add 3 to the precision).
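The Stirling claim (as corrected at the top: always within 0.1 of log n!) can be checked numerically. A plain-Python sketch, with `stirling_log_factorial` a hypothetical helper name of my own:

```python
import math

def stirling_log_factorial(n):
    # log of Stirling's approximation sqrt(2*pi*n) * n^n * e^(-n)
    return 0.5 * math.log(2 * math.pi * n) + n * math.log(n) - n

# log(n!) computed exactly via lgamma; the gap is roughly 1/(12n),
# so it stays within 0.1 of log(n!), even for n = 1.
for n in range(1, 2000):
    assert abs(math.lgamma(n + 1) - stirling_log_factorial(n)) < 0.1
```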
>
> Now we need to know how big the numerator of B_n can be. But we can
> compute the denominator precisely using the Clausen-von Staudt
> formula. Pari does this and calls the denominator d.
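Clausen-von Staudt says the denominator of B_n (n even) is exactly the product of the primes p with (p - 1) dividing n. A small illustrative sketch (the trial-division `is_prime` and the function name are my own, not Pari's):

```python
def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, int(m**0.5) + 1))

def bernoulli_denominator(n):
    # Clausen-von Staudt: the denominator of B_n (n even, n >= 2) is
    # the product of all primes p such that (p - 1) divides n.
    d = 1
    for p in range(2, n + 2):
        if is_prime(p) and n % (p - 1) == 0:
            d *= p
    return d

print(bernoulli_denominator(12))  # 2730 = 2*3*5*7*13, and B_12 = -691/2730
```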
>
> Now the log of the numerator is no bigger than t = log(d) + (n + 0.5)
> * log(n) - n*(1+log2PI) + log(2*sqrt(2*Pi)) + 0.1 = log(d) + (n +
> 0.5) * log(n) - n*(1+log2PI) + 1.712086.
>
> If this is how big the numerator of B_n is, we only need to compute it
> to half a part in e^t. Thus we only need to compute 1/zeta(n) to half
> a part in e^t. This is done by computing the inverse of the Euler
> product for sufficiently many primes. Suppose we compute this for
> primes up to p. Then the error in the inverse of the zeta function is
> less than the sum 1/p^n + 1/(p+1)^n + .... Consider this sum p terms
> at a time: the first p terms are each <= 1/p^n, so together at most
> p/p^n; the second p terms are each <= 1/(2p)^n, and so on. Thus the
> error is quite a bit less than zeta(n) * p/p^n ~ 1/p^(n-1).
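This bound can be seen in action numerically. A rough plain-Python sketch (small n and p chosen so the truncation error is visible above floating-point noise; the helper names are my own):

```python
def primes_upto(p):
    return [m for m in range(2, p + 1)
            if all(m % d for d in range(2, int(m**0.5) + 1))]

n, p = 4, 50

# 1/zeta(n) via the Euler product, truncated at primes <= p.
inv_zeta_trunc = 1.0
for q in primes_upto(p):
    inv_zeta_trunc *= 1 - q**-n

# A near-exact reference value from the direct sum defining zeta(n).
inv_zeta_ref = 1 / sum(1.0 / k**n for k in range(1, 100000))

err = abs(inv_zeta_trunc - inv_zeta_ref)
print(err < p**-(n - 1))  # True: the error is well below 1/p^(n-1)
```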
>
> Thus if we want this error to be less than half a part in e^t we need
> to choose p to be about exp(t)^(1/(n-1)). Pari uses exp((t -
> log(n-1)) / (n-1)) and I'm not sure where the extra log(n-1) comes
> from, but either they use a slightly better approximation than me, or
> they perhaps note that one doesn't need the first 1/p^n in the
> approximation.
>
> Pari could be improved by noting that one does not need the last few
> terms of the numerator, as they can be recovered from Clausen-von
> Staudt, which actually gives the fractional part of B_k. But for large
> n, this is not a significant saving.
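The fractional-part observation is easy to illustrate with exact rational arithmetic. A sketch using Python's fractions module, with the value of B_12 hard-coded:

```python
from fractions import Fraction

def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, int(m**0.5) + 1))

# Clausen-von Staudt: B_n + sum(1/p over primes p with (p-1) | n)
# is an integer, so the fractional part of B_n is known in advance.
n = 12
correction = sum(Fraction(1, p) for p in range(2, n + 2)
                 if is_prime(p) and n % (p - 1) == 0)

B12 = Fraction(-691, 2730)  # the known value of B_12
print(B12 + correction)     # 1 -- an integer, as predicted
```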
>
> Bill.
>
> On 3 May, 01:21, Bill Hart <[EMAIL PROTECTED]> wrote:
>
> > Actually, it might be n/log(n) steps, so the time might be something
> > like n^2 though there are other terms involved.
>
> > Bill.
>
> > On 3 May, 00:30, Bill Hart <[EMAIL PROTECTED]> wrote:
>
> > > The theoretical complexity of all the algorithms that rely on
> > > recurrences is supposed to be n^2. But this doesn't take into account
> > > the size of the numbers themselves. When you do this they are all
> > > about n^3 as far as I can see. You can use Ramanujan identities, the
> > > Akiyama-Tanigawa algorithm, the identity used by Lovelace, but all are
> > > n^3 or so.
>
> > > The theoretical complexity of the version using the zeta function
> > > looks something like n log n steps at precision n log n, i.e. time n^2
> > > (log n)^2.
>
> > > Bill.
>
> > > On 2 May, 21:24, David Harvey <[EMAIL PROTECTED]> wrote:
>
> > > > One more data point (2.6GHz opteron):
>
> > > > sage: time x = bernoulli(60000)
> > > > Wall time: 3.79
>
> > > > sage: time x = bernoulli(120000)
> > > > Wall time: 16.97
>
> > > > sage: time x = bernoulli(240000)
> > > > Wall time: 118.24
>
> > > > sage: time x = bernoulli(480000)
> > > > Wall time: 540.25
>
> > > > sage: time x = bernoulli(960000)
> > > > Wall time: 2436.06
>
> > > > The ratios between successive times are:
>
> > > > 4.47757255936675
> > > > 6.96758986446671
> > > > 4.56909675236806
> > > > 4.50913465987969
>
> > > > If we guess that it's "really" 4.5, then the complexity is N^(log
> > > > (4.5) / log(2)) = N^2.17. This is puzzling; I thought the algorithm  
> > > > was supposed to be better than quadratic. Does anyone know what the  
> > > > theoretical complexity is supposed to be?
>
> > > > Anyway, extrapolating gives about 4.5 days, pretty much the same as  
> > > > what Tom estimates. I'm going to start it running now.
>
> > > > david
--~--~---------~--~----~------------~-------~--~----~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~----------~----~----~----~------~----~------~--~---
