On Tue, Sep 6, 2016 at 2:00 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Robert Haas <robertmh...@gmail.com> writes:
>> Yeah, but I've seen actual breakage from exactly this issue on
>> customer systems even with the 1GB limit, and when we start allowing
>> 100GB it's going to get a whole lot worse.
>
> While it's not necessarily a bad idea to consider these things,
> I think people are greatly overestimating the consequences of the
> patch-as-proposed.  AFAICS, it does *not* let you tell VACUUM to
> eat 100GB of workspace.  Note the line right in front of the one
> being changed:
>
>          maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
>          maxtuples = Min(maxtuples, INT_MAX);
> -        maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
> +        maxtuples = Min(maxtuples, MaxAllocHugeSize / sizeof(ItemPointerData));
>
> Regardless of what vac_work_mem is, we aren't gonna let you have more
> than INT_MAX ItemPointers, hence 12GB at the most.  So the worst-case
> increase from the patch as given is 12X.  Maybe that's enough to cause
> bad consequences on some systems, but it's not the sort of disaster
> Robert posits above.

Hmm, OK.  Yes, that is a lot less bad.  (I think it's still bad.)
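
To spell out the arithmetic behind that 12GB ceiling (a standalone
sketch, not PostgreSQL code; it assumes sizeof(ItemPointerData) is 6,
i.e. a 4-byte block number plus a 2-byte offset with no padding):

    #include <limits.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* INT_MAX ItemPointers at 6 bytes apiece. */
        unsigned long long cap_bytes = (unsigned long long) INT_MAX * 6;

        /* Prints 12.0: the hard ceiling in GiB, regardless of how big
         * maintenance_work_mem is -- a 12X increase over the old 1GB
         * MaxAllocSize cap. */
        printf("max dead-TID space: %.1f GiB\n",
               cap_bytes / (double) (1024 * 1024 * 1024));
        return 0;
    }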

> If we think the expected number of dead pointers is so much less than
> that, why don't we just decrease LAZY_ALLOC_TUPLES, and take a hit in
> extra index vacuum cycles when we're wrong?

Because that's really inefficient.  Growing the array, even with a
stupid approach that copies all of the TIDs every time, is a heck of a
lot faster than incurring an extra index vac cycle.
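
For illustration, a toy sketch of that "stupid approach" (not the
actual vacuumlazy.c code; a real version would use repalloc and would
need overflow checks): doubling the capacity makes appends amortized
O(1), so even recopying every TID on each growth costs O(n) total
work, versus a full extra pass over every index for each additional
vacuum cycle.

    #include <stdlib.h>

    /* Stand-in for PostgreSQL's 6-byte ItemPointerData. */
    typedef struct
    {
        unsigned short blk_hi;
        unsigned short blk_lo;
        unsigned short offset;
    } ItemPointerData;

    typedef struct
    {
        ItemPointerData *tids;
        size_t          ntids;
        size_t          capacity;
    } DeadTupleArray;

    /* Double the array when full.  Returns 0 on success, -1 on OOM. */
    static int
    dead_tuples_append(DeadTupleArray *a, ItemPointerData tid)
    {
        if (a->ntids == a->capacity)
        {
            size_t newcap = a->capacity ? a->capacity * 2 : 1024;
            ItemPointerData *grown =
                realloc(a->tids, newcap * sizeof(ItemPointerData));

            if (grown == NULL)
                return -1;
            a->tids = grown;
            a->capacity = newcap;
        }
        a->tids[a->ntids++] = tid;
        return 0;
    }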

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

