On Sat, Aug 27, 2016 at 2:08 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> The standard calling pattern for AllocSetContextCreate is
>
>     cxt = AllocSetContextCreate(parent, name,
>                                 ALLOCSET_DEFAULT_MINSIZE,
>                                 ALLOCSET_DEFAULT_INITSIZE,
>                                 ALLOCSET_DEFAULT_MAXSIZE);
>
> or for some contexts you might s/DEFAULT/SMALL/g if you expect the context
> to never contain very much data.  I happened to notice that there are a
> few calls that get this pattern wrong.  After a bit of grepping, I found:
>
> hba.c lines 389, 2196:
>                                    ALLOCSET_DEFAULT_MINSIZE,
>                                    ALLOCSET_DEFAULT_MINSIZE,
>                                    ALLOCSET_DEFAULT_MAXSIZE);
>
> guc-file.l line 146:
>                                        ALLOCSET_DEFAULT_MINSIZE,
>                                        ALLOCSET_DEFAULT_MINSIZE,
>                                        ALLOCSET_DEFAULT_MAXSIZE);
>
> brin.c line 857:
>
>                                 ALLOCSET_SMALL_INITSIZE,
>                                 ALLOCSET_SMALL_MINSIZE,
>                                 ALLOCSET_SMALL_MAXSIZE);
>
> autovacuum.c line 2184:
>                                           ALLOCSET_DEFAULT_INITSIZE,
>                                           ALLOCSET_DEFAULT_MINSIZE,
>                                           ALLOCSET_DEFAULT_MAXSIZE);
>
> typcache.c lines 757, 842:
>
>                                             ALLOCSET_SMALL_INITSIZE,
>                                             ALLOCSET_SMALL_MINSIZE,
>                                             ALLOCSET_SMALL_MAXSIZE);
>
> In all of these cases, we're passing zero as the initBlockSize, which is
> invalid, but AllocSetContextCreate silently forces it up to 1K.  So
> there's no failure, but there may be some inefficiency due to small block
> sizes of early allocation blocks.  I seriously doubt that's intentional;
> in some of these cases it might be sane to use the SMALL parameters
> instead, but this isn't a good way to get that effect.  The last four
> cases are also passing a nonzero value as the minContextSize, forcing
> the context to allocate at least that much instantly and hold onto it
> over resets.  That's inefficient as well, though probably only minorly so.

I noticed this a while back while playing with my alternate allocator,
but wasn't sure how much of it was intentional.  Anyway, +1 for doing
something to clean this up.  Your proposal sounds OK, but maybe
AllocSetContextCreateDefault() and AllocSetContextCreateSmall() would
be nicer and easier to remember.
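
Roughly what I have in mind is something like the following sketch; the
names match the suggestion above, but whether these end up as macros or
functions, and where they live, is open (this is not actual PostgreSQL code):

    /* hypothetical convenience wrappers over the existing five-argument call */
    #define AllocSetContextCreateDefault(parent, name) \
        AllocSetContextCreate((parent), (name), \
                              ALLOCSET_DEFAULT_MINSIZE, \
                              ALLOCSET_DEFAULT_INITSIZE, \
                              ALLOCSET_DEFAULT_MAXSIZE)

    #define AllocSetContextCreateSmall(parent, name) \
        AllocSetContextCreate((parent), (name), \
                              ALLOCSET_SMALL_MINSIZE, \
                              ALLOCSET_SMALL_INITSIZE, \
                              ALLOCSET_SMALL_MAXSIZE)

Either way, callers couldn't get the three size arguments out of order for
the common cases.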

Also, I think we ought to replace this code in aset.c:

    initBlockSize = MAXALIGN(initBlockSize);
    if (initBlockSize < 1024)
        initBlockSize = 1024;
    maxBlockSize = MAXALIGN(maxBlockSize);

With this:

    Assert(initBlockSize >= 1024 && initBlockSize == MAXALIGN(initBlockSize));
    Assert(maxBlockSize == MAXALIGN(maxBlockSize));

This might save a few cycles, which wouldn't be unwelcome given that
AllocSetContextCreate does show up in profiles; but more importantly, I
think it's completely inappropriate for this function to paper over
whatever errors callers may make.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

