Peter Eisentraut <peter.eisentr...@2ndquadrant.com> writes:
> On 4/19/18 02:10, Michael Paquier wrote:
>> You are right.  I can easily see the leak if I use, for example, a
>> background worker which connects to a database and launches many
>> transactions in a row.  The laziest reproducer I have is to patch one of
>> my bgworkers to launch millions of transactions in a tight loop, and the
>> leak is plain (this one counts relations automatically, but that does not
>> matter):
>> https://github.com/michaelpq/pg_plugins/tree/master/count_relations
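
(For the archives: the guts of such a reproducer are just the usual
bgworker transaction loop, roughly the sketch below.  This is paraphrased,
not the actual count_relations code, and it assumes the worker has already
done BackgroundWorkerInitializeConnection(); the needed headers are the
usual postgres.h plus access/xact.h, executor/spi.h, utils/snapmgr.h.)

    for (;;)
    {
        StartTransactionCommand();
        PushActiveSnapshot(GetTransactionSnapshot());

        if (SPI_connect() != SPI_OK_CONNECT)
            elog(ERROR, "SPI_connect failed");

        /* any trivial query will do; this one matches "counts relations" */
        (void) SPI_execute("SELECT count(*) FROM pg_class", true, 0);

        SPI_finish();
        PopActiveSnapshot();
        CommitTransactionCommand();

        /*
         * With the patch as committed, each iteration leaves behind one
         * TopMemoryContext allocation for the SPI stack.
         */
    }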

> This is not a memory leak in the sense of memory becoming unreusable.
> It allocates memory for a stack, and that stack will be reused later.
> The stack could be bigger than you'll need, but that's not necessarily a
> problem.

Uh, no, the memory *isn't* reused later.  The coding in AtEOXact_SPI
assumes that it can just null out _SPI_stack because whatever that
might've been pointing to is transaction-lifespan memory and will
go away without extra work here.  Which was true in every prior
release, because SPI_connect caused the stack to be allocated in
TopTransactionContext.  You've changed it to be in TopMemoryContext
and not adjusted the cleanup to match.
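
To spell out the mismatch (paraphrasing the relevant bits of spi.c from
memory, not quoting them exactly):

    /* SPI_connect now allocates the stack in permanent storage ... */
    _SPI_stack = (_SPI_connection *)
        MemoryContextAlloc(TopMemoryContext,
                           newdepth * sizeof(_SPI_connection));
    _SPI_stack_depth = newdepth;

    /*
     * ... while AtEOXact_SPI still just forgets the pointer, which was
     * only okay while the stack lived in TopTransactionContext:
     */
    _SPI_current = _SPI_stack = NULL;
    _SPI_stack_depth = 0;
    _SPI_connected = -1;

So the next transaction's first SPI_connect sees _SPI_stack == NULL and
allocates a fresh stack, leaking the old one once per transaction.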

We could change the coding in AtEOXact_SPI so that it preserves
_SPI_stack and _SPI_stack_depth and only resets _SPI_connected,
but I'm not sure what's the point of keeping the stack storage
if _SPI_connected gets reset; that means we no longer think
there's any data there.  One way or the other this code is now
quite schizophrenic.
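
For concreteness, that first option would amount to something like this
(sketch only; the non-empty-stack warning stays as it is today, and
whatever other exported variables the current code resets would still get
reset):

void
AtEOXact_SPI(bool isCommit)
{
    if (isCommit && _SPI_connected != -1)
        ereport(WARNING,
                (errcode(ERRCODE_WARNING),
                 errmsg("transaction left non-empty SPI stack"),
                 errhint("Check for missing \"SPI_finish\" calls.")));

    /* keep _SPI_stack/_SPI_stack_depth: they now live in TopMemoryContext */
    _SPI_current = NULL;
    _SPI_connected = -1;
    SPI_processed = 0;
    SPI_tuptable = NULL;
}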

Possibly what you really want is for the stack storage to be permanent
and for _SPI_connected to get reset to something that's not necessarily
-1, so that AtEOXact_SPI is more like AtEOSubXact_SPI.  But then you need
a mechanism for figuring out how much of the stack ought to outlive the
current transaction.  I would bet money that that
"_SPI_current->internal_xact" thing is wrong/inadequate.

The comments in both SPI_connect and AtEOXact_SPI about memory management
seem to need work, too.

                        regards, tom lane
