"Fields, Zachary J. (MU-Student)" <[email protected]> writes:
> I'm working on PostgreSQL 9.1.3 (waiting for the admin to push upgrades
> next week); in the meantime, I was curious whether there are any known bugs
> regarding large cursor fetches, or whether I am to blame.
> My cursor has 400 million records, and I'm fetching in blocks of 2^17
> (131,072). When I fetch the next block after processing the 48,889,856th
> record, the DB segfaults. It should be noted that I have processed tables
> with 23 million+ records several times and everything appears to work great.

> I have watched top, and system memory usage climbs to 97.6% (from approx.
> 30 million records onward, then sways up and down), but it ultimately
> crashes when I try to get past the 48,889,856th record. I have tried odd
> and various block sizes: anything greater than 2^17 crashes at the fetch
> that would surpass 48,889,856 records, 2^16 hits the same sweet spot, and
> anything less than 2^16 actually crashes slightly earlier (noted in
> comments in the code below).

> To me, it appears to be an obvious memory leak,

Well, you're leaking the SPITupleTables (you should be calling
SPI_freetuptable when you're done with each one), so running out of memory
is not exactly surprising.  I suspect what is happening is that an
out-of-memory error is getting thrown and recovery from it is messed up
somehow.  Have you tried getting a stack trace from the crash?
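For reference, here is a minimal sketch of a fetch loop with the leak
plugged.  The helper name drain_cursor is made up, and it assumes the
cursor was already opened with SPI_cursor_open:

extern "C" {
#include "postgres.h"
#include "executor/spi.h"
}

/* Drain an already-opened cursor in blocks, freeing each
 * SPITupleTable before fetching the next one. */
static void
drain_cursor(Portal portal)
{
    for (;;)
    {
        SPI_cursor_fetch(portal, true, 131072); /* forward, 2^17 rows */

        if (SPI_processed == 0)
            break;                  /* cursor exhausted */

        for (uint64 i = 0; i < SPI_processed; i++)
        {
            HeapTuple tuple = SPI_tuptable->vals[i];

            (void) tuple;           /* ... process the row here ... */
        }

        SPI_freetuptable(SPI_tuptable); /* release this block */
    }
}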

I note that you're apparently using C++.  C++ in the backend is rather
dangerous, and one of the main reasons is that C++ exception handling
doesn't play nicely with elog/ereport error handling.  It's possible to
make it work safely, but it takes a lot of attention and extra code,
which you don't seem to have here.
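For illustration, one common way to handle half of that problem is to
catch every C++ exception at the extern "C" boundary and convert it to
an ereport, so that no exception ever propagates into backend C code.
This is only a sketch (the function name my_cpp_func is hypothetical),
and it covers only one direction: the longjmp done by elog/ereport on
ERROR can still unwind past C++ destructors without running them, so
nontrivial C++ objects also have to be kept out of any scope that can
throw a Postgres error.

extern "C" {
#include "postgres.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(my_cpp_func);
}

#include <exception>

extern "C" Datum
my_cpp_func(PG_FUNCTION_ARGS)
{
    try
    {
        /* ... C++ work that may throw ... */
        return Int32GetDatum(0);
    }
    catch (const std::exception &e)
    {
        ereport(ERROR,
                (errcode(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),
                 errmsg("C++ exception: %s", e.what())));
    }
    catch (...)
    {
        ereport(ERROR,
                (errcode(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),
                 errmsg("unknown C++ exception")));
    }
    PG_RETURN_NULL();   /* unreachable: ereport(ERROR) longjmps away */
}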

                        regards, tom lane

