Yes, of course; that's all verified and taken into account during code
initialization.
From: Vitalii Tymchyshyn [mailto:v...@tym.im]
Sent: Wednesday, February 22, 2017 8:14 PM
To: John Gorman; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres
>
>
>
> *From:* pgsql-performance-ow...@postgresql.org [mailto:
> pgsql-performance-ow...@postgresql.org] *On Behalf Of *Mike Beaton
> *Sent:* Tuesday, February 21, 2017 6:49 AM
> *To:* pgsql-performance@postgresql.org
> *Subject:* Re: [PERFORM] Correct use of cursors for very large result sets in Postgres
the time.
Regards
John
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Mike Beaton
Sent: Tuesday, February 21, 2017 6:49 AM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres
Thanks, Tom.
Wouldn't this mean that cursors are noticeably non-optimal even for normal
data sizes, since the entire data set to be streamed from the table is always
duplicated into another buffer first and then streamed?
> if you want the whole query result at once, why are you bothering with a
cursor?
Mike Beaton writes:
> New TL;DR (I'm afraid): PostgreSQL is always generating a huge buffer file
> on `FETCH ALL FROM CursorToHuge`.
I poked into this and determined that it's happening because pquery.c
executes FETCH statements the same as it does with any other
tuple-returning utility statement.
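In practical terms this means each `FETCH n` is run to completion into a
tuplestore before any row is sent to the client, so the whole result is only
ever buffered if you ask for it all at once. A minimal sketch of the
batch-fetch pattern this implies (cursor and table names hypothetical):

```sql
BEGIN;
DECLARE curs NO SCROLL CURSOR FOR SELECT * FROM huge_table;

-- Repeat the FETCH until it returns zero rows; each batch is
-- materialized server-side before being sent, but never the
-- whole result set at once.
FETCH 10000 FROM curs;

CLOSE curs;
COMMIT;
```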
The generated buffer is 140MB, not 14MB. At 14 bytes per row, that makes
sense.
I have done another test.
If I execute `FETCH ALL FROM cursor` I get a 140MB disk buffer file on the
PostgreSQL server, reported in its log.
If I execute `FETCH 5000000 FROM cursor` (exactly half the rows), I see a
70MB disk buffer file.
> Seems odd. Is your cursor just on "SELECT * FROM table", or is there
> some processing in there you're not mentioning? Maybe it's a cursor
> WITH HOLD and you're exiting the source transaction?
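For what it's worth, `WITH HOLD` is one case where a big server-side buffer
is expected behaviour: the entire result set is materialized at `COMMIT` so
the cursor can outlive its transaction. A minimal sketch (names hypothetical):

```sql
BEGIN;
DECLARE curs CURSOR WITH HOLD FOR SELECT * FROM huge_table;
COMMIT;  -- the full result set is materialized here (spilling to a
         -- temp file if large) so the cursor survives the transaction
FETCH 10 FROM curs;
CLOSE curs;
```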
Hi Tom,
I've deleted my own Stack Overflow answer in favour of Laurenz Albe's one.
New TL;DR (I'm afraid): PostgreSQL is always generating a huge buffer file
on `FETCH ALL FROM CursorToHuge`.
Mike Beaton writes:
> One outstanding question I have. Based on a lot of helpful responses given
> to the SO question I can now test and see what disk buffers are generated
> (by setting `log_temp_files` to `0` and then `tail -f log`), as well as how
> long it takes for results to start arriving.
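A sketch of that test setup, for anyone following along (the log path is an
assumption; it varies by installation):

```sql
-- Log the name and size of every temporary file
-- (0 = no minimum size; this is a superuser-only setting).
SET log_temp_files = 0;
-- Then, in a shell on the server, watch for "temporary file" lines:
--   tail -f /var/lib/postgresql/data/log/postgresql.log
```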
I meant to say: "the `FETCH 1 FROM cursor` until exhausted pattern will
always be safe". Nasty typo, sorry!
Dear Tom,
This is very helpful, thank you.
You make a very useful point that the limitation is basically on PL/pgSQL
and other PL languages. And someone on SO already pointed out that an
inline SQL function with an enormously sized TABLE return value also doesn't
have any buffering problems. So that
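A sketch of that contrast (function and table names hypothetical): a
single-statement `LANGUAGE sql` function can be inlined into the calling
query and streamed, while a PL/pgSQL `RETURN QUERY` builds its entire result
in a tuplestore before returning anything:

```sql
-- Streams: the planner can inline this into the calling query.
CREATE FUNCTION rows_sql() RETURNS TABLE (id int)
  LANGUAGE sql STABLE
AS $$ SELECT id FROM huge_table $$;

-- Buffers: RETURN QUERY materializes the full result set first.
CREATE FUNCTION rows_plpgsql() RETURNS TABLE (id int)
  LANGUAGE plpgsql STABLE
AS $$ BEGIN RETURN QUERY SELECT id FROM huge_table; END $$;
```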
Mike Beaton writes:
> [ generally accurate information ]
> **WARNINGS**
> It would *never* make sense to do `FETCH ALL FROM cursor` for
> astronomically large data, if your client side code (including your data
> access layer) has any bottleneck at all, which means that all the data
> from a c
I asked the same question at the same time on Stack Overflow (sincere
apologies if this is a breach of etiquette - I really needed an answer, and
I thought the two communities might not overlap).
Stack Overflow now has an answer, by me:
http://stackoverflow.com/q/42292341/#42297234 - which is based
**Short version of my question:**
If I hold a cursor reference to an astronomically huge result set in my
client code, would it be ridiculous (i.e. completely defeating the point of
cursors) to issue "FETCH ALL FROM cursorname" as my next command? Or would
this slowly stream the data back to me as I use it?