[COMMITTERS] pgsql: Avoid buffer bloat in libpq when server is consistently faster t

2014-05-07 Thread Tom Lane
Avoid buffer bloat in libpq when server is consistently faster than client.

If the server sends a long stream of data, and the server + network are
consistently fast enough to force the recv() loop in pqReadData() to
iterate until libpq's input buffer is full, then upon processing the last
incomplete message in each bufferload we'd usually double the buffer size,
due to supposing that we didn't have enough room in the buffer to finish
collecting that message.  After filling the newly-enlarged buffer, the
cycle repeats, eventually resulting in an out-of-memory situation (which
would be reported misleadingly as "lost synchronization with server").
Of course, we should not enlarge the buffer unless we still need room
after discarding already-processed messages.
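
To illustrate the fix's idea, here is a minimal sketch in C (not the
actual fe-misc.c patch; the struct and function names are invented for
the example): reclaim the space held by already-processed messages
first, and grow the buffer only if the pending message still does not
fit.

#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for libpq's input buffer state (illustrative only). */
typedef struct
{
    char   *buf;        /* input buffer */
    size_t  size;       /* allocated size */
    size_t  start;      /* first byte not yet consumed by the parser */
    size_t  end;        /* end of the data received so far */
} InBuf;

/*
 * Make sure a message of msg_len bytes can fit in the buffer.  First
 * discard already-processed bytes; only if the message still cannot fit
 * do we double the buffer.  Returns 0 on success, -1 on out-of-memory.
 */
static int
ensure_room(InBuf *in, size_t msg_len)
{
    if (in->start > 0)
    {
        /* left-justify the unread tail, dropping processed input */
        memmove(in->buf, in->buf + in->start, in->end - in->start);
        in->end -= in->start;
        in->start = 0;
    }

    while (msg_len > in->size)
    {
        size_t  newsize = in->size * 2;
        char   *newbuf = realloc(in->buf, newsize);

        if (newbuf == NULL)
            return -1;
        in->buf = newbuf;
        in->size = newsize;
    }
    return 0;
}

Growing only after the discard keeps the buffer bounded by the largest
single message rather than by the total amount of streamed data.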

This bug dates back quite a long time: pqParseInput3 has had the behavior
since perhaps 2003, getCopyDataMessage at least since commit 70066eb1a1ad
in 2008.  Probably the reason it's not been isolated before is that in
common environments the recv() loop would always be faster than the server
(if on the same machine) or faster than the network (if not); or at least
it wouldn't be slower consistently enough to let the buffer ramp up to a
problematic size.  The reported cases involve Windows, which perhaps has
different timing behavior than other platforms.

Per bug #7914 from Shin-ichi Morita, though this is different from his
proposed solution.  Back-patch to all supported branches.

Branch
--
REL9_3_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/b4f9c93ce0556e3afe371566b96e093f8228bf8f

Modified Files
--
src/interfaces/libpq/fe-misc.c |   32 
1 file changed, 32 insertions(+)




[COMMITTERS] pgsql: Avoid buffer bloat in libpq when server is consistently faster t

2014-05-07 Thread Tom Lane
Avoid buffer bloat in libpq when server is consistently faster than client.

If the server sends a long stream of data, and the server + network are
consistently fast enough to force the recv() loop in pqReadData() to
iterate until libpq's input buffer is full, then upon processing the last
incomplete message in each bufferload we'd usually double the buffer size,
due to supposing that we didn't have enough room in the buffer to finish
collecting that message.  After filling the newly-enlarged buffer, the
cycle repeats, eventually resulting in an out-of-memory situation (which
would be reported misleadingly as "lost synchronization with server").
Of course, we should not enlarge the buffer unless we still need room
after discarding already-processed messages.

This bug dates back quite a long time: pqParseInput3 has had the behavior
since perhaps 2003, getCopyDataMessage at least since commit 70066eb1a1ad
in 2008.  Probably the reason it's not been isolated before is that in
common environments the recv() loop would always be faster than the server
(if on the same machine) or faster than the network (if not); or at least
it wouldn't be slower consistently enough to let the buffer ramp up to a
problematic size.  The reported cases involve Windows, which perhaps has
different timing behavior than other platforms.

Per bug #7914 from Shin-ichi Morita, though this is different from his
proposed solution.  Back-patch to all supported branches.

Branch
--
REL9_0_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/77e66282769b4e6df181210cc24ab2073f123eb7

Modified Files
--
src/interfaces/libpq/fe-misc.c |   32 
1 file changed, 32 insertions(+)




[COMMITTERS] pgsql: Avoid buffer bloat in libpq when server is consistently faster t

2014-05-07 Thread Tom Lane
Avoid buffer bloat in libpq when server is consistently faster than client.

If the server sends a long stream of data, and the server + network are
consistently fast enough to force the recv() loop in pqReadData() to
iterate until libpq's input buffer is full, then upon processing the last
incomplete message in each bufferload we'd usually double the buffer size,
due to supposing that we didn't have enough room in the buffer to finish
collecting that message.  After filling the newly-enlarged buffer, the
cycle repeats, eventually resulting in an out-of-memory situation (which
would be reported misleadingly as "lost synchronization with server").
Of course, we should not enlarge the buffer unless we still need room
after discarding already-processed messages.

This bug dates back quite a long time: pqParseInput3 has had the behavior
since perhaps 2003, getCopyDataMessage at least since commit 70066eb1a1ad
in 2008.  Probably the reason it's not been isolated before is that in
common environments the recv() loop would always be faster than the server
(if on the same machine) or faster than the network (if not); or at least
it wouldn't be slower consistently enough to let the buffer ramp up to a
problematic size.  The reported cases involve Windows, which perhaps has
different timing behavior than other platforms.

Per bug #7914 from Shin-ichi Morita, though this is different from his
proposed solution.  Back-patch to all supported branches.

Branch
--
REL9_1_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/86888054a92aeca429593a22437ecc83fd300985

Modified Files
--
src/interfaces/libpq/fe-misc.c |   32 
1 file changed, 32 insertions(+)




[COMMITTERS] pgsql: Avoid buffer bloat in libpq when server is consistently faster t

2014-05-07 Thread Tom Lane
Avoid buffer bloat in libpq when server is consistently faster than client.

If the server sends a long stream of data, and the server + network are
consistently fast enough to force the recv() loop in pqReadData() to
iterate until libpq's input buffer is full, then upon processing the last
incomplete message in each bufferload we'd usually double the buffer size,
due to supposing that we didn't have enough room in the buffer to finish
collecting that message.  After filling the newly-enlarged buffer, the
cycle repeats, eventually resulting in an out-of-memory situation (which
would be reported misleadingly as "lost synchronization with server").
Of course, we should not enlarge the buffer unless we still need room
after discarding already-processed messages.

This bug dates back quite a long time: pqParseInput3 has had the behavior
since perhaps 2003, getCopyDataMessage at least since commit 70066eb1a1ad
in 2008.  Probably the reason it's not been isolated before is that in
common environments the recv() loop would always be faster than the server
(if on the same machine) or faster than the network (if not); or at least
it wouldn't be slower consistently enough to let the buffer ramp up to a
problematic size.  The reported cases involve Windows, which perhaps has
different timing behavior than other platforms.

Per bug #7914 from Shin-ichi Morita, though this is different from his
proposed solution.  Back-patch to all supported branches.

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/2f557167b19af79ffecb8faedf8b7bce4d48f3e1

Modified Files
--
src/interfaces/libpq/fe-misc.c |   32 
1 file changed, 32 insertions(+)




[COMMITTERS] pgsql: Avoid buffer bloat in libpq when server is consistently faster t

2014-05-07 Thread Tom Lane
Avoid buffer bloat in libpq when server is consistently faster than client.

If the server sends a long stream of data, and the server + network are
consistently fast enough to force the recv() loop in pqReadData() to
iterate until libpq's input buffer is full, then upon processing the last
incomplete message in each bufferload we'd usually double the buffer size,
due to supposing that we didn't have enough room in the buffer to finish
collecting that message.  After filling the newly-enlarged buffer, the
cycle repeats, eventually resulting in an out-of-memory situation (which
would be reported misleadingly as "lost synchronization with server").
Of course, we should not enlarge the buffer unless we still need room
after discarding already-processed messages.

This bug dates back quite a long time: pqParseInput3 has had the behavior
since perhaps 2003, getCopyDataMessage at least since commit 70066eb1a1ad
in 2008.  Probably the reason it's not been isolated before is that in
common environments the recv() loop would always be faster than the server
(if on the same machine) or faster than the network (if not); or at least
it wouldn't be slower consistently enough to let the buffer ramp up to a
problematic size.  The reported cases involve Windows, which perhaps has
different timing behavior than other platforms.

Per bug #7914 from Shin-ichi Morita, though this is different from his
proposed solution.  Back-patch to all supported branches.

Branch
--
REL9_2_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/f7672c8ce26582f250f7dce543a998ac9d1d6665

Modified Files
--
src/interfaces/libpq/fe-misc.c |   32 
1 file changed, 32 insertions(+)




[COMMITTERS] pgsql: Avoid buffer bloat in libpq when server is consistently faster t

2014-05-07 Thread Tom Lane
Avoid buffer bloat in libpq when server is consistently faster than client.

If the server sends a long stream of data, and the server + network are
consistently fast enough to force the recv() loop in pqReadData() to
iterate until libpq's input buffer is full, then upon processing the last
incomplete message in each bufferload we'd usually double the buffer size,
due to supposing that we didn't have enough room in the buffer to finish
collecting that message.  After filling the newly-enlarged buffer, the
cycle repeats, eventually resulting in an out-of-memory situation (which
would be reported misleadingly as "lost synchronization with server").
Of course, we should not enlarge the buffer unless we still need room
after discarding already-processed messages.

This bug dates back quite a long time: pqParseInput3 has had the behavior
since perhaps 2003, getCopyDataMessage at least since commit 70066eb1a1ad
in 2008.  Probably the reason it's not been isolated before is that in
common environments the recv() loop would always be faster than the server
(if on the same machine) or faster than the network (if not); or at least
it wouldn't be slower consistently enough to let the buffer ramp up to a
problematic size.  The reported cases involve Windows, which perhaps has
different timing behavior than other platforms.

Per bug #7914 from Shin-ichi Morita, though this is different from his
proposed solution.  Back-patch to all supported branches.

Branch
--
REL8_4_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/664ac3de7f50e3a16b9c96fa73c4f6e733500ade

Modified Files
--
src/interfaces/libpq/fe-misc.c |   32 
1 file changed, 32 insertions(+)




Re: [COMMITTERS] pgsql: Clean up jsonb code.

2014-05-07 Thread Peter Geoghegan
Thanks for cleaning this up.

On Wed, May 7, 2014 at 1:18 PM, Heikki Linnakangas wrote:
> The jsonb_exists_any and jsonb_exists_all functions no longer sort the input
> array. That was a premature optimization, the idea being that if there are
> duplicates in the input array, you only need to check them once. Also,
> sorting the array saves some effort in the binary search used to find a key
> within an object. But there were drawbacks too: the sorting and
> deduplicating obviously isn't free, and in the typical case there are no
> duplicates to remove, and the gain in the binary search was minimal. Remove
> all that, which makes the code simpler too.

This is not the reason why the code did that. De-duplication was not
the point at all. findJsonbValueFromSuperHeader()'s lowbound argument
previously served to establish a low bound when
searching for multiple keys (so the second and subsequent
user-supplied key could skip much of the object). In the case of
jsonb_exists_any(), say, if you only have a reasonable expectation
that about 1 key exists, and that happens to be the last key that the
user passed to the text[] argument (to the existence/? operator), then
n - 1 calls to what is now findJsonbValueFromContainer() (which now
does not accept a lowbound) are wasted.  That's elem_count - 1
top-level binary searches of the entire jsonb. Or elem_count such
calls rather than 1 call (plus 1 sort of the supplied array) in the
common case where jsonb_exists_any() will return false.

Granted, that might not be that bad right now, given that it's only
ever (say) elem_count or elem_count - 1 wasted binary searches through
the *top* level, but that might not always be true. And even today,
sorting a presumably much smaller user-passed lookup array once has to
be cheaper than searching through the entire jsonb perhaps elem_count
times per call.
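
For readers less familiar with the technique, here is a hedged sketch in
C of the lowbound idea (illustrative only; these are not the actual
jsonb functions): with both the object's keys and the caller's lookup
keys sorted, each successive search can begin where the previous one
stopped.

#include <string.h>

/*
 * obj_keys[] holds an object's keys in sorted order; the caller's lookup
 * keys are also sorted.  Each call starts at *lowbound, so the search for
 * the next (larger) lookup key skips the part of the object that earlier
 * searches already passed.  Returns the matching index or -1.
 */
static int
find_key(char **obj_keys, int nkeys, const char *lookup, int *lowbound)
{
    int     lo = *lowbound;
    int     hi = nkeys - 1;

    while (lo <= hi)
    {
        int     mid = lo + (hi - lo) / 2;
        int     cmp = strcmp(lookup, obj_keys[mid]);

        if (cmp == 0)
        {
            *lowbound = mid + 1;    /* next lookup key can start past here */
            return mid;
        }
        if (cmp < 0)
            hi = mid - 1;
        else
            lo = mid + 1;
    }
    *lowbound = lo;     /* nothing below 'lo' can match any later lookup key */
    return -1;
}

Sorting the user-supplied keys once up front lets an exists-any/exists-all
loop reuse the same lowbound across calls; without the sort, every key is
searched against the whole top level, which is the elem_count-searches
cost described above.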

-- 
Peter Geoghegan




[COMMITTERS] pgsql: When a background worker exits with code 0, unregister it.

2014-05-07 Thread Robert Haas
When a background worker exits with code 0, unregister it.

The previous behavior was to restart immediately, which was generally
viewed as less useful.

Petr Jelinek, with some adjustments by me.
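
A hedged sketch of how a module might rely on this, written against my
understanding of the 9.4-era API (bgw_main and the other fields shown
are assumptions about that era's struct; one_shot_main is an invented
name): a worker that finishes its task and calls proc_exit(0) is now
unregistered rather than restarted.

#include "postgres.h"
#include "postmaster/bgworker.h"
#include "storage/ipc.h"

PG_MODULE_MAGIC;

void        _PG_init(void);

/* Worker body: do one unit of work, then exit with status 0 so the
 * postmaster unregisters the worker instead of restarting it. */
static void
one_shot_main(Datum arg)
{
    BackgroundWorkerUnblockSignals();

    /* ... perform the one-shot task here ... */

    proc_exit(0);
}

void
_PG_init(void)
{
    BackgroundWorker worker;

    memset(&worker, 0, sizeof(worker));
    snprintf(worker.bgw_name, BGW_MAXLEN, "one-shot worker");
    worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
    worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
    worker.bgw_restart_time = 60;       /* now only matters for abnormal exits */
    worker.bgw_main = one_shot_main;    /* 9.4-era field (assumption) */
    RegisterBackgroundWorker(&worker);
}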

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/be7558162acc5578d0b2cf0c8d4c76b6076ce352

Modified Files
--
doc/src/sgml/bgworker.sgml  |   14 ++
src/backend/postmaster/bgworker.c   |4 ++--
src/backend/postmaster/postmaster.c |8 +++-
src/include/postmaster/bgworker.h   |8 
4 files changed, 23 insertions(+), 11 deletions(-)




[COMMITTERS] pgsql: Fix build after removing JsonbValue.estSize field.

2014-05-07 Thread Heikki Linnakangas
Fix build after removing JsonbValue.estSize field.

Oops, I didn't realize that contrib/hstore refers to jsonb stuff.

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/7572b7735971cd7a5ef289e133eedf7d82f79c42

Modified Files
--
contrib/hstore/hstore_io.c |   12 
1 file changed, 12 deletions(-)




[COMMITTERS] pgsql: When a bgworker exits, always call ReleasePostmasterChildSlot.

2014-05-07 Thread Robert Haas
When a bgworker exits, always call ReleasePostmasterChildSlot.

Commit e2ce9aa27bf20eff2d991d0267a15ea5f7024cd7 was insufficiently
well thought out.  Repair.

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/eee6cf1f337aa488a20e9111df446cdad770e645

Modified Files
--
src/backend/postmaster/postmaster.c |   22 --
1 file changed, 12 insertions(+), 10 deletions(-)




[COMMITTERS] pgsql: Restart bgworkers immediately after a crash-and-restart cycle.

2014-05-07 Thread Robert Haas
Restart bgworkers immediately after a crash-and-restart cycle.

Just as we would start bgworkers immediately after an initial startup
of the server, we should restart them immediately when reinitializing.

Petr Jelinek and Robert Haas

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/970d1f76d1600dfbdbd9cd88a9e2af113e253798

Modified Files
--
src/backend/postmaster/bgworker.c   |   34 ++-
src/backend/postmaster/postmaster.c |7 +++---
src/include/postmaster/bgworker_internals.h |1 +
3 files changed, 33 insertions(+), 9 deletions(-)




[COMMITTERS] pgsql: Clean up jsonb code.

2014-05-07 Thread Heikki Linnakangas
Clean up jsonb code.

The main target of this cleanup is the convertJsonb() function, but I also
touched a lot of other things that I spotted in the process.

The new convertToJsonb() function uses an output buffer that's resized on
demand, so the code to estimate the size of a JsonbValue is removed.

The on-disk format was not changed, even though I refactored the structs
used to handle it. The term "superheader" is replaced with "container".

The jsonb_exists_any and jsonb_exists_all functions no longer sort the input
array. That was a premature optimization, the idea being that if there are
duplicates in the input array, you only need to check them once. Also,
sorting the array saves some effort in the binary search used to find a key
within an object. But there were drawbacks too: the sorting and
deduplicating obviously isn't free, and in the typical case there are no
duplicates to remove, and the gain in the binary search was minimal. Remove
all that, which makes the code simpler too.

This includes a bug-fix; the total length of the elements in a jsonb array
or object mustn't exceed 2^28. That is now checked.
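
For context, the 2^28 cap comes from jsonb storing each element's
offset/length in a 28-bit field on disk.  A simplified sketch of such a
check (the macro and function names here are illustrative, not the
actual jsonb.h definitions):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Combined size of an array's or object's elements must stay within
 * what a 28-bit offset/length field can address.  Illustrative only. */
#define JB_MAX_ELEMENT_BYTES    (((uint64_t) 1) << 28)

static void
check_total_length(uint64_t total_len)
{
    if (total_len > JB_MAX_ELEMENT_BYTES)
    {
        fprintf(stderr, "total size of jsonb elements exceeds %llu bytes\n",
                (unsigned long long) JB_MAX_ELEMENT_BYTES);
        exit(1);
    }
}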

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/364ddc3e5cbd01c93a39896b5260509129a9883e

Modified Files
--
src/backend/utils/adt/jsonb.c  |   16 +-
src/backend/utils/adt/jsonb_gin.c  |4 +-
src/backend/utils/adt/jsonb_op.c   |  106 ++--
src/backend/utils/adt/jsonb_util.c |  929 +---
src/backend/utils/adt/jsonfuncs.c  |   64 +--
src/include/utils/jsonb.h  |  213 +
6 files changed, 556 insertions(+), 776 deletions(-)




[COMMITTERS] pgsql: Detach shared memory from bgworkers without shmem access.

2014-05-07 Thread Robert Haas
Detach shared memory from bgworkers without shmem access.

Since the postmaster won't perform a crash-and-restart sequence
for background workers which don't request shared memory access,
we'd better make sure that they can't corrupt shared memory.

Patch by me, review by Tom Lane.
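
A hedged sketch of the mechanism (hypothetical helper name; the real
sequence is in src/backend/postmaster/bgworker.c, and I'm assuming the
existing dsm_detach_all()/PGSharedMemoryDetach() entry points): workers
whose flags lack BGWORKER_SHMEM_ACCESS have every shared mapping dropped
before their entry point runs.

#include "postgres.h"
#include "postmaster/bgworker.h"
#include "storage/dsm.h"
#include "storage/pg_shmem.h"

/*
 * Run in the freshly forked worker before its main function: once the
 * mappings are gone, a stray write cannot reach shared state.
 */
static void
maybe_detach_shmem(BackgroundWorker *worker)
{
    if ((worker->bgw_flags & BGWORKER_SHMEM_ACCESS) == 0)
    {
        dsm_detach_all();           /* dynamic shared memory segments */
        PGSharedMemoryDetach();     /* the main shared memory segment */
    }
}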

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/4d155d8b08fe08c1a1649fdbad61c6dcf4a8671f

Modified Files
--
src/backend/postmaster/bgworker.c |   48 ++---
1 file changed, 39 insertions(+), 9 deletions(-)




[COMMITTERS] pgsql: Fix failure to set ActiveSnapshot while rewinding a cursor.

2014-05-07 Thread Tom Lane
Fix failure to set ActiveSnapshot while rewinding a cursor.

ActiveSnapshot needs to be set when we call ExecutorRewind because some
plan node types may execute user-defined functions during their ReScan
calls (nodeLimit.c does so, at least).  The wisdom of that is somewhat
debatable, perhaps, but for now the simplest fix is to make sure the
required context is valid.  Failure to do this typically led to a
null-pointer-dereference core dump, though it's possible that in more
complex cases a function could be executed with the wrong snapshot
leading to very subtle misbehavior.

Per report from Leif Jensen.  It's been broken for a long time, so
back-patch to all active branches.
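
The shape of the fix, as a hedged sketch (rewind_with_snapshot is a
hypothetical wrapper; the real change is in src/backend/tcop/pquery.c):
make the query's own snapshot active while ExecutorRewind() runs, since
a node's ReScan may evaluate user-defined functions.

#include "postgres.h"
#include "executor/executor.h"
#include "utils/snapmgr.h"

static void
rewind_with_snapshot(QueryDesc *queryDesc)
{
    /* ReScan of e.g. nodeLimit.c can run user code, which needs a
     * valid ActiveSnapshot. */
    PushActiveSnapshot(queryDesc->snapshot);
    ExecutorRewind(queryDesc);
    PopActiveSnapshot();
}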

Branch
--
REL8_4_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/2a527baa33372db723ca6b1b49b14da1e7a1b92e

Modified Files
--
src/backend/tcop/pquery.c |   14 --
src/test/regress/expected/portals.out |   24 
src/test/regress/sql/portals.sql  |   11 +++
3 files changed, 47 insertions(+), 2 deletions(-)




[COMMITTERS] pgsql: Fix failure to set ActiveSnapshot while rewinding a cursor.

2014-05-07 Thread Tom Lane
Fix failure to set ActiveSnapshot while rewinding a cursor.

ActiveSnapshot needs to be set when we call ExecutorRewind because some
plan node types may execute user-defined functions during their ReScan
calls (nodeLimit.c does so, at least).  The wisdom of that is somewhat
debatable, perhaps, but for now the simplest fix is to make sure the
required context is valid.  Failure to do this typically led to a
null-pointer-dereference core dump, though it's possible that in more
complex cases a function could be executed with the wrong snapshot
leading to very subtle misbehavior.

Per report from Leif Jensen.  It's been broken for a long time, so
back-patch to all active branches.

Branch
--
REL9_1_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/229101db4d7696c555b338264ef9c89f769c511d

Modified Files
--
src/backend/tcop/pquery.c |   14 --
src/test/regress/expected/portals.out |   24 
src/test/regress/sql/portals.sql  |   11 +++
3 files changed, 47 insertions(+), 2 deletions(-)




[COMMITTERS] pgsql: Fix failure to set ActiveSnapshot while rewinding a cursor.

2014-05-07 Thread Tom Lane
Fix failure to set ActiveSnapshot while rewinding a cursor.

ActiveSnapshot needs to be set when we call ExecutorRewind because some
plan node types may execute user-defined functions during their ReScan
calls (nodeLimit.c does so, at least).  The wisdom of that is somewhat
debatable, perhaps, but for now the simplest fix is to make sure the
required context is valid.  Failure to do this typically led to a
null-pointer-dereference core dump, though it's possible that in more
complex cases a function could be executed with the wrong snapshot
leading to very subtle misbehavior.

Per report from Leif Jensen.  It's been broken for a long time, so
back-patch to all active branches.

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/04e5025be8bbe572e12b19c4ba9e2a8360b8ffe5

Modified Files
--
src/backend/tcop/pquery.c |   14 --
src/test/regress/expected/portals.out |   24 
src/test/regress/sql/portals.sql  |   11 +++
3 files changed, 47 insertions(+), 2 deletions(-)




[COMMITTERS] pgsql: Fix failure to set ActiveSnapshot while rewinding a cursor.

2014-05-07 Thread Tom Lane
Fix failure to set ActiveSnapshot while rewinding a cursor.

ActiveSnapshot needs to be set when we call ExecutorRewind because some
plan node types may execute user-defined functions during their ReScan
calls (nodeLimit.c does so, at least).  The wisdom of that is somewhat
debatable, perhaps, but for now the simplest fix is to make sure the
required context is valid.  Failure to do this typically led to a
null-pointer-dereference core dump, though it's possible that in more
complex cases a function could be executed with the wrong snapshot
leading to very subtle misbehavior.

Per report from Leif Jensen.  It's been broken for a long time, so
back-patch to all active branches.

Branch
--
REL9_0_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/7f66ade71a59a8da58a43514ca851ab5e96dd209

Modified Files
--
src/backend/tcop/pquery.c |   14 --
src/test/regress/expected/portals.out |   24 
src/test/regress/sql/portals.sql  |   11 +++
3 files changed, 47 insertions(+), 2 deletions(-)




[COMMITTERS] pgsql: Fix failure to set ActiveSnapshot while rewinding a cursor.

2014-05-07 Thread Tom Lane
Fix failure to set ActiveSnapshot while rewinding a cursor.

ActiveSnapshot needs to be set when we call ExecutorRewind because some
plan node types may execute user-defined functions during their ReScan
calls (nodeLimit.c does so, at least).  The wisdom of that is somewhat
debatable, perhaps, but for now the simplest fix is to make sure the
required context is valid.  Failure to do this typically led to a
null-pointer-dereference core dump, though it's possible that in more
complex cases a function could be executed with the wrong snapshot
leading to very subtle misbehavior.

Per report from Leif Jensen.  It's been broken for a long time, so
back-patch to all active branches.

Branch
--
REL9_3_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/fc58c39d468587467c7c55b349c28167794eadaf

Modified Files
--
src/backend/tcop/pquery.c |   14 --
src/test/regress/expected/portals.out |   24 
src/test/regress/sql/portals.sql  |   11 +++
3 files changed, 47 insertions(+), 2 deletions(-)




[COMMITTERS] pgsql: Fix failure to set ActiveSnapshot while rewinding a cursor.

2014-05-07 Thread Tom Lane
Fix failure to set ActiveSnapshot while rewinding a cursor.

ActiveSnapshot needs to be set when we call ExecutorRewind because some
plan node types may execute user-defined functions during their ReScan
calls (nodeLimit.c does so, at least).  The wisdom of that is somewhat
debatable, perhaps, but for now the simplest fix is to make sure the
required context is valid.  Failure to do this typically led to a
null-pointer-dereference core dump, though it's possible that in more
complex cases a function could be executed with the wrong snapshot
leading to very subtle misbehavior.

Per report from Leif Jensen.  It's been broken for a long time, so
back-patch to all active branches.

Branch
--
REL9_2_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/022b5f2b228e2d0a658b808340bd32ba904b87f4

Modified Files
--
src/backend/tcop/pquery.c |   14 --
src/test/regress/expected/portals.out |   24 
src/test/regress/sql/portals.sql  |   11 +++
3 files changed, 47 insertions(+), 2 deletions(-)




[COMMITTERS] pgsql: Never crash-and-restart for bgworkers without shared memory acce

2014-05-07 Thread Robert Haas
Never crash-and-restart for bgworkers without shared memory access.

The motivation for a crash and restart cycle when a backend dies is
that it might have corrupted shared memory on the way down; and we
can't recover reliably except by reinitializing everything.  But that
doesn't apply to processes that don't touch shared memory.  Currently,
there's nothing to prevent a background worker that doesn't request
shared memory access from touching shared memory anyway, but that's a
separate bug.

Previous to this commit, the coding in postmaster.c was inconsistent:
an exit status other than 0 or 1 didn't provoke a crash-and-restart,
but failure to release the postmaster child slot did.  This change
makes those cases consistent.
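
A hedged summary of the resulting rule as a hypothetical helper (this is
my reading of the commit message, not the actual postmaster.c code):

#include <stdbool.h>
#include <sys/wait.h>

/*
 * A worker that never had shared memory access cannot have corrupted
 * shared memory, so its death never forces reinitialization, regardless
 * of exit status or whether its child slot was released.  Workers with
 * shared memory access keep the usual rule: exit status 0 or 1 counts
 * as a clean death, while a leaked child slot does not.
 */
static bool
worker_death_requires_reinit(bool had_shmem_access, int exitstatus,
                             bool released_child_slot)
{
    if (!had_shmem_access)
        return false;

    if (!released_child_slot)
        return true;

    return !(WIFEXITED(exitstatus) &&
             (WEXITSTATUS(exitstatus) == 0 || WEXITSTATUS(exitstatus) == 1));
}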

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/e2ce9aa27bf20eff2d991d0267a15ea5f7024cd7

Modified Files
--
src/backend/postmaster/postmaster.c |   20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)

