Re: [HACKERS] [RFC: bug fix?] Connection attempt block forever when the synchronous standby is not running

2014-07-06 Thread Amit Kapila
On Fri, Jul 4, 2014 at 7:29 PM, MauMau maumau...@gmail.com wrote:

 Hello,

 When requesting synchronous replication, each commit of a write
transaction will wait until confirmation is received that the commit has
been written to the transaction log on disk of both the primary and standby
server.
 ...
 Read only transactions and transaction rollbacks need not wait for
replies from standby servers. Subtransaction commits do not wait for
responses from standby servers, only top-level commits.


 [Hypothesis]
 Why does the connection processing emit WAL?

 Probably, it did page-at-a-time vacuum during access to pg_database and
pg_authid for client authentication.  src/backend/access/heap/README.HOT
describes:

I agree with your analysis that it can happen during connection
attempt.

 But the customer could not reproduce the problem when he performed the
same archive recovery from the same base backup again.  Why?  I guess the
autovacuum daemon vacuumed the system catalogs before he attempted to
connect to the database.

 Is this correct?

One way to confirm could be to perform the archive recovery by
disabling autovacuum.


 [How to fix]
 Of course, adding -o '-c synchronous_commit=local' or -o '-c
synchronous_standby_names=' to pg_ctl start in the recovery script would
prevent the problem.

 But isn't there anything to fix in PostgreSQL?  I think the doc needs
improvement so that users won't misunderstand that only write transactions
would block at commit.

I also think at the very least we should update docs even if we
don't have any solution for this case.

Another thing I am wondering about is whether the same can happen
even for a read-only transaction (in case someone does a SELECT which
prunes the page).


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] wrapping in extended mode doesn't work well with default pager

2014-07-06 Thread Sergey Muraviov
Hi.

Is there anyone who can commit the patch?


2014-06-25 20:17 GMT+04:00 Pavel Stehule pavel.steh...@gmail.com:




 2014-06-24 19:45 GMT+02:00 Sergey Muraviov sergey.k.murav...@gmail.com:

 Hi.

 Is there any problem with the patch?


 I tested it and had no issues with the last version

 So, please, commit it

 Regards

 Pavel




 2014-06-17 0:21 GMT+04:00 Greg Stark st...@mit.edu:

 On Mon, Jun 16, 2014 at 9:05 PM, Robert Haas robertmh...@gmail.com
 wrote:
  So, it seems like we need to do something about this one way or
  another.  Who's working on that?

 So I'm fine finishing what I started. I've just been a bit busy this
 past week.

 My inclination is to try to push forward and commit this patch,
 document the changes and make sure we check for any consequences of
 them.

 The alternate plan is to revert it in 9.4 and commit the changes to
 9.5, which gives us more time to be sure we're OK with them.


 --
 greg




 --
 Best regards,
 Sergey Muraviov





-- 
Best regards,
Sergey Muraviov


[HACKERS] [PATCH] Improve bytea error messages

2014-07-06 Thread Craig Ringer
Hi all

After someone reported somewhat less than informative errors in bytea
decoding (http://stackoverflow.com/q/24588866/398670) I thought I'd put
together a quick patch to improve the errors here.
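
For illustration, a minimal reproducer (hypothetical malformed input; the
exact new wording is in the diff below):

SELECT decode('=abc', 'base64');
-- before: ERROR:  unexpected "="
-- after:  ERROR:  unexpected "=" while decoding base64 sequence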

Please merge.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

From 1f8b96526ca19a8843960b6a6ec08357b1f166a8 Mon Sep 17 00:00:00 2001
From: Craig Ringer cr...@2ndquadrant.com
Date: Sun, 6 Jul 2014 16:15:21 +0800
Subject: [PATCH] Improve bytea decoding error messages

---
 src/backend/utils/adt/encode.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/backend/utils/adt/encode.c b/src/backend/utils/adt/encode.c
index 46993ba..67fb574 100644
--- a/src/backend/utils/adt/encode.c
+++ b/src/backend/utils/adt/encode.c
@@ -292,7 +292,7 @@ b64_decode(const char *src, unsigned len, char *dst)
 else
 	ereport(ERROR,
 			(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-			 errmsg("unexpected \"=\"")));
+			 errmsg("unexpected \"=\" while decoding base64 sequence")));
 			}
 			b = 0;
 		}
@@ -304,7 +304,7 @@ b64_decode(const char *src, unsigned len, char *dst)
 			if (b < 0)
 ereport(ERROR,
 		(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-		 errmsg("invalid symbol")));
+		 errmsg("invalid symbol '%c' while decoding base64 sequence", (int) c)));
 		}
 		/* add it to buffer */
 		buf = (buf << 6) + b;
@@ -324,7 +324,7 @@ b64_decode(const char *src, unsigned len, char *dst)
 	if (pos != 0)
 		ereport(ERROR,
 (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("invalid end sequence")));
+ errmsg("invalid base64 end sequence (missing padding or corrupt or truncated data)")));
 
 	return p - dst;
 }
-- 
1.9.0




Re: [HACKERS] postgresql.auto.conf and reload

2014-07-06 Thread Christoph Berg
Re: Amit Kapila 2014-07-06 
CAA4eK1K=2WCD5ur8c-34NOw+XKg57Q4k0SajwSQXcwciD-=+0...@mail.gmail.com
 Another could be that during initdb all the uncommented settings be
 written to the postgresql.auto.conf file rather than to postgresql.conf.
 I think we can do this by changing code in initdb.c's setup_config().
 This will ensure that unless the user is changing settings in a mixed way
 (some by Alter System and some manually by editing postgresql.conf),
 there will be no such problem.

That will make editing postgresql.conf totally useless, because users
will always need to check whether their new value isn't overridden by
something in auto.conf. This will affect *all* users, as everyone
needs to bump shared_buffers, which is set by initdb. We'd get 10^X
complaints from admins confused about where their config actually
lives.

Christoph
-- 
c...@df7cb.de | http://www.df7cb.de/




Re: [HACKERS] 9.4 logical replication - walsender keepalive replies

2014-07-06 Thread Andres Freund
Hi Steve,

On 2014-06-30 11:40:50 -0400, Steve Singer wrote:
 In 9.4 we've added the below block of code to walsender.c:
 
 /*
  * We only send regular messages to the client for full decoded
  * transactions, but a synchronous replication and walsender shutdown
  * possibly are waiting for a later location. So we send pings
  * containing the flush location every now and then.
  */
 if (MyWalSnd->flush < sentPtr && !waiting_for_ping_response)
 {
     WalSndKeepalive(true);
     waiting_for_ping_response = true;
 }
 
 
 I am finding that my logical replication reader is spending a tremendous
 amount of time sending feedback to the server because a keepalive reply was
 requested.  My flush pointer is smaller than sentPtr, which I see as the
 normal case (the client hasn't confirmed all the WAL it has been sent).  My
 client queues the records it receives and only confirms when it actually
 processes a record.
 
 So the sequence looks something like
 
 Server Sends LSN 0/1000
 Server Sends LSN 0/2000
 Server Sends LSN 0/3000
 Client confirms LSN 0/2000

 I don't see why all these keepalive replies are needed in this case (the
 timeout value is bumped way up; it's the above block that is triggering the
 reply request, not something related to the timeout).

Right. I thought about this for a while, and I think we should change
two things. For one, don't request replies here. It's simply not needed,
as this isn't dealing with timeouts. For another, don't just check
->flush < sentPtr but also ->write < sentPtr. The reason we're sending these
feedback messages is to inform the 'logical standby' that there's been
WAL activity which it can't see because it doesn't correspond to
anything that's logically decoded (e.g. vacuum stuff).
Would that suit your needs?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] tweaking NTUP_PER_BUCKET

2014-07-06 Thread Tomas Vondra
On 6.7.2014 06:47, Stephen Frost wrote:
 * Greg Stark (st...@mit.edu) wrote:
 Last time, we wanted to use bloom filters in hash joins to
 filter out tuples that won't match any of the future hash batches
 to reduce the amount of tuples that need to be spilled to disk.
 However the problem was that it was unclear for a given amount of
 memory usage how to pick the right size bloom filter and how to
 model how much it would save versus how much it would cost in
 reduced hash table size.
 
 Right. There's only one hash function available, really (we don't
 currently support multiple hash functions...), unless we want to try
 to re-hash the 32-bit hash value that we get back (not against trying
 that, but it isn't what I'd start with), and it would hopefully be
 sufficient for this.

I don't really see a need to add more hashing functions. Using the hash
value itself as input to the bloom filter seems just fine to me.

According to [1] the expected number of collisions is

   n - k + k * (1 - 1/k)^n

where 'k' is the number of possible values (2^32 in our case), and 'n'
is the number of inserts (i.e. values inserted into the table). With a
1GB work_mem and ~50B per row, that's ~20M rows. According to the
formula, that's ~50k collisions. In other words, a 0.25% false-positive
rate. This seems more than sufficient for the bloom filter, and if a
0.25% false-positive rate causes it to be inefficient I'd say the gains
are not that great anyway. (Unless my calculations are somehow flawed,
of course.)

[1] https://www.cs.duke.edu/courses/cps102/spring10/Lectures/L-18.pdf
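
For what it's worth, the formula can be checked directly in SQL (a sketch
using the same assumed figures, n = 20M inserts and k = 2^32 hash values):

SELECT n - k + k * pow(1 - 1/k, n) AS expected_collisions
FROM (VALUES (20000000::float8, 4294967296::float8)) AS t(n, k);
-- ~46566, i.e. roughly 50k collisions, in line with the ~0.25% above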

 I think it just required some good empirical tests and hash join
 heavy workloads to come up with some reasonable guesses. We don't
 need a perfect model just some reasonable bloom filter size that
 we're pretty sure will usually help more than it hurts.
 
 This would help out a lot of things, really.. Perhaps what Tomas is 
 developing regarding test cases would help here also.

The test cases might be reused, I guess. However, I still don't have a
clear idea of how exactly the bloom filter would be implemented. In the
threads I found, it was suggested to use per-bucket filters, which seems
really strange to me. I see two options:

(a) global filter

- bloom filter sized according to estimates from planner
- ... so we're screwed if the estimates are significantly wrong
- create the filter in ExecHashTableCreate
- insert all tuples in ExecHashTableInsert (all batches)

(b) per-batch filter

- may be built after the batching is completed
- we have reliable numbers at this point (not just estimates)
- per-batch filters may be smaller (only fraction of tuples)

So the per-batch filter seems to be a better approach. The question is
the sizing of the bloom filter. According to [2] there are three rules
of thumb for sizing bloom filters:

   (1) 1 byte per item, to get a 2% error rate
   (2) number of hash functions ~= 0.7 * bits per item (~5 when using
       1 byte / item)
   (3) the number of hashes dominates the performance (more to compute,
       more memory accesses)

When working with 20M rows, this means ~20MB and 5 hash functions. Not
that bad, although we don't know how expensive it is to build the filter.

[2] http://corte.si/posts/code/bloom-filter-rules-of-thumb/index.html
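
As a sketch of how those rules of thumb combine for the numbers above
(assumed: 20M distinct items at 8 bits per item):

SELECT n * bits_per_item / 8 / (1024 * 1024) AS filter_size_mb,
       0.7 * bits_per_item AS num_hash_functions
FROM (VALUES (20000000.0, 8.0)) AS t(n, bits_per_item);
-- ~19 MB and ~5.6, i.e. 5 or 6 hash functions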

However, this is somewhat flawed because it assumes the 20M hashed
values are distinct, so if the inner table contains duplicate keys we
might use a smaller bloom filter. But we don't know that value (ndistinct
estimates are rather unreliable, though we might use a HyperLogLog counter
to do this).

Also, there are cases when we know there will be no misses (e.g. a join
on a FK), and in that case it's pointless to build the bloom filter. It
would be nice to get a flag from the planner saying whether to build it,
and a threshold for when to build it (e.g. if the rowcount is > 10k, build
the filter).

The two patches I posted (tweaking the nbuckets dynamically) might also
benefit from such planning information, although it's not that critical
for them.

regards
Tomas




Re: [HACKERS] Selectivity estimation for inet operators

2014-07-06 Thread Emre Hasegeli
Thank you for looking at it.

 In the inet_his_inclusion_selec function,
 when the constant matches only the right side of the bucket, and it is the
 last bucket, it is never considered as a partial-match candidate.
 In my opinion, if it's not the last bucket then for the next bucket it will
 become the left boundary and be treated as a partial match, so no problem; but
 in case of the last bucket it can give wrong selectivity.

 Can't we consider it a partial bucket match if it is the last bucket?

Actually, in that case, the ratio for one value in the column is used.
I clarified the comment about it.  I do not think it is a common enough
case to make the function more complicated.

 Apart from that, there is one spelling fix you can make
 -- in the inet_his_inclusion_selec comments:
 histogram boundies -> histogram boundaries :)

I fixed it.  New version attached.  The debug log statements are also
removed.
diff --git a/src/backend/utils/adt/network_selfuncs.c b/src/backend/utils/adt/network_selfuncs.c
index d0d806f..d8aeae9 100644
--- a/src/backend/utils/adt/network_selfuncs.c
+++ b/src/backend/utils/adt/network_selfuncs.c
@@ -1,32 +1,677 @@
 /*-
  *
  * network_selfuncs.c
  *   Functions for selectivity estimation of inet/cidr operators
  *
- * Currently these are just stubs, but we hope to do better soon.
+ * Estimates are based on null fraction, distinct value count, most common
+ * values, and histogram of inet/cidr datatypes.
  *
  * Portions Copyright (c) 1996-2014, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
  * IDENTIFICATION
  *   src/backend/utils/adt/network_selfuncs.c
  *
  *-
  */
 #include "postgres.h"
 
+#include <math.h>
+
+#include "access/htup_details.h"
+#include "catalog/pg_collation.h"
+#include "catalog/pg_operator.h"
+#include "catalog/pg_statistic.h"
+#include "utils/lsyscache.h"
 #include "utils/inet.h"
+#include "utils/selfuncs.h"
 
 
+/* Default selectivity constant for the inet overlap operator */
+#define DEFAULT_OVERLAP_SEL 0.01
+
+/* Default selectivity constant for the other operators */
+#define DEFAULT_INCLUSION_SEL 0.005
+
+/* Default selectivity for given operator */
+#define DEFAULT_SEL(operator) \
+   ((operator) == OID_INET_OVERLAP_OP ? \
+   DEFAULT_OVERLAP_SEL : DEFAULT_INCLUSION_SEL)
+
+static short int inet_opr_order(Oid operator, bool reversed);
+static Selectivity inet_his_inclusion_selec(Datum *values, int nvalues,
+					int red_nvalues, Datum *constvalue, double ndistinct,
+					short int opr_order);
+static Selectivity inet_mcv_join_selec(Datum *values1, float4 *numbers1,
+					int nvalues1, Datum *values2, float4 *numbers2,
+					int nvalues2, Oid operator, bool reversed);
+static Selectivity inet_mcv_his_selec(Datum *mcv_values, float4 *mcv_numbers,
+					int mcv_nvalues, Datum *his_values, int his_nvalues,
+					int red_nvalues, short int opr_order);
+static Selectivity inet_his_inclusion_join_selec(Datum *his1_values,
+					int his1_nvalues, int red1_nvalues, Datum *his2_values,
+					int his2_nvalues, int red2_nvalues, short int opr_order);
+static short int inet_inclusion_cmp(inet *left, inet *right,
+					short int opr_order);
+static short int inet_masklen_inclusion_cmp(inet *left, inet *right,
+					short int opr_order);
+static short int inet_his_match_divider(inet *boundary, inet *query,
+					short int opr_order);
+
+/*
+ * Selectivity estimation for the subnet inclusion operators
+ */
 Datum
 networksel(PG_FUNCTION_ARGS)
 {
-   PG_RETURN_FLOAT8(0.001);
+   PlannerInfo*root = (PlannerInfo *) PG_GETARG_POINTER(0);
+   Oid operator = PG_GETARG_OID(1);
+   List   *args = (List *) PG_GETARG_POINTER(2);
+   int varRelid = PG_GETARG_INT32(3),
+   his_nvalues;
+   VariableStatData vardata;
+   Node   *other;
+   boolvaronleft;
+   Selectivity selec,
+   max_mcv_selec,
+   max_his_selec;
+   Datum   constvalue,
+  *his_values;
+   Form_pg_statistic stats;
+   FmgrInfoproc;
+
+   /*
+* If expression 

Re: [HACKERS] tweaking NTUP_PER_BUCKET

2014-07-06 Thread Stephen Frost
Tomas,

* Tomas Vondra (t...@fuzzy.cz) wrote:
 I can't find the thread / test cases in the archives. I've found this
 thread in hackers:
 
 http://www.postgresql.org/message-id/caoezvif-r-ilf966weipk5by-khzvloqpwqurpak3p5fyw-...@mail.gmail.com
 
 Can you point me to the right one, please?

This:

http://www.postgresql.org/message-id/20130328235627.gv4...@tamriel.snowman.net

and I think there may have been other test cases on that thread
somewhere...  I certainly recall having at least a couple of them that I
was playing with.

I can probably find them around here somewhere too, if necessary.

Thanks,

Stephen




Re: [HACKERS] Allowing join removals for more join types

2014-07-06 Thread Tom Lane
David Rowley dgrow...@gmail.com writes:
 On 6 July 2014 03:20, Tom Lane t...@sss.pgh.pa.us wrote:
 Just to note that I've started looking at this, and I've detected a rather
 significant omission: there's no check that the join operator has anything
 to do with the subquery's grouping operator.

 hmm, good point. If I understand this correctly we can just ensure that the
 same operator is used for both the grouping and the join condition.

Well, that's sort of the zero-order solution, but it doesn't work if the
join operators are cross-type.

I poked around to see if we didn't have some code already for that, and
soon found that not only do we have such code (equality_ops_are_compatible)
but actually almost this entire patch duplicates logic that already exists
in optimizer/util/pathnode.c, to wit create_unique_path's subroutines
query_is_distinct_for et al.  So I'm thinking what this needs to turn into
is an exercise in refactoring to allow that logic to be used for both
purposes.

I notice that create_unique_path is not paying attention to the question
of whether the subselect's tlist contains SRFs or volatile functions.
It's possible that that's a pre-existing bug.
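
For reference, the shape of query the patch targets looks something like
this (hypothetical tables); the join may only be removed if the join
operator and the subquery's grouping operator agree on which values are
equal:

CREATE TABLE t1 (x int);
CREATE TABLE t2 (x int, y int);

SELECT t1.x
FROM t1
LEFT JOIN (SELECT x FROM t2 GROUP BY x) s ON s.x = t1.x;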

regards, tom lane




Re: [HACKERS] Performance regression: 9.2+ vs. ScalarArrayOpExpr vs. ORDER BY

2014-07-06 Thread Tom Lane
Andrew Gierth and...@tao11.riddles.org.uk writes:
 commit 807a40c5 fixed a bug in handling of (new in 9.2) functionality
 of ScalarArrayOpExpr in btree index quals, forcing the results of
 scans including such a qual to be treated as unordered (because the
 order can in fact be wrong).
 However, this kills any chance of using the same index _without_ the
 SAOP to get the benefit of its ordering.

Hm, good point.

 I've experimented with the attached patch, which detects when this
 situation might have occurred and does another pass to try and build
 ordered scans without the SAOP condition. However, the results may not
 be quite ideal, because at least in some test queries (not all) the
 scan with the SAOP included in the indexquals is being costed higher
 than the same scan with the SAOP moved to a Filter, which seems
 unreasonable.

I'm not convinced that that's a-priori unreasonable, since the SAOP
will result in multiple index scans under the hood.  Conceivably we
ought to try the paths with and without SAOPs all the time.
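
A hypothetical example of the shape under discussion: with an index on
(a, b) and a populated, analyzed table, a SAOP on the second column
currently forfeits the index ordering, so the planner has to add an
explicit sort unless the SAOP can be demoted to a filter:

CREATE TABLE t (a int, b int);
CREATE INDEX t_a_b_idx ON t (a, b);

EXPLAIN SELECT * FROM t
WHERE a = 1 AND b = ANY (ARRAY[1, 2, 3])
ORDER BY a, b;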

regards, tom lane




Re: [HACKERS] tweaking NTUP_PER_BUCKET

2014-07-06 Thread Tomas Vondra
On 6.7.2014 17:57, Stephen Frost wrote:
 Tomas,
 
 * Tomas Vondra (t...@fuzzy.cz) wrote:
 I can't find the thread / test cases in the archives. I've found this
 thread in hackers:

 http://www.postgresql.org/message-id/caoezvif-r-ilf966weipk5by-khzvloqpwqurpak3p5fyw-...@mail.gmail.com

 Can you point me to the right one, please?
 
 This:
 
 http://www.postgresql.org/message-id/20130328235627.gv4...@tamriel.snowman.net

Sadly, the link to the SQL does not work :-(

  http://snowman.net/~sfrost/test_case.sql => 404

 and I think there may have been other test cases on that thread
 somewhere...  I certainly recall having at least a couple of them that I
 was playing with.
 
 I can probably find them around here somewhere too, if necessary.

If you could look for them, that'd be great.

regards
Tomas




[HACKERS] Re: [COMMITTERS] pgsql: Fix decoding of MULTI_INSERTs when rows other than the last are

2014-07-06 Thread Andres Freund
Hi,

On 2014-07-06 14:01:11 +0000, Andres Freund wrote:
 Fix decoding of MULTI_INSERTs when rows other than the last are toasted.
 
 When decoding the results of a HEAP2_MULTI_INSERT (currently only
 generated by COPY FROM) toast columns for all but the last tuple
 weren't replaced by their actual contents before being handed to the
 output plugin. The reassembled toast datums were disregarded after
 every REORDER_BUFFER_CHANGE_(INSERT|UPDATE|DELETE) which is correct
 for plain inserts, updates, deletes, but not multi inserts - there we
 generate several REORDER_BUFFER_CHANGE_INSERTs for a single
 xl_heap_multi_insert record.
 
 To solve the problem add a clear_toast_afterwards boolean to
 ReorderBufferChange's union member that's used by modifications. All
 row changes but multi_inserts always set that to true, but
 multi_insert sets it only for the last change generated.
 
 Add a regression test covering decoding of multi_inserts - there was
 none at all before.
 
 Backpatch to 9.4 where logical decoding was introduced.
 
 Bug found by Petr Jelinek.

Further testing unfortunately shows that this isn't sufficient:

Commit 1b86c81d2d fixed the decoding of toasted columns for the rows
contained in one xl_heap_multi_insert record. But that's not actually
enough because heap_multi_insert() will actually first toast all
passed in rows and then emit several *_multi_insert records; one for
each page it fills with tuples.

I've attached a preliminary patch that:
Adds an XLOG_HEAP_LAST_MULTI_INSERT flag which is set in
xl_heap_multi_insert->flag, denoting that this multi_insert record is
the last one emitted by a heap_multi_insert() call. That flag is then used
in decode.c to set clear_toast_afterwards only in the right situation.

But since I am not exactly happy with that solution, I'm wondering if
somebody can think of a better approach. Alternatives I thought of
include:
* Add a boolean to xl_heap_multi_insert instead of adding the
  XLOG_HEAP_LAST_MULTI_INSERT flag. I don't have an opinion either way
  about that one.
* Only toast rows in heap_multi_insert for every page. That doesn't
  really work without major reworks though as the loop over all the
  tuples is done in a critical section.
* Don't clean up toast chunks for multi inserts at all. Instead do so after
  the next !multi_insert record. I think that's out because of the
  number of multi_insert calls COPY can produce.

The patch also adds 200 lines of COPY FROM STDIN in the regression tests
to create a long COPY that does a multi insert with toast covering two
pages. Does anybody object to that?
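
For readers following along, the scenario under test looks roughly like
this (a sketch, not the actual regression test; '...' stands for enough
rows, each with several kB of data, to toast and to fill more than one
heap page in a single COPY):

CREATE TABLE toasted_copy (id int, data text);
SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot',
                                                      'test_decoding');
COPY toasted_copy FROM STDIN;
1	untoasted1
2	toasted1-...
...
\.
SELECT substr(data, 1, 60)
FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL,
                                 'include-xids', '0');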

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From b154d9732918de190dc55ce8fecdd790ac3657e3 Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Sun, 6 Jul 2014 18:44:54 +0200
Subject: [PATCH] Fix decoding of consecutive MULTI_INSERTs emitted by one
 heap_multi_insert().

Commit 1b86c81d2d fixed the decoding of toasted columns for the rows
contained in one xl_heap_multi_insert record. But that's not actually
enough because heap_multi_insert() will actually first toast all
passed in rows and then emit several *_multi_insert records; one for
each page it fills with tuples.

Add a XLOG_HEAP_LAST_MULTI_INSERT flag which is set in
xl_heap_multi_insert->flag denoting that this multi_insert record is
the last emitted by one heap_multi_insert() call. Then use that flag
in decode.c to only set clear_toast_afterwards in the right situation.

Backpatch to 9.4, just like the previous commit.
---
 contrib/test_decoding/expected/toast.out | 201 ++-
 contrib/test_decoding/sql/toast.sql  | 199 ++
 src/backend/access/heap/heapam.c |   8 ++
 src/backend/access/rmgrdesc/heapdesc.c   |   2 +
 src/backend/replication/logical/decode.c |  12 +-
 src/include/access/heapam_xlog.h |   2 +
 6 files changed, 421 insertions(+), 3 deletions(-)

diff --git a/contrib/test_decoding/expected/toast.out b/contrib/test_decoding/expected/toast.out
index 322afdb..53442e0 100644
--- a/contrib/test_decoding/expected/toast.out
+++ b/contrib/test_decoding/expected/toast.out
@@ -97,8 +97,207 @@ SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot',
  table public.toasted_copy: INSERT: id[integer]:2 data[text]:'toasted1-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
  table public.toasted_copy: INSERT: id[integer]:3 data[text]:'untoasted2'
  table public.toasted_copy: INSERT: id[integer]:4 data[text]:'toasted2-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
+ table public.toasted_copy: INSERT: id[integer]:5 data[text]:'untoasted3'
+ table public.toasted_copy: INSERT: id[integer]:6 

Re: [HACKERS] add line number as prompt option to psql

2014-07-06 Thread Sawada Masahiko
On Fri, Jun 20, 2014 at 7:17 PM, Jeevan Chalke
jeevan.cha...@enterprisedb.com wrote:
 Hi Sawada Masahiko,

 I liked this feature. So I have reviewed it.

 Changes are straight forward and looks perfect.
 No issues found with make/make install/initdb/regression.

 However, I would suggest removing the unnecessary braces at the if, as we
 have only one statement in it.

 if (++cur_line >= INT_MAX)
 {
 cur_line = 1;
 }

 Also, the following looks wrong:

 postgres[1]=# select
 postgres[2]-# *
 postgres[3]-# from
 postgres[4]-# tab;
  a
 ---
 (0 rows)

 postgres[1]=# select
 *
 from
 tab
 postgres[2]-# where t < 10;
 ERROR:  column "t" does not exist
 LINE 5: where t < 10;
               ^

 The line number in the ERROR is 5, which is correct.
 But the line number in the psql prompt is wrong.

 To get the first 4 lines I simply used the up arrow followed by Enter, so
 I was expecting 5 in the psql prompt.
 But no, it was 2, which is certainly wrong.

 This needs to be handled carefully.

 Thanks


 On Thu, Jun 12, 2014 at 10:46 PM, Sawada Masahiko sawada.m...@gmail.com
 wrote:

 Hi all,

 The attached WIP patch adds a prompt option for psql which shows the
 current line number.
 If the user makes a syntax error in a long SQL statement, psql outputs a
 message like the following.

 ERROR:  syntax error at or near "a"
 LINE 250: hoge
 ^
 psql tells me where the syntax error occurred, but that is not much help
 when the SQL is long.
 We can write SQL with the \e (editor) command (e.g., emacs) and then we
 can see line numbers,
 but that makes the terminal log dirty and analyzing the log after an
 error difficult.
 (I think we usually use copy & paste.)

 After applying the attached patch, we can use the %l option in the prompt.

 e.g.,
 $ cat ~/.psqlrc
 \set PROMPT2 '%/[%l]%R%# '
 \set PROMPT1 '%/[%l]%R%# '
 $ psql -d postgres
 postgres[1]=# select
 postgres[2]-# *
 postgres[3]-# from
 postgres[4]-# hoge;

 The past discussion is following.

 http://www.postgresql.org/message-id/cafj8prc1rupk6+cha1jpxph3us_zipabdovmceex4wpbp2k...@mail.gmail.com

 Please give me feedback.

 Regards,

 ---
 Sawada Masahiko






 --
 Jeevan B Chalke
 Principal Software Engineer, Product Development
 EnterpriseDB Corporation
 The Enterprise PostgreSQL Company

 Phone: +91 20 30589500

 Website: www.enterprisedb.com
 EnterpriseDB Blog: http://blogs.enterprisedb.com/
 Follow us on Twitter: http://www.twitter.com/enterprisedb


Thank you for reviewing the patch, and sorry for the late response.

I have updated the patch and attached it.

 postgres[1]=# select
 *
 from
 tab
 postgres[2]-# where t < 10;
 ERROR:  column "t" does not exist
 LINE 5: where t < 10;
The attached patch handles this case.

Please give me feedback.

Regards,

---
Sawada Masahiko


psql-line-number_v2.patch
Description: Binary data



[HACKERS] how to find the order of joins from Explain command XML plan output in PostgreSQL9.3.4

2014-07-06 Thread Rajmohan C
The EXPLAIN (FORMAT XML) command in PostgreSQL 9.3.4 gives the plan
chosen by the optimizer in XML format. In my program, I have to extract
certain data about the optimizer plan from this XML output.

  I am using the *libxml2* library for parsing the XML. I have successfully
extracted information about which relations are involved and what joins are
used by parsing the XML.
  But I am unable to extract the *order of joining the relations* from
the XML output. I understand conceptually that a level-order traversal
of the binary-tree representation of the XML plan will give the correct
ordering of the joins applied.
  But I could not figure out how to get that from the XML. Does
libxml2 support anything of this sort? If not, how should I proceed to
tackle this?
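
One way to see the idea without leaving SQL (a sketch using PostgreSQL's
built-in xpath() rather than libxml2; the join query is just an example):
the join order is the document order of the nested Plan elements, so a
depth-first XPath query returns the scanned relations in joining order.
With libxml2 the equivalent is an XPath evaluation with the explain
namespace registered.

DO $$
DECLARE
    plan text;
BEGIN
    EXECUTE 'EXPLAIN (FORMAT XML) SELECT *
             FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace'
    INTO plan;
    -- relations come back in depth-first (i.e. joining) order
    RAISE NOTICE '%', xpath('//e:Relation-Name/text()', plan::xml,
        ARRAY[ARRAY['e', 'http://www.postgresql.org/2009/explain']]);
END
$$;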


Re: [HACKERS] Spinlocks and compiler/memory barriers

2014-07-06 Thread Robert Haas
On Tue, Jul 1, 2014 at 12:22 PM, Robert Haas robertmh...@gmail.com wrote:
 Since we have a Sun Studio machine in the buildfarm, we shouldn't give
 up on SPARC completely, but maybe we should only add the cases for
 sparcv8+ and above?  That at least has some chance of getting tested.

  That we have code for sparcv7 is ridiculous btw. Sparcv8 (not 8+/9) is
  from 1992. v7/sun4c has been dropped from Solaris 8 if I see that correctly.

  I agree that v8 (in contrast to v8+ aka v9 in 32bit mode) support has to
  go as well. I'd personally vote for backpatching a note to
  installation.sgml saying that it's probably not working, and not doing
  anything else there. That means we also should replace the ldstub by cas
  in the gcc inline assembly - but we have buildfarm members for that,
  so it's not too bad.

 If you want to do that, it's fine with me.  What I would do is:

 - Back-patch the addition of the sparcv8+ stuff all the way.  If
 anyone's running anything older, let them complain...
 - Remove the special case for MIPS without gcc intrinsics only in
 master, leaving the back-branches broken.  If anyone cares, let them
 complain...
 - Nothing else.

I've gone ahead and done the second of these things.

Andres, do you want to go take a stab at fixing the SPARC stuff?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Spinlocks and compiler/memory barriers

2014-07-06 Thread Andres Freund
On 2014-07-06 15:02:21 -0400, Robert Haas wrote:
 On Tue, Jul 1, 2014 at 12:22 PM, Robert Haas robertmh...@gmail.com wrote:
  Since we have a Sun Studio machine in the buildfarm, we shouldn't give
  up on SPARC completely, but maybe we should only add the cases for
  sparcv8+ and above?  That at least has some chance of getting tested.
 
   That we have code for sparcv7 is ridiculous btw. Sparcv8 (not 8+/9) is
   from 1992. v7/sun4c has been dropped from Solaris 8 if I see that correctly.
  
   I agree that v8 (in contrast to v8+ aka v9 in 32bit mode) support has to
   go as well. I'd personally vote for backpatching a note to
   installation.sgml saying that it's probably not working, and not doing
   anything else there. That means we also should replace the ldstub by cas
   in the gcc inline assembly - but we have buildfarm members for that,
   so it's not too bad.
 
  If you want to do that, it's fine with me.  What I would do is:
 
  - Back-patch the addition of the sparcv8+ stuff all the way.  If
  anyone's running anything older, let them complain...
  - Remove the special case for MIPS without gcc intrinsics only in
  master, leaving the back-branches broken.  If anyone cares, let them
  complain...
  - Nothing else.
 
 I've gone ahead and done the second of these things.

Thanks.

 Andres, do you want to go take a stab at fixing the SPARC stuff?

Will do, though it will probably take me till Thursday to come up with
the brain cycles.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Estimating selectivity of predicates involving string pattern matching

2014-07-06 Thread Rajmohan C
I recently upgraded my machine from PostgreSQL 8.3 to PostgreSQL 9.3.4. After
the upgrade, when I ran TPC-H queries for performance testing, I saw a
drastic change in timings. In particular, TPC-H query 5, which took 4
minutes to execute on 8.3, took just 15 seconds on 9.3.4. I
think the reason it took 4 minutes in the earlier version was an
error in the selectivity of the string pattern matching predicate in the query.
How was this fixed in the recent version? Has a new algorithm been incorporated
to handle this?


Re: [HACKERS] Performance regression: 9.2+ vs. ScalarArrayOpExpr vs. ORDER BY

2014-07-06 Thread Andrew Gierth
>>>>> "Tom" == Tom Lane t...@sss.pgh.pa.us writes:

  I've experimented with the attached patch, which detects when this
  situation might have occurred and does another pass to try and
  build ordered scans without the SAOP condition. However, the
  results may not be quite ideal, because at least in some test
  queries (not all) the scan with the SAOP included in the
  indexquals is being costed higher than the same scan with the SAOP
  moved to a Filter, which seems unreasonable.

 Tom> I'm not convinced that that's a-priori unreasonable, since the
 Tom> SAOP will result in multiple index scans under the hood.
 Tom> Conceivably we ought to try the paths with and without SAOPs all the
 Tom> time.

OK, here's a patch that always retries on lower SAOP clauses, assuming
that a SAOP in the first column is always applicable - or is even that
assumption too strong?

-- 
Andrew (irc:RhodiumToad)

diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index 42dcb11..cfcfbfc 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -50,7 +50,8 @@ typedef enum
 {
 	SAOP_PER_AM,/* Use ScalarArrayOpExpr if amsearcharray */
 	SAOP_ALLOW,	/* Use ScalarArrayOpExpr for all indexes */
-	SAOP_REQUIRE/* Require ScalarArrayOpExpr to be used */
+	SAOP_REQUIRE,/* Require ScalarArrayOpExpr to be used */
+	SAOP_SKIP_LOWER/* Require lower ScalarArrayOpExpr to be eliminated */
 } SaOpControl;
 
 /* Whether we are looking for plain indexscan, bitmap scan, or either */
@@ -118,7 +119,8 @@ static void get_index_paths(PlannerInfo *root, RelOptInfo *rel,
 static List *build_index_paths(PlannerInfo *root, RelOptInfo *rel,
   IndexOptInfo *index, IndexClauseSet *clauses,
   bool useful_predicate,
-  SaOpControl saop_control, ScanTypeControl scantype);
+  SaOpControl saop_control, ScanTypeControl scantype,
+  bool *saop_retry);
 static List *build_paths_for_OR(PlannerInfo *root, RelOptInfo *rel,
    List *clauses, List *other_clauses);
 static List *generate_bitmap_or_paths(PlannerInfo *root, RelOptInfo *rel,
@@ -734,6 +736,7 @@ get_index_paths(PlannerInfo *root, RelOptInfo *rel,
 {
 	List	   *indexpaths;
 	ListCell   *lc;
+	bool   saop_retry = false;
 
 	/*
 	 * Build simple index paths using the clauses.  Allow ScalarArrayOpExpr
@@ -742,7 +745,23 @@ get_index_paths(PlannerInfo *root, RelOptInfo *rel,
 	indexpaths = build_index_paths(root, rel,
    index, clauses,
    index->predOK,
-   SAOP_PER_AM, ST_ANYSCAN);
+   SAOP_PER_AM, ST_ANYSCAN, &saop_retry);
+
+	/*
+	 * If we allowed any ScalarArrayOpExprs on an index with a useful sort
+	 * ordering, then try again without them, since otherwise we miss important
+	 * paths where the index ordering is relevant.
+	 */
+	if (saop_retry)
+	{
+		indexpaths = list_concat(indexpaths,
+ build_index_paths(root, rel,
+   index, clauses,
+   index->predOK,
+   SAOP_SKIP_LOWER,
+   ST_ANYSCAN,
+   NULL));
+	}
 
 	/*
 	 * Submit all the ones that can form plain IndexScan plans to add_path. (A
@@ -779,7 +798,7 @@ get_index_paths(PlannerInfo *root, RelOptInfo *rel,
 		indexpaths = build_index_paths(root, rel,
 	   index, clauses,
 	   false,
-	   SAOP_REQUIRE, ST_BITMAPSCAN);
+	   SAOP_REQUIRE, ST_BITMAPSCAN, NULL);
 		*bitindexpaths = list_concat(*bitindexpaths, indexpaths);
 	}
 }
@@ -816,12 +835,14 @@ get_index_paths(PlannerInfo *root, RelOptInfo *rel,
  * 'useful_predicate' indicates whether the index has a useful predicate
  * 'saop_control' indicates whether ScalarArrayOpExpr clauses can be used
  * 'scantype' indicates whether we need plain or bitmap scan support
+ * 'saop_retry' indicates whether a SAOP_SKIP_LOWER retry is worthwhile
  */
 static List *
 build_index_paths(PlannerInfo *root, RelOptInfo *rel,
   IndexOptInfo *index, IndexClauseSet *clauses,
   bool useful_predicate,
-  SaOpControl saop_control, ScanTypeControl scantype)
+  SaOpControl saop_control, ScanTypeControl scantype,
+  bool *saop_retry)
 {
 	List	   *result = NIL;
 	IndexPath  *ipath;
@@ -877,7 +898,9 @@ build_index_paths(PlannerInfo *root, RelOptInfo *rel,
 	 * assuming that the scan result is ordered.  (Actually, the result is
 	 * still ordered if there are equality constraints for all earlier
 	 * columns, but it seems too expensive and non-modular for this code to be
-	 * aware of that refinement.)
+	 * aware of that refinement.) But if saop_control is SAOP_SKIP_LOWER, we
+	 * skip exactly these clauses that break sorting, and don't bother
+	 * building any paths otherwise.
 	 *
 	 * We also build a Relids set showing which outer rels are required by the
 	 * selected clauses.  Any lateral_relids are included in that, but not
@@ -901,9 +924,13 @@ build_index_paths(PlannerInfo *root, RelOptInfo *rel,
 /* Ignore if not supported by index */
 if (saop_control 

Re: [HACKERS] Should extension--unpackaged-$ver.sql's \echo use FROM unpackaged;?

2014-07-06 Thread Robert Haas
On Thu, Jul 3, 2014 at 9:03 AM, Fujii Masao masao.fu...@gmail.com wrote:
 On Thu, Jul 3, 2014 at 4:26 AM, Andres Freund and...@2ndquadrant.com wrote:
 Hi,
 Fujii noticed that I'd used
 \echo Use "CREATE EXTENSION pg_buffercache" to load this file. \quit
 in an extension upgrade script. That's obviously wrong, because it
 should say ALTER EXTENSION. The reason it's that way is that I copied
 the line from the unpackaged--1.0.sql file.
 I noticed all unpackaged--$version transition files miss the "FROM
 unpackaged" in that echo.
 I guess that's just an oversight which we should correct?

 Maybe, for user-friendliness. But I think that's not an important enough
 fix to backpatch.

Seems like a clear mistake to me.  +1 for fixing it and back-patching.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] wrapping in extended mode doesn't work well with default pager

2014-07-06 Thread Greg Stark
On Sun, Jul 6, 2014 at 8:40 AM, Sergey Muraviov
sergey.k.murav...@gmail.com wrote:
 Is there anyone who can commit the patch?

So what I'm inclined to do here (sigh) is commit it into 9.5 and
revert it in 9.4. I think it's an improvement, but there's enough
confusion and surprise about the changes from people who weren't
expecting them, and enough surprising side effects from things like
check_postgres, that letting it simmer for a full release so people can
get used to it might be worthwhile.

-- 
greg




Re: [HACKERS] postgresql.auto.conf and reload

2014-07-06 Thread Amit Kapila
On Sun, Jul 6, 2014 at 1:57 PM, Christoph Berg c...@df7cb.de wrote:
 Re: Amit Kapila 2014-07-06
CAA4eK1K=2WCD5ur8c-34NOw+XKg57Q4k0SajwSQXcwciD-=+0...@mail.gmail.com
  Another could be that during initdb all the uncommented settings be
  written to the postgresql.auto.conf file rather than to postgresql.conf.
  I think we can do this by changing code in initdb.c's setup_config().
  This will ensure that unless the user is changing settings in a mixed way
  (some by Alter System and some manually by editing postgresql.conf),
  there will be no such problem.

 That will make editing postgresql.conf totally useless, because users
 will always need to check whether their new value isn't overridden by
 something in auto.conf. This will affect *all* users, as everyone
 needs to bump shared_buffers, which is set by initdb. We'd get 10^X
 complaints from admins confused about where their config actually
 lives.

Users/admins can always confirm that from pg_settings (sourcefile can
be used to find that out).
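
For example, a minimal query to confirm where a setting actually came from
(run as superuser, since sourcefile is nulled for unprivileged users):

SELECT name, setting, sourcefile, sourceline
FROM pg_settings
WHERE name = 'shared_buffers';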

Another thing is that if a user is using a mixed way (Alter System and
manually editing postgresql.conf) to change configuration parameters, then
he needs to be a bit careful about setting the same parameter both ways.
Example:
1. The default value of autovacuum_vacuum_cost_delay is 20.
2. Edit postgresql.conf and change it to 30.
3. Perform a reload.
4. Check the value with the Show command; it will be 30.
5. Alter System set autovacuum_vacuum_cost_delay = 40;
6. Perform a reload.
7. Check the value with the Show command; it will be 40.
8. Alter System set autovacuum_vacuum_cost_delay = Default;
9. Perform a reload.
10. Check the value with the Show command; it will be 30.

In step 10, he might expect the default value for the parameter, which
is 20; however, the actual value will be 30 because of the edit made in
step 2.
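
Sketched in SQL, steps 5-10 above look like this (SHOW reports the value
with its unit):

ALTER SYSTEM SET autovacuum_vacuum_cost_delay = 40;
SELECT pg_reload_conf();
SHOW autovacuum_vacuum_cost_delay;   -- 40ms

ALTER SYSTEM SET autovacuum_vacuum_cost_delay = DEFAULT;
SELECT pg_reload_conf();
SHOW autovacuum_vacuum_cost_delay;   -- 30ms, from the manual edit in step 2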

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] 9.4 documentation: duplicate paragraph in logical decoding example

2014-07-06 Thread Fujii Masao
On Sun, Jul 6, 2014 at 2:41 AM, Christoph Moench-Tegeder
c...@burggraben.net wrote:
 Hi,

 while reading the logical decoding docs, I came across a duplicated
 paragraph in doc/src/sgml/logicaldecoding.sgml - in the current
 master branch, lines 108 to 115 are the same as lines 117 to 124.
 I've attached a patch which removes the second instance of that
 paragraph.
 In case it is intended to demonstrate that the changes in the stream
 were not consumed by pg_logical_slot_peek_changes(),

I didn't write that text, but I think that's the reason why the same
text appears twice.

  reworded like "the changes have
 not been consumed by the previous command", just to avoid making
 it look like that paragraph had been duplicated by accident :)

I'm OK with this.
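
For the archives, the behaviour that paragraph demonstrates, sketched
against a hypothetical slot name:

-- returns the pending changes but does not consume them, so repeated
-- calls return the same rows:
SELECT * FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL);

-- only this call actually advances the slot:
SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);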

Regards,

-- 
Fujii Masao




Re: [HACKERS] Request for Patch Feedback: Lag Lead Window Functions Can Ignore Nulls

2014-07-06 Thread Jeff Davis
On Wed, 2014-04-16 at 12:50 +0100, Nicholas White wrote:
 Thanks for the detailed feedback, I'm sorry it took so long to
 incorporate it. I've attached the latest version of the patch, fixing
 in particular:

I took a good look at this today.

* It fails for offset of 0 with IGNORE NULLS. Fixed (trivial).

* The tests are locale-sensitive. Fixed (trivial).

* The leadlag_common function is just way too long. I refactored the
IGNORE NULLS code into its own function (win_get_arg_ignore_nulls())
with the same API as WinGetFuncArgInPartition. This cleans things up
substantially, and makes it easier to add useful comments.

* "We're implementing the former semantics, so we'll need to correct
slightly" sounds arbitrary, but it's mandated by the standard. That
should be clarified.

* I did a lot of other refactoring within win_get_arg_ignore_nulls for
the constant case. I'm not done yet, and I'm not 100% sure it's a net
gain, because the code ended up a little longer. But the previous
version was quite hard to follow because of so many special cases around
positive versus negative offsets. For instance, having the negative
'next' value in your code actually means something quite different than
when it's positive, but it took me a while to figure that out, so I made
it into two variables. I hope my code is moving it in a direction that's
easier for others to understand.

Please let me know if you think I am making things worse with my
refactorings. Otherwise I'll keep working on this and hopefully get it
committable soon.

The attached patch is still a WIP; just posting it here in case you see
any red flags.
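
For context, the syntax the patch implements looks like this (hypothetical
data; with the patch applied, and per the SQL standard the default remains
RESPECT NULLS):

CREATE TABLE obs (i int, v int);
INSERT INTO obs VALUES (1, 10), (2, NULL), (3, 30);

SELECT i,
       lag(v)              OVER w AS lag_respect,
       lag(v) IGNORE NULLS OVER w AS lag_ignore
FROM obs
WINDOW w AS (ORDER BY i);
-- at i = 3: lag_respect is NULL (the row at i = 2), lag_ignore is 10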

Regards,
Jeff Davis

*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
*************** SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
*** 13164,13169 ****
--- 13164,13170 ----
       lag(<replaceable class="parameter">value</replaceable> <type>any</>
       [, <replaceable class="parameter">offset</replaceable> <type>integer</>
       [, <replaceable class="parameter">default</replaceable> <type>any</> ]])
+      [ { RESPECT | IGNORE } NULLS ]
      </function>
     </entry>
     <entry>
*************** SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
*** 13178,13184 ****
      <replaceable class="parameter">default</replaceable> are evaluated
      with respect to the current row.  If omitted,
      <replaceable class="parameter">offset</replaceable> defaults to 1 and
!     <replaceable class="parameter">default</replaceable> to null
     </entry>
    </row>
  
--- 13179,13187 ----
      <replaceable class="parameter">default</replaceable> are evaluated
      with respect to the current row.  If omitted,
      <replaceable class="parameter">offset</replaceable> defaults to 1 and
!     <replaceable class="parameter">default</replaceable> to null. If
!     <literal>IGNORE NULLS</> is specified then the function will be evaluated
!     as if the rows containing nulls didn't exist.
     </entry>
    </row>
  
*************** SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
*** 13191,13196 ****
--- 13194,13200 ----
       lead(<replaceable class="parameter">value</replaceable> <type>any</>
        [, <replaceable class="parameter">offset</replaceable> <type>integer</>
        [, <replaceable class="parameter">default</replaceable> <type>any</> ]])
+      [ { RESPECT | IGNORE } NULLS ]
      </function>
     </entry>
     <entry>
*************** SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
*** 13205,13211 ****
      <replaceable class="parameter">default</replaceable> are evaluated
      with respect to the current row.  If omitted,
      <replaceable class="parameter">offset</replaceable> defaults to 1 and
!     <replaceable class="parameter">default</replaceable> to null
     </entry>
    </row>
  
--- 13209,13217 ----
      <replaceable class="parameter">default</replaceable> are evaluated
      with respect to the current row.  If omitted,
      <replaceable class="parameter">offset</replaceable> defaults to 1 and
!     <replaceable class="parameter">default</replaceable> to null. If
!     <literal>IGNORE NULLS</> is specified then the function will be evaluated
!     as if the rows containing nulls didn't exist.
     </entry>
    </row>
  
*************** SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
*** 13299,13309 ****
    <note>
     <para>
      The SQL standard defines a <literal>RESPECT NULLS</> or
!     <literal>IGNORE NULLS</> option for <function>lead</>, <function>lag</>,
!     <function>first_value</>, <function>last_value</>, and
!     <function>nth_value</>.  This is not implemented in
!     <productname>PostgreSQL</productname>: the behavior is always the
!     same as the standard's default, namely <literal>RESPECT NULLS</>.
      Likewise, the standard's <literal>FROM FIRST</> or <literal>FROM LAST</>
      option for <function>nth_value</> is not implemented: only the
      default <literal>FROM FIRST</> behavior is supported.  (You can achieve
--- 

Re: [HACKERS] [RFC: bug fix?] Connection attempt block forever when the synchronous standby is not running

2014-07-06 Thread Rajeev rastogi

On 04 July 2014 19:29, MauMau wrote:

 [How to fix]
 Of course, adding -o '-c synchronous_commit=local' or -o '-c
 synchronous_standby_names=' to pg_ctl start in the recovery script
 would prevent the problem.
 
 But isn't there anything to fix in PostgreSQL?  I think the doc needs
 improvement so that users won't misunderstand that only write
 transactions would block at commit.

As of now there is no solution for this in PostgreSQL, but I had submitted a
patch, "Standalone synchronous master", in the
9.4 2014-01 CommitFest, which was rejected because of some issues. This patch
was meant to degrade the synchronous
level of the master if all synchronous standbys are down.
I plan to resubmit this with a better design sometime in 9.5.

Thanks and Regards,
Kumar Rajeev Rastogi





Re: [HACKERS] [RFC: bug fix?] Connection attempt block forever when the synchronous standby is not running

2014-07-06 Thread Andres Freund
On 2014-07-07 04:20:12 +, Rajeev rastogi wrote:
 
 On 04 July 2014 19:29, MauMau Wrote:
 
  [How to fix]
  Of course, adding -o '-c synchronous_commit=local' or -o '-c
  synchronous_standby_names=' to pg_ctl start in the recovery script
  would prevent the problem.
  
  But isn't there anything to fix in PostgreSQL?  I think the doc needs
  improvement so that users won't misunderstand that only write
  transactions would block at commit.
 
 As of now there is no solution for this in PostgreSQL, but I had submitted a
 patch, "Standalone synchronous master", in the
 9.4 2014-01 CommitFest, which was rejected because of some issues. This patch
 was meant to degrade the synchronous
 level of the master if all synchronous standbys are down.
 I plan to resubmit this with a better design sometime in 9.5.

That seems to be more or less orthogonal to the issue at hand. The problem
here is that a read-only command led to a wait. And even worse, it was a
command the user had no influence over.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] 9.4 documentation: duplicate paragraph in logical decoding example

2014-07-06 Thread Andres Freund
On 2014-07-07 12:50:17 +0900, Fujii Masao wrote:
 On Sun, Jul 6, 2014 at 2:41 AM, Christoph Moench-Tegeder
 c...@burggraben.net wrote:
  Hi,
 
  while reading the logical decoding docs, I came across a duplicated
  paragraph in doc/src/sgml/logicaldecoding.sgml - in the current
  master branch, lines 108 to 115 are the same as lines 117 to 124.
  I've attached a patch which removes the second instance of that
  paragraph.
  In case it is intended to demonstrate that the changes in the stream
  were not consumed by pg_logical_slot_peek_changes(),
 
 I didn't write that text, but I think that's the reason why the same
 texts exist.

It was me... I was being cute^Wtired^Wuhhh and thought the duplicated
content would demonstrate that nothing has happened due to the decoding
stream.
But since you're now the second person confused by it, I guess we'd better
rephrase it.

   reworded like "the changes have
   not been consumed by the previous command", just to avoid making
   it look like that paragraph had been duplicated by accident :)
 
 I'm OK with this.

Care to submit a patch for it, Christoph?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

