[HACKERS] Postgresql c function returning one row with 2 fields
I'm new to PostgreSQL C functions and I started by following examples. I want to write a simple function that runs an SQL query with a passed-in parameter, evaluates it, and returns 2 fields as sums (to keep it simpler for now). The function below has a problem passing the check

    if (get_call_result_type(fcinfo, &resultTypeId, &resultTupleDesc) != TYPEFUNC_COMPOSITE)

If I remove this line I get 1 integer as result from this query

    select * from pdc_imuanno(2012);

and an error from

    select (a).* from pdc_imuanno(2012) a;

because the result is not a composite type. My question is: can I prepare a template for the tuple this way, or is it not correct?

    resultTupleDesc = CreateTemplateTupleDesc(2, false);
    TupleDescInitEntry(resultTupleDesc, (AttrNumber) 1, "abp1", FLOAT4OID, -1, 0);
    TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "abp2", FLOAT4OID, -1, 0);

And in get_call_result_type(fcinfo, &resultTypeId, &resultTupleDesc), what is fcinfo and where does it come from? Thanks a lot for any help.

Luca

Datum
test_query(PG_FUNCTION_ARGS)
{
	TupleDesc	resultTupleDesc, tupledesc;
	Oid			resultTypeId;
	Datum		retvals[2];
	bool		retnulls[2];
	HeapTuple	rettuple;
	char		query[1024];
	int			ret;
	int			proc;
	int			j;
	bool		bisnull, cisnull;
	float		abp1 = 0;
	float		abp2 = 0;

	sprintf(query, "SELECT anno, abp1::real, abp2::real FROM imu.calcolo "
			"WHERE anno = %d AND codfis LIKE 'MR%%';", PG_GETARG_INT32(0));

	SPI_connect();
	ret = SPI_exec(query, 0);
	proc = SPI_processed;

	if (ret > 0 && SPI_tuptable != NULL)
	{
		HeapTuple	tuple;
		SPITupleTable *tuptable = SPI_tuptable;

		tupledesc = SPI_tuptable->tupdesc;
		for (j = 0; j < proc; j++)
		{
			tuple = tuptable->vals[j];
			abp1 += DatumGetFloat4(SPI_getbinval(tuple, tupledesc, 2, &bisnull));
			abp2 += DatumGetFloat4(SPI_getbinval(tuple, tupledesc, 3, &cisnull));
		}
	}

	resultTupleDesc = CreateTemplateTupleDesc(2, false);
	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 1, "abp1", FLOAT4OID, -1, 0);
	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "abp2", FLOAT4OID, -1, 0);

	if (get_call_result_type(fcinfo, &resultTypeId, &resultTupleDesc) != TYPEFUNC_COMPOSITE)
	{
		ereport(ERROR,
				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
				 errmsg("function returning record called in context "
						"that cannot accept type record")));
	}

	resultTupleDesc = BlessTupleDesc(resultTupleDesc);

	SPI_finish();

	retvals[0] = Float4GetDatum(abp1);
	retvals[1] = Float4GetDatum(abp2);
	retnulls[0] = false;
	retnulls[1] = false;

	rettuple = heap_form_tuple(resultTupleDesc, retvals, retnulls);

	PG_RETURN_DATUM(HeapTupleGetDatum(rettuple));
}

--
View this message in context: http://postgresql.1045698.n5.nabble.com/Postgresql-c-function-returning-one-row-with-2-fileds-tp5777581.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] logical changeset generation v6.5
On 2013-11-08 17:11:58 -0500, Peter Eisentraut wrote: On 11/8/13, 3:03 PM, Robert Haas wrote: On Fri, Nov 8, 2013 at 12:38 PM, Robert Haas robertmh...@gmail.com wrote: On Tue, Nov 5, 2013 at 10:21 AM, Andres Freund and...@2ndquadrant.com wrote: Attached to this mail and in the xlog-decoding-rebasing-remapping branch in my git[1] repository you can find the next version of the patchset that:

I have pushed patches #1 and #2 from this series as a single commit, after some editing. And I've also pushed patch #13, which is an almost-totally-unrelated improvement that has nothing to do with logical replication, but is useful all the same.

Please fix this new compiler warning:

pg_regress_ecpg.c: In function ‘main’:
pg_regress_ecpg.c:170:2: warning: passing argument 3 of ‘regression_main’ from incompatible pointer type [enabled by default]
In file included from pg_regress_ecpg.c:19:0:
../../../../src/test/regress/pg_regress.h:55:5: note: expected ‘init_function’ but argument is of type ‘void (*)(void)’

Hrmpf... I usually run something akin to

# make -j3 -s && (cd contrib && make -j3 -s)

and then in a separate step

# make -s check-world

this is so I see compiler warnings before drowning them in check-world's output. But ecpg/test isn't built during make in src/interfaces/ecpg, just during make check there. ISTM ecpg's regression tests should be built (not run!) during $(recurse), not just during make check. A patch towards that end is attached. Also attached is the fix for the compilation warning itself.

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

From 99aaf866b315af21fddd858c3d2922ca21918f8c Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Sat, 9 Nov 2013 11:49:27 +0100
Subject: [PATCH 1/2] Recurse into ecpg/test during normal builds, instead of just during make check.
ecpg's make check, which currently compiles the contents of ecpg/test, is only executed during make check-world and not make all, making warnings during the compilation of the tests hard to spot manually. Instead, build the test framework during a normal toplevel make all.
---
 src/interfaces/ecpg/Makefile | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/src/interfaces/ecpg/Makefile b/src/interfaces/ecpg/Makefile
index e397210..4f05ae4 100644
--- a/src/interfaces/ecpg/Makefile
+++ b/src/interfaces/ecpg/Makefile
@@ -2,7 +2,7 @@
 subdir = src/interfaces/ecpg
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-SUBDIRS = include pgtypeslib ecpglib compatlib preproc
+SUBDIRS = include pgtypeslib ecpglib compatlib preproc test
 
 # Suppress parallel build of subdirectories to avoid a bug in gmake 3.82, cf
 # http://savannah.gnu.org/bugs/?30653
@@ -16,15 +16,13 @@ endif
 
 $(recurse)
 
-all-pgtypeslib-recurse all-ecpglib-recurse all-compatlib-recurse all-preproc-recurse: all-include-recurse
+all-pgtypeslib-recurse all-ecpglib-recurse all-compatlib-recurse all-preproc-recurse all-test-recurse: all-include-recurse
 all-compatlib-recurse: all-ecpglib-recurse
 all-ecpglib-recurse: all-pgtypeslib-recurse
 
-install-pgtypeslib-recurse install-ecpglib-recurse install-compatlib-recurse install-preproc-recurse: install-include-recurse
+install-pgtypeslib-recurse install-ecpglib-recurse install-compatlib-recurse install-preproc-recurse install-test-recurse: install-include-recurse
 install-compatlib-recurse: install-ecpglib-recurse
 install-ecpglib-recurse: install-pgtypeslib-recurse
 
-
-clean distclean maintainer-clean:
-	$(MAKE) -C test clean
+all-test-recurse: all-preproc-recurse all-ecpglib-recurse all-compatlib-recurse
 
 check checktcp installcheck: all
 	$(MAKE) -C test $@
-- 
1.8.3.251.g1462b67

From a0aa8180a849751df6fdcf49a83ff87d906c8aed Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Sat, 9 Nov 2013 11:53:33 +0100
Subject: [PATCH 2/2]
ecpg: Adapt to recent pg_regress init_function API changes

This fixes the warning during ecpg's test compilation introduced in 9b4d52f2095be96ca238ce41f6963ec56376491f.
---
 src/interfaces/ecpg/test/pg_regress_ecpg.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/interfaces/ecpg/test/pg_regress_ecpg.c b/src/interfaces/ecpg/test/pg_regress_ecpg.c
index d01703e..740b566 100644
--- a/src/interfaces/ecpg/test/pg_regress_ecpg.c
+++ b/src/interfaces/ecpg/test/pg_regress_ecpg.c
@@ -159,7 +159,7 @@ ecpg_start_test(const char *testname,
 }
 
 static void
-ecpg_init(void)
+ecpg_init(int argc, char *argv[])
 {
 	/* nothing to do here at the moment */
 }
-- 
1.8.3.251.g1462b67
Re: [HACKERS] Row-security writer-side checks proposal
On 11/08/2013 11:03 PM, Robert Haas wrote: Separate READ DELETE etc would only be interesting if we wanted to let someone DELETE rows they cannot SELECT. Since we have DELETE ... RETURNING, and since users can write a predicate function for DELETE that leaks the information even if we didn't, in practice if you give the user any READ right you've given them all of them. So I don't think we can support that (except maybe by column RLS down the track).

Well, we could require SELECT privilege when a RETURNING clause is present...

Absolutely could. Wouldn't stop them grabbing the data via a predicate function on the update/delete, though, and we can't sanely (IMO) require SELECT rights if they want to use non-LEAKPROOF functions/operators either. I do think this needs looking at further, but I suspect it's an area where Pg's flexibility will make life harder.

--
Craig Ringer                      http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
[HACKERS] Race condition in b-tree page deletion
The B-tree page deletion algorithm has a race condition. We don't allow a page to be deleted if it's the rightmost child of its parent, but that situation can change after we check for it.

Problem
---

We check that the page to be deleted is not the rightmost child of its parent, and then lock its left sibling, the page itself, its right sibling, and the parent, in that order. However, if the parent page is split after the check but before acquiring the locks, the target page might become the rightmost child, if the split happens at the right place. That leads to an error in vacuum (I reproduced this by setting a breakpoint in a debugger):

ERROR: failed to delete rightmost child 41 of block 3 in index "foo_pkey"

We currently re-check that the page is still the rightmost child, and throw the above error if it's not. We could easily just give up rather than throw an error, but that approach doesn't scale to half-dead pages. To recap, although we don't normally allow deleting the rightmost child, if the page is the *only* child of its parent, we delete the child page and mark the parent page as half-dead in one atomic operation. But before we do that, we check that the parent can later be deleted, by checking that it in turn is not the rightmost child of the grandparent (potentially recursing all the way up to the root). But the same situation can arise there - the grandparent can be split while we're not holding the locks. We end up with a half-dead page that we cannot delete. To make things worse, the keyspace of the deleted page has already been transferred to its right sibling. As the README points out, the keyspace at the grandparent level is out-of-whack until the half-dead page is deleted, and if enough tuples with keys in the transferred keyspace are inserted, the page might get split and a downlink might be inserted into the grandparent that is out-of-order.
That might not cause any serious problem if it's transient (as the README ponders), but is surely bad if it stays that way.

Solutions
---

1. The simplest solution I can see is to never delete internal pages, avoiding the whole concept of half-dead pages. That obviously leads to some bloat, though.

2. The second-simplest solution I see is to keep locked the whole chain of pages that will be deleted, and delete all of them as one atomic WAL-logged operation. Ie. the leaf page, and all the parent pages above it that will become half-dead, and the parent of the last half-dead page.

3. Another approach would be to get rid of the "can't delete rightmost child" limitation. We currently have that limitation because it ensures that we never need to change the high-key of a page. If we delete a page that is the rightmost child of its parent, we transfer the deleted keyspace from the parent page to its right sibling. To do that, we need to update the high key of the parent, as well as the downlink of the right sibling at the grandparent level. That's a bit complicated, because updating the high key might require splitting the page. Still, I think it would be doable: When we delete a page that is the rightmost child of its parent, we mark the right sibling with an "out-of-whack" flag. It indicates that the keyspace of that page is not in sync in the parent level. Searches work fine, as there are no keys in the keyspace that is out-of-whack, but before allowing any insertions to the page, the situation needs to be fixed. That is the responsibility of the next insert to the page that comes along. To fix it, the parent's left sibling's high key needs to be updated, as well as the parent's downlink in the grandparent. We can do that in a few discrete steps; first update the high key, then update the downlink, and finally, once the high key and parent's downlink are in sync with the reality at the bottom level, clear the "out-of-whack" flag.

On reflection, approach 2 seems best.
It's fairly simple, although it potentially requires holding many pages locked simultaneously. In practice, B-trees are typically not very deep. We have a limit of 4 full-page images in a single WAL record, but since all the pages are empty, we can just WAL-log all the information needed to reconstruct the pages in deleted state without using full-page images. This also might be back-patchable, although it requires changing the WAL record format, so it would break replication from a higher minor version to a lower one.

- Heikki
Re: [HACKERS] Race condition in b-tree page deletion
Heikki Linnakangas hlinnakan...@vmware.com writes: 2. The second-simplest solution I see is to keep locked the whole chain of pages that will be deleted, and delete all of them as one atomic WAL-logged operation. Ie. the leaf page, and all the parent pages above it that will become half-dead, and the parent of the last half-dead page.

This would be more tenable if we could put a known limit on the number of pages to be changed at once. I'm not too awake at the moment, so maybe this is not possible, but could we simply decide in advance that we won't propagate page deletion up more than a-small-number of levels? It'd require allowing a page to remain half-dead until some other vacuum comes along and fixes it, but that seems OK.

regards, tom lane
Re: [HACKERS] Race condition in b-tree page deletion
On 09.11.2013 18:24, Tom Lane wrote: Heikki Linnakangas hlinnakan...@vmware.com writes: 2. The second-simplest solution I see is to keep locked the whole chain of pages that will be deleted, and delete all of them as one atomic WAL-logged operation. Ie. the leaf page, and all the parent pages above it that will become half-dead, and the parent of the last half-dead page. This would be more tenable if we could put a known limit on the number of pages to be changed at once. I'm not too awake at the moment, so maybe this is not possible, but could we simply decide in advance that we won't propagate page deletion up more than a-small-number of levels? It'd require allowing a page to remain half-dead until some other vacuum comes along and fixes it, but that seems OK. I don't feel comfortable leaving pages in half-dead state. Looking back at the archives, your original design ten years ago did exactly that (http://www.postgresql.org/message-id/8281.1045089...@sss.pgh.pa.us), but that turned out to be a bad idea (http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=70ce5c908202ada7616f7afded8a91bbf2742471). Mind you, even if we check that the half-dead page at some upper level is eventually deletable because it's not the rightmost child, it might become the rightmost child if we don't hold the lock so that the next vacuum cannot delete it, and we're back to square one. We could just punt if more than X pages would need to be changed. That would mean that we never delete pages at the top (h - X) levels of the tree. In practice that should be fine if X is high enough. As a data point, GIN list page deletion holds 16 pages locked at once (GIN_NDELETE_AT_ONCE). Normally, a 16-level deep B-tree is pretty huge. As another data point, in the worst case the keys are so wide that only 2 keys fit on each B-tree page. 
With that assumption, the tree can be at most 32 levels deep if you just insert into it with no deletions, since MaxBlockNumber ~= 2^32 (I may be off by one in either direction, not sure). Deletions make it more complicated, but I would be pretty surprised if you can construct a B-tree taller than, say, 40 levels. Things get tricky if shared_buffers is very small; with shared_buffers=16, you certainly can't hold more than 16 buffers pinned at once. (In fact, I think ginfast.c already has a problem with that...)

- Heikki
Re: [HACKERS] Race condition in b-tree page deletion
On 09.11.2013 18:49, Heikki Linnakangas wrote: We could just punt if more than X pages would need to be changed. That would mean that we never delete pages at the top (h - X) levels of the tree. In practice that should be fine if X is high enough. As a data point, GIN list page deletion holds 16 pages locked at once (GIN_NDELETE_AT_ONCE). Normally, a 16-level deep B-tree is pretty huge. As another data point, in the worst case the keys are so wide that only 2 keys fit on each B-tree page. With that assumption, the tree can be at most 32 levels deep if you just insert into it with no deletions, since MaxBlockNumber ~= 2^32 (I may be off by one in either direction, not sure). Deletions make it more complicated, but I would be pretty surprised if you can construct a B-tree taller than, say, 40 levels.

On further thought, it's worse than that. To delete a page, you need to lock the left and right siblings, so you need 3 pages locked for each level you delete...

- Heikki
Re: [HACKERS] Race condition in b-tree page deletion
On 09.11.2013 19:18, Heikki Linnakangas wrote: On 09.11.2013 18:49, Heikki Linnakangas wrote: We could just punt if more than X pages would need to be changed. That would mean that we never delete pages at the top (h - X) levels of the tree. In practice that should be fine if X is high enough. As a data point, GIN list page deletion holds 16 pages locked at once (GIN_NDELETE_AT_ONCE). Normally, a 16-level deep B-tree is pretty huge. As another data point, in the worst case the keys are so wide that only 2 keys fit on each B-tree page. With that assumption, the tree can be at most 32 levels deep if you just insert into it with no deletions, since MaxBlockNumber ~= 2^32 (I may be off by one in either direction, not sure). Deletions make it more complicated, but I would be pretty surprised if you can construct a B-tree taller than, say, 40 levels. On further thought, it's worse than that. To delete a page, you need to lock the left and right siblings, so you need 3 pages locked for each level you delete...

On further further thought, we don't need to unlink the pages immediately. It's enough to mark them as half-dead and remove the downlink to the upmost half-dead page. Marking a page as half-dead is as good as deleting it outright as far as searches and insertions are concerned. Actually unlinking the pages from the left and right siblings can be done at any later time, and doesn't need to be done in any particular order. So, my original musings about the number of concurrent locks needed still hold.

- Heikki
Re: [HACKERS] Suggestion: Issue warning when calling SET TRANSACTION outside transaction block
On Fri, Nov 8, 2013 at 5:36 PM, Tom Lane t...@sss.pgh.pa.us wrote: [ I'm so far behind ... ] Bruce Momjian br...@momjian.us writes: Applied. Thank you for all your suggestions. I thought the suggestion had been to issue a *warning*. How did that become an error? This patch seems likely to break applications that may have just been harmlessly sloppy about when they were issuing SETs and/or what flavor of SET they use. We don't for example throw an error for START TRANSACTION with an open transaction or COMMIT or ROLLBACK without one --- how can it possibly be argued that these operations are more dangerous than those cases? I'd personally have voted for using NOTICE.

Well, LOCK TABLE throws an error, so it's not without precedent.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] UTF8 national character data type support WIP patch and list of open issues.
MauMau wrote: Let me repeat myself: I think the biggest and immediate issue is that PostgreSQL does not support national character types at least officially. "Officially" means the description in the manual. So I don't have strong objection against the current (hidden) implementation of nchar types in PostgreSQL which are just synonyms, as long as the official support is documented. Serious users don't want to depend on hidden features.

I agree with you there. Actually it is somewhat documented in http://www.postgresql.org/docs/9.3/static/features-sql-standard.html as F421, but that requires that you read the SQL standard.

However, doesn't the current synonym approach have any problems? Wouldn't it produce any trouble in the future? If we treat nchar as char, we lose the fact that the user requested nchar. Can we lose the fact so easily and produce irreversible result as below?

I don't think that it is a problem. According to the SQL standard, the user requested a CHAR or VARCHAR with an encoding of the choice of the DBMS. PostgreSQL chooses the database encoding. In a way, it is similar to using the data type serial. The column will be displayed as integer, and the information that it was a serial can only be inferred from the DEFAULT value. It seems that this is working fine and does not cause many problems, so I don't see why things should be different here. Again, for serial the behaviour is well documented, so that seconds your request for more documentation. Would you like to write a patch for that?

Yours, Laurenz Albe
[HACKERS] Fw: [COMMITTERS] pgsql: Fix blatantly broken record_image_cmp() logic for pass-by-value
Forwarding to -hackers.

----- Forwarded Message -----
From: Kevin Grittner kgri...@ymail.com
To: Tom Lane t...@sss.pgh.pa.us
Cc: pgsql-committ...@postgresql.org pgsql-committ...@postgresql.org
Sent: Friday, November 8, 2013 3:19 PM
Subject: Re: [COMMITTERS] pgsql: Fix blatantly broken record_image_cmp() logic for pass-by-value

Tom Lane t...@sss.pgh.pa.us wrote: That is a serious compiler bug which you should file with your distro forthwith.

I distilled it down to the simplest case I could find which failed to produce the warning; attached. Do you agree that it is a compiler bug that this generates no warning? Compile lines used:

gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -D_GNU_SOURCE -c -o warning_test.o warning_test.c -MMD -MP

clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -g -D_GNU_SOURCE -c -o warning_test.o warning_test.c -MMD -MP

It is probably significant that if I simplify the while loop condition to just use one variable I do get the warning.

It definitely does show up with what I'm using: gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3)

The warning shows up, or the bug does? It might be worth trawling the buildfarm records to see which compilers did or didn't warn before.

I'll get this filed first with the version I'm using, then look around to see if there is anything else to report.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

int test_warnings(int x, int y, int z);

int test_warnings(int x, int y, int z)
{
	int i;

	switch (x)
	{
		case 1:
			if (y != z)
			{
				i = (y < z) ? -1 : 1;
			}
			break;
		case 2:
			if (y != z)
			{
				i = (y < z) ? -1 : 1;
			}
			break;
		default:
			return 0;
	}
	return i;
}
[HACKERS] Fw: [COMMITTERS] pgsql: Fix blatantly broken record_image_cmp() logic for pass-by-value
Forwarded to -hackers

----- Forwarded Message -----
From: Kevin Grittner kgri...@ymail.com
To: Kevin Grittner kgri...@ymail.com; Tom Lane t...@sss.pgh.pa.us
Cc: pgsql-committ...@postgresql.org pgsql-committ...@postgresql.org
Sent: Friday, November 8, 2013 4:33 PM
Subject: Re: [COMMITTERS] pgsql: Fix blatantly broken record_image_cmp() logic for pass-by-value

Kevin Grittner kgri...@ymail.com wrote: I distilled it down to the simplest case I could find which failed to produce the warning; attached.

For a compiler which seems to like to generate warnings for really esoteric cases, clang falls down rather badly on uninitialized variables -- at least on the package for Ubuntu 12.10. It does not complain at all about this:

int warning_test(int a);

int warning_test(int a)
{
	int result;

	if (a == 1)
		result = 1;
	return result;
}

I assume that everyone here agrees that merits a warning? Bug reports filed.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Fw: [COMMITTERS] pgsql: Fix blatantly broken record_image_cmp() logic for pass-by-value
Kevin Grittner kgri...@ymail.com wrote:

int warning_test(int a);

int warning_test(int a)
{
	int result;

	if (a == 1)
		result = 1;
	return result;
}

I had to file separate bug reports for gcc and clang. I have already gotten a response on the clang bug report using the above test code. It was closed with this response:

Fixed from clang 3.3

warning_test.c:8:6: warning: variable 'result' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
        if (a == 1)
            ^~~~~~
warning_test.c:11:9: note: uninitialized use occurs here
        return result;
               ^~~~~~
warning_test.c:8:2: note: remove the 'if' if its condition is always true
        if (a == 1)
        ^~~~~~~~~~~
warning_test.c:6:14: note: initialize the variable 'result' to silence this warning
        int result;
                   ^
                    = 0

** Changed in: clang (Ubuntu)
       Status: New => Fix Released

I take that to say that the bug will be fixed in clang version 3.3, which is not available in any production release of Ubuntu.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] logical changeset generation v6.5
On 11/05/2013 10:21 AM, Andres Freund wrote: Hi,

Attached to this mail and in the xlog-decoding-rebasing-remapping branch in my git[1] repository you can find the next version of the patchset that:

* Fixes full table rewrites of catalog tables using the method Robert prefers (which is to log rewrite mappings to disk)
* Extract the REPLICA IDENTITY as configured with ALTER TABLE for the old tuple for UPDATEs and DELETEs
* Much better support for synchronous replication
* Better resource cleanup (as in we need less local WAL available)
* Lots of smaller fixes

The change around REPLICA IDENTITY is *incompatible* to older output plugins since we now log tuples using the table's TupleDesc, not the indexes.

My updated plugin is getting rows with change->tp.oldtuple as NULL on updates, either with the default PRIMARY KEY identity or with a FULL identity. When I try the test_decoding plugin on UPDATE I get rows like:

table do_inventory: UPDATE: ii_id[int8]:251 ii_in_stock[int8]:1 ii_reserved[int8]:144 ii_total_sold[int8]:911

which I think is only data from the new tuple. The lack of old-key in the output makes me think the test decoding plugin also isn't getting the old tuple. (This is with your patch-set rebased on top of ac4ab97ec05ea900db0f14d428cae2e79832e02d which includes the patches Robert committed the other day; I can't rule out that I didn't break something in the rebase.)

Robert, I'd be very grateful if you could have a look at patch 0007 implementing what we've discussed. I kept it separate to make it easier to look at it in isolation, but I think in the end it partially should be merged into the wal_level=logical patch. I still think the wide cmin/cmax solution is more elegant and has wider applicability, but this works as well, although it's about 5 times the code. Comments?
[1]: http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=summary

Greetings,

Andres Freund
Re: [HACKERS] logical changeset generation v6.5
On 2013-11-09 17:36:49 -0500, Steve Singer wrote: On 11/05/2013 10:21 AM, Andres Freund wrote: Hi,

Attached to this mail and in the xlog-decoding-rebasing-remapping branch in my git[1] repository you can find the next version of the patchset that:

* Fixes full table rewrites of catalog tables using the method Robert prefers (which is to log rewrite mappings to disk)
* Extract the REPLICA IDENTITY as configured with ALTER TABLE for the old tuple for UPDATEs and DELETEs
* Much better support for synchronous replication
* Better resource cleanup (as in we need less local WAL available)
* Lots of smaller fixes

The change around REPLICA IDENTITY is *incompatible* to older output plugins since we now log tuples using the table's TupleDesc, not the indexes.

My updated plugin is getting rows with change->tp.oldtuple as NULL on updates, either with the default PRIMARY KEY identity or with a FULL identity. When I try the test_decoding plugin on UPDATE I get rows like: table do_inventory: UPDATE: ii_id[int8]:251 ii_in_stock[int8]:1 ii_reserved[int8]:144 ii_total_sold[int8]:911 which I think is only data from the new tuple. The lack of old-key in the output makes me think the test decoding plugin also isn't getting the old tuple. (This is with your patch-set rebased on top of ac4ab97ec05ea900db0f14d428cae2e79832e02d which includes the patches Robert committed the other day; I can't rule out that I didn't break something in the rebase.)

I've pushed an updated tree to git that contains that and some more fixes:

http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=shortlog;h=refs/heads/xlog-decoding-rebasing-remapping
git://git.postgresql.org/git/users/andresfreund/postgres.git

I'll send out an email with details sometime soon.
Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Fw: [COMMITTERS] pgsql: Fix blatantly broken record_image_cmp() logic for pass-by-value
Kevin Grittner kgri...@ymail.com wrote: I distilled it down to the simplest case I could find which failed to produce the warning; attached. Do you agree that it is a compiler bug that this generates no warning?

gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -D_GNU_SOURCE -c -o warning_test.o warning_test.c -MMD -MP

The actual test case I sent for gcc is what is attached here. Sorry for attaching the wrong file before.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

int warning_test(int a, int b, int c);

int warning_test(int a, int b, int c)
{
	int result = 0;
	int i1;
	int i2;

	i1 = i2 = 0;
	while (i1 < 2 || i2 < 2)
	{
		int cmpresult;

		switch (a)
		{
			case 1:
				if (b != c)
					cmpresult = (b < c) ? -1 : 1;
				break;
		}
		if (cmpresult < 0)
		{
			result = -1;
			break;
		}
		i1++, i2++;
	}
	return result;
}
Re: [HACKERS] Race condition in b-tree page deletion
On Sat, Nov 9, 2013 at 12:40 PM, Heikki Linnakangas hlinnakan...@vmware.com wrote:
> On 09.11.2013 19:18, Heikki Linnakangas wrote:
>> On 09.11.2013 18:49, Heikki Linnakangas wrote:
>>> We could just punt if more than X pages would need to be changed. That
>>> would mean that we never delete pages at the top (h - X) levels of the
>>> tree. In practice that should be fine if X is high enough. As a data
>>> point, GIN list page deletion holds 16 pages locked at once
>>> (GIN_NDELETE_AT_ONCE). Normally, a 16-level deep B-tree is pretty huge.
>>> As another data point, in the worst case the keys are so wide that only
>>> 2 keys fit on each B-tree page. With that assumption, the tree can be
>>> at most 32 levels deep if you just insert into it with no deletions,
>>> since MaxBlockNumber ~= 2^32 (I may be off by one in either direction,
>>> not sure). Deletions make it more complicated, but I would be pretty
>>> surprised if you can construct a B-tree taller than, say, 40 levels.
>>
>> On further thought, it's worse than that. To delete a page, you need to
>> lock the left and right siblings, so you need 3 pages locked per level
>> you delete...
>
> On further further thought, we don't need to unlink the pages
> immediately. It's enough to mark them as half-dead and remove the
> downlink to the topmost half-dead page. Marking a page as half-dead is
> as good as deleting it outright as far as searches and insertions are
> concerned. Actually unlinking the pages from the left and right siblings
> can be done at any later time, and doesn't need to be done in any
> particular order. So, my original musings about the number of concurrent
> locks needed still hold.

I think we've tried pretty hard to avoid algorithms where the maximum number of lwlocks that must be held at one time is not a constant, and I think we're in for a bad time of it if we start to deviate from that principle.
I'm not sure what to do about this problem, but I think locking N levels of the tree at once, where N can be as large as the tree is deep, is probably a bad plan, whether the number of locks required is N or 3N.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Comment - uniqueness of relfilenode
On Thu, Nov 7, 2013 at 10:56 AM, Antonin Houska antonin.hou...@gmail.com wrote:
> catalog/catalog.c:GetNewRelFileNode() and its calls indicate that the
> following change makes sense:
>
> diff --git a/src/include/storage/relfilenode.h b/src/include/storage/relfilenode.h
> index 75f897f..7190974 100644
> --- a/src/include/storage/relfilenode.h
> +++ b/src/include/storage/relfilenode.h
> @@ -55,7 +55,7 @@ typedef enum ForkNumber
>   * relNode identifies the specific relation. relNode corresponds to
>   * pg_class.relfilenode (NOT pg_class.oid, because we need to be able
>   * to assign new physical files to relations in some situations).
> - * Notice that relNode is only unique within a particular database.
> + * Notice that relNode is only unique within a particular tablespace.
>   *
>   * Note: spcNode must be GLOBALTABLESPACE_OID if and only if dbNode is
>   * zero. We support shared relations only in the global tablespace.
>
> // Antonin Houska (Tony)

Technically speaking, I think it's only guaranteed to be unique within a database-tablespace combination. In other words, the same OID can be reused as a relfilenode if *either* of those two values differs.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
[HACKERS] Re: Fix pg_isolation_regress to work outside its build directory compiler warning
On Sat, Nov 9, 2013 at 12:45 AM, David Rowley dgrowle...@gmail.com wrote:
> Commit 9b4d52f2095be96ca238ce41f6963ec56376491f introduced a new
> compiler warning to the Windows Visual Studio build:
>
> D:\Postgres\b\pgsql.sln (default target) (1) ->
> D:\Postgres\b\pg_regress_ecpg.vcxproj (default target) (88) ->
> (ClCompile target) ->
>   src\interfaces\ecpg\test\pg_regress_ecpg.c(170): warning C4113: 'void (__cdecl *)(void)' differs in parameter lists from 'init_function' [D:\Postgres\b\pg_regress_ecpg.vcxproj]
>   1 Warning(s)
>
> The attached fixes it.

Committed, thanks.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] logical changeset generation v6.5
On 11/09/2013 05:42 PM, Andres Freund wrote:
> On 2013-11-09 17:36:49 -0500, Steve Singer wrote:
>> On 11/05/2013 10:21 AM, Andres Freund wrote:
>>> Hi,
>>>
>>> Attached to this mail and in the xlog-decoding-rebasing-remapping
>>> branch in my git[1] repository you can find the next version of the
>>> patchset that:
>>> * Fixes full table rewrites of catalog tables using the method Robert
>>>   prefers (which is to log rewrite mappings to disk)
>>> * Extracts the REPLICA IDENTITY as configured with ALTER TABLE for the
>>>   old tuple for UPDATEs and DELETEs
>>> * Much better support for synchronous replication
>>> * Better resource cleanup (as in we need less local WAL available)
>>> * Lots of smaller fixes
>>>
>>> The change around REPLICA IDENTITY is *incompatible* to older output
>>> plugins since we now log tuples using the table's TupleDesc, not the
>>> indexes.
>>
>> My updated plugin is getting rows with change->tp.oldtuple as NULL on
>> updates, either with the default PRIMARY KEY identity or with a FULL
>> identity. When I try the test_decoding plugin on UPDATE I get rows like:
>>
>> table do_inventory: UPDATE: ii_id[int8]:251 ii_in_stock[int8]:1 ii_reserved[int8]:144 ii_total_sold[int8]:911
>>
>> which I think is only data from the new tuple. The lack of old-key in
>> the output makes me think the test_decoding plugin also isn't getting
>> the old tuple. (This is with your patch-set rebased on top of
>> ac4ab97ec05ea900db0f14d428cae2e79832e02d, which includes the patches
>> Robert committed the other day; I can't rule out that I didn't break
>> something in the rebase.)
>
> I've pushed an updated tree to git that contains that and some more
> fixes:
>
> http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=shortlog;h=refs/heads/xlog-decoding-rebasing-remapping
> git://git.postgresql.org/git/users/andresfreund/postgres.git
>
> I'll send out an email with details sometime soon.
93c5c2a171455763995cef0afa907bcfaa405db4 still gives me the following:

test1=# update disorder.do_inventory set ii_in_stock=2 where ii_id=251;
UPDATE 1
LOG:  tuple in table with oid: 35122 without primary key

test1=# \d disorder.do_inventory
     Table "disorder.do_inventory"
    Column     |  Type  | Modifiers
---------------+--------+-----------
 ii_id         | bigint | not null
 ii_in_stock   | bigint |
 ii_reserved   | bigint |
 ii_total_sold | bigint |
Indexes:
    "do_inventory_pkey" PRIMARY KEY, btree (ii_id)
Foreign-key constraints:
    "do_inventory_item_ref" FOREIGN KEY (ii_id) REFERENCES disorder.do_item(i_id) ON DELETE CASCADE
Referenced by:
    TABLE "disorder.do_item" CONSTRAINT "do_item_inventory_ref" FOREIGN KEY (i_id) REFERENCES disorder.do_inventory(ii_id) DEFERRABLE INITIALLY DEFERRED
    TABLE "disorder.do_restock" CONSTRAINT "do_restock_inventory_ref" FOREIGN KEY (r_i_id) REFERENCES disorder.do_inventory(ii_id) ON DELETE CASCADE
Triggers:
    _disorder_replica_truncatetrigger BEFORE TRUNCATE ON disorder.do_inventory FOR EACH STATEMENT EXECUTE PROCEDURE _disorder_replica.log_truncate('3')
Disabled triggers:
    _disorder_replica_denyaccess BEFORE INSERT OR DELETE OR UPDATE ON disorder.do_inventory FOR EACH ROW EXECUTE PROCEDURE _disorder_replica.denyaccess('_disorder_replica')
    _disorder_replica_truncatedeny BEFORE TRUNCATE ON disorder.do_inventory FOR EACH STATEMENT EXECUTE PROCEDURE _disorder_replica.deny_truncate()
Replica Identity: FULL

The test_decoding plugin gives me:

table do_inventory: UPDATE: old-pkey:

a) The table does have a primary key.

b) I don't get anything in the old key when I was expecting all the rows.

c) If I change the table to use the pkey index with

   alter table disorder.do_inventory replica identity using index do_inventory_pkey;

the LOG message on the update goes away, but the output of the test_decoding plugin goes back to

table do_inventory: UPDATE: ii_id[int8]:251 ii_in_stock[int8]:5 ii_reserved[int8]:144 ii_total_sold[int8]:911

which I suspect means oldtuple is back to NULL.

> Greetings,
> Andres Freund
Re: [HACKERS] patch to fix unused variable warning on windows build
On Sat, Nov 9, 2013 at 1:00 PM, David Rowley dgrowle...@gmail.com wrote:
> On Sat, Nov 9, 2013 at 7:29 PM, Amit Kapila amit.kapil...@gmail.com wrote:
>
> Thanks for the link. The reason that we don't see more warnings for this
> is that in all other places where we have used PG_USED_FOR_ASSERTS_ONLY,
> the variable is assigned every time, though it is only checked when
> compiled with asserts enabled. It seems Microsoft decided to disable
> warnings for "assigned but not used" for pretty much this reason:
>
> http://stackoverflow.com/questions/10593547/why-is-no-warning-given-for-this-unused-variable
>
> Microsoft explain that it's because they had lots of complaints from
> people who were assigning variables purely so they could see what a
> method call returned during debugging, and found the warning irritating.
> So I guess fixing up PG_USED_FOR_ASSERTS_ONLY to work with Visual Studio
> is not required, and my patch seems like the fix for this unique case.

You are right that for this case it is sufficient to fix it the way you have done in the patch. However, the expectation here might be to provide a generic solution for this kind of warning (as it doesn't occur on Linux or other OSes) so that such a case doesn't arise in future. I think it would be good if one of the committers who has a Windows environment can look into it and commit or provide suggestions; otherwise you can make a combined patch of this and the other warning you saw on Windows and upload it to the next CF so that it doesn't get lost. I checked that you have already submitted a patch for this warning alone in the CF.

> I was not quite sure what I should do for these tiny patches. Quite
> often, if a committer happens to read the post and agrees with the patch
> then it might get committed pretty quickly even outside a commitfest,
> but if not, then if I didn't add it to the commitfest it would likely
> get lost.

You have done the right thing.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] ERROR during end-of-xact/FATAL
On Sat, Nov 9, 2013 at 2:43 AM, Noah Misch n...@leadboat.com wrote:
> On Wed, Nov 06, 2013 at 10:14:53AM +0530, Amit Kapila wrote:
>> On Thu, Oct 31, 2013 at 8:22 PM, Noah Misch n...@leadboat.com wrote:
>>>> About the unclean FATAL-then-ERROR scenario, one way to deal with it
>>>> at a high level could be to treat such a case as a backend crash, in
>>>> which case the postmaster reinitialises shared memory and other stuff.
>>>
>>> If we can't manage to free a shared memory resource like a lock or
>>> buffer pin, we really must PANIC.
>>
>> Can't we try to initialise the shared memory and other resources?
>> Wouldn't that resolve the problems that can occur due to the scenario
>> you explained?
>
> A PANIC will reinitialize everything relevant, largely resolving the
> problems around ERROR during FATAL. It's a heavy-handed solution, but it
> may well be the best solution. Efforts to harden CommitTransaction() and
> AbortTransaction() seem well-spent, but the additional effort to make
> FATAL exit cope where AbortTransaction() or another exit action could
> not cope seems to be slicing ever-smaller portions of additional
> robustness.
>
> I pondered a variant of that conclusion that distinguished critical
> cleanup needs from the rest. Each shared resource (heavyweight locks,
> buffer pins, LWLocks) would have an on_shmem_exit() callback that cleans
> up the resource under a critical section. (AtProcExit_Buffers() used to
> fill such a role, but resowner.c's work during AbortTransaction() has
> mostly supplanted it.) The ShutdownPostgres callback would not use a
> critical section, so lesser failures in AbortTransaction() would not
> upgrade to a PANIC. But I'm leaning against such a complication on the
> grounds that it would add seldom-tested code paths posing as much a
> chance of eroding robustness as bolstering it.

I think here PANIC is the safe and less complicated solution for this situation. Apart from this, we can try to avoid palloc and other such errors in the AbortTransaction/CommitTransaction path.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] PostgreSQL Service on Windows does not start. ~ is not a valid Win32 application
Hi Naoya,

On Thu, Oct 31, 2013 at 5:42 PM, Robert Haas robertmh...@gmail.com wrote:
> On Thu, Oct 31, 2013 at 1:44 AM, Asif Naeem anaeem...@gmail.com wrote:
>> On Thu, Oct 31, 2013 at 10:17 AM, Amit Kapila amit.kapil...@gmail.com wrote:
>>> On Tue, Oct 29, 2013 at 12:46 PM, Naoya Anzai anzai-na...@mxu.nes.nec.co.jp wrote:
>>>
>>> Hi Sandeep, I think you should change the subject line to "Unquoted
>>> service path containing space is vulnerable and can be exploited on
>>> Windows" to get the attention. :)
>>
>> Thank you for the advice! I'll try to post to pgsql-bugs again.
>
> I could also reproduce this issue. The situation is very rare, in that an
> exe with a name matching the first part of the installation path has to
> exist for it to be exploited.
>
> If one of the committers who is knowledgeable about Windows has time to
> apply this *before* the next CommitFest, that's obviously great. But the
> purpose of adding a link to the next CommitFest is to provide a backstop,
> so that we're not relying solely on someone noticing this email thread
> and picking it up, but instead have the patch as part of a list of
> patches needing review.

I have uploaded your patch for the next commitfest; I hope you can support it if there is any feedback for your patch from a reviewer or committer.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com