Re: [HACKERS] pg_controldata output alignment regression
On 08/24/2015 07:41 PM, Tom Lane wrote:
> Joe Conway <m...@joeconway.com> writes:
>> Do we care that as of 9.5 pg_controldata output is not 100% aligned
>> anymore? The culprit is:
>>
>>   Current track_commit_timestamp setting: off
>>
>> Its value is shifted 2 characters to the right with respect to all the
>> others. I think it ought to be fixed, but thought I'd get opinions first.
>
> Seems to me we could s/Current //g, or s/ setting//g, or both, and get
> rid of the problem without adding more whitespace.

I'd agree, except I think not everyone might be happy with that. The
surrounding lines look like:

8<------------------
...
End-of-backup record required:        no
Current wal_level setting:            minimal
Current wal_log_hints setting:        off
Current max_connections setting:      100
Current max_worker_processes setting: 8
Current max_prepared_xacts setting:   0
Current max_locks_per_xact setting:   64
Current track_commit_timestamp setting: off
Maximum data alignment:               8
Database block size:                  8192
...
8<------------------

So while changing that line to this would work ...

8<------------------
...
Current max_locks_per_xact setting:   64
track_commit_timestamp setting:       off
Maximum data alignment:               8
...
8<------------------

... it does become inconsistent with the ones above. One possible
solution is to abbreviate "Current" for all of them as "Cur.":

8<------------------
...
End-of-backup record required:       no
Cur. wal_level setting:              minimal
Cur. wal_log_hints setting:          off
Cur. max_connections setting:        100
Cur. max_worker_processes setting:   8
Cur. max_prepared_xacts setting:     0
Cur. max_locks_per_xact setting:     64
Cur. track_commit_timestamp setting: off
Maximum data alignment:              8
Database block size:                 8192
...
8<------------------

Of course that breaks backward compatibility, if you believe that is
important here. Otherwise maybe:

8<------------------
...
End-of-backup record required:        no
Current wal_level setting:            minimal
Current wal_log_hints setting:        off
Current max_connections setting:      100
Current max_worker_processes setting: 8
Current max_prepared_xacts setting:   0
Current max_locks_per_xact setting:   64
Cur. track_commit_timestamp setting:  off
Maximum data alignment:               8
Database block size:                  8192
...
8<------------------

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Resource Owner reassign Locks
Jeff Janes <jeff.ja...@gmail.com> writes:
> On Tue, Aug 25, 2015 at 5:48 AM, Michael Paquier
> <michael.paqu...@gmail.com> wrote:
>> On Fri, Jul 10, 2015 at 4:22 AM, Andres Freund <and...@anarazel.de> wrote:
>>> That's the safest way. Sometimes you can decide that a function can
>>> not sanely be called by external code and thus change the signature.
>>> But I'd rather not risk it here; it's quite possible that one of
>>> these is used by an extension.
>> Where are we on this? Could there be a version for >= 9.2?
> Once the code has to be rewritten, my argument that it has been working
> in the field for a while doesn't really apply anymore.

Yeah. However, I'm not entirely following Andres' concern here. AFAICS,
the only externally visible API change in commit eeb6f37d8 was that
LockReleaseCurrentOwner and LockReassignCurrentOwner gained some
arguments. That would certainly be an issue if there were any plausible
reason for extension code to be calling either one --- but it seems to
me that those functions are only meant to be called from resowner.c.
What other use-case would there be for them?

Were any follow-on commits needed to fix problems induced by eeb6f37d8?
I couldn't find any in a quick trawl of the commit logs, but I could
have missed something.

			regards, tom lane
Re: [HACKERS] pg_controldata output alignment regression
Joe Conway <m...@joeconway.com> writes:
> On 08/24/2015 07:41 PM, Tom Lane wrote:
>> Seems to me we could s/Current //g, or s/ setting//g, or both, and get
>> rid of the problem without adding more whitespace.
> I'd agree, except I think not everyone might be happy with that. The
> surrounding lines look like:

I was suggesting getting rid of "Current" in *all* the entries. What
value does it add?

			regards, tom lane
Re: [HACKERS] pg_controldata output alignment regression
On 08/25/2015 10:28 AM, Tom Lane wrote:
> Joe Conway <m...@joeconway.com> writes:
>> On 08/24/2015 07:41 PM, Tom Lane wrote:
>>> Seems to me we could s/Current //g, or s/ setting//g, or both, and
>>> get rid of the problem without adding more whitespace.
>> I'd agree, except I think not everyone might be happy with that. The
>> surrounding lines look like:
> I was suggesting getting rid of "Current" in *all* the entries. What
> value does it add?

I agree: it adds no value, and removing it is a simple solution. Does
anyone out there object to a non-backward-compatible change to
pg_controldata output?

Joe
Re: [HACKERS] WIP: Rework access method interface
Alvaro Herrera <alvhe...@2ndquadrant.com> writes:
> Jim Nasby wrote:
>> On 8/24/15 9:49 AM, Alexander Korotkov wrote:
>>> 3) Non-index access methods reuse both pg_class.relam and pg_am. This
>>> violates relational theory because we store different objects in the
>>> same table.
> In my reading of the thread, we have a consensus for doing #3, and that
> one gets my vote in any case.

That's what I thought as well.

>> In userspace, table inheritance handles this nicely. Stick a type
>> field in the parent so you know what kind of entity each record is,
>> along with all your common fields.
> Yeah, this pattern is not hugely common but it's definitely used in
> some places.
>
> In fact, I would think it is less of a violation of relational theory
> than #2 -- because then relam is always a reference to pg_am, instead
> of sometimes being a reference to some other catalog. What's stored in
> pg_am is not pg_class' concern; and I think calling pg_am a catalog for
> access methods (in a generic way, not only indexes) is sound.

I'm good with this as long as all the things that get stored in pg_am
are things that pg_class.relam can legitimately reference. If somebody
proposed adding an access method kind that was not a relation access
method, I'd probably push back on whether that should be in pg_am or
someplace else.

The fact that the subsidiary tables like pg_opclass no longer have
perfectly clean foreign keys to pg_am is a bit annoying, but that's
better than having pg_class.relam linking to multiple tables. (Also, if
we really wanted to we could define the foreign key constraints as
multicolumn ones that also require a match on amkind. So it's not
*that* broken. We could easily add opr_sanity tests to verify proper
matching.)

			regards, tom lane
Re: [HACKERS] PATCH: index-only scans with partial indexes
On Fri, Jul 10, 2015 at 11:29 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
> Hi,
>
> currently partial indexes end up not using index only scans in most
> cases, because check_index_only() is overly conservative, as explained
> in this comment:
>
>  * XXX this is overly conservative for partial indexes, since we will
>  * consider attributes involved in the index predicate as required even
>  * though the predicate won't need to be checked at runtime.  (The same
>  * is true for attributes used only in index quals, if we are certain
>  * that the index is not lossy.)  However, it would be quite expensive
>  * to determine that accurately at this point, so for now we take the
>  * easy way out.
>
> In other words, unless you include columns from the index predicate in
> the index, the planner will decide index only scans are not possible.
> Which is a bit unfortunate, because those columns are not needed at
> runtime, and will only increase the index size (and the main benefit of
> partial indexes is size reduction).
>
> The attached patch fixes this by only considering clauses that are not
> implied by the index predicate. The effect is simple:
>
> create table t as select i as a, i as b from generate_series(1,1000) s(i);
> create index tidx_partial on t(b) where a > 1000 and a < 2000;
> vacuum freeze t;
> analyze t;
> explain analyze select count(b) from t where a > 1000 and a < 2000;

However,

explain analyze select sum(b) from t where a > 1000 and a < 1999;

still doesn't use the index only scan. Isn't that also implied by the
predicate?

Cheers,

Jeff
Re: [HACKERS] Error message with plpgsql CONTINUE
On 8/22/15 2:53 PM, Tom Lane wrote:
> This message seems confusing: label lab1 does exist, it's just not
> attached to the right loop. In a larger function that might not be too
> obvious, and I can easily imagine somebody wasting some time before

Agreed.

> figuring out the cause of his problem. Given the way the namespace data
> structure works, I am not sure that we can realistically detect at line
> 8 that there was an instance of lab1 earlier, but perhaps we could word
> the

Are there any other reasons we'd want to improve the ns stuff? Doesn't
seem worth it for just this case, but if there were other nitpicks
elsewhere maybe it is.

> error message to cover either possibility. Maybe something like "there
> is no label foo surrounding this statement"?

"surrounding" seems pretty nebulous. Maybe "no label foo in this
context"? I'd say we use the term block, but we differentiate between
blocks and loops. Perhaps it would be best to document namespaces and
make it clear that blocks and loops both use them. :/

Regardless of that, a hint is probably warranted. "Is foo a label for an
adjacent block or loop?"?

> This is not too accurate, as shown by the fact that the first EXIT is
> accepted. Perhaps "EXIT without a label cannot be used outside a loop"?

+1

> I realize that this is pretty nitpicky, but if we're going to all the
> trouble of improving the error messages about these things, seems like
> we ought to be careful about what the messages actually say.

Agreed.

-- 
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
Re: [HACKERS] Error message with plpgsql CONTINUE
Jim Nasby <jim.na...@bluetreble.com> writes:
> On 8/22/15 2:53 PM, Tom Lane wrote:
>> ... Given the way the namespace data structure works, I am not sure
>> that we can realistically detect at line 8 that there was an instance
>> of lab1 earlier, but perhaps we could word the

> Are there any other reasons we'd want to improve the ns stuff? Doesn't
> seem worth it for just this case, but if there were other nitpicks
> elsewhere maybe it is.

I'm not aware offhand of any other cases where it's not getting the job
done.

>> error message to cover either possibility. Maybe something like "there
>> is no label foo surrounding this statement"?

> "surrounding" seems pretty nebulous. Maybe "no label foo in this
> context"? I'd say we use the term block, but we differentiate between
> blocks and loops. Perhaps it would be best to document namespaces and
> make it clear that blocks and loops both use them. :/

I agree that "surrounding" might not be the best word, but it seems more
concrete than "in this context". The point is that the label needs to be
attached to a block/loop that contains the CONTINUE/EXIT statement. I
considered phrasing it as "no label foo that contains this statement",
but thinking of the label itself as containing anything seemed pretty
bogus.

> Regardless of that, a hint is probably warranted. "Is foo a label for
> an adjacent block or loop?"?

Meh. Doesn't do anything for me. If we had positive detection, we could
add an errdetail saying "There is a label foo, but it's not attached to
a block that encloses this statement." But without being able to say
that for sure, I think the hint would probably just be confusing.

Hmm ... what do you think of wording the error as "there is no label foo
attached to any block enclosing this statement"? That still leaves the
terminology "block" undefined, but it seems better than "any statement
enclosing this statement".

			regards, tom lane
Re: [HACKERS] Error message with plpgsql CONTINUE
I wrote:
> Hmm ... what do you think of wording the error as "there is no label
> foo attached to any block enclosing this statement"? That still leaves
> the terminology "block" undefined, but it seems better than "any
> statement enclosing this statement".

Actually, looking at the plpgsql documentation, I see that it is
completely consistent about using the word "block" to refer to
[DECLARE]/BEGIN/END. So we probably can't get away with using the term
in a vaguer sense here. So the wording would have to be "there is no
label foo attached to any block or loop enclosing this statement".
That's a tad verbose, but at least it's clear ...

			regards, tom lane
Re: [HACKERS] Planned release for PostgreSQL 9.5
On 08/24/2015 11:26 AM, Tom Lane wrote:
> "Paragon Corporation" <l...@pcorp.us> writes:
>> Just checking to see if you guys have settled on a date for the 9.5.0
>> release.
> No. Considering we don't have a beta out yet, it's not imminent ...

This is the timeline, effectively:

https://wiki.postgresql.org/wiki/PostgreSQL_9.5_Open_Items

When that list gets down to a handful of non-critical items, we'll be in
beta. Help wanted.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
Re: [HACKERS] Resource Owner reassign Locks
On 2015-08-25 13:54:20 -0400, Tom Lane wrote:
> Jeff Janes <jeff.ja...@gmail.com> writes:
>> Once the code has to be rewritten, my argument that it has been
>> working in the field for a while doesn't really apply anymore.

If rewriting involves adding two one-line wrapper functions, I don't see
the problem.

> However, I'm not entirely following Andres' concern here. AFAICS, the
> only externally visible API change in commit eeb6f37d8 was that
> LockReleaseCurrentOwner and LockReassignCurrentOwner gained some
> arguments. That would certainly be an issue if there were any plausible
> reason for extension code to be calling either one --- but it seems to
> me that those functions are only meant to be called from resowner.c.
> What other use-case would there be for them?

I don't think it's super likely, but I don't think it's impossible that
somebody created their own resource owner. Say, because they want to
perform some operation and then release the locks without finishing the
transaction. Adding a zero-argument
LockReleaseCurrentOwner()/LockReassignCurrentOwner() wrapper seems like
a small enough effort to simply not bother looking for existing callers.

> Were any follow-on commits needed to fix problems induced by eeb6f37d8?
> I couldn't find any in a quick trawl of the commit logs, but I could
> have missed something.

I don't remember any, at least.

Andres
Re: [HACKERS] Error message with plpgsql CONTINUE
Alvaro Herrera <alvhe...@2ndquadrant.com> writes:
> Tom Lane wrote:
>> So the wording would have to be "there is no label foo attached to any
>> block or loop enclosing this statement". That's a tad verbose, but at
>> least it's clear ...
> This seems good to me, verbosity notwithstanding.

Hearing no objections, I'll go make it so.

			regards, tom lane
Re: [HACKERS] WIP: Rework access method interface
On 8/24/15 9:49 AM, Alexander Korotkov wrote:
> 2) Non-index access methods reuse pg_class.relam but don't reuse pg_am.
> This violates relational theory because a single column references
> multiple tables.
> 3) Non-index access methods reuse both pg_class.relam and pg_am. This
> violates relational theory because we store different objects in the
> same table.
>
> I'd say we already have a precedent of #2: pg_depend, which references
> objects of arbitrary types. In the case of #3 we really shouldn't keep
> anything specific to index AMs in pg_am.

In userspace, table inheritance handles this nicely. Stick a type field
in the parent so you know what kind of entity each record is, along
with all your common fields. Everything else is in the children, and
code generally already knows which child table to hit, or doesn't care
about specifics and hits only the parent.

Perhaps something similar could be made to work with a catalog table.
#2 seems like a twist on the same idea, except that there are fields in
pg_class that tell you what the child is, instead of a real parent
table. Presumably we could still create a parent table even if the
internals were going through pg_class.
Re: [HACKERS] WIP: Rework access method interface
Jim Nasby wrote:
> On 8/24/15 9:49 AM, Alexander Korotkov wrote:
>> 2) Non-index access methods reuse pg_class.relam but don't reuse
>> pg_am. This violates relational theory because a single column
>> references multiple tables.
>> 3) Non-index access methods reuse both pg_class.relam and pg_am. This
>> violates relational theory because we store different objects in the
>> same table.
>>
>> I'd say we already have a precedent of #2: pg_depend, which references
>> objects of arbitrary types. In the case of #3 we really shouldn't keep
>> anything specific to index AMs in pg_am.

In my reading of the thread, we have a consensus for doing #3, and that
one gets my vote in any case.

> In userspace, table inheritance handles this nicely. Stick a type field
> in the parent so you know what kind of entity each record is, along
> with all your common fields.

Yeah, this pattern is not hugely common but it's definitely used in some
places.

In fact, I would think it is less of a violation of relational theory
than #2 -- because then relam is always a reference to pg_am, instead of
sometimes being a reference to some other catalog. What's stored in
pg_am is not pg_class' concern; and I think calling pg_am a catalog for
access methods (in a generic way, not only indexes) is sound.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] psql - better support pipe line
On 8/24/15 3:04 PM, Pavel Stehule wrote:
>> (1) there is no reason to believe that the db name and only the db
>> name is needed to do another connection; what about port, host, user,
>> etc?
> I have to agree - the possibilities are much more than the database
> name, so one option is not a good idea.

What I've had problems with is trying to correlate psql-specified
connection attributes with things like DBI. It would be nice if there
was a way to get a fully formed connection URI for the current
connection.
Re: [HACKERS] CREATE POLICY and RETURNING
Zhaomo,

* Zhaomo Yang (zmp...@gmail.com) wrote:
>> If no NEW or OLD is used, what happens? Or would you have to always
>> specify OLD/NEW for UPDATE, and then what about for the other
>> policies, and the FOR ALL policies?
>
> I should be clearer with references to OLD/NEW. SELECT predicates
> cannot reference any of them. INSERT predicates cannot refer to OLD and
> DELETE predicates cannot refer to NEW. Basically, for
> INSERT/UPDATE/DELETE, we specify predicates the same way as we do for
> triggers' WHEN condition.
>
> As for FOR ALL, I think we will abandon it if we apply the SELECT
> policy to other commands, since the SELECT predicate will be the new
> universally applicable read policy, which makes the FOR ALL USING
> clause much less useful. Of course users may need to specify separate
> predicates for different commands, but I think it is fine. How often do
> users want the same predicate for all the commands?

I can certainly see use-cases where you'd want to apply the same policy
to all new records, regardless of how they're being added, and further,
the use-case where you want the same policy for records which are
visible and those which are added. In fact, I'd expect that to be one of
the most common use-cases, as it maps directly to a set of rows which
are owned by one user, where that user can see/modify/delete their own
records but not impact other users. So, I don't think it would be odd at
all for users to want the same predicate for all of the commands.

>> This could be accomplished with USING (bar > 1) and WITH CHECK
>> (foo > 1), no? Your sentence above that USING and WITH CHECK are
>> combined by AND isn't correct either- they're independent and are
>> therefore really OR'd. If they were AND'd then the new record would
>> have to pass both USING and WITH CHECK policies.
>
> No, it is impossible with the current implementation.
>
> CREATE TABLE test (id int, v1 int, v2 int);
>
> Suppose that the user wants an update policy which is
> OLD.v1 > 10 OR NEW.v2 < 10. As you suggested, we use the following
> policy:
>
> CREATE POLICY update_p ON test FOR UPDATE TO test_user
>   USING (v1 > 10) WITH CHECK (v2 < 10);
>
> (1) Assume there is only one row in the table:
>
>  id | v1 | v2
>   1 | 11 | 20
>
> Now we execute UPDATE test SET v2 = 100. This query is allowed by the
> policy, and the only row should be updated since v1's old value > 10,
> but it will trigger an error because it violates the WITH CHECK clause.

In this scenario, you don't care what the value of the NEW record is, at
all? As long as the old record had 'v1 > 10', then the resulting row can
be anything? I have to admit, I have a hard time seeing the usefulness
of that, but it could be allowed by having a 'true' WITH CHECK policy.

> (2) Again assume there is only one row in the table:
>
>  id | v1 | v2
>   1 |  9 | 20
>
> Now we execute UPDATE test SET v2 = 7. This query is allowed by the
> policy, and the only row should be updated since v2's new value < 10,
> but nothing will be updated because the only row will be filtered out
> before the update happens.

Again, in this case, you could have a 'USING' policy which is simply
'true', if you wish to allow any row to be updated, provided the result
is v2 < 10 (and a WITH CHECK clause to enforce that).

> This is why I said USING and WITH CHECK are combined by AND. In order
> to update a row, first the row needs to be visible, meaning it needs to
> pass the USING check; then it needs to pass the WITH CHECK.

That's correct, and very simple to reason about. I really don't like the
approach you're suggesting above, where an 'OR' inside of such a clause
could mean that users can arbitrarily change any existing row without
any further check on that row, and I have a hard time seeing the
use-case which justifies the additional complexity and user confusion.

Further, I'm not sure that I see how this would work in a case where you
have the SELECT policy (which clearly could only refer to OLD) applied
first, as you suggest?

> We use the SELECT policy to filter the table when we scan it (just like
> how we use the USING clause now). The predicate of UPDATE will be
> checked later (probably similar to how we handle a trigger's WHEN
> clause, which can also reference OLD and NEW).

So there would also be a SELECT policy anyway, which is just like the
existing UPDATE USING policy is today, and what you're really asking for
is the ability to have the WITH CHECK policy reference both the OLD and
NEW records. I might be able to get behind supporting that, but I'm not
terribly excited about it, and you've not provided any real use-cases
for it that I've seen, and it still doesn't really change anything
regarding RETURNING any differently than the earlier suggestions did
about having the SELECT policy applied to all commands.

Thanks,

Stephen
Re: [HACKERS] Commitfest remaining Needs Review items
Michael,

* Michael Paquier (michael.paqu...@gmail.com) wrote:
> -- Default Roles: Stephen, are you planning to work on that for the
> next CF?

Yup!

Thanks!

Stephen
[HACKERS] Re: [COMMITTERS] pgsql: Change TAP test framework to not rely on having a chmod executab
On 6/19/15 10:52 AM, Robert Haas wrote:
> Change TAP test framework to not rely on having a chmod executable.
>
> This might not work at all on Windows, and is not very efficient
> anyway.
>
> Michael Paquier

I came across this on an unrelated mission and noticed it was
unnecessarily complicated. How about this patch instead?

From 3062d1ad69774f7dd68eb8f307997ae32d3535f2 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <pete...@gmx.net>
Date: Tue, 25 Aug 2015 09:58:49 -0400
Subject: [PATCH] Simplify Perl chmod calls

The Perl chmod function already takes multiple file arguments, so we
don't need a separate looping function.
---
 src/test/ssl/ServerSetup.pm | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/src/test/ssl/ServerSetup.pm b/src/test/ssl/ServerSetup.pm
index 8c1b517..a8228b0 100644
--- a/src/test/ssl/ServerSetup.pm
+++ b/src/test/ssl/ServerSetup.pm
@@ -43,20 +43,6 @@ sub copy_files
 	}
 }
 
-# Perform chmod on a set of files, taking into account wildcards
-sub chmod_files
-{
-	my $mode      = shift;
-	my $file_expr = shift;
-
-	my @all_files = glob $file_expr;
-	foreach my $file_entry (@all_files)
-	{
-		chmod $mode, $file_entry
-		  or die "Could not run chmod with mode $mode on $file_entry";
-	}
-}
-
 sub configure_test_server_for_ssl
 {
 	my $tempdir = $_[0];
@@ -82,7 +68,7 @@ sub configure_test_server_for_ssl
 	# Copy all server certificates and keys, and client root cert, to the data dir
 	copy_files("ssl/server-*.crt", "$tempdir/pgdata");
 	copy_files("ssl/server-*.key", "$tempdir/pgdata");
-	chmod_files(0600, "$tempdir/pgdata/server-*.key");
+	chmod(0600, glob "$tempdir/pgdata/server-*.key") or die $!;
 	copy_files("ssl/root+client_ca.crt", "$tempdir/pgdata");
 	copy_files("ssl/root+client.crl",    "$tempdir/pgdata");
-- 
2.5.0
Re: [HACKERS] Commitfest remaining Needs Review items
On 08/25/2015 12:39 AM, Michael Paquier wrote:
> -- self-defined policy for sepgsql regression test, returned with
> feedback? The regressions are broken as mentioned at the end of the
> thread.

I am in the process of getting a VM set up with an appropriate SELinux
configuration to look at this one. Will hopefully be able to provide
more feedback in a day or two.

Joe
Re: [HACKERS] Commitfest remaining Needs Review items
On Tue, Aug 25, 2015 at 4:39 AM, Michael Paquier
<michael.paqu...@gmail.com> wrote:
> [...]
> -- Support to COMMENT ON CURRENT DATABASE: returned with feedback? The
> author mentioned that the patch will be reworked but there has been no
> new version around, it seems.

Moved to the next commitfest.

Regards,

-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Timbira: http://www.timbira.com.br
Blog: http://fabriziomello.github.io
Linkedin: http://br.linkedin.com/in/fabriziomello
Twitter: http://twitter.com/fabriziomello
Github: http://github.com/fabriziomello
Re: [HACKERS] psql - better support pipe line
Jim Nasby <jim.na...@bluetreble.com> writes:
> What I've had problems with is trying to correlate psql-specified
> connection attributes with things like DBI. It would be nice if there
> was a way to get a fully formed connection URI for the current
> connection.

Yeah, although I'd think the capability to create such a URI is libpq's
province, not psql's. Maybe a PQgetConnectionURI(PGconn *) function in
libpq, and some psql backslash command to access that? Or maybe a nicer
API would be that there's a magic psql variable containing the URI; not
sure.

			regards, tom lane
Re: [HACKERS] Error message with plpgsql CONTINUE
On 8/25/15 10:50 AM, Jim Nasby wrote: figuring out the cause of his problem. Given the way the namespace data structure works, I am not sure that we can realistically detect at line 8 that there was an instance of lab1 earlier, but perhaps we could word the Are there any other reasons we'd want to improve the ns stuff? Doesn't seem worth it for just this case, but if there were other nitpicks elsewhere maybe it is. Thinking about this some more... If we added a prev_label_in_context field to nsitem and changed how push worked we could walk the entire chain. Most everything just cares about the previous level, so I don't think it would be terribly invasive. -- Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX Experts in Analytics, Data Architecture and PostgreSQL Data in Trouble? Get it in Treble! http://BlueTreble.com
Re: [HACKERS] WIP: Rework access method interface
Jim Nasby wrote: On 8/25/15 10:56 AM, Tom Lane wrote: I'm good with this as long as all the things that get stored in pg_am are things that pg_class.relam can legitimately reference. If somebody proposed adding an access method kind that was not a relation access method, I'd probably push back on whether that should be in pg_am or someplace else. Would fields in pg_am be overloaded then? From a SQL standpoint it'd be much nicer to have child tables, though that could potentially be faked with views. The whole point of this conversation is that we're getting rid of almost all the columns in pg_am, leaving only an amkind column and a pointer to a handler function. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] WIP: Rework access method interface
On 8/25/15 10:56 AM, Tom Lane wrote: I'm good with this as long as all the things that get stored in pg_am are things that pg_class.relam can legitimately reference. If somebody proposed adding an access method kind that was not a relation access method, I'd probably push back on whether that should be in pg_am or someplace else. Would fields in pg_am be overloaded then? From a SQL standpoint it'd be much nicer to have child tables, though that could potentially be faked with views. The fact that the subsidiary tables like pg_opclass no longer have perfectly clean foreign keys to pg_am is a bit annoying, but that's better than having pg_class.relam linking to multiple tables. (Also, if we really wanted to we could define the foreign key constraints as multicolumn ones that also require a match on amkind. So it's not *that* broken. We could easily add opr_sanity tests to verify proper matching.) I have wished for something similar to CHECK constraints that operated on data in a *related* table (something we already have a foreign key to) for this purpose. In the past I've enforced it with triggers on both sides but writing those gets old after a while. -- Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX Experts in Analytics, Data Architecture and PostgreSQL Data in Trouble? Get it in Treble! http://BlueTreble.com
Re: [HACKERS] WIP: Rework access method interface
Jim Nasby jim.na...@bluetreble.com writes: On 8/25/15 10:56 AM, Tom Lane wrote: I'm good with this as long as all the things that get stored in pg_am are things that pg_class.relam can legitimately reference. If somebody proposed adding an access method kind that was not a relation access method, I'd probably push back on whether that should be in pg_am or someplace else. Would fields in pg_am be overloaded then? No, because the proposal was to reduce pg_am to just amname, amkind (which would be something like 'i' or 's'), and amhandler. Everything specific to a particular type of access method would be shoved down to the level of the C APIs. From a SQL standpoint it'd be much nicer to have child tables, though that could potentially be faked with views. I've looked into having actual child tables in the system catalogs, and I'm afraid that the pain-to-reward ratio doesn't look very good. regards, tom lane
Re: [HACKERS] Error message with plpgsql CONTINUE
Tom Lane wrote: So the wording would have to be there is no label foo attached to any block or loop enclosing this statement. That's a tad verbose, but at least it's clear ... This seems good to me, verbosity notwithstanding. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] Resource Owner reassign Locks
On Tue, Aug 25, 2015 at 5:48 AM, Michael Paquier michael.paqu...@gmail.com wrote: On Fri, Jul 10, 2015 at 4:22 AM, Andres Freund and...@anarazel.de wrote: On July 9, 2015 9:13:20 PM GMT+02:00, Jeff Janes jeff.ja...@gmail.com wrote: Unfortunately I don't know what that means about the API. Does it mean that none of the functions declared in any .h file can have their signatures changed? But new functions can be added? That's the safest way. Sometimes you can decide that a function can not sanely be called by external code and thus change the signature. But I'd rather not risk it here; it's quite possible that one of these is used by an extension. Where are we on this? Could there be a version for = 9.2? Once the code has to be rewritten, my argument that it has been working in the field for a while doesn't really apply anymore. It is beyond what I feel comfortable trying to do, especially as I have no test case of 3rd party code to verify I haven't broken it. I still think it is a good idea, but for someone who knows more about linkers and .so files than I do. If I were faced with upgrading a 9.2 instance with many tens of thousands of objects, I would just backpatch the existing code and compile it to make a binary used only for the purposes of the upgrade. Cheers, Jeff
Re: [HACKERS] Commitfest remaining Needs Review items
On 08/25/2015 10:39 AM, Michael Paquier wrote: On Mon, Aug 10, 2015 at 4:34 PM, Heikki Linnakangas hlinn...@iki.fi wrote: Hello Hackers, There are a few Needs Review items remaining in the July commitfest. Reviewers, please take action - you are holding up the commitfest. In addition to these items, there are a bunch of items in Ready for Committer state. Committers: please help with those. At this stage, there are currently 26 patches in need of actions for the current CF: Thanks Michael! - 12 patches are waiting on author: These can all be marked as Returned with good conscience, they've gotten at least some feedback. - 8 patches are in need of review: -- extends pgbench expressions with functions: bump to next CF, v9 has not been reviewed. -- check existency of table for -t option (pg_dump) when pattern has no wildcard: Bump to next CF? -- self-defined policy for sepgsql regression test, returned with feedback? The regressions are broken as mentioned at the end of the thread. -- Unique Joins: bump to next CF? -- improving join estimates using FK: bump to next CF? -- checkpoint continuous flushing, bump, work, performance tests and review are all still going on. -- Join pushdown support for foreign tables: err... This patch had no activity for 4 months. -- Reload SSL certificates on SIGHUP: returned with feedback? I think that this patch needs more work to be in a commitable state. - 6 patches marked as ready for committer: -- numeric timestamp in log_line_prefix -- plpgsql raise statement with context -- postgres_fdw: Options to set fetch_size at the server and table level. -- Improving test coverage of extensions with pg_dump -- New functions in sslinfo module -- CREATE EXTENSION ... CASCADE For all of them, bump to next CF with the same status if they are not committed at the end of the month? Yeah, I don't think we can let this linger much longer. 
I hate to bump patches in a commitfest just because no-one's gotten around to reviewing them, because the point of commitfests is precisely to provide a checkpoint where every submitted patch gets at least some feedback. But I've run out of steam myself, and I don't see anyone else interested in any of these patches, so I don't think there's much else we can do :-(. - Heikki
Re: [HACKERS] Make HeapTupleSatisfiesMVCC more concurrent
On Wed, Aug 26, 2015 at 2:21 AM, Tom Lane t...@sss.pgh.pa.us wrote: Amit Kapila amit.kapil...@gmail.com writes: I am wondering whether there is any harm in calling TransactionIdDidAbort() in the slow path before calling SubTransGetTopmostTransaction(); that could also maintain consistency of checks in both functions? I think this is probably a bad idea. It adds a pg_clog lookup that we would otherwise not do at all, in hopes of avoiding a pg_subtrans lookup. It's not exactly clear that that's a win even if we successfully avoid the subtrans lookup (which we often would not). And even if it does win, that would only happen if the other transaction has aborted, which isn't generally the case we prefer to optimize for. I think Alvaro has mentioned the case where it could win; however, it can still add some penalty where most (or all) transactions are committed. I agree with you that we might not want to optimize or spend our energy figuring out if this is a win for an uncommon use case. OTOH, I feel that this and the other point related to optimization (the one-XID cache) could be added as part of the function-level comments, to help if somebody encounters any such case in the future or is puzzled why there are some differences between TransactionIdIsInProgress() and XidInMVCCSnapshot(). I don't mean to dismiss the potential for further optimization inside XidInMVCCSnapshot (for instance, the one-XID-cache idea sounds promising); but I think that's material for further research and a separate patch. It's not clear to me if anyone wanted to do further review/testing of this patch, or if I should go ahead and push it (after fixing whatever comments need to be fixed). I think Jeff's test results upthread already show that this patch is of value, and a fair number of people have already looked into it and provided their feedback, so +1 for pushing it. With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP
On Wed, Aug 26, 2015 at 10:57 AM, Tom Lane t...@sss.pgh.pa.us wrote: [...] So I think the way to move this forward is to investigate how to hold the SSL config constant until SIGHUP in an EXEC_BACKEND build. If we find out that that's unreasonably difficult, maybe we'll decide that we can live without it; but I'd like to see the question investigated rather than ignored. You have a point here. In EXEC_BACKEND, parameters updated via SIGHUP are only taken into account by newly-started backends, right? Hence, a way to do what we want is to actually copy the data needed to initialize the SSL context into alternate file(s). When the postmaster starts up, or when SIGHUP shows up, those alternate files are upserted by the postmaster. be-secure-openssl.c also needs to be changed so that with EXEC_BACKEND the context is loaded from those alternate files. At a quick glance this seems doable. For now I am moving the patch to the next CF; more investigation is surely needed. -- Michael
Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP
On Wed, Aug 26, 2015 at 12:24 PM, Michael Paquier michael.paqu...@gmail.com wrote: On Wed, Aug 26, 2015 at 10:57 AM, Tom Lane t...@sss.pgh.pa.us wrote: [...] So I think the way to move this forward is to investigate how to hold the SSL config constant until SIGHUP in an EXEC_BACKEND build. If we find out that that's unreasonably difficult, maybe we'll decide that we can live without it; but I'd like to see the question investigated rather than ignored. You have a point here. In EXEC_BACKEND, parameters updated via SIGHUP are only taken into account by newly-started backends, right? Oops. I mixed this up with PGC_BACKEND. Sorry for the noise. Hence, a way to do what we want is to actually copy the data needed to initialize the SSL context into alternate file(s). When the postmaster starts up, or when SIGHUP shows up, those alternate files are upserted by the postmaster. be-secure-openssl.c also needs to be changed so that with EXEC_BACKEND the context is loaded from those alternate files. At a quick glance this seems doable. Still, this idea would be to use a set of alternate files in global/ to set the context, basically something like config_exec_ssl_cert_file, config_exec_ssl_key_file and config_exec_ssl_ca_file. It does not seem to be necessary to manipulate [read|write]_nondefault_variables() as this metadata should be used only when the SSL context is initialized on the backend. Other thoughts welcome. -- Michael
Re: [HACKERS] Foreign join pushdown vs EvalPlanQual
On 2015/08/25 10:18, Kouhei Kaigai wrote: How about your opinion towards the solution? Likely, what you need to do are... 1. Save the alternative path on fdw_paths when the foreign join is pushed down. GetForeignJoinPaths() may be called multiple times towards a particular joinrel according to the combination of innerrel/outerrel. RelOptInfo->fdw_private allows avoiding construction of the same remote join path multiple times. On the second or later invocation, it may be a good tactic to reference cheapest_startup_path and replace the saved one if the later invocation has a cheaper one, prior to exit. I'm not sure that the tactic is a good one. I think you probably assume that GetForeignJoinPaths executes set_cheapest each time it gets called, but ISTM that that would be expensive. (That is one of the reasons why I think it would be better to hook that routine in standard_join_search.) Here are two different problems. I'd like to identify whether the problem is a must-fix or a nice-to-have. Obviously, failure on the EPQ check is a problem that must be solved; however, the hook location is a nice-to-have. In addition, you may misunderstand the proposition of mine above. You can check RelOptInfo->fdw_private on top of GetForeignJoinPaths, then, if it is the second or later invocation, you can check the cost of the alternative path kept in the previously constructed ForeignPath node. If cheapest_total_path at the moment of the GetForeignJoinPaths invocation is cheaper than the saved alternative path, you can adjust the node to replace the alternative path node. 2. Save the alternative Plan nodes on fdw_plans or lefttree/righttree somewhere you like at GetForeignPlan(). 3. Make BeginForeignScan() call ExecInitNode() towards the plan node saved at (2), then save the PlanState on fdw_ps, lefttree/righttree, or some private area if not displayed by EXPLAIN. 4. Implement the ForeignRecheck() routine. If scanrelid == 0, it kicks the planstate node saved at (3) to generate a tuple slot.
Then, call ExecQual() to check the qualifiers being pushed down. 5. Make EndForeignScan() call ExecEndNode() towards the PlanState saved at (3). I don't think the above steps are too complicated for people who can write FDW drivers. It is what a developer usually does. Sorry, my explanation was not accurate, but the design that you proposed looks complicated beyond necessity. I think we should add an FDW API for doing something if FDWs have more knowledge about doing that than the core, but in your proposal, instead of the core, an FDW has to eventually do a lot of the core's work: ExecInitNode, ExecProcNode, ExecQual, ExecEndNode and so on. It is a trade-off problem between interface flexibility and code smallness of the FDW extension, if it fits the scope of core support. I stand on the viewpoint that gives highest priority to flexibility, especially in the case when unpredictable types of modules are expected. Your proposition is comfortable for an FDW on behalf of an RDBMS; however, nobody can promise it is beneficial to an FDW on behalf of a columnar store, for example. If you stick on the code smallness of an FDW on behalf of an RDBMS, we can add utility functions in foreign.c or somewhere. That would provide equivalent functionality, but the FDW can determine whether it uses the routines. Thanks, -- NEC Business Creation Division / PG-Strom Project KaiGai Kohei kai...@ak.jp.nec.com
Re: [HACKERS] exposing pg_controldata and pg_config as functions
On 08/24/2015 08:52 AM, Joe Conway wrote: On 08/24/2015 06:50 AM, Tom Lane wrote: Andrew Dunstan and...@dunslane.net writes: On 08/23/2015 08:58 PM, Michael Paquier wrote: I think that's a good thing to have, now I have concerns about making this data readable for non-superusers. Cloud deployments of Postgres are logically going to block the access of this view. I don't think it exposes any information of great security value. We just had that kerfuffle about whether WAL compression posed a security risk; doesn't that imply that at least the data relevant to WAL position has to be unreadable by non-superusers? So pg_config might be fully unrestricted, but pg_controldata might need certain rows filtered based on superuser status? Do you think those rows should be present but redacted, or completely filtered out? Here is the next installment on this. It addresses most of the previous comments, but does not yet address the issue raised here by Tom. Changes: 1.) pg_controldata() function and pg_controldata view added 2.) cleanup_path() moved to port/path.c 3.) extra PG_FUNCTION_INFO_V1() noise removed Issues needing comment: a.) Which items need hiding from non-superusers, and should the value be redacted or the entire result-set row be suppressed? b.) There is a difference in the formatting of timestamps between the pg_controldata binary and the builtin function. To see it do: diff -c <(pg_controldata) \ <(psql -qAt -c "select rpad(name || ':', 38) || setting from pg_controldata") What I see is: pg_controldata ! pg_control last modified: Tue 25 Aug 2015 08:10:42 PM PDT pg_controldata() ! pg_control last modified: Tue Aug 25 20:10:42 2015 Does it matter? c.) There is some common code between pg_controldata.c in bin and pg_controldata.c in backend/utils/misc. Specifically, the string definitions in printf statements match those in ControlData[], and dbState() and wal_level_str() are in both places.
Any opinions on consolidating them in src/common somewhere? d.) Still no docs yet - will get to it eventually. Thanks, Joe -- Crunchy Data - http://crunchydata.com PostgreSQL Support for Secure Enterprises Consulting, Training, Open Source Development diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql index ccc030f..7b5db32 100644 *** a/src/backend/catalog/system_views.sql --- b/src/backend/catalog/system_views.sql *** CREATE VIEW pg_timezone_abbrevs AS *** 425,430 --- 425,436 CREATE VIEW pg_timezone_names AS SELECT * FROM pg_timezone_names(); + CREATE VIEW pg_config AS + SELECT * FROM pg_config(); + + CREATE VIEW pg_controldata AS + SELECT * FROM pg_controldata(); + -- Statistics views CREATE VIEW pg_stat_all_tables AS diff --git a/src/backend/utils/misc/Makefile b/src/backend/utils/misc/Makefile index 7889101..13b60f7 100644 *** a/src/backend/utils/misc/Makefile --- b/src/backend/utils/misc/Makefile *** include $(top_builddir)/src/Makefile.glo *** 14,21 override CPPFLAGS := -I. -I$(srcdir) $(CPPFLAGS) !
OBJS = guc.o help_config.o pg_rusage.o ps_status.o rls.o \ !sampling.o superuser.o timeout.o tzparser.o # This location might depend on the installation directories. Therefore # we can't subsitute it into pg_config.h. --- 14,34 override CPPFLAGS := -I. -I$(srcdir) $(CPPFLAGS) ! # don't include subdirectory-path-dependent -I and -L switches ! STD_CPPFLAGS := $(filter-out -I$(top_srcdir)/src/include -I$(top_builddir)/src/include,$(CPPFLAGS)) ! STD_LDFLAGS := $(filter-out -L$(top_builddir)/src/port,$(LDFLAGS)) ! override CPPFLAGS += -DVAL_CONFIGURE=\$(configure_args)\ ! override CPPFLAGS += -DVAL_CC=\$(CC)\ ! override CPPFLAGS += -DVAL_CPPFLAGS=\$(STD_CPPFLAGS)\ ! override CPPFLAGS += -DVAL_CFLAGS=\$(CFLAGS)\ ! override CPPFLAGS += -DVAL_CFLAGS_SL=\$(CFLAGS_SL)\ ! override CPPFLAGS += -DVAL_LDFLAGS=\$(STD_LDFLAGS)\ ! override CPPFLAGS += -DVAL_LDFLAGS_EX=\$(LDFLAGS_EX)\ ! override CPPFLAGS +=
Re: [HACKERS] Commitfest remaining Needs Review items
On Wed, Aug 26, 2015 at 10:30 AM, Andreas Karlsson andr...@proxel.se wrote: On 08/25/2015 09:39 AM, Michael Paquier wrote: -- Reload SSL certificates on SIGHUP: returned with feedback? I think that this patch needs more work to be in a committable state. Maybe I am being dense here, but I do not feel like I have gotten any clear feedback which gives me a way forward with the patch. I do not really see what more I can do here other than resubmit it to the next CF, which I feel would be poor etiquette on my part. If you have the time to spare I would be very grateful if you could clarify what work you think needs to be done. No problem. I have moved it to the next CF then. -- Michael
Re: [HACKERS] Foreign join pushdown vs EvalPlanQual
Hi KaiGai-san, On 2015/08/25 10:18, Kouhei Kaigai wrote: How about your opinion towards the solution? Likely, what you need to do are... 1. Save the alternative path on fdw_paths when the foreign join is pushed down. GetForeignJoinPaths() may be called multiple times towards a particular joinrel according to the combination of innerrel/outerrel. RelOptInfo->fdw_private allows avoiding construction of the same remote join path multiple times. On the second or later invocation, it may be a good tactic to reference cheapest_startup_path and replace the saved one if the later invocation has a cheaper one, prior to exit. I'm not sure that the tactic is a good one. I think you probably assume that GetForeignJoinPaths executes set_cheapest each time it gets called, but ISTM that that would be expensive. (That is one of the reasons why I think it would be better to hook that routine in standard_join_search.) 2. Save the alternative Plan nodes on fdw_plans or lefttree/righttree somewhere you like at GetForeignPlan(). 3. Make BeginForeignScan() call ExecInitNode() towards the plan node saved at (2), then save the PlanState on fdw_ps, lefttree/righttree, or some private area if not displayed by EXPLAIN. 4. Implement the ForeignRecheck() routine. If scanrelid == 0, it kicks the planstate node saved at (3) to generate a tuple slot. Then, call ExecQual() to check the qualifiers being pushed down. 5. Make EndForeignScan() call ExecEndNode() towards the PlanState saved at (3). I don't think the above steps are too complicated for people who can write FDW drivers. It is what a developer usually does. Sorry, my explanation was not accurate, but the design that you proposed looks complicated beyond necessity. I think we should add an FDW API for doing something if FDWs have more knowledge about doing that than the core, but in your proposal, instead of the core, an FDW has to eventually do a lot of the core's work: ExecInitNode, ExecProcNode, ExecQual, ExecEndNode and so on.
Thank you for the comments! Best regards, Etsuro Fujita
Re: [HACKERS] Commitfest remaining Needs Review items
On Mon, Aug 10, 2015 at 4:34 PM, Heikki Linnakangas hlinn...@iki.fi wrote: Hello Hackers, There are a few Needs Review items remaining in the July commitfest. Reviewers, please take action - you are holding up the commitfest. In addition to these items, there are a bunch of items in Ready for Committer state. Committers: please help with those. At this stage, there are currently 26 patches in need of actions for the current CF: - 12 patches are waiting on author: -- Let pg_ctl check the result of a postmaster config reload: returned with feedback? Heikki has provided input on this patch but the patch has not been updated by the author since last month. -- Allow specifying multiple hosts in libpq connect options: returned with feedback? A review has been sent but no new versions have been submitted. -- multivariate statistics: bump to next CF? -- merging pgbench logs: returned with feedback or bump? Fabien has concerns about performance regarding fprintf when merging the logs. Fabien, Tomas, thoughts? -- pgbench - per-transaction and aggregated logs: returned with feedback or bump to next CF? Fabien, Tomas, thoughts? -- Add sample rate to auto_explain: returned with feedback because of lack of activity? -- Backpatch Resource Owner reassign locks cache for the sake of pg_upgrade: a committer may want to look at that for a backpatch in = 9.2. -- Compensate for full_page_writes when spreading checkpoint I/O: Bump to next CF? Heikki, are you planning to continue working on that? -- Improving replay of XLOG_BTREE_VACUUM records: returned with feedback? There has been review input from 2 committers but no updated versions. -- compress method for spgist : returned with feedback? -- Support to COMMENT ON CURRENT DATABASE: returned with feedback? The author mentioned that patch will be reworked but there has been no new version around it seems. -- Default Roles: Stephen, are you planning to work on that for next CF? 
- 8 patches are in need of review: -- extends pgbench expressions with functions: bump to next CF, v9 has not been reviewed. -- check existency of table for -t option (pg_dump) when pattern has no wildcard: Bump to next CF? -- self-defined policy for sepgsql regression test, returned with feedback? The regressions are broken as mentioned at the end of the thread. -- Unique Joins: bump to next CF? -- improving join estimates using FK: bump to next CF? -- checkpoint continuous flushing, bump, work, performance tests and review are all still going on. -- Join pushdown support for foreign tables: err... This patch had no activity for 4 months. -- Reload SSL certificates on SIGHUP: returned with feedback? I think that this patch needs more work to be in a commitable state. - 6 patches marked as ready for committer: -- numeric timestamp in log_line_prefix -- plpgsql raise statement with context -- postgres_fdw: Options to set fetch_size at the server and table level. -- Improving test coverage of extensions with pg_dump -- New functions in sslinfo module -- CREATE EXTENSION ... CASCADE For all of them, bump to next CF with the same status if they are not committed at the end of the month? Regards, -- Michael
Re: [HACKERS] Reducing ClogControlLock contention
On 12 August 2015 at 04:49, Amit Kapila amit.kapil...@gmail.com wrote: On Tue, Aug 11, 2015 at 7:27 PM, Simon Riggs si...@2ndquadrant.com wrote: On 11 August 2015 at 14:53, Amit Kapila amit.kapil...@gmail.com wrote: One more point here: why do we need CommitLock before calling SimpleLruReadPage_ReadOnly() in the patch, and if it is not required, then can we use LWLockAcquire(shared->buffer_locks[slotno], LW_EXCLUSIVE); instead of CommitLock? That prevents read only access, not just commits, so that isn't a better suggestion. read only access of what (clog page)? Here we are mainly doing three operations: read clog page, write transaction status on clog page and update shared control state. So basically two resources are involved, clog page and shared control state; which one of those are you talking about? Sorry, your suggestion was good. Using LWLockAcquire(shared->buffer_locks[slotno], LW_EXCLUSIVE); now seems sufficient. Apart from the above, in the below code it is assumed that we have an exclusive lock on the clog page, which we don't in the proposed patch, as someone can read the same page while we are modifying it. In the current code, this assumption is valid because during Write we take CLogControlLock in Exclusive mode and while Reading we take the same in Shared mode. Not exactly, no. This is not a general case, it is for one important and very specific case only, exactly suited to our transaction manager. I have checked all call paths and we are good. New patch attached. I will reply to Andres' post separately since this does not yet address all of his detailed points. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services clog_optimize.v4.patch Description: Binary data
Re: [HACKERS] Reducing ClogControlLock contention
On 22 August 2015 at 15:14, Andres Freund and...@anarazel.de wrote: On 2015-06-30 08:02:25 +0100, Simon Riggs wrote: Proposal for improving this is to acquire the ClogControlLock in Shared mode, if possible. I find that rather scary. That requires for all read and write paths in clog.c and slru.c to only ever read memory locations once. Otherwise those reads may not get the same results. That'd mean we'd need to add indirections via volatile to all reads/writes. It also requires that we never read in 4 byte quantities. There is a very specific case in which this is possible. We already allow writes to data structures in the transaction manager without locks *in certain cases* and this is all that is proposed here. Nothing scary about doing something we already do, as long as we get it right. This is safe because people checking visibility of an xid must always run TransactionIdIsInProgress() first to avoid race conditions, which will always return true for the transaction we are currently committing. As a result, we never get concurrent access to the same bits in clog, which would require a barrier. I don't think that's really sufficient. There's a bunch of callers that do lookups without such checks, e.g. in heapam.c. It might be safe due to other circumstances, but at the very least this is a significant but subtle API revision. I've checked the call sites you mention and they refer to tests made AFTER we have waited for the xid to complete via the lock manager. So as of now, there are no callers of TransactionIdGetStatus() that have not already confirmed that the xid is no longer active in the lock manager or the procarray. Since we set clog before touching the lock manager or procarray, we can be certain there are no concurrent reads and writes. TransactionIdSetPageStatus() calls TransactionIdSetStatusBit(), which writes an 8 byte variable (the lsn). That's not safe. Agreed, thanks for spotting that. I see this as the biggest problem to overcome with this patch.
We write WAL in pages, so we don't need to store the low order bits (varies according to WAL page size). We seldom use the higher order bits, since it takes a while to go through (8192 * INT_MAX) = 32PB of WAL. So it seems like we can have a base LSN for a whole clog page, plus 4-byte LSN offsets. That way we can make the reads and writes atomic on all platforms. All of that can be hidden in clog.c in macros, so low impact, modular code. A patch like this will need far more changes. Every read and write from a page has to be guaranteed to only be done once, otherwise you can get 'effectively torn' values. That is, you can't just do static void TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr lsn, int slotno) ... char *byteptr; char byteval; ... /* note this assumes exclusive access to the clog page */ byteval = *byteptr; byteval &= ~(((1 << CLOG_BITS_PER_XACT) - 1) << bshift); byteval |= (status << bshift); *byteptr = byteval; ... the compiler is entirely free to optimize away the byteval variable and do all these masking operations on memory! It can intermittently write temporary values to byteval, because e.g. the register pressure is too high. To actually rely on single-copy-atomicity you have to enforce that these reads/writes can only happen once. Leaving out some possible macro indirection, that'd have to look like byteval = *(volatile char *) byteptr; ... *(volatile char *) byteptr = byteval; same for TransactionIdGetStatus(), without the write side obviously. Seems doable. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
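The one-read/one-write discipline Andres describes can be made concrete. Below is a self-contained sketch (not the actual clog.c code; the function name and the standalone setting are illustrative) showing how volatile casts on exactly the load and the store keep all masking work in a local variable, so readers relying on single-copy atomicity never see a torn intermediate value:

```c
/* Two status bits per transaction, mirroring clog.c's layout. */
#define CLOG_BITS_PER_XACT 2

/*
 * Update one transaction's status bits within a shared byte.  The
 * volatile casts force the compiler to emit exactly one load and one
 * store of *byteptr; the masking happens only on the local copy, so a
 * concurrent reader of the byte sees either the old or the new value,
 * never a temporary.
 */
static void
set_status_bits(char *byteptr, int bshift, int status)
{
	char		byteval;

	byteval = *(volatile char *) byteptr;	/* the only read */
	byteval &= ~(((1 << CLOG_BITS_PER_XACT) - 1) << bshift);
	byteval |= (status << bshift);
	*(volatile char *) byteptr = byteval;	/* the only write */
}
```

Without the volatile casts the compiler may legally spill partial results back to `*byteptr`, which is exactly the "effectively torn" hazard described above.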
Re: [HACKERS] Commitfest remaining Needs Review items
-- merging pgbench logs: returned with feedback or bump? Fabien has concerns about performance regarding fprintf when merging the logs. Fabien, Tomas, thoughts? -- pgbench - per-transaction and aggregated logs: returned with feedback or bump to next CF? Fabien, Tomas, thoughts? I think that both features are worthwhile so next CF would be better, but it really depends on Tomas. The key issue was the implementation complexity and maintenance burden, which was essentially driven by fork-based thread emulation compatibility; that has gone away now that the emulation has been taken out of pgbench, so a much simpler implementation of these features is possible. The performance issue is that if you have many threads which perform monstrous tps and try to log them, then logging becomes a bottleneck, both the fprintf time and the eventual file locking... Well, that is life; it is well known that experimenters influence the experiments they are looking at, as Schrödinger taught us, and moreover the --sampling-rate option is already there to alleviate this problem if needed, so I do not think that it is an issue to address by keeping the code complex. -- Fabien. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Resource Owner reassign Locks
Andres Freund and...@anarazel.de writes: On 2015-08-25 14:12:37 -0400, Tom Lane wrote: How would they have done that without major code surgery? We don't have any hooks or function pointers involved in the users of resowner.h. Certainly locks would not be getting passed to a nonstandard resowner. CurrentResourceOwner = myresowner; /* do some op */ Yeah, but so what? GrantLockLocal does not contain any way that external code could change the way that a new lock is recorded. (IOW, yeah, certainly third-party code could create a new *instance* of the ResourceOwner data structure, but they would not have any knowledge of what's inside unless they had hacked the core code.) regards, tom lane
Re: [HACKERS] Resource Owner reassign Locks
Andres Freund and...@anarazel.de writes: On 2015-08-25 13:54:20 -0400, Tom Lane wrote: However, I'm not entirely following Andres' concern here. AFAICS, the only externally visible API change in commit eeb6f37d8 was that LockReleaseCurrentOwner and LockReassignCurrentOwner gained some arguments. That would certainly be an issue if there were any plausible reason for extension code to be calling either one --- but it seems to me that those functions are only meant to be called from resowner.c. What other use-case would there be for them? I don't think it's super likely, but I don't think it's impossible that somebody created their own resource owner. How would they have done that without major code surgery? We don't have any hooks or function pointers involved in the users of resowner.h. Certainly locks would not be getting passed to a nonstandard resowner. Say because they want to perform some operation and then release the locks without finishing the transaction. Adding a zero argument LockReleaseCurrentOwner()/LockReassignCurrentOwner() wrapper seems like a small enough effort to simply not bother looking for existing callers. I agree that a wrapper is possible, but it's not without cost; both as to the time required to modify the patch, and as to possibly complicating future back-patching because the code becomes gratuitously different in the back branches. I really don't see that a wrapper is appropriate here. regards, tom lane
[HACKERS] Custom Scans and private data
(sending again, forgot to cc hackers, sorry for the duplicate) Hi, I'm trying to use the custom scan API to replace code that currently does everything via hooks and isn't safe against copyObject() (Yes, that's not a grand plan). To me dealing with CustomScan->custom_private seems to be extraordinarily painful when dealing with nontrivial data structures. And such aren't all that unlikely when dealing with a custom scan over something more complex. Now, custom scans likely modeled private data after FDWs. But it's already rather painful there (in fact it's the one thing I heard people complain about repeatedly besides the inability to push down operations). Just look at the ugly hangups postgres_fdw goes through to pass data around - storing it in lists with indexes determining the content and such. That kinda works if only integers and such need to be stored, but if you have more complex data it really is a PITA. The best alternatives I found are a) to serialize data into a string, base64 or so, b) use a Const node over a bytea datum. It seems rather absurd having to go through deserialization at every execution. Since we already have CustomScan->methods, it seems to be rather reasonable to have a CopyCustomScan callback and let it do the copying of the private data if present? Or possibly of the whole node, which'd allow embedding it into a bigger node? I looked at pg-strom and it seems to go through rather ugly procedures to deal with the problem: https://github.com/pg-strom/devel/blob/master/src/gpujoin.c form_gpujoin_info/deform_gpujoin_info. What's the advantage of the current model? Greetings, Andres Freund
Re: [HACKERS] Custom Scans and private data
Andres Freund and...@anarazel.de writes: Since we already have CustomScan->methods, it seems to be rather reasonable to have a CopyCustomScan callback and let it do the copying of the private data if present? Or possibly of the whole node, which'd allow embedding it into a bigger node? Weren't there rumblings of requiring plan trees to be deserializable/ reserializable, not just copiable, so that they could be passed to background workers? Not that I'm particularly in favor of that, but if you're going to go in the direction of allowing private data formats to be copied then I think you're likely to have to address the other thing. In any case, since this convention already exists for FDWs I'm not sure why we should make it different for CustomScan. regards, tom lane
Re: [HACKERS] Resource Owner reassign Locks
Andres Freund and...@anarazel.de writes: On 2015-08-25 14:33:25 -0400, Tom Lane wrote: (IOW, yeah, certainly third-party code could create a new *instance* of the ResourceOwner data structure, but they would not have any knowledge of what's inside unless they had hacked the core code.) What I was thinking is that somebody created a new resowner, did something, and then called LockReleaseCurrentOwner() (because no locks are needed anymore), or LockReassignCurrentOwner() (say you want to abort a subtransaction, but do *not* want the locks to be released). Anyway, I slightly lean towards having wrappers, you strongly against, so that makes it an easy call. Well, I'm not strongly against them, just trying to understand whether there's a plausible argument that someone is calling these functions from extensions. I'm not hearing one ... for one thing, I don't believe there are any extensions playing games with transaction/lock semantics. (My Salesforce colleagues have done some of that, and no you can't get far without changing the core code.) regards, tom lane
Re: [HACKERS] Resource Owner reassign Locks
On 2015-08-25 14:12:37 -0400, Tom Lane wrote: How would they have done that without major code surgery? We don't have any hooks or function pointers involved in the users of resowner.h. Certainly locks would not be getting passed to a nonstandard resowner. CurrentResourceOwner = myresowner; /* do some op */ ... ? Say because they want to perform some operation and then release the locks without finishing the transaction. Adding a zero argument LockReleaseCurrentOwner()/LockReassignCurrentOwner() wrapper seems like a small enough effort to simply not bother looking for existing callers. I agree that a wrapper is possible, but it's not without cost; both as to the time required to modify the patch, and as to possibly complicating future back-patching because the code becomes gratuitously different in the back branches. I really don't see that a wrapper is appropriate here. Works for me.
Re: [HACKERS] pg_controldata output alignment regression
Joe Conway wrote: Does anyone out there object to a non-backward compatible change to pg_controldata output? I don't (and thanks for taking care of it), but as I recall, pg_upgrade reads and interprets pg_controldata output so it may need adjustment too. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] Resource Owner reassign Locks
On 2015-08-25 14:33:25 -0400, Tom Lane wrote: (IOW, yeah, certainly third-party code could create a new *instance* of the ResourceOwner data structure, but they would not have any knowledge of what's inside unless they had hacked the core code.) What I was thinking is that somebody created a new resowner, did something, and then called LockReleaseCurrentOwner() (because no locks are needed anymore), or LockReassignCurrentOwner() (say you want to abort a subtransaction, but do *not* want the locks to be released). Anyway, I slightly lean towards having wrappers, you strongly against, so that makes it an easy call.
Re: [HACKERS] pg_controldata output alignment regression
On 08/25/2015 10:32 AM, Joe Conway wrote: On 08/25/2015 10:28 AM, Tom Lane wrote: I was suggesting getting rid of Current in *all* the entries. What value does it add? I agree, it adds no value, and is a simple solution. Does anyone out there object to a non-backward compatible change to pg_controldata output? ...specifically the attached. Will commit/push to 9.5 and HEAD in a few hours barring any objections. Joe -- Crunchy Data - http://crunchydata.com PostgreSQL Support for Secure Enterprises Consulting, Training, Open Source Development

diff --git a/src/bin/pg_controldata/pg_controldata.c b/src/bin/pg_controldata/pg_controldata.c
index d8cfe5e..046480c 100644
*** a/src/bin/pg_controldata/pg_controldata.c
--- b/src/bin/pg_controldata/pg_controldata.c
*** main(int argc, char *argv[])
*** 290,308 ****
  (uint32) ControlFile.backupEndPoint);
  printf(_("End-of-backup record required:%s\n"),
  	ControlFile.backupEndRequired ? _("yes") : _("no"));
! printf(_("Current wal_level setting:%s\n"),
! 	wal_level_str(ControlFile.wal_level));
! printf(_("Current wal_log_hints setting:%s\n"),
! 	ControlFile.wal_log_hints ? _("on") : _("off"));
! printf(_("Current max_connections setting: %d\n"),
! 	ControlFile.MaxConnections);
! printf(_("Current max_worker_processes setting: %d\n"),
! 	ControlFile.max_worker_processes);
! printf(_("Current max_prepared_xacts setting: %d\n"),
! 	ControlFile.max_prepared_xacts);
! printf(_("Current max_locks_per_xact setting: %d\n"),
! 	ControlFile.max_locks_per_xact);
! printf(_("Current track_commit_timestamp setting: %s\n"),
! 	ControlFile.track_commit_timestamp ? _("on") : _("off"));
  printf(_("Maximum data alignment: %u\n"),
  	ControlFile.maxAlign);
--- 290,308 ----
  (uint32) ControlFile.backupEndPoint);
  printf(_("End-of-backup record required:%s\n"),
  	ControlFile.backupEndRequired ? _("yes") : _("no"));
! printf(_("wal_level setting:%s\n"),
! 	wal_level_str(ControlFile.wal_level));
! printf(_("wal_log_hints setting:%s\n"),
! 	ControlFile.wal_log_hints ? _("on") : _("off"));
! printf(_("max_connections setting: %d\n"),
! 	ControlFile.MaxConnections);
! printf(_("max_worker_processes setting: %d\n"),
! 	ControlFile.max_worker_processes);
! printf(_("max_prepared_xacts setting: %d\n"),
! 	ControlFile.max_prepared_xacts);
! printf(_("max_locks_per_xact setting: %d\n"),
! 	ControlFile.max_locks_per_xact);
! printf(_("track_commit_timestamp setting: %s\n"),
! 	ControlFile.track_commit_timestamp ? _("on") : _("off"));
  printf(_("Maximum data alignment: %u\n"),
  	ControlFile.maxAlign);
[HACKERS] 9.4 broken on alpha
Hi, From the Debian ports buildd: https://buildd.debian.org/status/fetch.php?pkg=postgresql-9.4arch=alphaver=9.4.4-1stamp=1434132509 make[5]: Entering directory '/«PKGBUILDDIR»/build/src/backend/postmaster' [...] gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -g -O2 -Wformat -Werror=format-security -I/usr/include/mit-krb5 -fPIC -pie -DLINUX_OOM_SCORE_ADJ=0 -I../../../src/include -I/«PKGBUILDDIR»/build/../src/include -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include/tcl8.6 -c -o bgworker.o /«PKGBUILDDIR»/build/../src/backend/postmaster/bgworker.c /tmp/cc4j88on.s: Assembler messages: /tmp/cc4j88on.s:952: Error: unknown opcode `rmb' as: BFD (GNU Binutils for Debian) 2.25 internal error, aborting at ../../gas/write.c line 603 in size_seg as: Please report this bug. builtin: recipe for target 'bgworker.o' failed make[5]: *** [bgworker.o] Error 1 There's a proposed patch: https://bugs.debian.org/cgi-bin/bugreport.cgi?att=1;msg=5;bug=756368;filename=alpha-fix-read-memory-barrier.patch Christoph Berg -- Senior Berater, Tel.: +49 (0)21 61 / 46 43-187 credativ GmbH, HRB Mönchengladbach 12080, USt-ID-Nummer: DE204566209 Hohenzollernstr. 133, 41061 Mönchengladbach Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer pgp fingerprint: 5C48 FE61 57F4 9179 5970 87C6 4C5A 6BAB 12D2 A7AE Index: postgresql-9.4-9.4~beta2/src/include/storage/barrier.h === --- postgresql-9.4-9.4~beta2.orig/src/include/storage/barrier.h +++ postgresql-9.4-9.4~beta2/src/include/storage/barrier.h @@ -117,7 +117,7 @@ extern slock_t dummy_spinlock; * read barrier to cover that case. We might need to add that later. 
 */
 #define pg_memory_barrier()	__asm__ __volatile__ ("mb" : : : "memory")
-#define pg_read_barrier()	__asm__ __volatile__ ("rmb" : : : "memory")
+#define pg_read_barrier()	__asm__ __volatile__ ("mb" : : : "memory")
 #define pg_write_barrier()	__asm__ __volatile__ ("wmb" : : : "memory")
 #elif defined(__hppa) || defined(__hppa__)	/* HP PA-RISC */
Re: [HACKERS] Planned release for PostgreSQL 9.5
On 24 August 2015 at 19:15, Paragon Corporation l...@pcorp.us wrote: Just checking to see if you guys have settled on a date for the 9.5.0 release. The PostGIS Dev team would like to release PostGIS 2.2 about a week or more before, but not too far ahead of, the 9.5.0 release. It's a good question, thanks for asking. We are behind the planned schedule at this point. If you follow the same alpha/beta/rc/release process of the main project you should be able to release close by. Once Beta is announced, probably in Sept, then extensions can go Beta also. Postgres Beta freezes the APIs as much as possible, so you'll have time to support that before the main project releases. Many extensions support multiple releases, I know. In that case it makes sense to state clearly the level of support for each release, e.g. PostGIS 2.2.0 offers Production Support of 9.4, Beta Support of 9.5, then include any later changes into 2.2.1, if any, or simply just announce 2.2.0 as supporting 9.5 if nothing changed. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] 9.4 broken on alpha
Tom Lane wrote: Alvaro Herrera alvhe...@2ndquadrant.com writes: Aaron W. Swenson wrote: In the 4 years that that particular line has been there, not once had anyone else run into it on Gentoo until a couple months ago. And it isn't a case of end users missing it as we have arch testers that test packages before marking them suitable for public consumption. Alpha is one of the arches. This means that not once has anybody compiled on an Alpha in 4 years. Well, strictly speaking, there were no uses of pg_read_barrier until 9.4. However, pg_write_barrier (which used wmb) was in use since 9.2; so unless you're claiming your assembler knows wmb but not rmb, the code's failed to compile for Alpha since 9.2. Actually according to this http://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15213-f98/doc/alpha-asm.pdf there is a wmb instruction but there is no rmb. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] Using HeapTuple.t_len to verify size of tuple
Vignesh Raghunathan vignesh.pg...@gmail.com writes: Can the t_len field in HeapTuple structure be used to verify the length of the tuple? That is, if I calculate the length from the contents of the tuple using data from pg_attribute for fixed length fields and the data from the header for varlena fields, should it always match the value stored in t_len? If t_len were *less* than that, it would be a bug. But I think it's fairly common for t_len to be rounded up to the next maxalign boundary. regards, tom lane
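The round-up Tom mentions can be sketched as arithmetic. The macros below mirror the shape of PostgreSQL's TYPEALIGN/MAXALIGN macros under the assumption of an 8-byte maximum alignment (the real definitions live in the server headers and the alignment is platform-dependent):

```c
/*
 * Round a length up to the next multiple of the maximum alignment.
 * Assumes maxalign is 8 bytes, which is common but not universal.
 */
#define MAXIMUM_ALIGNOF 8
#define TYPEALIGN(ALIGNVAL, LEN) \
	(((unsigned long) (LEN) + ((ALIGNVAL) - 1)) & \
	 ~((unsigned long) ((ALIGNVAL) - 1)))
#define MAXALIGN(LEN) TYPEALIGN(MAXIMUM_ALIGNOF, (LEN))
```

So a computed tuple length of 61 bytes rounds up to 64, which is why a length derived from pg_attribute data can come out smaller than, but never larger than, a maxaligned t_len.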
Re: [HACKERS] 9.4 broken on alpha
Alvaro Herrera alvhe...@2ndquadrant.com writes: Tom Lane wrote: Well, strictly speaking, there were no uses of pg_read_barrier until 9.4. However, pg_write_barrier (which used wmb) was in use since 9.2; so unless you're claiming your assembler knows wmb but not rmb, the code's failed to compile for Alpha since 9.2. Actually according to this http://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15213-f98/doc/alpha-asm.pdf there is a wmb instruction but there is no rmb. Oh really? If rmb were a figment of someone's imagination, it would explain the build failure (although not why nobody's reported it till now). It'd be easy enough to s/rmb/mb/ in 9.4 ... but not sure it's worth the trouble, since we're desupporting Alpha as of 9.5 anyway. If the effective desupport date is 9.4 instead, how much difference does that make? regards, tom lane
[HACKERS] Function accepting array of complex type
This works: CREATE TYPE c AS (r float, i float); CREATE FUNCTION mag(c c) RETURNS float LANGUAGE sql AS $$ SELECT sqrt(c.r^2 + c.i^2) $$; SELECT mag( (2.2, 2.2) ); mag -- 3.11126983722081 But this doesn't: CREATE FUNCTION magsum( c c[] ) RETURNS float LANGUAGE sql AS $$ SELECT sum(sqrt(c.r^2 + c.i^2)) FROM unnest(c) c $$; SELECT magsum( array[row(2.1, 2.1), row(2.2,2.2)] ); ERROR: function magsum(record[]) does not exist at character 8 Presumably we're playing some games with resolving (...) into a complex type instead of a raw record; what would be involved with making that work for an array of a complex type? I don't see anything array-specific in parse_func.c, so I'm not sure what the path for this is... -- Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX Experts in Analytics, Data Architecture and PostgreSQL Data in Trouble? Get it in Treble! http://BlueTreble.com
Re: [HACKERS] Using HeapTuple.t_len to verify size of tuple
On Tue, Aug 25, 2015 at 2:56 PM, Tom Lane t...@sss.pgh.pa.us wrote: Vignesh Raghunathan vignesh.pg...@gmail.com writes: Can the t_len field in HeapTuple structure be used to verify the length of the tuple? That is, if I calculate the length from the contents of the tuple using data from pg_attribute for fixed length fields and the data from the header for varlena fields, should it always match the value stored in t_len? If t_len were *less* than that, it would be a bug. But I think it's fairly common for t_len to be rounded up to the next maxalign boundary. regards, tom lane I have modified heap_deform_tuple code to check whether the 'off' variable matches the tuple's t_len field for a project. In the code, off is not updated in case of null fields. However, when I run it for pg_class table, the code throws an error saying that the value of off does not match t_len. It turns out that for all tuples even if the attribute attacl is null, t_len field is set to be the sizeof(FormData_pg_class). Is this normal behavior?
Re: [HACKERS] Using HeapTuple.t_len to verify size of tuple
Vignesh Raghunathan vignesh.pg...@gmail.com writes: On Tue, Aug 25, 2015 at 2:56 PM, Tom Lane t...@sss.pgh.pa.us wrote: If t_len were *less* than that, it would be a bug. But I think it's fairly common for t_len to be rounded up to the next maxalign boundary. I have modified heap_deform_tuple code to check whether the 'off' variable matches the tuple's t_len field for a project. In the code, off is not updated in case of null fields. However, when I run it for pg_class table, the code throws an error saying that the value of off does not match t_len. It turns out that for all tuples even if the attribute attacl is null, t_len field is set to be the sizeof(FormData_pg_class). You mean even if relacl is not null? Sounds improbable: AFAIR, pg_class tuples are built with heap_form_tuple, same as anything else. regards, tom lane
Re: [HACKERS] Custom Scans and private data
On 2015-08-25 14:42:32 -0400, Tom Lane wrote: Andres Freund and...@anarazel.de writes: Since we already have CustomScan->methods, it seems to be rather reasonable to have a CopyCustomScan callback and let it do the copying of the private data if present? Or possibly of the whole node, which'd allow embedding it into a bigger node? Weren't there rumblings of requiring plan trees to be deserializable/ reserializable, not just copiable, so that they could be passed to background workers? Yes, I do recall that as well. That's going to require a good bit of independent stuff - currently there are callbacks as pointers in nodes; that's obviously not going to fly well across processes. Additionally custom scan already has a TextOutCustomScan callback, even if it's currently probably intended for debugging. I rather doubt that adding out/readfuncs without the ability to do something about what exactly is read in is going to work well. But I admit I'm not too sure about it. In any case, since this convention already exists for FDWs I'm not sure why we should make it different for CustomScan. I think it was a noticeable mistake in the fdw case, but we already released with that. We shouldn't make the same mistake twice. Looking at postgres_fdw and the pg-strom example linked upthread imo pretty clearly shows how ugly this is. There's also the rather noticeable difference that we already have callbacks in the node for custom scans (used by outfuncs), making this rather trivial to add. Greetings, Andres Freund
Re: [HACKERS] pg_controldata output alignment regression
On 08/25/2015 11:28 AM, Alvaro Herrera wrote: Joe Conway wrote: Does anyone out there object to a non-backward compatible change to pg_controldata output? I don't (and thanks for taking care of it), but as I recall, pg_upgrade reads and interprets pg_controldata output so it may need adjustment too. Thanks for the heads up. There are lots of controldata items pg_upgrade is interested in, but AFAICS none of these are included. Now maybe they should be, but they are not currently referenced. (Bruce added to the thread: we're talking about: Current wal_level setting Current wal_log_hints setting Current max_connections setting Current max_worker_processes setting Current max_prepared_xacts setting Current max_locks_per_xact setting Current track_commit_timestamp setting ) Joe -- Crunchy Data - http://crunchydata.com PostgreSQL Support for Secure Enterprises Consulting, Training, Open Source Development
Re: [HACKERS] One question about security label command
All, The second approach above works. I defined a privileged domain of my own (sepgsql_regtest_superuser_t) instead of the system's unconfined_t domain. The reason why the regression test failed was that the definition of unconfined_t in the system default policy was changed to bypass multi-category rules, which our regression test depends on. So, the new sepgsql_regtest_superuser_t domain performs almost like unconfined_t, but is restricted by the multi-category policy as the traditional unconfined_t was. It is a self-defined domain, so it will not be affected by system policy changes. Even though sepgsql-regtest.te still uses the unconfined_u and unconfined_r pair for selinux-user and role, it requires users to define an additional selinux-user by hand if we try to define our own one. In addition, its definition has not been changed for several years. So, I thought it has less risk to rely on the unconfined_u/unconfined_r pair, unlike the unconfined_t domain. I have reviewed and tested this patch against 'master' at 781ed2b. The patch applies without issue and all tests pass on EL7. -Adam -- Adam Brightwell - adam.brightw...@crunchydatasolutions.com Database Engineer - www.crunchydatasolutions.com
Re: [HACKERS] One question about security label command
So what about the buildfarm animal that was offered for this? We still have this module completely uncovered in the buildfarm ... -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] One question about security label command
* Adam Brightwell (adam.brightw...@crunchydatasolutions.com) wrote: So what about the buildfarm animal that was offered for this? We still have this module completely uncovered in the buildfarm ... I believe that is in the works and should be made available soon. Right, Joe commented on the 'open commitfest items' thread that he's working on getting it set up (actually, more than one; aiui he's building both a RHEL 6 one and a RHEL 7 one). Thanks! Stephen
Re: [HACKERS] One question about security label command
So what about the buildfarm animal that was offered for this? We still have this module completely uncovered in the buildfarm ... I believe that is in the works and should be made available soon. -Adam -- Adam Brightwell - adam.brightw...@crunchydatasolutions.com Database Engineer - www.crunchydatasolutions.com
Re: [HACKERS] 9.4 broken on alpha
Alvaro Herrera alvhe...@2ndquadrant.com writes: Aaron W. Swenson wrote: In the 4 years that that particular line has been there, not once had anyone else run into it on Gentoo until a couple months ago. And it isn't a case of end users missing it as we have arch testers that test packages before marking them suitable for public consumption. Alpha is one of the arches. This means that not once has anybody compiled on an Alpha in 4 years. Well, strictly speaking, there were no uses of pg_read_barrier until 9.4. However, pg_write_barrier (which used wmb) was in use since 9.2; so unless you're claiming your assembler knows wmb but not rmb, the code's failed to compile for Alpha since 9.2. As for the dropped support, has the Alpha specific code been ripped out? Would it still presumably run on Alpha? Yes, code has been ripped out. I would assume that it doesn't build at all anymore, but maybe what happens is you get spinlocks emulated with semaphores and it's only horribly slow. The whole business about laxer-than-average memory coherency gives me the willies, though. It's fairly likely that PG has never worked right on multi-CPU Alphas. regards, tom lane
Re: [HACKERS] Reduce ProcArrayLock contention
On Thu, Aug 20, 2015 at 3:49 PM, Andres Freund and...@anarazel.de wrote: On 2015-08-20 15:38:36 +0530, Amit Kapila wrote: On Wed, Aug 19, 2015 at 9:09 PM, Andres Freund and...@anarazel.de wrote: I spent some time today reviewing the committed patch. So far my only major complaint is that I think the comments are only insufficiently documenting the approach taken: Stuff like avoiding ABA type problems by clearing the list entirely and it being impossible that entries end up on the list too early absolutely needs to be documented explicitly. I think more comments can be added to explain such behaviour if it is not clear via looking at the current code and comments. It's not mentioned at all, so yes. I have updated the comments in the patch as suggested by you. We can even add a link to a wiki or some other page which explains the definition of the ABA problem, or we can explain what the problem is, but as this is a well-known problem that can occur while implementing a lock-free data structure, not adding any explanation also seems okay. I think you are right and here we need to use something like what is suggested below by you. Originally the code was similar to what you have written below, but it was using a different (new) variable to achieve what you have achieved with lwWaiting, and to avoid the use of a new variable the code has been refactored in the current way. I think we should do this change (I can write a patch) unless Robert feels otherwise. I think we can just rename lwWaiting to something more generic. I think that can create a problem considering we have to set it in ProcArrayGroupClearXid() before adding the proc to the wait list (which means it will be set for the leader as well and that can create a problem, because the leader needs to acquire an LWLock and in the LWLock code, lwWaiting is used). The problem I see with setting lwWaiting after adding it to the list is that the leader might have already cleared the proc by the time we try to set lwWaiting for a follower.
For now, I have added a separate variable. Consider what happens if such a follower enqueues in another transaction. It is not, as far as I could find out, guaranteed on all types of cpus that a third backend can actually see nextClearXidElem as INVALID_PGPROCNO. That'd likely require SMT/HT cores and multiple sockets. If the write to nextClearXidElem is entered into the local store buffer (leader #1) a hyper-threaded process (leader #2) can possibly see it (store forwarding) while another backend doesn't yet. I think this is very unlikely to be an actual problem due to independent barriers until enqueued again, but I don't want to rely on it undocumentedly. It seems safer to replace +wakeidx = pg_atomic_read_u32(&proc->nextClearXidElem); +pg_atomic_write_u32(&proc->nextClearXidElem, INVALID_PGPROCNO); with a pg_atomic_exchange_u32(). I didn't follow this point; if we ensure that a follower can never return before the leader wakes it up, then why will it be a problem to update nextClearXidElem like above? Because it doesn't generally enforce that *other* backends have seen the write, as there's no memory barrier. After changing the code to have a separate variable to indicate that the xid is cleared and changing the logic (by having barriers), I don't think this problem can occur; can you please see the latest attached patch and let me know if you still see this problem. I don't think it's a good idea to use the same variable name in PROC_HDR and PGPROC, it's confusing. What do you mean by this, are you not happy with the variable name? Yes. I think it's a bad idea to have the same variable name in PROC_HDR and PGPROC. struct PGPROC { ... /* Support for group XID clearing. */ volatile pg_atomic_uint32 nextClearXidElem; ... } typedef struct PROC_HDR { ... /* First pgproc waiting for group XID clear */ volatile pg_atomic_uint32 nextClearXidElem; ... } PROC_HDR's variable imo isn't well named. Changed the variable name in PROC_HDR. How hard did you try checking whether this causes regressions?
This increases the number of atomics in the commit path a fair bit. I doubt it's really bad, but it seems like a good idea to benchmark something like a single full-throttle writer and a large number of readers. One way to test this is to run a pgbench read load (with 100 client count) and a write load (tpc-b - with one client) simultaneously and check the results. I have tried this and there is a lot of variation (more than 50%) in tps in different runs of the write load, so I am not sure if this is the right way to benchmark it. Another possible way is to hack the pgbench code and make one thread run write transactions and the others run read transactions. Do you have any other ideas or any previously written test (which you are aware of) with which this can be
Re: [HACKERS] pg_controldata output alignment regression
Joe Conway wrote: I should have gotten my key signed when I had the chance :-( On 08/25/2015 11:28 AM, Alvaro Herrera wrote: Joe Conway wrote: Does anyone out there object to a non-backward compatible change to pg_controldata output? I don't (and thanks for taking care of it), but as I recall, pg_upgrade reads and interprets pg_controldata output so it may need adjustment too. Thanks for the heads up. There are lots of controldata items pg_upgrade is interested in, but AFAICS none of these are included. Now maybe they should be, but they are not currently referenced. Well, if there's no compatibility hit then I don't think it's worth worrying about. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] 9.4 broken on alpha
On 2015-08-25 08:57, Andrew Dunstan wrote: On 08/25/2015 08:30 AM, Andres Freund wrote: On 2015-08-25 08:29:18 -0400, Andrew Dunstan wrote: needs a buildfarm animal. If we had one we'd presumably have caught this much earlier. On the other hand, we dropped alpha support in 9.5, ... Oh, I missed that. Sorry for the noise. I've been meaning to report this myself. In the 4 years that that particular line has been there, not once had anyone else run into it on Gentoo until a couple months ago. And it isn't a case of end users missing it as we have arch testers that test packages before marking them suitable for public consumption. Alpha is one of the arches. As for the dropped support, has the Alpha specific code been ripped out? Would it still presumably run on Alpha?
Re: [HACKERS] 9.4 broken on alpha
Aaron W. Swenson wrote: I've been meaning to report this myself. In the 4 years that that particular line has been there, not once had anyone else run into it on Gentoo until a couple months ago. And it isn't a case of end users missing it as we have arch testers that test packages before marking them suitable for public consumption. Alpha is one of the arches. This means that not once has anybody compiled on an Alpha in 4 years. As for the dropped support, has the Alpha specific code been ripped out? Would it still presumably run on Alpha? Yes, code has been ripped out. I would assume that it doesn't build at all anymore, but maybe what happens is you get spinlocks emulated with semaphores and it's only horribly slow. See http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=a6d488cb538c8761658f0f7edfc40cecc8c29f2d -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] Make HeapTupleSatisfiesMVCC more concurrent
Amit Kapila amit.kapil...@gmail.com writes: On Fri, Aug 21, 2015 at 8:22 PM, Robert Haas robertmh...@gmail.com wrote: On Wed, Aug 19, 2015 at 2:53 AM, Amit Kapila amit.kapil...@gmail.com Another minor point is, I think we should modify the function-level comment for XidInMVCCSnapshot() where it says that this applies to known-committed XIDs, which will no longer be true after this patch. Yeah, that comment would certainly be out of date, and I think it may not be the only one. I'll check around some more. I am wondering whether there is any harm in calling TransactionIdDidAbort() in the slow path before calling SubTransGetTopmostTransaction(), that can also maintain consistency of checks in both the functions? I think this is probably a bad idea. It adds a pg_clog lookup that we would otherwise not do at all, in hopes of avoiding a pg_subtrans lookup. It's not exactly clear that that's a win even if we successfully avoid the subtrans lookup (which we often would not). And even if it does win, that would only happen if the other transaction has aborted, which isn't generally the case we prefer to optimize for. I don't mean to dismiss the potential for further optimization inside XidInMVCCSnapshot (for instance, the one-XID-cache idea sounds promising); but I think that's material for further research and a separate patch. It's not clear to me if anyone wanted to do further review/testing of this patch, or if I should go ahead and push it (after fixing whatever comments need to be fixed). regards, tom lane
Re: [HACKERS] Idea: closing the loop for pg_ctl reload
On August 25, 2015 09:31:35 PM Michael Paquier wrote: On Thu, Jul 23, 2015 at 5:06 PM, Heikki Linnakangas wrote: Other comments: [...] This patch had feedback, but there has been no update in the last month, so I am now marking it as returned with feedback. It was suggested that this mechanism became superfluous with the inclusion of the view postgresql.conf (pg_settings?) in 9.5. I left it to Tom (being the one that suggested the feature) to indicate if he still thinks it's useful. jan
Re: [HACKERS] Idea: closing the loop for pg_ctl reload
Jan de Visser j...@de-visser.net writes: On August 25, 2015 09:31:35 PM Michael Paquier wrote: This patch had feedback, but there has been no update in the last month, so I am now marking it as returned with feedback. It was suggested that this mechanism became superfluous with the inclusion of the view postgresql.conf (pg_settings?) in 9.5. I left it to Tom (being the one that suggested the feature) to indicate if he still thinks it's useful. I think there's still a fair argument for pg_ctl reload being able to return a simple yes-or-no result. We had talked about trying to shoehorn textual error messages into the protocol, and I'm now feeling that that complication isn't needed, but a bare-bones feature would still be worth the trouble IMO. regards, tom lane
Re: [HACKERS] Custom Scans and private data
On 2015-08-25 14:42:32 -0400, Tom Lane wrote: Andres Freund and...@anarazel.de writes: Since we already have CustomScan->methods, it seems rather reasonable to have a CopyCustomScan callback and let it do the copying of the private data if present? Or possibly of the whole node, which'd allow to embed it into a bigger node? Weren't there rumblings of requiring plan trees to be deserializable/reserializable, not just copiable, so that they could be passed to background workers? Yes, I do recall that as well. That's going to require a good bit of independent stuff - currently there are callbacks as pointers in nodes; that's obviously not going to fly well across processes. Additionally custom scan already has a TextOutCustomScan callback, even if it's currently probably intended for debugging. I rather doubt that adding out/readfuncs without the ability to do something about what exactly is read in is going to work well. But I admit I'm not too sure about it. In any case, since this convention already exists for FDWs I'm not sure why we should make it different for CustomScan. I think it was a noticeable mistake in the fdw case, but we already released with that. We shouldn't make the same mistake twice. Looking at postgres_fdw and the pg-strom example linked upthread imo pretty clearly shows how ugly this is. There's also the rather noticeable difference that we already have callbacks in the node for custom scans (used by outfuncs), making this rather trivial to add. As Tom pointed out, the primary reason why CustomScan required the provider to save its private data in custom_exprs/custom_private was awareness of copyObject(). In addition, custom_exprs is expected to link expression nodes to be processed in setrefs.c and subselect.c, because the core implementation cannot know which type of data is managed in private. Are you concerned about custom_private only?
Even if we have extra callbacks like CopyObjectCustomScan() and TextReadCustomScan(), how do we handle the situation when the core implementation needs to know the location of expression nodes? Is custom_exprs retained as is? An earlier version of the CustomScan interface had individual callbacks for setrefs.c and subselect.c; however, its structure depended too much on the current implementation, so we moved to an implementation with two individual private fields rather than callbacks in outfuncs.c. On the other hand, I'm inclined to think TextOutCustomScan() might be a misdesign to support serialize/deserialize via readfuncs.c. http://www.postgresql.org/message-id/9a28c8860f777e439aa12e8aea7694f80111d...@bpxm15gp.gisp.nec.co.jp I think it should be deprecated rather than enhanced. Thanks, -- NEC Business Creation Division / PG-Strom Project KaiGai Kohei kai...@ak.jp.nec.com
Re: [HACKERS] compress method for spgist - 2
On Thu, Jul 23, 2015 at 6:18 PM, Teodor Sigaev teo...@sigaev.ru wrote: Poorly, by hanging boxes that straddled dividing lines off the parent node in a big linear list. The hope would be that the case was Ok, I see, but that's not really what I was wondering. My question is this: SP-GiST partitions the space into non-overlapping sections. How can you store polygons - which can overlap - in an SP-GiST index? And how does the compress method help with that? I believe if we found a way to index boxes then we will need a compress method to build an index over polygons. BTW, we are working on investigating an index structure for boxes where a 2d-box is treated as a 4d-point. There has been no activity on this patch for some time now, and a new patch version has not been submitted, so I am marking it as returned with feedback. -- Michael
Re: [HACKERS] Additional role attributes superuser review
On Sat, Jul 11, 2015 at 6:06 AM, Heikki Linnakangas wrote: On 05/08/2015 07:35 AM, Stephen Frost wrote: In consideration of the fact that you can't create schemas which start with pg_ and therefore the default search_path wouldn't work for that user, and that we also reserve pg_ for tablespaces, I'm not inclined to worry too much about this case. Further, if we accept this argument, then we simply can't ever provide additional default or system roles, ever. That'd be a pretty narrow corner to have painted ourselves into. Well, you could still provide them through some other mechanism, like require typing SYSTEM ROLE pg_backup any time you mean that magic role. But I agree, reserving pg_* is much better. I wish we had done it when we invented roles (6.5?), so there would be no risk that you would upgrade from a system that already has a pg_foo role. But I think it'd still be OK. I agree with Robert's earlier point that this needs to be split into multiple patches, which can then be reviewed and discussed separately. Pending that, I'm going to mark this as Waiting on author in the commitfest. ... And now marked as returned with feedback. -- Michael
Re: [HACKERS] Custom Scans and private data
Andres Freund and...@anarazel.de writes: On 2015-08-25 14:42:32 -0400, Tom Lane wrote: In any case, since this convention already exists for FDWs I'm not sure why we should make it different for CustomScan. I think it was a noticeable mistake in the fdw case, but we already released with that. We shouldn't make the same mistake twice. I don't agree that it was a mistake, and I do think there is value in consistency. In the case at hand, it would not be too hard to provide some utility functions for some common cases; for instance, if you want to just store a struct, we could offer convenience functions to wrap that in a bytea constant and unwrap it again. Those could be useful for both FDWs and custom scans. (The bigger picture here is that we always intended to offer a bunch of support functions to make writing FDWs easier, once we'd figured out what made sense. The fact that we haven't done that work yet doesn't make it a bad approach. Nor does shove it all into some callbacks mean that the callbacks will be easy to write.) Looking at postgres_fdw and the pg-strom example linked upthread imo pretty clearly shows how ugly this is. There's also the rather noticeable difference that we already have callbacks in the node for custom scans (used by outfuncs), making this rather trivial to add. I will manfully refrain from taking that bait. regards, tom lane
Re: [HACKERS] Function accepting array of complex type
On Tue, Aug 25, 2015 at 6:21 PM, Jim Nasby jim.na...@bluetreble.com wrote:

This works:

CREATE TYPE c AS (r float, i float);
CREATE FUNCTION mag(c c) RETURNS float LANGUAGE sql AS $$
  SELECT sqrt(c.r^2 + c.i^2)
$$;
SELECT mag( (2.2, 2.2) );
       mag
------------------
 3.11126983722081

But this doesn't:

CREATE FUNCTION magsum( c c[] ) RETURNS float LANGUAGE sql AS $$
  SELECT sum(sqrt(c.r^2 + c.i^2)) FROM unnest(c) c
$$;
SELECT magsum( array[row(2.1, 2.1), row(2.2,2.2)] );
ERROR:  function magsum(record[]) does not exist at character 8

Presumably we're playing some games with resolving (...) into a complex type instead of a raw record; what would be involved with making that work for an array of a complex type? I don't see anything array-specific in parse_func.c, so I'm not sure what the path for this is... magsum( c c[] ) never gets a chance to coerce its argument because array[row(...), row(...)] beats it to the punch. SELECT mag( row(...) ) does see the untyped row and, seeing only a single function with parameter c, coerces it to match. I'm not sure what can be done besides adding the cast, either to the array, array[]::c[], or to the individual items, array[ row(...)::c ]. Hopefully the thought helps because I'm useless when it comes to the actual code. This does seem similar to how non-array literals are treated; though I'm not sure if there are inferences (or node look-through) occurring in literals that make some cases like this work while the corresponding unknown record gets set in stone differently. David J.
Re: [HACKERS] Function accepting array of complex type
Jim Nasby jim.na...@bluetreble.com writes:

This works:

CREATE TYPE c AS (r float, i float);
CREATE FUNCTION mag(c c) RETURNS float LANGUAGE sql AS $$
  SELECT sqrt(c.r^2 + c.i^2)
$$;
SELECT mag( (2.2, 2.2) );
       mag
------------------
 3.11126983722081

But this doesn't:

CREATE FUNCTION magsum( c c[] ) RETURNS float LANGUAGE sql AS $$
  SELECT sum(sqrt(c.r^2 + c.i^2)) FROM unnest(c) c
$$;
SELECT magsum( array[row(2.1, 2.1), row(2.2,2.2)] );
ERROR:  function magsum(record[]) does not exist at character 8

You need to cast it to some specific record type:

regression=# SELECT magsum( array[row(2.1, 2.1), row(2.2,2.2)]::c[] );
      magsum
------------------
 6.08111831820431
(1 row)

regards, tom lane
Re: [HACKERS] Using HeapTuple.t_len to verify size of tuple
You mean even if relacl is not null? Sounds improbable: AFAIR, pg_class tuples are built with heap_form_tuple, same as anything else. regards, tom lane I am sorry. It was a bug in my code. I did not add the size of the tuple's header field to the off variable before comparing it with t_len. Thank you for the help.
Re: [HACKERS] Commitfest remaining Needs Review items
On Tue, Aug 25, 2015 at 11:28 PM, Stephen Frost sfr...@snowman.net wrote: Michael, * Michael Paquier (michael.paqu...@gmail.com) wrote: -- Default Roles: Stephen, are you planning to work on that for next CF? Yup! OK. Fine for me. I have moved the patch to next CF, even if I mentioned having marked it as returned with feedback on the related thread. I was too hasty here. -- Michael
Re: [HACKERS] Make HeapTupleSatisfiesMVCC more concurrent
Tom Lane wrote: Amit Kapila amit.kapil...@gmail.com writes: I am wondering whether there is any harm in calling TransactionIdDidAbort() in the slow path before calling SubTransGetTopmostTransaction(), that can also maintain consistency of checks in both the functions? I think this is probably a bad idea. It adds a pg_clog lookup that we would otherwise not do at all, in hopes of avoiding a pg_subtrans lookup. It's not exactly clear that that's a win even if we successfully avoid the subtrans lookup (which we often would not). And even if it does win, that would only happen if the other transaction has aborted, which isn't generally the case we prefer to optimize for. It's probably key to this idea that TransactionIdDidAbort returns in a single slru lookup, whereas SubTransGetTopmostTransaction needs to iterate possibly many layers of subxacts. But the point about this being a win only for aborted xacts makes it probably pointless, I agree. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] One question about security label command
On 08/25/2015 01:02 PM, Stephen Frost wrote: * Adam Brightwell (adam.brightw...@crunchydatasolutions.com) wrote: So what about the buildfarm animal that was offered for this? We still have this module completely uncovered in the buildfarm ... I believe that is in the works and should be made available soon. Right, Joe commented on the 'open commitfest items' thread that he's working on getting it set up (actually, more than one; aiui he's building both a RHEL 6 one and a RHEL 7 one). Yeah, I'm working on it. I also have reviewed and tested the patch on rhel7.1 and am working on rhel6.7. Joe -- Crunchy Data - http://crunchydata.com PostgreSQL Support for Secure Enterprises Consulting, Training, Open Source Development
[HACKERS] Using HeapTuple.t_len to verify size of tuple
Hello, Can the t_len field in the HeapTuple structure be used to verify the length of the tuple? That is, if I calculate the length from the contents of the tuple, using data from pg_attribute for fixed-length fields and the data from the header for varlena fields, should it always match the value stored in t_len? Thanks, Vignesh
Re: [HACKERS] 9.4 broken on alpha
On 2015-08-25 15:43:12 -0400, Aaron W. Swenson wrote: As for the dropped support, has the Alpha specific code been ripped out? Would it still presumably run on Alpha? I'm pretty sure that postgres hasn't run correctly under concurrency on alpha for a long while. The lax cache coherency makes developing concurrent code hard. Since there are rarely, if ever, people testing postgres on alpha under load it's nigh on impossible to verify anything. Having to adhere to a more complicated memory model than for any other architecture isn't worth it, since there barely are users. Greetings, Andres Freund
Re: [HACKERS] 9.4 broken on alpha
On 08/25/2015 06:16 AM, Christoph Berg wrote: Hi, From the Debian ports buildd: https://buildd.debian.org/status/fetch.php?pkg=postgresql-9.4arch=alphaver=9.4.4-1stamp=1434132509

make[5]: Entering directory '/«PKGBUILDDIR»/build/src/backend/postmaster'
[...]
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -g -O2 -Wformat -Werror=format-security -I/usr/include/mit-krb5 -fPIC -pie -DLINUX_OOM_SCORE_ADJ=0 -I../../../src/include -I/«PKGBUILDDIR»/build/../src/include -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include/tcl8.6 -c -o bgworker.o /«PKGBUILDDIR»/build/../src/backend/postmaster/bgworker.c
/tmp/cc4j88on.s: Assembler messages:
/tmp/cc4j88on.s:952: Error: unknown opcode `rmb'
as: BFD (GNU Binutils for Debian) 2.25 internal error, aborting at ../../gas/write.c line 603 in size_seg
as: Please report this bug.
builtin: recipe for target 'bgworker.o' failed
make[5]: *** [bgworker.o] Error 1

needs a buildfarm animal. If we had one we'd presumably have caught this much earlier. cheers andrew
Re: [HACKERS] Idea: closing the loop for pg_ctl reload
On Thu, Jul 23, 2015 at 5:06 PM, Heikki Linnakangas wrote: Other comments: [...] This patch had feedback, but there has been no update in the last month, so I am now marking it as returned with feedback. -- Michael
Re: [HACKERS] [PATCH] libpq: Allow specifying multiple host names to try to connect to
On Thu, Aug 6, 2015 at 4:02 PM, Mikko Tiihonen wrote: Because the feature at its simplest is a for loop in libpq. I would not think it much of a feature creep, especially since my original patch to libpq showed the loop has been hidden in libpq for a long time; it just needed a special DNS record for the postgresql hosts that returned DNS records for all hosts. Even if there are poolers in front of postgres, they can be set up in a much simpler and more reliable non-cluster mode when libpq can be given multiple pooler addresses to connect to. Patch marked as returned with feedback; there has been review input, but unfortunately no patch updates lately. -- Michael
Re: [HACKERS] Commitfest remaining Needs Review items
On Tue, Aug 25, 2015 at 6:05 PM, Fabien COELHO coe...@cri.ensmp.fr wrote: -- merging pgbench logs: returned with feedback or bump? Fabien has concerns about performance regarding fprintf when merging the logs. Fabien, Tomas, thoughts? -- pgbench - per-transaction and aggregated logs: returned with feedback or bump to next CF? Fabien, Tomas, thoughts? I think that both features are worthwhile so next CF would be better, but it really depends on Tomas. OK, so let's wait for input from Tomas for now. The key issue was the implementation complexity and maintenance burden, which was essentially driven by fork-based thread emulation compatibility; but that has gone away now that the emulation has been taken out of pgbench, and it is now possible to do a much simpler implementation of these features. The performance issue is that if you have many threads which perform monstrous tps and try to log them, then logging becomes a bottleneck, both the printf time and the eventual file locking... Well, that is life; it is well known since Schrödinger that experimenters influence the experiments they are looking at, and moreover the --sampling-rate option is already there to alleviate this problem if needed, so I do not think that it is an issue to address by keeping the code complex. Honestly, I don't like the idea of having a bottleneck at the logging level even if we can alleviate it with a logging sample rate; that's a recipe for making pgbench a benchmark that measures its own contention, while it should put the backend under pressure, particularly when short transactions are used. -- Michael
Re: [HACKERS] Let PostgreSQL's On Schedule checkpoint write buffer smooth spread cycle by tuning IsCheckpointOnSchedule?
On Mon, Jul 6, 2015 at 12:30 PM, Amit Kapila wrote: Yes, we definitely want to see the effect on TPS at the beginning of checkpoint, but even measuring the IO during checkpoint with the way Digoal was capturing the data can show the effect of this patch. I am marking this patch as returned with feedback. -- Michael
Re: [HACKERS] Can pg_dump make use of CURRENT/SESSION_USER
On Thu, Apr 30, 2015 at 2:04 AM, Alvaro Herrera alvhe...@2ndquadrant.com wrote: Fabrízio de Royes Mello wrote: I have this idea: 1) Add an ObjectAddress field to the CommentStmt struct and set it in gram.y 2) In CommentObject, check if CommentStmt->address is InvalidObjectAddress, then call get_object_address, else use it. For DDL deparsing purposes, it seems important that the affected object address can be reproduced somehow. I think pg_get_object_address() should be considered, as well as the object_address test. If we do as you propose, we would have to deparse COMMENT ON CURRENT DATABASE IS 'foo' as COMMENT ON DATABASE whatever_the_name_is IS 'foo', which is not a fatal objection but doesn't seem all that great. Seeing no activity in the last couple of months for this patch that had reviews, it is now marked as returned with feedback.
-- Michael
Re: [HACKERS] Commitfest remaining Needs Review items
On Tue, Aug 25, 2015 at 5:51 PM, Heikki Linnakangas wrote: On 08/25/2015 10:39 AM, Michael Paquier wrote: - 12 patches are waiting on author: These can all be marked as Returned with good conscience, they've gotten at least some feedback. Fine for me. I'll notify each thread individually and update the status of each patch. - 8 patches are in need of review: [...] - 6 patches marked as ready for committer: [...] For all of them, bump to next CF with the same status if they are not committed at the end of the month? Yeah, I don't think we can let this linger much longer. I hate to bump patches in a commitfest just because no-one's gotten around to reviewing them, because the point of commitfests is precisely to provide a checkpoint where every submitted patch gets at least some feedback. But I've run out of steam myself, and I don't see anyone else interested in any of these patches, so I don't think there's much else we can do :-(. OK, let's wait a bit then for those ones. There is still a bit of time.
-- Michael
Re: [HACKERS] 9.4 broken on alpha
On 2015-08-25 08:29:18 -0400, Andrew Dunstan wrote: needs a buildfarm animal. If we had one we'd presumably have caught this much earlier. On the other hand, we dropped alpha support in 9.5, ...
Re: [HACKERS] multivariate statistics / patch v7
On Fri, Jul 31, 2015 at 6:28 AM, Tomas Vondra tomas.von...@2ndquadrant.com wrote: [series of arguments] If you need stats without these issues you'll have to use an MCV list or a histogram. Trying to fix the simple statistics types is futile, IMHO. Patch is marked as returned with feedback. There have been advanced discussions and reviews as well.
-- Michael
Re: [HACKERS] auto_explain sample rate
On Fri, Jul 17, 2015 at 2:53 PM, Craig Ringer cr...@2ndquadrant.com wrote: On 7 July 2015 at 21:37, Julien Rouhaud julien.rouh...@dalibo.com wrote: Well, I obviously missed that pg_srand48() is only used if the system lacks random/srandom, sorry for the noise. So yes, random() must be used instead of pg_lrand48(). I'm attaching a new version of the patch fixing this issue just in case. Thanks for picking this up. I've been trying to find time to come back to it but been swamped in priority work. For now I am marking that as returned with feedback.
-- Michael
Re: [HACKERS] Resource Owner reassign Locks
On Fri, Jul 10, 2015 at 4:22 AM, Andres Freund and...@anarazel.de wrote: On July 9, 2015 9:13:20 PM GMT+02:00, Jeff Janes jeff.ja...@gmail.com wrote: Unfortunately I don't know what that means about the API. Does it mean that none of the functions declared in any .h file can have their signatures changed? But new functions can be added? That's the safest way. Sometimes you can decide that a function cannot sanely be called by external code and thus change the signature. But I'd rather not risk it here; it's quite possible that one of these is used by an extension. Where are we on this? Could there be a version for >= 9.2?
-- Michael
Re: [HACKERS] 9.4 broken on alpha
On 08/25/2015 08:30 AM, Andres Freund wrote: On 2015-08-25 08:29:18 -0400, Andrew Dunstan wrote: needs a buildfarm animal. If we had one we'd presumably have caught this much earlier. On the other hand, we dropped alpha support in 9.5, ... Oh, I missed that. Sorry for the noise. cheers andrew
Re: [HACKERS] Improving replay of XLOG_BTREE_VACUUM records
On Sun, Jul 26, 2015 at 9:46 PM, Andres Freund wrote: On 2015-07-24 09:53:49 +0300, Heikki Linnakangas wrote: To me it sounds like this shouldn't go through the full ReadBuffer() rigamarole. That code is already complex enough, and here it's really not needed. I think it'll be much easier to review - and actually faster in many cases - to simply have something like

    bool
    BufferInCache(Relation rel, ForkNumber forkNum, BlockNumber blockNum)
    {
        /* XXX: set up tag, hash, partition */
        LWLockAcquire(newPartitionLock, LW_SHARED);
        buf_id = BufTableLookup(newTag, newHash);
        LWLockRelease(newPartitionLock);
        return buf_id != -1;
    }

and then fall back to the normal ReadBuffer() when it's in cache. Patch marked as returned with feedback, as input from the author has been awaited for some time now.
-- Michael
Re: [HACKERS] Commitfest remaining Needs Review items
Hi,

On 08/25/2015 02:44 PM, Michael Paquier wrote: On Tue, Aug 25, 2015 at 6:05 PM, Fabien COELHO coe...@cri.ensmp.fr wrote:

-- merging pgbench logs: returned with feedback or bump? Fabien has concerns about performance regarding fprintf when merging the logs. Fabien, Tomas, thoughts?

-- pgbench - per-transaction and aggregated logs: returned with feedback or bump to next CF? Fabien, Tomas, thoughts?

I think that both features are worthwhile, so next CF would be better, but it really depends on Tomas.

OK, so let's wait for input from Tomas for now.

Let's move them to the next CF.

The key issue was the implementation complexity and maintenance burden, which was essentially driven by fork-based thread emulation compatibility, but it has gone away as the emulation has been taken out of pgbench and it is now possible to do a much simpler implementation of these features.

To some extent, yes. It makes logging into a single file simpler, but the overhead it introduces is still an open question, and it does not really simplify the other patch (writing both raw and aggregated logs).

The performance issue is that if you have many threads which perform monstrous tps and try to log them, then logging becomes a bottleneck, both the printf time and the eventual file locking... Well, that is life; it is well known since Schrödinger that experimenters influence the experiments they are looking at, and moreover the --sampling-rate option is already here to alleviate this problem if needed, so I do not think that it is an issue to be addressed by keeping the code complex.

Honestly, I don't like the idea of having a bottleneck at the logging level even if we can mitigate it with a logging sample rate; that's a recipe for making pgbench a benchmark that measures its own contention, when it should instead put the backend under pressure, particularly when short transactions are used.
I'd like to point out that this overhead would not be a new thing - the locking is already there (at least with glibc) to a large degree. See: http://www.gnu.org/software/libc/manual/html_node/Streams-and-Threads.html

So fprintf does locking, and that has overhead even when the lock is uncontended (e.g. when using one file per thread). And it has nothing to do with the thread emulation - that was mostly about code complexity, not about locking overhead.

The logging system was designed with a single log in mind, so it's not quite compatible with features like this - I think we may need to redesign it, and I think it nicely matches the producer/consumer pattern, about like this:

1) each thread (-j) is a producer
   - producing transaction details (un-formatted)
   - possibly batches the data to minimize overhead

2) each log type is a separate consumer
   - may be a dedicated thread or just a function
   - gets the raw transaction details (in batches)
   - either just writes the data to a file (raw), aggregates it, or does something else with it (e.g. prints progress)

Data is passed through queues (hopefully with low overhead thanks to the batching).

regards

--
Tomas Vondra                   http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
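The producer/consumer redesign sketched above can be illustrated roughly as follows. This is a hedged Python sketch (pgbench is C, and every name here is hypothetical): worker threads batch raw transaction records locally and push whole batches through a shared queue to a single consumer that owns the output, so no per-write file locking is needed.

```python
import queue
import threading

def producer(q, thread_id, n_xacts, batch_size=100):
    # Each pgbench-style worker accumulates records locally and
    # touches the shared queue only once per batch.
    batch = []
    for i in range(n_xacts):
        batch.append((thread_id, i))  # un-formatted transaction detail
        if len(batch) >= batch_size:
            q.put(batch)
            batch = []
    if batch:
        q.put(batch)

def consumer(q, sink):
    # A single consumer serializes all writes; here "sink" is just a
    # list standing in for a raw log file or an aggregator.
    while True:
        batch = q.get()
        if batch is None:  # shutdown sentinel
            break
        sink.extend(batch)

q = queue.Queue()
log = []
c = threading.Thread(target=consumer, args=(q, log))
c.start()
workers = [threading.Thread(target=producer, args=(q, t, 1000))
           for t in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
q.put(None)  # all producers done: tell the consumer to stop
c.join()
assert len(log) == 4000  # nothing lost in transit
```

Adding a second consumer (e.g. an aggregator alongside the raw writer) would mean fanning each batch out to one queue per log type, which matches the "each log type is a separate consumer" point above.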
Re: [HACKERS] One question about security label command
On 08/25/2015 02:27 PM, Joe Conway wrote: On 08/25/2015 01:02 PM, Stephen Frost wrote: * Adam Brightwell (adam.brightw...@crunchydatasolutions.com) wrote: So what about the buildfarm animal that was offered for this? We still have this module completely uncovered in the buildfarm ... I believe that is in the works and should be made available soon. Right, Joe commented on the 'open commitfest items' thread that he's working on getting it set up (actually, more than one; aiui he's building both a RHEL 6 one and a RHEL 7 one). Yeah, I'm working on it. I also have reviewed and tested the patch on rhel7.1 and am working on rhel6.7. I'm arriving late to this party, so maybe everyone else already knows this, but apparently sepgsql is not compatible with the version of selinux available on RHEL 6.x. So there doesn't seem to be much reason for a RHEL 6.x buildfarm animal just for sepgsql testing as it will always fail ;-) As I have never before set up a buildfarm animal, and since sepgsql cannot be run directly with `make installcheck`, nor can sepgsql be installed in the usual manner, this may not be a slam dunk...
Joe

--
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, Open Source Development
Re: [HACKERS] Commitfest remaining Needs Review items
On 08/25/2015 09:39 AM, Michael Paquier wrote: -- Reload SSL certificates on SIGHUP: returned with feedback? I think that this patch needs more work to be in a committable state. Maybe I am being dense here, but I do not feel like I have gotten any clear feedback which gives me a way forward with the patch. I do not really see what more I can do here other than resubmit it to the next CF, which I feel would be poor etiquette on my part. If you have the time to spare, I would be very grateful if you could clarify what work you think needs to be done.
-- Andreas Karlsson
Re: [HACKERS] pg_controldata output alignment regression
On 08/25/2015 12:41 PM, Alvaro Herrera wrote: Joe Conway wrote: On 08/25/2015 11:28 AM, Alvaro Herrera wrote: Joe Conway wrote: Does anyone out there object to a non-backward compatible change to pg_controldata output? I don't (and thanks for taking care of it), but as I recall, pg_upgrade reads and interprets pg_controldata output so it may need adjustment too. Thanks for the heads up. There are lots of controldata items pg_upgrade is interested in, but AFAICS none of these are included. Now maybe they should be, but they are not currently referenced. Well, if there's no compatibility hit then I don't think it's worth worrying about. Committed and pushed to HEAD and 9.5.

Joe
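The alignment regression discussed in this thread comes down to a fixed-width label column. As a hedged illustration (pg_controldata itself is C, and the column width below is invented for the example, not the real value), a label padded to a fixed width lines its value up with every other line, while a label longer than the column overflows and pushes its value to the right:

```python
def control_line(label, value, width=32):
    # pg_controldata-style output: the label (plus colon) is padded
    # to a fixed column so all values start at the same offset. A
    # label longer than `width` - e.g. the long
    # "Current track_commit_timestamp setting:" - would overflow
    # the column and shift its value rightward.
    return f"{label + ':':<{width}} {value}"

lines = [
    control_line("Maximum data alignment", 8),
    control_line("Database block size", 8192),
]
# Both values begin at the same column because both labels fit.
assert lines[0].index("8") == lines[1].index("8") == 33
```

This is why the committed fix shortened the offending labels rather than widening the column for every line.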
Re: [HACKERS] One question about security label command
On 08/25/2015 06:03 PM, Joe Conway wrote: I'm arriving late to this party, so maybe everyone else already knows this, but apparently sepgsql is not compatible with the version of selinux available on RHEL 6.x. So there doesn't seem to be much reason for a RHEL 6.x buildfarm animal just for sepgsql testing as it will always fail ;-) Just to be clear, I have marked this on the commitfest app as ready for commit, and plan to commit it soon. Figuring out the buildfarm animal will be my next task after that.

Joe