[HACKERS] Re: [GENERAL] Strange replication problem - segment restored from archive but still requested from master
2015-05-22 6:55 GMT+02:00 Fujii Masao masao.fu...@gmail.com:

Thanks for the report! This seems to be a bug. This problem happens when a WAL record is split across two WAL files and there is no valid latter WAL file in the standby. In your case, the former file is 00044C4D0090 and the latter is 00044C4D0091.

I've got both files; here they are after running pg_xlogdump on them:

$ cat dump1
00044C4D0090.dump
...
rmgr: Heap2 len (rec/tot): 24/ 7772, tx: 0, lsn: 4C4D/90FF6DB8, prev 4C4D/90FF51E8, bkp: 1000, desc: clean: rel 1663/131438/10079072; blk 13843 remxid 472083193
rmgr: Heap2 len (rec/tot):504/ 536, tx: 0, lsn: 4C4D/90FF8C30, prev 4C4D/90FF6DB8, bkp: , desc: freeze_page: rel 1663/131438/10079072; blk 13843; cutoff xid 422325322 ntuples 40
rmgr: Heap2 len (rec/tot): 24/ 5632, tx: 0, lsn: 4C4D/90FF8E48, prev 4C4D/90FF8C30, bkp: 1000, desc: freeze_page: rel 1663/131438/10079072; blk 13844; cutoff xid 422325322 ntuples 30
rmgr: Heap2 len (rec/tot): 24/ 7528, tx: 0, lsn: 4C4D/90FFA460, prev 4C4D/90FF8E48, bkp: 1000, desc: clean: rel 1663/131438/10079072; blk 13845 remxid 472083193
rmgr: Heap2 len (rec/tot):564/ 596, tx: 0, lsn: 4C4D/90FFC1E0, prev 4C4D/90FFA460, bkp: , desc: freeze_page: rel 1663/131438/10079072; blk 13845; cutoff xid 422325322 ntuples 45
rmgr: Heap2 len (rec/tot): 24/ 5316, tx: 0, lsn: 4C4D/90FFC438, prev 4C4D/90FFC1E0, bkp: 1000, desc: clean: rel 1663/131438/10079072; blk 13846 remxid 472083193
rmgr: Heap2 len (rec/tot):420/ 452, tx: 0, lsn: 4C4D/90FFD900, prev 4C4D/90FFC438, bkp: , desc: freeze_page: rel 1663/131438/10079072; blk 13846; cutoff xid 422325322 ntuples 33
rmgr: Heap2 len (rec/tot): 24/ 7856, tx: 0, lsn: 4C4D/90FFDAC8, prev 4C4D/90FFD900, bkp: 1000, desc: freeze_page: rel 1663/131438/10079072; blk 13847; cutoff xid 422325322 ntuples 53
rmgr: Heap2 len (rec/tot): 20/52, tx: 0, lsn: 4C4D/90FFF990, prev 4C4D/90FFDAC8, bkp: , desc: visible: rel 1663/131438/10079072; blk 13847

00044C4D0091.dump
rmgr: Heap2 len (rec/tot):600/ 632, tx: 0, lsn: 4C4D/910014A0, prev 4C4D/90FFF9C8, bkp: , desc: freeze_page: rel 1663/131438/10079072; blk 13848; cutoff xid 422325322 ntuples 48
rmgr: Heap2 len (rec/tot): 24/ 6640, tx: 0, lsn: 4C4D/91001718, prev 4C4D/910014A0, bkp: 1000, desc: freeze_page: rel 1663/131438/10079072; blk 13849; cutoff xid 422325322 ntuples 42
rmgr: Heap2 len (rec/tot): 24/ 7848, tx: 0, lsn: 4C4D/91003120, prev 4C4D/91001718, bkp: 1000, desc: freeze_page: rel 1663/131438/10079072; blk 13850; cutoff xid 422325322 ntuples 36
rmgr: Heap2 len (rec/tot): 20/52, tx: 0, lsn: 4C4D/91004FE0, prev 4C4D/91003120, bkp: , desc: visible: rel 1663/131438/10079072; blk 13850
rmgr: Heap2 len (rec/tot): 24/ 6208, tx: 0, lsn: 4C4D/91005018, prev 4C4D/91004FE0, bkp: 1000, desc: clean: rel 1663/131438/10079072; blk 13851 remxid 472083193
rmgr: Heap2 len (rec/tot):564/ 596, tx: 0, lsn: 4C4D/91006870, prev 4C4D/91005018, bkp: , desc: freeze_page: rel 1663/131438/10079072; blk 13851; cutoff xid 422325322 ntuples 45
rmgr: Heap2 len (rec/tot): 24/ 7340, tx: 0, lsn: 4C4D/91006AC8, prev 4C4D/91006870, bkp: 1000, desc: freeze_page: rel 1663/131438/10079072; blk 13852; cutoff xid 422325322 ntuples 39
rmgr: Heap2 len (rec/tot): 24/ 7744, tx: 0, lsn: 4C4D/91008790, prev 4C4D/91006AC8, bkp: 1000, desc: clean: rel 1663/131438/10079072; blk 13853 remxid 472083193
rmgr: Heap2 len (rec/tot):492/ 524, tx: 0, lsn: 4C4D/9100A5E8, prev 4C4D/91008790, bkp: , desc: freeze_page: rel 1663/131438/10079072; blk 13853; cutoff xid 422325322 ntuples 39
rmgr: Heap2 len (rec/tot): 24/ 7152, tx: 0, lsn: 4C4D/9100A7F8, prev 4C4D/9100A5E8, bkp: 1000, desc: freeze_page: rel 1663/131438/10079072; blk 13854; cutoff xid 422325322 ntuples 32
rmgr: Heap2 len (rec/tot): 24/ 7380, tx: 0, lsn: 4C4D/9100C400, prev 4C4D/9100A7F8, bkp: 1000, desc: freeze_page: rel 1663/131438/10079072; blk 13855; cutoff xid 422325322 ntuples 42
rmgr: Heap2 len (rec/tot): 24/ 6784, tx: 0, lsn: 4C4D/9100E0F0, prev 4C4D/9100C400, bkp: 1000, desc: clean: rel 1663/131438/10079072; blk 13856 remxid 472083193
rmgr: Heap2 len (rec/tot):408/ 440, tx: 0, lsn: 4C4D/9100FB70, prev 4C4D/9100E0F0, bkp: , desc: freeze_page: rel 1663/131438/10079072; blk 13856; cutoff xid 422325322 ntuples 32
rmgr: Heap2 len (rec/tot): 24/ 7312, tx: 0, lsn: 4C4D/9100FD28, prev 4C4D/9100FB70, bkp: 1000, desc: clean: rel 1663/131438/10079072; blk 13857 remxid 472083193
rmgr: Heap2 len (rec/tot):432/ 464, tx: 0, lsn: 4C4D/910119D0,
Re: [HACKERS] Add a hint when postgresql.conf hot_standby = 'off' and recovery.conf standby = 'on'
On Fri, May 22, 2015 at 4:22 PM, Heikki Linnakangas hlinn...@iki.fi wrote:

On 05/22/2015 06:53 AM, Matthew Kelly wrote: Why anybody in practice would want hot_standby off while in standby mode eludes me, but these are our default values (recovery.conf was generated by pg_basebackup -R).

That's how you set up a cold standby system.

It seems worth adding a hint and/or changing the error message to be more descriptive when in this state. Any opinions about what should be logged before I start putting together a patch?

Yeah, might be worthwhile. Perhaps: FATAL: the database system is in standby mode and hot_standby is not enabled

Since we may connect to the server even during an archive recovery (e.g., recovery_target_action = pause case), this message should be changed as follows? FATAL: the database system is in standby or archive recovery mode and hot_standby is not enabled

Or just: FATAL: the database system is in cold standby mode

It would be useful to distinguish that state, where you could connect if hot standby was enabled, from the state where it's starting up and hasn't reached min-recovery-point yet and you couldn't connect even if hot_standby was enabled. Not sure how much signalling you'd need between postmaster and the startup process for that.

Probably we need to make the startup process send the SIGUSR1 signal to postmaster when it reaches the consistent point even if hot_standby is not enabled.

Regards, -- Fujii Masao -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
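For context, the configuration that produces the cold-standby state under discussion is roughly the following (a sketch for the 9.x-era recovery.conf setup this thread concerns; the primary_conninfo values are placeholders):

```
# postgresql.conf on the standby
hot_standby = off        # no read-only connections while in recovery

# recovery.conf (as generated by pg_basebackup -R)
standby_mode = 'on'
primary_conninfo = 'host=primary port=5432 user=replicator'
```

With hot_standby off, connection attempts fail with the FATAL message being debated above, even though the server would be perfectly able to serve read-only queries once consistent.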
Re: [HACKERS] Change pg_cancel_*() to ignore current backend
On 5/22/15 3:08 PM, Eric Ridge wrote: I find it annoying to have to specifically exclude pg_backend_pid() from pg_stat_activity if I'm trying to kill a bunch of backends at once, and I can't think of any reason why you'd ever want to call a pg_cancel_* function with your own PID. I'm just a lurker, but regularly get bitten by this exact thing. Rather than change the behavior of pg_cancel/terminate_backend(), why not change pg_stat_activity to exclude the current session? Seems like showing a row in pg_stat_activity for SELECT * FROM pg_stat_activity is kinda useless anyways. Interesting idea. I suspect that would be even more invasive than modifying the functions though... -- Jim Nasby, Data Architect, Blue Treble Consulting Data in Trouble? Get it in Treble! http://BlueTreble.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
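For reference, the boilerplate being complained about looks like this (a sketch; pg_backend_pid() and pg_terminate_backend() are the standard admin functions):

```sql
-- Terminate every other session; the WHERE clause is the part the
-- proposal would make unnecessary.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE pid <> pg_backend_pid();
```

Forgetting the WHERE clause kills the session issuing the query as well, which is the bite being described.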
Re: [HACKERS] Change pg_cancel_*() to ignore current backend
Jim Nasby jim.na...@bluetreble.com writes: On 5/22/15 3:08 PM, Eric Ridge wrote: Rather than change the behavior of pg_cancel/terminate_backend(), why not change pg_stat_activity to exclude the current session? Seems like showing a row in pg_stat_activity for SELECT * FROM pg_stat_activity is kinda useless anyways. Interesting idea. I suspect that would be even more invasive than modifying the functions though... -1 ... some other columns in pg_stat_activity are potentially useful even for the current session, eg session and transaction start times. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] jsonb concatenate operator's semantics seem questionable
Jim Nasby jim.na...@bluetreble.com writes: On 5/22/15 2:44 PM, Andrew Dunstan wrote: Still I'd rather not add yet another parameter to the function, and I certainly don't want to make throwing an error the only behaviour. If instead of a create_missing boolean it accepted an enum we could handle both (since they're related). But I'd also like to avoid Yet More Knobs if possible. I think there's essentially two scenarios for JSON usage; one where you want to be pretty paranoid about things like keys aren't missing, you're not trying to access a path that doesn't exist, etc. The other mode (what we have today) is when you really don't care much about that stuff and want the database to JustStoreIt. I don't know how many people would want the stricter mode, but it certainly seems painful to try and enforce that stuff today if you care about it. ISTM that the use case for JSON is pretty much JustStoreIt. If you had strict structural expectations you'd probably have chosen a more relational representation in the first place ... or else XML, which at least has heard of schemas and validation. So I definitely agree that we need the no-error case, and am not that excited about having an error-throwing variant. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Disabling trust/ident authentication configure option
Volker,

I likely just viewed this too much through a security lens - you see a possible attack scenario, a way to turn it off, and only minor downsides, so just go for it - but this is not how you can work in a huge open source project. I guess as a developer you would have to take many other issues (like maintainability, user confusion because of the change, edge use cases) into account. And as it seems to cause too much trouble for official inclusion I am fine with patching it during our package build.

The security issue you're trying to address is real. My argument is that you're addressing it in the wrong way. So from my perspective anything which requires going off standard PostgreSQL packages, and encourages users to go off standard PostgreSQL packages, is actually a fairly high cost even if the code is non-invasive. I would be more open to a GUC which limited the auth mechanisms available (requiring restart to change), for example, than a compile flag.

And yes, once someone has write access to your pg_hba.conf you are very likely doomed. This would just prevent an attack happening through a careless trust entry left there, which is just a very quick win for an attacker, and may be a bit less likely through this patch.

To answer Tom: I see a restricted audience for this patch, but also no impact for anyone not wanting to use it. The group of users I see would be as follows:

* People who package PostgreSQL as part of their product and want to provide their customers with a restricted, more secure functionality set only to reduce training and support effort. (my use case)

Doesn't it make more sense to deny them direct access to pg_hba.conf entirely? I don't understand the use-case here, I guess.

* People with large Postgres deployments who build their own packages and want to enforce a certain security policy (e.g.
services are not allowed to offer authentication-less access over the network)

It's hard for me to see this argument as persuasive because anyone with a large deployment is going to be using configuration management, and a CMS can already be configured to control pg_hba.conf, far more effectively than merely denying specific auth methods.

- specifically, a good security plan would be to only allow a safe subset of methods and ensure that these are well documented and perhaps audited automatically - this would also allow ensuring there is only one documented / audited way to reset passwords (modulo single-user mode, which is an additional problem that won't be easily fixable)

This latter (forcing users to only use GSSAPI auth, for example) would be something which would be nice to support, although again, CMS.

* Distributions which want to provide a more secure package and want to ensure each available method can be configured securely and documented clearly for their specific setup.

It does not apply to (or would have a negative effect for) the following groups:

* PostgreSQL users on Windows (disabling trust should not work or should show a very prominent warning)
* Users of default builds (they simply won't be affected)
* People with a specific use case requiring trust, ident, or - for the more generic patch - other specific auth methods. They will be affected if they happen to be using a build with this method turned off.
* People who are used to resetting passwords using trust and are surprised this suddenly does not work on some specific system

The above is a good list of reasons why we can't realistically remove trust from the codebase entirely.

-- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
Re: [HACKERS] jsonb concatenate operator's semantics seem questionable
On Fri, May 22, 2015 at 2:54 PM, Tom Lane t...@sss.pgh.pa.us wrote: I think there's essentially two scenarios for JSON usage; one where you want to be pretty paranoid about things like keys aren't missing, you're not trying to access a path that doesn't exist, etc. The other mode (what we have today) is when you really don't care much about that stuff and want the database to JustStoreIt. I don't know how many people would want the stricter mode, but it certainly seems painful to try and enforce that stuff today if you care about it. ISTM that the use case for JSON is pretty much JustStoreIt. If you had strict structural expectations you'd probably have chosen a more relational representation in the first place ... or else XML, which at least has heard of schemas and validation. So I definitely agree that we need the no-error case, and am not that excited about having an error-throwing variant. I agree. The basic idea of JSON is that the schema is implicit. This gives applications flexibility (but typically they require just a little flexibility, and understand that JSON documents are still fairly homogeneously structured). Anyone that doesn't like that can just not use the JSON types, or can use a check constraint with the JSON types. -- Peter Geoghegan -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
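The check-constraint route Peter mentions might look like this (a hypothetical sketch; the table and key names are made up, and `?` is the existing jsonb key-exists operator):

```sql
-- Opt in to strictness per column rather than via a global strict mode:
CREATE TABLE events (
    id   serial PRIMARY KEY,
    body jsonb NOT NULL,
    CHECK (body ? 'type' AND body ? 'payload')
);
```

Inserts whose documents lack the required keys are rejected, while the JSON remains otherwise schemaless.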
Re: [HACKERS] Change pg_cancel_*() to ignore current backend
On May 19, 2015, at 6:59 PM, Jim Nasby jim.na...@bluetreble.com wrote: I find it annoying to have to specifically exclude pg_backend_pid() from pg_stat_activity if I'm trying to kill a bunch of backends at once, and I can't think of any reason why you'd ever want to call a pg_cancel_* function with your own PID. I'm just a lurker, but regularly get bitten by this exact thing. Rather than change the behavior of pg_cancel/terminate_backend(), why not change pg_stat_activity to exclude the current session? Seems like showing a row in pg_stat_activity for SELECT * FROM pg_stat_activity is kinda useless anyways. eric PROPRIETARY AND COMPANY CONFIDENTIAL COMMUNICATIONS The information contained in this communication is intended only for the use of the addressee. Any other use is strictly prohibited. Please notify the sender if you have received this message in error. This communication is protected by applicable legal privileges and is company confidential. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] jsonb concatenate operator's semantics seem questionable
On 5/22/15 2:44 PM, Andrew Dunstan wrote: On 05/22/2015 03:27 PM, Peter Geoghegan wrote: On Fri, May 22, 2015 at 11:59 AM, Andrew Dunstan and...@dunslane.net wrote: As for raising an error, in principle it's doable, but the code to detect it might get messy. Also, I don't want a huge number of knobs. So I'm excited about the idea. I think that that's a bad default behavior, although I don't think that's what Jim means. Consider our experience with having subscript operators throw errors -- it complicates certain cases (my complaint at the time was about expression indexes, but there are others). I certainly agree about indexable operations. However this seems unlikely to be indexed, although I'm prepared to be educated on that point. Still I'd rather not add yet another parameter to the function, and I certainly don't want to make throwing an error the only behaviour. If instead of a create_missing boolean it accepted an enum we could handle both (since they're related). But I'd also like to avoid Yet More Knobs if possible. I think there's essentially two scenarios for JSON usage; one where you want to be pretty paranoid about things like keys aren't missing, you're not trying to access a path that doesn't exist, etc. The other mode (what we have today) is when you really don't care much about that stuff and want the database to JustStoreIt. I don't know how many people would want the stricter mode, but it certainly seems painful to try and enforce that stuff today if you care about it. -- Jim Nasby, Data Architect, Blue Treble Consulting Data in Trouble? Get it in Treble! http://BlueTreble.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Asynchronous DRAM Self-Refresh
Hackers,

At CoreOS Fest, Intel presented a technology which they used to improve write times for the nonrelational data store Etcd. It's called Asynchronous DRAM Self-Refresh, or ADR. This is supposedly a feature of all of their chips since E5 which allows users to designate a small area of memory (16 to 64MB) which is somehow guaranteed to be flushed to disk in the event of a power loss (the exact mechanism was not explained).

So my thought was: Hello, wal_buffers? Theoretically, this feature could give us the benefits of asynchronous commit without the penalties ... *if* it actually works.

However, since then I've been able to find zero documentation on ADR. There's a bunch of stuff in the Intel press releases, but zero I can find in their technical docs. Anyone have a clue on this?

-- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
Re: [HACKERS] jsonb concatenate operator's semantics seem questionable
On 5/22/15 4:54 PM, Tom Lane wrote: Jim Nasby jim.na...@bluetreble.com writes: On 5/22/15 2:44 PM, Andrew Dunstan wrote: Still I'd rather not add yet another parameter to the function, and I certainly don't want to make throwing an error the only behaviour. If instead of a create_missing boolean it accepted an enum we could handle both (since they're related). But I'd also like to avoid Yet More Knobs if possible. I think there's essentially two scenarios for JSON usage; one where you want to be pretty paranoid about things like keys aren't missing, you're not trying to access a path that doesn't exist, etc. The other mode (what we have today) is when you really don't care much about that stuff and want the database to JustStoreIt. I don't know how many people would want the stricter mode, but it certainly seems painful to try and enforce that stuff today if you care about it. ISTM that the use case for JSON is pretty much JustStoreIt. If you had strict structural expectations you'd probably have chosen a more relational representation in the first place ... or else XML, which at least has heard of schemas and validation. So I definitely agree that we need the no-error case, and am not that excited about having an error-throwing variant. I think the validation case would be if you're doing transforms or other things to the JSON in SQL, to make sure it's matching what you're expecting it to. For example, if you have something in json that actually has duplicated keys, if you simply cast that to jsonb then all but one of the dupes is silently dropped. I don't like just assuming that's OK. There's probably other cases like this. That said, I don't think users have pushed our JSON stuff enough yet to do more than guess at these use cases. Presumably it will be easier to tell if this is a problem as people start using the more advanced operators and functions. -- Jim Nasby, Data Architect, Blue Treble Consulting Data in Trouble? Get it in Treble! 
http://BlueTreble.com
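The duplicate-key behavior Jim describes is easy to demonstrate: the json type stores the input text verbatim, while jsonb normalizes it, silently keeping only the last value for a repeated key:

```sql
SELECT '{"a": 1, "a": 2}'::json;   -- {"a": 1, "a": 2}
SELECT '{"a": 1, "a": 2}'::jsonb;  -- {"a": 2}
```

So a document that round-trips cleanly through json can lose data on a cast to jsonb, which is the case where a stricter, error-throwing mode would help.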
Re: [HACKERS] Change pg_cancel_*() to ignore current backend
On 05/22/2015 03:35 PM, Andres Freund wrote: On 2015-05-22 17:29:03 -0500, Jim Nasby wrote: On 5/22/15 4:51 PM, Tom Lane wrote: Jim Nasby jim.na...@bluetreble.com writes: On 5/22/15 3:08 PM, Eric Ridge wrote: Rather than change the behavior of pg_cancel/terminate_backend(), why not change pg_stat_activity to exclude the current session? Seems like showing a row in pg_stat_activity for SELECT * FROM pg_stat_activity is kinda useless anyways. Interesting idea. I suspect that would be even more invasive than modifying the functions though... -1 ... some other columns in pg_stat_activity are potentially useful even for the current session, eg session and transaction start times. AFAICT the only field you can get in pg_stat_activity and nowhere else is session start time (txn start is always now(), no?). If that's the only objection about eliminating the current backend from pg_stat_activity then I'd say we should just add a session_start_time function to handle that. That's far too big a backward compat break for something of very minor benefit. This whole discussion seems to be about making it easier to run SELECT pg_cancel_backend(pid) FROM pg_stat_activity;. But that shouldn't be made easier! If anything harder. Linux doesn't prevent you from running Kill on your own process, either. Would be interesting to find out how their discussion on the same topic went. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Change pg_cancel_*() to ignore current backend
On 2015-05-22 17:29:03 -0500, Jim Nasby wrote: On 5/22/15 4:51 PM, Tom Lane wrote: Jim Nasby jim.na...@bluetreble.com writes: On 5/22/15 3:08 PM, Eric Ridge wrote: Rather than change the behavior of pg_cancel/terminate_backend(), why not change pg_stat_activity to exclude the current session? Seems like showing a row in pg_stat_activity for SELECT * FROM pg_stat_activity is kinda useless anyways. Interesting idea. I suspect that would be even more invasive than modifying the functions though... -1 ... some other columns in pg_stat_activity are potentially useful even for the current session, eg session and transaction start times. AFAICT the only field you can get in pg_stat_activity and nowhere else is session start time (txn start is always now(), no?). If that's the only objection about eliminating the current backend from pg_stat_activity then I'd say we should just add a session_start_time function to handle that. That's far too big a backward compat break for something of very minor benefit. This whole discussion seems to be about making it easier to run SELECT pg_cancel_backend(pid) FROM pg_stat_activity;. But that shouldn't be made easier! If anything harder. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Change pg_cancel_*() to ignore current backend
Andres Freund and...@anarazel.de writes: This whole discussion seems to be about making it easier to run SELECT pg_cancel_backend(pid) FROM pg_stat_activity;. But that shouldn't be made easier! If anything harder. Indeed. I find it hard to believe that there's a real problem here, and I absolutely will resist breaking backwards compatibility to solve it. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] jsonb_set: update or upsert default?
The proposed flag for jsonb_set (the renamed jsonb_replace) in the patch I recently published is set to false, meaning that the default behaviour is to require all elements of the path including the last to be present. What that does is effectively UPDATE for jsonb. If the flag is true, then the last element can be absent, in which case it's created, so this is basically UPSERT for jsonb. The question is which should be the default. We got into the weeds on this with suggestions of throwing errors on missing paths, but that's going nowhere, and I want to get discussion back onto the topic of what should be the default. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
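To make the question concrete, here is how the flag behaves under the patch's proposed signature (a sketch assuming the argument order target, path, new value, create-missing flag, as described above):

```sql
-- UPDATE-like (flag false): the absent last path element 'b' is not created
SELECT jsonb_set('{"a": 1}', '{b}', '2', false);  -- {"a": 1}
-- UPSERT-like (flag true): the absent last path element 'b' is created
SELECT jsonb_set('{"a": 1}', '{b}', '2', true);   -- {"a": 1, "b": 2}
```

The default picked here determines which of these two results callers get when they omit the fourth argument.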
Re: [HACKERS] Minor ON CONFLICT related fixes
On 2015-05-19 22:50:54 -0700, Peter Geoghegan wrote: On Tue, May 19, 2015 at 2:23 PM, Andres Freund and...@anarazel.de wrote: Pushed. I eyeballed the commit, and realized that I made a trivial error. New patch attached fixing that. Sorry for not getting this fix completely right first time around. Don't know how I missed it. Pushed, with expanded tests.
Re: [HACKERS] Re: [COMMITTERS] pgsql: Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE.
On 2015-05-22 12:23:32 -0700, Peter Geoghegan wrote: On Thu, May 21, 2015 at 5:51 PM, Peter Geoghegan p...@heroku.com wrote: So I think we're good with ripping it out. Peter? I'll come up with a patch. Attached patch rips the command tag out. Pushed.
Re: [HACKERS] Change pg_cancel_*() to ignore current backend
On Fri, May 22, 2015 at 4:51 PM Jim Nasby jim.na...@bluetreble.com wrote: Interesting idea. I suspect that would be even more invasive than modifying the functions though... Here's the solution. I can't see how anyone could possibly disagree with this... ;) Change the sort order for pg_stat_activity so the current session sorts last. That way all the other sessions get killed first when doing select pg_terminate_backend(pid) from pg_stat_activity. When I get bitten by this, I don't really care that my session gets killed, I care that it gets killed before all the others. Personally, I only do this sort of thing interactively via psql, and typically ^D as soon as the query finishes (which means the second time I run psql and add a WHERE clause to the query). Alternatively, queue up pg_cancel/terminate_backend calls and process them only on successful transaction commit, in such an order that the current session is processed last. They'd be consistent with NOTIFY too, which might be an added bonus. eric ps, sorry my last message was from corporate email w/ the stupid legal disclaimer.
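The sort-order idea could be approximated today with an explicit ORDER BY (a sketch; note that PostgreSQL evaluates the target list before sorting, so this orders the displayed results rather than guaranteeing the order in which the function calls fire, which is why the proposal is about the view's own ordering):

```sql
-- false sorts before true, so the current session appears last
SELECT pid, pg_terminate_backend(pid)
FROM pg_stat_activity
ORDER BY pid = pg_backend_pid();
```

Either way, the interactive experience Eric describes - every other backend dies before your own - is what the ordering change is meant to buy.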
[HACKERS] patch: change magic constants to DEFINE value for readability.
In src/backend/utils/misc/tzparser.c, 60 * 60 is used where SECS_PER_HOUR is meant, and that macro is already defined in other files, so I think using SECS_PER_HOUR is clearer for readability. I've attached a patch (it just changes 60 * 60 to SECS_PER_HOUR). What do you think?

magic.patch Description: Binary data
Re: [HACKERS] [PATCH] Generalized JSON output functions
On Fri, May 22, 2015 at 10:51 AM, Merlin Moncure mmonc...@gmail.com wrote: On Fri, May 22, 2015 at 9:43 AM, Alvaro Herrera alvhe...@2ndquadrant.com wrote: Andrew Dunstan wrote: On 05/20/2015 09:16 AM, Shulgin, Oleksandr wrote: Attached is a patch against master to generalize the JSON-producing functions in utils/adt/json.c and to provide a set of callbacks which can be overridden the same way that is already provided for *parsing* JSON. I'm not necessarily opposed to this, but it sure seems like a lot of changes, and moderately invasive ones, to support something that could be done, at the cost of reparsing, with a simple loadable extension that I could create in a few hours of programming. But this seems like a pretty reasonable change to make, no? Doesn't the total amount of code decrease after this patch? JSON stuff is pretty new so some refactoring and generalization of what we have is to be expected. Yeah. Also, there have been a few previous gripes about this, for example, http://www.postgresql.org/message-id/cahbvmpzs+svr+y-ugxjrq+xw4dqtevl-cozc69zffwmxjck...@mail.gmail.com . As noted, I definitely prefer 'space free' by default for efficiency reasons, but standardizing the output has definitely got to be a reasonable goal. Every JSON implementation I have ever used defaults to the minified version of JSON (no whitespace) when printed.
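The whitespace inconsistency being griped about is visible without any new callbacks: json::text preserves the input spacing verbatim, while jsonb::text re-serializes with its own spacing:

```sql
SELECT '{"a":1,"b":[1,2]}'::json::text;   -- {"a":1,"b":[1,2]}
SELECT '{"a":1,"b":[1,2]}'::jsonb::text;  -- {"a": 1, "b": [1, 2]}
```

Generalized output callbacks would let callers pick one convention (minified or pretty) regardless of which type produced the value.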
Re: [HACKERS] jsonb concatenate operator's semantics seem questionable
On 05/22/2015 03:27 PM, Peter Geoghegan wrote: On Fri, May 22, 2015 at 11:59 AM, Andrew Dunstan and...@dunslane.net wrote: As for raising an error, in principle it's doable, but the code to detect it might get messy. Also, I don't want a huge number of knobs. So I'm excited about the idea. I think that that's a bad default behavior, although I don't think that's what Jim means. Consider our experience with having subscript operators throw errors -- it complicates certain cases (my complaint at the time was about expression indexes, but there are others). I certainly agree about indexable operations. However this seems unlikely to be indexed, although I'm prepared to be educated on that point. Still I'd rather not add yet another parameter to the function, and I certainly don't want to make throwing an error the only behaviour. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Change pg_cancel_*() to ignore current backend
On 5/22/15 4:51 PM, Tom Lane wrote: Jim Nasby jim.na...@bluetreble.com writes: On 5/22/15 3:08 PM, Eric Ridge wrote: Rather than change the behavior of pg_cancel/terminate_backend(), why not change pg_stat_activity to exclude the current session? Seems like showing a row in pg_stat_activity for SELECT * FROM pg_stat_activity is kinda useless anyways. Interesting idea. I suspect that would be even more invasive than modifying the functions though... -1 ... some other columns in pg_stat_activity are potentially useful even for the current session, eg session and transaction start times. AFAICT the only field you can get in pg_stat_activity and nowhere else is session start time (txn start is always now(), no?). If that's the only objection about eliminating the current backend from pg_stat_activity then I'd say we should just add a session_start_time function to handle that. -- Jim Nasby, Data Architect, Blue Treble Consulting Data in Trouble? Get it in Treble! http://BlueTreble.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Issues in Replication Progress Tracking
On Fri, May 22, 2015 at 11:20 PM, Andres Freund and...@anarazel.de wrote: On 2015-05-21 09:40:58 +0530, Amit Kapila wrote: On Thu, May 21, 2015 at 12:42 AM, Andres Freund and...@anarazel.de wrote: On 2015-05-20 19:27:05 +0530, Amit Kapila wrote: 13. In function replorigin_session_setup() and or replorigin_session_advance(), don't we need to WAL log the use of Replication state? No, the point is that the replication progress is persisted via an extra data block in the commit record. That's important for both performance and correctness, because otherwise it gets hard to tie a transaction made during replay with the update to the progress. Unless you use 2PC which isn't really an alternative. Okay, but what triggered this question was the difference of those functions as compare to when user call function pg_replication_origin_advance(). pg_replication_origin_advance() will WAL log the information during that function call itself (via replorigin_advance()). So even if the transaction issuing pg_replication_origin_advance() function will abort, it will still update the Replication State, why so? I don't see a problem here. pg_replication_origin_advance() is for setting up the initial position/update the position upon configuration changes. Okay, I am not aware how exactly these API's will be used for replication but let me try to clarify what I have in mind related to this API usage. Can we use pg_replication_origin_advance() for node where Replay has to happen, if Yes, then Let us say user of pg_replication_origin_advance() API set the lsn position to X for the node N1 on which replay has to happen, so now replay will proceed from X + 1 even though the information related to X is not persisted, so now it could so happen X will get written after the replay of X + 1 which might lead to problem after crash recovery? It'd be a fair amount of infrastructure to make it tie into transactions - without a point to it afaics? 
Agreed, that if there is no valid use case then we should keep it as it is. (Just to be clear, I plan to address all the points I've not commented upon) Thanks. With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] Issues in Replication Progress Tracking
On May 22, 2015 10:23:06 PM PDT, Amit Kapila amit.kapil...@gmail.com wrote: On Fri, May 22, 2015 at 11:20 PM, Andres Freund and...@anarazel.de wrote: On 2015-05-21 09:40:58 +0530, Amit Kapila wrote: On Thu, May 21, 2015 at 12:42 AM, Andres Freund and...@anarazel.de wrote: On 2015-05-20 19:27:05 +0530, Amit Kapila wrote: 13. In function replorigin_session_setup() and or replorigin_session_advance(), don't we need to WAL log the use of Replication state? No, the point is that the replication progress is persisted via an extra data block in the commit record. That's important for both performance and correctness, because otherwise it gets hard to tie a transaction made during replay with the update to the progress. Unless you use 2PC which isn't really an alternative. Okay, but what triggered this question was the difference of those functions as compare to when user call function pg_replication_origin_advance(). pg_replication_origin_advance() will WAL log the information during that function call itself (via replorigin_advance()). So even if the transaction issuing pg_replication_origin_advance() function will abort, it will still update the Replication State, why so? I don't see a problem here. pg_replication_origin_advance() is for setting up the initial position/update the position upon configuration changes. Okay, I am not aware how exactly these API's will be used for replication but let me try to clarify what I have in mind related to this API usage. Can we use pg_replication_origin_advance() for node where Replay has to happen, if Yes, then Let us say user of pg_replication_origin_advance() API set the lsn position to X for the node N1 on which replay has to happen, so now replay will proceed from X + 1 even though the information related to X is not persisted, so now it could so happen X will get written after the replay of X + 1 which might lead to problem after crash recovery? Then you'd use the session variant - which does tie into transactions. 
Is there a reason that'd be unsuitable for you? Andres --- Please excuse brevity and formatting - I am writing this on my mobile phone.
Re: [HACKERS] Re: [COMMITTERS] pgsql: Map basebackup tablespaces using a tablespace_map file
On Fri, May 15, 2015 at 11:51 AM, Amit Kapila amit.kapil...@gmail.com wrote: On Thu, May 14, 2015 at 10:29 PM, Andrew Dunstan and...@dunslane.net wrote: On 05/14/2015 10:52 AM, Robert Haas wrote: On Thu, May 14, 2015 at 12:12 AM, Amit Kapila amit.kapil...@gmail.com wrote: On Thu, May 14, 2015 at 2:10 AM, Andrew Dunstan and...@dunslane.net wrote: How about if we simply abort if we find a non-symlink where we want the symlink to be, and only remove something that is actually a symlink (or a junction point, which is more or less the same thing)? We can do that way and for that I think we need to use rmdir instead of rmtree in the code being discussed (recovery path), OTOH we should try to minimize the errors raised during recovery. I'm not sure I understand this issue in detail, but why would using rmtree() on something you expect to be a symlink ever be a good idea? It seems like if things are the way you expect them to be, it has no benefit, but if they are different from what you expect, you might blow away a ton of important data. Maybe I am just confused. The suggestion is to get rid of using rmtree. Instead, if we find a non-symlink in pg_tblspc we'll make the user clean it up before we can continue. So your instinct is in tune with my suggestion. Find the patch which gets rid of rmtree usage. I have made it as a separate function because the same code is used from create_tablespace_directories() as well. I thought of extending the same API for using it from destroy_tablespace_directories() as well, but due to special handling (especially for ENOENT) in that function, I left it as of now. Does it make sense to track this in 9.5 Open Items [1]? [1] - https://wiki.postgresql.org/wiki/PostgreSQL_9.5_Open_Items With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
[HACKERS] Missing -i / --ignore-version in pg_dump help
Hi all, Is there some reason the -i / --ignore-version option doesn't appear in the pg_dump help? Regards, -- Fabrízio de Royes Mello Consultoria/Coaching PostgreSQL Timbira: http://www.timbira.com.br Blog: http://fabriziomello.github.io Linkedin: http://br.linkedin.com/in/fabriziomello Twitter: http://twitter.com/fabriziomello Github: http://github.com/fabriziomello
[HACKERS] Add some spaces to ALTER TABLE doc
Hi all, The attached patch adds some spaces for better readability of the ALTER TABLE doc. Regards, -- Fabrízio de Royes Mello Consultoria/Coaching PostgreSQL Timbira: http://www.timbira.com.br Blog: http://fabriziomello.github.io Linkedin: http://br.linkedin.com/in/fabriziomello Twitter: http://twitter.com/fabriziomello Github: http://github.com/fabriziomello

diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index ff032bc..4168837 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -66,7 +66,7 @@ ALTER TABLE ALL IN TABLESPACE <replaceable class="PARAMETER">name</replaceable>
     SET WITH OIDS
     SET WITHOUT OIDS
     SET TABLESPACE <replaceable class="PARAMETER">new_tablespace</replaceable>
-    SET {LOGGED | UNLOGGED}
+    SET { LOGGED | UNLOGGED }
     SET ( <replaceable class="PARAMETER">storage_parameter</replaceable> = <replaceable class="PARAMETER">value</replaceable> [, ... ] )
     RESET ( <replaceable class="PARAMETER">storage_parameter</replaceable> [, ... ] )
     INHERIT <replaceable class="PARAMETER">parent_table</replaceable>
@@ -74,7 +74,7 @@ ALTER TABLE ALL IN TABLESPACE <replaceable class="PARAMETER">name</replaceable>
     OF <replaceable class="PARAMETER">type_name</replaceable>
     NOT OF
     OWNER TO { <replaceable class="PARAMETER">new_owner</replaceable> | CURRENT_USER | SESSION_USER }
-    REPLICA IDENTITY {DEFAULT | USING INDEX <replaceable class="PARAMETER">index_name</replaceable> | FULL | NOTHING}
+    REPLICA IDENTITY { DEFAULT | USING INDEX <replaceable class="PARAMETER">index_name</replaceable> | FULL | NOTHING }

<phrase>and <replaceable class="PARAMETER">table_constraint_using_index</replaceable> is:</phrase>
[HACKERS] Re: [COMMITTERS] pgsql: At promotion, archive last segment from old timeline with .parti
On 05/22/2015 12:35 PM, Fujii Masao wrote: Doesn't this change break the case where we want to PITR to the recovery target location in the last partial WAL file with the old timeline? In this case, that partial WAL file needs to be read and replayed. But since the suffix of its filename is .partial, unless DBA gets rid of the suffix, the WAL file cannot be restored and PITR would fail. No? PITR to a specific location always requires manual intervention by the DBA anyway. It's not something you'd automate. Copying the .partial file manually into pg_xlog is just one small extra step. Even if there are some downsides to this, I think it's just plain evil to archive a partial segment that looks indistinguishable from a complete one. We have had reports of that causing confusion in production systems. What if the master had already archived the complete version of the segment before dying? The standby will try to archive a partial version of the same, which will fail, or worse, overwrite the complete version with the partial one. Note that PITR in that scenario was always hit-and-miss. First of all, if the master died, there is no guarantee that it archived all the previous segments successfully before dying. (archive_mode=always alleviates that in 9.5, as the standby will archive them even if the master didn't). (See discussion on this point at http://www.postgresql.org/message-id/5535fe71.1010...@iki.fi) - Heikki
[HACKERS] psql tabcomplete - minor bugfix - tabcomplete for SET ROLE TO xxx
Hi, we support SET ROLE name and SET ROLE TO name. The second form isn't supported by tab completion. The attached trivial patch fixes it. Regards Pavel

diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
new file mode 100644
index 750e29d..7110102
*** a/src/bin/psql/tab-complete.c
--- b/src/bin/psql/tab-complete.c
*** psql_completion(const char *text, int st
*** 3552,3557 ****
--- 3552,3561 ----
  			" AND nspname not like 'pg\\_temp%%'"
  			" UNION SELECT 'DEFAULT' ");
  	}
+ 	else if (pg_strcasecmp(prev2_wd, "role") == 0)
+ 	{
+ 		COMPLETE_WITH_QUERY(Query_for_list_of_roles);
+ 	}
  	else
  	{
  		static const char *const my_list[] =
Re: [HACKERS] postgres_fdw join pushdown (was Re: Custom/Foreign-Join-APIs)
Thanks for your comments. 2015-05-21 23:11 GMT+09:00 Robert Haas robertmh...@gmail.com: On Sat, May 16, 2015 at 9:04 AM, Shigeru Hanada shigeru.han...@gmail.com wrote: d) All relations must have the same effective user id This check is done with userid stored in PgFdwRelationInfo, which is valid only when underlying relations have the same effective user id. Here effective user id means id of the user executing the query, or the owner of the view when the foreign table is accessed via view. Tom suggested that it is necessary to check that user mapping matches for security reasons, and now I think it's better than checking effective user as current postgres_fdw does. So, should this be a separate patch? Yes, it should be an additional patch for the Custom/Foreign join API which is already committed. The patch would contain these changes:
* add new field usermappingid to RelOptInfo, it is InvalidOid for non-foreign tables
* obtain oid of user mapping for a foreign table, and store it in the RelOptInfo (we already have serverid in RelOptInfo, so userid is enough to identify user mapping though)
* propagate usermappingid up to the join relation when outer and inner relations have same valid value
* check matching of user mapping before calling GetForeignJoinPaths, rather than serverid
One of my concerns about this patch is that it's got a lot of stuff in it that isn't obviously related to the patch. Anything that is a separate change should be separated out into its own patch. Perhaps you can post a set of patches that apply one on top of the next, with the changes for each one clearly separated. IIUC, each patch should not break compilation, and should contain only one complete logical change which can't be separated into pieces. I think the whole of the patch is necessary to implement e) Each source relation must not have any local filter Evaluating conditions of a join source table potentially produces different results in OUTER join cases.
This can be relaxed for the cases when the join is INNER and local filters don't contain any volatile function/operator, but it is left as a future enhancement. I think this restriction is a sign that you're not really doing this right. Consider: (1) SELECT * FROM a LEFT JOIN b ON a.x = b.x AND b.x = 3; (2) SELECT * FROM a LEFT JOIN b ON a.x = b.x WHERE b.x = 3; If you push down the scan of b, you can include the b.x = 3 qual in case (1) but not in case (2). If you push down the join, you can include the qual in either case, but you must attach it in the same place where it was before. One big change about deparsing the base relation is aliasing. This patch adds column aliases to the SELECT clause even when the original query is a simple single-table SELECT.

fdw=# EXPLAIN (VERBOSE, COSTS false) SELECT * FROM pgbench_branches b;
                                 QUERY PLAN
 Foreign Scan on public.pgbench_branches b
   Output: bid, bbalance, filler
   Remote SQL: SELECT bid a9, bbalance a10, filler a11 FROM public.pgbench_branches
(3 rows)

As you see, every column has an alias in format a%d with index value derived from pg_attribute.attnum. Index value is attnum + 8, and the magic number 8 comes from FirstLowInvalidHeapAttributeNumber for the adjustment that makes the attribute number of system attributes positive. Yeah. I'm not sure this is a good idea. The column labels are pointless at the outermost level. I'm not sure it isn't a good idea, either, but I have some doubts. I fixed the patch to not add column aliases to remote queries for a single table. This change also reduces the amount of differences from the master branch slightly. One tricky thing is pseudo projection, which is done by deparseProjectionSql for whole-row references. This is done by putting the query string in a FROM subquery and adding the whole-row reference in the outer SELECT clause. This is done by ExecProjection in 9.4 and older, but we need to do this in postgres_fdw because ExecProjection is not called any more. What commit changed this?
No commit changed this behavior, as Kaigai-san says. If you still have comments, please refer to my response to Kaigai-san. Thanks for your work on this. Although I know progress has been slow, I think this work is really important to the project. I agree. I’ll take more time for this work. -- Shigeru HANADA
Re: [HACKERS] Missing -i / --ignore-version in pg_dump help
On Fri, May 22, 2015 at 8:59 PM, Fabrízio de Royes Mello fabriziome...@gmail.com wrote: Hi all, There are some reason to -i, --ignore-version option doesn't appear in pg_dump help? Because it's an obsolete option, I think. Regards, -- Fujii Masao
Re: [HACKERS] Missing -i / --ignore-version in pg_dump help
Fujii Masao masao.fu...@gmail.com writes: On Fri, May 22, 2015 at 8:59 PM, Fabrízio de Royes Mello fabriziome...@gmail.com wrote: There are some reason to -i, --ignore-version option doesn't appear in pg_dump help? Because it's obsolete option, I think. Yeah, see commit c22ed3d523782c43836c163c16fa5a7bb3912826. Considering we shipped that in 8.4, maybe it's time to remove all trace of those switches. We'd still have to wait a couple releases before it'd be safe to use -i for something else, but it'd be a start. regards, tom lane
Re: [HACKERS] WIP: Enhanced ALTER OPERATOR
On Wednesday 20 May 2015 20:50:41 Andres Freund wrote: Note that this very likely wasn't actually using a prepared plan. Due to the custom plan infrastructure the first few invocations are going to be replanned. Hello. I tested it on 30 and 50 iterations, and it feels good. Thanks. -- Uriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Add some spaces to ALTER TABLE doc
On Fri, May 22, 2015 at 8:42 PM, Fabrízio de Royes Mello fabriziome...@gmail.com wrote: Hi all, The attached add some spaces to better visualization of the ALTER TABLE doc. I found another line to add spaces. Applied. Thanks! Regards, -- Fujii Masao
Re: [HACKERS] Missing -i / --ignore-version in pg_dump help
On Fri, May 22, 2015 at 9:20 AM, Fujii Masao masao.fu...@gmail.com wrote: On Fri, May 22, 2015 at 8:59 PM, Fabrízio de Royes Mello fabriziome...@gmail.com wrote: Hi all, There are some reason to -i, --ignore-version option doesn't appear in pg_dump help? Because it's obsolete option, I think. You're correct... looking at the source code: 421 case 'i': 422 /* ignored, deprecated option */ 423 break; Thanks! -- Fabrízio de Royes Mello Consultoria/Coaching PostgreSQL Timbira: http://www.timbira.com.br Blog: http://fabriziomello.github.io Linkedin: http://br.linkedin.com/in/fabriziomello Twitter: http://twitter.com/fabriziomello Github: http://github.com/fabriziomello
[HACKERS] Re: [COMMITTERS] pgsql: At promotion, archive last segment from old timeline with .parti
On Fri, May 22, 2015 at 6:59 PM, Heikki Linnakangas hlinn...@iki.fi wrote: On 05/22/2015 12:35 PM, Fujii Masao wrote: Doesn't this change break the case where we want to PITR to the recovery target location in the last partial WAL file with the old timeline? In this case, that partial WAL file needs to be read and replayed. But since the suffix of its filename is .partial, unless DBA gets rid of the suffix, the WAL file cannot be restored and PITR would fail. No? PITR to a specific location always requires manual intervention by the DBA anyway. It's not something you'd automate. Copying the .partial file manually into pg_xlog is just one small extra step. We should document this? Otherwise no DBA can complete such a PITR scenario. Also as a safeguard, if the required WAL file is not found but the file with the .partial suffix is found during recovery, maybe we should cause the recovery to fail at that moment. Otherwise, since the required file is not found, the server would end the recovery before the .partial file and start normal processing. The DBA may not be able to notice that the recovery is incomplete. Maybe this is overkill for the small use case, though. Even if there are some downsides to this, I think it's just plain evil to archive a partial segment that looks indistinguishable from a complete one. We have had reports of that causing confusion in production systems. What if the master had already archived the complete version of the segment before dying? The standby will try to archive a partial version of the same, which will fail, or worse, overwrite the complete version with the partial one. Note that PITR in that scenario was always hit-and-miss. First of all, if the master died, there is no guarantee that it archived all the previous segments successfully before dying. (archive_mode=always alleviates that in 9.5, as the standby will archive them even if the master didn't).
So basically we don't need to rename the last WAL file in the archive recovery case even if it's read from pg_xlog, i.e., it's partial. No? Regards, -- Fujii Masao
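For readers following this thread, the manual step Heikki describes (copy the .partial file into pg_xlog under the name recovery asks for) can be sketched as a couple of shell commands. This is only an illustration: the directories and the segment name below are made up, not output from a real cluster.

```shell
# Simulate an archive holding the last, incomplete segment from the old
# timeline, which the promoted standby archived with a ".partial" suffix.
set -e
archive=$(mktemp -d)          # stands in for the WAL archive
pgxlog=$(mktemp -d)           # stands in for the recovering server's pg_xlog

seg=00000004000000010000002A                      # hypothetical segment name
printf 'partial WAL data' > "$archive/$seg.partial"

# restore_command will ask for "$seg" (no suffix), so the DBA copies the
# .partial file into pg_xlog under the complete-looking name:
cp "$archive/$seg.partial" "$pgxlog/$seg"

ls "$pgxlog"
```

After this, recovery can read the segment under its normal name; the copy in the archive keeps its .partial suffix, so it can never be mistaken for a complete segment there.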
Re: [HACKERS] Run pgindent now?
Alvaro Herrera alvhe...@2ndquadrant.com writes: Robert Haas wrote: On Tue, May 19, 2015 at 10:26 AM, Tom Lane t...@sss.pgh.pa.us wrote: To do it before every minor release would require re-indenting HEAD as well (since the whole point is to keep HEAD and the back branches consistent). I think we'd get too much push-back from developers whose pending patches got broken. We can get away with reindenting HEAD between development cycles, but probably not more often than that. I'm not convinced of that. If we did it more often, it might actually be less disruptive. I believe it's possible to mechanically rebase a patch over an indent run of the underlying branch with half a dozen commands or less. +1 for reindenting all branches before each minor release, FWIW. Yeah? Can you show an example? regards, tom lane
Re: [HACKERS] Run pgindent now?
Robert Haas wrote: On Tue, May 19, 2015 at 10:26 AM, Tom Lane t...@sss.pgh.pa.us wrote: To do it before every minor release would require re-indenting HEAD as well (since the whole point is to keep HEAD and the back branches consistent). I think we'd get too much push-back from developers whose pending patches got broken. We can get away with reindenting HEAD between development cycles, but probably not more often than that. I'm not convinced of that. If we did it more often, it might actually be less disruptive. I believe it's possible to mechanically rebase a patch over an indent run of the underlying branch with half a dozen commands or less. +1 for reindenting all branches before each minor release, FWIW. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] Missing importing option of postgres_fdw
Etsuro Fujita fujita.ets...@lab.ntt.co.jp writes: I agree with you on that point. And I think one option for that is that IMPORT FOREIGN SCHEMA only imports CHECK constraints of remote tables from a remote server that have convalidated = true. (If doing so, we wouldn't need to allow IMPORT FOREIGN SCHEMA to return ALTER FOREIGN TABLE statements.) But I'm not sure that that is a good idea. ISTM it would be better for IMPORT FOREIGN SCHEMA just to import all CHECK constraints of remote tables, reflecting convalidated, and that we leave it to the user to do VALIDATE CONSTRAINT for locally defined constraints that have convalidated = false when matching constraints are validated on the remote server. It would only be safe to automatically import CHECK constraints if they contain no expressions that could evaluate differently on remote and local --- IOW, only expressions that would be safe to transmit as remote query conditions. I don't really think we should try to do that at all. For starters, while it might seem like we could use is_foreign_expr() on the conditions, there's no guarantee that the local server accurately knows what functions on the foreign server are really safe. The "is safe to transmit" condition isn't symmetric. For that matter, there's no guarantee that we could even parse the remote's constraint condition text without failing --- it might use SQL features we don't have. (We have that risk already for DEFAULT expressions, of course, but those tend to be a lot simpler.) So I think worrying about convalidated is pretty pointless. Really, it is up to the user to determine what constraints should be attached to the foreign table, and IMPORT FOREIGN SCHEMA can't help much. regards, tom lane
Re: [HACKERS] Add a hint when postgresql.conf hot_standby = 'off' and recovery.conf standby = 'on'
Heikki Linnakangas hlinn...@iki.fi writes: On 05/22/2015 06:53 AM, Matthew Kelly wrote: It seems worth adding a hint and/or changing the error message to be more descriptive when in this state. Any opinions about what should be logged before I start putting together a patch? Yeah, might be worthwhile. Perhaps: FATAL: the database system is in standby mode and hot_standby is not enabled Or just: FATAL: the database system is in cold standby mode It would be useful to distinguish that state, where you could connect if hot standby was enabled, from the state where it's starting up and hasn't reached min-recovery-point yet and you couldn't connect even if hot_standby was enabled. Not sure how much signalling you'd need between postmaster and the startup process for that. Another angle worth considering is whether PQping can or should distinguish this state from database is fully up. (I do not recall what it does right now.) regards, tom lane
Re: [HACKERS] [PATCH] Generalized JSON output functions
Andrew Dunstan wrote: On 05/20/2015 09:16 AM, Shulgin, Oleksandr wrote: Attached is a patch against master to generalize the JSON-producing functions in utils/adt/json.c and to provide a set of callbacks which can be overridden the same way that is already provided for *parsing* JSON. I'm not necessarily opposed to this, but it sure seems like a lot of changes, and moderately invasive ones, to support something that could be done, at the cost of reparsing, with a simple loadable extension that I could create in a few hours of programming. But this seems like a pretty reasonable change to make, no? Doesn't the total amount of code decrease after this patch? JSON stuff is pretty new so some refactoring and generalization of what we have is to be expected. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] Run pgindent now?
Tom Lane wrote: Alvaro Herrera alvhe...@2ndquadrant.com writes: I believe it's possible to mechanically rebase a patch over an indent run of the underlying branch with half a dozen commands or less. +1 for reindenting all branches before each minor release, FWIW. Yeah? Can you show an example? So we have this:

    ---P---I---C'
        \---C---I'

where P is the parent commit, I is the pgindent commit, and C is your change (applied to the unindented tree). What you need is to obtain C', which is a copy of C that applies to I. You can do this by creating I', which is a pgindent run over your patch, then diffing that one to I. I *think* this should work:

    git checkout C
    pgindent tree
    git commit           # yields I'
    git diff I I' > C'
    git checkout I
    git apply C'

I spent a few minutes looking for a nontrivial patch to test this on, couldn't find one; but the key is that you must be able to run pgindent on your own using the same rules that Bruce's run would. This shouldn't need human intervention at all, so it should even be possible to write a script for it and use it for git rebase -i -x this_script origin/master for when you have a branch with several commits that you want to rebase over an upstream pgindent. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
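The recipe above can be exercised end to end in a throwaway repository. A minimal sketch, with the pgindent run simulated by a trivial sed re-indent (an assumption made so the steps can be followed outside a PostgreSQL tree; a real run would use pgindent itself, with the same typedef list as the upstream indent commit):

```shell
set -e
work=$(mktemp -d)
cd "$work"
git init -q repo
cd repo
git config user.email demo@example.com
git config user.name demo

printf 'int x=1;\nint y=2;\n' > f.c
git add f.c
git commit -qm "P"                        # P: common parent

sed -i 's/y=2/y=3/' f.c                   # C: our change on the unindented tree
git commit -qam "C"
C=$(git rev-parse HEAD)

git checkout -q "$C~1"                    # back to P
sed -i 's/^/    /' f.c                    # "pgindent" stand-in
git commit -qam "I"
I=$(git rev-parse HEAD)

git checkout -q "$C"
sed -i 's/^/    /' f.c                    # same indent rules over our tree
git commit -qam "I'"

git diff "$I" HEAD > "$work/cprime.patch" # C' = diff(I, I')
git checkout -q "$I"
git apply "$work/cprime.patch"            # C' applies cleanly on top of I
cat f.c
```

If the recipe holds, the final f.c is the re-indented tree with the y=3 change carried over, which is exactly the C' commit Álvaro describes.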
Re: [HACKERS] pg_dump quietly ignore missing tables - is it bug?
On Fri, May 22, 2015 at 6:09 PM, Pavel Stehule pavel.steh...@gmail.com wrote: 2015-05-21 16:48 GMT+02:00 Oleksandr Shulgin oleksandr.shul...@zalando.de : I think this is a bit over-engineered (apart from the fact that processSQLNamePattern is also used in two dozen of places in psql/describe.c and all of them must be touched for this patch to compile). it was prototype - I believe so issue with describe.c can be solved better Also, the new --table-if-exists options seems to be doing what the old --table did, and I'm not really sure I understand what --table does now. I propose instead to add a separate new option --strict-include, without argument, that only controls the behavior when an include pattern didn't find any table (or schema). hard to say - any variant has own advantages and disadvantages But I more to unlike it than like - it is more usual, when you use exact name so, you need it exactly one, and when you use some wildcard, so you are expecting one or more tables. This use case is not checked in your patch. Maybe I'm missing something, but I believe it's handled by pg_dump -t mytables* --strict-include so that it will error out if nothing was found for the mytables* pattern. -- Alex
Re: [HACKERS] pg_dump quietly ignore missing tables - is it bug?
2015-05-22 18:35 GMT+02:00 Shulgin, Oleksandr oleksandr.shul...@zalando.de : On Fri, May 22, 2015 at 6:32 PM, Pavel Stehule pavel.steh...@gmail.com wrote: 2015-05-22 18:30 GMT+02:00 Shulgin, Oleksandr oleksandr.shul...@zalando.de: On Fri, May 22, 2015 at 6:09 PM, Pavel Stehule pavel.steh...@gmail.com wrote: 2015-05-21 16:48 GMT+02:00 Oleksandr Shulgin oleksandr.shul...@zalando.de: I think this is a bit over-engineered (apart from the fact that processSQLNamePattern is also used in two dozen of places in psql/describe.c and all of them must be touched for this patch to compile). it was prototype - I believe so issue with describe.c can be solved better Also, the new --table-if-exists options seems to be doing what the old --table did, and I'm not really sure I understand what --table does now. I propose instead to add a separate new option --strict-include, without argument, that only controls the behavior when an include pattern didn't find any table (or schema). hard to say - any variant has own advantages and disadvantages But I more to unlike it than like - it is more usual, when you use exact name so, you need it exactly one, and when you use some wildcard, so you are expecting one or more tables. This use case is not checked in your patch. Maybe I'm missing something, but I believe it's handled by pg_dump -t mytables* --strict-include so that it will error out if nothing was found for mytables* pattern. If I understand it raise a error when it found more than one table I hope not, and that totally was not my intent :-p It should raise if it found *less than* one, that is: none. ok, then I have no objection
Re: [HACKERS] [PATCH] Generalized JSON output functions
On Fri, May 22, 2015 at 9:43 AM, Alvaro Herrera alvhe...@2ndquadrant.com wrote: Andrew Dunstan wrote: On 05/20/2015 09:16 AM, Shulgin, Oleksandr wrote: Attached is a patch against master to generalize the JSON-producing functions in utils/adt/json.c and to provide a set of callbacks which can be overridden the same way that is already provided for *parsing* JSON. I'm not necessarily opposed to this, but it sure seems like a lot of changes, and moderately invasive ones, to support something that could be done, at the cost of reparsing, with a simple loadable extension that I could create in a few hours of programming. But this seems like a pretty reasonable change to make, no? Doesn't the total amount of code decrease after this patch? JSON stuff is pretty new so some refactoring and generalization of what we have is to be expected. Yeah. Also, there have been a few previous gripes about this, for example, http://www.postgresql.org/message-id/cahbvmpzs+svr+y-ugxjrq+xw4dqtevl-cozc69zffwmxjck...@mail.gmail.com. As noted, I definitely prefer 'space free' by default for efficiency reasons, but standardizing the output has definitely got to be a reasonable goal. merlin
Re: [HACKERS] pg_dump quietly ignore missing tables - is it bug?
2015-05-22 18:30 GMT+02:00 Shulgin, Oleksandr oleksandr.shul...@zalando.de: On Fri, May 22, 2015 at 6:09 PM, Pavel Stehule pavel.steh...@gmail.com wrote: 2015-05-21 16:48 GMT+02:00 Oleksandr Shulgin oleksandr.shul...@zalando.de: I think this is a bit over-engineered (apart from the fact that processSQLNamePattern is also used in two dozen places in psql/describe.c, and all of them must be touched for this patch to compile). it was a prototype - I believe the issue with describe.c can be solved better Also, the new --table-if-exists option seems to be doing what the old --table did, and I'm not really sure I understand what --table does now. I propose instead to add a separate new option --strict-include, without argument, that only controls the behavior when an include pattern didn't find any table (or schema). hard to say - each variant has its own advantages and disadvantages But I dislike it more than I like it - usually, when you use an exact name you expect exactly one table, and when you use a wildcard you expect one or more tables. This use case is not checked in your patch. Maybe I'm missing something, but I believe it's handled by pg_dump -t mytables* --strict-include so that it will error out if nothing was found for the mytables* pattern. If I understand correctly, it raises an error when it finds more than one table Pavel -- Alex
Re: [HACKERS] pg_dump quietly ignore missing tables - is it bug?
On Fri, May 22, 2015 at 6:34 PM, Tom Lane t...@sss.pgh.pa.us wrote: Oleksandr Shulgin oleksandr.shul...@zalando.de writes: I think this is a bit over-engineered (apart from the fact that processSQLNamePattern is also used in two dozen places in psql/describe.c, and all of them must be touched for this patch to compile). Also, the new --table-if-exists option seems to be doing what the old --table did, and I'm not really sure I understand what --table does now. I'm pretty sure we had agreed *not* to change the default behavior of -t. My patch does that, in the case of a wildcard-free -t argument. However, it can be fixed easily: just drop that strcspn() call, and then the default behavior is the same for both wildcard and exact matches, since --strict-include is off by default. -- Alex
Re: [HACKERS] Run pgindent now?
Alvaro Herrera alvhe...@2ndquadrant.com writes: Tom Lane wrote: Alvaro Herrera alvhe...@2ndquadrant.com writes: I believe it's possible to mechanically rebase a patch over an indent run of the underlying branch with half a dozen commands or less. +1 for reindenting all branches before each minor release, FWIW. Yeah? Can you show an example? I *think* this should work:

    git checkout C
    pgindent tree
    git commit          # yields I'
    git diff I I' > C'
    git checkout I
    git apply C'

I spent a few minutes looking for a nontrivial patch to test this on, couldn't find one; but the key is that you must be able to run pgindent on your own using the same rules that Bruce's run would. OK. So agreed, the blocking issue here is whether pgindent is conveniently available to every patch submitter. Right now, it would certainly be charitable to describe installing it as a PITA. I think what we'd need to do is (1) include fully patched sources in our git tree, and (2) build them by default (but not install them, probably) so that we can flush out any portability issues. I think it's too late to consider doing that for 9.5, but maybe we could do it after the branch. Another issue is whether there's a copyright problem if we include modified BSD indent sources in our tree. I wouldn't think so, but we'd better check exactly how it's licensed. We'd also want a more mechanical way of obtaining the right typedef list to use. Although it probably couldn't be totally mechanized, because if your patch adds new typedefs you'd want to manually add those names to the list being used. Maybe there should be an optional local typedef list separate from the automatically generated file. I guess in the scenario you're describing, the most helpful thing would be if the pgindent commit put the typedef list it had used into the tree, and then we just use that (plus manual additions) when generating the I' commit.
regards, tom lane
Re: [HACKERS] pg_dump quietly ignore missing tables - is it bug?
Oleksandr Shulgin oleksandr.shul...@zalando.de writes: I think this is a bit over-engineered (apart from the fact that processSQLNamePattern is also used in two dozen places in psql/describe.c, and all of them must be touched for this patch to compile). Also, the new --table-if-exists option seems to be doing what the old --table did, and I'm not really sure I understand what --table does now. I'm pretty sure we had agreed *not* to change the default behavior of -t. I propose instead to add a separate new option --strict-include, without argument, that only controls the behavior when an include pattern didn't find any table (or schema). If we do it as a separate option, then it necessarily changes the behavior for *each* -t switch in the call. Can anyone show a common use-case where that's no good, and you need separate behavior for each of several -t switches? If not, I like the simplicity of this approach. (Perhaps the switch name could use some bikeshedding, though.) regards, tom lane
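The proposed semantics can be sketched in a few lines of Python. This is a hypothetical model of the --strict-include check, not pg_dump's actual C code, and fnmatch stands in for pg_dump's pattern matching:

```python
import fnmatch

def check_strict_include(patterns, existing_tables):
    # Proposed --strict-include behavior: error out when an include
    # pattern matches nothing, whether or not it contains a wildcard.
    missing = [p for p in patterns
               if not fnmatch.filter(existing_tables, p)]
    if missing:
        raise LookupError("no matching tables were found for: "
                          + ", ".join(missing))

tables = ["mytables_2014", "mytables_2015", "orders"]
check_strict_include(["mytables*"], tables)    # wildcard, one or more matches: fine
check_strict_include(["orders"], tables)       # exact name, one match: fine
try:
    check_strict_include(["nosuch*"], tables)  # zero matches: error
except LookupError as e:
    print(e)   # no matching tables were found for: nosuch*
```

Note that a wildcard matching several tables is deliberately fine here; only the zero-match case errors out, matching Alex's clarification later in the thread.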
Re: [HACKERS] pg_dump quietly ignore missing tables - is it bug?
2015-05-22 18:34 GMT+02:00 Tom Lane t...@sss.pgh.pa.us: Oleksandr Shulgin oleksandr.shul...@zalando.de writes: I think this is a bit over-engineered (apart from the fact that processSQLNamePattern is also used in two dozen places in psql/describe.c, and all of them must be touched for this patch to compile). Also, the new --table-if-exists option seems to be doing what the old --table did, and I'm not really sure I understand what --table does now. I'm pretty sure we had agreed *not* to change the default behavior of -t. I propose instead to add a separate new option --strict-include, without argument, that only controls the behavior when an include pattern didn't find any table (or schema). If we do it as a separate option, then it necessarily changes the behavior for *each* -t switch in the call. Can anyone show a common use-case where that's no good, and you need separate behavior for each of several -t switches? If not, I like the simplicity of this approach. (Perhaps the switch name could use some bikeshedding, though.) this is close to an earlier proposal to implement only a new long option, --required-table Pavel regards, tom lane
Re: [HACKERS] pg_dump quietly ignore missing tables - is it bug?
2015-05-21 16:48 GMT+02:00 Oleksandr Shulgin oleksandr.shul...@zalando.de: Pavel Stehule pavel.steh...@gmail.com writes: 2015-03-23 17:11 GMT+01:00 Pavel Stehule pavel.steh...@gmail.com: Hi 2015-03-15 16:09 GMT+01:00 Tom Lane t...@sss.pgh.pa.us: Pavel Stehule pavel.steh...@gmail.com writes: another variant, I hope better than the previous one. We can introduce a new long option --strict. With this option active, every pattern specified by a -t option has to identify exactly one table. It can be used for any other patterns that are expected to exist - schemas, for example. Initial implementation in attachment. I think this design is seriously broken. If I have '-t foo*' the code should not prevent that from matching multiple tables. What would the use case for such a restriction be? What would make sense to me is one or both of these ideas: * require a match for a wildcard-free -t switch * require at least one (not exactly one) match for a wildcarded -t switch. attached initial implementation updated version - the same mechanism should be used for schemas Hello, I think this is a bit over-engineered (apart from the fact that processSQLNamePattern is also used in two dozen places in psql/describe.c, and all of them must be touched for this patch to compile). it was a prototype - I believe the issue with describe.c can be solved better Also, the new --table-if-exists option seems to be doing what the old --table did, and I'm not really sure I understand what --table does now. I propose instead to add a separate new option --strict-include, without argument, that only controls the behavior when an include pattern didn't find any table (or schema). hard to say - each variant has its own advantages and disadvantages But I dislike it more than I like it - usually, when you use an exact name you expect exactly one table, and when you use a wildcard you expect one or more tables. This use case is not checked in your patch. Regards Pavel Please see attached patch. -- Alex
Re: [HACKERS] pg_dump quietly ignore missing tables - is it bug?
On Fri, May 22, 2015 at 6:32 PM, Pavel Stehule pavel.steh...@gmail.com wrote: 2015-05-22 18:30 GMT+02:00 Shulgin, Oleksandr oleksandr.shul...@zalando.de: On Fri, May 22, 2015 at 6:09 PM, Pavel Stehule pavel.steh...@gmail.com wrote: 2015-05-21 16:48 GMT+02:00 Oleksandr Shulgin oleksandr.shul...@zalando.de: I think this is a bit over-engineered (apart from the fact that processSQLNamePattern is also used in two dozen places in psql/describe.c, and all of them must be touched for this patch to compile). it was a prototype - I believe the issue with describe.c can be solved better Also, the new --table-if-exists option seems to be doing what the old --table did, and I'm not really sure I understand what --table does now. I propose instead to add a separate new option --strict-include, without argument, that only controls the behavior when an include pattern didn't find any table (or schema). hard to say - each variant has its own advantages and disadvantages But I dislike it more than I like it - usually, when you use an exact name you expect exactly one table, and when you use a wildcard you expect one or more tables. This use case is not checked in your patch. Maybe I'm missing something, but I believe it's handled by pg_dump -t mytables* --strict-include so that it will error out if nothing was found for the mytables* pattern. If I understand correctly, it raises an error when it finds more than one table I hope not, and that totally was not my intent :-p It should raise if it found *less than* one, that is: none.
Re: [HACKERS] jsonb concatenate operator's semantics seem questionable
On 5/21/15 4:25 PM, Andrew Dunstan wrote: Here is a patch that renames jsonb_replace to jsonb_set with a boolean create_missing flag that defaults to false (should we default it to true?). With the flag set it's more or less upsert for jsonb. Without, it's just update. I think upsert is probably the more expected behavior. Though, I'm also wondering if we should allow for throwing an error if path doesn't already exist (it looks like if create_missing is false it silently does nothing right now?) -- Jim Nasby, Data Architect, Blue Treble Consulting Data in Trouble? Get it in Treble! http://BlueTreble.com
Re: [HACKERS] Change pg_cancel_*() to ignore current backend
2015-05-22 20:28 GMT+02:00 Jim Nasby jim.na...@bluetreble.com: On 5/21/15 7:12 AM, Robert Haas wrote: On Wed, May 20, 2015 at 8:46 PM, Andres Freund and...@anarazel.de wrote: I've a hard time believing it's actually a good idea to change this. It pretty much seems to only be useful if you're doing unqualified SELECT pg_cancel_backend(pid) FROM pg_stat_activity; type queries. I don't see that as something we need to address. +1. I'm not saying this isn't annoying - I've been annoyed by it myself - but IMHO it's really not worth having two functions that do 99% the same thing. Then, instead of having to remember to exclude your own backend using the same SQL syntax you use for everything else, you have to remember which of two similarly-named functions to call if you don't want to kill your own backend. That might be better for some people, but it won't be better for everyone. OTOH, if we allowed RAISE FATAL in plpgsql there'd be no need for self-termination via pg_terminate_backend(). I also don't see a problem with allowing self-termination, with the default being to disallow it. I suspect the number of people writing code that needs self-termination is very, very small, whereas probably every DBA out there has been bitten by this. This is the type of thing that gives Postgres a reputation for being 'hard to use'. I would think the benefit of the larger group would outweigh the pain the smaller group would feel changing their code, but of course that's just my opinion and I don't see any easy way to quantify that. You and Andres think it's fine as-is. Tom and Jon Nelson maybe don't like it as-is, but won't break backwards compatibility. I am with Tom and Jon - I don't see a good enough reason to change long-established behavior without more user reports. The possibility to terminate one's own session is simple to test, so this feature is working. Regards Pavel David Steele and I seem fine with breaking compat., not sure about Fabrizio.
Given the opposition unless some others speak up I'll just drop it. -- Jim Nasby, Data Architect, Blue Treble Consulting Data in Trouble? Get it in Treble! http://BlueTreble.com
Re: [HACKERS] jsonb concatenate operator's semantics seem questionable
On 05/22/2015 02:38 PM, Jim Nasby wrote: On 5/21/15 4:25 PM, Andrew Dunstan wrote: Here is a patch that renames jsonb_replace to jsonb_set with a boolean create_missing flag that defaults to false (should we default it to true?). With the flag set it's more or less upsert for jsonb. Without, it's just update. I think upsert is probably the more expected behavior. Though, I'm also wondering if we should allow for throwing an error if path doesn't already exist (it looks like if create_missing is false it silently does nothing right now?) Yes, that's actually documented in the patch. As for raising an error, in principle it's doable, but the code to detect it might get messy. Also, I don't want a huge number of knobs. So I'm not excited about the idea. cheers andrew
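The create_missing semantics under discussion can be sketched in pure Python. This is an illustration of the behavior described in the thread, not PostgreSQL's implementation, and it assumes all intermediate path keys already exist:

```python
import copy

def jsonb_set(target, path, new_value, create_missing=False):
    # Sketch of jsonb_set(target, path, new_value, create_missing):
    # replace the value at `path`; if the final key is absent, insert it
    # only when create_missing is true, otherwise return target unchanged.
    result = copy.deepcopy(target)
    node = result
    for key in path[:-1]:       # walk intermediate keys (assumed present)
        node = node[key]
    last = path[-1]
    if last in node or create_missing:
        node[last] = new_value
    return result

doc = {"a": {"b": 1}}
print(jsonb_set(doc, ["a", "b"], 2))                       # plain update
print(jsonb_set(doc, ["a", "c"], 3))                       # missing key: silent no-op
print(jsonb_set(doc, ["a", "c"], 3, create_missing=True))  # "upsert" path
```

The middle call is the silently-does-nothing case Jim asks about: with create_missing=false the document comes back unchanged rather than raising an error.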
Re: [HACKERS] Issues in Replication Progress Tracking
On 2015-05-21 09:40:58 +0530, Amit Kapila wrote: On Thu, May 21, 2015 at 12:42 AM, Andres Freund and...@anarazel.de wrote: On 2015-05-20 19:27:05 +0530, Amit Kapila wrote: 13. In function replorigin_session_setup() and/or replorigin_session_advance(), don't we need to WAL log the use of Replication state? No, the point is that the replication progress is persisted via an extra data block in the commit record. That's important for both performance and correctness, because otherwise it gets hard to tie a transaction made during replay with the update to the progress. Unless you use 2PC, which isn't really an alternative. Okay, but what triggered this question was the difference between those functions and what happens when a user calls pg_replication_origin_advance(). pg_replication_origin_advance() will WAL log the information during that function call itself (via replorigin_advance()). So even if the transaction issuing pg_replication_origin_advance() aborts, it will still update the Replication State - why so? I don't see a problem here. pg_replication_origin_advance() is for setting up the initial position/updating the position upon configuration changes. It'd be a fair amount of infrastructure to make it tie into transactions - without a point to it afaics? (Just to be clear, I plan to address all the points I've not commented upon) Greetings, Andres Freund
Re: [HACKERS] RFC: Non-user-resettable SET SESSION AUTHORISATION
On 5/20/15 9:38 PM, Robert Haas wrote: On Wed, May 20, 2015 at 8:22 PM, Jim Nasby jim.na...@bluetreble.com wrote: It might be a good idea to do something like this, but it's significantly more complicated than a protocol-level SET SESSION AUTHORIZATION. Right now, you can never go backwards from an authenticated state to an unauthenticated state, and there may be code in the backend that relies on that in subtle ways. The initial bootstrap sequence is pretty complicated, and I'm pretty sure that any naive attempt to redo that stuff is going to have unpleasant, probably security-relevant bugs. What about the middle-ground of not doing de-auth right now? That eliminates your concerns but still allows getting rid of ugly things like copies of the password file (FWIW, my understanding is pgBouncer was meant more to run on the database server where you'd just point it at the native password file). Uh, I don't have a clue what you mean when you say the middle ground of not doing de-auth right now. Don't allow a backend to move back into a de-authenticated state. Basically, allow a special connection mode that does nothing but provide user authentication back to the pooler. This would allow the pooler to defer all auth decisions to Postgres. Once the user was authenticated, the pooler could then figure out what pool connection to give to the user. I was thinking that if this authentication connection was never allowed to run SQL then it should eliminate fears of being in a de-authenticated state, but I see now that we've already started transaction machinery by the time PerformAuthentication happens, presumably for good reason. So maybe it's just as bad as trying to de-authenticate a full backend... -- Jim Nasby, Data Architect, Blue Treble Consulting Data in Trouble? Get it in Treble! http://BlueTreble.com
Re: [HACKERS] Change pg_cancel_*() to ignore current backend
On Fri, May 22, 2015 at 3:28 PM, Jim Nasby jim.na...@bluetreble.com wrote: You and Andres think it's fine as-is. Tom and Jon Nelson maybe don't like it as-is, but won't break backwards compatibility. David Steele and I seem fine with breaking compat., not sure about Fabrizio. +1 to add a second parameter and -1 to break backward compatibility with the default = true. Regards, -- Fabrízio de Royes Mello Consultoria/Coaching PostgreSQL Timbira: http://www.timbira.com.br Blog: http://fabriziomello.github.io Linkedin: http://br.linkedin.com/in/fabriziomello Twitter: http://twitter.com/fabriziomello Github: http://github.com/fabriziomello
Re: [HACKERS] [COMMITTERS] pgsql: Row-Level Security Policies (RLS)
Stephen Frost wrote: Row-Level Security Policies (RLS) What do we need RowSecurityPolicy->policy_id for? It seems to me that it is only used to determine whether the policy is the default deny one, so that it can later be removed if a hook adds a different one. This seems contrived as well as under-documented. Why isn't a boolean flag sufficient? (Not an actual review of this patch. I was merely skimming the code.) -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] Add a hint when postgresql.conf hot_standby = 'off' and recovery.conf standby = 'on'
On 05/22/2015 11:01 AM, Josh Berkus wrote: On 05/22/2015 12:22 AM, Heikki Linnakangas wrote: It seems worth adding a hint and/or changing the error message to be more descriptive when in this state. Any options about what should be logged before I start putting together a patch? Yeah, might be worthwhile. Perhaps: FATAL: the database system is in standby mode and hot_standby is not enabled Or just: FATAL: the database system is in cold standby mode Warm Standby is what we called it in the past, so I think we should be consistent. Otherwise +1. As I recall, warm standby is actually what it is. A cold standby is not applying logs. A warm standby is. So if there is a recovery.conf and more importantly PostgreSQL is running, it is going to be a warm standby not a cold. Other than that, absolute +1. JD -- Command Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564 PostgreSQL Centered full stack support, consulting and development. Announcing I'm offended is basically telling the world you can't control your own emotions, so everyone else should do it for you.
Re: [HACKERS] Change pg_cancel_*() to ignore current backend
On 5/21/15 7:12 AM, Robert Haas wrote: On Wed, May 20, 2015 at 8:46 PM, Andres Freund and...@anarazel.de wrote: I've a hard time believing it's actually a good idea to change this. It pretty much seems to only be useful if you're doing unqualified SELECT pg_cancel_backend(pid) FROM pg_stat_activity; type queries. I don't see that as something we need to address. +1. I'm not saying this isn't annoying - I've been annoyed by it myself - but IMHO it's really not worth having two functions that do 99% the same thing. Then, instead of having to remember to exclude your own backend using the same SQL syntax you use for everything else, you have to remember which of two similarly-named functions to call if you don't want to kill your own backend. That might be better for some people, but it won't be better for everyone. OTOH, if we allowed RAISE FATAL in plpgsql there'd be no need for self-termination via pg_terminate_backend(). I also don't see a problem with allowing self-termination, with the default being to disallow it. I suspect the number of people writing code that needs self-termination is very, very small, whereas probably every DBA out there has been bitten by this. This is the type of thing that gives Postgres a reputation for being 'hard to use'. I would think the benefit of the larger group would outweigh the pain the smaller group would feel changing their code, but of course that's just my opinion and I don't see any easy way to quantify that. You and Andres think it's fine as-is. Tom and Jon Nelson maybe don't like it as-is, but won't break backwards compatibility. David Steele and I seem fine with breaking compat., not sure about Fabrizio. Given the opposition unless some others speak up I'll just drop it. -- Jim Nasby, Data Architect, Blue Treble Consulting Data in Trouble? Get it in Treble!
http://BlueTreble.com
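For reference, the annoyance discussed above comes from the unqualified form of the query; the long-standing workaround (ordinary SQL, not part of any proposed patch) is to filter out the current backend explicitly:

```sql
-- Cancel the current query of every backend except our own;
-- pg_backend_pid() returns the PID of the session issuing this query.
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE pid <> pg_backend_pid();
```

This is the "exclude your own backend using the same SQL syntax you use for everything else" approach Robert refers to.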
Re: [HACKERS] Re: [COMMITTERS] pgsql: Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE.
On Thu, May 21, 2015 at 5:51 PM, Peter Geoghegan p...@heroku.com wrote: So I think we're good with ripping it out. Peter? I'll come up with a patch. Attached patch rips the command tag out. -- Peter Geoghegan From 3b50dcd242223c909bdedf20d9b7cb94ba2ff78e Mon Sep 17 00:00:00 2001 From: Peter Geoghegan peter.geoghega...@gmail.com Date: Fri, 22 May 2015 11:44:25 -0700 Subject: [PATCH] Remove UPSERT command tag Previously, INSERT with an ON CONFLICT DO UPDATE (but not an ON CONFLICT DO NOTHING) clause used a new command tag -- UPSERT. This was mostly useful for psql, which reports the number of rows affected alongside the command tag. The original concern was that it would be a misrepresentation to allow this to happen with ON CONFLICT DO UPDATE, because some affected rows may actually have been updated. However, per discussion with Tom Lane, Andres Freund, and Alvaro Herrera, this seems unlikely to be worth it. There are compatibility problems with adding a new command tag, since it is for a command that is substantively the same as INSERT. Remove the recently added command tag, and update the INSERT documentation to reflect this. 
--- doc/src/sgml/ref/insert.sgml | 21 +++-- src/backend/nodes/copyfuncs.c| 1 - src/backend/nodes/outfuncs.c | 1 - src/backend/optimizer/plan/planner.c | 2 -- src/backend/tcop/pquery.c| 17 +++-- src/bin/psql/common.c| 5 + src/include/nodes/plannodes.h| 2 -- 7 files changed, 11 insertions(+), 38 deletions(-) diff --git a/doc/src/sgml/ref/insert.sgml b/doc/src/sgml/ref/insert.sgml index 3c3315e..7cd4577 100644 --- a/doc/src/sgml/ref/insert.sgml +++ b/doc/src/sgml/ref/insert.sgml @@ -497,20 +497,13 @@ INSERT INTO replaceable class=PARAMETERtable_name/replaceable [ AS replac screen INSERT replaceableoid/replaceable replaceable class=parametercount/replaceable /screen - However, in the event of an literalON CONFLICT DO UPDATE/ clause - (but emphasisnot/emphasis in the event of an literalON - CONFLICT DO NOTHING/ clause), the command tag reports the number of - rows inserted or updated together, of the form -screen -UPSERT replaceableoid/replaceable replaceable class=parametercount/replaceable -/screen - The replaceable class=parametercount/replaceable is the number - of rows inserted. If replaceable class=parametercount/replaceable - is exactly one, and the target table has OIDs, then - replaceable class=parameteroid/replaceable is the - acronymOID/acronym - assigned to the inserted row (but not if there is only a single - updated row). Otherwise replaceable + The replaceable class=parametercount/replaceable is the + number of rows inserted or updated. If replaceable + class=parametercount/replaceable is exactly one, and the + target table has OIDs, then replaceable + class=parameteroid/replaceable is the acronymOID/acronym + assigned to the inserted row. The single row must have been + inserted rather than updated. Otherwise replaceable class=parameteroid/replaceable is zero. 
/para diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index 2d9bf41..cab9372 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -81,7 +81,6 @@ _copyPlannedStmt(const PlannedStmt *from) COPY_SCALAR_FIELD(queryId); COPY_SCALAR_FIELD(hasReturning); COPY_SCALAR_FIELD(hasModifyingCTE); - COPY_SCALAR_FIELD(isUpsert); COPY_SCALAR_FIELD(canSetTag); COPY_SCALAR_FIELD(transientPlan); COPY_NODE_FIELD(planTree); diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index 54464f8..4775acf 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -243,7 +243,6 @@ _outPlannedStmt(StringInfo str, const PlannedStmt *node) WRITE_UINT_FIELD(queryId); WRITE_BOOL_FIELD(hasReturning); WRITE_BOOL_FIELD(hasModifyingCTE); - WRITE_BOOL_FIELD(isUpsert); WRITE_BOOL_FIELD(canSetTag); WRITE_BOOL_FIELD(transientPlan); WRITE_NODE_FIELD(planTree); diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index d3f5a14..60340e3 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -261,8 +261,6 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) result-queryId = parse-queryId; result-hasReturning = (parse-returningList != NIL); result-hasModifyingCTE = parse-hasModifyingCTE; - result-isUpsert = - (parse-onConflict parse-onConflict-action == ONCONFLICT_UPDATE); result-canSetTag = parse-canSetTag; result-transientPlan = glob-transientPlan; result-planTree = top_plan; diff --git a/src/backend/tcop/pquery.c b/src/backend/tcop/pquery.c index bcffd85..9c14e8a 100644 --- a/src/backend/tcop/pquery.c +++ b/src/backend/tcop/pquery.c @@ -202,14 +202,8 @@ ProcessQuery(PlannedStmt *plan, lastOid = queryDesc-estate-es_lastoid; else lastOid =
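With the command tag removed, both the inserting and the updating path of the same statement report the ordinary INSERT tag. A small illustration (the table is hypothetical):

```sql
CREATE TABLE t (k int PRIMARY KEY, v text);

-- The first statement inserts; the second takes the ON CONFLICT DO UPDATE
-- path. With this patch applied, both report a plain "INSERT 0 1" command
-- tag rather than a separate "UPSERT 0 1" tag.
INSERT INTO t VALUES (1, 'a') ON CONFLICT (k) DO UPDATE SET v = excluded.v;
INSERT INTO t VALUES (1, 'b') ON CONFLICT (k) DO UPDATE SET v = excluded.v;
```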
Re: [HACKERS] jsonb concatenate operator's semantics seem questionable
On Fri, May 22, 2015 at 11:59 AM, Andrew Dunstan and...@dunslane.net wrote: As for raising an error, in principle it's doable, but the code to detect it might get messy. Also, I don't want a huge number of knobs. So I'm not excited about the idea. I think that that's a bad default behavior, although I don't think that's what Jim means. Consider our experience with having subscript operators throw errors -- it complicates certain cases (my complaint at the time was about expression indexes, but there are others). -- Peter Geoghegan
Re: [HACKERS] Add a hint when postgresql.conf hot_standby = 'off' and recovery.conf standby = 'on'
On 05/22/2015 12:22 AM, Heikki Linnakangas wrote: It seems worth adding a hint and/or changing the error message to be more descriptive when in this state. Any options about what should be logged before I start putting together a patch? Yeah, might be worthwhile. Perhaps: FATAL: the database system is in standby mode and hot_standby is not enabled Or just: FATAL: the database system is in cold standby mode Warm Standby is what we called it in the past, so I think we should be consistent. Otherwise +1. The additional benefit of this is that it would (hopefully) allow us to distinguish between a warm standby which was still reading its own xlogs (i.e. in crash recovery) and a warm standby which was using the restore_command (i.e. standby-ing). For that reason, it would be ideal if the new message only displayed once the restore_command starts being used. That is:

Cold Standby == DB snapshot and a huge folder of WAL files (i.e. Barman)
Warm Standby == hot_standby=off, recovery.conf standby=on
Hot Standby == hot_standby=on, recovery.conf standby=on

-- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
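The three modes above come down to two settings on the standby; a minimal configuration sketch (9.x-era parameter names, where standby mode lives in recovery.conf; the restore_command path is illustrative):

```
# postgresql.conf (on the standby)
hot_standby = on        # off => warm standby: WAL is applied, but
                        # read-only connections are refused with the
                        # FATAL message discussed in this thread

# recovery.conf (on the standby)
standby_mode = 'on'
restore_command = 'cp /mnt/archive/%f "%p"'
```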