Re: [HACKERS] [bug fix] PostgreSQL fails to start on Windows if it crashes after tablespace creation
From: Amit Kapila amit.kapil...@gmail.com

> On Wed, Jan 15, 2014 at 8:12 PM, MauMau maumau...@gmail.com wrote:
>> Thanks for reviewing and testing the patch. Yes, at first I did what you
>> mentioned, but modified the patch according to some advice in the mail
>> thread. During redo, create_tablespace_directories() needs to handle the
>> case where $PGDATA/pg_tblspc/xxx is not a symlink but a directory even on
>> UNIX/Linux. Please see TablespaceCreateDbspace().
>> destroy_tablespace_directories() doesn't have to handle that situation.
>
> If create_tablespace_directories() needs to handle a directory on both
> Windows and Linux, then shouldn't it be a runtime check, as in your first
> version, rather than a compile-time check? Also, isn't that the reason why
> destroy_tablespace_directories() has a similar check?

I see..., and you are correct. The first version of my patch should be the right fix. It seems my head went somewhere else when I submitted the second revision. What should I do? Should I re-submit the first revision as the latest (fifth) revision and link the email from the newest CommitFest entry?

Regards
MauMau

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] jsonb and nested hstore
On Thu, Mar 13, 2014 at 3:39 PM, Peter Geoghegan p...@heroku.com wrote:
> On Thu, Mar 13, 2014 at 2:21 AM, Greg Stark st...@mit.edu wrote:
>> It does sound like the main question here is which opclass should be the
>> default. From the discussion there's a jsonb_hash_ops, which works on all
>> input values but supports fewer operators, and a jsonb_ops, which supports
>> more operators but can't handle json with larger individual elements.
>> Perhaps it's better to make jsonb_hash_ops the default so at least it's
>> always safe to create a default gin index?
>
> Personally, I don't think it's a good idea to change the default.

I must admit that I'm coming around to the view that jsonb_hash_ops would make a better default. Its performance is superb, and I think there's a strong case to be made that this more than makes up for it not supporting all indexable operators - the existence operators just aren't that useful in comparison.

--
Peter Geoghegan
Re: [HACKERS] [bug fix] PostgreSQL fails to start on Windows if it crashes after tablespace creation
On Fri, Mar 21, 2014 at 12:24 PM, MauMau maumau...@gmail.com wrote:
>> From: Amit Kapila amit.kapil...@gmail.com
>>> If create_tablespace_directories() needs to handle a directory on both
>>> Windows and Linux, then shouldn't it be a runtime check, as in your first
>>> version, rather than a compile-time check? Also, isn't that the reason
>>> why destroy_tablespace_directories() has a similar check?
>>
>> I see..., and you are correct. The first version of my patch should be the
>> right fix. It seems my head went somewhere else when I submitted the
>> second revision. What should I do? Should I re-submit the first revision
>> as the latest (fifth) revision and link the email from the newest
>> CommitFest entry?

The comments in your first version need to be improved, as there you just have a Windows-specific comment:

+ /* On Windows, lstat()

I think you can change the comments (make them somewhat similar to destroy_tablespace_directories()) and then submit it as a new version.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
[HACKERS] Standby server won't start
I changed the primary server's max_connections from 100 to 4 just for testing purposes. Now the standby server won't start and complains:

hot standby is not possible because max_connections = 4 is a lower setting than on the master server (its value was 100)

My guess is this is because the standby's pg_control file contains the previous primary setting (max_connections = 100). Is there any way to start the standby server without re-creating pg_control (which implies taking a base backup again)? If not, there should be some way to allow the standby server to start without taking a base backup...

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
Re: [HACKERS] [bug fix] PostgreSQL fails to start on Windows if it crashes after tablespace creation
From: Amit Kapila amit.kapil...@gmail.com
> The comments in your first version need to be improved, as there you just
> have a Windows-specific comment:
>
> + /* On Windows, lstat()
>
> I think you can change the comments (make them somewhat similar to
> destroy_tablespace_directories()) and then submit it as a new version.

OK, done. Please find the attached patch. I also rebased the patch to HEAD. I'll update the CommitFest entry soon.

Regards
MauMau

remove_tblspc_symlink_v5.patch
Description: Binary data
Re: [HACKERS] Standby server won't start
On 21 March 2014 13:41, Tatsuo wrote:
> I changed the primary server's max_connections from 100 to 4 just for
> testing purposes. Now the standby server won't start and complains:
>
> hot standby is not possible because max_connections = 4 is a lower setting
> than on the master server (its value was 100)
>
> My guess is this is because the standby's pg_control file contains the
> previous primary setting (max_connections = 100). Is there any way to
> start the standby server without re-creating pg_control (which implies
> taking a base backup again)? If not, there should be some way to allow
> the standby server to start without taking a base backup...

I think there is no way to do this, because the parameter from the master is already recorded in the pg_control file, which cannot be changed without taking a new backup from the master. Also, it is not recommended to have the standby's max_connections value lower than the master's max_connections value.

Thanks and Regards,
Kumar Rajeev Rastogi
[HACKERS] Inheritance of foreign key constraints.
http://wiki.postgresql.org/wiki/Todo
Section "Inheritance": Allow inherited tables to inherit indexes, UNIQUE constraints, and primary/FOREIGN KEYs.

Good Morning.

I have started to program a patch for inheritance of foreign key constraints, i.e. after applying the patch, FKs are maintained between a foreign table inheritance tree (master tree) and a given table inheritance tree (detail tree).

My patch needs one change that might be of significance. The type of the dependencies (pg_depend) between the FK constraint (pg_constraint) and the corresponding RI_ConstraintTrigger triggers has to be changed from DEPENDENCY_INTERNAL to DEPENDENCY_AUTO. If this modification is not applied, a detail child table cannot be dropped without previously dropping the whole FK constraint, because the removal operation depends on the FK constraint of its parent table.

It also requires the end user to remember about the triggers maintaining uniqueness of a parent tree key and about consistent unique indexes/PKs.

Currently the patch works on the 9.3 release for ALTER TABLE ... ADD CONSTRAINT ... FOREIGN KEY ...; ALTER TABLE ... DROP CONSTRAINT; and DROP TABLE (master or detail root). This covers my needs. As soon as the patch is ready, matches PostgreSQL standards, and is tested, I would like to add it to the distribution.

I have the following questions:
1. Is the community interested in it? That is, should I continue my work, provided it matches PostgreSQL standards?
2. If the answer is yes, please comment on the above change of the dependency type.

Kind regards
Andrzej Mazurkiewicz
Re: [HACKERS] Standby server won't start
>> I changed the primary server's max_connections from 100 to 4 just for
>> testing purposes. Now the standby server won't start and complains:
>>
>> hot standby is not possible because max_connections = 4 is a lower
>> setting than on the master server (its value was 100)
>>
>> My guess is this is because the standby's pg_control file contains the
>> previous primary setting (max_connections = 100). Is there any way to
>> start the standby server without re-creating pg_control (which implies
>> taking a base backup again)? If not, there should be some way to allow
>> the standby server to start without taking a base backup...
>
> I think there is no way to do this, because the parameter from the master
> is already recorded in the pg_control file, which cannot be changed
> without taking a new backup from the master.

Yes, that's the problem.

> Also, it is not recommended to have the standby's max_connections value
> lower than the master's max_connections value.

In my case I had already changed the primary's max_connections to 4 and restarted it. So at that point the postgresql.conf of both primary and standby said 4.

Maybe we could enhance it with something like this:
1) When the standby starts, it gets the max_connections info from the primary and updates the pg_control file if it's different.
2) The standby then restarts itself if max_connections was changed in #1.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
Re: [HACKERS] Standby server won't start
On 21 March 2014 16:17, Tatsuo wrote:
> In my case I had already changed the primary's max_connections to 4 and
> restarted it. So at that point the postgresql.conf of both primary and
> standby said 4.

If you changed max_connections to 4 only on the primary, then I am not able to understand how it got changed on the standby as well (if you have not taken a base backup again)? Let me know if I have missed something.

Thanks and Regards,
Kumar Rajeev Rastogi
Re: [HACKERS] [RFC] What should we do for reliable WAL archiving?
From: Mitsumasa KONDO kondo.mitsum...@gmail.com
> 2014-03-17 21:12 GMT+09:00 Fujii Masao masao.fu...@gmail.com:
>> On Mon, Mar 17, 2014 at 10:20 AM, Robert Haas robertmh...@gmail.com wrote:
>>> On Sun, Mar 16, 2014 at 6:23 AM, MauMau maumau...@gmail.com wrote:
>>>> * Improve the example in the documentation. But what command can we use
>>>> to reliably sync just one file?
>>>> * Provide some command, say pg_copy, which copies a file synchronously
>>>> by using fsync(), and describe in the doc something like "for simple
>>>> use cases, you can use pg_copy as the standard reliable copy command".
>>>
>>> +1. This won't obviate the need for tools to manage replication, but it
>>> would make it possible to get the simplest case right without guessing.
>>
>> +1, too. And, what about making pg_copy call posix_fadvise(DONT_NEED)
>> against the archived file after the copy?
>
> Also, it might be a good idea to support a direct copy of the file to
> avoid wasting the file cache. Use direct_cp.
> http://directcp.sourceforge.net/direct_cp.html

Thank you all for giving favorable responses and interesting ideas. Then, I think I'll do:

* Create pg_copy in C so that it can be used on Windows as well as on UNIX/Linux. It just copies one file. Its source code is located in src/bin/pg_copy/. Please recommend a better name if you have one in mind.
* Add a reference page for pg_copy in the chapter "Server applications". Modify the section on continuous archiving to recommend pg_copy for simple use cases as the standard command.
* pg_copy calls posix_fadvise(DONT_NEED) on the destination file.
* pg_copy passes the O_DIRECT flag when opening the destination file when the --directio or -d option is specified. O_DIRECT is not used by default because it may not be available on some file systems, and it might cause trouble on older platforms such as RHEL4/5. pg_copy does not use O_DIRECT for the source file so that it can copy the data from the filesystem cache, which was just written by postgres.

Could you give me your opinions before starting the work, including the following?

* Should I refactor the functions (copy_file, copydir, etc.) in src/backend/storage/file/copydir.c so that they can also be used by frontends? If so, which of src/port or src/common is the right place to put copydir.c in?
* Should I complete the work before 9.4 beta so that it will be available starting with 9.4? I think so, because it is a basic capability to archive transaction logs safely (although time may not allow me to do this).

Regards
MauMau
Re: [HACKERS] Standby server won't start
From: Rajeev rastogi rajeev.rast...@huawei.com
> If you changed max_connections to 4 only on the primary, then I am not
> able to understand how it got changed on the standby as well (if you have
> not taken a base backup again)? Let me know if I have missed something.

The primary logs the new value as an XLOG_PARAMETER_CHANGE WAL record at startup when the parameter value in postgresql.conf does not match the one in pg_control. Then the WAL record is sent to the standby and applied, which changes the value in pg_control on the standby.

Regards
MauMau
Re: [HACKERS] Standby server won't start
> The primary logs the new value as an XLOG_PARAMETER_CHANGE WAL record at
> startup when the parameter value in postgresql.conf does not match the one
> in pg_control. Then the WAL record is sent to the standby and applied,
> which changes the value in pg_control on the standby.

Last time, I tested in the following way:

1) stop primary
2) stop standby
3) change max_connections to 4 on primary
4) change max_connections to 4 on standby
5) start primary
6) start standby, but it failed as I said before

So the particular WAL record was not sent to the standby? I'm going to test again...

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
Re: [HACKERS] QSoC proposal: Rewrite pg_dump and pg_restore
On Fri, Mar 21, 2014 at 4:09 AM, Tom Lane t...@sss.pgh.pa.us wrote:
> Craig Ringer cr...@2ndquadrant.com writes:
>> Here's how I think it needs to look: [ move all the functionality to the
>> backend ]
>
> Of course, after you've done all that work, you've got something that is
> of exactly zero use to its supposed principal use-case, pg_dump. pg_dump
> will still have to support server versions that predate all these fancy
> new dump functions, and that pretty much ensures that most of pg_dump's
> core functionality will still be on the client side. Or, if you try to
> finesse that problem by making sure the new server APIs correspond to
> easily-identified pieces of pg_dump code, you'll probably end up with
> APIs that nobody else wants to use :-(.

Or you could mandate that new server versions be able to consume _old_ pg_dump output. This would change the recommendation from "when upgrading, dump using the new pg_dump" to "when upgrading, dump using the old pg_dump". This would be necessary policy going forward anyway if most of the pg_dump functionality were server-side, because it would be generating dumps in the server-version dump format, not the client-version format.

Regards
Marcin Mańk
(goes back to lurker cave...)
Re: [HACKERS] Standby server won't start
From: Tatsuo Ishii is...@postgresql.org
> Last time, I tested in the following way:
>
> 1) stop primary
> 2) stop standby
> 3) change max_connections to 4 on primary
> 4) change max_connections to 4 on standby
> 5) start primary
> 6) start standby, but it failed as I said before
>
> So the particular WAL record was not sent to the standby? I'm going to
> test again...

That's because the parameter is checked at the beginning of recovery (i.e. at standby start), before XLOG_PARAMETER_CHANGE is received and applied on the standby. Please see CheckRequiredParameterValues() in StartupXLOG().

To persist the max_connections change:

1) stop primary
2) change max_connections on the primary
3) start primary
4) watch pg_stat_replication and wait until the standby is in sync with the primary (XLOG_PARAMETER_CHANGE is applied)
5) stop standby
6) change max_connections on the standby
7) start standby

Regards
MauMau
Re: [HACKERS] [RFC] What should we do for reliable WAL archiving?
On Fri, Mar 21, 2014 at 8:54 PM, MauMau maumau...@gmail.com wrote:
> * Create pg_copy in C so that it can be used on Windows as well as on
> UNIX/Linux. It just copies one file. Its source code is located in
> src/bin/pg_copy/. Please recommend a better name if you have one in mind.

I'd rather see that as part of contrib/ if possible. Is there any portion of the code you have in mind that makes putting it in src/bin mandatory?

> * Should I complete the work before 9.4 beta so that it will be available
> starting with 9.4? I think so, because it is a basic capability to archive
> transaction logs safely (although time may not allow me to do this).

Pursuing efforts on a utility like that is worth the shot IMO (I would use it for sure if it has reliable cross-platform support to unify sets of replication scripts), but including it in 9.4 is out of scope. A saner target would be the first commitfest of 9.5.

Regards,
--
Michael
Re: [HACKERS] jsonb and nested hstore
On Fri, Mar 21, 2014 at 7:23 AM, Peter Geoghegan p...@heroku.com wrote:
> I must admit that I'm coming around to the view that jsonb_hash_ops would
> make a better default. Its performance is superb, and I think there's a
> strong case to be made that this more than makes up for it not supporting
> all indexable operators - the existence operators just aren't that useful
> in comparison.

Is there any \d command that would display a nice list of which operators a given operator class actually supports? It's kind of hard to determine whether a proposed index would actually be useful for your queries without it.

--
greg
Re: [HACKERS] QSoC proposal: Rewrite pg_dump and pg_restore
On Thu, Mar 20, 2014 at 11:09 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> Craig Ringer cr...@2ndquadrant.com writes:
>> Here's how I think it needs to look: [ move all the functionality to the
>> backend ]
>
> Of course, after you've done all that work, you've got something that is
> of exactly zero use to its supposed principal use-case, pg_dump. pg_dump
> will still have to support server versions that predate all these fancy
> new dump functions, and that pretty much ensures that most of pg_dump's
> core functionality will still be on the client side. Or, if you try to
> finesse that problem by making sure the new server APIs correspond to
> easily-identified pieces of pg_dump code, you'll probably end up with
> APIs that nobody else wants to use :-(.

It's worse than that. If you put all the logic in the server, then a dump taken on an older version won't be able to quote keywords added in the newer version. Go directly to fail.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
[HACKERS] Creating tables for columns
We are working on a project in which we need to create tables for each column. So which function should we call recursively to create the tables?
Re: [HACKERS] Array of composite types returned from python
On Thu, Mar 20, 2014 at 4:54 PM, Behn, Edward (EBEHN) eb...@arinc.com wrote:
> I've endeavored to enable the return of arrays of composite types from
> code written in PL/Python. It seems that this can be accomplished through
> a very minor change to the code: on line 401 of
> src/pl/plpython/plpy_typeio.c, remove the error report "PL/Python
> functions cannot return type..." and replace it with the assignment
> arg->func = PLyObject_ToComposite;
> From all that I can see, this does exactly what I want. A Python list of
> tuples is converted to an array of composite types in SQL. I ran the main
> and Python regression suites for both python2 and python3 with asserts
> enabled. The only discrepancies I got were ones that were due to the
> output expecting an error. When I altered the .out files to the expected
> behavior, they matched just fine. Am I missing anything (i.e. memory
> leak, undesirable behavior elsewhere)?

Don't know, but I'd definitely submit that patch to the next open commitfest. That's a very useful gain for such a small change.

merlin
Re: [HACKERS] Creating tables for columns
2014-03-21 22:57 GMT+09:00 Rajashree Mandaogane rajashree@gmail.com:
> We are working on a project in which we need to create tables for each
> column. So which function should we call recursively to create the tables?

I had implemented similar code on top of the FDW API:
https://github.com/kaigai/pg_strom/blob/old_cuda/utilcmds.c#L244

Probably, heap_create_with_catalog() is what you are looking for.

--
KaiGai Kohei kai...@kaigai.gr.jp
Re: [HACKERS] Inheritance of foreign key constraints.
Andrzej Mazurkiewicz andr...@mazurkiewicz.org writes:
> My patch needs one change that might be of significance. The type of the
> dependencies (pg_depend) between the FK constraint (pg_constraint) and the
> corresponding RI_ConstraintTrigger triggers has to be changed from
> DEPENDENCY_INTERNAL to DEPENDENCY_AUTO.

So in other words, somebody could (accidentally or maliciously) break the constraint by dropping one of its implementation triggers. I doubt that's acceptable.

> If this modification is not applied, a detail child table cannot be
> dropped without previously dropping the whole FK constraint, because the
> removal operation depends on the FK constraint of its parent table.

Dropping a child table is going to have much larger problems than that, no? What about the values in the child table --- don't you risk orphaning referencing rows? Or are you only supporting this on the referencing side?

In any case, it seems like DROP TABLE could remove the dependency entries for itself, rather than taking the risk of weakening the dependency type.

regards, tom lane
Re: [HACKERS] QSoC proposal: Rewrite pg_dump and pg_restore
On 03/21/2014 09:38 AM, Robert Haas wrote:
> On Thu, Mar 20, 2014 at 11:09 PM, Tom Lane t...@sss.pgh.pa.us wrote:
>> Of course, after you've done all that work, you've got something that is
>> of exactly zero use to its supposed principal use-case, pg_dump. pg_dump
>> will still have to support server versions that predate all these fancy
>> new dump functions, and that pretty much ensures that most of pg_dump's
>> core functionality will still be on the client side.
>
> It's worse than that. If you put all the logic in the server, then a dump
> taken on an older version won't be able to quote keywords added in the
> newer version. Go directly to fail.

Yeah. This tantalizing project has been looked at several times and found to be a viper's nest.

What would be useful for many purposes, and is a long-standing project of mine that I still haven't found time to make progress on, is for the server to contain functions to produce the creation SQL for all its own objects, free of the locks that pg_dump requires for consistency.

That would be a great SoC project, incidentally. I'd even volunteer to mentor that one.

cheers

andrew
Re: [HACKERS] [RFC] What should we do for reliable WAL archiving?
From: Michael Paquier michael.paqu...@gmail.com
> I'd rather see that as part of contrib/ if possible. Is there any portion
> of the code you have in mind that makes putting it in src/bin mandatory?

Archiving transaction logs reliably is a basic responsibility of a DBMS, so I think it should be treated as part of the core. It is not merely a desirable feature but a mandatory one for persisting transaction logs. Even if it were just a nice-to-have feature, it could be put in the core like pg_basebackup and pg_isready, which are not mandatory tools.

> Pursuing efforts on a utility like that is worth the shot IMO (I would use
> it for sure if it has reliable cross-platform support to unify sets of
> replication scripts), but including it in 9.4 is out of scope. A saner
> target would be the first commitfest of 9.5.

OK, I don't mind whether it is targeted at 9.4 or 9.5. If someone wants it for 9.4, I'll try to hurry.

Regards
MauMau
[HACKERS] Patch for CREATE RULE sgml -- Was in: [DOCS]
Hi guys,

I realized that the output of CREATE RULE does not have a detailed listing for the event parameter. But the question here is that I'm not sure which format to follow:

{ INSERT | UPDATE | DELETE | SELECT }

or

INSERT
UPDATE
DELETE
SELECT

I attach a patch for each one. I sent a mail to pgsql-docs but no response has been received.

Thanks and regards,

diff --git a/doc/src/sgml/ref/create_rule.sgml b/doc/src/sgml/ref/create_rule.sgml
index d3212b5..22b46e2 100644
--- a/doc/src/sgml/ref/create_rule.sgml
+++ b/doc/src/sgml/ref/create_rule.sgml
@@ -24,6 +24,9 @@ PostgreSQL documentation
 CREATE [ OR REPLACE ] RULE <replaceable class="parameter">name</replaceable> AS ON <replaceable class="parameter">event</replaceable>
     TO <replaceable class="parameter">table_name</replaceable> [ WHERE <replaceable class="parameter">condition</replaceable> ]
     DO [ ALSO | INSTEAD ] { NOTHING | <replaceable class="parameter">command</replaceable> | ( <replaceable class="parameter">command</replaceable> ; <replaceable class="parameter">command</replaceable> ... ) }
+
+<phrase>where <replaceable class="parameter">event</replaceable> can be one of:</phrase>
+{ INSERT | UPDATE | DELETE | SELECT}
 </synopsis>
 </refsynopsisdiv>

diff --git a/doc/src/sgml/ref/create_rule.sgml b/doc/src/sgml/ref/create_rule.sgml
index d3212b5..a4df024 100644
--- a/doc/src/sgml/ref/create_rule.sgml
+++ b/doc/src/sgml/ref/create_rule.sgml
@@ -24,6 +24,12 @@ PostgreSQL documentation
 CREATE [ OR REPLACE ] RULE <replaceable class="parameter">name</replaceable> AS ON <replaceable class="parameter">event</replaceable>
     TO <replaceable class="parameter">table_name</replaceable> [ WHERE <replaceable class="parameter">condition</replaceable> ]
     DO [ ALSO | INSTEAD ] { NOTHING | <replaceable class="parameter">command</replaceable> | ( <replaceable class="parameter">command</replaceable> ; <replaceable class="parameter">command</replaceable> ... ) }
+
+<phrase>where <replaceable class="parameter">event</replaceable> can be one of:</phrase>
+INSERT
+UPDATE
+DELETE
+SELECT
 </synopsis>
 </refsynopsisdiv>
Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe
On 21 March 2014 03:45, Noah Misch n...@leadboat.com wrote: On Sat, Mar 08, 2014 at 11:14:30AM +, Simon Riggs wrote: On 7 March 2014 09:04, Simon Riggs si...@2ndquadrant.com wrote: The right thing to do here is to not push to the extremes. If we mess too much with the ruleutil stuff it will just be buggy. A more considered analysis in a later release is required for a full and complete approach. As I indicated earlier, an 80/20 solution is better for this release. Slimming down the patch, I've removed changes to lock levels for almost all variants. The only lock levels now reduced are those for VALIDATE, plus setting of relation and attribute level options. Good call. Thanks for the review. I'll respond to each point on a later email but looks nothing much major, apart from the point raised on separate thread. + * Be careful to ensure this function is called for Tables and Indexes only. + * It is not currently safe to be called for Views because security_barrier + * is listed as an option and so would be allowed to be set at a level lower + * than AccessExclusiveLock, which would not be correct. This statement is accepted and takes only ShareUpdateExclusiveLock: alter table information_schema.triggers set (security_barrier = true); I find it hard to justify why we accept such a statement. Surely its a bug when the named table turns out to be a view? Presumably ALTER SEQUENCE and ALTER other stuff has checks for the correct object type? OMG. I suggest adding a LOCKMODE field to relopt_gen and adding a reloptions_locklevel() function to determine the required level from an options list. That puts control of the lock level near the code that understands the implications for each option. You can then revert the addition of AlterViewInternal() and some of the utility.c changes. Sure, that's how we code it, but I'm not sure we should introduce that feature. The above weirdness is not itself justification. 
-- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
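Noah's reloptions_locklevel() suggestion can be sketched roughly as below. All names and lock-level values here are hypothetical simplifications for illustration, not PostgreSQL's actual reloptions API: each option descriptor carries the lock level it requires, and the statement's lock level is the strongest one demanded by any option in the list.

```c
#include <assert.h>
#include <string.h>

/* Toy lock levels, ordered by strength (values are illustrative only). */
typedef enum { ShareUpdateExclusiveLock = 4, AccessExclusiveLock = 8 } LockMode;

/* Sketch of a reloption descriptor carrying its required lock level. */
typedef struct {
    const char *name;
    LockMode    lockmode;
} ReloptGen;

static const ReloptGen relopt_table[] = {
    {"fillfactor",       ShareUpdateExclusiveLock},
    {"security_barrier", AccessExclusiveLock},   /* view option needs full lock */
};

/* Return the strongest lock required by any option in the list. */
static LockMode
reloptions_locklevel(const char **options, int noptions)
{
    LockMode level = ShareUpdateExclusiveLock;
    size_t   ntable = sizeof(relopt_table) / sizeof(relopt_table[0]);

    for (int i = 0; i < noptions; i++)
        for (size_t j = 0; j < ntable; j++)
            if (strcmp(options[i], relopt_table[j].name) == 0 &&
                relopt_table[j].lockmode > level)
                level = relopt_table[j].lockmode;
    return level;
}
```

The point of this shape is that the lock requirement lives next to the option definition, so a new option cannot silently be settable under too weak a lock.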
Re: [HACKERS] QSoC proposal: Rewrite pg_dump and pg_restore
Andrew Dunstan escribió: What would be useful for many purposes, and is a long-standing project of mine that I still haven't found time to make progress on, is that the server should contain functions to produce the creation SQL for all its own objects, free of the locks that pg_dump requires for consistency. Maybe you'd like my DDL deparse project, then. Right now it's only of use for event triggers during DDL execution, but I don't see any strong reason it cannot be used to reconstruct object creation commands from only their identifying OID. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
Re: [HACKERS] [bug fix] pg_ctl always uses the same event source
From: Alvaro Herrera alvhe...@2ndquadrant.com MauMau escribió: The raw link only gave the mail in text format. I hoped to import the mail into Windows Mail on Windows Vista, but I couldn't. You might need to run a conversion process by which you transform the raw file (in mbox format) into EML format or whatever it is that Windows Mail uses. I vaguely recall there are tools for this. Thanks. I could open the file without any conversion as follows: 1. Click the raw link on the Web browser (I'm using Internet Explorer). 2. The Web browser displays the mail file in text format. Save the file as a text file (e.g. mail.txt). 3. Just change the extension from .txt to .eml (e.g. mail.eml). 4. Double-click the .eml file on the Windows Explorer. Windows Mail opens and displays the mail. I can reply to it. Regards MauMau
Re: [HACKERS] pg_archivecleanup bug
On Wed, Mar 19, 2014 at 02:02:50PM -0400, Bruce Momjian wrote: The attached patch is slightly updated. I will apply it to head and all the back branches, including the stylistic change to pg_resetxlog (for consistency) and remove the MinGW block in head. Patch applied back through 8.4. I had the closedir() tests backwards and that was fixed. I also went over all the readdir/closedir() calls in all back branches to make sure they were properly handled. -- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + Everyone has their own god. +
Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe
On Fri, Mar 21, 2014 at 04:11:12PM +0000, Simon Riggs wrote: On 21 March 2014 03:45, Noah Misch n...@leadboat.com wrote: On Sat, Mar 08, 2014 at 11:14:30AM +0000, Simon Riggs wrote: Thanks for the review. I'll respond to each point in a later email, but it looks like nothing major, apart from the point raised on a separate thread. Yep. + * Be careful to ensure this function is called for Tables and Indexes only. + * It is not currently safe to be called for Views because security_barrier + * is listed as an option and so would be allowed to be set at a level lower + * than AccessExclusiveLock, which would not be correct. This statement is accepted and takes only ShareUpdateExclusiveLock: alter table information_schema.triggers set (security_barrier = true); I find it hard to justify why we accept such a statement. Surely it's a bug when the named table turns out to be a view? Presumably ALTER SEQUENCE and ALTER other stuff have checks for the correct object type? OMG. We've framed ALTER TABLE's relkind leniency as a historic artifact. As a move toward stricter checks, ALTER TABLE refused to operate on foreign tables in 9.1 and 9.2. 9.3 reversed that course, though. For better or worse, ALTER TABLE is nearly a union of the relation ALTER possibilities. That choice is well-entrenched. -- Noah Misch EnterpriseDB http://www.enterprisedb.com
[HACKERS] [RFC, POC] Don't require a NBuffer sized PrivateRefCount array of local buffer pins
Hi, I've been annoyed at the amount of memory used by the backend-local PrivateRefCount array for a couple of reasons: a) The performance impact of AtEOXact_Buffers() on Assert() enabled builds is really, really annoying. b) On larger nodes, the L1/2/3 cache impact of randomly accessing a several-megabyte array at a high frequency is noticeable. I've seen the access to that be the primary (yes, really) source of pipeline stalls. c) On nodes with significant shared memory, the sum of the per-backend arrays is a significant amount of memory that could very well be used more beneficially. So what I have done in the attached proof of concept is to have a small (8 currently) array of (buffer, pincount) that's searched linearly when the refcount of a buffer is needed. When more than 8 buffers are pinned, a hashtable is used to look up the values. That seems to work fairly well. On the few tests I could run on my laptop - I've done this during a flight - it's a small performance win in all cases I could test, while saving a fair amount of memory. Alternatively we could just get rid of the idea of tracking this per backend, relying on tracking via resource managers... Greetings, Andres Freund -- Andres Freund http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services From 433f248c0f4c3e3d43d1cc955354e5dd5cddfcea Mon Sep 17 00:00:00 2001 From: Andres Freund and...@anarazel.de Date: Thu, 20 Mar 2014 21:46:34 +0100 Subject: [PATCH] Make backend local tracking of buffer pins more efficient.
--- src/backend/storage/buffer/buf_init.c | 25 --- src/backend/storage/buffer/bufmgr.c | 346 -- src/include/storage/bufmgr.h | 19 -- 3 files changed, 290 insertions(+), 100 deletions(-) diff --git a/src/backend/storage/buffer/buf_init.c b/src/backend/storage/buffer/buf_init.c index e187242..3b2432d 100644 --- a/src/backend/storage/buffer/buf_init.c +++ b/src/backend/storage/buffer/buf_init.c @@ -130,31 +130,6 @@ InitBufferPool(void) } /* - * Initialize access to shared buffer pool - * - * This is called during backend startup (whether standalone or under the - * postmaster). It sets up for this backend's access to the already-existing - * buffer pool. - * - * NB: this is called before InitProcess(), so we do not have a PGPROC and - * cannot do LWLockAcquire; hence we can't actually access stuff in - * shared memory yet. We are only initializing local data here. - * (See also InitBufferPoolBackend, over in bufmgr.c.) - */ -void -InitBufferPoolAccess(void) -{ - /* - * Allocate and zero local arrays of per-buffer info. - */ - PrivateRefCount = (int32 *) calloc(NBuffers, sizeof(int32)); - if (!PrivateRefCount) - ereport(FATAL, -(errcode(ERRCODE_OUT_OF_MEMORY), - errmsg("out of memory"))); -} - -/* * BufferShmemSize * * compute the size of shared memory for the buffer pool including diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c index 19eecab..113b7ed 100644 --- a/src/backend/storage/buffer/bufmgr.c +++ b/src/backend/storage/buffer/bufmgr.c @@ -86,6 +86,175 @@ static bool IsForInput; /* local state for LockBufferForCleanup */ static volatile BufferDesc *PinCountWaitBuf = NULL; +typedef struct PrivateRefCount +{ + Buffer buffer; + int32 refcount; +} PrivateRefCount; + +/* one full cacheline */ +#define REFCOUNT_ARRAY_ENTRIES 8 + +/* + * Backend-Private refcount management.
+ */ +static struct PrivateRefCount PrivateRefCountArray[REFCOUNT_ARRAY_ENTRIES]; +static HTAB *PrivateRefCountHash = NULL; +static int32 PrivateRefCountOverflowed = 0; + +static PrivateRefCount* GetPrivateRefCountEntry(Buffer buffer, bool create); +static inline int32 GetPrivateRefCount(Buffer buffer); +static void ForgetPrivateRefCountEntry(PrivateRefCount *ref); + +/* + * Return the PrivateRefCount entry for the passed buffer. + * + * Returns NULL if create = false is passed and the buffer doesn't have a + * PrivateRefCount entry; allocates a new PrivateRefCount entry if currently + * none exists and create = true is passed. + * + * When a returned refcount entry isn't used anymore it has to be forgotten, + * using ForgetPrivateRefCountEntry(). + * + * Only works for shared buffers. + */ +static PrivateRefCount* +GetPrivateRefCountEntry(Buffer buffer, bool create) +{ + PrivateRefCount *res; + PrivateRefCount *free = NULL; + + int i; + bool found = false; + + Assert(BufferIsValid(buffer)); + Assert(!BufferIsLocal(buffer)); + + /* + * First search for references in the array, that'll be sufficient in the + * majority of cases. + */ + for (i = 0; i < REFCOUNT_ARRAY_ENTRIES; i++) + { + res = &PrivateRefCountArray[i]; + + if (res->buffer == buffer) + return res; + + /* Remember where to put a new refcount, should it become necessary. */ + if (create && free == NULL && res->buffer == InvalidBuffer) + free = res; + } + + /* + * By here we know that the buffer, if already
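The scheme the truncated patch implements, a tiny fixed array searched linearly with spillover into a hash table once more than REFCOUNT_ARRAY_ENTRIES buffers are pinned, can be modeled in miniature as below. This is a standalone sketch using a toy linear-probe table instead of PostgreSQL's dynahash, and it omits the patch's forget/overflow bookkeeping; it is not the patch itself:

```c
#include <assert.h>

#define REFCOUNT_ARRAY_ENTRIES 8   /* one cache line, as in the patch */
#define HASH_SIZE 1024             /* toy overflow table */

typedef int Buffer;                /* valid buffers are > 0; 0 means empty */

typedef struct { Buffer buffer; int refcount; } PrivateRefCount;

static PrivateRefCount array_part[REFCOUNT_ARRAY_ENTRIES];
static PrivateRefCount hash_part[HASH_SIZE];

/* Find (or create) the entry tracking this buffer's local pin count. */
static PrivateRefCount *
get_entry(Buffer buffer, int create)
{
    PrivateRefCount *free_slot = 0;

    /* Fast path: linear search of the small array. */
    for (int i = 0; i < REFCOUNT_ARRAY_ENTRIES; i++)
    {
        if (array_part[i].buffer == buffer)
            return &array_part[i];
        if (create && !free_slot && array_part[i].buffer == 0)
            free_slot = &array_part[i];
    }

    /*
     * Slow path: overflow table (linear probing).  We must search it even
     * when the fast path found a free slot, otherwise we could create a
     * duplicate entry for a buffer that already overflowed.
     */
    for (int probe = 0; probe < HASH_SIZE; probe++)
    {
        PrivateRefCount *e = &hash_part[(buffer + probe) % HASH_SIZE];
        if (e->buffer == buffer)
            return e;
        if (e->buffer == 0)
        {
            if (!create)
                return 0;
            if (free_slot)          /* prefer the cache-friendly slot */
                e = free_slot;
            e->buffer = buffer;
            e->refcount = 0;
            return e;
        }
    }
    return 0;                       /* toy table full */
}

static void pin(Buffer b)  { get_entry(b, 1)->refcount++; }
static int  pins(Buffer b) { PrivateRefCount *e = get_entry(b, 0); return e ? e->refcount : 0; }
```

The win Andres describes comes from the common case (few pins at once) touching only one cache line, instead of a PrivateRefCount array sized for every shared buffer.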
Re: [HACKERS] equalTupleDescs() ignores ccvalid/ccnoinherit
On Fri, Mar 21, 2014 at 12:12 AM, Noah Misch n...@leadboat.com wrote: We added these ConstrCheck fields for 9.2, but equalTupleDescs() did not get the memo. I looked for resulting behavior problems, and I found one in RelationClearRelation() only. Test case: set constraint_exclusion = on; drop table if exists ccvalid_test; create table ccvalid_test (c int); alter table ccvalid_test add constraint x check (c > 0) not valid; begin; -- constraint_exclusion won't use an invalid constraint. explain (costs off) select * from ccvalid_test where c = 0; -- Make it valid. alter table ccvalid_test validate constraint x; -- Local invalidation rebuilt the Relation and decided the TupleDesc hadn't -- changed, so we're still not using the constraint. explain (costs off) select * from ccvalid_test where c = 0; commit; -- At COMMIT, we destroyed the then-closed Relation in response to shared -- invalidation. Now constraint_exclusion sees the valid constraint. explain (costs off) select * from ccvalid_test where c = 0; Currently, the damage is limited to later commands in the transaction that issued ALTER TABLE VALIDATE. Changing ccvalid requires AccessExclusiveLock, so no other backend will have an affected, open relcache entry to rebuild. Shared invalidation will make the current backend destroy its affected relcache entry before starting a new transaction. However, the impact will not be so limited once we allow ALTER TABLE VALIDATE to run with a mere ShareUpdateExclusiveLock. (I discovered this bug while reviewing the patch implementing that very feature.) I don't see a way to get trouble from the ccnoinherit omission. You can't change ccnoinherit except by dropping and recreating the constraint, and each of the drop and create operations would make equalTupleDescs() detect a change. The same can be said of ccbin, but equalTupleDescs() does compare that field. For simplicity, I'll have it compare ccnoinherit. CreateTupleDescCopyConstr() also skips ccnoinherit.
I don't see a resulting live bug, but it's worth correcting. Given the minor symptoms in released versions, I lean against a back-patch. FWIW, I'd lean toward a back-patch. It's probably not a big deal either way, but I have a hard time seeing what risk we're avoiding by not back-patching, and it seems potentially confusing to leave known-wrong logic floating around in older branches. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
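The shape of the fix Noah describes is simply to include the previously omitted fields in the comparison. A toy illustration (hypothetical struct and function names, not the backend's actual ConstrCheck/equalTupleDescs code):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Simplified stand-in for the backend's ConstrCheck. */
typedef struct {
    const char *ccname;
    const char *ccbin;        /* serialized check expression */
    bool        ccvalid;      /* has the constraint been validated? */
    bool        ccnoinherit;  /* is the constraint non-inheritable? */
} CheckConstraint;

/* Compare two check constraints, including the 9.2-era flags. */
static bool
check_constraints_equal(const CheckConstraint *a, const CheckConstraint *b)
{
    return strcmp(a->ccname, b->ccname) == 0 &&
           strcmp(a->ccbin, b->ccbin) == 0 &&
           a->ccvalid == b->ccvalid &&        /* previously not compared */
           a->ccnoinherit == b->ccnoinherit;  /* previously not compared */
}
```

With the flags compared, VALIDATE CONSTRAINT produces tuple descriptors that compare unequal, so the relcache rebuild no longer mistakes the new descriptor for the old one.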
Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe
On 21 March 2014 17:49, Noah Misch n...@leadboat.com wrote: + * Be careful to ensure this function is called for Tables and Indexes only. + * It is not currently safe to be called for Views because security_barrier + * is listed as an option and so would be allowed to be set at a level lower + * than AccessExclusiveLock, which would not be correct. This statement is accepted and takes only ShareUpdateExclusiveLock: alter table information_schema.triggers set (security_barrier = true); I find it hard to justify why we accept such a statement. Surely it's a bug when the named table turns out to be a view? Presumably ALTER SEQUENCE and ALTER other stuff have checks for the correct object type? OMG. We've framed ALTER TABLE's relkind leniency as a historic artifact. As a move toward stricter checks, ALTER TABLE refused to operate on foreign tables in 9.1 and 9.2. 9.3 reversed that course, though. For better or worse, ALTER TABLE is nearly a union of the relation ALTER possibilities. That choice is well-entrenched. By well entrenched, I think you mean undocumented, untested, unintentional? Do we think anyone *relies* on being able to say the word TABLE when in fact they mean VIEW or SEQUENCE? How is that artefact anything but a bug? i.e. is anyone going to stop me fixing it? -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
Re: [HACKERS] equalTupleDescs() ignores ccvalid/ccnoinherit
On 21 March 2014 18:26, Robert Haas robertmh...@gmail.com wrote: Given the minor symptoms in released versions, I lean against a back-patch. FWIW, I'd lean toward a back-patch. It's probably not a big deal either way, but I have a hard time seeing what risk we're avoiding by not back-patching, and it seems potentially confusing to leave known-wrong logic floating around in older branches. Agreed. It could lead to some other bug by not fixing it. Well spotted, Noah, and thanks, since I believe it was my bug. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe
On 21 March 2014 03:45, Noah Misch n...@leadboat.com wrote: + * Note that Hot Standby only knows about AccessExclusiveLocks on the master + * so any changes that might affect SELECTs running on standbys need to use + * AccessExclusiveLocks even if you think a lesser lock would do, unless you + * have a solution for that also. Out of curiosity, do SELECTs on hot standbys impose known challenges in this area not shared with local SELECTs? No, but locks less than AccessExclusiveLock won't happen at all, so its a difference that if improperly handled could cause a bug. Plus I wanted to indicate I'd thought about it. - * 2. Relcache needs to be internally consistent, so unless we lock the - * definition during reads we have no way to guarantee that. I looked for hazards like this, but I found none in the ALTER forms covered by this patch. None of them modify multiple catalog rows affecting the same relcache entry. However, thinking about that did lead me to ponder another class of hazards. When backends can use one or more relations concurrently with a DDL operation affecting those relations, those backends can find themselves running with a subset of the catalog changes made within a particular DDL operation. Consider VALIDATE CONSTRAINT against an inherited constraint of an inheritance parent. It validates child table constraints, modifying one catalog row per table. At COMMIT time, we queue sinval messages for all affected tables. We add to the queue in atomic groups of WRITE_QUANTUM (64) messages. Between two such groups joining the queue, another backend may process the first group of messages. If the original DDL used AccessExclusiveLock, this is always harmless. The DDL-issuing backend still holds its lock, which means the inval-accepting backend must not have the relation open. If the inval-accepting backend later opens the affected relation, it will first acquire some lock and process the rest of the invalidations from the DDL operation. 
When doing DDL under a weaker lock, the inval-accepting backend might apply half the invalidations and immediately use them in the context of an open relation. For VALIDATE CONSTRAINT, this means a backend might briefly recognize only a subset of the inheritance tree becoming valid. (I did not actually build a test case to confirm this.) Considering that constraint exclusion is the sole consumer of convalidated/ccvalid that can run in parallel with VALIDATE CONSTRAINT, I think this is harmless. I did not find problems of this nature in any ALTER TABLE forms affected by the patch. Let's just keep it in mind during future lock level changes. I'll document pg_get_constraintdef_mvcc() still does syscache lookups by way of decompile_column_index_array(), get_constraint_index(), and deparse_expression_pretty(). It uses MVCC for things that matter for pg_dump vs. reduced lock levels, but not comprehensively. I recommend not adding a new function and instead changing pg_get_constraintdef() to use the transaction snapshot unconditionally. OK -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [RFC] What should we do for reliable WAL archiving?
On Sun, Mar 16, 2014 at 3:23 AM, MauMau maumau...@gmail.com wrote: Hello, The PostgreSQL documentation describes cp (on UNIX/Linux) or copy (on Windows) as an example for archive_command. However, cp/copy does not sync the copied data to disk. As a result, the completed WAL segments would be lost in the following sequence: 1. A WAL segment fills up. 2. The archiver process archives the just filled WAL segment using archive_command. That is, cp/copy reads the WAL segment file from pg_xlog/ and writes to the archive area. At this point, the WAL file is not persisted to the archive area yet, because cp/copy doesn't sync the writes. 3. The checkpoint processing removes the WAL segment file from pg_xlog/. Note that it takes two checkpoints for this to happen, at least as currently coded. Also, if the system crashed badly enough to need media recovery, rather than just automatic crash recovery, some lost transactions are expected. Although this could silently break your PITR chain, if a crash happened and automatic recovery used the copy in pg_xlog (which of course was synced), while the copy in the archive was not synced. 4. The OS crashes. The filled WAL segment doesn't exist anywhere any more. Considering the reliable image of PostgreSQL and widespread use in enterprise systems, I think something should be done. Could you give me your opinions on the right direction? Although the doc certainly escapes by saying (This is an example, not a recommendation, and might not work on all platforms.), it seems from pgsql-xxx MLs that many people are following this example. I use this as an example, kind of, but what I am copying to is a network mount, so any attempts to fsync it there would probably need unavailable hooks into the remote file system. Do people really just copy the files from one directory of local storage to another directory of local storage? I don't see the point of that.
But it seems like this is an area where there are hundreds of use cases, and often one doesn't see the point of other people's, making it hard to come up with good examples. * Improve the example in the documentation. But what command can we use to reliably sync just one file? * Provide some command, say pg_copy, which copies a file synchronously by using fsync(), and describes in the doc something like for simple use cases, you can use pg_copy as the standard reliable copy command. The recommendation is to refuse to overwrite an existing file of the same name, and exit with failure. Which essentially brings archiving to a halt, because it keeps trying but it will keep failing. If we make a custom version, one thing it should do is determine if the existing archived file is just a truncated version of the attempting-to-be archived file, and if so overwrite it. Because if the first archival command fails with a network glitch, it can leave behind a partial file. Cheers, Jeff
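A minimal durable copy along the lines of the proposed pg_copy could look like the POSIX sketch below. The function name and error handling are illustrative, not a real PostgreSQL utility: it writes to a temp file, fsyncs the data, renames the temp file into place, and then fsyncs the parent directory so the rename itself survives a crash.

```c
#include <fcntl.h>
#include <libgen.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Copy src to dst durably: temp file + fsync + rename + directory fsync. */
static int
durable_copy(const char *src, const char *dst)
{
    char    tmp[4096], dir[4096], buf[8192];
    ssize_t n;
    int     in, out, dirfd;

    snprintf(tmp, sizeof(tmp), "%s.tmp", dst);

    if ((in = open(src, O_RDONLY)) < 0)
        return -1;
    if ((out = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0600)) < 0)
        { close(in); return -1; }

    while ((n = read(in, buf, sizeof(buf))) > 0)
        if (write(out, buf, n) != n)
            { close(in); close(out); return -1; }
    close(in);

    if (fsync(out) < 0 || close(out) < 0)   /* data reaches stable storage */
        return -1;
    if (rename(tmp, dst) < 0)               /* atomic install of the file */
        return -1;

    /* Make the rename durable by syncing the destination directory. */
    snprintf(dir, sizeof(dir), "%s", dst);  /* dirname() may modify its arg */
    if ((dirfd = open(dirname(dir), O_RDONLY)) < 0)
        return -1;
    fsync(dirfd);                           /* best effort on some systems */
    close(dirfd);
    return 0;
}
```

A production version would also need to address Jeff's point above: decide what to do when the destination already exists, including the case where it is a truncated leftover from a failed earlier attempt.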
Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe
On Fri, Mar 21, 2014 at 06:53:27PM +0000, Simon Riggs wrote: On 21 March 2014 17:49, Noah Misch n...@leadboat.com wrote: alter table information_schema.triggers set (security_barrier = true); I find it hard to justify why we accept such a statement. Surely it's a bug when the named table turns out to be a view? Presumably ALTER SEQUENCE and ALTER other stuff have checks for the correct object type? OMG. We've framed ALTER TABLE's relkind leniency as a historic artifact. As a move toward stricter checks, ALTER TABLE refused to operate on foreign tables in 9.1 and 9.2. 9.3 reversed that course, though. For better or worse, ALTER TABLE is nearly a union of the relation ALTER possibilities. That choice is well-entrenched. By well entrenched, I think you mean undocumented, untested, unintentional? It's deliberate; a -hackers discussion revisits it perhaps once a year. The ALTER VIEW documentation says: For historical reasons, ALTER TABLE can be used with views too; but the only variants of ALTER TABLE that are allowed with views are equivalent to the ones shown above. ALTER INDEX and ALTER SEQUENCE say something similar. Do we think anyone *relies* on being able to say the word TABLE when in fact they mean VIEW or SEQUENCE? pg_dump emits statements that exercise it: psql -c 'create view v as select 1 as c; alter view v alter c set default 0;' pg_dump --table v | grep ALTER How is that artefact anything but a bug? i.e. is anyone going to stop me fixing it? It's not the behavior I would choose for a new product, but I can't see benefits sufficient to overturn previous decisions to keep it. -- Noah Misch EnterpriseDB http://www.enterprisedb.com
[HACKERS] psql blows up on BOM character sequence
See http://www.postgresql.org/message-id/4afeab39.3000...@dunslane.net This is still broken as of fairly recent HEAD; any objections to adding it to TODO? -- Jim C. Nasby, Data Architect j...@nasby.net 512.569.9461 (cell) http://jim.nasby.net
Re: [HACKERS] psql blows up on BOM character sequence
On Fri, Mar 21, 2014 at 4:02 PM, Jim Nasby j...@nasby.net wrote: See http://www.postgresql.org/message-id/4afeab39.3000...@dunslane.net This is still broken as of fairly recent HEAD; any objections to adding it to TODO? Agreed: this is a major annoyance. merlin
Re: [HACKERS] psql blows up on BOM character sequence
On 03/21/2014 05:06 PM, Merlin Moncure wrote: On Fri, Mar 21, 2014 at 4:02 PM, Jim Nasby j...@nasby.net wrote: See http://www.postgresql.org/message-id/4afeab39.3000...@dunslane.net This is still broken as of fairly recent HEAD; any objections to adding it to TODO? Agreed: this is a major annoyance. Surely if it were really a major annoyance, someone would have sent code to fix it during the last 4 years and more since the above. I suspect it's a minor annoyance :-) But by all means add it to the TODO list if it's not there already. cheers andrew
Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe
On 21 March 2014 20:58, Noah Misch n...@leadboat.com wrote: On Fri, Mar 21, 2014 at 06:53:27PM +, Simon Riggs wrote: On 21 March 2014 17:49, Noah Misch n...@leadboat.com wrote: alter table information_schema.triggers set (security_barrier = true); I find it hard to justify why we accept such a statement. Surely its a bug when the named table turns out to be a view? Presumably ALTER SEQUENCE and ALTER other stuff has checks for the correct object type? OMG. We've framed ALTER TABLE's relkind leniency as a historic artifact. As a move toward stricter checks, ALTER TABLE refused to operate on foreign tables in 9.1 and 9.2. 9.3 reversed that course, though. For better or worse, ALTER TABLE is nearly a union of the relation ALTER possibilities. That choice is well-entrenched. By well entrenched, I think you mean undocumented, untested, unintentional? It's deliberate; a -hackers discussion revisits it perhaps once a year. The ALTER VIEW documentation says: For historical reasons, ALTER TABLE can be used with views too; but the only variants of ALTER TABLE that are allowed with views are equivalent to the ones shown above. ALTER INDEX and ALTER SEQUENCE say something similar. Do we think anyone *relies* on being able to say the word TABLE when in fact they mean VIEW or SEQUENCE? pg_dump emits statements that exercise it: psql -c 'create view v as select 1 as c; alter view v alter c set default 0;' pg_dump --table v | grep ALTER How is that artefact anything but a bug? i.e. is anyone going to stop me fixing it? It's not the behavior I would choose for a new product, but I can't see benefits sufficient to overturn previous decisions to keep it. Speechless -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [RFC] What should we do for reliable WAL archiving?
From: Jeff Janes jeff.ja...@gmail.com Do people really just copy the files from one directory of local storage to another directory of local storage? I don't see the point of that. It makes sense to archive WAL to a directory of local storage for media recovery. Here, the local storage is a different disk drive which is directly attached to the database server or directly connected through SAN. The recommendation is to refuse to overwrite an existing file of the same name, and exit with failure. Which essentially brings archiving to a halt, because it keeps trying but it will keep failing. If we make a custom version, one thing it should do is determine if the existing archived file is just a truncated version of the attempting-to-be archived file, and if so overwrite it. Because if the first archival command fails with a network glitch, it can leave behind a partial file. What I'm trying to address is just an alternative to cp/copy which fsyncs a file. It just overwrites an existing file. Yes, you're right, the failed archive attempt leaves behind a partial file which causes subsequent attempts to fail, if you follow the PG manual. That's another undesirable point in the current doc. To overcome this, someone on this ML recommended that I do cp %p /archive/dir/%f.tmp && mv /archive/dir/%f.tmp /archive/dir/%f. Does this solve your problem? Regards MauMau
Re: [HACKERS] psql blows up on BOM character sequence
Andrew Dunstan and...@dunslane.net writes: Surely if it were really a major annoyance, someone would have sent code to fix it during the last 4 years and more since the above. The code would probably be pretty trivial, *if* we had consensus on what the behavior ought to be. I'm not sure if we do. People who only use Unicode would probably like it if BOMs were unconditionally swallowed, whether or not psql thinks the client_encoding is UTF8. (And I seem to recall somebody even proposing that finding a BOM be cause to switch the client_encoding to UTF8.) However, these ideas are complete nonstarters for people who habitually use other encodings. The argument about SQL syntax carries no weight for me, at least --- what about COPY data files? And I don't really want to suppose that \i can never be used to insert a portion of a SQL command, either. I'd be okay with swallowing a leading BOM if and only if client encoding is UTF8. This should apply to any file psql reads, whether script or data. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
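The behavior Tom proposes, swallowing a leading byte order mark only when the client encoding is UTF-8, amounts to a few lines wherever psql starts reading an input file. An illustrative sketch (not psql's actual code; the function name is hypothetical):

```c
#include <stdbool.h>
#include <string.h>

/*
 * If the buffer starts with a UTF-8 byte order mark (EF BB BF) and the
 * client encoding is UTF-8, return a pointer past it; otherwise return
 * the buffer unchanged.  In any other encoding those bytes may be
 * legitimate data, so they must be left alone.
 */
static const char *
skip_utf8_bom(const char *buf, size_t len, bool client_encoding_is_utf8)
{
    static const char bom[3] = {'\xEF', '\xBB', '\xBF'};

    if (client_encoding_is_utf8 && len >= 3 && memcmp(buf, bom, 3) == 0)
        return buf + 3;
    return buf;
}
```

Note this only handles a BOM at the start of a file; Merlin's concatenation case below (BOMs in the middle of a combined stream) would still misbehave.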
Re: [HACKERS] psql blows up on BOM character sequence
On Fri, Mar 21, 2014 at 4:28 PM, Tom Lane t...@sss.pgh.pa.us wrote: I'd be okay with swallowing a leading BOM if and only if client encoding is UTF8. This should apply to any file psql reads, whether script or data. Yeah. The one case that doesn't solve is: cat f1.sql f2.sql | psql ... Which is common usage in deployment systems where combining things inside a single transaction scope is important. There is no way for psql to handle that case though unless you'd strip *all* BOMs encountered. Compounding this problem is that there's no practical way AFAIK to send multiple file to psql via single command line invocation. If you pass multiple -f arguments all but one is ignored. merlin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Why is autovacuum_freeze_max_age a postmaster setting?
Why do we require a restart to change autovacuum_freeze_max_age? Can’t we respawn the autovac workers to pick up the setting? (Or just pass the HUP down to them?) -- Jim C. Nasby, Data Architect j...@nasby.net 512.569.9461 (cell) http://jim.nasby.net
Re: [HACKERS] Memory ordering issue in LWLockRelease, WakeupWaiters, WALInsertSlotRelease
Hi, I see you've committed this, cool. Sorry for not getting back to the topic earlier. On 2014-03-13 22:44:03 +0200, Heikki Linnakangas wrote: On 03/12/2014 09:29 PM, Andres Freund wrote: On 2014-03-07 17:54:32 +0200, Heikki Linnakangas wrote: So there are some unexplained differences there, but based on these results, I'm still OK with committing the patch. So, I am looking at this right now. I think there are some minor things I'd like to see addressed: 1) I think there needs to be a good sized comment explaining why WaitXLogInsertionsToFinish() isn't racy due to the unlocked read at the beginning of LWLockWait(). There's a comment inside LWLockWait(). I think that's the right place for it; it's LWLockWait() that's cheating by not acquiring the spinlock before reading lock->exclusive. I don't find that argument convincing. After all it's only correct because the API user does things in a particular way. So there should be a comment at the callsite to make sure that's not changed. 3) I am the wrong one to complain, I know, but the comments above struct WALInsertLock are pretty hard to read from the sentence structure. Hmm, ok. I reworded that, I hope it's more clear now. Yes, it is. The committed version doesn't compile with LWLOCK_STATS... Greetings, Andres Freund -- Andres Freund http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
Re: [HACKERS] psql blows up on BOM character sequence
Merlin Moncure mmonc...@gmail.com writes: There is no way for psql to handle that case though unless you'd strip *all* BOMs encountered. Compounding this problem is that there's no practical way AFAIK to send multiple files to psql via a single command line invocation. If you pass multiple -f arguments, all but one are ignored. Well, that seems like a solvable but rather independent problem. I guess one issue is how you'd define the meaning of --single ... one transaction per run, or one per file? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
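The multi-file limitation Merlin mentions (psql at the time honoring only one -f) can be worked around by concatenating the scripts into a single stream. A minimal sketch; the helper name, file names, and database name are made up for illustration, not anything from the thread:

```shell
# Concatenate several SQL scripts, in order, into one stream for a
# single psql invocation.
combine_scripts() {
    cat "$@"
}

# Hypothetical usage (commented out since it needs a running server):
# run all files in one transaction per invocation with -1/--single-transaction:
#   combine_scripts schema.sql data.sql | psql --single-transaction -f - mydb
```

This sidesteps the "one transaction per run or one per file" question Tom raises: a single concatenated stream with -1 is necessarily one transaction per run.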
Re: [HACKERS] Why is autovacuum_freeze_max_age a postmaster setting?
Hi, On 2014-03-21 16:49:53 -0500, Jim Nasby wrote: Why do we require a restart to change autovacuum_freeze_max_age? Can’t we respawn the autovac workers to pick up the setting? (Or just pass the HUP down to them?) It's more complex than notifying the workers. There are limits in shared memory that are computed based on it. Check varsup.c:SetTransactionIdLimit(). It's not entirely trivial to trigger recomputation of that value via the GUC machinery in a sensible way... But yes, I'd wished it were PGC_SIGHUP before as well. I guess we could delegate responsibility for updating the shared memory value to the autovac launcher? Greetings, Andres Freund -- Andres Freund http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Memory ordering issue in LWLockRelease, WakeupWaiters, WALInsertSlotRelease
On 2014-03-21 22:52:33 +0100, Andres Freund wrote: The committed version doesn't compile with LWLOCK_STATS... Just noticed that it seems to also break the dtrace stuff: http://pgbuildfarm.org/cgi-bin/show_log.pl?nm=rover_firefly&dt=2014-03-21%2018%3A04%3A00 Greetings, Andres Freund -- Andres Freund http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why is autovacuum_freeze_max_age a postmaster setting?
On 3/21/14, 4:55 PM, Andres Freund wrote: Hi, On 2014-03-21 16:49:53 -0500, Jim Nasby wrote: Why do we require a restart to change autovacuum_freeze_max_age? Can’t we respawn the autovac workers to pick up the setting? (Or just pass the HUP down to them?) It's more complex than notifying the workers. There are limits in shared memory that are computed based on it. Check varsup.c:SetTransactionIdLimit(). It's not entirely trivial to trigger recomputation of that value via the GUC machinery in a sensible way... But yes, I'd wished it were PGC_SIGHUP before as well. I guess we could delegate responsibility for updating the shared memory value to the autovac launcher? Does the launcher handle the SIGHUP for autovac workers? But generally speaking, yes, I think it would be sensible to only worry about the effect that setting has asynchronously from what guc.c does, *as long as* it will always be set, regardless of things like the autovac GUC. Also, maybe we should split setting ShmemVariableCache->xidVacLimit into its own function? Would that help? (Sorry, I haven't wrapped my head around the issue with calling this straight from guc.c yet...) -- Jim C. Nasby, Data Architect j...@nasby.net 512.569.9461 (cell) http://jim.nasby.net -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [RFC] What should we do for reliable WAL archiving?
On Fri, Mar 21, 2014 at 01:16:08PM -0700, Jeff Janes wrote: On Sun, Mar 16, 2014 at 3:23 AM, MauMau maumau...@gmail.com wrote: Hello, The PostgreSQL documentation describes cp (on UNIX/Linux) or copy (on Windows) as an example for archive_command. However, cp/copy does not sync the copied data to disk. As a result, the completed WAL segments would be lost in the following sequence: 1. A WAL segment fills up. 2. The archiver process archives the just filled WAL segment using archive_command. That is, cp/copy reads the WAL segment file from pg_xlog/ and writes to the archive area. At this point, the WAL file is not persisted to the archive area yet, because cp/copy doesn't sync the writes. 3. The checkpoint processing removes the WAL segment file from pg_xlog/. Note that it takes two checkpoints for this to happen, at least as currently coded. Also, if the system crashed badly enough to need media recovery, rather than just automatic crash recovery, some lost transactions are expected. This could silently break your PITR chain, though, if a crash happened and automatic recovery used the copy in pg_xlog (which of course was synced) while the copy in the archive was not synced. That is one good reason to keep checkpoint_warning=30, so the typical file system sync that happens every 30 seconds warns that those files might not be on permanent storage. -- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + Everyone has their own god. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
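To make the failure mode above concrete, a durable alternative to plain cp for archive_command might look like the following sketch. Everything here is an assumption for illustration, not anything specified in the thread: the function name, the copy-then-rename staging scheme, and the reliance on coreutils sync accepting a file argument (coreutils 8.24 and later; older versions sync everything, which is slower but still safe).

```shell
# Copy a WAL segment to the archive durably: write under a temporary
# name, fsync the data, rename into place, then fsync the directory so
# the rename itself survives a crash. Only then does the function (and
# hence archive_command) report success.
archive_wal() {
    src="$1"   # %p: path of the WAL file to archive
    dest="$2"  # full destination path in the archive, e.g. /mnt/archive/%f
    cp "$src" "$dest.tmp" &&
    sync "$dest.tmp" &&            # force the file data to stable storage
    mv "$dest.tmp" "$dest" &&      # atomic rename into its final name
    sync "$(dirname "$dest")"      # persist the directory entry
}

# Hypothetical postgresql.conf usage (paths are placeholders):
#   archive_command = 'archive_wal.sh %p /mnt/archive/%f'
```

The temporary-name step also guards against a half-written file being mistaken for a complete segment if the archiver is killed mid-copy.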
[HACKERS] Best way to know frequency of column reference?
Hello, As people may know, I've implemented a relation cache mechanism on top of the custom-plan interface that holds the contents of particular columns only, so it does not need to access storage as long as the user's query references only the columns in the in-memory cache. The key factor in how well this mechanism works is the selection of columns to be cached on the initial read, or re-read, of the related heap. The simplest idea is just picking up the columns referenced by the query on demand, and reconstructing the cache later if a subsequent query references a wider set of columns than the previous one; however, that is not a good strategy. So, I'd like to investigate a way to select the columns to be cached adaptively. Probably a better idea is column selection according to the frequency of column references in a particular time slot. Right now, pg_statistic does not record that kind of information, if I understand correctly. Is there any way to retrieve how many times columns were referenced? Or do I need to implement my own extension to track it? Thanks, -- KaiGai Kohei kai...@kaigai.gr.jp -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] equalTupleDescs() ignores ccvalid/ccnoinherit
Robert Haas robertmh...@gmail.com writes: On Fri, Mar 21, 2014 at 12:12 AM, Noah Misch n...@leadboat.com wrote: Given the minor symptoms in released versions, I lean against a back-patch. FWIW, I'd lean toward a back-patch. It's probably not a big deal either way, but I have a hard time seeing what risk we're avoiding by not back-patching, and it seems potentially confusing to leave known-wrong logic floating around in older branches. I agree with Robert. This is a bug, let's fix it. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe Reply-To:
Simon Riggs si...@2ndquadrant.com writes: On 21 March 2014 20:58, Noah Misch n...@leadboat.com wrote: It's not the behavior I would choose for a new product, but I can't see benefits sufficient to overturn previous decisions to keep it. Speechless The key argument for not fixing this is that it would break existing pg_dump files. That's a pretty hard argument to overcome, unfortunately, even if you're willing to blow off the possibility that client applications might contain similar shortcuts. We still do our best to read dump files from the 7.0 era (see ConvertTriggerToFK() for one example of going above and beyond for that); and every so often we do hear of people trying to get data out of such ancient servers. So even if you went and fixed pg_dump tomorrow, it'd probably be ten or fifteen years before people would let you stop reading dumps from existing versions. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Standby server won't start
That's because the parameter is checked at the beginning of recovery (i.e. at standby start) before XLOG_PARAMETER_CHANGE is received and applied on the standby. Please see CheckRequiredParameterValues() in StartupXLOG(). To persist the max_connections change: 1) stop primary 2) change max_connections on the primary 3) start primary 4) watch pg_stat_replication to wait until the standby is in sync with the primary (XLOG_PARAMETER_CHANGE is applied) 5) stop standby 6) change max_connections on the standby 7) start standby Unfortunately this did not work for me. pg_stat_replication showed replay_location and sent_location are identical, and I assume the standby is in sync with the primary in step #4. Still the standby did not start in #7 with the same error message I showed. This is PostgreSQL 9.3.3. Also, pg_controldata on the standby DB cluster showed the old max_connections at #7. So I guess XLOG_PARAMETER_CHANGE has not been sent for some reason. Will look into this. Best regards, -- Tatsuo Ishii SRA OSS, Inc. Japan English: http://www.sraoss.co.jp/index_en.php Japanese: http://www.sraoss.co.jp -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
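The seven-step procedure quoted above can be sketched in shell. All of this is illustrative only: the PGDATA paths, the new value, and the assumption that max_connections appears on a single (possibly commented) line in postgresql.conf are mine, not from the thread.

```shell
# Rewrite the max_connections line in a postgresql.conf, leaving every
# other line untouched. Writes to a temp file and renames, to avoid
# non-portable 'sed -i' behavior.
bump_max_connections() {
    conf="$1"; newval="$2"
    sed "s/^#\{0,1\} *max_connections *=.*/max_connections = $newval/" "$conf" > "$conf.tmp" &&
    mv "$conf.tmp" "$conf"
}

# Hypothetical full sequence (commented out; adjust paths for your setup):
#   pg_ctl -D "$PRIMARY" stop -m fast                          # 1
#   bump_max_connections "$PRIMARY/postgresql.conf" 200        # 2
#   pg_ctl -D "$PRIMARY" start                                 # 3
#   ...wait until pg_stat_replication shows replay caught up   # 4
#   pg_ctl -D "$STANDBY" stop -m fast                          # 5
#   bump_max_connections "$STANDBY/postgresql.conf" 200        # 6
#   pg_ctl -D "$STANDBY" start                                 # 7
```

Step 4 is the crucial one: the standby must have replayed the primary's XLOG_PARAMETER_CHANGE record before it is restarted, which is exactly the point under debate in this thread.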
[HACKERS] Partial index locks
Hi, I've created a table with 1000 partial indexes. Each one matches exactly one row based on the predicate WHERE id = value. However, when I perform an UPDATE of a single row in a transaction, I've noticed that all those partial indexes show up in pg_locks with RowExclusiveLock. Only 2 of those indexes have a reference to the row: the primary key and a single partial index. Is it necessary for a partial index that doesn't include the row to be involved in locking? Thanks Thom -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Partial index locks
On 03/22/2014 01:43 AM, Thom Brown wrote: Hi, I've created a table with 1000 partial indexes. Each one matches exactly one row based on the predicate WHERE id = value. However, when I perform an UPDATE of a single row in a transaction, I've noticed that all those partial indexes show up in pg_locks with RowExclusiveLock. Only 2 of those indexes have a reference to the row: the primary key and a single partial index. Is it necessary for a partial index that doesn't include the row to be involved in locking? What if the update puts the row into one of the other indexes? -- Vik -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] psql blows up on BOM character sequence
On Mar 21, 2014, at 2:16 PM, Andrew Dunstan and...@dunslane.net wrote: Surely if it were really a major annoyance, someone would have sent code to fix it during the last 4 years and more since the above. I suspect it's a minor annoyance :-) But by all means add it to the TODO list if it's not there already. I have cleaned up many a BOM added to files that made psql blow up. I think PGAdmin III was a culprit, though I’m not sure (I don’t use it; I cleaned up after coworkers who do). David -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Patch for CREATE RULE sgml -- Was in: [DOCS]
On Sat, Mar 22, 2014 at 12:56 AM, Emanuel Calvo emanuel.ca...@2ndquadrant.com wrote: I realized that the output of CREATE RULE does not have a detailed listing for the event parameter. But the question here is that I'm not sure which format to follow: { INSERT | UPDATE | DELETE | SELECT } or listing each event on its own line: INSERT UPDATE DELETE SELECT -- I attach a patch for each one. I sent a mail to pgsql-docs but no response has been received. The list of possible events is already given in the Parameters section, under event. Page: http://www.postgresql.org/docs/devel/static/sql-createrule.html Quote: The event is one of SELECT, INSERT, UPDATE, or DELETE. Regards, -- Michael -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Partial index locks
On 22 March 2014 00:59, Vik Fearing vik.fear...@dalibo.com wrote: On 03/22/2014 01:43 AM, Thom Brown wrote: Hi, I've created a table with 1000 partial indexes. Each one matches exactly one row based on the predicate WHERE id = value. However, when I perform an UPDATE of a single row in a transaction, I've noticed that all those partial indexes show up in pg_locks with RowExclusiveLock. Only 2 of those indexes have a reference to the row: the primary key and a single partial index. Is it necessary for a partial index that doesn't include the row to be involved in locking? What if the update puts the row into one of the other indexes? Well here's where I'm confused. The entries in pg_locks show that a RowExclusiveLock is being held on the index for which there is no matching row. What does that translate as? There is also a RowExclusiveLock on the table itself too, which is what I expect to see. Also, a delete results in all the locks being taken too. That can't possibly result in a new entry being put into any of those indexes. As those indexes don't contain references to the row, what is it locking? -- Thom -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Patch for CREATE RULE sgml -- Was in: [DOCS]
On Fri, Mar 21, 2014 at 8:15 PM, Michael Paquier michael.paqu...@gmail.com wrote: On Sat, Mar 22, 2014 at 12:56 AM, Emanuel Calvo emanuel.ca...@2ndquadrant.com wrote: I realized that the output of CREATE RULE does not have a detailed listing for the event parameter. The list of possible events is already given in the Parameters section, under event. AFAIU, the synopsis is used to build the help command (\h) in psql. Currently the events don't appear in that help. btw, CREATE TRIGGER already looks this way: http://www.postgresql.org/docs/current/static/sql-createtrigger.html -- Jaime Casanova www.2ndQuadrant.com Professional PostgreSQL: Soporte 24x7 y capacitación Phone: +593 4 5107566 Cell: +593 987171157 -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Partial index locks
Thom Brown t...@linux.com writes: Is it necessary for a partial index that doesn't include the row to be involved in locking? Yes. You can't determine whether the index needs to get a new entry without examining its metadata, and that's what the lock is mainly about. The only possible alternative would be to take the minimum possible lock (AccessShareLock) on each index so its metadata would hold still, and then upgrade that to RowExclusiveLock on the one(s) we find need insertions. This is not better; it means *more* lock management traffic not less, and lock upgrades increase the potential for deadlocks. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers