Re: [HACKERS] WIP: preloading of ispell dictionary
Takahiro Itagaki wrote:
> Pavel Stehule <pavel.steh...@gmail.com> wrote:
>> I wrote a small patch that allows preloading of a selected ispell dictionary. It solves the problem of slow tsearch initialisation with some language configurations. I'm afraid this module doesn't help on MS Windows.
>
> I think it should work on all platforms if we include it into the core.

It will work, as in it will compile and run. It just won't be any faster. I think that's enough; otherwise you could argue that we shouldn't have the preload_shared_libraries option at all, because it won't help on Windows.

> The fundamental issue seems to be the slow initialization of dictionaries. If so, how about adding a pre-compile tool to convert a dictionary into a binary file, and having each backend simply mmap it?

Yeah, that would be better.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: [COMMITTERS] pgsql: Make standby server continuously retry restoring the next WAL
Sorry for the delay.

On Fri, Mar 19, 2010 at 8:37 PM, Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> wrote:
> Here's a patch I've been playing with.

Thanks! I'm reading the patch.

The idea is that in standby mode, the server keeps trying to make progress in recovery by:
a) restoring files from the archive
b) replaying files from pg_xlog
c) streaming from the master

When recovery reaches an invalid WAL record, typically caused by a half-written WAL file, it closes the file and moves on to the next source. If an error is found in a file restored from the archive or in a portion just streamed from the master, however, a PANIC is thrown, because errors are not expected in the archive or on the master.

But in the current (v8.4 or earlier) behavior, recovery ends normally when an invalid record is found in an archived WAL file. Otherwise, the server would never be able to start normal processing when an archived file is corrupted for some reason. So such an invalid record should not be treated as a PANIC if the server is not in standby mode or the trigger file has been created. Thoughts?

When I tested the patch, the following PANIC error was thrown during normal archive recovery. This seems to derive from the above change. The detailed error sequence:
1. In ReadRecord(), emode was set to PANIC after 0001000B was read.
2. 0001000C, containing the contrecord, was then read using that emode (= PANIC). But since 0001000C did not exist, a PANIC error was thrown.

----
LOG: restored log file 0001000B from archive
cp: cannot stat `../data.arh/0001000C': No such file or directory
PANIC: could not open file pg_xlog/0001000C (log file 0, segment 12): No such file or directory
LOG: startup process (PID 17204) was terminated by signal 6: Aborted
LOG: terminating any other active server processes
----

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
Re: [HACKERS] Windowing Qual Pushdown
2010/3/21 Daniel Farina <drfar...@acm.org>:
> In the function subquery_is_pushdown_safe, there is an immediate false returned if the subquery has a windowing function. While that seems true in general, are there cases where we can push down a qual if it is on the partitioning key? Or do NULLs or some other detail get in the way?

Ugh, that seems true. In a similar case you can push down the WHERE clause of the outer query into the subquery if the qual matches the GROUP BY clause. This is done by transforming outer WHERE -> HAVING -> inner WHERE. However, window function queries don't have a clause like an aggregate's HAVING. If you implement that optimization, we need some kind of implicit, homologous qual information. Sure, it's possible.

Regards,
--
Hitoshi Harada
Re: [HACKERS] Windowing Qual Pushdown
On Tue, Mar 23, 2010 at 12:19 AM, Hitoshi Harada <umi.tan...@gmail.com> wrote:
> If you implement that optimization, we need some kind of implicit, homologous qual information. Sure, it's possible.

I'm not sure precisely what you mean here. Do you predict the mechanism will be complicated? It's been a burning itch of mine for a little while now. I do not know exactly how windowing functions look in Query values just yet, although I'm very familiar with older structures there.

fdr
Re: [HACKERS] Windowing Qual Pushdown
2010/3/23 Daniel Farina <drfar...@acm.org>:
> On Tue, Mar 23, 2010 at 12:19 AM, Hitoshi Harada <umi.tan...@gmail.com> wrote:
>> If you implement that optimization, we need some kind of implicit, homologous qual information. Sure, it's possible.
>
> I'm not sure precisely what you mean here. Do you predict the mechanism will be complicated? It's been a burning itch of mine for a little while now. I do not know exactly how windowing functions look in Query values just yet, although I'm very familiar with older structures there.

I believe the changes will probably not be 2-3 lines (i.e. a member added to the Query structure, etc.) if I try it. But the optimizer part is too complicated for me, so I am not sure either. My idea above is that a mechanism similar to the one you see in the GROUP BY optimization will help you, and the issue is not so particular to window functions.

Regards,
--
Hitoshi Harada
Re: [HACKERS] WIP: preloading of ispell dictionary
2010/3/23 Takahiro Itagaki <itagaki.takah...@oss.ntt.co.jp>:
> Pavel Stehule <pavel.steh...@gmail.com> wrote:
>> I wrote a small patch that allows preloading of a selected ispell dictionary. It solves the problem of slow tsearch initialisation with some language configurations. I'm afraid this module doesn't help on MS Windows.
>
> I think it should work on all platforms if we include it into the core. We should continue to research shared memory or mmap approaches.
>
> The fundamental issue seems to be the slow initialization of dictionaries. If so, how about adding a pre-compile tool to convert a dictionary into a binary file, and having each backend simply mmap it?

It means loading about 25MB from disk for every first tsearch query - sorry, I don't believe that can be good.

> BTW, SimpleAllocContextCreate() is not used at all in the patch. Do you still need it?

Yes - I needed it. Without the Simple Allocator, the cz configuration takes 48MB. Only a few parts have to be supported by the Simple Allocator - the others have no significant impact - so I didn't want to uglify more code. In my first patch I verified that the dictionary data are read-only, so I was motivated to use the Simple Allocator everywhere. It is not necessary for the preload method.

Pavel

> Regards,
> ---
> Takahiro Itagaki
> NTT Open Source Software Center
Re: [HACKERS] WIP: preloading of ispell dictionary
2010/3/23 Pavel Stehule <pavel.steh...@gmail.com>:
> 2010/3/23 Takahiro Itagaki <itagaki.takah...@oss.ntt.co.jp>:
>> The fundamental issue seems to be the slow initialization of dictionaries. If so, how about adding a pre-compile tool to convert a dictionary into a binary file, and having each backend simply mmap it?
>
> It means loading about 25MB from disk for every first tsearch query - sorry, I don't believe that can be good.

The operating system's VM subsystem should make that a non-problem. "Loading" is also not the word I would use for what mmap does.

Nicolas
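The mmap idea from upthread can be sketched in a few lines of C. This is a hypothetical sketch, not code from any patch in the thread; `map_dict_file` is a made-up name. The point Nicolas makes is visible in the flags: with PROT_READ and MAP_SHARED the kernel shares the dictionary pages across all backends and faults them in lazily, so the "25MB load" is not a per-backend copy.

```c
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

/* Map a precompiled dictionary file read-only.  The page cache backs the
 * mapping, so all backends share one copy and nothing is read from disk
 * until a page is actually touched. */
static void *
map_dict_file(const char *path, size_t *size)
{
	struct stat st;
	void	   *base;
	int			fd = open(path, O_RDONLY);

	if (fd < 0)
		return NULL;
	if (fstat(fd, &st) < 0)
	{
		close(fd);
		return NULL;
	}
	base = mmap(NULL, (size_t) st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	close(fd);					/* the mapping survives the close */
	if (base == MAP_FAILED)
		return NULL;
	*size = (size_t) st.st_size;
	return base;
}
```

A backend would call this once, on first use of the dictionary, and keep the base pointer for the life of the process.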
Re: [HACKERS] proposal: more practical view on function's source code
Modification of proposal. I think the discussion yields two points:

a) enhanced editing:
\ef funcname, line ... edit function and move cursor to line
\ef ... edit function - name and line taken from the last error message

b) enhanced viewing:
\sf funcname ... show function source code without any decoration - good for copy/paste
\sf+ funcname ... show function with line numbers added, maybe more in future
\sf+ funcname, line ... show function starting from line
\sf ... show function - name and line taken from the last error message

Plus add a new system variable PG_EDITOR_OPTION??

What do you think about it?

Regards
Pavel

2010/3/22 Peter Eisentraut <pete...@gmx.net>:
> On sön, 2010-03-21 at 20:40 -0400, Robert Haas wrote:
>> \ef function-name line-number with suitable magic to get the editor to place the cursor at that line. I suspect this wouldn't be too hard to do with emacs --- what do you think about vi?
>
> Well, in vi you can just do vi +linenum filename. I think that's a pretty widely spread convention. A quick test shows that all of emacs, vi, joe, and nano support this. Of course there are editors that don't support it, so we'll have to distinguish that somehow, but it won't be too complicated to support a few of the common editors.
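The "vi +linenum filename" convention Peter mentions is straightforward to support when psql assembles the editor command line. A minimal sketch; `build_editor_command` is a hypothetical helper name, and real psql may assemble the command differently:

```c
#include <stdio.h>
#include <stddef.h>

/* Build the command for an editor that understands the "+linenum"
 * convention, e.g. "vi +42 /tmp/psql.edit.sql".  Returns the number of
 * characters that snprintf would have written. */
static int
build_editor_command(char *buf, size_t buflen,
					 const char *editor, int linenum, const char *file)
{
	if (linenum > 0)
		return snprintf(buf, buflen, "%s +%d %s", editor, linenum, file);
	return snprintf(buf, buflen, "%s %s", editor, file);
}
```

Editors that lack the convention would need a per-editor template, which is the "distinguish that somehow" part of Peter's note.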
Re: [HACKERS] WIP: preloading of ispell dictionary
2010/3/23 Nicolas Barbier <nicolas.barb...@gmail.com>:
> 2010/3/23 Pavel Stehule <pavel.steh...@gmail.com>:
>> It means loading about 25MB from disk for every first tsearch query - sorry, I don't believe that can be good.
>
> The operating system's VM subsystem should make that a non-problem. "Loading" is also not the word I would use for what mmap does.

Maybe we can do some manipulation inside memory - I don't have any knowledge of mmap. With the Simple Allocator we can have the dictionary data as one block. The problems are the pointers, but I believe they can be replaced by offsets. Personally I dislike the idea of a dictionary precompiler - it is one more application to maintain, and maybe unnecessary. And you would still need another application for loading.

p.s. I am able to serialise the Czech dictionary, because it uses only simple regexps.

Pavel
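Pavel's offsets-instead-of-pointers idea can be illustrated like this. The structure and function names are invented for the example; the real ispell structures differ. Because every child reference is an offset from the block base rather than an absolute pointer, the serialized block is position-independent and stays valid wherever mmap happens to place it:

```c
#include <stdint.h>

/* A node inside a single serialized dictionary block.  DictNode and its
 * fields are illustrative only, not the actual ispell structures. */
typedef struct DictNode
{
	char		letter;
	uint32_t	child_off;		/* 0 = no child, else offset from block base */
} DictNode;

/* Resolve a stored offset to a pointer, valid at any mapping address. */
static DictNode *
node_at(char *base, uint32_t off)
{
	return off ? (DictNode *) (base + off) : (DictNode *) 0;
}

/* Store a child link as an offset instead of a raw pointer. */
static void
set_child(char *base, DictNode *parent, DictNode *child)
{
	parent->child_off = (uint32_t) ((char *) child - base);
}
```

The serializer writes offsets at build time; readers only ever call `node_at`, so no pointer fixup pass is needed after mapping.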
Re: [HACKERS] Windowing Qual Pushdown
On Tue, Mar 23, 2010 at 12:40 AM, Hitoshi Harada <umi.tan...@gmail.com> wrote:
> I believe the changes will probably not be 2-3 lines (i.e. a member added to the Query structure, etc.) if I try it. But the optimizer part is too complicated for me, so I am not sure either. My idea above is that a mechanism similar to the one you see in the GROUP BY optimization will help

Are you suggesting that the windowing clause should perhaps refer to a column in the target list, much like GROUP BY/ORDER BY, so that one can easily see whether the qual in the fromexpr corresponds to the windowClause, to determine whether the pushdown is safe?

fdr
Re: [HACKERS] Ragged latency log data in multi-threaded pgbench
On Tue, Mar 23, 2010 at 05:10, Takahiro Itagaki <itagaki.takah...@oss.ntt.co.jp> wrote:
> Greg Smith <g...@2ndquadrant.com> wrote:
>> By the way: the pgbench.sgml that you committed looks like it passed through a system that added a CR to every line in it. Probably not the way you intended to commit that.
>
> Oops, fixed. Thanks.

My guess is that this happened because you committed from Windows? I learned the hard way that this is a bad idea. (Luckily I learned it on other CVS projects, before I started committing to PostgreSQL.) There are settings to make it not do that, but they are not reliable. I'd strongly suggest that you always just do a cvs diff on Windows and then use a staging machine running Linux or BSD or something to apply it. And then you *always* run those patches through something like fromdos. It's a bunch of extra steps, but it's really the only way to do it reliably.

If that's not at all what happened, then, well, it's still good advice I think, even if it doesn't apply :-)

--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
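The fromdos step Magnus recommends amounts to the following. This is a sketch of the idea, not the actual fromdos tool:

```c
#include <stddef.h>

/* Minimal fromdos-style filter: drop the CR of every CRLF pair in the
 * buffer, in place, and return the new length.  Lone CRs are kept. */
static size_t
strip_cr(char *buf, size_t len)
{
	size_t		src,
				dst = 0;

	for (src = 0; src < len; src++)
	{
		if (buf[src] == '\r' && src + 1 < len && buf[src + 1] == '\n')
			continue;			/* skip the CR, keep the LF */
		buf[dst++] = buf[src];
	}
	return dst;
}
```

Running every patch through such a filter before applying it on the staging machine guarantees no stray CRs reach the repository.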
Re: [HACKERS] Proposal: access control jails (and introduction as aspiring GSoC student)
Peter Eisentraut <pete...@gmx.net> writes:
> Well, sudo is pretty useful, and this would be quite similar.

+1.

I guess one of the big difficulties would be matching a given arbitrary query against the list of queries we have in any jail, given that we put generic queries in there and we want to allow specific queries. But once we have that, it could turn out to be pretty useful for other ideas too. I can't find it again in the archives, but one idea was to collect statistics on views rather than plain tables, so that you can have correlated stats on JOINs and some columns, etc. The hard part there, too, looks like being able to tell at runtime that a given query is a specific form of an existing view.

Regards,
--
dim
[HACKERS] Deadlock possibility in _bt_check_unique?
Hi,

With the implementation of deferred unique constraints, we need to go back to the index a second time to check whether the unique check is valid. Say a situation like this occurs:

a) the first session doing the unique check finds out that a second unique check is required, makes its entry, and comes back
b) the second session doing the unique check finds out that a second unique check is required, makes its entry, and comes back

While they do the second check, the first session will wait for the second session to complete and vice versa. Won't this result in a deadlock? Isn't this a realistic scenario?

Thanks,
Gokul.
Re: [HACKERS] Deadlock possibility in _bt_check_unique?
Gokulakannan Somasundaram wrote:
> With the implementation of deferred unique constraints, we need to go back to the index a second time to check whether the unique check is valid. Say a situation like this occurs:
> a) the first session doing the unique check finds out that a second unique check is required, makes its entry, and comes back
> b) the second session doing the unique check finds out that a second unique check is required, makes its entry, and comes back
> While they do the second check, the first session will wait for the second session to complete and vice versa. Won't this result in a deadlock? Isn't this a realistic scenario?

Yes, that can happen. The deadlock detector will kick in and abort one of the sessions.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Re: [HACKERS] Deadlock possibility in _bt_check_unique?
Can you also explain how we avoid duplicates in this scenario?

a) Say there are three pages (P, Q, R) full of duplicate tuples with id x that are deleted but not dead (due to some long-running transaction).
b) Now session A comes in and checks the duplicate tuples for liveness against the heap, with a shared lock on pages P, Q and R. Since all are deleted, it gets the message that it need not come back to check uniqueness again. Finally it starts again from P to look for free space to insert its tuple. Say it inserts the tuple on page Q.
c) Now session B (with the same id x) starts after session A, but it passes Q before session A's insertion. It also gets the response from _bt_check_unique that it need not come back for a second unique check. Now it looks for free space from P onwards and finds it on P. Then it inserts the new record on P itself.

So we have two duplicate records, even though there is a unique constraint. Is this a possible scenario?

Thanks,
Gokul.
Re: [HACKERS] Windowing Qual Pushdown
2010/3/23 Daniel Farina <drfar...@acm.org>:
> On Tue, Mar 23, 2010 at 12:40 AM, Hitoshi Harada <umi.tan...@gmail.com> wrote:
>> I believe the changes will probably not be 2-3 lines (i.e. a member added to the Query structure, etc.) if I try it. But the optimizer part is too complicated for me, so I am not sure either. My idea above is that a mechanism similar to the one you see in the GROUP BY optimization will help
>
> Are you suggesting that the windowing clause should perhaps refer to a column in the target list, much like GROUP BY/ORDER BY, so that one can easily see if the qual in the fromexpr corresponds to the windowClause to see if the pushdown is safe?

The windowing clause refers to the targetlist as resjunk columns. I thought we would need some intermediate data, like havingQual, to tell what is pushed down to the subquery, because the pushdown of GROUP BY columns is done later, in subquery_planner() of the subquery (as pushed-down havingQual), not in set_subquery_pathlist().

However, I found the real problem. If the query has multiple window definitions, at this stage you cannot tell whether the pushdown is safe, nor how to do it, because the order of evaluation of the individual windows is decided later, in grouping_planner(). So a workaround is to limit this optimization to the single-window-definition case, but that seems too narrow a solution. Maybe there are opportunities in setrefs.c to do it after grouping_planner(), but I'm not quite sure.

Regards,
--
Hitoshi Harada
Re: [HACKERS] Windowing Qual Pushdown
Hitoshi Harada <umi.tan...@gmail.com> writes:
> I believe the changes will probably not be 2-3 lines (i.e. a member added to the Query structure, etc.) if I try it. But the optimizer part is too complicated for me, so I am not sure either. My idea above is that a mechanism similar to the one you see in the GROUP BY optimization will help you, and the issue is not so particular to window functions.

The real question is what benefit you expect to get. If the filter condition can't be pushed below the window functions (which AFAICS it can't without changing the results) then there is really nothing to be gained compared to leaving it in the outer query level.

regards, tom lane
Re: [HACKERS] Deadlock possibility in _bt_check_unique?
Gokulakannan Somasundaram <gokul...@gmail.com> writes:
> Can you also explain how we avoid duplicates in this scenario?
> a) Say there are three pages (P, Q, R) full of duplicate tuples with id x that are deleted but not dead (due to some long-running transaction).
> b) Now session A comes in and checks the duplicate tuples for liveness against the heap, with a shared lock on pages P, Q and R. Since all are deleted, it gets the message that it need not come back to check uniqueness again. Finally it starts again from P to look for free space to insert its tuple. Say it inserts the tuple on page Q.
> c) Now session B (with the same id x) starts after session A, but it passes Q before session A's insertion. It also gets the response from _bt_check_unique that it need not come back for a second unique check. Now it looks for free space from P onwards and finds it on P. Then it inserts the new record on P itself.
> So we have two duplicate records, even though there is a unique constraint. Is this a possible scenario?

Are you talking about exclusion constraints or btree uniqueness constraints? This doesn't seem to be a particularly accurate description of the implementation of either one. The way btree deals with this is explained in _bt_doinsert:

 * NOTE: obviously, _bt_check_unique can only detect keys that are already
 * in the index; so it cannot defend against concurrent insertions of the
 * same key.  We protect against that by means of holding a write lock on
 * the target page.  Any other would-be inserter of the same key must
 * acquire a write lock on the same target page, so only one would-be
 * inserter can be making the check at one time.  Furthermore, once we are
 * past the check we hold write locks continuously until we have performed
 * our insertion, so no later inserter can fail to see our insertion.
 * (This requires some care in _bt_insertonpg.)

regards, tom lane
Re: [HACKERS] Windowing Qual Pushdown
On Tue, Mar 23, 2010 at 8:23 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> The real question is what benefit you expect to get. If the filter condition can't be pushed below the window functions (which AFAICS

Even on the partition key? Right now, if you define a view with a window function and a PARTITION BY clause in it, and people write a lot of queries that interrogate one partition or another, you end up computing results for the entire relation and then filtering all but one partition out, in my understanding. Since on the surface there seems to be no context sensitivity between partitions in this kind of case, a qual pushdown on the partition key would seem to help rather intensely.

fdr
[HACKERS] Mismatch in libpqwalreceiver
There's a mismatch in HEAD between the README and the actual definition in replication/libpqwalreceiver.

In the README:

    bool walrcv_receive(int timeout, XLogRecPtr *recptr, char **buffer, int *len)

but in walreceiver.h:

    typedef bool (*walrcv_receive_type) (int timeout, unsigned char *type, char **buffer, int *len);

It seems this commit forgot the README fix:
http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/include/replication/walreceiver.h?r1=1.5&r2=1.6

Regards,
--
Hitoshi Harada
Re: [HACKERS] 9.0 release notes done
Bruce,

I thought this year we were going to start using people's full names instead of first names, for clarity. No?

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com
Re: [HACKERS] Proposal: access control jails (and introduction as aspiring GSoC student)
On 3/21/10 9:36 PM, Joseph Adams wrote:
> Inside of the jail definition is a series of pseudo-statements that indicate the space of queries the user can perform. Simply creating a jail does not make it go into effect. A jail is activated using another query, and it remains in effect for the remainder of the session. It cannot be deactivated through the protocol, as doing so would constitute a privilege escalation.

This is an interesting approach, and I don't think that most of the people commenting on this list have quite grasped it. I see two major difficulties to solve with this approach: (1) developing a way of phrasing the query stubs that would allow common things like dynamic WHERE clauses, ORDER BY, and LIMIT, and (2) whether it's practical for the author of any real application to define all of those queries beforehand.

For (1), you might want to look at Meredith's libDejector, which takes a similar approach for SQL-injection protection: http://www.thesmartpolitenerd.com/code/dejector.html

I don't think that the idea of turning on the jail mode via a session-level switch works, given the realities of connection pooling. Also, I do not believe that we currently have any USERSET variable which can be turned on but not off, so that would require adding a whole new mode.

BTW, if you wanted something less ambitious, we have a longstanding request to implement a "local superuser", that is, the ability to give one role the ability to edit other roles in one database only.

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com
Re: [HACKERS] 9.0 release notes done
On Tue, Mar 23, 2010 at 1:09 PM, Josh Berkus <j...@agliodbs.com> wrote:
> I thought this year we were going to start using people's full names instead of first names, for clarity. No?

+1 for that approach.

...Robert
Re: [HACKERS] Proposal: access control jails (and introduction as aspiring GSoC student)
On Tue, Mar 23, 2010 at 1:28 PM, Josh Berkus <j...@agliodbs.com> wrote:
> I don't think that the idea of turning on the jail mode via a session-level switch works, given the realities of connection pooling. Also, I do not believe that we currently have any USERSET variable which can be turned on but not off, so that would require adding a whole new mode.

I think this could be done with an assign hook.

> BTW, if you wanted something less ambitious, we have a longstanding request to implement a "local superuser", that is, the ability to give one role the ability to edit other roles in one database only.

But roles aren't database-specific... they're globals.

...Robert
Re: [HACKERS] Deadlock possibility in _bt_check_unique?
> Are you talking about exclusion constraints or btree uniqueness constraints? This doesn't seem to be a particularly accurate description of the implementation of either one. The way btree deals with this is explained in _bt_doinsert:

Unique constraints.

> * NOTE: obviously, _bt_check_unique can only detect keys that are already
> * in the index; so it cannot defend against concurrent insertions of the
> * same key.  We protect against that by means of holding a write lock on
> * the target page.  Any other would-be inserter of the same key must
> * acquire a write lock on the same target page, so only one would-be
> * inserter can be making the check at one time.  Furthermore, once we are
> * past the check we hold write locks continuously until we have performed
> * our insertion, so no later inserter can fail to see our insertion.
> * (This requires some care in _bt_insertonpg.)

This is fine if the second session has to pass through the page where the first session inserted the record. But, as I said, if the second session finds a free slot before hitting the page where the first session inserted, then it will never hit the page with the write lock.

The comment says that once we are past the check we hold write locks continuously until we have performed our insertion, so no later inserter can fail to see our insertion. But in the case I suggested (pages p1, p2, p3 with non-dead duplicate tuples that are deleted), the first session checks page 1, finds no free space, moves to page 2, finds free space, and inserts. But when the second session checks page 1, say some of the tuples have become dead. Then it finds free space there and inserts, and never sees the first session's insertion.

Maybe I am missing something; I just thought of raising the flag.

Gokul.
Re: [HACKERS] Deadlock possibility in _bt_check_unique?
Gokulakannan Somasundaram <gokul...@gmail.com> writes:
> This is fine if the second session has to pass through the page where the first session inserted the record. But, as I said, if the second session finds a free slot before hitting the page where the first session inserted, then it will never hit the page with the write lock.

No, you don't understand how it works. All would-be inserters will hit the same target page to begin with, i.e. the first one that the new key could legally be inserted on. The lock that protects against this problem is the lock on that page, regardless of which page the key actually ends up on.

regards, tom lane
Re: [HACKERS] Deadlock possibility in _bt_check_unique?
> No, you don't understand how it works. All would-be inserters will hit the same target page to begin with, i.e. the first one that the new key could legally be inserted on. The lock that protects against this problem is the lock on that page, regardless of which page the key actually ends up on.

Consider time instants T1, T2, T3, T4:

T1: session 1 holds the write lock on page p1 and completes the unique check on p1, p2 and p3.
T2: session 1 releases the lock on p1 (it is waiting to acquire an exclusive lock on p2).
T3: session 2 acquires the write lock on p1 and completes the unique check on p1, p2 and p3. Here I believe session 2 has a chance of getting the lock before session 1 gets the write lock on p2. Is that not possible?
T4: session 1 gets the write lock on p2 and inserts; session 2 gets the write lock on p1 and inserts.

OK, I have stated my assumptions. Is my assumption at time instant T3 wrong / can it never happen?

Thanks,
Gokul.
Re: [HACKERS] Repeating Append operation
On Sun, Mar 21, 2010 at 4:29 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Mar 19, 2010 at 2:09 PM, Gurjeet Singh <singh.gurj...@gmail.com> wrote:
>> Is there a way to avoid this double evaluation?
>
> Maybe with a CTE? WITH x AS (...) SELECT ... It does look like surprising behavior.

It was discussed on IRC that same day, and RhodiumToad (Andrew) pointed out that this behaviour is because of subquery un-nesting. Putting an OFFSET 0 clause (hint) in the inline view prevents it from being merged with the outer query:

explain select v from ( select array( select 1 union all select 2) as v from (select 1) offset 0) as s where v is not null;

                              QUERY PLAN
----------------------------------------------------------------------
 Subquery Scan s  (cost=0.04..0.07 rows=1 width=32)
   Filter: (v IS NOT NULL)
   ->  Limit  (cost=0.04..0.06 rows=1 width=0)
         InitPlan
           ->  Append  (cost=0.00..0.04 rows=2 width=0)
                 ->  Result  (cost=0.00..0.01 rows=1 width=0)
                 ->  Result  (cost=0.00..0.01 rows=1 width=0)
         ->  Subquery Scan __unnamed_subquery_0  (cost=0.00..0.02 rows=1 width=0)
               ->  Result  (cost=0.00..0.01 rows=1 width=0)
(9 rows)

This raises the point that we do subquery un-nesting purely on heuristics, and not on a cost basis. I guess we should be doing a cost comparison too. I think this un-nesting happens quite a bit before we start generating alternative plans for cost comparison, and we might not have costs to compare at that stage, but IMHO we should somehow incorporate cost comparisons too.

Regards,
--
gurjeet.singh @ EnterpriseDB - The Enterprise Postgres Company
http://www.enterprisedb.com
singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet
Mail sent from my BlackLaptop device
Re: [HACKERS] Deadlock possibility in _bt_check_unique?
Gokulakannan Somasundaram gokul...@gmail.com writes: Consider Time instances T1, T2, T3, T4 T1 : session 1 holds the write lock on page p1 and completes the unique check on p1, p2 and p3. T2 : session 1 releases the lock on p1 (its waiting to acquire a ex lock on p2) That's not what we do. See _bt_findinsertloc. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Proposal: access control jails (and introduction as aspiring GSoC student)
Robert Haas escribió: On Tue, Mar 23, 2010 at 1:28 PM, Josh Berkus j...@agliodbs.com wrote: BTW, if you wanted something less ambitious, we have a longstanding request to implement local superuser, that is, the ability to give one role the ability to edit other roles in one database only. But roles aren't database-specific... they're globals. Well, that's another longstanding request ;-) (See the db_user_namespace hack) -- Alvaro Herrera  http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [HACKERS] Repeating Append operation
On Tue, Mar 23, 2010 at 2:09 PM, Gurjeet Singh singh.gurj...@gmail.com wrote: [...] This raises the point that we do subquery un-nesting purely on heuristics, and not on a cost basis. I guess we should be doing a cost comparison too. I think that this un-nesting happens well before we start generating alternative plans for cost comparisons, and that we might not have costs to compare at this stage, but IMHO we should somehow incorporate cost comparisons too. I don't think this is right. Flattening the subquery doesn't prevent the join from being implemented as a nested loop, which is essentially what happens when it's treated as an initplan. It just allows other options also. ...Robert
Re: [HACKERS] Deadlock possibility in _bt_check_unique?
T2 : session 1 releases the lock on p1 (it's waiting to acquire an ex lock on p2) That's not what we do. See _bt_findinsertloc. regards, tom lane

I am really confused. Please keep your cool and explain to me if I am wrong. I can see this code in _bt_findinsertloc. There is a _bt_relandgetbuf, which releases the lock on p1 and tries to acquire a lock on p2. (I wrote "ex lock" in place of BT_WRITE.)

    for (;;)
    {
        BlockNumber rblkno = lpageop->btpo_next;

        rbuf = _bt_relandgetbuf(rel, rbuf, rblkno, BT_WRITE);
        page = BufferGetPage(rbuf);
        lpageop = (BTPageOpaque) PageGetSpecialPointer(page);
        if (!P_IGNORE(lpageop))
            break;
        if (P_RIGHTMOST(lpageop))
            elog(ERROR, "fell off the end of index \"%s\"",
                 RelationGetRelationName(rel));
    }

What is it that I am missing here? Gokul.
Re: [HACKERS] Proposal: access control jails (and introduction as aspiring GSoC student)
Alvaro Herrera alvhe...@commandprompt.com writes: Robert Haas escribió: On Tue, Mar 23, 2010 at 1:28 PM, Josh Berkus j...@agliodbs.com wrote: BTW, if you wanted something less ambitious, we have a longstanding request to implement local superuser, that is, the ability to give one role the ability to edit other roles in one database only. But roles aren't database-specific... they're globals. Well, that's another longstanding request ;-) (See the db_user_namespace hack) Yeah, you'd have to fix that first. The ambitious part of that is coming up with a spec that everybody will accept. Once you had that, coding it might not be very hard ... BTW, local superuser is an oxymoron. If you're superuser you'd have no trouble whatsoever breaking into other databases. Local CREATEROLE privilege could be a sane concept, though, if we had local roles. regards, tom lane
Re: [HACKERS] Deadlock possibility in _bt_check_unique?
Oh! Yeah, I got it. Thanks. On Wed, Mar 24, 2010 at 12:26 AM, Gokulakannan Somasundaram gokul...@gmail.com wrote: [...]
Re: [HACKERS] Deadlock possibility in _bt_check_unique?
Gokulakannan Somasundaram gokul...@gmail.com writes: I am really confused. Please keep the cool and explain me, if i am wrong. I could see this code in _bt_findinsertloc. There is a _bt_relandgetbuf, which releases lock on p1 and tries to acquire a lock on p2. No, read it again. The only locks that get released inside that loop are ones on intermediate dead pages (rbuf is not buf). The lock on the original page is only released after the loop. regards, tom lane
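For intuition only, here is a toy Python model (all names invented, not PostgreSQL code) of the behaviour Tom describes: while stepping right through the loop, only the locks on intermediate pages are released; the lock on the original target page is held the whole time, so a second inserter cannot begin its own uniqueness check on that page in the meantime.

```python
class LockMgr:
    """Tracks which pages are currently write-locked by this session."""
    def __init__(self):
        self.held = set()
    def acquire(self, page):
        assert page not in self.held
        self.held.add(page)
    def release(self, page):
        self.held.remove(page)

def find_insert_loc(pages, start, lk):
    """Step right from `start` while the current page can't take the key.

    Mirrors the shape of the loop under discussion: `start` stays locked
    for the whole search; only intermediate right-siblings are unlocked.
    """
    lk.acquire(start)            # write lock on the original target page
    rbuf = None                  # the "moving" buffer, like rbuf in the C code
    cur = start
    while pages[cur]["step_right"]:
        nxt = pages[cur]["next"]
        if rbuf is not None:
            lk.release(rbuf)     # release an *intermediate* page only
        lk.acquire(nxt)
        rbuf = nxt
        cur = nxt
    return cur                   # insert happens here; `start` is still locked
```

After the search, the caller still holds the lock on `start`, which is exactly the lock that serializes the uniqueness checks in the T1-T4 scenario upthread.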
Re: xmlconcat (was [HACKERS] 9.0 release notes done)
On mån, 2010-03-22 at 19:38 -0400, Andrew Dunstan wrote: But if we are not comfortable about being able to do that safely, I would be OK with just raising an error if a concatenation is attempted where one value contains a DTD. The impact in practice should be low. Right. Can you find a way to do that using the libxml API? I haven't managed to, and I'm pretty sure I can construct XML that fails every simple string search test I can think of, either with a false negative or a false positive. The documentation on that is terse as usual. In any case, you will need to XML parse the input values, and so you might as well resort to parsing the output value to see if it is well-formed, which should catch this mistake and possibly others.
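The parse-rather-than-string-search point can be sketched outside libxml as well; here it is in Python's stdlib, purely for illustration (the actual check in PostgreSQL would have to go through the libxml API):

```python
from xml.dom.minidom import parseString

def has_dtd(xml_text):
    """Return True if the document carries a DOCTYPE declaration.

    A real parser sees the DTD as a distinct node, so the false
    positives/negatives of naive string searching don't apply.
    """
    return parseString(xml_text).doctype is not None
```

A string search for "DOCTYPE" would false-positive on documents that merely mention the word in text content; the parsed `doctype` node cannot.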
Re: [HACKERS] 9.0 release notes done
Josh Berkus wrote: Bruce, I thought this year we were going to start using people's full names instead of the first names, for clarity. No? OK, I will do this once Josh is done with his modifications. -- Bruce Momjian br...@momjian.us  http://momjian.us EnterpriseDB  http://enterprisedb.com PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do
[HACKERS] booleans in recovery.conf
Is there a reason that recovery.conf uses true/false, while postgresql.conf uses on/off? #recovery_target_inclusive = 'true' # 'true' or 'false' Or are these settings more boolean for some reason? -- Bruce Momjian br...@momjian.us  http://momjian.us EnterpriseDB  http://enterprisedb.com PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do
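For context, the inconsistency is visible when the documented spellings are put side by side -- a sketch using 9.0-era parameter names (the exact set of accepted spellings is the question at hand):

```
# recovery.conf -- documented with true/false:
recovery_target_inclusive = 'true'   # 'true' or 'false'

# postgresql.conf -- documented with on/off:
#archive_mode = off                  # allows archiving to be done
```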
Re: [HACKERS] Proposal: access control jails (and introduction as aspiring GSoC student)
Tom Lane escribió: Alvaro Herrera alvhe...@commandprompt.com writes: Robert Haas escribió: But roles aren't database-specific... they're globals. Well, that's another longstanding request ;-) (See the db_user_namespace hack) Yeah, you'd have to fix that first. The ambitious part of that is coming up with a spec that everybody will accept. Once you had that, coding it might not be very hard ... I wonder if this is simpler now that we got rid of the flat files stuff. We could validate the user once we've connected to a database and are thus able to poke at the local user catalog, not just the global one. I think that was a serious roadblock. -- Alvaro Herrera  http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [HACKERS] Proposal: access control jails (and introduction as aspiring GSoC student)
Alvaro Herrera alvhe...@commandprompt.com writes: I wonder if this is simpler now that we got rid of the flat files stuff. We could validate the user once we've connected to a database and thus able to poke at the local user catalog, not just the global one. I think that was a serious roadblock. I think it'd be a mistake to invent a separate catalog for local users; what had been nice clean foreign key relationships (eg, relowner -> pg_auth.oid) would suddenly become a swamp. My first thought about a catalog representation would be to add a column to pg_auth which is a DB OID for local users or zero for global users. However, you'd probably want to prevent local users and global users from having the same names, and it's not very clear how to do that with this representation (though that'd be even worse with separate catalogs). I guess we could fall back on a creation-time check (ick). regards, tom lane
Re: [HACKERS] Proposal: access control jails (and introduction as aspiring GSoC student)
On Tue, Mar 23, 2010 at 8:16 PM, Tom Lane t...@sss.pgh.pa.us wrote: [...] My first thought about a catalog representation would be to add a column to pg_auth which is a DB OID for local users or zero for global users. However, you'd probably want to prevent local users and global users from having the same names, and it's not very clear how to do that with this representation (though that'd be even worse with separate catalogs). I guess we could fall back on a creation-time check (ick). Could we use a suitably defined exclusion constraint? ...Robert
Re: [HACKERS] Proposal: access control jails (and introduction as aspiring GSoC student)
Robert Haas robertmh...@gmail.com writes: Could we use a suitably defined exclusion constraint? Not unless you'd like to solve the issues with triggers on system catalogs first ... regards, tom lane
Re: [HACKERS] booleans in recovery.conf
On Wed, Mar 24, 2010 at 8:43 AM, Bruce Momjian br...@momjian.us wrote: Is there a reason that recovery.conf uses true/false, while postgresql.conf uses on/off? IIRC, it's because in older versions recovery.conf allowed only true/false as boolean values. Of course, we can change that now. Regards, -- Fujii Masao NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center
Re: [HACKERS] Mismatch in libpqwalreceiver
On Wed, Mar 24, 2010 at 1:49 AM, Hitoshi Harada umi.tan...@gmail.com wrote: There's a mismatch in HEAD between the README and the actual definition in replication/libpqwalreceiver. In the README: bool walrcv_receive(int timeout, XLogRecPtr *recptr, char **buffer, int *len) but in walreceiver.h: typedef bool (*walrcv_receive_type) (int timeout, unsigned char *type, char **buffer, int *len); It seems this commit forgot the README fix. http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/include/replication/walreceiver.h?r1=1.5&r2=1.6 Thanks for the report! That is my mistake. Here is the patch. Regards, -- Fujii Masao NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center walreceiver_readme_v1.patch Description: Binary data
Re: [HACKERS] Proposal: access control jails (and introduction as aspiring GSoC student)
On Tue, Mar 23, 2010 at 8:30 PM, Tom Lane t...@sss.pgh.pa.us wrote: [...] Not unless you'd like to solve the issues with triggers on system catalogs first ... Urp. Not really, though I don't know what they are exactly. I didn't think exclusion constraints depended on triggers. UNIQUE constraints work on system catalogs, right? ...Robert
[HACKERS] Re: [COMMITTERS] pgsql: Add connection messages for streaming replication.
On Sat, Mar 20, 2010 at 4:19 AM, Simon Riggs sri...@postgresql.org wrote: Log Message: --- Add connection messages for streaming replication. log_connections was broken for a replication connection and no messages were displayed on either standby or primary, at any debug level. Connection messages needed to diagnose session drop/reconnect events. Use LOG mode for now, discuss lowering in later releases. LOG: connection authorized: user=foo database=replication Currently, when the primary accepts the connection from the standby, it emits the above message. But database=replication is not accurate because no database is supplied by the standby unless it's explicitly specified in the primary_conninfo parameter. So, how about changing the message as follows? LOG: replication connection authorized: user=foo Regards, -- Fujii Masao NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center
Re: [HACKERS] Proposal: access control jails (and introduction as aspiring GSoC student)
Robert Haas robertmh...@gmail.com writes: On Tue, Mar 23, 2010 at 8:30 PM, Tom Lane t...@sss.pgh.pa.us wrote: Not unless you'd like to solve the issues with triggers on system catalogs first ... Urp. Not really, though I don't know what they are exactly. I didn't think exclusion constraints depended on triggers. UNIQUE constraints work on system catalogs, right? UNIQUE constraints depend on internal support in the index access method (see today's thread with Gokulakannan Somasundaram for some details of how btree does it). Exclusion constraints have a totally different implementation --- they don't require index AM support, but they do use triggers. Now having said that, my recollection is that the worst issues surrounding triggers on catalogs had to do with BEFORE triggers. Exclusion constraint triggers would be AFTER triggers, so maybe it could be made to work. It'd still be significant work though, for not a lot of value as far as this particular issue goes. regards, tom lane
Re: [HACKERS] WIP: preloading of ispell dictionary
Pavel Stehule wrote: Personally I dislike the idea of some dictionary precompiler - it is another application to maintain and maybe not necessary. That's the sort of thing that can be done when first required by any backend, with the results saved in a file for other backends to mmap(). It'd probably want to be opened r/w access-exclusive initially, then re-opened read-only access-shared when ready for use. My only concern would be that the cache would want to be forcibly cleared at postmaster start, so that restarting the postmaster fixes any messed-up-cache issues that might arise (not that they should) without people having to go rm'ing in the datadir. Even if Pg never has any bugs that result in bad cache files, the file system / bad memory / cosmic rays / etc. can still mangle a cache file. BTW, mmap() isn't an issue on Windows: http://msdn.microsoft.com/en-us/library/aa366556%28VS.85%29.aspx It's spelled CreateFileMapping, but otherwise is fairly similar, and is perfect for this sort of use. A shared read-only mapping of processed-and-cached tsearch2 dictionaries would save a HUGE amount of memory if many backends were using tsearch2 at the same time. It'd make a big difference here. -- Craig Ringer
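As a toy illustration of the precompile-then-map idea (file format, magic number, and names are invented here -- a real implementation would be C over the parsed dictionary structures, using mmap() or CreateFileMapping): one process serializes the parsed dictionary to a binary cache file, and later backends map it read-only so the pages are shared.

```python
import mmap
import os
import struct
import tempfile

MAGIC = 0x5453CAFE  # hypothetical magic number, to detect stale/corrupt caches

def precompile(words, path):
    """Serialize a parsed dictionary (here just a word list) to a cache file."""
    blob = b"\0".join(w.encode() for w in sorted(words))
    with open(path, "wb") as f:
        f.write(struct.pack("<II", MAGIC, len(words)))  # 8-byte header
        f.write(blob)

def load(path):
    """Map the cache read-only and reconstruct the word list without re-parsing."""
    with open(path, "rb") as f:
        m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    magic, n = struct.unpack_from("<II", m, 0)
    if magic != MAGIC:
        raise ValueError("stale or corrupt dictionary cache")
    words = bytes(m[8:]).split(b"\0")
    assert len(words) == n
    return [w.decode() for w in words]
```

The magic-number check is the "forcibly cleared / detect a mangled cache" concern in miniature: a bad file is rejected rather than trusted.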
[HACKERS] Re: [COMMITTERS] pgsql: Add connection messages for streaming replication.
On Wed, 2010-03-24 at 10:52 +0900, Fujii Masao wrote: [...] LOG: connection authorized: user=foo database=replication Currently, when the primary accepts the connection from the standby, it emits the above message. But database=replication is not accurate because no database is supplied by the standby unless it's explicitly specified in the primary_conninfo parameter. So, how about changing the message as follows? LOG: replication connection authorized: user=foo The main thing for me was that it logged something. The above two ways occurred to me, and I figured we'd end up discussing it. The first way is slightly confusing for the reason stated, agreed. By using the same form of words as is used currently, all existing scripts that search for connection details will still work. The second way is more informative, if you don't know replication is a pseudo-database, but it will break all existing scripts. My own feeling was that breaking existing scripts was not a price worth paying for the extra information in the second form of the message, since it's just the same words re-arranged. -- Simon Riggs www.2ndQuadrant.com
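The script-compatibility point can be made concrete with a hypothetical monitoring pattern (the two log lines are the ones quoted upthread):

```python
import re

# A typical existing pattern, anchored on the current wording:
AUTHORIZED = re.compile(r"^LOG:\s+connection authorized: user=(\S+)")

def authorized_user(line):
    """Return the user from a 'connection authorized' log line, else None."""
    m = AUTHORIZED.match(line)
    return m.group(1) if m else None
```

The current wording still matches; the re-arranged second form silently stops matching, which is exactly the breakage being weighed against the extra clarity.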