Re: [HACKERS] Should pg_dump dump larger tables first?
Tom Lane t...@sss.pgh.pa.us writes:
> Also, it's far from obvious to me that largest first is the best rule anyhow;
> it's likely to be more complicated than that. But anyway, the right place to
> add this sort of consideration is in pg_restore --parallel, not pg_dump. I
> don't know how hard it would be for the scheduler algorithm in there to take
> table size into account, but at least in principle it should be possible to
> find out the size of the (compressed) table data from examination of the
> archive file.

From some experience with pgloader and loading data in migration processes, often enough the biggest gains come from loading the biggest table in parallel with all the little ones. It often makes it so that the big table's loading time is not affected, and by the time it's done the rest of the database is done too.

Loading several big'o'tables in parallel tends not to give benefits in the tests I've done so far, but that might be an artefact of Python multi-threading; I will do some testing with proper tooling later.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Should pg_dump dump larger tables first?
On 1/31/2013 2:06 AM, Dimitri Fontaine wrote:
> Loading several big'o'tables in parallel tends not to give benefits in the
> tests I've done so far, but that might be an artefact of Python
> multi-threading, I will do some testing with proper tooling later.

... or insufficient IO parallelism in your disk hardware, file system, etc.

--
john r pierce                                      37N 122W
somewhere on the middle of the left coast
Re: [HACKERS] pg_dump --pretty-print-views
Andrew Dunstan and...@dunslane.net writes:
> Well, we could actually set the wrap value to 0, which would mean always
> wrap. That wouldn't be making any assumption about the user's terminal
> window size ;-)

+1

Personally I find the wrapped case MUCH more readable. I guess anything is an advance, but turning on PRETTY_INDENT without turning on some level of target wrapping seems like a half-done job.

Minor gripe: the CASE WHEN THEN indenting is not ideal:

    CASE
        WHEN (a.attnotnull
        OR ((t.typtype = 'd'::char) AND t.typnotnull)) THEN 'NO'::text
        ELSE 'YES'::text
    END)::information_schema.yes_or_no AS is_nullable,

I think the following is easier to read:

    CASE
        WHEN (a.attnotnull
              OR ((t.typtype = 'd'::char) AND t.typnotnull))
        THEN 'NO'::text
        ELSE 'YES'::text
    END)::information_schema.yes_or_no AS is_nullable,

But I realise it's not exactly what's being talked about in this thread…

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Re: [HACKERS] backend hangs at immediate shutdown
As I promised yesterday, I'll show you the precise call stack:

#0  0x003fa0cf542e in __lll_lock_wait_private () from /lib64/libc.so.6
#1  0x003fa0c7bed5 in _L_lock_9323 () from /lib64/libc.so.6
#2  0x003fa0c797c6 in malloc () from /lib64/libc.so.6
#3  0x003fa0c2fd99 in _nl_make_l10nflist () from /lib64/libc.so.6
#4  0x003fa0c2e0a5 in _nl_find_domain () from /lib64/libc.so.6
#5  0x003fa0c2d990 in __dcigettext () from /lib64/libc.so.6
#6  0x006f2a71 in errhint ()
#7  0x00634064 in quickdie ()
#8  <signal handler called>
#9  0x003fa0c77813 in _int_free () from /lib64/libc.so.6
#10 0x0070e329 in AllocSetDelete ()
#11 0x0070e8cb in MemoryContextDelete ()
#12 0x00571723 in FreeExprContext ()
#13 0x00571781 in FreeExecutorState ()
#14 0x005dc883 in evaluate_expr ()
#15 0x005ddca0 in simplify_function ()
#16 0x005de69f in eval_const_expressions_mutator ()
#17 0x00599143 in expression_tree_mutator ()
#18 0x005de452 in eval_const_expressions_mutator ()
#19 0x00599143 in expression_tree_mutator ()
#20 0x005de452 in eval_const_expressions_mutator ()
#21 0x005dfa2f in eval_const_expressions ()
#22 0x005cf16d in preprocess_expression ()
#23 0x005d2201 in subquery_planner ()
#24 0x005d23cf in standard_planner ()
#25 0x0063426a in pg_plan_query ()
#26 0x00634354 in pg_plan_queries ()
#27 0x00635310 in exec_simple_query ()
#28 0x00636333 in PostgresMain ()
#29 0x005f64e9 in PostmasterMain ()
#30 0x00596e20 in main ()
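The stack explains the hang: quickdie(), the SIGQUIT handler, called errhint(), which went through gettext() into malloc() (frames #0-#7), but the signal had interrupted _int_free() (frame #9), which already held malloc's internal lock. The handler therefore waits forever on a lock its own thread holds. A minimal model of that invariant (purely illustrative names, not PostgreSQL code):

```python
# Illustrative model of the self-deadlock in the backtrace above: a signal
# handler calling a non-reentrant allocator while the interrupted code
# already holds the allocator's internal lock can never acquire it again.

class NonReentrantAllocator:
    """Stand-in for malloc with one internal, non-reentrant lock."""

    def __init__(self):
        self.locked = False  # models malloc's arena lock

    def alloc(self, in_handler=False):
        if self.locked and in_handler:
            # Real-world analogue: __lll_lock_wait_private blocks forever,
            # because the holder is the very thread now stuck in the handler.
            raise RuntimeError("self-deadlock: lock held by interrupted code")
        self.locked = True
        # ... a signal arriving right here interrupts us mid-allocation ...
        self.locked = False
        return "allocated"
```

This is why POSIX restricts handlers to async-signal-safe functions; malloc and gettext are not on that list, which is what makes quickdie()'s message construction hazardous.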
[HACKERS] [PATCH] HOT on tables with oid indexes broken
Hi,

The fklocks patch moved HeapSatisfiesHOTandKeyUpdate (or rather HeapSatisfiesHOTUpdate back then) to be called much earlier in heap_update, as it's needed to know which lock level is required. Unfortunately the oid of the new tuple isn't yet set up at that point. Due to this, every time there's an index on an oid - like in most catalog tables - no HOT updates will be performed, because the old tuple's oid is compared to the new one's, which isn't set up yet (so it is either InvalidOid or uninitialized).

There's also a related bug which seems to go further back but has far fewer implications, namely that the tableOid of the old tuple isn't necessarily set up before determining HOTability. That only matters if there's an index on 'tableoid', which doesn't seem to be an all that frequent thing to do.

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

From 295a5b40779c9f1e86dd80d532e78af2ea84b3ff Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Thu, 31 Jan 2013 13:15:17 +0100
Subject: [PATCH] Copy a tuple's oid early enough in heap_update for HOT to
 work on tables with oid indexes

This was broken in 0ac5ad5134 which moved HeapSatisfiesHOTandKeyUpdate (called HeapSatisfiesHOTUpdate back then) to be called earlier in heap_update but didn't move enough of the tuple setup to be done early enough.
---
 src/backend/access/heap/heapam.c | 36 +++++++++++++++++++++-------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 57d47e8..5dd14a6 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -2978,10 +2978,30 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(otid));
 	Assert(ItemIdIsNormal(lp));
 
+	/* setup enough data for HeapSatisfiesHOTandKeyUpdate to work properly */
+	oldtup.t_tableOid = RelationGetRelid(relation);
 	oldtup.t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	oldtup.t_len = ItemIdGetLength(lp);
 	oldtup.t_self = *otid;
+	newtup->t_tableOid = RelationGetRelid(relation);
+
+	/* Fill in OID for newtup */
+	if (relation->rd_rel->relhasoids)
+	{
+#ifdef NOT_USED
+		/* this is redundant with an Assert in HeapTupleSetOid */
+		Assert(newtup->t_data->t_infomask & HEAP_HASOID);
+#endif
+		HeapTupleSetOid(newtup, HeapTupleGetOid(&oldtup));
+	}
+	else
+	{
+		/* check there is not space for an OID */
+		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
+	}
 
 	/*
 	 * If we're not updating any key column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously with
@@ -3243,20 +3263,7 @@ l2:
 	 */
 	CheckForSerializableConflictIn(relation, &oldtup, buffer);
 
-	/* Fill in OID and transaction status data for newtup */
-	if (relation->rd_rel->relhasoids)
-	{
-#ifdef NOT_USED
-		/* this is redundant with an Assert in HeapTupleSetOid */
-		Assert(newtup->t_data->t_infomask & HEAP_HASOID);
-#endif
-		HeapTupleSetOid(newtup, HeapTupleGetOid(&oldtup));
-	}
-	else
-	{
-		/* check there is not space for an OID */
-		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
-	}
+	/* Fill in transaction status data */
 
 	/*
 	 * If the tuple we're updating is locked, we need to preserve the locking
@@ -3306,7 +3313,6 @@ l2:
 	newtup->t_data->t_infomask |= HEAP_UPDATED | infomask_new_tuple;
 	newtup->t_data->t_infomask2 |= infomask2_new_tuple;
 	HeapTupleHeaderSetXmax(newtup->t_data, xmax_new_tuple);
-	newtup->t_tableOid = RelationGetRelid(relation);
 
 	/*
 	 * Replace cid with a combo cid if necessary. Note that we already put
--
1.7.12.289.g0ce9864.dirty
Re: [HACKERS] backend hangs at immediate shutdown
From: Tom Lane t...@sss.pgh.pa.us
> MauMau maumau...@gmail.com writes:
>> How about the case where some backend crashes due to a bug of PostgreSQL?
>> In this case, postmaster sends SIGQUIT to all backends, too. The instance
>> is expected to disappear cleanly and quickly. Doesn't the hanging backend
>> harm the restart of the instance?
> [ shrug... ] That isn't guaranteed, and never has been --- for instance,
> the process might have SIGQUIT blocked, perhaps as a result of third-party
> code we have no control over.

Are you concerned about user-defined C functions? I don't think they need to block signals. So I don't find it too restrictive to say "do not block or send signals in user-defined functions." If it's a real concern, it should be noted in the manual, rather than saying "do not use pg_ctl stop -mi as much as you can, because it can leave hanging backends."

>> How about using SIGKILL instead of SIGQUIT?
> Because then we couldn't notify clients at all. One practical disadvantage
> of that is that it would become quite hard to tell from the outside which
> client session actually crashed, which is frequently useful to know.

How is the message below useful to determine which client session actually crashed? The message doesn't contain information about the crashed session. Are you talking about log_line_prefix?

ERROR: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.

However, it is not quickdie() but LogChildExit() that emits useful information to tell which session crashed. So I don't think quickdie()'s message is very helpful.

> I think if we want to make it bulletproof we'd have to do what the OP
> suggested and switch to SIGKILL. I'm not enamored of that for the reasons
> I mentioned --- but one idea that might dodge the disadvantages is to have
> the postmaster wait a few seconds and then SIGKILL any backends that
> hadn't exited.

I believe that SIGKILL is the only and simple way to choose. Consider again: the purpose of pg_ctl stop -mi is to immediately and reliably shut down the instance. If it is not reliable, what can we do instead?

Regards
MauMau
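Tom's compromise — SIGQUIT first so clients still get a message, then SIGKILL whatever hasn't exited after a grace period — can be sketched as pure decision logic. This is a sketch with hypothetical names, not postmaster code; `still_alive` stands in for waitpid bookkeeping and `grace_expired` for a timer:

```python
SIGQUIT, SIGKILL = "SIGQUIT", "SIGKILL"

def escalate(backend_pids, still_alive, grace_expired):
    """Return the (pid, signal) pairs a postmaster-like supervisor would
    send: SIGQUIT to every backend up front; once the grace period runs
    out, SIGKILL any survivor that ignored or blocked the first signal."""
    sent = [(pid, SIGQUIT) for pid in backend_pids]
    if grace_expired():
        sent += [(pid, SIGKILL) for pid in backend_pids if still_alive(pid)]
    return sent
```

The point of the two-phase shape is that well-behaved backends get the chance to emit the "terminating connection" message, while a backend stuck the way the earlier backtrace showed still cannot survive, since SIGKILL cannot be caught or blocked.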
Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation
On Wednesday, January 30, 2013 8:32 PM Amit Kapila wrote:
> On Tuesday, January 29, 2013 7:42 PM Amit Kapila wrote:
>> On Tuesday, January 29, 2013 3:53 PM Heikki Linnakangas wrote:
>>> On 29.01.2013 11:58, Amit Kapila wrote:
>>>> Can there be another way with which the current patch code can be made
>>>> better, so that we don't need to change the encoding approach? I have
>>>> the feeling that this might not be equally good performance-wise.
>>> The point is that I don't want heap_delta_encode() to know the internals
>>> of pglz compression. You could probably make my patch more like yours in
>>> behavior by also passing an array of offsets in the new tuple to check,
>>> and only checking for matches at those offsets.
>> I think it makes sense, because if we have offsets of both new and old
>> tuple, we can internally use memcmp to compare columns and use the same
>> algorithm for encoding. I will change the patch according to this
>> suggestion.
> I have modified the patch as per the above suggestion. Apart from passing
> new and old tuple offsets, I have passed the bitmap length also, as we
> need to copy the bitmap of the new tuple as-is into the encoded WAL tuple.
> Please see if such an API design is okay. I shall update the README and
> send the performance/WAL-reduction data for the modified patch tomorrow.

Updated patch including comments and README is attached with this mail. This patch contains exactly the same design behavior as the previous one; it takes care of Heikki's API design suggestion. The performance data is similar; as it is not complete, I shall send that tomorrow.

With Regards,
Amit Kapila.

wal_update_changes_v10.patch
Description: Binary data
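Heikki's suggestion — hand the encoder per-column offsets for both tuples so it can memcmp matching columns without knowing pglz internals — looks roughly like the following toy sketch (illustrative only, not the patch's heap_delta_encode):

```python
def delta_encode(old, new, old_offs, new_offs):
    """Produce (op, payload) instructions for the new tuple: ('copy', i)
    when column i is byte-identical to the old tuple's column i (so the
    WAL record can reference the old version), ('literal', bytes)
    otherwise.  Offset lists include the end position as a sentinel."""
    ops = []
    for i in range(len(new_offs) - 1):
        newcol = new[new_offs[i]:new_offs[i + 1]]
        oldcol = old[old_offs[i]:old_offs[i + 1]] if i + 1 < len(old_offs) else None
        if newcol == oldcol:
            ops.append(("copy", i))          # unchanged: store a reference only
        else:
            ops.append(("literal", newcol))  # changed: store the new bytes
    return ops
```

WAL shrinks when most columns take the "copy" branch, which is why comparing at column boundaries (rather than a general-purpose compressor scan) is attractive for typical UPDATEs that touch few columns.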
[HACKERS] Suggestion: Issue warning when calling SET TRANSACTION outside transaction block
Hi!

Calling SET TRANSACTION ISOLATION LEVEL ... outside a transaction block has no effect. This is unlike LOCK ... and DECLARE foo CURSOR FOR ..., which both raise an error. This is also unlike MySQL, where such a statement will affect the next transaction performed.

There's some risk of data corruption, as a user might assume he's working on a snapshot, while in fact he's not.

I suggest issuing a warning, notice or error message when SET TRANSACTION ... is called outside a transaction block, possibly directing the user to the SET SESSION CHARACTERISTICS AS TRANSACTION ... syntax.

I'm not familiar with the PostgreSQL source code, but it seems this would have to be added to check_XactIsoLevel() or by calling RequireTransactionChain() at some appropriate location.
Re: [HACKERS] Strange Windows problem, lock_timeout test request
2013-01-30 17:45 keltezéssel, Zoltán Böszörményi írta:
> 2013-01-30 16:06 keltezéssel, Hari Babu írta:
>> On Wednesday, January 30, 2013 7:59 PM Zoltán Böszörményi wrote:
>>> 2013-01-28 15:20 keltezéssel, Hari Babu írta:
>>>> 2. regress check failed because the expected .out file is not updated
>>>> properly.
>>> Which regress check failed? The .out file was updated in the patch for
>>> prepared_xacts.sql where the regression tests for lock_timeout were
>>> added. Or do you mean the one for the sql file you sent?
>> During the regress test, the prepared_xacts_1.out expected file is used
>> for comparing with the result file, and it is not updated by the patch.
>> Because of this the regress check is failing.
> I see, so this is a Windows-only change that needs a different expected
> file. Can you send the resulting prepared_xacts_1.out file so I can
> integrate its changes into my patch? That way it would be complete.
> Thanks in advance.

I have found a little time to look into this problem and found a way to make pg_regress use prepared_xacts_1.out. I had to change line 2193 in pg_regress.c from

    fputs("max_prepared_transactions = 2\n", pg_conf);

to

    fputs("max_prepared_transactions = 0\n", pg_conf);

The patch now passes make check in both cases.

Best regards,
Zoltán Böszörményi

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de http://www.postgresql.at/

2-lock_timeout-v29.patch.gz
Description: GNU Zip compressed data
Re: [HACKERS] information schema parameter_default implementation
On Wed, Jan 9, 2013 at 4:28 PM, Peter Eisentraut pete...@gmx.net wrote:
> Here is an implementation of the
> information_schema.parameters.parameter_default column. I ended up writing
> a C function to decode the whole thing from the system catalogs, because
> it was too complicated in SQL, so I abandoned the approach discussed in [0].
> [0]: http://archives.postgresql.org/message-id/1356092400.25658.6.ca...@vanquo.pezone.net

I checked out your patch. There seems to be an issue when we have OUT parameters after the DEFAULT values. A simple test case is given below:

postgres=# CREATE FUNCTION functest1(a int default 1, out b int)
postgres-# RETURNS int
postgres-# LANGUAGE SQL
postgres-# AS 'SELECT $1';
CREATE FUNCTION
postgres=# SELECT ordinal_position, parameter_name, parameter_default FROM information_schema.parameters WHERE specific_name LIKE 'functest%' ORDER BY 1;
 ordinal_position | parameter_name | parameter_default
------------------+----------------+-------------------
                1 | a              | 1
                2 | b              | 1
(2 rows)

The OUT parameter gets the same value as the last default parameter. The patch works only when the default values are at the end. Switch the parameters and it starts working (make the OUT parameter the first one and the default one the last one). Below is the example:

postgres=# CREATE FUNCTION functest1(out a int, b int default 1)
postgres-# RETURNS int
postgres-# LANGUAGE SQL
postgres-# AS 'SELECT $1';
CREATE FUNCTION
postgres=# SELECT ordinal_position, parameter_name, parameter_default FROM information_schema.parameters WHERE specific_name LIKE 'functest%' ORDER BY 1;
 ordinal_position | parameter_name | parameter_default
------------------+----------------+-------------------
                1 | a              |
                2 | b              | 1
(2 rows)

Some other minor observations:

1) Some variables are not aligned in pg_get_function_arg_default().
2) I found the following check a bit confusing, maybe you can make it better:

    if (!argmodes || argmodes[i] == PROARGMODE_IN || argmodes[i] == PROARGMODE_INOUT || argmodes[i] == PROARGMODE_VARIADIC)

3) inputargn can be assigned in its declaration.
4) A function-level comment for pg_get_function_arg_default() is missing.
5) You could also add comments inside the function, for example for the line:

    nth = inputargn - 1 - (proc->pronargs - proc->pronargdefaults);

6) I think the line added in the documentation (information_schema.sgml) is very long. Consider revising. Maybe change from:

    "The default expression of the parameter, or null if none or if the function is not owned by a currently enabled role."

to:

    "The default expression of the parameter, or null if none was specified. It will also be null if the function is not owned by a currently enabled role."

I don't know what exactly you mean by "function is not owned by a currently enabled role"?

Regards,
Ali Dar
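The `nth` expression under review encodes the rule that defaults attach to the last pronargdefaults *input* arguments, and the reported bug is precisely an OUT argument picking up a default it should never have. A hypothetical sketch of the intended mapping (argmode letters as in pg_proc.proargmodes; this is not the patch's C code):

```python
def default_index(argmodes, pronargs, pronargdefaults, position):
    """For the 1-based argument at `position`, return the index into the
    stored defaults list, or None if it has no default.  pronargs counts
    input arguments only.  OUT args ('o') never consume a default slot --
    exactly the behavior the review says the patch currently gets wrong."""
    inputargn = 0                             # input args seen so far
    for pos, mode in enumerate(argmodes, start=1):
        if mode in ("i", "b", "v"):           # IN, INOUT, VARIADIC
            inputargn += 1
        if pos == position:
            if mode == "o":
                return None                   # OUT argument: never a default
            nth = inputargn - 1 - (pronargs - pronargdefaults)
            return nth if nth >= 0 else None
    return None

# functest1(a int default 1, out b int): a maps to default 0, b to none.
```

Run against the two test cases in the review, this yields a default only for `a` in the first function and only for `b` in the second, instead of the duplicated value the patch currently reports.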
Re: [HACKERS] information schema parameter_default implementation
Another thing I forgot: the patch does not apply because of the changes in catversion.h.

Regards,
Ali Dar

On Thu, Jan 31, 2013 at 6:59 PM, Ali Dar ali.munir@gmail.com wrote:
> I checked out your patch. There seems to be an issue when we have OUT
> parameters after the DEFAULT values. [...]
Re: [HACKERS] Strange Windows problem, lock_timeout test request
Zoltán Böszörményi wrote:
> I have found a little time to look into this problem and found a way to
> make pg_regress use prepared_xacts_1.out. I had to change line 2193 in
> pg_regress.c from
>     fputs("max_prepared_transactions = 2\n", pg_conf);
> to
>     fputs("max_prepared_transactions = 0\n", pg_conf);
> The patch now passes make check in both cases.

That sounds a lot more difficult than just using make installcheck and configuring the running server with zero prepared xacts ...

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Strange Windows problem, lock_timeout test request
2013-01-31 15:22 keltezéssel, Alvaro Herrera írta:
> Zoltán Böszörményi wrote:
>> I have found a little time to look into this problem and found a way to
>> make pg_regress use prepared_xacts_1.out. I had to change line 2193 in
>> pg_regress.c from
>>     fputs("max_prepared_transactions = 2\n", pg_conf);
>> to
>>     fputs("max_prepared_transactions = 0\n", pg_conf);
>> The patch now passes make check in both cases.
> That sounds a lot more difficult than just using make installcheck and
> configuring the running server with zero prepared xacts ...

It didn't occur to me to use make installcheck for this one. What is strange, though, is why prepared_xacts_1.out exists at all, since pg_regress.c / make check seems to set max_prepared_transactions on Windows, too. Is there a way to specify such settings in REGRESS_OPTS?

Best regards,
Zoltán Böszörményi
Re: [HACKERS] Strange Windows problem, lock_timeout test request
On 01/31/2013 09:55 AM, Zoltán Böszörményi wrote:
> It didn't occur to me to use make installcheck for this one. What is
> strange, though, is why prepared_xacts_1.out exists at all, since
> pg_regress.c / make check seems to set max_prepared_transactions on
> Windows, too. Is there a way to specify such settings in REGRESS_OPTS?

You can use the temp_config option to specify a file with non-standard settings that will be appended to the installed postgresql.conf.

cheers

andrew
Re: [HACKERS] Strange Windows problem, lock_timeout test request
Zoltán Böszörményi z...@cybertec.at writes:
> 2013-01-31 15:22 keltezéssel, Alvaro Herrera írta:
>> That sounds a lot more difficult than just using make installcheck and
>> configuring the running server with zero prepared xacts ...
> It didn't occur to me to use make installcheck for this one. What is
> strange though is why prepared_xacts_1.out exists at all, since
> pg_regress.c / make check seems to set max_prepared_transactions on
> Windows, too.

Alvaro told you why: so that the tests wouldn't report failure in make installcheck against a stock-configuration server.

BTW, 99% of the time you can update alternative expected files by applying the same patch to them as you did to the tested version. At least that usually works for me, and it can be a lot easier than arranging to duplicate the environment the alternative file is meant for.

regards, tom lane
Re: [HACKERS] Should pg_dump dump larger tables first?
On Tue, Jan 29, 2013 at 3:34 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> David Rowley dgrowle...@gmail.com writes:
>> If pg_dump was to still follow the dependencies of objects, would there
>> be any reason why it shouldn't back up larger tables first?
> Pretty much every single discussion/complaint about pg_dump's ordering
> choices has been about making its behavior more deterministic not less
> so. So I can't imagine such a change would go over well with most folks.
> Also, it's far from obvious to me that largest first is the best rule
> anyhow; it's likely to be more complicated than that.

From my experience in the non-database world of processing many files of greatly different sizes in parallel, sorting them so the largest are scheduled first and smaller ones get packed around them is very successful and very easy. I agree that the best rule surely is more complicated, but probably so much so that it will never get implemented.

> But anyway, the right place to add this sort of consideration is in
> pg_restore --parallel, not pg_dump.

Yeah.

Cheers,
Jeff
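The "largest first, pack the small ones around it" heuristic is classic longest-processing-time-first (LPT) list scheduling, and a tiny simulation shows why submission order matters for a parallel restore. The durations and worker count below are made up for illustration:

```python
import heapq

def makespan(durations, workers):
    """Greedy list scheduling: each job goes to the worker that frees up
    first; returns total wall-clock time for the given submission order."""
    finish_times = [0.0] * workers
    heapq.heapify(finish_times)
    for d in durations:
        # assign this job to the earliest-free worker
        heapq.heappush(finish_times, heapq.heappop(finish_times) + d)
    return max(finish_times)

# One big table plus ten small ones on two parallel workers:
tables = [100] + [10] * 10
largest_first = makespan(sorted(tables, reverse=True), 2)  # small jobs overlap the big one
smallest_first = makespan(sorted(tables), 2)               # big table starts last
```

With largest-first the small tables fill the second worker while the big one runs (makespan 100); with the big table submitted last, it begins only after the small ones finish (makespan 150) — the same 1.5x penalty Dimitri and Jeff describe.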
Re: [HACKERS] Should pg_dump dump larger tables first?
On Thu, Jan 31, 2013 at 5:06 AM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
> Tom Lane t...@sss.pgh.pa.us writes:
>> Also, it's far from obvious to me that largest first is the best rule
>> anyhow; it's likely to be more complicated than that. But anyway, the
>> right place to add this sort of consideration is in pg_restore
>> --parallel, not pg_dump. I don't know how hard it would be for the
>> scheduler algorithm in there to take table size into account, but at
>> least in principle it should be possible to find out the size of the
>> (compressed) table data from examination of the archive file.
> From some experience with pgloader and loading data in migration
> processes, often enough the biggest gains come from loading the biggest
> table in parallel with all the little ones. It often makes it so that
> the big table's loading time is not affected, and by the time it's done
> the rest of the database is done too. Loading several big'o'tables in
> parallel tends not to give benefits in the tests I've done so far, but
> that might be an artefact of Python multi-threading; I will do some
> testing with proper tooling later.

We had the idea of doing this with Slony, to try to process subscriptions faster by, yes, indeed, loading large tables first, and throwing reindexing off onto secondary threads. The big win seemed to come specifically from the reindexing aspect; that tends to take a fair bit more time than the indexless COPY.

It would make the subscription process a bit more fragile, and would add quite a bit of development work, for something that didn't seem to be *that* much of a priority, so we never went past the Gedankenexperiment of establishing that it seemed feasible.

A side-effect that we didn't have to worry about with Slony, but that would be important for more general use, is what happens to the processing of re-establishing foreign keys. In Slony, we suppress them on subscriber nodes; you'd need to worry about that for general-purpose use.

--
When confronted by a difficult problem, solve it by reducing it to the question, "How would the Lone Ranger handle this?"
Re: [HACKERS] Strange Windows problem, lock_timeout test request
2013-01-31 16:39 keltezéssel, Tom Lane írta:
> Zoltán Böszörményi z...@cybertec.at writes:
>> It didn't occur to me to use make installcheck for this one. What is
>> strange though is why prepared_xacts_1.out exists at all, since
>> pg_regress.c / make check seems to set max_prepared_transactions on
>> Windows, too.
> Alvaro told you why: so that the tests wouldn't report failure in make
> installcheck against a stock-configuration server.
> BTW, 99% of the time you can update alternative expected files by
> applying the same patch to them as you did to the tested version. At
> least that usually works for me, and it can be a lot easier than
> arranging to duplicate the environment the alternative file is meant for.

Thanks. A question though: how does make check or make installcheck choose between the *.out file and its different *_N.out incarnations? I couldn't find traces of prepared_xacts_1.out in any file saying "this is the one to be used in this-and-this configuration". Does the procedure check against all versions, and the least different one is reported?

Thanks,
Zoltán Böszörményi
Re: [HACKERS] Strange Windows problem, lock_timeout test request
Zoltán Böszörményi z...@cybertec.at writes:
> Thanks. A question though: how does make check or make installcheck
> choose between the *.out file and its different *_N.out incarnations?
> I couldn't find traces of prepared_xacts_1.out in any file saying "this
> is the one to be used in this-and-this configuration". Does the
> procedure check against all versions, and the least different one is
> reported?

Exactly. This is documented; see the regression tests chapter of the SGML manual.

regards, tom lane
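The "least different one is reported" rule can be sketched with difflib; this is an illustrative model (pg_regress itself shells out to diff(1) for each candidate and keeps the smallest result, and the names below are made up):

```python
import difflib

def best_variant(result_lines, variants):
    """Given the actual output and a dict of {expected_file: lines} for
    foo.out, foo_1.out, ..., return (closest_name, differing_line_count),
    mimicking how the regression driver picks which expected file to
    compare and report against."""
    def diff_size(expected):
        # count changed lines, ignoring the unified-diff file headers
        return sum(1 for line in difflib.unified_diff(expected, result_lines)
                   if line[:1] in "+-" and not line.startswith(("+++", "---")))
    closest = min(variants, key=lambda name: diff_size(variants[name]))
    return closest, diff_size(variants[closest])
```

A zero-sized diff against any variant means the test passes; otherwise the failure report shows the diff against whichever variant came closest, which is why a stale prepared_xacts_1.out surfaces exactly the way Hari Babu observed.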
Re: [HACKERS] autovacuum not prioritising for-wraparound tables
Christopher Browne wrote: On Fri, Jan 25, 2013 at 12:00 PM, Andres Freund and...@2ndquadrant.com wrote: I'd be inclined to do something a bit more sophisticated than just age(relfrozenxid) for wraparound; I'd be inclined to kick off large tables' wraparound vacuums earlier than those for smaller tables. With a little bit of noodling around, here's a thought for a joint function that I *think* has reasonably common scales: f(deadtuples, relpages, age) = deadtuples/relpages + e ^ (age*ln(relpages)/2^32) Okay, here's a patch along these lines. I haven't considered Jim's suggestion downthread about discounting dead tuples from relpages; maybe we can do that by subtracting the pages attributed to dead ones, estimating via tuple density (reltuples/relpages). But that's not my main concern here. Instead, what I propose (and is not really in the patch), as a backpatchable item, is an approach in which the functions to compute each rel's Browne strength and sort are hooks. Normal behavior is not to sort at all, as currently, and sites that have a problem with the current random order can install a custom module that provides hooks to change ordering as they see fit. So behavior won't change for people who have no problem today. -- Álvaro Herrera  http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] autovacuum not prioritising for-wraparound tables
Alvaro Herrera wrote: Okay, here's a patch along these lines. I haven't considered Jim's suggestion downthread about discounting dead tuples from relpages; maybe we can do that by subtracting the pages attributed to dead ones, estimating via tuple density (reltuples/relpages). Patch attached. -- Álvaro Herrera  http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services

*** a/src/backend/postmaster/autovacuum.c
--- b/src/backend/postmaster/autovacuum.c
***************
*** 167,173 ****
  typedef struct avw_dbase
  	PgStat_StatDBEntry *adw_entry;
  } avw_dbase;

! /* struct to keep track of tables to vacuum and/or analyze, in 1st pass */
  typedef struct av_relation
  {
  	Oid			ar_toastrelid;	/* hash key - must be first */
--- 167,173 ----
  	PgStat_StatDBEntry *adw_entry;
  } avw_dbase;

! /* struct to keep track of TOAST-main relation mappings */
  typedef struct av_relation
  {
  	Oid			ar_toastrelid;	/* hash key - must be first */
***************
*** 177,182 ****
--- 177,201 ----
  				 * reloptions, or NULL if none */
  } av_relation;

+ /*
+  * A tasklist is a set of tables to process, collected during a worker's first
+  * phase.  For each table we keep track of its Browne strength, so that we can
+  * process in priority order.
+  */
+ typedef struct avw_tltable
+ {
+ 	Oid			tt_reloid;
+ 	float4		tt_browne_strength;
+ } avw_tltable;
+
+ typedef struct avw_tasklist
+ {
+ 	int			tl_maxelts;
+ 	int			tl_nelts;
+ 	avw_tltable **tl_elts;
+ } avw_tasklist;
+
  /* struct to keep track of tables to vacuum and/or analyze, after rechecking */
  typedef struct autovac_table
  {
***************
*** 299,305 ****
  static autovac_table *table_recheck_autovac(Oid relid, HTAB *table_toast_map,
  static void relation_needs_vacanalyze(Oid relid, AutoVacOpts *relopts,
  						  Form_pg_class classForm,
  						  PgStat_StatTabEntry *tabentry,
! 						  bool *dovacuum, bool *doanalyze, bool *wraparound);
  static void autovacuum_do_vac_analyze(autovac_table *tab,
  						  BufferAccessStrategy bstrategy);
--- 318,326 ----
  static void relation_needs_vacanalyze(Oid relid, AutoVacOpts *relopts,
  						  Form_pg_class classForm,
  						  PgStat_StatTabEntry *tabentry,
! 						  bool *dovacuum, bool *doanalyze, bool *wraparound,
! 						  float4 *deadtuples, float4 *relpages,
! 						  uint32 *xidage);
  static void autovacuum_do_vac_analyze(autovac_table *tab,
  						  BufferAccessStrategy bstrategy);
***************
*** 1890,1895 **** get_database_list(void)
--- 1911,1979 ----
  	return dblist;
  }

+ static avw_tasklist *
+ tasklist_initialize(void)
+ {
+ 	avw_tasklist *tasklist;
+
+ 	tasklist = palloc(sizeof(avw_tasklist));
+ 	tasklist->tl_maxelts = 32;
+ 	tasklist->tl_nelts = 0;
+ 	tasklist->tl_elts = palloc(tasklist->tl_maxelts * sizeof(avw_tltable *));
+
+ 	return tasklist;
+ }
+
+ /*
+  * Add a table to the tasklist.
+  */
+ static void
+ tasklist_add_table(avw_tasklist *tasklist, Oid relid, bool dovacuum,
+ 				   bool doanalyze, bool wraparound, float4 deadtuples,
+ 				   float4 relpages, uint32 xidage)
+ {
+ 	avw_tltable *tab;
+
+ 	/* enlarge the array if necessary */
+ 	if (tasklist->tl_nelts >= tasklist->tl_maxelts)
+ 	{
+ 		tasklist->tl_maxelts *= 2;
+ 		tasklist->tl_elts = repalloc(tasklist->tl_elts,
+ 									 tasklist->tl_maxelts * sizeof(avw_tltable *));
+ 	}
+
+ 	tab = palloc0(sizeof(avw_tltable));
+
+ 	tab->tt_reloid = relid;
+ 	if (dovacuum)
+ 	{
+ 		tab->tt_browne_strength = deadtuples / relpages +
+ 			exp(xidage * logf(relpages) / UINT_MAX);
+ 	}
+
+ 	tasklist->tl_elts[tasklist->tl_nelts++] = tab;
+ }
+
+ /*
+  * qsort comparator: sorts avw_tltable elements by value of Browne strength,
+  * descending
+  */
+ static int
+ avw_tt_compar(const void *a, const void *b)
+ {
+ 	const avw_tltable *taba = *(avw_tltable *const *) a;
+ 	const avw_tltable *tabb = *(avw_tltable *const *) b;
+
+ 	return tabb->tt_browne_strength - taba->tt_browne_strength;
+ }
+
+ static void
+ tasklist_sort(avw_tasklist *tasklist)
+ {
+ 	qsort(tasklist->tl_elts, tasklist->tl_nelts, sizeof(avw_tltable *),
+ 		  avw_tt_compar);
+ }
+
  /*
   * Process a database table-by-table
   *
***************
*** 1903,1917 **** do_autovacuum(void)
  	HeapTuple	tuple;
  	HeapScanDesc relScan;
  	Form_pg_database dbForm;
- 	List	   *table_oids = NIL;
  	HASHCTL		ctl;
  	HTAB	   *table_toast_map;
- 	ListCell   *volatile cell;
  	PgStat_StatDBEntry *shared;
  	PgStat_StatDBEntry *dbentry;
  	BufferAccessStrategy bstrategy;
  	ScanKeyData key;
  	TupleDesc	pg_class_desc;

  	/*
  	 * StartTransactionCommand and CommitTransactionCommand will automatically
--- 1987,2001 ----
  	HeapTuple	tuple;
  	HeapScanDesc relScan;
  	Form_pg_database dbForm;
  	HASHCTL		ctl;
  	HTAB	   *table_toast_map;
  	PgStat_StatDBEntry *shared;
  	PgStat_StatDBEntry *dbentry;
  	BufferAccessStrategy bstrategy;
  	ScanKeyData key;
  	TupleDesc	pg_class_desc;
+ 	avw_tasklist
Re: [HACKERS] autovacuum not prioritising for-wraparound tables
On Thu, Jan 31, 2013 at 2:40 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote: Alvaro Herrera wrote: Okay, here's a patch along these lines. I haven't considered Jim's suggestion downthread about discounting dead tuples from relpages; maybe we can do that by subtracting the pages attributed to dead ones, estimating via tuple density (reltuples/relpages). Patch attached. This strikes me as too clever by half. You've introduced the concept of a Browne strength (apparently named for Christopher Browne) and yet you haven't even bothered to add a comment explaining the meaning of the term, let alone justifying the choice of that formula rather than any other. I don't want to dog this proposal to death, because surely we can do better than the status quo here, but adopting the first formula someone proposed without any analysis of whether it does the right thing cannot possibly be the correct decision process. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] autovacuum not prioritising for-wraparound tables
On Thu, Jan 31, 2013 at 2:36 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote: Instead, what I propose (and is not really in the patch), as a backpatchable item, is an approach in which the functions to compute each rel's Browne strength and sort are hooks. Normal behavior is not to sort at all, as currently, and sites that have a problem with the current random order can install a custom module that provide hooks to change ordering as they see fit. So behavior won't change for people who have no problem today. Can you think of any examples of cases where we have back-patched a new hook? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] autovacuum not prioritising for-wraparound tables
Alvaro Herrera alvhe...@2ndquadrant.com writes: Instead, what I propose (and is not really in the patch), as a backpatchable item, is an approach in which the functions to compute each rel's Browne strength and sort are hooks. Normal behavior is not to sort at all, as currently, and sites that have a problem with the current random order can install a custom module that provide hooks to change ordering as they see fit. So behavior won't change for people who have no problem today. Meh. I'm not really thrilled with adding hooks (that presumably we'd have to preserve forever) to solve a short-term problem. Nor does this sound hugely convenient for users with the problem, anyway. Do we even know for sure that anyone would create such modules? I think we should just fix it as best we can in HEAD, and then anyone who thinks the risk/reward ratio is favorable can back-patch that fix into a private build. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] autovacuum not prioritising for-wraparound tables
Tom Lane wrote: Alvaro Herrera alvhe...@2ndquadrant.com writes: Instead, what I propose (and is not really in the patch), as a backpatchable item, is an approach in which the functions to compute each rel's Browne strength and sort are hooks. Normal behavior is not to sort at all, as currently, and sites that have a problem with the current random order can install a custom module that provides hooks to change ordering as they see fit. So behavior won't change for people who have no problem today. Meh. I'm not really thrilled with adding hooks (that presumably we'd have to preserve forever) to solve a short-term problem. Nor does this sound hugely convenient for users with the problem, anyway. Do we even know for sure that anyone would create such modules? Well, I would. Providing a custom module is many times more convenient than providing a patched binary. But since there seems to be considerable resistance to the idea I will drop it, unless others vote in favour. -- Álvaro Herrera  http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] autovacuum not prioritising for-wraparound tables
Robert Haas wrote: On Thu, Jan 31, 2013 at 2:40 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote: Alvaro Herrera wrote: Okay, here's a patch along these lines. I haven't considered Jim's suggestion downthread about discounting dead tuples from relpages; maybe we can do that by subtracting the pages attributed to dead ones, estimating via tuple density (reltuples/relpages). Patch attached. This strikes me as too clever by half. You've introduced the concept of a Browne strength (apparently named for Christopher Browne) and yet you haven't even bothered to add a comment explaining the meaning of the term, let alone justifying the choice of that formula rather than any other. I don't want to dog this proposal to death, because surely we can do better than the status quo here, but adopting the first formula someone proposed without any analysis of whether it does the right thing cannot possibly be the correct decision process. My intention was to apply a Nasby correction to Browne Strength and call the resulting function Browne' (Browne prime). Does that sound better? Now seriously, I did experiment a bit with this and it seems to behave reasonably. Of course, there might be problems with it, and I don't oppose changing the name. Vacuum strength didn't sound so great, so I picked the first term that came to mind. It's not like picking people's last names to name stuff is a completely new idea; that said, it was sort of a joke. -- Álvaro Herrera  http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] autovacuum not prioritising for-wraparound tables
On Thu, Jan 31, 2013 at 3:18 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote: Robert Haas wrote: On Thu, Jan 31, 2013 at 2:40 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote: Alvaro Herrera wrote: Okay, here's a patch along these lines. I haven't considered Jim's suggestion downthread about discounting dead tuples from relpages; maybe we can do that by subtracting the pages attributed to dead ones, estimating via tuple density (reltuples/relpages). Patch attached. This strikes me as too clever by half. You've introduced the concept of a Browne strength (apparently named for Christopher Browne) and yet you haven't even bothered to add a comment explaining the meaning of the term, let alone justifying the choice of that formula rather than any other. I don't want to dog this proposal to death, because surely we can do better than the status quo here, but adopting the first formula someone proposed without any analysis of whether it does the right thing cannot possibly be the correct decision process. My intention was to apply a Nasby correction to Browne Strength and call the resulting function Browne' (Browne prime). Does that sound better? Now seriously, I did experiment a bit with this and it seems to behave reasonably. Of course, there might be problems with it, and I don't oppose changing the name. Vacuum strength didn't sound so great, so I picked the first term that came to mind. It's not like picking people's last names to name stuff is a completely new idea; that said, it was sort of a joke. Color me amused :-). And, when thinking about how strong these things are, just remember, smell isn't everything. I spent 20 minutes at a whiteboard arriving at the Browne strength, and I think it's not unreasonable as a usage of the data already immediately at hand. But it is absolutely just intended as a strawman proposal, and I'd be pleased to see it get prodded into something more prime.
-- When confronted by a difficult problem, solve it by reducing it to the question, How would the Lone Ranger handle this? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)
On 1/30/13 9:11 AM, MauMau wrote: When I ran pg_ctl stop -mi against the primary, some applications connected to the primary did not stop. The cause was that the backends were deadlocked in quickdie() with some call stack like the following. I'm sorry to have left the stack trace file on the testing machine, so I'll show you the precise stack trace tomorrow. I've had similar problems in the past: http://www.postgresql.org/message-id/1253704891.20834.8.ca...@fsopti579.f-secure.com The discussion there never quite concluded. But yes, you need to be prepared that in rare circumstances SIGQUIT won't succeed and you need to use SIGKILL. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: proposal: a width specification for s specifier (format function), fix behave when positional and ordered placeholders are used
Hello 2013/1/29 Dean Rasheed dean.a.rash...@gmail.com: On 29 January 2013 08:19, Dean Rasheed dean.a.rash...@gmail.com wrote: * The width field is optional, even if the '-' flag is specified. So '%-s' is perfectly legal and should be interpreted as '%s'. The current implementation treats it as a width of 0, which is wrong. Oh, but of course a width of 0 is the same as no width at all, so the current code is correct after all. That's what happens if I try to write emails before I've had my caffeine :-) I think my other points remain valid though. It would still be neater to parse the flags separately from the width field, and then all literal numbers that appear in the format should be positive. I am sending rewritten code. Indirect width (* and *n$) is now supported; it needs a little bit more code. There is a new question: what should be the result of format('%2$*1$s', NULL, 'hello')? It raises an exception now, but I am able to modify that per whatever we agree on. Regards Pavel Regards, Dean format_width_20130131.patch Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: Privileges for INFORMATION_SCHEMA.SCHEMATA (was Re: [DOCS] Small clarification in 34.41. schemata)
On 1/9/13 8:56 PM, Tom Lane wrote: However, it seems to me that this behavior is actually wrong for our purposes, as it represents a too-literal reading of the spec. The SQL standard has no concept of privileges on schemas, only ownership. We do have privileges on schemas, so it seems to me that the consistent thing would be for this view to show any schema that you either own or have some privilege on. That is the test should be more like pg_has_role(n.nspowner, 'USAGE') OR has_schema_privilege(n.oid, 'CREATE, USAGE') As things stand, a non-superuser won't see public, pg_catalog, nor even information_schema itself in this view, which seems a tad silly. I agree it would make sense to change this. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Strange Windows problem, lock_timeout test request
On 2013-01-31 19:38, Tom Lane wrote: Zoltán Böszörményi z...@cybertec.at writes: Thanks. A question though: how does make check or make installcheck choose between the *.out and its different *_N.out incarnations? I couldn't find traces of prepared_xacts_1.out in any file saying this is the one to be used in this-and-this configuration. Does the procedure check against all versions and report the least different one? Exactly. This is documented, see the regression tests chapter of the SGML manual. regards, tom lane Thanks. I tested my patch with installcheck and installcheck-parallel using max_prepared_transactions=0 in the server and it passed that way too. Best regards, Zoltán Böszörményi -- Zoltán Böszörményi Cybertec Schönig & Schönig GmbH Gröhrmühlgasse 26 A-2700 Wiener Neustadt, Austria Web: http://www.postgresql-support.de http://www.postgresql.at/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] json api WIP patch
On 1/10/13 6:42 PM, Andrew Dunstan wrote: This updated patch contains all the intended functionality, including operators for the json_get_path functions, so you can say things like select jsonval->array['f1','0','f2'] ... I would like to not create any -> operators, so that that syntax could be used in the future for method invocation or something similar (it's in the SQL standard). I also don't find the proposed use to be very intuitive. You invented lots of other function names -- why not invent a few more for this purpose that are clearer? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] json api WIP patch
On 01/31/2013 05:06 PM, Peter Eisentraut wrote: On 1/10/13 6:42 PM, Andrew Dunstan wrote: This updated patch contains all the intended functionality, including operators for the json_get_path functions, so you can say things like select jsonval->array['f1','0','f2'] ... I would like to not create any -> operators, so that that syntax could be used in the future for method invocation or something similar (it's in the SQL standard). This is the first time I have heard that we should stay away from this. We have operators with this name in hstore, which is why I chose it. Have we officially deprecated '->'? I know we deprecated '=>', but I simply don't recall anything about '->'. I also don't find the proposed use to be very intuitive. You invented lots of other function names -- why not invent a few more for this purpose that are clearer? I'm happy to take opinions about this, and I expected some bikeshedding, but your reaction is contrary to everything others have told me. Mostly they love the operators. I guess that '~>' and '~>>' would work as well as '->' and '->>'. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)
From: Peter Eisentraut pete...@gmx.net On 1/30/13 9:11 AM, MauMau wrote: When I ran pg_ctl stop -mi against the primary, some applications connected to the primary did not stop. The cause was that the backends were deadlocked in quickdie() with some call stack like the following. I'm sorry to have left the stack trace file on the testing machine, so I'll show you the precise stack trace tomorrow. I've had similar problems in the past: http://www.postgresql.org/message-id/1253704891.20834.8.ca...@fsopti579.f-secure.com The discussion there never quite concluded. But yes, you need to be prepared that in rare circumstances SIGQUIT won't succeed and you need to use SIGKILL. Thank you for sharing your experience. So you also considered making postmaster SIGKILL children like me, didn't you? I bet most of the people who encounter this problem would feel like that. It is definitely pg_ctl that needs to be prepared, not the users. It may not be easy to find out which postgres processes to SIGKILL if multiple instances are running on the same host. Just doing pkill postgres will unexpectedly terminate postgres of other instances. I would like to make a patch that changes SIGQUIT to SIGKILL when postmaster terminates children. Any other better ideas? Regards MauMau -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)
MauMau maumau...@gmail.com wrote: Just doing pkill postgres will unexpectedly terminate postgres of other instances. Not if you run each instance under a different OS user, and execute pkill with the right user. (Never use root for that!) This is just one of the reasons that you should not run multiple clusters on the same machine with the same OS user. -Kevin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] json api WIP patch
On Thu, Jan 31, 2013 at 4:20 PM, Andrew Dunstan and...@dunslane.net wrote: On 01/31/2013 05:06 PM, Peter Eisentraut wrote: On 1/10/13 6:42 PM, Andrew Dunstan wrote: This updated patch contains all the intended functionality, including operators for the json_get_path functions, so you can say things like select jsonval->array['f1','0','f2'] ... I would like to not create any -> operators, so that that syntax could be used in the future for method invocation or something similar (it's in the SQL standard). This is the first time I have heard that we should stay away from this. We have operators with this name in hstore, which is why I chose it. Have we officially deprecated '->'? I know we deprecated '=>', but I simply don't recall anything about '->'. I also don't find the proposed use to be very intuitive. You invented lots of other function names -- why not invent a few more for this purpose that are clearer? I'm happy to take opinions about this, and I expected some bikeshedding, but your reaction is contrary to everything others have told me. Mostly they love the operators. I guess that '~>' and '~>>' would work as well as '->' and '->>'. also hstore implements -> quick off-topic aside: is colon (:) reserved for any purpose as an operator in SQL? merlin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] json api WIP patch
Merlin Moncure mmonc...@gmail.com writes: On Thu, Jan 31, 2013 at 4:20 PM, Andrew Dunstan and...@dunslane.net wrote: On 01/31/2013 05:06 PM, Peter Eisentraut wrote: I would like to not create any -> operators, so that that syntax could be used in the future for method invocation or something similar (it's in the SQL standard). This is the first time I have heard that we should stay away from this. We have operators with this name in hstore, which is why I chose it. I'm not happy about this either. It's bad enough that we're thinking about taking away '=>', but to disallow '->' as well? My inclination is to just say no, we're not implementing that. Even if we remove the contrib operators named that way, it's insane to suppose that nobody has chosen these names for user-defined operators in their applications. quick off-topic aside: is colon (:) reserved for any purpose as an operator in SQL? We disallow it as an operator character, because of the conflict with parameter/variable syntax in ecpg and psql. It was allowed before PG 7.0, IIRC. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] json api WIP patch
On Jan 31, 2013, at 2:20 PM, Andrew Dunstan and...@dunslane.net wrote: I'm happy to take opinions about this, and I expected some bikeshedding, but your reaction is contrary to everything others have told me. Mostly they love the operators. I guess that '~>' and '~>>' would work as well as '->' and '->>'. Or '+>' and '+>>', since ~ is set very high and small by some fonts (where the fontmakers thought of it as a kind of superscript character). I suppose that := is out of the question? Best, David -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] json api WIP patch
On 01/31/2013 07:16 PM, David E. Wheeler wrote: On Jan 31, 2013, at 2:20 PM, Andrew Dunstan and...@dunslane.net wrote: I'm happy to take opinions about this, and I expected some bikeshedding, but your reaction is contrary to everything others have told me. Mostly they love the operators. I guess that '~>' and '~>>' would work as well as '->' and '->>'. Or '+>' and '+>>', since ~ is set very high and small by some fonts (where the fontmakers thought of it as a kind of superscript character). I suppose that := is out of the question? Even if it were I would not on any account use it. As an old Ada programmer my mind just revolts at the idea of using this for anything but assignment. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] json api WIP patch
Andrew Dunstan and...@dunslane.net writes: On 01/31/2013 07:16 PM, David E. Wheeler wrote: I suppose that := is out of the question? Even if it were I would not on any account use it. As an old Ada programmer my mind just revolts at the idea of using this for anything but assignment. Ada or no, its use in plpgsql would render that a seriously bad idea. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] json api WIP patch
On Jan 31, 2013, at 4:32 PM, Tom Lane t...@sss.pgh.pa.us wrote: Ada or no, its use in plpgsql would render that a seriously bad idea. I assumed that its use in function params would be the main reason not to use it. David -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] json api WIP patch
On 01/02/13 13:26, Andrew Dunstan wrote: On 01/31/2013 07:16 PM, David E. Wheeler wrote: On Jan 31, 2013, at 2:20 PM, Andrew Dunstan and...@dunslane.net wrote: I'm happy to take opinions about this, and I expected some bikeshedding, but your reaction is contrary to everything others have told me. Mostly they love the operators. I guess that '~>' and '~>>' would work as well as '->' and '->>'. Or '+>' and '+>>', since ~ is set very high and small by some fonts (where the fontmakers thought of it as a kind of superscript character). I suppose that := is out of the question? Even if it were I would not on any account use it. As an old Ada programmer my mind just revolts at the idea of using this for anything but assignment. cheers andrew Ancient Algol 60 programmer here, otherwise ditto!
Re: [HACKERS] Cascading replication: should we detect/prevent cycles?
If we're going to start installing safeguards against doing stupid things, there's a long list of scenarios that happen far more regularly than this ever will and cause far more damage. What's wrong with making it easier for sysadmins to troubleshoot things? Again, I'm not talking about erroring out, I'm talking about logging a warning. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Streaming-only cascading replica won't come up without writes on the master
Heikki, I thought this was only a 9.3 issue, but it turns out to be reproducible on 9.2.2. Basically, I did: 1. master is quiescent ... no writes occurring. 2. created cascading replica (reprep1) from replica (repmaster) 3. reprep1 remains in recovery mode until a write occurs on master I've been able to reproduce this several times on my laptop using postmasters running on different ports. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] parameter info?
What's the best way for me to find out if a given parameter of a function is a constant? The context is that it's expensive to process, and in most cases will in fact be a constant, so if it is in fact a constant I'd like to process it once and stash the results. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Cascading replication: should we detect/prevent cycles?
On 02/01/2013 12:01 PM, Josh Berkus wrote: If we're going to start installing safeguards against doing stupid things, there's a long list of scenarios that happen far more regularly than this ever will and cause far more damage. What's wrong with making it easier for sysadmins to troubleshoot things? Again, I'm not talking about erroring out, I'm talking about logging a warning. Or to put it another way: Robert, you just did a nobody wants that to me. I thought you were opposed to such things on this list. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] find libxml2 using pkg-config
On Mon, 2013-01-14 at 10:25 -0500, Tom Lane wrote: Peter Eisentraut pete...@gmx.net writes: The attached patch looks for pkg-config first, and finds libxml2 using that if available. Otherwise it falls back to using xml2-config. What happens if pkg-config is installed but doesn't know anything about xml2? I'd expect the code to fall back to the old method, but this patch doesn't appear to have any sanity check whatsoever on pkg-config's output. Updated patch to that effect

diff --git a/configure.in b/configure.in
index f31f7ef..81ac837 100644
--- a/configure.in
+++ b/configure.in
@@ -706,18 +706,24 @@ PGAC_ARG_BOOL(with, libxml, no, [build with XML support],
               [AC_DEFINE([USE_LIBXML], 1, [Define to 1 to build with XML support. (--with-libxml)])])

 if test "$with_libxml" = yes ; then
-  AC_CHECK_PROGS(XML2_CONFIG, xml2-config)
-  if test -n "$XML2_CONFIG"; then
-    for pgac_option in `$XML2_CONFIG --cflags`; do
-      case $pgac_option in
-        -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
-      esac
-    done
-    for pgac_option in `$XML2_CONFIG --libs`; do
-      case $pgac_option in
-        -L*) LDFLAGS="$LDFLAGS $pgac_option";;
-      esac
-    done
+  AC_CHECK_PROGS(PKG_CONFIG, pkg-config)
+  if test -n "$PKG_CONFIG" && $PKG_CONFIG --exists libxml-2.0; then
+    CPPFLAGS="$CPPFLAGS `$PKG_CONFIG libxml-2.0 --cflags-only-I`"
+    LDFLAGS="$LDFLAGS `$PKG_CONFIG libxml-2.0 --libs-only-L`"
+  else
+    AC_CHECK_PROGS(XML2_CONFIG, xml2-config)
+    if test -n "$XML2_CONFIG"; then
+      for pgac_option in `$XML2_CONFIG --cflags`; do
+        case $pgac_option in
+          -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+        esac
+      done
+      for pgac_option in `$XML2_CONFIG --libs`; do
+        case $pgac_option in
+          -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+        esac
+      done
+    fi
   fi
 fi

diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 22e6cf1..09c1d09 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -892,11 +892,18 @@
      <title>Configuration</title>
      </para>

      <para>
-      Libxml installs a program <command>xml2-config</command> that
-      can be used to detect the required compiler and linker
-      options.  PostgreSQL will use it automatically if found.  To
-      specify a libxml installation at an unusual location, you can
-      either set the environment variable
+      To detect the required compiler and linker options, PostgreSQL will
+      query <command>pkg-config</command>, if it is installed and knows
+      about libxml2.  Otherwise the program <command>xml2-config</command>,
+      which is installed by libxml, will be used if it is found.  Use
+      of <command>pkg-config</command> is preferred, because it can deal
+      with multi-arch installations better.
+     </para>
+
+     <para>
+      To specify a libxml installation at an unusual location, you can
+      either set <command>pkg-config</command>-related environment variables
+      (see its documentation), or set the environment variable
       <envar>XML2_CONFIG</envar> to point to the
       <command>xml2-config</command> program belonging to the installation,
       or use the options
-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] recursive view syntax
On Fri, 2013-01-18 at 10:00 -0500, Stephen Frost wrote:
> I've done another review of this patch and it looks pretty good to me.
> My only complaint is that there isn't a single comment inside
> makeRecursiveViewSelect().

Added some of that and committed.

> One other thought is - I'm guessing this isn't going to work:
>
>     CREATE RECURSIVE VIEW name (columns) AS WITH ... SELECT ...;
>
> Does the spec explicitly allow or disallow that? Should we provide any
> comments about it?

That works fine, AFAICT. It just becomes another level of WITH.
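The "another level of WITH" point can be illustrated outside PostgreSQL: `CREATE RECURSIVE VIEW name (cols) AS query` is shorthand for an ordinary view whose body is a `WITH RECURSIVE` query. A minimal sketch using Python's bundled sqlite3 (SQLite has no `RECURSIVE VIEW` syntax, so the hand-expanded form is written directly; the `nums` view is a made-up example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hand-expanded equivalent of the hypothetical shorthand
#   CREATE RECURSIVE VIEW nums (n) AS
#     SELECT 1 UNION ALL SELECT n + 1 FROM nums WHERE n < 5;
# i.e. a plain view whose body is "another level of WITH":
conn.execute("""
    CREATE VIEW nums AS
    WITH RECURSIVE t(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM t WHERE n < 5
    )
    SELECT n FROM t
""")

rows = [r[0] for r in conn.execute("SELECT n FROM nums ORDER BY n")]
print(rows)  # [1, 2, 3, 4, 5]
```

Wrapping the body in a further `WITH ... SELECT ...` simply nests one more CTE level inside the view definition, which is why the quoted case works.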
Re: [HACKERS] parameter info?
hello

2013/2/1 Andrew Dunstan <and...@dunslane.net>:
> What's the best way for me to find out if a given parameter of a
> function is a constant? The context is that it's expensive to process,
> and in most cases will in fact be a constant, so if it is in fact a
> constant I'd like to process it once and stash the results.

you can look into the parser tree

see src of get_fn_expr_argtype(fcinfo->flinfo, 0);

Regards

Pavel

> cheers
>
> andrew
Re: [HACKERS] parameter info?
Andrew Dunstan <and...@dunslane.net> writes:
> What's the best way for me to find out if a given parameter of a
> function is a constant? The context is that it's expensive to process,
> and in most cases will in fact be a constant, so if it is in fact a
> constant I'd like to process it once and stash the results.

I think we added get_fn_expr_arg_stable() for precisely that purpose.

			regards, tom lane
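The pattern Andrew is after — detect that an argument is stable across calls, do the expensive processing once, and stash the result for reuse (in a PostgreSQL C function, typically cached in fn_extra behind a get_fn_expr_arg_stable() check) — can be sketched as a language-neutral analogue. This Python version is purely illustrative; `ExpensiveMatcher` and the regex-compilation "expensive step" are invented stand-ins, not PostgreSQL API:

```python
import re

class ExpensiveMatcher:
    """Cache the processed form of a parameter for as long as it stays
    the same, mirroring the fn_extra caching pattern from the thread."""

    def __init__(self):
        self._cached_arg = None
        self._processed = None

    def match(self, pattern, text):
        # "Same stable argument as last call?" stands in for the
        # get_fn_expr_arg_stable() check; reprocess only on change.
        if pattern != self._cached_arg:
            self._processed = re.compile(pattern)  # the expensive step
            self._cached_arg = pattern
        return bool(self._processed.search(text))

m = ExpensiveMatcher()
print(m.match(r"\d+", "abc123"))  # True; compiles the pattern once
print(m.match(r"\d+", "abcdef"))  # False; reuses the cached result
```

The key precondition, which get_fn_expr_arg_stable() verifies in the real case, is that the cached argument genuinely cannot change between calls; otherwise the cache must be invalidated as above.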
Re: [HACKERS] Suggestion: Issue warning when calling SET TRANSACTION outside transaction block
On Wednesday, January 30, 2013 6:53 AM Morten Hustveit wrote:
> Hi!
>
> Calling SET TRANSACTION ISOLATION LEVEL ... outside a transaction block
> has no effect. This is unlike LOCK ... and DECLARE foo CURSOR FOR ...,
> which both raise an error. This is also unlike MySQL, where such a
> statement will affect the next transaction performed. There's some risk
> of data corruption, as a user might assume he's working on a snapshot,
> while in fact he's not.

The behavior of SET TRANSACTION ISOLATION LEVEL ... needs to be compared
with SET LOCAL ...
These commands are used to set a property of the current transaction in
which they are executed. The usage becomes clear with the function below,
where SET LOCAL is used to set a property of the current transaction:

Create or Replace function temp_trans() Returns boolean AS
$$
Declare
    sync_status boolean;
Begin
    Set LOCAL synchronous_commit = off;
    show synchronous_commit into sync_status;
    return sync_status;
End;
$$ Language plpgsql;

> I suggest issuing a warning, notice or error message when SET
> TRANSACTION ... is called outside a transaction block, possibly
> directing the user to the SET SESSION CHARACTERISTICS AS TRANSACTION ...
> syntax.

It is already mentioned in the documentation that the SET TRANSACTION
command is used to set characteristics of the current transaction
(http://www.postgresql.org/docs/9.2/static/sql-set-transaction.html).
I think the user should be aware of the effect before using SET commands,
as these are used at various levels (TRANSACTION, SESSION, ...).

With Regards,
Amit Kapila.
Re: [HACKERS] proposal - assign result of query to psql variable
Hello

Could you please look at the updated version? It follows Tom's proposal
and is significantly reduced.

Thank you

Pavel Stehule

2013/1/28 Pavel Stehule <pavel.steh...@gmail.com>:
> Hello
>
> 2013/1/26 Tom Lane <t...@sss.pgh.pa.us>:
>> Andrew Dunstan <and...@dunslane.net> writes:
>>> +1. This looks quite nifty. Maybe useful too to have a default prefix
>>> via some setting.
>>
>> Meh. I would expect that \gset :foo would work to specify a computed
>> prefix if you wanted it --- isn't that sufficient indirection? I'm not
>> thrilled with further expanding the set of magic variables in psql.
>>
>> 			regards, tom lane
>
> here is a patch related to your proposal
>
> Regards
>
> Pavel
Re: [HACKERS] Re: proposal: a width specification for s specifier (format function), fix behave when positional and ordered placeholders are used
Hello

Minor update: fix alignment of NULL for %L.

Regards

Pavel

2013/1/31 Pavel Stehule <pavel.steh...@gmail.com>:
> Hello
>
> 2013/1/29 Dean Rasheed <dean.a.rash...@gmail.com>:
>> On 29 January 2013 08:19, Dean Rasheed <dean.a.rash...@gmail.com> wrote:
>>> * The width field is optional, even if the '-' flag is specified. So
>>> '%-s' is perfectly legal and should be interpreted as '%s'. The
>>> current implementation treats it as a width of 0, which is wrong.
>>
>> Oh, but of course a width of 0 is the same as no width at all, so the
>> current code is correct after all. That's what happens if I try to
>> write emails before I've had my caffeine :-)
>>
>> I think my other points remain valid though. It would still be neater
>> to parse the flags separately from the width field, and then all
>> literal numbers that appear in the format should be positive.
>
> I am sending rewritten code. Indirect width (* and *n$) is supported.
> It needs a little bit more code.
>
> There is a new question: what should be the result of
> format('%2$*1$s', NULL, 'hello')? It raises an exception now, but I can
> modify that if we reach some agreement.
>
> Regards
>
> Pavel

>> Regards,
>> Dean

format_width_20130201.patch
Description: Binary data
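The semantics under discussion — an optional '-' flag parsed separately from the width, a literal or indirect (`*n$`) width, positional (`n$`) value arguments, and width 0 behaving like no width at all — can be prototyped compactly. This Python sketch models only the `%s` case; `pg_format` is a made-up name, and it treats a NULL indirect width as 0 for illustration, whereas the patch in the thread raises an exception there:

```python
import re

# %[n$][-][width|*n$]s  -- flags and width captured as separate groups,
# per Dean's suggestion, so all literal numbers stay positive.
SPEC = re.compile(r'%(?:(\d+)\$)?(-)?(?:(\d+)|\*(\d+)\$)?s')

def pg_format(fmt, *args):
    out, last, next_arg = [], 0, 0
    for m in SPEC.finditer(fmt):
        out.append(fmt[last:m.start()])
        argn, minus, width, widthn = m.groups()
        if widthn is not None:            # indirect width: *n$
            w = args[int(widthn) - 1]
            w = 0 if w is None else int(w)
        else:                             # literal width, or none at all
            w = int(width) if width else 0
        if argn is not None:              # positional value: n$
            val = args[int(argn) - 1]
        else:                             # next ordered value
            val = args[next_arg]
            next_arg += 1
        s = '' if val is None else str(val)
        out.append(s.ljust(w) if minus else s.rjust(w))
        last = m.end()
    out.append(fmt[last:])
    return ''.join(out)

print(pg_format('%-5s|', 'ab'))           # 'ab   |'
print(pg_format('%-s', 'x'))              # 'x'  (width 0 == no width)
print(pg_format('%2$*1$s', 10, 'hello'))  # '     hello'
```

Dean's "caffeine" observation falls out directly: a missing width and an explicit width of 0 take the same code path, so `'%-s'` and `'%s'` produce identical output.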
Re: [HACKERS] Cascading replication: should we detect/prevent cycles?
On Sunday, 27 January 2013, Robert Haas <robertmh...@gmail.com> wrote:
> If we're going to start installing safeguards against doing stupid
> things, there's a long list of scenarios that happen far more regularly
> than this ever will and cause far more damage.

+1

--
Regards,
Peter Geoghegan