Re: [GENERAL] strange behavior, hoping for an explanation
Chris Travers wrote:
> I have found recently that tables in certain contexts seem to have a "name" pseudocolumn. I was wondering if there is any documentation as to what this is and what it signifies.
>
> postgres=# CREATE table TEST2 (a text, b text);
> CREATE TABLE
> postgres=# INSERT INTO test2 values ('', '');
> INSERT 0 1
> postgres=# select t.name FROM test2 t;
>  name
> ------
>  (,)
> (1 row)
>
> However:
>
> postgres=# select name FROM test2 t;
> ERROR:  column "name" does not exist
> LINE 1: select name FROM test2 t;
>
> This isn't making any sense to me. Are there certain circumstances where a tuple is cast to something like varchar(63)? Does this pose pitfalls for any columns named 'name' in other contexts?

I tried your example in 9.1.1 and 9.2devel, and both gave me
ERROR:  column t.name does not exist
as expected.

Yours,
Laurenz Albe

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] PQexecParams with binary resultFormat vs BINARY CURSOR
Hey Mateusz,

2011/11/11 Mateusz Łoskot mate...@loskot.net:
> Hi,
>
> Considering query for binary data stored directly in tables using libpq API, I'm trying to understand what is the difference between specifying binary format in functions like PQexecParams and use of BINARY CURSOR.
>
> For example, with query like this:
>
>     SELECT large_image FROM tbl;
>
> where large_image is a custom type, is there a big difference between binary format specified to libpq and use of BINARY CURSOR? Is it client-side binary vs server-side binary processing?
>
> Simply, I'd like to avoid textual-binary conversions at any stage. (Endianness is not an issue here.)
>
> Best regards,

"...The concept of a binary cursor as such is thus obsolete when using extended query protocol — any cursor can be treated as either text or binary. ..."

from http://www.postgresql.org/docs/9.1/static/sql-declare.html

--
// Dmitriy.
Re: [GENERAL] PQexecParams with binary resultFormat vs BINARY CURSOR
Hi Dmitriy,

2011/11/11 Dmitriy Igrishin dmit...@gmail.com:
> 2011/11/11 Mateusz Łoskot mate...@loskot.net:
>> Considering query for binary data stored directly in tables using libpq API, I'm trying to understand what is the difference between specifying binary format in functions like PQexecParams and use of BINARY CURSOR. [...] Simply, I'd like to avoid textual-binary conversions at any stage. (Endianness is not an issue here.)
>
> "...The concept of a binary cursor as such is thus obsolete when using extended query protocol — any cursor can be treated as either text or binary. ..."
>
> from http://www.postgresql.org/docs/9.1/static/sql-declare.html

Thanks, this is interesting. I've been reading more about this and the picture seems to be clear:

"Note: The choice between text and binary output is determined by the format codes given in Bind, regardless of the SQL command involved."
http://www.postgresql.org/docs/9.1/static/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY

However, I'm not sure how I can utilise this feature of the extended query protocol with the libpq API. Concretely, how do I translate "the choice between text and binary output is determined by the format codes given in Bind" into a libpq call? Does it mean the protocol automagically switches between text/binary depending on the format code (0|1) specified to PQexecParams and friends?

Side question: is this a new feature of the extended query protocol in the 9.x line?

Best regards,
--
Mateusz Loskot, http://mateusz.loskot.net
Re: [GENERAL] PQexecParams with binary resultFormat vs BINARY CURSOR
2011/11/11 Mateusz Łoskot mate...@loskot.net:
> [earlier discussion trimmed]
>
> Does it mean the protocol automagically switches between text/binary depending on the format code (0|1) specified to PQexecParams and friends?

The Bind ('B') message carries an array of format codes for the parameters, and format codes for the result columns as well. So, yes: all you need is to pass the format codes to ::PQexecParams (paramFormats for the inputs, resultFormat for the output columns) and libpq takes care of the rest.

> Side question, is this new feature of the extended query protocol in 9.x line?

The extended query sub-protocol was introduced in protocol version 3.0 (PostgreSQL 7.4), so it is not specific to the 9.x line.

--
// Dmitriy.
[GENERAL] weird pg_statistic problem
Hello,

this morning I experienced a weird problem with our pgsql database (9.0.3): while performing a simple query, I received the following error:

Nov 11 10:24:09 host postgres[23395]: [7-1] ERROR:  missing chunk number 0 for toast value 550556127 in pg_toast_2619

so I tried to find out which relation is corrupted with the following query:

DB=# select * from pg_class pg1 inner join pg_class pg2 on pg1.oid=pg2.reltoastrelid where pg1.relname='pg_toast_2619';
-[ RECORD 1 ]---+---------------------------
relname         | pg_toast_2619
relnamespace    | 99
reltype         | 10949
reloftype       | 0
relowner        | 10
relam           | 0
relfilenode     | 11583
reltablespace   | 0
relpages        | 137
reltuples       | 343
reltoastrelid   | 0
reltoastidxid   | 2841
relhasindex     | t
relisshared     | f
relistemp       | f
relkind         | t
relnatts        | 3
relchecks       | 0
relhasoids      | f
relhaspkey      | f
relhasexclusion | f
relhasrules     | f
relhastriggers  | f
relhassubclass  | f
relfrozenxid    | 949968032
relacl          |
reloptions      |
relname         | pg_statistic
relnamespace    | 11
reltype         | 10730
reloftype       | 0
relowner        | 10
relam           | 0
relfilenode     | 11581
reltablespace   | 0
relpages        | 550
reltuples       | 3084
reltoastrelid   | 2840
reltoastidxid   | 0
relhasindex     | t
relisshared     | f
relistemp       | f
relkind         | r
relnatts        | 22
relchecks       | 0
relhasoids      | f
relhaspkey      | f
relhasexclusion | f
relhasrules     | f
relhastriggers  | f
relhassubclass  | f
relfrozenxid    | 949968032
relacl          | {postgres=arwdDxt/postgres}
reloptions      |

Apparently, pg_statistic is having issues. Then I performed an ANALYZE VERBOSE on the whole DB to rebuild the statistics, and, after a while, I obtained an error:

ERROR:  duplicate key value violates unique constraint "pg_statistic_relid_att_inh_index"
DETAIL:  Key (starelid, staattnum, stainherit)=(531526103, 7, f) already exists.

It seems ANALYZE is violating the unique index on the pg_statistic table:

DB=# \d pg_statistic_relid_att_inh_index
Index "pg_catalog.pg_statistic_relid_att_inh_index"
   Column   |   Type   | Definition
------------+----------+------------
 starelid   | oid      | starelid
 staattnum  | smallint | staattnum
 stainherit | boolean  | stainherit
unique, btree, for table "pg_catalog.pg_statistic"

DB=# \d+ pg_statistic
Table "pg_catalog.pg_statistic"
   Column    |   Type   | Modifiers | Storage  | Description
-------------+----------+-----------+----------+-------------
 starelid    | oid      | not null  | plain    |
 staattnum   | smallint | not null  | plain    |
 stainherit  | boolean  | not null  | plain    |
 stanullfrac | real     | not null  | plain    |
 stawidth    | integer  | not null  | plain    |
 stadistinct | real     | not null  | plain    |
 stakind1    | smallint | not null  | plain    |
 stakind2    | smallint | not null  | plain    |
 stakind3    | smallint | not null  | plain    |
 stakind4    | smallint | not null  | plain    |
 staop1      | oid      | not null  | plain    |
 staop2      | oid      | not null  | plain    |
 staop3      | oid      | not null  | plain    |
 staop4      | oid      | not null  | plain    |
 stanumbers1 | real[]   |           | extended |
 stanumbers2 | real[]   |           | extended |
 stanumbers3 | real[]   |           | extended |
 stanumbers4 | real[]   |           | extended |
 stavalues1  | anyarray |           | extended |
 stavalues2  | anyarray |           | extended |
 stavalues3  | anyarray |           | extended |
 stavalues4  | anyarray |           | extended |
Indexes:
    "pg_statistic_relid_att_inh_index" UNIQUE, btree (starelid, staattnum, stainherit)
Has OIDs: no

At this point, I'm stuck. How should I proceed? Is it possible to drop/recreate the pg_statistic table? What else could I try?

Thanks a lot for your help,
Enrico
[GENERAL] FK disappearing
So I have a strange issue on one of our live systems. \d+ table shows me the FKs with cascaded deletes, but querying pg_trigger doesn't show me any specific triggers for the FK. Is that possible? Or am I missing something here?

The psql version is 8.3.7.

--
GJ
Re: [GENERAL] FK disappearing
On 11 November 2011 12:25, Gregg Jaskiewicz gryz...@gmail.com wrote:
> So I have a strange issue on one of our live systems. \d+ table shows me the FKs with cascaded deletes, but querying pg_trigger doesn't show me any specific triggers for the FK. Is that possible? Or am I missing something here? The psql version is 8.3.7.

What seems to have happened is that the box ran out of disk space (it's a test box), and postgresql (obviously) kicked the bucket. It was restarted, and worked fine until I noticed the FK problem.

I was trying to get a backup just now, and got this:

pg_dump: failed sanity check, parent table OID 1026802 of pg_rewrite entry OID 1026968 not found

Questions:
- how to fix it
- is it something that's been fixed in 8.3.x, where x > 7?

--
GJ
Re: [GENERAL] How to inquiry a nest result?
I appreciate your help!
[GENERAL] Passing NULL to a function called with OidFunctionCall3
Hey all,

I'm trying to make use of OidFunctionCall3 and am wondering how to resolve an issue. I need to be able to pass a NULL to the function called with OidFunctionCall3, and am having difficulty figuring out how.

{{{
/* build fcnarg */
for (i = 0; i < set_count; i++) {
    if (_haspixel[i]) {
        fcnarg[i] = Float8GetDatum(_pixel[i]);
        POSTGIS_RT_DEBUGF(4, "arg %d is %f", i, _pixel[i]);
    }
    else {
        fcnarg[i] = (Datum) NULL;
        POSTGIS_RT_DEBUGF(4, "arg %d is NULL", i);
    }
}

datum = OidFunctionCall3(fcnoid, fcnarg[0], fcnarg[1], fcnuserarg);
}}}

The above does not work (segfault). What is the correct way to pass a NULL to the function being called?

Thanks,
Bborie

--
Bborie Park
Programmer
Center for Vectorborne Diseases
UC Davis
530-752-8380
bkp...@ucdavis.edu
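(For the archives, since this question went unanswered in this digest: the OidFunctionCallN wrappers never set the null flags, so a NULL cannot be passed through them; a strict pass-by-reference function then dereferences a zero Datum, which matches the segfault. The usual approach is to fill in a FunctionCallInfoData directly and set argnull. Below is a hedged sketch of that approach, untested backend code, not Bborie's actual fix. It reuses the variable names from the snippet above; the InitFunctionCallInfoData signature shown is the 9.1 one, earlier releases have no collation argument, and with explicit argnull the caller must honor flinfo.fn_strict itself.)

```c
/* Hedged sketch, backend-only code (untested): pass NULLs by building the
 * FunctionCallInfoData directly instead of using OidFunctionCall3().
 * Variable names (fcnoid, set_count, _haspixel, _pixel, fcnuserarg) are
 * taken from the snippet above. */
FmgrInfo             flinfo;
FunctionCallInfoData fcinfo;
Datum                result;
bool                 have_null = false;
int                  i;

fmgr_info(fcnoid, &flinfo);
/* 9.1 signature; on <= 9.0 drop the InvalidOid collation argument */
InitFunctionCallInfoData(fcinfo, &flinfo, 3, InvalidOid, NULL, NULL);

for (i = 0; i < set_count; i++)
{
    if (_haspixel[i])
    {
        fcinfo.arg[i] = Float8GetDatum(_pixel[i]);
        fcinfo.argnull[i] = false;
    }
    else
    {
        fcinfo.arg[i] = (Datum) 0;
        fcinfo.argnull[i] = true;    /* this is how a NULL is passed */
        have_null = true;
    }
}
fcinfo.arg[2] = fcnuserarg;
fcinfo.argnull[2] = false;

/* FunctionCallInvoke() does not check strictness itself, so with
 * explicit argnull flags the caller has to enforce it: */
if (flinfo.fn_strict && have_null)
    result = (Datum) 0;              /* strict function yields SQL NULL */
else
{
    result = FunctionCallInvoke(&fcinfo);
    if (fcinfo.isnull)
        result = (Datum) 0;          /* function itself returned NULL */
}
```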
[GENERAL] Hotel reservation for FOSDEM 2012 - Deadline: December 31st, 2011
Hi all,

like in the last years we will have a devroom at FOSDEM 2012. We also look forward to having a booth. We made a group reservation in the Agenda Louise hotel:

Hotel Agenda Louise
rue de Florence 6
B-1000 Brussels
Tel: +32.2.539.00.31
Fax: +32.2.539.00.63
www.hotel-agenda.com

This time, as a good customer, we got a special price.

From Friday to Sunday included:
- 80 EUR per night for a single room
- 90 EUR per night for a double room

From Monday to Thursday included:
- 106 EUR per night for a single room
- 120 EUR per night for a double room

Breakfast, taxes and services are included.

If you would like to book a room, please send me an email. Include your name, email address, room type, and arrival and leave dates.

Important: please send me this information by December 31st, 2011!

Thanks

--
Andreas 'ads' Scherbaum
German PostgreSQL User Group
European PostgreSQL User Group - Board of Directors
Volunteer Regional Contact, Germany - PostgreSQL Project
[GENERAL] : postgres: archiver process failed on 0000000100000F72000000F0
Hello,

the WAL archive process in our production is not working.

[postgres@hostname]$ ps -ef | grep archive
postgres 12077 16015  0 10:19 pts/4    00:00:00 grep archive
postgres 31126 27607  0 Nov10 ?        00:01:18 postgres: archiver process   failed on 0000000100000F72000000F0

I see WAL files getting accumulated in the pg_xlog location, and the status in archive_status is shown as .ready.

Is there any way we can restart only the archiving process without disturbing the actual cluster?

Actually, we had killed a process using kill -9 and the db went into recovery mode and was back up and running. We have no issues with the application as well.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 f
(1 row)

Please help to resolve this!

Thanks
VB
Re: [GENERAL] : postgres: archiver process failed on 0000000100000F72000000F0
This problem has been resolved!

Thanks
VB

On Fri, Nov 11, 2011 at 9:58 PM, Venkat Balaji venkat.bal...@verse.in wrote:
> Hello,
>
> The WAL archive process in our production is not working. [...] Please help to resolve this!
Re: [GENERAL] : postgres: archiver process failed on 0000000100000F72000000F0
On Fri, Nov 11, 2011 at 09:58:56PM +0530, Venkat Balaji wrote:
> Hello,
>
> The WAL archive process in our production is not working.
>
> [postgres@hostname]$ ps -ef | grep archive
> postgres 12077 16015  0 10:19 pts/4    00:00:00 grep archive
> postgres 31126 27607  0 Nov10 ?        00:01:18 postgres: archiver process   failed on 0000000100000F72000000F0
>
> I see WAL files getting accumulated in the pg_xlog location, and the status in archive_status is shown as .ready.
>
> Is there any way we can restart only the archiving process without disturbing the actual cluster?
> [...]

If you fix the problem causing the archiver process to fail, it will start processing the logs again. Check your postgres logs, and check your archive_command in your postgresql.conf and make sure that everything is correct there.

Dave
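(A note for the archives on Dave's point: the archiver keeps retrying the oldest .ready segment, so once archive_command starts succeeding the backlog drains by itself, and a change to archive_command only needs a reload, e.g. "pg_ctl reload", not a restart of the cluster. A minimal sketch of a working setup follows; the archive path is illustrative and not from the thread.)

```
# postgresql.conf — illustrative archiving setup
wal_level = archive
archive_mode = on                 # changing this one does require a restart
# archive_command must return exit status 0 only when the segment is safely
# stored; 'test ! -f' refuses to overwrite an existing archive file
archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'
```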
Re: [GENERAL] weird pg_statistic problem
Enrico Sirola enrico.sir...@gmail.com writes:
> this morning I experienced a weird problem with our pgsql database (9.0.3): while performing a simple query, I receive the following error:
> Nov 11 10:24:09 host postgres[23395]: [7-1] ERROR:  missing chunk number 0 for toast value 550556127 in pg_toast_2619

Was this a transient error, or repeatable? If it was transient, it's probably a recently-fixed issue:
http://archives.postgresql.org/pgsql-hackers/2011-10/msg01366.php
http://archives.postgresql.org/pgsql-committers/2011-11/msg00014.php

> ERROR:  duplicate key value violates unique constraint "pg_statistic_relid_att_inh_index"
> DETAIL:  Key (starelid, staattnum, stainherit)=(531526103, 7, f) already exists.

This seems unrelated. Can you repeat this one? If so, try REINDEX'ing that index and see if the problem goes away.

It'd be worth your while to update to 9.0.5 --- we fixed a fair number of potential data-corruption issues since January.

			regards, tom lane
[GENERAL] dblink build problem…must be a lesson here...
Hi,

I compiled pg 9.1.1 on my Mac OS X 10.7.2 box this afternoon, and when I attempted to build dblink I got:

eagle:dblink postgres$ export CC="gcc -arch i386"
eagle:dblink postgres$ make
gcc -arch i386 -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wformat-security -fno-strict-aliasing -fwrapv -I../../src/interfaces/libpq -I. -I. -I../../src/include -c -o dblink.o dblink.c
In file included from ../../src/include/postgres.h:48,
                 from dblink.c:33:
../../src/include/utils/elog.h:69:28: error: utils/errcodes.h: No such file or directory
dblink.c:62:28: error: utils/fmgroids.h: No such file or directory

The files errcodes.h and fmgroids.h appear to be links to nowhere… I scratched my head for a while. I finally recalled that after I ran configure and built postgresql, I moved the source folder to a different location. This evidently breaks some of the links… not good. When I moved the folder back to its original location, dblink.so built with no problems. Be careful out there :)

It has been a long time since I have messed with installing dblink. Do I need to install the SQL again? If so, where and how do I reinstall the SQL interfaces? 'Taint clear to me when dblink needs to be reinstalled.

Jerry
Re: [GENERAL] VACUUM touching file but not updating relation
Thom Brown t...@linux.com writes:
> On 11 November 2011 00:55, Tom Lane t...@sss.pgh.pa.us wrote:
>> Thom Brown t...@linux.com writes:
>>> I just noticed that the VACUUM process touches a lot of relations (affects mtime) but for one file I looked at, it didn't change. This doesn't always happen, and many relations aren't touched at all.
>>
>> No immediate ideas as to why the mtime would change if the file contents didn't. It seems like there must be a code path that marked a buffer dirty without having changed it, but we're usually pretty careful about that.
>
> I checked all files where the time stamp of the file had changed, but had the same MD5 sum. I used the list in the query you mentioned and get:
> [ mostly indexes ]

Hmm, is this on a hot standby master? I observe that _bt_delitems_vacuum() unconditionally dirties the page and writes a WAL record, whether it has anything to do or not; and that if XLogStandbyInfoActive() then btvacuumscan will indeed call it despite there being (probably) nothing useful to do. Seems like that could be improved. The comment explaining why it's necessary to do that doesn't make any sense to me, either.

			regards, tom lane
Re: [GENERAL] VACUUM touching file but not updating relation
On 11 November 2011 23:28, Tom Lane t...@sss.pgh.pa.us wrote:
> Hmm, is this on a hot standby master?

It's using a wal_level of hot_standby and has max_wal_senders set to 2, but it's not actually replicating to anywhere else. But if I comment out both of these, restart, then compare pre-vacuum and post-vacuum, I get the following results for unchanged but touched items:

test=# select oid, relname from pg_class where relfilenode in (11680,11682,11684,11686,11690,16530);
  oid  |       relname
-------+---------------------
  2619 | pg_statistic
  2840 | pg_toast_2619
  2841 | pg_toast_2619_index
 16530 | cows2
(4 rows)

The items which didn't match a result in this instance were 11686 and 11690, which is surprising since they both have a visibility map and free space map, indicating they're some kind of table.

> I observe that _bt_delitems_vacuum() unconditionally dirties the page and writes a WAL record, whether it has anything to do or not; and that if XLogStandbyInfoActive() then btvacuumscan will indeed call it despite there being (probably) nothing useful to do. Seems like that could be improved. The comment explaining why it's necessary to do that doesn't make any sense to me, either.

Well, the effect, in the single instances I've checked, is certainly more pronounced for hot_standby, but there still appear to be some occurrences at the minimal wal_level too.

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [GENERAL] PQexecParams with binary resultFormat vs BINARY CURSOR
2011/11/11 Dmitriy Igrishin dmit...@gmail.com:
> 2011/11/11 Mateusz Łoskot mate...@loskot.net:
>> Does it mean the protocol automagically switches between text/binary depending on format code (0|1) specified to PQexecParams and friends?
>
> The Bind(F) message contains an array with the parameter format codes. So, yes, all you need is to pass the array with format codes to ::PQexecParams and libpq will let it go.

Dmitriy,

This is the answer I was looking for. Thanks! I'm glad I don't have to fiddle with cursors directly.

Best regards,
--
Mateusz Loskot, http://mateusz.loskot.net
[GENERAL] Determine a function's volatility in C
Hey all,

I'm wondering if there is a way to determine a function's volatility in C. The function information provided through fmgr_info() doesn't provide it. Ideas?

Thanks,
Bborie

--
Bborie Park
Programmer
Center for Vectorborne Diseases
UC Davis
530-752-8380
bkp...@ucdavis.edu
Re: [GENERAL] Determine a function's volatility in C
Hello

2011/11/12 Bborie Park bkp...@ucdavis.edu:
> Hey all,
>
> I'm wondering if there is a way to determine a function's volatility in C. The function information provided through fmgr_info() doesn't provide it. Ideas?

You should look at the pg_proc table. Search the postgresql code:

tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcoid));
if (!HeapTupleIsValid(tuple))
    elog(ERROR, "cache lookup failed for function %u", funcoid);
proc = (Form_pg_proc) GETSTRUCT(tuple);
...
switch (proc->provolatile)
{
    case PROVOLATILE_IMMUTABLE:
        appendStringInfoString(buf, "IMMUTABLE");
        break;
    case PROVOLATILE_STABLE:
        appendStringInfoString(buf, "STABLE");
        break;
    case PROVOLATILE_VOLATILE:
        break;
}
...
ReleaseSysCache(tuple);

Regards
Pavel Stehule
Re: [GENERAL] Determine a function's volatility in C
Bborie Park bkp...@ucdavis.edu writes:
> I'm wondering if there is a way to determine a function's volatility in C. The function information provided through fmgr_info() doesn't provide it. Ideas?

extern char func_volatile(Oid funcid);

(Most catalog-lookup convenience functions of this ilk can be found in lsyscache.c, and that's also a good source of prototypes if you need a field that's not exposed by one of those functions.)

			regards, tom lane
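(For the archives, a hedged sketch of how the lsyscache.c helper Tom mentions looks at a call site. This is backend-only code, not runnable standalone; funcoid stands for whatever pg_proc OID the caller already has.)

```c
#include "utils/lsyscache.h"    /* func_volatile() */
#include "catalog/pg_proc.h"    /* PROVOLATILE_* constants */

char    vol = func_volatile(funcoid);

switch (vol)
{
    case PROVOLATILE_IMMUTABLE:
        /* result depends only on the arguments; safe to pre-evaluate */
        break;
    case PROVOLATILE_STABLE:
        /* stable within a single statement/snapshot */
        break;
    case PROVOLATILE_VOLATILE:
        /* must be re-evaluated on every call */
        break;
}
```

Compared with Pavel's snippet, func_volatile() does the SearchSysCache1/GETSTRUCT/ReleaseSysCache dance internally and just returns the provolatile character.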
[GENERAL] Large values for duration of COMMITs and slow queries. Due to large WAL config values?
Postgres 9.1.1, master with 2 slaves via streaming replication.

I've enabled slow query logging at 150 ms and am seeing a large number of slow COMMITs:

2011-11-12 06:55:02 UTC pid:30897 (28/0-0) LOG:  duration: 232.398 ms  statement: COMMIT
2011-11-12 06:55:08 UTC pid:30896 (27/0-0) LOG:  duration: 1078.789 ms  statement: COMMIT
2011-11-12 06:55:09 UTC pid:30842 (15/0-0) LOG:  duration: 2395.432 ms  statement: COMMIT
2011-11-12 06:55:09 UTC pid:30865 (23/0-0) LOG:  duration: 2395.153 ms  statement: COMMIT
2011-11-12 06:55:09 UTC pid:30873 (17/0-0) LOG:  duration: 2390.106 ms  statement: COMMIT

The machine has 16GB of RAM and plenty of disk space. What I think might be the relevant settings:

wal_buffers = 16MB
checkpoint_segments = 32
max_wal_senders = 10
checkpoint_completion_target = 0.9
wal_keep_segments = 1024
maintenance_work_mem = 256MB
work_mem = 88MB
shared_buffers = 3584MB
effective_cache_size = 10GB

Recently we bumped up wal_keep_segments and checkpoint_segments because we wanted to run long-running queries on the slaves and were receiving cancellation errors there. I think the master was recycling WAL logs out from underneath the slaves and thus canceling the queries. Hence I believed I needed to crank up those values. It seems to work: I can run long queries (for statistics/reports) on the slaves just fine.

But I now wonder if it's having an adverse effect on the master, given these slow commit times and other slow queries (e.g. primary key lookups on tables with not that many records), which seem to have increased since the configuration change.

I am watching iostat, and sure enough, when %iowait gets to 15 or so, a bunch more slow queries get logged. So I can see it's disk related. I just don't know what the underlying cause is.

Any pointers would be appreciated. Thank you.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
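(No reply appears in this digest, so a hedged note for the archives: wal_keep_segments only controls how long old segments are retained for standbys and should not itself slow the master, but a larger checkpoint_segments means rarer, larger checkpoints, and COMMIT stalls that line up with %iowait spikes usually point at checkpoint or fsync contention. Also, on 9.1 standby-side cancellations are often vacuum-conflict related, for which hot_standby_feedback on the standby is the targeted fix rather than a huge wal_keep_segments. Settings commonly examined in this situation are sketched below; the values are illustrative, not a recommendation for this particular machine.)

```
# postgresql.conf — illustrative diagnostic/tuning sketch
log_checkpoints = on                  # check whether slow COMMITs line up
                                      # with checkpoint start/end in the log
checkpoint_completion_target = 0.9    # already set: spreads checkpoint I/O
checkpoint_timeout = 5min             # default; raising it makes checkpoints
                                      # rarer but each one larger
# synchronous_commit = off            # removes the commit-time WAL flush wait;
                                      # only acceptable if losing the last few
                                      # transactions on a crash is tolerable
```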