[PERFORM] Best practice when reindexing in production
Hi,

I have a database with quite a lot of data (millions of rows) that is heavily updated all the time. Once a day I would like to reindex my database (and maybe re-cluster it - I don't know if that's worth it yet?). I need the database to be usable while doing this (both read and write). I see that there is no REINDEX CONCURRENTLY - so what approach would you suggest I take on this?

Regards
Niels Kristian
Re: [PERFORM] Best practice when reindexing in production
On Wed, May 29, 2013 at 8:24 AM, Niels Kristian Schjødt wrote:
> Hi,
>
> I have a database with quite a lot of data (millions of rows) that is heavily
> updated all the time. Once a day I would like to reindex my database (and
> maybe re-cluster it - I don't know if that's worth it yet?). I need the
> database to be usable while doing this (both read and write). I see that
> there is no REINDEX CONCURRENTLY - so what approach would you suggest
> I take on this?

If you have the disk space, it's generally a good idea to do a CREATE INDEX CONCURRENTLY, and then rename the new one into place (typically in a transaction). (If your app, documentation or DBA doesn't mind the index changing names, you don't need to rename, of course; you can just drop the old one.)

--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
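For a single index, the pattern Magnus describes looks roughly like the sketch below (table and index names are made up for illustration; the DROP/RENAME transaction takes a brief exclusive lock on the old index, so keep it short):

-- Build the replacement without blocking writes (cannot run inside a transaction block)
CREATE INDEX CONCURRENTLY accounts_email_idx_new ON accounts (email);

-- Swap the new index into place under the old name
BEGIN;
DROP INDEX accounts_email_idx;
ALTER INDEX accounts_email_idx_new RENAME TO accounts_email_idx;
COMMIT;

Note that if CREATE INDEX CONCURRENTLY fails partway through, it leaves an INVALID index behind that has to be dropped manually before retrying.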
Re: [PERFORM] Best practice when reindexing in production
On Wed, May 29, 2013 at 2:26 PM, Magnus Hagander wrote:
> On Wed, May 29, 2013 at 8:24 AM, Niels Kristian Schjødt wrote:
> > Hi,
> >
> > I have a database with quite a lot of data (millions of rows) that is heavily
> > updated all the time. Once a day I would like to reindex my database (and
> > maybe re-cluster it - I don't know if that's worth it yet?). I need the
> > database to be usable while doing this (both read and write). I see that
> > there is no REINDEX CONCURRENTLY - so what approach would you suggest
> > I take on this?
>
> If you have the disk space, it's generally a good idea to do a CREATE
> INDEX CONCURRENTLY, and then rename the new one into place (typically
> in a transaction). (If your app, documentation or dba doesn't mind the
> index changing names, you don't need to rename of course, you can just
> drop the old one).

If you wish to recluster it online you can also look into pg_repack - https://github.com/reorg/pg_repack - a great tool that allows you to repack and reindex your database without going offline.
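For reference, a typical pg_repack invocation looks roughly like the line below. This is only a sketch with a made-up table and database name; check pg_repack --help for the exact options supported by your version:

# Rewrite one table and its indexes online, in primary-key / clustered order
pg_repack --table public.cars mydatabase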
Re: [PERFORM] Best practice when reindexing in production
I looked at pg_repack - however - is it "safe" for production? It seems very intrusive and black-box-like to me...

On 29/05/2013 at 14:30, Armand du Plessis wrote:
> On Wed, May 29, 2013 at 2:26 PM, Magnus Hagander wrote:
> > On Wed, May 29, 2013 at 8:24 AM, Niels Kristian Schjødt wrote:
> > > Hi,
> > >
> > > I have a database with quite a lot of data (millions of rows) that is heavily
> > > updated all the time. Once a day I would like to reindex my database (and
> > > maybe re-cluster it - I don't know if that's worth it yet?). I need the
> > > database to be usable while doing this (both read and write). I see that
> > > there is no REINDEX CONCURRENTLY - so what approach would you suggest
> > > I take on this?
> >
> > If you have the disk space, it's generally a good idea to do a CREATE
> > INDEX CONCURRENTLY, and then rename the new one into place (typically
> > in a transaction). (If your app, documentation or dba doesn't mind the
> > index changing names, you don't need to rename of course, you can just
> > drop the old one).
>
> If you wish to recluster it online you can also look into pg_repack -
> https://github.com/reorg/pg_repack - a great tool that allows you to repack
> and reindex your database without going offline.
Re: [PERFORM] Best practice when reindexing in production
Thanks

Can you think of a way to select all the indexes programmatically from a table and run CREATE INDEX CONCURRENTLY for each of them, so that I don't have to hardcode every index name + create statement?

On 29/05/2013 at 14:26, Magnus Hagander wrote:
> On Wed, May 29, 2013 at 8:24 AM, Niels Kristian Schjødt wrote:
>> Hi,
>>
>> I have a database with quite a lot of data (millions of rows) that is heavily
>> updated all the time. Once a day I would like to reindex my database (and
>> maybe re-cluster it - I don't know if that's worth it yet?). I need the
>> database to be usable while doing this (both read and write). I see that
>> there is no REINDEX CONCURRENTLY - so what approach would you suggest
>> I take on this?
>
> If you have the disk space, it's generally a good idea to do a CREATE
> INDEX CONCURRENTLY, and then rename the new one into place (typically
> in a transaction). (If your app, documentation or dba doesn't mind the
> index changing names, you don't need to rename of course, you can just
> drop the old one).
>
> --
> Magnus Hagander
> Me: http://www.hagander.net/
> Work: http://www.redpill-linpro.com/
Re: [PERFORM] Best practice when reindexing in production
On Wed, May 29, 2013 at 8:41 AM, Niels Kristian Schjødt wrote:
> Thanks
>
> Can you think of a way to select all the indexes programmatically from a
> table and run CREATE INDEX CONCURRENTLY for each of them, so that I don't
> have to hardcode every index name + create statement ?

You can use something like SELECT pg_get_indexdef(indexrelid) FROM pg_index. You will need to filter it not to include system indexes, TOAST, etc., and then insert the CONCURRENTLY keyword, but it should give you a good starting point.

--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
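A sketch of that approach (the schema filter and the regexp are illustrative; note this keeps each index's original name, so you would still need to rename the new index before running the output):

-- Generate CREATE INDEX CONCURRENTLY statements for all user indexes
SELECT regexp_replace(pg_get_indexdef(i.indexrelid),
                      'CREATE( UNIQUE)? INDEX',
                      'CREATE\1 INDEX CONCURRENTLY') || ';'
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'pg_toast', 'information_schema');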
Re: [PERFORM] Best practice when reindexing in production
On Wed, May 29, 2013 at 9:41 AM, Niels Kristian Schjødt <nielskrist...@autouncle.com> wrote:
> Thanks
>
> Can you think of a way to select all the indexes programmatically from a
> table and run CREATE INDEX CONCURRENTLY for each of them, so that I don't
> have to hardcode every index name + create statement ?

You could do something like this (which assumes you use simple names for your indexes, where simple ~ [a-z_][a-z0-9_]*):

SELECT regexp_replace(i.indexdef, '^CREATE( UNIQUE)? INDEX (.*) ON (.*)', 'CREATE\1 INDEX CONCURRENTLY tmp_\2 ON \3;') || E'\n'
    || E'BEGIN;\n'
    || 'DROP INDEX ' || i.indexname || E';\n'
    || 'ALTER INDEX tmp_' || i.indexname || ' RENAME TO ' || i.indexname || E';\n'
    || E'COMMIT;\n'
FROM pg_indexes i
WHERE schemaname !~ '^(pg_|information_schema$)';

Although this one is *really simple* and *error prone*, because it does not consider at least two things: indexes that back constraints and indexes that have FKs depending on them. For the first case, you only need to change the constraint to use the new index and adjust the DROP command. As for the second case, you would need to remove the FKs, drop the old index and recreate the FKs (inside a transaction, of course), but this could be really slow; a plain REINDEX for this case would be simpler and perhaps faster.

Regards,
--
Matheus de Oliveira
Analista de Banco de Dados
Dextra Sistemas - MPS.Br nível F!
www.dextra.com.br/postgres
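For the constraint-backed case, the swap can be done with ALTER TABLE ... ADD CONSTRAINT ... USING INDEX (available since 9.1). A rough sketch with made-up names, assuming no foreign keys reference the constraint being dropped:

-- Build the replacement index first, without blocking writes
CREATE UNIQUE INDEX CONCURRENTLY tmp_orders_pkey ON orders (id);

BEGIN;
ALTER TABLE orders DROP CONSTRAINT orders_pkey;
-- The new index is renamed to match the constraint name as part of this command
ALTER TABLE orders ADD CONSTRAINT orders_pkey PRIMARY KEY USING INDEX tmp_orders_pkey;
COMMIT;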
Re: [PERFORM] Best practice when reindexing in production
> You could do something like this (which assumes you use simple names for your indexes, where simple ~ [a-z_][a-z0-9_]*):
>
> SELECT regexp_replace(i.indexdef, '^CREATE( UNIQUE)? INDEX (.*) ON (.*)', 'CREATE\1 INDEX CONCURRENTLY tmp_\2 ON \3;') || E'\n'
>     || E'BEGIN;\n'
>     || 'DROP INDEX ' || i.indexname || E';\n'
>     || 'ALTER INDEX tmp_' || i.indexname || ' RENAME TO ' || i.indexname || E';\n'
>     || E'COMMIT;\n'
> FROM pg_indexes i
> WHERE schemaname !~ '^(pg_|information_schema$)';
>
> Although this one is *really simple* and *error prone*, because it does not consider at least two things: indexes that back constraints and indexes that have FKs depending on them. For the first case, you only need to change the constraint to use the new index and adjust the DROP command. As for the second case, you would need to remove the FKs, drop the old index and recreate the FKs (inside a transaction, of course), but this could be really slow; a plain REINDEX for this case would be simpler and perhaps faster.
>
> Regards,
> --
> Matheus de Oliveira
> Analista de Banco de Dados
> Dextra Sistemas - MPS.Br nível F!
> www.dextra.com.br/postgres

I must be missing something here.
But how is it that a FK depends on the index?
I understand FK lookups work much faster with an index supporting the FK than without it, but you could have a FK without an index (on the "child" table).
So, what gives?

Regards,
Igor Neyman
Re: [PERFORM] Best practice when reindexing in production
On Wed, May 29, 2013 at 10:55 AM, Igor Neyman wrote:
>
> > You could do something like this (which assumes you use simple names for
> > your indexes, where simple ~ [a-z_][a-z0-9_]*):
> >
> > SELECT regexp_replace(i.indexdef, '^CREATE( UNIQUE)? INDEX (.*) ON (.*)',
> >        'CREATE\1 INDEX CONCURRENTLY tmp_\2 ON \3;') || E'\n'
> >     || E'BEGIN;\n'
> >     || 'DROP INDEX ' || i.indexname || E';\n'
> >     || 'ALTER INDEX tmp_' || i.indexname || ' RENAME TO ' || i.indexname || E';\n'
> >     || E'COMMIT;\n'
> > FROM pg_indexes i
> > WHERE schemaname !~ '^(pg_|information_schema$)';
> >
> > Although this one is *really simple* and *error prone*, because it does
> > not consider at least two things: indexes that back constraints and indexes
> > that have FKs depending on them. For the first case, you only need to change
> > the constraint to use the new index and adjust the DROP command. As for the
> > second case, you would need to remove the FKs, drop the old index and recreate
> > the FKs (inside a transaction, of course), but this could be really slow; a
> > plain REINDEX for this case would be simpler and perhaps faster.
>
> =
>
> I must be missing something here.
> But how is it that a FK depends on the index?
> I understand FK lookups work much faster with an index supporting the FK than
> without it, but you could have a FK without an index (on the "child" table).
> So, what gives?

AFAIK, when you create a FK, PostgreSQL associates it with a UNIQUE index on the target table. It creates an entry in pg_depend (I don't know if somewhere else), and when you try to drop the index, even if there is an identical one that PostgreSQL could use, it will throw an error. You can easily check this:

postgres=# CREATE TABLE parent(id int);
CREATE TABLE
postgres=# CREATE UNIQUE INDEX parent_idx1 ON parent (id);
CREATE INDEX
postgres=# CREATE TABLE child(idparent int REFERENCES parent (id));
CREATE TABLE
postgres=# CREATE UNIQUE INDEX parent_idx2 ON parent (id);
CREATE INDEX
postgres=# DROP INDEX parent_idx1;
ERROR:  cannot drop index parent_idx1 because other objects depend on it
DETAIL:  constraint child_idparent_fkey on table child depends on index parent_idx1
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

BTW, I do think PostgreSQL could verify whether there is another candidate index for the FK. Is it on the TODO list? Should it be?

Regards,
--
Matheus de Oliveira
Analista de Banco de Dados
Dextra Sistemas - MPS.Br nível F!
www.dextra.com.br/postgres
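Before dropping an index, you can check which constraints hang off it via pg_constraint.conindid. A sketch, using the parent_idx1 index from the example above:

-- List constraints (including FKs declared on other tables) that depend on this index
SELECT conname, contype, conrelid::regclass AS declared_on
FROM pg_constraint
WHERE conindid = 'parent_idx1'::regclass;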
Re: [GENERAL] [PERFORM] Very slow inner join query Unacceptable latency.
We re-tested these settings a few times after our initial test and realized that the execution time I posted was skewed, because the execution plan was cached after the initial run. Subsequent executions ran in a little over a second. There ended up being no significant saving by setting these parameters. Un-cached, the query ran in about 55 seconds.

Original Message
Subject: Re: [GENERAL] [PERFORM] Very slow inner join query Unacceptable latency.
From: Scott Marlowe
Date: Fri, May 24, 2013 3:03 pm
To: fburg...@radiantblue.com
Cc: Jaime Casanova, psql performance list, Postgres General

On Fri, May 24, 2013 at 3:44 PM, wrote:
> Total runtime: 1606.728 ms   1.6 seconds <- very good response time
> improvement
>
> (7 rows)
>
> Questions:
>
> Any concerns with setting these conf variables you recommended; work_mem,
> random_page_cost server-wide (in postgresql.conf)?
>
> Thanks so much!!!

Yes, 500MB is pretty high, especially if you have a lot of connections. Try it with it back down to 16MB and see how it does. work_mem is per sort, so a setting as high as 500MB can exhaust memory on the machine under heavy load.

--
To understand recursion, one must first understand recursion.
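If a handful of queries genuinely need a large sort/hash area, one option (a sketch, not something from this thread; names are illustrative) is to keep the server-wide default modest and raise work_mem only in the scope that needs it:

-- postgresql.conf keeps a conservative default, e.g. work_mem = 16MB

-- Raise it only for the session (or transaction) running the expensive query
SET work_mem = '256MB';
-- ... run the big query here ...
RESET work_mem;

-- Or pin it to a role used by reporting jobs
ALTER ROLE reporting SET work_mem = '256MB';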
Re: [PERFORM] Best practice when reindexing in production
From: Matheus de Oliveira [mailto:matioli.math...@gmail.com]
Sent: Wednesday, May 29, 2013 10:19 AM
To: Igor Neyman
Cc: Niels Kristian Schjødt; Magnus Hagander; pgsql-performance@postgresql.org list
Subject: Re: [PERFORM] Best practice when reindexing in production

On Wed, May 29, 2013 at 10:55 AM, Igor Neyman wrote:

You could do something like this (which assumes you use simple names for your indexes, where simple ~ [a-z_][a-z0-9_]*):

SELECT regexp_replace(i.indexdef, '^CREATE( UNIQUE)? INDEX (.*) ON (.*)', 'CREATE\1 INDEX CONCURRENTLY tmp_\2 ON \3;') || E'\n'
    || E'BEGIN;\n'
    || 'DROP INDEX ' || i.indexname || E';\n'
    || 'ALTER INDEX tmp_' || i.indexname || ' RENAME TO ' || i.indexname || E';\n'
    || E'COMMIT;\n'
FROM pg_indexes i
WHERE schemaname !~ '^(pg_|information_schema$)';

Although this one is *really simple* and *error prone*, because it does not consider at least two things: indexes that back constraints and indexes that have FKs depending on them. For the first case, you only need to change the constraint to use the new index and adjust the DROP command. As for the second case, you would need to remove the FKs, drop the old index and recreate the FKs (inside a transaction, of course), but this could be really slow; a plain REINDEX for this case would be simpler and perhaps faster.

=

I must be missing something here.
But how is it that a FK depends on the index?
I understand FK lookups work much faster with an index supporting the FK than without it, but you could have a FK without an index (on the "child" table).
So, what gives?

AFAIK, when you create a FK, PostgreSQL associates it with a UNIQUE index on the target table. It creates an entry in pg_depend (I don't know if somewhere else), and when you try to drop the index, even if there is an identical one that PostgreSQL could use, it will throw an error. You can easily check this:

postgres=# CREATE TABLE parent(id int);
CREATE TABLE
postgres=# CREATE UNIQUE INDEX parent_idx1 ON parent (id);
CREATE INDEX
postgres=# CREATE TABLE child(idparent int REFERENCES parent (id));
CREATE TABLE
postgres=# CREATE UNIQUE INDEX parent_idx2 ON parent (id);
CREATE INDEX
postgres=# DROP INDEX parent_idx1;
ERROR:  cannot drop index parent_idx1 because other objects depend on it
DETAIL:  constraint child_idparent_fkey on table child depends on index parent_idx1
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

BTW, I do think PostgreSQL could verify whether there is another candidate index for the FK. Is it on the TODO list? Should it be?

Regards,
--
Matheus de Oliveira
Analista de Banco de Dados
Dextra Sistemas - MPS.Br nível F!
www.dextra.com.br/postgres

So, it's about the index on the parent table that's used for the unique (or PK) constraint and referenced by the FK on the child table.
From your previous email I thought that an index on the child table supporting the FK (which is mostly created for performance purposes) could not be dropped without disabling the FK. My bad.

Igor Neyman
Re: [PERFORM] Best practice when reindexing in production
On 29/05/13 14:24, Niels Kristian Schjødt wrote:
> Hi,
>
> I have a database with quite a lot of data (millions of rows) that is heavily
> updated all the time. Once a day I would like to reindex my database (and
> maybe re-cluster it - I don't know if that's worth it yet?). I need the
> database to be usable while doing this (both read and write). I see that
> there is no REINDEX CONCURRENTLY - so what approach would you suggest
> I take on this?

Hi. Since you still don't know whether it is worth it or not, I would strongly suggest that you test this out first. Simply creating an index next to the old one with the same options (but a different name) and comparing sizes would be simple.

Second, if the new index is significantly smaller than the old one, I suggest that you try to crank up the autovacuum daemon instead of blindly dropping and creating indexes; this will help mitigate the bloat you're seeing accumulate in the test above. Cranking up autovacuum has significantly less impact on concurrent queries and can help keep the database in a shape where regular reindexing shouldn't be necessary. Autovacuum has built-in logic to sleep in between operations in order to reduce the I/O load on your system for the benefit of concurrent users. The duplicate-index approach will pull all the resources it can get, and concurrent users may suffer while you do it.

--
Jesper
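A quick way to run the comparison Jesper suggests, and to make autovacuum more aggressive on the hot table - a sketch with made-up table, index, and parameter values:

-- Build a twin index online and compare sizes
CREATE INDEX CONCURRENTLY cars_price_idx_fresh ON cars (price);

SELECT pg_size_pretty(pg_relation_size('cars_price_idx'))       AS current_size,
       pg_size_pretty(pg_relation_size('cars_price_idx_fresh')) AS freshly_built_size;

DROP INDEX CONCURRENTLY cars_price_idx_fresh;  -- DROP ... CONCURRENTLY needs 9.2+

-- Per-table autovacuum tuning so dead rows are cleaned sooner (example values only)
ALTER TABLE cars SET (autovacuum_vacuum_scale_factor = 0.05,
                      autovacuum_vacuum_cost_delay = 10);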
Re: [PERFORM] Best practice when reindexing in production
On Wed, 2013-05-29 at 19:12 +0200, Jesper Krogh wrote:
> Second, if the new index is significantly smaller than the old one, I
> suggest that you try to crank up the autovacuum daemon instead of
> blindly dropping and creating indexes; this will help mitigate the
> bloat you're seeing accumulate in the test above.

In my experience vacuum/autovacuum just doesn't reclaim any space from the indexes, which accumulate bloat indefinitely. I've tried to work around that in so many ways: the show-stopper has been the impossibility of dropping FK-referenced indexes in a concurrent way, coupled with VALIDATE CONSTRAINT not doing what is advertised and taking an exclusive lock.

My solution has been to become the pg_repack maintainer. YMMV. Just don't expect vacuum to reduce the indexes' size: it doesn't.

--
Daniele
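For context, the FK dance being alluded to looks roughly like the sketch below, reusing the parent/child names from earlier in the thread. NOT VALID keeps the re-add cheap, with the caveat that in the releases discussed here VALIDATE CONSTRAINT still takes a heavy lock, which is exactly the complaint above:

-- Detach the FK so the referenced unique index can be dropped and rebuilt
ALTER TABLE child DROP CONSTRAINT child_idparent_fkey;

-- ... rebuild the parent's unique index here ...

-- Re-add the FK without scanning child immediately...
ALTER TABLE child ADD CONSTRAINT child_idparent_fkey
    FOREIGN KEY (idparent) REFERENCES parent (id) NOT VALID;

-- ...and validate it afterwards (lock behaviour varies by version)
ALTER TABLE child VALIDATE CONSTRAINT child_idparent_fkey;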
Re: [PERFORM] Best practice when reindexing in production
On Wednesday, May 29, 2013 06:25:21 PM Daniele Varrazzo wrote:
> My solution has been to become pg_repack maintainer. YMMV. Just don't
> expect vacuum to reduce the indexes size: it doesn't.

It's not supposed to. It is supposed to keep them from indefinitely growing, though, which it does reasonably well at.
Re: [PERFORM] Best practice when reindexing in production
On Wed, May 29, 2013 at 6:47 PM, Alan Hodgson wrote:
> On Wednesday, May 29, 2013 06:25:21 PM Daniele Varrazzo wrote:
>> My solution has been to become pg_repack maintainer. YMMV. Just don't
>> expect vacuum to reduce the indexes size: it doesn't.
>
> It's not supposed to. It is supposed to keep them from indefinitely growing,
> though, which it does reasonably well at.

My experience is different. I've repeated this test often. This is PG 9.1:

piro=# create table test (id serial primary key);
NOTICE:  CREATE TABLE will create implicit sequence "test_id_seq" for serial column "test.id"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "test_pkey" for table "test"
CREATE TABLE
piro=# insert into test (id) select generate_series(1,1000);
INSERT 0 1000

The table size is:

piro=# select pg_size_pretty(pg_relation_size('test'::regclass));
 pg_size_pretty
----------------
 306 MB
(1 row)

...and the index size is:

piro=# select pg_size_pretty(pg_relation_size('test_pkey'::regclass));
 pg_size_pretty
----------------
 171 MB
(1 row)

piro=# delete from test where id <= 990;
DELETE 990
piro=# select pg_size_pretty(pg_relation_size('test'::regclass)), pg_size_pretty(pg_relation_size('test_pkey'::regclass));
 pg_size_pretty | pg_size_pretty
----------------+----------------
 306 MB         | 171 MB
(1 row)

My statement is that vacuum doesn't reclaim any space. Maybe sometimes in the tables, but never in the index, in my experience.

piro=# vacuum test;
VACUUM
piro=# select pg_size_pretty(pg_relation_size('test'::regclass)), pg_size_pretty(pg_relation_size('test_pkey'::regclass));
 pg_size_pretty | pg_size_pretty
----------------+----------------
 306 MB         | 171 MB
(1 row)

Vacuum full is a different story, but doesn't work online.

piro=# vacuum full test;
VACUUM
piro=# select pg_size_pretty(pg_relation_size('test'::regclass)), pg_size_pretty(pg_relation_size('test_pkey'::regclass));
 pg_size_pretty | pg_size_pretty
----------------+----------------
 3144 kB        | 1768 kB

In our live system we have a small table of active records in a transient state. No record stays there for a long time. The size of the table stays reasonable (but not really stable), but not the indexes. One of them (fondly labeled "the index of death") is 5-6 columns wide and, given enough time, regularly grows into the gigabytes for a table on the order of ~100k records, only tamed by a pg_repack treatment (previously by a create concurrently and drop).

--
Daniele
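If you want numbers on how bloated an index actually is, rather than just its size, the pgstattuple extension can help. A sketch against the test index from the session above:

CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- avg_leaf_density drops and leaf_fragmentation climbs as a btree bloats
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('test_pkey');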
[PERFORM] Performance bug in prepared statement binding in 9.2?
Folks,

I'm seeing what may be a major performance bug in BIND in 9.2.4.

We have a client who has an application which uses Tomcat + Hibernate + JDBC. They are in the process of upgrading this application from 8.4.17 to 9.2.4. As part of this, they have been doing performance testing, and 9.2 is coming out MUCH worse than 8.4. The problem appears to be bind/plan time. Their application does not use prepared queries usefully, doing parse, bind, execute on every query cycle.

Here are the overall timings for 29 test cycles (cycle 1 has been omitted). As you can see, parse+execute times are pretty much constant, as are application think times, but bind times vary quite a lot. In 8.4, the 29 cycles are consistently 4.5 min to 5.75 min long. In 9.2, which is the chart below, they are all over the place.

Definitions:
cycle: test cycle #, arbitrary. Each cycle does the same amount of "work" measured in rows of data.
non_bind_time: time spent in PARSE and EXECUTE.
bind_time: time spent in BIND
app_time: time spent outside Postgres

All times are in minutes:

 cycle | non_bind_time | bind_time | app_time | total_time
-------+---------------+-----------+----------+------------
     2 |          0.79 |      0.62 |     3.19 |       4.60
     3 |          0.77 |      0.87 |     3.13 |       4.77
     4 |          0.76 |      1.10 |     3.16 |       5.02
     5 |          0.76 |      1.26 |     3.08 |       5.10
     6 |          0.72 |      1.40 |     3.08 |       5.20
     7 |          0.72 |      1.51 |     3.05 |       5.28
     8 |          0.70 |      1.60 |     3.07 |       5.37
     9 |          0.73 |      1.72 |     3.05 |       5.50
    10 |          0.71 |      1.84 |     3.05 |       5.60
    11 |          0.70 |      1.96 |     3.07 |       5.73
    12 |          0.74 |      2.11 |     3.08 |       5.93
    13 |          0.74 |      3.58 |     3.08 |       7.40
    14 |          0.73 |      2.41 |     3.08 |       6.22
    15 |          0.75 |      4.15 |     3.08 |       7.98
    16 |          0.74 |      2.69 |     3.09 |       6.52
    17 |          0.76 |      4.68 |     3.09 |       8.53
    18 |          0.74 |      2.99 |     3.09 |       6.82
    19 |          0.77 |      5.24 |     3.11 |       9.12
    20 |          0.75 |      3.29 |     3.08 |       7.12
    21 |          0.78 |      5.90 |     3.14 |       9.82
    22 |          0.78 |      3.57 |     3.12 |       7.47
    23 |          0.76 |      6.17 |     3.10 |      10.03
    24 |          0.77 |      6.61 |     3.10 |      10.48
    25 |          0.77 |      3.97 |     3.11 |       7.85
    26 |          0.77 |      5.24 |     3.12 |       9.13
    27 |          0.76 |      7.15 |     3.12 |      11.03
    28 |          0.76 |      4.37 |     3.10 |       8.23
    29 |          0.78 |      4.48 |     3.12 |       8.38
    30 |          0.76 |      7.73 |     3.11 |      11.60

I pulled out some of the queries with the greatest variance in bind time. Unexpectedly, they are not particularly complex. Here's the anonymized plan for a query which in the logs took 80 ms to bind: http://explain.depesz.com/s/YSj

Nested Loop  (cost=8.280..26.740 rows=1 width=289)
  ->  Nested Loop  (cost=8.280..18.450 rows=1 width=248)
        ->  Hash Join  (cost=8.280..10.170 rows=1 width=140)
              Hash Cond: (foxtrot2kilo_oscar.quebec_seven = kilo_juliet1kilo_oscar.sierra_quebec)
              ->  Seq Scan on foxtrot november  (cost=0.000..1.640 rows=64 width=25)
              ->  Hash  (cost=8.270..8.270 rows=1 width=115)
                    ->  Index Scan using quebec_six on victor_india sierra_oscar  (cost=0.000..8.270 rows=1 width=115)
                          Index Cond: (quebec_seven = 10079::bigint)
        ->  Index Scan using alpha on seven_tango lima  (cost=0.000..8.270 rows=1 width=108)
              Index Cond: ((xray = 10079::bigint) AND (golf = 10002::bigint))
  ->  Index Scan using six on india victor_romeo  (cost=0.000..8.280 rows=1 width=41)
        Index Cond: (quebec_seven = seven_victor0kilo_oscar.delta)

As you can see, it's not particularly complex; it only joins 4 tables, and it has 2 parameters. This database does have some horrible ugly queries with up to 500 parameters, but inexplicably those don't take a particularly long time to bind.

Note that I have not been able to reproduce this long bind time interactively, but it's 100% reproducible in the test.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
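For anyone trying to reproduce this kind of measurement: with statement logging turned all the way up, the server logs separate duration lines for the parse, bind, and execute steps of the extended protocol. A sketch of the relevant postgresql.conf settings (not necessarily what was used for this test):

# postgresql.conf - log every statement phase with its timing
log_min_duration_statement = 0   # emits "duration: ... ms  parse/bind/execute ..." lines
log_line_prefix = '%m [%p] '     # timestamp + pid, handy when aggregating by session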
Re: [PERFORM] Performance bug in prepared statement binding in 9.2?
On 05/29/2013 05:14 PM, Josh Berkus wrote:
> Here are the overall timings for 29 test cycles (cycle 1 has been omitted).
> As you can see, parse+execute times are pretty much constant, as are
> application think times, but bind times vary quite a lot. In 8.4, the
> 29 cycles are consistently 4.5 min to 5.75 min long. In 9.2, which is the
> chart below, they are all over the place.

To be clear, the TOTAL times for 8.4 are 4.5 to 5.75 minutes long. Bind times in 8.4 are a more-or-less constant 0.75 minutes.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com