Re: [PERFORM] tables with 300+ partitions

2007-10-31 Thread Steven Flatt
On 10/31/07, Pablo Alcaraz [EMAIL PROTECTED] wrote: It was a program inserting into the base table. The program ran in 200+ threads and every thread inserted data into it. Every thread inserts a row every 3 seconds approx. (or they used to), but when I put more partitions the insert speed went

Re: [PERFORM] tables with 300+ partitions

2007-10-30 Thread Steven Flatt
On 10/30/07, Pablo Alcaraz [EMAIL PROTECTED] wrote: I did some testing. I created an empty table with 300 partitions. Then I inserted some rows into it and the performance was SLOW too. Is the problem with inserting to the partitioned table or selecting from it? It sounds like inserting is the

Re: [PERFORM] When/if to Reindex

2007-08-24 Thread Steven Flatt
On 8/24/07, Mark Kirkwood [EMAIL PROTECTED] wrote: Tom Lane wrote: The fly in the ointment is that after collecting the pg_index definition of the index, plancat.c also wants to know how big it is --- it calls RelationGetNumberOfBlocks. And that absolutely does look at the physical

Re: [PERFORM] When/if to Reindex

2007-08-24 Thread Steven Flatt
On 8/24/07, Tom Lane [EMAIL PROTECTED] wrote: Steven Flatt [EMAIL PROTECTED] writes: Why do we even need to consider calling RelationGetNumberOfBlocks or looking at the pg_class.relpages entry? My understanding of the expected behaviour is that while a reindex is happening, all queries

Re: [PERFORM] When/if to Reindex

2007-08-24 Thread Steven Flatt
On 8/24/07, Tom Lane [EMAIL PROTECTED] wrote: You might be able to work around it for now by faking such a reindex by hand; that is, create a duplicate new index under a different name using CREATE INDEX CONCURRENTLY, then exclusive-lock the table for just long enough to drop the old index
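
The manual index swap Tom Lane describes can be sketched as follows (table and index names are invented for illustration; note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block, so it is issued on its own before the brief locked swap):

```sql
-- Build the replacement index without blocking writers:
CREATE INDEX CONCURRENTLY foo_col_idx_new ON foo (col);

-- Swap under an exclusive lock held only for the drop/rename:
BEGIN;
LOCK TABLE foo IN ACCESS EXCLUSIVE MODE;
DROP INDEX foo_col_idx;
ALTER INDEX foo_col_idx_new RENAME TO foo_col_idx;
COMMIT;
```

The table is only unavailable for the instant it takes to drop the old index and rename the new one, rather than for the full rebuild.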

Re: [PERFORM] When/if to Reindex

2007-08-23 Thread Steven Flatt
On 8/22/07, Gregory Stark [EMAIL PROTECTED] wrote: postgres=# create table test (i integer); CREATE TABLE postgres=# insert into test select generate_series(1,1000); INSERT 0 1000 postgres=# create or replace function slow(integer) returns integer as 'begin perform pg_sleep(0); return $1;

Re: [PERFORM] When/if to Reindex

2007-08-22 Thread Steven Flatt
It makes more sense for us to have ~1 hour's worth of reindexing afterwards during which read performance on that partition is compromised. So, based on the docs, I was expecting read performance to be compromised during a reindex, specifically reads would not be allowed to use the index:

Re: [PERFORM] When/if to Reindex

2007-08-09 Thread Steven Flatt
On 8/8/07, Tom Lane [EMAIL PROTECTED] wrote: I'd make the same remark about Steven's case: if possible, don't create the indexes at all until you've loaded the table fully. We considered this, however in some of our 12-hour partitions, there are upwards of 50 or 60 million rows near the end

Re: [PERFORM] Vacuum looping?

2007-07-30 Thread Steven Flatt
On 7/28/07, Jim C. Nasby [EMAIL PROTECTED] wrote: What are your vacuum_cost_* settings? If you set those too aggressively you'll be in big trouble. autovacuum_vacuum_cost_delay = 100 autovacuum_vacuum_cost_limit = 200 These are generally fine, autovacuum keeps up, and there is minimal

[PERFORM] Vacuum looping?

2007-07-27 Thread Steven Flatt
Postgres 8.2.4. We have a large table, let's call it foo, whereby an automated process periodically inserts many (hundreds of thousands or millions) rows into it at a time. It's essentially INSERT INTO foo SELECT FROM another table WHERE some conditions. Recently, for whatever reason, the query

[PERFORM] When/if to Reindex

2007-07-18 Thread Steven Flatt
We're using Postgres 8.2.4. I'm trying to decide whether it's worthwhile to implement a process that does periodic reindexing. In a few ad hoc tests, where I've tried to set up data similar to how our application does it, I've noticed decent performance increases after doing a reindex as well

Re: [PERFORM] Database-wide VACUUM ANALYZE

2007-06-26 Thread Steven Flatt
On 6/25/07, Jim Nasby [EMAIL PROTECTED] wrote: If you set that to 2B, that means you're 2^31-2 billion-100 transactions away from a shutdown when autovac finally gets around to trying to run a wraparound vacuum on a table. If you have any number of large tables, that could be a big problem,

[PERFORM] Database-wide VACUUM ANALYZE

2007-06-21 Thread Steven Flatt
We recently upgraded a very large database (~550 GB) from 8.1.4 to 8.2.4 via a pg_dump and pg_restore. (Note that the restore took several days.) We had accepted the default settings: vacuum_freeze_min_age = 100 million, autovacuum_freeze_max_age = 200 million. Due to our very high transaction
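
On 8.2 (which introduced per-table freeze tracking in pg_class.relfrozenxid), how close each table is to forcing an anti-wraparound vacuum can be checked directly; a small diagnostic sketch:

```sql
-- Tables whose oldest unfrozen XID is most advanced; when age()
-- approaches autovacuum_freeze_max_age, autovacuum will force a
-- vacuum of that table regardless of activity thresholds.
SELECT relname, age(relfrozenxid)
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;
```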

[PERFORM] Very long SQL strings

2007-06-21 Thread Steven Flatt
I can't seem to find a definitive answer to this. It looks like Postgres does not enforce a limit on the length of an SQL string. Great. However, is there some point at which a query string becomes ridiculously too long and affects performance? Here's my particular case: consider an INSERT

Re: [PERFORM] Database-wide VACUUM ANALYZE

2007-06-21 Thread Steven Flatt
On 6/21/07, Francisco Reyes [EMAIL PROTECTED] wrote: Are you on FreeBSD by any chance? I think the FreeBSD port by default installs a script that does a daily vacuum. Yes, FreeBSD. Do you know what script that is? And it does a db-wide VACUUM ANALYZE every day?! That is certainly not

Re: [PERFORM] Very long SQL strings

2007-06-21 Thread Steven Flatt
Thanks everyone for your responses. I don't think it's realistic to change our application infrastructure to use COPY from a stream at this point. It's good to know that multi-row-VALUES is good up into the thousands of rows (depending on various things, of course). That's a good enough answer
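
The multi-row-VALUES form discussed in the thread (supported from 8.2 onward) looks like this; the table and columns are invented for illustration:

```sql
-- One INSERT statement carrying many rows; batching a few thousand
-- rows per statement avoids per-statement overhead without the
-- infrastructure changes COPY from a stream would require.
CREATE TABLE samples (ts timestamptz, val double precision);

INSERT INTO samples (ts, val) VALUES
    (now(), 1.0),
    (now(), 2.0),
    (now(), 3.0);  -- ...and so on, up into the thousands of rows
```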

Re: [PERFORM] Database-wide VACUUM ANALYZE

2007-06-21 Thread Steven Flatt
Thanks everyone. It appears that we had hacked the 502.pgsql script for our 8.1 build to disable the daily vacuum. I was not aware of this when building and upgrading to 8.2. So it looks like for the past two weeks, that 36 hour db-wide vacuum has been running every 24 hours. Good for it for

Re: [PERFORM] performance drop on 8.2.4, reverting to 8.1.4

2007-06-12 Thread Steven Flatt
Thanks Tom and Alvaro. To follow up on this, I re-wrote and tweaked a number of queries (including the one provided) to change LEFT OUTER JOIN ... WHERE col IS NULL clauses to WHERE col NOT IN (...) clauses. This has brought performance to an acceptable level on 8.2. Thanks for your time,
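
The two query shapes involved in the rewrite can be sketched like this (a and b are stand-in tables; the forms are not strictly equivalent if b.a_id can be NULL, in which case NOT IN returns no rows, so the rewrite assumes a NOT NULL join column):

```sql
-- Original anti-join form, which performed poorly on their 8.2 install:
SELECT a.id
FROM a LEFT OUTER JOIN b ON a.id = b.a_id
WHERE b.a_id IS NULL;

-- Rewritten form that restored acceptable performance:
SELECT a.id
FROM a
WHERE a.id NOT IN (SELECT a_id FROM b);
```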

Re: [PERFORM] performance drop on 8.2.4, reverting to 8.1.4

2007-06-07 Thread Steven Flatt
On 6/5/07, Tom Lane [EMAIL PROTECTED] wrote: If you're feeling desperate you could revert this patch in your local copy: http://archives.postgresql.org/pgsql-committers/2006-11/msg00066.php regards, tom lane Reverting that patch has not appeared to solve our problem.

Re: [PERFORM] performance drop on 8.2.4, reverting to 8.1.4

2007-06-05 Thread Steven Flatt
On 5/18/07, Tom Lane [EMAIL PROTECTED] wrote: Yeah. 8.2 is estimating that the nodeid IS NULL condition will discard all or nearly all the rows, presumably because there aren't any null nodeid's in the underlying table --- it fails to consider that the LEFT JOIN may inject some nulls. 8.1

[PERFORM] Foreign Key Deadlocking

2007-04-18 Thread Steven Flatt
Hi, we're using Postgres 8.1.4. We've been seeing deadlock errors of this form, sometimes as often as several times per hour: Apr 17 13:39:50 postgres[53643]: [4-1] ERROR: deadlock detected Apr 17 13:39:50 postgres[53643]: [4-2] DETAIL: Process 53643 waits for ShareLock on transaction

Re: [PERFORM] Foreign Key Deadlocking

2007-04-18 Thread Steven Flatt
Thanks for your answers and feedback. All things considered, it is easiest (and acceptable) in this case to remove RI between the tables where the deadlocks were occurring. We are still looking to upgrade to 8.1.latest but that is another matter... Steve

[PERFORM] Turning off Autovacuum

2007-03-05 Thread Steven Flatt
Not quite a performance question, but I can't seem to find a simple answer to this. We're using 8.1.4 and the autovacuum daemon is running every 40 seconds cycling between 3 databases. What is the easiest way to disable the autovacuumer for a minute or two, do some other work, then re-enable
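
One simple approach (as far as I recall, the autovacuum GUC is reloadable on 8.1, so no restart is needed) is to flip the setting in postgresql.conf and reload around the maintenance window:

```sql
-- In postgresql.conf:
--   autovacuum = off
-- then signal the postmaster:  pg_ctl reload
--
-- ...do the other work...
--
--   autovacuum = on
-- and reload again. Per-table control is also possible on 8.1 via the
-- pg_autovacuum system catalog (enabled = false for specific tables).
```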

Re: [PERFORM] table partitioning performance

2007-01-10 Thread Steven Flatt
On 1/9/07, Simon Riggs [EMAIL PROTECTED] wrote: If you are doing date range partitioning it should be fairly simple to load data into the latest table directly. That was the way I originally intended for it to be used. The rules approach isn't something I'd recommend as a bulk loading option

Re: [PERFORM] table partitioning performance

2007-01-10 Thread Steven Flatt
On 1/10/07, Jim C. Nasby [EMAIL PROTECTED] wrote: Except for the simplest partitioning cases, you'll be much better off using a trigger on the parent table to direct inserts/updates/deletes to the children. As a bonus, using a trigger makes it a lot more realistic to deal with an update moving
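
The trigger-based routing Jim describes follows the pattern in the 8.2 partitioning docs; a minimal sketch with invented table and column names:

```sql
-- Route rows landing on the parent into the right date partition.
CREATE OR REPLACE FUNCTION parent_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.ts >= DATE '2007-01-01' AND NEW.ts < DATE '2007-02-01' THEN
        INSERT INTO child_2007_01 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for ts = %', NEW.ts;
    END IF;
    RETURN NULL;  -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_trigger
    BEFORE INSERT ON parent
    FOR EACH ROW EXECUTE PROCEDURE parent_insert_trigger();
```

Unlike rules, the trigger fires once per row, which is also what makes it practical to handle an UPDATE that moves a row between partitions.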

Re: [PERFORM] table partitioning performance

2007-01-08 Thread Steven Flatt
On 1/6/07, Colin Taylor [EMAIL PROTECTED] wrote: Hi there, we've partitioned a table (using 8.2) by day due to the 50TB of data (500k row size, 100G rows) we expect to store in a year. Our performance on inserts and selects against the master table is disappointing, 10x slower (with only 1

Re: [PERFORM] Insertion to temp table deteriorating over time

2006-12-19 Thread Steven Flatt
On 12/19/06, Tom Lane [EMAIL PROTECTED] wrote: I still can't reproduce this. Using 7.4 branch tip, I did create temp table foo(f1 varchar); create table nottemp(f1 varchar); \timing insert into foo select stringu1 from tenk1 limit 100; insert into nottemp select * from foo; truncate foo;

Re: [PERFORM] Insertion to temp table deteriorating over time

2006-12-18 Thread Steven Flatt
you're creating is temporary and you're selecting from a view? Steve On 12/15/06, Tom Lane [EMAIL PROTECTED] wrote: Steven Flatt [EMAIL PROTECTED] writes: I've been trying to reproduce the problem for days now :). I've done pretty much exactly what you describe below, but I can't reproduce

Re: [PERFORM] Insertion to temp table deteriorating over time

2006-12-18 Thread Steven Flatt
Please ignore my post from earlier today. As strange as it sounds, changing CREATE TEMP TABLE ... AS to CREATE TEMP TABLE ... LIKE appeared to fix my performance problem because things errored out so quickly (and silently in my test program). After checking the pgsql logs, it became clear to me

Re: [PERFORM] Insertion to temp table deteriorating over time

2006-12-15 Thread Steven Flatt
to the temp table are 5 times slower now than they were 24 hours ago. I wonder if the problem has to do with a long running ODBC connection. Steve On 12/14/06, Tom Lane [EMAIL PROTECTED] wrote: Steven Flatt [EMAIL PROTECTED] writes: Regarding your other email -- interesting -- but we are vacuuming

Re: [PERFORM] Insertion to temp table deteriorating over time

2006-12-15 Thread Steven Flatt
Our application is such that there is a great deal of activity at the beginning of the hour and minimal activity near the end of the hour. Those 3 vacuums were done at (approximately) 30 minutes past, 40 minutes past, and 50 minutes past the hour, during low activity. Vacuums of pg_class look

Re: [PERFORM] Insertion to temp table deteriorating over time

2006-12-15 Thread Steven Flatt
Good question, and I agree with your point. Are the removable rows in pg_class even an issue? So what if 5000-6000 dead tuples are generated every hour then vacuumed? Performance continues to steadily decline over a few days time. Memory usage does not appear to be bloating. Open file

Re: [PERFORM] Insertion to temp table deteriorating over time

2006-12-15 Thread Steven Flatt
[EMAIL PROTECTED] wrote: Steven Flatt [EMAIL PROTECTED] writes: Are the removable rows in pg_class even an issue? So what if 5000-6000 dead tuples are generated every hour then vacuumed? Performance continues to steadily decline over a few days time. Memory usage does not appear to be bloating. Open

Re: [PERFORM] Insertion to temp table deteriorating over time

2006-12-14 Thread Steven Flatt
. So I don't think the answer lies there... Steve On 12/13/06, Tom Lane [EMAIL PROTECTED] wrote: Steven Flatt [EMAIL PROTECTED] writes: Having said that, what kinds of things should I be looking for that could deteriorate/bloat over time? Ordinarily the culprit might be infrequent vacuuming

[PERFORM] Insertion to temp table deteriorating over time

2006-12-13 Thread Steven Flatt
Hi, Our application is using Postgres 7.4 and I'd like to understand the root cause of this problem: To speed up overall insert time, our application will write thousands of rows, one by one, into a temp table (same structure as a permanent table), then do a bulk insert from the temp table to

Re: [PERFORM] Insertion to temp table deteriorating over time

2006-12-13 Thread Steven Flatt
has 15 columns: a timestamp, a double, and the rest integers. It has no indexes. Thanks, Steve On 12/13/06, Tom Lane [EMAIL PROTECTED] wrote: Steven Flatt [EMAIL PROTECTED] writes: Any idea where the bloat is happening? I believe that if we were dropping and re-creating the temp table over

Re: [PERFORM] Database-wide vacuum can take a long time, during which tables are not being analyzed

2006-11-02 Thread Steven Flatt
Sorry, I think there's a misunderstanding here. Our system is not doing near that number of transactions per second. I meant that the duration of a single DB-wide vacuum takes on the order of a couple of weeks. The time between DB-wide vacuums is a little over a year, I believe. Every couple of

[PERFORM] Database-wide vacuum can take a long time, during which tables are not being analyzed

2006-11-01 Thread Steven Flatt
Here is a potential problem with the auto-vacuum daemon, and I'm wondering if anyone has considered this. To avoid transaction ID wraparound, the auto-vacuum daemon will periodically determine that it needs to do a DB-wide vacuum, which takes a long time. On our system, it is on the order of a
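
How close each database is to the forced wraparound vacuum can be watched with a simple query against pg_database (works on the pre-8.2 releases this post concerns, where wraparound vacuuming was database-wide):

```sql
-- Age of the oldest unfrozen transaction ID per database; when this
-- approaches autovacuum's wraparound limit, the daemon schedules the
-- long DB-wide vacuum described above.
SELECT datname, age(datfrozenxid)
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
```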