Re: [PERFORM] experiments in query optimization

2010-03-31 Thread Matthew Wakeling
On Tue, 30 Mar 2010, Faheem Mitha wrote:
> work_mem = 1 GB (see diag.{tex/pdf}). Sure, but define sane setting, please. I guess part of the point is that I'm trying to keep memory low

You're trying to keep memory usage low, but you have work_mem set to 1GB?

Matthew

Re: [PERFORM] experiments in query optimization

2010-03-31 Thread Faheem Mitha
On Wed, 31 Mar 2010, Matthew Wakeling wrote:
> On Tue, 30 Mar 2010, Faheem Mitha wrote:
>> work_mem = 1 GB (see diag.{tex/pdf}). Sure, but define sane setting, please. I guess part of the point is that I'm trying to keep memory low
> You're trying to keep memory usage low, but you have
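[Editorial aside: the tension above — wanting low overall memory use while having work_mem at 1GB — is usually resolved by setting work_mem per session rather than server-wide. A minimal sketch, assuming a single memory-hungry query:]

```sql
-- Sketch: keep the server-wide work_mem modest in postgresql.conf and
-- raise it only in the session that runs the memory-hungry query.
SET work_mem = '256MB';   -- session-local; does not affect other backends
-- ... run the large sort/hash query here ...
RESET work_mem;           -- fall back to the configured default
```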

Re: [PERFORM] 3ware vs. MegaRAID

2010-03-31 Thread Matteo Beccati
On 30/03/2010 19:18, Greg Smith wrote:
> The MegaRAID SAS 84* cards have worked extremely well for me in terms of performance and features for all the systems I've seen them installed in. I'd consider it a modest upgrade from that 3ware card, speed wise.

The main issue with the MegaRAID cards is

Re: [PERFORM] mysql to postgresql, performance questions

2010-03-31 Thread Bruce Momjian
James Mansion wrote:
> Hannu Krosing wrote:
>> Pulling the plug should not corrupt a PostgreSQL database, unless it was using disks which lie about write caching.
> Didn't we recently put the old wives' 'the disks lied' tale to bed in favour of actually admitting that some well known

Re: [PERFORM] experiments in query optimization

2010-03-31 Thread Robert Haas
On Wed, Mar 31, 2010 at 6:10 AM, Faheem Mitha fah...@email.unc.edu wrote:
> [If Kevin Grittner reads this, please fix your email address. I am getting bounces from your email address.]
> On Tue, 30 Mar 2010, Robert Haas wrote:
>> On Tue, Mar 30, 2010 at 12:30 PM, Faheem Mitha fah...@email.unc.edu

Re: [PERFORM] 3ware vs. MegaRAID

2010-03-31 Thread Francisco Reyes
Ireneusz Pluta writes:
> I am waiting for an ordered machine dedicated to PostgreSQL. It was expected to have a 3ware 9650SE 16-port controller. However, the vendor wants to replace this controller with a MegaRAID SAS 84016E, because,

as I have had better luck getting 3ware management tools to

Re: [PERFORM] query has huge variance in execution times

2010-03-31 Thread Brian Cox
On 03/31/2010 12:37 AM, David Wilson [david.t.wil...@gmail.com] wrote:
> These won't necessarily get the same plan. If you want to see what plan the prepared query is getting, you'll need to prepare it (PREPARE foo AS query) and then EXPLAIN *that* via EXPLAIN EXECUTE foo.

The prepared version
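[Editorial aside: David's advice can be sketched as follows; the table and column names are hypothetical stand-ins:]

```sql
-- Sketch (hypothetical table/column): EXPLAIN on the raw query shows the
-- ad-hoc plan; to see the plan the prepared statement actually gets,
-- prepare it and then explain the EXECUTE.
PREPARE foo (integer) AS
    SELECT * FROM some_table WHERE some_id = $1;
EXPLAIN EXECUTE foo (42);
DEALLOCATE foo;
```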

Re: [PERFORM] query has huge variance in execution times

2010-03-31 Thread David Wilson
On Wed, Mar 31, 2010 at 2:10 PM, Brian Cox brian@ca.com wrote:
> so the question still remains: why did it take 20 mins? To see if it was due to autovacuum running ANALYZE, I turned off autovacuum, created a table using SELECT * INTO temp FROM ts_stats_tranunit_user_daily, added the index

Re: [PERFORM] Database size growing over time and leads to performance impact

2010-03-31 Thread Scott Carey
On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:
> Don't VACUUM FULL, it's not helping you, and is being removed in newer versions.

Off topic: How is that going to work? CLUSTER doesn't work on tables without an index. I would love to be able to CLUSTER on some column set that doesn't

Re: [PERFORM] Database size growing over time and leads to performance impact

2010-03-31 Thread Robert Haas
On Wed, Mar 31, 2010 at 4:37 PM, Scott Carey sc...@richrelevance.com wrote:
> On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:
>> Don't VACUUM FULL, it's not helping you, and is being removed in newer versions.
> Off topic: How is that going to work? CLUSTER doesn't work on tables without an

Re: [PERFORM] Database size growing over time and leads to performance impact

2010-03-31 Thread Alvaro Herrera
Scott Carey wrote:
> On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:
>> Don't VACUUM FULL, it's not helping you, and is being removed in newer versions.
> Off topic: How is that going to work? CLUSTER doesn't work on tables without an index. I would love to be able to CLUSTER on some
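[Editorial aside: for reference, the limitation under discussion is that CLUSTER must be told an existing index to order by. A minimal sketch with hypothetical names:]

```sql
-- Sketch (hypothetical names): CLUSTER rewrites the table in index order,
-- reclaiming dead space, but it requires an index to exist.
CLUSTER accounts USING accounts_pkey;
-- There is no "CLUSTER accounts BY (col1, col2)" form; without a suitable
-- index on the desired column set, one would have to be created first.
```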

[PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t
Hi All,

I have a table with a size of 40GB and a few indexes on it. When I try to REINDEX the table, it takes a long time. I tried increasing maintenance_work_mem, but still haven't found a satisfying result.

Questions
===
1. Which parameters have an effect when issuing the REINDEX
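[Editorial aside: maintenance_work_mem is the knob the poster is already touching; a minimal sketch of the usual approach, with a hypothetical table name:]

```sql
-- Sketch: raise maintenance_work_mem for this session only, then reindex.
SET maintenance_work_mem = '1GB';  -- assumes the machine has RAM to spare
REINDEX TABLE my_big_table;        -- hypothetical name for the 40GB table
```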

Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread Kevin Grittner
raghavendra t raagavendra@gmail.com wrote:
> I have a table with a size of 40GB and a few indexes on it.

What does the table look like? What indexes are there?

> When I try to REINDEX the table,

Why are you doing that?

> it takes a long time.

How long?

> I tried increasing the

Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t
Hi Kevin,

Thank you for the update.

> What does the table look like? What indexes are there?

The table has a combination of byteas. The indexes are b-tree and partial.

> Why are you doing that?

Our table faces a lot of updates and deletes in a day, so we prefer REINDEX to update the indexes as well as overcome

Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread Kevin Grittner
raghavendra t raagavendra@gmail.com wrote:
> overcome with a corrupted index.

If this is a one-time fix for a corrupted index, did you look at CREATE INDEX CONCURRENTLY? You could avoid any down time while you fix things up.

Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t
> If this is a one-time fix for a corrupted index, did you look at CREATE INDEX CONCURRENTLY? You could avoid any down time while you fix things up.

Using CREATE INDEX CONCURRENTLY will avoid the exclusive locks on the table, but my question is, how to get performance on the existing

Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread Kevin Grittner
raghavendra t raagavendra@gmail.com wrote:
> my question is, how to get performance on the existing indexes. You mean to say, drop the existing indexes and create the index with CONCURRENTLY. Does this give the performance back?

You would normally want to create first and then drop the
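[Editorial aside: the create-first-then-drop approach Kevin describes can be sketched like this; all object names are hypothetical:]

```sql
-- Sketch: rebuild an index without taking exclusive locks by building a
-- replacement concurrently, then swapping it in for the old one.
CREATE INDEX CONCURRENTLY idx_accounts_owner_new ON accounts (owner_id);
DROP INDEX idx_accounts_owner;                 -- the old (bloated) index
ALTER INDEX idx_accounts_owner_new RENAME TO idx_accounts_owner;
```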

Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t
Thank you for the suggestion.

On Thu, Apr 1, 2010 at 3:21 AM, Kevin Grittner kevin.gritt...@wicourts.gov wrote:
> raghavendra t raagavendra@gmail.com wrote:
>> my question is, how to get performance on the existing indexes. You mean to say, drop the existing indexes and create the index

Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread Kevin Grittner
raghavendra t raagavendra@gmail.com wrote:
> Thank you for the suggestion.

I'm sorry I couldn't come up with more, but what you've provided so far is roughly equivalent to me telling you that it takes over four hours to travel to see my Uncle Jim, and then asking you how I can find out how

Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread Jaime Casanova
On Wed, Mar 31, 2010 at 5:33 PM, raghavendra t raagavendra@gmail.com wrote:
>> Why are you doing that?
> Our table faces a lot of updates and deletes in a day, so we prefer REINDEX to update the indexes as well as overcome with a corrupted index.

Do you have a corrupted index? If not, there is

Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t
> I'm sorry I couldn't come up with more, but what you've provided so far is roughly equivalent to me telling you that it takes over four hours to travel to see my Uncle Jim, and then asking you how I can find out how he's doing in less time than that. There's just not much to go on. :-(

Re: [PERFORM] temp table on commit delete rows: transaction overhead

2010-03-31 Thread Bruce Momjian
Tom Lane wrote:
> Artiom Makarov artiom.maka...@gmail.com writes:
>> When temp tables with ON COMMIT DELETE ROWS exist, I can see a strange delay at any "begin" and "commit".
> A delay at commit is hardly surprising, because each such temp table requires filesystem operations at commit
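[Editorial aside: for reference, the kind of table being discussed looks like this; a minimal sketch with hypothetical names:]

```sql
-- Sketch: a temp table whose rows are cleared at every commit; each
-- commit therefore incurs the per-table cleanup Tom describes.
CREATE TEMP TABLE scratch (
    id   integer,
    note text
) ON COMMIT DELETE ROWS;
```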

Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread Craig Ringer
Jaime Casanova wrote:
> On Wed, Mar 31, 2010 at 5:33 PM, raghavendra t raagavendra@gmail.com wrote:
>>> Why are you doing that?
>> Our table face lot of updates and deletes in a day, so we prefer reindex to update the indexes as well overcome with a corrupted index.
> do you have a corrupted