Re: [PERFORM] experiments in query optimization

2010-03-31 Thread Matthew Wakeling

On Tue, 30 Mar 2010, Faheem Mitha wrote:

work_mem = 1 GB (see diag.{tex/pdf}).


Sure, but define "sane setting", please. I guess part of the point is that I'm 
trying to keep memory low


You're trying to keep memory usage low, but you have work_mem set to 1GB?

Matthew

--
Prove to thyself that all circuits that radiateth and upon which thou worketh
are grounded, lest they lift thee to high-frequency potential and cause thee
to radiate also.  -- The Ten Commandments of Electronics

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] experiments in query optimization

2010-03-31 Thread Faheem Mitha



On Wed, 31 Mar 2010, Matthew Wakeling wrote:


On Tue, 30 Mar 2010, Faheem Mitha wrote:

work_mem = 1 GB (see diag.{tex/pdf}).


Sure, but define "sane setting", please. I guess part of the point is that 
I'm trying to keep memory low


You're trying to keep memory usage low, but you have work_mem set to 1GB?


I'm trying to keep both runtime and memory usage low. I assume that with 
lower levels of memory, the runtime would be longer, other things being 
equal.
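
I suppose one way to test that assumption is to vary work_mem per session 
and watch the plan change. A self-contained sketch (not my actual query):

  create temp table t as select g as id, g % 100000 as grp
    from generate_series(1, 1000000) g;
  analyze t;
  set work_mem = '64kB';  -- tiny: the planner should avoid an in-memory
                          -- hash and fall back to a sort-based aggregate
  explain select grp, count(*) from t group by grp;
  set work_mem = '64MB';  -- roomy: a HashAggregate becomes viable
  explain select grp, count(*) from t group by grp;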

 Regards, Faheem.

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] 3ware vs. MegaRAID

2010-03-31 Thread Matteo Beccati

On 30/03/2010 19:18, Greg Smith wrote:

The MegaRAID SAS 84* cards have worked extremely well for me in terms of
performance and features for all the systems I've seen them installed
in. I'd consider it a modest upgrade from that 3ware card, speed wise.
The main issue with the MegaRAID cards is that you will have to write a
lot of your own custom scripts to monitor for failures using their
painful MegaCLI utility, and under FreeBSD that also requires using
their Linux utility via emulation:
http://www.freebsdsoftware.org/sysutils/linux-megacli.html


Getting MegaCLI to work was a slight PITA, but once it was running it's 
been just a matter of adding:


daily_status_mfi_raid_enable=YES

to /etc/periodic.conf to get the following data in the daily reports:

Adapter: 0

Physical Drive Information:
ENC SLO DEV SEQ MEC OEC PFC LPF STATE
1   0   0   2   0   0   0   0   Online
1   1   1   2   0   0   0   0   Online
1   2   2   2   0   0   0   0   Online
1   3   3   2   0   0   0   0   Online
1   4   4   2   0   0   0   0   Online
1   5   5   2   0   0   0   0   Online
1   255 248 0   0   0   0   0   Unconfigured(good)

Virtual Drive Information:
VD  DRV RLP RLS RLQ STS  SIZE     STATE   NAME
0   2   1   0   0   64kB 69472MB  Optimal
1   2   1   3   0   64kB 138944MB Optimal

BBU Information:
TYPE  TEMP OK RSOC ASOC RC  CC  ME
iTBBU 29 C -1 94   93   816 109 2

Controller Logs:


+++ /var/log/mfi_raid_0.today   Sun Mar 28 03:07:36 2010
@@ -37797,3 +37797,25 @@
 Event Description: Patrol Read complete
 Event Data:
None
+
+
+seqNum: 0x36f6
+Time: Sat Mar 27 03:00:00 2010
+
+Code: 0x0027
+Class: 0
+Locale: 0x20
+Event Description: Patrol Read started
+Event Data:
+   None

etc...


Cheers
--
Matteo Beccati

Development & Consulting - http://www.beccati.com/

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] mysql to postgresql, performance questions

2010-03-31 Thread Bruce Momjian
James Mansion wrote:
 Hannu Krosing wrote:
  Pulling the plug should not corrupt a postgreSQL database, unless it was
  using disks which lie about write caching.

 Didn't we recently put the old wives' 'the disks lied' tale to bed in 
 favour of actually admitting that some well known filesystems and 
 software raid systems have had trouble with their write barriers?

I thought the issue was that many file systems do not issue the drive
ATAPI flush command, and I suppose drives are allowed not to flush on
write if they honor the command.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] experiments in query optimization

2010-03-31 Thread Robert Haas
On Wed, Mar 31, 2010 at 6:10 AM, Faheem Mitha fah...@email.unc.edu wrote:

 [If Kevin Grittner reads this, please fix your email address. I am getting
 bounces from your email address.]

 On Tue, 30 Mar 2010, Robert Haas wrote:

 On Tue, Mar 30, 2010 at 12:30 PM, Faheem Mitha fah...@email.unc.edu
 wrote:

 Sure, but define "sane setting", please. I guess part of the point is that
 I'm trying to keep memory low, and it seems this is not part of the
 planner's priorities. That is, it does not take memory usage into
 consideration when choosing a plan. If that is wrong, let me know, but
 that is my understanding.

 I don't understand quite why you're confused here.  We've already
 explained to you that the planner will not employ a plan that uses
 more than the amount of memory defined by work_mem for each sort or
 hash.

 Typical settings for work_mem are between 1MB and 64MB.  1GB is enormous.

 I don't think I am confused. To be clear, when I said 'it does not take
 memory usage into consideration' I was talking about overall memory usage.
 Let me summarize:

 The planner will choose the plan with the minimum total cost, with the
 constraint that the amount of memory used for each of certain steps is less
 than work_mem. In other words, with k such steps it can use at most

 k(plan)*work_mem

 memory, where k(plan) denotes that k is a function of the plan. (I'm
 assuming here that memory is not shared between the different steps.) For
 example, a plan with three sorts and two hashes would have k = 5, and so
 could use up to 5 GB with work_mem = 1 GB. However, k(plan)*work_mem is not
 itself bounded. I fail to see how reducing work_mem significantly would
 help me. This would mean that the current plans I am using would likely be
 ruled out, and I would be left with plans which, by definition, would have
 larger cost and so longer run times. The current runtimes are already quite
 long - for the PED query, the best I can do with work_mem=1 GB is 2 1/2
 hrs, and that is after splitting the query into two pieces.

 I might actually be better off *increasing* the memory, since then the
 planner would have more flexibility to choose plans where the individual
 steps might require more memory, but the overall memory sum might be lower.

OK, your understanding is correct.

 You might need to create some indices, too.

 Ok. To what purpose? This query picks up everything from the
 tables and the planner does table scans, so conventional wisdom,
 and indeed my experience, says that indexes are not going to be so
 useful.

 There are situations where scanning the entire table to build up a
 hash table is more expensive than using an index.  Why not test it?

 Certainly, but I don't know what you and Robert have in mind, and I'm not
 experienced enough to make an educated guess. I'm open to specific
 suggestions.

 Try creating an index on geno on the columns that are being used for the
 join.

 Ok, I'll try that. I guess the cols in question on geno are idlink_id and
 anno_id. I thought that I already had indexes on them, but no. Maybe I had
 indexes, but removed them.
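
 Something like this, I assume (a sketch only, using the column names
 above):

 CREATE INDEX geno_idlink_id_idx ON geno (idlink_id);
 CREATE INDEX geno_anno_id_idx ON geno (anno_id);
 ANALYZE geno;  -- refresh statistics so the planner considers them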

 If I understand the way this works, if you request, say, an INNER JOIN, the
 planner can choose different ways/algorithms to do this, as in
 http://en.wikipedia.org/wiki/Join_(SQL)#Nested_loops . It may choose a hash
 join, or a nested loop join, or something else, based on cost. If the
 indexes don't exist, that may make the nested loop join more expensive, and
 so tip the balance in favor of using a hash join. However, I have no way to
 control which option it chooses, short of disabling, e.g., the hash join
 option, which is not an option for production usage anyway. Correct?

Yep.
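
For experimenting (not in production, as you say), the planner's method
settings can be toggled per session to see which alternative plan it
falls back to, e.g.:

SET enable_hashjoin = off;  -- discourage hash joins for this session only
EXPLAIN SELECT c.relname, a.attname
FROM pg_class c JOIN pg_attribute a ON a.attrelid = c.oid;
RESET enable_hashjoin;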

...Robert

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] 3ware vs. MegaRAID

2010-03-31 Thread Francisco Reyes

Ireneusz Pluta writes:

I am waiting for an ordered machine dedicated to PostgreSQL. It was 
expected to have a 3ware 9650SE 16 port controller. However, the vendor 
wants to replace this controller with MegaRAID SAS 84016E, because, as 


I have had better luck getting 3ware management tools to work on both 
FreeBSD and Linux than the Megaraid cards.


I also like that the 3ware cards can be configured to send out an email in 
case of problems, once you have the monitoring program running.


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] query has huge variance in execution times

2010-03-31 Thread Brian Cox

On 03/31/2010 12:37 AM, David Wilson [david.t.wil...@gmail.com] wrote:

These won't necessarily get the same plan. If you want to see what plan
the prepared query is getting, you'll need to prepare it (prepare foo
as query) and then explain *that* via explain execute foo.

The prepared version likely has a much more generic plan, whereas the
regular query gets optimized for the actual values provided.


I didn't know this. Thanks. The plans are indeed different:

cemdb=# prepare sq as select b.ts_id from ts_stats_tranunit_user_daily 
b, ts_stats_tranunit_user_interval c where b.ts_transet_incarnation_id = 
c.ts_transet_incarnation_id and b.ts_tranunit_id = c.ts_tranunit_id and 
b.ts_user_incarnation_id = c.ts_user_incarnation_id and 
c.ts_interval_start_time >= $1 and c.ts_interval_start_time < $2 and 
b.ts_interval_start_time >= $3 and b.ts_interval_start_time < $4;
cemdb=# explain execute sq('2010-3-29 01:00', '2010-3-29 02:00', 
'2010-3-29', '2010-3-30');
                              QUERY PLAN
-------------------------------------------------------------------------
 Merge Join  (cost=7885.37..8085.91 rows=30 width=8)
   Merge Cond: ((b.ts_transet_incarnation_id = 
c.ts_transet_incarnation_id) AND (b.ts_tranunit_id = c.ts_tranunit_id) 
AND (b.ts_user_incarnation_id = c.ts_user_incarnation_id))
   ->  Sort  (cost=1711.82..1716.81 rows=3994 width=32)
         Sort Key: b.ts_transet_incarnation_id, b.ts_tranunit_id, 
b.ts_user_incarnation_id
         ->  Index Scan using ts_stats_tranunit_user_daily_starttime on 
ts_stats_tranunit_user_daily b  (cost=0.00..1592.36 rows=3994 width=32)
               Index Cond: ((ts_interval_start_time >= $3) AND 
(ts_interval_start_time < $4))
   ->  Sort  (cost=6173.55..6218.65 rows=36085 width=24)
         Sort Key: c.ts_transet_incarnation_id, c.ts_tranunit_id, 
c.ts_user_incarnation_id
         ->  Index Scan using ts_stats_tranunit_user_interval_starttime 
on ts_stats_tranunit_user_interval c  (cost=0.00..4807.81 rows=36085 
width=24)
               Index Cond: ((ts_interval_start_time >= $1) AND 
(ts_interval_start_time < $2))
(10 rows)

cemdb=# explain select b.ts_id from ts_stats_tranunit_user_daily b, 
ts_stats_tranunit_user_interval c where b.ts_transet_incarnation_id = 
c.ts_transet_incarnation_id and b.ts_tranunit_id = c.ts_tranunit_id and 
b.ts_user_incarnation_id = c.ts_user_incarnation_id and 
c.ts_interval_start_time >= '2010-3-29 01:00' and 
c.ts_interval_start_time < '2010-3-29 02:00' and 
b.ts_interval_start_time >= '2010-3-29' and b.ts_interval_start_time < 
'2010-3-30';
                              QUERY PLAN
-------------------------------------------------------------------------
 Hash Join  (cost=291965.90..335021.46 rows=13146 width=8)
   Hash Cond: ((c.ts_transet_incarnation_id = 
b.ts_transet_incarnation_id) AND (c.ts_tranunit_id = b.ts_tranunit_id) 
AND (c.ts_user_incarnation_id = b.ts_user_incarnation_id))
   ->  Index Scan using ts_stats_tranunit_user_interval_starttime on 
ts_stats_tranunit_user_interval c  (cost=0.00..11783.36 rows=88529 width=24)
         Index Cond: ((ts_interval_start_time >= '2010-03-29 
01:00:00-07'::timestamp with time zone) AND (ts_interval_start_time < 
'2010-03-29 02:00:00-07'::timestamp with time zone))
   ->  Hash  (cost=285681.32..285681.32 rows=718238 width=32)
         ->  Index Scan using ts_stats_tranunit_user_daily_starttime on 
ts_stats_tranunit_user_daily b  (cost=0.00..285681.32 rows=718238 width=32)
               Index Cond: ((ts_interval_start_time >= '2010-03-29 
00:00:00-07'::timestamp with time zone) AND (ts_interval_start_time < 
'2010-03-30 00:00:00-07'::timestamp with time zone))
(7 rows)

I notice that the row estimates are substantially different; is this due 
to the lack of actual values?


But, this prepared query runs in ~4 secs:

[r...@rdl64xeoserv01 log]# cat /tmp/select.sql
prepare sq as select b.ts_id from ts_stats_tranunit_user_daily b, 
ts_stats_tranunit_user_interval c where b.ts_transet_incarnation_id = 
c.ts_transet_incarnation_id and b.ts_tranunit_id = c.ts_tranunit_id and 
b.ts_user_incarnation_id = c.ts_user_incarnation_id and 
c.ts_interval_start_time >= $1 and c.ts_interval_start_time < $2 and 
b.ts_interval_start_time >= $3 and b.ts_interval_start_time < $4;

execute sq('2010-3-29 01:00', '2010-3-29 02:00', '2010-3-29', '2010-3-30');

[r...@rdl64xeoserv01 log]# time PGPASSWORD=quality psql -U postgres -d 
cemdb -f /tmp/select.sql > /tmp/select1.txt 2>&1

real0m4.131s
user0m0.119s
sys 0m0.007s

so the question still remains: why did it take > 20 mins? To see if it 
was due to autovacuum running ANALYZE, I turned off autovacuum, created 
a table using SELECT * INTO temp FROM ts_stats_tranunit_user_daily, 
added the index on ts_interval_start_time and ran the prepared query 
with temp, but the query completed in a few secs.

Re: [PERFORM] query has huge variance in execution times

2010-03-31 Thread David Wilson
On Wed, Mar 31, 2010 at 2:10 PM, Brian Cox brian@ca.com wrote:



 so the question still remains: why did it take > 20 mins? To see if it was
 due to autovacuum running ANALYZE, I turned off autovacuum, created a table
 using SELECT * INTO temp FROM ts_stats_tranunit_user_daily, added the index
 on ts_interval_start_time and ran the prepared query with temp, but the
 query completed in a few secs.

 It's possible that statistics were updated between the 20 minute run and
your most recent prepared query test. In fact, comparing the plans between
your two emails, it's quite likely, as even the non-prepared versions are
not producing the same plan or the same estimates; it's therefore possible
that your problem has already corrected itself if you're unable to duplicate
the 20 minute behaviour at this point.

Taking a look at the statistics accuracy with an explain analyze might still
be informative, however.
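
For example, against the statement prepared earlier in the thread:

EXPLAIN ANALYZE EXECUTE sq('2010-3-29 01:00', '2010-3-29 02:00',
'2010-3-29', '2010-3-30');

Comparing the estimated row counts with the actual ones will show how far
off the statistics are.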

-- 
- David T. Wilson
david.t.wil...@gmail.com


Re: [PERFORM] Database size growing over time and leads to performance impact

2010-03-31 Thread Scott Carey

On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:
 
 Don't VACUUM FULL, it's not helping you, and it is being removed in newer 
 versions.
 

Off topic:  How is that going to work?  CLUSTER doesn't work on tables without 
an index.  I would love to be able to CLUSTER on some column set that doesn't 
necessarily have an index.

 -Andy
 
 -- 
 Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-performance


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Database size growing over time and leads to performance impact

2010-03-31 Thread Robert Haas
On Wed, Mar 31, 2010 at 4:37 PM, Scott Carey sc...@richrelevance.com wrote:
 On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:

 Don't VACUUM FULL, it's not helping you, and it is being removed in newer 
 versions.


 Off topic:  How is that going to work?  CLUSTER doesn't work on tables 
 without an index.  I would love to be able to CLUSTER on some column set that 
 doesn't necessarily have an index.

I believe the new VACUUM FULL implementation just rewrites the data in the
same physical order as it was in previously, but without the dead space.
So it's sort of like cluster-by-no-index-at-all.

...Robert

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Database size growing over time and leads to performance impact

2010-03-31 Thread Alvaro Herrera
Scott Carey wrote:
 
 On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:
  
  Don't VACUUM FULL, it's not helping you, and it is being removed in newer 
  versions.
  
 
 Off topic:  How is that going to work?  CLUSTER doesn't work on tables
 without an index.  I would love to be able to CLUSTER on some column
 set that doesn't necessarily have an index.

VACUUM FULL has been rewritten in 9.0 so that it uses the CLUSTER logic,
except that it doesn't require an index.

If you want to do it in earlier versions, you can use a no-op ALTER TYPE
command, like so:

ALTER TABLE foo ALTER COLUMN bar TYPE baz;

assuming that table foo has a column bar which is already of type baz.
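
You can confirm that a rewrite actually happened by watching the table's
relfilenode change, e.g.:

SELECT relfilenode FROM pg_class WHERE relname = 'foo';
ALTER TABLE foo ALTER COLUMN bar TYPE baz;
SELECT relfilenode FROM pg_class WHERE relname = 'foo';  -- now different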

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t
Hi All,

I have a table 40GB in size; it has a few indexes on it. When I try to
REINDEX the table, it takes a long time. I tried increasing
maintenance_work_mem, but still I haven't found a satisfying result.

Questions
===
1. What parameters have an effect when issuing the REINDEX command?
2. What is the best possible way to increase the speed of REINDEX?

Thanks in Advance

Regards
Raghavendra


Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread Kevin Grittner
raghavendra t raagavendra@gmail.com wrote:
 
 I have a table 40GB in size; it has a few indexes on it.
 
What does the table look like?  What indexes are there?
 
 When I try to REINDEX the table,
 
Why are you doing that?
 
 it takes a long time.
 
How long?
 
 I tried increasing maintenance_work_mem, but still I haven't
 found a satisfying result.
 
What run time are you expecting?
 
 Questions
 ===
 1. What parameters have an effect when issuing the REINDEX command?
 2. What is the best possible way to increase the speed of REINDEX?
 
It's hard to answer that without more information, like PostgreSQL
version and configuration, for starters.  See:
 
http://wiki.postgresql.org/wiki/SlowQueryQuestions
 
My best guess is that you can make them instantaneous by not running
them.  A good VACUUM policy should make such runs unnecessary in
most cases -- at least on recent PostgreSQL versions.
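 
For example, a quick check that autovacuum is enabled and actually
visiting your tables:
 
SHOW autovacuum;
SELECT relname, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables ORDER BY relname;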
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t
Hi Kevin,

Thank you for the update,

What does the table look like?  What indexes are there?
The table has a combination of bytea columns. The indexes are B-tree and partial.

Why are you doing that?
Our table faces a lot of updates and deletes in a day, so we prefer REINDEX
to update the indexes as well as to overcome a corrupted index.

 How long?
More than 4 hrs..

What run time are you expecting?
Less than what it is taking at present.

It's hard to answer that without more information, like PostgreSQL
version and configuration, for starters.  See:
                                version
------------------------------------------------------------------------
 PostgreSQL 8.4.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2
20080704 (Red Hat 4.1.2-44), 32-bit
(1 row)

http://wiki.postgresql.org/wiki/SlowQueryQuestions
Expected the performance question..

Regards
Raghavendra

On Thu, Apr 1, 2010 at 2:32 AM, Kevin Grittner
kevin.gritt...@wicourts.govwrote:

 raghavendra t raagavendra@gmail.com wrote:

  I have a table 40GB in size; it has a few indexes on it.

 What does the table look like?  What indexes are there?

  When I try to REINDEX the table,

 Why are you doing that?

  it takes a long time.

 How long?

  I tried increasing maintenance_work_mem, but still I haven't
  found a satisfying result.

 What run time are you expecting?

  Questions
  ===
  1. What parameters have an effect when issuing the REINDEX command?
  2. What is the best possible way to increase the speed of REINDEX?

 It's hard to answer that without more information, like PostgreSQL
 version and configuration, for starters.  See:

 http://wiki.postgresql.org/wiki/SlowQueryQuestions

 My best guess is that you can make them instantaneous by not running
 them.  A good VACUUM policy should make such runs unnecessary in
 most cases -- at least on recent PostgreSQL versions.

 -Kevin



Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread Kevin Grittner
raghavendra t raagavendra@gmail.com wrote:
 
 overcome with a corrupted index.
 
If this is a one-time fix for a corrupted index, did you look at
CREATE INDEX CONCURRENTLY?  You could avoid any down time while you
fix things up.
 
http://www.postgresql.org/docs/8.4/interactive/sql-createindex.html
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t

 If this is a one-time fix for a corrupted index, did you look at
 CREATE INDEX CONCURRENTLY?  You could avoid any down time while you
 fix things up.

Using CREATE INDEX CONCURRENTLY will avoid the exclusive locks on the table,
but my question is how to get the performance back on the existing indexes.
You mean to say, drop the existing indexes and create them with
CONCURRENTLY? Does this give the performance back?

Regards
Raghavendra


On Thu, Apr 1, 2010 at 3:10 AM, Kevin Grittner
kevin.gritt...@wicourts.govwrote:

 raghavendra t raagavendra@gmail.com wrote:

  overcome with a corrupted index.

 If this is a one-time fix for a corrupted index, did you look at
 CREATE INDEX CONCURRENTLY?  You could avoid any down time while you
 fix things up.

 http://www.postgresql.org/docs/8.4/interactive/sql-createindex.html

 -Kevin



Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread Kevin Grittner
raghavendra t raagavendra@gmail.com wrote:
 
  my question is how to get the performance back on the existing indexes.
  You mean to say, drop the existing indexes and create them with
  CONCURRENTLY? Does this give the performance back?
 
You would normally want to create first and then drop the old ones,
unless the old ones are hopelessly corrupted.  Since you still
haven't given me any information to suggest you need to reindex
except for the mention of corruption, or any information to help
identify where the performance bottleneck is, I can't see any other
improvements to suggest at this point.
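 
In sketch form, with hypothetical names:
 
CREATE INDEX CONCURRENTLY my_index_new ON my_table (my_column);
DROP INDEX my_index;               -- the old, bloated or corrupted index
ALTER INDEX my_index_new RENAME TO my_index;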
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t
Thank you for the suggestion.

On Thu, Apr 1, 2010 at 3:21 AM, Kevin Grittner
kevin.gritt...@wicourts.govwrote:

 raghavendra t raagavendra@gmail.com wrote:

   my question is how to get the performance back on the existing indexes.
   You mean to say, drop the existing indexes and create them with
   CONCURRENTLY? Does this give the performance back?

 You would normally want to create first and then drop the old ones,
 unless the old ones are hopelessly corrupted.  Since you still
 haven't given me any information to suggest you need to reindex
 except for the mention of corruption, or any information to help
 identify where the performance bottleneck is, I can't see any other
 improvements to suggest at this point.

 -Kevin



Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread Kevin Grittner
raghavendra t raagavendra@gmail.com wrote:
 Thank you for the suggestion.
 
I'm sorry I couldn't come up with more, but what you've provided so
far is roughly equivalent to me telling you that it takes over four
hours to travel to see my Uncle Jim, and then asking you how I can
find out how he's doing in less time than that.  There's just not
much to go on.  :-(
 
If you proceed with the course suggested in the URL I referenced,
people on the list have a chance to be more helpful to you.
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread Jaime Casanova
On Wed, Mar 31, 2010 at 5:33 PM, raghavendra t
raagavendra@gmail.com wrote:

Why are you doing that?
 Our table faces a lot of updates and deletes in a day, so we prefer REINDEX
 to update the indexes as well as to overcome a corrupted index.


do you have a corrupted index? if not, there is nothing to do...
REINDEX is not a maintenance task on postgres

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t

 I'm sorry I couldn't come up with more, but what you've provided so
 far is roughly equivalent to me telling you that it takes over four
 hours to travel to see my Uncle Jim, and then asking you how I can
 find out how he's doing in less time than that.  There's just not
 much to go on.  :-(

 If you proceed with the course suggested in the URL I referenced,
 people on the list have a chance to be more helpful to you.

Instead of looking into the priority of the question or where it has to be
posted, it would be appreciated to keep the discussion to the point
mentioned. Truly, this question belongs somewhere else, as you have
mentioned in the URL. But an answer for Q1 might be expected at least. I
hope I can get the information from the other thread in the other category.

Thank you

Regards
Raghavendra


On Thu, Apr 1, 2010 at 3:40 AM, Kevin Grittner
kevin.gritt...@wicourts.govwrote:

 raghavendra t raagavendra@gmail.com wrote:
  Thank you for the suggestion.

 I'm sorry I couldn't come up with more, but what you've provided so
 far is roughly equivalent to me telling you that it takes over four
 hours to travel to see my Uncle Jim, and then asking you how I can
 find out how he's doing in less time than that.  There's just not
 much to go on.  :-(

 If you proceed with the course suggested in the URL I referenced,
 people on the list have a chance to be more helpful to you.

 -Kevin



Re: [PERFORM] temp table on commit delete rows: transaction overhead

2010-03-31 Thread Bruce Momjian
Tom Lane wrote:
 Artiom Makarov artiom.maka...@gmail.com writes:
  When temp tables with on commit delete rows exist, I can see a
  strange delay at any 'begin' and 'commit'.
 
 A delay at commit is hardly surprising, because each such temp table
 requires filesystem operations at commit (basically an ftruncate).
 I don't recall any operations at transaction start for such tables,
 but there may be some.

I think one of the problems is that we do the truncate even if the table
has not been touched by the query, which is poor behavior.
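
For reference, the kind of table at issue:

CREATE TEMP TABLE scratch (id integer) ON COMMIT DELETE ROWS;

Once such a table exists in a session, every transaction commit truncates
it, whether or not the transaction touched it.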

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread Craig Ringer
Jaime Casanova wrote:
 On Wed, Mar 31, 2010 at 5:33 PM, raghavendra t
 raagavendra@gmail.com wrote:
 Why are you doing that?
 Our table faces a lot of updates and deletes in a day, so we prefer REINDEX
 to update the indexes as well as to overcome a corrupted index.

 
 do you have a corrupted index? if not, there is nothing to do...
  REINDEX is not a maintenance task on postgres

Actually, if your free_space_map (pre 8.4) isn't up to keeping track of
bloat, or autovac isn't running enough, you're likely to get bloat of
indexes as well as tables that may need VACUUM FULL + REINDEX to
properly clean up.

It's probably better to fix your fsm/autovac settings and then CLUSTER the
table so it doesn't happen again, though.
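
A one-time cleanup along those lines might look like (names hypothetical):

VACUUM FULL my_table;             -- reclaim table bloat (pre-9.0 style)
REINDEX TABLE my_table;           -- rebuild the bloated indexes
-- or, to also order the table by an index (8.4 syntax):
CLUSTER my_table USING my_index;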

--
Craig Ringer

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance