Re: [PERFORM] Why is PostgreSQL so slow on Windows ( Postgres 8.3.7) version

2009-08-03 Thread Grzegorz Jaśkiewicz
How about normalizing the schema for a start?
By the looks of it, you have a huge table with plenty of varchars; that
smells like bad database design.

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Why is PostgreSQL so slow on Windows ( Postgres 8.3.7) version

2009-08-03 Thread Marc Cousin
The few 'obvious' things I see:

ID and POLLID aren't of the same type (numeric vs bigint).

TTIME isn't indexed.

And as a general matter, you should stick to native datatypes if you don't
need numeric.

But as said in the other answer, maybe you should redo this schema with
more consistent datatypes.

Anyway, from what I remember, it's not advisable to set shared_buffers that
high on Windows (I don't do much Windows myself, so maybe someone better
informed will chime in).

Anyway, you can start by correcting the schema…

On Friday 31 July 2009 07:45:55 pari krishnan wrote:
 We are using Postgres 8.3.7 in our java application. We are doing
 performance tuning and load testing in our setup. We have noticed that
 some of our queries to the database take a long time to return results.
 [...]

[PERFORM] Why is PostgreSQL so slow on Windows ( Postgres 8.3.7) version

2009-08-02 Thread parimala
Dear All,
 
 
 We are using Postgres 8.3.7 in our Java application. We are doing performance
tuning and load testing in our setup. We have noticed that some of our queries
to the database take a long time to return results. Please find our setup
details below.
 
We observed that Postgres running on Windows is slower than on Linux.
 
 Machine & Database Details :
 
 Windows configuration:
 4 GB RAM
 4*1.6 GHZ
 windows 2008 server standard edition
 
 Postgresql configuration:
 
 shared_buffers: 1 GB
 effective_cache_size: 2GB
 fsync: off (we also tested with this parameter on and observed the same
slowness)
 
 
  Database Details : 
 
 Postgres Database : PostgreSQL 8.3.7.1
 Driver Version : PostgreSQL 8.3 JDBC4 with SSL (build 604)
  We are using 40 database connections.
 
 
 We have a few tables which hold a large amount of data. While the application
is running, a STATSDATA table is created daily with the date in the table name,
 e.g. STATSDATA8_21_2009
 
 Schema for STATSDATA table
 
 create table STATSDATA8_21_2009(
 POLLID Numeric(19),
 INSTANCE varchar(100),
 TTIME Numeric(19),
 VAL Numeric(13));

 CREATE INDEX POLLID%_ndx on STATSDATA%(POLLID)

Schema for PolledData

create table PolledData(
NAME varchar(50) NOT NULL ,
ID BIGINT NOT NULL ,
AGENT varchar(50) NOT NULL ,
COMMUNITY varchar(100) NOT NULL ,
PERIOD INTEGER NOT NULL,
ACTIVE varchar(10),
OID varchar(200) NOT NULL,
LOGDIRECTLY varchar(10),
LOGFILE varchar(100),
SSAVE varchar(10),
THRESHOLD varchar(10),
ISMULTIPLEPOLLEDDATA varchar(10),
PREVIOUSSEVERITY INTEGER,
NUMERICTYPE INTEGER,
SAVEABSOLUTES varchar(10),
TIMEAVG varchar(10),
PORT INTEGER,
WEBNMS varchar(100),
GROUPNAME varchar(100),
LASTCOUNTERVALUE BIGINT ,
LASTTIMEVALUE BIGINT ,
TIMEVAL BIGINT NOT NULL ,
POLICYNAME varchar(100),
THRESHOLDLIST varchar(200),
DNSNAME varchar(100),
SUFFIX varchar(20),
STATSDATATABLENAME varchar(100),
POLLERNAME varchar(200),
FAILURECOUNT INTEGER,
FAILURETHRESHOLD INTEGER,
PARENTOBJ varchar(100),
PROTOCOL varchar(50),
SAVEPOLLCOUNT INTEGER,
CURRENTSAVECOUNT INTEGER,
SAVEONTHRESHOLD varchar(10),
SNMPVERSION varchar(10),
USERNAME varchar(30),
CONTEXTNAME varchar(30),
PRIMARY KEY (ID,NAME,AGENT,OID),
index PolledData0_ndx ( NAME),
index PolledData1_ndx ( AGENT),
index PolledData2_ndx ( OID),
index PolledData3_ndx ( ID),
index PolledData4_ndx ( PARENTOBJ),
 )

 
We have 300k rows in the PolledData table. In each STATSDATA table we have
almost 12 to 13 million rows. Every one-minute interval we insert data into
the STATSDATA table. In our application we use insert and select queries
against the STATSDATA table at regular intervals. Please let us know why the
queries below take so long to return results. Is there anything we need to do
to tune the Postgres database?




Please find explain analyze output.

 
 First Query :

postgres=# explain analyze SELECT COUNT(*) FROM ( SELECT ID, PolledData.AGENT,
NAME, INSTANCE, TTIME, VAL FROM PolledData, STATSDATA8_21_2009 WHERE
( ( PolledData.ID=STATSDATA8_21_2009.POLLID) AND ( ( TTIME >= 1250838027454)
AND ( TTIME <= 1250838079654) ) ) ) t1;
                                   QUERY PLAN
--------------------------------------------------------------------------------
 Aggregate  (cost=773897.12..773897.13 rows=1 width=0) (actual time=17818.410..17818.412 rows=1 loops=1)
   ->  Merge Join  (cost=717526.23..767505.06 rows=2556821 width=0) (actual time=17560.469..17801.790 rows=13721 loops=1)
         Merge Cond: (statsdata8_21_2009.pollid = ((polleddata.id)::numeric))
         ->  Sort  (cost=69708.44..69742.49 rows=13619 width=8) (actual time=2392.659..2416.093 rows=13721 loops=1)
               Sort Key: statsdata8_21_2009.pollid
               Sort Method:  quicksort  Memory: 792kB
               ->  Seq Scan on statsdata8_21_2009  (cost=0.00..68773.27 rows=13619 width=8) (actual time=0.077..2333.132 rows=13721 loops=1)
                     Filter: ((ttime >= 1250838027454::numeric) AND (ttime <= 1250838079654::numeric))
         ->  Materialize  (cost=647817.78..688331.92 rows=3241131 width=8) (actual time=15167.767..15282.232 rows=21582 loops=1)
               ->  Sort  (cost=647817.78..655920.61 rows=3241131 width=8) (actual time=15167.756..15218.645 rows=21574 loops=1)
                     Sort Key: ((polleddata.id)::numeric)
                     Sort Method:  external merge  Disk: 736kB
                     ->  Seq Scan on polleddata  (cost=0.00..164380.31 rows=3241131 width=8) (actual time=1197.278..14985.665 rows=23474 loops=1)
 Total runtime: 17826.511 ms
(14 rows)
 
 Second Query :
 
 postgres=# explain analyze SELECT COUNT(*) FROM ( SELECT ID, PolledData.AGENT,
 NAME, INSTANCE, TTIME, VAL FROM PolledData, STATSDATA8_20_2009 WHERE
 ( ( PolledData.ID=STATSDATA8_20_2009.POLLID) AND ( ( TTIME >= 1250767134601)
 AND ( TTIME <= 1250767384601) ) ) ) t1;
                                   QUERY PLAN
 --------------------------------------------------------------------------------
  Aggregate  (cost=1238144.31..1238144.32 rows=1 width=0)

Re: [PERFORM] Why is PostgreSQL so slow on Windows ( Postgres 8.3.7) version

2009-08-02 Thread PFC


some of our queries to the database take a long time to return results.


 fsync: off	(we also tested with this parameter on and observed the same
slowness)


	If your queries take a long time to return results, I suppose you are
talking about SELECTs.


	fsync = off will not make SELECTs faster (only inserts, updates,
deletes), and it is not worth the risk of data loss.


	synchronous_commit = off has about the same advantage (faster commits)
as fsync = off, but with no risk of corrupting the database, so it is much
better!
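The trade-off sketched here: a crash may lose the last few commits, but the database itself stays consistent. It could be applied per session or in postgresql.conf:

```sql
-- Keep fsync = on; relax commit durability instead. A crash may lose the
-- most recent commits, but cannot corrupt the database.
SET synchronous_commit = off;  -- per session, or set it in postgresql.conf
```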



We have 300k rows in PolledData Table. In each STATSDATA table, we have
almost 12 to 13 million rows.


	OK. So you insert about 13 million rows per day?
	That is about 150 rows per second.


Every one minute interval, we insert data into the STATSDATA table.


	I assume you are making one INSERT INTO statsdata VALUES (.. 150
values .)

and not 150 separate inserts, yes?
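If the application currently issues one statement per row, a batched version of the once-a-minute load might look like this (column values are invented for illustration):

```sql
-- One multi-row INSERT per polling interval instead of ~150 single-row
-- statements; this saves a round-trip and a parse per row:
INSERT INTO statsdata8_21_2009 (pollid, instance, ttime, val) VALUES
  (101, 'if-eth0', 1250838027454, 42),
  (102, 'if-eth1', 1250838027454, 17),
  (103, 'cpu-0',   1250838027454, 63);
```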


First Query :
SELECT COUNT(*) FROM (

SELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIME, VAL
FROM PolledData, STATSDATA8_21_2009 WHERE
( ( PolledData.ID=STATSDATA8_21_2009.POLLID)
AND ( ( TTIME >= 1250838027454)
AND ( TTIME <= 1250838079654) ) ) ) t1;

* You could rewrite as :

SELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIME, VAL
FROM PolledData
JOIN STATSDATA8_21_2009 ON ( PolledData.ID = STATSDATA8_21_2009.POLLID)
WHERE TTIME BETWEEN ... AND ...

- It is exactly the same query, but much easier to read.

* some ANALYZE-ing of your tables would be useful, since the estimates  
from the planner look suspiciously different from reality

- ANALYZE is fast, you can run it often if you INSERT rows all the time

* You are joining on POLLID, which is a NUMERIC in one table and a BIGINT
in the other table.

- Is there any reason for this type difference?
- Could you use BIGINT in both tables?
- BIGINT is faster than NUMERIC and uses less space.
- Type conversions use CPU cycles too.
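If nothing actually requires NUMERIC, aligning the types could look like the sketch below (untested; on 8.3 each ALTER rewrites the whole table, so it would need a quiet period):

```sql
-- Make the STATSDATA columns bigint so the join no longer has to cast
-- polleddata.id to numeric:
ALTER TABLE statsdata8_21_2009 ALTER COLUMN pollid TYPE bigint;
ALTER TABLE statsdata8_21_2009 ALTER COLUMN ttime  TYPE bigint;
```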

* Should StatsData.POLLID have a foreign key REFERENCES PolledData.ID ?
- This won't make the query faster, but if you know all rows in StatsData
reference rows in PolledData (because of the FK constraint) and you want a
count(*) like above, you don't need to JOIN.
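If every POLLID is guaranteed to exist in PolledData, the constraint might be declared as below (a sketch; note a foreign key must reference a unique column, and PolledData's primary key is the composite (ID, NAME, AGENT, OID), so ID would first need its own unique constraint):

```sql
ALTER TABLE polleddata
  ADD CONSTRAINT polleddata_id_uniq UNIQUE (id);

ALTER TABLE statsdata8_21_2009
  ADD CONSTRAINT statsdata_pollid_fk
  FOREIGN KEY (pollid) REFERENCES polleddata (id);
```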


* TTIME >= 1250838027454 AND TTIME <= 1250838079654
- TTIME should be TIMESTAMP (with or without TIMEZONE) or BIGINT but
certainly not NUMERIC
- An index on StatsData.TTIME would be useful; it would avoid the Seq Scan,
replacing it with a Bitmap Scan, which is much faster


* work_mem
- since you have few connections you could increase work_mem


Second Query :


Same as first query


Third Query


SELECT COUNT(*) FROM ( SELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIME,
VAL

FROM PolledData, STATSDATA8_21_2009
WHERE ( ( PolledData.ID=STATSDATA8_21_2009.POLLID)
AND ( ( TTIME >= 1250838027454) AND ( TTIME <= 1250838027454) ) )

union all  SELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIME, VAL FROM
PolledData, STATSDATA8_20_2009
WHERE ( ( PolledData.ID=STATSDATA8_20_2009.POLLID) AND ( ( TTIME >=
1250767134601) AND ( TTIME <= 1250767134601) ) ) ) t1 ;


Basically this is, again, exactly the same query as above, but two times,  
and UNION ALL'ed


* You could rewrite it like this :

SELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIME, VAL
FROM PolledData
JOIN (
  SELECT ... FROM STATSDATA8_21_2009 WHERE TTIME BETWEEN ... AND ...
  UNION ALL
  SELECT ... FROM STATSDATA8_20_2009 WHERE TTIME BETWEEN ... AND ...
) stats ON ( PolledData.ID = stats.POLLID )

* If TTIME is the current time, and you insert data as it comes, data in  
StatsData tables is probably already ordered on TTIME.
- If it is not the case, once a table is filled and becomes read-only,  
consider CLUSTER on the index you created on TTIME

- It will make range queries on TTIME much faster
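Both suggestions sketched together (the index name is invented; CLUSTER takes an exclusive lock and rewrites the table, so it should only run on a day's table that is no longer being written to):

```sql
CREATE INDEX statsdata8_21_2009_ttime_ndx ON statsdata8_21_2009 (ttime);

-- Once the table is read-only, physically order it by ttime and refresh stats:
CLUSTER statsdata8_21_2009 USING statsdata8_21_2009_ttime_ndx;
ANALYZE statsdata8_21_2009;
```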

* Query plan
Seq Scan on statsdata8_21_2009  (cost=0.00..70574.88 rows=1 width=32)  
(actual time=0.047..29066.227 rows=227 loops=1)
Seq Scan on statsdata8_20_2009  (cost=0.00..382519.60 rows=1 width=32)  
(actual time=3136.008..93985.540 rows=1 loops=1)


Postgres thinks there is 1 row in those tables... that's probably not the
case!

The first one returns 227 rows, so the plan chosen is a catastrophe.

I was a bit intrigued by your query, so I made a little test...

BEGIN;
CREATE TABLE test( x INT, y INT );
INSERT INTO test (SELECT n,n FROM generate_series( 1,100 ) AS n );
CREATE INDEX test_x ON test( x );
CREATE INDEX test_y ON test( y );
COMMIT;

ANALYZE test;

test=# EXPLAIN ANALYZE SELECT * FROM test a JOIN test b ON (b.x=a.x) WHERE
a.x BETWEEN 0 AND 1;

 QUERY PLAN
-
 Hash Join  (cost=480.53..23759.14 rows=10406 width=16) (actual  
time=15.614..1085.085 rows=1 loops=1)

   Hash Cond: (b.x = a.x)
   -  Seq Scan on test b  (cost=0.00..14424.76 rows=76 

Re: [PERFORM] Why is PostgreSQL so slow on Windows ( Postgres 8.3.7) version

2009-08-02 Thread Euler Taveira de Oliveira
parimala wrote:

[Don't repeat your answer. It's a PITA to receive multiple identical copies]

 We are using Postgres 8.3.7 in our java application. We are doing
 performances tuning and load testing in our setup. we have noticed that
 ,some of our queries to the database taking long time to return the
 results.Please find our setup details belows.
 
 We observed that postgres is running in windows is slower than the linux .
 
That is true, and it will be for quite some time. The Windows port is very
recent compared with the long history of Unix support.

 Postgresql configuration:
 
 shared_buffers: 1 GB
I don't use Windows, but I have read Windows users saying that it isn't
appropriate to set shared_buffers too high there. Take a look at the archives.

 Effective_cache_size: 2GB
 fsync: off  (even we tested this parameter is on ,we observed the same
 slowness )
 
What about the other parameters that differ from the defaults (uncommented
parameters)? Also, don't turn off fsync unless you're pretty sure about
the consequences.

 We have 300k row's in PolledData Table.In each STATSDATA table ,we have
 almost 12 to 13 million rows. Every one minute interval ,we insert data
 into to STATSDATA table. In our application ,we use insert and select
 query to STATSDATA table at regular interval. Please let us know why the
 below query takes more time to return the results. is there any thing we
 need to do to tune the postgres database ?
 
It seems very strange that your queries are not using the indexes. Do you have
autovacuum turned on? Have you analyzed your tables recently?
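Both checks are quick to run, for example:

```sql
SHOW autovacuum;              -- should return 'on'
ANALYZE polleddata;           -- refresh planner statistics
ANALYZE statsdata8_21_2009;
```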

  Merge Cond: (statsdata8_21_2009.pollid =
 ((polleddata.id)::numeric))
Out of curiosity, why does the foreign key have a different datatype from its
primary key?


-- 
  Euler Taveira de Oliveira
  http://www.timbira.com/
