Re: [PERFORM] Please Help: PostgreSQL performance Optimization

2006-01-13 Thread Jamal Ghaffour

Andrew Lazarus wrote:


Jamal Ghaffour wrote:


CREATE TABLE cookies (
   domain varchar(50) NOT NULL,
   path varchar(50) NOT NULL,
   name varchar(50) NOT NULL,
   principalid varchar(50) NOT NULL,
   host text NOT NULL,
   value text NOT NULL,
   secure bool NOT NULL,
   timestamp timestamp with time zone NOT NULL DEFAULT 
CURRENT_TIMESTAMP+TIME '04:00:00',

   PRIMARY KEY  (domain,path,name,principalid)
)



[snip]

SELECT path, upper(name) AS name, value FROM cookies WHERE 
timestamp > CURRENT_TIMESTAMP AND principalid='192.168.8.219' AND 
secure=FALSE AND (domain='ping.icap-elios.com' OR 
domain='.icap-elios.com')




I think the problem here is that the column order in the index doesn't 
match the columns used in the WHERE clause criteria. Try adding an 
index on (domain,principalid) or (domain,principalid,timestamp). If 
these are your only queries, you can get the same effect by 
re-ordering the columns in the table so that this is the column order 
used by the primary key and its implicit index.


You should check up on EXPLAIN and EXPLAIN ANALYZE to help you debug 
slow queries.


Hi,
I created an index on the cookies table:
CREATE INDEX index_cookies_select ON cookies (domain, principalid, 
timestamp);

and executed my UPDATE and SELECT queries:

1 - The first SELECT query gives the following results:

icap=# EXPLAIN ANALYZE SELECT path, upper(name) AS name, value FROM 
cookies WHERE timestamp > CURRENT_TIMESTAMP AND 
principalid='192.168.8.219' AND secure=FALSE AND 
(domain='ping.icap-elios.com' OR domain='.icap-elios.com');
   
                                          QUERY PLAN
------------------------------------------------------------------------------------------
Bitmap Heap Scan on cookies  (cost=4.02..8.04 rows=1 width=268) (actual 
time=0.107..0.108 rows=1 loops=1)
  Recheck Cond: ((((domain)::text = 'ping.icap-elios.com'::text) AND 
((principalid)::text = '192.168.8.219'::text) AND (timestamp > now())) 
OR (((domain)::text = '.icap-elios.com'::text) AND 
((principalid)::text = '192.168.8.219'::text) AND (timestamp > now())))
  Filter: ((timestamp > now()) AND (NOT secure))
  ->  BitmapOr  (cost=4.02..4.02 rows=1 width=0) (actual 
time=0.091..0.091 rows=0 loops=1)
        ->  Bitmap Index Scan on index_cookies_select  (cost=0.00..2.01 
rows=1 width=0) (actual time=0.077..0.077 rows=1 loops=1)
              Index Cond: (((domain)::text = 
'ping.icap-elios.com'::text) AND ((principalid)::text = 
'192.168.8.219'::text) AND (timestamp > now()))
        ->  Bitmap Index Scan on index_cookies_select  (cost=0.00..2.01 
rows=1 width=0) (actual time=0.012..0.012 rows=0 loops=1)
              Index Cond: (((domain)::text = '.icap-elios.com'::text) 
AND ((principalid)::text = '192.168.8.219'::text) AND (timestamp > now()))

Total runtime: 0.155 ms
(9 rows)

2 - After that, I launched my test code, which executes the UPDATE and 
SELECT queries continuously in a loop. After 1 minute of continuous 
execution, I obtained the following result:
icap=# EXPLAIN ANALYZE SELECT path, upper(name) AS name, value FROM 
cookies WHERE timestamp > CURRENT_TIMESTAMP AND 
principalid='192.168.8.219' AND secure=FALSE AND 
(domain='ping.icap-elios.com' OR domain='.icap-elios.com');
   
                                          QUERY PLAN
------------------------------------------------------------------------------------------
Bitmap Heap Scan on cookies  (cost=4.02..8.04 rows=1 width=268) (actual 
time=39.545..39.549 rows=1 loops=1)
  Recheck Cond: ((((domain)::text = 'ping.icap-elios.com'::text) AND 
((principalid)::text = '192.168.8.219'::text) AND (timestamp > now())) 
OR (((domain)::text = '.icap-elios.com'::text) AND 
((principalid)::text = '192.168.8.219'::text) AND (timestamp > now())))
  Filter: ((timestamp > now()) AND (NOT secure))
  ->  BitmapOr  (cost=4.02..4.02 rows=1 width=0) (actual 
time=39.512..39.512 rows=0 loops=1)
        ->  Bitmap Index Scan on index_cookies_select  (cost=0.00..2.01 
rows=1 width=0) (actual time=39.471..39.471 rows=2 loops=1)
              Index Cond: (((domain)::text = 
'ping.icap-elios.com'::text) AND ((principalid)::text = 
'192.168.8.219'::text) AND (timestamp > now()))
        ->  Bitmap Index Scan on 

Re: [PERFORM] Please Help: PostgreSQL performance Optimization

2006-01-13 Thread Frank Wiles
On Thu, 12 Jan 2006 01:32:10 +0100
Jamal Ghaffour [EMAIL PROTECTED] wrote:

 I'm using the default configuration file, and I'm asking if I have to
 change some parameters to get good performance.

  In general the answer is yes.  The default is a pretty good best guess
  at what sorts of values work for your typical system, but if you run
  into performance problems the config file is where you should look
  first, provided you've done the simple things like adding good
  indexes, vacuum analyze, etc. 

  You'll want to consult the following documentation out there
  to help you properly tune your configuration: 

  http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
  http://www.powerpostgresql.com/Docs
  http://www.powerpostgresql.com/PerfList
  http://www.revsys.com/writings/postgresql-performance.html

  Hopefully these will help you understand how to set your configuration
  values. 
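
  As a rough illustration (not from Frank's message), these are the sorts
  of settings those guides walk through. The numbers below are placeholders
  in 8.x-era units (buffers/pages/kB) to be adapted to your own RAM and
  workload, not recommendations:

  # illustrative postgresql.conf excerpt -- tune to your hardware
  shared_buffers = 10000          # number of 8 kB shared buffers
  effective_cache_size = 50000    # planner's guess at OS + PG cache, in pages
  work_mem = 8192                 # per-sort/per-hash memory, in kB
  maintenance_work_mem = 65536    # memory for VACUUM, CREATE INDEX, in kB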

 -
   Frank Wiles [EMAIL PROTECTED]
   http://www.wiles.org
 -


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[PERFORM] insert without oids

2006-01-13 Thread Michael Stone

OIDs seem to be on their way out, and most of the time you can get a
more helpful result by using a serial primary key anyway, but I wonder
if there's any extension to INSERT to help identify what unique id a
newly-inserted key will get? Using OIDs the insert would return the OID
of the inserted row, which could be useful if you then want to refer to
that row in a subsequent operation. You could get the same result by
manually retrieving the next number in the sequence and using that value
in the insert, but at the cost of additional DB operations. Are there
plans on updating the insert API for the post-OID world?
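
For concreteness, the manual approach described above looks roughly like
this (table and sequence names are made up for illustration):

SELECT nextval('items_id_seq');              -- returns the next id, say 42
INSERT INTO items (id, name) VALUES (42, 'widget');
-- later, refer to the new row by the id we already know:
SELECT * FROM items WHERE id = 42;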

Mike Stone

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] insert without oids

2006-01-13 Thread Michael Fuhr
On Fri, Jan 13, 2006 at 03:10:11PM -0500, Michael Stone wrote:
 Are there plans on updating the insert API for the post-OID world?

Are you looking for this TODO item?

* Allow INSERT/UPDATE ... RETURNING new.col or old.col

  This is useful for returning the auto-generated key for an INSERT.
  One complication is how to handle rules that run as part of the
  insert.

http://www.postgresql.org/docs/faqs.TODO.html
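
Purely to illustrate what that TODO item is after (this syntax did not
exist in any released PostgreSQL at the time):

INSERT INTO items (name) VALUES ('widget') RETURNING id;
-- or, in the TODO's wording, something like RETURNING new.id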

-- 
Michael Fuhr

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] insert without oids

2006-01-13 Thread Neil Conway
On Fri, 2006-01-13 at 15:10 -0500, Michael Stone wrote:
 OIDs seem to be on their way out, and most of the time you can get a
 more helpful result by using a serial primary key anyway, but I wonder
 if there's any extension to INSERT to help identify what unique id a
 newly-inserted key will get? Using OIDs the insert would return the OID
 of the inserted row, which could be useful if you then want to refer to
 that row in a subsequent operation. You could get the same result by
 manually retrieving the next number in the sequence and using that value
 in the insert, but at the cost of additional DB operations.

There's really no additional operations required:

INSERT INTO t1 VALUES (...);
INSERT INTO t2 VALUES (currval('t1_id_seq'), ...);

You need a separate SELECT if you want to use the generated sequence
value outside the database, although the INSERT ... RETURNING extension
will avoid that (there's a patch implementing this, although it is not
yet in CVS).
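
A sketch of that separate SELECT, reusing the sequence name from Neil's
example:

SELECT currval('t1_id_seq');  -- returns the value the preceding INSERT
                              -- generated in this session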

-Neil



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] insert without oids

2006-01-13 Thread Michael Stone

On Fri, Jan 13, 2006 at 04:29:15PM -0500, Neil Conway wrote:

There's really no additional operations required:
INSERT INTO t2 VALUES (currval('t1_id_seq'), ...);
You need a separate SELECT if you want to use the generated sequence
value outside the database, 


That would, of course, be the goal. IOW, if you have a table which has
data which is unique only for the serial column, the old syntax provided
a way to refer to the newly inserted row uniquely without any additional
operations. 

although the INSERT ... RETURNING extension will avoid that 


That sounds promising. I'll have to put the TODO list on my todo list.
:)

Mike Stone

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] Slow query with joins

2006-01-13 Thread Jim C. Nasby
On Wed, Jan 11, 2006 at 10:30:58PM +0100, Bendik Rognlien Johansen wrote:
 The sort is definitely the culprit. When I removed it the query was  
 instant. I tried setting work_mem = 131072 but it did not seem to  
 help. I really don't understand this :-( Any other ideas?

What's explain analyze show with the sort in?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com       work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] Stable function being evaluated more than once in a single query

2006-01-13 Thread Jim C. Nasby
On Wed, Jan 11, 2006 at 11:33:23PM -0500, Tom Lane wrote:
 Mark Liberman [EMAIL PROTECTED] writes:
  I've got a set-returning function,  defined as STABLE, that I reference 
  twice
  within a single query, yet appears to be evaluated via two separate 
  function 
  scans.
 
 There is no guarantee, express or implied, that this won't be the case.
 
 (Seems like we just discussed this a couple days ago...)

Well, from 32.6:

This category allows the optimizer to optimize away multiple calls of
the function within a single query.

That could certainly be read as indicating that if the function is used
twice in one query it could be optimized to one call.

Is the issue that the optimizer won't combine two function calls (ie:
SELECT foo(..) ... WHERE foo(..)), or is it that sometimes it won't make
the optimization (maybe depending on the query plan, for example)?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com       work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] Throwing unnecessary joins away

2006-01-13 Thread Jim C. Nasby
On Thu, Jan 12, 2006 at 01:35:07PM +0100, Alessandro Baretta wrote:
 Ottó Havasvölgyi wrote:
 Hi all,
  
 Is PostgreSQL able to throw away unnecessary joins?
 For example I have two tables, and I join them on their primary keys, 
 say of type bigint. In this case, if I don't reference one of the 
 tables anywhere except the join condition, then the join can be eliminated.
 Or if I do table1 left join table2 (table1.referer=table2.id) (an N : 
 1 relationship), and I don't reference table2 anywhere else, then it is 
 unnecessary.
 
 It cannot possibly remove unnecessary joins, simply because the join 
 influences whether a tuple in the referenced table gets selected and how 
 many times.

It can remove them if it's an appropriate outer join, or if there is
appropriate RI that proves that the join won't change what data is
selected.

A really common example of this is creating views that pull in tables
that have text names to go with id's, ie:

CREATE TABLE bug_status(
bug_status_id   serial  PRIMARY KEY
, bug_status_name   textNOT NULL UNIQUE
);

CREATE TABLE bug(
...
, bug_status_id int REFERENCES bug_status(bug_status_id)
);

CREATE VIEW bug_v AS
SELECT b.*, bs.bug_status_name FROM bug b NATURAL JOIN bug_status bs
;

If you have a bunch of cases like that and start building views on views
it's very easy to end up in situations where you don't have any need of
bug_status_name at all. And because of the RI, you know that removing
the join can't possibly change the bug.* portion of that view.
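
For example (a hypothetical query, not from the original mail), something
like

SELECT count(*) FROM bug_v;

never touches bug_status_name, so given the FK, the unique constraint, and
a NOT NULL bug_status_id, skipping the join could not change the answer --
though today the planner still performs it.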
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com       work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Throwing unnecessary joins away

2006-01-13 Thread Jim C. Nasby
On Thu, Jan 12, 2006 at 07:51:22PM +0100, Ottó Havasvölgyi wrote:
 Hi,
 
 If the join is to a primary key or notnull unique column(s), then
 inner join is also ok. But of course left join is the simpler case.
 An example:

Actually, you need both the unique/pk constraint, and RI (a fact I
missed in the email I just sent). Nullability is another consideration
as well. But there certainly are some pretty common cases that can be
optimized for.
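
A sketch of the variant that sidesteps the nullability issue (my example,
using the tables from the previous reply): with a LEFT JOIN, every bug row
survives exactly once even when bug_status_id is NULL, so for queries that
never touch bug_status_name the join could in principle be dropped:

CREATE VIEW bug_v AS
SELECT b.*, bs.bug_status_name
  FROM bug b LEFT JOIN bug_status bs USING (bug_status_id)
;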
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com       work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Stable function being evaluated more than once in a single query

2006-01-13 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 Is the issue that the optimizer won't combine two function calls (ie:
 SELECT foo(..) ... WHERE foo(..)), or is it that sometimes it won't make
 the optimization (maybe depending on the query plan, for example)?

What the STABLE category actually does is give the planner permission to
use the function within an indexscan qualification, eg,
WHERE indexed_column = f(42)
Since an indexscan involves evaluating the comparison expression just
once and using its value to search the index, this would be incorrect
if the expression's value might change from row to row.  (For VOLATILE
functions, we assume that the correct behavior is the naive SQL
semantics of actually computing the WHERE clause at each candidate row.)

There is no function cache and no checking for duplicate expressions.
I think we do check for duplicate aggregate expressions, but not
anything else.
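
A minimal sketch of the situation described above (table and function names
are invented, not from this thread):

CREATE FUNCTION lookup_customer_id(text) RETURNS integer AS
  'SELECT customer_id FROM customers WHERE name = $1'
  LANGUAGE sql STABLE;

-- Because the function is STABLE, the planner may evaluate it once and use
-- the result as an index qualification on orders.customer_id:
SELECT * FROM orders WHERE customer_id = lookup_customer_id('ACME');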

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] Stable function being evaluated more than once in a single query

2006-01-13 Thread Jim C. Nasby
Adding -docs...

On Fri, Jan 13, 2006 at 07:27:28PM -0500, Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
  Is the issue that the optimizer won't combine two function calls (ie:
  SELECT foo(..) ... WHERE foo(..)), or is it that sometimes it won't make
  the optimization (maybe depending on the query plan, for example)?
 
 What the STABLE category actually does is give the planner permission to
 use the function within an indexscan qualification, eg,
   WHERE indexed_column = f(42)
 Since an indexscan involves evaluating the comparison expression just
 once and using its value to search the index, this would be incorrect
 if the expression's value might change from row to row.  (For VOLATILE
 functions, we assume that the correct behavior is the naive SQL
 semantics of actually computing the WHERE clause at each candidate row.)
 
 There is no function cache and no checking for duplicate expressions.
 I think we do check for duplicate aggregate expressions, but not
 anything else.
 
In that case I'd say that the STABLE section of 32.6 should be changed
to read:

A STABLE function cannot modify the database and is guaranteed to
return the same results given the same arguments for all calls within a
single surrounding query. This category gives the planner permission to
use the function within an indexscan qualification. (Since an indexscan
involves evaluating the comparison expression just once and using its
value to search the index, this would be incorrect if the expression's
value might change from row to row.) There is no function cache and no
checking for duplicate expressions.

I can provide a patch to that effect if it's easier...

On a related note, would it be difficult to recognize multiple calls of
the same function in one query? ISTM that would be a win for all but the
most trivial of functions...
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com       work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] Extremely irregular query performance

2006-01-13 Thread Bruce Momjian
Jean-Philippe Cote wrote:
 
 
 Can I actually know whether a given plan is executed with GEQO on ?
 In other words, if I launch 'explain query', I'll get a given plan, but if 
 I re-launch
 the query (without the 'explain' keyword), could I get a different
 plan given that GEQO induces some randomness ?
 
 Is it the plan that is different in the fastest case with GEQO or is it
 the time needed to plan that is causing the GEQO to beat the exhaustive
 search?

Yes, it is likely that when using GEQO you would get a different plan
each time, so running it with and without EXPLAIN would produce
different plans.
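
Illustrative only (not from Bruce's reply): to take that randomness out
while testing, you can disable GEQO for the session, or raise the threshold
above the number of relations in the query:

SET geqo TO off;            -- always use the exhaustive planner
-- or
SET geqo_threshold TO 20;   -- keep GEQO only for very large join problems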

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[PERFORM] Hanging Query

2006-01-13 Thread vimal . gupta

We have to insert records (15000- 2) into a table which also
contains (15000-2) records; then, after the insertion, we have to delete
records according to a business rule.
The above process takes place in a transaction, and we are using batches
of 128 to insert the records.
Everything works fine in the QA environment, but somehow after the inserts
the delete query hangs in the production environment. The delete query has
some joins with another table and a self join. There is no exception, as
we have done enough exception handling. It simply hangs with no trace in
the application logs.

When I do ps aux , I see
postgres  5294 41.3  2.4 270120 38092 pts/4  R10:41  52:56
postgres: nuuser nm 127.0.0.1 DELETE

Postgres 7.3.4 on Linux..

Thanks for any help..

Vimal
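
A generic first check (not part of the original report): see whether the
DELETE is waiting on a lock, and what plan it would run:

SELECT * FROM pg_locks WHERE NOT granted;   -- any rows here mean something is blocked
EXPLAIN DELETE ...;                         -- substitute the actual delete statement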


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[PERFORM] >= forces row compare and not index elements compare when possible

2006-01-13 Thread Bernard Dhooghe
Suppose a table with structure:

Table public.t4

 Column | Type  | Modifiers
+---+---
 c1 | character(10) | not null
 c2 | character(6)  | not null
 c3 | date  | not null
 c4 | character(30) |
 c5 | numeric(10,2) | not null
Indexes:
t4_prim PRIMARY KEY, btree (c1, c2, c3)

Then 2 queries

echo explain  select * from t4 where (c1,c2,c3) >=
('A','B','1990-01-01') order by c1,c2,c3|psql test
QUERY PLAN

--
 Index Scan using t4_prim on t4  (cost=0.00..54.69 rows=740 width=75)
   Filter: (ROW(c1, c2, c3) >= ROW('A'::bpchar, 'B'::bpchar,
'1990-01-01'::date))
(2 rows)

and

echo explain  select * from t4 where (c1,c2,c3) >=
('A','B','1990-01-01') orde
QUERY PLAN

--
 Index Scan using t4_prim on t4  (cost=0.00..54.69 rows=740 width=75)
   Filter: (ROW(c1, c2, c3) >= ROW('A'::bpchar, 'B'::bpchar,
'1990-01-01'::date))
(2 rows)

So switching the (c1,c2,c3) comparison from = to >= makes the optimizer
see the where clause as a row filter, which should not really be necessary.

Further

echo explain  select * from t4 where (c1,c2) = ('A','B') order by
c1,c2,c3|ps
QUERY PLAN
---
 Index Scan using t4_prim on t4  (cost=0.00..4.83 rows=1 width=75)
   Index Cond: ((c1 = 'A'::bpchar) AND (c2 = 'B'::bpchar))
(2 rows)

here again the index can be used, even though the row count can be greater
than one.

but

 echo explain  select * from t4 where (c1,c2) >= ('A','B') order by
c1,c2,c3|p
  QUERY PLAN
--
 Index Scan using t4_prim on t4  (cost=0.00..52.84 rows=740 width=75)
   Filter: (ROW(c1, c2) >= ROW('A'::bpchar, 'B'::bpchar))
(2 rows)


So >= (or <=) is not optimized against an index where it could be.
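
One possible workaround (a sketch, not from the original post): adding an
explicit, redundant condition on the leading index column should give the
planner a plain index condition, while the row comparison remains a filter:

explain select * from t4
 where c1 >= 'A'
   and (c1,c2,c3) >= ('A','B','1990-01-01')
 order by c1,c2,c3;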



Bernard Dhooghe


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[PERFORM] Extremely irregular query performance

2006-01-13 Thread Jean-Philippe Côté

Hi,

I'm running version 8.1 on a dedicated Sun v20 server (2 AMD x64's)
with 4Gb of RAM. I have recently noticed that the performance of
some more complex queries is extremely variable and irregular.
For example, I currently have a query that returns a small number 
of rows (5) by joining a dozen tables. Below are the running times
obtained by repeatedly launching this query in psql:

Time: 424.848 ms
Time: 1615.143 ms
Time: 15036.475 ms
Time: 83471.683 ms
Time: 163.224 ms
Time: 2454.939 ms
Time: 188.093 ms
Time: 158.071 ms
Time: 192.431 ms
Time: 195.076 ms
Time: 635.739 ms
Time: 164549.902 ms

As you can see, the performance is most of the time pretty good (less
than 1 second), but every fourth or fifth time I launch the query
the server seems to go into orbit. For the longer running times,
I can see from top that the server process uses almost 100% of
a CPU.

This is rather worrisome, as I cannot be confident of the overall performance
of my application with so much variance in query response times.

I suspect a configuration problem related to the cache mechanism 
(shared_buffers? effective_cache_size?), but to be honest I do not know 
where to start to diagnose it. 

Any help would be greatly appreciated.

Thanks in advance,

J-P


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[PERFORM] Postgres8.0 Planner chooses WRONG plan.

2006-01-13 Thread Pallav Kalva

Hi ,


   I am having a problem optimizing this query: the Postgres optimizer uses a 
plan which involves a seq-scan on a table. When I choose the option to 
disable seq-scans it uses an index-scan, and the query is obviously much faster.

   All tables are vacuumed and analyzed daily as per the docs.

  Why can't Postgres use an index-scan ?


Postgres Version:8.0.2
Platform : Fedora

Here is the explain analyze output. Let me know if any more information 
is needed. Can we make Postgres use an index scan for this query?


Thanks!
Pallav.

---
explain analyze
select * from provisioning.alerts where countystate = 'FL' and countyno 
= '099' and status = 'ACTIVE' ;


   
QUERY PLAN   
--
Nested Loop  (cost=3.45..15842.17 rows=1 width=125) (actual 
time=913.491..18992.009 rows=110 loops=1)
  ->  Nested Loop  (cost=3.45..15838.88 rows=1 width=86) (actual 
time=913.127..18958.482 rows=110 loops=1)
        ->  Hash Join  (cost=3.45..15835.05 rows=1 width=82) (actual 
time=913.093..18954.951 rows=110 loops=1)
              Hash Cond: (outer.fkserviceinstancestatusid = 
inner.serviceinstancestatusid)
              ->  Hash Join  (cost=2.38..15833.96 rows=2 width=74) 
(actual time=175.139..18952.830 rows=358 loops=1)
                    Hash Cond: (outer.fkserviceofferingid = 
inner.serviceofferingid)
                    ->  Seq Scan on serviceinstance si  
(cost=0.00..15831.52 rows=7 width=78) (actual time=174.430..18948.210 
rows=358 loops=1)
                          Filter: (((subplan) = 'FL'::text) AND 
((subplan) = '099'::text))
                          SubPlan
                            ->  Result  (cost=0.00..0.01 rows=1 
width=0) (actual time=0.090..0.093 rows=1 loops=3923)
                            ->  Result  (cost=0.00..0.01 rows=1 
width=0) (actual time=0.058..0.061 rows=1 loops=265617)
                    ->  Hash  (cost=2.38..2.38 rows=3 width=4) (actual 
time=0.444..0.444 rows=0 loops=1)
                          ->  Hash Join  (cost=1.08..2.38 rows=3 
width=4) (actual time=0.312..0.428 rows=1 loops=1)
                                Hash Cond: (outer.fkserviceid = 
inner.serviceid)
                                ->  Seq Scan on serviceoffering so  
(cost=0.00..1.18 rows=18 width=8) (actual time=0.005..0.068 rows=18 loops=1)
                                ->  Hash  (cost=1.07..1.07 rows=1 
width=4) (actual time=0.036..0.036 rows=0 loops=1)
                                      ->  Seq Scan on service s  
(cost=0.00..1.07 rows=1 width=4) (actual time=0.014..0.019 rows=1 loops=1)
                                            Filter: (servicename = 
'alert'::text)
              ->  Hash  (cost=1.06..1.06 rows=1 width=16) (actual 
time=0.044..0.044 rows=0 loops=1)
                    ->  Seq Scan on serviceinstancestatus sis  
(cost=0.00..1.06 rows=1 width=16) (actual time=0.017..0.024 rows=1 loops=1)
                          Filter: (status = 'ACTIVE'::text)
        ->  Index Scan using pk_account_accountid on account a  
(cost=0.00..3.82 rows=1 width=8) (actual time=0.012..0.016 rows=1 loops=110)
              Index Cond: (outer.fkaccountid = a.accountid)
  ->  Index Scan using pk_contact_contactid on contact c  
(cost=0.00..3.24 rows=1 width=47) (actual time=0.014..0.018 rows=1 
loops=110)
        Index Cond: (outer.fkcontactid = c.contactid)
  SubPlan
    ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual 
time=0.072..0.075 rows=1 loops=110)
    ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual 
time=0.079..0.082 rows=1 loops=110)
    ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual 
time=0.086..0.089 rows=1 loops=110)

Total runtime: 18992.694 ms
(30 rows)

Time: 18996.203 ms

-- As you can see, the ->  Seq Scan on serviceinstance si  
(cost=0.00..15831.52 rows=7 width=78) (actual time=174.430..18948.210 
rows=358 loops=1) was taking too long.
   The same query, when I disable the seq-scan, uses an index-scan and is 
much faster now:


set enable_seqscan=false;
SET
Time: 0.508 ms
explain analyze
select * from provisioning.alerts where countystate = 'FL' and countyno 
= '099' and status = 'ACTIVE' ;



QUERY PLAN  
-
Nested Loop  (cost=9.10..16676.10 rows=1 width=125) (actual 
time=24.792..3898.939 rows=110 

Re: [PERFORM] Extremely irregular query performance

2006-01-13 Thread Kenneth Marshall
On Thu, Jan 12, 2006 at 09:48:41AM +, Simon Riggs wrote:
 On Wed, 2006-01-11 at 22:23 -0500, Tom Lane wrote:
  Jean-Philippe Côté [EMAIL PROTECTED] writes:
   Thanks a lot for this info, I was indeed exceeding the genetic
   optimizer's threshold.  Now that it is turned off, I get
   a very stable response time of 435ms (more or less 5ms) for
   the same query. It is about three times slower than the best
   I got with the genetic optimizer on, but the overall average
   is much lower.
  
  Hmm.  It would be interesting to use EXPLAIN ANALYZE to confirm that the
  plan found this way is the same as the best plan found by GEQO, and
  the extra couple hundred msec is the price you pay for the exhaustive
  plan search.  If GEQO is managing to find a plan better than the regular
  planner then we need to look into why ...
 
 It seems worth noting in the EXPLAIN whether GEQO has been used to find
 the plan, possibly along with other factors influencing the plan such as
 enable_* settings.
 

Is it the plan that is different in the fastest case with GEQO or is it
the time needed to plan that is causing the GEQO to beat the exhaustive
search?

Ken


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] Extremely irregular query performance

2006-01-13 Thread Kenneth Marshall
On Thu, Jan 12, 2006 at 03:23:14PM -0500, Jean-Philippe Cote wrote:
 
 
 Can I actually know whether a given plan is executed with GEQO on ?
 In other words, if I launch 'explain query', I'll get a given plan, but if 
 I re-launch
 the query (without the 'explain' keyword), could I get a different
 plan given that GEQO induces some randomness ?
 
 Is it the plan that is different in the fastest case with GEQO or is it
 the time needed to plan that is causing the GEQO to beat the exhaustive
 search?
 
GEQO will be used if the number of joins is over the GEQO limit in
the configuration file. The GEQO process is an iterative random
process to find a query plan. The EXPLAIN results are the plan for that
query, but not necessarily for subsequent runs. GEQO's advantage is a
much faster planning time than the exhaustive search method normally used.
Because planning is faster, for short queries a GEQO run can finish more
quickly overall than the exhaustive-search approach. Of course, if you
PREPARE the query, the planning time drops out.
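
A sketch of the PREPARE approach Ken mentions (the table and query are just
placeholders):

PREPARE myplan (int) AS SELECT * FROM some_table WHERE id = $1;
EXECUTE myplan(42);   -- later executions reuse the stored plan, so the
EXECUTE myplan(43);   -- planning cost (GEQO or exhaustive) is paid only once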

Ken

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq