[PERFORM]

2006-02-22 Thread Chethana, Rao (IE10)

Hello!



This is Chethana. I need to know how to improve the performance of
PostgreSQL. It is rich in features but slow in performance.

Please reply as soon as possible.

Thank you,

Chethana.
Re: [PERFORM]

2006-02-22 Thread Richard Huxton

Chethana, Rao (IE10) wrote:

> This is Chethana.  I need to know how to improve the performance of
> PostgreSQL. It is rich in features but slow in performance.


You'll need to provide some details first.

How are you using PostgreSQL?
How many concurrent users?
Mostly updates or small selects or large summary reports?

What hardware do you have?
What configuration changes have you made?

Are you having problems with all queries or only some?
Have you checked the plans for these with EXPLAIN ANALYSE?
Have you made sure your tables are vacuumed and analysed?

That should be a start.
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [PERFORM]

2006-02-22 Thread Gourish Singbal

try this.

http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
http://www.powerpostgresql.com/PerfList
Performance depends on the postgresql.conf parameters apart from the hardware details.


On 2/22/06, Chethana, Rao (IE10) [EMAIL PROTECTED] wrote:

> Hello!
>
> This is Chethana. I need to know how to improve the performance of
> PostgreSQL. It is rich in features but slow in performance.
>
> Please reply as soon as possible.
>
> Thank you,
> Chethana.

--
Best,
Gourish Singbal


Re: [PERFORM]--pls reply ASAP

2006-02-22 Thread Richard Huxton

Chethana, Rao (IE10) wrote:

> Hello!
>
> Thank you for responding quickly. I really need your help.

Please make sure you cc: the list - I don't read this inbox regularly.

> Sir, here are the answers to your questions. Please tell me what to do
> next (regarding increasing the performance of PostgreSQL) so that I can
> proceed further.

>> How are you using PostgreSQL?
> We are using 7.4.3 with max of (512*6) around 3000 records.

Max of what are (512*6)? Rows? Tables? Sorry - I don't understand what
you mean here.


Oh, and upgrade to the latest release of 7.4.x - there are important 
bugfixes.



>> How many concurrent users?
> It is configured for 100, but we are using only 4 or 5.
>
>> Mostly updates or small selects or large summary reports?
> Update, delete, insert operations.
>
>> What hardware do you have?
> X86 based, 233 MHz, 256 MB RAM.

Hmm - not blazing fast, but it'll certainly run on that.

>> What configuration changes have you made?
> No changes, we've used default settings.


That will need changing. As Gourish suggested in another reply, read the 
notes here:

http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html

You'll want to be careful with the memory settings given that you've 
only got 256MB to play with. Don't allocate too much to PostgreSQL 
itself, let the o.s. cache some files for you.
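For the archives: a conservative starting point on a 256 MB box might look like the fragment below. These values are illustrative guesses for a 7.4-era server, not settings recommended in this thread; note that 7.4 calls the per-sort memory setting sort_mem (it became work_mem in 8.0).

```
# postgresql.conf fragment - illustrative values for a 256 MB machine
shared_buffers = 3000          # 8 kB pages, ~24 MB; keep this modest
sort_mem = 4096                # kB per sort/hash, ~4 MB
effective_cache_size = 16384   # 8 kB pages, ~128 MB assumed in OS cache
```

The point is the shape, not the exact numbers: give PostgreSQL a small fixed share of RAM and tell the planner how much the OS is likely caching.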



>> Are you having problems with all queries or only some?
> Only some queries, particularly ones involving foreign keys.


Are you happy that there are indexes on the referring side of the 
foreign key where necessary? The primary keys you reference will have 
indexes on them, the other side will not unless you add them yourself.
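For the archives, the usual fix is a plain index on the referencing column. Table and column names below are hypothetical stand-ins for the poster's schema:

```sql
-- parent.id is a primary key, so it is already indexed.
-- The referencing column is NOT indexed automatically:
CREATE INDEX child_parent_id_idx ON child (parent_id);

-- Without this index, every DELETE/UPDATE on parent has to scan
-- child sequentially to verify the foreign-key constraint.
```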



>> Have you checked the plans for these with EXPLAIN ANALYSE?
> No.


That would be something worth doing then. Find a bad query, run EXPLAIN 
ANALYSE SELECT ... and post a new question with the output and details 
of the tables involved.
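As a sketch of that workflow (the table name is hypothetical):

```sql
-- Refresh statistics first so the planner's estimates mean something:
VACUUM ANALYZE orders;

-- Then run the slow statement under EXPLAIN ANALYZE and compare the
-- estimated row counts with the actual ones in each plan node:
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;
```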



>> Have you made sure your tables are vacuumed and analysed?
> Yes.


Good. With the limited amount of RAM you have, you'll want to use it as 
efficiently as possible.


--
  Richard Huxton
  Archonet Ltd



Re: [PERFORM] Help with nested loop left join performance

2006-02-22 Thread Tom Lane
Richard Huxton dev@archonet.com writes:
> George Woodring wrote:
>> FROM
>> settop_billing
>> LEFT OUTER JOIN
>> (dhct JOIN dhct_davic USING(mac))
>> USING
>> (mac)
>> WHERE
>> region='GTown1E' AND node='1E012'

> With 7.4 I seem to remember that explicit JOINs force the evaluation
> order, but I'm not sure if even later versions will rewrite your query. It's
> too early in the morning for me to figure out if it's safe in all cases.

CVS HEAD can re-order left joins in common cases, but no existing
release will touch the ordering of outer joins at all.

It's impossible to tell here which tables the WHERE-clause restrictions
actually bear on, so there's no way to say whether a different join
order would help.  My guess though is that George may be stuck --- in
general you can't move a join into or out of the right side of a left
join without changing the answers.
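A minimal illustration of why (hypothetical tables a, b, c, each with an id column): flattening the inner join out of the left join's right-hand side changes the result.

```sql
-- Query 1: b JOIN c is evaluated inside the nullable side, so every
-- row of a survives, padded with NULLs where b/c have no match.
SELECT *
FROM a
LEFT JOIN (b JOIN c USING (id)) USING (id);

-- Query 2: a naive "flattened" rewrite. The inner join to c now
-- filters the output of the left join, discarding a-rows that have
-- no counterpart in c - a different answer from Query 1.
SELECT *
FROM a
LEFT JOIN b USING (id)
JOIN c USING (id);
```

With a = {1, 2}, b = {1}, c = {1}, Query 1 returns two rows (id 2 padded with NULLs) while Query 2 returns only the row for id 1.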

regards, tom lane



Re: [PERFORM] LIKE query on indexes

2006-02-22 Thread Brendan Duddridge
Hi,

Can this technique work with case insensitive ILIKE? It didn't seem to
use the index when I used ILIKE instead of LIKE.

Thanks,

Brendan Duddridge | CTO | 403-277-5591 x24 | [EMAIL PROTECTED]
ClickSpace Interactive Inc.
Suite L100, 239 - 10th Ave. SE
Calgary, AB  T2G 0V9
http://www.clickspace.com

On Feb 21, 2006, at 1:28 PM, Ibrahim Tekin wrote:

> this trick did the job. thanks.
>
> On 2/21/06, Alvaro Herrera [EMAIL PROTECTED] wrote:
>> Scott Marlowe wrote:
>>> On Tue, 2006-02-21 at 09:57, Ibrahim Tekin wrote:
>>>> hi,
>>>> i have btree index on a text type field. i want see rows which starts
>>>> with certain characters on that field. so i write a query like this:
>>>>
>>>> SELECT * FROM mytable WHERE myfield LIKE 'john%'
>>>>
>>>> since this condition is from start of the field, query planner should
>>>> use index to find such elements but explain command shows me it will
>>>> do a sequential scan.
>>>>
>>>> is this lack of a feature or i am wrong somewhere?
>>>
>>> This is an artifact of how PostgreSQL handles locales other than
>>> ASCII. If you want such a query to use an index, you need to back up
>>> your database, and re-initdb with --locale=C as an argument
>>
>> or you can choose to create an index with the text_pattern_ops
>> operator class, which would be used in a LIKE constraint regardless of
>> locale.
>>
>> http://www.postgresql.org/docs/8.1/static/indexes-opclass.html
>>
>> --
>> Alvaro Herrera   http://www.CommandPrompt.com/
>> The PostgreSQL Company - Command Prompt, Inc.
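A note for the archives on the ILIKE question: ILIKE itself will not use either index, but rewriting the query against a functional index on lower() gets the same effect. A sketch, assuming the mytable/myfield names from the quoted thread:

```sql
-- Index for case-sensitive prefix LIKE under a non-C locale:
CREATE INDEX mytable_myfield_pattern
    ON mytable (myfield text_pattern_ops);

-- For case-insensitive searches, index the lowercased value...
CREATE INDEX mytable_myfield_lower_pattern
    ON mytable (lower(myfield) text_pattern_ops);

-- ...and phrase the query so it matches the index expression:
SELECT * FROM mytable WHERE lower(myfield) LIKE lower('john%');
```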



Re: [PERFORM]

2006-02-22 Thread Vivek Khera
On Feb 22, 2006, at 5:38 AM, Chethana, Rao (IE10) wrote:

> It is rich in features but slow in performance.

No, it is fast and feature-rich. But you have to tune it for your
specific needs; the default configuration is not ideal for large DBs.

[PERFORM] Good News re count(*) in 8.1

2006-02-22 Thread Kevin Grittner
I hesitate to raise this issue again, but I've noticed something which I
thought might be worth mentioning.  I've never thought the performance
of count(*) on a table was a significant issue, but I'm prepared to say
that -- for me, at least -- it is officially and totally a NON-issue.

We are replicating data from 72 source databases, each with the
official copy of a subset of the data, to four identical consolidated
databases, spread to separate locations, to serve our web site and other
organization-wide needs.  Currently, two of these central databases are
running a commercial product and two are running PostgreSQL.  There have
been several times that I have run a SELECT COUNT(*) on an entire table
on all central machines.  On identical hardware, with identical data,
and equivalent query loads, the PostgreSQL databases have responded with
a count in 50% to 70% of the time of the commercial product, in spite of
the fact that the commercial product does a scan of a non-clustered
index while PostgreSQL scans the data pages.

The tables have had from a few million to 132 million rows.  The
databases are about 415 GB each.  The servers have 6 GB RAM each.  We've
been running PostgreSQL 8.1, tuned and maintained based on advice from
the documentation and these lists.

I suspect that where people report significantly worse performance for
count(*) under PostgreSQL than some other product, it may sometimes be
the case that they have not properly tuned PostgreSQL, or paid attention
to maintenance issues regarding dead space in the tables.

My recent experience, for what it's worth.

-Kevin




Re: [PERFORM] Good News re count(*) in 8.1

2006-02-22 Thread Luke Lonergan
Kevin,

On 2/22/06 8:57 AM, Kevin Grittner [EMAIL PROTECTED] wrote:

> I hesitate to raise this issue again, but I've noticed something which I
> thought might be worth mentioning.  I've never thought the performance
> of count(*) on a table was a significant issue, but I'm prepared to say
> that -- for me, at least -- it is officially and totally a NON-issue.

Cool!  Kudos to Tom for implementing the improvements in the executor to
move tuples faster through the pipeline.

We see a CPU limit (yes, another limit) of about 300MB/s now on Opteron 250
processors running on Linux.  The filesystem can do 420MB/s sequential scan
in 8k pages, but Postgres count(*) on 8.1.3 can only do about 300MB/s.  This
is still a very large improvement over past versions, but we'd always like
to see more... 

- Luke





Re: [PERFORM]--pls reply ASAP

2006-02-22 Thread Theodore LoScalzo
I know I am sticking my nose into an area here that I have not been
involved in, but this issue is important to me.

Chethana, I have a couple of questions based on what you said you are
using as a platform. See below:

On Feb 22, 2006, at 8:22 AM, Richard Huxton wrote:

> Chethana, Rao (IE10) wrote:
>> Hello!
>> Thank you for responding quickly. I really need your help.
>
> Please make sure you cc: the list - I don't read this inbox regularly.
>
>> Sir, here are the answers to your questions. Please tell me what to do
>> next (regarding increasing the performance of PostgreSQL) so that I can
>> proceed further.
>>
>>> How are you using PostgreSQL?
>> We are using 7.4.3 with max of (512*6) around 3000 records.
>
> Max of what are (512*6)? Rows? Tables? Sorry - I don't understand
> what you mean here.
>
> Oh, and upgrade to the latest release of 7.4.x - there are
> important bugfixes.
>
>>> How many concurrent users?
>> It is configured for 100, but we are using only 4 or 5.
>>> Mostly updates or small selects or large summary reports?
>> Update, delete, insert operations.
>>> What hardware do you have?
>> X86 based, 233 MHz, 256 MB RAM.

What operating system are you running this on?
How much other stuff or applications are you running on the box?
Is this an IDE hard drive system? SCSI? Bus speed? Is it an older
server or a PC?

You don't have a large database at all, but quick access to the data
residing in the database has a lot to do with how the hardware is
configured and what other programs are using the limited system
resources!


> Hmm - not blazing fast, but it'll certainly run on that.
>
>>> What configuration changes have you made?
>> No changes, we've used default settings.
>
> That will need changing. As Gourish suggested in another reply,
> read the notes here:
>
> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
>
> You'll want to be careful with the memory settings given that
> you've only got 256MB to play with. Don't allocate too much to
> PostgreSQL itself, let the o.s. cache some files for you.
>
>>> Are you having problems with all queries or only some?
>> Only some queries, particularly foreign key.
>
> Are you happy that there are indexes on the referring side of the
> foreign key where necessary? The primary keys you reference will
> have indexes on them, the other side will not unless you add them
> yourself.
>
>>> Have you checked the plans for these with EXPLAIN ANALYSE?
>> No.
>
> That would be something worth doing then. Find a bad query, run
> EXPLAIN ANALYSE SELECT ... and post a new question with the output
> and details of the tables involved.
>
>>> Have you made sure your tables are vacuumed and analysed?
>> Yes.
>
> Good. With the limited amount of RAM you have, you'll want to use
> it as efficiently as possible.
>
> --
>   Richard Huxton
>   Archonet Ltd

Theodore LoScalzo








[PERFORM] Joins and full index scans...mysql vs postgres?

2006-02-22 Thread ryan groth
I am issuing a query like this:
SELECT *
   FROM users users
   LEFT JOIN phorum_users_base ON users.uid = phorum_users_base.user_id
   LEFT JOIN useraux ON useraux.uid = users.uid;

The joins are all on the PKs of the tables. It takes 1000ms to run on
postgres. The identical mysql version runs in 230ms. The problem seems
to stem from postgres's insistence to do three complete table scans,
where mysql does one and joins 1:1 against the results of the first. I
have switched the joins to inner joins and the difference is negligible.
Here are the explains on both postgres and mysql. Is there a way to
optimize this basic query for postgres that I am missing?

Postgres Explain:

Merge Left Join  (cost=0.00..2656.36 rows=6528 width=1522)
  Merge Cond: ("outer".uid = "inner".uid)
  ->  Merge Left Join  (cost=0.00..1693.09 rows=6528 width=1264)
        Merge Cond: ("outer".uid = "inner".user_id)
        ->  Index Scan using users_pkey on users  (cost=0.00..763.81 rows=6528 width=100)
        ->  Index Scan using phorum_users_base_pkey on phorum_users_base  (cost=0.00..822.92 rows=9902 width=1168)
  ->  Index Scan using useraux_pkey on useraux  (cost=0.00..846.40 rows=7582 width=262)


MySQL Explain:

id,select_type,table,possible_keys,key,key_len,ref,rows,extra
1, 'PRIMARY', 'USERS', 'ALL', '', '', '', '', 6528, ''
1, 'PRIMARY', 'phorum_users_base', 'eq_ref', 'PRIMARY', 'PRIMARY', '4',
'wh2o.USERS.UID', 1, ''
1, 'PRIMARY', 'useraux', 'eq_ref', 'PRIMARY', 'PRIMARY', '4',
'wh2o.USERS.UID', 1, ''




Re: [PERFORM] Good News re count(*) in 8.1

2006-02-22 Thread Tom Lane
Kevin Grittner [EMAIL PROTECTED] writes:
 We are replicating data from 72 source databases, each with the
 official copy of a subset of the data, to four identical consolidated
 databases, spread to separate locations, to serve our web site and other
 organization-wide needs.  Currently, two of these central databases are
 running a commercial product and two are running PostgreSQL.  There have
 been several times that I have run a SELECT COUNT(*) on an entire table
 on all central machines.  On identical hardware, with identical data,
 and equivalent query loads, the PostgreSQL databases have responded with
 a count in 50% to 70% of the time of the commercial product, in spite of
 the fact that the commercial product does a scan of a non-clustered
 index while PostgreSQL scans the data pages.

Interesting.  I think though that the people who are complaining come
from databases where COUNT(*) takes constant time because the DB keeps
a running count in the table's metadata.
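For what it's worth, that constant-time behaviour can be approximated in PostgreSQL with a trigger-maintained counter. This is a sketch assuming a hypothetical table named mytable, not something proposed in this thread:

```sql
-- Single-row side table holding the current count of mytable.
CREATE TABLE mytable_count (n bigint NOT NULL);
INSERT INTO mytable_count VALUES (0);

CREATE OR REPLACE FUNCTION mytable_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE mytable_count SET n = n + 1;
        RETURN NEW;
    ELSE  -- DELETE
        UPDATE mytable_count SET n = n - 1;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_count_t
    AFTER INSERT OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE mytable_count_trig();

-- Now a count is a single-row read instead of a table scan:
SELECT n FROM mytable_count;
```

The trade-off is that the counter row becomes a serialization point for concurrent writers, which is presumably why PostgreSQL does not keep such a count by default.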

regards, tom lane



Re: [PERFORM] Joins and full index scans...mysql vs postgres?

2006-02-22 Thread Steinar H. Gunderson
On Wed, Feb 22, 2006 at 12:26:47PM -0500, ryan groth wrote:
> Postgres Explain

We need to see EXPLAIN ANALYZE results here.

What's your work_mem set to?

/* Steinar */
-- 
Homepage: http://www.sesse.net/



Re: [PERFORM] Joins and full index scans...mysql vs postgres?

2006-02-22 Thread ryan groth
Does this work:

Merge Left Join  (cost=0.00..2656.36 rows=6528 width=1522) (actual time=0.057..123.659 rows=6528 loops=1)
  Merge Cond: ("outer".uid = "inner".uid)
  ->  Merge Left Join  (cost=0.00..1693.09 rows=6528 width=1264) (actual time=0.030..58.876 rows=6528 loops=1)
        Merge Cond: ("outer".uid = "inner".user_id)
        ->  Index Scan using users_pkey on users  (cost=0.00..763.81 rows=6528 width=100) (actual time=0.016..9.446 rows=6528 loops=1)
        ->  Index Scan using phorum_users_base_pkey on phorum_users_base  (cost=0.00..822.92 rows=9902 width=1168) (actual time=0.007..15.674 rows=9845 loops=1)
  ->  Index Scan using useraux_pkey on useraux  (cost=0.00..846.40 rows=7582 width=262) (actual time=0.007..11.935 rows=7529 loops=1)
Total runtime: 127.442 ms


> On Wed, Feb 22, 2006 at 12:26:47PM -0500, ryan groth wrote:
>> Postgres Explain
>
> We need to see EXPLAIN ANALYZE results here.
>
> What's your work_mem set to?
>
> /* Steinar */
> --
> Homepage: http://www.sesse.net/


Re: [PERFORM] Joins and full index scans...mysql vs postgres?

2006-02-22 Thread ryan groth

work_mem is set to the default; increasing it decreases performance.

> Does this work:
>
> Merge Left Join  (cost=0.00..2656.36 rows=6528 width=1522) (actual
> time=0.057..123.659 rows=6528 loops=1)
>   Merge Cond: ("outer".uid = "inner".uid)
>   ->  Merge Left Join  (cost=0.00..1693.09 rows=6528 width=1264)
> (actual time=0.030..58.876 rows=6528 loops=1)
>         Merge Cond: ("outer".uid = "inner".user_id)
>         ->  Index Scan using users_pkey on users  (cost=0.00..763.81
> rows=6528 width=100) (actual time=0.016..9.446 rows=6528 loops=1)
>         ->  Index Scan using phorum_users_base_pkey on
> phorum_users_base  (cost=0.00..822.92 rows=9902 width=1168) (actual
> time=0.007..15.674 rows=9845 loops=1)
>   ->  Index Scan using useraux_pkey on useraux  (cost=0.00..846.40
> rows=7582 width=262) (actual time=0.007..11.935 rows=7529 loops=1)
> Total runtime: 127.442 ms
>
>> We need to see EXPLAIN ANALYZE results here.
>>
>> What's your work_mem set to?
>>
>> /* Steinar */


Re: [PERFORM] Joins and full index scans...mysql vs postgres?

2006-02-22 Thread ryan groth
Hmm, it came from the timer in the pgAdmin III SQL query tool. I guess
the 1,000 ms includes the round trip? The weird thing is that the MySQL
server is running a default configuration on a virtual machine
(P3/1.3 GHz, configured for 128 MB RAM) over a 100 Mb ethernet
connection, while Postgres is running on a real P4/3.0 GHz with 4 GB
RAM on localhost. Timings from the MySQL query tool indicate that the
6.5k-record query runs in 1.3346 s (0.3361 s), vs. the pgAdmin query
tool saying that the query runs in 997+3522 ms. Am I reading these
numbers wrong? Are these numbers reflective of application performance?
Is there an optimization I am missing?

Ryan


> On Wed, 22 Feb 2006, ryan groth wrote:
>
>> Does this work:
>>
>> Merge Left Join  (cost=0.00..2656.36 rows=6528 width=1522) (actual
>> time=0.057..123.659 rows=6528 loops=1)
>>   Merge Cond: ("outer".uid = "inner".uid)
>>   ->  Merge Left Join  (cost=0.00..1693.09 rows=6528 width=1264)
>> (actual time=0.030..58.876 rows=6528 loops=1)
>>         Merge Cond: ("outer".uid = "inner".user_id)
>>         ->  Index Scan using users_pkey on users  (cost=0.00..763.81
>> rows=6528 width=100) (actual time=0.016..9.446 rows=6528 loops=1)
>>         ->  Index Scan using phorum_users_base_pkey on
>> phorum_users_base  (cost=0.00..822.92 rows=9902 width=1168) (actual
>> time=0.007..15.674 rows=9845 loops=1)
>>   ->  Index Scan using useraux_pkey on useraux  (cost=0.00..846.40
>> rows=7582 width=262) (actual time=0.007..11.935 rows=7529 loops=1)
>> Total runtime: 127.442 ms
>
> Well, this implies the query took about 127 ms on the server side. Where
> did the 1000 ms number come from (was that on a client, and if so, what
> type)?



Re: [PERFORM] Joins and full index scans...mysql vs postgres?

2006-02-22 Thread Scott Marlowe
On Wed, 2006-02-22 at 12:11, ryan groth wrote:
> Does this work:
>
> Merge Left Join  (cost=0.00..2656.36 rows=6528 width=1522) (actual
> time=0.057..123.659 rows=6528 loops=1)
>   Merge Cond: ("outer".uid = "inner".uid)
>   ->  Merge Left Join  (cost=0.00..1693.09 rows=6528 width=1264)
> (actual time=0.030..58.876 rows=6528 loops=1)
>         Merge Cond: ("outer".uid = "inner".user_id)
>         ->  Index Scan using users_pkey on users  (cost=0.00..763.81
> rows=6528 width=100) (actual time=0.016..9.446 rows=6528 loops=1)
>         ->  Index Scan using phorum_users_base_pkey on
> phorum_users_base  (cost=0.00..822.92 rows=9902 width=1168) (actual
> time=0.007..15.674 rows=9845 loops=1)
>   ->  Index Scan using useraux_pkey on useraux  (cost=0.00..846.40
> rows=7582 width=262) (actual time=0.007..11.935 rows=7529 loops=1)
> Total runtime: 127.442 ms

In MySQL, have you tried writing a short perl or php script or even
timing the mysql client running in one shot mode (I assume it can do
that) from the outside to see how long it takes to actually run the
query AND retrieve the data?

My guess is most of the time for both queries will be taken in
delivering the data.
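One way to make that end-to-end comparison from the outside, assuming both servers hold the same data (database names, credentials, and the query text are illustrative; the real query is the one from this thread):

```shell
# Time query + full result retrieval in one-shot mode for each client;
# redirecting to /dev/null excludes terminal rendering from the timing.
time psql -d mydb -c "SELECT * FROM users" > /dev/null
time mysql mydb -e "SELECT * FROM users" > /dev/null
```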



[PERFORM] Slow query

2006-02-22 Thread Jeremy Haile
I am running a query that joins against several large tables (~5 million
rows each).  The query takes an exteremely long time to run, and the
explain output is a bit beyond my level of understanding.  It is an
auto-generated query, so the aliases are fairly ugly.  I can clean them
up (rename them) if it would help.  Also, let me know if I can send any
more information that would help (e.g. table schema)

Also, are there any resources where I can get a better understanding of
what PostgreSQL means when it says "Sort", "Sort Key", "Bitmap Index
Scan", "Hash Cond", etc. - and how to recognize problems by looking at
the output? I can understand the output for simple queries (e.g. is the
planner using an index or performing a seq. scan), but when you get to
more complex queries like the one below I lose my way =)

I would really appreciate it if someone from this list could tell me if
there is anything that is obviously wrong with the query or schema and
what I could do to improve the performance.

PostgreSQL 8.1
RedHat Enterprise Linux 4

--QUERY
select distinct city4_.region_id as region1_29_, city4_1_.name as
name29_, city4_.state_id as state2_30_ 
from registered_voters registered0_ 
 inner join registered_voter_addresses addresses1_ on
 registered0_.registered_voter_id=addresses1_.registered_voter_id 
 inner join registered_voter_addresses_regions regions2_ on
 addresses1_.address_id=regions2_.registered_voter_addresses_address_id 
 inner join regions region3_ on
 regions2_.regions_region_id=region3_.region_id 
 inner join cities city4_ on
 addresses1_.city_id=city4_.region_id 
 inner join regions city4_1_ on
 city4_.region_id=city4_1_.region_id 
where region3_.region_id='093c44e8-f3b2-4c60-8be3-2b4d148f9f5a' 
order by city4_1_.name


--EXPLAIN/ANALYZE OUTPUT
Unique  (cost=3572907.42..3623589.94 rows=4076438 width=93) (actual time=2980825.714..3052333.753 rows=1124 loops=1)
  ->  Sort  (cost=3572907.42..3585578.05 rows=5068252 width=93) (actual time=2980825.710..2987407.888 rows=4918204 loops=1)
        Sort Key: city4_1_.name, city4_.region_id, city4_.state_id
        ->  Hash Join  (cost=717783.40..1430640.10 rows=5068252 width=93) (actual time=1400141.559..2016131.467 rows=4918204 loops=1)
              Hash Cond: (("outer".registered_voter_addresses_address_id)::text = ("inner".address_id)::text)
              ->  Bitmap Heap Scan on registered_voter_addresses_regions regions2_  (cost=54794.95..575616.49 rows=5116843 width=80) (actual time=45814.469..155044.478 rows=4918205 loops=1)
                    Recheck Cond: ('093c44e8-f3b2-4c60-8be3-2b4d148f9f5a'::text = (regions_region_id)::text)
                    ->  Bitmap Index Scan on reg_voter_address_region_region_idx  (cost=0.00..54794.95 rows=5116843 width=0) (actual time=45807.157..45807.157 rows=4918205 loops=1)
                          Index Cond: ('093c44e8-f3b2-4c60-8be3-2b4d148f9f5a'::text = (regions_region_id)::text)
              ->  Hash  (cost=642308.89..642308.89 rows=741420 width=173) (actual time=1354217.934..1354217.934 rows=4918204 loops=1)
                    ->  Hash Join  (cost=328502.66..642308.89 rows=741420 width=173) (actual time=204565.031..1268303.832 rows=4918204 loops=1)
                          Hash Cond: (("outer".registered_voter_id)::text = ("inner".registered_voter_id)::text)
                          ->  Seq Scan on registered_voters registered0_  (cost=0.00..173703.02 rows=4873202 width=40) (actual time=0.005..39364.261 rows=4873167 loops=1)
                          ->  Hash  (cost=303970.34..303970.34 rows=748528 width=213) (actual time=204523.861..204523.861 rows=4918204 loops=1)
                                ->  Hash Join  (cost=263.22..303970.34 rows=748528 width=213) (actual time=101.628..140936.062 rows=4918204 loops=1)
                                      Hash Cond: (("outer".city_id)::text = ("inner".region_id)::text)
                                      ->  Seq Scan on registered_voter_addresses addresses1_  (cost=0.00..271622.23 rows=4919923 width=120) (actual time=0.025..98416.667 rows=4918205 loops=1)
                                      ->  Hash  (cost=260.35..260.35 rows=1147 width=173) (actual time=101.582..101.582 rows=1147 loops=1)
                                            ->  Hash Join  (cost=48.80..260.35 rows=1147 width=173) (actual time=88.608..98.984 rows=1147 loops=1)
                                                  Hash Cond: (("outer".region_id)::text = ("inner".region_id)::text)
                                                  ->  Seq Scan on regions city4_1_  (cost=0.00..162.39 rows=7539 width=53) (actual time=0.048..35.204 rows=7539 loops=1)
                                                  ->  Hash  (cost=45.93..45.93 rows=1147 width=120) (actual time=48.896..48.896 rows=1147 loops=1)
                                                        ->  Nested Loop  (cost=0.00..45.93 rows=1147 width=120) (actual time=35.791..47.012 rows=1147

Re: [PERFORM] Large Database Design Help

2006-02-22 Thread Orion
I just wanted to thank everyone for your input on my question.  You've
given me a lot of tools to solve my problem here.

Orion



Re: [PERFORM] Joins and full index scans...mysql vs postgres?

2006-02-22 Thread PFC



> 997+3522 ms. Am I reading these numbers wrong? Are these numbers
> reflective of application performance? Is there an optimization I am
> missing?

It also reflects the time it takes pgAdmin to insert the results into
its GUI...

If you want an approximation of the time the server needs to process
your request, without the data marshalling time on the network, you can
either use EXPLAIN ANALYZE (but MySQL doesn't have it, and the
instrumentation adds overhead), or simply something like SELECT sum(1)
FROM (query to benchmark), which returns only one row; the sum()
overhead is minimal, and it works on most databases. I find it useful
because it tells you which portion of the time is spent by the server
processing the query, in data transfer, in data decoding on the client
side, or simply in displaying.
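Concretely, that wrapper looks like this (the inner query here is the one from this thread; any query to benchmark can go there):

```sql
-- The server still does all the join work, but only one row crosses
-- the wire, so client-side transfer and rendering time drop out of
-- the measurement.
SELECT sum(1)
FROM (
    SELECT *
    FROM users
    LEFT JOIN phorum_users_base ON users.uid = phorum_users_base.user_id
    LEFT JOIN useraux ON useraux.uid = users.uid
) AS t;
```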




Re: [PERFORM] Joins and full index scans...mysql vs postgres?

2006-02-22 Thread Chris

ryan groth wrote:

> I am issuing a query like this:
> SELECT *
>    FROM users users
>    LEFT JOIN phorum_users_base ON users.uid = phorum_users_base.user_id
>    LEFT JOIN useraux ON useraux.uid = users.uid;




I'm not sure if postgres would rewrite your query to do the joins
properly, though I guess someone else might've already suggested this :)

I'm probably wrong, but I read that as:

join users -> phorum_users_base (ON users.uid = phorum_users_base.user_id)

join phorum_users_base -> useraux (ON useraux.uid = users.uid), which
won't be indexable because u.uid doesn't exist in phorum_users_base.




Try

SELECT *
FROM users users
LEFT JOIN phorum_users_base ON users.uid = phorum_users_base.user_id
LEFT JOIN useraux ON useraux.uid = phorum_users_base.user_id

or

SELECT *
FROM users u, phorum_users_base pub, useraux ua
WHERE u.uid = pub.user_id AND ua.uid = u.uid AND pub.user_id = ua.uid;



--
Postgresql  php tutorials
http://www.designmagick.com/



Re: [PERFORM] Joins and full index scans...mysql vs postgres?

2006-02-22 Thread Christopher Kings-Lynne
The pgAdmin query tool is known to give an answer about 5x the real 
answer - don't believe it!


ryan groth wrote:

> Hmm, it came from the timer on the pgadmin III sql query tool. I guess
> the 1,000ms includes the round-trip? See the weird thing is that
> mysqlserver is running default configuration on a virtual machine
> (P3/1.3GHZ conf'd for 128mb ram) over a 100m/b ethernet connection.
> Postgres is running on a real P4/3.0ghz 4GB running localhost. Timings
> from the mysql query tool indicate that the 6.5k record query runs in
> 1.3346s (.3361s) vs. the pgadmin query tool saying that the query runs
> 997+3522 ms. Am I reading these numbers wrong? Are these numbers
> reflective of application performance? Is there an optimization I am
> missing?
>
> Ryan
>
>> On Wed, 22 Feb 2006, ryan groth wrote:
>>
>>> Does this work:
>>>
>>> Merge Left Join  (cost=0.00..2656.36 rows=6528 width=1522) (actual
>>> time=0.057..123.659 rows=6528 loops=1)
>>>   Merge Cond: ("outer".uid = "inner".uid)
>>>   ->  Merge Left Join  (cost=0.00..1693.09 rows=6528 width=1264)
>>> (actual time=0.030..58.876 rows=6528 loops=1)
>>>         Merge Cond: ("outer".uid = "inner".user_id)
>>>         ->  Index Scan using users_pkey on users  (cost=0.00..763.81
>>> rows=6528 width=100) (actual time=0.016..9.446 rows=6528 loops=1)
>>>         ->  Index Scan using phorum_users_base_pkey on
>>> phorum_users_base  (cost=0.00..822.92 rows=9902 width=1168) (actual
>>> time=0.007..15.674 rows=9845 loops=1)
>>>   ->  Index Scan using useraux_pkey on useraux  (cost=0.00..846.40
>>> rows=7582 width=262) (actual time=0.007..11.935 rows=7529 loops=1)
>>> Total runtime: 127.442 ms
>>
>> Well, this implies the query took about 127 ms on the server side. Where
>> did the 1000 ms number come from (was that on a client, and if so, what
>> type)?


Re: [PERFORM] Good News re count(*) in 8.1

2006-02-22 Thread Greg Stark

Kevin Grittner [EMAIL PROTECTED] writes:

> There have been several times that I have run a SELECT COUNT(*) on an entire
> table on all central machines. On identical hardware, with identical data,
> and equivalent query loads, the PostgreSQL databases have responded with a
> count in 50% to 70% of the time of the commercial product, in spite of the
> fact that the commercial product does a scan of a non-clustered index while
> PostgreSQL scans the data pages.

I take it these are fairly narrow rows? The big benefit of index-only scans
come in when you're scanning extremely wide tables, often counting rows
matching some indexed criteria.

-- 
greg

