Re: [PERFORM] Improving Inner Join Performance

2006-01-06 Thread Andy
Yes, I have indexes on all join fields. The tables have around 30 columns each and around 100k rows. The database is vacuumed every hour. Andy. - Original Message - From: Frank Wiles [EMAIL PROTECTED] To: Andy [EMAIL PROTECTED] Cc: pgsql-performance@postgresql.org Sent: Thursday,

Re: [PERFORM] Improving Inner Join Performance

2006-01-06 Thread Michael Glaesemann
On Jan 6, 2006, at 18:21 , Andy wrote: Yes, I have indexes on all join fields. The tables have around 30 columns each and around 100k rows. The database is vacuumed every hour. Just to chime in, VACUUM != VACUUM ANALYZE. ANALYZE is what updates database statistics and affects query
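
Michael's distinction can be illustrated with a short sketch (the table name `orders` is taken from Andy's query later in the thread; the commands themselves are standard PostgreSQL):

```sql
-- VACUUM alone reclaims space from dead rows, but does NOT refresh the
-- statistics the planner uses to choose between index and seq scans.
VACUUM orders;

-- VACUUM ANALYZE does both: reclaims space AND recomputes statistics.
VACUUM ANALYZE orders;

-- Statistics can also be refreshed on their own, which is cheap:
ANALYZE orders;
```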

Re: [PERFORM] Improving Inner Join Performance

2006-01-06 Thread Andy
Sorry, I should have been more specific: VACUUM ANALYZE is performed every hour. Regards, Andy. - Original Message - From: Michael Glaesemann [EMAIL PROTECTED] To: Andy [EMAIL PROTECTED] Cc: pgsql-performance@postgresql.org Sent: Friday, January 06, 2006 11:45 AM Subject: Re: [PERFORM]

Re: [PERFORM] Improving Inner Join Performance

2006-01-06 Thread Pandurangan R S
If the user puts some other search fields in the WHERE clause, the query runs faster, but in this form it sometimes takes a lot of time (sometimes even 2-3 seconds). Can you elaborate on the conditions under which each query is slower? On 1/5/06, Andy [EMAIL PROTECTED] wrote: Hi to all,

Re: [PERFORM] Improving Inner Join Performance

2006-01-06 Thread Andy
At the moment, o.id_status3 can have only the values 4 and 6. The value 6 accounts for around 90% of the whole table, which is why a seq scan is made. Now, depending on the user input, the query can have more WHERE fields. For example: SELECT count(*) FROM orders o INNER JOIN report r ON r.id_order=o.id
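
With a distribution this skewed, one common remedy (not suggested explicitly in the thread; the index name is hypothetical) is a partial index covering only the selective value:

```sql
-- A full index on id_status3 is of little use when one value (6) covers
-- ~90% of the rows: the planner rightly prefers a seq scan for it.
-- A partial index stays small and serves queries for the rare value:
CREATE INDEX orders_status3_rare ON orders (id_status3)
    WHERE id_status3 = 4;

-- Queries filtering on the rare value can then use the index:
SELECT count(*)
FROM orders o
INNER JOIN report r ON r.id_order = o.id
WHERE o.id_status3 = 4;
```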

Re: [PERFORM] improving write performance for logging

2006-01-06 Thread Ian Westmacott
On Thu, 2006-01-05 at 19:08 -0600, Jim C. Nasby wrote: On Wed, Jan 04, 2006 at 11:00:38AM -0500, Ian Westmacott wrote: The WAL is a 2-spindle (SATA) RAID0 with its own controller (ext3). The tables are on a 10-spindle (SCSI) RAID50 with dual U320 controllers (XFS). This is overkill for

Re: [PERFORM] effizient query with jdbc

2006-01-06 Thread Sebastian Hennebrueder
You could issue one query containing SELECT uuid FROM MDM.KEYWORDS_INFO WHERE KEYWORDS_ID IN (xy), where xy is a large comma-separated list of your values. Best Regards / Viele Grüße Sebastian Hennebrueder - http://www.laliluna.de * Tutorials for JSP, JavaServer Faces, Struts, Hibernate
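
Sebastian's suggestion replaces many round trips with a single batched query; a minimal sketch with placeholder id values (the real list would be built by the JDBC client):

```sql
-- One query for the whole batch instead of one query per id.
-- The ids below are placeholders for the application-supplied list.
SELECT uuid
FROM MDM.KEYWORDS_INFO
WHERE KEYWORDS_ID IN (101, 102, 103, 104);
```

From JDBC, the IN list is typically assembled into the statement text (or bound as individual parameters), since a single `?` placeholder cannot stand in for a whole list.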

Re: [PERFORM] Improving Inner Join Performance

2006-01-06 Thread Jaime Casanova
On 1/6/06, Andy [EMAIL PROTECTED] wrote: At the moment, o.id_status3 can have only the values 4 and 6. The value 6 accounts for around 90% of the whole table. This is why a seq scan is made. Given this, if you filter only on id_status3 you will never use an index, because you will be scanning for 4 and 6, the only values in this

Re: [PERFORM] improving write performance for logging

2006-01-06 Thread Jim C. Nasby
On Fri, Jan 06, 2006 at 09:00:06AM -0500, Ian Westmacott wrote: On Thu, 2006-01-05 at 19:08 -0600, Jim C. Nasby wrote: On Wed, Jan 04, 2006 at 11:00:38AM -0500, Ian Westmacott wrote: The WAL is a 2-spindle (SATA) RAID0 with its own controller (ext3). The tables are on a 10-spindle (SCSI)

Re: [PERFORM] Slow query. Any way to speed up?

2006-01-06 Thread Patrick Hatcher
Duh, sorry. We will eventually move to 8.x; it's just a matter of finding the time: Explain analyze Select gmmid, gmmname, divid, divname, feddept, fedvend, itemnumber as mstyle, amc_week_id, sum(tran_itm_total) as net_dollars FROM public.tbldetaillevel_report a2 join cdm.cdm_ddw_tran_item a1 on

Re: [PERFORM] improving write performance for logging

2006-01-06 Thread Ian Westmacott
On Fri, 2006-01-06 at 10:37 -0600, Jim C. Nasby wrote: The problem is that if you lose WAL or the data, you've lost everything. So you might as well use raid0 for the data if you're using it for WAL. Or switch WAL to raid1. Actually, a really good controller *might* be able to do a good job of

Re: [PERFORM] Slow query. Any way to speed up?

2006-01-06 Thread Tom Lane
Patrick Hatcher [EMAIL PROTECTED] writes: - Seq Scan on cdm_ddw_tran_item a1 (cost=0.00..1547562.88 rows=8754773 width=23) (actual time=14.219..535704.691 rows=10838135 loops=1) Filter: (((appl_id)::text = 'MCOM'::text) OR
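
Tom's reply is truncated here, but the quoted plan shows the estimate (rows=8754773) diverging from the actual count (rows=10838135). One standard remedy in such cases, offered as an illustration rather than as Tom's actual advice, is to raise the column's statistics target and re-analyze:

```sql
-- Illustrative only: give the planner finer-grained statistics for the
-- filtered column, then refresh them so estimates track reality better.
ALTER TABLE cdm.cdm_ddw_tran_item ALTER COLUMN appl_id SET STATISTICS 100;
ANALYZE cdm.cdm_ddw_tran_item;
```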

Re: [PERFORM] improving write performance for logging

2006-01-06 Thread Tom Lane
Ian Westmacott [EMAIL PROTECTED] writes: If the WAL is lost, can you lose more than the data since the last checkpoint? The problem is that you might have partially-applied actions since the last checkpoint, rendering your database inconsistent; then *all* the data is suspect if not actually

[PERFORM] help tuning queries on large database

2006-01-06 Thread peter royal
Howdy. I'm running into scaling problems when testing with a 16gb (data + indexes) database. I can run a query, and it returns in a few seconds. If I run it again, it returns in a few milliseconds. I realize this is because during subsequent runs, the necessary disk pages have been cached

Re: [PERFORM] help tuning queries on large database

2006-01-06 Thread Tom Lane
peter royal [EMAIL PROTECTED] writes: So, my question is, is there anything I can do to boost performance with what I've got, or am I in a position where the only 'fix' is more faster disks? I can't think of any schema/index changes that would help, since everything looks pretty optimal

Re: [PERFORM] help tuning queries on large database

2006-01-06 Thread Harry Jackson
On 1/6/06, peter royal [EMAIL PROTECTED] wrote: PostgreSQL 8.1.1 shared_buffers = 1 # (It was higher, 50k, but didn't help any, so brought down to free ram for disk cache) work_mem = 8196 random_page_cost = 3 effective_cache_size = 25 I have played with both disk cache settings
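
The settings Harry quotes are in PostgreSQL's raw units (8 kB buffer pages, kB of work memory), which makes them easy to misread. A sketch of how such a postgresql.conf fragment reads, with illustrative values only, since the thread does not settle on final numbers:

```
# Illustrative postgresql.conf fragment (PostgreSQL 8.1 era) -- the
# values are examples, not the thread's recommendation.
shared_buffers = 50000          # 8 kB pages, so ~400 MB
work_mem = 8192                 # per-sort/hash memory, in kB
random_page_cost = 3            # relative cost of a random page fetch
effective_cache_size = 250000   # 8 kB pages (~2 GB): planner's estimate
                                # of OS filesystem cache, not an allocation
```

Note that effective_cache_size allocates nothing; it only tells the planner how much of the database is likely to be cached by the OS, which influences its preference for index scans.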

Re: [PERFORM] help tuning queries on large database

2006-01-06 Thread David Lang
On Fri, 6 Jan 2006, Tom Lane wrote: Date: Fri, 06 Jan 2006 18:47:55 -0500 From: Tom Lane [EMAIL PROTECTED] To: peter royal [EMAIL PROTECTED] Cc: pgsql-performance@postgresql.org Subject: Re: [PERFORM] help tuning queries on large database peter royal [EMAIL PROTECTED] writes: So, my question