Actually, it hasn't been my experience either. Most of my queries against 
the database, large and small, are either a little quicker or show no real 
difference. I have only really noticed big differences under stress, when 
memory (RAM) is being squeezed. The main winners on 2.6 seem to be 
write performance and memory management.

Unfortunately, I only have one test machine, and I can't really keep 
switching between 2.4 and 2.6 to do the comparisons. I had recorded 
27 timings from a set of SQL statements of varying complexity using the 
2.4 kernel. Each statement was executed 10 times and the average of the 
last 5 runs was used. I can only compare those timings against the new 
installation on 2.6. I know this is not ideal "real world" testing, but it 
is good enough for me at the moment. Unless anyone has contradictory 
indications, I will proceed with 2.6.
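
For what it's worth, the timing method amounts to something like the 
following in psql; the query shown is only a hypothetical stand-in for 
one of the 27 test statements, borrowing the tables from the plan 
further down.

    \timing
    -- hypothetical stand-in for one of the test statements; run it ten
    -- times and average the last five "Time:" figures psql reports, so
    -- the early runs only serve to warm the caches
    SELECT count(*)
    FROM   staff_booking b
    JOIN   order_reqt r USING (reqt_id);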

I did increase default_statistics_target from 10 to 50 and re-analysed. 
The EXPLAIN estimates are slightly different, but the run time was almost 
the same, which is not surprising since the plan itself was unchanged.

QUERY PLAN 
Merge Join  (cost=0.00..192636.20 rows=2845920 width=92) 
  Merge Cond: ("outer".reqt_id = "inner".reqt_id) 
  ->  Index Scan using order_reqt_pkey on order_reqt r  (cost=0.00..52662.40 rows=2206291 width=6) 
  ->  Index Scan using staff_book_idx2 on staff_booking  (cost=0.00..102529.28 rows=2845920 width=90) 
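
For completeness, the statistics change above amounts to something like 
the following (shown session-level as a sketch; the permanent equivalent 
is setting default_statistics_target in postgresql.conf). The per-column 
variant is an alternative, using the join key from the plan as an assumed 
example.

    SET default_statistics_target = 50;  -- affects ANALYZE runs in this session
    ANALYZE;                             -- re-gather statistics for all tables

    -- or raise the target only for the columns that matter, e.g. the join key:
    ALTER TABLE staff_booking ALTER COLUMN reqt_id SET STATISTICS 50;
    ANALYZE staff_booking;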


On 3 Apr 2004 at 10:59, Josh Berkus wrote:

Gary,

> There are no indexes on the columns involved in the update, they are  
> not required for my usual select statements. This is an attempt to  
> slightly denormalise the design to get the performance up comparable  
> to SQL Server 2000. We hope to move some of our databases over to  
> PostgreSQL later in the year and this is part of the ongoing testing.  
> SQLServer's query optimiser is a bit smarter than PostgreSQL's (yet), 
> so I am hand optimising some of the more frequently used  
> SQL and/or tweaking the database design slightly. 

Hmmm ... that hasn't been my general experience on complex queries.   However, 
it may be due to a difference in ANALYZE statistics.   I'd love to see you 
increase your default_stats_target, re-analyze, and see if PostgreSQL gets 
"smarter".

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco

