Re: [PERFORM] firebird X postgresql 8.1.2 windows, performance comparison

2006-03-09 Thread David Brown
Andre Felipe Machado wrote: It seems that Firebird on Windows can adequately use as much RAM as it finds, and PostgreSQL on Windows cannot. PostgreSQL relies on the OS cache to utilize RAM. Make sure that most of the RAM is 'available' so Windows can do its thing. effective_cache_size should be set
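A minimal sketch of the postgresql.conf tuning the reply points at. The specific numbers are illustrative assumptions (here, a Windows box with about 2 GB of RAM), not figures from the original post:

```ini
# postgresql.conf -- illustrative values, assuming ~2 GB RAM on Windows
shared_buffers = 32MB          # keep modest; PostgreSQL on Windows leans on the OS file cache
effective_cache_size = 1536MB  # tell the planner roughly how much RAM the OS can use for caching
```

effective_cache_size does not allocate anything; it only informs the planner how much of the database is likely to be cached, which makes index scans look cheaper.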

Re: [PERFORM] ok you all win what is best opteron (I don't want a

2005-05-13 Thread David Brown
Joel Fradkin wrote: Is the battery backed cache good or bad for Postgres? Battery-backed avoids corruption if you have an unexpected power loss. It's considered mandatory with large-cache write-back controllers if you can't afford to lose any data. They are telling me I can only get a dual chann

Re: [PERFORM] multi-column index

2005-03-16 Thread David Brown
Whoa Josh! I don't believe you're going to reduce the cost by 10 times through a bit of tweaking - not without lowering the sequential scan cost as well. The only thing I can think of is perhaps his primary index drastically needs repacking. Otherwise, isn't there a real anomaly here? Halving t

Re: [PERFORM] cpu_tuple_cost

2005-03-15 Thread David Brown
Gregory Stark wrote: The "this day and age" argument isn't very convincing. Hard drive capacity growth has far outstripped hard drive seek time and bandwidth improvements. Random access has more penalty than ever. In point of fact, there haven't been noticeable seek time improvements for years.

Re: [PERFORM] What's better: Raid 0 or disk for separate pg_xlog

2005-03-10 Thread David Brown
Karim Nassar wrote: Context: I have a two disk server that is about to become dedicated to postgresql (it's a sun v40z running gentoo linux). What's "theoretically better"? 1) OS and pg_xlog on one disk, rest of postgresql on the other? (if I understand the above correctly) 2) Everything stripe

Re: [PERFORM] index scan on =, but not < ?

2005-03-09 Thread David Brown
Jim C. Nasby wrote: Ahh, I was thinking of a high correlation factor on the index. I still question 5% though... that seems awfully low. Not really. It all depends on how many records you're packing into each page. 1% may well be the threshold for small records. Tom mentioned this in the last

Re: [PERFORM] index scan on =, but not < ?

2005-03-09 Thread David Brown
Assuming your system isn't starved for memory, shouldn't repeated page fetches be hitting the cache? I've also wondered about the conventional wisdom that read ahead doesn't help random reads. I may well be missing something, but *if* the OS has enough memory to cache most of the table, surely

Re: [PERFORM] bad performances using hashjoin

2005-02-21 Thread David Brown
Gaetano Mendola wrote: I think it is due to the fact that the first queries were performed in peak hours. A memory-intensive operation takes 7.5 times longer during heavy loads. Doesn't this suggest that swapping of working memory is occurring?
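The swapping hypothesis above can be tested by bounding per-operation memory so that many concurrent hash joins cannot exceed physical RAM. A hedged sketch with illustrative values (the setting was called sort_mem before PostgreSQL 8.0, work_mem from 8.0 on; the query is a placeholder, not one from the thread):

```sql
-- Illustrative: cap per-sort/per-hash memory for this session only.
-- Each backend may use this much per sort or hash operation, so the
-- effective peak is roughly work_mem * concurrent operations.
SET work_mem = '8MB';

-- Re-run the slow hash join under the new limit and compare timings
-- at peak vs. off-peak; if the 7.5x gap shrinks, swapping was the cause.
EXPLAIN ANALYZE
SELECT *
FROM sat_request r            -- table name taken from the thread
JOIN some_other_table t       -- hypothetical join partner for illustration
  ON t.id = r.id;
```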

Re: [PERFORM] bad performances using hashjoin

2005-02-20 Thread David Brown
Tom Lane wrote: However: the reason the second plan wins is because there are zero rows fetched from sat_request, and so the bulk of the plan is never executed at all. I doubt the second plan would win if there were any matching sat_request rows. That's what I thought at first, but if you look mor

Re: [PERFORM] Planner really hates nested loops

2005-02-03 Thread David Brown
Tom Lane wrote: What exactly did you do to force the various plan choices? (I see some ridiculous choices of indexscans, for instance, suggesting improper use of enable_seqscan in some cases.) Except for forcing a hash with indexes (to show that increased use of indexes is not necessarily good),

[PERFORM] Planner really hates nested loops

2005-02-03 Thread David Brown
I'm hoping someone can shed some light on these results. The 'factor' compares the ratios of cost to actual for different plans. Perhaps nested loops should be given a discount in the planner? The estimates seem to be out by one and a half orders of magnitude. :( == QUERY == SEL

Re: [PERFORM] OFFSET impact on Performance???

2005-01-26 Thread David Brown
Although larger offsets have some effect, your real problem is the sort (of 42693 rows). Try: SELECT r.id_order FROM report r WHERE r.id_order IN (SELECT id FROM orders WHERE id_status = 6 ORDER BY 1 LIMIT 10 OFFSET 1000) ORDER BY 1 The subquery doesn't *have* to sort because the table
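The rewrite above only avoids the sort if the subquery can walk an index in id order while filtering on id_status. A sketch of a supporting index, assuming the column names shown in the snippet (the index name and column roles are inferred, not stated in the thread):

```sql
-- Assumed schema from the snippet: orders(id, id_status)
-- With this composite index, the subquery's
--   WHERE id_status = 6 ORDER BY id LIMIT 10 OFFSET 1000
-- can be satisfied by an index scan, so no explicit sort of all
-- matching rows is needed.
CREATE INDEX orders_status_id ON orders (id_status, id);
```

Note that large OFFSETs still force the scan to read and discard the skipped index entries; the index only removes the sort step.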

Re: [PERFORM] Seqscan rather than Index

2004-12-16 Thread David Brown
> You might want to reduce random_page_cost a little. > Keep in mind that your test case is small enough to fit in RAM and is > probably not reflective of what will happen with larger tables. I am also running 8.0 rc1 for Windows. Despite many hours spent tweaking various planner cost constants,
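A minimal sketch of the tuning the quoted advice points at. The value and the query are illustrative assumptions, not figures from the thread:

```sql
-- When the working set fits in RAM, a random page fetch costs about the
-- same as a sequential one, so the default random_page_cost = 4
-- overstates the penalty of index scans. Lower it per-session to test:
SET random_page_cost = 2;

-- Hypothetical table/column, for illustration only: check whether the
-- planner now prefers the index scan over the seqscan.
EXPLAIN SELECT * FROM small_table WHERE indexed_col = 42;
```

Per-session SETs make it cheap to experiment before committing a value to postgresql.conf.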

[PERFORM] Analyzer is clueless

2004-11-17 Thread David Brown
I'm doing some performance profiling with a simple two-table query: SELECT L."ProductID", sum(L."Amount") FROM "drinv" H JOIN "drinvln" L ON L."OrderNo" = H."OrderNo" WHERE ("OrderDate" between '2003-01-01' AND '2003-04-30') GROUP BY L."ProductID" drinv and drinvln have about 100,000 and 3,500,00
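One way to attack a query of this shape, sketched under assumptions: the column roles are inferred from the snippet ("OrderDate" appears to live on the header table "drinv"), and the index choice is the editor's illustration, not something the poster reported trying:

```sql
-- An index on the date column lets the planner pick out the matching
-- header rows first, instead of scanning all ~100,000 orders before
-- joining the ~3.5 million line items.
CREATE INDEX drinv_orderdate ON "drinv" ("OrderDate");

-- Given the subject line ("Analyzer is clueless"), refreshing statistics
-- is the other obvious step: stale stats make the date-range selectivity
-- estimate, and hence the plan choice, unreliable.
ANALYZE "drinv";
ANALYZE "drinvln";
```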