Andre Felipe Machado wrote:
It seems that Firebird on Windows can adequately use as much RAM as it
finds, while PostgreSQL on Windows cannot.
PostgreSQL relies on the OS cache to utilize RAM. Make sure that most of
the RAM is 'available' so Windows can do its thing.
effective_cache_size should be set to reflect how much RAM the OS cache
actually gets.
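As a rough sketch of that last point (the value below is purely
illustrative, assuming roughly 1.5GB of RAM is left to the Windows file
cache):

-- effective_cache_size is measured in 8kB pages on 8.0, so 196608 pages
-- is about 1.5GB; adjust to your own machine
SET effective_cache_size = 196608;
-- put the equivalent line (without SET) in postgresql.conf to make it permanent

It only influences the planner's costing; it does not allocate any memory
itself.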
Joel Fradkin wrote:
Is the battery-backed cache good or bad for Postgres?
Battery-backed avoids corruption if you have an unexpected power loss.
It's considered mandatory with large-cache write-back controllers if you
can't afford to lose any data.
They are telling me I can only get a dual channel
Whoa Josh! I don't believe you're going to reduce the cost by a factor of
ten through a bit of tweaking - not without lowering the sequential scan
cost as well.
The only thing I can think of is perhaps his primary index drastically
needs repacking. Otherwise, isn't there a real anomaly here? Halving t
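"Repacking" the index would normally mean rebuilding it, and possibly
rewriting the table in index order as well; a sketch with hypothetical
object names, using the 8.0-era syntax:

-- hypothetical table and index names, for illustration only
REINDEX INDEX orders_pkey;        -- rebuild a bloated index
CLUSTER orders_pkey ON orders;    -- rewrite the table in index order (locks the table)
ANALYZE orders;                   -- refresh planner statistics afterwards

CLUSTER is mainly worth it when the query pattern really does read many
neighbouring rows through that index.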
Gregory Stark wrote:
The "this day and age" argument isn't very convincing. Hard drive capacity
growth has far outstripped hard drive seek time and bandwidth improvements.
Random access has more penalty than ever.
In point of fact, there haven't been noticeable seek time improvements
for years.
Karim Nassar wrote:
Context: I have a two-disk server that is about to become dedicated to
postgresql (it's a sun v40z running gentoo linux).
What's "theoretically better"?
1) OS and pg_xlog on one disk, rest of postgresql on the other? (if I
understand the above correctly)
2) Everything striped together?
Jim C. Nasby wrote:
Ahh, I was thinking of a high correlation factor on the index. I still
question 5% though... that seems awfully low.
Not really. It all depends on how many records you're packing into each
page. 1% may well be the threshold for small records.
Tom mentioned this in the last discussion of the topic.
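How many rows fit on a page is easy to estimate from the catalog; a rough
sketch (the table name is hypothetical):

-- relpages and reltuples are kept up to date by VACUUM/ANALYZE
SELECT relname,
       reltuples / relpages AS rows_per_page
FROM pg_class
WHERE relname = 'mytable'
  AND relpages > 0;

The smaller the rows, the more of them each sequential page read brings
back for free, which is what pushes the index/seqscan break-even point
down.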
Assuming your system isn't starved for memory, shouldn't repeated page
fetches be hitting the cache?
I've also wondered about the conventional wisdom that read ahead doesn't
help random reads. I may well be missing something, but *if* the OS has
enough memory to cache most of the table, surely
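Whether repeated fetches are actually hitting PostgreSQL's own buffer
cache can be checked from the block-level statistics views (on releases of
that era, stats_block_level must be enabled); a sketch with a hypothetical
table name:

-- heap_blks_hit: found in shared_buffers; heap_blks_read: had to ask the
-- OS (which may still serve it from its file cache, invisibly to us)
SELECT relname, heap_blks_read, heap_blks_hit
FROM pg_statio_user_tables
WHERE relname = 'mytable';

Note that this only sees shared_buffers; hits in the OS cache show up as
"reads" here even though no physical I/O happened.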
Gaetano Mendola wrote:
I think it is due to the fact that the first queries were performed during
peak hours.
A memory-intensive operation takes 7.5 times longer under heavy load.
Doesn't this suggest that working memory is being swapped out?
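If that is the case, the usual knob to look at is the per-sort memory
setting; a sketch (the value is illustrative only):

-- work_mem (sort_mem on pre-8.0 releases) is allocated per sort/hash, so
-- peak usage is roughly work_mem x concurrent sorts; keep that under RAM
SHOW work_mem;
SET work_mem = 16384;   -- value is in kB here, i.e. 16MB per sort

Lowering it, or scheduling the heavy reports off-peak, keeps the box from
dipping into swap when many such queries overlap.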
Tom Lane wrote:
However: the reason the second plan wins is that there are zero rows
fetched from sat_request, and so the bulk of the plan is never executed
at all. I doubt the second plan would win if there were any matching
sat_request rows.
That's what I thought at first, but if you look mor
Tom Lane wrote:
What exactly did you do to force the various plan choices? (I see some
ridiculous choices of indexscans, for instance, suggesting improper use
of enable_seqscan in some cases.)
Except for forcing a hash with indexes (to show that increased use of
indexes is not necessarily good),
I'm hoping someone can shed some light on these results. The 'factor'
figure compares the ratio of estimated cost to actual time across the
different plans. Perhaps nested loops should be given a discount in the
planner? The estimates seem to be out by one and a half orders of
magnitude. :(
== QUERY ==
SEL
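For reference, plan choices in this kind of comparison are normally forced
by toggling the planner's enable_* settings for a single session, along
the lines of:

-- per-session only; remember to turn the setting back on afterwards
SET enable_seqscan = off;       -- push the planner towards index scans
EXPLAIN ANALYZE SELECT ...;     -- the query under test (placeholder)
SET enable_seqscan = on;

Settings such as enable_nestloop, enable_hashjoin and enable_mergejoin
work the same way for forcing particular join methods.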
Although larger offsets have some effect, your real problem is the sort
(of 42693 rows).
Try:
SELECT r.id_order
FROM report r
WHERE r.id_order IN
(SELECT id
FROM orders
WHERE id_status = 6
ORDER BY 1
LIMIT 10 OFFSET 1000)
ORDER BY 1
The subquery doesn't *have* to sort because the table
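If EXPLAIN still shows a Sort node inside the subquery, an index that
matches both the filter and the ORDER BY lets the rows come back already
ordered (the index name is hypothetical):

-- hypothetical index; serves WHERE id_status = 6 ... ORDER BY id
-- without a separate sort step
CREATE INDEX orders_status_id_idx ON orders (id_status, id);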
> You might want to reduce random_page_cost a little.
> Keep in mind that your test case is small enough to fit in RAM and is
> probably not reflective of what will happen with larger tables.
I am also running 8.0 rc1 for Windows. Despite many hours spent tweaking
various planner cost constants,
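For context, those planner cost constants are ordinary settings and can be
tried per session before touching postgresql.conf; the values here are
purely illustrative, not recommendations:

-- default random_page_cost is 4, relative to a sequential page fetch of 1;
-- lowering it tells the planner random I/O is relatively cheap
SET random_page_cost = 2;
SET effective_cache_size = 65536;   -- in 8kB pages on 8.0, roughly 512MB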
I'm doing some performance profiling with a simple two-table query:
SELECT L."ProductID", sum(L."Amount")
FROM "drinv" H
JOIN "drinvln" L ON L."OrderNo" = H."OrderNo"
WHERE
("OrderDate" between '2003-01-01' AND '2003-04-30')
GROUP BY L."ProductID"
drinv and drinvln have about 100,000 and 3,500,000 rows respectively.
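When profiling a query like this, the natural first step is EXPLAIN
ANALYZE, which shows estimated versus actual row counts and where the time
actually goes (assuming "OrderDate" lives on the header table):

EXPLAIN ANALYZE
SELECT L."ProductID", sum(L."Amount")
FROM "drinv" H
JOIN "drinvln" L ON L."OrderNo" = H."OrderNo"
WHERE H."OrderDate" BETWEEN '2003-01-01' AND '2003-04-30'
GROUP BY L."ProductID";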