On Thursday 18 December 2003 09:24, David Shadovitz wrote:
> Old server:
> # VACUUM FULL abc;
> VACUUM
> # VACUUM VERBOSE abc;
> NOTICE: --Relation abc--
> NOTICE: Pages 1526: Changed 0, Empty 0; Tup 91528; Vac 0, Keep 0, UnUsed 32.
> Total CPU 0.07s/0.52u sec elapsed 0.60 sec.
> VACUUM
Dennis, Shridhar, and Neil,
Thanks for your input. Here are my responses:
I ran VACUUM FULL on the table in question. Although that did reduce "Pages"
and "UnUsed", the "SELECT *" query is still much slower on this installation
than in the new, restored one.
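A quick way to compare what the planner sees on the two installations (a sketch; "abc" is the table name from the thread) is to check the statistics that VACUUM leaves in pg_class:

SELECT relname, relpages, reltuples FROM pg_class WHERE relname = 'abc';

If relpages is much higher on the old server for roughly the same reltuples, the table or its indexes are still carrying dead space that the dump-and-restore eliminated.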
If you have three 1.5 GB tables then you might as well go for 4 GB while you're at it. Make sure you've got a bigmem kernel either running or available, and boost effective_cache_size by whatever amount you increase the RAM by. We run a quad-Xeon/4 GB server on Red Hat 7.3 and it's solid as a rock.
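As a rough sketch of that suggestion (the numbers here are illustrative, not from the thread): effective_cache_size in this era of PostgreSQL is measured in 8 kB disk pages, so leaving roughly 3 GB of the new 4 GB to the OS page cache works out to about 393216 pages. It can be set in postgresql.conf or tried per-session:

-- roughly 3 GB of OS cache at 8 kB per page (illustrative value)
SET effective_cache_size = 393216;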
On Dec 17, 2003, at 11:57 AM, Nick Fankhauser wrote:
Hi-
After having done my best to squeeze better performance out of our application by tuning within our existing resources, I'm falling back on adding memory as a short-term solution while we get creative for a long-term fix. I'm curious about what experiences others have had with the process of a
Hannu Krosing <[EMAIL PROTECTED]> writes:
> Chadwick, Russell wrote on Sat, 13.12.2003 at 00:40:
>> Can anyone explain why this table, which has never had more than a
>> couple of rows in it, shows > 500k rows in the query planner's
>> estimates even after running VACUUM FULL?
> It can be that there is an idle transaction still open somewhere; VACUUM
> cannot remove dead rows that an open transaction might still need to see.
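A sketch of how one might check for that, using the 7.x-era pg_stat_activity columns (it assumes stats_command_string is enabled so that current_query is populated):

-- look for sessions sitting idle inside an open transaction
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE current_query LIKE '<IDLE>%';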
> As a question, what does explain analyze give you if you set
> enable_nestloop = false; before trying the query?
Here are the results. It looks quite a bit more painful than the other plan, although the wall time is in the same ballpark.
alpha=# explain analyze
alpha-# select
alpha-# min(a
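For anyone reproducing the experiment, here is a sketch of the full sequence (the table and column names are hypothetical, since the original query is cut off in the archive):

-- disable nested-loop joins for this session only
SET enable_nestloop = false;
EXPLAIN ANALYZE
SELECT min(a.event_date)
FROM big_table_a a
JOIN big_table_b b ON b.a_id = a.id
WHERE a.filter_col = 42;
-- restore the planner default afterwards
SET enable_nestloop = true;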
> It seems that your basic problem is that you're fetching lots of rows
> from two big ol' tables.
> It doesn't seem to me that there would be a substantially better plan
> for this query with your tables as they stand.
That's more or less the conclusion I had come to. I was just hoping someone else could see something I'd missed.
> The fact that it's taking you 9 ms to do each index lookup suggests to me
> that it's going to disk each time. Does that sound plausible, or do you
> think you have enough RAM to cache your large indexes?
I'm sure we don't have enough RAM to cache all of our large indexes, so your supposition is probably correct.
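A sketch of how to gauge whether the large indexes could plausibly fit in RAM (pages are 8 kB each; the index names will differ per database):

-- ten largest indexes, with an approximate size in MB
SELECT relname, relpages, relpages * 8 / 1024 AS approx_mb
FROM pg_class
WHERE relkind = 'i'
ORDER BY relpages DESC
LIMIT 10;

If the big indexes total well beyond available memory, lookups costing ~9 ms each because they go to disk are the expected behavior.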
On Tuesday 16 December 2003 17:06, Nick Fankhauser - Doxpop wrote:
> Hi-
>
> I'm trying to optimize a query that I *think* should run very fast.
> Essentially, I'm joining two tables that have very selective indexes and
> constraining the query on an indexed field. (There's a third small lookup
> table.)
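A hypothetical reconstruction of the query shape being described (every name below is invented for illustration): two large tables joined on selectively indexed keys, constrained on an indexed field, plus the small lookup table.

SELECT a.actor_name, c.case_id, l.description
FROM actor a
JOIN case_assignment c ON c.actor_id = a.actor_id
JOIN assignment_type l ON l.type_code = c.type_code
WHERE a.district_id = 'D03';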