Greg Smith writes:
I can't replicate your problem on the current development 9.0; all
three plans come back with results quickly when I just tried it:
Nested Loop (cost=0.00..50.76 rows=204 width=32) (actual
time=0.049..0.959 rows=200 loops=1)
-> Seq Scan on t1 (cost=0.00..1.06 rows=1 widt
Greg Smith writes:
Vlad Arkhipov wrote:
Please do this small optimization if it is possible. It seems that the
optimizer has all the information to create a fast plan but it does
not do that.
This isn't strictly an optimization problem; it's an issue with the
statistics the optimizer has to work with
I appreciate all the comments.
I will perform some benchmarking before doing the rewrite to be certain of
how it will impact performance. At the very least, I think I can say for
near-certain now that the indexes are not going to help me given the
particular queries I am dealing with and limited num
On Wed, Apr 21, 2010 at 11:06 AM, Rick wrote:
> I have a DB with small and large tables that can go up to 15G.
> For performance purposes, it appears that ANALYZE has much less cost
> than VACUUM, but gives the same benefit?
Err, no. ANALYZE gathers statistics for the query planner; VACUUM
clears out dead row versions so their space can be reused.
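A minimal sketch of the distinction (the table name is hypothetical):

-- ANALYZE samples the table and updates planner statistics only;
-- it does not reclaim any space.
ANALYZE big_table;

-- VACUUM reclaims space held by dead row versions so it can be
-- reused; it does not refresh planner statistics by itself.
VACUUM big_table;

-- The combined form does both in one pass.
VACUUM ANALYZE big_table;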
On Apr 21, 9:52 pm, kevin.gritt...@wicourts.gov ("Kevin Grittner")
wrote:
> I wrote:
> > ALTER TABLE ALTER plugins_guide_address
> > ALTER COLUMN city_id SET STATISTICS 1000;
>
> One too many ALTERs in there. Should be:
>
> ALTER TABLE plugins_guide_address
> ALTER COLUMN city_id SET STATISTICS 1000;
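To confirm the per-column target took effect before re-running ANALYZE, one option is to read it back from the catalog (a sketch; attstattarget is the per-column statistics target stored in pg_attribute):

SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'plugins_guide_address'::regclass
  AND attname = 'city_id';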
> Try this:
>
> ALTER TABLE ALTER plugins_guide_address
> ALTER COLUMN city_id SET STATISTICS 1000;
> ANALYZE plugins_guide_address;
>
> Then try your query.
No luck... The same query time...
> I have one more diagnostic query to test, if the above doesn't work:
>
> explain analyze
> SELECT id F
I have a DB with small and large tables that can go up to 15G.
For performance purposes, it appears that ANALYZE has much less cost
than VACUUM, but gives the same benefit?
I can’t find any clear recommendations for frequencies and am
considering these parameters:
autovacuum_vacuum_threshold = 5
Au
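For a mix of small tables and 15G tables, one common alternative to changing the global thresholds is tuning autovacuum per table with storage parameters. A sketch with illustrative values, not recommendations (the table name is hypothetical):

ALTER TABLE big_table SET (
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_analyze_scale_factor = 0.02
);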
On Thu, Apr 22, 2010 at 10:11 AM, Merlin Moncure wrote:
> The timings are similar, but the array returning case:
> *) runs in a single statement. If this is executed from the client
> that means fewer round trips
> *) can be passed around as a variable between functions. temp table
> requires re
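A sketch of the array-passing pattern described above (the foo/bar tables and both function names are hypothetical):

-- Collect the ids once, as a plain value that can be handed to
-- another function with no temp table and no extra round trips.
CREATE OR REPLACE FUNCTION pick_ids() RETURNS bigint[] AS $$
    SELECT array_agg(id) FROM foo;
$$ LANGUAGE sql;

CREATE OR REPLACE FUNCTION use_ids(bigint[]) RETURNS bigint AS $$
    SELECT count(*) FROM bar WHERE bar_id = ANY($1);
$$ LANGUAGE sql;

-- Single statement, single round trip from the client:
SELECT use_ids(pick_ids());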
On Wed, Apr 21, 2010 at 4:16 PM, Eliot Gable
wrote:
> I have previously discussed my very long PL/PGSQL stored procedure on this
> list. However, without getting into too many details, I have another
> performance-related question.
ok, here's a practical comparison:
-- test data
create table foo(f
Vlad Arkhipov wrote:
Please do this small optimization if it is possible. It seems that the
optimizer has all the information to create a fast plan but it does
not do that.
This isn't strictly an optimization problem; it's an issue with the
statistics the optimizer has to work with, the ones ANALYZE gathers
On Thu, Apr 22, 2010 at 8:14 AM, Merlin Moncure wrote:
> This will use an index on bar.bar_id if it exists. Obviously, any
> indexes on foo are not used after creating the array, but that doesn't
> matter much as long as the right side is indexed. Your cursor method
> doesn't do any better in this regard
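A quick way to verify the claim about the right side being indexed (assuming the bar table from Merlin's example and a small literal array in place of the real ids):

CREATE INDEX bar_bar_id_idx ON bar(bar_id);

EXPLAIN ANALYZE
SELECT * FROM bar WHERE bar_id = ANY(ARRAY[1, 2, 3]);
-- With enough rows in bar, the plan should show an Index Scan or
-- Bitmap Index Scan on bar_bar_id_idx instead of a Seq Scan.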
On Wed, Apr 21, 2010 at 4:16 PM, Eliot Gable
wrote:
> I have previously discussed my very long PL/PGSQL stored procedure on this
> list. However, without getting into too many details, I have another
> performance-related question.
>
> The procedure currently uses cursors to return multiple result sets
Please do this small optimization if it is possible. It seems that the
optimizer has all the information to create a fast plan but it does not
do that.
create temp table t1 (id bigint, t bigint);
insert into t1 values (1, 1);
insert into t1 values (2, 2);
insert into t1 values (2, 3);
insert i
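The test case is truncated above; the point Greg raises in reply is that the planner has no statistics for a freshly loaded temp table until it is analyzed. A minimal sketch of the fix, continuing the same t1 (the row counts are illustrative):

create temp table t1 (id bigint, t bigint);
insert into t1 select g, g from generate_series(1, 1000) g;

-- Without this, the planner falls back on default estimates for t1.
analyze t1;

explain analyze select * from t1 where t = 500;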