Re: [PERFORM] using an index worst performances

2004-08-22 Thread Gaetano Mendola
Rod Taylor wrote:
|>>What are the times without the index, with the index and with the higher
|>>statistics value when using a prepared query?
|>
|>Using a prepared query:
|>
|>Without index and default stat 10:    1.12 ms
|>Without index and default stat 1000:  1.25 ms
|>With index and default stat 10:       1.35 ms
|>With index and default stat 1000:     1.6 ms
|>
|>Those values are the averages over 20 executions, excluding the very
|>first run.
|
|
| Most interesting. And the plans chosen with the 2 different default stat
| targets are the same? Sorry if I missed the post indicating they were.
|
| If the plans are the same, it would be interesting to get a profile on
| the 2 different cases with that index in place across 100k iterations of
| the prepared query.
Do you have any advice on which profiler to use?
Regards
Gaetano Mendola



Re: [PERFORM] using an index worst performances

2004-08-20 Thread Tom Lane
Gaetano Mendola <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
> | Could we see EXPLAIN ANALYZE EXECUTE output for each case?

> [snip]
> See above.

Okay, so the issue here is choosing between a nestloop or a hash join
that have very nearly equal estimated costs:

> ->  Hash Join  (cost=1.74..46.14 rows=1 width=760) (actual time=0.342..0.825 rows=3 loops=1)
>       Hash Cond: (("outer".vtype)::text = ("inner"."type")::text)

> ->  Nested Loop  (cost=0.00..46.13 rows=1 width=760) (actual time=0.278..0.933 rows=3 loops=1)
>       Join Filter: (("outer".vtype)::text = ("inner"."type")::text)

In the indexed case it's the same choice, but at a different level of joining:

> ->  Hash Join  (cost=1.74..13.15 rows=1 width=768) (actual time=0.281..0.651 rows=5 loops=1)
>       Hash Cond: (("outer".vtype)::text = ("inner"."type")::text)

> ->  Nested Loop  (cost=0.00..13.14 rows=1 width=768) (actual time=0.268..0.936 rows=5 loops=1)
>       Join Filter: (("outer".vtype)::text = ("inner"."type")::text)

With only 0.01 unit of difference in the costs, it's perfectly plausible
for a change in the statistics to shift the estimates just enough to
give one plan or the other the edge.

Given that the runtimes are in fact pretty similar, it doesn't bother me
that the planner is estimating them as essentially identical.
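
If the goal is to separate those estimates more decisively, a cheaper
alternative to raising the global default is to raise the statistics
target on just the columns in the join condition. A minimal sketch,
assuming the column names shown in the plans above:

-- Raise the per-column statistics target for the join-condition
-- columns only, instead of changing default_statistics_target.
ALTER TABLE store_objects ALTER COLUMN vtype SET STATISTICS 1000;
ALTER TABLE store_types ALTER COLUMN "type" SET STATISTICS 1000;

-- The new targets only take effect once statistics are re-gathered.
ANALYZE store_objects;
ANALYZE store_types;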

regards, tom lane



Re: [PERFORM] using an index worst performances

2004-08-20 Thread Gaetano Mendola
Tom Lane wrote:
| Gaetano Mendola <[EMAIL PROTECTED]> writes:
|
|>Using a prepared query:
|
|>Without index and default stat 10:   1.12 ms
ariadne=# explain analyze execute test_ariadne;
                                  QUERY PLAN
--------------------------------------------------------------------------------------------------
 Limit  (cost=46.15..46.17 rows=1 width=760) (actual time=0.926..1.035 rows=3 loops=1)
   ->  Unique  (cost=46.15..46.17 rows=1 width=760) (actual time=0.904..0.969 rows=3 loops=1)
         ->  Sort  (cost=46.15..46.15 rows=1 width=760) (actual time=0.891..0.909 rows=3 loops=1)
               Sort Key: store_nodes.parent, store_nodes.priority, store_nodes."path", store_objects.id, store_objects."type", store_objects.object, date_part('epoch'::text, store_objects.lastchanged), store_objects.vtype
               ->  Hash Join  (cost=1.74..46.14 rows=1 width=760) (actual time=0.342..0.825 rows=3 loops=1)
                     Hash Cond: (("outer".vtype)::text = ("inner"."type")::text)
                     ->  Nested Loop  (cost=0.00..44.38 rows=1 width=760) (actual time=0.198..0.618 rows=3 loops=1)
                           ->  Nested Loop  (cost=0.00..38.93 rows=1 width=104) (actual time=0.157..0.447 rows=3 loops=1)
                                 ->  Seq Scan on store_prop_article  (cost=0.00..1.75 rows=7 width=8) (actual time=0.030..0.119 rows=7 loops=1)
                                       Filter: ((ar_start <= 1092924200) AND (ar_end >= 1092924200) AND ((ar_display)::text = 'default'::text))
                                 ->  Index Scan using store_nodes_object on store_nodes  (cost=0.00..5.30 rows=1 width=96) (actual time=0.019..0.023 rows=0 loops=7)
                                       Index Cond: ("outer".object = store_nodes.object)
                                       Filter: (("path")::text ~~ '/sites/broadsat/news/%'::text)
                           ->  Index Scan using store_objects_pkey on store_objects  (cost=0.00..5.43 rows=1 width=672) (actual time=0.013..0.020 rows=1 loops=3)
                                 Index Cond: ("outer".object = store_objects.id)
                     ->  Hash  (cost=1.74..1.74 rows=2 width=11) (actual time=0.085..0.085 rows=0 loops=1)
                           ->  Seq Scan on store_types  (cost=0.00..1.74 rows=2 width=11) (actual time=0.038..0.064 rows=1 loops=1)
                                 Filter: ((implements)::text = 'particle'::text)
 Total runtime: 1.199 ms
(19 rows)
|>Without index and default stat 1000:  1.25 ms
ariadne=# explain analyze execute test_ariadne;
                                  QUERY PLAN
--------------------------------------------------------------------------------------------------
 Limit  (cost=46.14..46.16 rows=1 width=760) (actual time=1.027..1.126 rows=3 loops=1)
   ->  Unique  (cost=46.14..46.16 rows=1 width=760) (actual time=1.014..1.077 rows=3 loops=1)
         ->  Sort  (cost=46.14..46.14 rows=1 width=760) (actual time=1.001..1.019 rows=3 loops=1)
               Sort Key: store_nodes.parent, store_nodes.priority, store_nodes."path", store_objects.id, store_objects."type", store_objects.object, date_part('epoch'::text, store_objects.lastchanged), store_objects.vtype
               ->  Nested Loop  (cost=0.00..46.13 rows=1 width=760) (actual time=0.278..0.933 rows=3 loops=1)
                     Join Filter: (("outer".vtype)::text = ("inner"."type")::text)
                     ->  Nested Loop  (cost=0.00..44.38 rows=1 width=760) (actual time=0.208..0.591 rows=3 loops=1)
                           ->  Nested Loop  (cost=0.00..38.93 rows=1 width=104) (actual time=0.168..0.417 rows=3 loops=1)
                                 ->  Seq Scan on store_prop_article  (cost=0.00..1.75 rows=7 width=8) (actual time=0.038..0.118 rows=7 loops=1)
                                       Filter: ((ar_start <= 1092924200) AND (ar_end >= 1092924200) AND ((ar_display)::text = 'default'::text))
                                 ->  Index Scan using store_nodes_object on store_nodes  (cost=0.00..5.30 rows=1 width=96) (actual time=0.016..0.020 rows=0 loops=7)
                                       Index Cond: ("outer".object = store_nodes.object)
                                       Filter: (("path")::text ~~ '/sites/broadsat/news/%'::text)
                           ->  Index Scan using store_objects_pkey on store_objects  (cost=0.00..5.43 rows=1 width=672) (actual time=0.012..0.022 rows=1 loops=3)
[...]

Re: [PERFORM] using an index worst performances

2004-08-20 Thread Tom Lane
Gaetano Mendola <[EMAIL PROTECTED]> writes:
> Using a prepared query:

> Without index and default stat 10:    1.12 ms
> Without index and default stat 1000:  1.25 ms
> With index and default stat 10:       1.35 ms
> With index and default stat 1000:     1.6 ms

Could we see EXPLAIN ANALYZE EXECUTE output for each case?

regards, tom lane



Re: [PERFORM] using an index worst performances

2004-08-20 Thread Gaetano Mendola
Rod Taylor wrote:
On Fri, 2004-08-20 at 05:37, Gaetano Mendola wrote:
Christopher Kings-Lynne wrote:
|>>> Without index: 1.140 ms
|>>> With index: 1.400 ms
|>>> With default_statistics_target = 200:   1.800 ms
|>>
|>> Can I just check that 1.800ms means 1.8 secs (You're using . as the
|>> thousands separator)?
|>>
|>> If it means 1.8ms then frankly the times are too short to mean
|>> anything without running them 100 times and averaging.
|>
|> It means 1.8 ms, and the execution time stays at that value even
|> over 1000 runs.
|
|
| Given the almost irrelevant difference in the speed of those queries, I'd
| say that with the stats so high, postgres simply takes longer to check
| the statistics to come to the same conclusion, i.e. it has to loop over
| 200 rows instead of just 10.
The time increase seems too large.

We can test this.
What are the times without the index, with the index and with the higher
statistics value when using a prepared query?
Using a prepared query:
Without index and default stat 10:    1.12 ms
Without index and default stat 1000:  1.25 ms
With index and default stat 10:       1.35 ms
With index and default stat 1000:     1.6 ms
Those values are the averages over 20 executions, excluding the very
first run.
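
For reference, the measurement pattern is essentially the one below.
The SELECT body here is only a simplified stand-in built from the
tables that appear in the posted plans, since the real query lives in
the rafb.net paste:

PREPARE test_ariadne AS
    SELECT n.parent, n.priority, n."path", o.id, o."type"
    FROM store_nodes n
    JOIN store_objects o ON o.id = n.object
    WHERE (n."path")::text LIKE '/sites/broadsat/news/%';

-- The plan is built once at PREPARE time; each EXECUTE reuses it, so
-- repeated runs measure executor time rather than planning time.
EXPLAIN ANALYZE EXECUTE test_ariadne;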

Regards
Gaetano Mendola



Re: [PERFORM] using an index worst performances

2004-08-20 Thread Rod Taylor
On Fri, 2004-08-20 at 05:37, Gaetano Mendola wrote:
> Christopher Kings-Lynne wrote:
> 
> |>>> Without index: 1.140 ms
> |>>> With index: 1.400 ms
> |>>> With default_statistics_target = 200:   1.800 ms
> |>>
> |>> Can I just check that 1.800ms means 1.8 secs (You're using . as the
> |>> thousands separator)?
> |>>
> |>> If it means 1.8ms then frankly the times are too short to mean
> |>> anything without running them 100 times and averaging.
> |>
> |> It means 1.8 ms, and the execution time stays at that value even
> |> over 1000 runs.
> |
> |
> | Given the almost irrelevant difference in the speed of those queries, I'd
> | say that with the stats so high, postgres simply takes longer to check
> | the statistics to come to the same conclusion, i.e. it has to loop over
> | 200 rows instead of just 10.
> 
> The time increase seems too large.

We can test this.

What are the times without the index, with the index and with the higher
statistics value when using a prepared query?





Re: [PERFORM] using an index worst performances

2004-08-20 Thread Gaetano Mendola
Christopher Kings-Lynne wrote:
|>>> Without index: 1.140 ms
|>>> With index: 1.400 ms
|>>> With default_statistics_target = 200:   1.800 ms
|>>
|>> Can I just check that 1.800ms means 1.8 secs (You're using . as the
|>> thousands separator)?
|>>
|>> If it means 1.8ms then frankly the times are too short to mean
|>> anything without running them 100 times and averaging.
|>
|> It means 1.8 ms, and the execution time stays at that value even
|> over 1000 runs.
|
|
| Given the almost irrelevant difference in the speed of those queries, I'd
| say that with the stats so high, postgres simply takes longer to check
| the statistics to come to the same conclusion, i.e. it has to loop over
| 200 rows instead of just 10.
The time increase seems too large.
Regards
Gaetano Mendola



Re: [PERFORM] using an index worst performances

2004-08-19 Thread Christopher Kings-Lynne
Without index: 1.140 ms
With index: 1.400 ms
With default_statistics_target = 200:   1.800 ms

Can I just check that 1.800ms means 1.8 secs (You're using . as the 
thousands separator)?

If it means 1.8ms then frankly the times are too short to mean 
anything without running them 100 times and averaging.

It means 1.8 ms, and the execution time stays at that value even
over 1000 runs.
Given the almost irrelevant difference in the speed of those queries, I'd
say that with the stats so high, postgres simply takes longer to check
the statistics to come to the same conclusion, i.e. it has to loop over
200 rows instead of just 10.
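
If you want to see how much per-column data that is, the pg_stats view
exposes it; a quick check, using the table names from the plans posted
in this thread:

-- The length of most_common_vals (and histogram_bounds) grows with
-- the statistics target; that is the extra data the planner scans.
SELECT tablename, attname, n_distinct, most_common_vals
FROM pg_stats
WHERE tablename IN ('store_objects', 'store_types', 'store_nodes');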

Chris


Re: [PERFORM] using an index worst performances

2004-08-19 Thread Gaetano Mendola
Richard Huxton wrote:
Gaetano Mendola wrote:
Hi all,
I'm trying to optimize the following query:
http://rafb.net/paste/results/YdO9vM69.html
as you can see from the EXPLAIN output, the performance is worse
after defining the index.
If I raise default_statistics_target to 200,
the performance is worse than before:
Without index: 1.140 ms
With index: 1.400 ms
With default_statistics_target = 200:   1.800 ms

Can I just check that 1.800ms means 1.8 secs (You're using . as the 
thousands separator)?

If it means 1.8ms then frankly the times are too short to mean anything 
without running them 100 times and averaging.

It means 1.8 ms, and the execution time stays at that value even
over 1000 runs.
Regards
Gaetano Mendola


Re: [PERFORM] using an index worst performances

2004-08-19 Thread Richard Huxton
Gaetano Mendola wrote:
Hi all,
I'm trying to optimize the following query:
http://rafb.net/paste/results/YdO9vM69.html
as you can see from the EXPLAIN output, the performance is worse
after defining the index.
If I raise default_statistics_target to 200,
the performance is worse than before:
Without index: 1.140 ms
With index: 1.400 ms
With default_statistics_target = 200:   1.800 ms
Can I just check that 1.800ms means 1.8 secs (You're using . as the 
thousands separator)?

If it means 1.8ms then frankly the times are too short to mean anything 
without running them 100 times and averaging.

--
  Richard Huxton
  Archonet Ltd


[PERFORM] using an index worst performances

2004-08-19 Thread Gaetano Mendola
Hi all,
I'm trying to optimize the following query:
http://rafb.net/paste/results/YdO9vM69.html
as you can see from the EXPLAIN output, the performance is worse
after defining the index.
If I raise default_statistics_target to 200,
the performance is worse than before:
Without index: 1.140 ms
With index: 1.400 ms
With default_statistics_target = 200:   1.800 ms
Thoughts, anyone?
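
For reference, raising the target only matters once statistics are
re-gathered; a minimal sketch, assuming a per-session change
(postgresql.conf works too):

SET default_statistics_target = 200;

-- Histograms are only rebuilt at the next ANALYZE, so re-gather the
-- statistics before re-running the query under EXPLAIN ANALYZE.
ANALYZE;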

Regards
Gaetano Mendola

