Re: [HACKERS] Maximum statistics target

2008-03-20 Thread Kenneth Marshall
On Thu, Mar 20, 2008 at 11:17:10AM -0500, Decibel! wrote:
> On Mar 10, 2008, at 1:26 PM, Peter Eisentraut wrote:
>
> At some point I think it makes a lot more sense to just have VACUUM gather 
> stats as it goes, rather than have ANALYZE generate a bunch of random IO.
>
> BTW, when it comes to the case of the OP, perhaps we can build enough 
> intelligence for the system to understand when the stats follow some type 
> of pattern (ie: a geometric distribution), and store the stats differently.
> -- 
> Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
> Give your computer some brain candy! www.distributed.net Team #1828
>
+1 for opportunistically gathering stats during other I/O such as
vacuums and sequential scans. It would be interesting to have a hook
to allow processes to attach to the dataflow from other queries.

Cheers,
Ken



Re: [HACKERS] Maximum statistics target

2008-03-20 Thread Decibel!

On Mar 10, 2008, at 1:26 PM, Peter Eisentraut wrote:

> On Monday, 10 March 2008, Gregory Stark wrote:
>>> It's not possible to believe that you'd not notice O(N^2) behavior for N
>>> approaching 80 ;-).  Perhaps your join columns were unique keys, and
>>> thus didn't have any most-common-values?
>>
>> We could remove the hard limit on statistics target and impose the limit
>> instead on the actual size of the arrays. Ie, allow people to specify
>> larger sample sizes and discard unreasonably large excess data (possibly
>> warning them when that happens).
>
> I have run some more useful tests now with more distinct values.  The planning
> times do increase, but this is not the primary worry.  If you want to spend
> 20 seconds of planning to speed up your query by 40 seconds, this could
> surely be a win in some scenarios, and not a catastrophic loss if not.  The
> practical problems lie with memory usage in ANALYZE, in two ways.  First, at
> some point it will try to construct pg_statistic rows that don't fit into the
> 1GB limit, as mentioned upthread.  You get a funny error message and it
> aborts.  This is fixable with some cosmetics.  Second, ANALYZE appears to
> temporarily leak memory (it probably doesn't bother to free things along the
> way, as most of the code does), and so some not so large statistics targets
> (say, 4) can get your system swapping like crazy.  A crafty user could
> probably kill the system that way, perhaps even with the restricted settings
> we have now.  I haven't inspected the code in detail yet, but I imagine a few
> pfree() calls and/or a counter that checks the current memory usage against
> maintenance_work_mem could provide additional safety.  If we could get
> ANALYZE under control, then I imagine this would provide a more natural upper
> bound for the statistics targets, and it would be controllable by the
> administrator.


At some point I think it makes a lot more sense to just have VACUUM  
gather stats as it goes, rather than have ANALYZE generate a bunch of  
random IO.


BTW, when it comes to the case of the OP, perhaps we can build enough  
intelligence for the system to understand when the stats follow some  
type of pattern (ie: a geometric distribution), and store the stats  
differently.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [HACKERS] Maximum statistics target

2008-03-10 Thread Stephen Denne
> We could remove the hard limit on statistics target and impose the limit
> instead on the actual size of the arrays. Ie, allow people to specify larger
> sample sizes and discard unreasonably large excess data (possibly warning them
> when that happens).
> 
> That would remove the screw case the original poster had where he needed to
> scan a large portion of the table to see at least one of every value even
> though there were only 169 distinct values.
> 
> -- 
>   Gregory Stark


That was my use case, but I wasn't the OP.

Your suggestion would satisfy what I was trying to do. However, a higher stats 
target wouldn't solve my root problem (how the planner uses the gathered 
stats), and the statistics gathered at 1000 (and indeed at 200) are quite a 
good representation of what is in the table.

I don't like the idea of changing one limit into two limits. Or are you 
suggesting changing the algorithm that determines how many, and which pages to 
analyze, perhaps so that it is adaptive to the results of the analysis as it 
progresses? That doesn't sound easy.

Regards,
Stephen Denne.






Re: [HACKERS] Maximum statistics target

2008-03-10 Thread Peter Eisentraut
On Monday, 10 March 2008, Gregory Stark wrote:
> > It's not possible to believe that you'd not notice O(N^2) behavior for N
> > approaching 80 ;-).  Perhaps your join columns were unique keys, and
> > thus didn't have any most-common-values?
>
> We could remove the hard limit on statistics target and impose the limit
> instead on the actual size of the arrays. Ie, allow people to specify
> larger sample sizes and discard unreasonably large excess data (possibly
> warning them when that happens).

I have run some more useful tests now with more distinct values.  The planning 
times do increase, but this is not the primary worry.  If you want to spend 
20 seconds of planning to speed up your query by 40 seconds, this could 
surely be a win in some scenarios, and not a catastrophic loss if not.  The 
practical problems lie with memory usage in ANALYZE, in two ways.  First, at 
some point it will try to construct pg_statistic rows that don't fit into the 
1GB limit, as mentioned upthread.  You get a funny error message and it 
aborts.  This is fixable with some cosmetics.  Second, ANALYZE appears to 
temporarily leak memory (it probably doesn't bother to free things along the 
way, as most of the code does), and so some not so large statistics targets 
(say, 4) can get your system swapping like crazy.  A crafty user could 
probably kill the system that way, perhaps even with the restricted settings 
we have now.  I haven't inspected the code in detail yet, but I imagine a few 
pfree() calls and/or a counter that checks the current memory usage against 
maintenance_work_mem could provide additional safety.  If we could get 
ANALYZE under control, then I imagine this would provide a more natural upper 
bound for the statistics targets, and it would be controllable by the 
administrator.



Re: [HACKERS] Maximum statistics target

2008-03-10 Thread Gregory Stark
"Tom Lane" <[EMAIL PROTECTED]> writes:

> Peter Eisentraut <[EMAIL PROTECTED]> writes:
>> On Friday, 7 March 2008, Tom Lane wrote:
>>> IIRC, eqjoinsel is one of the weak spots, so tests involving planning of
>>> joins between two tables with large MCV lists would be a good place to
>>> start.
>
>> I have run tests with joining two and three tables with 10 million rows each,
>> and the planning times seem to be virtually unaffected by the statistics 
>> target, for values between 10 and 80.
>
> It's not possible to believe that you'd not notice O(N^2) behavior for N
> approaching 80 ;-).  Perhaps your join columns were unique keys, and
> thus didn't have any most-common-values?

We could remove the hard limit on statistics target and impose the limit
instead on the actual size of the arrays. Ie, allow people to specify larger
sample sizes and discard unreasonably large excess data (possibly warning them
when that happens).

That would remove the screw case the original poster had where he needed to
scan a large portion of the table to see at least one of every value even
though there were only 169 distinct values.
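
A quick way to check whether ANALYZE actually saw every value is to compare its
estimate against a full scan (a sketch only; table and column names below are
placeholders):

ANALYZE mytable (mycol);

-- Sample-based estimate recorded by ANALYZE:
SELECT n_distinct FROM pg_stats
 WHERE tablename = 'mytable' AND attname = 'mycol';

-- Actual number of distinct values (full scan):
SELECT count(DISTINCT mycol) FROM mytable;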

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's RemoteDBA services!



Re: [HACKERS] Maximum statistics target

2008-03-10 Thread Tom Lane
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> On Friday, 7 March 2008, Tom Lane wrote:
>> IIRC, eqjoinsel is one of the weak spots, so tests involving planning of
>> joins between two tables with large MCV lists would be a good place to
>> start.

> I have run tests with joining two and three tables with 10 million rows each,
> and the planning times seem to be virtually unaffected by the statistics 
> target, for values between 10 and 80.

It's not possible to believe that you'd not notice O(N^2) behavior for N
approaching 80 ;-).  Perhaps your join columns were unique keys, and
thus didn't have any most-common-values?

regards, tom lane



Re: [HACKERS] Maximum statistics target

2008-03-10 Thread Guillaume Smet
On Mon, Mar 10, 2008 at 11:36 AM, Peter Eisentraut <[EMAIL PROTECTED]> wrote:
>  The time to analyze is also quite constant, just before you run out of
>  memory. :)  The MaxAllocSize is the limiting factor in all this.  In my
>  example, statistics targets larger than about 80 created pg_statistic
>  rows that would have been larger than 1GB, so they couldn't be stored.

From my experience on real-life examples, the time to analyze is far
from constant when you raise the statistics target, but that may be
related to the schema of our tables.

cityvox=# \timing
Timing is on.
cityvox=# show default_statistics_target ;
 default_statistics_target
---------------------------
 10
(1 row)

Time: 0.101 ms
cityvox=# ANALYZE evenement;
ANALYZE
Time: 406.069 ms
cityvox=# ANALYZE evenement;
ANALYZE
Time: 412.355 ms
cityvox=# set default_statistics_target = 30;
SET
Time: 0.165 ms
cityvox=# ANALYZE evenement;
ANALYZE
Time: 1419.161 ms
cityvox=# ANALYZE evenement;
ANALYZE
Time: 1381.754 ms
cityvox=# set default_statistics_target = 100;
SET
Time: 1.853 ms
cityvox=# ANALYZE evenement;
ANALYZE
Time: 5211.785 ms
cityvox=# ANALYZE evenement;
ANALYZE
Time: 5178.764 ms

That said, I totally agree that it's not a good idea to have a strict
maximum value if we don't have technical reasons for it.
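
If only a few columns actually need better statistics, the extra ANALYZE cost
can also be confined to them with per-column targets instead of raising the
database-wide default (a sketch; the column name below is a placeholder):

ALTER TABLE evenement ALTER COLUMN some_column SET STATISTICS 100;
ANALYZE evenement;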

--
Guillaume



Re: [HACKERS] Maximum statistics target

2008-03-10 Thread Cédric Villemain
On Monday, 10 March 2008, Peter Eisentraut wrote:
> On Friday, 7 March 2008, Tom Lane wrote:
> > I'm not wedded to the number 1000 in particular --- obviously that's
> > just a round number.  But it would be good to see some performance tests
> > with larger settings before deciding that we don't need a limit.
>
> Well, I'm not saying we should raise the default statistics target.  But
> setting an arbitrary limit on the grounds that larger values might slow the
> system is like limiting the size of tables because larger tables will cause
> slower queries.  Users should have the option of finding out the best
> balance for themselves.  If there are concerns with larger statistics
> targets, we should document them.  I find nothing about this in the
> documentation at the moment.

I found two things:
«Increasing the target causes a proportional increase in the time and space
needed to do ANALYZE.»
in http://www.postgresql.org/docs/current/static/sql-analyze.html
and
«... at the price of consuming more space in pg_statistic and slightly more
time to compute the estimates»
in http://www.postgresql.org/docs/current/static/planner-stats.html

But neither is probably clear enough about the impact of larger targets on query planning time.


>
> > IIRC, eqjoinsel is one of the weak spots, so tests involving planning of
> > joins between two tables with large MCV lists would be a good place to
> > start.
>
> I have run tests with joining two and three tables with 10 million rows
> each, and the planning times seem to be virtually unaffected by the
> statistics target, for values between 10 and 80.  They all look more or
> less like this:
>
> test=# explain select * from test1, test2 where test1.a = test2.b;
>  QUERY PLAN
> ------------------------------------------------------------------------
>  Hash Join  (cost=308311.00..819748.00 rows=1000 width=16)
>    Hash Cond: (test1.a = test2.b)
>    ->  Seq Scan on test1  (cost=0.00..144248.00 rows=1000 width=8)
>    ->  Hash  (cost=144248.00..144248.00 rows=1000 width=8)
>          ->  Seq Scan on test2  (cost=0.00..144248.00 rows=1000 width=8)
> (5 rows)
>
> Time: 132,350 ms
>
> and with indexes
>
> test=# explain select * from test1, test2 where test1.a = test2.b;
>  QUERY PLAN
> -------------------------------------------------------------------------
>  Merge Join  (cost=210416.65..714072.26 rows=1000 width=16)
>    Merge Cond: (test1.a = test2.b)
>    ->  Index Scan using test1_index1 on test1  (cost=0.00..282036.13 rows=1000 width=8)
>    ->  Index Scan using test2_index1 on test2  (cost=0.00..282036.13 rows=1000 width=8)
> (4 rows)
>
> Time: 168,455 ms
>
> The time to analyze is also quite constant, just before you run out of
> memory. :)  The MaxAllocSize is the limiting factor in all this.  In my
> example, statistics targets larger than about 80 created pg_statistic
> rows that would have been larger than 1GB, so they couldn't be stored.
>
> I suggest that we get rid of the limit of 1000, adequately document
> whatever issues might exist with large values (possibly not many, see
> above), and add an error message more user-friendly than "invalid memory
> alloc request size" for the cases where the value is too large to be
> storable.



-- 
Cédric Villemain
Database Administrator
Cel: +33 (0)6 74 15 56 53
http://dalibo.com - http://dalibo.org




Re: [HACKERS] Maximum statistics target

2008-03-10 Thread Peter Eisentraut
On Friday, 7 March 2008, Tom Lane wrote:
> I'm not wedded to the number 1000 in particular --- obviously that's
> just a round number.  But it would be good to see some performance tests
> with larger settings before deciding that we don't need a limit.

Well, I'm not saying we should raise the default statistics target.  But 
setting an arbitrary limit on the grounds that larger values might slow the 
system is like limiting the size of tables because larger tables will cause 
slower queries.  Users should have the option of finding out the best balance 
for themselves.  If there are concerns with larger statistics targets, we 
should document them.  I find nothing about this in the documentation at the 
moment.

> IIRC, eqjoinsel is one of the weak spots, so tests involving planning of
> joins between two tables with large MCV lists would be a good place to
> start.

I have run tests with joining two and three tables with 10 million rows each, 
and the planning times seem to be virtually unaffected by the statistics 
target, for values between 10 and 80.  They all look more or less like 
this:

test=# explain select * from test1, test2 where test1.a = test2.b;
 QUERY PLAN
------------------------------------------------------------------------
 Hash Join  (cost=308311.00..819748.00 rows=1000 width=16)
   Hash Cond: (test1.a = test2.b)
   ->  Seq Scan on test1  (cost=0.00..144248.00 rows=1000 width=8)
   ->  Hash  (cost=144248.00..144248.00 rows=1000 width=8)
         ->  Seq Scan on test2  (cost=0.00..144248.00 rows=1000 width=8)
(5 rows)

Time: 132,350 ms

and with indexes

test=# explain select * from test1, test2 where test1.a = test2.b;
 QUERY PLAN
-------------------------------------------------------------------------
 Merge Join  (cost=210416.65..714072.26 rows=1000 width=16)
   Merge Cond: (test1.a = test2.b)
   ->  Index Scan using test1_index1 on test1  (cost=0.00..282036.13 rows=1000 width=8)
   ->  Index Scan using test2_index1 on test2  (cost=0.00..282036.13 rows=1000 width=8)
(4 rows)

Time: 168,455 ms

The time to analyze is also quite constant, just before you run out of 
memory. :)  The MaxAllocSize is the limiting factor in all this.  In my 
example, statistics targets larger than about 80 created pg_statistic 
rows that would have been larger than 1GB, so they couldn't be stored.
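
A rough way to watch how close a column's statistics row gets to that ceiling
is to measure the pg_statistic row directly (reading pg_statistic normally
requires superuser; this assumes the test1 table from the example above):

SELECT a.attname, pg_column_size(s.*) AS stats_row_bytes
  FROM pg_statistic s
  JOIN pg_attribute a ON a.attrelid = s.starelid AND a.attnum = s.staattnum
 WHERE s.starelid = 'test1'::regclass;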

I suggest that we get rid of the limit of 1000, adequately document whatever 
issues might exist with large values (possibly not many, see above), and add 
an error message more user-friendly than "invalid memory alloc request size" 
for the cases where the value is too large to be storable.



Re: [HACKERS] Maximum statistics target

2008-03-09 Thread Stephen Denne
Tom Lane wrote:
> Martijn van Oosterhout <[EMAIL PROTECTED]> writes:
> > On Fri, Mar 07, 2008 at 07:25:25PM +0100, Peter Eisentraut wrote:
> >> What's the problem with setting it to ten million if I 
> have ten million values 
> >> in the table and I am prepared to spend the resources to 
> maintain those 
> >> statistics?
> 
> > That it'll probably take 10 million seconds to calculate the plans
> > using it? I think Tom pointed there are a few places that are O(n^2)
> > the number entries...
> 
> I'm not wedded to the number 1000 in particular --- obviously that's
> just a round number.  But it would be good to see some 
> performance tests
> with larger settings before deciding that we don't need a limit.

I recently encountered a situation where I would have liked to be able to try a 
larger limit (amongst other ideas for improving my situation):

I have a field whose distribution of frequencies of values is roughly 
geometric, rather than flat.
Total rows = 36 million
relpages=504864
Distinct field values in use = 169
10 values account for 50% of the rows.
41 values account for 90% of the rows.

After setting statistics target to 1000 for that field, and analyzing the 
table, the statistics row for that field had 75 most frequent values and a 
histogram with 76 entries in it. Estimating 151 values in total.
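
For reference, those counts come straight out of pg_stats and can be checked
with something like this (table and column names below are placeholders):

SELECT n_distinct,
       array_upper(most_common_vals, 1)  AS mcv_entries,
       array_upper(histogram_bounds, 1)  AS histogram_entries
  FROM pg_stats
 WHERE tablename = 'mytable' AND attname = 'mycol';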

For this situation using a larger statistics target should result in more pages 
being read, and a more accurate record of statistics. It shouldn't result in 
significantly more work for the planner.

It wouldn't solve my problem though, which is frequent over-estimation of rows 
when restricting by this field with values not known at plan time.
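
That behaviour can be reproduced with a prepared statement, where the
parameter value is not known when the plan is built (a sketch; names are
placeholders):

PREPARE q(int) AS SELECT * FROM mytable WHERE mycol = $1;

-- Planned without knowing the parameter, so a generic estimate is used:
EXPLAIN EXECUTE q(42);

-- With a literal, the planner can use the per-value (MCV) frequencies:
EXPLAIN SELECT * FROM mytable WHERE mycol = 42;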

Regards,
Stephen Denne.






Re: [HACKERS] Maximum statistics target

2008-03-07 Thread Tom Lane
Martijn van Oosterhout <[EMAIL PROTECTED]> writes:
> On Fri, Mar 07, 2008 at 07:25:25PM +0100, Peter Eisentraut wrote:
>> What's the problem with setting it to ten million if I have ten million
>> values in the table and I am prepared to spend the resources to maintain
>> those statistics?

> That it'll probably take 10 million seconds to calculate the plans
> using it? I think Tom pointed out there are a few places that are O(n^2)
> in the number of entries...

I'm not wedded to the number 1000 in particular --- obviously that's
just a round number.  But it would be good to see some performance tests
with larger settings before deciding that we don't need a limit.

IIRC, eqjoinsel is one of the weak spots, so tests involving planning of
joins between two tables with large MCV lists would be a good place to
start.
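
A minimal sketch of such a test, assuming made-up table names and a skewed
distribution so that both join columns end up with full MCV lists:

CREATE TABLE mcv_a AS
  SELECT (random() * random() * 1000)::int AS k
    FROM generate_series(1, 1000000);
CREATE TABLE mcv_b AS
  SELECT (random() * random() * 1000)::int AS k
    FROM generate_series(1, 1000000);

ALTER TABLE mcv_a ALTER COLUMN k SET STATISTICS 1000;
ALTER TABLE mcv_b ALTER COLUMN k SET STATISTICS 1000;
ANALYZE mcv_a;
ANALYZE mcv_b;

-- With \timing on in psql, EXPLAIN (which plans but does not execute)
-- gives a rough measure of planning time for the MCV-vs-MCV join.
EXPLAIN SELECT * FROM mcv_a a JOIN mcv_b b ON a.k = b.k;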

regards, tom lane



Re: [HACKERS] Maximum statistics target

2008-03-07 Thread Martijn van Oosterhout
On Fri, Mar 07, 2008 at 07:25:25PM +0100, Peter Eisentraut wrote:
> What's the problem with setting it to ten million if I have ten million
> values in the table and I am prepared to spend the resources to maintain
> those statistics?

That it'll probably take 10 million seconds to calculate the plans
using it? I think Tom pointed out there are a few places that are O(n^2)
in the number of entries...

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while 
> boarding. Thank you for flying nlogn airlines.




[HACKERS] Maximum statistics target

2008-03-07 Thread Peter Eisentraut
Related to the concurrent discussion about selectivity estimations ...

What is the reason the statistics target is limited to 1000?  I've seen more 
than one case where increasing the statistics target to 1000 improved results 
and one would have wanted to increase it further.

What's the problem with setting it to ten million if I have ten million values 
in the table and I am prepared to spend the resources to maintain those 
statistics?
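
For reference, the settings being discussed look like this; "orders" and
"customer_id" below are made-up names used only for illustration:

-- Session-wide default, currently capped at 1000:
SET default_statistics_target = 1000;

-- Per-column override, subject to the same cap:
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;

-- Recompute statistics for just that column:
ANALYZE orders (customer_id);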
