Mark Wong mark...@gmail.com writes:
On Mon, Dec 8, 2008 at 4:34 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Mark Wong mark...@gmail.com writes:
On Tue, Dec 2, 2008 at 2:25 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Are any of the queries complicated enough to trigger GEQO planning?
Sorry for the
On Mon, Dec 8, 2008 at 4:34 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Mark Wong mark...@gmail.com writes:
On Tue, Dec 2, 2008 at 2:25 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Are any of the queries complicated enough to trigger GEQO planning?
Is there a debug option that we could use to see?
Tom Lane [EMAIL PROTECTED] wrote:
[ a bit off-topic for the thread, but ... ]
Kevin Grittner [EMAIL PROTECTED] writes:
I'll attach the query and plan. You'll note that the query looks a
little odd, especially all the (1=1) tests.
FWIW, it would be better to use TRUE as a placeholder in
[ a bit off-topic for the thread, but ... ]
Kevin Grittner [EMAIL PROTECTED] writes:
I'll attach the query and plan. You'll note that the query looks a
little odd, especially all the (1=1) tests.
FWIW, it would be better to use TRUE as a placeholder in your
generated queries. I don't suppose
On Tue, Dec 2, 2008 at 2:25 AM, Tom Lane [EMAIL PROTECTED] wrote:
Greg Smith [EMAIL PROTECTED] writes:
... where the Power Test seems to oscillate between degrees of good and bad
behavior seemingly at random.
Are any of the queries complicated enough to trigger GEQO planning?
Is there a
Mark Wong [EMAIL PROTECTED] writes:
On Tue, Dec 2, 2008 at 2:25 AM, Tom Lane [EMAIL PROTECTED] wrote:
Are any of the queries complicated enough to trigger GEQO planning?
Is there a debug option that we could use to see?
Well, you could set geqo=off and see if the behavior changes, but
it'd be
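For anyone wanting to try Tom's suggestion directly, the knobs involved look roughly like this (the query itself is a placeholder):

```sql
-- GEQO kicks in once a query joins at least geqo_threshold relations
-- (12 by default). Disabling it forces the exhaustive planner:
SET geqo = off;
EXPLAIN SELECT ...;   -- placeholder: run the suspect Power Test query here
SET geqo = on;
-- Alternatively, raise geqo_threshold above the query's join count.
```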
Kevin Grittner [EMAIL PROTECTED] writes:
One more data point to try to help.
While the jump from a default_statistics_target from 10 to 1000
resulted in a plan time increase for a common query from 50 ms to 310
ms, at a target of 50 the plan time was 53 ms. Analyze time was 7.2
minutes
Gregory Stark [EMAIL PROTECTED] wrote:
Incidentally this timing is with the 75kB toasted arrays in shared
buffers because the table has just been analyzed. If it was on a busy
system then just planning the query could involve 75kB of I/O which is
what I believe was happening to me way
Gregory Stark [EMAIL PROTECTED] wrote:
Incidentally, here's a graph of the explain time for that plan. It looks
pretty linear to me
Except for that sweet spot between 50 and 100.
Any idea what's up with that?
-Kevin
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
All,
I'm thinking that default_statistics_target is disputable enough that we
want to move discussion of it to pgsql-performance, and for version 0.1 of
the tuning wizard, exclude it.
--
--Josh
Josh Berkus
PostgreSQL
San Francisco
Thanks for putting out pgtune - it's a sorely needed tool.
I had a chance to look over the source code and have a few comments,
mostly about python specific coding conventions.
- The Windows-specific try block (line 16) raises a ValueError vs.
ImportError on my Debian machine. Maybe it would be
On Fri, 5 Dec 2008, Nathan Boley wrote:
- all classes (58, 135, 205) are 'old-style' classes. I don't see
any reason to use classic classes (unless Python 2.1 is a support
goal?)
I'm not targeting anything older than 2.4, as that's the oldest version I
have installed anywhere. 2.4 is
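For readers unfamiliar with the distinction Nathan is raising, a minimal illustration (the difference only exists under Python 2; Python 3 made every class new-style):

```python
class Classic:          # "classic"/old-style under Python 2.x
    pass

class NewStyle(object): # new-style: descriptors, super(), __mro__, properties
    pass

# Under Python 2, type(Classic) is the legacy `classobj` type and its
# instances sit outside the unified type system; under Python 3 both
# classes above are ordinary instances of `type`.
print(type(NewStyle) is type)
```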
On Fri, 2008-12-05 at 17:27 -0500, Greg Smith wrote:
On Fri, 5 Dec 2008, Nathan Boley wrote:
- all classes (58, 135, 205) are 'old-style' classes. I don't see
any reason to use classic classes (unless Python 2.1 is a support
goal?)
I'm not targeting anything older than 2.4, as
Looking at eqjoinsel I think it could be improved algorithmically if we keep
the mcv list in sorted order, even if it's just binary sorted order. But I'm
not sure what else uses those values and whether the current ordering is
significant. I'm also not sure it's the only O(n^2) algorithm there
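A toy sketch of the idea (illustrative Python, not the actual eqjoinsel code): today's approach compares every most-common-value on one side against every one on the other, an O(n1 * n2) scan, while keeping the lists sorted admits a single O(n1 + n2) merge pass. Each MCV entry here is a (value, frequency) pair and the result is the joint matched fraction.

```python
def mcv_match_fraction_quadratic(mcv1, mcv2):
    """Compare every pair of entries -- O(n1 * n2) equality tests."""
    matched = 0.0
    for v1, f1 in mcv1:
        for v2, f2 in mcv2:
            if v1 == v2:
                matched += f1 * f2
    return matched

def mcv_match_fraction_merge(mcv1, mcv2):
    """If both MCV lists are kept sorted by value, one merge pass
    finds all matches in O(n1 + n2) comparisons (MCVs are distinct)."""
    a = sorted(mcv1)
    b = sorted(mcv2)
    i = j = 0
    matched = 0.0
    while i < len(a) and j < len(b):
        if a[i][0] == b[j][0]:
            matched += a[i][1] * b[j][1]
            i += 1
            j += 1
        elif a[i][0] < b[j][0]:
            i += 1
        else:
            j += 1
    return matched
```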
Greg Smith [EMAIL PROTECTED] writes:
On Thu, 4 Dec 2008, Gregory Stark wrote:
My point was more that you could have a data warehouse on a non-dedicated
machine, you could have a web server on a non-dedicated machine, or you could
have a mixed server on a non-dedicated machine.
I should
Gregory Stark [EMAIL PROTECTED] wrote:
That sounds like it would be an interesting query to analyze in more
detail. Is there any chance you could run the complete graph and get a
chart of analyze times for all statistics values from 1..1000? And log
the explain plans to a file so we
Kevin Grittner [EMAIL PROTECTED] writes:
There are some very big tables in that query which contain some
confidential data.
oh well.
I'll attach the query and plan. You'll note that the query looks a
little odd, especially all the (1=1) tests.
That is interesting. I seem to recall Tom
Gregory Stark [EMAIL PROTECTED] wrote:
And log the explain plans to a
file so we can look for at what statistics targets the plan changed?
Well, I can give you explain analyze output for
default_statistics_target 10 and 50, for whatever that's worth.
Unfortunately I blew my save from the
Greg Smith wrote:
I'm not the sort to be too concerned myself that
the guy who thinks he's running a DW on a system with 64MB of RAM might
get bad settings, but it's a fair criticism to point that out as a problem.
In defense of thinking about very small configurations, I've seen many
cases
On Thu, 2008-12-04 at 10:20 -0800, Ron Mayer wrote:
Greg Smith wrote:
I'm not the sort to be too concerned myself that
the guy who thinks he's running a DW on a system with 64MB of RAM might
get bad settings, but it's a fair criticism to point that out as a problem.
In defense of
Well that's a bit of hyperbole. There's a gulf of difference between
an embedded use case where it should fit within an acceptable
footprint for a desktop app component of maybe a megabyte or so of ram
and disk - if we're generous and saying it should run comfortably
without having to spec
In defense of thinking about very small configurations, I've seen many
cases where an enterprise-software salesperson's laptop is running a
demo - either in a small virtual machine in the laptop, or on an
overloaded Windows box. Even though the customer might end up
running with 64GB, the
Joshua D. Drake wrote:
On Thu, 2008-12-04 at 10:20 -0800, Ron Mayer wrote:
Greg Smith wrote:
I'm not the sort to be too concerned myself that
the guy who thinks he's running a DW on a system with 64MB of RAM might
get bad settings, but it's a fair criticism to point that out as a problem.
In
On Thu, 2008-12-04 at 10:55 -0800, Ron Mayer wrote:
Joshua D. Drake wrote:
Although I get your point, that is a job for SQLite, not PostgreSQL.
PostgreSQL is not an end-all be-all solution and it is definitely not
designed to be embedded, which is essentially what you are suggesting
with
Joshua D. Drake [EMAIL PROTECTED] wrote:
Fair enough, then make sure you are demoing on a platform that can
handle PostgreSQL :)
There are a lot of good reasons for people to be running an instance
of PostgreSQL on a small machine, running it on a machine with other
software, or running
On Thu, 2008-12-04 at 14:05 -0600, Kevin Grittner wrote:
Joshua D. Drake [EMAIL PROTECTED] wrote:
Fair enough, then make sure you are demoing on a platform that can
handle PostgreSQL :)
There are a lot of good reasons for people to be running an instance
of PostgreSQL on a small
On Thu, 4 Dec 2008, Ron Mayer wrote:
OTOH there tends to be less DBA time available to tune the smaller demo
instances that come and go as sales people upgrade their laptops; so
improved automation would be much appreciated there.
I have a TODO list for things that might be interesting to add to
Greg Smith [EMAIL PROTECTED] wrote:
On Thu, 4 Dec 2008, Ron Mayer wrote:
OTOH there tends to be less DBA time available to tune the smaller
demo instances that come and go as sales people upgrade their laptops;
so improved automation would be much appreciated there.
I have a TODO list for
On Thu, 4 Dec 2008, Kevin Grittner wrote:
I think there needs to be some easy way to choose an option which
yields a configuration similar to what we've had in recent production
releases -- something that will start up and allow minimal testing on
even a small machine.
But that's the goal of
Greg Smith [EMAIL PROTECTED] wrote:
On Thu, 4 Dec 2008, Kevin Grittner wrote:
I think there needs to be some easy way to choose an option which
yields a configuration similar to what we've had in recent production
releases -- something that will start up and allow minimal testing on
even
Greg Smith wrote:
On Thu, 4 Dec 2008, Ron Mayer wrote:
OTOH there tends to be less DBA time available to tune the smaller
demo instances that come and go as sales people upgrade their laptops;
so improved automation would be much appreciated there.
I have a TODO list for things that might be
On Thu, Dec 4, 2008 at 5:11 PM, Greg Smith [EMAIL PROTECTED] wrote:
On Thu, 4 Dec 2008, Kevin Grittner wrote:
I think there needs to be some easy way to choose an option which
yields a configuration similar to what we've had in recent production
releases -- something that will start up and
On Thu, 4 Dec 2008, Robert Haas wrote:
Just let's please change it in both places, rather than letting
contrib/pgtune be a backdoor to get around not liking what initdb does.
And similarly with the other parameters...
Someone running pgtune has specifically asked for their database to be
tuned
On Thu, 2008-12-04 at 21:51 -0500, Greg Smith wrote:
On Thu, 4 Dec 2008, Robert Haas wrote:
Just let's please change it in both places, rather than letting
contrib/pgtune be a backdoor to get around not liking what initdb does.
And similarly with the other parameters...
Someone running
Looks like I need to add Python 2.5+Linux to my testing set. I did not
expect that the UNIX distributions of Python 2.5 would ship with wintypes.py
at all. I think I can fix this on the spot though. On line 40, you'll find
this bit:
except ImportError:
Change that to the following:
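The actual replacement text is cut off in this archive. The shape of the fix is presumably to treat ValueError the same as ImportError when the Windows-only module is unusable; a hypothetical sketch, not Greg's confirmed patch:

```python
try:
    # ctypes.wintypes only works on Windows; on some Unix builds of
    # Python the import fails with ValueError ("_type_ 'v' not
    # supported") rather than ImportError.
    from ctypes.wintypes import *
    windows_memory_detection = True
except (ImportError, ValueError):
    windows_memory_detection = False  # fall back to the POSIX code path
```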
On Wed, 2008-12-03 at 13:30 -0500, Robert Haas wrote:
Looks like I need to add Python 2.5+Linux to my testing set. I did not
expect that the UNIX distributions of Python 2.5 would ship with wintypes.py
at all. I think I can fix this on the spot though. On line 40, you'll find
this bit:
I'm not sure what mixed mode is supposed to be, but based on what
I've seen so far, I'm skeptical of the idea that encouraging people
to raise default_statistics_target to 50 and turn on
constraint_exclusion is reasonable.
Why?
Because both of those settings are strictly worse for my
Joshua D. Drake [EMAIL PROTECTED] writes:
On Wed, 2008-12-03 at 13:30 -0500, Robert Haas wrote:
I'm not sure what mixed mode is supposed to be, but based on what
I've seen so far, I'm skeptical of the idea that encouraging people
to raise default_statistics_target to 50 and turn on
Well did you have any response to what I posited before? I said mixed should
produce the same settings that the default initdb settings produce. At least
on a moderately low-memory machine that initdb targets.
I'm actually really skeptical of this whole idea of modes. The main
thing mode
I can see an argument about constraint_exclusion but
default_statistics_target I don't.
Why not? I don't want to accept a big increase in ANALYZE times (or
planning times, though I'm really not seeing that at this point)
without some benefit.
It seems unlikely that you would want 256 MB of
Joshua D. Drake [EMAIL PROTECTED] writes:
It also seems unlikely that you would hit 256MB of checkpoint segments
on a 100MB database before checkpoint_timeout and if you did, you
certainly did need them.
Remember postgresql only creates the segments when it needs them.
Should we change the
On Wed, Dec 3, 2008 at 4:41 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:
If you are concerned about the analyze time between 10, 50 and 150, I
would suggest that you are concerned about the wrong things. Remember
I can't rule that out. What things do you think I should be concerned
about?
On Wed, 2008-12-03 at 17:33 -0500, Robert Haas wrote:
On Wed, Dec 3, 2008 at 4:41 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:
If you are concerned about the analyze time between 10, 50 and 150, I
would suggest that you are concerned about the wrong things. Remember
I can't rule that out.
Robert Haas [EMAIL PROTECTED] wrote:
On Wed, Dec 3, 2008 at 4:41 PM, Joshua D. Drake
[EMAIL PROTECTED] wrote:
If you are concerned about the analyze time between 10, 50 and 150, I
would suggest that you are concerned about the wrong things. Remember
I can't rule that out. What things do
On Wed, 3 Dec 2008, Gregory Stark wrote:
It sure seems strange to me to have initdb which presumably is targeting a
mixed system -- where it doesn't know for sure what workload will be run --
produce a different set of values than the tuner on the same machine.
It's been a long time since the
Kevin Grittner [EMAIL PROTECTED] writes:
One more data point to try to help.
While the jump from a default_statistics_target from 10 to 1000
resulted in a plan time increase for a common query from 50 ms to 310
ms, at a target of 50 the plan time was 53 ms.
That sounds like it would be
On Thu, 2008-12-04 at 00:11 +, Gregory Stark wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
I started to do this for you last week but got side-tracked. Do you
have any time for this?
I can do it if you have a script.
So how big should a minimum postgres install be not including
On Wed, 3 Dec 2008, Robert Haas wrote:
I'm not sure if you've thought about this, but there is also a
difference between max_connections and maximum LIKELY connections.
It's actually an implicit assumption of the model Josh threw out if you
stare at the numbers. The settings for work_mem
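The kind of arithmetic being described -- dividing a per-query memory budget among the connections you actually expect to be active at once -- can be sketched as follows. The workload fractions and the likely-active divisor are invented placeholders, not pgtune's real formula:

```python
def suggest_work_mem(total_mem_mb, max_connections, workload="mixed"):
    """Toy model of a tuning tool's work_mem suggestion (in MB):
    budget a fraction of RAM for sort/hash memory, then divide it
    among the connections likely to run big queries concurrently."""
    fraction = {"web": 0.25, "mixed": 0.25, "dw": 0.5}[workload]  # assumed
    likely_active = max(1, max_connections // 4)  # assumed divisor
    return max(1, int(total_mem_mb * fraction / likely_active))
```

The point of the "likely" distinction is visible here: sizing by max_connections alone would quarter the suggestion on a server configured for many mostly-idle connections.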
On Wed, 3 Dec 2008, Guillaume Smet wrote:
- it would be really nice to make it work with Python 2.4 as RHEL 5 is
a Python 2.4 thing and it is a very widespread platform out there,
The 2.5 stuff is only required in order to detect memory on Windows. My
primary box is RHEL5 and runs 2.4, it
Greg Smith [EMAIL PROTECTED] writes:
On Wed, 3 Dec 2008, Gregory Stark wrote:
It sure seems strange to me to have initdb which presumably is targeting a
mixed system -- where it doesn't know for sure what workload will be run --
produce a different set of values than the tuner on the same
On Wed, 3 Dec 2008, Robert Haas wrote:
Then I tried -T web and got what seemed like a more reasonable set
of values. But I wasn't sure I needed that many connections, so I
added -c 150 to see how much difference that made. Kaboom!
That and the import errors fixed in the version attached
Greg Smith [EMAIL PROTECTED] writes:
Is it worse to suffer from additional query overhead if you're sloppy with
the tuning tool, or to discover additional partitions didn't work as you
expected?
Surely that's the same question we faced when deciding what the Postgres
default should be?
That
Joshua D. Drake [EMAIL PROTECTED] writes:
On Thu, 2008-12-04 at 00:11 +, Gregory Stark wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
I started to do this for you last week but got side-tracked. Do you
have any time for this?
I can do it if you have a script.
Well, I can send you
Gregory Stark escribió:
Joshua D. Drake [EMAIL PROTECTED] writes:
I don't think at any time I have said to myself, I am going to set this
parameter low so I don't fill up my disk. If I am saying that to myself
I have either greatly underestimated the hardware for the task. Consider
that
Joshua D. Drake [EMAIL PROTECTED] writes:
On Thu, 2008-12-04 at 00:11 +, Gregory Stark wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
I started to do this for you last week but got side-tracked. Do you
have any time for this?
I can do it if you have a script.
So how big should a
On Wed, 2008-12-03 at 22:17 -0300, Alvaro Herrera wrote:
Gregory Stark escribió:
Joshua D. Drake [EMAIL PROTECTED] writes:
I don't think at any time I have said to myself, I am going to set this
parameter low so I don't fill up my disk. If I am saying that to myself
I have either
On Thu, 4 Dec 2008, Gregory Stark wrote:
Right now, my program doesn't fiddle with any memory settings if you've got
less than 256MB of RAM.
What I'm suggesting is that you shouldn't have to special case this. That you
should expect whatever formulas you're using to produce the same values as
The idea of the mixed mode is that you want to reduce the odds someone will
get a massively wrong configuration if they're not paying attention. Is it
worse to suffer from additional query overhead if you're sloppy with the
tuning tool, or to discover additional partitions didn't work as you
What fun. I'm beginning to remember why nobody has ever managed to deliver
a community tool that helps with this configuration task before.
I have to say I really like this tool. It may not be perfect but it's
a lot easier than trying to do this analysis from scratch. And we are
really only
Greg Smith [EMAIL PROTECTED] writes:
On Thu, 4 Dec 2008, Gregory Stark wrote:
What I'm suggesting is that you shouldn't have to special case this. That you
should expect whatever formulas you're using to produce the same values as
initdb if they were run on the same machine initdb is
On Thu, 4 Dec 2008, Gregory Stark wrote:
Greg Smith [EMAIL PROTECTED] writes:
Is it worse to suffer from additional query overhead if you're sloppy with
the tuning tool, or to discover additional partitions didn't work as you
expected?
Surely that's the same question we faced when deciding
Greg Smith [EMAIL PROTECTED] writes:
On Thu, 4 Dec 2008, Gregory Stark wrote:
Greg Smith [EMAIL PROTECTED] writes:
Is it worse to suffer from additional query overhead if you're sloppy with
the tuning tool, or to discover additional partitions didn't work as you
expected?
Surely that's the
On Mon, Dec 1, 2008 at 9:32 PM, Greg Smith [EMAIL PROTECTED] wrote:
On Mon, 1 Dec 2008, Mark Wong wrote:
So then I attempted to see if there might have been difference between the
executing time of each individual query with the above parameters. The
queries that don't seem to be affected are
If we do though, it shouldn't default one way and then get randomly flipped by
a tool that has the same information to make its decision on. What I'm saying
is that mixed is the same information that initdb had about the workload.
+1.
If we do change this then I wonder if we need the
On Thu, 4 Dec 2008, Gregory Stark wrote:
My point was more that you could have a data warehouse on a
non-dedicated machine, you could have a web server on a non-dedicated
machine, or you could have a mixed server on a non-dedicated machine.
I should just finish the documentation, where there
I think the tests you could consider next is to graph the target going from
10 to 100 in steps of 10 just for those 5 queries. If it gradually
degrades, that's interesting but hard to nail down. But if there's a sharp
transition, getting an explain plan for the two sides of that should
On Wed, 3 Dec 2008, Mark Wong wrote:
http://207.173.203.223/~markwkm/pgsql/default_statistics_target/q2.png
http://207.173.203.223/~markwkm/pgsql/default_statistics_target/q9.png
http://207.173.203.223/~markwkm/pgsql/default_statistics_target/q17.png
Alvaro Herrera wrote:
Gregory Stark escribió:
Joshua D. Drake [EMAIL PROTECTED] writes:
I don't think at any time I have said to myself, I am going to set this
parameter low so I don't fill up my disk. If I am saying that to myself
I have either greatly underestimated the hardware for the
Greg Smith [EMAIL PROTECTED] writes:
... where the Power Test seems to oscillate between degrees of good and bad
behavior seemingly at random.
Are any of the queries complicated enough to trigger GEQO planning?
regards, tom lane
Gregory Stark wrote:
Tom Lane [EMAIL PROTECTED] writes:
Dann Corbit [EMAIL PROTECTED] writes:
I also do not believe that there is any value that will be the right
answer. But a table of data might be useful both for people who want to
toy with altering the values and also for those who
Greg,
On Mon, Dec 1, 2008 at 3:17 AM, Greg Smith [EMAIL PROTECTED] wrote:
./pgtune -i ~/data/postgresql.conf
First, thanks for your work: it will really help a lot of people to
have a decent default configuration.
A couple of comments from reading the code (I didn't run it yet):
- it would be
On Mon, Dec 1, 2008 at 3:21 AM, Greg Smith [EMAIL PROTECTED] wrote:
On Sun, 30 Nov 2008, Greg Smith wrote:
Memory detection works on recent (>= 2.5) versions of Python for Windows
now.
I just realized that the provided configuration is really not optimal for
Windows users because of the known
Dave Page wrote:
On Mon, Dec 1, 2008 at 3:21 AM, Greg Smith [EMAIL PROTECTED] wrote:
On Sun, 30 Nov 2008, Greg Smith wrote:
Memory detection works on recent (>= 2.5) versions of Python for Windows
now.
I just realized that the provided configuration is really not optimal for
Windows users
I just gave this a try and got:
$ ./pgtune
Traceback (most recent call last):
  File "./pgtune", line 20, in <module>
    from ctypes.wintypes import *
  File "/usr/lib/python2.5/ctypes/wintypes.py", line 21, in <module>
    class VARIANT_BOOL(_SimpleCData):
ValueError: _type_ 'v' not supported
This is
On Mon, 1 Dec 2008, Dave Page wrote:
It's going to be of little use to 99% of Windows users anyway as it's
written in Python. What was wrong with C?
It's 471 lines of Python code that leans heavily on that language's
Dictionary type to organize everything. Had I insisted on writing
Greg Smith [EMAIL PROTECTED] writes:
I'd ultimately like to use the Python version as a spec to produce a C
implementation, because that's the only path to get something like this
integrated into initdb itself.
It won't get integrated into initdb in any case: a standalone tool is
the correct
On Mon, 1 Dec 2008, Robert Haas wrote:
I just gave this a try and got:
$ ./pgtune
Traceback (most recent call last):
  File "./pgtune", line 20, in <module>
    from ctypes.wintypes import *
  File "/usr/lib/python2.5/ctypes/wintypes.py", line 21, in <module>
    class VARIANT_BOOL(_SimpleCData):
On Mon, 1 Dec 2008, Tom Lane wrote:
Greg Smith [EMAIL PROTECTED] writes:
I'd ultimately like to use the Python version as a spec to produce a C
implementation, because that's the only path to get something like this
integrated into initdb itself.
It won't get integrated into initdb in any
On Thu, Nov 13, 2008 at 11:53 AM, Tom Lane [EMAIL PROTECTED] wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
A lot of people have suggested raising our default_statistics_target,
and it has been rejected because there's some O(n^2) behavior in the
planner, and it makes ANALYZE slower, but
On Mon, 1 Dec 2008, Mark Wong wrote:
So then I attempted to see if there might have been difference between
the executing time of each individual query with the above parameters.
The queries that don't seem to be affected are Q1, Q4, Q12, Q13, and
Q15. Q17 suggests that anything higher than
Hi all,
I have some data [...]
Thanks for gathering this data.
The first thing I notice is that the two versions of Q17 that you are
running are actually not the exact same query - there are hard-coded
constants that are different in each case, and that matters. The
substituted parameter
On Tue, 18 Nov 2008, Josh Berkus wrote:
Regarding the level of default_stats_target, it sounds like people agree
that it ought to be raised for the DW use-case, but disagree how much.
If that's the case, what if we compromise at 50 for mixed and 100 for
DW?
That's what I ended up doing.
On Sun, 30 Nov 2008, Greg Smith wrote:
Memory detection works on recent (>= 2.5) versions of Python for Windows
now.
I just realized that the provided configuration is really not optimal for
Windows users because of the known limitations that prevent larger
shared_buffers settings from being
On Sun, Nov 30, 2008 at 09:17:37PM -0500, Greg Smith wrote:
That's what I ended up doing. The attached version of this script and its
data files (I dumped all the useful bits in the current HEAD pg_settings
for it to use) now hits all of the initial goals I had for a useful
working tool
On Mon, 1 Dec 2008, Martijn van Oosterhout wrote:
Do you have a check somewhere to see if this exceeds the total SYSV
memory allowed by the OS. Otherwise you've just output an unstartable
config. The output of /sbin/sysctl should tell you.
Something to address that is listed as the first
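A check along the lines Martijn suggests could look like this sketch: parse the `sysctl` output for `kernel.shmmax` and compare it against the proposed shared memory footprint. The 10% segment-overhead pad is an assumed figure, and a real check would also consult `kernel.shmall`:

```python
def parse_shmmax(sysctl_output):
    """Pull kernel.shmmax (bytes) out of `sysctl kernel.shmmax` output."""
    for line in sysctl_output.splitlines():
        if line.startswith("kernel.shmmax"):
            return int(line.split("=")[1].strip())
    return None

def fits_in_shmmax(shared_buffers_bytes, shmmax, overhead=1.1):
    # PostgreSQL's total shared segment is somewhat larger than
    # shared_buffers alone; pad by a rough 10% (assumed figure).
    return shared_buffers_bytes * overhead <= shmmax
```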
Even though we all agree default_statistics_target = 10 is too low,
proposing a 40X increase in the default value requires more evidence
than this. In particular, the prospect of a 1600-fold increase in
the typical cost of eqjoinsel() is a mite scary.
I just did some very quick testing of a
On Thu, Nov 27, 2008 at 05:15:04PM -0500, Robert Haas wrote:
A random thought: maybe the reason I'm not seeing any benefit is
because my tables are just too small - most contain at most a few
thousand rows, and some are much smaller. Maybe
default_statistics_target should vary with the table
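Robert's size-scaled idea can be sketched as below. The message is cut off before naming the upper cap, so 1000 here is purely a placeholder, as is the floor at the old default of 10:

```python
def size_scaled_stats_target(n_rows, cap=1000):
    """Heuristic as roughly described: 0.1% of rows up to a target of
    100, then 0.01% of the rows beyond that, bounded by an assumed cap."""
    first_tier_rows = 100 * 1000           # 0.1% of 100,000 rows = 100
    if n_rows <= first_tier_rows:
        return max(10, n_rows // 1000)     # 0.1%, floored at the default
    extra = (n_rows - first_tier_rows) // 10000   # 0.01% thereafter
    return min(cap, 100 + extra)
```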
Robert Haas [EMAIL PROTECTED] writes:
ANALYZE with default_statistics_target set to 10 takes 13 s. With
100, 92 s. With 1000, 289 s.
That is interesting. It would also be interesting to total up the time it
takes to run EXPLAIN (without ANALYZE) for a large number of queries.
I did start
On Thu, Nov 27, 2008 at 05:15:04PM -0500, Robert Haas wrote:
[...] Maybe
default_statistics_target should vary with the table size? Something
like, 0.1% of the rows to a maximum of 100... and then 0.01% of the
rows after that to some higher
Joshua D. Drake [EMAIL PROTECTED] writes:
On Tue, 2008-11-25 at 20:33 -0500, Tom Lane wrote:
So we really don't have any methodically-gathered evidence about the
effects of different stats settings. It wouldn't take a lot to convince
us to switch to a different default, I think, but it would
Tom Lane [EMAIL PROTECTED] writes:
Dann Corbit [EMAIL PROTECTED] writes:
I also do not believe that there is any value that will be the right
answer. But a table of data might be useful both for people who want to
toy with altering the values and also for those who want to set the
defaults.
On Tue, Nov 25, 2008 at 06:59:25PM -0800, Dann Corbit wrote:
I do have a statistics idea/suggestion (possibly useful with some future
PostgreSQL 9.x or something):
It is a simple matter to calculate lots of interesting univariate summary
statistics with a single pass over the data (perhaps
On Nov 25, 2008, at 8:59 PM, Dann Corbit wrote:
It is a simple matter to calculate lots of interesting univariate summary
statistics with a single pass over the data (perhaps during a vacuum
full).
I don't think that the problem we have is how to collect statistics
(well, except for
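What a single-pass univariate summary can look like, using Welford's online algorithm for the variance (illustrative only; this is not what PostgreSQL's ANALYZE actually computes):

```python
def one_pass_stats(values):
    """Count, min, max, mean, and sample variance in one scan --
    the sort of thing a sequential pass like VACUUM could piggyback on."""
    n = 0
    mean = m2 = 0.0
    lo = hi = None
    for x in values:
        n += 1
        if lo is None or x < lo:
            lo = x
        if hi is None or x > hi:
            hi = x
        delta = x - mean          # Welford update: numerically stable,
        mean += delta / n         # no need to store the values
        m2 += delta * (x - mean)
    variance = m2 / (n - 1) if n > 1 else 0.0
    return {"count": n, "min": lo, "max": hi,
            "mean": mean, "variance": variance}
```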
On Nov 25, 2008, at 7:06 PM, Gregory Stark wrote:
The thought occurs to me that we're looking at this from the wrong
side of the coin. I've never, ever seen query plan time pose a problem
with Postgres, even without using prepared statements.
I certainly have seen plan times be a
Decibel! [EMAIL PROTECTED] writes:
On Nov 25, 2008, at 7:06 PM, Gregory Stark wrote:
The thought occurs to me that we're looking at this from the wrong side of
the coin. I've never, ever seen query plan time pose a problem with Postgres,
even without using prepared statements.
I
Decibel! [EMAIL PROTECTED] wrote:
On Nov 25, 2008, at 7:06 PM, Gregory Stark wrote:
The thought occurs to me that we're looking at this from the wrong
side of the coin. I've never, ever seen query plan time pose a problem
with Postgres, even without using prepared statements.
I
Kevin Grittner [EMAIL PROTECTED] wrote:
I hadn't tried it lately, so I just gave it a go with switching from a
default statistics target of 10 with no overrides to 1000.
Oh, this was on 8.2.7, Linux, pretty beefy machine. Do you want the
whole set of config info and the hardware specs, or
On Nov 19, 2008, at 11:51 PM, Tom Lane wrote:
Dann Corbit [EMAIL PROTECTED] writes:
I think the idea that there IS a magic number is the problem.
No amount of testing is ever going to refute the argument that, under
some other workload, a different value might be better.
But that doesn't
Decibel! [EMAIL PROTECTED] writes:
The thought occurs to me that we're looking at this from the wrong
side of the coin. I've never, ever seen query plan time pose a
problem with Postgres, even without using prepared statements.
That tells more about the type of queries you tend to run than
Decibel! [EMAIL PROTECTED] writes:
Is there even a good way to find out what planning time was? Is there
a way to gather that stat for every query a session runs?
\timing
explain select ...
The thought occurs to me that we're looking at this from the wrong side of
the coin. I've