On Mon, Mar 10, 2014 at 3:16 PM, Kevin Grittner wrote:
> Andres Freund wrote:
> > On 2014-02-16 21:26:47 -0500, Robert Haas wrote:
>
> >> I don't really know about cpu_tuple_cost. Kevin's often
> >> advocated raising it, but I haven't heard anyone else advocate
> >> for that. I think we need data points from more people to know whether or not that's a good idea in general.
On 03/10/2014 03:16 PM, Kevin Grittner wrote:
> I only have anecdotal evidence, though. I have seen it help dozens
> of times, and have yet to see it hurt. That said, most people on
> this list are probably capable of engineering a benchmark which
> will show whichever result they would prefer.
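Kevin's suggestion is cheap to trial per-session before touching postgresql.conf; a minimal psql sketch, where 0.03 is purely an illustrative value (the shipped default is 0.01), not a number endorsed in this thread:

    SET cpu_tuple_cost = 0.03;              -- illustrative value; default is 0.01
    EXPLAIN SELECT count(*) FROM pg_class;  -- compare estimated costs/plans against the default
    RESET cpu_tuple_cost;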
Andres Freund wrote:
> On 2014-02-16 21:26:47 -0500, Robert Haas wrote:
>> I don't really know about cpu_tuple_cost. Kevin's often
>> advocated raising it, but I haven't heard anyone else advocate
>> for that. I think we need data points from more people to know
>> whether or not that's a good idea in general.
On 02/18/2014 12:19 AM, Andres Freund wrote:
> On 2014-02-16 21:26:47 -0500, Robert Haas wrote:
>> I don't think anyone objected to increasing the defaults for work_mem
>> and maintenance_work_mem by 4x, and a number of people were in favor,
>> so I think we should go ahead and do that. If you'd like to do the honors, by all means!
On Mon, Feb 24, 2014 at 1:05 PM, Bruce Momjian wrote:
> On Mon, Feb 17, 2014 at 11:14:33AM -0500, Bruce Momjian wrote:
>> On Sun, Feb 16, 2014 at 09:26:47PM -0500, Robert Haas wrote:
>> > > So, would anyone like me to create patches for any of these items before
>> > > we hit 9.4 beta? We have added autovacuum_work_mem, and increasing work_mem and maintenance_work_mem by 4x is a simple operation. Not sure about th
On Mon, Feb 17, 2014 at 11:14:33AM -0500, Bruce Momjian wrote:
> On Sun, Feb 16, 2014 at 09:26:47PM -0500, Robert Haas wrote:
> > > So, would anyone like me to create patches for any of these items before
> > > we hit 9.4 beta? We have added autovacuum_work_mem, and increasing
> > > work_mem and maintenance_work_mem by 4x is a simple operation. Not sure about th
On Sun, Feb 16, 2014 at 6:26 PM, Robert Haas wrote:
> The current bgwriter_lru_maxpages value limits the background writer
> to a maximum of 4MB/s. If one imagines shared_buffers = 8GB, that
> starts to seem rather low, but I don't have a good feeling for what a
> better value would be.
>
I
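For reference, the arithmetic behind that 4MB/s ceiling, from the shipped defaults of the era:

    # bgwriter defaults behind the 4MB/s figure above
    bgwriter_delay = 200ms         # 5 cleaning rounds per second
    bgwriter_lru_maxpages = 100    # at most 100 pages (8kB each) written per round
    # 100 pages * 8kB * 5 rounds/s = 4000kB/s, i.e. roughly 4MB/s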
On Mon, Feb 17, 2014 at 8:31 AM, Stephen Frost wrote:
>> Actually, I object to increasing work_mem by default. In my experience
>> most of the untuned servers are backing some kind of web application and
>> often run with far too many connections. Increasing work_mem for those
>> is dangerous.
>
>
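A back-of-the-envelope sketch of the risk described above, with illustrative numbers not taken from the thread (work_mem is a per-sort/per-hash limit, not a per-connection one):

    max_connections = 400   # an over-provisioned web stack, as described above
    work_mem = 4MB          # the proposed new default
    # a burst of queries each running 2-3 sorts/hashes can approach
    # 400 * 3 * 4MB = 4.8GB of backend memory, on top of shared_buffers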
On 18/02/14 03:48, Tom Lane wrote:
Gavin Flower writes:
On 17/02/14 15:26, Robert Haas wrote:
I don't really know about cpu_tuple_cost. Kevin's often advocated
raising it, but I haven't heard anyone else advocate for that. I
think we need data points from more people to know whether or not
that's a good idea in general.
On Mon, Feb 17, 2014 at 07:39:47PM +0100, Andres Freund wrote:
> On 2014-02-17 13:33:17 -0500, Robert Haas wrote:
> > On Mon, Feb 17, 2014 at 11:33 AM, Andres Freund wrote:
> > >> And I still disagree with this- even in those cases. Those same untuned
> > >> servers are running dirt-simple queries 90% of the time and they won't use any more memory from this,
On 2014-02-17 13:33:17 -0500, Robert Haas wrote:
> On Mon, Feb 17, 2014 at 11:33 AM, Andres Freund wrote:
> >> And I still disagree with this- even in those cases. Those same untuned
> >> servers are running dirt-simple queries 90% of the time and they won't
> >> use any more memory from this,
On Mon, Feb 17, 2014 at 11:33 AM, Andres Freund wrote:
> On 2014-02-17 11:31:56 -0500, Stephen Frost wrote:
>> * Andres Freund (and...@2ndquadrant.com) wrote:
>> > On 2014-02-16 21:26:47 -0500, Robert Haas wrote:
>> > > I don't think anyone objected to increasing the defaults for work_mem
>> > > and maintenance_work_mem by 4x, and a number of people were in favor, so I think we should go ahead and do that. If you'd like to do the honors, by all means!
Andres Freund writes:
> On 2014-02-17 12:23:58 -0500, Robert Haas wrote:
>> I think you may be out-voted.
> I realize that, but I didn't want to let the "I don't think anyone
> objected" stand :)
FWIW, I think we need to be pretty gradual about this sort of thing,
because push-back from the field
On 2014-02-17 12:23:58 -0500, Robert Haas wrote:
> On Mon, Feb 17, 2014 at 11:19 AM, Andres Freund wrote:
> > On 2014-02-16 21:26:47 -0500, Robert Haas wrote:
> >> I don't think anyone objected to increasing the defaults for work_mem
> >> and maintenance_work_mem by 4x, and a number of people were in favor, so I think we should go ahead and do that. If you'd like to do the honors, by all means!
On Mon, Feb 17, 2014 at 11:19 AM, Andres Freund wrote:
> On 2014-02-16 21:26:47 -0500, Robert Haas wrote:
>> I don't think anyone objected to increasing the defaults for work_mem
>> and maintenance_work_mem by 4x, and a number of people were in favor,
>> so I think we should go ahead and do that.
On 2014-02-17 11:31:56 -0500, Stephen Frost wrote:
> * Andres Freund (and...@2ndquadrant.com) wrote:
> > On 2014-02-16 21:26:47 -0500, Robert Haas wrote:
> > > I don't think anyone objected to increasing the defaults for work_mem
> > > and maintenance_work_mem by 4x, and a number of people were in favor, so I think we should go ahead and do that. If you'd like to do the honors, by all means!
* Andres Freund (and...@2ndquadrant.com) wrote:
> On 2014-02-16 21:26:47 -0500, Robert Haas wrote:
> > I don't think anyone objected to increasing the defaults for work_mem
> > and maintenance_work_mem by 4x, and a number of people were in favor,
> > so I think we should go ahead and do that. If you'd like to do the honors, by all means!
On 2014-02-16 21:26:47 -0500, Robert Haas wrote:
> I don't think anyone objected to increasing the defaults for work_mem
> and maintenance_work_mem by 4x, and a number of people were in favor,
> so I think we should go ahead and do that. If you'd like to do the
> honors, by all means!
Actually, I object to increasing work_mem by default. In my experience most of the untuned servers are backing some kind of web application and often run with far too many connections. Increasing work_mem for those is dangerous.
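Concretely, the 4x proposal amounts to the following change from the 9.3-era defaults (a conf sketch; the thread itself never wrote these out as settings):

    work_mem = 4MB                 # up from 1MB
    maintenance_work_mem = 64MB    # up from 16MB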
On Sun, Feb 16, 2014 at 09:26:47PM -0500, Robert Haas wrote:
> > So, would anyone like me to create patches for any of these items before
> > we hit 9.4 beta? We have added autovacuum_work_mem, and increasing
> > work_mem and maintenance_work_mem by 4x is a simple operation. Not sure
> > about th
Gavin Flower writes:
> On 17/02/14 15:26, Robert Haas wrote:
>> I don't really know about cpu_tuple_cost. Kevin's often advocated
>> raising it, but I haven't heard anyone else advocate for that. I
>> think we need data points from more people to know whether or not
>> that's a good idea in general.
On 17/02/14 15:26, Robert Haas wrote:
On Thu, Feb 13, 2014 at 3:34 PM, Bruce Momjian wrote:
On Fri, Oct 11, 2013 at 03:39:51PM -0700, Kevin Grittner wrote:
Josh Berkus wrote:
On 10/11/2013 01:11 PM, Bruce Momjian wrote:
In summary, I think we need to:
* decide on new defaults for work_mem and maintenance_work_mem
On 02/16/2014 09:26 PM, Robert Haas wrote:
> I don't really know about cpu_tuple_cost. Kevin's often advocated
> raising it, but I haven't heard anyone else advocate for that. I
> think we need data points from more people to know whether or not
> that's a good idea in general.
In 10 years of tuning
On Thu, Feb 13, 2014 at 3:34 PM, Bruce Momjian wrote:
> On Fri, Oct 11, 2013 at 03:39:51PM -0700, Kevin Grittner wrote:
>> Josh Berkus wrote:
>> > On 10/11/2013 01:11 PM, Bruce Momjian wrote:
>> >> In summary, I think we need to:
>> >>
>> >> * decide on new defaults for work_mem and maintenance_work_mem
On Fri, Oct 11, 2013 at 03:39:51PM -0700, Kevin Grittner wrote:
> Josh Berkus wrote:
> > On 10/11/2013 01:11 PM, Bruce Momjian wrote:
> >> In summary, I think we need to:
> >>
> >> * decide on new defaults for work_mem and maintenance_work_mem
> >> * add an initdb flag to allow users/packagers to set shared_buffers?
All,
So, I did an informal survey last night a SFPUG, among about 30
PostgreSQL DBAs and developers. While hardly a scientific sample, it's
a data point on what we're looking at for servers.
Out of the 30, 6 had one or more production instances of PostgreSQL
running on machines or VMs with less
On Thu, Oct 17, 2013 at 7:22 AM, Robert Haas wrote:
> On Wed, Oct 16, 2013 at 5:14 PM, Josh Berkus wrote:
>> On 10/16/2013 01:25 PM, Andrew Dunstan wrote:
>>> Andres has just been politely pointing out to me that my knowledge of
>>> memory allocators is a little out of date (i.e. by a decade or two), and that this memory is not in fact likely to be held for a long time, at least on most modern systems.
On 10/17/2013 10:33 AM, Jeff Janes wrote:
A lot. A whole lot, more than what most people have in production
with more than that. You are forgetting a very large segment of the
population who run... VMs.
Why don't we just have 3 default config files:
2GB memory
4GB memory
On Thu, Oct 17, 2013 at 9:03 AM, Joshua D. Drake wrote:
>
> On 10/17/2013 08:55 AM, Kevin Grittner wrote:
>
>>
>> Robert Haas wrote:
>>
>> I still think my previous proposal of increasing the defaults for
>>> work_mem and maintenance_work_mem by 4X would serve many more
>>> people well than it would serve poorly. I haven't heard anyone disagree with that notion. Does anyone disagree? Should we do it?
On 10/17/2013 09:49 AM, Robert Haas wrote:
A lot. A whole lot, more than what most people have in production with more
than that. You are forgetting a very large segment of the population who
run... VMs.
That's true, but are you actually arguing for keeping work_mem at 1MB?
Even on a VM with
On Thu, Oct 17, 2013 at 12:03 PM, Joshua D. Drake wrote:
> On 10/17/2013 08:55 AM, Kevin Grittner wrote:
>> Robert Haas wrote:
>>
>>> I still think my previous proposal of increasing the defaults for
>>> work_mem and maintenance_work_mem by 4X would serve many more
>>> people well than it would serve poorly. I haven't heard anyone disagree with that notion. Does anyone disagree? Should we do it?
JD,
> A lot. A whole lot, more than what most people have in production with
> more than that. You are forgetting a very large segment of the
> population who run... VMs.
Actually, even a "mini" AWS instance has 1GB of RAM. And nobody who
uses a "micro" is going to expect it to perform well under
On 10/17/2013 08:55 AM, Kevin Grittner wrote:
Robert Haas wrote:
I still think my previous proposal of increasing the defaults for
work_mem and maintenance_work_mem by 4X would serve many more
people well than it would serve poorly. I haven't heard anyone
disagree with that notion. Does anyone disagree? Should we do it?
Robert Haas wrote:
> I still think my previous proposal of increasing the defaults for
> work_mem and maintenance_work_mem by 4X would serve many more
> people well than it would serve poorly. I haven't heard anyone
> disagree with that notion. Does anyone disagree? Should we do
> it?
I think
On Wed, Oct 16, 2013 at 5:14 PM, Josh Berkus wrote:
> On 10/16/2013 01:25 PM, Andrew Dunstan wrote:
>> Andres has just been politely pointing out to me that my knowledge of
>> memory allocators is a little out of date (i.e. by a decade or two), and
>> that this memory is not in fact likely to be held for a long time, at least on most modern systems.
On 10/16/2013 01:25 PM, Andrew Dunstan wrote:
> Andres has just been politely pointing out to me that my knowledge of
> memory allocators is a little out of date (i.e. by a decade or two), and
> that this memory is not in fact likely to be held for a long time, at
> least on most modern systems. Th
On Wed, Oct 16, 2013 at 5:30 PM, Bruce Momjian wrote:
> On Wed, Oct 16, 2013 at 04:25:37PM -0400, Andrew Dunstan wrote:
>>
>> On 10/09/2013 11:06 AM, Andrew Dunstan wrote:
>> >
>> >
>> >
>> >The assumption that each connection won't use lots of work_mem is
>> >also false, I think, especially in these days of connection poolers.
On Wed, Oct 16, 2013 at 04:25:37PM -0400, Andrew Dunstan wrote:
>
> On 10/09/2013 11:06 AM, Andrew Dunstan wrote:
> >
> >
> >
> >The assumption that each connection won't use lots of work_mem is
> >also false, I think, especially in these days of connection
> >poolers.
> >
> >
>
>
> Andres has just been politely pointing out to me that my knowledge of memory allocators is a little out of date (i.e. by a decade or two), and that this memory is not in fact likely to be held for a long time, at least on most modern systems.
On 10/09/2013 11:06 AM, Andrew Dunstan wrote:
The assumption that each connection won't use lots of work_mem is also
false, I think, especially in these days of connection poolers.
Andres has just been politely pointing out to me that my knowledge of
memory allocators is a little out of date (i.e. by a decade or two), and that this memory is not in fact likely to be held for a long time, at least on most modern systems.
From: "Andres Freund"
I've seen several sites shutting down because of forgotten prepared
transactions causing bloat and anti-wraparound shutdowns.
From: "Magnus Hagander"
I would say *using* an external transaction manager *is* the irregular
thing. The current default *is* friendly for normal
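The forgotten prepared transactions described above are visible in the standard pg_prepared_xacts view; a quick check along these lines shows anything left dangling:

    SELECT gid, prepared, owner, database
      FROM pg_prepared_xacts
     ORDER BY prepared;
    -- anything old here pins the xmin horizon, causing the bloat and
    -- anti-wraparound shutdowns described above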
On Tue, Oct 15, 2013 at 7:32 PM, Andres Freund wrote:
> On 2013-10-15 19:29:50 +0200, Magnus Hagander wrote:
>> On Tue, Oct 15, 2013 at 7:26 PM, Andres Freund wrote:
>> > On 2013-10-15 10:19:06 -0700, Josh Berkus wrote:
>> >> On 10/15/2013 05:52 AM, Magnus Hagander wrote:
>> >> > But the argument about being friendly for new users should definitely have us change wal_level and max_wal_senders.
On 2013-10-15 19:29:50 +0200, Magnus Hagander wrote:
> On Tue, Oct 15, 2013 at 7:26 PM, Andres Freund wrote:
> > On 2013-10-15 10:19:06 -0700, Josh Berkus wrote:
> >> On 10/15/2013 05:52 AM, Magnus Hagander wrote:
> >> > But the argument about being friendly for new users should definitely
> >> > have us change wal_level and max_wal_senders.
On Tue, Oct 15, 2013 at 7:26 PM, Andres Freund wrote:
> On 2013-10-15 10:19:06 -0700, Josh Berkus wrote:
>> On 10/15/2013 05:52 AM, Magnus Hagander wrote:
>> > But the argument about being friendly for new users should definitely
>> > have us change wal_level and max_wal_senders.
>>
>> +1 for having replication supported out-of-the-box aside from pg_hba.conf.
On 2013-10-15 10:19:06 -0700, Josh Berkus wrote:
> On 10/15/2013 05:52 AM, Magnus Hagander wrote:
> > But the argument about being friendly for new users should definitely
> > have us change wal_level and max_wal_senders.
>
> +1 for having replication supported out-of-the-box aside from pg_hba.conf.
On 10/15/2013 05:52 AM, Magnus Hagander wrote:
> But the argument about being friendly for new users should definitely
> have us change wal_level and max_wal_senders.
+1 for having replication supported out-of-the-box aside from pg_hba.conf.
To put it another way: users are more likely to care about
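A sketch of what out-of-the-box replication support would mean in 9.x-era settings (the thread did not settle on exact values; these are illustrative):

    wal_level = hot_standby    # up from the default of 'minimal'
    max_wal_senders = 3        # up from 0; enough for pg_basebackup plus a standby
    # plus a 'replication' entry in pg_hba.conf, the one piece noted above
    # as still needing manual action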
On Tue, Oct 15, 2013 at 2:47 PM, MauMau wrote:
> From: "Dimitri Fontaine"
>
>> The reason why that parameter default has changed from 5 to 0 is that
>> some people would mistakenly use a prepared transaction without a
>> transaction manager. Few only people are actually using a transaction
>> manager that it's better to have them have to set PostgreSQL.
From: "Dimitri Fontaine"
The reason why that parameter default has changed from 5 to 0 is that
some people would mistakenly use a prepared transaction without a
transaction manager. Few only people are actually using a transaction
manager that it's better to have them have to set PostgreSQL.
I
On 2013-10-15 21:41:18 +0900, MauMau wrote:
> Likewise, non-zero max_prepared_transactons would improve the
> impression of PostgreSQL (for limited number of users, though), and it
> wouldn't do any harm.
I've seen several sites shutting down because of forgotten prepared
> transactions causing bloat and anti-wraparound shutdowns.
From: "Magnus Hagander"
On Oct 12, 2013 2:13 AM, "MauMau" wrote:
I'm not sure if many use XA features, but I saw the questions and answer
a few times, IIRC. In the trouble situation, PostgreSQL outputs an
intuitive message like "increase max_prepared_transactions", so many users
might possibly
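The message MauMau is paraphrasing looks like this with the current default of 0 (a psql session sketch; 'xa-demo' is an arbitrary illustrative identifier):

    test=> BEGIN;
    BEGIN
    test=> PREPARE TRANSACTION 'xa-demo';
    ERROR:  prepared transactions are disabled
    HINT:  Set max_prepared_transactions to a nonzero value.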
On 10/14/13 8:18 AM, Robert Haas wrote:
On Sat, Oct 12, 2013 at 3:07 AM, Magnus Hagander wrote:
On Oct 11, 2013 10:23 PM, "Josh Berkus" wrote:
On 10/11/2013 01:11 PM, Bruce Momjian wrote:
In summary, I think we need to:
* decide on new defaults for work_mem and maintenance_work_mem
* add an initdb flag to allow users/packagers to set shared_buffers?
On Sat, Oct 12, 2013 at 3:07 AM, Magnus Hagander wrote:
> On Oct 11, 2013 10:23 PM, "Josh Berkus" wrote:
>> On 10/11/2013 01:11 PM, Bruce Momjian wrote:
>> > In summary, I think we need to:
>> >
>> > * decide on new defaults for work_mem and maintenance_work_mem
>> > * add an initdb flag to allow users/packagers to set shared_buffers?
On 2013-10-12 09:04:55 +0200, Magnus Hagander wrote:
> Frankly, I think we'd help 1000 times more users if we enabled a few wal
> writers by default and jumped the wal level. Mainly so they could run one
> off base backup. That's used by orders of magnitude more users than XA.
Yes, I've thought about
Magnus Hagander writes:
> Frankly, I think we'd help 1000 times more users if we enabled a few wal
> writers by default and jumped the wal level. Mainly so they could run one
> off base backup. That's used by orders of magnitude more users than XA.
+1, or += default max_wal_senders actually ;-)
"MauMau" writes:
> I understand this problem occurs only when the user configured the
> application server to use distributed transactions, the application server
> crashed between prepare and commit/rollback, and the user doesn't recover
> the application server. So only improper operation produces
On Oct 11, 2013 10:23 PM, "Josh Berkus" wrote:
>
> On 10/11/2013 01:11 PM, Bruce Momjian wrote:
> > In summary, I think we need to:
> >
> > * decide on new defaults for work_mem and maintenance_work_mem
> > * add an initdb flag to allow users/packagers to set shared_buffers?
> > * add an autovacuum_work_mem setting?
On Oct 12, 2013 2:13 AM, "MauMau" wrote:
>
> From: "Bruce Momjian"
>>
>> On Thu, Oct 10, 2013 at 11:01:52PM +0900, MauMau wrote:
>>>
>>> Although this is not directly related to memory, could you set
>>> max_prepared_transactions = max_connections at initdb time? People
>>> must feel frustrated when they can't run applications on a Java or .NET application server
From: "Dimitri Fontaine"
"MauMau" writes:
Although this is not directly related to memory, could you set
max_prepared_transactions = max_connections at initdb time? People must feel frustrated when they can't run applications on a Java or .NET application server
You really need to have a transaction manager around when issuing
prepared transaction as failing to commit/rollback
From: "Bruce Momjian"
On Thu, Oct 10, 2013 at 11:01:52PM +0900, MauMau wrote:
Although this is not directly related to memory, could you set
max_prepared_transactions = max_connections at initdb time? People
must feel frustrated when they can't run applications on a Java or
.NET application server
Josh Berkus wrote:
> On 10/11/2013 01:11 PM, Bruce Momjian wrote:
>> In summary, I think we need to:
>>
>> * decide on new defaults for work_mem and maintenance_work_mem
>> * add an initdb flag to allow users/packagers to set shared_buffers?
>> * add an autovacuum_work_mem setting?
>> * change the default for temp_buffers?
On 10/11/2013 01:11 PM, Bruce Momjian wrote:
> In summary, I think we need to:
>
> * decide on new defaults for work_mem and maintenance_work_mem
> * add an initdb flag to allow users/packagers to set shared_buffers?
> * add an autovacuum_work_mem setting?
> * change the default for temp_buffers?
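The autovacuum_work_mem item on this list behaves as described later in the thread: the default defers to maintenance_work_mem, and only users who set it get a separate cap for autovacuum workers. As conf lines:

    autovacuum_work_mem = -1       # default: fall back to maintenance_work_mem
    #autovacuum_work_mem = 256MB   # illustrative: cap autovacuum workers separately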
On Thu, Oct 10, 2013 at 10:20:36PM -0700, Josh Berkus wrote:
> Robert,
>
> >> The counter-proposal to "auto-tuning" is just to raise the default for
> >> work_mem to 4MB or 8MB. Given that Bruce's current formula sets it at
> >> 6MB for a server with 8GB RAM, I don't really see the benefit of going to a whole lot of code and formulas in order to end up at a figure
On Thu, Oct 10, 2013 at 9:41 PM, Christopher Browne wrote:
> On Thu, Oct 10, 2013 at 12:28 PM, Bruce Momjian wrote:
>> How do we handle the Python dependency, or is this all to be done in
>> some other language? I certainly am not ready to take on that job.
>
> I should think it possible to reimplement it in C. It was considerably useful to start by implementing
Robert,
>> The counter-proposal to "auto-tuning" is just to raise the default for
>> work_mem to 4MB or 8MB. Given that Bruce's current formula sets it at
>> 6MB for a server with 8GB RAM, I don't really see the benefit of going
>> to a whole lot of code and formulas in order to end up at a figure
On Thu, Oct 10, 2013 at 6:27 PM, Josh Berkus wrote:
>> More generally, Josh has made repeated comments that various proposed
>> value/formulas for work_mem are too low, but obviously the people who
>> suggested them didn't think so. So I'm a bit concerned that we don't
>> all agree on what the end goal of this activity looks like.
On Thu, Oct 10, 2013 at 6:36 PM, Bruce Momjian wrote:
> Patch attached.
ISTM that we have broad consensus that doing this at initdb time is
more desirable than doing it in the server on the fly. Not everyone
agrees with that (you don't, for instance) but there were many, many
votes in favor of t
On 10/10/13 9:44 AM, MauMau wrote:
From: "Robert Haas"
On Thu, Oct 10, 2013 at 1:23 AM, Magnus Hagander wrote:
I think it would be even simpler, and more reliable, to start with the
parameter to initdb - I like that. But instead of having it set a new
variable based on that and then autotune
On Thu, Oct 10, 2013 at 03:27:17PM -0700, Josh Berkus wrote:
>
> > More generally, Josh has made repeated comments that various proposed
> > value/formulas for work_mem are too low, but obviously the people who
> > suggested them didn't think so. So I'm a bit concerned that we don't
> > all agree on what the end goal of this activity looks like.
On Thu, Oct 10, 2013 at 03:40:17PM -0700, Josh Berkus wrote:
>
> >> I don't follow that. Why would using a connection pooler change the
> >> multiples
> >> of work_mem that a connection would use?
> >
> > I assume that a connection pooler would keep processes running longer,
> > so even if they were not all using work_mem, they would have that memory mapped into the
On Thu, Oct 10, 2013 at 02:44:12PM -0400, Peter Eisentraut wrote:
> On 10/10/13 11:31 AM, Bruce Momjian wrote:
> > Let me walk through the idea of adding an available_mem setting, that
> > Josh suggested, and which I think addresses Robert's concern about
> > larger shared_buffers and Windows servers.
>> I don't follow that. Why would using a connection pooler change the
>> multiples
>> of work_mem that a connection would use?
>
> I assume that a connection pooler would keep processes running longer,
> so even if they were not all using work_mem, they would have that memory
> mapped into the
On Thu, Oct 10, 2013 at 11:18:28AM -0700, Josh Berkus wrote:
> Bruce,
>
> >> That's way low, and frankly it's not worth bothering with this if all
> >> we're going to get is an incremental increase. In that case, let's just
> >> set the default to 4MB like Robert suggested.
> >
> > Uh, well, 100 backends at 6MB gives us 600MB, and if each backend uses 3x work_mem, th
On Thu, Oct 10, 2013 at 11:14:27AM -0700, Jeff Janes wrote:
> The assumption that each connection won't use lots of work_mem is also
> false, I think, especially in these days of connection poolers.
>
>
> I don't follow that. Why would using a connection pooler change the multiples
> of work_mem that a connection would use?
> More generally, Josh has made repeated comments that various proposed
> value/formulas for work_mem are too low, but obviously the people who
> suggested them didn't think so. So I'm a bit concerned that we don't
> all agree on what the end goal of this activity looks like.
The counter-proposal to "auto-tuning" is just to raise the default for work_mem to 4MB or 8MB.
On Thu, Oct 10, 2013 at 3:41 PM, Christopher Browne wrote:
> On Thu, Oct 10, 2013 at 12:28 PM, Bruce Momjian wrote:
>> How do we handle the Python dependency, or is this all to be done in
>> some other language? I certainly am not ready to take on that job.
>
> I should think it possible to reimplement it in C. It was considerably useful to start by implementing
On Thu, Oct 10, 2013 at 12:28 PM, Bruce Momjian wrote:
> How do we handle the Python dependency, or is this all to be done in
> some other language? I certainly am not ready to take on that job.
I should think it possible to reimplement it in C. It was considerably
useful to start by implementing
On Thu, Oct 10, 2013 at 8:46 PM, Robert Haas wrote:
> On Thu, Oct 10, 2013 at 2:45 PM, Josh Berkus wrote:
>> On 10/10/2013 11:41 AM, Robert Haas wrote:
>>> tunedb --available-memory=32GB
>>>
>>> ...and it will print out a set of proposed configuration settings. If
>>> we want a mode that rewrites the configuration file, we could have:
>>>
>>> tunedb --available-memory=32GB --rewrite-config-file=$PATH
On 10/10/13 11:45 AM, Bruce Momjian wrote:
> I think the big win for a tool would be to query the user about how they
> are going to be using Postgres, and that can then spit out values the
> user can add to postgresql.conf, or to a config file that is included at
> the end of postgresql.conf.
I t
On Wed, Oct 9, 2013 at 10:21 PM, Magnus Hagander wrote:
>> Well, the Postgres defaults won't really change, because the default
>> vacuum_work_mem will be -1, which will have vacuum defer to
>> maintenance_work_mem. Under this scheme, vacuum only *prefers* to get
>> bound working memory size from
On 10/10/2013 11:41 AM, Robert Haas wrote:
> tunedb --available-memory=32GB
>
> ...and it will print out a set of proposed configuration settings. If
> we want a mode that rewrites the configuration file, we could have:
>
> tunedb --available-memory=32GB --rewrite-config-file=$PATH
>
> ...but t
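To make the proposal concrete, a hypothetical session with the tool might look like this; the tool exists only as a proposal in this thread, and the output values are illustrative, loosely following the common 25%-of-RAM heuristic for shared_buffers rather than any formula settled here:

    $ tunedb --available-memory=32GB
    shared_buffers = 8GB
    work_mem = 16MB
    maintenance_work_mem = 512MB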
On Thu, Oct 10, 2013 at 8:41 PM, Robert Haas wrote:
> On Thu, Oct 10, 2013 at 12:28 PM, Bruce Momjian wrote:
>> On Thu, Oct 10, 2013 at 12:00:54PM -0400, Stephen Frost wrote:
>>> * Bruce Momjian (br...@momjian.us) wrote:
>>> > Well, I like the idea of initdb calling the tool, though the tool then would need to be in C probably as we can't require python for initdb.
On Thu, Oct 10, 2013 at 2:45 PM, Josh Berkus wrote:
> On 10/10/2013 11:41 AM, Robert Haas wrote:
>> tunedb --available-memory=32GB
>>
>> ...and it will print out a set of proposed configuration settings. If
>> we want a mode that rewrites the configuration file, we could have:
>>
>> tunedb --available-memory=32GB --rewrite-config-file=$PATH
On 10/10/13 11:31 AM, Bruce Momjian wrote:
> Let me walk through the idea of adding an available_mem setting, that
> Josh suggested, and which I think addresses Robert's concern about
> larger shared_buffers and Windows servers.
I think this is a promising idea. available_mem could even be set
automatically
On Thu, Oct 10, 2013 at 11:43 AM, Robert Haas wrote:
> On Thu, Oct 10, 2013 at 1:37 PM, Josh Berkus wrote:
>> So, the question is: can we reasonably determine, at initdb time, how
>> much RAM the system has?
>
> As long as you are willing to write platform-dependent code, yes.
That's why trying
On Thu, Oct 10, 2013 at 11:41 AM, Robert Haas wrote:
> I don't see why it can't be done in C. The server is written in C,
> and so is initdb. So no matter where we do this, it's gonna be in C.
> Where does Python enter into it?
I mentioned that pgtune was written in Python, but as you say that'
On Thu, Oct 10, 2013 at 1:37 PM, Josh Berkus wrote:
> So, the question is: can we reasonably determine, at initdb time, how
> much RAM the system has?
As long as you are willing to write platform-dependent code, yes.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Thu, Oct 10, 2013 at 12:28 PM, Bruce Momjian wrote:
> On Thu, Oct 10, 2013 at 12:00:54PM -0400, Stephen Frost wrote:
>> * Bruce Momjian (br...@momjian.us) wrote:
>> > Well, I like the idea of initdb calling the tool, though the tool then
>> > would need to be in C probably as we can't require python for initdb.
> It also doesn't address my point that, if we are worst-case-scenario
> default-setting, we're going to end up with defaults which aren't
> materially different from the current defaults. In which case, why even
> bother with this whole exercise?
Oh, and let me reiterate: the way to optimize work_mem
Bruce,
>> That's way low, and frankly it's not worth bothering with this if all
>> we're going to get is an incremental increase. In that case, let's just
>> set the default to 4MB like Robert suggested.
>
> Uh, well, 100 backends at 6MB gives us 600MB, and if each backend uses
> 3x work_mem, th
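Bruce's arithmetic, spelled out (the 3x multiplier is his allowance for a single backend running several work_mem-sized sort/hash operations at once):

    100 backends *     6MB work_mem = 600MB
    100 backends * 3 * 6MB work_mem = 1.8GB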
On Wed, Oct 9, 2013 at 8:06 AM, Andrew Dunstan wrote:
>
> On 10/09/2013 10:45 AM, Bruce Momjian wrote:
>
>> On Wed, Oct 9, 2013 at 04:40:38PM +0200, Pavel Stehule wrote:
>>
>>> Effectively, if every session uses one full work_mem, you end up
>>> with
>>> total work_mem usage equal to s
Bruce,
* Bruce Momjian (br...@momjian.us) wrote:
> On Thu, Oct 10, 2013 at 12:00:54PM -0400, Stephen Frost wrote:
> > I'm really not impressed with this argument. Either the user is going
> > to go and modify the config file, in which case I would hope that they'd
> > at least glance around at wh
All,
We can't reasonably require user input at initdb time, because most
users don't run initdb by hand -- their installer does it for them. So
any "tuning" which initdb does needs to be fully automated.
So, the question is: can we reasonably determine, at initdb time, how
much RAM the system has?
On Thu, Oct 10, 2013 at 10:20:02AM -0700, Josh Berkus wrote:
> On 10/09/2013 02:15 PM, Bruce Momjian wrote:
> > and for shared_buffers of 2GB:
> >
> > test=> show shared_buffers;
> >  shared_buffers
> > ----------------
> >  2GB
> > (1 row)
> >
> > test=> SHOW work_mem;
> >  work_mem
> > ----------
> >  6010kB
> > (1 row)
On 10/09/2013 02:15 PM, Bruce Momjian wrote:
> and for shared_buffers of 2GB:
>
> test=> show shared_buffers;
>  shared_buffers
> ----------------
>  2GB
> (1 row)
>
> test=> SHOW work_mem;
>  work_mem
> ----------
>  6010kB
> (1 row)
> Because 'maintenance' operations were rarer, so we figured we could use
> more memory in those cases.
Once we brought Autovacuum into core, though, we should have changed that.
However, I agree with Magnus that the simple course is to have an
autovacuum_worker_memory setting which overrides maintenance_work_mem
On Thu, Oct 10, 2013 at 12:59:39PM -0400, Andrew Dunstan wrote:
>
> On 10/10/2013 12:45 PM, Bruce Momjian wrote:
> >On Thu, Oct 10, 2013 at 12:39:04PM -0400, Andrew Dunstan wrote:
> >>On 10/10/2013 12:28 PM, Bruce Momjian wrote:
> >>>How do we handle the Python dependency, or is this all to be done in some other language? I certainly am not ready to take on that job.
On 10/10/2013 12:45 PM, Bruce Momjian wrote:
On Thu, Oct 10, 2013 at 12:39:04PM -0400, Andrew Dunstan wrote:
On 10/10/2013 12:28 PM, Bruce Momjian wrote:
How do we handle the Python dependency, or is this all to be done in
some other language? I certainly am not ready to take on that job.
W
On Thu, Oct 10, 2013 at 12:39:04PM -0400, Andrew Dunstan wrote:
>
> On 10/10/2013 12:28 PM, Bruce Momjian wrote:
> >
> >How do we handle the Python dependency, or is this all to be done in
> >some other language? I certainly am not ready to take on that job.
>
>
> Without considering any wider question here, let me just note this:
On 10/10/2013 12:28 PM, Bruce Momjian wrote:
How do we handle the Python dependency, or is this all to be done in
some other language? I certainly am not ready to take on that job.
Without considering any wider question here, let me just note this:
Anything that can be done in this area in
On Thu, Oct 10, 2013 at 12:00:54PM -0400, Stephen Frost wrote:
> * Bruce Momjian (br...@momjian.us) wrote:
> > Well, I like the idea of initdb calling the tool, though the tool then
> > would need to be in C probably as we can't require python for initdb.
> > The tool would not address Robert's issue of someone increasing shared_buffers on their own.
* Bruce Momjian (br...@momjian.us) wrote:
> Well, I like the idea of initdb calling the tool, though the tool then
> would need to be in C probably as we can't require python for initdb.
> The tool would not address Robert's issue of someone increasing
> shared_buffers on their own.
I'm really not impressed with this argument. Either the user is going to go and modify the config file, in which case I would hope that they'd at least glance around at wh
On Thu, Oct 10, 2013 at 11:45:41AM -0400, Stephen Frost wrote:
> * Bruce Momjian (br...@momjian.us) wrote:
> > On Thu, Oct 10, 2013 at 11:18:46AM -0400, Stephen Frost wrote:
> > > For this case, I think the suggestion made by MauMau would be better-
> > > tell the user (in the postgresql.conf comments