Tom Lane wrote:
> Andrew Dunstan <[EMAIL PROTECTED]> writes:
>> That's easily fixed, I think. We just need to remember what we have
>> proved works.
>> I can apply the attached patch if you think that's worth doing.
> If you like; but if so, remove the comment saying that there's a
> connection between the required list

Andrew Dunstan <[EMAIL PROTECTED]> writes:
> That's easily fixed, I think. We just need to remember what we have
> proved works.
> I can apply the attached patch if you think that's worth doing.
If you like; but if so, remove the comment saying that there's a
connection between the required list
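
The "remember what we have proved works" idea amounts to recording the last
combination of settings that survived the trial backend start, so a later
pass can build on it instead of re-proving it. Below is a minimal sketch in
that spirit, not the actual initdb code; test_backend_startup() is a
stand-in stub, and the 5x multiplier, the ~17K-per-connection guess, and the
4 MB limit are illustrative numbers only.

#include <stdbool.h>
#include <stdio.h>

/*
 * Stub for illustration: pretends SHMMAX allows about 4 MB, using a crude
 * size estimate (8 KB per buffer, ~17 KB per connection, echoing figures
 * mentioned later in the thread).  The real check launches a bootstrap
 * backend with the candidate settings and sees whether it starts.
 */
static bool
test_backend_startup(int max_connections, int shared_buffers)
{
    long est = shared_buffers * 8192L + max_connections * 17408L;

    return est < 4L * 1024 * 1024;
}

/* the settings most recently proved to work */
static int ok_connections = 0;
static int ok_buffers = 0;

static void
probe_max_connections(void)
{
    static const int trial_conns[] = {100, 50, 40, 30, 20, 10};
    int i;

    for (i = 0; i < (int) (sizeof(trial_conns) / sizeof(trial_conns[0])); i++)
    {
        int conns = trial_conns[i];
        int bufs = conns * 5;       /* illustrative multiplier */

        if (test_backend_startup(conns, bufs))
        {
            /* remember what we have proved works */
            ok_connections = conns;
            ok_buffers = bufs;
            return;
        }
    }
}

int
main(void)
{
    probe_max_connections();
    printf("proved working: max_connections=%d shared_buffers=%d\n",
           ok_connections, ok_buffers);
    return 0;
}

A second pass would then hold ok_connections fixed and push shared_buffers
upward from ok_buffers.
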
Tom Lane wrote:
> Andrew Dunstan <[EMAIL PROTECTED]> writes:
>> In experimenting I needed to set this at 20 for it to bite much. If we
>> wanted to fine tune it I'd be inclined to say that we wanted
>> 20*connections buffers for the first, say, 50 or 100 connections and 10
>> or 16 times for each connection over that

Andrew Dunstan <[EMAIL PROTECTED]> writes:
> In experimenting I needed to set this at 20 for it to bite much. If we
> wanted to fine tune it I'd be inclined to say that we wanted
> 20*connections buffers for the first, say, 50 or 100 connections and 10
> or 16 times for each connection over that

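
As a rough illustration, the tiered scaling Andrew describes (20 buffers per
connection up to a break point, 10 to 16 per connection beyond it) would look
something like the sketch below; the 50-connection break point and the 16x
upper factor are example picks from the ranges he mentions, and
buffers_for_connections() is a made-up name, not anything in initdb.

/*
 * Sketch of the tiered rule above: 20 buffers per connection for the
 * first 50 connections, 16 per connection after that.  The 50 and 16
 * are illustrative picks from the "50 or 100" and "10 or 16" ranges.
 */
int
buffers_for_connections(int conns)
{
    const int break_point = 50;

    if (conns <= break_point)
        return conns * 20;
    return break_point * 20 + (conns - break_point) * 16;
}

With those picks, 100 connections would call for 1000 + 800 = 1800 buffers
rather than the 2000 a flat 20x rule would give.
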
I wrote:
You probably need to fix the max-connections pass so that it applies the
same changes to max_fsm_pages as the second pass does --- otherwise, its
assumption that shared_buffers can really be set that way will be wrong.
Other than that I didn't see any problem with the shared_buffers
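
The hazard described above is the two passes drifting apart: if the
max-connections pass proves a shared_buffers value while using a smaller
max_fsm_pages than the second pass will install, that proof is invalid. The
drift disappears if both passes derive max_fsm_pages from shared_buffers
through one shared helper. A sketch under that assumption follows; none of
the names, the 50x factor, or the size estimates come from the actual patch.

#include <stdbool.h>

/* hypothetical stand-in for initdb's trial backend startup */
bool
test_backend_startup(int max_connections, int shared_buffers, int max_fsm_pages)
{
    /* crude stand-in: 8 KB per buffer, ~17 KB per connection, about
     * 6 bytes per fsm page slot, against an assumed 8 MB SHMMAX */
    long est = shared_buffers * 8192L + max_connections * 17408L
             + max_fsm_pages * 6L;

    return est < 8L * 1024 * 1024;
}

/* derive max_fsm_pages from shared_buffers in exactly one place */
int
fsm_pages_for_buffers(int shared_buffers)
{
    return shared_buffers * 50;     /* illustrative factor only */
}

/* both the max-connections pass and the shared_buffers pass call this,
 * so a combination proved here is the combination actually used later */
bool
try_settings(int max_connections, int shared_buffers)
{
    return test_backend_startup(max_connections, shared_buffers,
                                fsm_pages_for_buffers(shared_buffers));
}
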
Tom Lane wrote:
Leaving aside the question of max_connections, which seems to be the
most controversial, is there any objection to the proposal to increase
the settings tried for shared_buffers (up to 4000) and max_fsm_pages (up
to 20) ? If not, I'll apply a patch for those changes shortly

Tom Lane wrote:
> I was thinking of a linear factor plus clamps to minimum and maximum
> values --- does that make it work any better?
Can you suggest some factor/clamp values? Obviously it would be
reasonable to set the max clamp at the max shared_buffers size we would
test in the next step

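
A "linear factor plus clamps" rule is just a multiply bounded on both ends.
Refining the fsm_pages_for_buffers() sketch from above with made-up numbers:
the 20x factor and the 20000-page floor are placeholders, and only the idea
of pinning the ceiling to the largest shared_buffers value tried (4000, per
the proposal earlier in the thread) comes from the messages themselves.

#define FSM_FACTOR        20        /* placeholder linear factor */
#define FSM_MIN_PAGES     20000     /* placeholder lower clamp */
#define MAX_TRIAL_BUFFERS 4000      /* largest shared_buffers value tried */

/* linear factor plus clamps to minimum and maximum values */
int
fsm_pages_for_buffers(int shared_buffers)
{
    int pages = shared_buffers * FSM_FACTOR;

    if (pages < FSM_MIN_PAGES)
        pages = FSM_MIN_PAGES;
    if (pages > MAX_TRIAL_BUFFERS * FSM_FACTOR)
        pages = MAX_TRIAL_BUFFERS * FSM_FACTOR;
    return pages;
}

With these placeholders, 500 buffers hit the 20000-page floor, 3000 buffers
give 60000 pages, and anything at or above the 4000-buffer trial ceiling
pins at 80000.
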
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> In experimenting I needed to set this at 20 for it to bite much. If we
> wanted to fine tune it I'd be inclined to say that we wanted
> 20*connections buffers for the first, say, 50 or 100 connections and 10
> or 16 times for each connection over that

I wrote:
Tom Lane said:
I think this probably needs to be more aggressive though. In a
situation of limited SHMMAX it's probably more important to keep
shared_buffers as high as we can than to get a high max_connections. We
could think about increasing the 5x multiplier, adding Min and/or Max
clamps

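
Raising the 5x multiplier and clamping the result would change the
shared_buffers value tried while probing max_connections into something of
this shape; the 10x factor and the 400/4000 bounds are invented placeholders,
and Min/Max here are just local helper macros.

#define Min(x, y)  ((x) < (y) ? (x) : (y))
#define Max(x, y)  ((x) > (y) ? (x) : (y))

/* shared_buffers value to try while probing a given max_connections */
int
buffers_to_try(int max_connections)
{
    return Min(Max(max_connections * 10, 400), 4000);
}

Under a tight SHMMAX the larger, clamped value fails sooner, which pushes the
probe toward fewer connections and leaves more room for shared_buffers in the
second pass.
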
"Andrew Dunstan" <[EMAIL PROTECTED]> writes:
> Tom Lane said:
>> The existing initdb code actually does try to scale them in sync to
>> some extent ---
> Yes, I know. What I meant was that we could try using one phase
> rather than two. But that's only one possible approach.
I think that's a bad idea

Tom Lane said:
> Andrew Dunstan <[EMAIL PROTECTED]> writes:
>> Maybe we need to split this into two pieces, given Tom's legitimate
>> concern about semaphore use. How about we increase the allowed range
>> for shared_buffers and max_fsm_pages, as proposed in my patch, and
>> leave the max_connections issue on the table

[moved to -hackers]
Petr Jelinek said:
> Andrew Dunstan wrote:
>>
>> Just because we can run with very little memory doesn't mean we have to.
>> What is the point of having lots of memory if you don't use it? We are
>> talking defaults here. initdb will still scale down on
>> resource-starved machines

Robert Treat wrote:
> Maybe we should write something in to check if apache is installed if we're so
> concerned about that usage...
Er, yeah, I'll get right on that. (Don't hold your breath.)
I already know that I set the connection limits lower on most of the
installations I do (given that

Tom Lane wrote:
> BTW, I fat-fingered the calculations I was doing last night --- the
> actual shmem consumption in CVS tip seems to be more like 17K per
> max_connection increment, assuming max_locks_per_connection = 64.
ITYM max_locks_per_transaction (which as the docs say is confusingly named)

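
Taking the 17K-per-connection figure at face value, a back-of-the-envelope
comparison shows why the connection count matters so much under a small
SHMMAX; the 17 KB per connection slot and 8 KB per buffer page are the only
inputs, and the specific pairings below are arbitrary examples.

#include <stdio.h>

/* rough shared-memory cost: ~17 KB per connection slot plus 8 KB per
 * shared buffer; real usage has further components (FSM, locks, etc.) */
static long
approx_shmem_bytes(int max_connections, int shared_buffers)
{
    return (long) max_connections * 17 * 1024
         + (long) shared_buffers * 8 * 1024;
}

int
main(void)
{
    /* an Apache-sized connection count costs about as much as ~850 buffers */
    printf("400 conns, 1000 bufs: ~%ld MB\n",
           approx_shmem_bytes(400, 1000) / (1024 * 1024));
    /* fewer connections leave room for a much larger buffer pool */
    printf("100 conns, 4000 bufs: ~%ld MB\n",
           approx_shmem_bytes(100, 4000) / (1024 * 1024));
    return 0;
}
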
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> Maybe we need to split this into two pieces, given Tom's legitimate
> concern about semaphore use. How about we increase the allowed range for
> shared_buffers and max_fsm_pages, as proposed in my patch, and leave the
> max_connections issue on the table

[moving to -hackers]
Peter Eisentraut wrote:
> On Saturday, 24 December 2005 00:20, Andrew Dunstan wrote:
>> The rationale is one connection per apache thread (which on Windows
>> defaults to 400). If people think this is too many I could live with
>> winding it back a bit - the default number of a

Tom Lane wrote:
> daveg <[EMAIL PROTECTED]> writes:
>> I don't understand the motivation for so many connections by default, it
>> seems wasteful in most cases.
> I think Andrew is thinking about database-backed Apache servers ...
> Some quick checks say that CVS tip's demand for shared memory