Merlin Moncure
> The name max_locks_per_transaction indicates a limit of some kind. The
> documentation doesn't mention anything about whether that limit is
> enforced or not.
>
> I suggest the additional wording:
> "This parameter is not a hard limit: No limit is enforced on the number of
> locks in each transaction ...
Tom Lane
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > Does this mean that the parameter max_locks_per_transaction isn't honoured
> > at all, it is just used to size the lock table
>
> Yes, and that's how it's documented.

The name max_locks_per_transaction indicates a limit of some kind. The ...
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> Does this mean that the parameter max_locks_per_transaction isn't honoured
> at all, it is just used to size the lock table
Yes, and that's how it's documented.
regards, tom lane
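A quick way to see the sizing behaviour Tom describes (a sketch only; it
assumes current_setting() is available, and that the lock table is sized as
max_locks_per_transaction * max_connections as documented for this era, with
later releases also folding in max_prepared_transactions):

    -- Approximate number of slots in the shared lock table
    SELECT current_setting('max_locks_per_transaction')::int
         * current_setting('max_connections')::int
           AS approx_lock_table_slots;

Any single transaction may take far more than max_locks_per_transaction
locks, so long as the table as a whole still has room.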
Tom Lane
> "Merlin Moncure" <[EMAIL PROTECTED]> writes:
> > According to postgresql.conf, using these settings the lock table eats
> > 64*260*100 bytes = < 2M. Well, if it's running my server out of shared
> > memory, it's eating much, much more shmem than previously thought.
>
> Hmm, the 260 is ...
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> I was wondering how ~ 10k locks ran me out of shared memory when each
> lock takes ~ 260b (half that, as you say) and I am running 8k buffers =
> 64M.
The number of buffers you have doesn't have anything to do with this.
The question is how much shared memory ...
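For scale, a rough comparison of the two pools being conflated here (a
sketch; the 260-bytes-per-lock figure is the old postgresql.conf estimate
quoted elsewhere in this thread, and 8k buffers of 8 kB each are Merlin's
numbers):

    -- ~10,000 user locks vs. 8192 shared buffers: separate questions
    SELECT 10000 * 260      AS approx_lock_bytes,      -- roughly 2.6 MB
           8192 * 8 * 1024  AS shared_buffer_bytes;    -- 64 MB

The buffer cache is a separate allocation; how large it is has no bearing
on whether the lock table overflows.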
Tgl wrote:
> > As I see it, this means the user-locks (and perhaps all
> > locks...?) eat around ~ 6k bytes memory each.
>
> They're allocated in groups of 32, which would work out to close to 6k;
> maybe you were measuring the incremental cost of allocating the first one?

I got my 6k figure by d...
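Working backwards from the figures above (hypothetical; the real per-entry
size depends on the struct layouts in that release), ~6 kB per 32-entry
allocation chunk implies something near 190 bytes per lock entry:

    -- Per-entry size implied by the ~6k-per-32-entries figure
    SELECT 6144 / 32 AS approx_bytes_per_lock_entry;   -- 192

which fits Tom's point that the ~6k Merlin measured was the cost of a whole
chunk of 32 entries, not of one lock.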
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> According to postgresql.conf, using these settings the lock table eats
> 64*260*100 bytes = < 2M. Well, if it's running my server out of shared
> memory, it's eating much, much more shmem than previously thought.
Hmm, the 260 is out of date I think.
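Merlin's arithmetic spelled out (a sanity check only; 260 bytes per lock is
the old postgresql.conf comment, which Tom says is out of date, and 64 and
100 are the default max_locks_per_transaction and max_connections):

    SELECT 64 * 260 * 100 AS claimed_lock_table_bytes;  -- 1,664,000, under 2 MB

So the config-file figure does put the lock table under 2 MB; Tom's point is
that the 260-byte estimate is stale, so the real total can differ.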
tgl wrote:
> There is a secondary issue here, which is that we don't have provision
> to recycle hash table entries back into the general shared memory pool
> (mainly because there *is* no "shared memory pool", only never-yet-
> allocated space). So when you do release these locks, the freed space ...
> "Merlin Moncure" <[EMAIL PROTECTED]> writes:
> > In other words, after doing a select user_write_lock_oid(t.oid) from
> > big_table t;
> > It's server restart time.
>
> User locks are not released at transaction failure. Quitting that
> backend should have got you out of it, however.
Right, my ...
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> In other words, after doing a select user_write_lock_oid(t.oid) from
> big_table t;
> It's server restart time.
User locks are not released at transaction failure. Quitting that
backend should have got you out of it, however.
> What's really interesting ...
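A minimal sketch of the failure mode and the ways out of it, assuming the
contrib/userlock functions of this era (user_write_lock_oid() is taken from
Merlin's query; user_unlock_all() is assumed to be its companion release
function, and in later releases the pg_advisory_lock family plays the same
role). The table name is illustrative:

    -- One user lock per row: with enough rows this exhausts the lock table
    SELECT user_write_lock_oid(t.oid) FROM big_table t;

    -- User locks survive transaction abort, so release them explicitly ...
    SELECT user_unlock_all();    -- assumed contrib/userlock helper

    -- ... or simply disconnect: quitting the backend releases its user locks.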
Tom,
I noticed your recent corrections to lock.c regarding the releasing of
locks in an out of shared memory condition. This may or may not be
relevant, but when I purposefully use up all the lock space with user
locks, the server runs out of shared memory and stays out until it is
restarted (not ...