Jan Wieck <[EMAIL PROTECTED]> writes:
> We can't resize shared memory because we allocate the whole thing in
> one big hump - which causes the shmmax problem BTW. If we allocate
> that in chunks of multiple blocks, we only have to give it a total
> maximum size to get the hash tables and other stuff right from the
> beginning. But the vast majority of memory, the buffers themselves,
> can be made adjustable at runtime.

Yeah, writing a palloc()-style wrapper over shm has been suggested
before (by myself among others). We could do the shm allocation in
fixed-size blocks (say, 1 MB each) and then do our own memory
management to allocate and release smaller chunks of shm on request
(see the sketch after the two points below). I'm not sure what it
really buys us, though: sure, we can expand the shared buffer area to
some degree, but

        (a) how do we know what the right size of the shared buffer
            area /should/ be? It is difficult enough to avoid running
            the machine out of physical memory, let alone figure out
            how much memory is being used by the kernel for the buffer
            cache and how much we should use ourselves. I think the
            DBA needs to configure this anyway.

        (b) the amount of shm we can ultimately use is finite, so we
            will still need to use a lot of caution when placing
            dynamically-sized data structures in shm. A shm_alloc()
            might help this somewhat, but I don't see how it would
            remove the fundamental problem.

-Neil

