Florian Pflug <f...@phlo.org> Monday 20 of June 2011 17:01:40

> On Jun20, 2011, at 16:39 , Radosław Smogura wrote:
> > Florian Pflug <f...@phlo.org> Monday 20 of June 2011 16:16:58
> >
> >> On Jun20, 2011, at 15:27 , Radosław Smogura wrote:
> >>> 1. mmap some large amount of anonymous virtual memory (this will be
> >>> maximum size of shared memory). ...
> >>> Point 1. will not eat memory, as memory allocation is delayed, and on
> >>> 64-bit platforms you may reserve quite a huge chunk of this; in the
> >>> future it may be possible to use mmap / munmap to concat chunks,
> >>> defrag it, etc.
> >>
> >> I think this breaks with strict overcommit settings (i.e.
> >> vm.overcommit_memory = 2 on Linux). To fix that, you'd need a way to
> >> tell the kernel (or glibc) to simply reserve a chunk of virtual address
> >> space for further use. Not sure if there's an API for that...
> >>
> >> best regards,
> >> Florian Pflug
> >
> > This may be achieved by many other things, like mmap'ing /dev/null.
>
> Are you sure? Isn't mmap()ing /dev/null a way to *allocate* memory?
>
> Or at least this is what I always thought glibc does when you malloc()
> a large block at once. (This allows it to actually return the memory
> to the kernel once you free() it, which isn't possible if the memory
> was allocated simply by extending the heap.)
>
> You can work around this by mmap()ing an actual file, because then
> the kernel knows it can use the file as backing store and thus doesn't
> need to reserve actual physical memory. (In a way, this just adds
> additional swap space.) Doesn't seem very clean, though...
>
> Even if there's a way to work around a strict overcommit setting, unless
> the workaround is a syscall *explicitly* designed for that purpose, I'd
> be very careful with using it. You might just as well be exploiting a
> bug in the overcommit accounting logic, and future kernel versions may
> simply choose to fix the bug...
>
> best regards,
> Florian Pflug
I'm sure at 99%. When I was "playing" with mmap I preallocated, probably, about 100 GB of memory.

Regards,
Radek

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers