> This is likely a naive response, but on Linux have you thought about
>using /dev/shm? It's a tmpfs ramdisk that is needed by the POSIX shared memory
>calls shm_open and shm_unlink in glibc 2.2 and above. It grows and shrinks as
>required and uses almost no memory if it's never populated with files.
> 
>As a simple test I created /dev/shm/test.d using sqlite3, created a simple
>table and populated it with a couple of rows of data. I connected to the
>database from another sqlite instance, and I was able to read the data just
>fine. After closing down both instances the test database was still there (no
>surprise, it's a filesystem after all).     
>
> Dunno if that helps any.
>
> Glenn McAllister
> SOMA Networks, Inc.

  Hmm, great idea! I had forgotten about that. This really solves the problem
for small databases.
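
  Just to make the idea concrete, here is a minimal sketch of doing the same
thing from C (assuming the sqlite3 C API; the /dev/shm/test.db path and the
table are only illustrative, not the exact test described above):

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    /* /dev/shm is a tmpfs mount, so this database file lives entirely in RAM. */
    if (sqlite3_open("/dev/shm/test.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* Illustrative schema and row. */
    sqlite3_exec(db,
                 "CREATE TABLE IF NOT EXISTS t(id INTEGER, name TEXT);"
                 "INSERT INTO t VALUES(1, 'hello');",
                 NULL, NULL, &err);
    if (err) {
        fprintf(stderr, "exec: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}

Compile with -lsqlite3; any other process that opens /dev/shm/test.db sees the
same data, and SQLite's file locking works because tmpfs behaves like a normal
filesystem.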

  But as I said in another e-mail, I need to use shared memory segments larger
than 5 GB .... :-) The problem with that much RAM is the work the kernel has to
do to manage all of it. Think about it: each memory page in Linux is 4 KB, so
mapping 8 GB takes about two million pages. With shared memory created with the
SHM_HUGETLB flag (HugeTLB file system support), each page is instead 4 MB (on
kernels compiled with the 4GB option) or 2 MB (on kernels compiled with the
64GB option), so the same 8 GB needs only a few thousand pages. Using 2 MB or
4 MB pages saves the kernel a lot of work.
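
  For reference, a minimal sketch of what I mean by huge-page shared memory (my
own example, not tested on the setup above; the segment size and permissions
are placeholders, and it assumes huge pages have been reserved via
/proc/sys/vm/nr_hugepages):

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#ifndef SHM_HUGETLB
#define SHM_HUGETLB 04000               /* not exposed by every libc header */
#endif

#define SEG_SIZE (256UL * 1024 * 1024)  /* must be a multiple of the huge page size */

int main(void)
{
    /* IPC_PRIVATE keeps the example self-contained; a real setup would use a
       well-known key so other processes can attach to the same segment. */
    int id = shmget(IPC_PRIVATE, SEG_SIZE, IPC_CREAT | SHM_HUGETLB | 0600);
    if (id < 0) {
        perror("shmget(SHM_HUGETLB)");  /* typically ENOMEM if no huge pages are reserved */
        return 1;
    }

    char *mem = shmat(id, NULL, 0);
    if (mem == (void *)-1) {
        perror("shmat");
        return 1;
    }

    strcpy(mem, "hello from a huge-page segment");
    printf("%s\n", mem);

    shmdt(mem);
    shmctl(id, IPC_RMID, NULL);         /* mark the segment for removal */
    return 0;
}

With 2 MB huge pages that 256 MB segment costs the kernel 128 page-table
entries instead of 65536 4 KB ones, which is the saving I'm after.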

  The tmpfs doesn't use this strategy, so it has about the same performance as
malloc'ed memory (but the locks work and it's possible to share the database
with other processes) :-)

Regards,

Mateus Inssa
