On 08/09/2010 03:17 AM, Stefan Hajnoczi wrote:
> Use -mem-path /path/to/directory. It's used for hugetlbfs support on
> Linux but it should work on a normal filesystem too.
>
> Stefan
It *almost* works, except for some minor obstacles:
1) Normally the pages get mmap()'d with MAP_PRIVATE, so they get COW'd
rather than written back to the backing file:
#ifdef MAP_POPULATE
    /* NB: MAP_POPULATE won't exhaustively alloc all phys pages in the case
     * MAP_PRIVATE is requested. For mem_prealloc we mmap as MAP_SHARED
     * to sidestep this quirk.
     */
    flags = mem_prealloc ? MAP_POPULATE | MAP_SHARED : MAP_PRIVATE;
    area = mmap(0, memory, PROT_READ | PROT_WRITE, flags, fd, 0);
#else
    area = mmap(0, memory, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
#endif
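To see the difference outside of qemu, here's a minimal standalone POSIX
sketch (nothing qemu-specific in it) showing that MAP_PRIVATE writes never
reach the backing file while MAP_SHARED writes do:

    /* Standalone POSIX demo (not qemu code): a write through a MAP_PRIVATE
     * mapping is COW'd to an anonymous page and never reaches the file,
     * while a MAP_SHARED write does. Error checks omitted for brevity. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4] = "old";
        int fd = open("backing.bin", O_RDWR | O_CREAT | O_TRUNC, 0600);
        write(fd, buf, sizeof(buf));

        char *priv = mmap(0, sizeof(buf), PROT_READ | PROT_WRITE,
                          MAP_PRIVATE, fd, 0);
        memcpy(priv, "new", 4);               /* COW: file still says "old" */
        munmap(priv, sizeof(buf));

        char *shared = mmap(0, sizeof(buf), PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        memcpy(shared, "new", 4);             /* write-through: file updated */
        munmap(shared, sizeof(buf));

        pread(fd, buf, sizeof(buf), 0);
        printf("file now contains: %s\n", buf);  /* prints "new" */
        close(fd);
        return 0;
    }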
You can force a MAP_SHARED mapping without any changes to qemu by using
the -mem-prealloc option, but you'll get MAP_POPULATE as well, which may
not be desirable. A small patch would do the job, though.
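For illustration only, it could be as small as this; mem_shared here is a
made-up flag that a new command-line option would set, not something qemu
has today:

    /* Hypothetical sketch: mem_shared is a made-up flag; only
     * mem_prealloc exists in qemu today. */
    flags = mem_prealloc ? MAP_POPULATE : 0;
    flags |= (mem_shared || mem_prealloc) ? MAP_SHARED : MAP_PRIVATE;
    area = mmap(0, memory, PROT_READ | PROT_WRITE, flags, fd, 0);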
2) exec.c:file_ram_alloc() assumes you're allocating from a hugetlbfs
mount and makes some system calls to get the block/hugepage size. A quick
hack might be to comment out the following warning in
exec.c:gethugepagesize():
    if (fs.f_type != HUGETLBFS_MAGIC)
        fprintf(stderr, "Warning: path not on HugeTLBFS: %s\n", path);
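A slightly less hacky sketch: fall back to the normal page size instead of
warning. The statfs() loop and the HUGETLBFS_MAGIC check below are what
exec.c already does; the getpagesize() fallback is the assumption here:

    /* Sketch: generalize exec.c:gethugepagesize() so a plain filesystem
     * falls back to the normal page size instead of warning. */
    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/vfs.h>

    #define HUGETLBFS_MAGIC 0x958458f6   /* already defined in exec.c */

    static long gethugepagesize(const char *path)
    {
        struct statfs fs;
        int ret;

        do {
            ret = statfs(path, &fs);
        } while (ret != 0 && errno == EINTR);

        if (ret != 0) {
            perror(path);
            return 0;
        }

        if (fs.f_type != HUGETLBFS_MAGIC)
            return getpagesize();   /* plain filesystem: use normal pages */

        return fs.f_bsize;          /* hugetlbfs: f_bsize is the hugepage size */
    }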
You may also want to replace the mkstemp() call with mkostemp() so you can
set O_SYNC on the backing file.
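That's essentially a one-liner, assuming glibc's mkostemp() (a GNU
extension since glibc 2.7; O_SYNC is among the flags it honors):

    /* Sketch of the swap in exec.c:file_ram_alloc(); mkostemp() needs
     * _GNU_SOURCE. O_SYNC makes writes to the backing file synchronous. */
    /* was: fd = mkstemp(filename); */
    fd = mkostemp(filename, O_SYNC);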
But beyond hacks, I think generalizing -mem-path might have some other
useful applications (using it to expose tmpfs-backed/numactl'd files as
NUMA nodes to guests came up in an earlier discussion, and memory
compression via zram/compcache is another).