Re: .img on nfs, relative on ram, consuming mass ram

2010-09-20 Thread TOURNIER Frédéric
Here are my benchmarks, done over two days, so the dates are inconsistent and the
results are only approximate.
What surprises me is Part 2, where swap occurred.
In Parts 3 and 4, the RAM is eaten up even though the VM has only just booted.


Part 0

End of boot :

bash-3.1$ free
             total       used       free     shared    buffers     cached
Mem:       2056840     500836    1556004          0       2244     359504
-/+ buffers/cache:      139088    1917752
Swap:      3903784          0    3903784



Part 1

qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw 
es1370 -name qlio -drive file=/mnt/hd/sda/sda1/tmp/relqlio.img,cache=none

bash-3.1$ free
             total       used       free     shared    buffers     cached
Mem:       2056840    1656280     400560          0      34884     378332
-/+ buffers/cache:     1243064     813776
Swap:      3903784          0    3903784

bash-3.1$ ls -lsa /mnt/hd/sda/sda1/tmp/relqlio.img
58946 -rw-r--r-- 1 ftournier info 60424192 2010-09-16 17:49 
/mnt/hd/sda/sda1/tmp/relqlio.img

650M download inside the vm

bash-3.1$ free
             total       used       free     shared    buffers     cached
Mem:       2056840    1677648     379192          0      33860     397716
-/+ buffers/cache:     1246072     810768
Swap:      3903784          0    3903784

bash-3.1$ ls -lsa /mnt/hd/sda/sda1/tmp/relqlio.img
914564 -rw-r--r-- 1 ftournier info 935723008 2010-09-20 14:07 
/mnt/hd/sda/sda1/tmp/relqlio.img



Part 2

reboot

qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw 
es1370 -name qlio -drive file=/mnt/hd/sda/sda1/tmp/relqlio.img

bash-3.1$ free
             total       used       free     shared    buffers     cached
Mem:       2056840    2040172      16668          0      32952     758948
-/+ buffers/cache:     1248272     808568
Swap:      3903784          0    3903784

bash-3.1$ ls -lsa /mnt/hd/sda/sda1/tmp/relqlio.img
60739 -rw-r--r-- 1 ftournier info 62259200 2010-09-16 17:57 
/mnt/hd/sda/sda1/tmp/relqlio.img

650M download inside the vm

bash-3.1$ free
             total       used       free     shared    buffers     cached
Mem:       2056840    2040540      16300          0      34412     765208
-/+ buffers/cache:     1240920     815920
Swap:      3903784       8160    3895624

bash-3.1$ ls -lsa /mnt/hd/sda/sda1/tmp/relqlio.img
842430 -rw-r--r-- 1 ftournier info 861929472 2010-09-20 14:20 
/mnt/hd/sda/sda1/tmp/relqlio.img



Part 3

reboot

qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw 
es1370 -name qlio -drive file=/tmp/relqlio.img

note : /tmp is a tmpfs filesystem

bash-3.1$ free
             total       used       free     shared    buffers     cached
Mem:       2056840    2009688      47152          0        248     766328
-/+ buffers/cache:     1243112     813728
Swap:      3903784          0    3903784

bash-3.1$ ls -lsa /tmp/relqlio.img
59848 -rw-r--r-- 1 ftournier info 61407232 2010-09-16 18:04 /tmp/relqlio.img

650M download inside the vm

bash-3.1$ free
             total       used       free     shared    buffers     cached
Mem:       2056840    2041404      15436          0        128     921276
-/+ buffers/cache:     1120000     936840
Swap:      3903784     248804    3654980

bash-3.1$ ls -lsa /tmp/relqlio.img
885448 -rw-r--r-- 1 ftournier info 906821632 2010-09-20 14:40 /tmp/relqlio.img



Part 4

reboot

qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw 
es1370 -name qlio -drive file=/dev/shm/relqlio.img

bash-3.1$ free
             total       used       free     shared    buffers     cached
Mem:       2056840    2009980      46860          0        172     767328
-/+ buffers/cache:     1242480     814360
Swap:      3903784          0    3903784

bash-3.1$ ls -lsa /dev/shm/relqlio.img
58496 -rw-r--r-- 1 ftournier info 59899904 2010-09-16 18:11 /dev/shm/relqlio.img

650M download inside the vm

bash-3.1$ free
             total       used       free     shared    buffers     cached
Mem:       2056840    2041576      15264          0         92     938976
-/+ buffers/cache:     1102508     954332
Swap:      3903784     266232    3637552

bash-3.1$ ls -lsa /dev/shm/relqlio.img
1016912 -rw-r--r-- 1 ftournier info 1039400960 2010-09-20 15:15 
/dev/shm/relqlio.img
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: .img on nfs, relative on ram, consuming mass ram

2010-09-20 Thread TOURNIER Frédéric
On Mon, 20 Sep 2010 16:00:53 +0200
Andre Przywara andre.przyw...@amd.com wrote:

 TOURNIER Frédéric wrote:
  Here are my benchmarks, done over two days, so the dates are inconsistent
  and the results are only approximate.
  What surprises me is Part 2, where swap occurred.
 I don't know exactly why, but I have seen a small usage of swap 
 occasionally without real memory pressure. So I'd consider this normal.
Mmm, I don't like strange "normal" things... Anyway, my current setup is no. 1,
and my target is no. 3 or 4 ^^.

  In Parts 3 and 4, the RAM is eaten up even though the VM has only just booted.
 Where is the RAM eaten up? I always see about 800 MB free, even a bit
 more after the download:
 You have to look at the second line of the free output ("-/+
 buffers/cache"), not the first one. As you can see, the OS still has
 enough RAM to afford a large cache, so it uses it. Unused RAM is just a
 waste of resources (it is always there, and there is no reason not to
 use it). If 'cached' contains a lot of clean pages, the OS can simply
 reclaim them should an application request more memory. If you want
 proof of this, try:
 # echo 3 > /proc/sys/vm/drop_caches
 This should free the cache and give you a high real 'free' value back.
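A minimal way to watch the same numbers without dropping the caches is to sum
the counters that free's second line treats as available (a sketch; values
from /proc/meminfo are in kB):

```shell
# MemFree + Buffers + Cached from /proc/meminfo is roughly what the
# "-/+ buffers/cache" line of free reports as free, i.e. memory the
# kernel can hand back to applications on demand.
awk '/^MemFree:|^Buffers:|^Cached:/ { sum += $2 }
     END { print sum " kB effectively free" }' /proc/meminfo
```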
Ok, I'll take a closer look. But I see no reason why so much cache is used.
I think there is some kind of page duplication between NFS and qemu-kvm.
Maybe that's an idea for a future enhancement, some kind of -nfs-image switch?
 
 Have you tried cache=none with the tmpfs scenario?
Oh yes, I tried and tried. Unfortunately it is impossible: "Invalid argument".
The same goes for shm and ramfs.

 That should save you
 some of the host's cached memory (note the difference between Part 1 and
 Part 2 in that respect), maybe at the expense of the guest's memory being
 used more heavily. Your choice here; it depends on the actual memory
 utilization in the guest.
 
 As I said before, it is not a very good idea to use such a setup (with
 the relative image on tmpfs) if you are doing actual disk I/O,
 especially large writes. AFAIK QCOW[2] does not really shrink, it only
 grows, so you will run out of memory at some point.
 But if you can restrict the amount of written data, this may work.
Well, I'm aware this is a dangerous setup, but I really tried to make it work
because it is so comfortable.
If any of you readers have some spare time and two machines (2 GB of RAM on
each is a good start), try this setup:
read from NFS, write locally, to RAM if possible. Guest performance is
awesome, especially if the original .img is pre-cached in the server's RAM.
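For reference, the relative-image setup looks roughly like this (paths are
examples; note that newer qemu-img versions also want -F qcow2 to name the
backing format explicitly):

```shell
# Create a qcow2 overlay whose backing file is the read-only master on
# NFS. Writes land in the overlay; unmodified sectors are read from NFS.
qemu-img create -f qcow2 -b /mnt/nfs/img.img /tmp/relimg.img

# Boot from the overlay (same options as in the benches above).
qemu-system-x86_64 -m 1024 -vga std -drive file=/tmp/relimg.img
```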

 
 Regards,
 Andre.
 
 P.S. Sorry for the confusion about tmpfs vs. ramfs in my mail last week.

No problem.
Thank you for taking time.
And being answered by some...@amd.com is a must ^^

Fred.


Re: .img on nfs, relative on ram, consuming mass ram

2010-09-16 Thread TOURNIER Frédéric
Ok, thanks for taking the time.
I'll dig into your answers.

So since I run relative.img on diskless systems with original.img on NFS, what
are the best practices/tips I can use?

Is ramfs more suitable than tmpfs?

Fred.

On Thu, 16 Sep 2010 11:09:49 +0200
Andre Przywara andre.przyw...@amd.com wrote:

 TOURNIER Frédéric wrote:
  Hi !
  Here's my config :
  
  Version : qemu-kvm-0.12.5, qemu-kvm-0.12.4
  Hosts : AMD 64X2, Phenom and Core 2 duo
  OS : Slackware 64 13.0
  Kernel : 2.6.35.4 and many previous versions
  
  I use a PXE server to boot semi-diskless (swap partitions and some local 
  stuff) stations.
  This server also serves a read-only nfs folder, with plenty of .img on it.
  When a client connects, a relative image is created in /tmp, which is a
  tmpfs, so it is hosted in RAM.
  
  And here I go on my 2 GB stations:
  qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime 
  -soundhw es1370 /tmp/relimg.img
  qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime 
  -soundhw es1370 /dev/shm/relimg.img
  
  I tried both. Always the same result: the RAM is consumed quickly, and
  massive swapping occurs.
 Which is only natural, as tmpfs promises never to swap, so it will
 take precedence over other RAM (that's why it is limited to half of the
 memory by default). As soon as the guest has (re)written more disk
 sectors than your free RAM can hold, the system will start to swap out
 your guest RAM (and other host applications).
 So in general you should avoid putting relative disk images on tmpfs if
 your host memory is limited. As a workaround you could try to further
 limit the tmpfs max size (mount -t tmpfs -o size=512M none /dev/shm),
 but this could lead to data loss in your guest as it possibly cannot
 back the written sectors anymore.
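Spelled out (needs root; the size is an example), the cap can also be applied
with a remount, so existing /dev/shm contents survive:

```shell
# Cap /dev/shm at 512 MB: overlay growth beyond the cap then fails with
# ENOSPC instead of silently crowding out guest RAM and forcing swap.
mount -o remount,size=512M /dev/shm
df -h /dev/shm   # verify the new size
```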
  On a 4G system, I see kvm using more than 1024 MB, maybe 1200.
  And every time I launch a program inside the VM, the amount of the host's
  free RAM (not cached) diminishes, which is weird, because it should have
  been reserved.
 KVM uses on-demand paging like other applications, so it will not
 reserve memory for your guest (unless you use hugetlbfs via -mem-path):
 $ kvm -cdrom ttylinux_ser.iso -nographic -m 3072M
 $ top
    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
   6015 user      20   0 3205m 128m 3020 S    2  2.2  0:04.94  kvm
 
 Regards,
 Andre.
 
  
  So on a 2G system, swap occurs very fast and the machine slows down a lot.
  And on a totally diskless system, this quickly leads to a freeze.
  
  I have no problem if i use a relative image on disk :
  qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime 
  -soundhw es1370 -drive file=/mnt/hd/sda/sda1/tmp/relimg.img,cache=none
 
 -- 
 Andre Przywara
 AMD-Operating System Research Center (OSRC), Dresden, Germany
 Tel: +49 351 448-3567-12
 


Re: .img on nfs, relative on ram, consuming mass ram

2010-09-16 Thread TOURNIER Frédéric
I can't do this because I need performance.

I'm currently doing some tests. Will post soon.

My config's map :

NFS / PXE server             qemu-kvm host
-----------                  ------------
| img.img |__________________|relimg.img|
| readonly|        net       |          |
-----------                  ------------

Keep in touch and thx for time.

Fred.

On Thu, 16 Sep 2010 15:48:18 +0200
Andre Przywara andre.przyw...@amd.com wrote:

 Stefan Hajnoczi wrote:
  2010/9/16 Andre Przywara andre.przyw...@amd.com:
  TOURNIER Frédéric wrote:
  Ok, thanks for taking time.
  I'll dig into your answers.
 
  So as i run relative.img on diskless systems with original.img on nfs,
  what are the best practice/tips i can use ?
  I think it is -snapshot you are looking for.
  This will put the backing store into normal RAM, and you can later commit
  it to the original image if needed. See the qemu manpage for more details.
  In a nutshell you just specify the original image and add -snapshot to the
  command line.
  
  -snapshot creates a temporary qcow2 image in /tmp whose backing file
  is your original image.  I'm not sure what you mean by This will put
  the backing store into normal RAM?
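Concretely (image path hypothetical):

```shell
# With -snapshot, qemu creates (and immediately unlinks) a temporary
# qcow2 overlay under /tmp, backed by the given read-only image; typing
# 'commit' in the qemu monitor writes the accumulated changes back.
qemu-system-x86_64 -m 1024 -snapshot -drive file=/mnt/nfs/original.img
```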
 Stefan, you are right. I never looked into the code and because the file 
 in /tmp is deleted just after creation there wasn't a sign of it.
 For some reason I thought that the buffer would just be allocated in 
 memory. Sorry, my mistake and thanks for pointing this out.
 
 So Fred, unfortunately this does not solve your problem. I guess you are
 running into a general problem: if the guest actually changes more of the
 disk than can be backed by RAM on the host, you lose.
 One solution could be to just make (at least parts of) the disk 
 read-only (a write protected /usr partition works quite well).
 If you are sure that writes are not that frequent, you could think of 
 putting the overlay file also on the remote storage (NFS). Although this 
 is rather slow, it shouldn't matter if there aren't many writes and the 
 local page cache should catch most of the accesses (while still being 
 nice to other RAM users).
 
 Regards,
 Andre.
  
  Stefan
  --
  To unsubscribe from this list: send the line unsubscribe kvm in
  the body of a message to majord...@vger.kernel.org
  More majordomo info at  http://vger.kernel.org/majordomo-info.html
  
 
 
 -- 
 Andre Przywara
 AMD-Operating System Research Center (OSRC), Dresden, Germany
 Tel: +49 351 448-3567-12
 


.img on nfs, relative on ram, consuming mass ram

2010-09-15 Thread TOURNIER Frédéric
Hi !
Here's my config :

Version : qemu-kvm-0.12.5, qemu-kvm-0.12.4
Hosts : AMD 64X2, Phenom and Core 2 duo
OS : Slackware 64 13.0
Kernel : 2.6.35.4 and many previous versions

I use a PXE server to boot semi-diskless stations (swap partitions and some
local stuff).
This server also serves a read-only NFS folder with plenty of .img files on it.
When a client connects, a relative image is created in /tmp, which is a tmpfs,
so it is hosted in RAM.

And here I go on my 2 GB stations:
qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw 
es1370 /tmp/relimg.img
qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw 
es1370 /dev/shm/relimg.img

I tried both. Always the same result: the RAM is consumed quickly, and massive
swapping occurs.
On a 4G system, I see kvm using more than 1024 MB, maybe 1200.
And every time I launch a program inside the VM, the amount of the host's free
RAM (not cached) diminishes, which is weird, because it should have been
reserved.

So on a 2G system, swap occurs very fast and the machine slows down a lot.
And on a totally diskless system, this quickly leads to a freeze.

I have no problem if I use a relative image on disk:
qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw 
es1370 -drive file=/mnt/hd/sda/sda1/tmp/relimg.img,cache=none
