Re: [kvm-devel] Qemu-kvm is leaking my memory ???
Zdenek Kabelac wrote:
> 2008/3/23, Avi Kivity <[EMAIL PROTECTED]>:
>> Avi Kivity wrote:
>>> I see the same issue too now, and am investigating.
>>
>> The attached patch should fix the issue. It is present in 2.6.25-rc6
>> only, and not in kvm.git, which is why few people noticed it.
>
> Hi
>
> Tested - and actually I'm seeing no difference in my case of the memory
> leak. It still looks like over 30 MB is lost per execution of qemu.
> (Tested with a fresh 2.6.25-rc6 with your patch.)

Can you double check? 2.6.25-rc6 definitely leaks without the patch, and
here it doesn't leak with it.

> Also, I'd have said that before this patch my 'dmsetup status' loop
> test case was not causing big problems and it was enough to run another
> dmsetup to unblock the loop - now it usually leads to some weird end of
> qemu itself - I will explore more. So it's probably fixing one bug -
> and exposing another.
>
> As I said before - in my debugger it was looping in the page_fault
> handler - i.e. the memory should be paged in - but as soon as the
> handler returns to the code to continue the memcpy, a new page fault is
> invoked and the pointer counters are not changed.

I'll add some code to make it possible to enable the mmu tracer at runtime.

--
Do not meddle in the internals of kernels, for they are subtle and quick
to panic.

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel
Re: [kvm-devel] Qemu-kvm is leaking my memory ???
Avi Kivity wrote:
>> Tested - and actually I'm seeing no difference in my case of the
>> memory leak. It still looks like over 30 MB is lost per execution of
>> qemu. (Tested with a fresh 2.6.25-rc6 with your patch.)
>
> Can you double check? 2.6.25-rc6 definitely leaks without the patch,
> and here it doesn't leak with it.

btw, there's an additional patch I have queued up that might have an
effect. Please test the attached (which is my 2.6.25 queue).

--
error compiling committee.c: too many arguments to function

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d8172aa..e55af12 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -222,8 +222,7 @@ static int is_io_pte(unsigned long pte)
 
 static int is_rmap_pte(u64 pte)
 {
-	return pte != shadow_trap_nonpresent_pte
-		&& pte != shadow_notrap_nonpresent_pte;
+	return is_shadow_present_pte(pte);
 }
 
 static gfn_t pse36_gfn_delta(u32 gpte)
@@ -893,14 +892,25 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
 			 int *ptwrite, gfn_t gfn, struct page *page)
 {
 	u64 spte;
-	int was_rmapped = is_rmap_pte(*shadow_pte);
+	int was_rmapped = 0;
 	int was_writeble = is_writeble_pte(*shadow_pte);
+	hfn_t host_pfn = (*shadow_pte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT;
 
 	pgprintk("%s: spte %llx access %x write_fault %d user_fault %d gfn %lx\n",
 		 __FUNCTION__, *shadow_pte, pt_access, write_fault,
 		 user_fault, gfn);
 
+	if (is_rmap_pte(*shadow_pte)) {
+		if (host_pfn != page_to_pfn(page)) {
+			pgprintk("hfn old %lx new %lx\n",
+				 host_pfn, page_to_pfn(page));
+			rmap_remove(vcpu->kvm, shadow_pte);
+		}
+		else
+			was_rmapped = 1;
+	}
+
 	/*
 	 * We don't set the accessed bit, since we sometimes want to see
 	 * whether the guest actually used the pte (in order to detect
@@ -1402,7 +1412,7 @@ static void mmu_guess_page_from_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	up_read(&current->mm->mmap_sem);
 	vcpu->arch.update_pte.gfn = gfn;
-	vcpu->arch.update_pte.page = gfn_to_page(vcpu->kvm, gfn);
+	vcpu->arch.update_pte.page = page;
 }
 
 void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 94ea724..8e14628 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -349,8 +349,6 @@ static void update_exception_bitmap(struct kvm_vcpu *vcpu)
 
 static void reload_tss(void)
 {
-#ifndef CONFIG_X86_64
-
 	/*
 	 * VT restores TR but not its size. Useless.
 	 */
@@ -361,7 +359,6 @@ static void reload_tss(void)
 	descs = (void *)gdt.base;
 	descs[GDT_ENTRY_TSS].type = 9; /* available TSS */
 	load_TR_desc();
-#endif
 }
 
 static void load_transition_efer(struct vcpu_vmx *vmx)
@@ -1436,7 +1433,7 @@ static int init_rmode_tss(struct kvm *kvm)
 	int ret = 0;
 	int r;
 
-	down_read(&current->mm->mmap_sem);
+	down_read(&kvm->slots_lock);
 	r = kvm_clear_guest_page(kvm, fn, 0, PAGE_SIZE);
 	if (r < 0)
 		goto out;
@@ -1459,7 +1456,7 @@ static int init_rmode_tss(struct kvm *kvm)
 	ret = 1;
 out:
-	up_read(&current->mm->mmap_sem);
+	up_read(&kvm->slots_lock);
 	return ret;
 }
Re: [kvm-devel] Qemu-kvm is leaking my memory ???
2008/3/24, Avi Kivity <[EMAIL PROTECTED]>:
> Avi Kivity wrote:
>>> Tested - and actually I'm seeing no difference in my case of the
>>> memory leak. It still looks like over 30 MB is lost per execution of
>>> qemu. (Tested with a fresh 2.6.25-rc6 with your patch.)
>>
>> Can you double check? 2.6.25-rc6 definitely leaks without the patch,
>> and here it doesn't leak with it.
>
> btw, there's an additional patch I have queued up that might have an
> effect. Please test the attached (which is my 2.6.25 queue).

Yep - I've made a quick short test - and it looks promising - so far I
cannot see the leak with your additional patch.

But I still have my busy-loop problem. Though now it sometimes gets
back-traced on the leaveq - maybe this instruction might cause some
problems?

Before this patch I always got the back-trace at the point of
copy_user_generic_string - now it's slightly different - and it still
applies when I run the second 'dmsetup status' (it unblocks the looped
one):

Call Trace:
 [<8803558d>] :dm_mod:dm_compat_ctl_ioctl+0xd/0x20
 [<802bd352>] compat_sys_ioctl+0x182/0x3d0
 [<80283d20>] vfs_write+0x130/0x170
 [<80221192>] sysenter_do_call+0x1b/0x66

Call Trace:
 [<88032100>] ? :dm_mod:table_status+0x0/0x90
 [<80436809>] ? error_exit+0x0/0x51
 [<88032100>] ? :dm_mod:table_status+0x0/0x90
 [<8032d157>] ? copy_user_generic_string+0x17/0x40
 [<880332d7>] ? :dm_mod:copy_params+0x87/0xb0
 [<80237b11>] ? __capable+0x11/0x30
 [<88033469>] ? :dm_mod:ctl_ioctl+0x169/0x260
 [<80340712>] ? tty_ldisc_deref+0x62/0x80
 [<8034320c>] ? tty_write+0x22c/0x260
 [<8803358d>] ? :dm_mod:dm_compat_ctl_ioctl+0xd/0x20
 [<802bd352>] ? compat_sys_ioctl+0x182/0x3d0
 [<80283d20>] ? vfs_write+0x130/0x170
 [<80221192>] ? sysenter_do_call+0x1b/0x66

Here is the disassembled dm_compat_ctl_ioctl:

0000000000001fa0 <dm_compat_ctl_ioctl>:
	return (long)ctl_ioctl(command, (struct dm_ioctl __user *)u);
}

#ifdef CONFIG_COMPAT
static long dm_compat_ctl_ioctl(struct file *file, uint command, ulong u)
{
    1fa0:	55                   	push   %rbp
    1fa1:	89 f7                	mov    %esi,%edi
    1fa3:	48 89 e5             	mov    %rsp,%rbp
	return r;
}

static long dm_ctl_ioctl(struct file *file, uint command, ulong u)
{
	return (long)ctl_ioctl(command, (struct dm_ioctl __user *)u);
    1fa6:	89 d6                	mov    %edx,%esi
    1fa8:	e8 73 fd ff ff       	callq  1d20 <ctl_ioctl>

#ifdef CONFIG_COMPAT
static long dm_compat_ctl_ioctl(struct file *file, uint command, ulong u)
{
	return (long)dm_ctl_ioctl(file, command, (ulong) compat_ptr(u));
}
    1fad:	c9                   	leaveq
	return r;
}

static long dm_ctl_ioctl(struct file *file, uint command, ulong u)
{
	return (long)ctl_ioctl(command, (struct dm_ioctl __user *)u);
    1fae:	48 98                	cltq
#ifdef CONFIG_COMPAT
static long dm_compat_ctl_ioctl(struct file *file, uint command, ulong u)
{
	return (long)dm_ctl_ioctl(file, command, (ulong) compat_ptr(u));
}
    1fb0:	c3                   	retq

Zdenek
Re: [kvm-devel] Qemu-kvm is leaking my memory ???
Zdenek Kabelac wrote:
> 2008/3/19, Avi Kivity <[EMAIL PROTECTED]>:
>> The -vnc switch, so there's no local X server. A remote X server
>> should be fine as well. Use runlevel 3, which means network but no
>> local X server.
>
> Ok - I've finally got some time to make comparable measurements of the
> memory. I'm attaching an 'empty' trace log, taken at the level where
> most processes were killed (as you can see in the 'ps' trace). Then
> there are attachments from after running qemu 7 times (a log of 'free'
> before each execution is also attached). Both logs are taken after
> echo 3 > /proc/sys/vm/drop_caches.

I see the same issue too now, and am investigating.

--
error compiling committee.c: too many arguments to function
Re: [kvm-devel] Qemu-kvm is leaking my memory ???
Avi Kivity wrote:
> I see the same issue too now, and am investigating.

The attached patch should fix the issue. It is present in 2.6.25-rc6
only, and not in kvm.git, which is why few people noticed it.

--
error compiling committee.c: too many arguments to function

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 4ba85d9..e55af12 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1412,7 +1412,7 @@ static void mmu_guess_page_from_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	up_read(&current->mm->mmap_sem);
 	vcpu->arch.update_pte.gfn = gfn;
-	vcpu->arch.update_pte.page = gfn_to_page(vcpu->kvm, gfn);
+	vcpu->arch.update_pte.page = page;
 }
 
 void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
Re: [kvm-devel] Qemu-kvm is leaking my memory ???
2008/3/23, Avi Kivity <[EMAIL PROTECTED]>:
> Avi Kivity wrote:
>> I see the same issue too now, and am investigating.
>
> The attached patch should fix the issue. It is present in 2.6.25-rc6
> only, and not in kvm.git, which is why few people noticed it.

Hi

Tested - and actually I'm seeing no difference in my case of the memory
leak. It still looks like over 30 MB is lost per execution of qemu.
(Tested with a fresh 2.6.25-rc6 with your patch.)

Also, I'd have said that before this patch my 'dmsetup status' loop test
case was not causing big problems and it was enough to run another
dmsetup to unblock the loop - now it usually leads to some weird end of
qemu itself - I will explore more. So it's probably fixing one bug - and
exposing another.

As I said before - in my debugger it was looping in the page_fault
handler - i.e. the memory should be paged in - but as soon as the
handler returns to the code to continue the memcpy, a new page fault is
invoked and the pointer counters are not changed.

Zdenek
Re: [kvm-devel] Qemu-kvm is leaking my memory ???
2008/3/16, Avi Kivity <[EMAIL PROTECTED]>:
> Zdenek Kabelac wrote:
>> Hello
>>
>> Recently I'm using qemu-kvm on a fedora-rawhide box with my own
>> kernels (with many debug options). I've noticed that over time my
>> memory seems to disappear somewhere. Here is my memory trace after
>> boot and some time of work - thus memory should be populated.
>
> No idea how these should add up. What does 'free' say?

Ok - here goes my free log (I'm logging free prior to each start of my
qemu-kvm), so here is the log for this afternoon. (I'm running the same
apps all the time - except during kernel compilation I'm reading some
www pages - and working with gnome-terminal - so slightly more memory
could have been eaten by them - but not in the range of hundreds of MB.)

Wed Mar 19 12:54:38 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1525240     482220          0      18060     469812
-/+ buffers/cache:    1037368     970092
Swap:            0          0          0

Wed Mar 19 13:27:51 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1491672     515788          0      13024     404220
-/+ buffers/cache:    1074428     933032
Swap:            0          0          0

Wed Mar 19 13:51:38 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1513000     494460          0      12676     366708
-/+ buffers/cache:    1133616     873844
Swap:            0          0          0

Wed Mar 19 14:05:30 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1976592      30868          0      12220     785672
-/+ buffers/cache:    1178700     828760
Swap:            0          0          0

Wed Mar 19 14:13:52 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1865500     141960          0      14592     633136
-/+ buffers/cache:    1217772     789688
Swap:            0          0          0

Wed Mar 19 14:16:04 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1533432     474028          0       5852     304736
-/+ buffers/cache:    1222844     784616
Swap:            0          0          0

Wed Mar 19 15:05:33 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1545796     461664          0       4100     276756
-/+ buffers/cache:    1264940     742520
Swap:            0          0          0

Wed Mar 19 15:14:07 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1748680     258780          0       8324     427172
-/+ buffers/cache:    1313184     694276
Swap:            0          0          0

And now it's:
             total       used       free     shared    buffers     cached
Mem:       2007460    1784952     222508          0      20644     335360
-/+ buffers/cache:    1428948     578512
Swap:            0          0          0

And a top-twenty memory list of the currently running processes:

top - 15:52:29 up 19:07, 12 users,  load average: 0.33, 0.30, 0.60
Tasks: 298 total,   1 running, 296 sleeping,   1 stopped,   0 zombie
Cpu(s):  1.6%us,  3.3%sy,  0.0%ni, 95.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   2007460k total,  1770748k used,   236712k free,    20304k buffers
Swap:        0k total,        0k used,        0k free,   335036k cached

  PID PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
15974 20  0  655m 207m  28m S  0.0 10.6   3:31.31 firefox
 3980 20  0  378m  63m  10m S  1.3  3.2   1:00.53 gnome-terminal
 2657 20  0  481m  58m 9928 S  2.3  3.0  19:16.03 Xorg
12492 20  0  494m  34m  17m S  0.0  1.8   1:20.52 pidgin
 3535 20  0  336m  22m  12m S  0.0  1.2   0:15.41 gnome-panel
 3571 20  0  265m  16m  10m S  0.0  0.9   0:06.25 nm-applet
 3638 20  0  298m  16m 9296 S  0.0  0.8   0:36.79 wnck-applet
 3546 20  0  458m  16m  10m S  0.0  0.8   1:21.65 gnome-power-man
 3579 20  0  261m  16m 8252 S  0.0  0.8   0:02.65 python
 3532 20  0  200m  15m 8144 S  0.3  0.8   1:14.34 metacity
 3754 20  0  325m  14m 9856 S  0.0  0.7   0:00.42 mixer_applet2
 3909 20  0  243m  14m 7988 S  0.0  0.7   0:06.13 notification-da
 3706 20  0  330m  14m 9764 S  0.0  0.7   0:01.40 clock-applet
 3534 20  0  449m  13m 9884 S  0.0  0.7   0:00.92 nautilus
 3540 20  0  250m  12m 8616 S  0.3  0.6   0:07.30 pk-update-icon
 3708 20  0  300m  12m 7940 S  0.0  0.6   0:03.15 gnome-keyboard-
 3752 20  0  290m  11m 8028 S  0.0  0.6   0:00.27 gnome-brightnes
 3553 20  0  286m  11m 8144 S  0.0  0.6   0:04.29 krb5-auth-dialo
 3761 20  0  270m  11m 7968 S  0.0  0.6   0:23.02 cpufreq-applet
 2898 20  0  328m  10m 8240 S  0.0  0.5   0:07.95 gnome-settings-
 3702 20  0  282m 9436 7460 S  0.0  0.5   0:00.25 drivemount_appl
 3749 20  0  288m 8848 6924 S  0.0  0.4   0:00.11 gnome-inhibit-a
 3756
Re: [kvm-devel] Qemu-kvm is leaking my memory ???
Zdenek Kabelac wrote:
> 2008/3/16, Avi Kivity <[EMAIL PROTECTED]>:
>> Zdenek Kabelac wrote:
>>> Hello
>>>
>>> Recently I'm using qemu-kvm on a fedora-rawhide box with my own
>>> kernels (with many debug options). I've noticed that over time my
>>> memory seems to disappear somewhere. Here is my memory trace after
>>> boot and some time of work - thus memory should be populated.
>>
>> No idea how these should add up. What does 'free' say?
>
> Ok - here goes my free log (I'm logging free prior to each start of my
> qemu-kvm), so here is the log for this afternoon. (I'm running the same
> apps all the time - except during kernel compilation I'm reading some
> www pages - and working with gnome-terminal - so slightly more memory
> could have been eaten by them - but not in the range of hundreds of MB.)

Can you make sure that it isn't other processes? Go to runlevel 3 and
start the VM using vnc or X-over-network?

What host kernel and kvm version are you using?

--
error compiling committee.c: too many arguments to function
Re: [kvm-devel] Qemu-kvm is leaking my memory ???
2008/3/19, Avi Kivity <[EMAIL PROTECTED]>:
> Zdenek Kabelac wrote:
>> [...]
>
> Can you make sure that it isn't other processes? Go to runlevel 3 and
> start the VM using vnc or X-over-network?

Hmmm, I'm not really sure what you mean by external VNC - I could grab
this info once I finish some work today and kill all the apps running in
the system - so most of the memory should be released - I will go to
single mode for this - is this what you want?

> What host kernel and kvm version are you using?

Usually running a quite up-to-date Linus git tree kernel - both host and
guest are running 2.6.25-rc6 kernels, compiled with gcc-4.3.

kvm itself is the fedora rawhide package: kvm-63-2.fc9.x86_64
(somehow I have troubles compiling the kvm-userspace git tree, as libkvm
mismatches my kernel version - which probably means I would have to use
the kvm linux kernel tree to use kvm-userspace??)

(actually, why is gcc-3.x preferred when this compiler is IMHO far more
broken than 4.3?)

I think I've already posted my configuration several times - if it's
needed I'll repost it again. I have many debugging features enabled in
my kernels (yet no idea how to use them to detect my lost memory :))

Zdenek
Re: [kvm-devel] Qemu-kvm is leaking my memory ???
Zdenek Kabelac wrote:
> 2008/3/19, Avi Kivity <[EMAIL PROTECTED]>:
>> [...]
>>
>> Can you make sure that it isn't other processes? Go to runlevel 3 and
>> start the VM using vnc or X-over-network?
>
> Hmmm, I'm not really sure what you mean by external VNC - I could grab
> this info once I finish some work today and kill all the apps running
> in the system - so most of the memory should be released - I will go
> to single mode for this - is this what you want?

The -vnc switch, so there's no local X server. A remote X server should
be fine as well. Use runlevel 3, which means network but no local X
server.

>> What host kernel and kvm version are you using?
>
> Usually running a quite up-to-date Linus git tree kernel - both host
> and guest are running 2.6.25-rc6 kernels, compiled with gcc-4.3.
>
> kvm itself is the fedora rawhide package: kvm-63-2.fc9.x86_64
> (somehow I have troubles compiling the kvm-userspace git tree, as
> libkvm mismatches my kernel version - which probably means I would
> have to use the kvm linux kernel tree to use kvm-userspace??)

If running kvm.git, do ./configure --with-patched-kernel. Please report
kvm compile errors.

> (actually, why is gcc-3.x preferred when this compiler is IMHO far
> more broken than 4.3?)

qemu requires gcc 3. The kernel may be compiled with any gcc that it
supports.

--
error compiling committee.c: too many arguments to function
Re: [kvm-devel] Qemu-kvm is leaking my memory ???
Zdenek Kabelac wrote:
> Hello
>
> Recently I'm using qemu-kvm on a fedora-rawhide box with my own
> kernels (with many debug options). I've noticed that over time my
> memory seems to disappear somewhere. Here is my memory trace after
> boot and some time of work - thus memory should be populated.

No idea how these should add up. What does 'free' say?

--
error compiling committee.c: too many arguments to function
[kvm-devel] Qemu-kvm is leaking my memory ???
Hello

Recently I'm using qemu-kvm on a fedora-rawhide box with my own kernels
(with many debug options). I've noticed that over time my memory seems
to disappear somewhere.

Here is my memory trace after boot and some time of work - thus memory
should be populated:

MemTotal:      2007460 kB
MemFree:        618772 kB
Buffers:         46044 kB
Cached:         733156 kB
SwapCached:          0 kB
Active:         613384 kB
Inactive:       541844 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:             148 kB
Writeback:           0 kB
AnonPages:      376152 kB
Mapped:          67184 kB
Slab:            80340 kB
SReclaimable:    50284 kB
SUnreclaim:      30056 kB
PageTables:      27976 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   1003728 kB
Committed_AS:   810968 kB
VmallocTotal: 34359738367 kB
VmallocUsed:     71244 kB
VmallocChunk: 34359666419 kB

618772 + 46044 + 733156 + 148 + 376152 + 67184 + 80340 + 50284 + 30056
+ 27976 = 2030112, i.e. about 2 GB (though I could be wrong and adding
something improperly).

And this memory listing is from when I had worked during the day with
qemu-kvm, doing something like 30-50 qemu restarts. Before I rebooted
the machine I killed nearly all running tasks (i.e. no X server, most
of the services turned off):

MemTotal:      2007416 kB
MemFree:        652412 kB
Buffers:             7 kB
Cached:         607144 kB
SwapCached:          0 kB
Active:         571464 kB
Inactive:       709796 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:               0 kB
Writeback:           0 kB
AnonPages:        6408 kB
Mapped:           4844 kB
Slab:            52620 kB
SReclaimable:    32752 kB
SUnreclaim:      19868 kB
PageTables:       1468 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   1003708 kB
Committed_AS:    33988 kB
VmallocTotal: 34359738367 kB
VmallocUsed:     68152 kB
VmallocChunk: 34359668731 kB

I'd have expected much more free memory here, and I definitely do not
see how this could add up to my 2 GB of memory:

652412 + 7 + 607144 + 6408 + 4844 + 52620 + 32752 + 19868 + 1468 =
1447516, i.e. about 1.4 GB - so where is my 600 MB piece of memory
hiding?

Zdenek