I've been playing around with these patches.  If I do an
madvise(MADV_DONTNEED) in userspace and then close the VM, I get the
following bug.  My knowledge of the mm is limited, but since
madvise(MADV_DONTNEED) effectively does a zap_page_range(), I wonder if
we're lacking the necessary callback to also remove any GPAs covered by
that range from the shadow page cache.
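
For reference, here is roughly the userspace pattern I mean (a minimal
standalone sketch, not a full reproducer: the sizes and offsets are made
up, and in the real case the mapping is the one registered with KVM as
guest RAM, so the zapped pages may still be referenced by shadow page
table entries):

/* Sketch only: take an anonymous mapping standing in for guest RAM,
 * fault it in, then drop part of it with MADV_DONTNEED. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t ram_size = 128 << 20;            /* pretend guest RAM: 128 MB */
    void *ram = mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ram == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memset(ram, 0xaa, ram_size);            /* fault the pages in */

    /* Throw away a 16 MB chunk in the middle of "guest RAM".  This goes
     * through zap_page_range(), but nothing tells KVM to drop the
     * corresponding GPAs from the shadow page cache. */
    if (madvise((char *)ram + (32 << 20), 16 << 20, MADV_DONTNEED) < 0) {
        perror("madvise");
        return 1;
    }

    munmap(ram, ram_size);
    return 0;
}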

Regards,

Anthony Liguori

[  860.724555] rmap_remove: ffff81004c48cf00 506d1025 0->BUG
[  860.724603] ------------[ cut here ]------------
[  860.724606] kernel BUG at 
/home/anthony/git/fresh/kvm-userspace/kernel/mmu.c:433!
[  860.724608] invalid opcode: 0000 [1] SMP
[  860.724611] CPU 0
[  860.724613] Modules linked in: kvm_intel kvm i915 drm af_packet 
rfcomm l2cap bluetooth nbd thinkpad_acpi ppdev acpi_cpufreq 
cpufreq_userspace cpufreq_conservative cpufreq_powersave cpufreq_stats 
cpufreq_ondemand freq_table ac bay battery container video sbs button 
dock ipv6 bridge ipt_REJECT xt_state xt_tcpudp iptable_filter 
ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack 
nfnetlink ip_tables x_tables deflate zlib_deflate twofish twofish_common 
camellia serpent blowfish des cbc aes xcbc sha256 sha1 crypto_null 
af_key sbp2 lp joydev arc4 ecb blkcipher snd_hda_intel snd_pcm_oss 
snd_mixer_oss iwl4965 snd_pcm iwlwifi_mac80211 pcmcia snd_seq_dummy 
sdhci snd_seq_oss cfg80211 parport_pc parport serio_raw psmouse mmc_core 
pcspkr yenta_socket rsrc_nonstatic pcmcia_core intel_agp snd_seq_midi 
snd_rawmidi snd_seq_midi_event snd_seq shpchp pci_hotplug snd_timer 
snd_seq_device snd soundcore snd_page_alloc evdev ext3 jbd mbcache sg 
sr_mod cdrom sd_mod usbhid hid ata_piix ata_generic libata scsi_mod 
ohci1394 ieee1394 ehci_hcd e1000 uhci_hcd usbcore dm_mirror dm_snapshot 
dm_mod thermal processor fan fuse apparmor commoncap
[  860.724688] Pid: 7372, comm: qemu-system-x86 Not tainted 
2.6.22-14-generic #1
[  860.724690] RIP: 0010:[<ffffffff88384ef3>]  [<ffffffff88384ef3>] 
:kvm:rmap_remove+0xb3/0x190
[  860.724704] RSP: 0018:ffff81004f079d28  EFLAGS: 00010292
[  860.724706] RAX: 0000000000000040 RBX: ffff81004ccc9580 RCX: 
ffffffff80534b68
[  860.724709] RDX: ffffffff80534b68 RSI: 0000000000000086 RDI: 
ffffffff80534b60
[  860.724711] RBP: ffff81004c48cf00 R08: 0000000000000000 R09: 
0000000000000000
[  860.724714] R10: ffffffff805ce880 R11: ffffffff8021e2c0 R12: 
ffff81004cda0000
[  860.724716] R13: ffff81004ccc9580 R14: ffff81004cda0000 R15: 
000ffffffffff000
[  860.724719] FS:  00002b55f14e6d30(0000) GS:ffffffff80560000(0000) 
knlGS:0000000000000000
[  860.724721] CS:  0010 DS: 002b ES: 002b CR0: 000000008005003b
[  860.724724] CR2: 00002b55f0129680 CR3: 0000000000201000 CR4: 
00000000000026e0
[  860.724726] Process qemu-system-x86 (pid: 7372, threadinfo 
ffff81004f078000, task ffff810056d974a0)
[  860.724728] Stack:  ffff81004c48cf00 00000000000001e0 
0000000000000000 ffffffff883851e4
[  860.724734]  ffff8100672cf650 ffff81004c63a000 ffff81004c63a000 
ffff81004cda0000
[  860.724739]  ffff8100512056a8 ffff810050c75100 ffff81004dfb9a90 
ffffffff88385453
[  860.724743] Call Trace:
[  860.724755]  [<ffffffff883851e4>] :kvm:kvm_mmu_zap_page+0x214/0x250
[  860.724769]  [<ffffffff88385453>] :kvm:free_mmu_pages+0x23/0x50
[  860.724777]  [<ffffffff8838549d>] :kvm:kvm_mmu_destroy+0x1d/0x70
[  860.724788]  [<ffffffff883819e1>] :kvm:kvm_vcpu_uninit+0x11/0x30
[  860.724795]  [<ffffffff8839fc7b>] :kvm_intel:vmx_free_vcpu+0x5b/0x70
[  860.724803]  [<ffffffff88382d4a>] :kvm:kvm_destroy_vm+0xca/0x130
[  860.724813]  [<ffffffff88382f60>] :kvm:kvm_vm_release+0x10/0x20
[  860.724820]  [<ffffffff8029a3c1>] __fput+0xc1/0x1e0
[  860.724834]  [<ffffffff8837f9ea>] :kvm:kvm_vcpu_release+0x1a/0x30
[  860.724838]  [<ffffffff8029a3c1>] __fput+0xc1/0x1e0
[  860.724848]  [<ffffffff80297334>] filp_close+0x54/0x90
[  860.724854]  [<ffffffff80237c8d>] put_files_struct+0xed/0x120
[  860.724864]  [<ffffffff80239051>] do_exit+0x1a1/0x940
[  860.724878]  [<ffffffff8023981c>] do_group_exit+0x2c/0x80
[  860.724884]  [<ffffffff80209e8e>] system_call+0x7e/0x83
[  860.724899]
[  860.724900]
[  860.724901] Code: 0f 0b eb fe 48 89 c7 48 83 e7 fe 0f 84 a1 00 00 00 
45 31 c0
[  860.724911] RIP  [<ffffffff88384ef3>] :kvm:rmap_remove+0xb3/0x190
[  860.724919]  RSP <ffff81004f079d28>
[  860.724921] Fixing recursive fault but reboot is needed!


Izik Eidus wrote:
> these patches allow the guest's non-shadowed memory to be swapped out.
>
> to make it most effective you should run -kvm-shadow-memory 1 (which
> will make your machine slow).
> with -kvm-shadow-memory 1, a guest with 3 GB of memory can shrink to
> just 32 MB on the physical host!
>
> when not using -kvm-shadow-memory, I saw a 4100 MB machine get as low
> as 168 MB on the physical host (not as bad as I thought it would be,
> and surely not as bad as it could be with 41 MB of shadow pages :))
>
>
> it seems to be very stable; it didn't crash on me once, and I was able
> to run:
> 2 Windows XP guests with 3 GB each + a 5 GB Linux guest
>
> and
> 2 Windows XP guests with 4.1 GB each plus 2 Windows XP guests with 2 GB each.
>
> a few things to note:
> ignore the ugly messages in dmesg for now; they are due to the fact that
> gfn_to_page tries to sleep while local interrupts are disabled (we have
> to split some emulator functions so it won't do that)
>
> I also saw an issue with the new rmap code on the Fedora 7 live CD: for
> some reason, in nonpaging mode rmap_remove gets called about 50 times
> less often than it needs to be.
> it doesn't happen with other Linux guests; I need to check this... (for
> now it means you might leak about 200k of memory for each Fedora 7 live
> CD you are running)
>
> also note that kvm now loads much faster, because no memset over all
> the memory is needed (because gfn_to_page gets called at run time)
>
> (avi and dor, note that this patch includes a small fix for a bug in
> the patch that I sent you)
>

