Anthony Liguori wrote:
> Anthony Liguori wrote:
>> Very nice!
>>
>> I've tested this series (with your new 3/4) with win2k, winxp, ubuntu 
>> 7.10, and opensuse.  Everything seemed to work just fine.
>
> Spoke too soon, found the following in dmesg:
>
> [35078.913071] BUG: scheduling while atomic: qemu-system-x86/0x10000001/21612
> [35078.913077]
> [35078.913079] Call Trace:
> [35078.913112]  [<ffffffff804301c5>] thread_return+0x21e/0x6c9
> [35078.913129]  [<ffffffff8027a00d>] zone_statistics+0x7d/0x80
> [35078.913139]  [<ffffffff80273691>] get_page_from_freelist+0x441/0x5b0
> [35078.913168]  [<ffffffff8022ffec>] __cond_resched+0x1c/0x50
> [35078.913174]  [<ffffffff804306f2>] cond_resched+0x32/0x40
> [35078.913181]  [<ffffffff8024e4d9>] down_read+0x9/0x20
> [35078.913199]  [<ffffffff8839a87c>] :kvm:gfn_to_page+0x4c/0x130
> [35078.913207]  [<ffffffff8027b76d>] vm_normal_page+0x3d/0xc0
> [35078.913230]  [<ffffffff8839ff94>] :kvm:gpa_to_hpa+0x24/0x70
> [35078.913249]  [<ffffffff883a007e>] :kvm:paging32_set_pte_common+0x9e/0x2b0
> [35078.913285]  [<ffffffff883a02d9>] :kvm:paging32_set_pte+0x49/0x50
> [35078.913308]  [<ffffffff883a091d>] :kvm:kvm_mmu_pte_write+0x33d/0x3b0
> [35078.913350]  [<ffffffff883a0ca2>] :kvm:paging32_walk_addr+0x292/0x310
> [35078.913383]  [<ffffffff883a0e30>] :kvm:paging32_page_fault+0xc0/0x300
> [35078.913399]  [<ffffffff883a294c>] :kvm:x86_emulate_insn+0x11c/0x4190
> [35078.913448]  [<ffffffff883ba36b>] :kvm_intel:handle_exception+0x21b/0x2a0
> [35078.913474]  [<ffffffff8839ca5c>] :kvm:kvm_vcpu_ioctl+0xddc/0x1130
> [35078.913488]  [<ffffffff8022cacc>] task_rq_lock+0x4c/0x90
> [35078.913494]  [<ffffffff8022c599>] __activate_task+0x29/0x50
> [35078.913504]  [<ffffffff8022f30c>] try_to_wake_up+0x5c/0x3f0
> [35078.913511]  [<ffffffff8025240f>] futex_wait+0x2df/0x3c0
> [35078.913521]  [<ffffffff8022cacc>] task_rq_lock+0x4c/0x90
> [35078.913528]  [<ffffffff8022c599>] __activate_task+0x29/0x50
> [35078.913545]  [<ffffffff8022c307>] __wake_up_common+0x47/0x80
> [35078.913561]  [<ffffffff8022ca03>] __wake_up+0x43/0x70
> [35078.913575]  [<ffffffff80326971>] __up_read+0x21/0xb0
> [35078.913585]  [<ffffffff802528a0>] futex_wake+0xd0/0xf0
> [35078.913617]  [<ffffffff80240810>] __dequeue_signal+0x110/0x1d0
> [35078.913633]  [<ffffffff8023ffce>] recalc_sigpending+0xe/0x30
> [35078.913638]  [<ffffffff8024212c>] dequeue_signal+0x5c/0x190
> [35078.913662]  [<ffffffff802a6e05>] do_ioctl+0x35/0xe0
> [35078.913675]  [<ffffffff802a6f24>] vfs_ioctl+0x74/0x2d0
> [35078.913680]  [<ffffffff8023ffce>] recalc_sigpending+0xe/0x30
> [35078.913684]  [<ffffffff80240057>] sigprocmask+0x67/0xf0
> [35078.913697]  [<ffffffff802a7215>] sys_ioctl+0x95/0xb0
> [35078.913715]  [<ffffffff80209e8e>] system_call+0x7e/0x83
> [35078.913743]
>
This is funny, but the message is actually "ok"; I mentioned it when I
sent the patch. It happens because KVM disables local interrupts in
this path, and some emulator functions get called inside it and do
gfn_to_page(). We have to split those functions.
So it isn't really a bug in the swapping: get_user_pages() does
cond_resched(), and once we split the emulator functions this message
will go away :)
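
Roughly, the problematic pattern looks like this (a minimal sketch, not
the actual kvm code; the function and lock names are illustrative):

/*
 * Minimal sketch of the pattern behind "scheduling while atomic".
 * Not the real kvm code; names here are made up for illustration.
 */
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/irqflags.h>
#include <linux/rwsem.h>

static DECLARE_RWSEM(fake_mmap_sem);

/* Stands in for gfn_to_page(): it may sleep, because
 * get_user_pages() can call cond_resched(). */
static void lookup_guest_page(void)
{
	might_sleep();
	down_read(&fake_mmap_sem);
	/* ... get_user_pages() would run here ... */
	up_read(&fake_mmap_sem);
}

static void emulator_write_pte(void)
{
	local_irq_disable();	/* atomic context begins */
	lookup_guest_page();	/* BUG: may schedule while atomic */
	local_irq_enable();
}

/* The planned split: resolve the page while we may still sleep,
 * then enter the interrupts-off section with the page in hand. */
static void emulator_write_pte_split(void)
{
	lookup_guest_page();	/* sleepable context: safe */
	local_irq_disable();
	/* ... use the already-resolved page ... */
	local_irq_enable();
}
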
> Regards,
>
> Anthony Liguori
>
>> I also was able to create four 1G VMs on my 2G laptop :-)  That was 
>> very neat.
>>
>> Regards,
>>
>> Anthony Liguori
>>
>> Izik Eidus wrote:
>>> These patches allow the guest's non-shadowed memory to be swapped out.
>>>
>>> To make it most effective you should run with -kvm-shadow-memory 1
>>> (which will make your machine slow). With -kvm-shadow-memory 1, a
>>> guest with 3GB of memory can shrink to just 32MB on the physical host!
>>>
>>> When not using -kvm-shadow-memory, I saw a 4100MB machine get as low
>>> as 168MB on the physical host (not as bad as I thought it would be,
>>> and surely not as bad as it can be with 41MB of shadow pages :))
>>>
>>>
>>> It seems to be very stable; it never crashed on me, and I was able
>>> to run:
>>> two 3GB Windows XP guests + one 5GB Linux guest
>>>
>>> and
>>> two 4.1GB Windows XP guests plus two 2GB Windows XP guests.
>>>
>>> A few things to note:
>>> Ignore the ugly messages in dmesg for now; they are due to the fact
>>> that gfn_to_page() tries to sleep while local interrupts are disabled
>>> (we have to split some emulator functions so they won't do that).
>>>
>>> I also saw an issue with the new rmap on the Fedora 7 live CD: for
>>> some reason, in nonpaging mode rmap_remove() gets called about 50
>>> times less often than it needs to be. It doesn't happen with other
>>> Linux guests; I need to check this... (For now it means you might
>>> leak about 200k of memory for each Fedora 7 live CD you run.)
>>>
>>> Also note that kvm now loads much faster, because no memset over
>>> all of guest memory is needed (gfn_to_page() gets called at run
>>> time; see the sketch below).
>>>
>>> (Avi and Dor, note that this patch includes a small fix to a bug in
>>> the patch I sent you.)
>>>
>>
>>
>
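
To illustrate the lazy-allocation point quoted above: a minimal
userspace sketch (made-up names, not the real kvm data structures) of
eager vs. lazy guest page setup:

/*
 * Sketch of the startup difference: eager setup zeroes every guest
 * page at init, lazy setup defers allocation to first use.
 * Illustrative only; not the actual kvm code.
 */
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

struct guest_mem {
	void **pages;		/* one slot per guest frame number */
	size_t npages;
};

/* Old behaviour: touch all guest memory up front (slow startup). */
static void init_eager(struct guest_mem *gm)
{
	for (size_t gfn = 0; gfn < gm->npages; gfn++) {
		gm->pages[gfn] = malloc(PAGE_SIZE);
		memset(gm->pages[gfn], 0, PAGE_SIZE);
	}
}

/* New behaviour: resolve a frame only when it is first touched, so
 * startup does no per-page work and untouched memory stays free. */
static void *gfn_to_page_lazy(struct guest_mem *gm, size_t gfn)
{
	if (!gm->pages[gfn])
		gm->pages[gfn] = calloc(1, PAGE_SIZE);	/* zeroed on demand */
	return gm->pages[gfn];
}
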

