Anthony Liguori wrote:
> What I don't understand is how we can have something like
> mmu_unshadow() called automatically when an mmap() is initiated from
> userspace. We could just add an ioctl() to do it from userspace, but
> I think it would be nicer if it Just Worked.
Anthony Liguori wrote:
> That's not quite what I was wondering.
>
> When you do an madvise() in userspace, the result is that when that
> memory is accessed again, Linux will demand-fault in a zero page and
> COW it appropriately. If we do madvise() on the VA representing
> guest physical memory
I've been playing around with these patches. If I do an
madvise(MADV_DONTNEED) in userspace, when I close the VM, I get the
following bug. My knowledge of the mm is limited, but since
madvise(MADV_DONTNEED) effectively does a zap_page_range() I wonder if
we're lacking the necessary callback to
On Mon, 2007-10-15 at 11:13 +0200, Carsten Otte wrote:
> Izik Eidus wrote:
> > this patch allows guest memory that is not shadowed to be swapped out.
> This patch has greatly improved since I last read the swapping code.
> While not having had time for a deep review, it looks very clean and
> sane to me when scrolling over.
---
Anthony Liguori wrote:
> Very nice!
>
> I've tested this series (with your new 3/4) with win2k, winxp, ubuntu
> 7.10, and opensuse. Everything seemed to work just fine.

Spoke too soon, found the following in dmesg:

[35078.913071] BUG: scheduling while atomic: qemu-system-x86/0x1001/21612
Very nice!
I've tested this series (with your new 3/4) with win2k, winxp, ubuntu
7.10, and opensuse. Everything seemed to work just fine.
I also was able to create four 1G VMs on my 2G laptop :-) That was very
neat.
Regards,
Anthony Liguori
Izik Eidus wrote:
This patch allows guest memory that is not shadowed to be swapped out.

To make it most effective you should run with -kvm-shadow-memory 1
(which will make your machine slow).
With -kvm-shadow-memory 1, a guest with 3 GB of memory can shrink to
just 32 MB on the physical host!

When not using -kvm-shadow-memory,