Zhang, Xiantao wrote:
Avi Kivity wrote:
Zhang, Xiantao wrote:
+#define TO_LEGACY_IO(addr) ((((addr) & 0x3ff) >> 12 << 2) | ((addr) & 0x3))
Please change to a function. Other than that, patch looks good.
Thanks ! Attached. :)
Xiantao
From: Zhang Xiantao [EMAIL
Unfortunately, it doesn't apply on tip. Could you attach the diff?
Thanks
Xiantao
-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
Hi tony,
This is the revised patch per comments.
We don't use index as a parameter of ia64_ptr_entry,
because it would be inconvenient for the user:
1. the user would need to remember the index.
2. the user may want one purge to cover two previous inserts.
We add some optimization for
On Thu, Jan 24, 2008 at 06:21:56PM +0200, Avi Kivity wrote:
Bernhard Kaindl wrote:
I did not test this patch as I did not find documentation on how to run the
test cases and I could not find a make target to run them from make.
make -C user test_cases
user/kvmctl
From: Zhang Xiantao [EMAIL PROTECTED]
Date: Thu, 31 Jan 2008 12:03:39 +0800
Subject: [PATCH] Appoint kvm/ia64 Maintainers
Signed-off-by: Anthony Xu [EMAIL PROTECTED]
Signed-off-by: Xiantao Zhang [EMAIL PROTECTED]
---
MAINTAINERS |9 +
1 files changed, 9 insertions(+), 0 deletions(-)
On Thu, Jan 31, 2008 at 08:50:01AM +0200, Avi Kivity wrote:
This is surprising. pagefault_disable() is really a preempt_disable(), and
kvm_read_guest_atomic() should only be called from atomic contexts (with
preemption already disabled), no?
_spin_lock calls preempt_disable() and that's the
From: Zhang Xiantao [EMAIL PROTECTED]
Date: Tue, 29 Jan 2008 14:42:44 +0800
Subject: [PATCH] kvm/ia64: add Kconfig for kvm configuration.
Kconfig adds kvm/ia64 configurations at kernel configuration
time.
Signed-off-by: Xiantao Zhang [EMAIL PROTECTED]
---
arch/ia64/kvm/Kconfig | 43
From: Zhang Xiantao [EMAIL PROTECTED]
Date: Tue, 29 Jan 2008 14:43:32 +0800
Subject: [PATCH] kvm/ia64: Add Makefile for kvm files compile.
Adds a Makefile to compile the kvm files.
Signed-off-by: Xiantao Zhang [EMAIL PROTECTED]
---
arch/ia64/kvm/Makefile | 61
From: Zhang Xiantao [EMAIL PROTECTED]
Date: Tue, 29 Jan 2008 14:35:44 +0800
Subject: [PATCH] kvm/ia64: add optimization for some virtualization
faults
optvfault.S adds optimization for some performance-critical
virtualization faults.
Signed-off-by: Anthony Xu [EMAIL PROTECTED]
Signed-off-by:
From: Zhang Xiantao [EMAIL PROTECTED]
Date: Tue, 29 Jan 2008 17:27:06 +0800
Subject: [PATCH] README: How to boot up guests on kvm/ia64
Guide: How to boot up guests on kvm/ia64
Signed-off-by: Xiantao Zhang [EMAIL PROTECTED]
---
arch/ia64/kvm/README | 72
From: Zhang Xiantao [EMAIL PROTECTED]
Date: Tue, 29 Jan 2008 14:40:41 +0800
Subject: [PATCH] kvm/ia64: Generate offset values for assembly code use.
asm-offsets.c will generate offset values used by assembly code
for some fields of special structures.
Signed-off-by: Anthony Xu [EMAIL PROTECTED]
Carlo Marcelo Arenas Belon wrote:
[I forgot to reply to the this:]
--- kvm-60/user/test/x86/access.c
+++ kvm-60/user/test/x86/access.c 2008/01/24 15:14:16
@@ -1,6 +1,7 @@
#include smp.h
#include printf.h
+#include string.h
#define true 1
#define false 0
@@ -569,7 +570,7 @@
On Wed, Jan 30, 2008 at 05:46:21PM -0800, Christoph Lameter wrote:
Well the GRU uses follow_page() instead of get_user_pages. Performance is
a major issue for the GRU.
GRU is an external TLB; we have to allocate RAM instead, but we do it
through the regular userland paging mechanism.
From: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: Thu, 17 Jan 2008 14:03:04 +0800
Subject: [PATCH] kvm/ia64: Export some symbols for module use.
Export empty_zero_page, ia64_sal_cache_flush, ia64_sal_freq_base
in this patch.
Signed-off-by: Xiantao Zhang [EMAIL PROTECTED]
---
From: Zhang Xiantao [EMAIL PROTECTED]
Date: Tue, 29 Jan 2008 13:31:55 +0800
Subject: [PATCH] kvm/ia64: VMM module interfaces.
vmm.c adds the interfaces to the kvm module and initializes the global
data area.
Signed-off-by: Xiantao Zhang [EMAIL PROTECTED]
---
arch/ia64/kvm/vmm.c | 57
From: Zhang Xiantao [EMAIL PROTECTED]
Date: Tue, 29 Jan 2008 13:15:05 +0800
Subject: [PATCH] kvm/ia64: Add kvm sal/pal virtualization support.
Some sal/pal calls from guest firmware are trapped into kvm
for virtualization.
Signed-off-by: Xiantao Zhang [EMAIL PROTECTED]
---
arch/ia64/kvm/kvm_fw.c
From: Xiantao Zhang [EMAIL PROTECTED]
Date: Thu, 31 Jan 2008 15:38:36 +0800
Subject: [PATCH] kvm/ia64: Add header files for kvm/ia64.
Three header files are added:
asm-ia64/kvm.h
asm-ia64/kvm_host.h
asm-ia64/kvm_para.h
Signed-off-by: Xiantao Zhang [EMAIL PROTECTED]
---
include/asm-ia64/kvm.h
From: Zhang Xiantao [EMAIL PROTECTED]
Date: Thu, 31 Jan 2008 17:10:52 +0800
Subject: [PATCH] Add API for allocating TR resource.
Dynamic TR resource should be managed in a uniform way.
Signed-off-by: Xiantao Zhang [EMAIL PROTECTED]
Signed-off-by: Anthony Xu [EMAIL PROTECTED]
---
From: Zhang Xiantao [EMAIL PROTECTED]
Date: Tue, 29 Jan 2008 14:26:29 +0800
Subject: [PATCH] kvm/ia64: Add TLB virtualization support.
vtlb.c includes tlb/VHPT virtualization.
Signed-off-by: Anthony Xu [EMAIL PROTECTED]
Signed-off-by: Xiantao Zhang [EMAIL PROTECTED]
---
arch/ia64/kvm/vtlb.c | 606
Hi, Avi/Tony
We have rebased kvm/ia64 code to the latest kvm. In this version, we
have fixed coding style issues, and all patches pass checkpatch.pl,
except one assembly header file, which is copied from the kernel, so we
didn't change its issues.
Compared with last version, we implemented
Andrea Arcangeli wrote:
On Thu, Jan 31, 2008 at 08:50:01AM +0200, Avi Kivity wrote:
This is surprising. pagefault_disable() is really a preempt_disable(), and
kvm_read_guest_atomic() should only be called from atomic contexts (with
preemption already disabled), no?
_spin_lock
Carlo Marcelo Arenas Belon wrote:
On Thu, Jan 24, 2008 at 06:21:56PM +0200, Avi Kivity wrote:
Bernhard Kaindl wrote:
I did not test this patch as I did not find documentation on how to run the
test cases and I could not find a make target to run them from make.
make
On Wed, Jan 30, 2008 at 08:57:52PM -0800, Christoph Lameter wrote:
@@ -211,7 +212,9 @@ asmlinkage long sys_remap_file_pages(uns
spin_unlock(&mapping->i_mmap_lock);
}
+ mmu_notifier(invalidate_range_begin, mm, start, start + size, 0);
err = populate_range(mm,
The following series fixes the last remaining warning from the testsuite in
x86 and together with it the make rules for building the other test case
affected by this changes :
PATCH 1/2 : make smp_init parameter be a function that returns int
PATCH 2/2 : fix building smp.flat
Tested in
Fixes :
test/x86/access.c:577: warning: passing argument 1 of 'smp_init' from
incompatible pointer type
Signed-off-by: Carlo Marcelo Arenas Belon [EMAIL PROTECTED]
---
user/test/x86/lib/smp.c |4 ++--
user/test/x86/lib/smp.h |2 +-
user/test/x86/smptest.c |3 ++-
3 files
On Thu, Jan 31, 2008 at 12:34:37PM +0200, Avi Kivity wrote:
I see. Will merge that patch, thanks.
Thanks. BTW, with this fix I finally got KVM swapping 100% stable
on my test system.
However I had to rollback everything: I'm using my last mmu notifier
patch (not Christoph's ones), my mmu
On Thu, Jan 31, 2008 at 01:58:42PM +0100, Andrea Arcangeli wrote:
It might also be something stale in the buildsystem (perhaps a distcc
of ccache glitch?), I also cleared 1G of ccache just to be sure in
My build problem might have been related to the fact the
kvm-userland/kernel/include
Zhang, Xiantao wrote:
Unfortunately, it doesn't apply on tip. Could you attach the diff?
It works for me:
$ git clone git://git.kernel.org/pub/scm/virt/kvm/kvm-userspace.git
remote: Generating pack...
remote: Done counting 37449 objects.
remote: Deltifying 37449 objects...
remote: 100%
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:
I doubt Christoph's V4 was close to final yet, GRU wasn't covered at
all yet, not even mremap was covered at all (nor XPMEM nor GRU) in V4.
The GRU not covered? Why would you think that way? mremap is covered
because of the callbacks in
Christian Borntraeger wrote:
Rusty,
currently virtio_blk uses one major number per device. While this works
quite well on most systems it is wasteful and will exhaust major numbers
on larger installations.
This patch allocates a major number on init and will use 16 minor numbers
for each
Talking to Robin and Jack we found that we still have a hole during fork.
Fork may set a pte writeprotect. At that point the remote ptes are
not marked readonly(!). Remote writes may occur to pages that are marked
readonly locally without this patch.
mmu_notifier: Provide invalidate_range on
mmu_notifier: Reduce size of mm_struct if !CONFIG_MMU_NOTIFIER
Andrea and Peter had a concern about this.
Use an #ifdef to make the mmu_notifer_head structure empty if we have
no notifier. That allows the use of the structure in inline functions
(which allows parameter verification even if
mmu_notifier: Move mmu_notifier_release up to get rid of invalidate_all()
It seems that it is safe to call mmu_notifier_release() before we tear down
the pages and the vmas since we are the only executing thread.
mmu_notifier_release
can then also tear down all its external ptes and thus we can
# HG changeset patch
# User Jerone Young [EMAIL PROTECTED]
# Date 1201818508 21600
# Node ID 739668b2f1913a381899d82912517a6568b6f30c
# Parent 5b553559aa641bf36b190541f775396ecdbf0e78
Update Copyrights on PowerPC KVM Qemu code.
I missed a copyright that needed to be in one file. Also I placed a
Previously, the BIOS would probe the CPUs for SMP guests. This tends to be
very unreliable because of startup timing issues. By passing the number of
CPUs in the CMOS, the BIOS can detect the number of CPUs much more reliably.
Index: qemu/hw/pc.c
KVM supports more than 2GB of memory for x86_64 hosts. The following patch
fixes a number of type related issues where int's were being used when they
shouldn't have been. It also introduces CMOS support so the BIOS can build
the appropriate e820 tables.
Index: qemu/cpu-all.h
KVM supports the ability to use ACPI to shutdown guests. In order to enable
this requires some fixes to be able to generate the SCI interrupt and the
appropriate plumbing.
Index: qemu/hw/acpi.c
===
--- qemu.orig/hw/acpi.c 2008-01-30
KVM requires that any ROM memory be registered through a second interface. This
patch refactors the option ROM loading to simplify adding KVM support (which
will follow in the next patch).
Index: qemu/hw/pc.c
===
---
KVM is a Linux interface for providing userspace interfaces for accelerated
virtualization. It has been included since 2.6.20 and supports Intel VT and
AMD-V. Ports are under way for ia64, embedded PowerPC, and s390.
This set of patches provide basic support for KVM in QEMU. It does not
FYI, for the new files introduced, Avi should be following up with a
patch to add Copyrights to the files. They will be licensed under the GPL.
Regards,
Anthony Liguori
Anthony Liguori wrote:
KVM is a Linux interface for providing userspace interfaces for accelerated
virtualization. It has
On Thu, 31 Jan 2008, Christoph Lameter wrote:
pagefault against the main linux page fault, given we already have all
needed serialization out of the PT lock. XPMEM is forced to do that
pt lock cannot serialize with invalidate_range since it is split. A range
requires locking for a series
On Thu, Jan 31, 2008 at 12:18:54PM -0800, Christoph Lameter wrote:
pt lock cannot serialize with invalidate_range since it is split. A range
requires locking for a series of ptes not only individual ones.
The lock I take already protects up to 512 ptes yes. I call
invalidate_pages only across
On Thu, Jan 31, 2008 at 12:21:34PM -0800, Christoph Lameter wrote:
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:
I doubt Christoph's V4 was close to final yet, GRU wasn't covered at
all yet, not even mremap was covered at all (nor XPMEM nor GRU) in V4.
The GRU not covered? Why would you
Bugs item #1883972, was opened at 2008-02-01 00:32
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=1883972&group_id=180599
Please note that this message will contain a full copy of
On Thu, Jan 31, 2008 at 03:09:55PM -0800, Christoph Lameter wrote:
On Thu, 31 Jan 2008, Christoph Lameter wrote:
pagefault against the main linux page fault, given we already have all
needed serialization out of the PT lock. XPMEM is forced to do that
pt lock cannot serialize with
On Thursday 31 January 2008, Anthony Liguori wrote:
KVM supports more than 2GB of memory for x86_64 hosts. The following patch
fixes a number of type related issues where int's were being used when they
shouldn't have been. It also introduces CMOS support so the BIOS can build
the
On Thu, Jan 31, 2008 at 02:01:43PM -0800, Christoph Lameter wrote:
Talking to Robin and Jack we found that we still have a hole during fork.
Fork may set a pte writeprotect. At that point the remote ptes are
not marked readonly(!). Remote writes may occur to pages that are marked
readonly
On Thu, Jan 31, 2008 at 02:21:58PM -0800, Christoph Lameter wrote:
Is this okay for KVM too?
-release isn't implemented at all in KVM, only the list_del generates
complications.
I think current code could be already safe through the mm_count pin,
becasue KVM relies on the fact anybody pinning
-cmos_init(ram_size, above_4g_mem_size, boot_device, hd);
+cmos_init(ram_size, above_4g_mem_size, boot_device, hd, smp_cpus);
smp_cpus is a global variable. Why bother passing it around?
Are the CMOS contents documented anywhere?
Paul
Paul Brook wrote:
On Thursday 31 January 2008, Anthony Liguori wrote:
KVM supports more than 2GB of memory for x86_64 hosts. The following patch
fixes a number of type related issues where int's were being used when they
shouldn't have been. It also introduces CMOS support so the BIOS
Paul Brook wrote:
-cmos_init(ram_size, above_4g_mem_size, boot_device, hd);
+cmos_init(ram_size, above_4g_mem_size, boot_device, hd, smp_cpus);
smp_cpus is a global variable. Why bother passing it around?
True, I'll update the patch
Are the CMOS contents documented
+#define PHYS_RAM_MAX_SIZE (2047 * 1024 * 1024 * 1024ULL)
This seems fairly arbitrary. Why? Any limit is certainly target specific.
On a 32-bit host, a 2GB limit is pretty reasonable since you're limited
in virtual address space. On a 64-bit host, there isn't this
fundamental limit. If
Paul Brook wrote:
+#define PHYS_RAM_MAX_SIZE (2047 * 1024 * 1024 * 1024ULL)
This seems fairly arbitrary. Why? Any limit is certainly target specific.
On a 32-bit host, a 2GB limit is pretty reasonable since you're limited
in virtual address space. On a 64-bit host, there
Are the CMOS contents documented anywhere?
No, but if you have a suggestion of where to document them, I'll add
documentation.
I suggest in or with the BIOS sources.
As we're using a common BIOS it seems a good idea to make sure this kind of
things is coordinated.
Paul
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
I appreciate the review! I hope my entirely bug free and
straightforward #v5 will strongly increase the probability of getting
this in sooner than later. If nothing else, it shows the approach I
prefer to cover GRU/KVM 100%, leaving the overkill
I just noticed that my previous patch hit one of the subtleties that
Blue Swirl warned about. Changing caddr32_t causes the IP header and
IP header overlay to be different sizes, which essentially breaks
networking altogether.
I humbly offer the following patch, which fixes only the easy
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
The GRU not covered? Why would you think that way? mremap is covered
because of the callbacks in unmap_region().
I wouldn't be so sure. ptep_clear_flush is called for a reason and you
have zero range_start _before_ the ptep_clear_flush. If
Blue Swirl wrote:
On 1/30/08, Scott Pakin [EMAIL PROTECTED] wrote:
Zhang, Xiantao wrote:
Scott Pakin wrote:
The attached patch corrects a bug in qemu/slirp/tcp_var.h that
defines the seg_next field in struct tcpcb to be 32 bits wide
regardless of 32/64-bitness. seg_next is assigned a
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
GRU. Thanks to the PT lock this remains a totally obviously safe
design and it requires zero additional locking anywhere (nor linux VM,
nor in the mmu notifier methods, nor in the KVM/GRU page fault).
Na. I would not be so sure about having caught
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
Good catch! This was missing also in my #v5 (KVM doesn't need that
because the only possible cows on sptes can be generated by ksm, but
it would have been a problem for GRU). The more I think about it, the
How do you think the GRU should know when
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
On Thu, Jan 31, 2008 at 02:21:58PM -0800, Christoph Lameter wrote:
Is this okay for KVM too?
-release isn't implemented at all in KVM, only the list_del generates
complications.
Why would the list_del generate problems?
I think current code
@@ -2033,6 +2034,7 @@ void exit_mmap(struct mm_struct *mm)
unsigned long end;
/* mm's last user has gone, and its about to be pulled down */
+ mmu_notifier(invalidate_all, mm, 0);
arch_exit_mmap(mm);
The name of the invalidate_all callout is not very descriptive.
mmu_notifier: Provide invalidate_range for move_page_tables
Move page tables also needs to invalidate the external references
and hold new references off while moving page table entries.
This is already guaranteed by holding a writelock
on mmap_sem for get_user_pages() but follow_page() is not
On Thu, Jan 31, 2008 at 05:37:21PM -0800, Christoph Lameter wrote:
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
I appreciate the review! I hope my entirely bug free and
straightforward #v5 will strongly increase the probability of getting
this in sooner than later. If nothing else, it
On Thu, Jan 31, 2008 at 07:56:12PM -0600, Jack Steiner wrote:
@@ -2033,6 +2034,7 @@ void exit_mmap(struct mm_struct *mm)
unsigned long end;
/* mm's last user has gone, and its about to be pulled down */
+ mmu_notifier(invalidate_all, mm, 0);
arch_exit_mmap(mm);
On Thu, Jan 31, 2008 at 08:24:44PM -0600, Robin Holt wrote:
On Thu, Jan 31, 2008 at 07:56:12PM -0600, Jack Steiner wrote:
@@ -2033,6 +2034,7 @@ void exit_mmap(struct mm_struct *mm)
unsigned long end;
/* mm's last user has gone, and its about to be pulled down */
+
On Thu, Jan 31, 2008 at 05:57:25PM -0800, Christoph Lameter wrote:
Move page tables also needs to invalidate the external references
and hold new references off while moving page table entries.
I must admit to not having spent any time thinking about this, but aren't
we moving the entries from
On Thu, 31 Jan 2008, Robin Holt wrote:
Jack has repeatedly pointed out needing an unregister outside the
mmap_sem. I still don't see the benefit to not having the lock in the mm.
I never understood why this would be needed. -release removes the
mmu_notifier right now.
On Thu, 31 Jan 2008, Jack Steiner wrote:
Christoph, is it time to post a new series of patches? I've got
as many fixup patches as I have patches in the original posting.
Maybe wait another day? This is getting a bit too frequent and so far we
have only minor changes.
On Thu, 31 Jan 2008, Robin Holt wrote:
On Thu, Jan 31, 2008 at 05:57:25PM -0800, Christoph Lameter wrote:
Move page tables also needs to invalidate the external references
and hold new references off while moving page table entries.
I must admit to not having spent any time thinking about
Scott Pakin wrote:
Zhang, Xiantao wrote:
Scott Pakin wrote:
The attached patch corrects a bug in qemu/slirp/tcp_var.h that
defines the seg_next field in struct tcpcb to be 32 bits wide
regardless of 32/64-bitness. seg_next is assigned a pointer value
in qemu/slirp/tcp_subr.c, then cast back
On Thu, Jan 31, 2008 at 06:39:19PM -0800, Christoph Lameter wrote:
On Thu, 31 Jan 2008, Robin Holt wrote:
Jack has repeatedly pointed out needing an unregister outside the
mmap_sem. I still don't see the benefit to not having the lock in the mm.
I never understood why this would be
Hi, Xiantao
void __init
diff --git a/include/asm-ia64/processor.h b/include/asm-ia64/processor.h
index be3b0ae..038642f 100644
--- a/include/asm-ia64/processor.h
+++ b/include/asm-ia64/processor.h
@@ -472,7 +472,7 @@ ia64_set_psr (__u64 psr)
{
ia64_stop();
On Thu, 31 Jan 2008, Robin Holt wrote:
Mutex locking? Could you be more specific?
I think he is talking about the external locking that xpmem will need
to do to ensure we are not able to refault pages inside of regions that
are undergoing recall/page table clearing. At least that has been
On Thu, Jan 31, 2008 at 06:39:19PM -0800, Christoph Lameter wrote:
On Thu, 31 Jan 2008, Robin Holt wrote:
Jack has repeatedly pointed out needing an unregister outside the
mmap_sem. I still don't see the benefit to not having the lock in the mm.
I never understood why this would be
On Thu, 31 Jan 2008, Robin Holt wrote:
Both xpmem and GRU have means of removing their context separate from
process termination. XPMEMs is by closing the fd, I believe GRU is
the same. In the case of XPMEM, we are able to acquire the mmap_sem.
For GRU, I don't think it is possible, but I
On Thu, 31 Jan 2008, Jack Steiner wrote:
I currently unlink the mmu_notifier when the last GRU mapping is closed. For
example, if a user does a:
gru_create_context();
...
gru_destroy_context();
the mmu_notifier is unlinked and all task tables allocated
by the
Akio Takebe wrote:
Hi, Xiantao
void __init
diff --git a/include/asm-ia64/processor.h
b/include/asm-ia64/processor.h index be3b0ae..038642f 100644 ---
a/include/asm-ia64/processor.h +++ b/include/asm-ia64/processor.h
@@ -472,7 +472,7 @@ ia64_set_psr (__u64 psr)
{
ia64_stop();
Index: linux-2.6/include/linux/mmu_notifier.h
...
+struct mmu_notifier_ops {
...
+ /*
+ * invalidate_range_begin() and invalidate_range_end() must be paired.
+ * Multiple invalidate_range_begin/ends may be nested or called
+ * concurrently. That is legit. However, no new
On Thu, 31 Jan 2008, Robin Holt wrote:
+ void (*invalidate_range_end)(struct mmu_notifier *mn,
+struct mm_struct *mm, int atomic);
I think we need to pass in the same start-end here as well. Without it,
the first invalidate_range would have to block faulting
Hi, Xiantao
Why do you remove ia64_srlz_d()?
We should need srlz.d if we change PSR bits (e.g. PSR.dt and so on).
Does srlz.i also do data serialization?
Hi, Akio
Srlz.i implicitly ensures srlz.d per SDM.
OK, thanks.
Best Regards,
Akio Takebe
Hi, Xiantao
+void thash_vhpt_insert(VCPU *v, u64 pte, u64 itir, u64 va, int type)
+{
+ u64 phy_pte, psr;
+ ia64_rr mrr;
+
+ mrr.val = ia64_get_rr(va);
+ phy_pte = translate_phy_pte(pte, itir, va);
+
+ if (itir_ps(itir) >= mrr.ps) {
+ vhpt_insert(phy_pte, itir,
Akio Takebe wrote:
Hi, Xiantao
+void thash_vhpt_insert(VCPU *v, u64 pte, u64 itir, u64 va, int
type) +{ + u64 phy_pte, psr;
+ia64_rr mrr;
+
+mrr.val = ia64_get_rr(va);
+phy_pte = translate_phy_pte(pte, itir, va);
+
+if (itir_ps(itir) >= mrr.ps) {
+
On Thu, Jan 31, 2008 at 07:58:40PM -0800, Christoph Lameter wrote:
On Thu, 31 Jan 2008, Robin Holt wrote:
+ void (*invalidate_range_end)(struct mmu_notifier *mn,
+ struct mm_struct *mm, int atomic);
I think we need to pass in the same start-end here as well.
On Thu, 31 Jan 2008, Robin Holt wrote:
Index: linux-2.6/mm/memory.c
...
@@ -1668,6 +1678,7 @@ gotten:
page_cache_release(old_page);
unlock:
pte_unmap_unlock(page_table, ptl);
+ mmu_notifier(invalidate_range_end, mm, 0);
I think we can get an _end call without
Notifier functions for hardware and software that establishes external
references to pages of a Linux system. The notifier calls ensure that
external mappings are removed when the Linux VM removes memory ranges
or individual pages from a process.
This first portion is fitting for external mmu's
The invalidation of address ranges in a mm_struct needs to be
performed when pages are removed or permissions etc change.
invalidate_range_begin/end() is frequently called with only mmap_sem
held. If invalidate_range_begin() is called with locks held then we
pass a flag into invalidate_range() to
Support for an additional 3rd class of users of mmu_notifier.
These special additional callbacks are required because XPmem does
use its own rmap (multiple processes on a series of remote Linux instances
may be accessing the memory of a process). XPmem may have to send out
notifications to
This is a patchset implementing MMU notifier callbacks based on Andrea's
earlier work. These are needed if Linux pages are referenced from something
else than tracked by the rmaps of the kernel (an external MMU).
The known immediate users are
KVM
- Establishes a refcount to the page via
Two callbacks to remove individual pages as done in rmap code
invalidate_page()
Called from the inner loop of rmap walks to invalidate pages.
age_page()
Called for the determination of the page referenced status.
If we do not care about page referenced status then an age_page
Am Donnerstag, 31. Januar 2008 schrieb Anthony Liguori:
There's are some other limitations to the number of virtio block
devices. For instances...
sprintf(vblk->disk->disk_name, "vd%c", virtblk_index++);
This gets bogus after 64 disks.
Right. I will fix that with an additional patch.