[kvm-devel] [PATCH] KVM: KVM/IA64: Set KVM_IOAPIC_NUM_PINS to 48.
Hi, Avi

This patch should be a fix for v2.6.26. Otherwise, guests can't enable networking.

Xiantao

>From df3a290e438b3079edb3627f2fea3e1fdd85b5f2 Mon Sep 17 00:00:00 2001
From: Xiantao Zhang <[EMAIL PROTECTED]>
Date: Wed, 14 May 2008 19:44:57 +0800
Subject: [PATCH] KVM: KVM/IA64: Set KVM_IOAPIC_NUM_PINS to 48.

Guest's firmware needs the viosapic with 48 pins for ia64 guests.

Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]>
---
 include/asm-ia64/kvm.h | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/include/asm-ia64/kvm.h b/include/asm-ia64/kvm.h
index eb2d355..f9db48e 100644
--- a/include/asm-ia64/kvm.h
+++ b/include/asm-ia64/kvm.h
@@ -29,7 +29,7 @@
 /* Architectural interrupt line count. */
 #define KVM_NR_INTERRUPTS 256

-#define KVM_IOAPIC_NUM_PINS 24
+#define KVM_IOAPIC_NUM_PINS 48

 struct kvm_ioapic_state {
 	__u64 base_address;
--
1.5.2

Attachment: 0002-KVM-KVM-IA64-Set-KVM_IOAPIC_NUM_PINS-to-48.patch

___
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel
Re: [kvm-devel] [kvm-ia64-devel] [PATCH] KVM: Qemu: Build fix for kvm/ia64
Could you help to try the attached one?

Xiantao

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Avi Kivity
Sent: May 9, 2008, 23:56
To: Zhang, Xiantao
Cc: kvm-devel@lists.sourceforge.net; [EMAIL PROTECTED]
Subject: Re: [kvm-ia64-devel] [PATCH] KVM: Qemu: Build fix for kvm/ia64

Zhang, Xiantao wrote:
> Avi,
>	Please drop the previous one due to a wrong attachment.
> Xiantao
>
> From a9f479197f0a0efa45a930309fad03fd690cba60 Mon Sep 17 00:00:00 2001
> From: Xiantao Zhang <[EMAIL PROTECTED]>
> Date: Thu, 8 May 2008 10:16:05 +0800
> Subject: [PATCH] KVM: Qemu: IA-64 build fix.
>
> Remove a nonexistent header inclusion, and set the correct phys_ram_size
> for the ipf machine.
>

Patch doesn't apply. Can you recheck?

--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.

___
kvm-ia64-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel

Attachment: 0001-KVM-Qemu-IA-64-build-fix.patch
Re: [kvm-devel] [kvm-ia64-devel] [PATCH] KVM: Qemu: Build fix for kvm/ia64
Avi,
	Please drop the previous one due to a wrong attachment.
Xiantao

>From a9f479197f0a0efa45a930309fad03fd690cba60 Mon Sep 17 00:00:00 2001
From: Xiantao Zhang <[EMAIL PROTECTED]>
Date: Thu, 8 May 2008 10:16:05 +0800
Subject: [PATCH] KVM: Qemu: IA-64 build fix.

Remove a nonexistent header inclusion, and set the correct phys_ram_size
for the ipf machine.

Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]>
---
 qemu/hw/ipf.c              | 4 +++-
 qemu/target-ia64/machine.c | 1 -
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/qemu/hw/ipf.c b/qemu/hw/ipf.c
index a84e343..54eaca0 100644
--- a/qemu/hw/ipf.c
+++ b/qemu/hw/ipf.c
@@ -535,7 +535,8 @@ static void ipf_init1(ram_addr_t ram_size, int vga_ram_size,

     for(i = 0; i < MAX_SERIAL_PORTS; i++) {
         if (serial_hds[i]) {
-            serial_init(serial_io[i], i8259[serial_irq[i]], serial_hds[i]);
+            serial_init(serial_io[i], i8259[serial_irq[i]], 115200,
+                        serial_hds[i]);
         }
     }

@@ -669,4 +670,5 @@ QEMUMachine ipf_machine = {
     "itanium",
     "Itanium Platform",
     ipf_init_pci,
+    VGA_RAM_SIZE + GFW_SIZE,
 };

diff --git a/qemu/target-ia64/machine.c b/qemu/target-ia64/machine.c
index ba06d7b..4dc5d5e 100644
--- a/qemu/target-ia64/machine.c
+++ b/qemu/target-ia64/machine.c
@@ -1,6 +1,5 @@
 #include "hw/hw.h"
 #include "hw/boards.h"
-#include "hw/ipf.h"
 #include "exec-all.h"
 #include "qemu-kvm.h"
--
1.5.2

Attachment: 0001-KVM-Qemu-IA-64-build-fix.patch
[kvm-devel] [PATCH] KVM: Qemu: Build fix for kvm/ia64
>From a9f479197f0a0efa45a930309fad03fd690cba60 Mon Sep 17 00:00:00 2001
From: Xiantao Zhang <[EMAIL PROTECTED]>
Date: Thu, 8 May 2008 10:16:05 +0800
Subject: [PATCH] KVM: Qemu: IA-64 build fix.

Remove a nonexistent header inclusion, and set the correct phys_ram_size
for the ipf machine.

Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]>
---
 qemu/hw/ipf.c              | 4 +++-
 qemu/target-ia64/machine.c | 1 -
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/qemu/hw/ipf.c b/qemu/hw/ipf.c
index a84e343..54eaca0 100644
--- a/qemu/hw/ipf.c
+++ b/qemu/hw/ipf.c
@@ -535,7 +535,8 @@ static void ipf_init1(ram_addr_t ram_size, int vga_ram_size,

     for(i = 0; i < MAX_SERIAL_PORTS; i++) {
         if (serial_hds[i]) {
-            serial_init(serial_io[i], i8259[serial_irq[i]], serial_hds[i]);
+            serial_init(serial_io[i], i8259[serial_irq[i]], 115200,
+                        serial_hds[i]);
         }
     }

@@ -669,4 +670,5 @@ QEMUMachine ipf_machine = {
     "itanium",
     "Itanium Platform",
     ipf_init_pci,
+    VGA_RAM_SIZE + VGA_RAM_SIZE,
 };

diff --git a/qemu/target-ia64/machine.c b/qemu/target-ia64/machine.c
index ba06d7b..4dc5d5e 100644
--- a/qemu/target-ia64/machine.c
+++ b/qemu/target-ia64/machine.c
@@ -1,6 +1,5 @@
 #include "hw/hw.h"
 #include "hw/boards.h"
-#include "hw/ipf.h"
 #include "exec-all.h"
 #include "qemu-kvm.h"
--
1.5.2

Attachment: 0001-KVM-Qemu-IA-64-build-fix.patch
[kvm-devel] [PATCH] GVMM module shouldn't link the position-dependent objects
Critical fix for the kvm/ia64 build. Issue introduced by ea696f9cf37d8ab9236dd133ddb2727264f3add6.

From: Xiantao Zhang <[EMAIL PROTECTED]>
Date: Wed, 7 May 2008 17:34:52 +0800
Subject: [PATCH] KVM: kvm/ia-64: GVMM module shouldn't link the position-dependent objects.

Create two files, memset.S and memcpy.S, which just include the files
under arch/ia64/lib/{memset.S, memcpy.S} respectively.

Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]>
---
 arch/ia64/kvm/Makefile | 2 +-
 arch/ia64/kvm/memcpy.S | 1 +
 arch/ia64/kvm/memset.S | 1 +
 3 files changed, 3 insertions(+), 1 deletions(-)
 create mode 100644 arch/ia64/kvm/memcpy.S
 create mode 100644 arch/ia64/kvm/memset.S

diff --git a/arch/ia64/kvm/Makefile b/arch/ia64/kvm/Makefile
index 5235339..d60c5c8 100644
--- a/arch/ia64/kvm/Makefile
+++ b/arch/ia64/kvm/Makefile
@@ -54,5 +54,5 @@ EXTRA_CFLAGS_vcpu.o += -mfixed-range=f2-f5,f12-f127
 kvm-intel-objs = vmm.o vmm_ivt.o trampoline.o vcpu.o optvfault.o mmio.o \
 	vtlb.o process.o
 #Add link memcpy and memset to avoid possible structure assignment error
-kvm-intel-objs += ../lib/memset.o ../lib/memcpy.o
+kvm-intel-objs += memcpy.o memset.o
 obj-$(CONFIG_KVM_INTEL) += kvm-intel.o

diff --git a/arch/ia64/kvm/memcpy.S b/arch/ia64/kvm/memcpy.S
new file mode 100644
index 000..c04cdbe
--- /dev/null
+++ b/arch/ia64/kvm/memcpy.S
@@ -0,0 +1 @@
+#include "../lib/memcpy.S"

diff --git a/arch/ia64/kvm/memset.S b/arch/ia64/kvm/memset.S
new file mode 100644
index 000..83c3066
--- /dev/null
+++ b/arch/ia64/kvm/memset.S
@@ -0,0 +1 @@
+#include "../lib/memset.S"
--
1.5.2

Attachment: 0001-KVM-kvm-ia-64-GVMM-module-shouldn-t-link-the-posit.patch
Re: [kvm-devel] [PATCH] Build fix for kvm/ia64 userspace.
> One way would be to define a new kvm_ia64_fpreg and use that. Seems
> that the standard ia64_fpreg is unusable in userspace due to the issue
> you mentioned.

That's a better way. Attached the patch.

From: Xiantao Zhang <[EMAIL PROTECTED]>
Date: Wed, 7 May 2008 17:37:32 +0800
Subject: [PATCH] KVM: kvm/ia64: Use a self-defined kvm_fpreg structure to
 replace the kernel's ia64_fpreg, avoiding conflicts with userspace headers.

Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]>
---
 include/asm-ia64/kvm.h | 10 --
 1 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/asm-ia64/kvm.h b/include/asm-ia64/kvm.h
index eb2d355..a1da4c4 100644
--- a/include/asm-ia64/kvm.h
+++ b/include/asm-ia64/kvm.h
@@ -22,7 +22,6 @@
  */

 #include
-#include

 #include

@@ -61,6 +60,13 @@ struct kvm_ioapic_state {

 #define KVM_CONTEXT_SIZE	8*1024

+struct kvm_fpreg {
+	union {
+		unsigned long bits[2];
+		long double __dummy;	/* force 16-byte alignment */
+	} u;
+};
+
 union context {
 	/* 8K size */
 	char	dummy[KVM_CONTEXT_SIZE];
@@ -77,7 +83,7 @@ union context {
 	unsigned long ibr[8];
 	unsigned long dbr[8];
 	unsigned long pkr[8];
-	struct ia64_fpreg fr[128];
+	struct kvm_fpreg fr[128];
 };
 };
--
1.5.2

Attachment: 0002-KVM-kvm-ia64-Using-self-defined-kvm_fpreg-strucut.patch
Re: [kvm-devel] [PATCH] Build fix for kvm/ia64 userspace.
Avi Kivity wrote:
> Zhang, Xiantao wrote:
>> Hi, Avi
>> This patch should go into RC1, otherwise it will block the kvm/ia64
>> userspace build.
>>
>> diff --git a/include/asm-ia64/kvm.h b/include/asm-ia64/kvm.h
>> index eb2d355..62b5fad 100644
>> --- a/include/asm-ia64/kvm.h
>> +++ b/include/asm-ia64/kvm.h
>> @@ -22,7 +22,12 @@
>>  */
>>
>>  #include
>> +
>> +#ifdef __KERNEL__
>>  #include
>> +#else
>> +#include
>> +#endif
>>
>
> Fishy. A kernel header including a userspace header?
>
> Maybe you need to include unconditionally?

Hi, Avi

You know, kvm.h is shared by userspace and the kernel. But unfortunately, the userspace header files have duplicate definitions of one structure (struct ia64_fpreg): one in asm/fpu.h and the other in bits/sigcontext.h. Maybe a bug here. Therefore, if userspace code includes both fpu.h and sigcontext.h in one source file, the compiler will complain about the redefinition. Do you have a good idea for coping with this issue?

Xiantao
[kvm-devel] [PATCH] Build fix for kvm/ia64 userspace.
Hi, Avi

This patch should go into RC1, otherwise it will block the kvm/ia64 userspace build.

Xiantao

>From 55584a9ecdfbea61ab90014c9cc14c5a22b696dd Mon Sep 17 00:00:00 2001
From: Xiantao Zhang <[EMAIL PROTECTED]>
Date: Mon, 5 May 2008 12:49:35 +0800
Subject: [PATCH] KVM: KVM/ia64: build fix for kvm userspace.

kvm.h is shared by userspace and the kernel, and it needs to include
different headers in the two cases.

Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]>
---
 include/asm-ia64/kvm.h | 5 +
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/include/asm-ia64/kvm.h b/include/asm-ia64/kvm.h
index eb2d355..62b5fad 100644
--- a/include/asm-ia64/kvm.h
+++ b/include/asm-ia64/kvm.h
@@ -22,7 +22,12 @@
  */

 #include
+
+#ifdef __KERNEL__
 #include
+#else
+#include
+#endif

 #include
--
1.5.2

Attachment: 0001-KVM-KVM-ia64-built-fix-for-kvm-userspace.patch
[kvm-devel] [Patch][00/18] kvm-ia64 for kernel V10
Compared with V9, I just fixed indentation issues in patch 12. I put the patchset in:

git://git.kernel.org/pub/scm/linux/kernel/git/xiantao/kvm-ia64.git kvm-ia64-mc10

Please help to review. In particular, the first two patches (the TR management patch and the smp_call_function_mask patch) need Tony's review and ack. Thanks! :-)

Xiantao
Re: [kvm-devel] [kvm-ia64-devel] [16/18]KVM:IA64 : Add kvm sal/pal virtualization support. V9
Changed it to softer and clearer wording. Thanks! :-)

Xiantao

Jes Sorensen wrote:
> Zhang, Xiantao wrote:
>>> From 5c70c038c57190144390ae9d30c3d06afba103d4 Mon Sep 17 00:00:00 2001
>> From: Xiantao Zhang <[EMAIL PROTECTED]>
>> Date: Tue, 1 Apr 2008 14:59:30 +0800
>> Subject: [PATCH] KVM:IA64 : Add kvm sal/pal virtualization support.
>>
>> Some sal/pal calls would be trapped to kvm for virtualization
>> from guest firmware.
>> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]>
>> ---
>>  arch/ia64/kvm/kvm_fw.c | 500 
>
> Hi Xiantao,
>
> A few more comments:
>
>> --- /dev/null
>> +++ b/arch/ia64/kvm/kvm_fw.c
>
>> +static void kvm_get_pal_call_data(struct kvm_vcpu *vcpu,
>> +		u64 *gr28, u64 *gr29, u64 *gr30, u64 *gr31) {
>> +	struct exit_ctl_data *p;
>> +
>> +	if (vcpu) {
>> +		p = &vcpu->arch.exit_data;
>> +		if (p->exit_reason == EXIT_REASON_PAL_CALL) {
>> +			*gr28 = p->u.pal_data.gr28;
>> +			*gr29 = p->u.pal_data.gr29;
>> +			*gr30 = p->u.pal_data.gr30;
>> +			*gr31 = p->u.pal_data.gr31;
>> +			return ;
>> +		}
>> +	}
>> +	printk(KERN_DEBUG"Error occurs in kvm_get_pal_call_data!!\n");
>
> Maybe make this error message a bit more elaborate with information
> about what the error is?
>
>> +static void set_sal_result(struct kvm_vcpu *vcpu,
>> +		struct sal_ret_values result) {
>> +	struct exit_ctl_data *p;
>> +
>> +	p = kvm_get_exit_data(vcpu);
>> +	if (p && p->exit_reason == EXIT_REASON_SAL_CALL) {
>> +		p->u.sal_data.ret = result;
>> +		return ;
>> +	}
>> +	printk(KERN_WARNING"Error occurs!!!\n");
>
> I love this error message :-) Seriously though, please make it say
> where it is and what has been detected.
>
> Cheers,
> Jes
Re: [kvm-devel] [13/18] KVM:IA64: Add optimization for some virtualization faults V9
Fixed. Thanks :)

Jes Sorensen wrote:
> Zhang, Xiantao wrote:
>>> From 0d698efed15759b49e78adcef085feda0a14a175 Mon Sep 17 00:00:00 2001
>> From: Xiantao Zhang <[EMAIL PROTECTED]>
>> Date: Tue, 1 Apr 2008 14:57:09 +0800
>> Subject: [PATCH] KVM:IA64: Add optimization for some virtualization
>> faults
>>
>> optvfault.S adds optimization for some performance-critical
>> virtualization faults.
>> Signed-off-by: Anthony Xu <[EMAIL PROTECTED]>
>> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]>
>> ---
>>  arch/ia64/kvm/optvfault.S | 918 +
>>  1 files changed, 918 insertions(+), 0 deletions(-)
>>  create mode 100644 arch/ia64/kvm/optvfault.S
>
> Hi Xiantao,
>
> This one still suffers from the bad indentation.
>
> Cheers,
> Jes
Re: [kvm-devel] [Patch][00/18] kvm-ia64 for kernel V9
Zhang, Xiantao wrote:
> Hi, All
> According to the comments on V8, I refined the code and worked out
> the new patchset. Please help to review. Thanks! :-)
> In this new version, most of the typedefs are removed to comply with
> the coding-style requirements, and the issues found by reviewers are
> fixed. Thanks for your effort! The whole patchset was checked with the
> script checkpatch.pl. Except for one file containing assembly code,
> which reports some warnings, everything should be good to check in.
> Xiantao

BTW, you can still get it through:

git://git.kernel.org/pub/scm/linux/kernel/git/xiantao/kvm-ia64.git kvm-ia64-mc9

It is based on the latest kvm.git.

Xiantao
Re: [kvm-devel] [04/17] [PATCH] Add kvm arch-specific core code for kvm/ia64.-V8
Carsten Otte wrote:
> Zhang, Xiantao wrote:
>> Carsten Otte wrote:
>>> Zhang, Xiantao wrote:
>>>> Hi, Carsten
>>>> Why do you think it is racy? In this function,
>>>> target_vcpu->arch.launched should be set to 1 for the first run,
>>>> and keep its value all the time. Except for the first IPI to wake
>>>> up the vcpu, all IPIs received by the target vcpu should go into
>>>> the "else" condition. So you mean the race condition exists in the
>>>> "else" code?
>>> For example, to lock against destroying that vcpu. Or, the waitqueue
>>> may become active after if (waitqueue_active()) and before
>>> wake_up_interruptible(). In that case, the target vcpu might sleep
>>> and not get woken up by the IPI.
>> I don't think it can cause an issue, because the target vcpu can at
>> least be woken up by the timer interrupt.
>>
>> But as you said, does the x86 side also have the same race issue?
> As far as I can tell, x86 doesn't have that race.

Hi, Carsten

I can't understand why it only exists on the IA64 side. Thank you!

Xiantao
[kvm-devel] [07/18]KVM:IA64 : VMM module interfaces. V9
>From b4d573038915205c7b85740bf80bd0e0c82a702a Mon Sep 17 00:00:00 2001
From: Xiantao Zhang <[EMAIL PROTECTED]>
Date: Tue, 1 Apr 2008 14:49:24 +0800
Subject: [PATCH] KVM:IA64 : VMM module interfaces.

vmm.c adds the interfaces with the kvm module, and initializes the
global data area.

Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]>
---
 arch/ia64/kvm/vmm.c | 66 +++
 1 files changed, 66 insertions(+), 0 deletions(-)
 create mode 100644 arch/ia64/kvm/vmm.c

diff --git a/arch/ia64/kvm/vmm.c b/arch/ia64/kvm/vmm.c
new file mode 100644
index 000..2275bf4
--- /dev/null
+++ b/arch/ia64/kvm/vmm.c
@@ -0,0 +1,66 @@
+/*
+ * vmm.c: vmm module interface with kvm module
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * Xiantao Zhang ([EMAIL PROTECTED])
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+
+#include
+#include
+
+#include "vcpu.h"
+
+MODULE_AUTHOR("Intel");
+MODULE_LICENSE("GPL");
+
+extern char kvm_ia64_ivt;
+extern fpswa_interface_t *vmm_fpswa_interface;
+
+struct kvm_vmm_info vmm_info = {
+	.module		= THIS_MODULE,
+	.vmm_entry	= vmm_entry,
+	.tramp_entry	= vmm_trampoline,
+	.vmm_ivt	= (unsigned long)&kvm_ia64_ivt,
+};
+
+static int __init kvm_vmm_init(void)
+{
+
+	vmm_fpswa_interface = fpswa_interface;
+
+	/*Register vmm data to kvm side*/
+	return kvm_init(&vmm_info, 1024, THIS_MODULE);
+}
+
+static void __exit kvm_vmm_exit(void)
+{
+	kvm_exit();
+	return ;
+}
+
+void vmm_spin_lock(spinlock_t *lock)
+{
+	_vmm_raw_spin_lock(lock);
+}
+
+void vmm_spin_unlock(spinlock_t *lock)
+{
+	_vmm_raw_spin_unlock(lock);
+}
+module_init(kvm_vmm_init)
+module_exit(kvm_vmm_exit)
--
1.5.2

Attachment: 0007-KVM-IA64-VMM-module-interfaces.patch
[kvm-devel] [02/18] [PATCH] IA64: Implement smp_call_function_mask for ia64
Considering Jes's performance concern, I kept the old smp_call_function and added smp_call_function_mask separately.

Xiantao

>From fe3c5deac39033fb7651ecce5df3d1dce7dd66f7 Mon Sep 17 00:00:00 2001
From: Xiantao Zhang <[EMAIL PROTECTED]>
Date: Tue, 1 Apr 2008 14:38:21 +0800
Subject: [PATCH] IA64: Implement smp_call_function_mask for ia64

This interface provides more flexible functionality for the smp
infrastructure.

Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]>
---
 arch/ia64/kernel/smp.c | 82 
 include/asm-ia64/smp.h |  3 ++
 2 files changed, 85 insertions(+), 0 deletions(-)

diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c
index 4e446aa..9a9d4c4 100644
--- a/arch/ia64/kernel/smp.c
+++ b/arch/ia64/kernel/smp.c
@@ -213,6 +213,19 @@ send_IPI_allbutself (int op)
  * Called with preemption disabled.
  */
 static inline void
+send_IPI_mask(cpumask_t mask, int op)
+{
+	unsigned int cpu;
+
+	for_each_cpu_mask(cpu, mask) {
+		send_IPI_single(cpu, op);
+	}
+}
+
+/*
+ * Called with preemption disabled.
+ */
+static inline void
 send_IPI_all (int op)
 {
 	int i;
@@ -401,6 +414,75 @@ smp_call_function_single (int cpuid, void (*func) (void *info), void *info, int
 }
 EXPORT_SYMBOL(smp_call_function_single);

+/**
+ * smp_call_function_mask(): Run a function on a set of other CPUs.
+ * @mask: The set of cpus to run on.  Must not include the current cpu.
+ * @func: The function to run. This must be fast and non-blocking.
+ * @info: An arbitrary pointer to pass to the function.
+ * @wait: If true, wait (atomically) until function
+ *	has completed on other CPUs.
+ *
+ * Returns 0 on success, else a negative status code.
+ *
+ * If @wait is true, then returns once @func has returned; otherwise
+ * it returns just before the target cpu calls @func.
+ *
+ * You must not call this function with disabled interrupts or from a
+ * hardware interrupt handler or from a bottom half handler.
+ */
+int smp_call_function_mask(cpumask_t mask,
+			   void (*func)(void *), void *info,
+			   int wait)
+{
+	struct call_data_struct data;
+	cpumask_t allbutself;
+	int cpus;
+
+	spin_lock(&call_lock);
+	allbutself = cpu_online_map;
+	cpu_clear(smp_processor_id(), allbutself);
+
+	cpus_and(mask, mask, allbutself);
+	cpus = cpus_weight(mask);
+	if (!cpus) {
+		spin_unlock(&call_lock);
+		return 0;
+	}
+
+	/* Can deadlock when called with interrupts disabled */
+	WARN_ON(irqs_disabled());
+
+	data.func = func;
+	data.info = info;
+	atomic_set(&data.started, 0);
+	data.wait = wait;
+	if (wait)
+		atomic_set(&data.finished, 0);
+
+	call_data = &data;
+	mb(); /* ensure store to call_data precedes setting of IPI_CALL_FUNC*/
+
+	/* Send a message to other CPUs */
+	if (cpus_equal(mask, allbutself))
+		send_IPI_allbutself(IPI_CALL_FUNC);
+	else
+		send_IPI_mask(mask, IPI_CALL_FUNC);
+
+	/* Wait for response */
+	while (atomic_read(&data.started) != cpus)
+		cpu_relax();
+
+	if (wait)
+		while (atomic_read(&data.finished) != cpus)
+			cpu_relax();
+	call_data = NULL;
+
+	spin_unlock(&call_lock);
+	return 0;
+
+}
+EXPORT_SYMBOL(smp_call_function_mask);
+
 /*
  * this function sends a 'generic call function' IPI to all other CPUs
  * in the system.

diff --git a/include/asm-ia64/smp.h b/include/asm-ia64/smp.h
index 4fa733d..ec5f355 100644
--- a/include/asm-ia64/smp.h
+++ b/include/asm-ia64/smp.h
@@ -38,6 +38,9 @@ ia64_get_lid (void)
 	return lid.f.id << 8 | lid.f.eid;
 }

+extern int smp_call_function_mask(cpumask_t mask, void (*func)(void *),
+				  void *info, int wait);
+
 #define hard_smp_processor_id()	ia64_get_lid()

 #ifdef CONFIG_SMP
--
1.5.2

Attachment: 0002-IA64-Implement-smp_call_function_mask-for-ia64.patch
[kvm-devel] [10/18][PATCH] KVM:IA64 : Add mmio decoder for kvm/ia64.
>From 6fcd534964e91d409ee4dba39c393dc7cf019b97 Mon Sep 17 00:00:00 2001
From: Xiantao Zhang <[EMAIL PROTECTED]>
Date: Tue, 1 Apr 2008 14:53:32 +0800
Subject: [PATCH] KVM:IA64 : Add mmio decoder for kvm/ia64.

mmio.c includes the mmio decoder and related mmio logic.

Signed-off-by: Anthony Xu <[EMAIL PROTECTED]>
Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]>
---
 arch/ia64/kvm/mmio.c | 341 ++
 1 files changed, 341 insertions(+), 0 deletions(-)
 create mode 100644 arch/ia64/kvm/mmio.c

diff --git a/arch/ia64/kvm/mmio.c b/arch/ia64/kvm/mmio.c
new file mode 100644
index 000..351bf70
--- /dev/null
+++ b/arch/ia64/kvm/mmio.c
@@ -0,0 +1,341 @@
+/*
+ * mmio.c: MMIO emulation components.
+ * Copyright (c) 2004, Intel Corporation.
+ *  Yaozu Dong (Eddie Dong) ([EMAIL PROTECTED])
+ *  Kun Tian (Kevin Tian) ([EMAIL PROTECTED])
+ *
+ * Copyright (c) 2007 Intel Corporation  KVM support.
+ *  Xuefei Xu (Anthony Xu) ([EMAIL PROTECTED])
+ *  Xiantao Zhang  ([EMAIL PROTECTED])
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ */
+
+#include
+
+#include "vcpu.h"
+
+static void vlsapic_write_xtp(struct kvm_vcpu *v, uint8_t val)
+{
+	VLSAPIC_XTP(v) = val;
+}
+
+/*
+ * LSAPIC OFFSET
+ */
+#define PIB_LOW_HALF(ofst)	!(ofst & (1 << 20))
+#define PIB_OFST_INTA		0x1E
+#define PIB_OFST_XTP		0x1E0008
+
+/*
+ * execute write IPI op.
+ */
+static void vlsapic_write_ipi(struct kvm_vcpu *vcpu,
+					uint64_t addr, uint64_t data)
+{
+	struct exit_ctl_data *p = &current_vcpu->arch.exit_data;
+	unsigned long psr;
+
+	local_irq_save(psr);
+
+	p->exit_reason = EXIT_REASON_IPI;
+	p->u.ipi_data.addr.val = addr;
+	p->u.ipi_data.data.val = data;
+	vmm_transition(current_vcpu);
+
+	local_irq_restore(psr);
+
+}
+
+void lsapic_write(struct kvm_vcpu *v, unsigned long addr,
+			unsigned long length, unsigned long val)
+{
+	addr &= (PIB_SIZE - 1);
+
+	switch (addr) {
+	case PIB_OFST_INTA:
+		/*panic_domain(NULL, "Undefined write on PIB INTA\n");*/
+		panic_vm(v);
+		break;
+	case PIB_OFST_XTP:
+		if (length == 1) {
+			vlsapic_write_xtp(v, val);
+		} else {
+			/*panic_domain(NULL,
+			"Undefined write on PIB XTP\n");*/
+			panic_vm(v);
+		}
+		break;
+	default:
+		if (PIB_LOW_HALF(addr)) {
+			/*lower half */
+			if (length != 8)
+				/*panic_domain(NULL,
+				"Can't LHF write with size %ld!\n",
+				length);*/
+				panic_vm(v);
+			else
+				vlsapic_write_ipi(v, addr, val);
+		} else {   /* upper half
+				printk("IPI-UHF write %lx\n",addr);*/
+			panic_vm(v);
+		}
+		break;
+	}
+}
+
+unsigned long lsapic_read(struct kvm_vcpu *v, unsigned long addr,
+			unsigned long length)
+{
+	uint64_t result = 0;
+
+	addr &= (PIB_SIZE - 1);
+
+	switch (addr) {
+	case PIB_OFST_INTA:
+		if (length == 1)	/* 1 byte load */
+			;	/* There is no i8259, there is no INTA access*/
+		else
+			/*panic_domain(NULL,"Undefined read on PIB INTA\n"); */
+			panic_vm(v);
+
+		break;
+	case PIB_OFST_XTP:
+		if (length == 1) {
+			result = VLSAPIC_XTP(v);
+			/* printk("read xtp %lx\n", result); */
+		} else {
+			/*panic_domain(NULL,
+			"Undefined read on PIB XTP\n");*/
+			panic_vm(v);
+		}
+		break;
+	default:
+		panic_vm(v);
+		break;
+	}
+	return result;
+}
+
+static void mmio_access(struct kvm_vcpu *vcpu, u64 src_pa, u64 *dest,
+					u16 s, int ma, int dir)
+{
+	unsigned long iot;
+	struct exit_ctl_data *p = &
[kvm-devel] [16/18]KVM:IA64 : Add kvm sal/pal virtualization support. V9
>From 5c70c038c57190144390ae9d30c3d06afba103d4 Mon Sep 17 00:00:00 2001
From: Xiantao Zhang <[EMAIL PROTECTED]>
Date: Tue, 1 Apr 2008 14:59:30 +0800
Subject: [PATCH] KVM:IA64 : Add kvm sal/pal virtualization support.

Some sal/pal calls would be trapped to kvm for virtualization
from guest firmware.
Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]>
---
 arch/ia64/kvm/kvm_fw.c | 500 
 1 files changed, 500 insertions(+), 0 deletions(-)
 create mode 100644 arch/ia64/kvm/kvm_fw.c

diff --git a/arch/ia64/kvm/kvm_fw.c b/arch/ia64/kvm/kvm_fw.c
new file mode 100644
index 000..508225d
--- /dev/null
+++ b/arch/ia64/kvm/kvm_fw.c
@@ -0,0 +1,500 @@
+/*
+ * PAL/SAL call delegation
+ *
+ * Copyright (c) 2004 Li Susie <[EMAIL PROTECTED]>
+ * Copyright (c) 2005 Yu Ke <[EMAIL PROTECTED]>
+ * Copyright (c) 2007 Xiantao Zhang <[EMAIL PROTECTED]>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#include
+#include
+
+#include "vti.h"
+#include "misc.h"
+
+#include
+#include
+#include
+
+/*
+ * Handy macros to make sure that the PAL return values start out
+ * as something meaningful.
+ */
+#define INIT_PAL_STATUS_UNIMPLEMENTED(x)	\
+	{					\
+		x.status = PAL_STATUS_UNIMPLEMENTED;\
+		x.v0 = 0;			\
+		x.v1 = 0;			\
+		x.v2 = 0;			\
+	}
+
+#define INIT_PAL_STATUS_SUCCESS(x)		\
+	{					\
+		x.status = PAL_STATUS_SUCCESS;	\
+		x.v0 = 0;			\
+		x.v1 = 0;			\
+		x.v2 = 0;			\
+}
+
+static void kvm_get_pal_call_data(struct kvm_vcpu *vcpu,
+		u64 *gr28, u64 *gr29, u64 *gr30, u64 *gr31) {
+	struct exit_ctl_data *p;
+
+	if (vcpu) {
+		p = &vcpu->arch.exit_data;
+		if (p->exit_reason == EXIT_REASON_PAL_CALL) {
+			*gr28 = p->u.pal_data.gr28;
+			*gr29 = p->u.pal_data.gr29;
+			*gr30 = p->u.pal_data.gr30;
+			*gr31 = p->u.pal_data.gr31;
+			return ;
+		}
+	}
+	printk(KERN_DEBUG"Error occurs in kvm_get_pal_call_data!!\n");
+}
+
+static void set_pal_result(struct kvm_vcpu *vcpu,
+		struct ia64_pal_retval result) {
+
+	struct exit_ctl_data *p;
+
+	p = kvm_get_exit_data(vcpu);
+	if (p && p->exit_reason == EXIT_REASON_PAL_CALL) {
+		p->u.pal_data.ret = result;
+		return ;
+	}
+	INIT_PAL_STATUS_UNIMPLEMENTED(p->u.pal_data.ret);
+}
+
+static void set_sal_result(struct kvm_vcpu *vcpu,
+		struct sal_ret_values result) {
+	struct exit_ctl_data *p;
+
+	p = kvm_get_exit_data(vcpu);
+	if (p && p->exit_reason == EXIT_REASON_SAL_CALL) {
+		p->u.sal_data.ret = result;
+		return ;
+	}
+	printk(KERN_WARNING"Error occurs!!!\n");
+}
+
+struct cache_flush_args {
+	u64 cache_type;
+	u64 operation;
+	u64 progress;
+	long status;
+};
+
+cpumask_t cpu_cache_coherent_map;
+
+static void remote_pal_cache_flush(void *data)
+{
+	struct cache_flush_args *args = data;
+	long status;
+	u64 progress = args->progress;
+
+	status = ia64_pal_cache_flush(args->cache_type, args->operation,
+					&progress, NULL);
+	if (status != 0)
+		args->status = status;
+}
+
+static struct ia64_pal_retval pal_cache_flush(struct kvm_vcpu *vcpu)
+{
+	u64 gr28, gr29, gr30, gr31;
+	struct ia64_pal_retval result = {0, 0, 0, 0};
+	struct cache_flush_args args = {0, 0, 0, 0};
+	long psr;
+
+	gr28 = gr29 = gr30 = gr31 = 0;
+	kvm_get_pal_call_data(vcpu, &gr28, &gr29,
&gr30, &gr31); + + if (gr31 != 0) + printk(KERN_ERR"vcpu:%p called cache_flush error!\n", vcpu); + + /* Always call Host Pal in int=1 */ + gr30 &= ~PAL_CACHE_FLUSH_CHK_INTRS; + args.cache_type = gr29; + args.operation = gr30; + smp_call_function(remote_pal_cac
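`pal_cache_flush()` above hands the flush to every physical CPU via `smp_call_function(remote_pal_cache_flush, ...)`, and any CPU whose PAL call returns a nonzero status records it in the shared `cache_flush_args` block. A user-space sketch of that fan-out/aggregate pattern, with a sequential loop standing in for the concurrent cross-CPU calls and a made-up failure on "CPU 2" (status `-3` is arbitrary):

```c
#include <assert.h>

/* Analogue of struct cache_flush_args: one shared in/out block that every
 * CPU's callback sees. */
struct flush_args {
    long cache_type;
    long operation;
    long status;    /* any CPU that fails records its status here */
};

/* Stand-in for ia64_pal_cache_flush(): pretend CPU 2 reports an error. */
static long fake_pal_cache_flush(int cpu, long cache_type, long operation)
{
    (void)cache_type; (void)operation;
    return cpu == 2 ? -3 : 0;       /* hypothetical failure code */
}

/* Analogue of remote_pal_cache_flush(): runs once per CPU against the
 * shared args block, recording only failures. */
static void remote_flush(int cpu, struct flush_args *args)
{
    long status = fake_pal_cache_flush(cpu, args->cache_type,
                                       args->operation);
    if (status != 0)
        args->status = status;
}

/* Analogue of the smp_call_function() fan-out in pal_cache_flush();
 * here the "CPUs" simply run in sequence. */
static long flush_all(int ncpus, long cache_type, long operation)
{
    struct flush_args args = { cache_type, operation, 0 };

    for (int cpu = 0; cpu < ncpus; cpu++)
        remote_flush(cpu, &args);
    return args.status;
}
```

In the kernel the callbacks run in parallel and all write the same `status` field, so only "some CPU failed" survives, not which one; the sketch preserves exactly that property.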
[kvm-devel] [08/18][PATCH] KVM:IA64: Add TLB virtualization support. V9
>From 2d624a8e44bb284224820cd61fe2f0975c029fda Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Tue, 1 Apr 2008 14:50:59 +0800 Subject: [PATCH] KVM:IA64: Add TLB virtualization support. vtlb.c includes tlb/VHPT virtualization. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/vtlb.c | 636 ++ 1 files changed, 636 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/vtlb.c diff --git a/arch/ia64/kvm/vtlb.c b/arch/ia64/kvm/vtlb.c new file mode 100644 index 000..def4576 --- /dev/null +++ b/arch/ia64/kvm/vtlb.c @@ -0,0 +1,636 @@ +/* + * vtlb.c: guest virtual tlb handling module. + * Copyright (c) 2004, Intel Corporation. + * Yaozu Dong (Eddie Dong) <[EMAIL PROTECTED]> + * Xuefei Xu (Anthony Xu) <[EMAIL PROTECTED]> + * + * Copyright (c) 2007, Intel Corporation. + * Xuefei Xu (Anthony Xu) <[EMAIL PROTECTED]> + * Xiantao Zhang <[EMAIL PROTECTED]> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + * + */ + +#include "vcpu.h" + +#include + +#include + +/* + * Check to see if the address rid:va is translated by the TLB + */ + +static int __is_tr_translated(struct thash_data *trp, u64 rid, u64 va) +{ + return ((trp->p) && (trp->rid == rid) + && ((va-trp->vadr) < PSIZE(trp->ps))); +} + +/* + * Only for GUEST TR format. 
+ */ +static int __is_tr_overlap(struct thash_data *trp, u64 rid, u64 sva, u64 eva) +{ + u64 sa1, ea1; + + if (!trp->p || trp->rid != rid) + return 0; + + sa1 = trp->vadr; + ea1 = sa1 + PSIZE(trp->ps) - 1; + eva -= 1; + if ((sva > ea1) || (sa1 > eva)) + return 0; + else + return 1; + +} + +void machine_tlb_purge(u64 va, u64 ps) +{ + ia64_ptcl(va, ps << 2); +} + +void local_flush_tlb_all(void) +{ + int i, j; + unsigned long flags, count0, count1; + unsigned long stride0, stride1, addr; + + addr = current_vcpu->arch.ptce_base; + count0 = current_vcpu->arch.ptce_count[0]; + count1 = current_vcpu->arch.ptce_count[1]; + stride0 = current_vcpu->arch.ptce_stride[0]; + stride1 = current_vcpu->arch.ptce_stride[1]; + + local_irq_save(flags); + for (i = 0; i < count0; ++i) { + for (j = 0; j < count1; ++j) { + ia64_ptce(addr); + addr += stride1; + } + addr += stride0; + } + local_irq_restore(flags); + ia64_srlz_i(); /* srlz.i implies srlz.d */ +} + +int vhpt_enabled(struct kvm_vcpu *vcpu, u64 vadr, enum vhpt_ref ref) +{ + union ia64_rr vrr; + union ia64_pta vpta; + struct ia64_psr vpsr; + + vpsr = *(struct ia64_psr *)&VCPU(vcpu, vpsr); + vrr.val = vcpu_get_rr(vcpu, vadr); + vpta.val = vcpu_get_pta(vcpu); + + if (vrr.ve & vpta.ve) { + switch (ref) { + case DATA_REF: + case NA_REF: + return vpsr.dt; + case INST_REF: + return vpsr.dt && vpsr.it && vpsr.ic; + case RSE_REF: + return vpsr.dt && vpsr.rt; + + } + } + return 0; +} + +struct thash_data *vsa_thash(union ia64_pta vpta, u64 va, u64 vrr, u64 *tag) +{ + u64 index, pfn, rid, pfn_bits; + + pfn_bits = vpta.size - 5 - 8; + pfn = REGION_OFFSET(va) >> _REGION_PAGE_SIZE(vrr); + rid = _REGION_ID(vrr); + index = ((rid & 0xff) << pfn_bits)|(pfn & ((1UL << pfn_bits) - 1)); + *tag = ((rid >> 8) & 0xffff) | ((pfn >> pfn_bits) << 16); + + return (struct thash_data *)((vpta.base << PTA_BASE_SHIFT) + + (index << 5)); +} + +struct thash_data *__vtr_lookup(struct kvm_vcpu *vcpu, u64 va, int type) +{ + + struct thash_data *trp; + int i; + u64 rid; + 
+ rid = vcpu_get_rr(vcpu, va); + rid = rid & RR_RID_MASK; + if (type == D_TLB) { + if (vcpu_quick_region_check(vcpu->arch.dtr_regions, va)) { + for (trp = (struct thash_data *)&vcpu->arch.dtrs, i = 0; + i < NDTRS; i++, trp++) { + if (__is_tr_tr
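`vsa_thash()` above computes where a guest translation lands in the long-format VHPT: with `pfn_bits = pta.size - 5 - 8` (32-byte entries, hence the final `<< 5`), the table index mixes the low 8 bits of the region id with the low `pfn_bits` bits of the page frame number, and the tag keeps the bits the index threw away. A pure-function sketch of that hash — note the rid mask in the posted hunk is garbled to `0x`, so `0xffff` is assumed here, and all the input values in the checks are arbitrary:

```c
#include <assert.h>
#include <stdint.h>

/* Table index as in vsa_thash(): low 8 rid bits shifted over the low
 * pfn bits, so the index always fits in pta_size - 5 bits. */
static uint64_t vhpt_index(uint64_t va_offset, unsigned page_shift,
                           uint64_t rid, unsigned pta_size)
{
    unsigned pfn_bits = pta_size - 5 - 8;
    uint64_t pfn = va_offset >> page_shift;

    return ((rid & 0xff) << pfn_bits) | (pfn & ((1ULL << pfn_bits) - 1));
}

/* Tag as in vsa_thash(): the rid and pfn bits not consumed by the index
 * (0xffff assumed for the truncated mask in the posted patch). */
static uint64_t vhpt_tag(uint64_t va_offset, unsigned page_shift,
                         uint64_t rid, unsigned pta_size)
{
    unsigned pfn_bits = pta_size - 5 - 8;
    uint64_t pfn = va_offset >> page_shift;

    return ((rid >> 8) & 0xffff) | ((pfn >> pfn_bits) << 16);
}
```

Index and tag together are what `__vtr_lookup`-style probes compare: two translations collide in the table only when both rid and pfn agree on the hashed bits, and the tag disambiguates the rest.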
[kvm-devel] [18/18] KVM: IA64: A Guide about how to create kvm guests on ia64 V9
>From 365a0bb8b49354f9111b5745edb21b5e153784d9 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Tue, 1 Apr 2008 15:08:29 +0800 Subject: [PATCH] KVM: IA64: A Guide about how to create kvm guests on ia64 Guide for creating virtual machines on kvm/ia64. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- Documentation/ia64/kvm.txt | 82 1 files changed, 82 insertions(+), 0 deletions(-) create mode 100644 Documentation/ia64/kvm.txt diff --git a/Documentation/ia64/kvm.txt b/Documentation/ia64/kvm.txt new file mode 100644 index 000..a573521 --- /dev/null +++ b/Documentation/ia64/kvm.txt @@ -0,0 +1,82 @@ +Currently, the kvm module is in the EXPERIMENTAL stage on IA64, which means +its interfaces are not stable enough yet. Please do not run critical +applications in virtual machines. We will try our best to make it +robust in future versions! + Guide: How to boot up guests on kvm/ia64 + +This guide describes how to enable kvm support for IA-64 systems. + +1. Get the kvm source from git.kernel.org. + Userspace source: + git clone git://git.kernel.org/pub/scm/virt/kvm/kvm-userspace.git + Kernel Source: + git clone git://git.kernel.org/pub/scm/linux/kernel/git/xiantao/kvm-ia64.git + +2. Compile the source code. + 2.1 Compile userspace code: + (1) cd ./kvm-userspace + (2) ./configure + (3) cd kernel + (4) make sync LINUX=$kernel_dir (kernel_dir is the directory of the kernel source.) + (5) cd .. + (6) make qemu + (7) cd qemu; make install + + 2.2 Compile kernel source code: + (1) cd ./$kernel_dir + (2) make menuconfig + (3) Enter the virtualization option, and choose kvm. + (4) make + (5) Once (4) is done, run make modules_install + (6) Build an initrd, and reboot the host machine with the new kernel. + (7) Once (6) is done, cd $kernel_dir/arch/ia64/kvm + (8) insmod kvm.ko; insmod kvm-intel.ko + +Note: For step 2, please make sure that the host page size == TARGET_PAGE_SIZE of qemu; otherwise it may fail. + +3. 
Get the guest firmware, named Flash.fd, and put it in the right place: + (1) If you have the guest firmware (binary) released by Intel Corp for Xen, use it directly. + + (2) If you have no firmware at hand, please download its source with + hg clone http://xenbits.xensource.com/ext/efi-vfirmware.hg + You can find the firmware binary in the efi-vfirmware.hg/binaries directory. + + (3) Rename the firmware to Flash.fd, and copy it to /usr/local/share/qemu + +4. Boot up Linux or Windows guests: + 4.1 Create or install an image for guest boot. If you have xen experience, it should be easy. + + 4.2 Boot up guests using the following command. + /usr/local/bin/qemu-system-ia64 -smp xx -m 512 -hda $your_image + (xx is the number of virtual processors for the guest; currently the maximum value is 4) + +5. Known possible issues on some platforms with old firmware. + +If you meet strange host crash issues, try either of the following ways to solve them: + +(1): Upgrade your firmware to the latest one. + +(2): Apply the patch below to the kernel source. +diff --git a/arch/ia64/kernel/pal.S b/arch/ia64/kernel/pal.S +index 0b53344..f02b0f7 100644 +--- a/arch/ia64/kernel/pal.S ++++ b/arch/ia64/kernel/pal.S +@@ -84,7 +84,8 @@ GLOBAL_ENTRY(ia64_pal_call_static) + mov ar.pfs = loc1 + mov rp = loc0 + ;; +- srlz.d // seralize restoration of psr.l ++ srlz.i // seralize restoration of psr.l ++ ;; + br.ret.sptk.many b0 + END(ia64_pal_call_static) + +6. Bug report: + If you find any issues when using kvm/ia64, please post the bug info to the kvm-ia64-devel mailing list. + https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel/ + +Thanks for your interest! Let's work together, and make kvm/ia64 stronger and stronger! + + + Xiantao Zhang <[EMAIL PROTECTED]> + 2008.3.10 -- 1.5.2
kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
[kvm-devel] [17/18]KVM:IA64 Enable kvm build for ia64. V9
>From fe8c760aad0b51bad533c608d23ba460f0c46446 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Fri, 28 Mar 2008 14:58:47 +0800 Subject: [PATCH] KVM:IA64 Enable kvm build for ia64 Update the related Makefile and Kconfig for kvm build Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/Kconfig |3 ++ arch/ia64/Makefile |1 + arch/ia64/kvm/Kconfig | 46 arch/ia64/kvm/Makefile | 61 4 files changed, 111 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/Kconfig create mode 100644 arch/ia64/kvm/Makefile diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig index 8fa3faf..a7bb62e 100644 --- a/arch/ia64/Kconfig +++ b/arch/ia64/Kconfig @@ -19,6 +19,7 @@ config IA64 select HAVE_OPROFILE select HAVE_KPROBES select HAVE_KRETPROBES + select HAVE_KVM default y help The Itanium Processor Family is Intel's 64-bit successor to @@ -589,6 +590,8 @@ config MSPEC source "fs/Kconfig" +source "arch/ia64/kvm/Kconfig" + source "lib/Kconfig" # diff --git a/arch/ia64/Makefile b/arch/ia64/Makefile index f1645c4..ec4cca4 100644 --- a/arch/ia64/Makefile +++ b/arch/ia64/Makefile @@ -57,6 +57,7 @@ core-$(CONFIG_IA64_GENERIC) += arch/ia64/dig/ core-$(CONFIG_IA64_HP_ZX1) += arch/ia64/dig/ core-$(CONFIG_IA64_HP_ZX1_SWIOTLB) += arch/ia64/dig/ core-$(CONFIG_IA64_SGI_SN2)+= arch/ia64/sn/ +core-$(CONFIG_KVM) += arch/ia64/kvm/ drivers-$(CONFIG_PCI) += arch/ia64/pci/ drivers-$(CONFIG_IA64_HP_SIM) += arch/ia64/hp/sim/ diff --git a/arch/ia64/kvm/Kconfig b/arch/ia64/kvm/Kconfig new file mode 100644 index 000..d2e54b9 --- /dev/null +++ b/arch/ia64/kvm/Kconfig @@ -0,0 +1,46 @@ +# +# KVM configuration +# +config HAVE_KVM + bool + +menuconfig VIRTUALIZATION + bool "Virtualization" + depends on HAVE_KVM || IA64 + default y + ---help--- + Say Y here to get to see options for using your Linux host to run other + operating systems inside virtual machines (guests). + This option alone does not add any kernel code. 
+ + If you say N, all options in this submenu will be skipped and disabled. + +if VIRTUALIZATION + +config KVM + tristate "Kernel-based Virtual Machine (KVM) support" + depends on HAVE_KVM && EXPERIMENTAL + select PREEMPT_NOTIFIERS + select ANON_INODES + ---help--- + Support hosting fully virtualized guest machines using hardware + virtualization extensions. You will need a fairly recent + processor equipped with virtualization extensions. You will also + need to select one or more of the processor modules below. + + This module provides access to the hardware capabilities through + a character device node named /dev/kvm. + + To compile this as a module, choose M here: the module + will be called kvm. + + If unsure, say N. + +config KVM_INTEL + tristate "KVM for Intel Itanium 2 processors support" + depends on KVM && m + ---help--- + Provides support for KVM on Itanium 2 processors equipped with the VT + extensions. + +endif # VIRTUALIZATION diff --git a/arch/ia64/kvm/Makefile b/arch/ia64/kvm/Makefile new file mode 100644 index 000..26697d3 --- /dev/null +++ b/arch/ia64/kvm/Makefile @@ -0,0 +1,61 @@ +# This Makefile is used to generate asm-offsets.h and build the source. 
+# + +#Generate asm-offsets.h for vmm module build +offsets-file := asm-offsets.h + +always := $(offsets-file) +targets := $(offsets-file) +targets += arch/ia64/kvm/asm-offsets.s +clean-files := $(addprefix $(objtree)/,$(targets) $(obj)/memcpy.S $(obj)/memset.S) + +# Default sed regexp - multiline due to syntax constraints +define sed-y + "/^->/{s:^->\([^ ]*\) [\$$#]*\([^ ]*\) \(.*\):#define \1 \2 /* \3 */:; s:->::; p;}" +endef + +quiet_cmd_offsets = GEN $@ +define cmd_offsets + (set -e; \ +echo "#ifndef __ASM_KVM_OFFSETS_H__"; \ +echo "#define __ASM_KVM_OFFSETS_H__"; \ +echo "/*"; \ +echo " * DO NOT MODIFY."; \ +echo " *"; \ +echo " * This file was generated by Makefile"; \ +echo " *"; \ +echo " */"; \ +echo ""; \ +sed -ne $(sed-y) $<; \ +echo ""; \ +echo "#endif" ) > $@ +endef +# We use internal rules to avoid the "is up to date" message from make +arch/ia64/kvm/asm-offsets.s: arch/ia64/kvm/asm-offsets.c + $(call if_changed_dep,cc_s_c) + +$(obj)/$(offsets-file): arch/ia64/kvm/asm-offsets.s + $(call cmd,offsets) + +# +# Makefile for Kernel-based Virtual Machine module +# + +EXTRA_CFLAGS += -Ivirt/kvm -Iarch/ia64/kvm/ + +$(addprefix $(objtree)/,$(obj)/memcpy.S $(obj)/memset.S): + $(shell ln -snf ../lib/memcpy.S $(src)/memcpy.S) + $(shell ln -snf ../lib/memset.S $(src)/memset.
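The Makefile above compiles asm-offsets.c, then runs `sed` over the generated assembly to turn `->SYM value` markers into `#define` lines in asm-offsets.h, so hand-written assembly can reference C struct offsets symbolically. The underlying mechanism is just compile-time `offsetof`; a user-space analogue that emits the same kind of header directly (the struct layout is a cut-down, hypothetical stand-in for `struct kvm_vcpu`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical, cut-down stand-in for struct kvm_vcpu. */
struct demo_vcpu {
    unsigned long psr;
    unsigned long gr[32];
    unsigned long itc_offset;
    unsigned long last_itc;
};

/* User-space analogue of the kernel's DEFINE(): print the #define line
 * directly instead of embedding "->SYM value" in asm output for sed. */
#define DEFINE(sym, val) printf("#define %-28s %zu\n", #sym, (size_t)(val))

/* Emits a header equivalent to what the cmd_offsets rule generates. */
static void emit_offsets(void)
{
    puts("#ifndef __ASM_KVM_OFFSETS_H__");
    puts("#define __ASM_KVM_OFFSETS_H__");
    DEFINE(VMM_VCPU_ITC_OFS_OFFSET, offsetof(struct demo_vcpu, itc_offset));
    DEFINE(VMM_VCPU_LAST_ITC_OFFSET, offsetof(struct demo_vcpu, last_itc));
    puts("#endif");
}
```

The kernel cannot use `printf` at build time, which is why it routes the values through `asm volatile("\n->" ...)` markers and post-processes the compiler's assembly output instead.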
[kvm-devel] [Patch][04/18] KVM: IA64: Add header files for kvm/ia64. V9
>From 759f98f6cb61f5f6064180562656ef052f38490c Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Tue, 1 Apr 2008 14:45:06 +0800 Subject: [PATCH] KVM: IA64: Add header files for kvm/ia64. Three header files are added: asm-ia64/kvm.h asm-ia64/kvm_host.h asm-ia64/kvm_para.h Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- include/asm-ia64/kvm.h | 205 + include/asm-ia64/kvm_host.h | 524 +++ include/asm-ia64/kvm_para.h | 29 +++ 3 files changed, 758 insertions(+), 0 deletions(-) create mode 100644 include/asm-ia64/kvm.h create mode 100644 include/asm-ia64/kvm_host.h create mode 100644 include/asm-ia64/kvm_para.h diff --git a/include/asm-ia64/kvm.h b/include/asm-ia64/kvm.h new file mode 100644 index 000..eb2d355 --- /dev/null +++ b/include/asm-ia64/kvm.h @@ -0,0 +1,205 @@ +#ifndef __ASM_IA64_KVM_H +#define __ASM_IA64_KVM_H + +/* + * asm-ia64/kvm.h: kvm structure definitions for ia64 + * + * Copyright (C) 2007 Xiantao Zhang <[EMAIL PROTECTED]> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + * + */ + +#include +#include + +#include + +/* Architectural interrupt line count. 
*/ +#define KVM_NR_INTERRUPTS 256 + +#define KVM_IOAPIC_NUM_PINS 24 + +struct kvm_ioapic_state { + __u64 base_address; + __u32 ioregsel; + __u32 id; + __u32 irr; + __u32 pad; + union { + __u64 bits; + struct { + __u8 vector; + __u8 delivery_mode:3; + __u8 dest_mode:1; + __u8 delivery_status:1; + __u8 polarity:1; + __u8 remote_irr:1; + __u8 trig_mode:1; + __u8 mask:1; + __u8 reserve:7; + __u8 reserved[4]; + __u8 dest_id; + } fields; + } redirtbl[KVM_IOAPIC_NUM_PINS]; +}; + +#define KVM_IRQCHIP_PIC_MASTER 0 +#define KVM_IRQCHIP_PIC_SLAVE 1 +#define KVM_IRQCHIP_IOAPIC 2 + +#define KVM_CONTEXT_SIZE 8*1024 + +union context { + /* 8K size */ + char dummy[KVM_CONTEXT_SIZE]; + struct { + unsigned long psr; + unsigned long pr; + unsigned long caller_unat; + unsigned long pad; + unsigned long gr[32]; + unsigned long ar[128]; + unsigned long br[8]; + unsigned long cr[128]; + unsigned long rr[8]; + unsigned long ibr[8]; + unsigned long dbr[8]; + unsigned long pkr[8]; + struct ia64_fpreg fr[128]; + }; +}; + +struct thash_data { + union { + struct { + unsigned long p: 1; /* 0 */ + unsigned long rv1 : 1; /* 1 */ + unsigned long ma : 3; /* 2-4 */ + unsigned long a: 1; /* 5 */ + unsigned long d: 1; /* 6 */ + unsigned long pl : 2; /* 7-8 */ + unsigned long ar : 3; /* 9-11 */ + unsigned long ppn : 38; /* 12-49 */ + unsigned long rv2 : 2; /* 50-51 */ + unsigned long ed : 1; /* 52 */ + unsigned long ig1 : 11; /* 53-63 */ + }; + struct { + unsigned long __rv1 : 53; /* 0-52 */ + unsigned long contiguous : 1; /*53 */ + unsigned long tc : 1; /* 54 TR or TC */ + unsigned long cl : 1; + /* 55 I side or D side cache line */ + unsigned long len : 4; /* 56-59 */ + unsigned long io : 1; /* 60 entry is for io or not */ + unsigned long nomap : 1; + /* 61 entry can't be inserted into machine TLB.*/ + unsigned long checked : 1; + /* 62 for VTLB/VHPT sanity check */ + unsigned long invalid : 1; + /* 63 invalid entry */ +
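`union context` above overlays the whole register-save layout on `char dummy[KVM_CONTEXT_SIZE]`, so the union's size is pinned at exactly 8 KB no matter how the inner struct evolves (as long as it stays under the limit), which keeps the VMM's save-area stride stable. A minimal sketch of that fixed-size-union idiom — the field list is abbreviated, and the anonymous struct needs C11 or GNU C:

```c
#include <assert.h>
#include <stddef.h>

#define DEMO_CONTEXT_SIZE (8 * 1024)

/* The char array pins sizeof(union demo_context) at 8 KB; the anonymous
 * struct overlays the actual fields on the same storage. */
union demo_context {
    char dummy[DEMO_CONTEXT_SIZE];
    struct {
        unsigned long psr;
        unsigned long pr;
        unsigned long gr[32];
        unsigned long ar[128];
        unsigned long br[8];
    };
};

/* Fails to compile if the inner struct ever outgrows the save area. */
_Static_assert(sizeof(union demo_context) == DEMO_CONTEXT_SIZE,
               "context save area must stay exactly 8 KB");
```

The static assertion is the payoff: growing the field list past 8 KB becomes a build error rather than a silent overrun of whatever the VMM allocates per context.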
[kvm-devel] [13/18] KVM:IA64: Add optimization for some virtualization faults V9
>From 0d698efed15759b49e78adcef085feda0a14a175 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Tue, 1 Apr 2008 14:57:09 +0800 Subject: [PATCH] KVM:IA64: Add optimization for some virtualization faults optvfault.S Add optimization for some performance-critical virtualization faults. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/optvfault.S | 918 + 1 files changed, 918 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/optvfault.S diff --git a/arch/ia64/kvm/optvfault.S b/arch/ia64/kvm/optvfault.S new file mode 100644 index 000..5de210e --- /dev/null +++ b/arch/ia64/kvm/optvfault.S @@ -0,0 +1,918 @@ +/* + * arch/ia64/vmx/optvfault.S + * optimize virtualization fault handler + * + * Copyright (C) 2006 Intel Co + * Xuefei Xu (Anthony Xu) <[EMAIL PROTECTED]> + */ + +#include +#include + +#include "vti.h" +#include "asm-offsets.h" + +#define ACCE_MOV_FROM_AR +#define ACCE_MOV_FROM_RR +#define ACCE_MOV_TO_RR +#define ACCE_RSM +#define ACCE_SSM +#define ACCE_MOV_TO_PSR +#define ACCE_THASH + +//mov r1=ar3 +GLOBAL_ENTRY(kvm_asm_mov_from_ar) +#ifndef ACCE_MOV_FROM_AR +br.many kvm_virtualization_fault_back +#endif +add r18=VMM_VCPU_ITC_OFS_OFFSET, r21 +add r16=VMM_VCPU_LAST_ITC_OFFSET,r21 +extr.u r17=r25,6,7 +;; +ld8 r18=[r18] +mov r19=ar.itc +mov r24=b0 +;; +add r19=r19,r18 +addl r20=@gprel(asm_mov_to_reg),gp +;; +st8 [r16] = r19 +adds r30=kvm_resume_to_guest-asm_mov_to_reg,r20 +shladd r17=r17,4,r20 +;; +mov b0=r17 +br.sptk.few b0 +;; +END(kvm_asm_mov_from_ar) + + +// mov r1=rr[r3] +GLOBAL_ENTRY(kvm_asm_mov_from_rr) +#ifndef ACCE_MOV_FROM_RR +br.many kvm_virtualization_fault_back +#endif +extr.u r16=r25,20,7 +extr.u r17=r25,6,7 +addl r20=@gprel(asm_mov_from_reg),gp +;; +adds r30=kvm_asm_mov_from_rr_back_1-asm_mov_from_reg,r20 +shladd r16=r16,4,r20 +mov r24=b0 +;; +add r27=VMM_VCPU_VRR0_OFFSET,r21 +mov b0=r16 +br.many b0 +;; +kvm_asm_mov_from_rr_back_1: +adds 
r30=kvm_resume_to_guest-asm_mov_from_reg,r20 +adds r22=asm_mov_to_reg-asm_mov_from_reg,r20 +shr.u r26=r19,61 +;; +shladd r17=r17,4,r22 +shladd r27=r26,3,r27 +;; +ld8 r19=[r27] +mov b0=r17 +br.many b0 +END(kvm_asm_mov_from_rr) + + +// mov rr[r3]=r2 +GLOBAL_ENTRY(kvm_asm_mov_to_rr) +#ifndef ACCE_MOV_TO_RR +br.many kvm_virtualization_fault_back +#endif +extr.u r16=r25,20,7 +extr.u r17=r25,13,7 +addl r20=@gprel(asm_mov_from_reg),gp +;; +adds r30=kvm_asm_mov_to_rr_back_1-asm_mov_from_reg,r20 +shladd r16=r16,4,r20 +mov r22=b0 +;; +add r27=VMM_VCPU_VRR0_OFFSET,r21 +mov b0=r16 +br.many b0 +;; +kvm_asm_mov_to_rr_back_1: +adds r30=kvm_asm_mov_to_rr_back_2-asm_mov_from_reg,r20 +shr.u r23=r19,61 +shladd r17=r17,4,r20 +;; +//if rr6, go back +cmp.eq p6,p0=6,r23 +mov b0=r22 +(p6) br.cond.dpnt.many kvm_virtualization_fault_back +;; +mov r28=r19 +mov b0=r17 +br.many b0 +kvm_asm_mov_to_rr_back_2: +adds r30=kvm_resume_to_guest-asm_mov_from_reg,r20 +shladd r27=r23,3,r27 +;; // vrr.rid<<4 |0xe +st8 [r27]=r19 +mov b0=r30 +;; +extr.u r16=r19,8,26 +extr.u r18 =r19,2,6 +mov r17 =0xe +;; +shladd r16 = r16, 4, r17 +extr.u r19 =r19,0,8 +;; +shl r16 = r16,8 +;; +add r19 = r19, r16 +;; //set ve 1 +dep r19=-1,r19,0,1 +cmp.lt p6,p0=14,r18 +;; +(p6) mov r18=14 +;; +(p6) dep r19=r18,r19,2,6 +;; +cmp.eq p6,p0=0,r23 +;; +cmp.eq.or p6,p0=4,r23 +;; +adds r16=VMM_VCPU_MODE_FLAGS_OFFSET,r21 +(p6) adds r17=VMM_VCPU_META_SAVED_RR0_OFFSET,r21 +;; +ld4 r16=[r16] +cmp.eq p7,p0=r0,r0 +(p6) shladd r17=r23,1,r17 +;; +(p6) st8 [r17]=r19 +(p6) tbit.nz p6,p7=r16,0 +;; +(p7) mov rr[r28]=r19 +mov r24=r22 +br.many b0 +END(kvm_asm_mov_to_rr) + + +//rsm +GLOBAL_ENTRY(kvm_asm_rsm) +#ifndef ACCE_RSM +br.many kvm_virtualization_fault_back +#endif +add r16=VMM_VPD_BASE_OFFSET,r21 +extr.u r26=r25,6,21 +extr.u r27=r25,31,2 +;; +ld8 r16=[r16] +extr.u r28=r25,36,1 +dep r26=r27,r26,21,2 +;; +add r17=VPD_VPSR_START_OFFSET,r16 +add r22=VMM_VCPU_MODE_FLAGS_OFFSET,r21 +//r26 is imm24 +dep r26=r28,r26,23,1 +;; +ld8 r18=[r17] 
+movl r28=IA64_PSR_IC+IA64_PSR_I+IA64_PSR_DT+IA64_PSR_SI +ld4 r23=[r22] +sub r27=-1,r26 +mov r24=b0 +;; +mov r20=cr.ipsr +or r28=r27,r28 +and r19=r18,r27 +;; +st8 [r17]=r19 +and r20=r20,r28 +/* Comment it out due to short of fp lazy algorithm support +adds r27=IA64_VCPU_FP_PSR_OFFSET,r21 +;; +ld8 r27=[r27] +;; +tbit.nz p8,p0= r27,IA64_PSR_DFH_BIT +;; +(p8) dep r20=-1,r20,IA64_
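In `kvm_asm_rsm` above, the 24-bit immediate is reassembled from the instruction fields and `sub r27=-1,r26` then produces the clear mask: in two's complement, `-1 - x` equals `~x`, so the guest vpsr update is simply `vpsr &= ~imm24`, while the host ipsr is masked with `~imm24 | (IC|I|DT|SI)` (`or r28=r27,r28`) so those always-needed host bits survive the clear. The arithmetic, spelled out in C (the bit values in the checks are arbitrary demo numbers, not the real IA64_PSR_* constants):

```c
#include <assert.h>
#include <stdint.h>

/* "sub r27=-1,r26": in two's complement, -1 - x == ~x. */
static uint64_t clear_mask(uint64_t imm24)
{
    return (uint64_t)-1 - imm24;
}

/* Guest view: rsm just clears the requested psr bits. */
static uint64_t rsm_vpsr(uint64_t vpsr, uint64_t imm24)
{
    return vpsr & clear_mask(imm24);
}

/* Host view: same clear, but bits the VMM itself depends on
 * (IC/I/DT/SI in the assembly) are exempted from clearing. */
static uint64_t rsm_ipsr(uint64_t ipsr, uint64_t imm24, uint64_t keep)
{
    return ipsr & (clear_mask(imm24) | keep);
}
```

Using `sub` instead of a logical-not lets the assembly fold the complement into the same bundle slot as ordinary ALU work, which is the whole point of these hand-optimized fault paths.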
[kvm-devel] [14/18] KVM:IA64 : Generate offset values for assembly code use. V9
>From 10d254b2fdb93a366a2f3f29e17f6c1dfbb6e0a6 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Tue, 1 Apr 2008 14:57:53 +0800 Subject: [PATCH] KVM:IA64 : Generate offset values for assembly code use. asm-offsets.c will generate offset values used by assembly code for some fields of special structures. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/asm-offsets.c | 251 +++ 1 files changed, 251 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/asm-offsets.c diff --git a/arch/ia64/kvm/asm-offsets.c b/arch/ia64/kvm/asm-offsets.c new file mode 100644 index 000..4e3dc13 --- /dev/null +++ b/arch/ia64/kvm/asm-offsets.c @@ -0,0 +1,251 @@ +/* + * asm-offsets.c Generate definitions needed by assembly language modules. + * This code generates raw asm output which is post-processed + * to extract and format the required data. + * + * Anthony Xu <[EMAIL PROTECTED]> + * Xiantao Zhang <[EMAIL PROTECTED]> + * Copyright (c) 2007 Intel Corporation KVM support. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. 
+ * + */ + +#include +#include + +#include "vcpu.h" + +#define task_struct kvm_vcpu + +#define DEFINE(sym, val) \ + asm volatile("\n->" #sym " (%0) " #val : : "i" (val)) + +#define BLANK() asm volatile("\n->" : :) + +#define OFFSET(_sym, _str, _mem) \ +DEFINE(_sym, offsetof(_str, _mem)); + +void foo(void) +{ + DEFINE(VMM_TASK_SIZE, sizeof(struct kvm_vcpu)); + DEFINE(VMM_PT_REGS_SIZE, sizeof(struct kvm_pt_regs)); + + BLANK(); + + DEFINE(VMM_VCPU_META_RR0_OFFSET, + offsetof(struct kvm_vcpu, arch.metaphysical_rr0)); + DEFINE(VMM_VCPU_META_SAVED_RR0_OFFSET, + offsetof(struct kvm_vcpu, + arch.metaphysical_saved_rr0)); + DEFINE(VMM_VCPU_VRR0_OFFSET, + offsetof(struct kvm_vcpu, arch.vrr[0])); + DEFINE(VMM_VPD_IRR0_OFFSET, + offsetof(struct vpd, irr[0])); + DEFINE(VMM_VCPU_ITC_CHECK_OFFSET, + offsetof(struct kvm_vcpu, arch.itc_check)); + DEFINE(VMM_VCPU_IRQ_CHECK_OFFSET, + offsetof(struct kvm_vcpu, arch.irq_check)); + DEFINE(VMM_VPD_VHPI_OFFSET, + offsetof(struct vpd, vhpi)); + DEFINE(VMM_VCPU_VSA_BASE_OFFSET, + offsetof(struct kvm_vcpu, arch.vsa_base)); + DEFINE(VMM_VCPU_VPD_OFFSET, + offsetof(struct kvm_vcpu, arch.vpd)); + DEFINE(VMM_VCPU_IRQ_CHECK, + offsetof(struct kvm_vcpu, arch.irq_check)); + DEFINE(VMM_VCPU_TIMER_PENDING, + offsetof(struct kvm_vcpu, arch.timer_pending)); + DEFINE(VMM_VCPU_META_SAVED_RR0_OFFSET, + offsetof(struct kvm_vcpu, arch.metaphysical_saved_rr0)); + DEFINE(VMM_VCPU_MODE_FLAGS_OFFSET, + offsetof(struct kvm_vcpu, arch.mode_flags)); + DEFINE(VMM_VCPU_ITC_OFS_OFFSET, + offsetof(struct kvm_vcpu, arch.itc_offset)); + DEFINE(VMM_VCPU_LAST_ITC_OFFSET, + offsetof(struct kvm_vcpu, arch.last_itc)); + DEFINE(VMM_VCPU_SAVED_GP_OFFSET, + offsetof(struct kvm_vcpu, arch.saved_gp)); + + BLANK(); + + DEFINE(VMM_PT_REGS_B6_OFFSET, + offsetof(struct kvm_pt_regs, b6)); + DEFINE(VMM_PT_REGS_B7_OFFSET, + offsetof(struct kvm_pt_regs, b7)); + DEFINE(VMM_PT_REGS_AR_CSD_OFFSET, + offsetof(struct kvm_pt_regs, ar_csd)); + DEFINE(VMM_PT_REGS_AR_SSD_OFFSET, + offsetof(struct 
kvm_pt_regs, ar_ssd)); + DEFINE(VMM_PT_REGS_R8_OFFSET, + offsetof(struct kvm_pt_regs, r8)); + DEFINE(VMM_PT_REGS_R9_OFFSET, + offsetof(struct kvm_pt_regs, r9)); + DEFINE(VMM_PT_REGS_R10_OFFSET, + offsetof(struct kvm_pt_regs, r10)); + DEFINE(VMM_PT_REGS_R11_OFFSET, + offsetof(struct kvm_pt_regs, r11)); + DEFINE(VMM_PT_REGS_CR_IPSR_OFFSET, + offsetof(struct kvm_pt_regs, c
[kvm-devel] [Patch][03/18] Prepare some structure definitions and routines for kvm use.
>From 7f1714377e6d5812b4557bb3ccd8268b57865952 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Tue, 1 Apr 2008 14:42:00 +0800 Subject: [PATCH] KVM: IA64 : Prepare some structure definitions and routines for kvm use. Register structures are defined per SDM. Add three small routines for kernel: ia64_ttag, ia64_loadrs, ia64_flushrs Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- include/asm-ia64/gcc_intrin.h | 12 include/asm-ia64/processor.h | 63 + 2 files changed, 75 insertions(+), 0 deletions(-) diff --git a/include/asm-ia64/gcc_intrin.h b/include/asm-ia64/gcc_intrin.h index de2ed2c..2fe292c 100644 --- a/include/asm-ia64/gcc_intrin.h +++ b/include/asm-ia64/gcc_intrin.h @@ -21,6 +21,10 @@ #define ia64_invala_fr(regnum) asm volatile ("invala.e f%0" :: "i"(regnum)) +#define ia64_flushrs() asm volatile ("flushrs;;":::"memory") + +#define ia64_loadrs() asm volatile ("loadrs;;":::"memory") + extern void ia64_bad_param_for_setreg (void); extern void ia64_bad_param_for_getreg (void); @@ -517,6 +521,14 @@ do { \ #define ia64_ptrd(addr, size) \ asm volatile ("ptr.d %0,%1" :: "r"(addr), "r"(size) : "memory") +#define ia64_ttag(addr) \ +({ \ + __u64 ia64_intri_res; \ + asm volatile ("ttag %0=%1" : "=r"(ia64_intri_res) : "r" (addr)); \ + ia64_intri_res; \ +}) + + /* Values for lfhint in ia64_lfetch and ia64_lfetch_fault */ #define ia64_lfhint_none 0 diff --git a/include/asm-ia64/processor.h b/include/asm-ia64/processor.h index 741f7ec..6aff126 100644 --- a/include/asm-ia64/processor.h +++ b/include/asm-ia64/processor.h @@ -119,6 +119,69 @@ struct ia64_psr { __u64 reserved4 : 19; }; +union ia64_isr { + __u64 val; + struct { + __u64 code : 16; + __u64 vector : 8; + __u64 reserved1 : 8; + __u64 x : 1; + __u64 w : 1; + __u64 r : 1; + __u64 na : 1; + __u64 sp : 1; + __u64 rs : 1; + __u64 ir : 1; + __u64 ni : 1; + __u64 so : 1; + __u64 ei : 2; + __u64 ed : 1; + __u64 reserved2 : 20; + }; +}; + +union ia64_lid { + __u64 val; + struct { + __u64 rv : 16; + 
__u64 eid : 8; + __u64 id : 8; + __u64 ig : 32; + }; +}; + +union ia64_tpr { + __u64 val; + struct { + __u64 ig0 : 4; + __u64 mic : 4; + __u64 rsv : 8; + __u64 mmi : 1; + __u64 ig1 : 47; + }; +}; + +union ia64_itir { + __u64 val; + struct { + __u64 rv3 : 2; /* 0-1 */ + __u64 ps : 6; /* 2-7 */ + __u64 key : 24; /* 8-31 */ + __u64 rv4 : 32; /* 32-63 */ + }; +}; + +union ia64_rr { + __u64 val; + struct { + __u64 ve : 1; /* enable hw walker */ + __u64 reserved0: 1; /* reserved */ + __u64 ps : 6; /* log page size */ + __u64 rid : 24; /* region id */ + __u64 reserved1: 32; /* reserved */ + }; +}; + /* * CPU type, hardware bug flags, and per-CPU state. Frequently used * state comes earlier: -- 1.5.2
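The `ia64_rr` union above (like `ia64_isr`, `ia64_tpr` and `ia64_itir`) gives the same 64-bit register both a raw `val` view and a per-field view. Bit-field layout is formally implementation-defined, but with GCC on a little-endian psABI the first member occupies the least-significant bits, so `ve` is bit 0, `ps` is bits 2-7 and `rid` is bits 8-31 — which the raw view can verify with plain shifts:

```c
#include <assert.h>

/* Mirrors union ia64_rr from the patch; relies on GCC's low-bit-first
 * bit-field layout on little-endian targets. */
union demo_rr {
    unsigned long val;
    struct {
        unsigned long ve        :  1;   /* enable hw walker */
        unsigned long reserved0 :  1;
        unsigned long ps        :  6;   /* log2 page size   */
        unsigned long rid       : 24;   /* region id        */
        unsigned long reserved1 : 32;
    };
};

/* Build a region-register value field by field, return the raw image. */
static unsigned long make_rr(unsigned long rid, unsigned long page_shift)
{
    union demo_rr rr = { .val = 0 };

    rr.ve = 1;              /* enable the VHPT walker   */
    rr.ps = page_shift;     /* e.g. 14 for 16 KB pages  */
    rr.rid = rid;
    return rr.val;
}
```

This dual view is why code elsewhere in the series can write `vrr.val = vcpu_get_rr(...)` and then test `vrr.ve` without any manual shifting.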
[kvm-devel] [Patch][00/18] kvm-ia64 for kernel V9
Hi, all. According to the comments on V8, I refined the code and worked out this new patchset. Please help review it. Thanks! :-) In this new version, most typedefs are removed to comply with the coding-style requirements, and the issues found by reviewers have been fixed. Thanks for your effort! The whole patchset has been checked with the checkpatch.pl script; except for one file containing assembly code, which reports some warnings, the rest should be clean to check in. Xiantao
[kvm-devel] [01/18] PATCH Add API for allocating dynamic TR resource. V9
>From b0c5c7fc45bbe0f56efba28e814ccb67b8c8 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Tue, 1 Apr 2008 14:34:50 +0800 Subject: [PATCH] IA64: Add API for allocating Dynamic TR resource. Dynamic TR resource should be managed in a uniform way. Add two interfaces for the kernel: ia64_itr_entry: Allocate a (pair of) TR for the caller. ia64_ptr_entry: Purge a (pair of) TR for the caller. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> --- arch/ia64/kernel/mca.c | 49 +++ arch/ia64/kernel/mca_asm.S |5 + arch/ia64/mm/tlb.c | 198 include/asm-ia64/kregs.h |3 + include/asm-ia64/tlb.h | 26 ++ 5 files changed, 281 insertions(+), 0 deletions(-) diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c index 6c18221..607006a 100644 --- a/arch/ia64/kernel/mca.c +++ b/arch/ia64/kernel/mca.c @@ -97,6 +97,7 @@ #include #include +#include #include "mca_drv.h" #include "entry.h" @@ -112,6 +113,7 @@ DEFINE_PER_CPU(u64, ia64_mca_data); /* == __per_cpu_mca[smp_processor_id()] */ DEFINE_PER_CPU(u64, ia64_mca_per_cpu_pte); /* PTE to map per-CPU area */ DEFINE_PER_CPU(u64, ia64_mca_pal_pte); /* PTE to map PAL code */ DEFINE_PER_CPU(u64, ia64_mca_pal_base);/* vaddr PAL code granule */ +DEFINE_PER_CPU(u64, ia64_mca_tr_reload); /* Flag for TR reload */ unsigned long __per_cpu_mca[NR_CPUS]; @@ -1182,6 +1184,49 @@ all_in: return; } +/* mca_insert_tr + * + * Switch rid when TR reload is needed! 
+ * iord: 1: itr, 2: dtr; + * +*/ +static void mca_insert_tr(u64 iord) +{ + + int i; + u64 old_rr; + struct ia64_tr_entry *p; + unsigned long psr; + int cpu = smp_processor_id(); + + psr = ia64_clear_ic(); + for (i = IA64_TR_ALLOC_BASE; i < IA64_TR_ALLOC_MAX; i++) { + p = &__per_cpu_idtrs[cpu][iord-1][i]; + if (p->pte & 0x1) { + old_rr = ia64_get_rr(p->ifa); + if (old_rr != p->rr) { + ia64_set_rr(p->ifa, p->rr); + ia64_srlz_d(); + } + ia64_ptr(iord, p->ifa, p->itir >> 2); + ia64_srlz_i(); + if (iord & 0x1) { + ia64_itr(0x1, i, p->ifa, p->pte, p->itir >> 2); + ia64_srlz_i(); + } + if (iord & 0x2) { + ia64_itr(0x2, i, p->ifa, p->pte, p->itir >> 2); + ia64_srlz_i(); + } + if (old_rr != p->rr) { + ia64_set_rr(p->ifa, old_rr); + ia64_srlz_d(); + } + } + } + ia64_set_psr(psr); +} + /* * ia64_mca_handler * @@ -1271,6 +1316,10 @@ ia64_mca_handler(struct pt_regs *regs, struct switch_stack *sw, monarch_cpu = -1; #endif } + if (__get_cpu_var(ia64_mca_tr_reload)) { + mca_insert_tr(0x1); /*Reload dynamic itrs*/ + mca_insert_tr(0x2); /*Reload dynamic dtrs*/ + } if (notify_die(DIE_MCA_MONARCH_LEAVE, "MCA", regs, (long)&nd, 0, recover) == NOTIFY_STOP) ia64_mca_spin(__func__); diff --git a/arch/ia64/kernel/mca_asm.S b/arch/ia64/kernel/mca_asm.S index 8bc7d25..a06d465 100644 --- a/arch/ia64/kernel/mca_asm.S +++ b/arch/ia64/kernel/mca_asm.S @@ -219,8 +219,13 @@ ia64_reload_tr: mov r20=IA64_TR_CURRENT_STACK ;; itr.d dtr[r20]=r16 + GET_THIS_PADDR(r2, ia64_mca_tr_reload) + mov r18 = 1 ;; srlz.d + ;; + st8 [r2] =r18 + ;; done_tlb_purge_and_reload: diff --git a/arch/ia64/mm/tlb.c b/arch/ia64/mm/tlb.c index 655da24..626100c 100644 --- a/arch/ia64/mm/tlb.c +++ b/arch/ia64/mm/tlb.c @@ -26,6 +26,8 @@ #include #include #include +#include +#include static struct { unsigned long mask; /* mask of supported purge page-sizes */ @@ -39,6 +41,10 @@ struct ia64_ctx ia64_ctx = { }; DEFINE_PER_CPU(u8, ia64_need_tlb_flush); +DEFINE_PER_CPU(u8, ia64_tr_num); /*Number of TR slots in current processor*/
+DEFINE_PER_CPU(u8, ia64_tr_used); /*Max Slot number used by kernel*/ + +struct ia64_tr_entry __per_cpu_idtrs[NR_CPUS][2][IA64_TR_ALLOC_MAX]; /* * Initializes the ia64_ctx.bitmap array based on max_ctx+1. @@ -190,6 +196,9 @@ ia64_tlb_init (void) ia64_ptce_info_t uninitialized_var(ptce_info); /* GCC be quiet */ unsigned long tr_pgbits; long status; + pal_vm_info_1_u_t vm_info_1; + pal_vm_info_2_u_t vm_info_2; + int cpu = smp_processor_id(); if ((status = ia64_pal_vm_page_size(&tr_pgbits, &purge.mask)) != 0) { printk(KERN_ERR "PAL_V
Re: [kvm-devel] [04/17] [PATCH] Add kvm arch-specific core code for kvm/ia64.-V8
Carsten Otte wrote: > Zhang, Xiantao wrote: >> Hi, Carsten >> Why do you think it is racy? In this function, >> target_vcpu->arch.launched should be set to 1 for the first run, and >> keep its value all the time. Except the first IPI to wake up the >> vcpu, all IPIs received by target vcpu should go into "else" >> condition. So you mean the race condition exists in the "else" code? > For example to lock against destroying that vcpu. Or, the waitqueue > may become active after if (waitqueue_active()) and before > wake_up_interruptible(). In that case, the target vcpu might sleep and > not get woken up by the IPI. I don't think it will cause a real issue, because the target vcpu can at least be woken up by the timer interrupt. But as you said, does the x86 side also have the same race issue? Xiantao
Re: [kvm-devel] [07/17][PATCH] kvm/ia64: Add TLB virtualization support. -V8
Jes Sorensen wrote: > Zhang, Xiantao wrote: >>> From 6b731c15afa8cec84f16408c421c286f1dd1b7d3 Mon Sep 17 00:00:00 >>> 2001 >> From: Xiantao Zhang <[EMAIL PROTECTED]> >> Date: Wed, 12 Mar 2008 13:45:40 +0800 >> Subject: [PATCH] KVM: IA64: Add TLB virtualization support. >> >> vtlb.c includes tlb/VHPT virtualization. >> Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> >> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> > > Hi Xiantao, > > Just a clarification question on this one: > > >> +void machine_tlb_purge(u64 va, u64 ps) >> +{ >> +ia64_ptcl(va, ps << 2); >> +} > > What is the purpose of machine_tlb_purge()? Is it supposed to do a > global purge of the tlb on the host machine? If so, how does this > macro differ from platform_global_tlb_purge()? Hi, Jes It is not a global purge — it just purges the local processor's TLB entries covered by the parameters :-) Xiantao > I am mentioning this because it's very important to keep in mind that > the regular tlb purging instructions are not functional on all ia64 > platforms, which is why we have special implementations via the > machine vector interface. For a global purge, we would indeed need to consider the machine vector.
Re: [kvm-devel] [04/17] [PATCH] Add kvm arch-specific core code for kvm/ia64.-V8
Carsten Otte wrote: > Zhang, Xiantao wrote: >> +static struct kvm_vcpu *lid_to_vcpu(struct kvm *kvm, unsigned long >> id, +unsigned long eid) +{ >> +ia64_lid_t lid; >> +int i; >> + >> +for (i = 0; i < KVM_MAX_VCPUS; i++) { >> +if (kvm->vcpus[i]) { >> +lid.val = VCPU_LID(kvm->vcpus[i]); >> +if (lid.id == id && lid.eid == eid) >> +return kvm->vcpus[i]; >> +} >> +} >> + >> +return NULL; >> +} >> + >> +static int handle_ipi(struct kvm_vcpu *vcpu, struct kvm_run >> *kvm_run) +{ + struct exit_ctl_data *p = kvm_get_exit_data(vcpu); >> +struct kvm_vcpu *target_vcpu; >> +struct kvm_pt_regs *regs; >> +ia64_ipi_a addr = p->u.ipi_data.addr; >> +ia64_ipi_d data = p->u.ipi_data.data; >> + >> +target_vcpu = lid_to_vcpu(vcpu->kvm, addr.id, addr.eid); + if >> (!target_vcpu) + return handle_vm_error(vcpu, kvm_run); >> + >> +if (!target_vcpu->arch.launched) { >> +regs = vcpu_regs(target_vcpu); >> + >> +regs->cr_iip = vcpu->kvm->arch.rdv_sal_data.boot_ip; >> +regs->r1 = vcpu->kvm->arch.rdv_sal_data.boot_gp; + >> +target_vcpu->arch.mp_state = VCPU_MP_STATE_RUNNABLE; >> +if (waitqueue_active(&target_vcpu->wq)) >> +wake_up_interruptible(&target_vcpu->wq); >> +} else { >> +vcpu_deliver_ipi(target_vcpu, data.dm, data.vector); + if >> (target_vcpu != vcpu) + kvm_vcpu_kick(target_vcpu); >> +} >> + >> +return 1; >> +} > *Shrug*. This looks highly racy to me. You do access various values in > target_vcpu without any lock! I know that taking the target vcpu's > lock does'nt work because that one is held all the time during > KVM_VCPU_RUN. My solution to that was struct local_interrupt, which > has its own lock, and has the waitqueue plus everything I need to send > a sigp [that's our flavor of ipi]. ex Hi, Carsten Why do you think it is racy? In this function, target_vcpu->arch.launched should be set to 1 for the first run, and keep its value all the time. Except the first IPI to wake up the vcpu, all IPIs received by target vcpu should go into "else" condition. 
So you mean the race condition exists in the "else" code? Xiantao
Re: [kvm-devel] [04/17] [PATCH] Add kvm arch-specific core code for kvm/ia64.-V8
Carsten Otte wrote: > Zhang, Xiantao wrote: >> +static struct kvm_vcpu *lid_to_vcpu(struct kvm *kvm, unsigned long >> id, +unsigned long eid) +{ >> +ia64_lid_t lid; >> +int i; >> + >> +for (i = 0; i < KVM_MAX_VCPUS; i++) { >> +if (kvm->vcpus[i]) { >> +lid.val = VCPU_LID(kvm->vcpus[i]); >> +if (lid.id == id && lid.eid == eid) >> +return kvm->vcpus[i]; >> +} >> +} >> + >> +return NULL; >> +} >> + >> +static int handle_ipi(struct kvm_vcpu *vcpu, struct kvm_run >> *kvm_run) +{ + struct exit_ctl_data *p = kvm_get_exit_data(vcpu); >> +struct kvm_vcpu *target_vcpu; >> +struct kvm_pt_regs *regs; >> +ia64_ipi_a addr = p->u.ipi_data.addr; >> +ia64_ipi_d data = p->u.ipi_data.data; >> + >> +target_vcpu = lid_to_vcpu(vcpu->kvm, addr.id, addr.eid); + if >> (!target_vcpu) + return handle_vm_error(vcpu, kvm_run); >> + >> +if (!target_vcpu->arch.launched) { >> +regs = vcpu_regs(target_vcpu); >> + >> +regs->cr_iip = vcpu->kvm->arch.rdv_sal_data.boot_ip; >> +regs->r1 = vcpu->kvm->arch.rdv_sal_data.boot_gp; + >> +target_vcpu->arch.mp_state = VCPU_MP_STATE_RUNNABLE; >> +if (waitqueue_active(&target_vcpu->wq)) >> +wake_up_interruptible(&target_vcpu->wq); >> +} else { >> +vcpu_deliver_ipi(target_vcpu, data.dm, data.vector); + if >> (target_vcpu != vcpu) + kvm_vcpu_kick(target_vcpu); >> +} >> + >> +return 1; >> +} > *Shrug*. This looks highly racy to me. You do access various values in > target_vcpu without any lock! I know that taking the target vcpu's > lock does'nt work because that one is held all the time during > KVM_VCPU_RUN. My solution to that was struct local_interrupt, which > has its own lock, and has the waitqueue plus everything I need to send > a sigp [that's our flavor of ipi]. > >> +int kvm_emulate_halt(struct kvm_vcpu *vcpu) >> +{ >> + >> +ktime_t kt; >> +long itc_diff; >> +unsigned long vcpu_now_itc; >> + >> +unsigned long expires; >> +struct hrtimer *p_ht = &vcpu->arch.hlt_timer; > That makes me jealous, I'd love to have hrtimer on s390 for this. 
I've > got to round up to the next jiffie. *Sigh* > >> +int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, + struct >> kvm_sregs *sregs) +{ >> +printk(KERN_WARNING"kvm:kvm_arch_vcpu_ioctl_set_sregs >> called!!\n"); >> +return 0; >> +} >> + >> +int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu, + struct >> kvm_sregs *sregs) +{ >> +printk(KERN_WARNING"kvm:kvm_arch_vcpu_ioctl_get_sregs >> called!!\n"); >> +return 0; >> + >> +} > Suggestion: if get/set sregs does'nt seem useful on ia64, why not > return -EINVAL? In that case, you could also not print a kern warning, > the user will either handle that situation or complain. > >> +int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu) >> +{ > >> +/*FIXME:Need to removed it later!!\n*/ >> +vcpu->arch.apic = kzalloc(sizeof(struct kvm_lapic), GFP_KERNEL); >> +vcpu->arch.apic->vcpu = vcpu; > Fixme! Removed! >> +static int vti_vcpu_setup(struct kvm_vcpu *vcpu, int id) +{ >> +unsigned long psr; >> +int r; >> + >> +local_irq_save(psr); >> +r = kvm_insert_vmm_mapping(vcpu); >> +if (r) >> +goto fail; >> +r = kvm_vcpu_init(vcpu, vcpu->kvm, id); >> +if (r) >> +goto fail; > Maybe change to return r, rather then goto fail? It should be same. >> +int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct >> kvm_fpu *fpu) +{ >> +printk(KERN_WARNING"kvm:IA64 doesn't need to export" + "fpu to >> userspace!\n"); +return 0; >> +} >> + >> +int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct >> kvm_fpu *fpu) +{ >> +printk(KERN_WARNING"kvm:IA64 doesn't need to export" >> +"fpu to userspace !\n"); >> +return 0; >> +} > maybe -EINVAL? Good suggestion! >> +static int find_highest_bits(int *dat) >> +{ >> +u32 bits, bitnum; >> +int i; >> + >> +/* loop for all 256
Re: [kvm-devel] [kvm-ia64-devel] [03/15][PATCH] kvm/ia64: Add header files for kvm/ia64. V8
Carsten Otte wrote: > Zhang, Xiantao wrote: >> +typedef union context { >> +/* 8K size */ >> +char dummy[KVM_CONTEXT_SIZE]; >> +struct { >> +unsigned long psr; >> +unsigned long pr; >> +unsigned long caller_unat; >> +unsigned long pad; >> +unsigned long gr[32]; >> +unsigned long ar[128]; >> +unsigned long br[8]; >> +unsigned long cr[128]; >> +unsigned long rr[8]; >> +unsigned long ibr[8]; >> +unsigned long dbr[8]; >> +unsigned long pkr[8]; >> +struct ia64_fpreg fr[128]; >> +}; >> +} context_t; > This looks ugly to me. I'd rather prefer to have a straight struct > with elements psr...fr[], and cast the pointer to char* when needed. > KVM_CONTEXT_SIZE can be used as parameter to kzalloc() on allocation, > it's too large to be on stack anyway. We need to allocate a fixed-size memory area that is large enough, considering backward compatibility. In the migration or save/restore case, we need to save this area. If migration happens between different kvm versions and the sizes differ, it may cause issues. For example, if we add a new field in a newer kvm and then restore the new snapshot on an older version, it may fail.
>> +typedef struct thash_data { >> +union { >> +struct { >> +unsigned long p: 1; /* 0 */ >> +unsigned long rv1 : 1; /* 1 */ >> +unsigned long ma : 3; /* 2-4 */ >> +unsigned long a: 1; /* 5 */ >> +unsigned long d: 1; /* 6 */ >> +unsigned long pl : 2; /* 7-8 */ >> +unsigned long ar : 3; /* 9-11 */ >> +unsigned long ppn : 38; /* 12-49 */ >> +unsigned long rv2 : 2; /* 50-51 */ >> +unsigned long ed : 1; /* 52 */ >> +unsigned long ig1 : 11; /* 53-63 */ >> +}; >> +struct { >> +unsigned long __rv1 : 53; /* 0-52 */ >> +unsigned long contiguous : 1; /*53 */ >> +unsigned long tc : 1; /* 54 TR or TC */ + unsigned >> long cl : 1; + /* 55 I side or D side cache line */ >> +unsigned long len : 4; /* 56-59 */ >> +unsigned long io : 1; /* 60 entry is for io or >> not */ >> +unsigned long nomap : 1; >> +/* 61 entry cann't be inserted into machine >> TLB.*/ >> +unsigned long checked : 1; >> +/* 62 for VTLB/VHPT sanity check */ >> +unsigned long invalid : 1; >> +/* 63 invalid entry */ >> +}; >> +unsigned long page_flags; >> +}; /* same for VHPT and TLB */ >> + >> +union { >> +struct { >> +unsigned long rv3 : 2; >> +unsigned long ps : 6; >> +unsigned long key : 24; >> +unsigned long rv4 : 32; >> +}; >> +unsigned long itir; >> +}; >> +union { >> +struct { >> +unsigned long ig2 : 12; >> +unsigned long vpn : 49; >> +unsigned long vrn : 3; >> +}; >> +unsigned long ifa; >> +unsigned long vadr; >> +struct { >> +unsigned long tag : 63; >> +unsigned long ti : 1; >> +}; >> +unsigned long etag; >> +}; >> +union { >> +struct thash_data *next; >> +unsigned long rid; >> +unsigned long gpaddr; >> +}; >> +} thash_data_t; > A matter of taste, but I'd prefer unsigned long mask, and > #define MASK_BIT_FOR_PURPUSE over bitfields. This structure could be > much smaller that way. Yes, but it may be not so flexible to use. >> +struct kvm_regs { >> +char *saved_guest; >> +char *saved_stack; >> +struct saved_vpd vpd; >> +/*Arch-regs*/ >> +int mp_state; >> +unsigned long vmm_rr; >> +/* TR and TC. 
*/ >> +struct thash_data itrs[
Re: [kvm-devel] [04/17] [PATCH] Add kvm arch-specific core code for kvm/ia64.-V8
Jes Sorensen wrote: > Zhang, Xiantao wrote: >>> From 62895ff991d48398a77afdbf7f2bef127e802230 Mon Sep 17 00:00:00 >>> 2001 >> From: Xiantao Zhang <[EMAIL PROTECTED]> >> Date: Fri, 28 Mar 2008 09:49:57 +0800 >> Subject: [PATCH] KVM: IA64: Add kvm arch-specific core code for >> kvm/ia64. >> >> kvm_ia64.c is created to handle kvm ia64-specific core logic. >> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> > > More comments, a couple of bugs in this one. > >> +#include >> +#include > > Don't think you need vmalloc.h here. Originally, we called vmalloc, but removed later. Maybe we can remove it now. >> +int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct >> kvm_regs *regs) +{ > [snip] >> +copy_from_user(&vcpu->arch.guest, regs->saved_guest, >> +sizeof(union context)); >> +copy_from_user(vcpu + 1, regs->saved_stack + sizeof(struct >> kvm_vcpu), >> +IA64_STK_OFFSET - sizeof(struct kvm_vcpu)); > > You need to check the return values from copy_from_user() here and > deal with possible failure. > >> +vcpu->arch.apic = kzalloc(sizeof(struct kvm_lapic), GFP_KERNEL); >> +vcpu->arch.apic->vcpu = vcpu; > > Whoops! Missing NULL pointer check here after the kzalloc. Good catch. Fixed! >> +copy_to_user(regs->saved_guest, &vcpu->arch.guest, >> +sizeof(union context)); + copy_to_user(regs->saved_stack, >> (void *)vcpu, IA64_STK_OFFSET); > > Same problem as above - check the return values. - Check out the new SourceForge.net Marketplace. It's the best place to buy or sell services for just about anything Open Source. http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
Re: [kvm-devel] [05/17][PATCH] kvm/ia64: Add header files for kvm/ia64
Jes Sorensen wrote: > Hi Xiantao, Hi, Jes I fixed the coding style issues. Thanks! > More comments. > > Zhang, Xiantao wrote: >>> From 696b9eea9f5001a7b7a07c0e58514aa10306b91a Mon Sep 17 00:00:00 >>> 2001 >> From: Xiantao Zhang <[EMAIL PROTECTED]> >> Date: Fri, 28 Mar 2008 09:51:36 +0800 >> Subject: [PATCH] KVM:IA64 : Add head files for kvm/ia64 >> >> ia64_regs: some defintions for special registers >> which aren't defined in asm-ia64/ia64regs. > > Please put missing definitions of registers into asm-ia64/ia64regs.h > if they are official definitions from the spec. Moved! >> kvm_minstate.h : Marcos about Min save routines. >> lapic.h: apic structure definition. >> vcpu.h : routions related to vcpu virtualization. >> vti.h : Some macros or routines for VT support on Itanium. >> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> > >> +/* >> + * Flushrs instruction stream. >> + */ >> +#define ia64_flushrs() asm volatile ("flushrs;;":::"memory") + >> +#define ia64_loadrs() asm volatile ("loadrs;;":::"memory") > > Please put these into include/asm-ia64/gcc_intrin.h OK. >> +#define ia64_get_rsc() >> \ >> +({ >> \ >> +unsigned long val; >> \ >> +asm volatile ("mov %0=ar.rsc;;" : "=r"(val) :: "memory"); \ >> +val; >> \ >> +}) >> + >> +#define ia64_set_rsc(val) \ >> +asm volatile ("mov ar.rsc=%0;;" :: "r"(val) : "memory") > > Please update the ia64_get/set_reg macros to handle the RSC register > and use those macros. Moved. 
>> +#define ia64_get_bspstore() >> \ >> +({ >> \ >> +unsigned long val; >> \ >> +asm volatile ("mov %0=ar.bspstore;;" : "=r"(val) :: "memory"); \ >> +val; >> \ >> +}) > > Ditto for for AR.BSPSTORE > >> +#define ia64_get_rnat() >> \ >> +({ >> \ >> +unsigned long val; >> \ >> +asm volatile ("mov %0=ar.rnat;" : "=r"(val) :: "memory"); \ >> +val; >> \ >> +}) > > Ditto for AR.RNAT > >> +static inline unsigned long ia64_get_itc(void) >> +{ >> +unsigned long result; >> +result = ia64_getreg(_IA64_REG_AR_ITC); >> +return result; >> +} > > This exists in include/asm-ia64/delay.h > >> +static inline void ia64_set_dcr(unsigned long dcr) +{ >> +ia64_setreg(_IA64_REG_CR_DCR, dcr); >> +} > > Please just call ia64_setreg() in your code rather than defining a > wrapper for it. Sure. >> +#define ia64_ttag(addr) >> \ >> +({ >> \ >> +__u64 ia64_intri_res; >> \ >> +asm volatile ("ttag %0=%1" : "=r"(ia64_intri_res) : "r" (addr)); \ >> +ia64_intri_res; >> \ >> +}) > > Please add to include/asm-ia64/gcc_intrin.h instead. > >> diff --git a/arch/ia64/kvm/lapic.h b/arch/ia64/kvm/lapic.h >> new file mode 100644 >> index 000..152cbdc >> --- /dev/null >> +++ b/arch/ia64/kvm/lapic.h >> @@ -0,0 +1,27 @@ >> +#ifndef __KVM_IA64_LAPIC_H >> +#define __KVM_IA64_LAPIC_H >> + >> +#include "iodev.h" > > I don't understand why iodev.h is included here? It is inherited from x86 side, and forget to remove it. Seems redundant. >> --- /dev/null >> +++ b/arch/ia64/kvm/vcpu.h > > The formatting of this file is dodgy, please try and make it comply > with the Linux standards in Documentation/CodingStyle > >> +#define _vmm_raw_spin_lock(x) >> \ > [snip] >> + >> +#define _vmm_raw_spin_unlock(x) \ > > Could you explain the reasoning behind these two macros? Whenever I > see open coded spin lock modifications like these, I have to admit I > get a bit worried. In the architecture of kvm/ia64, gvmm and host are in the two different worlds, and gvmm can't call host's interface. 
In migration case, we need to take a lock to sync the status of dirty memory. In order to make it work, this spin_lock is defined and used. >> +typedef struct kvm_vcpu VCPU; >> +typedef struct kvm_pt_regs REGS; >> +typedef enum { DATA_REF, NA_REF, INST_REF, RSE_REF } vhpt_ref_t; >> +typedef enum { INSTRUCTION, DATA, REGISTER } miss_type; > > ARGH! Please see previous mail about typedefs! I suspect this is code > inherited from Xen ? Xen has a lot of really nasty and pointless > typedefs like these :-( Remove
Re: [kvm-devel] [01/17][PATCH] Add API for allocating dynamic TR resource. V8
Carsten Otte wrote: > Zhang, Xiantao wrote: >> +/* mca_insert_tr >> + * >> + * Switch rid when TR reload and needed! >> + * iord: 1: itr, 2: dtr; >> + * >> +*/ >> +static void mca_insert_tr(u64 iord) >> +{ >> + >> +int i; >> +u64 old_rr; >> +struct ia64_tr_entry *p; >> +unsigned long psr; >> +int cpu = smp_processor_id(); > What if CONFIG_PREEMPT is on, and we're being preempted and scheduled > to a different CPU here? Are we running preempt disabled here? If so, > the function header should state that this function needs to be called > preempt_disabled. The function inserts one TR into the local TLB, and preemption is not allowed before or after the call, so the caller should call preempt_disable before calling into this routine. Maybe the description of this function should contain "Called with preempt disabled!". Does it make sense? Xiantao >> +/* >> + * ia64_insert_tr in virtual mode. Allocate a TR slot + * >> + * target_mask : 0x1 : itr, 0x2 : dtr, 0x3 : idtr >> + * >> + * va : virtual address. >> + * pte : pte entries inserted. >> + * log_size: range to be covered. >> + * >> + * Return value: <0 : error No. >> + * >> + *>=0 : slot number allocated for TR. >> + */ >> +int ia64_itr_entry(u64 target_mask, u64 va, u64 pte, u64 log_size) >> +{ + int i, r; >> +unsigned long psr; >> +struct ia64_tr_entry *p; >> +int cpu = smp_processor_id(); > Same here. > >> +/* >> + * ia64_purge_tr >> + * >> + * target_mask: 0x1: purge itr, 0x2 : purge dtr, 0x3 purge idtr. + * >> + * slot: slot number to be freed. >> + */ >> +void ia64_ptr_entry(u64 target_mask, int slot) >> +{ >> +int cpu = smp_processor_id(); >> +int i; >> +struct ia64_tr_entry *p; > Here again.
Re: [kvm-devel] [17/17][PATCH] kvm/ia64: How to boot up guests on kvm/ia64 -V8
Akio Takebe wrote: > Hi, Xiantao > >> +3. Get Guest Firmware named as Flash.fd, and put it under right >> place: + (1) If you have the guest firmware (binary) released by Intel >> Corp for Xen, you can use it directly. >> +(2) If you want to build a guest firmware from source code, >> please download the source from >> +hg clone >> http://xenbits.xensource.com/ext/efi-vfirmware.hg >> +Use the Guide of the source to build the open Guest >> Firmware. > The Guide is not included in this README, > so we could add a link to the Guide in the wiki. > Or we could add a link to a binary release of the GFW. Good suggestion! That way users don't need to build the firmware by themselves. :) Xiantao
Re: [kvm-devel] [02/17][PATCH] Implement smp_call_function_mask for ia64 - V8
Jes Sorensen wrote: > Zhang, Xiantao wrote: >>> From 697d50286088e98da5ac8653c80aaa96c81abf87 Mon Sep 17 00:00:00 >>> 2001 >> From: Xiantao Zhang <[EMAIL PROTECTED]> >> Date: Mon, 31 Mar 2008 09:50:24 +0800 >> Subject: [PATCH] KVM: IA64: Implement smp_call_function_mask for ia64 >> >> This function provides a more flexible interface for the smp >> infrastructure. Signed-off-by: Xiantao Zhang >> <[EMAIL PROTECTED]> > > Hi Xiantao, > > I'm a little wary of the performance impact of this change. Doing a > cpumask compare on all smp_call_function calls seems a little > expensive. Maybe it's just noise in the big picture compared to the > actual cost of the IPIs, but I thought I'd bring it up. > Keep in mind that a cpumask can be fairly big these days, max NR_CPUS > is currently 4096. For those booting a kernel with NR_CPUS at 4096 on > a dual CPU machine, it would be a bit expensive. > > Why not keep smp_call_function() the way it was before, rather than > implementing it via the call to smp_call_function_mask()? Hi, Jes I wasn't aware of the performance impact before. In the worst case, it needs 64 word comparisons, right? Maybe keeping the old smp_call_function is better? Xiantao
Re: [kvm-devel] [01/17][PATCH] Add API for allocating dynamic TR resource. V8
Jes Sorensen wrote: > Hi Xiantao, > > In general I think the code in this patch is fine. I have a couple of > nit-picking comments: > >> +if (target_mask&0x1) { > > The formatting here isn't quite what most of the kernel does. It would > be better if you added spaces so it's a little easier to read, ie: Good suggestion! > if (target_mask & 0x1) { > >> +p = &__per_cpu_idtrs[cpu][0][0]; >> +for (i = IA64_TR_ALLOC_BASE; i <= per_cpu(ia64_tr_used, cpu); >> +i++, >> p++) { >> +if (p->pte&0x1) > > Same thing here. > >> +#define RR_TO_RID(rr) ((rr)<<32>>40) > > I would prefer to have this one defined like this: > > #define RR_TO_RID(rr) (rr >> 8) & 0xff > > It should generate the same code, but is more intuitive for the > reader. Looks better :) > Otherwise I think this patch is fine - this is really just cosmetics. Thank you! Xiantao 0001-KVM-IA64-Add-API-for-allocating-Dynamic-TR-resource.patch Description: 0001-KVM-IA64-Add-API-for-allocating-Dynamic-TR-resource.patch
[kvm-devel] [07/17][PATCH] kvm/ia64: Add TLB virtualization support. -V8
>From 6b731c15afa8cec84f16408c421c286f1dd1b7d3 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 13:45:40 +0800 Subject: [PATCH] KVM:IA64 : Add TLB virtulization support. vtlb.c includes tlb/VHPT virtulization. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/vtlb.c | 631 ++ 1 files changed, 631 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/vtlb.c diff --git a/arch/ia64/kvm/vtlb.c b/arch/ia64/kvm/vtlb.c new file mode 100644 index 000..6e6ed25 --- /dev/null +++ b/arch/ia64/kvm/vtlb.c @@ -0,0 +1,631 @@ +/* + * vtlb.c: guest virtual tlb handling module. + * Copyright (c) 2004, Intel Corporation. + * Yaozu Dong (Eddie Dong) <[EMAIL PROTECTED]> + * Xuefei Xu (Anthony Xu) <[EMAIL PROTECTED]> + * + * Copyright (c) 2007, Intel Corporation. + * Xuefei Xu (Anthony Xu) <[EMAIL PROTECTED]> + * Xiantao Zhang <[EMAIL PROTECTED]> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + * + */ + +#include "vcpu.h" + +#include +/* + * Check to see if the address rid:va is translated by the TLB + */ + +static int __is_tr_translated(thash_data_t *trp, u64 rid, u64 va) +{ + return ((trp->p) && (trp->rid == rid) + && ((va-trp->vadr) < PSIZE(trp->ps))); +} + +/* + * Only for GUEST TR format. 
+ */ +static int __is_tr_overlap(thash_data_t *trp, u64 rid, u64 sva, u64 eva) +{ + u64 sa1, ea1; + + if (!trp->p || trp->rid != rid) + return 0; + + sa1 = trp->vadr; + ea1 = sa1 + PSIZE(trp->ps) - 1; + eva -= 1; + if ((sva > ea1) || (sa1 > eva)) + return 0; + else + return 1; + +} + +void machine_tlb_purge(u64 va, u64 ps) +{ + ia64_ptcl(va, ps << 2); +} + +void local_flush_tlb_all(void) +{ + int i, j; + unsigned long flags, count0, count1; + unsigned long stride0, stride1, addr; + + addr= current_vcpu->arch.ptce_base; + count0 = current_vcpu->arch.ptce_count[0]; + count1 = current_vcpu->arch.ptce_count[1]; + stride0 = current_vcpu->arch.ptce_stride[0]; + stride1 = current_vcpu->arch.ptce_stride[1]; + + local_irq_save(flags); + for (i = 0; i < count0; ++i) { + for (j = 0; j < count1; ++j) { + ia64_ptce(addr); + addr += stride1; + } + addr += stride0; + } + local_irq_restore(flags); + ia64_srlz_i(); /* srlz.i implies srlz.d */ +} + +int vhpt_enabled(VCPU *vcpu, u64 vadr, vhpt_ref_t ref) +{ + ia64_rrvrr; + ia64_pta vpta; + ia64_psr vpsr; + + vpsr.val = VCPU(vcpu, vpsr); + vrr.val = vcpu_get_rr(vcpu, vadr); + vpta.val = vcpu_get_pta(vcpu); + + if (vrr.ve & vpta.ve) { + switch (ref) { + case DATA_REF: + case NA_REF: + return vpsr.dt; + case INST_REF: + return vpsr.dt && vpsr.it && vpsr.ic; + case RSE_REF: + return vpsr.dt && vpsr.rt; + + } + } + return 0; +} + +thash_data_t *vsa_thash(ia64_pta vpta, u64 va, u64 vrr, u64 *tag) +{ + u64 index, pfn, rid, pfn_bits; + + pfn_bits = vpta.size - 5 - 8; + pfn = REGION_OFFSET(va) >> _REGION_PAGE_SIZE(vrr); + rid = _REGION_ID(vrr); + index = ((rid & 0xff) << pfn_bits)|(pfn & ((1UL << pfn_bits) - 1)); + *tag = ((rid >> 8) & 0x) | ((pfn >> pfn_bits) << 16); + + return (thash_data_t *)((vpta.base << PTA_BASE_SHIFT) + (index << 5)); +} + +thash_data_t *__vtr_lookup(VCPU *vcpu, u64 va, int type) +{ + + thash_data_t *trp; + int i; + u64 rid; + + rid = vcpu_get_rr(vcpu, va); + rid = rid & RR_RID_MASK;; + if (type == D_TLB) { + if 
(vcpu_quick_region_check(vcpu->arch.dtr_regions, va)) { + for (trp = (thash_data_t *)&vcpu->arch.dtrs, i = 0; + i < NDTRS; i++, trp++) { + if (__is_tr_translated(trp, rid, va)) + return trp; + } + } + } else { +
[kvm-devel] [03/15][PATCH] kvm/ia64: Add header files for kvm/ia64. V8
>From 03259a60f3c8104cd61f523f9ddeccce0e635782 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Fri, 28 Mar 2008 09:48:10 +0800 Subject: [PATCH] KVM: IA64: Add header files for kvm/ia64. Three header files are added: asm-ia64/kvm.h asm-ia64/kvm_host.h asm-ia64/kvm_para.h Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- include/asm-ia64/kvm.h | 205 + include/asm-ia64/kvm_host.h | 530 +++ include/asm-ia64/kvm_para.h | 29 +++ 3 files changed, 764 insertions(+), 0 deletions(-) create mode 100644 include/asm-ia64/kvm.h create mode 100644 include/asm-ia64/kvm_host.h create mode 100644 include/asm-ia64/kvm_para.h diff --git a/include/asm-ia64/kvm.h b/include/asm-ia64/kvm.h new file mode 100644 index 000..8c70dd6 --- /dev/null +++ b/include/asm-ia64/kvm.h @@ -0,0 +1,205 @@ +#ifndef __ASM_KVM_IA64_H +#define __ASM_KVM_IA64_H + +/* + * asm-ia64/kvm.h: kvm structure definitions for ia64 + * + * Copyright (C) 2007 Xiantao Zhang <[EMAIL PROTECTED]> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + * + */ + +#include +#include + +#include + +/* Architectural interrupt line count. 
*/ +#define KVM_NR_INTERRUPTS 256 + +#define KVM_IOAPIC_NUM_PINS 24 + +struct kvm_ioapic_state { + __u64 base_address; + __u32 ioregsel; + __u32 id; + __u32 irr; + __u32 pad; + union { + __u64 bits; + struct { + __u8 vector; + __u8 delivery_mode:3; + __u8 dest_mode:1; + __u8 delivery_status:1; + __u8 polarity:1; + __u8 remote_irr:1; + __u8 trig_mode:1; + __u8 mask:1; + __u8 reserve:7; + __u8 reserved[4]; + __u8 dest_id; + } fields; + } redirtbl[KVM_IOAPIC_NUM_PINS]; +}; + +#define KVM_IRQCHIP_PIC_MASTER 0 +#define KVM_IRQCHIP_PIC_SLAVE1 +#define KVM_IRQCHIP_IOAPIC 2 + +#define KVM_CONTEXT_SIZE 8*1024 + +typedef union context { + /* 8K size */ + chardummy[KVM_CONTEXT_SIZE]; + struct { + unsigned long psr; + unsigned long pr; + unsigned long caller_unat; + unsigned long pad; + unsigned long gr[32]; + unsigned long ar[128]; + unsigned long br[8]; + unsigned long cr[128]; + unsigned long rr[8]; + unsigned long ibr[8]; + unsigned long dbr[8]; + unsigned long pkr[8]; + struct ia64_fpreg fr[128]; + }; +} context_t; + +typedef struct thash_data { + union { + struct { + unsigned long p: 1; /* 0 */ + unsigned long rv1 : 1; /* 1 */ + unsigned long ma : 3; /* 2-4 */ + unsigned long a: 1; /* 5 */ + unsigned long d: 1; /* 6 */ + unsigned long pl : 2; /* 7-8 */ + unsigned long ar : 3; /* 9-11 */ + unsigned long ppn : 38; /* 12-49 */ + unsigned long rv2 : 2; /* 50-51 */ + unsigned long ed : 1; /* 52 */ + unsigned long ig1 : 11; /* 53-63 */ + }; + struct { + unsigned long __rv1 : 53; /* 0-52 */ + unsigned long contiguous : 1; /*53 */ + unsigned long tc : 1; /* 54 TR or TC */ + unsigned long cl : 1; + /* 55 I side or D side cache line */ + unsigned long len : 4; /* 56-59 */ + unsigned long io : 1; /* 60 entry is for io or not */ + unsigned long nomap : 1; + /* 61 entry cann't be inserted into machine TLB.*/ + unsigned long checked : 1; + /* 62 for VTLB/VHPT sanity check */ + unsigned long invalid : 1; + /*
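[Editor's note] struct kvm_ioapic_state above is part of the kernel/userspace ABI, so its packing must be identical on both sides. A ctypes sketch can sanity-check the layout; the Python class names are invented for illustration, and KVM_IOAPIC_NUM_PINS uses this patch's value of 24 (the follow-up fix raises it to 48 for ia64 guests, which changes the total struct size — exactly why both sides must agree):

```python
import ctypes

KVM_IOAPIC_NUM_PINS = 24  # this patch's value; later raised to 48 for ia64

class RedirFields(ctypes.Structure):
    # Mirrors the bitfield layout of the "fields" struct above.
    _fields_ = [
        ("vector",          ctypes.c_uint8),
        ("delivery_mode",   ctypes.c_uint8, 3),
        ("dest_mode",       ctypes.c_uint8, 1),
        ("delivery_status", ctypes.c_uint8, 1),
        ("polarity",        ctypes.c_uint8, 1),
        ("remote_irr",      ctypes.c_uint8, 1),
        ("trig_mode",       ctypes.c_uint8, 1),
        ("mask",            ctypes.c_uint8, 1),
        ("reserve",         ctypes.c_uint8, 7),
        ("reserved",        ctypes.c_uint8 * 4),
        ("dest_id",         ctypes.c_uint8),
    ]

class RedirEntry(ctypes.Union):
    # union { __u64 bits; struct fields; }
    _fields_ = [("bits", ctypes.c_uint64), ("fields", RedirFields)]

class KvmIoapicState(ctypes.Structure):
    _fields_ = [
        ("base_address", ctypes.c_uint64),
        ("ioregsel",     ctypes.c_uint32),
        ("id",           ctypes.c_uint32),
        ("irr",          ctypes.c_uint32),
        ("pad",          ctypes.c_uint32),
        ("redirtbl",     RedirEntry * KVM_IOAPIC_NUM_PINS),
    ]

# Each redirection entry must stay exactly 8 bytes (the __u64 "bits" view),
# so the whole state is a 24-byte header plus 8 bytes per pin.
assert ctypes.sizeof(RedirEntry) == 8
assert ctypes.sizeof(KvmIoapicState) == 24 + KVM_IOAPIC_NUM_PINS * 8
```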
[kvm-devel] [17/17][PATCH] kvm/ia64: How to boot up guests on kvm/ia64 -V8
>From b04624ce5ff919d776bf1d64b157d67410c6bc27 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 13:57:33 +0800 Subject: [PATCH] KVM:IA64 : How to boot up guests on kvm/ia64 Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- Documentation/ia64/kvm-howto.txt | 74 ++ 1 files changed, 74 insertions(+), 0 deletions(-) create mode 100644 Documentation/ia64/kvm-howto.txt diff --git a/Documentation/ia64/kvm-howto.txt b/Documentation/ia64/kvm-howto.txt new file mode 100644 index 000..5a8049c --- /dev/null +++ b/Documentation/ia64/kvm-howto.txt @@ -0,0 +1,74 @@ + Guide: How to boot up guests on kvm/ia64 + +1. Get the kvm source from git.kernel.org. + Userspace source: + git clone git://git.kernel.org/pub/scm/virt/kvm/kvm-userspace.git + Kernel Source: + git clone git://git.kernel.org/pub/scm/linux/kernel/git/xiantao/kvm-ia64.git + +2. Compile the source code. + 2.1 Compile userspace code: + (1)cd ./kvm-userspace + (2)./configure + (3)cd kernel + (4)make sync LINUX=$kernel_dir (kernel_dir is the directory of the kernel source.) + (5)cd .. + (6)make qemu + (7)cd qemu; make install + + 2.2 Compile kernel source code: + (1) cd ./$kernel_dir + (2) make menuconfig + (3) Enter the virtualization option, and choose kvm. + (4) make + (5) Once (4) is done, run make modules_install + (6) Make an initrd, and reboot the host machine with the new kernel. + (7) Once (6) is done, cd $kernel_dir/arch/ia64/kvm + (8) insmod kvm.ko; insmod kvm-intel.ko + +Note: For step 2, please make sure that the host page size == TARGET_PAGE_SIZE of qemu; otherwise it may fail. + +3. Get the guest firmware named Flash.fd, and put it in the right place: + (1) If you have the guest firmware (binary) released by Intel Corp for Xen, you can use it directly. + (2) If you want to build the guest firmware from source code, please download the source via + hg clone http://xenbits.xensource.com/ext/efi-vfirmware.hg + Follow the guide in the source to build the open Guest Firmware. 
+ (3) Rename it to Flash.fd, and copy it to /usr/local/share/qemu +Note: For step 3, kvm uses the guest firmware which complies with the one Xen uses. + +4. Boot up Linux or Windows guests: + 4.1 Create or install an image for guest boot. If you have xen experience, it should be easy. + + 4.2 Boot up guests using the following command. + /usr/local/bin/qemu-system-ia64 -smp xx -m 512 -hda $your_image + (xx is the number of virtual processors for the guest; the current maximum value is 4) + +5. Known possible issues on some platforms with old Firmware +If you meet strange host crashes, you can try to solve them through either of the following methods. +(1): Upgrade your Firmware to the latest one. + +(2): Apply the patch below to the kernel source. +diff --git a/arch/ia64/kernel/pal.S b/arch/ia64/kernel/pal.S +index 0b53344..f02b0f7 100644 +--- a/arch/ia64/kernel/pal.S ++++ b/arch/ia64/kernel/pal.S +@@ -84,7 +84,8 @@ GLOBAL_ENTRY(ia64_pal_call_static) + mov ar.pfs = loc1 + mov rp = loc0 + ;; +- srlz.d // serialize restoration of psr.l ++ srlz.i // serialize restoration of psr.l ++ ;; + br.ret.sptk.many b0 + END(ia64_pal_call_static) + +6. Bug report: + If you find any issues when using kvm/ia64, please post the bug info to the kvm-ia64-devel mailing list. + https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel/ + +Thanks for your interest! Let's work together, and make kvm/ia64 stronger and stronger! + + + Xiantao Zhang <[EMAIL PROTECTED]> + 2008.3.10 -- 1.5.2 ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
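[Editor's note] The build recipe in section 2 of the howto above can be condensed into a reviewable command list. This Python sketch only prints the commands (nothing is executed); KERNEL_DIR and all paths are placeholders for your own checkouts:

```python
import os

# Placeholder: set KERNEL_DIR to your kvm-ia64 kernel source checkout.
KERNEL_DIR = os.environ.get("KERNEL_DIR", "/path/to/kvm-ia64")

def build_commands(kernel_dir):
    """Return the howto's build steps as a flat list of shell commands."""
    userspace = [                       # section 2.1 of the howto
        "cd kvm-userspace",
        "./configure",
        "cd kernel",
        "make sync LINUX=%s" % kernel_dir,
        "cd ..",
        "make qemu",
        "cd qemu && make install",
    ]
    kernel = [                          # section 2.2 of the howto
        "make -C %s menuconfig" % kernel_dir,   # enable KVM under Virtualization
        "make -C %s" % kernel_dir,
        "make -C %s modules_install" % kernel_dir,
        "insmod %s/arch/ia64/kvm/kvm.ko" % kernel_dir,
        "insmod %s/arch/ia64/kvm/kvm-intel.ko" % kernel_dir,
    ]
    return userspace + kernel

for cmd in build_commands(KERNEL_DIR):
    print(cmd)
```

This is a dry-run sketch of the documented sequence, not a supported build script; the initrd/reboot step between modules_install and insmod is intentionally left manual, as in the howto.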
[kvm-devel] [15/17][PATCH] kvm/ia64: Add kvm sal/pal virtualization support. V8
>From e9f15f3838626eacface8a863394e6b8825182be Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 13:42:18 +0800 Subject: [PATCH] KVM:IA64 : Add kvm sal/pal virtulization support. Some sal/pal calls would be traped to kvm for virtulization from guest firmware. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/kvm_fw.c | 500 1 files changed, 500 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/kvm_fw.c diff --git a/arch/ia64/kvm/kvm_fw.c b/arch/ia64/kvm/kvm_fw.c new file mode 100644 index 000..077d6e7 --- /dev/null +++ b/arch/ia64/kvm/kvm_fw.c @@ -0,0 +1,500 @@ +/* + * PAL/SAL call delegation + * + * Copyright (c) 2004 Li Susie <[EMAIL PROTECTED]> + * Copyright (c) 2005 Yu Ke <[EMAIL PROTECTED]> + * Copyright (c) 2007 Xiantao Zhang <[EMAIL PROTECTED]> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + */ + +#include +#include + +#include "vti.h" +#include "misc.h" + +#include +#include +#include + +/* + * Handy macros to make sure that the PAL return values start out + * as something meaningful. 
+ */ +#define INIT_PAL_STATUS_UNIMPLEMENTED(x) \ + { \ + x.status = PAL_STATUS_UNIMPLEMENTED;\ + x.v0 = 0; \ + x.v1 = 0; \ + x.v2 = 0; \ + } + +#define INIT_PAL_STATUS_SUCCESS(x) \ + { \ + x.status = PAL_STATUS_SUCCESS; \ + x.v0 = 0; \ + x.v1 = 0; \ + x.v2 = 0; \ +} + +static void kvm_get_pal_call_data(struct kvm_vcpu *vcpu, + u64 *gr28, u64 *gr29, u64 *gr30, u64 *gr31) { + struct exit_ctl_data *p; + + if (vcpu) { + p = &vcpu->arch.exit_data; + if (p->exit_reason == EXIT_REASON_PAL_CALL) { + *gr28 = p->u.pal_data.gr28; + *gr29 = p->u.pal_data.gr29; + *gr30 = p->u.pal_data.gr30; + *gr31 = p->u.pal_data.gr31; + return ; + } + } + printk(KERN_DEBUG"Error occurs in kvm_get_pal_call_data!!\n"); +} + +static void set_pal_result(struct kvm_vcpu *vcpu, + struct ia64_pal_retval result) { + + struct exit_ctl_data *p; + + p = kvm_get_exit_data(vcpu); + if (p && p->exit_reason == EXIT_REASON_PAL_CALL) { + p->u.pal_data.ret = result; + return ; + } + INIT_PAL_STATUS_UNIMPLEMENTED(p->u.pal_data.ret); +} + +static void set_sal_result(struct kvm_vcpu *vcpu, + struct sal_ret_values result) { + struct exit_ctl_data *p; + + p = kvm_get_exit_data(vcpu); + if (p && p->exit_reason == EXIT_REASON_SAL_CALL) { + p->u.sal_data.ret = result; + return ; + } + printk(KERN_WARNING"Error occurs!!!\n"); +} + +struct cache_flush_args { + u64 cache_type; + u64 operation; + u64 progress; + long status; +}; + +cpumask_t cpu_cache_coherent_map; + +static void remote_pal_cache_flush(void *data) +{ + struct cache_flush_args *args = data; + long status; + u64 progress = args->progress; + + status = ia64_pal_cache_flush(args->cache_type, args->operation, + &progress, NULL); + if (status != 0) + args->status = status; +} + +static struct ia64_pal_retval pal_cache_flush(struct kvm_vcpu *vcpu) +{ + u64 gr28, gr29, gr30, gr31; + struct ia64_pal_retval result = {0, 0, 0, 0}; + struct cache_flush_args args = {0, 0, 0, 0}; + long psr; + + gr28 = gr29 = gr30 = gr31 = 0; + kvm_get_pal_call_data(vcpu, &gr28, &gr29, 
&gr30, &gr31); + + if (gr31 != 0) + printk(KERN_ERR"vcpu:%p called cache_flush error!\n", vcpu); + + /* Always call Host Pal in int=1 */ + gr30 &= ~PAL_CACHE_FLUSH_CHK_INTRS; + args.cache_type = gr29; + args.operation = gr30; + smp_call_function(remote_pal_ca
[kvm-devel] [16/17] [PATCH] kvm:ia64 Enable kvm build for ia64 - V8
>From 9b38270a4c01d8cfe85cd022e22a6f5c0efe45e7 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Fri, 28 Mar 2008 14:58:47 +0800 Subject: [PATCH] KVM:IA64 Enable kvm build for ia64 Update the related Makefile and KConfig for kvm build Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/Kconfig |3 ++ arch/ia64/Makefile |1 + arch/ia64/kvm/Kconfig | 46 arch/ia64/kvm/Makefile | 61 4 files changed, 111 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/Kconfig create mode 100644 arch/ia64/kvm/Makefile diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig index 8fa3faf..a7bb62e 100644 --- a/arch/ia64/Kconfig +++ b/arch/ia64/Kconfig @@ -19,6 +19,7 @@ config IA64 select HAVE_OPROFILE select HAVE_KPROBES select HAVE_KRETPROBES + select HAVE_KVM default y help The Itanium Processor Family is Intel's 64-bit successor to @@ -589,6 +590,8 @@ config MSPEC source "fs/Kconfig" +source "arch/ia64/kvm/Kconfig" + source "lib/Kconfig" # diff --git a/arch/ia64/Makefile b/arch/ia64/Makefile index f1645c4..ec4cca4 100644 --- a/arch/ia64/Makefile +++ b/arch/ia64/Makefile @@ -57,6 +57,7 @@ core-$(CONFIG_IA64_GENERIC) += arch/ia64/dig/ core-$(CONFIG_IA64_HP_ZX1) += arch/ia64/dig/ core-$(CONFIG_IA64_HP_ZX1_SWIOTLB) += arch/ia64/dig/ core-$(CONFIG_IA64_SGI_SN2)+= arch/ia64/sn/ +core-$(CONFIG_KVM) += arch/ia64/kvm/ drivers-$(CONFIG_PCI) += arch/ia64/pci/ drivers-$(CONFIG_IA64_HP_SIM) += arch/ia64/hp/sim/ diff --git a/arch/ia64/kvm/Kconfig b/arch/ia64/kvm/Kconfig new file mode 100644 index 000..d2e54b9 --- /dev/null +++ b/arch/ia64/kvm/Kconfig @@ -0,0 +1,46 @@ +# +# KVM configuration +# +config HAVE_KVM + bool + +menuconfig VIRTUALIZATION + bool "Virtualization" + depends on HAVE_KVM || IA64 + default y + ---help--- + Say Y here to get to see options for using your Linux host to run other + operating systems inside virtual machines (guests). + This option alone does not add any kernel code. 
+ + If you say N, all options in this submenu will be skipped and disabled. + +if VIRTUALIZATION + +config KVM + tristate "Kernel-based Virtual Machine (KVM) support" + depends on HAVE_KVM && EXPERIMENTAL + select PREEMPT_NOTIFIERS + select ANON_INODES + ---help--- + Support hosting fully virtualized guest machines using hardware + virtualization extensions. You will need a fairly recent + processor equipped with virtualization extensions. You will also + need to select one or more of the processor modules below. + + This module provides access to the hardware capabilities through + a character device node named /dev/kvm. + + To compile this as a module, choose M here: the module + will be called kvm. + + If unsure, say N. + +config KVM_INTEL + tristate "KVM for Intel Itanium 2 processors support" + depends on KVM && m + ---help--- + Provides support for KVM on Itanium 2 processors equipped with the VT + extensions. + +endif # VIRTUALIZATION diff --git a/arch/ia64/kvm/Makefile b/arch/ia64/kvm/Makefile new file mode 100644 index 000..cde7d8e --- /dev/null +++ b/arch/ia64/kvm/Makefile @@ -0,0 +1,61 @@ +#This Make file is to generate asm-offsets.h and build source. 
+# + +#Generate asm-offsets.h for vmm module build +offsets-file := asm-offsets.h + +always := $(offsets-file) +targets := $(offsets-file) +targets += arch/ia64/kvm/asm-offsets.s +clean-files := $(addprefix $(objtree)/,$(targets) $(obj)/memcpy.S $(obj)/memset.S) + +# Default sed regexp - multiline due to syntax constraints +define sed-y + "/^->/{s:^->\([^ ]*\) [\$$#]*\([^ ]*\) \(.*\):#define \1 \2 /* \3 */:; s:->::; p;}" +endef + +quiet_cmd_offsets = GEN $@ +define cmd_offsets + (set -e; \ +echo "#ifndef __ASM_KVM_OFFSETS_H__"; \ +echo "#define __ASM_KVM_OFFSETS_H__"; \ +echo "/*"; \ +echo " * DO NOT MODIFY."; \ +echo " *"; \ +echo " * This file was generated by Makefile"; \ +echo " *"; \ +echo " */"; \ +echo ""; \ +sed -ne $(sed-y) $<; \ +echo ""; \ +echo "#endif" ) > $@ +endef +# We use internal rules to avoid the "is up to date" message from make +arch/ia64/kvm/asm-offsets.s: arch/ia64/kvm/asm-offsets.c + $(call if_changed_dep,cc_s_c) + +$(obj)/$(offsets-file): arch/ia64/kvm/asm-offsets.s + $(call cmd,offsets) + +# +# Makefile for Kernel-based Virtual Machine module +# + +EXTRA_CFLAGS += -Ivirt/kvm -Iarch/ia64/kvm/ + +$(addprefix $(objtree)/,$(obj)/memcpy.S $(obj)/memset.S): + $(shell ln -snf ../lib/memcpy.S $(src)/memcpy.S) + $(shell ln -snf ../lib/memset.S $(src)/memset.
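[Editor's note] The sed-y rule in the Makefile above post-processes the compiler's annotated assembly: each "->SYM (val) expr" marker emitted by asm-offsets.c becomes a "#define" line in asm-offsets.h. A simplified Python re equivalent of that substitution (the sample input line is made up, and the extra "->" cleanup pass of the real sed rule is omitted):

```python
import re

# Mirrors the core of sed-y: "->SYM [$#]val comment" -> "#define SYM val /* comment */"
SED_Y = re.compile(r"^->(\S+) [$#]*(\S+) (.*)$")

def offsets_to_defines(asm_text):
    """Convert '->' marker lines from the generated .s file into #define lines."""
    out = []
    for line in asm_text.splitlines():
        m = SED_Y.match(line)
        if m:
            sym, val, comment = m.groups()
            out.append("#define %s %s /* %s */" % (sym, val, comment))
    return "\n".join(out)

sample = "->VMM_TASK_SIZE (21952) sizeof(struct kvm_vcpu)"
print(offsets_to_defines(sample))
# prints: #define VMM_TASK_SIZE (21952) /* sizeof(struct kvm_vcpu) */
```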
[kvm-devel] [13/17][PATCH] kvm/ia64: Generate offset values for assembly code use. V8
>From b0f8c3bf3b020077c14bebd9d052cec455ccedaf Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 13:50:13 +0800 Subject: [PATCH] KVM:IA64 : Generate offset values for assembly code use. asm-offsets.c will generate offset values used for assembly code for some fields of special structures. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/asm-offsets.c | 251 +++ 1 files changed, 251 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/asm-offsets.c diff --git a/arch/ia64/kvm/asm-offsets.c b/arch/ia64/kvm/asm-offsets.c new file mode 100644 index 000..fc2ac82 --- /dev/null +++ b/arch/ia64/kvm/asm-offsets.c @@ -0,0 +1,251 @@ +/* + * asm-offsets.c Generate definitions needed by assembly language modules. + * This code generates raw asm output which is post-processed + * to extract and format the required data. + * + * Anthony Xu <[EMAIL PROTECTED]> + * Xiantao Zhang <[EMAIL PROTECTED]> + * Copyright (c) 2007 Intel Corporation KVM support. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. 
+ * + */ + +#include +#include + +#include "vcpu.h" + +#define task_struct kvm_vcpu + +#define DEFINE(sym, val) \ + asm volatile("\n->" #sym " (%0) " #val : : "i" (val)) + +#define BLANK() asm volatile("\n->" : :) + +#define OFFSET(_sym, _str, _mem) \ +DEFINE(_sym, offsetof(_str, _mem)); + +void foo(void) +{ + DEFINE(VMM_TASK_SIZE, sizeof(struct kvm_vcpu)); + DEFINE(VMM_PT_REGS_SIZE, sizeof(struct kvm_pt_regs)); + + BLANK(); + + DEFINE(VMM_VCPU_META_RR0_OFFSET, + offsetof(struct kvm_vcpu, arch.metaphysical_rr0)); + DEFINE(VMM_VCPU_META_SAVED_RR0_OFFSET, + offsetof(struct kvm_vcpu, + arch.metaphysical_saved_rr0)); + DEFINE(VMM_VCPU_VRR0_OFFSET, + offsetof(struct kvm_vcpu, arch.vrr[0])); + DEFINE(VMM_VPD_IRR0_OFFSET, + offsetof(struct vpd, irr[0])); + DEFINE(VMM_VCPU_ITC_CHECK_OFFSET, + offsetof(struct kvm_vcpu, arch.itc_check)); + DEFINE(VMM_VCPU_IRQ_CHECK_OFFSET, + offsetof(struct kvm_vcpu, arch.irq_check)); + DEFINE(VMM_VPD_VHPI_OFFSET, + offsetof(struct vpd, vhpi)); + DEFINE(VMM_VCPU_VSA_BASE_OFFSET, + offsetof(struct kvm_vcpu, arch.vsa_base)); + DEFINE(VMM_VCPU_VPD_OFFSET, + offsetof(struct kvm_vcpu, arch.vpd)); + DEFINE(VMM_VCPU_IRQ_CHECK, + offsetof(struct kvm_vcpu, arch.irq_check)); + DEFINE(VMM_VCPU_TIMER_PENDING, + offsetof(struct kvm_vcpu, arch.timer_pending)); + DEFINE(VMM_VCPU_META_SAVED_RR0_OFFSET, + offsetof(struct kvm_vcpu, arch.metaphysical_saved_rr0)); + DEFINE(VMM_VCPU_MODE_FLAGS_OFFSET, + offsetof(struct kvm_vcpu, arch.mode_flags)); + DEFINE(VMM_VCPU_ITC_OFS_OFFSET, + offsetof(struct kvm_vcpu, arch.itc_offset)); + DEFINE(VMM_VCPU_LAST_ITC_OFFSET, + offsetof(struct kvm_vcpu, arch.last_itc)); + DEFINE(VMM_VCPU_SAVED_GP_OFFSET, + offsetof(struct kvm_vcpu, arch.saved_gp)); + + BLANK(); + + DEFINE(VMM_PT_REGS_B6_OFFSET, + offsetof(struct kvm_pt_regs, b6)); + DEFINE(VMM_PT_REGS_B7_OFFSET, + offsetof(struct kvm_pt_regs, b7)); + DEFINE(VMM_PT_REGS_AR_CSD_OFFSET, + offsetof(struct kvm_pt_regs, ar_csd)); + DEFINE(VMM_PT_REGS_AR_SSD_OFFSET, + offsetof(struct 
kvm_pt_regs, ar_ssd)); + DEFINE(VMM_PT_REGS_R8_OFFSET, + offsetof(struct kvm_pt_regs, r8)); + DEFINE(VMM_PT_REGS_R9_OFFSET, + offsetof(struct kvm_pt_regs, r9)); + DEFINE(VMM_PT_REGS_R10_OFFSET, + offsetof(struct kvm_pt_regs, r10)); + DEFINE(VMM_PT_REGS_R11_OFFSET, + offsetof(struct kvm_pt_regs, r11)); + DEFINE(VMM_PT_REGS_CR_IPSR_OFFSET, + offsetof(struct kvm_pt_regs,
[kvm-devel] [12/17][PATCH] kvm/ia64: add optimization for some virtualization faults - V8
>From a2bf407dd4dbcec75a076b9ed9a6d22ab98c54b7 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 13:49:38 +0800 Subject: [PATCH] KVM:IA64: add optimization for some virtulization faults optvfault.S adds optimization for some performance-critical virtualization faults. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/optvfault.S | 918 + 1 files changed, 918 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/optvfault.S diff --git a/arch/ia64/kvm/optvfault.S b/arch/ia64/kvm/optvfault.S new file mode 100644 index 000..5de210e --- /dev/null +++ b/arch/ia64/kvm/optvfault.S @@ -0,0 +1,918 @@ +/* + * arch/ia64/vmx/optvfault.S + * optimize virtualization fault handler + * + * Copyright (C) 2006 Intel Co + * Xuefei Xu (Anthony Xu) <[EMAIL PROTECTED]> + */ + +#include +#include + +#include "vti.h" +#include "asm-offsets.h" + +#define ACCE_MOV_FROM_AR +#define ACCE_MOV_FROM_RR +#define ACCE_MOV_TO_RR +#define ACCE_RSM +#define ACCE_SSM +#define ACCE_MOV_TO_PSR +#define ACCE_THASH + +//mov r1=ar3 +GLOBAL_ENTRY(kvm_asm_mov_from_ar) +#ifndef ACCE_MOV_FROM_AR +br.many kvm_virtualization_fault_back +#endif +add r18=VMM_VCPU_ITC_OFS_OFFSET, r21 +add r16=VMM_VCPU_LAST_ITC_OFFSET,r21 +extr.u r17=r25,6,7 +;; +ld8 r18=[r18] +mov r19=ar.itc +mov r24=b0 +;; +add r19=r19,r18 +addl [EMAIL PROTECTED](asm_mov_to_reg),gp +;; +st8 [r16] = r19 +adds r30=kvm_resume_to_guest-asm_mov_to_reg,r20 +shladd r17=r17,4,r20 +;; +mov b0=r17 +br.sptk.few b0 +;; +END(kvm_asm_mov_from_ar) + + +// mov r1=rr[r3] +GLOBAL_ENTRY(kvm_asm_mov_from_rr) +#ifndef ACCE_MOV_FROM_RR +br.many kvm_virtualization_fault_back +#endif +extr.u r16=r25,20,7 +extr.u r17=r25,6,7 +addl [EMAIL PROTECTED](asm_mov_from_reg),gp +;; +adds r30=kvm_asm_mov_from_rr_back_1-asm_mov_from_reg,r20 +shladd r16=r16,4,r20 +mov r24=b0 +;; +add r27=VMM_VCPU_VRR0_OFFSET,r21 +mov b0=r16 +br.many b0 +;; +kvm_asm_mov_from_rr_back_1: +adds 
r30=kvm_resume_to_guest-asm_mov_from_reg,r20 +adds r22=asm_mov_to_reg-asm_mov_from_reg,r20 +shr.u r26=r19,61 +;; +shladd r17=r17,4,r22 +shladd r27=r26,3,r27 +;; +ld8 r19=[r27] +mov b0=r17 +br.many b0 +END(kvm_asm_mov_from_rr) + + +// mov rr[r3]=r2 +GLOBAL_ENTRY(kvm_asm_mov_to_rr) +#ifndef ACCE_MOV_TO_RR +br.many kvm_virtualization_fault_back +#endif +extr.u r16=r25,20,7 +extr.u r17=r25,13,7 +addl [EMAIL PROTECTED](asm_mov_from_reg),gp +;; +adds r30=kvm_asm_mov_to_rr_back_1-asm_mov_from_reg,r20 +shladd r16=r16,4,r20 +mov r22=b0 +;; +add r27=VMM_VCPU_VRR0_OFFSET,r21 +mov b0=r16 +br.many b0 +;; +kvm_asm_mov_to_rr_back_1: +adds r30=kvm_asm_mov_to_rr_back_2-asm_mov_from_reg,r20 +shr.u r23=r19,61 +shladd r17=r17,4,r20 +;; +//if rr6, go back +cmp.eq p6,p0=6,r23 +mov b0=r22 +(p6) br.cond.dpnt.many kvm_virtualization_fault_back +;; +mov r28=r19 +mov b0=r17 +br.many b0 +kvm_asm_mov_to_rr_back_2: +adds r30=kvm_resume_to_guest-asm_mov_from_reg,r20 +shladd r27=r23,3,r27 +;; // vrr.rid<<4 |0xe +st8 [r27]=r19 +mov b0=r30 +;; +extr.u r16=r19,8,26 +extr.u r18 =r19,2,6 +mov r17 =0xe +;; +shladd r16 = r16, 4, r17 +extr.u r19 =r19,0,8 +;; +shl r16 = r16,8 +;; +add r19 = r19, r16 +;; //set ve 1 +dep r19=-1,r19,0,1 +cmp.lt p6,p0=14,r18 +;; +(p6) mov r18=14 +;; +(p6) dep r19=r18,r19,2,6 +;; +cmp.eq p6,p0=0,r23 +;; +cmp.eq.or p6,p0=4,r23 +;; +adds r16=VMM_VCPU_MODE_FLAGS_OFFSET,r21 +(p6) adds r17=VMM_VCPU_META_SAVED_RR0_OFFSET,r21 +;; +ld4 r16=[r16] +cmp.eq p7,p0=r0,r0 +(p6) shladd r17=r23,1,r17 +;; +(p6) st8 [r17]=r19 +(p6) tbit.nz p6,p7=r16,0 +;; +(p7) mov rr[r28]=r19 +mov r24=r22 +br.many b0 +END(kvm_asm_mov_to_rr) + + +//rsm +GLOBAL_ENTRY(kvm_asm_rsm) +#ifndef ACCE_RSM +br.many kvm_virtualization_fault_back +#endif +add r16=VMM_VPD_BASE_OFFSET,r21 +extr.u r26=r25,6,21 +extr.u r27=r25,31,2 +;; +ld8 r16=[r16] +extr.u r28=r25,36,1 +dep r26=r27,r26,21,2 +;; +add r17=VPD_VPSR_START_OFFSET,r16 +add r22=VMM_VCPU_MODE_FLAGS_OFFSET,r21 +//r26 is imm24 +dep r26=r28,r26,23,1 +;; +ld8 r18=[r17] 
+movl r28=IA64_PSR_IC+IA64_PSR_I+IA64_PSR_DT+IA64_PSR_SI +ld4 r23=[r22] +sub r27=-1,r26 +mov r24=b0 +;; +mov r20=cr.ipsr +or r28=r27,r28 +and r19=r18,r27 +;; +st8 [r17]=r19 +and r20=r20,r28 +/* Comment it out due to short of fp lazy alorgithm support +adds r27=IA64_VCPU_FP_PSR_OFFSET,r21 +;; +ld8 r27=[r27] +;; +tbit.nz p8,p0= r27,IA64_PSR_DFH_BIT +;; +(p8) dep r20=-1,r20,IA6
[kvm-devel] [09/17] [PATCH] kvm/ia64: Add mmio decoder for kvm/ia64. V8
>From cb572f8887ccfb939457c79fb2d2893ead2a3632 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Mon, 31 Mar 2008 10:08:09 +0800 Subject: [PATCH] KVM:IA64 : Add mmio decoder for kvm/ia64. mmio.c includes mmio decoder routines. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/mmio.c | 340 ++ 1 files changed, 340 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/mmio.c diff --git a/arch/ia64/kvm/mmio.c b/arch/ia64/kvm/mmio.c new file mode 100644 index 000..9ba879f --- /dev/null +++ b/arch/ia64/kvm/mmio.c @@ -0,0 +1,340 @@ +/* + * mmio.c: MMIO emulation components. + * Copyright (c) 2004, Intel Corporation. + * Yaozu Dong (Eddie Dong) ([EMAIL PROTECTED]) + * Kun Tian (Kevin Tian) ([EMAIL PROTECTED]) + * + * Copyright (c) 2007 Intel Corporation KVM support. + * Xuefei Xu (Anthony Xu) ([EMAIL PROTECTED]) + * Xiantao Zhang ([EMAIL PROTECTED]) + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + * + */ + +#include + +#include "vcpu.h" + +static void vlsapic_write_xtp(VCPU *v, uint8_t val) +{ + VLSAPIC_XTP(v) = val; +} + +/* + * LSAPIC OFFSET + */ +#define PIB_LOW_HALF(ofst) !(ofst & (1 << 20)) +#define PIB_OFST_INTA 0x1E +#define PIB_OFST_XTP 0x1E0008 + +/* + * execute write IPI op. 
+ */ +static void vlsapic_write_ipi(VCPU *vcpu, uint64_t addr, uint64_t data) +{ + struct exit_ctl_data *p = ¤t_vcpu->arch.exit_data; + unsigned long psr; + + local_irq_save(psr); + + p->exit_reason = EXIT_REASON_IPI; + p->u.ipi_data.addr.val = addr; + p->u.ipi_data.data.val = data; + vmm_transition(current_vcpu); + + local_irq_restore(psr); + +} + +void lsapic_write(VCPU *v, unsigned long addr, unsigned long length, + unsigned long val) +{ + addr &= (PIB_SIZE - 1); + + switch (addr) { + case PIB_OFST_INTA: + /*panic_domain(NULL, "Undefined write on PIB INTA\n");*/ + panic_vm(v); + break; + case PIB_OFST_XTP: + if (length == 1) { + vlsapic_write_xtp(v, val); + } else { + /*panic_domain(NULL, + "Undefined write on PIB XTP\n");*/ + panic_vm(v); + } + break; + default: + if (PIB_LOW_HALF(addr)) { + /*lower half */ + if (length != 8) + /*panic_domain(NULL, + "Can't LHF write with size %ld!\n", + length);*/ + panic_vm(v); + else + vlsapic_write_ipi(v, addr, val); + } else { /* upper half + printk("IPI-UHF write %lx\n",addr);*/ + panic_vm(v); + } + break; + } +} + +unsigned long lsapic_read(VCPU *v, unsigned long addr, + unsigned long length) +{ + uint64_t result = 0; + + addr &= (PIB_SIZE - 1); + + switch (addr) { + case PIB_OFST_INTA: + if (length == 1) /* 1 byte load */ + ; /* There is no i8259, there is no INTA access*/ + else + /*panic_domain(NULL,"Undefined read on PIB INTA\n"); */ + panic_vm(v); + + break; + case PIB_OFST_XTP: + if (length == 1) { + result = VLSAPIC_XTP(v); + /* printk("read xtp %lx\n", result); */ + } else { + /*panic_domain(NULL, + "Undefined read on PIB XTP\n");*/ + panic_vm(v); + } + break; + default: + panic_vm(v); + break; + } + return result; +} + +static void mmio_access(VCPU *vcpu, u64 src_pa, u64 *dest, + u16 s, int ma, int dir) +{ + unsigned long iot; + struct exit_ctl_data *p = &vcpu->arch.exit_data; + unsigned long psr; + + iot = __gpfn_is_io(src_pa >> PAGE_SHIFT); + + local_ir
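[Editor's note] The lsapic_write/lsapic_read dispatch above keys on a handful of PIB offsets: INTA, XTP, and the low half of the PIB (IPI writes). A small Python model of that address decode; PIB_SIZE = 2 MB is an assumption here (bit 20 splits the halves, so the PIB must span at least 2^21 bytes), the real constant comes from the kvm/ia64 headers:

```python
PIB_OFST_INTA = 0x1E       # from mmio.c above
PIB_OFST_XTP = 0x1E0008    # from mmio.c above
PIB_SIZE = 1 << 21         # ASSUMED 2 MB; the real value is defined elsewhere

def pib_low_half(ofst):
    # PIB_LOW_HALF(ofst) in mmio.c: true when bit 20 of the offset is clear
    return not (ofst & (1 << 20))

def classify(addr):
    """Model of the switch in lsapic_write()/lsapic_read()."""
    addr &= PIB_SIZE - 1
    if addr == PIB_OFST_INTA:
        return "INTA"
    if addr == PIB_OFST_XTP:
        return "XTP"
    return "IPI low half" if pib_low_half(addr) else "undefined upper half"
```

An IPI write, for instance, is any low-half access that is not one of the named registers; the real code additionally checks access length and panics the VM on undefined offsets.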
[kvm-devel] [06/17][PATCH] kvm/ia64: VMM module interfaces.--V8
>From f0a80c40029b18df15ae2da7e4784a8881fe4c06 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 13:44:37 +0800 Subject: [PATCH] KVM:IA64 : VMM module interfaces. vmm.c adds the interfaces with kvm/module, and initialize global data area. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/vmm.c | 66 +++ 1 files changed, 66 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/vmm.c diff --git a/arch/ia64/kvm/vmm.c b/arch/ia64/kvm/vmm.c new file mode 100644 index 000..2275bf4 --- /dev/null +++ b/arch/ia64/kvm/vmm.c @@ -0,0 +1,66 @@ +/* + * vmm.c: vmm module interface with kvm module + * + * Copyright (c) 2007, Intel Corporation. + * + * Xiantao Zhang ([EMAIL PROTECTED]) + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. 
+ */ + + +#include +#include + +#include "vcpu.h" + +MODULE_AUTHOR("Intel"); +MODULE_LICENSE("GPL"); + +extern char kvm_ia64_ivt; +extern fpswa_interface_t *vmm_fpswa_interface; + +struct kvm_vmm_info vmm_info = { + .module = THIS_MODULE, + .vmm_entry = vmm_entry, + .tramp_entry = vmm_trampoline, + .vmm_ivt = (unsigned long)&kvm_ia64_ivt, +}; + +static int __init kvm_vmm_init(void) +{ + + vmm_fpswa_interface = fpswa_interface; + + /*Register vmm data to kvm side*/ + return kvm_init(&vmm_info, 1024, THIS_MODULE); +} + +static void __exit kvm_vmm_exit(void) +{ + kvm_exit(); + return ; +} + +void vmm_spin_lock(spinlock_t *lock) +{ + _vmm_raw_spin_lock(lock); +} + +void vmm_spin_unlock(spinlock_t *lock) +{ + _vmm_raw_spin_unlock(lock); +} +module_init(kvm_vmm_init) +module_exit(kvm_vmm_exit) -- 1.5.2 0006-KVM-IA64-VMM-module-interfaces.patch Description: 0006-KVM-IA64-VMM-module-interfaces.patch - Check out the new SourceForge.net Marketplace. It's the best place to buy or sell services for just about anything Open Source. http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
[kvm-devel] [02/17][PATCH] Implement smp_call_function_mask for ia64 - V8
>From 697d50286088e98da5ac8653c80aaa96c81abf87 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Mon, 31 Mar 2008 09:50:24 +0800 Subject: [PATCH] KVM:IA64: Implement smp_call_function_mask for ia64 This function provides more flexible interface for smp infrastructure. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kernel/smp.c | 84 +-- include/asm-ia64/smp.h |3 ++ 2 files changed, 69 insertions(+), 18 deletions(-) diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c index 4e446aa..5bb241f 100644 --- a/arch/ia64/kernel/smp.c +++ b/arch/ia64/kernel/smp.c @@ -213,6 +213,19 @@ send_IPI_allbutself (int op) * Called with preemption disabled. */ static inline void +send_IPI_mask(cpumask_t mask, int op) +{ + unsigned int cpu; + + for_each_cpu_mask(cpu, mask) { + send_IPI_single(cpu, op); + } +} + +/* + * Called with preemption disabled. + */ +static inline void send_IPI_all (int op) { int i; @@ -401,33 +414,36 @@ smp_call_function_single (int cpuid, void (*func) (void *info), void *info, int } EXPORT_SYMBOL(smp_call_function_single); -/* - * this function sends a 'generic call function' IPI to all other CPUs - * in the system. - */ - -/* - * [SUMMARY] Run a function on all other CPUs. - * The function to run. This must be fast and non-blocking. - * An arbitrary pointer to pass to the function. - * currently unused. - * If true, wait (atomically) until function has completed on other CPUs. - * [RETURNS] 0 on success, else a negative status code. +/** + * smp_call_function_mask(): Run a function on a set of other CPUs. + * The set of cpus to run on. Must not include the current cpu. + * The function to run. This must be fast and non-blocking. + * An arbitrary pointer to pass to the function. + * If true, wait (atomically) until function + * has completed on other CPUs. * - * Does not return until remote CPUs are nearly ready to execute or are or have - * executed. + * Returns 0 on success, else a negative status code. 
+ * + * If @wait is true, then returns once @func has returned; otherwise + * it returns just before the target cpu calls @func. * * You must not call this function with disabled interrupts or from a * hardware interrupt handler or from a bottom half handler. */ -int -smp_call_function (void (*func) (void *info), void *info, int nonatomic, int wait) +int smp_call_function_mask(cpumask_t mask, + void (*func)(void *), void *info, + int wait) { struct call_data_struct data; + cpumask_t allbutself; int cpus; spin_lock(&call_lock); - cpus = num_online_cpus() - 1; + allbutself = cpu_online_map; + cpu_clear(smp_processor_id(), allbutself); + + cpus_and(mask, mask, allbutself); + cpus = cpus_weight(mask); if (!cpus) { spin_unlock(&call_lock); return 0; @@ -445,7 +461,12 @@ smp_call_function (void (*func) (void *info), void *info, int nonatomic, int wai call_data = &data; mb(); /* ensure store to call_data precedes setting of IPI_CALL_FUNC */ - send_IPI_allbutself(IPI_CALL_FUNC); + + /* Send a message to other CPUs */ + if (cpus_equal(mask, allbutself)) + send_IPI_allbutself(IPI_CALL_FUNC); + else + send_IPI_mask(mask, IPI_CALL_FUNC); /* Wait for response */ while (atomic_read(&data.started) != cpus) @@ -458,6 +479,33 @@ smp_call_function (void (*func) (void *info), void *info, int nonatomic, int wai spin_unlock(&call_lock); return 0; + +} +EXPORT_SYMBOL(smp_call_function_mask); + +/* + * this function sends a 'generic call function' IPI to all other CPUs + * in the system. + */ + +/* + * [SUMMARY] Run a function on all other CPUs. + * The function to run. This must be fast and non-blocking. + * An arbitrary pointer to pass to the function. + * currently unused. + * If true, wait (atomically) until function has completed on other CPUs. + * [RETURNS] 0 on success, else a negative status code. + * + * Does not return until remote CPUs are nearly ready to execute or are or have + * executed. 
+ * + * You must not call this function with disabled interrupts or from a + * hardware interrupt handler or from a bottom half handler. + */ +int +smp_call_function (void (*func) (void *info), void *info, int nonatomic, int wait) +{ + return smp_call_function_mask(cpu_online_map, func, info, wait); } EXPORT_SYMBOL(smp_call_function); diff --git a/include/asm-ia64/smp.h b/include/asm-ia64/smp.h index 4fa733d..ec5f355 100644 --- a/include/asm-ia64/smp.h +++ b/include/asm-ia64/smp.h @@ -38,6 +38,9 @@ ia64_get_lid (void) return lid.f.id << 8 | lid.f.eid; } +extern int smp_call_function_
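The dispatch logic in the patch above can be modeled outside the kernel: drop the calling CPU from the requested mask, intersect with the online map, bail out if nothing remains, and deliver one IPI per remaining set bit. A minimal sketch in user-space C, using a 64-bit integer as a stand-in for cpumask_t; all names here are hypothetical, not kernel APIs:

```c
#include <stdint.h>

/* Toy model of the selection logic in smp_call_function_mask():
 * drop the calling CPU, restrict to online CPUs, then deliver one
 * "IPI" (here: a direct call) per remaining set bit. */
typedef uint64_t toy_cpumask_t;

static void bump(void *p)
{
    (*(int *)p)++;
}

static int toy_call_function_mask(toy_cpumask_t mask, toy_cpumask_t online,
                                  unsigned self, void (*func)(void *),
                                  void *info)
{
    toy_cpumask_t allbutself = online & ~(1ULL << self);
    toy_cpumask_t targets = mask & allbutself;   /* like cpus_and() */
    unsigned cpu;

    if (!targets)
        return 0;               /* the cpus == 0 short-circuit */
    for (cpu = 0; cpu < 64; cpu++)
        if (targets & (1ULL << cpu))
            func(info);         /* stands in for send_IPI_single() */
    return 0;
}
```

With mask 0b101101, CPUs 0-3 online, and caller CPU 0, only CPUs 2 and 3 are targeted, mirroring the cpus_and()/cpus_weight() path in the patch.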
[kvm-devel] [Patch][00/17] kvm-ia64 for kernel V8
This version included the review comments from community. Thanks for your feedback! :-) The whole git repo is located at git://git.kernel.org/pub/scm/linux/kernel/git/xiantao/kvm-ia64.git kvm-ia64-mc8. Documentation/ia64/kvm-howto.txt | 71 + arch/ia64/Kconfig|6 arch/ia64/Makefile |1 arch/ia64/kernel/mca.c | 50 arch/ia64/kernel/mca_asm.S |5 arch/ia64/kernel/smp.c | 84 + arch/ia64/kvm/Kconfig| 43 arch/ia64/kvm/Makefile | 61 + arch/ia64/kvm/asm-offsets.c | 251 arch/ia64/kvm/ia64_regs.h| 234 arch/ia64/kvm/kvm_fw.c | 500 + arch/ia64/kvm/kvm_ia64.c | 1789 arch/ia64/kvm/kvm_minstate.h | 273 arch/ia64/kvm/lapic.h| 27 arch/ia64/kvm/misc.h | 93 + arch/ia64/kvm/mmio.c | 349 ++ arch/ia64/kvm/optvfault.S| 918 arch/ia64/kvm/process.c | 979 + arch/ia64/kvm/trampoline.S | 1040 ++ arch/ia64/kvm/vcpu.c | 2145 +++ arch/ia64/kvm/vcpu.h | 749 + arch/ia64/kvm/vmm.c | 66 + arch/ia64/kvm/vmm_ivt.S | 1425 + arch/ia64/kvm/vti.h | 290 + arch/ia64/kvm/vtlb.c | 631 +++ arch/ia64/mm/tlb.c | 170 +++ include/asm-ia64/kregs.h |3 include/asm-ia64/kvm.h | 205 +++ include/asm-ia64/kvm_host.h | 530 + include/asm-ia64/kvm_para.h | 29 include/asm-ia64/tlb.h | 12 include/linux/smp.h |3 32 files changed, 13014 insertions(+), 18 deletions(-) - Check out the new SourceForge.net Marketplace. It's the best place to buy or sell services for just about anything Open Source. http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
[kvm-devel] [01/17][PATCH] Add API for allocating dynamic TR resource. V8
>From 32013d1407ccbacf962031c308f31494b3f24aa3 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Sat, 29 Mar 2008 09:23:37 +0800 Subject: [PATCH] KVM:IA64 Add API for allocating Dynamic TR resource. Dynamic TR resource should be managed in the uniform way. Add two interfaces for kernel: ia64_itr_entry: Allocate a (pair of) TR for caller. ia64_ptr_entry: Purge a (pair of ) TR by caller. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> Signed-off-by: Anthony Xu<[EMAIL PROTECTED]> --- arch/ia64/kernel/mca.c | 49 +++ arch/ia64/kernel/mca_asm.S |5 + arch/ia64/mm/tlb.c | 196 include/asm-ia64/kregs.h |3 + include/asm-ia64/tlb.h | 14 +++ 5 files changed, 267 insertions(+), 0 deletions(-) diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c index 6c18221..ee6924b 100644 --- a/arch/ia64/kernel/mca.c +++ b/arch/ia64/kernel/mca.c @@ -97,6 +97,7 @@ #include #include +#include #include "mca_drv.h" #include "entry.h" @@ -112,6 +113,7 @@ DEFINE_PER_CPU(u64, ia64_mca_data); /* == __per_cpu_mca[smp_processor_id()] */ DEFINE_PER_CPU(u64, ia64_mca_per_cpu_pte); /* PTE to map per-CPU area */ DEFINE_PER_CPU(u64, ia64_mca_pal_pte); /* PTE to map PAL code */ DEFINE_PER_CPU(u64, ia64_mca_pal_base);/* vaddr PAL code granule */ +DEFINE_PER_CPU(u64, ia64_mca_tr_reload); /* Flag for TR reload */ unsigned long __per_cpu_mca[NR_CPUS]; @@ -1182,6 +1184,49 @@ all_in: return; } +/* mca_insert_tr + * + * Switch rid when TR reload and needed! 
+ * iord: 1: itr, 2: itr; + * +*/ +static void mca_insert_tr(u64 iord) +{ + + int i; + u64 old_rr; + struct ia64_tr_entry *p; + unsigned long psr; + int cpu = smp_processor_id(); + + psr = ia64_clear_ic(); + for (i = IA64_TR_ALLOC_BASE; i < IA64_TR_ALLOC_MAX; i++) { + p = &__per_cpu_idtrs[cpu][iord-1][i]; + if (p->pte&0x1) { + old_rr = ia64_get_rr(p->ifa); + if (old_rr != p->rr) { + ia64_set_rr(p->ifa, p->rr); + ia64_srlz_d(); + } + ia64_ptr(iord, p->ifa, p->itir >> 2); + ia64_srlz_i(); + if (iord & 0x1) { + ia64_itr(0x1, i, p->ifa, p->pte, p->itir >> 2); + ia64_srlz_i(); + } + if (iord & 0x2) { + ia64_itr(0x2, i, p->ifa, p->pte, p->itir >> 2); + ia64_srlz_i(); + } + if (old_rr != p->rr) { + ia64_set_rr(p->ifa, old_rr); + ia64_srlz_d(); + } + } + } + ia64_set_psr(psr); +} + /* * ia64_mca_handler * @@ -1271,6 +1316,10 @@ ia64_mca_handler(struct pt_regs *regs, struct switch_stack *sw, monarch_cpu = -1; #endif } + if (__get_cpu_var(ia64_mca_tr_reload)) { + mca_insert_tr(0x1); /*Reload dynamic itrs*/ + mca_insert_tr(0x2); /*Reload dynamic itrs*/ + } if (notify_die(DIE_MCA_MONARCH_LEAVE, "MCA", regs, (long)&nd, 0, recover) == NOTIFY_STOP) ia64_mca_spin(__func__); diff --git a/arch/ia64/kernel/mca_asm.S b/arch/ia64/kernel/mca_asm.S index 8bc7d25..a06d465 100644 --- a/arch/ia64/kernel/mca_asm.S +++ b/arch/ia64/kernel/mca_asm.S @@ -219,8 +219,13 @@ ia64_reload_tr: mov r20=IA64_TR_CURRENT_STACK ;; itr.d dtr[r20]=r16 + GET_THIS_PADDR(r2, ia64_mca_tr_reload) + mov r18 = 1 ;; srlz.d + ;; + st8 [r2] =r18 + ;; done_tlb_purge_and_reload: diff --git a/arch/ia64/mm/tlb.c b/arch/ia64/mm/tlb.c index 655da24..392866a 100644 --- a/arch/ia64/mm/tlb.c +++ b/arch/ia64/mm/tlb.c @@ -26,6 +26,8 @@ #include #include #include +#include +#include static struct { unsigned long mask; /* mask of supported purge page-sizes */ @@ -39,6 +41,10 @@ struct ia64_ctx ia64_ctx = { }; DEFINE_PER_CPU(u8, ia64_need_tlb_flush); +DEFINE_PER_CPU(u8, ia64_tr_num); /*Number of TR slots in current processor*/ 
+DEFINE_PER_CPU(u8, ia64_tr_used); /*Max Slot number used by kernel*/ + +struct ia64_tr_entry __per_cpu_idtrs[NR_CPUS][2][IA64_TR_ALLOC_MAX]; /* * Initializes the ia64_ctx.bitmap array based on max_ctx+1. @@ -190,6 +196,9 @@ ia64_tlb_init (void) ia64_ptce_info_t uninitialized_var(ptce_info); /* GCC be quiet */ unsigned long tr_pgbits; long status; + pal_vm_info_1_u_t vm_info_1; + pal_vm_info_2_u_t vm_info_2; + int cpu = smp_processor_id(); if ((status = ia64_pal_vm_page_size(&tr_pgbits, &purge.mask)) != 0) { printk(KERN_ERR "PAL_VM
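The allocator side of this API (ia64_itr_entry) is cut off above; its stated policy is to scan the per-CPU TR table from the allocation base for an entry whose pte valid bit is clear, mark it used, and track the highest slot handed out in ia64_tr_used. That policy can be sketched in plain C; slot counts and names below are hypothetical, not the kernel's:

```c
#include <stddef.h>

#define TOY_TR_ALLOC_BASE 2   /* hypothetical: lower slots are fixed */
#define TOY_TR_ALLOC_MAX  8   /* hypothetical slot count */
#define TOY_PTE_VALID     0x1UL

struct toy_tr_entry {
    unsigned long pte;        /* bit 0 set => slot in use */
};

static struct toy_tr_entry toy_trs[TOY_TR_ALLOC_MAX];
static int toy_tr_used;       /* highest slot handed out, like ia64_tr_used */

/* Hand out the first free slot at or above the allocation base,
 * or -1 when every dynamic slot is taken. */
static int toy_tr_alloc(void)
{
    int i;

    for (i = TOY_TR_ALLOC_BASE; i < TOY_TR_ALLOC_MAX; i++) {
        if (!(toy_trs[i].pte & TOY_PTE_VALID)) {
            toy_trs[i].pte |= TOY_PTE_VALID;
            if (i > toy_tr_used)
                toy_tr_used = i;
            return i;
        }
    }
    return -1;
}

/* Purge: clear the valid bit so the slot can be reused. */
static void toy_tr_free(int slot)
{
    toy_trs[slot].pte &= ~TOY_PTE_VALID;
}
```

Note that toy_tr_used only grows: it is a high-water mark for MCA-time reload, not a free count.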
Re: [kvm-devel] [kvm-ia64-devel] [09/17] [PATCH] kvm/ia64: Add mmio decoder for kvm/ia64.
[EMAIL PROTECTED] wrote: > Hi, > > Selon "Zhang, Xiantao" <[EMAIL PROTECTED]>: > >>> From 5f82ea88c095cf89cbae920944c05e578f35365f Mon Sep 17 00:00:00 >>> 2001 >> From: Xiantao Zhang <[EMAIL PROTECTED]> >> Date: Wed, 12 Mar 2008 14:48:09 +0800 >> Subject: [PATCH] kvm/ia64: Add mmio decoder for kvm/ia64. [...] >> +post_update = (inst.M5.i << 7) + inst.M5.imm7; >> +if (inst.M5.s) >> +temp -= post_update; >> +else >> +temp += post_update; > > The sign extension is not done correctly here. (This has been fixed > in Xen code). Include the fix in the latest merge candidate patchset. Thanks! :) git://git.kernel.org/pub/scm/linux/kernel/git/xiantao/kvm-ia64.git kvm-ia64-mc8 > >> +post_update = (inst.M3.i << 7) + inst.M3.imm7; >> +if (inst.M3.s) >> +temp -= post_update; >> +else >> +temp += post_update; > > Ditto. > >> +post_update = (inst.M10.i << 7) + inst.M10.imm7; + if >> (inst.M10.s) + temp -= post_update; >> +else >> +temp += post_update; > > Ditto. > >> +post_update = (inst.M10.i << 7) + inst.M10.imm7; + if >> (inst.M10.s) + temp -= post_update; >> +else >> +temp += post_update; > > Ditto. > >> +post_update = (inst.M15.i << 7) + inst.M15.imm7; + if >> (inst.M15.s) + temp -= post_update; >> +else >> +temp += post_update; > > Ditto. > > Tristan. - Check out the new SourceForge.net Marketplace. It's the best place to buy or sell services for just about anything Open Source. http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
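The bug Tristan points at: treating s as a stand-alone sign flag and subtracting the magnitude (i << 7) + imm7 mishandles two's complement; for example s=1, i=0, imm7=0 encodes -256, not "minus zero". A hedged sketch of a correct decode, assuming the immediate is the 9-bit two's-complement field s:i:imm7 of the IA-64 M-unit post-update forms (this is a reconstruction for illustration, not the actual Xen fix):

```c
#include <stdint.h>

/* Decode the post-update immediate as a 9-bit two's-complement
 * field s:i:imm7 and sign-extend it, instead of treating s as a
 * separate sign flag and subtracting a magnitude. */
static int64_t imm9_sext(unsigned s, unsigned i, unsigned imm7)
{
    int64_t raw = ((int64_t)(s & 1) << 8)
                | ((int64_t)(i & 1) << 7)
                | (imm7 & 0x7f);
    return (raw & 0x100) ? raw - 512 : raw;   /* extend bit 8 */
}
```

With this helper, the add/subtract pair in the decoder collapses to a single `temp += imm9_sext(...)`.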
Re: [kvm-devel] [kvm-ia64-devel] [02/17][PATCH] Implement smp_call_function_mask for ia64
Avi Kivity wrote: > Zhang, Xiantao wrote: >> >> diff --git a/include/linux/smp.h b/include/linux/smp.h >> index 55232cc..b71820b 100644 >> --- a/include/linux/smp.h >> +++ b/include/linux/smp.h >> @@ -56,6 +56,9 @@ int smp_call_function(void(*func)(void *info), >> void *info, int retry, int wait); >> >> int smp_call_function_single(int cpuid, void (*func) (void *info), >> void *info, int retry, int wait); >> +int smp_call_function_mask(cpumask_t mask, >> + void (*func)(void *), void *info, >> + int wait); >> >> > > For all other archs, smp_call_function_mask() is declared in > so please define it there. A separate patch can move the > declarations to , since it makes sense to have just one > declaration (and the uniprocessor version is declared there anyway). OK, moved it to asm-ia64/smp.h first, although only x86 arch defined the interface in current code. :) Xiantao - Check out the new SourceForge.net Marketplace. It's the best place to buy or sell services for just about anything Open Source. http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
[kvm-devel] [01/17][PATCH] Add API for allocating dynamic TR resource. V7
From 1028321e00b0f3a60fc414484754f489a70f2400 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Sat, 29 Mar 2008 09:23:37 +0800 Subject: [PATCH] Add API for allocating Dynamic TR resouce. Dynamic TR resouce should be managed in an uniform way. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> Signed-off-by: Anthony Xu<[EMAIL PROTECTED]> --- arch/ia64/kernel/mca.c | 49 +++ arch/ia64/kernel/mca_asm.S |5 + arch/ia64/mm/tlb.c | 196 include/asm-ia64/kregs.h |3 + include/asm-ia64/tlb.h | 14 +++ 5 files changed, 267 insertions(+), 0 deletions(-) diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c index 6c18221..ee6924b 100644 --- a/arch/ia64/kernel/mca.c +++ b/arch/ia64/kernel/mca.c @@ -97,6 +97,7 @@ #include #include +#include #include "mca_drv.h" #include "entry.h" @@ -112,6 +113,7 @@ DEFINE_PER_CPU(u64, ia64_mca_data); /* == __per_cpu_mca[smp_processor_id()] */ DEFINE_PER_CPU(u64, ia64_mca_per_cpu_pte); /* PTE to map per-CPU area */ DEFINE_PER_CPU(u64, ia64_mca_pal_pte); /* PTE to map PAL code */ DEFINE_PER_CPU(u64, ia64_mca_pal_base);/* vaddr PAL code granule */ +DEFINE_PER_CPU(u64, ia64_mca_tr_reload); /* Flag for TR reload */ unsigned long __per_cpu_mca[NR_CPUS]; @@ -1182,6 +1184,49 @@ all_in: return; } +/* mca_insert_tr + * + * Switch rid when TR reload and needed! 
+ * iord: 1: itr, 2: itr; + * +*/ +static void mca_insert_tr(u64 iord) +{ + + int i; + u64 old_rr; + struct ia64_tr_entry *p; + unsigned long psr; + int cpu = smp_processor_id(); + + psr = ia64_clear_ic(); + for (i = IA64_TR_ALLOC_BASE; i < IA64_TR_ALLOC_MAX; i++) { + p = &__per_cpu_idtrs[cpu][iord-1][i]; + if (p->pte&0x1) { + old_rr = ia64_get_rr(p->ifa); + if (old_rr != p->rr) { + ia64_set_rr(p->ifa, p->rr); + ia64_srlz_d(); + } + ia64_ptr(iord, p->ifa, p->itir >> 2); + ia64_srlz_i(); + if (iord & 0x1) { + ia64_itr(0x1, i, p->ifa, p->pte, p->itir >> 2); + ia64_srlz_i(); + } + if (iord & 0x2) { + ia64_itr(0x2, i, p->ifa, p->pte, p->itir >> 2); + ia64_srlz_i(); + } + if (old_rr != p->rr) { + ia64_set_rr(p->ifa, old_rr); + ia64_srlz_d(); + } + } + } + ia64_set_psr(psr); +} + /* * ia64_mca_handler * @@ -1271,6 +1316,10 @@ ia64_mca_handler(struct pt_regs *regs, struct switch_stack *sw, monarch_cpu = -1; #endif } + if (__get_cpu_var(ia64_mca_tr_reload)) { + mca_insert_tr(0x1); /*Reload dynamic itrs*/ + mca_insert_tr(0x2); /*Reload dynamic itrs*/ + } if (notify_die(DIE_MCA_MONARCH_LEAVE, "MCA", regs, (long)&nd, 0, recover) == NOTIFY_STOP) ia64_mca_spin(__func__); diff --git a/arch/ia64/kernel/mca_asm.S b/arch/ia64/kernel/mca_asm.S index 8bc7d25..a06d465 100644 --- a/arch/ia64/kernel/mca_asm.S +++ b/arch/ia64/kernel/mca_asm.S @@ -219,8 +219,13 @@ ia64_reload_tr: mov r20=IA64_TR_CURRENT_STACK ;; itr.d dtr[r20]=r16 + GET_THIS_PADDR(r2, ia64_mca_tr_reload) + mov r18 = 1 ;; srlz.d + ;; + st8 [r2] =r18 + ;; done_tlb_purge_and_reload: diff --git a/arch/ia64/mm/tlb.c b/arch/ia64/mm/tlb.c index 655da24..0b418cc 100644 --- a/arch/ia64/mm/tlb.c +++ b/arch/ia64/mm/tlb.c @@ -26,6 +26,8 @@ #include #include #include +#include +#include static struct { unsigned long mask; /* mask of supported purge page-sizes */ @@ -39,6 +41,10 @@ struct ia64_ctx ia64_ctx = { }; DEFINE_PER_CPU(u8, ia64_need_tlb_flush); +DEFINE_PER_CPU(u8, ia64_tr_num); /*Number of TR slots in current processor*/ 
+DEFINE_PER_CPU(u8, ia64_tr_used); /*Max Slot number used by kernel*/ + +struct ia64_tr_entry __per_cpu_idtrs[NR_CPUS][2][IA64_TR_ALLOC_MAX]; /* * Initializes the ia64_ctx.bitmap array based on max_ctx+1. @@ -190,6 +196,9 @@ ia64_tlb_init (void) ia64_ptce_info_t uninitialized_var(ptce_info); /* GCC be quiet */ unsigned long tr_pgbits; long status; + pal_vm_info_1_u_t vm_info_1; + pal_vm_info_2_u_t vm_info_2; + int cpu = smp_processor_id(); if ((status = ia64_pal_vm_page_size(&tr_pgbits, &purge.mask)) != 0) { printk(KERN_ERR "PAL_VM_PAGE_SIZE failed with status=%ld; " @@ -206,4 +215,191 @@ ia64_tlb_init (void) local_cpu_data->ptce_stride[1] = ptce_info.stride[1];
[kvm-devel] [17/17][PATCH] kvm/ia64: How to boot up guests on kvm/ia64 V7
>From 454e8a4473ed13ce313b2ba3b654feb926a891b7 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 13:57:33 +0800 Subject: [PATCH] kvm/ia64: How to boot up guests on kvm/ia64 Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- Documentation/ia64/kvm-howto.txt | 74 ++ 1 files changed, 74 insertions(+), 0 deletions(-) create mode 100644 Documentation/ia64/kvm-howto.txt diff --git a/Documentation/ia64/kvm-howto.txt b/Documentation/ia64/kvm-howto.txt new file mode 100644 index 000..5a8049c --- /dev/null +++ b/Documentation/ia64/kvm-howto.txt @@ -0,0 +1,74 @@ + Guide: How to boot up guests on kvm/ia64 + +1. Get the kvm source from git.kernel.org. + Userspace source: + git clone git://git.kernel.org/pub/scm/virt/kvm/kvm-userspace.git + Kernel Source: + git clone git://git.kernel.org/pub/scm/linux/kernel/git/xiantao/kvm-ia64.git + +2. Compile the source code. + 2.1 Compile userspace code: + (1)cd ./kvm-userspace + (2)./configure + (3)cd kernel + (4)make sync LINUX= $kernel_dir (kernel_dir is the directory of kernel source.) + (5)cd .. + (6)make qemu + (7)cd qemu; make install + + 2.2 Compile kernel source code: + (1) cd ./$kernel_dir + (2) Make menuconfig + (3) Enter into virtualization option, and choose kvm. + (4) make + (5) Once (4) done, make modules_install + (6) Make initrd, and use new kernel to reboot up host machine. + (7) Once (6) done, cd $kernel_dir/arch/ia64/kvm + (8) insmod kvm.ko; insmod kvm-intel.ko + +Note: For step 2, please make sure that host page size == TARGET_PAGE_SIZE of qemu, otherwise, may fail. + +3. Get Guest Firmware named as Flash.fd, and put it under right place: + (1) If you have the guest firmware (binary)released by Intel Corp for Xen, you can use it directly. + (2) If you want to build a guest firmware form source code. Please download the source from + hg clone http://xenbits.xensource.com/ext/efi-vfirmware.hg + Use the Guide of the source to build open Guest Firmware. 
+ (3) Rename it to Flash.fd, and copy it to /usr/local/share/qemu +Note: For step 3, kvm use the guest firmware which complies with the one Xen uses. + +4. Boot up Linux or Windows guests: + 4.1 Create or install a image for guest boot. If you have xen experience, it should be easy. + + 4.2 Boot up guests use the following command. + /usr/local/bin/qemu-system-ia64 -smp xx -m 512 -hda $your_image + (xx is the number of virtual processors for the guest, now the maximum value is 4) + +5. Known possibile issue on some platforms with old Firmware + +If meet strange host crashes, you may try to solve it through either of the following methods. +(1): Upgrade your Firmware to the latest one. + +(2): Applying the below patch to kernel source. +diff --git a/arch/ia64/kernel/pal.S b/arch/ia64/kernel/pal.S +index 0b53344..f02b0f7 100644 +--- a/arch/ia64/kernel/pal.S b/arch/ia64/kernel/pal.S +@@ -84,7 +84,8 @@ GLOBAL_ENTRY(ia64_pal_call_static) + mov ar.pfs = loc1 + mov rp = loc0 + ;; +- srlz.d // serialize restoration of psr.l ++ srlz.i // serialize restoration of psr.l ++ ;; + br.ret.sptk.many b0 + END(ia64_pal_call_static) + +6. Bug report: + If you found any issues when use kvm/ia64, Please post the bug info to kvm-ia64-devel mailing list. + https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel/ + +Thanks for your interest! Let's work together, and make kvm/ia64 stronger and stronger! + + + Xiantao Zhang <[EMAIL PROTECTED]> + 2008.3.10 -- 1.5.2 0017-kvm-ia64-How-to-boot-up-guests-on-kvm-ia64.patch Description: 0017-kvm-ia64-How-to-boot-up-guests-on-kvm-ia64.patch - Check out the new SourceForge.net Marketplace. It's the best place to buy or sell services for just about anything Open Source. http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
[kvm-devel] [Patch][00/17] kvm-ia64 for kernel V7
Compared with V6:
1. Updated the PATCH 01 according to Tony's comments.
2. Updated the PATCH 14 according to Akio's comments.
3. Updated the PATCH 17 according to Akio's comments.
The latest patchset is located at git://git.kernel.org/pub/scm/linux/kernel/git/xiantao/kvm-ia64.git kvm-ia64-mc7. Please have a review! Xiantao
Re: [kvm-devel] [17/17][PATCH] kvm/ia64: How to boot up guests on kvm/ia64
Akio Takebe wrote: > Hi, > > I found 3 typos. > >> +3. Get Guest Firmware named as Flash.fd, and put it under right >> place: + (1) If you have the guest firmware (binary)released by Intel >> Corp for Xen, you can use it directly. >> +(2) If you want to build a guest firmware form souce code. souce >> ---> source > >> +5. Known possbile issue on some platforms with old Firmware >> + > possbile ---> possible > >> +(2): Applying the below patch to kernel source. >> +diff --git a/arch/ia64/kernel/pal.S b/arch/ia64/kernel/pal.S >> +index 0b53344..f02b0f7 100644 >> +--- a/arch/ia64/kernel/pal.S >> b/arch/ia64/kernel/pal.S >> +@@ -84,7 +84,8 @@ GLOBAL_ENTRY(ia64_pal_call_static) + mov ar.pfs >> = loc1 + mov rp = loc0 >> +;; >> +- srlz.d // seralize restoration of psr.l >> ++ srlz.i // seralize restoration of psr.l seralize ---> serialize Good finding. The last one is copied from kernel, maybe need a patch to fix it :) Xiantao - Check out the new SourceForge.net Marketplace. It's the best place to buy or sell services for just about anything Open Source. http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
Re: [kvm-devel] [11/17][PATCH] kvm/ia64: add processor virtualization support.
Akio Takebe wrote: > Hi, Xiantao and Anthony > >> +void getfpreg(unsigned long regnum, struct ia64_fpreg *fpval, >> +struct kvm_pt_regs *regs) >> +{ >> +/* Take floating register rotation into consideration*/ >> +if (regnum >= IA64_FIRST_ROTATING_FR) >> +regnum = IA64_FIRST_ROTATING_FR + fph_index(regs, >> regnum); >> +#define CASE_FIXED_FP(reg) \ >> +case (reg) : \ >> +ia64_stf_spill(fpval, reg); \ >> +break >> + >> +switch (regnum) { >> +CASE_FIXED_FP(0); >> +CASE_FIXED_FP(1); >> +CASE_FIXED_FP(2); >> +CASE_FIXED_FP(3); >> +CASE_FIXED_FP(4); >> +CASE_FIXED_FP(5); >> + >> +CASE_FIXED_FP(6); >> +CASE_FIXED_FP(7); >> +CASE_FIXED_FP(8); >> +CASE_FIXED_FP(9); >> +CASE_FIXED_FP(10); >> +CASE_FIXED_FP(11); >> + > Is this correct ? Though I don't know why xen do so. > In the case of Xen, the above parts are; > > #define CASE_SAVED_FP(reg) \ > case reg: \ > fpval->u.bits[0] = regs->f##reg.u.bits[0]; \ > fpval->u.bits[1] = regs->f##reg.u.bits[1]; \ > break > > CASE_SAVED_FP(6); > CASE_SAVED_FP(7); > CASE_SAVED_FP(8); > Hi, Akio Current should be correct, because for every host<-> guest switch, we switched FPU accordingly. So the fpu register file is dedicated for current vm, getting it from physical register should be right. But for xen's code, due to lazy fpu save/restore, current fpu register file maybe not belong to current vcpu, so need to load it from stack. Thanks Xiantao - Check out the new SourceForge.net Marketplace. It's the best place to buy or sell services for just about anything Open Source. http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
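One detail worth making explicit from getfpreg(): before the spill, a rotating register name (f32 and up) is remapped through the current rotation base. A plain-C sketch of that remapping; the modulo formula is an assumption for illustration based on the ia64 rotating-register convention, and the real fph_index() lives in the kernel headers:

```c
/* Remap an architectural rotating FP register name (f32..f127 on
 * IA-64) to its physical slot under the current rotation base
 * rrb_fr -- the adjustment getfpreg() applies before spilling. */
#define TOY_FIRST_ROTATING_FR 32
#define TOY_NUM_ROTATING_FR   96

static int toy_fph_slot(int regnum, int rrb_fr)
{
    int idx = (regnum - TOY_FIRST_ROTATING_FR + rrb_fr)
              % TOY_NUM_ROTATING_FR;
    return TOY_FIRST_ROTATING_FR + idx;
}
```

After this remap the spill can read the physical register directly, which is why the eager FPU switch makes CASE_FIXED_FP correct where Xen's lazy scheme had to read saved state.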
Re: [kvm-devel] [14/17][PATCH] kvm/ia64: Add guest interruption injection support.
>> + INITIAL_PSR_VALUE_AT_INTERRUPTION 0x001808028034 > Xen also uses this value; you had better use macros of the PSR bits, > or you can add the same comments as Xen. Hi, Akio. The comment is at the place where the value is used. Anyway, using a macro is better. Changed.
Re: [kvm-devel] [08/17][PATCH] kvm/ia64: Add interruption vector table for vmm.
Akio Takebe wrote: > Hi, Xiantao > > a comments is below. > > >> +// 0x3000 Entry 12 (size 64 bundles) External Interrupt (4) >> +ENTRY(kvm_interrupt) +mov r31=pr// prepare to save predicates >> +mov r19=12 >> +mov r29=cr.ipsr >> +;; >> +tbit.z p6,p7=r29,IA64_PSR_VM_BIT >> +tbit.z p0,p15=r29,IA64_PSR_I_BIT >> +;; >> +(p7) br.sptk kvm_dispatch_interrupt >> +;; >> +mov r27=ar.rsc /* M */ >> +mov r20=r1 /* A */ >> +mov r25=ar.unat /* M */ >> +mov r26=ar.pfs /* I */ >> +mov r28=cr.iip /* M */ >> +cover /* B (or nothing) */ >> +;; >> +mov r1=sp >> +;; >> +invala /* M */ >> +mov r30=cr.ifs >> +;; >> +addl r1=-VMM_PT_REGS_SIZE,r1 >> +;; >> +adds r17=2*L1_CACHE_BYTES,r1/* really: biggest cache-line size >> */ +adds r16=PT(CR_IPSR),r1 >> +;; >> +lfetch.fault.excl.nt1 [r17],L1_CACHE_BYTES >> +st8 [r16]=r29 /* save cr.ipsr */ >> +;; >> +lfetch.fault.excl.nt1 [r17] >> +mov r29=b0 >> +;; >> +adds r16=PT(R8),r1 /* initialize first base pointer */ >> +adds r17=PT(R9),r1 /* initialize second base pointer */ >> +mov r18=r0 /* make sure r18 isn't NaT */ + ;; >> +.mem.offset 0,0; st8.spill [r16]=r8,16 >> +.mem.offset 8,0; st8.spill [r17]=r9,16 >> +;; >> +.mem.offset 0,0; st8.spill [r16]=r10,24 >> +.mem.offset 8,0; st8.spill [r17]=r11,24 >> +;; >> +st8 [r16]=r28,16/* save cr.iip */ >> +st8 [r17]=r30,16/* save cr.ifs */ >> +mov r8=ar.fpsr /* M */ >> +mov r9=ar.csd >> +mov r10=ar.ssd >> +movl r11=FPSR_DEFAULT /* L-unit */ >> +;; >> +st8 [r16]=r25,16/* save ar.unat */ >> +st8 [r17]=r26,16/* save ar.pfs */ >> +shl r18=r18,16 /* compute ar.rsc to be used for "loadrs" */ >> +;; >> +st8 [r16]=r27,16/* save ar.rsc */ >> +adds r17=16,r17 /* skip over ar_rnat field */ +;; >> +st8 [r17]=r31,16/* save predicates */ >> +adds r16=16,r16 /* skip over ar_bspstore field */ + ;; >> +st8 [r16]=r29,16/* save b0 */ >> +st8 [r17]=r18,16/* save ar.rsc value for "loadrs" */ +;; >> +.mem.offset 0,0; st8.spill [r16]=r20,16/* save original r1 */ >> +.mem.offset 8,0; st8.spill [r17]=r12,16 >> +adds r12=-16,r1 >> 
+/* switch to kernel memory stack (with 16 bytes of scratch) */ >> +;; +.mem.offset 0,0; st8.spill [r16]=r13,16 >> +.mem.offset 8,0; st8.spill [r17]=r8,16 /* save ar.fpsr */ +;; >> +.mem.offset 0,0; st8.spill [r16]=r15,16 >> +.mem.offset 8,0; st8.spill [r17]=r14,16 >> +dep r14=-1,r0,60,4 >> +;; >> +.mem.offset 0,0; st8.spill [r16]=r2,16 >> +.mem.offset 8,0; st8.spill [r17]=r3,16 >> +adds r2=VMM_PT_REGS_R16_OFFSET,r1 >> +adds r14 = VMM_VCPU_GP_OFFSET,r13 >> +;; >> +mov r8=ar.ccv >> +ld8 r14 = [r14] >> +;; >> +mov r1=r14 /* establish kernel global pointer */ >> +;; \ >> +bsw.1 >> +;; >> +alloc r14=ar.pfs,0,0,1,0// must be first in an insn group + >> mov out0=r13 +;; >> +ssm psr.ic >> +;; >> +srlz.i >> +;; >> +//(p15) ssm psr.i > Why do you comments out some ssm psr.i? >> +adds r3=8,r2// set up second base pointer for >> SAVE_REST >> +srlz.i // ensure everybody knows psr.ic is back >> on > Hmm, if the above ssm is not necessary, this srlz.i is also necessary. Currently, we didn't enable psr.i in GVMM, since all external interrupts should back to host. But for next step, we want to add the psr.i = 1 support in GVMM. The final decision should be based on performance evaluation, and to see whether it has impact on the performance of host and guest side. Now I want to keep it there as a tag. Xiantao - Check out the new SourceForge.net Marketplace. It's the best place to buy or sell services for just about anything Open Source. http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
[kvm-devel] [12/17][PATCH] kvm/ia64: add optimization for some virtualization faults
>From 2dbf7c93ff5e36a221761c690ff12e7be48a6bb2 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 13:49:38 +0800 Subject: [PATCH] kvm/ia64: add optimization for some virtulization faults optvfault.S adds optimization for some performance-critical virtualization faults. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/optvfault.S | 918 + 1 files changed, 918 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/optvfault.S diff --git a/arch/ia64/kvm/optvfault.S b/arch/ia64/kvm/optvfault.S new file mode 100644 index 000..5de210e --- /dev/null +++ b/arch/ia64/kvm/optvfault.S @@ -0,0 +1,918 @@ +/* + * arch/ia64/vmx/optvfault.S + * optimize virtualization fault handler + * + * Copyright (C) 2006 Intel Co + * Xuefei Xu (Anthony Xu) <[EMAIL PROTECTED]> + */ + +#include +#include + +#include "vti.h" +#include "asm-offsets.h" + +#define ACCE_MOV_FROM_AR +#define ACCE_MOV_FROM_RR +#define ACCE_MOV_TO_RR +#define ACCE_RSM +#define ACCE_SSM +#define ACCE_MOV_TO_PSR +#define ACCE_THASH + +//mov r1=ar3 +GLOBAL_ENTRY(kvm_asm_mov_from_ar) +#ifndef ACCE_MOV_FROM_AR +br.many kvm_virtualization_fault_back +#endif +add r18=VMM_VCPU_ITC_OFS_OFFSET, r21 +add r16=VMM_VCPU_LAST_ITC_OFFSET,r21 +extr.u r17=r25,6,7 +;; +ld8 r18=[r18] +mov r19=ar.itc +mov r24=b0 +;; +add r19=r19,r18 +addl [EMAIL PROTECTED](asm_mov_to_reg),gp +;; +st8 [r16] = r19 +adds r30=kvm_resume_to_guest-asm_mov_to_reg,r20 +shladd r17=r17,4,r20 +;; +mov b0=r17 +br.sptk.few b0 +;; +END(kvm_asm_mov_from_ar) + + +// mov r1=rr[r3] +GLOBAL_ENTRY(kvm_asm_mov_from_rr) +#ifndef ACCE_MOV_FROM_RR +br.many kvm_virtualization_fault_back +#endif +extr.u r16=r25,20,7 +extr.u r17=r25,6,7 +addl [EMAIL PROTECTED](asm_mov_from_reg),gp +;; +adds r30=kvm_asm_mov_from_rr_back_1-asm_mov_from_reg,r20 +shladd r16=r16,4,r20 +mov r24=b0 +;; +add r27=VMM_VCPU_VRR0_OFFSET,r21 +mov b0=r16 +br.many b0 +;; +kvm_asm_mov_from_rr_back_1: +adds 
r30=kvm_resume_to_guest-asm_mov_from_reg,r20 +adds r22=asm_mov_to_reg-asm_mov_from_reg,r20 +shr.u r26=r19,61 +;; +shladd r17=r17,4,r22 +shladd r27=r26,3,r27 +;; +ld8 r19=[r27] +mov b0=r17 +br.many b0 +END(kvm_asm_mov_from_rr) + + +// mov rr[r3]=r2 +GLOBAL_ENTRY(kvm_asm_mov_to_rr) +#ifndef ACCE_MOV_TO_RR +br.many kvm_virtualization_fault_back +#endif +extr.u r16=r25,20,7 +extr.u r17=r25,13,7 +addl [EMAIL PROTECTED](asm_mov_from_reg),gp +;; +adds r30=kvm_asm_mov_to_rr_back_1-asm_mov_from_reg,r20 +shladd r16=r16,4,r20 +mov r22=b0 +;; +add r27=VMM_VCPU_VRR0_OFFSET,r21 +mov b0=r16 +br.many b0 +;; +kvm_asm_mov_to_rr_back_1: +adds r30=kvm_asm_mov_to_rr_back_2-asm_mov_from_reg,r20 +shr.u r23=r19,61 +shladd r17=r17,4,r20 +;; +//if rr6, go back +cmp.eq p6,p0=6,r23 +mov b0=r22 +(p6) br.cond.dpnt.many kvm_virtualization_fault_back +;; +mov r28=r19 +mov b0=r17 +br.many b0 +kvm_asm_mov_to_rr_back_2: +adds r30=kvm_resume_to_guest-asm_mov_from_reg,r20 +shladd r27=r23,3,r27 +;; // vrr.rid<<4 |0xe +st8 [r27]=r19 +mov b0=r30 +;; +extr.u r16=r19,8,26 +extr.u r18 =r19,2,6 +mov r17 =0xe +;; +shladd r16 = r16, 4, r17 +extr.u r19 =r19,0,8 +;; +shl r16 = r16,8 +;; +add r19 = r19, r16 +;; //set ve 1 +dep r19=-1,r19,0,1 +cmp.lt p6,p0=14,r18 +;; +(p6) mov r18=14 +;; +(p6) dep r19=r18,r19,2,6 +;; +cmp.eq p6,p0=0,r23 +;; +cmp.eq.or p6,p0=4,r23 +;; +adds r16=VMM_VCPU_MODE_FLAGS_OFFSET,r21 +(p6) adds r17=VMM_VCPU_META_SAVED_RR0_OFFSET,r21 +;; +ld4 r16=[r16] +cmp.eq p7,p0=r0,r0 +(p6) shladd r17=r23,1,r17 +;; +(p6) st8 [r17]=r19 +(p6) tbit.nz p6,p7=r16,0 +;; +(p7) mov rr[r28]=r19 +mov r24=r22 +br.many b0 +END(kvm_asm_mov_to_rr) + + +//rsm +GLOBAL_ENTRY(kvm_asm_rsm) +#ifndef ACCE_RSM +br.many kvm_virtualization_fault_back +#endif +add r16=VMM_VPD_BASE_OFFSET,r21 +extr.u r26=r25,6,21 +extr.u r27=r25,31,2 +;; +ld8 r16=[r16] +extr.u r28=r25,36,1 +dep r26=r27,r26,21,2 +;; +add r17=VPD_VPSR_START_OFFSET,r16 +add r22=VMM_VCPU_MODE_FLAGS_OFFSET,r21 +//r26 is imm24 +dep r26=r28,r26,23,1 +;; +ld8 r18=[r17] 
+movl r28=IA64_PSR_IC+IA64_PSR_I+IA64_PSR_DT+IA64_PSR_SI +ld4 r23=[r22] +sub r27=-1,r26 +mov r24=b0 +;; +mov r20=cr.ipsr +or r28=r27,r28 +and r19=r18,r27 +;; +st8 [r17]=r19 +and r20=r20,r28 +/* Comment it out due to short of fp lazy alorgithm support +adds r27=IA64_VCPU_FP_PSR_OFFSET,r21 +;; +ld8 r27=[r27] +;; +tbit.nz p8,p0= r27,IA64_PSR_DFH_BIT +;; +(p8) dep r20=-1,r20,IA6
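kvm_asm_rsm first reassembles rsm's 24-bit immediate, which the instruction encoding scatters as 21 bits at bit 6, 2 bits at bit 31, and 1 bit at bit 36 (the extr.u/dep sequence above), then clears those bits in the virtual PSR. The same field surgery in C, with the positions read off the assembly operands rather than from a manual:

```c
#include <stdint.h>

/* Rebuild rsm's imm24 from the instruction word, mirroring
 *   extr.u r26=r25,6,21   ; 21 bits at bit 6
 *   extr.u r27=r25,31,2   ; 2 bits at bit 31
 *   extr.u r28=r25,36,1   ; 1 bit at bit 36
 *   dep r26=r27,r26,21,2  ; dep r26=r28,r26,23,1
 */
static uint64_t rsm_imm24(uint64_t inst)
{
    uint64_t imm = (inst >> 6) & 0x1fffff;    /* low 21 bits */
    imm |= ((inst >> 31) & 0x3) << 21;        /* next 2 bits */
    imm |= ((inst >> 36) & 0x1) << 23;        /* top bit     */
    return imm;
}

/* rsm clears the named bits in the (virtual) PSR: the
 * "sub r27=-1,r26 ; and r19=r18,r27" pair in the handler. */
static uint64_t apply_rsm(uint64_t vpsr, uint64_t inst)
{
    return vpsr & ~rsm_imm24(inst);
}
```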
[kvm-devel] [13/17][PATCH] kvm/ia64: Generate offset values for assembly code use.
>From f21b39650592fff4d07c94730b0f4e9aa093b9a8 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 13:50:13 +0800 Subject: [PATCH] kvm/ia64: Generate offset values for assembly code use. asm-offsets.c will generate offset values used for assembly code for some fileds of special structures. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/asm-offsets.c | 251 +++ 1 files changed, 251 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/asm-offsets.c diff --git a/arch/ia64/kvm/asm-offsets.c b/arch/ia64/kvm/asm-offsets.c new file mode 100644 index 000..fc2ac82 --- /dev/null +++ b/arch/ia64/kvm/asm-offsets.c @@ -0,0 +1,251 @@ +/* + * asm-offsets.c Generate definitions needed by assembly language modules. + * This code generates raw asm output which is post-processed + * to extract and format the required data. + * + * Anthony Xu<[EMAIL PROTECTED]> + * Xiantao Zhang <[EMAIL PROTECTED]> + * Copyright (c) 2007 Intel Corporation KVM support. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. 
+ * + */ + +#include +#include + +#include "vcpu.h" + +#define task_struct kvm_vcpu + +#define DEFINE(sym, val) \ + asm volatile("\n->" #sym " (%0) " #val : : "i" (val)) + +#define BLANK() asm volatile("\n->" : :) + +#define OFFSET(_sym, _str, _mem) \ +DEFINE(_sym, offsetof(_str, _mem)); + +void foo(void) +{ + DEFINE(VMM_TASK_SIZE, sizeof(struct kvm_vcpu)); + DEFINE(VMM_PT_REGS_SIZE, sizeof(struct kvm_pt_regs)); + + BLANK(); + + DEFINE(VMM_VCPU_META_RR0_OFFSET, + offsetof(struct kvm_vcpu, arch.metaphysical_rr0)); + DEFINE(VMM_VCPU_META_SAVED_RR0_OFFSET, + offsetof(struct kvm_vcpu, + arch.metaphysical_saved_rr0)); + DEFINE(VMM_VCPU_VRR0_OFFSET, + offsetof(struct kvm_vcpu, arch.vrr[0])); + DEFINE(VMM_VPD_IRR0_OFFSET, + offsetof(struct vpd, irr[0])); + DEFINE(VMM_VCPU_ITC_CHECK_OFFSET, + offsetof(struct kvm_vcpu, arch.itc_check)); + DEFINE(VMM_VCPU_IRQ_CHECK_OFFSET, + offsetof(struct kvm_vcpu, arch.irq_check)); + DEFINE(VMM_VPD_VHPI_OFFSET, + offsetof(struct vpd, vhpi)); + DEFINE(VMM_VCPU_VSA_BASE_OFFSET, + offsetof(struct kvm_vcpu, arch.vsa_base)); + DEFINE(VMM_VCPU_VPD_OFFSET, + offsetof(struct kvm_vcpu, arch.vpd)); + DEFINE(VMM_VCPU_IRQ_CHECK, + offsetof(struct kvm_vcpu, arch.irq_check)); + DEFINE(VMM_VCPU_TIMER_PENDING, + offsetof(struct kvm_vcpu, arch.timer_pending)); + DEFINE(VMM_VCPU_META_SAVED_RR0_OFFSET, + offsetof(struct kvm_vcpu, arch.metaphysical_saved_rr0)); + DEFINE(VMM_VCPU_MODE_FLAGS_OFFSET, + offsetof(struct kvm_vcpu, arch.mode_flags)); + DEFINE(VMM_VCPU_ITC_OFS_OFFSET, + offsetof(struct kvm_vcpu, arch.itc_offset)); + DEFINE(VMM_VCPU_LAST_ITC_OFFSET, + offsetof(struct kvm_vcpu, arch.last_itc)); + DEFINE(VMM_VCPU_SAVED_GP_OFFSET, + offsetof(struct kvm_vcpu, arch.saved_gp)); + + BLANK(); + + DEFINE(VMM_PT_REGS_B6_OFFSET, + offsetof(struct kvm_pt_regs, b6)); + DEFINE(VMM_PT_REGS_B7_OFFSET, + offsetof(struct kvm_pt_regs, b7)); + DEFINE(VMM_PT_REGS_AR_CSD_OFFSET, + offsetof(struct kvm_pt_regs, ar_csd)); + DEFINE(VMM_PT_REGS_AR_SSD_OFFSET, + offsetof(struct 
kvm_pt_regs, ar_ssd)); + DEFINE(VMM_PT_REGS_R8_OFFSET, + offsetof(struct kvm_pt_regs, r8)); + DEFINE(VMM_PT_REGS_R9_OFFSET, + offsetof(struct kvm_pt_regs, r9)); + DEFINE(VMM_PT_REGS_R10_OFFSET, + offsetof(struct kvm_pt_regs, r10)); + DEFINE(VMM_PT_REGS_R11_OFFSET, + offsetof(struct kvm_pt_regs, r11)); + DEFINE(VMM_PT_REGS_CR_IPSR_OFFSET, + offsetof(struct kvm_pt_regs, c
[kvm-devel] [02/17][PATCH] Implement smp_call_function_mask for ia64
>From 9118d25b4e98bef3a62429f8c150e8d429396c40 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 12:58:02 +0800 Subject: [PATCH] Implement smp_call_function_mask for ia64 This function provides more flexible interface for smp infrastructure. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kernel/smp.c | 84 +-- include/linux/smp.h|3 ++ 2 files changed, 69 insertions(+), 18 deletions(-) diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c index 4e446aa..5bb241f 100644 --- a/arch/ia64/kernel/smp.c +++ b/arch/ia64/kernel/smp.c @@ -213,6 +213,19 @@ send_IPI_allbutself (int op) * Called with preemption disabled. */ static inline void +send_IPI_mask(cpumask_t mask, int op) +{ + unsigned int cpu; + + for_each_cpu_mask(cpu, mask) { + send_IPI_single(cpu, op); + } +} + +/* + * Called with preemption disabled. + */ +static inline void send_IPI_all (int op) { int i; @@ -401,33 +414,36 @@ smp_call_function_single (int cpuid, void (*func) (void *info), void *info, int } EXPORT_SYMBOL(smp_call_function_single); -/* - * this function sends a 'generic call function' IPI to all other CPUs - * in the system. - */ - -/* - * [SUMMARY] Run a function on all other CPUs. - * The function to run. This must be fast and non-blocking. - * An arbitrary pointer to pass to the function. - * currently unused. - * If true, wait (atomically) until function has completed on other CPUs. - * [RETURNS] 0 on success, else a negative status code. +/** + * smp_call_function_mask(): Run a function on a set of other CPUs. + * The set of cpus to run on. Must not include the current cpu. + * The function to run. This must be fast and non-blocking. + * An arbitrary pointer to pass to the function. + * If true, wait (atomically) until function + * has completed on other CPUs. * - * Does not return until remote CPUs are nearly ready to execute or are or have - * executed. + * Returns 0 on success, else a negative status code. 
+ * + * If @wait is true, then returns once @func has returned; otherwise + * it returns just before the target cpu calls @func. * * You must not call this function with disabled interrupts or from a * hardware interrupt handler or from a bottom half handler. */ -int -smp_call_function (void (*func) (void *info), void *info, int nonatomic, int wait) +int smp_call_function_mask(cpumask_t mask, + void (*func)(void *), void *info, + int wait) { struct call_data_struct data; + cpumask_t allbutself; int cpus; spin_lock(&call_lock); - cpus = num_online_cpus() - 1; + allbutself = cpu_online_map; + cpu_clear(smp_processor_id(), allbutself); + + cpus_and(mask, mask, allbutself); + cpus = cpus_weight(mask); if (!cpus) { spin_unlock(&call_lock); return 0; @@ -445,7 +461,12 @@ smp_call_function (void (*func) (void *info), void *info, int nonatomic, int wai call_data = &data; mb(); /* ensure store to call_data precedes setting of IPI_CALL_FUNC */ - send_IPI_allbutself(IPI_CALL_FUNC); + + /* Send a message to other CPUs */ + if (cpus_equal(mask, allbutself)) + send_IPI_allbutself(IPI_CALL_FUNC); + else + send_IPI_mask(mask, IPI_CALL_FUNC); /* Wait for response */ while (atomic_read(&data.started) != cpus) @@ -458,6 +479,33 @@ smp_call_function (void (*func) (void *info), void *info, int nonatomic, int wai spin_unlock(&call_lock); return 0; + +} +EXPORT_SYMBOL(smp_call_function_mask); + +/* + * this function sends a 'generic call function' IPI to all other CPUs + * in the system. + */ + +/* + * [SUMMARY] Run a function on all other CPUs. + * The function to run. This must be fast and non-blocking. + * An arbitrary pointer to pass to the function. + * currently unused. + * If true, wait (atomically) until function has completed on other CPUs. + * [RETURNS] 0 on success, else a negative status code. + * + * Does not return until remote CPUs are nearly ready to execute or are or have + * executed. 
+ * + * You must not call this function with disabled interrupts or from a + * hardware interrupt handler or from a bottom half handler. + */ +int +smp_call_function (void (*func) (void *info), void *info, int nonatomic, int wait) +{ + return smp_call_function_mask(cpu_online_map, func, info, wait); } EXPORT_SYMBOL(smp_call_function); diff --git a/include/linux/smp.h b/include/linux/smp.h index 55232cc..b71820b 100644 --- a/include/linux/smp.h +++ b/include/linux/smp.h @@ -56,6 +56,9 @@ int smp_call_function(void(*func)(void *info), void *info, int retry, int wait); int smp_call_function_single(int cp
[kvm-devel] [07/17][PATCH] kvm/ia64: Add TLB virtualization support.

>From 56d3f7acf8d45d2491646be77ced344dcc516cd7 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 13:45:40 +0800 Subject: [PATCH] kvm/ia64: Add TLB virtulization support. vtlb.c includes tlb/VHPT virtulization. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/vtlb.c | 631 ++ 1 files changed, 631 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/vtlb.c diff --git a/arch/ia64/kvm/vtlb.c b/arch/ia64/kvm/vtlb.c new file mode 100644 index 000..6e6ed25 --- /dev/null +++ b/arch/ia64/kvm/vtlb.c @@ -0,0 +1,631 @@ +/* + * vtlb.c: guest virtual tlb handling module. + * Copyright (c) 2004, Intel Corporation. + * Yaozu Dong (Eddie Dong) <[EMAIL PROTECTED]> + * Xuefei Xu (Anthony Xu) <[EMAIL PROTECTED]> + * + * Copyright (c) 2007, Intel Corporation. + * Xuefei Xu (Anthony Xu) <[EMAIL PROTECTED]> + * Xiantao Zhang <[EMAIL PROTECTED]> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + * + */ + +#include "vcpu.h" + +#include +/* + * Check to see if the address rid:va is translated by the TLB + */ + +static int __is_tr_translated(thash_data_t *trp, u64 rid, u64 va) +{ + return ((trp->p) && (trp->rid == rid) + && ((va-trp->vadr) < PSIZE(trp->ps))); +} + +/* + * Only for GUEST TR format. 
+ */ +static int __is_tr_overlap(thash_data_t *trp, u64 rid, u64 sva, u64 eva) +{ + u64 sa1, ea1; + + if (!trp->p || trp->rid != rid) + return 0; + + sa1 = trp->vadr; + ea1 = sa1 + PSIZE(trp->ps) - 1; + eva -= 1; + if ((sva > ea1) || (sa1 > eva)) + return 0; + else + return 1; + +} + +void machine_tlb_purge(u64 va, u64 ps) +{ + ia64_ptcl(va, ps << 2); +} + +void local_flush_tlb_all(void) +{ + int i, j; + unsigned long flags, count0, count1; + unsigned long stride0, stride1, addr; + + addr= current_vcpu->arch.ptce_base; + count0 = current_vcpu->arch.ptce_count[0]; + count1 = current_vcpu->arch.ptce_count[1]; + stride0 = current_vcpu->arch.ptce_stride[0]; + stride1 = current_vcpu->arch.ptce_stride[1]; + + local_irq_save(flags); + for (i = 0; i < count0; ++i) { + for (j = 0; j < count1; ++j) { + ia64_ptce(addr); + addr += stride1; + } + addr += stride0; + } + local_irq_restore(flags); + ia64_srlz_i(); /* srlz.i implies srlz.d */ +} + +int vhpt_enabled(VCPU *vcpu, u64 vadr, vhpt_ref_t ref) +{ + ia64_rrvrr; + ia64_pta vpta; + ia64_psr vpsr; + + vpsr.val = VCPU(vcpu, vpsr); + vrr.val = vcpu_get_rr(vcpu, vadr); + vpta.val = vcpu_get_pta(vcpu); + + if (vrr.ve & vpta.ve) { + switch (ref) { + case DATA_REF: + case NA_REF: + return vpsr.dt; + case INST_REF: + return vpsr.dt && vpsr.it && vpsr.ic; + case RSE_REF: + return vpsr.dt && vpsr.rt; + + } + } + return 0; +} + +thash_data_t *vsa_thash(ia64_pta vpta, u64 va, u64 vrr, u64 *tag) +{ + u64 index, pfn, rid, pfn_bits; + + pfn_bits = vpta.size - 5 - 8; + pfn = REGION_OFFSET(va) >> _REGION_PAGE_SIZE(vrr); + rid = _REGION_ID(vrr); + index = ((rid & 0xff) << pfn_bits)|(pfn & ((1UL << pfn_bits) - 1)); + *tag = ((rid >> 8) & 0x) | ((pfn >> pfn_bits) << 16); + + return (thash_data_t *)((vpta.base << PTA_BASE_SHIFT) + (index << 5)); +} + +thash_data_t *__vtr_lookup(VCPU *vcpu, u64 va, int type) +{ + + thash_data_t *trp; + int i; + u64 rid; + + rid = vcpu_get_rr(vcpu, va); + rid = rid & RR_RID_MASK;; + if (type == D_TLB) { + if 
(vcpu_quick_region_check(vcpu->arch.dtr_regions, va)) { + for (trp = (thash_data_t *)&vcpu->arch.dtrs, i = 0; + i < NDTRS; i++, trp++) { + if (__is_tr_translated(trp, rid, va)) + return trp; + } + } + } else { +
[kvm-devel] [09/17] [PATCH] kvm/ia64: Add mmio decoder for kvm/ia64.
>From 5f82ea88c095cf89cbae920944c05e578f35365f Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 14:48:09 +0800 Subject: [PATCH] kvm/ia64: Add mmio decoder for kvm/ia64. mmio.c includes mmio decoder routines. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/mmio.c | 349 ++ 1 files changed, 349 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/mmio.c diff --git a/arch/ia64/kvm/mmio.c b/arch/ia64/kvm/mmio.c new file mode 100644 index 000..3f8027a --- /dev/null +++ b/arch/ia64/kvm/mmio.c @@ -0,0 +1,349 @@ +/* + * mmio.c: MMIO emulation components. + * Copyright (c) 2004, Intel Corporation. + * Yaozu Dong (Eddie Dong) ([EMAIL PROTECTED]) + * Kun Tian (Kevin Tian) ([EMAIL PROTECTED]) + * + * Copyright (c) 2007 Intel Corporation KVM support. + * Xuefei Xu (Anthony Xu) ([EMAIL PROTECTED]) + * Xiantao Zhang ([EMAIL PROTECTED]) + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + * + */ + +#include + +#include "vcpu.h" + +static void vlsapic_write_xtp(VCPU *v, uint8_t val) +{ + VLSAPIC_XTP(v) = val; +} + +/* + * LSAPIC OFFSET + */ +#define PIB_LOW_HALF(ofst) !(ofst & (1 << 20)) +#define PIB_OFST_INTA 0x1E +#define PIB_OFST_XTP 0x1E0008 + +/* + * execute write IPI op. 
+ */ +static void vlsapic_write_ipi(VCPU *vcpu, uint64_t addr, uint64_t data) +{ + struct exit_ctl_data *p = ¤t_vcpu->arch.exit_data; + unsigned long psr; + + local_irq_save(psr); + + p->exit_reason = EXIT_REASON_IPI; + p->u.ipi_data.addr.val = addr; + p->u.ipi_data.data.val = data; + vmm_transition(current_vcpu); + + local_irq_restore(psr); + +} + +void lsapic_write(VCPU *v, unsigned long addr, unsigned long length, + unsigned long val) +{ + addr &= (PIB_SIZE - 1); + + switch (addr) { + case PIB_OFST_INTA: + /*panic_domain(NULL, "Undefined write on PIB INTA\n");*/ + panic_vm(v); + break; + case PIB_OFST_XTP: + if (length == 1) { + vlsapic_write_xtp(v, val); + } else { + /*panic_domain(NULL, + "Undefined write on PIB XTP\n");*/ + panic_vm(v); + } + break; + default: + if (PIB_LOW_HALF(addr)) { + /*lower half */ + if (length != 8) + /*panic_domain(NULL, + "Can't LHF write with size %ld!\n", + length);*/ + panic_vm(v); + else + vlsapic_write_ipi(v, addr, val); + } else { /* upper half + printk("IPI-UHF write %lx\n",addr);*/ + panic_vm(v); + } + break; + } +} + +unsigned long lsapic_read(VCPU *v, unsigned long addr, + unsigned long length) +{ + uint64_t result = 0; + + addr &= (PIB_SIZE - 1); + + switch (addr) { + case PIB_OFST_INTA: + if (length == 1) /* 1 byte load */ + ; /* There is no i8259, there is no INTA access*/ + else + /*panic_domain(NULL,"Undefined read on PIB INTA\n"); */ + panic_vm(v); + + break; + case PIB_OFST_XTP: + if (length == 1) { + result = VLSAPIC_XTP(v); + /* printk("read xtp %lx\n", result); */ + } else { + /*panic_domain(NULL, + "Undefined read on PIB XTP\n");*/ + panic_vm(v); + } + break; + default: + panic_vm(v); + break; + } + return result; +} + +static void mmio_access(VCPU *vcpu, u64 src_pa, u64 *dest, + u16 s, int ma, int dir) +{ + unsigned long iot; + struct exit_ctl_data *p = &vcpu->arch.exit_data; + unsigned long psr; + + iot = __gpfn_is_io(src_pa >> PAGE_SHIFT); + + local_irq
[kvm-devel] [01/17][PATCH] Add API for allocating dynamic TR resource.
Refined according to Tony's comments. >From 837f0508a617ea0386808de9fd0f42ef4aefe5e0 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Thu, 27 Mar 2008 10:18:29 +0800 Subject: [PATCH] Add API for allocating TR resouce. Dynamic TR resouce should be managed in an uniform way. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> Signed-off-by: Anthony Xu<[EMAIL PROTECTED]> --- arch/ia64/kernel/mca.c | 50 + arch/ia64/kernel/mca_asm.S |5 ++ arch/ia64/mm/tlb.c | 170 include/asm-ia64/kregs.h |3 + include/asm-ia64/tlb.h | 12 +++ 5 files changed, 240 insertions(+), 0 deletions(-) diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c index 6c18221..51d0c26 100644 --- a/arch/ia64/kernel/mca.c +++ b/arch/ia64/kernel/mca.c @@ -97,6 +97,7 @@ #include #include +#include #include "mca_drv.h" #include "entry.h" @@ -112,8 +113,10 @@ DEFINE_PER_CPU(u64, ia64_mca_data); /* == __per_cpu_mca[smp_processor_id()] */ DEFINE_PER_CPU(u64, ia64_mca_per_cpu_pte); /* PTE to map per-CPU area */ DEFINE_PER_CPU(u64, ia64_mca_pal_pte); /* PTE to map PAL code */ DEFINE_PER_CPU(u64, ia64_mca_pal_base);/* vaddr PAL code granule */ +DEFINE_PER_CPU(u64, ia64_mca_tr_reload); /* Flag for TR reload */ unsigned long __per_cpu_mca[NR_CPUS]; +extern struct ia64_tr_entry __per_cpu_idtrs[NR_CPUS][2][IA64_TR_ALLOC_MAX]; /* In mca_asm.S */ extern voidia64_os_init_dispatch_monarch (void); @@ -1182,6 +1185,49 @@ all_in: return; } +/* mca_insert_tr + * + * Switch rid when TR reload and needed! 
+ * iord: 1: itr, 2: itr; + * +*/ +static void mca_insert_tr(u64 iord) +{ + + int i; + u64 old_rr; + struct ia64_tr_entry *p; + unsigned long psr; + int cpu = smp_processor_id(); + + psr = ia64_clear_ic(); + for (i = IA64_TR_ALLOC_BASE; i < IA64_TR_ALLOC_MAX; i++) { + p = &__per_cpu_idtrs[cpu][iord-1][i]; + if (p->pte&0x1) { + old_rr = ia64_get_rr(p->ifa); + if (old_rr != p->rr) { + ia64_set_rr(p->ifa, p->rr); + ia64_srlz_d(); + } + ia64_ptr(iord, p->ifa, p->itir >> 2); + ia64_srlz_i(); + if (iord & 0x1) { + ia64_itr(0x1, i, p->ifa, p->pte, p->itir >> 2); + ia64_srlz_i(); + } + if (iord & 0x2) { + ia64_itr(0x2, i, p->ifa, p->pte, p->itir >> 2); + ia64_srlz_i(); + } + if (old_rr != p->rr) { + ia64_set_rr(p->ifa, old_rr); + ia64_srlz_d(); + } + } + } + ia64_set_psr(psr); +} + /* * ia64_mca_handler * @@ -1271,6 +1317,10 @@ ia64_mca_handler(struct pt_regs *regs, struct switch_stack *sw, monarch_cpu = -1; #endif } + if (__get_cpu_var(ia64_mca_tr_reload)) { + mca_insert_tr(0x1); /*Reload dynamic itrs*/ + mca_insert_tr(0x2); /*Reload dynamic itrs*/ + } if (notify_die(DIE_MCA_MONARCH_LEAVE, "MCA", regs, (long)&nd, 0, recover) == NOTIFY_STOP) ia64_mca_spin(__func__); diff --git a/arch/ia64/kernel/mca_asm.S b/arch/ia64/kernel/mca_asm.S index 8bc7d25..a06d465 100644 --- a/arch/ia64/kernel/mca_asm.S +++ b/arch/ia64/kernel/mca_asm.S @@ -219,8 +219,13 @@ ia64_reload_tr: mov r20=IA64_TR_CURRENT_STACK ;; itr.d dtr[r20]=r16 + GET_THIS_PADDR(r2, ia64_mca_tr_reload) + mov r18 = 1 ;; srlz.d + ;; + st8 [r2] =r18 + ;; done_tlb_purge_and_reload: diff --git a/arch/ia64/mm/tlb.c b/arch/ia64/mm/tlb.c index 655da24..d7f8206 100644 --- a/arch/ia64/mm/tlb.c +++ b/arch/ia64/mm/tlb.c @@ -26,6 +26,8 @@ #include #include #include +#include +#include static struct { unsigned long mask; /* mask of supported purge page-sizes */ @@ -39,6 +41,10 @@ struct ia64_ctx ia64_ctx = { }; DEFINE_PER_CPU(u8, ia64_need_tlb_flush); +DEFINE_PER_CPU(u8, ia64_tr_num); /*Number of TR slots in current processor*/ 
+DEFINE_PER_CPU(u8, ia64_tr_used); /*Max Slot number used by kernel*/ + +struct ia64_tr_entry __per_cpu_idtrs[NR_CPUS][2][IA64_TR_ALLOC_MAX]; /* * Initializes the ia64_ctx.bitmap array based on max_ctx+1. @@ -190,6 +196,9 @@ ia64_tlb_init (void) ia64_ptce_info_t uninitialized_var(ptce_info); /* GCC be quiet */ unsigned long tr_pgbits; long status; + pal_vm_info_1_u_t vm_info_1; + pal_vm_info_2_u_t vm_info_2; + int cpu = smp_processor_id(); if ((status = ia64_pal_vm_page_size(&tr_pgbits, &pu
[kvm-devel] [03/15][PATCH] kvm/ia64: Add header files for kvm/ia64.
>From cf64ba3c5464b7da6c6fb2871b8424a08ade3ab2 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Fri, 28 Mar 2008 09:48:10 +0800 Subject: [PATCH] kvm/ia64: Add header files for kvm/ia64. Three header files are added: asm-ia64/kvm.h asm-ia64/kvm_host.h asm-ia64/kvm_para.h Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- include/asm-ia64/kvm.h | 205 + include/asm-ia64/kvm_host.h | 530 +++ include/asm-ia64/kvm_para.h | 29 +++ 3 files changed, 764 insertions(+), 0 deletions(-) create mode 100644 include/asm-ia64/kvm.h create mode 100644 include/asm-ia64/kvm_host.h create mode 100644 include/asm-ia64/kvm_para.h diff --git a/include/asm-ia64/kvm.h b/include/asm-ia64/kvm.h new file mode 100644 index 000..8c70dd6 --- /dev/null +++ b/include/asm-ia64/kvm.h @@ -0,0 +1,205 @@ +#ifndef __ASM_KVM_IA64_H +#define __ASM_KVM_IA64_H + +/* + * asm-ia64/kvm.h: kvm structure definitions for ia64 + * + * Copyright (C) 2007 Xiantao Zhang <[EMAIL PROTECTED]> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + * + */ + +#include +#include + +#include + +/* Architectural interrupt line count. 
*/ +#define KVM_NR_INTERRUPTS 256 + +#define KVM_IOAPIC_NUM_PINS 24 + +struct kvm_ioapic_state { + __u64 base_address; + __u32 ioregsel; + __u32 id; + __u32 irr; + __u32 pad; + union { + __u64 bits; + struct { + __u8 vector; + __u8 delivery_mode:3; + __u8 dest_mode:1; + __u8 delivery_status:1; + __u8 polarity:1; + __u8 remote_irr:1; + __u8 trig_mode:1; + __u8 mask:1; + __u8 reserve:7; + __u8 reserved[4]; + __u8 dest_id; + } fields; + } redirtbl[KVM_IOAPIC_NUM_PINS]; +}; + +#define KVM_IRQCHIP_PIC_MASTER 0 +#define KVM_IRQCHIP_PIC_SLAVE1 +#define KVM_IRQCHIP_IOAPIC 2 + +#define KVM_CONTEXT_SIZE 8*1024 + +typedef union context { + /* 8K size */ + chardummy[KVM_CONTEXT_SIZE]; + struct { + unsigned long psr; + unsigned long pr; + unsigned long caller_unat; + unsigned long pad; + unsigned long gr[32]; + unsigned long ar[128]; + unsigned long br[8]; + unsigned long cr[128]; + unsigned long rr[8]; + unsigned long ibr[8]; + unsigned long dbr[8]; + unsigned long pkr[8]; + struct ia64_fpreg fr[128]; + }; +} context_t; + +typedef struct thash_data { + union { + struct { + unsigned long p: 1; /* 0 */ + unsigned long rv1 : 1; /* 1 */ + unsigned long ma : 3; /* 2-4 */ + unsigned long a: 1; /* 5 */ + unsigned long d: 1; /* 6 */ + unsigned long pl : 2; /* 7-8 */ + unsigned long ar : 3; /* 9-11 */ + unsigned long ppn : 38; /* 12-49 */ + unsigned long rv2 : 2; /* 50-51 */ + unsigned long ed : 1; /* 52 */ + unsigned long ig1 : 11; /* 53-63 */ + }; + struct { + unsigned long __rv1 : 53; /* 0-52 */ + unsigned long contiguous : 1; /*53 */ + unsigned long tc : 1; /* 54 TR or TC */ + unsigned long cl : 1; + /* 55 I side or D side cache line */ + unsigned long len : 4; /* 56-59 */ + unsigned long io : 1; /* 60 entry is for io or not */ + unsigned long nomap : 1; + /* 61 entry cann't be inserted into machine TLB.*/ + unsigned long checked : 1; + /* 62 for VTLB/VHPT sanity check */ + unsigned long invalid : 1; + /*
[kvm-devel] [16/17] [PATCH] kvm:ia64 Enable kvm build for ia64
>From 0639faa4a3347771e793e33652667272cc140240 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Fri, 28 Mar 2008 14:58:47 +0800 Subject: [PATCH] kvm:ia64 Enable kvm build for ia64 Update the related Makefile and KConfig for kvm build Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/Kconfig |3 ++ arch/ia64/Makefile |1 + arch/ia64/kvm/Kconfig | 46 arch/ia64/kvm/Makefile | 61 4 files changed, 111 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/Kconfig create mode 100644 arch/ia64/kvm/Makefile diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig index 8fa3faf..a7bb62e 100644 --- a/arch/ia64/Kconfig +++ b/arch/ia64/Kconfig @@ -19,6 +19,7 @@ config IA64 select HAVE_OPROFILE select HAVE_KPROBES select HAVE_KRETPROBES + select HAVE_KVM default y help The Itanium Processor Family is Intel's 64-bit successor to @@ -589,6 +590,8 @@ config MSPEC source "fs/Kconfig" +source "arch/ia64/kvm/Kconfig" + source "lib/Kconfig" # diff --git a/arch/ia64/Makefile b/arch/ia64/Makefile index f1645c4..ec4cca4 100644 --- a/arch/ia64/Makefile +++ b/arch/ia64/Makefile @@ -57,6 +57,7 @@ core-$(CONFIG_IA64_GENERIC) += arch/ia64/dig/ core-$(CONFIG_IA64_HP_ZX1) += arch/ia64/dig/ core-$(CONFIG_IA64_HP_ZX1_SWIOTLB) += arch/ia64/dig/ core-$(CONFIG_IA64_SGI_SN2)+= arch/ia64/sn/ +core-$(CONFIG_KVM) += arch/ia64/kvm/ drivers-$(CONFIG_PCI) += arch/ia64/pci/ drivers-$(CONFIG_IA64_HP_SIM) += arch/ia64/hp/sim/ diff --git a/arch/ia64/kvm/Kconfig b/arch/ia64/kvm/Kconfig new file mode 100644 index 000..d2e54b9 --- /dev/null +++ b/arch/ia64/kvm/Kconfig @@ -0,0 +1,46 @@ +# +# KVM configuration +# +config HAVE_KVM + bool + +menuconfig VIRTUALIZATION + bool "Virtualization" + depends on HAVE_KVM || IA64 + default y + ---help--- + Say Y here to get to see options for using your Linux host to run other + operating systems inside virtual machines (guests). + This option alone does not add any kernel code. 
+ + If you say N, all options in this submenu will be skipped and disabled. + +if VIRTUALIZATION + +config KVM + tristate "Kernel-based Virtual Machine (KVM) support" + depends on HAVE_KVM && EXPERIMENTAL + select PREEMPT_NOTIFIERS + select ANON_INODES + ---help--- + Support hosting fully virtualized guest machines using hardware + virtualization extensions. You will need a fairly recent + processor equipped with virtualization extensions. You will also + need to select one or more of the processor modules below. + + This module provides access to the hardware capabilities through + a character device node named /dev/kvm. + + To compile this as a module, choose M here: the module + will be called kvm. + + If unsure, say N. + +config KVM_INTEL + tristate "KVM for Intel Itanium 2 processors support" + depends on KVM && m + ---help--- + Provides support for KVM on Itanium 2 processors equipped with the VT + extensions. + +endif # VIRTUALIZATION diff --git a/arch/ia64/kvm/Makefile b/arch/ia64/kvm/Makefile new file mode 100644 index 000..cde7d8e --- /dev/null +++ b/arch/ia64/kvm/Makefile @@ -0,0 +1,61 @@ +#This Make file is to generate asm-offsets.h and build source. 
+# + +#Generate asm-offsets.h for vmm module build +offsets-file := asm-offsets.h + +always := $(offsets-file) +targets := $(offsets-file) +targets += arch/ia64/kvm/asm-offsets.s +clean-files := $(addprefix $(objtree)/,$(targets) $(obj)/memcpy.S $(obj)/memset.S) + +# Default sed regexp - multiline due to syntax constraints +define sed-y + "/^->/{s:^->\([^ ]*\) [\$$#]*\([^ ]*\) \(.*\):#define \1 \2 /* \3 */:; s:->::; p;}" +endef + +quiet_cmd_offsets = GEN $@ +define cmd_offsets + (set -e; \ +echo "#ifndef __ASM_KVM_OFFSETS_H__"; \ +echo "#define __ASM_KVM_OFFSETS_H__"; \ +echo "/*"; \ +echo " * DO NOT MODIFY."; \ +echo " *"; \ +echo " * This file was generated by Makefile"; \ +echo " *"; \ +echo " */"; \ +echo ""; \ +sed -ne $(sed-y) $<; \ +echo ""; \ +echo "#endif" ) > $@ +endef +# We use internal rules to avoid the "is up to date" message from make +arch/ia64/kvm/asm-offsets.s: arch/ia64/kvm/asm-offsets.c + $(call if_changed_dep,cc_s_c) + +$(obj)/$(offsets-file): arch/ia64/kvm/asm-offsets.s + $(call cmd,offsets) + +# +# Makefile for Kernel-based Virtual Machine module +# + +EXTRA_CFLAGS += -Ivirt/kvm -Iarch/ia64/kvm/ + +$(addprefix $(objtree)/,$(obj)/memcpy.S $(obj)/memset.S): + $(shell ln -snf ../lib/memcpy.S $(src)/memcpy.S) + $(shell ln -snf ../lib/memset.S $(src)/memset.
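The sed program in the Makefile above turns the "->SYM value comment" markers that asm-offsets.s emits into #define lines for asm-offsets.h. Feeding it one sample line shows the transformation (the VMM_TASK_SIZE value here is made up):

```shell
# The exact sed program from the Makefile, applied to one sample
# "->" marker line (the 64 is illustrative).
echo '->VMM_TASK_SIZE $64 sizeof(struct kvm_vcpu)' | \
  sed -ne '/^->/{s:^->\([^ ]*\) [$#]*\([^ ]*\) \(.*\):#define \1 \2 /* \3 */:; s:->::; p;}'
# prints: #define VMM_TASK_SIZE 64 /* sizeof(struct kvm_vcpu) */
```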
[kvm-devel] [06/17][PATCH] kvm/ia64: VMM module interfaces.
>From 6af8b4d7ca1d4ec40cc634cf8b0d5ae8d2dc53ce Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 13:44:37 +0800 Subject: [PATCH] kvm/ia64: VMM module interfaces. vmm.c adds the interfaces with kvm/module, and initialize global data area. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/vmm.c | 66 +++ 1 files changed, 66 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/vmm.c diff --git a/arch/ia64/kvm/vmm.c b/arch/ia64/kvm/vmm.c new file mode 100644 index 000..2275bf4 --- /dev/null +++ b/arch/ia64/kvm/vmm.c @@ -0,0 +1,66 @@ +/* + * vmm.c: vmm module interface with kvm module + * + * Copyright (c) 2007, Intel Corporation. + * + * Xiantao Zhang ([EMAIL PROTECTED]) + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. 
+ */ + + +#include +#include + +#include "vcpu.h" + +MODULE_AUTHOR("Intel"); +MODULE_LICENSE("GPL"); + +extern char kvm_ia64_ivt; +extern fpswa_interface_t *vmm_fpswa_interface; + +struct kvm_vmm_info vmm_info = { + .module = THIS_MODULE, + .vmm_entry = vmm_entry, + .tramp_entry = vmm_trampoline, + .vmm_ivt = (unsigned long)&kvm_ia64_ivt, +}; + +static int __init kvm_vmm_init(void) +{ + + vmm_fpswa_interface = fpswa_interface; + + /*Register vmm data to kvm side*/ + return kvm_init(&vmm_info, 1024, THIS_MODULE); +} + +static void __exit kvm_vmm_exit(void) +{ + kvm_exit(); + return ; +} + +void vmm_spin_lock(spinlock_t *lock) +{ + _vmm_raw_spin_lock(lock); +} + +void vmm_spin_unlock(spinlock_t *lock) +{ + _vmm_raw_spin_unlock(lock); +} +module_init(kvm_vmm_init) +module_exit(kvm_vmm_exit) -- 1.5.2 0006-kvm-ia64-VMM-module-interfaces.patch Description: 0006-kvm-ia64-VMM-module-interfaces.patch - Check out the new SourceForge.net Marketplace. It's the best place to buy or sell services for just about anything Open Source. http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
[kvm-devel] [17/17][PATCH] kvm/ia64: How to boot up guests on kvm/ia64
>From 517a89fd248193f6a7049832e2c1b811afe98f96 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 13:57:33 +0800 Subject: [PATCH] kvm/ia64: How to boot up guests on kvm/ia64 Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- Documentation/ia64/kvm-howto.txt | 74 ++ 1 files changed, 74 insertions(+), 0 deletions(-) create mode 100644 Documentation/ia64/kvm-howto.txt diff --git a/Documentation/ia64/kvm-howto.txt b/Documentation/ia64/kvm-howto.txt new file mode 100644 index 000..ad853b9 --- /dev/null +++ b/Documentation/ia64/kvm-howto.txt @@ -0,0 +1,74 @@ + Guide: How to boot up guests on kvm/ia64 + +1. Get the kvm source from git.kernel.org. + Userspace source: + git clone git://git.kernel.org/pub/scm/virt/kvm/kvm-userspace.git + Kernel Source: + git clone git://git.kernel.org/pub/scm/linux/kernel/git/xiantao/kvm-ia64.git + +2. Compile the source code. + 2.1 Compile userspace code: + (1)cd ./kvm-userspace + (2)./configure + (3)cd kernel + (4)make sync LINUX=$kernel_dir (kernel_dir is the directory of the kernel source.) + (5)cd .. + (6)make qemu + (7)cd qemu; make install + + 2.2 Compile kernel source code: + (1) cd ./$kernel_dir + (2) make menuconfig + (3) Enter the virtualization option, and choose kvm. + (4) make + (5) Once (4) is done, make modules_install + (6) Make an initrd, and use the new kernel to reboot the host machine. + (7) Once (6) is done, cd $kernel_dir/arch/ia64/kvm + (8) insmod kvm.ko; insmod kvm-intel.ko + +Note: For step 2, please make sure that host page size == TARGET_PAGE_SIZE of qemu; otherwise, it may fail. + +3. Get the guest firmware named Flash.fd, and put it in the right place: + (1) If you have the guest firmware (binary) released by Intel Corp for Xen, you can use it directly. + (2) If you want to build the guest firmware from source code, please download the source via + hg clone http://xenbits.xensource.com/ext/efi-vfirmware.hg + Use the guide in the source to build the open Guest Firmware.
+ (3) Rename it to Flash.fd, and copy it to /usr/local/share/qemu +Note: For step 3, KVM uses the same guest firmware that Xen uses. + +4. Boot up Linux or Windows guests: + 4.1 Create or install an image for guest boot. If you have Xen experience, it should be easy. + + 4.2 Boot up guests using the following command. + /usr/local/bin/qemu-system-ia64 -smp xx -m 512 -hda $your_image + (xx is the number of virtual processors for the guest; the current maximum value is 4) + +5. Known possible issues on some platforms with old firmware +If you meet strange host crashes, you may try to solve them through either of the following methods. +(1): Upgrade your firmware to the latest one. + +(2): Apply the patch below to the kernel source. +diff --git a/arch/ia64/kernel/pal.S b/arch/ia64/kernel/pal.S +index 0b53344..f02b0f7 100644 +--- a/arch/ia64/kernel/pal.S b/arch/ia64/kernel/pal.S +@@ -84,7 +84,8 @@ GLOBAL_ENTRY(ia64_pal_call_static) + mov ar.pfs = loc1 + mov rp = loc0 + ;; +- srlz.d // seralize restoration of psr.l ++ srlz.i // seralize restoration of psr.l ++ ;; + br.ret.sptk.many b0 + END(ia64_pal_call_static) + +6. Bug report: + If you find any issues when using kvm/ia64, please post the bug info to the kvm-ia64-devel mailing list. + https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel/ + +Thanks for your interest! Let's work together, and make kvm/ia64 stronger and stronger! + + + Xiantao Zhang <[EMAIL PROTECTED]> + 2008.3.10 -- 1.5.2 0017-kvm-ia64-How-to-boot-up-guests-on-kvm-ia64.patch Description: 0017-kvm-ia64-How-to-boot-up-guests-on-kvm-ia64.patch
[kvm-devel] [15/17][PATCH] kvm/ia64: Add kvm sal/pal virtualization support.
>From ba064fc79c5d8577543ae6e4a201f622f0c4b777 Mon Sep 17 00:00:00 2001 From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Wed, 12 Mar 2008 13:42:18 +0800 Subject: [PATCH] kvm/ia64: Add kvm sal/pal virtulization support. Some sal/pal calls would be traped to kvm for virtulization from guest firmware. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/kvm_fw.c | 500 1 files changed, 500 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/kvm_fw.c diff --git a/arch/ia64/kvm/kvm_fw.c b/arch/ia64/kvm/kvm_fw.c new file mode 100644 index 000..077d6e7 --- /dev/null +++ b/arch/ia64/kvm/kvm_fw.c @@ -0,0 +1,500 @@ +/* + * PAL/SAL call delegation + * + * Copyright (c) 2004 Li Susie <[EMAIL PROTECTED]> + * Copyright (c) 2005 Yu Ke <[EMAIL PROTECTED]> + * Copyright (c) 2007 Xiantao Zhang <[EMAIL PROTECTED]> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + */ + +#include +#include + +#include "vti.h" +#include "misc.h" + +#include +#include +#include + +/* + * Handy macros to make sure that the PAL return values start out + * as something meaningful. 
+ */ +#define INIT_PAL_STATUS_UNIMPLEMENTED(x) \ + { \ + x.status = PAL_STATUS_UNIMPLEMENTED;\ + x.v0 = 0; \ + x.v1 = 0; \ + x.v2 = 0; \ + } + +#define INIT_PAL_STATUS_SUCCESS(x) \ + { \ + x.status = PAL_STATUS_SUCCESS; \ + x.v0 = 0; \ + x.v1 = 0; \ + x.v2 = 0; \ +} + +static void kvm_get_pal_call_data(struct kvm_vcpu *vcpu, + u64 *gr28, u64 *gr29, u64 *gr30, u64 *gr31) { + struct exit_ctl_data *p; + + if (vcpu) { + p = &vcpu->arch.exit_data; + if (p->exit_reason == EXIT_REASON_PAL_CALL) { + *gr28 = p->u.pal_data.gr28; + *gr29 = p->u.pal_data.gr29; + *gr30 = p->u.pal_data.gr30; + *gr31 = p->u.pal_data.gr31; + return ; + } + } + printk(KERN_DEBUG"Error occurs in kvm_get_pal_call_data!!\n"); +} + +static void set_pal_result(struct kvm_vcpu *vcpu, + struct ia64_pal_retval result) { + + struct exit_ctl_data *p; + + p = kvm_get_exit_data(vcpu); + if (p && p->exit_reason == EXIT_REASON_PAL_CALL) { + p->u.pal_data.ret = result; + return ; + } + INIT_PAL_STATUS_UNIMPLEMENTED(p->u.pal_data.ret); +} + +static void set_sal_result(struct kvm_vcpu *vcpu, + struct sal_ret_values result) { + struct exit_ctl_data *p; + + p = kvm_get_exit_data(vcpu); + if (p && p->exit_reason == EXIT_REASON_SAL_CALL) { + p->u.sal_data.ret = result; + return ; + } + printk(KERN_WARNING"Error occurs!!!\n"); +} + +struct cache_flush_args { + u64 cache_type; + u64 operation; + u64 progress; + long status; +}; + +cpumask_t cpu_cache_coherent_map; + +static void remote_pal_cache_flush(void *data) +{ + struct cache_flush_args *args = data; + long status; + u64 progress = args->progress; + + status = ia64_pal_cache_flush(args->cache_type, args->operation, + &progress, NULL); + if (status != 0) + args->status = status; +} + +static struct ia64_pal_retval pal_cache_flush(struct kvm_vcpu *vcpu) +{ + u64 gr28, gr29, gr30, gr31; + struct ia64_pal_retval result = {0, 0, 0, 0}; + struct cache_flush_args args = {0, 0, 0, 0}; + long psr; + + gr28 = gr29 = gr30 = gr31 = 0; + kvm_get_pal_call_data(vcpu, &gr28, &gr29, 
&gr30, &gr31); + + if (gr31 != 0) + printk(KERN_ERR"vcpu:%p called cache_flush error!\n", vcpu); + + /* Always call Host Pal in int=1 */ + gr30 &= ~PAL_CACHE_FLUSH_CHK_INTRS; + args.cache_type = gr29; + args.operation = gr30; + smp_call_function(remote_pal_cac
[kvm-devel] [Patch][00/17] kvm-ia64 for kernel V6
Hi, This patchset enables kvm on the ia64 platform, and it targets Avi's pull to mainline. Please review. If you don't have concerns, I will ask Avi to pull it into kvm.git. Thanks! Also, you can get it from git://git.kernel.org/pub/scm/linux/kernel/git/xiantao/kvm-ia64.git kvm-ia64-mc6 Tony, the first two patches touch kernel code; please review them again and ensure they are good for the kernel. Thanks :) Xiantao
Documentation/ia64/kvm-howto.txt |   71 +
arch/ia64/Kconfig                |    6
arch/ia64/Makefile               |    1
arch/ia64/kernel/mca.c           |   50
arch/ia64/kernel/mca_asm.S       |    5
arch/ia64/kernel/smp.c           |   84 +
arch/ia64/kvm/Kconfig            |   43
arch/ia64/kvm/Makefile           |   61 +
arch/ia64/kvm/asm-offsets.c      |  251
arch/ia64/kvm/ia64_regs.h        |  234
arch/ia64/kvm/kvm_fw.c           |  500 +
arch/ia64/kvm/kvm_ia64.c         | 1789
arch/ia64/kvm/kvm_minstate.h     |  273
arch/ia64/kvm/lapic.h            |   27
arch/ia64/kvm/misc.h             |   93 +
arch/ia64/kvm/mmio.c             |  349 ++
arch/ia64/kvm/optvfault.S        |  918
arch/ia64/kvm/process.c          |  979 +
arch/ia64/kvm/trampoline.S       | 1040 ++
arch/ia64/kvm/vcpu.c             | 2145 +++
arch/ia64/kvm/vcpu.h             |  749 +
arch/ia64/kvm/vmm.c              |   66 +
arch/ia64/kvm/vmm_ivt.S          | 1425 +
arch/ia64/kvm/vti.h              |  290 +
arch/ia64/kvm/vtlb.c             |  631 +++
arch/ia64/mm/tlb.c               |  170 +++
include/asm-ia64/kregs.h         |    3
include/asm-ia64/kvm.h           |  205 +++
include/asm-ia64/kvm_host.h      |  530 +
include/asm-ia64/kvm_para.h      |   29
include/asm-ia64/tlb.h           |   12
include/linux/smp.h              |    3
32 files changed, 13014 insertions(+), 18 deletions(-)
Re: [kvm-devel] [kvm-ppc-devel] [PATCH] Move kvm_get_pit to libkvm.c common code
Avi Kivity wrote: > Hollis Blanchard wrote: >> >> Don't compile kvm_*_pit() on architectures whose currently supported >> platforms do not contain a PIT. >> >> Signed-off-by: Hollis Blanchard <[EMAIL PROTECTED]> >> >> diff --git a/libkvm/libkvm.h b/libkvm/libkvm.h >> --- a/libkvm/libkvm.h >> +++ b/libkvm/libkvm.h >> @@ -539,6 +539,7 @@ int kvm_pit_in_kernel(kvm_context_t kvm) >> >> #ifdef KVM_CAP_PIT >> >> +#if defined(__i386__) || defined(__x86_64__) || defined(__ia64__) >> /*! >> * \brief Get in kernel PIT of the virtual domain >> * >> @@ -562,6 +563,8 @@ int kvm_set_pit(kvm_context_t kvm, struc >> >> #endif >> >> +#endif >> + >> #ifdef KVM_CAP_VAPIC > > ia64 doesn't have an in-kernel pit? (yet?) IA64 doesn't have a PIT on the platform. Xiantao
Re: [kvm-devel] [kvm-ia64-devel] Cross-arch support for make sync in userspace
Avi Kivity wrote: > Zhang, Xiantao wrote: >> Avi Kivity wrote: >> > > I see. ./configure --with-patched-kernel should work for that, but I > have no issue with copying include/asm-ia64 either. Copying would be ugly, since it needs extra documentation to describe. If --with-patched-kernel can call a script, that should be fine as well. Xiantao
Re: [kvm-devel] [kvm-ia64-devel] Cross-arch support for make sync in userspace
Avi Kivity wrote: > Zhang, Xiantao wrote: >> Avi Kivity wrote: >> >>> Zhang, Xiantao wrote: >>> >>>> Hi, Avi >>>> Currently, make sync in userspace only syncs x86-specific heads >>>> from kernel source due to hard-coded in Makefile. >>>> Do you have plan to provide cross-arch support for that? >>>> >>> No plans. I'll apply patches though. But don't you need kernel >>> changes which make it impossible to run kvm-ia64 on older kernels? >>> >>> >>>> Other archs may >>>> need it for save/restore :) >>>> >>>> >>> Save/restore? Don't understand. >>> >> >> You know, currently make sync would sync header files to userspace >> from include/asm-x86/, so kvm.h and kvm_host.h are always synced >> from there for any archs. Since some arch-specific stuff for >> save/restore should be defined in include/asm-$arch/(kvm.h; >> kvm_host.h), so ia64 or other archs should need it when they >> implement save/restore. > > I see. But is 'make sync' actually useful for you? Can you run > kvm-ia64 on top of 2.6.24, which doesn't include your ia64 core API > changes? Now we don't intend to provide support for kernel which is older than 2.6.24. And we don't want to compile kernel module in userspace. But at least we need to ensure "make sync" work first, because we need it to guarantee Qemu to use right header files for its compilation. Xiantao - This SF.net email is sponsored by: Microsoft Defy all challenges. Microsoft(R) Visual Studio 2008. http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/ ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
Re: [kvm-devel] [kvm-ia64-devel] Cross-arch support for make sync in userspace
Avi Kivity wrote: > Zhang, Xiantao wrote: >> Hi, Avi >> Currently, make sync in userspace only syncs x86-specific heads from >> kernel source due to hard-coded in Makefile. >> Do you have plan to provide cross-arch support for that? > > No plans. I'll apply patches though. But don't you need kernel > changes which make it impossible to run kvm-ia64 on older kernels? > >> Other archs may >> need it for save/restore :) >> > > Save/restore? Don't understand. You know, currently make sync would sync header files to userspace from include/asm-x86/, so kvm.h and kvm_host.h are always synced from there for any archs. Since some arch-specific stuff for save/restore should be defined in include/asm-$arch/(kvm.h; kvm_host.h), so ia64 or other archs should need it when they implement save/restore. Xiantao - This SF.net email is sponsored by: Microsoft Defy all challenges. Microsoft(R) Visual Studio 2008. http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/ ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
[kvm-devel] Comment out qemu_system_cpu_hot_add for ia64
From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Thu, 20 Mar 2008 10:17:29 +0800 Subject: [PATCH] kvm:qemu: qemu_system_cpu_hot_add not supported for ia64. Comment it out first for ia64 build. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- qemu/hw/acpi.c |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/qemu/hw/acpi.c b/qemu/hw/acpi.c index ae74f32..35641a0 100644 --- a/qemu/hw/acpi.c +++ b/qemu/hw/acpi.c @@ -718,7 +718,7 @@ static void disable_processor(struct gpe_regs *g, int cpu) g->en |= 1; g->down |= (1 << cpu); } - +#if defined(TARGET_I386) || defined(TARGET_X86_64) void qemu_system_cpu_hot_add(int cpu, int state) { CPUState *env; @@ -743,7 +743,7 @@ void qemu_system_cpu_hot_add(int cpu, int state) disable_processor(&gpe, cpu); qemu_set_irq(pm_state->irq, 0); } - +#endif static void enable_device(struct pci_status *p, struct gpe_regs *g, int slot) { g->sts |= 2; -- 1.5.2 0001-kvm-qemu-qemu_system_cpu_hot_add-not-supported-for.patch Description: 0001-kvm-qemu-qemu_system_cpu_hot_add-not-supported-for.patch - This SF.net email is sponsored by: Microsoft Defy all challenges. Microsoft(R) Visual Studio 2008. http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
[kvm-devel] Cross-arch support for make sync in userspace
Hi, Avi Currently, make sync in userspace only syncs x86-specific headers from the kernel source because the paths are hard-coded in the Makefile. Do you have a plan to provide cross-arch support for that? Other archs may need it for save/restore :) Thanks Xiantao
Re: [kvm-devel] [kvm-ia64-devel] kvm-ia64.git is created on master.kernel.org!
Akio Takebe wrote: > Hi, Xiantao > >> We have created kvm-ia64.git on master.kernel.org for open >> development, and the latest source is also included in this >> repository. So you can clone and make contributions to it now. >> Cheers!! >> In this repository, I created the branch kvm-ia64-mc4 to hold the >> patchset. Now, the whole community had better work on the branch >> together for reviewing code, doing cleanup, and adding the new >> features. If you have any contribution or questions, please feel >> free to submit to the kvm-ia64 mailing >> list(https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel). > Wow, greate! > Can we use the same userspace tree as x86? Yes, but seems it is broken for ia64 side due to latest merge with qemu upstream. > Are save/restore already available? It needs userspace patch. I enabled save&restore without log dirty mechanism, but it breaks after adding log dirty. So need more debug effort on it. Xiantao - This SF.net email is sponsored by: Microsoft Defy all challenges. Microsoft(R) Visual Studio 2008. http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/ ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
[kvm-devel] kvm-ia64.git is created on master.kernel.org!
Hi, guys We have created kvm-ia64.git on master.kernel.org for open development, and the latest source is also included in this repository. So you can clone and make contributions to it now. Cheers!! In this repository, I created the branch kvm-ia64-mc4 to hold the patchset. Now, the whole community had better work on the branch together for reviewing code, doing cleanup, and adding new features. If you have any contributions or questions, please feel free to submit them to the kvm-ia64 mailing list (https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel). BTW, since the 2.6.26 merge window is coming, we have to prepare a clean and mature tree before it opens. Welcome to join kvm/ia64 development! Thanks for any contributions! Xiantao
Re: [kvm-devel] [kvm-ia64-devel] [PATCH] Using kzalloc to avoid allocating kvm_regs from kernel stack
Updated one attached. Sorry for the inconvenience. Xiantao 0001-kvm-Using-kzalloc-to-avoid-allocating-kvm_regs-from.patch Description: 0001-kvm-Using-kzalloc-to-avoid-allocating-kvm_regs-from.patch
Re: [kvm-devel] [PATCH] Using kzalloc to avoid allocating kvm_regs from kernel stack
Please use the new one. Add the check for failed allocation. From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Mon, 25 Feb 2008 17:25:07 +0800 Subject: [PATCH] kvm: Using kzalloc to avoid allocating kvm_regs from kernel stack. Since the size of kvm_regs maybe too big to allocate from kernel stack, here use kzalloc to allocate it. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- virt/kvm/kvm_main.c | 21 ++--- 1 files changed, 14 insertions(+), 7 deletions(-) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index cf6df51..8d4326f 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -806,25 +806,32 @@ static long kvm_vcpu_ioctl(struct file *filp, r = kvm_arch_vcpu_ioctl_run(vcpu, vcpu->run); break; case KVM_GET_REGS: { - struct kvm_regs kvm_regs; + struct kvm_regs *kvm_regs; - memset(&kvm_regs, 0, sizeof kvm_regs); - r = kvm_arch_vcpu_ioctl_get_regs(vcpu, &kvm_regs); + r = -ENOMEM; + kvm_regs = kzalloc(sizeof(struct kvm_regs), GFP_KERNEL); + if (!kvm_regs) + goto out; + r = kvm_arch_vcpu_ioctl_get_regs(vcpu, kvm_regs); if (r) goto out; r = -EFAULT; - if (copy_to_user(argp, &kvm_regs, sizeof kvm_regs)) + if (copy_to_user(argp, kvm_regs, sizeof(struct kvm_regs))) goto out; r = 0; break; } case KVM_SET_REGS: { - struct kvm_regs kvm_regs; + struct kvm_regs *kvm_regs; + r = -ENOMEM; + kvm_regs = kzalloc(sizeof(struct kvm_regs), GFP_KERNEL); + if (!kvm_regs) + goto out; r = -EFAULT; - if (copy_from_user(&kvm_regs, argp, sizeof kvm_regs)) + if (copy_from_user(kvm_regs, argp, sizeof(struct kvm_regs))) goto out; - r = kvm_arch_vcpu_ioctl_set_regs(vcpu, &kvm_regs); + r = kvm_arch_vcpu_ioctl_set_regs(vcpu, kvm_regs); if (r) goto out; r = 0; -- 1.5.2 0001-kvm-Using-kzalloc-to-avoid-allocating-kvm_regs-from.patch Description: 0001-kvm-Using-kzalloc-to-avoid-allocating-kvm_regs-from.patch - This SF.net email is sponsored by: Microsoft Defy all challenges. Microsoft(R) Visual Studio 2008. 
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
[kvm-devel] [PATCH] Using kzalloc to avoid allocating kvm_regs from kernel stack
From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Mon, 25 Feb 2008 17:11:43 +0800 Subject: [PATCH] kvm: Using kzalloc to avoid allocating kvm_regs from kernel stack. Since the size of struct kvm_regs maybe too big to allocate from kernel stack, here use kzalloc to allocate it. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- virt/kvm/kvm_main.c | 15 --- 1 files changed, 8 insertions(+), 7 deletions(-) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index cf6df51..5348538 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -806,25 +806,26 @@ static long kvm_vcpu_ioctl(struct file *filp, r = kvm_arch_vcpu_ioctl_run(vcpu, vcpu->run); break; case KVM_GET_REGS: { - struct kvm_regs kvm_regs; + struct kvm_regs *kvm_regs; - memset(&kvm_regs, 0, sizeof kvm_regs); - r = kvm_arch_vcpu_ioctl_get_regs(vcpu, &kvm_regs); + kvm_regs = kzalloc(sizeof(struct kvm_regs), GFP_KERNEL); + r = kvm_arch_vcpu_ioctl_get_regs(vcpu, kvm_regs); if (r) goto out; r = -EFAULT; - if (copy_to_user(argp, &kvm_regs, sizeof kvm_regs)) + if (copy_to_user(argp, kvm_regs, sizeof(struct kvm_regs))) goto out; r = 0; break; } case KVM_SET_REGS: { - struct kvm_regs kvm_regs; + struct kvm_regs *kvm_regs; + kvm_regs = kzalloc(sizeof(struct kvm_regs), GFP_KERNEL); r = -EFAULT; - if (copy_from_user(&kvm_regs, argp, sizeof kvm_regs)) + if (copy_from_user(kvm_regs, argp, sizeof(struct kvm_regs))) goto out; - r = kvm_arch_vcpu_ioctl_set_regs(vcpu, &kvm_regs); + r = kvm_arch_vcpu_ioctl_set_regs(vcpu, kvm_regs); if (r) goto out; r = 0; -- 1.5.2 0001-kvm-Using-kzalloc-to-avoid-allocating-kvm_regs-from.patch Description: 0001-kvm-Using-kzalloc-to-avoid-allocating-kvm_regs-from.patch - This SF.net email is sponsored by: Microsoft Defy all challenges. Microsoft(R) Visual Studio 2008. http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
[kvm-devel] [PATCH] kvm also needs to work around tcg code changes
From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Fri, 15 Feb 2008 10:50:22 +0800 Subject: [PATCH] qemu: IA64 also need to workaround tcg code. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- qemu/dyngen.c|1 - qemu/hw/ipf.c|1 - qemu/target-ia64/fake-exec.c | 44 ++ 3 files changed, 44 insertions(+), 2 deletions(-) create mode 100644 qemu/target-ia64/fake-exec.c diff --git a/qemu/dyngen.c b/qemu/dyngen.c index e5122e3..146d4ec 100644 --- a/qemu/dyngen.c +++ b/qemu/dyngen.c @@ -2767,7 +2767,6 @@ fprintf(outfile, "uint8_t *arm_pool_ptr = gen_code_buf + 0x100;\n"); #endif #ifdef HOST_IA64 -#error broken { long addend, not_first = 0; unsigned long sym_idx; diff --git a/qemu/hw/ipf.c b/qemu/hw/ipf.c index ce67715..8c5304d 100644 --- a/qemu/hw/ipf.c +++ b/qemu/hw/ipf.c @@ -37,7 +37,6 @@ #include "boards.h" #include "firmware.h" #include "ia64intrin.h" -#include "dyngen.h" #include #include "qemu-kvm.h" diff --git a/qemu/target-ia64/fake-exec.c b/qemu/target-ia64/fake-exec.c new file mode 100644 index 000..0be4ffd --- /dev/null +++ b/qemu/target-ia64/fake-exec.c @@ -0,0 +1,44 @@ +/* + * fake-exec.c for ia64. + * + * This is a file for stub functions so that compilation is possible + * when TCG CPU emulation is disabled during compilation. + * + * Copyright 2007 IBM Corporation. + * Added by & Authors: + * Jerone Young <[EMAIL PROTECTED]> + * + * Copyright 2008 Intel Corporation. + * Added by Xiantao Zhang <[EMAIL PROTECTED]> + * + * This work is licensed under the GNU GPL licence version 2 or later. 
+ * + */ +#include "exec.h" +#include "cpu.h" + +int code_copy_enabled = 0; + +void cpu_gen_init(void) +{ +} + +unsigned long code_gen_max_block_size(void) +{ +return 32; +} + +int cpu_ia64_gen_code(CPUState *env, TranslationBlock *tb, int *gen_code_size_ptr) +{ +return 0; +} + +void flush_icache_range(unsigned long start, unsigned long stop) +{ +while (start < stop) { + asm volatile ("fc %0" :: "r"(start)); + start += 32; +} +asm volatile (";;sync.i;;srlz.i;;"); +} + -- 1.5.2 0001-qemu-IA64-also-need-to-workaround-tcg-code.patch Description: 0001-qemu-IA64-also-need-to-workaround-tcg-code.patch - This SF.net email is sponsored by: Microsoft Defy all challenges. Microsoft(R) Visual Studio 2008. http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
Re: [kvm-devel] [PATCH] Make non-x86 arch partially support make sync.
Hollis Blanchard wrote: > On Fri, 2008-02-01 at 17:34 +0800, Zhang, Xiantao wrote: >> From: Xiantao Zhang <[EMAIL PROTECTED]> >> Date: Fri, 1 Feb 2008 17:18:03 +0800 >> Subject: [PATCH] Make non-x86 arch partially support make sync. >> >> Make non-x86 arch partially support make sync, and other archs >> can get right header files for userspace. >> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- >> kernel/Makefile | 19 --- >> 1 files changed, 16 insertions(+), 3 deletions(-) Hi, Hollis This is an initial version to support more arches. Yes, as you pointed out, we also need more work to support it fully. Could you make a patch for that? Thanks Xiantao
Re: [kvm-devel] [PATCH][21] Readme for kvm/ia64 boot.
Akio Takebe wrote: > Hi, Xiantao > >> From: Zhang Xiantao <[EMAIL PROTECTED]> >> Date: Tue, 29 Jan 2008 17:27:06 +0800 >> Subject: [PATCH] README: How to boot up guests on kvm/ia64 >> >> Guide: How to boot up guests on kvm/ia64 >> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- >> arch/ia64/kvm/README | 72 > The better place of the README is Documentation/ia64/. Hi, Akio Thank you for your suggestion! Yes, Documentation/ia64 is a better choice for that :) Thanks Xiantao - This SF.net email is sponsored by: Microsoft Defy all challenges. Microsoft(R) Visual Studio 2008. http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/ ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
[kvm-devel] Kvm/ia64 has enabled save&restore and Live migration.
Hi, all Thank you for your interest in kvm/ia64. We have now enabled save/restore and live migration on kvm/ia64, and will send out the implementation after the Chinese New Year holiday (Feb 4 - Feb 11). Thanks! Xiantao
[kvm-devel] [PATCH] Let ioctl_get_regs support input parameters.
From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Sun, 3 Feb 2008 14:46:02 +0800 Subject: [PATCH] kvm: Let ioctl_get_regs support input parameters. Since kvm_regs is allocated on the kernel stack, its size is limited. In order to save the large register files of some archs, this API should accept input parameters: userspace passes a pointer to sufficiently large memory, and the kernel copies the register files to it. Signed-off-by: Xiantao Zhang<[EMAIL PROTECTED]> --- virt/kvm/kvm_main.c |3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index a499f50..6d74d0b 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -807,7 +807,8 @@ static long kvm_vcpu_ioctl(struct file *filp, case KVM_GET_REGS: { struct kvm_regs kvm_regs; - memset(&kvm_regs, 0, sizeof kvm_regs); + if (copy_from_user(&kvm_regs, argp, sizeof kvm_regs)) + goto out; r = kvm_arch_vcpu_ioctl_get_regs(vcpu, &kvm_regs); if (r) goto out; -- 1.5.2 0001-kvm-Let-ioctl_get_regs-suppot-input-pararmeters.patch Description: 0001-kvm-Let-ioctl_get_regs-suppot-input-pararmeters.patch
[kvm-devel] [PATCH] Make non-x86 arch partially support make sync.
From: Xiantao Zhang <[EMAIL PROTECTED]> Date: Fri, 1 Feb 2008 17:18:03 +0800 Subject: [PATCH] Make non-x86 arch partially support make sync. Make non-x86 arch partially support make sync, and other archs can get right header files for userspace. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- kernel/Makefile | 19 --- 1 files changed, 16 insertions(+), 3 deletions(-) diff --git a/kernel/Makefile b/kernel/Makefile index 7a435b5..2f0d7d5 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -13,6 +13,17 @@ LINUX = ../linux-2.6 version = $(shell cd $(LINUX); git describe) +ARCH := $(shell uname -m | sed -e s/i.86/i386/) +SRCARCH:= $(ARCH) + +# Additional ARCH settings for x86 +ifeq ($(ARCH),i386) +SRCARCH := x86 +endif +ifeq ($(ARCH),x86_64) +SRCARCH := x86 +endif + _hack = mv $1 $1.orig && \ gawk -v version=$(version) -f hack-module.awk $1.orig \ | sed '/\#include/! s/\blapic\b/l_apic/g' > $1 && rm $1.orig @@ -30,14 +41,15 @@ all:: sync: rm -rf tmp rsync --exclude='*.mod.c' -R \ - "$(LINUX)"/arch/x86/kvm/./*.[ch] \ + "$(LINUX)"/arch/$(SRCARCH)/kvm/./*.[cSh] \ "$(LINUX)"/virt/kvm/./*.[ch] \ "$(LINUX)"/./include/linux/kvm*.h \ -"$(LINUX)"/./include/asm-x86/kvm*.h \ +"$(LINUX)"/./include/asm-$(SRCARCH)/kvm*.h \ tmp/ rm -rf include/asm - ln -s asm-x86 include/asm + ln -s asm-$(SRCARCH) include/asm +ifeq ($(SRCARCH),x86) $(call unifdef, include/linux/kvm.h) $(call unifdef, include/linux/kvm_para.h) $(call unifdef, include/asm-x86/kvm.h) @@ -48,6 +60,7 @@ sync: $(call hack, svm.c) $(call hack, x86.c) $(call hack, irq.h) +endif for i in $$(find tmp -type f -printf '%P '); \ do cmp -s $$i tmp/$$i || cp tmp/$$i $$i; done rm -rf tmp -- 1.5.2 0001-Make-non-x86-arch-partially-support-make-sync.patch Description: 0001-Make-non-x86-arch-partially-support-make-sync.patch - This SF.net email is sponsored by: Microsoft Defy all challenges. Microsoft(R) Visual Studio 2008. 
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
Re: [kvm-devel] [kvm-ia64-devel] [PATCH][10] Add TLB virtulization support.
Akio Takebe wrote: > Hi, Xiantao > >> +void thash_vhpt_insert(VCPU *v, u64 pte, u64 itir, u64 va, int >> type) +{ + u64 phy_pte, psr; >> +ia64_rr mrr; >> + >> +mrr.val = ia64_get_rr(va); >> +phy_pte = translate_phy_pte(&pte, itir, va); >> + >> +if (itir_ps(itir) >= mrr.ps) { >> +vhpt_insert(phy_pte, itir, va, pte); >> +} else { >> +phy_pte &= ~PAGE_FLAGS_RV_MASK; >> +psr = ia64_clear_ic(); >> +ia64_itc(type, va, phy_pte, itir_ps(itir)); >> +ia64_set_psr(psr); >> +ia64_srlz_i(); >> +} >> +} > You add ia64_srlz_i() into ia64_set_psr() with [02]patch. > So is this a redundancy if the patch is applied? Yes, we need to remove it. Once the second patch is picked up. Thanks Xiantao - This SF.net email is sponsored by: Microsoft Defy all challenges. Microsoft(R) Visual Studio 2008. http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/ ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
Re: [kvm-devel] [kvm-ia64-devel] [PATCH][02] Change srlz.d to srlz.i for ia64_set_psr
Akio Takebe wrote: > Hi, Xiantao > >> void __init >> diff --git a/include/asm-ia64/processor.h >> b/include/asm-ia64/processor.h index be3b0ae..038642f 100644 --- >> a/include/asm-ia64/processor.h +++ b/include/asm-ia64/processor.h >> @@ -472,7 +472,7 @@ ia64_set_psr (__u64 psr) >> { >> ia64_stop(); >> ia64_setreg(_IA64_REG_PSR_L, psr); >> -ia64_srlz_d(); >> +ia64_srlz_i(); >> } > Why do you remove ia64_srlz_d()? > We should need srlz.d if we change PSR bits(e.g. PSR.dt and so on). > Does srlz.i do also date serialization? Hi, Akio Srlz.i implicitly ensures srlz.d per SDM. Thanks Xiantao - This SF.net email is sponsored by: Microsoft Defy all challenges. Microsoft(R) Visual Studio 2008. http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/ ___ kvm-devel mailing list kvm-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/kvm-devel
Re: [kvm-devel] [kvm-ia64-devel] [PATCH] Making SLIRP code more 64-bit clean
Scott Pakin wrote: > Zhang, Xiantao wrote: >> Scott Pakin wrote: >>> The attached patch corrects a bug in qemu/slirp/tcp_var.h that >>> defines the seg_next field in struct tcpcb to be 32 bits wide >>> regardless of 32/64-bitness. seg_next is assigned a pointer value >>> in qemu/slirp/tcp_subr.c, then cast back to a pointer in >>> qemu/slirp/tcp_input.c and dereferenced. That produces a SIGSEGV on >>> my system. >> >> >> I still hit it on IA64 platform with your patch, once configured with >> slirp. Scott, with the enhanced patch, IA64 guests work well. Great!! If this fix is picked up, we can remove the configure option that excludes the slirp compile for kvm/ia64. Thanks! Xiantao
Re: [kvm-devel] [kvm-ia64-devel] [Qemu-devel] Re: [PATCH] Making SLIRP code more 64-bit clean
Blue Swirl wrote: > On 1/30/08, Scott Pakin <[EMAIL PROTECTED]> wrote: >> Zhang, Xiantao wrote: >>> Scott Pakin wrote: >>>> The attached patch corrects a bug in qemu/slirp/tcp_var.h that >>>> defines the seg_next field in struct tcpcb to be 32 bits wide >>>> regardless of 32/64-bitness. seg_next is assigned a pointer value >>>> in qemu/slirp/tcp_subr.c, then cast back to a pointer in >>>> qemu/slirp/tcp_input.c and dereferenced. That produces a SIGSEGV >>>> on my system. >>> >>> >>> I still hit it on IA64 platform with your patch, once configured >>> with slirp. >> >> Okay, here's a more thorough patch that fixes *all* of the "cast >> from/to pointer to/from integer of a different size" mistakes that >> gcc warns about. Does it also solve the SIGSEGV problem on IA64? > > The SLIRP code is much, much more subtle than that. Please see this > thread: > http://lists.gnu.org/archive/html/qemu-devel/2007-10/msg00542.html Got it. Thank you! Xiantao
[kvm-devel] [PATCH][03] Export some symbols for module use.
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]> Date: Thu, 17 Jan 2008 14:03:04 +0800 Subject: [PATCH] kvm: ia64 : Export some symbols out for module use. Export empty_zero_page, ia64_sal_cache_flush, ia64_sal_freq_base in this patch. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kernel/ia64_ksyms.c |3 +++ arch/ia64/kernel/sal.c| 14 ++ include/asm-ia64/sal.h| 14 +++--- 3 files changed, 20 insertions(+), 11 deletions(-) diff --git a/arch/ia64/kernel/ia64_ksyms.c b/arch/ia64/kernel/ia64_ksyms.c index c3b4412..43d227f 100644 --- a/arch/ia64/kernel/ia64_ksyms.c +++ b/arch/ia64/kernel/ia64_ksyms.c @@ -12,6 +12,9 @@ EXPORT_SYMBOL(memset); EXPORT_SYMBOL(memcpy); EXPORT_SYMBOL(strlen); +#include +EXPORT_SYMBOL(empty_zero_page); + #include EXPORT_SYMBOL(ip_fast_csum); /* hand-coded assembly */ EXPORT_SYMBOL(csum_ipv6_magic); diff --git a/arch/ia64/kernel/sal.c b/arch/ia64/kernel/sal.c index 27c2ef4..67c1d34 100644 --- a/arch/ia64/kernel/sal.c +++ b/arch/ia64/kernel/sal.c @@ -284,6 +284,7 @@ ia64_sal_cache_flush (u64 cache_type) SAL_CALL(isrv, SAL_CACHE_FLUSH, cache_type, 0, 0, 0, 0, 0, 0); return isrv.status; } +EXPORT_SYMBOL(ia64_sal_cache_flush); void __init ia64_sal_init (struct ia64_sal_systab *systab) @@ -372,3 +373,16 @@ ia64_sal_oemcall_reentrant(struct ia64_sal_retval *isrvp, u64 oemfunc, return 0; } EXPORT_SYMBOL(ia64_sal_oemcall_reentrant); + +long +ia64_sal_freq_base (unsigned long which, unsigned long *ticks_per_second, + unsigned long *drift_info) +{ + struct ia64_sal_retval isrv; + + SAL_CALL(isrv, SAL_FREQ_BASE, which, 0, 0, 0, 0, 0, 0); + *ticks_per_second = isrv.v0; + *drift_info = isrv.v1; + return isrv.status; +} +EXPORT_SYMBOL(ia64_sal_freq_base); diff --git a/include/asm-ia64/sal.h b/include/asm-ia64/sal.h index 1f5412d..2251118 100644 --- a/include/asm-ia64/sal.h +++ b/include/asm-ia64/sal.h @@ -649,17 +649,6 @@ typedef struct err_rec { * Now define a couple of inline functions for improved type checking * and convenience. 
*/ -static inline long -ia64_sal_freq_base (unsigned long which, unsigned long *ticks_per_second, - unsigned long *drift_info) -{ - struct ia64_sal_retval isrv; - - SAL_CALL(isrv, SAL_FREQ_BASE, which, 0, 0, 0, 0, 0, 0); - *ticks_per_second = isrv.v0; - *drift_info = isrv.v1; - return isrv.status; -} extern s64 ia64_sal_cache_flush (u64 cache_type); extern void __init check_sal_cache_flush (void); @@ -841,6 +830,9 @@ extern int ia64_sal_oemcall_nolock(struct ia64_sal_retval *, u64, u64, u64, u64, u64, u64, u64, u64); extern int ia64_sal_oemcall_reentrant(struct ia64_sal_retval *, u64, u64, u64, u64, u64, u64, u64, u64); +extern long +ia64_sal_freq_base (unsigned long which, unsigned long *ticks_per_second, + unsigned long *drift_info); #ifdef CONFIG_HOTPLUG_CPU /* * System Abstraction Layer Specification -- 1.5.1
[kvm-devel] [PATCH] [00] Patch set to enable kvm on ia64 platforms
Hi, Avi/Tony We have rebased the kvm/ia64 code to the latest kvm. In this version, we have fixed coding style issues, and all patches pass checkpatch.pl, except one assembly header file, which is copied from the kernel, so we didn't change its issues. Compared with the last version, we implemented smp guest support, and addressed two stability issues which only show up with smp guests. Now, based on our own test results, it has good stability and good performance. Please review and help to commit them before the linux 2.6.25 merge :) [01] Appoint maintainer for kvm/ia64. [02] Change srlz.d to srlz.i for ia64_set_psr to save an unnecessary srlz.d, since kvm needs to use it frequently. [03] Export three symbols for module use. [04] Add API for allocating TR resource. For patch 04, we want to add a common TR resource API for the kernel. It is not just used by the kvm module. Our idea is that the first two pairs of TRs are used in a fixed way, and we don't need to touch them. This API only manages TR resources for dynamic use. Based on Tony's comments, we changed its implementation, and optimized its overlap checking. Thank you, Tony! Since the above four patches touch source code outside the kvm world, they need Tony's Ack and Signed-off-by :) [05] Add kvm.h, kvm_host.h and kvm_para.h for kvm/ia64. [06] Add kvm arch-specific core code for kvm/ia64. [07] Add kvm sal/pal virtualization support. [08] Add local header files for kvm/ia64. The above four patches implement arch-specific code for kvm/ia64. [09] Add VMM module interfaces. [10] Add TLB virtualization support. [11] Add mmio decoder for kvm/ia64. [12] Add interruption vector table for vmm. [13] Add trampoline for guest/host mode switch. [14] Add processor virtualization support. [15] Add optimization for some virtualization faults. [16] Generate offset values for assembly code use. [17] Add guest interruption injection support. The above patches implement GVMM code. [18] Add Kconfig for kvm configuration. [19] Add Makefile for kvm files compile. 
[20] Update IA64 Kconfig and Makefile to include the kvm build. [21] Readme for kvm/ia64. Update Makefile/Kconfig for the kvm build, and write a howto for building kvm/ia64. Thanks, Xiantao
[kvm-devel] [PATCH][10] Add TLB virtualization support.
From: Zhang Xiantao <[EMAIL PROTECTED]> Date: Tue, 29 Jan 2008 14:26:29 +0800 Subject: [PATCH] kvm/ia64: Add TLB virtulization support. vtlb.c includes tlb/VHPT virtulization. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/vtlb.c | 606 ++ 1 files changed, 606 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/vtlb.c diff --git a/arch/ia64/kvm/vtlb.c b/arch/ia64/kvm/vtlb.c new file mode 100644 index 000..25f9ad6 --- /dev/null +++ b/arch/ia64/kvm/vtlb.c @@ -0,0 +1,606 @@ +/* + * vtlb.c: guest virtual tlb handling module. + * Copyright (c) 2004, Intel Corporation. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + * + * Yaozu Dong (Eddie Dong) ([EMAIL PROTECTED]) + * Xuefei Xu (Anthony Xu) ([EMAIL PROTECTED]) + */ + +#include "vcpu.h" + +/* + * Check to see if the address rid:va is translated by the TLB + */ + +static int __is_tr_translated(thash_data_t *trp, u64 rid, u64 va) +{ + return ((trp->p) && (trp->rid == rid) + && ((va-trp->vadr) < PSIZE(trp->ps))); +} + +/* + * Only for GUEST TR format. 
+ */ +static int __is_tr_overlap(thash_data_t *trp, u64 rid, u64 sva, u64 eva) +{ + u64 sa1, ea1; + + if (!trp->p || trp->rid != rid) + return 0; + + sa1 = trp->vadr; + ea1 = sa1 + PSIZE(trp->ps) - 1; + eva -= 1; + if ((sva > ea1) || (sa1 > eva)) + return 0; + else + return 1; + +} + +void machine_tlb_purge(u64 va, u64 ps) +{ + ia64_ptcl(va, ps << 2); +} + +void local_flush_tlb_all(void) +{ + int i, j; + unsigned long flags, count0, count1; + unsigned long stride0, stride1, addr; + + addr= current_vcpu->arch.ptce_base; + count0 = current_vcpu->arch.ptce_count[0]; + count1 = current_vcpu->arch.ptce_count[1]; + stride0 = current_vcpu->arch.ptce_stride[0]; + stride1 = current_vcpu->arch.ptce_stride[1]; + + local_irq_save(flags); + for (i = 0; i < count0; ++i) { + for (j = 0; j < count1; ++j) { + ia64_ptce(addr); + addr += stride1; + } + addr += stride0; + } + local_irq_restore(flags); + ia64_srlz_i(); /* srlz.i implies srlz.d */ +} + +int vhpt_enabled(VCPU *vcpu, u64 vadr, vhpt_ref_t ref) +{ + ia64_rrvrr; + ia64_pta vpta; + ia64_psr vpsr; + + vpsr.val = VCPU(vcpu, vpsr); + vrr.val = vcpu_get_rr(vcpu, vadr); + vpta.val = vcpu_get_pta(vcpu); + + if (vrr.ve & vpta.ve) { + switch (ref) { + case DATA_REF: + case NA_REF: + return vpsr.dt; + case INST_REF: + return vpsr.dt && vpsr.it && vpsr.ic; + case RSE_REF: + return vpsr.dt && vpsr.rt; + + } + } + return 0; +} + +thash_data_t *vsa_thash(ia64_pta vpta, u64 va, u64 vrr, u64 *tag) +{ + u64 index, pfn, rid, pfn_bits; + + pfn_bits = vpta.size - 5 - 8; + pfn = REGION_OFFSET(va) >> _REGION_PAGE_SIZE(vrr); + rid = _REGION_ID(vrr); + index = ((rid & 0xff) << pfn_bits)|(pfn & ((1UL << pfn_bits) - 1)); + *tag = ((rid >> 8) & 0x) | ((pfn >> pfn_bits) << 16); + + return (thash_data_t *)((vpta.base << PTA_BASE_SHIFT) + (index << 5)); +} + +thash_data_t *__vtr_lookup(VCPU *vcpu, u64 va, int type) +{ + + thash_data_t *trp; + int i; + u64 rid; + + rid = vcpu_get_rr(vcpu, va); + rid = rid & RR_RID_MASK;; + if (type == D_TLB) { + if 
(vcpu_quick_region_check(vcpu->arch.dtr_regions, va)) { + for (trp = (thash_data_t *)&vcpu->arch.dtrs, i = 0; + i < NDTRS; i++, trp++) { + if (__is_tr_translated(trp, rid, va)) + return trp; + } + } + } else { +
[kvm-devel] [PATCH][04] Add API for allocating TR resource.
From: Zhang Xiantao <[EMAIL PROTECTED]> Date: Thu, 31 Jan 2008 17:10:52 +0800 Subject: [PATCH] Add API for allocating TR resouce. Dynamic TR resouce should be managed in an uniform way. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> Signed-off-by: Anthony Xu<[EMAIL PROTECTED]> --- arch/ia64/kernel/mca.c | 50 + arch/ia64/kernel/mca_asm.S |5 ++ arch/ia64/mm/tlb.c | 167 include/asm-ia64/kregs.h |3 + include/asm-ia64/tlb.h | 12 +++ 5 files changed, 237 insertions(+), 0 deletions(-) diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c index 6dbf591..4253343 100644 --- a/arch/ia64/kernel/mca.c +++ b/arch/ia64/kernel/mca.c @@ -89,6 +89,7 @@ #include #include +#include #include "mca_drv.h" #include "entry.h" @@ -104,8 +105,10 @@ DEFINE_PER_CPU(u64, ia64_mca_data); /* == __per_cpu_mca[smp_processor_id()] */ DEFINE_PER_CPU(u64, ia64_mca_per_cpu_pte); /* PTE to map per-CPU area */ DEFINE_PER_CPU(u64, ia64_mca_pal_pte); /* PTE to map PAL code */ DEFINE_PER_CPU(u64, ia64_mca_pal_base);/* vaddr PAL code granule */ +DEFINE_PER_CPU(u64, ia64_mca_tr_reload); /* Flag for TR reload */ unsigned long __per_cpu_mca[NR_CPUS]; +extern struct ia64_tr_entry __per_cpu_idtrs[NR_CPUS][2][IA64_TR_ALLOC_MAX]; /* In mca_asm.S */ extern voidia64_os_init_dispatch_monarch (void); @@ -1177,6 +1180,49 @@ all_in: return; } +/* mca_insert_tr + * + * Switch rid when TR reload and needed! 
+ * iord: 1: itr, 2: dtr; + +*/ +static void mca_insert_tr(u64 iord) +{ + + int i; + u64 old_rr; + struct ia64_tr_entry *p; + unsigned long psr; + int cpu = smp_processor_id(); + + psr = ia64_clear_ic(); + for (i = IA64_TR_ALLOC_BASE; i < IA64_TR_ALLOC_MAX; i++) { + p = &__per_cpu_idtrs[cpu][iord-1][i]; + if (p->pte&0x1) { + old_rr = ia64_get_rr(p->ifa); + if (old_rr != p->rr) { + ia64_set_rr(p->ifa, p->rr); + ia64_srlz_d(); + } + ia64_ptr(iord, p->ifa, p->itir >> 2); + ia64_srlz_i(); + if (iord & 0x1) { + ia64_itr(0x1, i, p->ifa, p->pte, p->itir >> 2); + ia64_srlz_i(); + } + if (iord & 0x2) { + ia64_itr(0x2, i, p->ifa, p->pte, p->itir >> 2); + ia64_srlz_i(); + } + if (old_rr != p->rr) { + ia64_set_rr(p->ifa, old_rr); + ia64_srlz_d(); + } + } + } + ia64_set_psr(psr); +} + /* * ia64_mca_handler * @@ -1266,6 +1312,10 @@ ia64_mca_handler(struct pt_regs *regs, struct switch_stack *sw, monarch_cpu = -1; #endif } + if (__get_cpu_var(ia64_mca_tr_reload)) { + mca_insert_tr(0x1); /*Reload dynamic itrs*/ + mca_insert_tr(0x2); /*Reload dynamic dtrs*/ + } if (notify_die(DIE_MCA_MONARCH_LEAVE, "MCA", regs, (long)&nd, 0, recover) == NOTIFY_STOP) ia64_mca_spin(__FUNCTION__); diff --git a/arch/ia64/kernel/mca_asm.S b/arch/ia64/kernel/mca_asm.S index 0f5965f..dd37dd0 100644 --- a/arch/ia64/kernel/mca_asm.S +++ b/arch/ia64/kernel/mca_asm.S @@ -215,8 +215,13 @@ ia64_reload_tr: mov r20=IA64_TR_CURRENT_STACK ;; itr.d dtr[r20]=r16 + GET_THIS_PADDR(r2, ia64_mca_tr_reload) + mov r18 = 1 ;; srlz.d + ;; + st8 [r2] =r18 + ;; done_tlb_purge_and_reload: diff --git a/arch/ia64/mm/tlb.c b/arch/ia64/mm/tlb.c index 655da24..e27e101 100644 --- a/arch/ia64/mm/tlb.c +++ b/arch/ia64/mm/tlb.c @@ -26,6 +26,8 @@ #include #include #include +#include +#include static struct { unsigned long mask; /* mask of supported purge page-sizes */ @@ -39,6 +41,10 @@ struct ia64_ctx ia64_ctx = { }; DEFINE_PER_CPU(u8, ia64_need_tlb_flush); +DEFINE_PER_CPU(u8, ia64_tr_num); /*Number of TR slots in current processor*/ 
+DEFINE_PER_CPU(u8, ia64_tr_used); /*Max Slot number used by kernel*/ + +struct ia64_tr_entry __per_cpu_idtrs[NR_CPUS][2][IA64_TR_ALLOC_MAX]; /* * Initializes the ia64_ctx.bitmap array based on max_ctx+1. @@ -190,6 +196,9 @@ ia64_tlb_init (void) ia64_ptce_info_t uninitialized_var(ptce_info); /* GCC be quiet */ unsigned long tr_pgbits; long status; + pal_vm_info_1_u_t vm_info_1; + pal_vm_info_2_u_t vm_info_2; + int cpu = smp_processor_id(); if ((status = ia64_pa
[kvm-devel] [PATCH][20]Update IA64 Kconfig and Makefile to include kvm build.
From: Zhang Xiantao <[EMAIL PROTECTED]> Date: Tue, 29 Jan 2008 15:40:27 +0800 Subject: [PATCH] kvm/ia64: Update IA64 Kconfig and Makefile to include kvm build. Update IA64 Kconfig and Makefile to include kvm build. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/Kconfig |6 ++ arch/ia64/Makefile |1 + 2 files changed, 7 insertions(+), 0 deletions(-) diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig index bef4772..4592130 100644 --- a/arch/ia64/Kconfig +++ b/arch/ia64/Kconfig @@ -99,6 +99,10 @@ config AUDIT_ARCH bool default y +config ARCH_SUPPORTS_KVM + bool + default y + choice prompt "System type" default IA64_GENERIC @@ -568,6 +572,8 @@ config MSPEC source "fs/Kconfig" +source "arch/ia64/kvm/Kconfig" + source "lib/Kconfig" # diff --git a/arch/ia64/Makefile b/arch/ia64/Makefile index b916ccf..c8d09be 100644 --- a/arch/ia64/Makefile +++ b/arch/ia64/Makefile @@ -55,6 +55,7 @@ core-$(CONFIG_IA64_GENERIC) += arch/ia64/dig/ core-$(CONFIG_IA64_HP_ZX1) += arch/ia64/dig/ core-$(CONFIG_IA64_HP_ZX1_SWIOTLB) += arch/ia64/dig/ core-$(CONFIG_IA64_SGI_SN2)+= arch/ia64/sn/ +core-$(CONFIG_KVM) += arch/ia64/kvm/ drivers-$(CONFIG_PCI) += arch/ia64/pci/ drivers-$(CONFIG_IA64_HP_SIM) += arch/ia64/hp/sim/ -- 1.5.1
[kvm-devel] [PATCH][16] Generate offset values for assembly code use.
From: Zhang Xiantao <[EMAIL PROTECTED]> Date: Tue, 29 Jan 2008 14:40:41 +0800 Subject: [PATCH] kvm/ia64: Generate offset values for assembly code use. asm-offsets.c will generate offset values used for assembly code for some fileds of special structures. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/asm-offsets.c | 251 +++ 1 files changed, 251 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/asm-offsets.c diff --git a/arch/ia64/kvm/asm-offsets.c b/arch/ia64/kvm/asm-offsets.c new file mode 100644 index 000..d9a164d --- /dev/null +++ b/arch/ia64/kvm/asm-offsets.c @@ -0,0 +1,251 @@ +/* + * asm-offsets.c Generate definitions needed by assembly language modules. + * This code generates raw asm output which is post-processed + * to extract and format the required data. + * + * Anthony Xu<[EMAIL PROTECTED]> + * Zhang Xiantao <[EMAIL PROTECTED]> + * Copyright (c) 2007 Intel Corporation KVM support. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. 
+ * + */ + +#include +#include + +#include "vcpu.h" + +#define task_struct kvm_vcpu + +#define DEFINE(sym, val) \ + asm volatile("\n->" #sym " (%0) " #val : : "i" (val)) + +#define BLANK() asm volatile("\n->" : :) + +#define OFFSET(_sym, _str, _mem) \ +DEFINE(_sym, offsetof(_str, _mem)); + +void foo(void) +{ + DEFINE(VMM_TASK_SIZE, sizeof(struct kvm_vcpu)); + DEFINE(VMM_PT_REGS_SIZE, sizeof(struct kvm_pt_regs)); + + BLANK(); + + DEFINE(VMM_VCPU_META_RR0_OFFSET, + offsetof(struct kvm_vcpu, arch.metaphysical_rr0)); + DEFINE(VMM_VCPU_META_SAVED_RR0_OFFSET, + offsetof(struct kvm_vcpu, + arch.metaphysical_saved_rr0)); + DEFINE(VMM_VCPU_VRR0_OFFSET, + offsetof(struct kvm_vcpu, arch.vrr[0])); + DEFINE(VMM_VPD_IRR0_OFFSET, + offsetof(struct vpd, irr[0])); + DEFINE(VMM_VCPU_ITC_CHECK_OFFSET, + offsetof(struct kvm_vcpu, arch.itc_check)); + DEFINE(VMM_VCPU_IRQ_CHECK_OFFSET, + offsetof(struct kvm_vcpu, arch.irq_check)); + DEFINE(VMM_VPD_VHPI_OFFSET, + offsetof(struct vpd, vhpi)); + DEFINE(VMM_VCPU_VSA_BASE_OFFSET, + offsetof(struct kvm_vcpu, arch.vsa_base)); + DEFINE(VMM_VCPU_VPD_OFFSET, + offsetof(struct kvm_vcpu, arch.vpd)); + DEFINE(VMM_VCPU_IRQ_CHECK, + offsetof(struct kvm_vcpu, arch.irq_check)); + DEFINE(VMM_VCPU_TIMER_PENDING, + offsetof(struct kvm_vcpu, arch.timer_pending)); + DEFINE(VMM_VCPU_META_SAVED_RR0_OFFSET, + offsetof(struct kvm_vcpu, arch.metaphysical_saved_rr0)); + DEFINE(VMM_VCPU_MODE_FLAGS_OFFSET, + offsetof(struct kvm_vcpu, arch.mode_flags)); + DEFINE(VMM_VCPU_ITC_OFS_OFFSET, + offsetof(struct kvm_vcpu, arch.itc_offset)); + DEFINE(VMM_VCPU_LAST_ITC_OFFSET, + offsetof(struct kvm_vcpu, arch.last_itc)); + DEFINE(VMM_VCPU_SAVED_GP_OFFSET, + offsetof(struct kvm_vcpu, arch.saved_gp)); + + BLANK(); + + DEFINE(VMM_PT_REGS_B6_OFFSET, + offsetof(struct kvm_pt_regs, b6)); + DEFINE(VMM_PT_REGS_B7_OFFSET, + offsetof(struct kvm_pt_regs, b7)); + DEFINE(VMM_PT_REGS_AR_CSD_OFFSET, + offsetof(struct kvm_pt_regs, ar_csd)); + DEFINE(VMM_PT_REGS_AR_SSD_OFFSET, + offsetof(struct 
kvm_pt_regs, ar_ssd)); + DEFINE(VMM_PT_REGS_R8_OFFSET, + offsetof(struct kvm_pt_regs, r8)); + DEFINE(VMM_PT_REGS_R9_OFFSET, + offsetof(struct kvm_pt_regs, r9)); + DEFINE(VMM_PT_REGS_R10_OFFSET, + offsetof(struct kvm_pt_regs, r10)); + DEFINE(VMM_PT_REGS_R11_OFFSET, + offsetof(struct kvm_pt_regs, r11)); + DEFINE(VMM_PT_REGS_CR_IPSR_OFFSET, +
[kvm-devel] [PATCH][21] Readme for kvm/ia64 boot.
From: Zhang Xiantao <[EMAIL PROTECTED]> Date: Tue, 29 Jan 2008 17:27:06 +0800 Subject: [PATCH] README: How to boot up guests on kvm/ia64 Guide: How to boot up guests on kvm/ia64 Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/README | 72 ++ 1 files changed, 72 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/README diff --git a/arch/ia64/kvm/README b/arch/ia64/kvm/README new file mode 100644 index 000..22b1db7 --- /dev/null +++ b/arch/ia64/kvm/README @@ -0,0 +1,72 @@ + Guide: How to boot up guests on kvm/ia64 + +1. Get the kvm source from git.kernel.org. + Userspace source: + git clone git://git.kernel.org/pub/scm/virt/kvm/kvm-userspace.git + Kernel Source: + git clone git://git.kernel.org/pub/scm/linux/kernel/git/avi/kvm.git +2. Compile the source code. + 2.1 Compile userspace code: + (1) cd ./kvm-userspace + (2) ./configure + (3) cd kernel + (4) make sync LINUX=$kernel_dir (kernel_dir is the directory of the kernel source.) + (5) cd .. + (6) make qemu + (7) cd qemu; make install + 2.2 Compile kernel source code: + (1) cd ./$kernel_dir + (2) make menuconfig + (3) Enter the virtualization option, and choose kvm. + (4) make + (5) Once (4) is done, make modules_install + (6) Make an initrd, and reboot the host machine with the new kernel. + (7) Once (6) is done, cd $kernel_dir/arch/ia64/kvm + (8) insmod kvm.ko; insmod kvm-intel.ko + +Note: For step 2, please make sure that the host page size == TARGET_PAGE_SIZE of qemu; otherwise it may fail. + +3. Get the Guest Firmware named Flash.fd, and put it in the right place: + (1) If you have the guest firmware (binary) released by Intel Corp for Xen, you can use it directly. + (2) If you want to build the guest firmware from source code, please download the source from + hg clone http://xenbits.xensource.com/ext/efi-vfirmware.hg + Follow the guide in the source to build the open Guest Firmware. 
+ (3) Rename it to Flash.fd, and copy it to /usr/local/share/qemu +Note: For step 3, kvm uses a guest firmware that is compatible with the one Xen uses. + +4. Boot up Linux or Windows guests: + 4.1 Create or install an image for guest boot. If you have xen experience, it should be easy. + + 4.2 Boot up guests using the following command. + /usr/local/bin/qemu-system-ia64 -smp xx -m 512 -hda $your_image + (xx is the number of virtual processors for the guest; currently the maximum value is 4) + +5. Known possible issue on some platforms with old Firmware + +If you meet strange host crashes, you may try to solve the problem with either of the following methods. +(1): Upgrade your Firmware to the latest one. + +(2): Apply the patch below to the kernel source. +diff --git a/arch/ia64/kernel/pal.S b/arch/ia64/kernel/pal.S +index 0b53344..f02b0f7 100644 +--- a/arch/ia64/kernel/pal.S ++++ b/arch/ia64/kernel/pal.S +@@ -84,7 +84,8 @@ GLOBAL_ENTRY(ia64_pal_call_static) + mov ar.pfs = loc1 + mov rp = loc0 + ;; +- srlz.d // serialize restoration of psr.l ++ srlz.i // serialize restoration of psr.l ++ ;; + br.ret.sptk.many b0 + END(ia64_pal_call_static) + +6. Bug report: + If you find any issues when using kvm/ia64, please post the bug info to the kvm-ia64-devel mailing list. + https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel/ + +Thanks for your interest! Let's work together, and make kvm/ia64 stronger and stronger! + + + Zhang Xiantao <[EMAIL PROTECTED]> + 2008.1.28 -- 1.5.1
[kvm-devel] [PATCH][15] Add optimization for some virtualization faults
From: Zhang Xiantao <[EMAIL PROTECTED]> Date: Tue, 29 Jan 2008 14:35:44 +0800 Subject: [PATCH] kvm/ia64: add optimization for some virtulization faults optvfault.S adds optimization for some performance-critical virtualization faults. Signed-off-by: Anthony Xu <[EMAIL PROTECTED]> Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/optvfault.S | 918 + 1 files changed, 918 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/optvfault.S diff --git a/arch/ia64/kvm/optvfault.S b/arch/ia64/kvm/optvfault.S new file mode 100644 index 000..5de210e --- /dev/null +++ b/arch/ia64/kvm/optvfault.S @@ -0,0 +1,918 @@ +/* + * arch/ia64/vmx/optvfault.S + * optimize virtualization fault handler + * + * Copyright (C) 2006 Intel Co + * Xuefei Xu (Anthony Xu) <[EMAIL PROTECTED]> + */ + +#include +#include + +#include "vti.h" +#include "asm-offsets.h" + +#define ACCE_MOV_FROM_AR +#define ACCE_MOV_FROM_RR +#define ACCE_MOV_TO_RR +#define ACCE_RSM +#define ACCE_SSM +#define ACCE_MOV_TO_PSR +#define ACCE_THASH + +//mov r1=ar3 +GLOBAL_ENTRY(kvm_asm_mov_from_ar) +#ifndef ACCE_MOV_FROM_AR +br.many kvm_virtualization_fault_back +#endif +add r18=VMM_VCPU_ITC_OFS_OFFSET, r21 +add r16=VMM_VCPU_LAST_ITC_OFFSET,r21 +extr.u r17=r25,6,7 +;; +ld8 r18=[r18] +mov r19=ar.itc +mov r24=b0 +;; +add r19=r19,r18 +addl [EMAIL PROTECTED](asm_mov_to_reg),gp +;; +st8 [r16] = r19 +adds r30=kvm_resume_to_guest-asm_mov_to_reg,r20 +shladd r17=r17,4,r20 +;; +mov b0=r17 +br.sptk.few b0 +;; +END(kvm_asm_mov_from_ar) + + +// mov r1=rr[r3] +GLOBAL_ENTRY(kvm_asm_mov_from_rr) +#ifndef ACCE_MOV_FROM_RR +br.many kvm_virtualization_fault_back +#endif +extr.u r16=r25,20,7 +extr.u r17=r25,6,7 +addl [EMAIL PROTECTED](asm_mov_from_reg),gp +;; +adds r30=kvm_asm_mov_from_rr_back_1-asm_mov_from_reg,r20 +shladd r16=r16,4,r20 +mov r24=b0 +;; +add r27=VMM_VCPU_VRR0_OFFSET,r21 +mov b0=r16 +br.many b0 +;; +kvm_asm_mov_from_rr_back_1: +adds r30=kvm_resume_to_guest-asm_mov_from_reg,r20 +adds 
r22=asm_mov_to_reg-asm_mov_from_reg,r20 +shr.u r26=r19,61 +;; +shladd r17=r17,4,r22 +shladd r27=r26,3,r27 +;; +ld8 r19=[r27] +mov b0=r17 +br.many b0 +END(kvm_asm_mov_from_rr) + + +// mov rr[r3]=r2 +GLOBAL_ENTRY(kvm_asm_mov_to_rr) +#ifndef ACCE_MOV_TO_RR +br.many kvm_virtualization_fault_back +#endif +extr.u r16=r25,20,7 +extr.u r17=r25,13,7 +addl [EMAIL PROTECTED](asm_mov_from_reg),gp +;; +adds r30=kvm_asm_mov_to_rr_back_1-asm_mov_from_reg,r20 +shladd r16=r16,4,r20 +mov r22=b0 +;; +add r27=VMM_VCPU_VRR0_OFFSET,r21 +mov b0=r16 +br.many b0 +;; +kvm_asm_mov_to_rr_back_1: +adds r30=kvm_asm_mov_to_rr_back_2-asm_mov_from_reg,r20 +shr.u r23=r19,61 +shladd r17=r17,4,r20 +;; +//if rr6, go back +cmp.eq p6,p0=6,r23 +mov b0=r22 +(p6) br.cond.dpnt.many kvm_virtualization_fault_back +;; +mov r28=r19 +mov b0=r17 +br.many b0 +kvm_asm_mov_to_rr_back_2: +adds r30=kvm_resume_to_guest-asm_mov_from_reg,r20 +shladd r27=r23,3,r27 +;; // vrr.rid<<4 |0xe +st8 [r27]=r19 +mov b0=r30 +;; +extr.u r16=r19,8,26 +extr.u r18 =r19,2,6 +mov r17 =0xe +;; +shladd r16 = r16, 4, r17 +extr.u r19 =r19,0,8 +;; +shl r16 = r16,8 +;; +add r19 = r19, r16 +;; //set ve 1 +dep r19=-1,r19,0,1 +cmp.lt p6,p0=14,r18 +;; +(p6) mov r18=14 +;; +(p6) dep r19=r18,r19,2,6 +;; +cmp.eq p6,p0=0,r23 +;; +cmp.eq.or p6,p0=4,r23 +;; +adds r16=VMM_VCPU_MODE_FLAGS_OFFSET,r21 +(p6) adds r17=VMM_VCPU_META_SAVED_RR0_OFFSET,r21 +;; +ld4 r16=[r16] +cmp.eq p7,p0=r0,r0 +(p6) shladd r17=r23,1,r17 +;; +(p6) st8 [r17]=r19 +(p6) tbit.nz p6,p7=r16,0 +;; +(p7) mov rr[r28]=r19 +mov r24=r22 +br.many b0 +END(kvm_asm_mov_to_rr) + + +//rsm +GLOBAL_ENTRY(kvm_asm_rsm) +#ifndef ACCE_RSM +br.many kvm_virtualization_fault_back +#endif +add r16=VMM_VPD_BASE_OFFSET,r21 +extr.u r26=r25,6,21 +extr.u r27=r25,31,2 +;; +ld8 r16=[r16] +extr.u r28=r25,36,1 +dep r26=r27,r26,21,2 +;; +add r17=VPD_VPSR_START_OFFSET,r16 +add r22=VMM_VCPU_MODE_FLAGS_OFFSET,r21 +//r26 is imm24 +dep r26=r28,r26,23,1 +;; +ld8 r18=[r17] +movl 
r28=IA64_PSR_IC+IA64_PSR_I+IA64_PSR_DT+IA64_PSR_SI +ld4 r23=[r22] +sub r27=-1,r26 +mov r24=b0 +;; +mov r20=cr.ipsr +or r28=r27,r28 +and r19=r18,r27 +;; +st8 [r17]=r19 +and r20=r20,r28 +/* Comment it out due to short of fp lazy alorgithm support +adds r27=IA64_VCPU_FP_PSR_OFFSET,r21 +;; +ld8 r27=[r27] +;; +tbit.nz p8,p0= r27,IA64_PSR_DFH_BIT +;; +(p8) dep r20=-1,r20,IA64_PSR_DFH_BIT,1 +
[kvm-devel] [PATCH] [19] Add Makefile for kvm files compile.
From: Zhang Xiantao <[EMAIL PROTECTED]> Date: Tue, 29 Jan 2008 14:43:32 +0800 Subject: [PATCH] kvm/ia64: Add Makefile for kvm files compile. Adds Makefile for kvm compile. Signed-off-by: Xiantao Zhang <[EMAIL PROTECTED]> --- arch/ia64/kvm/Makefile | 61 1 files changed, 61 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/kvm/Makefile diff --git a/arch/ia64/kvm/Makefile b/arch/ia64/kvm/Makefile new file mode 100644 index 000..cde7d8e --- /dev/null +++ b/arch/ia64/kvm/Makefile @@ -0,0 +1,61 @@ +#This Make file is to generate asm-offsets.h and build source. +# + +#Generate asm-offsets.h for vmm module build +offsets-file := asm-offsets.h + +always := $(offsets-file) +targets := $(offsets-file) +targets += arch/ia64/kvm/asm-offsets.s +clean-files := $(addprefix $(objtree)/,$(targets) $(obj)/memcpy.S $(obj)/memset.S) + +# Default sed regexp - multiline due to syntax constraints +define sed-y + "/^->/{s:^->\([^ ]*\) [\$$#]*\([^ ]*\) \(.*\):#define \1 \2 /* \3 */:; s:->::; p;}" +endef + +quiet_cmd_offsets = GEN $@ +define cmd_offsets + (set -e; \ +echo "#ifndef __ASM_KVM_OFFSETS_H__"; \ +echo "#define __ASM_KVM_OFFSETS_H__"; \ +echo "/*"; \ +echo " * DO NOT MODIFY."; \ +echo " *"; \ +echo " * This file was generated by Makefile"; \ +echo " *"; \ +echo " */"; \ +echo ""; \ +sed -ne $(sed-y) $<; \ +echo ""; \ +echo "#endif" ) > $@ +endef +# We use internal rules to avoid the "is up to date" message from make +arch/ia64/kvm/asm-offsets.s: arch/ia64/kvm/asm-offsets.c + $(call if_changed_dep,cc_s_c) + +$(obj)/$(offsets-file): arch/ia64/kvm/asm-offsets.s + $(call cmd,offsets) + +# +# Makefile for Kernel-based Virtual Machine module +# + +EXTRA_CFLAGS += -Ivirt/kvm -Iarch/ia64/kvm/ + +$(addprefix $(objtree)/,$(obj)/memcpy.S $(obj)/memset.S): + $(shell ln -snf ../lib/memcpy.S $(src)/memcpy.S) + $(shell ln -snf ../lib/memset.S $(src)/memset.S) + +common-objs = $(addprefix ../../../virt/kvm/, kvm_main.o ioapic.o) + +kvm-objs := $(common-objs) kvm_ia64.o kvm_fw.o 
+obj-$(CONFIG_KVM) += kvm.o + +FORCE : $(obj)/$(offsets-file) +EXTRA_CFLAGS_vcpu.o += -mfixed-range=f2-f5,f12-f127 +kvm-intel-objs = vmm.o vmm_ivt.o trampoline.o vcpu.o optvfault.o mmio.o \ + vtlb.o process.o +#Add link memcpy and memset to avoid possible structure assignment error +kvm-intel-objs += memset.o memcpy.o +obj-$(CONFIG_KVM_INTEL) += kvm-intel.o -- 1.5.1