Re: [PATCH v2 1/4] KVM: x86: use new CS.RPL as CPL during task switch

2014-05-26 Thread Marcelo Tosatti
On Thu, May 15, 2014 at 06:51:28PM +0200, Paolo Bonzini wrote:
> During task switch, all of CS.DPL, CS.RPL, SS.DPL must match (in addition
> to all the other requirements) and will be the new CPL.  So far this
> worked by carefully setting the CS selector and flag before doing the
> task switch; however, this will not work once we get the CPL from SS.DPL:
> setting SS itself would fail if the task switch changes the privilege
> level.
> 
> Temporarily assume that the CPL comes from CS.RPL during task switch
> to a protected-mode task.  This is the same approach used in QEMU's
> emulation code, which (until version 2.0) manually tracks the CPL.
> 
> Signed-off-by: Paolo Bonzini 

Reviewed-by: Marcelo Tosatti 

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v2 1/4] KVM: x86: use new CS.RPL as CPL during task switch

2014-05-25 Thread Wei Huang
>> During task switch, all of CS.DPL, CS.RPL, SS.DPL must match (in addition
>> to all the other requirements) and will be the new CPL.  So far this
>> worked by carefully setting the CS selector and flag before doing the
>
> s/flag/EFLAGS/
>
>> task switch; however, this will not work once we get the CPL from SS.DPL:
>> setting SS itself would fail if the task switch changes the privilege
>> level.
>
> More precisely, before patch 4 in this series setting CS.selector would
> already change the CPL.  After it, you actually have to set the full
> segment descriptor cache to change the CPL, so we cannot use
> ctxt->ops->cpl(ctxt) to retrieve the CPL during a task switch.  The
> check that fails without this patch is that SS.DPL must be == CPL, and
> the failure happens because ctxt->ops->cpl(ctxt) is the *old* SS.DPL.

This is informative and answered some of my questions. Thanks. It would
be helpful to include this information in the patch description.

>
> Paolo
>
>> Temporarily assume that the CPL comes from CS.RPL during task switch
>> to a protected-mode task.  This is the same approach used in QEMU's
>> emulation code, which (until version 2.0) manually tracks the CPL.


Re: [PATCH v2 1/4] KVM: x86: use new CS.RPL as CPL during task switch

2014-05-16 Thread Paolo Bonzini

On 15/05/2014 18:51, Paolo Bonzini wrote:

During task switch, all of CS.DPL, CS.RPL, SS.DPL must match (in addition
to all the other requirements) and will be the new CPL.  So far this
worked by carefully setting the CS selector and flag before doing the


s/flag/EFLAGS/


task switch; however, this will not work once we get the CPL from SS.DPL:
setting SS itself would fail if the task switch changes the privilege
level.


More precisely, before patch 4 in this series setting CS.selector would 
already change the CPL.  After it, you actually have to set the full 
segment descriptor cache to change the CPL, so we cannot use 
ctxt->ops->cpl(ctxt) to retrieve the CPL during a task switch.  The 
check that fails without this patch is that SS.DPL must be == CPL, and 
the failure happens because ctxt->ops->cpl(ctxt) is the *old* SS.DPL.


Paolo


Temporarily assume that the CPL comes from CS.RPL during task switch
to a protected-mode task.  This is the same approach used in QEMU's
emulation code, which (until version 2.0) manually tracks the CPL.




[PATCH v2 1/4] KVM: x86: use new CS.RPL as CPL during task switch

2014-05-15 Thread Paolo Bonzini
During task switch, all of CS.DPL, CS.RPL, SS.DPL must match (in addition
to all the other requirements) and will be the new CPL.  So far this
worked by carefully setting the CS selector and flag before doing the
task switch; however, this will not work once we get the CPL from SS.DPL:
setting SS itself would fail if the task switch changes the privilege
level.

Temporarily assume that the CPL comes from CS.RPL during task switch
to a protected-mode task.  This is the same approach used in QEMU's
emulation code, which (until version 2.0) manually tracks the CPL.

Signed-off-by: Paolo Bonzini 
---
 arch/x86/kvm/emulate.c | 60 +++---
 1 file changed, 33 insertions(+), 27 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index e8a58409b5ac..47e716ef46b7 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -1410,11 +1410,11 @@ static int write_segment_descriptor(struct x86_emulate_ctxt *ctxt,
 }
 
 /* Does not support long mode */
-static int load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
-  u16 selector, int seg)
+static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+u16 selector, int seg, u8 cpl)
 {
struct desc_struct seg_desc, old_desc;
-   u8 dpl, rpl, cpl;
+   u8 dpl, rpl;
unsigned err_vec = GP_VECTOR;
u32 err_code = 0;
	bool null_selector = !(selector & ~0x3); /* 0000-0003 are null */
@@ -1442,7 +1442,6 @@ static int load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
}
 
rpl = selector & 3;
-   cpl = ctxt->ops->cpl(ctxt);
 
	/* NULL selector is not valid for TR, CS and SS (except for long mode) */
if ((seg == VCPU_SREG_CS
@@ -1544,6 +1543,13 @@ exception:
return X86EMUL_PROPAGATE_FAULT;
 }
 
+static int load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+  u16 selector, int seg)
+{
+   u8 cpl = ctxt->ops->cpl(ctxt);
+   return __load_segment_descriptor(ctxt, selector, seg, cpl);
+}
+
 static void write_register_operand(struct operand *op)
 {
/* The 4-byte case *is* correct: in 64-bit mode we zero-extend. */
@@ -2405,6 +2411,7 @@ static int load_state_from_tss16(struct x86_emulate_ctxt *ctxt,
 struct tss_segment_16 *tss)
 {
int ret;
+   u8 cpl;
 
ctxt->_eip = tss->ip;
ctxt->eflags = tss->flag | 2;
@@ -2427,23 +2434,25 @@ static int load_state_from_tss16(struct x86_emulate_ctxt *ctxt,
set_segment_selector(ctxt, tss->ss, VCPU_SREG_SS);
set_segment_selector(ctxt, tss->ds, VCPU_SREG_DS);
 
+   cpl = tss->cs & 3;
+
/*
 * Now load segment descriptors. If fault happens at this stage
 * it is handled in a context of new task
 */
-   ret = load_segment_descriptor(ctxt, tss->ldt, VCPU_SREG_LDTR);
+   ret = __load_segment_descriptor(ctxt, tss->ldt, VCPU_SREG_LDTR, cpl);
if (ret != X86EMUL_CONTINUE)
return ret;
-   ret = load_segment_descriptor(ctxt, tss->es, VCPU_SREG_ES);
+   ret = __load_segment_descriptor(ctxt, tss->es, VCPU_SREG_ES, cpl);
if (ret != X86EMUL_CONTINUE)
return ret;
-   ret = load_segment_descriptor(ctxt, tss->cs, VCPU_SREG_CS);
+   ret = __load_segment_descriptor(ctxt, tss->cs, VCPU_SREG_CS, cpl);
if (ret != X86EMUL_CONTINUE)
return ret;
-   ret = load_segment_descriptor(ctxt, tss->ss, VCPU_SREG_SS);
+   ret = __load_segment_descriptor(ctxt, tss->ss, VCPU_SREG_SS, cpl);
if (ret != X86EMUL_CONTINUE)
return ret;
-   ret = load_segment_descriptor(ctxt, tss->ds, VCPU_SREG_DS);
+   ret = __load_segment_descriptor(ctxt, tss->ds, VCPU_SREG_DS, cpl);
if (ret != X86EMUL_CONTINUE)
return ret;
 
@@ -2521,6 +2530,7 @@ static int load_state_from_tss32(struct x86_emulate_ctxt *ctxt,
 struct tss_segment_32 *tss)
 {
int ret;
+   u8 cpl;
 
if (ctxt->ops->set_cr(ctxt, 3, tss->cr3))
return emulate_gp(ctxt, 0);
@@ -2539,7 +2549,8 @@ static int load_state_from_tss32(struct x86_emulate_ctxt *ctxt,
 
/*
 * SDM says that segment selectors are loaded before segment
-* descriptors
+* descriptors.  This is important because CPL checks will
+* use CS.RPL.
 */
set_segment_selector(ctxt, tss->ldt_selector, VCPU_SREG_LDTR);
set_segment_selector(ctxt, tss->es, VCPU_SREG_ES);
@@ -2553,43 +2564,38 @@ static int load_state_from_tss32(struct x86_emulate_ctxt *ctxt,
 * If we're switching between Protected Mode and VM86, we need to make
 * sure to update the mode before loading the segment descriptors so
 * that the selectors are interpreted correctly.
-*
-* Need to get rflags to the vcpu
