Re: [Xen-devel] [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology

2018-04-23 Thread Jan Beulich
>>> On 08.01.18 at 05:01,  wrote:
> Define interface, structures and hypercalls for toolstack to build
> cpu topology and for guest that will retrieve it [1].
> Two subop hypercalls introduced by this patch:
> XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
> and XENMEM_get_cpu_topology to retrieve cpu topology information.
> 
> [1]: during guest creation, those information helps hvmloader to build ACPI.
> 
> Signed-off-by: Chao Gao 

This will want revisiting once Andrew's domain creation rework series has
landed. In no case do I view XENMEM_get_cpu_topology as something
sensible - XENMEM, as its name says, is about memory.

Jan




Re: [Xen-devel] [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology

2018-01-10 Thread Andrew Cooper
On 09/01/2018 20:47, Chao Gao wrote:
> On Tue, Jan 09, 2018 at 11:47:54PM +, Andrew Cooper wrote:
>> On 08/01/18 04:01, Chao Gao wrote:
>>> Define interface, structures and hypercalls for toolstack to build
>>> cpu topology and for guest that will retrieve it [1].
>>> Two subop hypercalls introduced by this patch:
>>> XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
>>> and XENMEM_get_cpu_topology to retrieve cpu topology information.
>>>
>>> [1]: during guest creation, those information helps hvmloader to build ACPI.
>>>
>>> Signed-off-by: Chao Gao 
>> I'm sorry, but this is going in the wrong direction.  Details like this
>> should be contained and communicated exclusively in the CPUID policy.
>>
>> Before the spectre/meltdown fire started, I had a prototype series
>> introducing a toolstack interface for getting and setting a full CPUID
>> policy at once, rather than piecewise.  I will be continuing with this
> Is the new interface able to set the CPUID policy for each vCPU rather
> than per domain, as it is now? Otherwise I can't see how to set the
> APIC_ID for each vCPU without introducing a new interface.

AFAIR, all APIC IDs can be uniquely and correctly derived from the
shift/count information in leaf 0xb.
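
As a minimal illustrative sketch (not part of this series, and assuming a
GCC/Clang toolchain that provides <cpuid.h>), leaf 0xb already exposes the
shift widths needed to split an x2APIC ID into thread/core/package fields,
so no extra interface is required:

/* Sketch only: level-type and leaf-presence checks omitted for brevity. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Sub-leaf 0: EAX[4:0] = bits to shift the x2APIC ID right to reach
     * the core level (the SMT width); EDX = this CPU's x2APIC ID. */
    __cpuid_count(0xb, 0, eax, ebx, ecx, edx);
    unsigned int smt_shift = eax & 0x1f;
    unsigned int x2apic_id = edx;

    /* Sub-leaf 1: EAX[4:0] = bits to shift right to reach the package level. */
    __cpuid_count(0xb, 1, eax, ebx, ecx, edx);
    unsigned int pkg_shift = eax & 0x1f;

    printf("x2APIC ID %u: thread %u, core %u, package %u\n",
           x2apic_id,
           x2apic_id & ((1u << smt_shift) - 1),
           (x2apic_id >> smt_shift) & ((1u << (pkg_shift - smt_shift)) - 1),
           x2apic_id >> pkg_shift);
    return 0;
}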

We've already got some CPUID information that is derived either entirely
dynamically (e.g. OSXSAVE), or when first set.  This isn't any different.

>
>> work once the dust settles.
>>
>> In particular, we should not have multiple ways of conveying the same
>> information, or duplication of the same data inside the hypervisor.
>>
>> If you rearrange your series to put the struct cpuid_policy changes
>> first, then patch 2 will become far more simple.  HVMLoader should
>> derive its topology information from the CPUID instruction, just as is
>> expected on native hardware.
> Good point. It seems that in HVMLoader the BSP should boot the APs in a
> broadcast fashion, collect the information via CPUID, and then build the
> MADT/SRAT.

We already do that for MTRR setup, but as before, I'm fairly sure the
BSP can do this alone.
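
As a rough sketch of that BSP-only approach (made-up topology values, not
hvmloader code), the BSP can synthesize every vCPU's APIC ID from
threads-per-core and cores-per-socket alone, which is all that MADT/SRAT
construction needs:

#include <stdio.h>

/* Smallest s such that (1u << s) >= n. */
static unsigned int order(unsigned int n)
{
    unsigned int s = 0;
    while ((1u << s) < n)
        s++;
    return s;
}

int main(void)
{
    /* Assumed example topology: 2 threads/core, 4 cores/socket, 16 vCPUs. */
    const unsigned int threads_per_core = 2, cores_per_socket = 4, nr_vcpus = 16;
    unsigned int smt_shift = order(threads_per_core);
    unsigned int pkg_shift = smt_shift + order(cores_per_socket);

    for (unsigned int v = 0; v < nr_vcpus; v++)
    {
        unsigned int thread = v % threads_per_core;
        unsigned int core = (v / threads_per_core) % cores_per_socket;
        unsigned int socket = v / (threads_per_core * cores_per_socket);
        unsigned int apic_id = (socket << pkg_shift) | (core << smt_shift) | thread;

        printf("vCPU %u -> APIC ID %u\n", v, apic_id);
    }
    return 0;
}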

~Andrew


Re: [Xen-devel] [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology

2018-01-09 Thread Chao Gao
On Tue, Jan 09, 2018 at 11:47:54PM +, Andrew Cooper wrote:
>On 08/01/18 04:01, Chao Gao wrote:
>> Define interface, structures and hypercalls for toolstack to build
>> cpu topology and for guest that will retrieve it [1].
>> Two subop hypercalls introduced by this patch:
>> XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
>> and XENMEM_get_cpu_topology to retrieve cpu topology information.
>>
>> [1]: during guest creation, those information helps hvmloader to build ACPI.
>>
>> Signed-off-by: Chao Gao 
>
>I'm sorry, but this is going in the wrong direction.  Details like this
>should be contained and communicated exclusively in the CPUID policy.
>
>Before the spectre/meltdown fire started, I had a prototype series
>introducing a toolstack interface for getting and setting a full CPUID
>policy at once, rather than piecewise.  I will be continuing with this

Is the new interface able to set the CPUID policy for each vCPU rather
than per domain, as it is now? Otherwise I can't see how to set the
APIC_ID for each vCPU without introducing a new interface.

>work once the dust settles.
>
>In particular, we should not have multiple ways of conveying the same
>information, or duplication of the same data inside the hypervisor.
>
>If you rearrange your series to put the struct cpuid_policy changes
>first, then patch 2 will become far more simple.  HVMLoader should
>derive its topology information from the CPUID instruction, just as is
>expected on native hardware.

Good point. It seems that in HVMLoader the BSP should boot the APs in a
broadcast fashion, collect the information via CPUID, and then build the
MADT/SRAT.

Thanks
Chao


Re: [Xen-devel] [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology

2018-01-09 Thread Chao Gao
On Tue, Jan 09, 2018 at 12:18:13PM -0500, Daniel De Graaf wrote:
>On 01/09/2018 04:06 AM, Chao Gao wrote:
>> On Mon, Jan 08, 2018 at 01:14:44PM -0500, Daniel De Graaf wrote:
>> > On 01/07/2018 11:01 PM, Chao Gao wrote:
>> > > Define interface, structures and hypercalls for toolstack to build
>> > > cpu topology and for guest that will retrieve it [1].
>> > > Two subop hypercalls introduced by this patch:
>> > > XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
>> > > and XENMEM_get_cpu_topology to retrieve cpu topology information.
>> > > 
>> > > [1]: during guest creation, those information helps hvmloader to build 
>> > > ACPI.
>> > > 
>> > > Signed-off-by: Chao Gao 
>> > 
>> > When adding new XSM controls for use by device models, you also
>> > need to add the permissions to the device_model macro defined in
>> > tools/flask/policy/modules/xen.if.  If domains need to call this
>> > function on themselves (is this only true for get?), you will also
>> > need to add it to declare_domain_common.
>> > 
>> 
>> Hi, Daniel.
>> 
>> Yes. XENMEM_get_cpu_topology will be called by the domain itself,
>> and both get and set will be called by dom0 when creating a domain.
>> So I need to:
>> 1. add *set* and *get* to create_domain_common.
>> 2. add *set* to declare_domain_common.
>> 
>> Is that right?
>> 
>> Thanks
>> Chao
>
>It sounds like you need to add get to declare_domain_common (not set)
>because the domain only needs to invoke this on itself.  If the device
>model doesn't need to use these hypercalls (would guest cpu hotplug or
>similar things need them?), then that's all you need to add.

Got it. I will first determine whether the device model needs these
hypercalls and, if so, change the device_model macro accordingly.

Thanks
chao


Re: [Xen-devel] [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology

2018-01-09 Thread Andrew Cooper
On 08/01/18 04:01, Chao Gao wrote:
> Define interface, structures and hypercalls for toolstack to build
> cpu topology and for guest that will retrieve it [1].
> Two subop hypercalls introduced by this patch:
> XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
> and XENMEM_get_cpu_topology to retrieve cpu topology information.
>
> [1]: during guest creation, those information helps hvmloader to build ACPI.
>
> Signed-off-by: Chao Gao 

I'm sorry, but this is going in the wrong direction.  Details like this
should be contained and communicated exclusively in the CPUID policy.

Before the spectre/meltdown fire started, I had a prototype series
introducing a toolstack interface for getting and setting a full CPUID
policy at once, rather than piecewise.  I will be continuing with this
work once the dust settles.

In particular, we should not have multiple ways of conveying the same
information, or duplication of the same data inside the hypervisor.

If you rearrange your series to put the struct cpuid_policy changes
first, then patch 2 will become far more simple.  HVMLoader should
derive its topology information from the CPUID instruction, just as is
expected on native hardware.

~Andrew


Re: [Xen-devel] [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology

2018-01-09 Thread Daniel De Graaf

On 01/09/2018 04:06 AM, Chao Gao wrote:

On Mon, Jan 08, 2018 at 01:14:44PM -0500, Daniel De Graaf wrote:

On 01/07/2018 11:01 PM, Chao Gao wrote:

Define interface, structures and hypercalls for toolstack to build
cpu topology and for guest that will retrieve it [1].
Two subop hypercalls introduced by this patch:
XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
and XENMEM_get_cpu_topology to retrieve cpu topology information.

[1]: during guest creation, those information helps hvmloader to build ACPI.

Signed-off-by: Chao Gao 


When adding new XSM controls for use by device models, you also
need to add the permissions to the device_model macro defined in
tools/flask/policy/modules/xen.if.  If domains need to call this
function on themselves (is this only true for get?), you will also
need to add it to declare_domain_common.



Hi, Daniel.

Yes. XENMEM_get_cpu_topology will be called by the domain itself,
and both get and set will be called by dom0 when creating a domain.
So I need to:
1. add *set* and *get* to create_domain_common.
2. add *set* to declare_domain_common.

Is that right?

Thanks
Chao


It sounds like you need to add get to declare_domain_common (not set)
because the domain only needs to invoke this on itself.  If the device
model doesn't need to use these hypercalls (would guest cpu hotplug or
similar things need them?), then that's all you need to add.


Re: [Xen-devel] [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology

2018-01-09 Thread Chao Gao
On Mon, Jan 08, 2018 at 01:14:44PM -0500, Daniel De Graaf wrote:
>On 01/07/2018 11:01 PM, Chao Gao wrote:
>> Define interface, structures and hypercalls for toolstack to build
>> cpu topology and for guest that will retrieve it [1].
>> Two subop hypercalls introduced by this patch:
>> XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
>> and XENMEM_get_cpu_topology to retrieve cpu topology information.
>> 
>> [1]: during guest creation, those information helps hvmloader to build ACPI.
>> 
>> Signed-off-by: Chao Gao 
>
>When adding new XSM controls for use by device models, you also
>need to add the permissions to the device_model macro defined in
>tools/flask/policy/modules/xen.if.  If domains need to call this
>function on themselves (is this only true for get?), you will also
>need to add it to declare_domain_common.
>

Hi, Daniel.

Yes. XENMEM_get_cpu_topology will be called by the domain itself,
and both get and set will be called by dom0 when creating a domain.
So I need to:
1. add *set* and *get* to create_domain_common.
2. add *set* to declare_domain_common.

Is that right?

Thanks
Chao

>> ---
>>   xen/arch/x86/domctl.c   | 27 ++
>>   xen/arch/x86/hvm/hvm.c  |  7 ++
>>   xen/arch/x86/mm.c   | 45 +
>>   xen/include/asm-x86/hvm/domain.h| 15 +
>>   xen/include/public/domctl.h | 22 ++
>>   xen/include/public/memory.h | 27 +-
>>   xen/include/xsm/dummy.h |  6 +
>>   xen/xsm/dummy.c |  1 +
>>   xen/xsm/flask/hooks.c   | 10 +
>>   xen/xsm/flask/policy/access_vectors |  4 
>>   10 files changed, 163 insertions(+), 1 deletion(-)
>> 
>> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
>> index 36ab235..4e1bbd5 100644
>> --- a/xen/arch/x86/domctl.c
>> +++ b/xen/arch/x86/domctl.c
>> @@ -347,6 +347,29 @@ void arch_get_domain_info(const struct domain *d,
>>   info->flags |= XEN_DOMINF_hap;
>>   }
>> +static int arch_set_cpu_topology(struct domain *d,
>> + struct xen_domctl_cpu_topology *topology)
>> +{
>> +if ( !is_hvm_domain(d) ||
>> + !topology->size || topology->size > HVM_MAX_VCPUS )
>> +return -EINVAL;
>> +
>> +if ( !d->arch.hvm_domain.apic_id )
>> +d->arch.hvm_domain.apic_id = xmalloc_array(uint32_t, topology->size);
>> +
>> +if ( !d->arch.hvm_domain.apic_id )
>> +return -ENOMEM;
>> +
>> +if ( copy_from_guest(d->arch.hvm_domain.apic_id, topology->tid,
>> + topology->size) )
>> +return -EFAULT;
>> +
>> +d->arch.hvm_domain.apic_id_size = topology->size;
>> +d->arch.hvm_domain.core_per_socket = topology->core_per_socket;
>> +d->arch.hvm_domain.thread_per_core = topology->thread_per_core;
>> +return 0;
>> +}
>> +
>>   #define MAX_IOPORTS 0x1
>>   long arch_do_domctl(
>> @@ -1555,6 +1578,10 @@ long arch_do_domctl(
>>   recalculate_cpuid_policy(d);
>>   break;
>> +case XEN_DOMCTL_set_cpu_topology:
>> +ret = arch_set_cpu_topology(d, &domctl->u.cpu_topology);
>> +break;
>> +
>>   default:
>>   ret = iommu_do_domctl(domctl, d, u_domctl);
>>   break;
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index 71fddfd..b3b3224 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -1509,6 +1509,13 @@ int hvm_vcpu_initialise(struct vcpu *v)
>>   int rc;
>>   struct domain *d = v->domain;
>> +if ( v->vcpu_id > d->arch.hvm_domain.apic_id_size )
>> +{
>> +printk(XENLOG_ERR "d%dv%d's apic id isn't set.\n",
>> +   d->domain_id, v->vcpu_id);
>> +return -ENOENT;
>> +}
>> +
>>   hvm_asid_flush_vcpu(v);
>>   spin_lock_init(&v->arch.hvm_vcpu.tm_lock);
>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>> index a56f875..b90e663 100644
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -4413,6 +4413,51 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>   return rc;
>>   }
>> +case XENMEM_get_cpu_topology:
>> +{
>> +struct domain *d;
>> +struct xen_cpu_topology_info topology;
>> +
>> +if ( copy_from_guest(&topology, arg, 1) )
>> +return -EFAULT;
>> +
>> +if ( topology.pad || topology.pad2 )
>> +return -EINVAL;
>> +
>> +if ( (d = rcu_lock_domain_by_any_id(topology.domid)) == NULL )
>> +return -ESRCH;
>> +
>> +rc = xsm_get_cpu_topology(XSM_TARGET, d);
>> +if ( rc )
>> +goto get_cpu_topology_failed;
>> +
>> +rc = -EOPNOTSUPP;
>> +if ( !is_hvm_domain(d) || !d->arch.hvm_domain.apic_id )
>> +goto get_cpu_topology_failed;
>> +
>> +/* allow the size to be zero for users who don't care apic_id */

Re: [Xen-devel] [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology

2018-01-08 Thread Daniel De Graaf

On 01/07/2018 11:01 PM, Chao Gao wrote:

Define interface, structures and hypercalls for toolstack to build
cpu topology and for guest that will retrieve it [1].
Two subop hypercalls introduced by this patch:
XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
and XENMEM_get_cpu_topology to retrieve cpu topology information.

[1]: during guest creation, those information helps hvmloader to build ACPI.

Signed-off-by: Chao Gao 


When adding new XSM controls for use by device models, you also
need to add the permissions to the device_model macro defined in
tools/flask/policy/modules/xen.if.  If domains need to call this
function on themselves (is this only true for get?), you will also
need to add it to declare_domain_common.


---
  xen/arch/x86/domctl.c   | 27 ++
  xen/arch/x86/hvm/hvm.c  |  7 ++
  xen/arch/x86/mm.c   | 45 +
  xen/include/asm-x86/hvm/domain.h| 15 +
  xen/include/public/domctl.h | 22 ++
  xen/include/public/memory.h | 27 +-
  xen/include/xsm/dummy.h |  6 +
  xen/xsm/dummy.c |  1 +
  xen/xsm/flask/hooks.c   | 10 +
  xen/xsm/flask/policy/access_vectors |  4 
  10 files changed, 163 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 36ab235..4e1bbd5 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -347,6 +347,29 @@ void arch_get_domain_info(const struct domain *d,
  info->flags |= XEN_DOMINF_hap;
  }
  
+static int arch_set_cpu_topology(struct domain *d,
+ struct xen_domctl_cpu_topology *topology)
+{
+if ( !is_hvm_domain(d) ||
+ !topology->size || topology->size > HVM_MAX_VCPUS )
+return -EINVAL;
+
+if ( !d->arch.hvm_domain.apic_id )
+d->arch.hvm_domain.apic_id = xmalloc_array(uint32_t, topology->size);
+
+if ( !d->arch.hvm_domain.apic_id )
+return -ENOMEM;
+
+if ( copy_from_guest(d->arch.hvm_domain.apic_id, topology->tid,
+ topology->size) )
+return -EFAULT;
+
+d->arch.hvm_domain.apic_id_size = topology->size;
+d->arch.hvm_domain.core_per_socket = topology->core_per_socket;
+d->arch.hvm_domain.thread_per_core = topology->thread_per_core;
+return 0;
+}
+
  #define MAX_IOPORTS 0x1
  
  long arch_do_domctl(
@@ -1555,6 +1578,10 @@ long arch_do_domctl(
  recalculate_cpuid_policy(d);
  break;
  
+case XEN_DOMCTL_set_cpu_topology:
+ret = arch_set_cpu_topology(d, &domctl->u.cpu_topology);
+break;
+
  default:
  ret = iommu_do_domctl(domctl, d, u_domctl);
  break;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 71fddfd..b3b3224 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1509,6 +1509,13 @@ int hvm_vcpu_initialise(struct vcpu *v)
  int rc;
  struct domain *d = v->domain;
  
+if ( v->vcpu_id > d->arch.hvm_domain.apic_id_size )
+{
+printk(XENLOG_ERR "d%dv%d's apic id isn't set.\n",
+   d->domain_id, v->vcpu_id);
+return -ENOENT;
+}
+
  hvm_asid_flush_vcpu(v);
  
  spin_lock_init(&v->arch.hvm_vcpu.tm_lock);

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index a56f875..b90e663 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4413,6 +4413,51 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
  return rc;
  }
  
+case XENMEM_get_cpu_topology:
+{
+struct domain *d;
+struct xen_cpu_topology_info topology;
+
+if ( copy_from_guest(&topology, arg, 1) )
+return -EFAULT;
+
+if ( topology.pad || topology.pad2 )
+return -EINVAL;
+
+if ( (d = rcu_lock_domain_by_any_id(topology.domid)) == NULL )
+return -ESRCH;
+
+rc = xsm_get_cpu_topology(XSM_TARGET, d);
+if ( rc )
+goto get_cpu_topology_failed;
+
+rc = -EOPNOTSUPP;
+if ( !is_hvm_domain(d) || !d->arch.hvm_domain.apic_id )
+goto get_cpu_topology_failed;
+
+/* allow the size to be zero for users who don't care apic_id */
+if ( topology.size )
+{
+rc = -E2BIG;
+if ( topology.size != d->arch.hvm_domain.apic_id_size )
+goto get_cpu_topology_failed;
+
+rc = -EFAULT;
+if ( copy_to_guest(topology.tid.h, d->arch.hvm_domain.apic_id,
+   topology.size) )
+goto get_cpu_topology_failed;
+}
+
+topology.core_per_socket = d->arch.hvm_domain.core_per_socket;
+topology.thread_per_core = d->arch.hvm_domain.thread_per_core;
+
+rc = __copy_to_guest(arg, &topology, 1) ? -EFAULT : 0;
+
+ get_cpu_topology_failed:
+rcu_unlock_domain(d);
+