On 08/30/2017 08:12 AM, Jan Beulich wrote:
>>>> On 30.08.17 at 07:33, <tianyu....@intel.com> wrote:
>> On 2017-08-29 16:49, Jan Beulich wrote:
>>>>>> On 29.08.17 at 06:38, <tianyu....@intel.com> wrote:
>>>> On 2017-08-25 22:10, Meng Xu wrote:
>>>>> How many VCPUs for a single VM do you want to support with this patch set?
>>>>
>>>> Hi Meng:
>>>>    Sorry for the late response. We hope to increase the max vcpu number to
>>>> 512. This also has dependencies on other work (e.g., cpu topology, multi-page
>>>> support for the ioreq server, and virtual IOMMU).
>>>
>>> I'm sorry for repeating this, but your first and foremost goal ought
>>> to be to address the known issues with VMs having up to 128
>>> vCPU-s; Andrew has been pointing this out in the past. I see no
>>> point in pushing up the limit if even the current limit doesn't work
>>> reliably in all cases.
>>>
>>
>> Hi Jan & Andrew:
>>      We ran some HPC benchmarks (e.g., HPLinpack, dgemm, sgemm, igemm and so
>> on) in a huge VM with 128 vcpus (even >255 vcpus with non-upstreamed
>> patches) and didn't hit any reliability issues. These benchmarks put heavy
>> workloads on the VM, and some of them run for several hours.
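
(As a rough sketch of what such a setup might look like -- the guest name,
disk path and sizes below are illustrative, not the configuration actually
used in those tests, and the option names should be checked against xl.cfg
for the Xen version in use -- a many-vCPU HVM guest could be described to
xl roughly like this:

  # Illustrative xl config for a many-vCPU HVM guest; all values made up.
  name     = "big-hvm-guest"
  builder  = "hvm"        # "type" on newer xl versions
  vcpus    = 128          # vCPUs online at boot
  maxvcpus = 128          # upper bound for vCPU hotplug
  memory   = 262144       # 256 GiB, in MiB
  disk     = [ 'phy:/dev/vg0/big-hvm-guest,xvda,w' ]
  vif      = [ 'bridge=xenbr0' ]

)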
> 
> I guess it heavily depends on what portions of hypervisor code
> those benchmarks exercise. Compute-intensive ones (which
> seem a likely case for HPC) aren't that interesting. Ones putting
> high pressure on e.g. the p2m lock, or ones causing high IPI rates
> (inside the guest) are likely to be more problematic.
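
To make the IPI point concrete, here is a minimal sketch (a hypothetical
microbenchmark, not something run in the tests above) of a guest workload
that keeps cross-vCPU IPI rates high: threads pinned to different vCPUs
share one address space, so every munmap() forces TLB-shootdown IPIs to
the vCPUs running the sibling threads.

  /*
   * Hypothetical guest-side IPI stress test (illustrative only).
   * Each worker is pinned to its own vCPU and loops over
   * mmap/fault/munmap; munmap of a mapping in a shared address
   * space sends TLB-shootdown IPIs to the other vCPUs currently
   * running threads of this process.
   */
  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define NTHREADS 8          /* scale toward the guest's vCPU count */
  #define MAPSIZE  (1 << 20)  /* 1 MiB scratch mapping per iteration */

  static void *worker(void *arg)
  {
      long cpu = (long)arg;
      cpu_set_t set;

      /* Pin this thread to its own vCPU. */
      CPU_ZERO(&set);
      CPU_SET(cpu, &set);
      pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

      for (;;) {
          char *p = mmap(NULL, MAPSIZE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (p == MAP_FAILED)
              continue;
          memset(p, 0xa5, MAPSIZE);   /* fault the pages in */
          munmap(p, MAPSIZE);         /* triggers shootdown IPIs */
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t tid[NTHREADS];

      for (long i = 0; i < NTHREADS; i++)
          pthread_create(&tid[i], NULL, worker, (void *)i);

      pause();   /* run until interrupted */
      return 0;
  }

Built with gcc -pthread and with NTHREADS scaled toward the guest's vCPU
count, a loop like this should exercise the guest-internal IPI and remote
TLB flush paths far more heavily than a compute-bound HPL run would.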

Right -- and so if Andy's assessment is accurate, it would be a security
issue to allow *untrusted* guests to run with such a huge number of
vcpus.  But it seems to me that allowing more vcpus would still be useful
for people who run only trusted guests, as long as they understand the
potential limitations.

 -George
