[ovirt-users] Re: Best performance for vcpu pinning

2022-11-16 Thread Tomáš Golembiovský
On Tue, Nov 15, 2022 at 05:53:13PM -, martin.dec...@ora-solutions.net wrote:
> Hi Klaas,
> 
> in our case, the Hypervisor CPU looks like this:
> 
> CPU(s):32
> On-line CPU(s) list:   0-31
> Thread(s) per core:2
> Core(s) per socket:8
> Socket(s): 2
> NUMA node(s):  2
> NUMA node0 CPU(s): 0-7,16-23
> NUMA node1 CPU(s): 8-15,24-31
> 
> The PROD VM should have 24 PINNED (due to licensing requirements) vCPUs out 
> of the 32 threads and around 838G of the 1 TB RAM.
> The TEST VM should have 8 vCPUs out of the 32 threads and around 128G of 1 TB 
> RAM.
> 
> With these PROD VM Requirements, it does not make sense to limit vCPUs to one 
> socket / NUMA node. The TEST VM can be limited to one socket.

In this case the option would be to define two vNUMA nodes in your
PROD and TEST VMs. Give your PROD VM 12+12 vCPUs and your TEST VM 4+4.
Then make sure to define the pinning so that the vCPUs of each vNUMA
node align with the CPUs of a physical NUMA node. Also, sharing the CPU
that VDSM uses with the TEST VM might be preferable, unless you want to
share some CPUs between TEST and PROD. So like this:

TEST: pCPUs 0,1,16,17 map to first vNUMA
  pCPUs 8,9,24,25 map to second vNUMA

PROD: pCPUs 2-7,18-23 map to first vNUMA
  pCPUs 10-15,26-31 map to second vNUMA
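
The mapping above can be written out as oVirt "CPU Pinning Topology" strings. A minimal sketch; the helper name and the exact vCPU-to-pCPU ordering are illustrative assumptions (virtual sibling threads paired with physical siblings n and n+16), not anything oVirt generates:

```python
# Build oVirt "CPU Pinning Topology" strings (vcpu#pcpus_vcpu#pcpus_...)
# for the TEST/PROD layout sketched above. The helper name and the
# vCPU-to-pCPU order are illustrative assumptions: virtual sibling
# threads (vCPU 2n, 2n+1) are paired with physical siblings (n, n+16).

def pinning_string(vcpu_to_pcpus):
    """vcpu_to_pcpus: list where index = vCPU id, value = list of pCPUs."""
    return "_".join(
        f"{vcpu}#{','.join(str(p) for p in pcpus)}"
        for vcpu, pcpus in enumerate(vcpu_to_pcpus)
    )

# TEST: vNUMA0 -> pCPUs 0,1,16,17 ; vNUMA1 -> pCPUs 8,9,24,25
test_pcpus = [[c] for n in (0, 1, 8, 9) for c in (n, n + 16)]

# PROD: vNUMA0 -> pCPUs 2-7,18-23 ; vNUMA1 -> pCPUs 10-15,26-31
prod_pcpus = [[c] for n in (*range(2, 8), *range(10, 16)) for c in (n, n + 16)]

print(pinning_string(test_pcpus))
# 0#0_1#16_2#1_3#17_4#8_5#24_6#9_7#25
```

The resulting string goes into the "CPU Pinning Topology" field of the VM's "Resource Allocation" tab.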

> 
> I guess I have made these mistakes:
> 
> - I should not use physical CPU thread 0 but should leave it for the hypervisor
> - I have configured the VM with "2 threads per core" using <topology sockets='16' cores='6' threads='2'/>,

This does not add up. 16 sockets is a bit too much. I think you meant
2 sockets.

> but have not specified two threads for each vCPU in the pinning field.

I don't think this is strictly necessary, but you definitely want to
make sure that the virtual threads match the physical threads. In other
words, you only want to mix and match pCPUs 0 and 16 (or 1 and 17, etc.), so
I would say 0#0_1#16 is fine, just like 0#0,16_1#0,16.

Alternatively, you may define a single-threaded VM (topology 2:12:1)
and not care about the physical threads that much.

Which of the alternatives is best would depend on your workload in the
guest and how well it can benefit from the knowledge of the host
topology.
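
Whether virtual thread pairs actually land on physical sibling threads can be checked mechanically. A minimal sketch, assuming this host's sibling rule (pCPU n and n+16 share a core, per the lscpu output quoted above) and the oVirt pinning-string syntax used in this thread; the function names are hypothetical helpers, not oVirt utilities:

```python
# Check that an oVirt pinning string keeps virtual thread pairs on
# physical sibling threads. On this host (32 CPUs, 2 threads/core)
# pCPU n and n+16 are siblings, so p % 16 identifies the core.

SIBLING_OFFSET = 16  # pCPU n and n+16 share a core on this host

def parse_pinning(s):
    """'0#0_1#16' -> {0: {0}, 1: {16}}"""
    out = {}
    for part in s.split("_"):
        vcpu, pcpus = part.split("#")
        out[int(vcpu)] = {int(p) for p in pcpus.split(",")}
    return out

def threads_aligned(s, threads_per_core=2):
    """True if each consecutive vCPU pair (virtual siblings) is confined
    to a single physical core (a pCPU and its +16 sibling)."""
    pin = parse_pinning(s)
    for v in range(0, len(pin), threads_per_core):
        allowed = pin[v] | pin[v + 1]
        cores = {p % SIBLING_OFFSET for p in allowed}
        if len(cores) != 1:
            return False
    return True

print(threads_aligned("0#0_1#16"))       # True: one core, two siblings
print(threads_aligned("0#0,16_1#0,16"))  # True: same core, floating pair
print(threads_aligned("0#0_1#1"))        # False: spans two cores
```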


Hope this helps

Tomas

> In the oVirt "Resource Allocation" tab, in "CPU Pinning Topology", I have
> specified:
> 0#8_1#9_2#10_3#11_4#12_5#13_6#14_7#15_8#16_9#17_10#18_11#19_12#20_13#21_14#22_15#23_16#24_17#25_18#26_19#27_20#28_21#29_22#30_23#31
> 
> The Calculate CPU Pinning script from the RHV HANA Guide results in this 
> mapping:
> 
> 0#1,17
> 1#1,17
> 2#2,18
> 3#2,18
> 4#3,19
> 5#3,19
> 6#4,20
> 7#4,20
> 8#5,21
> 9#5,21
> 10#6,22
> 11#6,22
> 12#9,25
> 13#9,25
> 14#10,26
> 15#10,26
> 16#11,27
> 17#11,27
> 18#12,28
> 19#12,28
> 20#13,29
> 21#13,29
> 22#14,30
> 23#14,30
> 
> But with this approach, I cannot separate CPUs between the PROD VM and the TEST VM.
> 
> Any ideas?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PF7UCDGZPL3FJ4O5XBICJNSJKOM2ALBE/


[ovirt-users] Re: Best performance for vcpu pinning

2022-11-15 Thread martin.decker
Hi Klaas,

in our case, the Hypervisor CPU looks like this:

CPU(s):32
On-line CPU(s) list:   0-31
Thread(s) per core:2
Core(s) per socket:8
Socket(s): 2
NUMA node(s):  2
NUMA node0 CPU(s): 0-7,16-23
NUMA node1 CPU(s): 8-15,24-31

The PROD VM should have 24 PINNED (due to licensing requirements) vCPUs out of 
the 32 threads and around 838G of the 1 TB RAM.
The TEST VM should have 8 vCPUs out of the 32 threads and around 128G of 1 TB 
RAM.

With these PROD VM Requirements, it does not make sense to limit vCPUs to one 
socket / NUMA node. The TEST VM can be limited to one socket.

I guess I have made these mistakes:

- I should not use physical CPU thread 0 but should leave it for the hypervisor
- I have configured the VM with "2 threads per core" using <topology sockets='16' cores='6' threads='2'/>, but have not specified two threads for each vCPU in
the pinning field. In the oVirt "Resource Allocation" tab, in "CPU Pinning
Topology", I have specified:
0#8_1#9_2#10_3#11_4#12_5#13_6#14_7#15_8#16_9#17_10#18_11#19_12#20_13#21_14#22_15#23_16#24_17#25_18#26_19#27_20#28_21#29_22#30_23#31

The Calculate CPU Pinning script from the RHV HANA Guide results in this 
mapping:

0#1,17
1#1,17
2#2,18
3#2,18
4#3,19
5#3,19
6#4,20
7#4,20
8#5,21
9#5,21
10#6,22
11#6,22
12#9,25
13#9,25
14#10,26
15#10,26
16#11,27
17#11,27
18#12,28
19#12,28
20#13,29
21#13,29
22#14,30
23#14,30

But with this approach, I cannot separate CPUs between the PROD VM and the TEST VM.

Any ideas?


[ovirt-users] Re: Best performance for vcpu pinning

2022-11-14 Thread Klaas Demter
There is a script in the RHV for HANA Guide: 
https://access.redhat.com/sites/default/files/attachments/deploying_sap_hana_on_red_hat_virtualization_4.4_with_lun_pt_6tb_lm_and_cooper_lake.pdf 
(search for "Calculate CPU Pinning")


Greetings

Klaas



[ovirt-users] Re: Best performance for vcpu pinning

2022-11-14 Thread Liran Rotenberg
On Mon, Nov 14, 2022 at 11:12 AM  wrote:

> Hello List,
>
> how can I achieve the best performance with vcpu pinning in KVM?
>
> Is it better to have 1:1 mapping between virtual and physical thread like
> this:
>
> <vcpupin vcpu='0' cpuset='0'/>
> <vcpupin vcpu='1' cpuset='1'/>
> <vcpupin vcpu='2' cpuset='2'/>
> <vcpupin vcpu='3' cpuset='3'/>
> <vcpupin vcpu='4' cpuset='4'/>
> <vcpupin vcpu='5' cpuset='5'/>
> <vcpupin vcpu='6' cpuset='6'/>
> <vcpupin vcpu='7' cpuset='7'/>
>
> Or is it better to allow each vCPU to run on any of the limited number of
> physical threads?
>
> e.g.
> <vcpupin vcpu='0' cpuset='0-7'/>
> <vcpupin vcpu='1' cpuset='0-7'/>
> <vcpupin vcpu='2' cpuset='0-7'/>
> <vcpupin vcpu='3' cpuset='0-7'/>
> <vcpupin vcpu='4' cpuset='0-7'/>
> <vcpupin vcpu='5' cpuset='0-7'/>
> <vcpupin vcpu='6' cpuset='0-7'/>
> <vcpupin vcpu='7' cpuset='0-7'/>
>
>
> The hypervisor host has 2 CPUs. Each CPU has 8 cores and each core 2
> Threads. In total this are 32 threads.
>
>  lscpu
> Architecture:  x86_64
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):32
> On-line CPU(s) list:   0-31
> Thread(s) per core:2
> Core(s) per socket:8
> Socket(s): 2
> NUMA node(s):  2
> Vendor ID: GenuineIntel
> CPU family:6
> Model: 85
> Model name:Intel(R) Xeon(R) Gold 6244 CPU @ 3.60GHz
> Stepping:  7
> CPU MHz:   4213.731
> CPU max MHz:   4400.0000
> CPU min MHz:   1200.0000
> BogoMIPS:  7200.00
> Virtualization:VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache:  1024K
> L3 cache:  25344K
> NUMA node0 CPU(s): 0-7,16-23
> NUMA node1 CPU(s): 8-15,24-31
>
> I need to run 2 VMs on this host. One is for production and should have 24
> vCPUs and the other one is for test and should have 8 vCPUs. Test VM
> workload should not impact Prod VM performance.
>
> Current configuration is:
>
> TEST:
>
>   128
>   1
>   <cputune>
>     <vcpupin vcpu='0' cpuset='0'/>
>     <vcpupin vcpu='1' cpuset='1'/>
>     <vcpupin vcpu='2' cpuset='2'/>
>     <vcpupin vcpu='3' cpuset='3'/>
>     <vcpupin vcpu='4' cpuset='4'/>
>     <vcpupin vcpu='5' cpuset='5'/>
>     <vcpupin vcpu='6' cpuset='6'/>
>     <vcpupin vcpu='7' cpuset='7'/>
>   </cputune>
>
>   
> 
> 
>   
> 
>   
>
> PROD:
>
>   192
>   1
>   <cputune>
>     <vcpupin vcpu='0' cpuset='8'/>
>     <vcpupin vcpu='1' cpuset='9'/>
>     <vcpupin vcpu='2' cpuset='10'/>
>     <vcpupin vcpu='3' cpuset='11'/>
>     <vcpupin vcpu='4' cpuset='12'/>
>     <vcpupin vcpu='5' cpuset='13'/>
>     <vcpupin vcpu='6' cpuset='14'/>
>     <vcpupin vcpu='7' cpuset='15'/>
>     <vcpupin vcpu='8' cpuset='16'/>
>     <vcpupin vcpu='9' cpuset='17'/>
>     <vcpupin vcpu='10' cpuset='18'/>
>     <vcpupin vcpu='11' cpuset='19'/>
>     <vcpupin vcpu='12' cpuset='20'/>
>     <vcpupin vcpu='13' cpuset='21'/>
>     <vcpupin vcpu='14' cpuset='22'/>
>     <vcpupin vcpu='15' cpuset='23'/>
>     <vcpupin vcpu='16' cpuset='24'/>
>     <vcpupin vcpu='17' cpuset='25'/>
>     <vcpupin vcpu='18' cpuset='26'/>
>     <vcpupin vcpu='19' cpuset='27'/>
>     <vcpupin vcpu='20' cpuset='28'/>
>     <vcpupin vcpu='21' cpuset='29'/>
>     <vcpupin vcpu='22' cpuset='30'/>
>     <vcpupin vcpu='23' cpuset='31'/>
>   </cputune>
>
>   
> 
> 
>   
> 
>   
>
> Versions are:
> ovirt-host-4.3.5
> libvirt-5.7.0-28.el7.x86_64
>
Hi Martin,
The recommended approach would be to pin the vCPUs to specific physical
CPUs.
The most important part is to stay within the same socket.
VDSM uses physical CPU 1 unless you changed the default VDSM settings, so
it may be better to use the second socket.
You may also want to follow the NUMA topology (using socket 1, that would
be NUMA node1 CPU(s): 8-15,24-31).
Overall:
0#8,24_1#9,25_2#10,26 and so on.
This assumes that physical CPUs 8 and 24 are two threads of the same core.
You can check it using the VDSM API or just # cat /proc/cpuinfo.
Note that physical CPUs used with this method are in a shared pool and
can be used by other VMs.
If you want them to be exclusive to the VM, you can use the dedicated CPU
feature under the VM "Resource Allocation" tab. Note that it will pin the
CPUs for you.
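
A sketch of that check: on Linux the kernel lists each CPU's sibling threads in /sys/devices/system/cpu/cpuN/topology/thread_siblings_list, in cpu-list strings like "8,24". The parser below is a hypothetical helper (not part of VDSM), with the sysfs read shown only as a comment so the function stays self-contained:

```python
# Expand kernel cpu-list strings such as "8,24" or "0-3" into integer
# lists, to confirm which pCPUs are sibling threads of one core.

def parse_cpu_list(text):
    """Expand a kernel cpu-list string like '8,24' or '0-3' into ints."""
    cpus = []
    for chunk in text.strip().split(","):
        if "-" in chunk:
            lo, hi = chunk.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(chunk))
    return cpus

# On a real host one would read, per CPU n:
# open(f"/sys/devices/system/cpu/cpu{n}/topology/thread_siblings_list").read()

print(parse_cpu_list("8,24"))      # [8, 24] -> pCPUs 8 and 24 share a core
print(parse_cpu_list("8-15,24-31"))  # all pCPUs of NUMA node1
```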

Regards,
Liran.


> Thanks in advance,
> Martin