Platform:
CloudStack 4.11.2 on CentOS 7
KVM Hypervisor on CentOS 7

I have found throughput issues on our Virtual Routers (VRs) and tracked
them down to CPU IRQ load hitting 99% on the VR, which was caused by NIC
interrupts.

I decided to look up which NIC is being emulated on the VRs; lspci listed
three emulated Intel NICs:

00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet
Controller (rev 03)
00:04.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet
Controller (rev 03)
00:05.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet
Controller (rev 03)

All my regular VMs use virtio network devices, as specified in the
template settings (nicAdapter=virtio).

When I manually added nicAdapter=virtio to the user_vm_details table for a
VR and restarted it, everything came up as expected: the VR started with
virtio NICs, the IRQ issue was gone, and the throughput doubled. lspci on
the VR now showed:

00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device
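
For reference, the manual change was a direct insert into the cloud
database, roughly along these lines (a sketch only: the
vm_id/name/value/display columns are from memory and may differ per
version, and the VR id 1234 is just a placeholder):

  -- add the virtio NIC detail for one VR, then stop/start the VR so it picks it up
  INSERT INTO cloud.user_vm_details (vm_id, name, value, display)
  VALUES (1234, 'nicAdapter', 'virtio', 1);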

The problem I have is getting the nicAdapter setting applied at the
systemvm template level, like I do for regular VM templates, so that all
VRs are deployed with virtio network adapters. I don't want to set this
manually for every network I deploy. So I went to Templates -> my systemvm
template -> Settings -> Add Setting and added:

Name:  nicAdapter
Value: virtio
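
As far as I can tell that setting does land in vm_template_details; a
database check along these lines (column names assumed, template id is a
placeholder for my systemvm template) is what I mean by "specified" below:

  -- confirm the nicAdapter detail exists at template level
  SELECT name, value
  FROM cloud.vm_template_details
  WHERE template_id = 201        -- placeholder: id of my systemvm template
    AND name = 'nicAdapter';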

BUT it looks to me like system VMs don't honour the vm_template_details
table where nicAdapter is specified. After a VR was created, I looked up
the user_vm_details rows for that VR and found the nicAdapter=virtio entry
missing, although I would expect it to be there.
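
To illustrate, this is the kind of lookup I did for the freshly created VR
(again, the instance id is a placeholder); it returns no nicAdapter row:

  -- details for the newly created VR: no nicAdapter entry is present
  SELECT name, value
  FROM cloud.user_vm_details
  WHERE vm_id = 5678;            -- placeholder: id of the new VR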

If anyone running CloudStack with KVM could help with this, that would be
great. It might well be a bug that needs to be reported as such, but I'm
not sure at this stage.

If any of the above is not entirely clear, please let me know and I will
do my best to explain in more detail.

Thanks
Raf
