Hey, good to hear from you.

If you want to thank me, I would appreciate it if you could test the other
Debian OS types (6/7/8/9, whichever are available there) and report which of
those give Intel NICs - it is a bug by all means, but hard to say exactly
where. Also: which CentOS 7 version exactly, and which qemu-kvm/libvirt
versions are you running?
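If it helps, something like the following (run on the KVM host; package names assume CentOS 7, and the `|| true` guards are just so it is harmless to paste anywhere) would collect everything I'm asking for:

```shell
# Gather the version details (paths/package names assume CentOS 7;
# "|| true" keeps the snippet harmless on hosts where a tool is missing).
cat /etc/redhat-release 2>/dev/null || true   # exact CentOS release
rpm -q qemu-kvm libvirt 2>/dev/null || true   # packaged qemu-kvm/libvirt versions
virsh version 2>/dev/null || true             # versions as libvirt itself reports them
```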

Cheers
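P.S. For reference, the DB change I described earlier (pointing the systemVM
template at a different guest OS) would look roughly like the sketch below.
Treat it as a sketch only: the exact column holding the Debian name
(`display_name` vs `name`) varies between versions, the LIKE patterns are
placeholders, and you should verify both IDs with the SELECTs - and back up
the `cloud` database - before running the UPDATE.

```sql
-- Find the guest_os row for the Debian release you want (pattern is a placeholder):
SELECT id, display_name FROM cloud.guest_os WHERE display_name LIKE 'Debian%64%';

-- Find the systemVM template row (name pattern is a placeholder):
SELECT id, name, guest_os_id FROM cloud.vm_template WHERE name LIKE '%systemvm%';

-- Point the template at the chosen guest OS (replace both IDs with the values above):
UPDATE cloud.vm_template SET guest_os_id = <guest_os_id> WHERE id = <template_id>;
```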

On Fri, 15 May 2020 at 16:19, Rafal Turkiewicz <tur...@turexy.com> wrote:

> Andrija,
>
> You are the man! I have changed the OS Type to the default Debian 5 x64
> and boom! All sorted.
>
> It's really odd that picking an older OS Type solved the issue when in fact
> the systemVM is running Debian 9. Is this a BUG of some sort?
>
> I might try experimenting with the other Debian OS Type versions to see
> where it breaks, but for now I'm all happy!
>
> Once again thank you very much for the pointer!
>
> Raf
>
> On 2020/05/15 13:51:01, Andrija Panic <andrija.pa...@gmail.com> wrote:
> > In the upgrade guide, we always advise (when registering the new systemVM
> > template) to go with:
> >
> >       OS Type: Debian GNU/Linux 7.0 (64-bit) (or the highest Debian
> > release number available in the dropdown)
> >
> > That being said, in a clean 4.13 installation the OS type is set to
> > Debian 5 x64 - so try each version, destroy the VR in between (i.e.
> > restart the network with cleanup), and check with "lspci" whether you get
> > virtio or Intel NICs - but also make sure that each time the VR is
> > created on a KVM host (i.e. not on XEN).
> >
> > In order to change the OS type for the systemVM template, you will have to
> > use the DB - modify the "vm_template" table: update the "guest_os_id" field
> > value for that specific template to the ID from the "guest_os" table where
> > name=Debian XXX 64.
> >
> > Hope that solves the issue - it should, by all means.
> >
> > Regards
> > Andrija
> >
> >
> > On Fri, 15 May 2020 at 15:33, Rafal Turkiewicz <tur...@turexy.com> wrote:
> >
> > > Hello Andrija,
> > >
> > > Thanks for your input. The OS Type for the systemVM template is set to
> > > "Debian GNU/Linux 8 (64-bit)".
> > >
> > > I think I forgot to mention a very important aspect of my setup. This
> > > CloudStack instance is powering both XenServer and KVM, where KVM was
> > > added recently.
> > >
> > > Your message made me think and look at my other (test lab) setup, where
> > > CloudStack is only powering KVM hypervisors. I can confirm all VRs there
> > > are running with virtio, which implies there has got to be something
> > > specific to my mixed-hypervisor CloudStack.
> > >
> > > I will keep looking into this, but if you have any further thoughts on
> > > it please let me know.
> > >
> > > Raf
> > >
> > > On 2020/05/15 11:14:37, Andrija Panic <andrija.pa...@gmail.com> wrote:
> > > > Rafal,
> > > >
> > > > what is the OS type you defined for the systemVM template?
> > > >
> > > > In my env, VR (VPC) - all interfaces are VirtIO.
> > > >
> > > > Best
> > > > Andrija
> > > >
> > > > On Fri, 15 May 2020 at 12:14, Rafal Turkiewicz <tur...@turexy.com> wrote:
> > > >
> > > > > Platform:
> > > > > CloudStack 4.11.2 on CentOS 7
> > > > > KVM Hypervisor on CentOS 7
> > > > >
> > > > > I have found some throughput issues on our Virtual Routers and I've
> > > > > tracked them down to CPU IRQ load hitting 99% on the VR, which was
> > > > > related to NIC interrupts.
> > > > >
> > > > > I decided to look up which NIC is being emulated on the VRs; lspci
> > > > > listed three Intel NICs:
> > > > >
> > > > > 00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
> > > > > 00:04.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
> > > > > 00:05.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
> > > > >
> > > > > All my regular VMs are using virtio network devices, as specified
> > > > > within the template settings (nicAdapter=virtio).
> > > > >
> > > > > When I manually updated the user_vm_details table for a VR with
> > > > > nicAdapter=virtio and restarted the VR, everything came up as
> > > > > expected: the VR started with virtio NICs, the IRQ issue was gone,
> > > > > and the throughput doubled. Now lspci on the VR was showing:
> > > > >
> > > > > 00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
> > > > > 00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
> > > > > 00:05.0 Ethernet controller: Red Hat, Inc Virtio network device
> > > > >
> > > > > The problem I have is getting the nicAdapter setting applied at the
> > > > > systemvm template level, like I do for regular VMs, so that all VRs
> > > > > are deployed with the virtio network adapter. I don't want to set
> > > > > this manually for every network I deploy. So I went to Templates ->
> > > > > Mysystemvm template -> Settings -> Add Setting
> > > > >
> > > > > Name:  nicAdapter
> > > > > Value: virtio
> > > > >
> > > > > BUT it just looks to me like systemVMs don't honour the
> > > > > vm_template_details table where nicAdapter is specified. When a VR
> > > > > gets created, I looked up the content of user_vm_details for that VR
> > > > > and found nicAdapter=virtio is missing, but I would expect it to be
> > > > > there.
> > > > >
> > > > > If there is anyone running CloudStack with KVM who could help with
> > > > > this, that would be great. It might well be a BUG that needs to be
> > > > > reported as such; not sure at this stage.
> > > > >
> > > > > If any of the above is not entirely clear, please let me know and I
> > > > > will try my best to explain in more detail.
> > > > >
> > > > > Thanks
> > > > > Raf
> > > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Andrija Panić
> > > >
> > >
> >
> >
> > --
> >
> > Andrija Panić
> >
>


-- 

Andrija Panić
