But if I convert all the hosts to Rocky and upgrade to 4.16, I should be ok?
Thanks

> On Wednesday, Dec 15, 2021 at 11:17 PM, Slavka Peleva
> <slav...@storpool.com.INVALID> wrote:
> Sorry, I didn't pay attention to your CS version. After the upgrade, I
> think you will have the same problem, because the DB holds information
> about the host/hosts in this cluster that is/are CentOS.
>
> Best regards,
> Slavka
>
> On Thu, Dec 16, 2021 at 8:49 AM Jeremy Hansen <jer...@skidrow.la.invalid>
> wrote:
> >
> > I noticed in the compatibility matrix that Rocky isn't supported until
> > 4.16.0.0. If I upgrade CloudStack first, would that help, or is it
> > still going to complain about the CentOS/Rocky mix? If I convert all my
> > existing nodes to Rocky, which is the plan anyway, will this go away?
> > Shouldn't CentOS and Rocky be considered the same thing... sort of?
> >
> > Thanks
> > -jeremy
> >
> > On Wednesday, Dec 15, 2021 at 10:43 PM, Slavka Peleva
> > <slav...@storpool.com.INVALID> wrote:
> > Hi Jeremy,
> >
> > It will help if you have another cluster for the Rocky Linux hosts.
> > Hosts in a cluster need to be of the same OS; it's not possible to mix
> > OSes in the same cluster.
> >
> > Best regards,
> > Slavka
> >
> > On Thu, Dec 16, 2021 at 4:08 AM Jeremy Hansen <jer...@skidrow.la.invalid>
> > wrote:
> >
> > Any tips on how I would troubleshoot this? I've tried downgrading
> > libvirt, qemu, and ca-certificates to the same versions as the other,
> > functional nodes. That didn't seem to help. This is obviously an SSL
> > issue, but I don't really know what to do about it.
> >
> > 2021-12-15 18:04:14,438 INFO [cloud.agent.AgentShell] (main:null) (logid:) Agent started
> > 2021-12-15 18:04:14,444 INFO [cloud.agent.AgentShell] (main:null) (logid:) Implementation Version is 4.15.0.0
> > 2021-12-15 18:04:14,447 INFO [cloud.agent.AgentShell] (main:null) (logid:) agent.properties found at /etc/cloudstack/agent/agent.properties
> > 2021-12-15 18:04:14,466 INFO [cloud.agent.AgentShell] (main:null) (logid:) Defaulting to using properties file for storage
> > 2021-12-15 18:04:14,467 INFO [cloud.agent.AgentShell] (main:null) (logid:) Defaulting to the constant time backoff algorithm
> > 2021-12-15 18:04:14,471 INFO [cloud.utils.LogUtils] (main:null) (logid:) log4j configuration found at /etc/cloudstack/agent/log4j-cloud.xml
> > 2021-12-15 18:04:14,485 INFO [cloud.agent.AgentShell] (main:null) (logid:) Using default Java settings for IPv6 preference for agent connection
> > 2021-12-15 18:04:14,592 INFO [cloud.agent.Agent] (main:null) (logid:) id is 0
> > 2021-12-15 18:04:14,606 ERROR [kvm.resource.LibvirtComputingResource] (main:null) (logid:) uefi properties file not found due to: Unable to find file uefi.properties.
> > 2021-12-15 18:04:14,663 INFO [kvm.resource.LibvirtConnection] (main:null) (logid:) No existing libvirtd connection found. Opening a new one
> > 2021-12-15 18:04:14,890 INFO [kvm.resource.LibvirtComputingResource] (main:null) (logid:) No libvirt.vif.driver specified. Defaults to BridgeVifDriver.
> > 2021-12-15 18:04:15,086 INFO [kvm.resource.LibvirtComputingResource] (main:null) (logid:) iscsi session clean up is disabled
> > 2021-12-15 18:04:15,129 INFO [cloud.agent.Agent] (main:null) (logid:) Agent [id = 0 : type = LibvirtComputingResource : zone = 1 : pod = 1 : workers = 5 : host = 192.168.30.59 : port = 8250
> > 2021-12-15 18:04:15,139 INFO [utils.nio.NioClient] (main:null) (logid:) Connecting to 192.168.30.59:8250
> > 2021-12-15 18:04:15,153 INFO [utils.nio.Link] (main:null) (logid:) Conf file found: /etc/cloudstack/agent/agent.properties
> > 2021-12-15 18:04:15,919 INFO [utils.nio.NioClient] (main:null) (logid:) SSL: Handshake done
> > 2021-12-15 18:04:15,920 INFO [utils.nio.NioClient] (main:null) (logid:) Connected to 192.168.30.59:8250
> > 2021-12-15 18:04:16,057 INFO [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:null) (logid:) Attempting to create storage pool 18796842-a137-475d-9799-9874240e3c0c (Filesystem) in libvirt
> > 2021-12-15 18:04:16,062 ERROR [kvm.resource.LibvirtConnection] (Agent-Handler-1:null) (logid:) Connection with libvirtd is broken: invalid connection pointer in virConnectGetVersion
> > 2021-12-15 18:04:16,066 INFO [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:null) (logid:) Found existing defined storage pool 18796842-a137-475d-9799-9874240e3c0c, using it.
> > 2021-12-15 18:04:16,066 INFO [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:null) (logid:) Trying to fetch storage pool 18796842-a137-475d-9799-9874240e3c0c from libvirt
> > 2021-12-15 18:04:16,151 INFO [cloud.serializer.GsonHelper] (Agent-Handler-1:null) (logid:) Default Builder inited.
> > 2021-12-15 18:04:16,272 INFO [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Proccess agent startup answer, agent id = 0
> > 2021-12-15 18:04:16,273 INFO [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Set agent id 0
> > 2021-12-15 18:04:16,289 INFO [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Startup Response Received: agent id = 0
> > 2021-12-15 18:04:16,289 INFO [cloud.agent.Agent] (AgentShutdownThread:null) (logid:) Stopping the agent: Reason = sig.kill
> >
> > I also noticed this:
> >
> > 2021-12-15 18:05:34,542 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-19617:ctx-e0dfdb18) (logid:d3930423) Failed to handle host connection:
> > java.lang.IllegalArgumentException: Can't add host: 192.168.30.54 with hostOS: Rocky into a cluster,in which there are CentOS hosts added
> >     at com.cloud.hypervisor.kvm.discoverer.LibvirtServerDiscoverer.createHostVOForConnectedAgent(LibvirtServerDiscoverer.java:456)
> >     at com.cloud.resource.ResourceManagerImpl.dispatchToStateAdapters(ResourceManagerImpl.java:1719)
> >     at com.cloud.resource.ResourceManagerImpl.createHostVO(ResourceManagerImpl.java:1946)
> >     at com.cloud.resource.ResourceManagerImpl.createHostVOForConnectedAgent(ResourceManagerImpl.java:2278)
> >     at jdk.internal.reflect.GeneratedMethodAccessor510.invoke(Unknown Source)
> >     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> >     at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
> >     at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
> >     at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
> >     at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:95)
> >     at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
> >     at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
> >     at com.sun.proxy.$Proxy188.createHostVOForConnectedAgent(Unknown Source)
> >     at com.cloud.agent.manager.AgentManagerImpl.handleConnectedAgent(AgentManagerImpl.java:1097)
> >     at com.cloud.agent.manager.AgentManagerImpl$HandleAgentConnectTask.runInContext(AgentManagerImpl.java:1194)
> >     at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
> >     at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
> >     at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
> >     at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
> >     at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:45)
> >     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> >     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> >     at java.base/java.lang.Thread.run(Thread.java:829)
> >
> > So the fact that this host has been converted to Rocky Linux is causing
> > the issue? What's the workaround for this?
> >
> > Thanks
> > -jeremy
> >
> > On Monday, Dec 13, 2021 at 12:19 AM, Jeremy Hansen <jer...@skidrow.la>
> > wrote:
> > It doesn't error out, but there are purposely no VMs running on these
> > hosts, so I can test the upgrade of the underlying distro before having
> > to stop any active VMs.
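The IllegalArgumentException in the stack trace above comes from CloudStack's per-cluster host-OS check in LibvirtServerDiscoverer.createHostVOForConnectedAgent, which compares the recorded OS strings, so "Rocky" and "CentOS" are simply different values. A minimal sketch of that rule (hypothetical Python names, not the actual Java implementation):

```python
# Illustrative sketch of the per-cluster host-OS admission rule; the function
# and argument names are hypothetical, not CloudStack's actual code.
def can_add_host(existing_host_oses, new_host_os):
    """A host may join a cluster only if the cluster is empty or every
    existing host reports exactly the same OS string."""
    return all(os_name == new_host_os for os_name in existing_host_oses)

# "Rocky" != "CentOS" as plain strings, hence the exception above:
print(can_add_host(["CentOS", "CentOS"], "Rocky"))   # False
print(can_add_host(["CentOS", "CentOS"], "CentOS"))  # True
print(can_add_host([], "Rocky"))                     # True (empty cluster)
```

This is why the mix fails regardless of how similar the two distributions are: the check is a string comparison, not a compatibility test, which matches Slavka's advice that all hosts in the cluster must report the same OS.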
> >
> > [jeremy@cm02 ~]$ sudo virsh list
> >  Id   Name   State
> > --------------------
> >
> > [jeremy@cm02 ~]$
> >
> > -jeremy
> >
> > On Monday, Dec 13, 2021 at 12:06 AM, Wei ZHOU <ustcweiz...@gmail.com>
> > wrote:
> > Hi,
> >
> > Do virsh commands, e.g. "virsh list", work?
> >
> > -Wei
> >
> > On Mon, 13 Dec 2021 at 06:46, Jeremy Hansen <jer...@skidrow.la.invalid>
> > wrote:
> >
> > Testing on an unused compute node, I tried upgrading it to Rocky Linux 8.5.
> >
> > I'm running CloudStack 4.15.0.0.
> >
> > Trying to bring up the CloudStack agent, I'm seeing some issues
> > communicating with libvirt:
> >
> > 2021-12-12 21:25:03,992 INFO [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:null) (logid:) Attempting to create storage pool d1b1e853-1c30-473d-badc-6c30318aa5b0 (Filesystem) in libvirt
> > 2021-12-12 21:25:03,997 ERROR [kvm.resource.LibvirtConnection] (Agent-Handler-1:null) (logid:) Connection with libvirtd is broken: invalid connection pointer in virConnectGetVersion
> > 2021-12-12 21:25:04,000 INFO [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:null) (logid:) Found existing defined storage pool d1b1e853-1c30-473d-badc-6c30318aa5b0, using it.
> >
> > Dec 12 21:24:13 cm02 libvirtd[269244]: End of file while reading data: Input/output error
> > Dec 12 21:24:13 cm02 libvirtd[269244]: End of file while reading data: Input/output error
> > Dec 12 21:24:26 cm02 libvirtd[269244]: End of file while reading data: Input/output error
> > Dec 12 21:24:26 cm02 libvirtd[269244]: End of file while reading data: Input/output error
> > Dec 12 21:24:26 cm02 libvirtd[269244]: Cannot recv data: Input/output error
> >
> > Libvirt version is: libvirt-6.0.0-37.module+el8.5.0+670+c4aa478c.x86_64
> >
> > Functional hosts that have yet to be upgraded are using:
> >
> > libvirt-6.0.0-35.module_el8.4.0+783+f8734d30.x86_64
> >
> > My libvirtd.conf looks like this:
> >
> > listen_tcp=0
> > listen_tls=1
> > tcp_port="16509"
> > auth_tcp="none"
> > mdns_adv = 0
> > key_file="/etc/pki/libvirt/private/serverkey.pem"
> > cert_file="/etc/pki/libvirt/servercert.pem"
> > ca_file="/etc/pki/CA/cacert.pem"
> > tls_port="16514"
> > auth_tls=“none"
> >
> > 2021-12-12 21:43:42,841 ERROR [kvm.resource.LibvirtComputingResource] (main:null) (logid:) uefi properties file not found due to: Unable to find file uefi.properties.
> > 2021-12-12 21:43:42,901 INFO [kvm.resource.LibvirtConnection] (main:null) (logid:) No existing libvirtd connection found. Opening a new one
> > 2021-12-12 21:43:43,127 INFO [kvm.resource.LibvirtComputingResource] (main:null) (logid:) No libvirt.vif.driver specified. Defaults to BridgeVifDriver.
> > 2021-12-12 21:43:43,296 INFO [kvm.resource.LibvirtComputingResource] (main:null) (logid:) iscsi session clean up is disabled
> > 2021-12-12 21:43:43,312 INFO [cloud.agent.Agent] (main:null) (logid:) Agent [id = 0 : type = LibvirtComputingResource : zone = 1 : pod = 1 : workers = 5 : host = 192.168.30.59 : port = 8250
> > 2021-12-12 21:43:43,321 INFO [utils.nio.NioClient] (main:null) (logid:) Connecting to 192.168.30.59:8250
> > 2021-12-12 21:43:43,325 INFO [utils.nio.Link] (main:null) (logid:) Conf file found: /etc/cloudstack/agent/agent.properties
> > 2021-12-12 21:43:43,840 INFO [utils.nio.NioClient] (main:null) (logid:) SSL: Handshake done
> > 2021-12-12 21:43:43,840 INFO [utils.nio.NioClient] (main:null) (logid:) Connected to 192.168.30.59:8250
> > 2021-12-12 21:43:43,925 INFO [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:null) (logid:) Attempting to create storage pool 18796842-a137-475d-9799-9874240e3c0c (Filesystem) in libvirt
> > 2021-12-12 21:43:43,929 ERROR [kvm.resource.LibvirtConnection] (Agent-Handler-1:null) (logid:) Connection with libvirtd is broken: invalid connection pointer in virConnectGetVersion
> > 2021-12-12 21:43:43,932 INFO [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:null) (logid:) Found existing defined storage pool 18796842-a137-475d-9799-9874240e3c0c, using it.
> > 2021-12-12 21:43:43,933 INFO [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:null) (logid:) Trying to fetch storage pool 18796842-a137-475d-9799-9874240e3c0c from libvirt
> > 2021-12-12 21:43:43,985 INFO [cloud.serializer.GsonHelper] (Agent-Handler-1:null) (logid:) Default Builder inited.
> > 2021-12-12 21:43:44,020 INFO [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Proccess agent startup answer, agent id = 0
> > 2021-12-12 21:43:44,022 INFO [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Set agent id 0
> > 2021-12-12 21:43:44,028 INFO [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Startup Response Received: agent id = 0
> > 2021-12-12 21:43:44,031 INFO [cloud.agent.Agent] (AgentShutdownThread:null) (logid:) Stopping the agent: Reason = sig.kill
> > 2021-12-12 21:43:55,682 INFO [cloud.agent.AgentShell] (main:null) (logid:) Agent started
> > 2021-12-12 21:43:55,688 INFO [cloud.agent.AgentShell] (main:null) (logid:) Implementation Version is 4.15.0.0
> > 2021-12-12 21:43:55,690 INFO [cloud.agent.AgentShell] (main:null) (logid:) agent.properties found at /etc/cloudstack/agent/agent.properties
> > 2021-12-12 21:43:55,709 INFO [cloud.agent.AgentShell] (main:null) (logid:) Defaulting to using properties file for storage
> > 2021-12-12 21:43:55,711 INFO [cloud.agent.AgentShell] (main:null) (logid:) Defaulting to the constant time backoff algorithm
> > 2021-12-12 21:43:55,714 INFO [cloud.utils.LogUtils] (main:null) (logid:) log4j configuration found at /etc/cloudstack/agent/log4j-cloud.xml
> > 2021-12-12 21:43:55,728 INFO [cloud.agent.AgentShell] (main:null) (logid:) Using default Java settings for IPv6 preference for agent connection
> > 2021-12-12 21:43:55,840 INFO [cloud.agent.Agent] (main:null) (logid:) id is 0
> > 2021-12-12 21:43:55,853 ERROR [kvm.resource.LibvirtComputingResource] (main:null) (logid:) uefi properties file not found due to: Unable to find file uefi.properties.
> > 2021-12-12 21:43:55,909 INFO [kvm.resource.LibvirtConnection] (main:null) (logid:) No existing libvirtd connection found. Opening a new one
> > 2021-12-12 21:43:56,145 INFO [kvm.resource.LibvirtComputingResource] (main:null) (logid:) No libvirt.vif.driver specified. Defaults to BridgeVifDriver.
> > 2021-12-12 21:43:56,307 INFO [kvm.resource.LibvirtComputingResource] (main:null) (logid:) iscsi session clean up is disabled
> > 2021-12-12 21:43:56,322 INFO [cloud.agent.Agent] (main:null) (logid:) Agent [id = 0 : type = LibvirtComputingResource : zone = 1 : pod = 1 : workers = 5 : host = 192.168.30.59 : port = 8250
> > 2021-12-12 21:43:56,331 INFO [utils.nio.NioClient] (main:null) (logid:) Connecting to 192.168.30.59:8250
> > 2021-12-12 21:43:56,335 INFO [utils.nio.Link] (main:null) (logid:) Conf file found: /etc/cloudstack/agent/agent.properties
> > 2021-12-12 21:43:56,843 INFO [utils.nio.NioClient] (main:null) (logid:) SSL: Handshake done
> > 2021-12-12 21:43:56,843 INFO [utils.nio.NioClient] (main:null) (logid:) Connected to 192.168.30.59:8250
> > 2021-12-12 21:43:56,941 INFO [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:null) (logid:) Attempting to create storage pool 18796842-a137-475d-9799-9874240e3c0c (Filesystem) in libvirt
> > 2021-12-12 21:43:56,945 ERROR [kvm.resource.LibvirtConnection] (Agent-Handler-1:null) (logid:) Connection with libvirtd is broken: invalid connection pointer in virConnectGetVersion
> > 2021-12-12 21:43:56,948 INFO [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:null) (logid:) Found existing defined storage pool 18796842-a137-475d-9799-9874240e3c0c, using it.
> > 2021-12-12 21:43:56,948 INFO [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:null) (logid:) Trying to fetch storage pool 18796842-a137-475d-9799-9874240e3c0c from libvirt
> > 2021-12-12 21:43:56,998 INFO [cloud.serializer.GsonHelper] (Agent-Handler-1:null) (logid:) Default Builder inited.
> > 2021-12-12 21:43:57,031 INFO [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Proccess agent startup answer, agent id = 0
> > 2021-12-12 21:43:57,033 INFO [cloud.agent.Agent] (AgentShutdownThread:null) (logid:) Stopping the agent: Reason = sig.kill
> > 2021-12-12 21:43:57,033 INFO [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Set agent id 0
> > 2021-12-12 21:43:57,040 INFO [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Startup Response Received: agent id = 0
> >
> > Any ideas? I'm working toward upgrading the entire cluster along with
> > CloudStack itself.
> >
> > -jeremy
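One detail worth double-checking in the libvirtd.conf pasted earlier: the last line reads auth_tls=“none" with a curly opening quote rather than a plain ASCII double quote. Whether that crept in while pasting into the mail or is really in the file, libvirt's config parser will not accept it, which would explain broken TLS connections after an edit. A quick scan for smart quotes is cheap (illustrative Python, not a libvirt tool; run it over /etc/libvirt/libvirtd.conf to be sure):

```python
# Scan config text for "smart" (curly) quote characters, which commonly
# sneak in via copy/paste and break strict key="value" parsers.
SMART_QUOTES = {"\u201c", "\u201d", "\u2018", "\u2019"}

def find_smart_quotes(text):
    """Return (line_number, line) pairs for lines containing curly quotes."""
    return [(n, line) for n, line in enumerate(text.splitlines(), start=1)
            if SMART_QUOTES & set(line)]

# The suspicious fragment from the pasted config (\u201c is the curly quote):
conf = 'tls_port="16514"\nauth_tls=\u201cnone"\n'
print(find_smart_quotes(conf))  # [(2, 'auth_tls=“none"')]
```

If the curly quote is present in the real file, rewriting the line as auth_tls="none" and restarting libvirtd would be the first thing to try before downgrading packages.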