Unfortunately I’m not exactly sure what the problem was, but I was able to get 
the fully-updated EL9.4 host back in the cluster after manually deleting all of 
the iSCSI nodes.

Some of the iscsiadm commands printed in the vdsm log worked fine when run manually:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m iface
bond1 tcp,<empty>,<empty>,bond1,<empty>
default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>

[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.54:3260,1 --op=new
New iSCSI node [tcp:[hw=,ip=,net_if=bond1,iscsi_if=bond1] 192.168.56.54,3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012] added

[root@lnxvirt06 ~]# iscsiadm -m node
192.168.56.54:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.001
192.168.56.54:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.001
192.168.56.56:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.101
192.168.56.56:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.101
192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012
192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012
192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112
192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.001
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.001
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.012
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.012
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.101
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.101
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.112
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.112
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.001
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.001
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.012
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.101
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.112
———
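Note that every portal/target pair appears twice in that listing: iscsiadm keeps one node record per interface (here presumably "default" and "bond1"), but plain `-m node` output omits the iface column. A quick way to confirm the duplication (the `count_node_records` helper name is my own):

```shell
# Count how many node records exist per portal/target pair.  Duplicates
# normally mean one record per iface (default, bond1, ...), which plain
# `iscsiadm -m node` output does not show.
count_node_records() {
    sort | uniq -c | sort -rn
}

# On the host (not run here):
#   iscsiadm -m node | count_node_records
# `iscsiadm -m node -P 1` prints the same records in tree form with the
# iface name, which makes the per-iface duplicates explicit.
```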

But others failed, even though the only difference was the portal:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.55:3260,1 --op=new
iscsiadm: Error while adding record: invalid parameter
———

Likewise, I could delete some nodes using iscsiadm but not others:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.54:3260,1 --op=delete
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.55:3260,1 --op=delete
iscsiadm: Could not execute operation on all records: invalid parameter
[root@lnxvirt06 ~]# iscsiadm -m node -p 192.168.56.50 -o delete
iscsiadm: Could not execute operation on all records: invalid parameter
———
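For what it's worth, the node records iscsiadm complains about are just files on disk, so listing the database directly can show whether an older iscsi-initiator-utils left stale or malformed entries behind. A sketch, assuming the EL9 default path (`NODE_DB` is my own variable):

```shell
# open-iscsi's node database is plain files on disk (path assumed from
# EL9 defaults):
NODE_DB=${NODE_DB:-/var/lib/iscsi/nodes}

# Layout is <target-iqn>/<ip>,<port>,<tpgt>/<iface> -- one entry per
# record.  Listing it can reveal stale or malformed entries that
# iscsiadm itself refuses to operate on ("invalid parameter"):
find "$NODE_DB" -mindepth 3 -maxdepth 3 -print 2>/dev/null || true
```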

At this point I wiped out /var/lib/iscsi/, rebooted, and everything just worked.
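For anyone hitting the same thing, roughly what I did, sketched as a script. The `run` wrapper is my own: it only echoes each command unless RUN=1, since this logs out every iSCSI session and reboots the host.

```shell
# Sketch of the reset that worked here; echoes commands unless RUN=1.
run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "+ $*"; fi; }

run iscsiadm -m session -u                # log out of all active sessions
run systemctl stop iscsid iscsid.socket   # stop the initiator daemon
run mv /var/lib/iscsi /var/lib/iscsi.bak  # move the node DB aside
run reboot                                # vdsm recreated clean records afterward
```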

Thanks so much for your time and help!

Sincerely,
Devin




> On Jun 7, 2024, at 10:26 AM, Jean-Louis Dupond <jean-lo...@dupond.be> wrote:
>
> 2024-06-07 09:59:16,720-0400 WARN  (jsonrpc/0) [storage.iscsiadm] iscsiadm 
> executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
> 2024-06-07 09:59:16,751-0400 INFO  (jsonrpc/0) [storage.iscsi] Adding iscsi 
> node for target 192.168.56.55:3260,1 
> iqn.2002-10.com.infortrend:raid.sn8087428.012 iface bond1 (iscsi:192)
> 2024-06-07 09:59:16,751-0400 WARN  (jsonrpc/0) [storage.iscsiadm] iscsiadm 
> executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 
> 'iqn.2002-10.com.infortrend:raid.sn8087428.012', '-I', 'bond1', '-p', 
> '192.168.56.55:3260,1', '--op=new'] (iscsiadm:104)
> 2024-06-07 09:59:16,785-0400 WARN  (jsonrpc/0) [storage.iscsiadm] iscsiadm 
> executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
> 2024-06-07 09:59:16,825-0400 ERROR (jsonrpc/0) [storage.storageServer] Could 
> not configure connection to 192.168.56.55:3260,1 
> iqn.2002-10.com.infortrend:raid.sn8087428.012 and iface <IscsiInterface 
> name='bond1' transport='tcp' netIfaceName='bond1'>: (7, b'', b'iscsiadm: 
> Error while adding record: invalid parameter\n') (storageServer:580)
> Can you try to run those commands manually on the host?
> And see what it gives :)
> On 7/06/2024 16:13, Devin A. Bougie wrote:
>> Thank you!  I added a warning at the line you indicated, which produces the 
>> following output:
>>
>> ———
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,452-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,493-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,532-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,565-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,595-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,636-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,670-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,720-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,751-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', 
>> '-T', 'iqn.2002-10.com.infortrend:raid.sn8087428.012', '-I', 'bond1', '-p', 
>> '192.168.56.55:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,785-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,825-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', 
>> '-T', 'iqn.2002-10.com.infortrend:raid.sn8073743.001', '-I', 'bond1', '-p', 
>> '192.168.56.54:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,856-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,889-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', 
>> '-T', 'iqn.2002-10.com.infortrend:raid.sn8073743.101', '-I', 'bond1', '-p', 
>> '192.168.56.56:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,924-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,957-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', 
>> '-T', 'iqn.2002-10.com.infortrend:raid.sn8087428.112', '-I', 'bond1', '-p', 
>> '192.168.56.57:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,987-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,018-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', 
>> '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.012', '-I', 'bond1', '-p', 
>> '192.168.56.51:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,051-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,079-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', 
>> '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.001', '-I', 'bond1', '-p', 
>> '192.168.56.50:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,112-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,142-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', 
>> '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.112', '-I', 'bond1', '-p', 
>> '192.168.56.53:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,174-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,204-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', 
>> '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.101', '-I', 'bond1', '-p', 
>> '192.168.56.52:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,237-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,186-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,234-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,268-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,310-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,343-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,370-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,408-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,442-0400 WARN  (jsonrpc/0) 
>> [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] 
>> (iscsiadm:104)
>> ———
>>
>> The full vdsm.log is below.
>>
>> Thanks again,
>> Devin
>>
>>
>>
>>
>> > On Jun 7, 2024, at 8:14 AM, Jean-Louis Dupond <jean-lo...@dupond.be> wrote:
>> >
>> > Weird, I have the same 6.2.1.9-1 version, and here it works.
>> > You can try to add some print here: 
>> > https://github.com/oVirt/vdsm/blob/4d11cae0b1b7318b282d9f90788748c0ef3cc965/lib/vdsm/storage/iscsiadm.py#L104
>> >
>> > This should print all executed iscsiadm commands.
>> >
>> >
>> > On 6/06/2024 20:50, Devin A. Bougie wrote:
>> >> Awesome, thanks again.  Yes, the host is fixed by just downgrading the 
>> >> iscsi-initiator-utils and iscsi-initiator-utils-iscsiuio packages from:
>> >> 6.2.1.9-1.gita65a472.el9.x86_64
>> >> to:
>> >> 6.2.1.4-3.git2a8f9d8.el9.x86_64
>> >>
>> >> Any additional pointers of where to look or how to debug the iscsiadm 
>> >> calls would be greatly appreciated.
>> >>
>> >> Many thanks!
>> >> Devin
>> >>
>> >>> On Jun 6, 2024, at 2:04 PM, Jean-Louis Dupond <jean-lo...@dupond.be> 
>> >>> wrote:
>> >>>
>> >>> 2024-06-06 13:28:10,478-0400 ERROR (jsonrpc/5) [storage.storageServer] 
>> >>> Could not configure connection to 192.168.56.57:3260,1 
>> >>> iqn.2002-10.com.infortrend:raid.sn8087428.112 and iface <IscsiInterface 
>> >>> name='bond1' transport='tcp' netIfaceName='bond1'>: (7, b'', b'iscsiadm: 
>> >>> Error while adding record: invalid parameter\n') (storageServer:580)
>> >>>
>> >>> Seems like some issue with iscsiadm calls.
>> >>> Might want to debug which calls it does or what version change there is 
>> >>> for iscsiadm.
>> >>>
>> >>>
>> >>>
>> >>> "Devin A. Bougie" <devin.bou...@cornell.edu> schreef op 6 juni 2024 
>> >>> 19:32:29 CEST:
>> >>> Thanks so much!  Yes, that patch fixed the “out of sync network” issue.  
>> >>> However, we’re still unable to join a fully updated 9.4 host to the 
>> >>> cluster - now with "Failed to connect Host to Storage Servers”.  
>> >>> Downgrading all of the updated packages fixes the issue.
>> >>>
>> >>> Please see the attached vdsm.log and supervdsm.log from the host after 
>> >>> updating it to EL 9.4 and then trying to activate it.  Any more 
>> >>> suggestions would be greatly appreciated.
>> >>>
>> >>> Thanks again,
>> >>> Devin
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>> On Jun 5, 2024, at 2:35 AM, Jean-Louis Dupond <jean-lo...@dupond.be> 
>> >>>> wrote:
>> >>>>
>> >>>> You most likely need the following patch:
>> >>>> https://github.com/oVirt/vdsm/commit/49eaf70c5a14eb00e85eac5f91ac36f010a9a327
>> >>>>
>> >>>> Test with that, guess it's fixed then :)
>> >>>>
>> >>>> On 4/06/2024 22:33, Devin A. Bougie wrote:
>> >>>>> Are there any known incompatibilities with RHEL 9.4 (and derivatives)?
>> >>>>>
>> >>>>> We are running a 7-node ovirt 4.5.5-1.el8 self hosted engine cluster, 
>> >>>>> with all of the hosts running AlmaLinux 9.  After upgrading from 9.3 
>> >>>>> to 9.4, every node started flapping between “Up” and “NonOperational,” 
>> >>>>> with VMs in turn migrating between hosts.
>> >>>>>
>> >>>>> I believe the underlying issue (or at least the point I got stuck at) 
>> >>>>> was with two of our logical networks being stuck “out of sync” on all 
>> >>>>> hosts.  I was unable to synchronize networks or setup the networks 
>> >>>>> using the UI.  A reinstall of a host succeeded but then the host 
>> >>>>> immediately reverted to the same state with the same networks being 
>> >>>>> out of sync.
>> >>>>>
>> >>>>> I eventually found that if I downgraded the host from 9.4 to 9.3, it 
>> >>>>> immediately became stable and back online.
>> >>>>>
>> >>>>> Are there any known incompatibilities with RHEL 9.4 (and derivatives)? 
>> >>>>>  If not, I’m happy to upgrade a single node to test.  Please just let 
>> >>>>> me know what log files and details would be most helpful in debugging 
>> >>>>> what goes wrong.
>> >>>>>
>> >>>>> (And yes, I know we need to upgrade the hosted engine VM itself now 
>> >>>>> that CentOS Stream 8 is now EOL).
>> >>>>>
>> >>>>> Many thanks,
>> >>>>> Devin
>> >>>>>
>> >>>>> _______________________________________________
>> >>>>> Users mailing list -- users@ovirt.org
>> >>>>> To unsubscribe send an email to users-le...@ovirt.org
>> >>>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> >>>>> oVirt Code of Conduct: 
>> >>>>> https://www.ovirt.org/community/about/community-guidelines/
>> >>>>> List Archives: 
>> >>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PIROQCVLPVBNJY6FWLE3VSLHRAZKRB3L/
>>
>>
>>
>>

