[ovirt-users] Re: Ovirt 4.4.3 - Unable to start hosted engine

2020-11-28 Thread Marco Marino
ly (probably because the engine is down (again, I'm not sure of this; I'm
sorry if I'm writing stupid stuff)). Is my idea plausible? Can we think of
this as a bug?

Please, any help is highly appreciated.
Thank you,
Marco


On Fri, Nov 27, 2020 at 8:27 PM Marco Marino  wrote:

> Other details related to sanlock:
>
> 2020-11-27 20:25:10 7413 [61860]: verify_leader 1 wrong space name
> hosted-engin hosted-engine
> /run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
> 2020-11-27 20:25:10 7413 [61860]: leader1 delta_acquire_begin error -226
> lockspace hosted-engine host_id 1
> 2020-11-27 20:25:10 7413 [61860]: leader2 path
> /run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
> offset 0
> 2020-11-27 20:25:10 7413 [61860]: leader3 m 12212010 v 30004 ss 512 nh 0
> mh 1 oi 0 og 0 lv 0
> 2020-11-27 20:25:10 7413 [61860]: leader4 sn hosted-engin rn  ts 0 cs
> 23839828
> 2020-11-27 20:25:11 7414 [57456]: s38 add_lockspace fail result -226
> 2020-11-27 20:25:19 7421 [57456]: s39 lockspace
> hosted-engine:1:/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9:0
> 2020-11-27 20:25:19 7421 [62044]: verify_leader 1 wrong space name
> hosted-engin hosted-engine
> /run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
> 2020-11-27 20:25:19 7421 [62044]: leader1 delta_acquire_begin error -226
> lockspace hosted-engine host_id 1
> 2020-11-27 20:25:19 7421 [62044]: leader2 path
> /run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
> offset 0
> 2020-11-27 20:25:19 7421 [62044]: leader3 m 12212010 v 30004 ss 512 nh 0
> mh 1 oi 0 og 0 lv 0
> 2020-11-27 20:25:19 7421 [62044]: leader4 sn hosted-engin rn  ts 0 cs
> 23839828
> 2020-11-27 20:25:20 7422 [57456]: s39 add_lockspace fail result -226
> 2020-11-27 20:25:25 7427 [57456]: s40 lockspace
> hosted-engine:1:/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9:0
> 2020-11-27 20:25:25 7427 [62090]: verify_leader 1 wrong space name
> hosted-engin hosted-engine
> /run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
> 2020-11-27 20:25:25 7427 [62090]: leader1 delta_acquire_begin error -226
> lockspace hosted-engine host_id 1
> 2020-11-27 20:25:25 7427 [62090]: leader2 path
> /run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
> offset 0
> 2020-11-27 20:25:25 7427 [62090]: leader3 m 12212010 v 30004 ss 512 nh 0
> mh 1 oi 0 og 0 lv 0
> 2020-11-27 20:25:25 7427 [62090]: leader4 sn hosted-engin rn  ts 0 cs
> 23839828
> 2020-11-27 20:25:26 7428 [57456]: s40 add_lockspace fail result -226
>
> Any help is welcome. Thank you,
> Marco
>
>
> On Fri, Nov 27, 2020 at 6:47 PM Marco Marino 
> wrote:
>
>> Hi,
>> I have an oVirt 4.4.3 setup with 2 clusters, a hosted engine and iSCSI storage.
>> The first cluster, composed of 2 servers (host1 and host2), is dedicated to the
>> hosted engine; the second cluster is for VMs. Furthermore, there is a SAN with
>> 3 LUNs: one for hosted engine storage, one for VMs and one unused. My SAN is
>> built on top of a 2-node pacemaker/drbd cluster with a virtual IP used as the
>> iSCSI portal IP. Starting from today, after a failover of the iSCSI cluster,
>> I'm unable to start the hosted engine. It seems that there is some problem
>> with the storage.
>> Currently I have only one node (host1) running in the cluster. It seems there
>> is some lock on the LVs, but I'm not sure of this.
>>
>> Here are some details about the problem:
>>
>> 1. iscsiadm -m session
>> iSCSI Transport Class version 2.0-870
>> version 6.2.0.878-2
>> Target: iqn.2003-01.org.linux-iscsi.s1-node1.x8664:sn.2a734f67d5b1
>> (non-flash)
>> Current Portal: 10.3.8.8:3260,1
>> Persistent Portal: 10.3.8.8:3260,1
>> **
>> Interface:
>> **
>> Iface Name: default
>> Iface Transport: tcp
>> Iface Initiatorname: iqn.1994-05.com.redhat:4b668221d9a9
>> Iface IPaddress: 10.3.8.10
>> Iface HWaddress: default
>> Iface Netdev: default
>> SID: 1
>> iSCSI Connection State: LOGGED IN
>> iSCSI Session State: LOGGED_IN
>> Internal iscsid Session State: NO CHANGE
>> *
>> Timeouts:
>> *
>> Recovery Timeout: 5
>

[ovirt-users] Re: Ovirt 4.4.3 - Unable to start hosted engine

2020-11-27 Thread Marco Marino
Other details related to sanlock:

2020-11-27 20:25:10 7413 [61860]: verify_leader 1 wrong space name
hosted-engin hosted-engine
/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
2020-11-27 20:25:10 7413 [61860]: leader1 delta_acquire_begin error -226
lockspace hosted-engine host_id 1
2020-11-27 20:25:10 7413 [61860]: leader2 path
/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
offset 0
2020-11-27 20:25:10 7413 [61860]: leader3 m 12212010 v 30004 ss 512 nh 0 mh
1 oi 0 og 0 lv 0
2020-11-27 20:25:10 7413 [61860]: leader4 sn hosted-engin rn  ts 0 cs
23839828
2020-11-27 20:25:11 7414 [57456]: s38 add_lockspace fail result -226
2020-11-27 20:25:19 7421 [57456]: s39 lockspace
hosted-engine:1:/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9:0
2020-11-27 20:25:19 7421 [62044]: verify_leader 1 wrong space name
hosted-engin hosted-engine
/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
2020-11-27 20:25:19 7421 [62044]: leader1 delta_acquire_begin error -226
lockspace hosted-engine host_id 1
2020-11-27 20:25:19 7421 [62044]: leader2 path
/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
offset 0
2020-11-27 20:25:19 7421 [62044]: leader3 m 12212010 v 30004 ss 512 nh 0 mh
1 oi 0 og 0 lv 0
2020-11-27 20:25:19 7421 [62044]: leader4 sn hosted-engin rn  ts 0 cs
23839828
2020-11-27 20:25:20 7422 [57456]: s39 add_lockspace fail result -226
2020-11-27 20:25:25 7427 [57456]: s40 lockspace
hosted-engine:1:/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9:0
2020-11-27 20:25:25 7427 [62090]: verify_leader 1 wrong space name
hosted-engin hosted-engine
/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
2020-11-27 20:25:25 7427 [62090]: leader1 delta_acquire_begin error -226
lockspace hosted-engine host_id 1
2020-11-27 20:25:25 7427 [62090]: leader2 path
/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
offset 0
2020-11-27 20:25:25 7427 [62090]: leader3 m 12212010 v 30004 ss 512 nh 0 mh
1 oi 0 og 0 lv 0
2020-11-27 20:25:25 7427 [62090]: leader4 sn hosted-engin rn  ts 0 cs
23839828
2020-11-27 20:25:26 7428 [57456]: s40 add_lockspace fail result -226
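The interesting detail in these logs is that sanlock reads a delta lease whose
space name is "hosted-engin" (truncated) while it expects "hosted-engine", so
the lease area on the shared storage looks stale or partially overwritten after
the iSCSI failover. A minimal way to inspect what sanlock actually finds on
disk (a sketch, assuming sanlock is still running on host1; the path is the one
from the logs above):

# Lockspaces and resources sanlock currently holds on this host
sanlock client status

# Raw dump of the delta-lease area of the hosted-engine lockspace volume;
# the space name ("sn") printed here should match what the logs report
sanlock direct dump /run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9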

Any help is welcome. Thank you,
Marco


On Fri, Nov 27, 2020 at 6:47 PM Marco Marino  wrote:

> Hi,
> I have an oVirt 4.4.3 setup with 2 clusters, a hosted engine and iSCSI storage.
> The first cluster, composed of 2 servers (host1 and host2), is dedicated to the
> hosted engine; the second cluster is for VMs. Furthermore, there is a SAN with
> 3 LUNs: one for hosted engine storage, one for VMs and one unused. My SAN is
> built on top of a 2-node pacemaker/drbd cluster with a virtual IP used as the
> iSCSI portal IP. Starting from today, after a failover of the iSCSI cluster,
> I'm unable to start the hosted engine. It seems that there is some problem
> with the storage.
> Currently I have only one node (host1) running in the cluster. It seems there
> is some lock on the LVs, but I'm not sure of this.
>
> Here are some details about the problem:
>
> 1. iscsiadm -m session
> iSCSI Transport Class version 2.0-870
> version 6.2.0.878-2
> Target: iqn.2003-01.org.linux-iscsi.s1-node1.x8664:sn.2a734f67d5b1
> (non-flash)
> Current Portal: 10.3.8.8:3260,1
> Persistent Portal: 10.3.8.8:3260,1
> **
> Interface:
> **
> Iface Name: default
> Iface Transport: tcp
> Iface Initiatorname: iqn.1994-05.com.redhat:4b668221d9a9
> Iface IPaddress: 10.3.8.10
> Iface HWaddress: default
> Iface Netdev: default
> SID: 1
> iSCSI Connection State: LOGGED IN
> iSCSI Session State: LOGGED_IN
> Internal iscsid Session State: NO CHANGE
> *
> Timeouts:
> *
> Recovery Timeout: 5
> Target Reset Timeout: 30
> LUN Reset Timeout: 30
> Abort Timeout: 15
> *
> CHAP:
> *
> username: 
> password: 
> username_in: 
> password_in: 
> 
> Negotiated iSCSI params:
> 
> HeaderDigest: None
> DataDigest: None
> MaxRecvDataSegmentLength: 262144
> MaxXmitDataSegmentLength: 262144
> FirstBurstLength: 65536
> MaxBurstLength: 262144
> ImmediateData: Yes
> InitialR2T: Yes
> MaxOutstandingR2T: 1
> 
> Attached SCSI devices:
> 
> Host Number: 7 State: running
> scsi7 Channel 00 Id 0 Lun: 0
>

[ovirt-users] Ovirt 4.4.3 - Unable to start hosted engine

2020-11-27 Thread Marco Marino
Hi,
I have an oVirt 4.4.3 setup with 2 clusters, a hosted engine and iSCSI storage.
The first cluster, composed of 2 servers (host1 and host2), is dedicated to the
hosted engine; the second cluster is for VMs. Furthermore, there is a SAN with
3 LUNs: one for hosted engine storage, one for VMs and one unused. My SAN is
built on top of a 2-node pacemaker/drbd cluster with a virtual IP used as the
iSCSI portal IP. Starting from today, after a failover of the iSCSI cluster,
I'm unable to start the hosted engine. It seems that there is some problem
with the storage.
Currently I have only one node (host1) running in the cluster. It seems there
is some lock on the LVs, but I'm not sure of this.
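One way to check whether the LVs are really held open (a sketch, run as root on
host1; an open flag or an open count greater than zero only tells you that
something holds the device, not what):

# The sixth character of lv_attr is 'o' when the LV is open
lvs -o vg_name,lv_name,lv_attr

# Open count > 0 means a process or another device-mapper table still uses the mapping
dmsetup info -c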

Here are some details about the problem:

1. iscsiadm -m session
iSCSI Transport Class version 2.0-870
version 6.2.0.878-2
Target: iqn.2003-01.org.linux-iscsi.s1-node1.x8664:sn.2a734f67d5b1
(non-flash)
Current Portal: 10.3.8.8:3260,1
Persistent Portal: 10.3.8.8:3260,1
**
Interface:
**
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:4b668221d9a9
Iface IPaddress: 10.3.8.10
Iface HWaddress: default
Iface Netdev: default
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*
Timeouts:
*
Recovery Timeout: 5
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*
CHAP:
*
username: 
password: 
username_in: 
password_in: 

Negotiated iSCSI params:

HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

Attached SCSI devices:

Host Number: 7 State: running
scsi7 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb State: running
scsi7 Channel 00 Id 0 Lun: 1
Attached scsi disk sdc State: running
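Since the problem started right after a failover of the iSCSI portal IP, it may
also be worth confirming that the session and the multipath maps are healthy on
the new target node (a sketch; sdb and sdc are the devices shown above):

# Full session details, including per-LUN state
iscsiadm -m session -P 3

# Both LUNs should be mapped and their paths active
multipath -ll

# Rescan the session in case the LUN mapping changed on the new target
iscsiadm -m session --rescan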

2. vdsm.log errors:

2020-11-27 18:37:16,786+0100 INFO  (jsonrpc/0) [api] FINISH getStats
error=Virtual machine does not exist: {'vmId':
'f3a1194d-0632-43c6-8e12-7f22518cff87'} (api:129)
.
2020-11-27 18:37:52,864+0100 INFO  (jsonrpc/4) [vdsm.api] FINISH
getVolumeInfo error=(-223, 'Sanlock resource read failure', 'Lease does not
exist on storage') from=::1,60880,
task_id=138a3615-d537-4e5f-a39c-335269ad0917 (api:52)
2020-11-27 18:37:52,864+0100 ERROR (jsonrpc/4) [storage.TaskManager.Task]
(Task='138a3615-d537-4e5f-a39c-335269ad0917') Unexpected error (task:880)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 887,
in _run
return fn(*args, **kargs)
  File "", line 2, in getVolumeInfo
  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in
method
ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 3142,
in getVolumeInfo
info = self._produce_volume(sdUUID, imgUUID, volUUID).getInfo()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/volume.py", line 258,
in getInfo
leasestatus = self.getLeaseStatus()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/volume.py", line 203,
in getLeaseStatus
self.volUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 549, in
inquireVolumeLease
return self._domainLock.inquire(lease)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line
464, in inquire
sector=self._block_size)
sanlock.SanlockException: (-223, 'Sanlock resource read failure', 'Lease
does not exist on storage')
2020-11-27 18:37:52,865+0100 INFO  (jsonrpc/4) [storage.TaskManager.Task]
(Task='138a3615-d537-4e5f-a39c-335269ad0917') aborting: Task is aborted:
"value=(-223, 'Sanlock resource read failure', 'Lease does not exist on
storage') abortedcode=100" (task:1190)
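The "Lease does not exist on storage" exception suggests the lease metadata on
the hosted-engine storage domain is missing or unreadable rather than a vdsm
bug. If it really is only a stale hosted-engine lockspace left behind by the
iSCSI failover, one recovery path that is sometimes suggested is reinitializing
the lockspace from a host that can still reach the storage (a sketch, to be
used with care, with the engine VM down and the cluster in global maintenance):

# What the HA agent/broker currently think
hosted-engine --vm-status

# Stop the agents from acting on the lease while we touch it
hosted-engine --set-maintenance --mode=global

# Rewrite the hosted-engine sanlock lockspace metadata on the shared storage
hosted-engine --reinitialize-lockspace --force

# Leave maintenance and retry
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-start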


3. supervdsm.log
MainProcess|monitor/de4645f::DEBUG::2020-11-27
18:41:25,286::commands::153::common.commands::(start) /usr/bin/taskset
--cpu-list 0-11 /usr/sbin/dmsetup remove
de4645fc--f379--4837--916b--a0c2b89927d9-dfa4e933--2b9c--4057--a4c5--aa4485b070e9
(cwd None)
MainProcess|monitor/de4645f::DEBUG::2020-11-27
18:41:25,293::commands::98::common.commands::(run) FAILED: <err> =
b'device-mapper: remove ioctl on
de4645fc--f379--4837--916b--a0c2b89927d9-dfa4e933--2b9c--4057--a4c5--aa4485b070e9
 failed: Device or resource busy\nCommand failed.\n'; <rc> = 1
MainProcess|monitor/de4645f::ERROR::2020-11-27
18:41:25,294::supervdsm_server::97::SuperVdsm.ServerCallback::(wrapper)
Error in devicemapper_removeMapping
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/devicemapper.py",
line 141, in removeMapping
commands.run(cmd)
  File "/usr/lib/python3.6/site-packages/vdsm/common/commands.py", line
101, in run
raise cmdutils.Error(args, p.returncode, out, err)
vdsm.common.cmdutils.Error: Command ['/usr/sbin/dmsetup', 'remove',
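The "Device or resource busy" from dmsetup usually just means something still
holds the mapping open: a qemu process, an open LV, or another device-mapper
layer on top of it. A quick way to see what that is (a sketch; the device name
is the one from the supervdsm log above):

# Open count and dependencies of the busy mapping
dmsetup info -c de4645fc--f379--4837--916b--a0c2b89927d9-dfa4e933--2b9c--4057--a4c5--aa4485b070e9
dmsetup ls --tree

# Processes that still have the mapping open
fuser -v /dev/mapper/de4645fc--f379--4837--916b--a0c2b89927d9-dfa4e933--2b9c--4057--a4c5--aa4485b070e9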

[ovirt-users] Re: Reinstall failed on 4.4.1.1 with custom kernel settings

2020-07-22 Thread Marco Marino
Yes, so it seems. Thank you.
Anyway, I'm checking the Ansible playbook:
/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-deploy-facts/tasks/main.yml
(line 2):

- name: Detect host operating system
  set_fact:
    el_ver: "{{ ansible_distribution_major_version|int
                if ansible_distribution == 'RedHat' or ansible_distribution == 'CentOS'
                else 0 }}"
    fc_ver: "{{ ansible_distribution_major_version|int
                if ansible_distribution == 'Fedora'
                else 0 }}"


The same playbook gives 2 different results: el_ver = 4 for oVirt 4.4.1.1 and
el_ver = 8 for oVirt 4.4.1. I don't know how this Ansible task works, but I can
say that the content of /etc/redhat-release is the same on both nodes. Really
strange!
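To see what the task actually works with, it may help to dump the distribution
facts Ansible gathers on each host and compare them between the 4.4.1 and
4.4.1.1 nodes (a sketch, run from the engine machine; the host name is a
placeholder):

# Only the distribution-related facts
ansible -i "host1.example.com," all -m setup -a 'filter=ansible_distribution*'

# For comparison, what the host itself reports
ssh host1.example.com 'cat /etc/os-release /etc/redhat-release'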

Anyway, thank you for your support.
Best regards,
Marco

On Wed, Jul 22, 2020 at 11:02 AM Yedidyah Bar David  wrote:

> On Wed, Jul 22, 2020 at 11:53 AM Marco Marino 
> wrote:
> >
> > Hi,
> > my logs are attached. Please note that each file name has a prefix like
> > "host1_" or "engine_" indicating which server the log comes from.
> > Anyway, in engine_host1_deploy.log I see an error during the playbook
> > execution: "module_stdout" : "/bin/sh: /usr/bin/python2: No such file or
> > directory\r\n"
> > Two questions on this: 1) Why is python2 needed? 2) Why does the reinstall
> > process work on version 4.4.1 even though python2 is missing on 4.4.1?
>
> Seems like you ran into:
>
> https://bugzilla.redhat.com/1858234
>
> Adding Lev.
>
> Best regards,
>
> >
> > Thank you,
> > Marco
> >
> > On Wed, Jul 22, 2020 at 10:15 AM Yedidyah Bar David 
> wrote:
> >>
> >> On Wed, Jul 22, 2020 at 11:11 AM  wrote:
> >> >
> >> > Hi, after the upgrade from 4.4.1 to 4.4.1.1, it seems that I cannot
> reinstall a host when adding custom kernel settings. In my case, I'm trying
> to add "ixgbe.allow_unsupported_sfp=1" to the cmdline, but the reinstall
> process fails without showing any relevant error in /var/log/vdsm/vdsm.log.
> >>
> >> Can you please check/share other logs? Including all of
> >> /var/log/ovirt-engine (on the engine machine)?
> >>
> >> > I did the same test on version 4.4.1 and it works without problems.
> On version 4.4.1 I tried to (1) add a parameter with reinstall, (2) add a
> second parameter and reinstall, (3) remove all params and reinstall and (4)
> finally add only one custom param with reinstall. All tests are ok on
> version 4.4.1 and the problem happens only on version 4.4.1.1.
> > Please, do you have any idea how to solve the issue? Is this a bug?
> >>
> >> Sounds like one to me.
> >>
> >> Best regards,
> >> --
> >> Didi
> >>
> >
> >
>
>
>
> --
> Didi
>
>



[ovirt-users] Re: Reinstall failed on 4.4.1.1 with custom kernel settings

2020-07-22 Thread Marco Marino
Hi,
my logs are attached. Please note that each file name has a prefix like
"host1_" or "engine_" indicating which server the log comes from.
Anyway, in engine_host1_deploy.log I see an error during the playbook
execution: "module_stdout" : "/bin/sh: /usr/bin/python2: No such file or
directory\r\n"
Two questions on this: 1) Why is python2 needed? 2) Why does the reinstall
process work on version 4.4.1 even though python2 is missing on 4.4.1?
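For what it's worth, an EL8 host normally has no /usr/bin/python2 at all, only
python3 and platform-python, so the deploy playbook has to end up using a
python3 interpreter. A quick check on the host, plus the variable that usually
controls this (a sketch; whether host-deploy sets it for you depends on the
version):

# Which interpreters actually exist on the host
ls -l /usr/bin/python* /usr/libexec/platform-python

# Ansible can be pointed at an interpreter explicitly, e.g. in the inventory:
#   host1 ansible_python_interpreter=/usr/libexec/platform-python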

Thank you,
Marco

On Wed, Jul 22, 2020 at 10:15 AM Yedidyah Bar David  wrote:

> On Wed, Jul 22, 2020 at 11:11 AM  wrote:
> >
> > Hi, after the upgrade from 4.4.1 to 4.4.1.1, it seems that I cannot
> reinstall a host when adding custom kernel settings. In my case, I'm trying
> to add "ixgbe.allow_unsupported_sfp=1" to the cmdline, but the reinstall
> process fails without showing any relevant error in /var/log/vdsm/vdsm.log.
>
> Can you please check/share other logs? Including all of
> /var/log/ovirt-engine (on the engine machine)?
>
> > I did the same test on version 4.4.1 and it works without problems. On
> version 4.4.1 I tried to (1) add a parameter with reinstall, (2) add a
> second parameter and reinstall, (3) remove all params and reinstall and (4)
> finally add only one custom param with reinstall. All tests are ok on
> version 4.4.1 and the problem happens only on version 4.4.1.1.
> > Please, do you have any idea how to solve the issue? Is this a bug?
>
> Sounds like one to me.
>
> Best regards,
> --
> Didi
>
>



ovirt-logs.tar.gz
Description: application/gzip