[ovirt-users] Re: VDSM Hooks during migration

2019-08-26 Thread Vrgotic, Marko
Thank you for the update; it's good to re-read pages once in a while.

Sent from my iPhone

> On 26 Aug 2019, at 17:30, Milan Zamazal  wrote:
> 
> "Vrgotic, Marko"  writes:
> 
>> Would you be so kind as to help me, or point me to how to find which
>> hooks, and in which order, are triggered when a VM is being migrated?
> 
> See "VDSM and Hooks" appendix of oVirt Admin Guide:
> 
> https://ovirt.org/documentation/admin-guide/appe-VDSM_and_Hooks.html
> 
> Regards,
> Milan


[ovirt-users] Re: VDSM Hooks during migration

2019-08-26 Thread Vrgotic, Marko
Hi Milan,

Thank you, I was aware of the page.
What I am aiming for is the following:
We have an nsupdate hook which deletes the DNS records of a VM when it is
“destroyed”.
That is just as we wanted it, except in the case of migration, which is also a
“destructive” action from the perspective of the source hypervisor.
I was testing the order of hooks triggered when I issue a VM migration, in
order to discover which hook I can use to trigger an update of the records for
a VM that is migrated.

It seems that “after_vm_destroy” is the last hook executed when a VM is
migrated, and I wanted to verify that.

How come there is no hook which runs when the VM starts or continues on the
destination hypervisor after it is migrated? Or am I missing something?
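
For reference, the admin guide appendix linked in the reply below also lists
destination-side events (before_vm_migrate_destination and
after_vm_migrate_destination), so a DNS refresh could hang off
after_vm_migrate_destination rather than after_vm_destroy. A minimal sketch of
what I have in mind, assuming a script dropped into
/usr/libexec/vdsm/hooks/after_vm_migrate_destination/ and two custom
properties (vmfqdn, vmip) exported into the hook environment; the property
names, DNS server and zone below are examples, not oVirt defaults:

#!/bin/bash
# Hypothetical after_vm_migrate_destination hook: refresh the VM's DNS record
# once the VM is running on the destination host. vmfqdn/vmip are assumed
# custom properties, not anything VDSM provides out of the box.
set -eu

nsupdate <<EOF
server dns1.example.com
zone example.com
update delete ${vmfqdn} A
update add ${vmfqdn} 300 A ${vmip}
send
EOF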

Sent from my iPhone

> On 26 Aug 2019, at 17:30, Milan Zamazal  wrote:
> 
> "Vrgotic, Marko"  writes:
> 
>> Would you be so kind as to help me, or point me to how to find which
>> hooks, and in which order, are triggered when a VM is being migrated?
> 
> See "VDSM and Hooks" appendix of oVirt Admin Guide:
> 
> https://ovirt.org/documentation/admin-guide/appe-VDSM_and_Hooks.html
> 
> Regards,
> Milan


[ovirt-users] Re: oVirt 4.3.5 WARN no gluster network found in cluster

2019-08-26 Thread adrianquintero
Hi,
Yes, I have "glusternet", and it shows the roles "migration" and "gluster".
The hosts show one network connected to management and the other to the
logical network "glusternet".

I am just not sure if I am interpreting it right.
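
A cross-check sketch I have in mind (the volume name "data" is a placeholder,
and the idea that bricks registered against management-network hostnames can
trigger the warning is my assumption, not a confirmed cause):

# Brick paths show which hostname/IP each brick was registered with:
gluster volume info data | grep -i brick
# Peer addresses should resolve onto the glusternet subnet:
gluster peer status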
thanks,



[ovirt-users] Re: Update single node environment from 4.3.3 to 4.3.5 problem

2019-08-26 Thread Gianluca Cecchi
On Mon, Aug 26, 2019 at 6:40 PM Gianluca Cecchi 
wrote:

>
> It seems that the steps below solved the problem (dunno what it was,
> though...).
> Based on this similar issue ("No worthy mechs found") I found inspiration
> here:
>
> https://lists.ovirt.org/pipermail/users/2017-January/079009.html
>
>
> [root@ovirt01 ~]# vdsm-tool configure
>
> Checking configuration status...
>
> abrt is not configured for vdsm
> Managed volume database is already configured
> lvm is configured for vdsm
> libvirt is already configured for vdsm
> SUCCESS: ssl configured to true. No conflicts
> Manual override for multipath.conf detected - preserving current
> configuration
> This manual override for multipath.conf was based on downrevved template.
> You are strongly advised to contact your support representatives
>
> Running configure...
> Reconfiguration of abrt is done.
>
> Done configuring modules to VDSM.
> [root@ovirt01 ~]#
>
> [root@ovirt01 ~]# systemctl restart vdsmd
>
>
I should also say, as it is in some way involved in the thread messages, that
on my node, due to "historical reasons", I have:

[root@ovirt01 vdsm]# getenforce
Permissive
[root@ovirt01 vdsm]#

I don't know whether it could in any way be part of the cause.

Gianluca


[ovirt-users] Re: Update single node environment from 4.3.3 to 4.3.5 problem

2019-08-26 Thread Gianluca Cecchi
On Mon, Aug 26, 2019 at 6:13 PM Gianluca Cecchi 
wrote:

> On Mon, Aug 26, 2019 at 12:44 PM Ales Musil  wrote:
>
>>
>>
>> On Mon, Aug 26, 2019 at 12:30 PM Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Mon, Aug 26, 2019 at 11:58 AM Ales Musil  wrote:
>>>

 I can see that MOM is failing to start because some of the MOM
 dependencies are not starting. Can you please post output from 'systemctl
 status momd'?


>>>
>>>  ● momd.service - Memory Overcommitment Manager Daemon
>>>Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
>>> preset: disabled)
>>>Active: inactive (dead)
>>>
>>> perhaps any other daemon status?
>>> Or any momd related log file generated?
>>>
>>> BTW: I see on a running oVirt 4.3.5 node from another environment that
>>> the status of momd is the same inactive (dead)
>>>
>>>
>> What happens if you try to start the momd?
>>
>
> [root@ovirt01 ~]# systemctl status momd
> ● momd.service - Memory Overcommitment Manager Daemon
>Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
> preset: disabled)
>Active: inactive (dead)
> [root@ovirt01 ~]# systemctl start momd
> [root@ovirt01 ~]#
>
> [root@ovirt01 ~]# systemctl status momd -l
> ● momd.service - Memory Overcommitment Manager Daemon
>Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
> preset: disabled)
>Active: inactive (dead) since Mon 2019-08-26 18:10:20 CEST; 6s ago
>   Process: 18417 ExecStart=/usr/sbin/momd -c /etc/momd.conf -d --pid-file
> /var/run/momd.pid (code=exited, status=0/SUCCESS)
>  Main PID: 18419 (code=exited, status=0/SUCCESS)
>
> Aug 26 18:10:20 ovirt01.mydomain systemd[1]: Starting Memory
> Overcommitment Manager Daemon...
> Aug 26 18:10:20 ovirt01.mydomain systemd[1]: momd.service: Supervising
> process 18419 which is not our child. We'll most likely not notice when it
> exits.
> Aug 26 18:10:20 ovirt01.mydomain systemd[1]: Started Memory Overcommitment
> Manager Daemon.
> Aug 26 18:10:20 ovirt01.mydomain python[18419]: No worthy mechs found
> [root@ovirt01 ~]#
>
> [root@ovirt01 ~]# ps -fp 18419
> UIDPID  PPID  C STIME TTY  TIME CMD
> [root@ovirt01 ~]#
>
> [root@ovirt01 vdsm]# ps -fp 18417
> UIDPID  PPID  C STIME TTY  TIME CMD
> [root@ovirt01 vdsm]#
>
> No log file update under /var/log/vdsm
>
> [root@ovirt01 vdsm]# ls -lt | head -5
> total 118972
> -rw-r--r--. 1 root root 3406465 Aug 23 00:25 supervdsm.log
> -rw-r--r--. 1 root root   73621 Aug 23 00:25 upgrade.log
> -rw-r--r--. 1 vdsm kvm0 Aug 23 00:01 vdsm.log
> -rw-r--r--. 1 vdsm kvm   538480 Aug 22 23:46 vdsm.log.1.xz
> [root@ovirt01 vdsm]#
>
> Gianluca
>

It seems that the steps below solved the problem (dunno what it was,
though...).
Based on this similar issue ("No worthy mechs found") I found inspiration
here:

https://lists.ovirt.org/pipermail/users/2017-January/079009.html


[root@ovirt01 ~]# vdsm-tool configure

Checking configuration status...

abrt is not configured for vdsm
Managed volume database is already configured
lvm is configured for vdsm
libvirt is already configured for vdsm
SUCCESS: ssl configured to true. No conflicts
Manual override for multipath.conf detected - preserving current
configuration
This manual override for multipath.conf was based on downrevved template.
You are strongly advised to contact your support representatives

Running configure...
Reconfiguration of abrt is done.

Done configuring modules to VDSM.
[root@ovirt01 ~]#

[root@ovirt01 ~]# systemctl restart vdsmd
[root@ovirt01 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/etc/systemd/system/vdsmd.service; enabled; vendor
preset: enabled)
   Active: active (running) since Mon 2019-08-26 18:23:29 CEST; 19s ago
  Process: 27326 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh
--post-stop (code=exited, status=0/SUCCESS)
  Process: 27329 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
--pre-start (code=exited, status=0/SUCCESS)
 Main PID: 27401 (vdsmd)
Tasks: 75
   CGroup: /system.slice/vdsmd.service
   ├─27401 /usr/bin/python2 /usr/share/vdsm/vdsmd
   ├─27524 /usr/libexec/ioprocess --read-pipe-fd 49 --write-pipe-fd
47 --max-threads 10 --max-queued-requests 10
   ├─27531 /usr/libexec/ioprocess --read-pipe-fd 55 --write-pipe-fd
54 --max-threads 10 --max-queued-requests 10
   ├─27544 /usr/libexec/ioprocess --read-pipe-fd 60 --write-pipe-fd
59 --max-threads 10 --max-queued-requests 10
   ├─27553 /usr/libexec/ioprocess --read-pipe-fd 67 --write-pipe-fd
66 --max-threads 10 --max-queued-requests 10
   ├─27559 /usr/libexec/ioprocess --read-pipe-fd 72 --write-pipe-fd
71 --max-threads 10 --max-queued-requests 10
   └─27566 /usr/libexec/ioprocess --read-pipe-fd 78 --write-pipe-fd
77 --max-threads 10 --max-queued-requests 10

Aug 26 18:23:29 ovirt01.mydomain vdsmd_init_common.sh[27329]: vdsm: Running
dummybr
Aug 26 18:23:29 

[ovirt-users] Re: Update single node environment from 4.3.3 to 4.3.5 problem

2019-08-26 Thread Gianluca Cecchi
On Mon, Aug 26, 2019 at 12:44 PM Ales Musil  wrote:

>
>
> On Mon, Aug 26, 2019 at 12:30 PM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Mon, Aug 26, 2019 at 11:58 AM Ales Musil  wrote:
>>
>>>
>>> I can see that MOM is failing to start because some of the MOM
>>> dependencies are not starting. Can you please post output from 'systemctl
>>> status momd'?
>>>
>>>
>>
>>  ● momd.service - Memory Overcommitment Manager Daemon
>>Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
>> preset: disabled)
>>Active: inactive (dead)
>>
>> perhaps any other daemon status?
>> Or any momd related log file generated?
>>
>> BTW: I see on a running oVirt 4.3.5 node from another environment that
>> the status of momd is the same inactive (dead)
>>
>>
> What happens if you try to start the momd?
>

[root@ovirt01 ~]# systemctl status momd
● momd.service - Memory Overcommitment Manager Daemon
   Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
preset: disabled)
   Active: inactive (dead)
[root@ovirt01 ~]# systemctl start momd
[root@ovirt01 ~]#

[root@ovirt01 ~]# systemctl status momd -l
● momd.service - Memory Overcommitment Manager Daemon
   Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
preset: disabled)
   Active: inactive (dead) since Mon 2019-08-26 18:10:20 CEST; 6s ago
  Process: 18417 ExecStart=/usr/sbin/momd -c /etc/momd.conf -d --pid-file
/var/run/momd.pid (code=exited, status=0/SUCCESS)
 Main PID: 18419 (code=exited, status=0/SUCCESS)

Aug 26 18:10:20 ovirt01.mydomain systemd[1]: Starting Memory Overcommitment
Manager Daemon...
Aug 26 18:10:20 ovirt01.mydomain systemd[1]: momd.service: Supervising
process 18419 which is not our child. We'll most likely not notice when it
exits.
Aug 26 18:10:20 ovirt01.mydomain systemd[1]: Started Memory Overcommitment
Manager Daemon.
Aug 26 18:10:20 ovirt01.mydomain python[18419]: No worthy mechs found
[root@ovirt01 ~]#

[root@ovirt01 ~]# ps -fp 18419
UIDPID  PPID  C STIME TTY  TIME CMD
[root@ovirt01 ~]#

[root@ovirt01 vdsm]# ps -fp 18417
UIDPID  PPID  C STIME TTY  TIME CMD
[root@ovirt01 vdsm]#

No log file update under /var/log/vdsm

[root@ovirt01 vdsm]# ls -lt | head -5
total 118972
-rw-r--r--. 1 root root 3406465 Aug 23 00:25 supervdsm.log
-rw-r--r--. 1 root root   73621 Aug 23 00:25 upgrade.log
-rw-r--r--. 1 vdsm kvm0 Aug 23 00:01 vdsm.log
-rw-r--r--. 1 vdsm kvm   538480 Aug 22 23:46 vdsm.log.1.xz
[root@ovirt01 vdsm]#

Gianluca
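
A side note for anyone who hits the same message: as far as I understand,
"No worthy mechs found" is the generic Cyrus SASL complaint that no usable
authentication mechanism/plugin could be found, and on 4.3 the vdsm-libvirt
connection authenticates via SASL (which is also part of what 'vdsm-tool
configure' rewrites). Assuming that is the failing piece, a quick sanity
check, with package and path names as on CentOS 7, would be:

# Is the SASL SCRAM plugin installed? The libvirt SASL setup on 4.3
# typically needs it:
rpm -q cyrus-sasl cyrus-sasl-scram
# Which mechanism is libvirt configured to ask for?
grep -i mech_list /etc/sasl2/libvirt.conf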


[ovirt-users] Re: VDSM Hooks during migration

2019-08-26 Thread Milan Zamazal
"Vrgotic, Marko"  writes:

> Would you be so kind as to help me, or point me to how to find which
> hooks, and in which order, are triggered when a VM is being migrated?

See "VDSM and Hooks" appendix of oVirt Admin Guide:

https://ovirt.org/documentation/admin-guide/appe-VDSM_and_Hooks.html

Regards,
Milan


[ovirt-users] Re: VM --- is not responding.

2019-08-26 Thread Edoardo Mazza
...I looked at the logs; the storage domain is on GlusterFS, but I didn't find
any error in Gluster's logs when the VM became unresponsive. It would be very
strange if the problem were in the storage controller. Maybe the VM is simply
under too much load?
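
In case it helps narrow things down, a couple of checks on the Gluster side
while the VM is stalled (a sketch; "data" is a placeholder for the volume
backing the VM):

# Gluster's own per-brick latency statistics:
gluster volume profile data start
gluster volume profile data info
# Per-disk latency/utilization on each host, refreshed every 5 seconds:
iostat -x 5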
thanks
Edoardo

On Wed, 14 Aug 2019 at 18:47, Strahil  wrote:

> Hm... It was supposed to show the controller status.
> Maybe the hpssacli you have does not support your RAID cards. Check for a
> newer version on HPE's support page.
>
> Best Regards,
> Strahil Nikolov
> On Aug 14, 2019 11:40, Edoardo Mazza  wrote:
>
> I installed hpssacli-2.40-13.0.x86_64.rpm and the result of "hpssacli ctrl
> all show status" is:
> Error: No controllers detected. Possible causes:.
> The OS runs on SD cards and the VM runs on an array of traditional disks.
> thanks
> Edoardo
>
> On Mon, 12 Aug 2019 at 05:59, Strahil  wrote:
>
> Would you check the health status of the controllers:
> hpssacli ctrl all show status
>
> Best Regards,
> Strahil Nikolov
> On Aug 11, 2019 09:55, Edoardo Mazza  wrote:
>
> The hosts are 3 ProLiant DL380 Gen10: two hosts with an HPE Smart Array
> P816i-a SR Gen10 controller, and the other host with an
> HPE Smart Array P408i-a SR Gen10. The storage for the oVirt environment is
> Gluster, and the last host is the arbiter in the Gluster environment.
> The S.M.A.R.T. health status is OK on all hosts.
> Edoardo
>
>
>
>
>
> On Thu, 8 Aug 2019 at 16:19, Sandro Bonazzola <sbona...@redhat.com> wrote:
>
>
>
> On Thu, 8 Aug 2019 at 11:19, Edoardo Mazza  wrote:
>
> Hi all,
> For several days now I have been receiving this error for the same VM, but I
> don't understand why.
> The traffic of the virtual machine is not excessive, and neither are CPU and
> RAM, but for a few minutes the VM is not responding, and in the messages log
> file of the VM I see the error below. Can you help me?
> thanks
>
>
> can you check the S.M.A.R.T. health status of the disks?
>
>
>
> Edoardo
> kernel: NMI watchdog: BUG: soft lockup - CPU#2 stuck for 25s! [kworker/2:0:26227]
> Aug  8 02:51:11 vmmysql kernel: Modules linked in: binfmt_misc ip6t_rpfilter ipt_REJECT nf_reject_ipv4 ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter snd_hda_codec_generic iosf_mbi crc32_pclmul ppdev ghash_clmulni_intel snd_hda_intel snd_hda_codec aesni_intel snd_hda_core lrw gf128mul glue_helper ablk_helper snd_hwdep cryptd snd_seq snd_seq_device snd_pcm snd_timer snd soundcore virtio_rng sg virtio_balloon i2c_piix4 parport_pc parport joydev pcspkr ip_tables xfs libcrc32c sd_mod
> Aug  8 02:51:14 vmmysql kernel: crc_t10dif crct10dif_generic sr_mod cdrom virtio_net virtio_console virtio_scsi ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw qxl floppy drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ata_piix libata virtio_pci drm_panel_orientation_quirks virtio_ring virtio dm_mirror dm_region_hash dm_log dm_mod
> Aug  8 02:51:14 vmmysql kernel: CPU: 2 PID: 26227 Comm: kworker/2:0 Kdump: loaded Tainted: G L    3.10.0-957.12.1.el7.x86_64 #1
> Aug  8 02:51:14 vmmysql kernel: Hardware name: oVirt oVirt Node, BIOS 1.11.0-2.el7 04/01/2014
> Aug  8 02:51:14 vmmysql kernel: Workqueue: events_freezable disk_events_workfn
> Aug  8 02:51:14 vmmysql kernel: task: 9e25b6609040 ti: 9e27b161 task.ti: 9e27b161
> Aug  8 02:51:14 vmmysql kernel: RIP: 00
>
>


[ovirt-users] Re: Health Warning: Cinnamon toxic when adding EL7 nodes as oVirt hosts

2019-08-26 Thread thomas
> Hi,
> 
> On Thu, Aug 22, 2019 at 3:29 PM  
> I assume this was successful. Did you check what packages were
> actually installed? Especially which were updated?
> 
I went for minimal user actions, so things would be easy to repeat.

While "yum groupinstall cinnamon" is only a single command, it pulls in (and
on removal takes out) the entire suite of Cinnamon desktop apps in one go,
with quite a few dependencies all over the place.

I couldn't quite compare everything, but I checked all the obvious oVirt
packages, i.e. everything with *ovirt*, plus vdsm, otopi, cockpit, gluster,
etc.: those had the very same version numbers.
> 
> Before doing that, did you try disabling/removing full epel repo (only
> leaving enabled the parts enabled by ovirt-release* package)?
Yes, just disabling the epel-repo won't do the job...
> 
> 
> After installing Cinnamon?
> 
...once Cinnamon is installed, installation as a host fails, because Python
can't find "rpmUtils".
If I remove Cinnamon (yum remove cinnamon; yum autoremove), it works again.
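
A quick way to see what the Cinnamon transaction does to that module (on EL7
the rpmUtils Python module normally ships with the yum package; that
ownership is an assumption worth verifying on the box):

# Does the interpreter still see rpmUtils, and where does it come from?
python2 -c 'import rpmUtils; print(rpmUtils.__file__)'
rpm -qf /usr/lib/python2.7/site-packages/rpmUtils
# Has anything in the owning package been altered or removed?
rpm -V yum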
> 
> This helps if there is a *conflict*, not sure it does much if epel has
> a newer version.
> 
> 
> Didn't understand "through some time of y miniyum". ovirt-host-deploy,
I wouldn't either, I guess my fingers went into some kind of twist there ;-)

It should have read "through some type of miniyum".

From what I understood reading the Python code there, the host-deploy package
imports an rpmUtils Python package (aka miniyum) to ensure that certain RPM
packages are either installed or pulled in for the host. And from what I also
remember from going through the GitHub sources, this dependency on rpmUtils
has been removed at some point.

> which what is ran on the host at that point, is based on otopi, and
> otopi has a yum plugin, and a miniyum module that it uses, and these
> indeed try to install/update packages. This is optional - if you want
> to prevent that, check "OFFLINE" section in:
Yes, and my question was mostly whether this perhaps runs in some type of
Python chroot()/environment that's distinct from the 'normal' one on the target.
> 
> https://github.com/oVirt/ovirt-host-deploy/blob/master/README
> 
I'll have a look at that: Always better to satisfy dependencies early.
> 
On the book: 
> Such a thing does not exist, and if it did, it will quickly become
> out-of-date, and quickly get worse over time. If you search around,
> you actually can find parts of it scattered around, as blog posts,
> 'deep dive' videos, conference presentations, etc. Part of these is
> indeed out-of-date :-(, but at least you can rather easily see when
> they were posted an which version was documented. And of course, you
> have the source! :-)
Yep, the source is there, but without some backgrounder it's a rocky journey...
and I have already watched quite a few videos and read some blogs. I just
don't know how far back I should go; it's ten years, I believe...

I believe the problem here occurs in the context of otopi, and all I have been
able to find on otopi is that while the "human" mode is slightly better than
the "machine" mode for interactive use, it's not really meant to be an end-user
tool... A concept guide was nowhere to be found.
> 
> 
> I'd start with:
> 
> 1. Check host-deploy logs. You can find them on the engine machine
> (copied from the host) in /var/log/ovirt-engine/host-deploy. Compare
> failed and successful ones, especially around 'yum' - it should log
> the list of packages it's going to update etc.
Yup, I looked at that, except that I couldn't quite find that list of
packages: it fails trying to satisfy the preconditions (missing rpmUtils)
before it ever checks/installs what it needs on the target.

> 
> 2. Compare 'rpm -qa' between a failed and a working setup. Also 'yum list
> all'.
Did that to exhaustion but not exhaustively...

Honestly, with the work-around (temporarily removing Cinnamon), it's not quite 
on the critical path any longer... I sure want to solve that puzzle, but I am 
not sure just when I'll be able to have another go at it.
> 
> 
> You mean attach to your email to the mailing list? Not sure you should
> see, but it's anyway considered better these days to upload somewhere
> (dropbox, google drive, some pastebin if it's just log snippets) and
> share a link. This applies to mails to the list. If you open a bug in
> bugzilla, please do attach everything directly.
I am using the web GUI, not an e-mail client. I have been looking for some
type of widget which would allow me to add an attachment there, but I only got
a "send" and a "cancel" button.

I operate my Firefox in paranoid mode (no tracking, blocking ads and
fingerprinting, etc.), so some critical JavaScript could fail.
> 
> Thanks and best regards,
> --
> Didi
Thank you!!

[ovirt-users] Re: Update single node environment from 4.3.3 to 4.3.5 problem

2019-08-26 Thread Ales Musil
On Mon, Aug 26, 2019 at 12:30 PM Gianluca Cecchi 
wrote:

> On Mon, Aug 26, 2019 at 11:58 AM Ales Musil  wrote:
>
>>
>> I can see that MOM is failing to start because some of the MOM
>> dependencies are not starting. Can you please post output from 'systemctl
>> status momd'?
>>
>>
>
>  ● momd.service - Memory Overcommitment Manager Daemon
>Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
> preset: disabled)
>Active: inactive (dead)
>
> perhaps any other daemon status?
> Or any momd related log file generated?
>
> BTW: I see on a running oVirt 4.3.5 node from another environment that the
> status of momd is the same inactive (dead)
>
>
What happens if you try to start the momd?

-- 

Ales Musil

Associate Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com    IM: amusil



[ovirt-users] Re: Update single node environment from 4.3.3 to 4.3.5 problem

2019-08-26 Thread Gianluca Cecchi
On Mon, Aug 26, 2019 at 11:58 AM Ales Musil  wrote:

>
> I can see that MOM is failing to start because some of the MOM
> dependencies are not starting. Can you please post output from 'systemctl
> status momd'?
>
>

 ● momd.service - Memory Overcommitment Manager Daemon
   Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
preset: disabled)
   Active: inactive (dead)

perhaps any other daemon status?
Or any momd related log file generated?

BTW: I see on a running oVirt 4.3.5 node from another environment that the
status of momd is the same inactive (dead)
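
One more data point, with the caveat that this is my reading of the units on
an oVirt host: the MOM instance vdsm actually talks to is the separate
mom-vdsm.service (momd itself staying inactive/dead would then be expected),
so that unit may be the more telling one:

systemctl status mom-vdsm
journalctl -u mom-vdsm -b --no-pager | tail -n 50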


[ovirt-users] Re: Update single node environment from 4.3.3 to 4.3.5 problem

2019-08-26 Thread Ales Musil
On Mon, Aug 26, 2019 at 11:49 AM Gianluca Cecchi 
wrote:

>
>
> On Mon, Aug 26, 2019 at 10:41 AM Ales Musil  wrote:
>
>>
>>
>> On Mon, Aug 26, 2019 at 10:23 AM Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Mon, Aug 26, 2019 at 9:57 AM Dominik Holler 
>>> wrote:
>>>


 On Sun, Aug 25, 2019 at 4:33 PM Gianluca Cecchi <
 gianluca.cec...@gmail.com> wrote:

> On Fri, 23 Aug 2019 at 18:00, Gianluca Cecchi  wrote:
>
>> On Fri, Aug 23, 2019 at 5:06 PM Dominik Holler 
>> wrote:
>>
>>>
>>>
>>>
>>> Gianluca, can you please share the output of 'rpm -qa' of the
>>> affected host?
>>>
>>
>> here it is output of "rpm -qa | sort"
>>
>> https://drive.google.com/file/d/1JG8XfomPSgqp4Y40KOwTGsixnkqkMfml/view?usp=sharing
>>
>
>
> Anything useful from the list of packages for me to try?
>

 I was not able to understand why the services did not start as expected.
 Can you please share the relevant information from the journal?


>>>
>>> Are you interested only in the journal entries from the last boot?
>>> Because I presume I have not set the journal as persistent, and it seems
>>> oVirt doesn't set it either.
>>> Any special switch to give to the journalctl command?
>>>
>>
>> journalctl -xe could give us hint what is preventing vdsmd from starting.
>>
>>
>>>
>>>
> here it is:
>
> https://drive.google.com/file/d/1AyBUPTVqpiSAIYBqe8B6OU8Gn9Qg9cS5/view?usp=sharing
>
>
> thanks,
> Gianluca
>

I can see that MOM is failing to start because some of the MOM dependencies
are not starting. Can you please post output from 'systemctl status momd'?

-- 

Ales Musil

Associate Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com    IM: amusil



[ovirt-users] Re: Update single node environment from 4.3.3 to 4.3.5 problem

2019-08-26 Thread Gianluca Cecchi
On Mon, Aug 26, 2019 at 10:41 AM Ales Musil  wrote:

>
>
> On Mon, Aug 26, 2019 at 10:23 AM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Mon, Aug 26, 2019 at 9:57 AM Dominik Holler 
>> wrote:
>>
>>>
>>>
>>> On Sun, Aug 25, 2019 at 4:33 PM Gianluca Cecchi <
>>> gianluca.cec...@gmail.com> wrote:
>>>
 On Fri, 23 Aug 2019 at 18:00, Gianluca Cecchi  wrote:

> On Fri, Aug 23, 2019 at 5:06 PM Dominik Holler 
> wrote:
>
>>
>>
>>
>> Gianluca, can you please share the output of 'rpm -qa' of the
>> affected host?
>>
>
> here it is output of "rpm -qa | sort"
>
> https://drive.google.com/file/d/1JG8XfomPSgqp4Y40KOwTGsixnkqkMfml/view?usp=sharing
>


 Anything useful from the list of packages for me to try?

>>>
>>> I was not able to understand why the services did not start as expected.
>>> Can you please share the relevant information from the journal?
>>>
>>>
>>
>> Are you interested only in the journal entries from the last boot?
>> Because I presume I have not set the journal as persistent, and it seems
>> oVirt doesn't set it either.
>> Any special switch to give to the journalctl command?
>>
>
> journalctl -xe could give us hint what is preventing vdsmd from starting.
>
>
>>
>>
here it is:
https://drive.google.com/file/d/1AyBUPTVqpiSAIYBqe8B6OU8Gn9Qg9cS5/view?usp=sharing


thanks,
Gianluca


[ovirt-users] Engine died while creating VM

2019-08-26 Thread Erick Perez - Quadrian Enterprises
Hi,
I was just creating a VM on a freshly installed 2-node self-hosted
oVirt setup with an NFS backend.
NFS connections are v4.2 for the VM data domain.

What other logs are needed?

Thanks, in advance.


Here is the output of the last 300 lines of /var/log/messages
[root@hvm002 ~]#
[root@hvm002 ~]# tail /var/log/messages -n 300
Aug 26 04:34:56 hvm002 systemd: ovirt-ha-agent.service: main process
exited, code=exited, status=157/n/a
Aug 26 04:34:56 hvm002 systemd: Unit ovirt-ha-agent.service entered
failed state.
Aug 26 04:34:56 hvm002 systemd: ovirt-ha-agent.service failed.
Aug 26 04:35:06 hvm002 systemd: ovirt-ha-agent.service holdoff time
over, scheduling restart.
Aug 26 04:35:06 hvm002 systemd: Cannot add dependency job for unit
lvm2-lvmetad.socket, ignoring: Unit is masked.
Aug 26 04:35:06 hvm002 systemd: Stopped oVirt Hosted Engine High
Availability Monitoring Agent.
Aug 26 04:35:06 hvm002 systemd: Started oVirt Hosted Engine High
Availability Monitoring Agent.
Aug 26 04:35:07 hvm002 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed
to start necessary monitors
Aug 26 04:35:07 hvm002 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent
call last):#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 131, in _run_agent#012return action(he)#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 55, in action_proper#012return he.start_monitoring()#012
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 432, in start_monitoring#012self._initialize_broker()#012
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 556, in _initialize_broker#012m.get('options', {}))#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 86, in start_monitor#012).format(t=type, o=options,
e=e)#012RequestError: brokerlink - failed to start monitor via
ovirt-ha-broker: [Errno 2] No such file or directory, [monitor:
'network', options: {'tcp_t_address': '', 'network_test': 'dns',
'tcp_t_port': '', 'addr': '172.21.48.1'}]
Aug 26 04:35:07 hvm002 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent
Aug 26 04:35:07 hvm002 systemd: ovirt-ha-agent.service: main process
exited, code=exited, status=157/n/a
Aug 26 04:35:07 hvm002 systemd: Unit ovirt-ha-agent.service entered
failed state.
Aug 26 04:35:07 hvm002 systemd: ovirt-ha-agent.service failed.
Aug 26 04:35:08 hvm002 vdsm[7369]: ERROR failed to retrieve Hosted
Engine HA score '[Errno 2] No such file or directory'Is the Hosted
Engine setup finished?
Aug 26 04:35:15 hvm002 systemd: Cannot add dependency job for unit
lvm2-lvmetad.socket, ignoring: Unit is masked.
Aug 26 04:35:15 hvm002 systemd: Starting Cockpit Web Service...
Aug 26 04:35:15 hvm002 systemd: Started Cockpit Web Service.
Aug 26 04:35:15 hvm002 cockpit-ws: Using certificate:
/etc/cockpit/ws-certs.d/0-self-signed.cert
Aug 26 04:35:15 hvm002 cockpit-ws: couldn't read from connection: Peer
sent fatal TLS alert: Certificate is bad
Aug 26 04:35:17 hvm002 systemd: ovirt-ha-agent.service holdoff time
over, scheduling restart.
Aug 26 04:35:17 hvm002 systemd: Cannot add dependency job for unit
lvm2-lvmetad.socket, ignoring: Unit is masked.
Aug 26 04:35:17 hvm002 systemd: Stopped oVirt Hosted Engine High
Availability Monitoring Agent.
Aug 26 04:35:17 hvm002 systemd: Started oVirt Hosted Engine High
Availability Monitoring Agent.
Aug 26 04:35:17 hvm002 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed
to start necessary monitors
Aug 26 04:35:17 hvm002 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent
call last):#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 131, in _run_agent#012return action(he)#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 55, in action_proper#012return he.start_monitoring()#012
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 432, in start_monitoring#012self._initialize_broker()#012
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 556, in _initialize_broker#012m.get('options', {}))#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 86, in start_monitor#012).format(t=type, o=options,
e=e)#012RequestError: brokerlink - failed to start monitor via
ovirt-ha-broker: [Errno 2] No such file or directory, [monitor:
'network', options: {'tcp_t_address': '', 'network_test': 'dns',
'tcp_t_port': '', 'addr': '172.21.48.1'}]
Aug 26 04:35:17 hvm002 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent
Aug 26 04:35:17 hvm002 systemd: 
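
A couple of first checks for this kind of failure, assuming the default 4.x
unit names and paths: the agent exits because it cannot start the broker's
network monitor, so the broker side is worth inspecting before the agent:

systemctl status ovirt-ha-broker
journalctl -u ovirt-ha-broker -b --no-pager | tail -n 50
# The monitor options from the traceback ('network_test': 'dns',
# 'addr': '172.21.48.1') should come from the hosted-engine configuration:
grep -E 'network_test|gateway' /etc/ovirt-hosted-engine/hosted-engine.conf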

[ovirt-users] Re: Update single node environment from 4.3.3 to 4.3.5 problem

2019-08-26 Thread Ales Musil
On Mon, Aug 26, 2019 at 10:23 AM Gianluca Cecchi 
wrote:

> On Mon, Aug 26, 2019 at 9:57 AM Dominik Holler  wrote:
>
>>
>>
>> On Sun, Aug 25, 2019 at 4:33 PM Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Fri, 23 Aug 2019 at 18:00, Gianluca Cecchi  wrote:
>>>
 On Fri, Aug 23, 2019 at 5:06 PM Dominik Holler 
 wrote:

>
>
>
> Gianluca, can you please share the output of 'rpm -qa' of the affected
> host?
>

 here it is output of "rpm -qa | sort"

 https://drive.google.com/file/d/1JG8XfomPSgqp4Y40KOwTGsixnkqkMfml/view?usp=sharing

>>>
>>>
>>> Anything useful from the list of packages for me to try?
>>>
>>
>> I was not able to understand why the services did not start as expected.
>> Can you please share the relevant information from the journal?
>>
>>
>
> Are you interested only in the journal entries from the last boot? Because
> I presume I have not set the journal as persistent, and it seems oVirt
> doesn't set it either.
> Any special switch to give to the journalctl command?
>

journalctl -xe could give us hint what is preventing vdsmd from starting.


>
> Thanks,
> Gianluca


-- 

Ales Musil

Associate Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com    IM: amusil



[ovirt-users] Re: Update single node environment from 4.3.3 to 4.3.5 problem

2019-08-26 Thread Gianluca Cecchi
On Mon, Aug 26, 2019 at 9:57 AM Dominik Holler  wrote:

>
>
> On Sun, Aug 25, 2019 at 4:33 PM Gianluca Cecchi 
> wrote:
>
>> On Fri, 23 Aug 2019 at 18:00, Gianluca Cecchi  wrote:
>>
>>> On Fri, Aug 23, 2019 at 5:06 PM Dominik Holler 
>>> wrote:
>>>



 Gianluca, can you please share the output of 'rpm -qa' of the affected
 host?

>>>
>>> here it is output of "rpm -qa | sort"
>>>
>>> https://drive.google.com/file/d/1JG8XfomPSgqp4Y40KOwTGsixnkqkMfml/view?usp=sharing
>>>
>>
>>
>> Anything useful from the list of packages for me to try?
>>
>
> I was not able to understand why the services did not start as expected.
> Can you please share the relevant information from the journal?
>
>

Are you interested only in the journal entries from the last boot? Because
I presume I have not set the journal as persistent, and it seems oVirt
doesn't set it either.
Any special switch to give to the journalctl command?
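
For reference, with systemd's default Storage=auto the journal becomes
persistent as soon as the on-disk directory exists; this is plain systemd
behaviour, nothing oVirt-specific:

mkdir -p /var/log/journal
systemctl restart systemd-journald
# afterwards, for example:
journalctl --list-boots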

Thanks,
Gianluca


[ovirt-users] Re: Update single node environment from 4.3.3 to 4.3.5 problem

2019-08-26 Thread Dominik Holler
On Sun, Aug 25, 2019 at 4:33 PM Gianluca Cecchi 
wrote:

> On Fri, 23 Aug 2019 at 18:00, Gianluca Cecchi  wrote:
>
>> On Fri, Aug 23, 2019 at 5:06 PM Dominik Holler 
>> wrote:
>>
>>>
>>>
>>>
>>> Gianluca, can you please share the output of 'rpm -qa' of the affected
>>> host?
>>>
>>
>> here it is output of "rpm -qa | sort"
>>
>> https://drive.google.com/file/d/1JG8XfomPSgqp4Y40KOwTGsixnkqkMfml/view?usp=sharing
>>
>
>
> Anything useful from the list of packages for me to try?
>

I was not able to understand why the services did not start as expected.
Can you please share the relevant information from the journal?


> Thanks
> Gianluca
>