[ovirt-users] Do I need to set VM vnic profile for GlusterFS network?

2018-08-15 Thread Jayme
I followed this guide to get my three node HCI cluster up and running:
https://www.ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/

The docs were a bit out of date but most things matched up.

I have configured an internal GlusterFS network on a separate subnet on a
10GbE switch and have set it as the GlusterFS and migration network.

Is this all that needs to be done for all GlusterFS traffic to pass through
that network, or must the GlusterFS vnic profile also be selected for each
VM?  I'm not sure whether my VMs should be using the ovirtmgmt vnic or the
GlusterFS vnic.
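
A minimal way to check where the Gluster traffic actually flows, assuming
shell access to the hosts; "myvol" and the 10.10.10. prefix below are
placeholders for your own volume name and storage subnet. The gluster network
role only governs host-to-host brick and migration traffic, so this check does
not involve the VM vNICs at all:

# Bricks should be registered against the storage-subnet hostnames/IPs:
gluster volume status myvol | grep -i brick

# Established brick connections should sit on the 10GbE subnet:
ss -tnp | grep glusterfsd | grep '10\.10\.10\.'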
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KCTVYA2Q6NHJDCMTZJBEQV3LK45WFERA/


[ovirt-users] Re: storage domain sync problem and fail to run vm after change LUN mapping

2018-08-15 Thread dvotrak
Hi, any suggestion on how to fix this?

Sent with [ProtonMail](https://protonmail.com) Secure Email.

‐‐‐ Original Message ‐‐‐
Il 12 agosto 2018 5:30 PM, dvotrak  ha scritto:

> OK, so what is the correct way to fix this? After changing the multipath
> configuration, should I restart the engine or put something into maintenance
> (and then reactivate it) to be sure that every value in the DB is fixed? Can I
> make this change one host at a time, or is it better to do it on all hosts
> simultaneously (global maintenance)?
>
> Sent with [ProtonMail](https://protonmail.com) Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> Il 12 agosto 2018 4:54 PM, Fred Rolland  ha scritto:
>
>> Vdsm requires "user_friendly_names no" in order to have the same device
>> names on all hosts.
>>
>> https://github.com/oVirt/vdsm/blob/9d229e0c1b486c87c682904740c38bcbd0682fa7/lib/vdsm/tool/configurators/multipath.py#L113
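
One hedged way to follow both the vendor guidelines and the vdsm requirement,
assuming your multipath version honours /etc/multipath/conf.d drop-ins (EL7
does) and that the vendor settings do not strictly depend on friendly names;
the file name and the device options below are placeholders:

# Leave vdsm's /etc/multipath.conf (user_friendly_names "no") in place and add
# the vendor-recommended device tuning as a drop-in instead:
cat > /etc/multipath/conf.d/vendor.conf <<'EOF'
devices {
    device {
        vendor  "DGC"
        # product "<string from the vendor guidelines>"
        # other vendor-recommended options here, but not user_friendly_names
    }
}
EOF
systemctl reload multipathd    # re-reads the configuration
multipath -ll                  # device names should remain WWID-based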
>>
>> On Thu, Aug 9, 2018 at 11:55 PM, dvotrak  wrote:
>>
>>> Environment:
>>> Ovirt version: 4.2.5
>>> Centos 7.5
>>>
>>> During a planned maintenance window, the multipath configuration was changed
>>> according to the storage vendor's guidelines.
>>> The new configuration changed user_friendly_names from no to yes, so the
>>> multipath -ll output changed from "360060160a6213400fa2d31acbbfce511
>>> dm-8 DGC" to "mpathh (360060160a6213400fa2d31acbbfce511) dm-8 DGC".
>>>
>>> After this change, the following errors are displayed:
>>>
>>> 1) When attempting to start a VM with a direct LUN: Failed to run VM
>>> on Host  VM  is down. Exit message: Bad volume
>>> specification
>>> Inspecting the VM disk, the LUN id remains the previous one
>>> (360060160a6213400fa2d31acbbfce511) instead of the new one (mpathh).
>>> Removing/detaching and re-adding/re-attaching the disk to the VM doesn't help.
>>>
>>> 2) Randomly: Storage domains with IDs [] could not be synchronized. To
>>> synchronize them, please move them to maintenance and then activate.
>>> Moving the domain to maintenance and then activating it doesn't help.
>>>
>>> Inspecting the storage domain LUNs, almost all changed their LUN id to the new
>>> ones (mpathXX), but this one remains in "the old style" (maybe because
>>> lun_id is a primary key and this one was already taken?)
>>>
>>> Any idea on how to resolve these problems?
>>>
>>> Thanks
>>>
>>> Sent with [ProtonMail](https://protonmail.com) Secure Email.
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDGU2DA3S5LHKYFHANZU5Q72PD25UP52/___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7N3VIPJWMFVBIC664FUOYEKCWRKAYP2Y/


[ovirt-users] Re: oVirt 4.2.5 : VM snapshot creation does not work : command HSMGetAllTasksStatusesVDS failed: Could not acquire resource

2018-08-15 Thread Nir Soffer
On Wed, Aug 15, 2018 at 10:30 PM Алексей Максимов <
aleksey.i.maksi...@yandex.ru> wrote:

> Hello Nir
>
> > To confirm this theory, please share the output of:
> > Top volume:
> > dd if=/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/metadata bs=512 count=1
> skip=16 iflag=direct
>
> DOMAIN=6db73566-0f7f-4438-a9ef-6815075f45ea
> CTIME=1533083673
> FORMAT=COW
> DISKTYPE=DATA
> LEGALITY=LEGAL
> SIZE=62914560
> VOLTYPE=LEAF
> DESCRIPTION=
> IMAGE=cdf1751b-64d3-42bc-b9ef-b0174c7ea068
> PUUID=208ece15-1c71-46f2-a019-6a9fce4309b2
> MTIME=0
> POOL_UUID=
> TYPE=SPARSE
> GEN=0
> EOF
> 1+0 records in
> 1+0 records out
> 512 bytes (512 B) copied, 0.000348555 s, 1.5 MB/s
>
>
> > Base volume:
> > dd if=/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/metadata bs=512 count=1
> skip=23 iflag=direct
>
>
> DOMAIN=6db73566-0f7f-4438-a9ef-6815075f45ea
> CTIME=1512474404
> FORMAT=COW
> DISKTYPE=2
> LEGALITY=LEGAL
> SIZE=62914560
> VOLTYPE=INTERNAL
> DESCRIPTION={"DiskAlias":"KOM-APP14_Disk1","DiskDescription":""}
> IMAGE=cdf1751b-64d3-42bc-b9ef-b0174c7ea068
> PUUID=----
> MTIME=0
> POOL_UUID=
> TYPE=SPARSE
> GEN=0
> EOF
> 1+0 records in
> 1+0 records out
> 512 bytes (512 B) copied, 0.00031362 s, 1.6 MB/s
>
>
> > Deleted volume?:
> > dd if=/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/metadata bs=512 count=1
> skip=15 iflag=direct
>
>
> NONE=##
> EOF
> 1+0 records in
> 1+0 records out
> 512 bytes (512 B) copied, 0.000350361 s, 1.5 MB/s
>

This confirms that
6db73566-0f7f-4438-a9ef-6815075f45ea/4974a4cc-b388-456f-b98e-19d2158f0d58
is a deleted volume.

To fix this VM, please remove this volume. Run these commands on the SPM
host:

systemctl stop vdsmd

lvremove 6db73566-0f7f-4438-a9ef-6815075f45ea/4974a4cc-b388-456f-b98e-19d2158f0d58
systemctl start vdsmd

You should be able to create a snapshot after that.
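
If you want a sanity check before removing it (same UUIDs as above; skip=15
matches the MD_15 tag of this volume):

# The orphan should still carry the MD_15 tag:
lvs -o lv_name,tags 6db73566-0f7f-4438-a9ef-6815075f45ea | grep 4974a4cc
# And its metadata slot should read NONE=..., i.e. already cleared:
dd if=/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/metadata bs=512 count=1 skip=15 iflag=direct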


>
>
> 15.08.2018, 21:09, "Nir Soffer" :
> > On Wed, Aug 15, 2018 at 6:14 PM Алексей Максимов <
> aleksey.i.maksi...@yandex.ru> wrote:
> >> Hello Nir
> >>
> >> Thanks for the answer.
> >> The output of the commands is below.
> >>
> >>
> *
> >>> 1. Please share the output of this command on one of the hosts:
> >>> lvs -o vg_name,lv_name,tags | grep cdf1751b-64d3-42bc-b9ef-b0174c7ea068
> >>
> *
> >> # lvs -o vg_name,lv_name,tags | grep
> cdf1751b-64d3-42bc-b9ef-b0174c7ea068
> >>
> >>   VG   LV
>  LV Tags
> >>   ...
> >>   6db73566-0f7f-4438-a9ef-6815075f45ea
> 208ece15-1c71-46f2-a019-6a9fce4309b2
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_23,PU_----
> >>   6db73566-0f7f-4438-a9ef-6815075f45ea
> 4974a4cc-b388-456f-b98e-19d2158f0d58
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_15,PU_----
> >>   6db73566-0f7f-4438-a9ef-6815075f45ea
> 8c66f617-7add-410c-b546-5214b0200832
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_16,PU_208ece15-1c71-46f2-a019-6a9fce4309b2
> >
> > So we have 3 volumes - 2 are base volumes:
> >
> > - 208ece15-1c71-46f2-a019-6a9fce4309b2
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_23,PU_----
> > - 4974a4cc-b388-456f-b98e-19d2158f0d58
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_15,PU_----
> >
> > And one is top volume:
> > - 8c66f617-7add-410c-b546-5214b0200832
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_16,PU_208ece15-1c71-46f2-a019-6a9fce4309b2
> >
> > So according to vdsm, this is the chain:
> >
> > 208ece15-1c71-46f2-a019-6a9fce4309b2 <-
> 8c66f617-7add-410c-b546-5214b0200832 (top)
> >
> > The volume 4974a4cc-b388-456f-b98e-19d2158f0d58 is not part of this
> chain.
> >
> >>
> *
> >>> qemu-img info --backing /dev/vg_name/lv_name
> >>
> *
> >>
> >> # qemu-img info --backing
> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
> >>
> >> image:
> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
> >> file format: qcow2
> >> 

[ovirt-users] Re: hosted engine not reachable

2018-08-15 Thread Douglas Duckworth
Eventually failed.

I am running CentOS 7.5 on the host.  After re-reading the documentation, it
seems that my /var partition might not be large enough, as it's only 30 GB,
but there was no warning message indicating that's an issue.
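
For what it's worth, a quick way to see whether space really is the problem
before the next attempt (the paths below are just the usual suspects on a
hosted-engine host):

df -h / /var /var/tmp            # free space where logs and the appliance image land
du -sh /var/log /var/tmp 2>/dev/null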

Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690

On Wed, Aug 15, 2018 at 2:10 PM, Douglas Duckworth 
wrote:

> OK, the ansible engine-deploy now seems to be stuck at the same step:
>
> [ INFO  ] TASK [Force host-deploy in offline mode]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [Add host]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [Wait for the host to be up]
>
> On the hypervisor in syslog I see:
>
> Aug 15 14:09:26 ovirt-hv1 python: ansible-ovirt_hosts_facts Invoked with
> pattern=name=ovirt-hv1.pbtech fetch_nested=False nested_attributes=[]
> auth={'timeout': 0, 'url': 'https://ovirt-engine.pbtech/ovirt-engine/api',
>
> Within the VM, which I can access over virtual machine network, I see:
>
> Aug 15 18:08:06 ovirt-engine python: 192.168.122.69 - - [15/Aug/2018
> 14:08:06] "GET /v2.0/networks HTTP/1.1" 200 -
> Aug 15 18:08:11 ovirt-engine ovsdb-server: ovs|8|stream_ssl|WARN|SSL_read:
> system error (Connection reset by peer)
> Aug 15 18:08:11 ovirt-engine ovsdb-server: ovs|9|jsonrpc|WARN|ssl:127
> .0.0.1:50356: receive error: Connection reset by peer
> Aug 15 18:08:11 ovirt-engine ovsdb-server: ovs|00010|reconnect|WARN|ssl:1
> 27.0.0.1:50356: connection dropped (Connection reset by peer)
>
> Thanks,
>
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit
> Weill Cornell Medicine
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690
>
> On Wed, Aug 15, 2018 at 1:21 PM, Douglas Duckworth <
> dod2...@med.cornell.edu> wrote:
>
>> Same VDSM error
>>
>> This is the state shown by service after the failed state messages:
>>
>> ● vdsmd.service - Virtual Desktop Server Manager
>>Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled;
>> vendor preset: enabled)
>>Active: active (running) since Wed 2018-08-15 13:07:48 EDT; 4min 10s
>> ago
>>  Main PID: 18378 (vdsmd)
>> Tasks: 56
>>CGroup: /system.slice/vdsmd.service
>>├─18378 /usr/bin/python2 /usr/share/vdsm/vdsmd
>>├─18495 /usr/libexec/ioprocess --read-pipe-fd 45
>> --write-pipe-fd 44 --max-threads 10 --max-queued-requests 10
>>├─18504 /usr/libexec/ioprocess --read-pipe-fd 53
>> --write-pipe-fd 51 --max-threads 10 --max-queued-requests 10
>>└─20825 /usr/libexec/ioprocess --read-pipe-fd 60
>> --write-pipe-fd 59 --max-threads 10 --max-queued-requests 10
>>
>> Aug 15 13:07:49 ovirt-hv1.pbtech vdsm[18378]: WARN Not ready yet,
>> ignoring event '|virt|VM_status|c5463d87-c964-4430-9fdb-0e97d56cf812'
>> args={'c5463d87-c964-4430-9fdb-0e97d56cf812': {'status': 'Up',
>> 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port':
>> '5900'}], 'hash': '6802750603520244794', 'cpuUser': '0.00',
>> 'monitorResponse': '0', 'cpuUsage': '0.00', 'elapsedTime': '124', 'cpuSys':
>> '0.00', 'vcpuPeriod': 10L, 'timeOffset': '0', 'clientIp': '',
>> 'pauseCode': 'NOERR', 'vcpuQuota': '-1'}}
>> Aug 15 13:07:49 ovirt-hv1.pbtech vdsm[18378]: WARN MOM not available.
>> Aug 15 13:07:49 ovirt-hv1.pbtech vdsm[18378]: WARN MOM not available, KSM
>> stats will be missing.
>> Aug 15 13:07:49 ovirt-hv1.pbtech vdsm[18378]: ERROR failed to retrieve
>> Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted
>> Engine setup finished?
>> Aug 15 13:07:50 ovirt-hv1.pbtech vdsm[18378]: WARN Not ready yet,
>> ignoring event '|virt|VM_status|c5463d87-c964-4430-9fdb-0e97d56cf812'
>> args={'c5463d87-c964-4430-9fdb-0e97d56cf812': {'status': 'Up',
>> 'username': 'Unknown', 'memUsage': '40', 'guestFQDN': '', 'memoryStats':
>> {'swap_out': '0', 'majflt': '0', 'mem_cached': '772684', 'mem_free':
>> '1696572', 'mem_buffers': '9348', 'swap_in': '0', 'pageflt': '3339',
>> 'mem_total': '3880652', 'mem_unused': '1696572'}, 'session': 'Unknown',
>> 'netIfaces': [], 'guestCPUCount': -1, 'appsList': (), 'guestIPs': '',
>> 'disksUsage': []}}
>> Aug 15 13:08:04 ovirt-hv1.pbtech vdsm[18378]: ERROR failed to retrieve
>> Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted
>> Engine setup finished?
>> Aug 15 13:08:16 ovirt-hv1.pbtech vdsm[18378]: WARN File:
>> /var/lib/libvirt/qemu/channels/c5463d87-c964-4430-9fdb-
>> 0e97d56cf812.com.redhat.rhevm.vdsm already removed
>> Aug 15 13:08:16 ovirt-hv1.pbtech vdsm[18378]: WARN File:
>> /var/lib/libvirt/qemu/channels/c5463d87-c964-4430-9fdb-
>> 0e97d56cf812.org.qemu.guest_agent.0 already removed
>> Aug 15 13:08:16 ovirt-hv1.pbtech vdsm[18378]: WARN File:
>> /var/run/ovirt-vmconsole-console/c5463d87-c964-4430-9fdb-0e97d56cf812.sock
>> already removed
>> Aug 15 13:08:19 ovirt-hv1.pbtech vdsm[18378]: ERROR failed to retrieve
>> Hosted Engine HA score '[Errno 2] No such file 

[ovirt-users] Re: oVirt 4.2.5 : VM snapshot creation does not work : command HSMGetAllTasksStatusesVDS failed: Could not acquire resource

2018-08-15 Thread Алексей Максимов
Hello Nir

> To confirm this theory, please share the output of:
> Top volume:
> dd if=/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/metadata bs=512 count=1 
> skip=16 iflag=direct

DOMAIN=6db73566-0f7f-4438-a9ef-6815075f45ea
CTIME=1533083673
FORMAT=COW
DISKTYPE=DATA
LEGALITY=LEGAL
SIZE=62914560
VOLTYPE=LEAF
DESCRIPTION=
IMAGE=cdf1751b-64d3-42bc-b9ef-b0174c7ea068
PUUID=208ece15-1c71-46f2-a019-6a9fce4309b2
MTIME=0
POOL_UUID=
TYPE=SPARSE
GEN=0
EOF
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.000348555 s, 1.5 MB/s


> Base volume:
> dd if=/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/metadata bs=512 count=1 
> skip=23 iflag=direct


DOMAIN=6db73566-0f7f-4438-a9ef-6815075f45ea
CTIME=1512474404
FORMAT=COW
DISKTYPE=2
LEGALITY=LEGAL
SIZE=62914560
VOLTYPE=INTERNAL
DESCRIPTION={"DiskAlias":"KOM-APP14_Disk1","DiskDescription":""}
IMAGE=cdf1751b-64d3-42bc-b9ef-b0174c7ea068
PUUID=----
MTIME=0
POOL_UUID=
TYPE=SPARSE
GEN=0
EOF
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00031362 s, 1.6 MB/s


> Deleted volume?:
> dd if=/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/metadata bs=512 count=1 
> skip=15 iflag=direct

NONE=##
EOF
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.000350361 s, 1.5 MB/s


15.08.2018, 21:09, "Nir Soffer" :
> On Wed, Aug 15, 2018 at 6:14 PM Алексей Максимов 
>  wrote:
>> Hello Nir
>>
>> Thanks for the answer.
>> The output of the commands is below.
>>
>> *
>>> 1. Please share the output of this command on one of the hosts:
>>> lvs -o vg_name,lv_name,tags | grep cdf1751b-64d3-42bc-b9ef-b0174c7ea068
>> *
>> # lvs -o vg_name,lv_name,tags | grep cdf1751b-64d3-42bc-b9ef-b0174c7ea068
>>
>>   VG                                   LV                                   
>> LV Tags
>>   ...
>>   6db73566-0f7f-4438-a9ef-6815075f45ea 208ece15-1c71-46f2-a019-6a9fce4309b2 
>> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_23,PU_----
>>   6db73566-0f7f-4438-a9ef-6815075f45ea 4974a4cc-b388-456f-b98e-19d2158f0d58 
>> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_15,PU_----
>>   6db73566-0f7f-4438-a9ef-6815075f45ea 8c66f617-7add-410c-b546-5214b0200832 
>> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_16,PU_208ece15-1c71-46f2-a019-6a9fce4309b2
>
> So we have 3 volumes - 2 are base volumes:
>
> - 208ece15-1c71-46f2-a019-6a9fce4309b2 
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_23,PU_----
> - 4974a4cc-b388-456f-b98e-19d2158f0d58 
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_15,PU_----
>
> And one is top volume:
> - 8c66f617-7add-410c-b546-5214b0200832 
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_16,PU_208ece15-1c71-46f2-a019-6a9fce4309b2
>
> So according to vdsm, this is the chain:
>
>     208ece15-1c71-46f2-a019-6a9fce4309b2 <- 
> 8c66f617-7add-410c-b546-5214b0200832 (top)
>
> The volume 4974a4cc-b388-456f-b98e-19d2158f0d58 is not part of this chain.
>
>> *
>>> qemu-img info --backing /dev/vg_name/lv_name
>> *
>>
>> # qemu-img info --backing 
>> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
>>
>> image: 
>> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
>> file format: qcow2
>> virtual size: 30G (32212254720 bytes)
>> disk size: 0
>> cluster_size: 65536
>> Format specific information:
>>     compat: 1.1
>>     lazy refcounts: false
>>     refcount bits: 16
>>     corrupt: false
>
> This is the base volume according to vdsm and qemu, good.
>
>> # qemu-img info --backing 
>> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/4974a4cc-b388-456f-b98e-19d2158f0d58
>>
>> image: 
>> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/4974a4cc-b388-456f-b98e-19d2158f0d58
>> file format: qcow2
>> virtual size: 30G (32212254720 bytes)
>> disk size: 0
>> cluster_size: 65536
>> backing file: 208ece15-1c71-46f2-a019-6a9fce4309b2 (actual path: 
>> 

[ovirt-users] Re: Issue with NFS and Storage domain setup

2018-08-15 Thread Greg Sheremeta
Hi,

Wild guess, but it could be SELinux. Try permissive mode or check ausearch?
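
A minimal way to test that guess on the host, assuming auditd is running:

# Any recent AVC denials around the failed storage operation?
ausearch -m avc -ts recent
# Switch to permissive mode, retry adding the storage domain, then switch back:
setenforce 0
getenforce        # should report Permissive
setenforce 1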

Greg


On Wed, Aug 15, 2018 at 10:25 AM Douglas Duckworth 
wrote:

>
> https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/
>
> Try the script outlined in section "nfs-check-program."
>
>
>
> Thanks,
>
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit
> Weill Cornell Medicine
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690
>
> On Mon, Aug 13, 2018 at 11:21 PM, Inquirer Guy 
> wrote:
>
>> Adding to the issue below: my NODE01 can see the NFS share I created from
>> ENGINE01 (which I don't know how it got through), because when I add a
>> storage domain from the oVirt engine I still get the error.
>>
>>
>>
>>
>>
>>
>>
>> On 14 August 2018 at 10:22, Inquirer Guy  wrote:
>>
>>> Hi Ovirt,
>>>
>>> I successfully installed ovirt-engine (ENGINE01) and an ovirt node (NODE01)
>>> on separate machines. I also created a FreeNAS box (NAS01) with an NFS share
>>> and connected it to NODE01. I haven't set up a DNS server, but hostnames were
>>> added manually on every machine and I can look them up and ping them without
>>> a problem. I was also able to add NODE01 to ENGINE01.
>>>
>>> My issue is with creating a storage domain on ENGINE01. I did the steps below
>>> before running engine-setup, while also following the guide on the oVirt site:
>>> https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/
>>> 
>>>
>>> #touch /etc/exports
>>> #systemctl start rpcbind nfs-server
>>> #systemctl enable rpcbind nfs-server
>>> #engine-setup
>>> #mkdir /var/lib/exports/data
>>> #chown vdsm:kvm /var/lib/exports/data
>>>
>>> I added the two entries just in case, but I have also tried each alone; all fail.
>>> #vi /etc/exports
>>> /var/lib/exports/data
>>> *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>>> /var/lib/exports/data   0.0.0.0/0.0.0.0(rw)
>>> 
>>>
>>> #systemctl restart rpc-statd nfs-server
>>>
>>>
>>> Once I started to add my storage domain I get the below error
>>>
>>>
>>>
>>> Attached is the engine log for your reference.
>>>
>>> Hope you guys can help me with this; I'm really interested in this
>>> great product. Thanks!
>>>
>>
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LKLOBS67TJY23JZ4RA2FVQSUV5BYURVO/
>


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.comIRC: gshereme

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K2M673IZROZ67L4SEFV6IBV5I3RH7NKL/


[ovirt-users] Re: hosted engine not reachable

2018-08-15 Thread Douglas Duckworth
OK, the ansible engine-deploy now seems to be stuck at the same step:

[ INFO  ] TASK [Force host-deploy in offline mode]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Add host]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Wait for the host to be up]

On the hypervisor in syslog I see:

Aug 15 14:09:26 ovirt-hv1 python: ansible-ovirt_hosts_facts Invoked with
pattern=name=ovirt-hv1.pbtech fetch_nested=False nested_attributes=[]
auth={'timeout': 0, 'url': 'https://ovirt-engine.pbtech/ovirt-engine/api',

Within the VM, which I can access over virtual machine network, I see:

Aug 15 18:08:06 ovirt-engine python: 192.168.122.69 - - [15/Aug/2018
14:08:06] "GET /v2.0/networks HTTP/1.1" 200 -
Aug 15 18:08:11 ovirt-engine ovsdb-server:
ovs|8|stream_ssl|WARN|SSL_read: system error (Connection reset by peer)
Aug 15 18:08:11 ovirt-engine ovsdb-server: ovs|9|jsonrpc|WARN|ssl:
127.0.0.1:50356: receive error: Connection reset by peer
Aug 15 18:08:11 ovirt-engine ovsdb-server: ovs|00010|reconnect|WARN|ssl:
127.0.0.1:50356: connection dropped (Connection reset by peer)

Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690

On Wed, Aug 15, 2018 at 1:21 PM, Douglas Duckworth 
wrote:

> Same VDSM error
>
> This is the state shown by service after the failed state messages:
>
> ● vdsmd.service - Virtual Desktop Server Manager
>Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
> preset: enabled)
>Active: active (running) since Wed 2018-08-15 13:07:48 EDT; 4min 10s ago
>  Main PID: 18378 (vdsmd)
> Tasks: 56
>CGroup: /system.slice/vdsmd.service
>├─18378 /usr/bin/python2 /usr/share/vdsm/vdsmd
>├─18495 /usr/libexec/ioprocess --read-pipe-fd 45
> --write-pipe-fd 44 --max-threads 10 --max-queued-requests 10
>├─18504 /usr/libexec/ioprocess --read-pipe-fd 53
> --write-pipe-fd 51 --max-threads 10 --max-queued-requests 10
>└─20825 /usr/libexec/ioprocess --read-pipe-fd 60
> --write-pipe-fd 59 --max-threads 10 --max-queued-requests 10
>
> Aug 15 13:07:49 ovirt-hv1.pbtech vdsm[18378]: WARN Not ready yet, ignoring
> event '|virt|VM_status|c5463d87-c964-4430-9fdb-0e97d56cf812'
> args={'c5463d87-c964-4430-9fdb-0e97d56cf812': {'status': 'Up',
> 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port':
> '5900'}], 'hash': '6802750603520244794', 'cpuUser': '0.00',
> 'monitorResponse': '0', 'cpuUsage': '0.00', 'elapsedTime': '124', 'cpuSys':
> '0.00', 'vcpuPeriod': 10L, 'timeOffset': '0', 'clientIp': '',
> 'pauseCode': 'NOERR', 'vcpuQuota': '-1'}}
> Aug 15 13:07:49 ovirt-hv1.pbtech vdsm[18378]: WARN MOM not available.
> Aug 15 13:07:49 ovirt-hv1.pbtech vdsm[18378]: WARN MOM not available, KSM
> stats will be missing.
> Aug 15 13:07:49 ovirt-hv1.pbtech vdsm[18378]: ERROR failed to retrieve
> Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted
> Engine setup finished?
> Aug 15 13:07:50 ovirt-hv1.pbtech vdsm[18378]: WARN Not ready yet, ignoring
> event '|virt|VM_status|c5463d87-c964-4430-9fdb-0e97d56cf812'
> args={'c5463d87-c964-4430-9fdb-0e97d56cf812': {'status': 'Up',
> 'username': 'Unknown', 'memUsage': '40', 'guestFQDN': '', 'memoryStats':
> {'swap_out': '0', 'majflt': '0', 'mem_cached': '772684', 'mem_free':
> '1696572', 'mem_buffers': '9348', 'swap_in': '0', 'pageflt': '3339',
> 'mem_total': '3880652', 'mem_unused': '1696572'}, 'session': 'Unknown',
> 'netIfaces': [], 'guestCPUCount': -1, 'appsList': (), 'guestIPs': '',
> 'disksUsage': []}}
> Aug 15 13:08:04 ovirt-hv1.pbtech vdsm[18378]: ERROR failed to retrieve
> Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted
> Engine setup finished?
> Aug 15 13:08:16 ovirt-hv1.pbtech vdsm[18378]: WARN File:
> /var/lib/libvirt/qemu/channels/c5463d87-c964-4430-
> 9fdb-0e97d56cf812.com.redhat.rhevm.vdsm already removed
> Aug 15 13:08:16 ovirt-hv1.pbtech vdsm[18378]: WARN File:
> /var/lib/libvirt/qemu/channels/c5463d87-c964-4430-
> 9fdb-0e97d56cf812.org.qemu.guest_agent.0 already removed
> Aug 15 13:08:16 ovirt-hv1.pbtech vdsm[18378]: WARN File:
> /var/run/ovirt-vmconsole-console/c5463d87-c964-4430-9fdb-0e97d56cf812.sock
> already removed
> Aug 15 13:08:19 ovirt-hv1.pbtech vdsm[18378]: ERROR failed to retrieve
> Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted
> Engine setup finished?
>
> Note 'ipAddress': '0' though I see IP was leased out via DHCP server:
>
> Aug 15 13:05:55 server dhcpd: DHCPACK on 10.0.0.178 to 00:16:3e:54:fb:7f
> via em1
>
> While I can ping it from my NFS server which provides storage domain:
>
> 64 bytes from ovirt-hv1.pbtech (10.0.0.176): icmp_seq=1 ttl=64 time=0.253
> ms
>
>
>
>
> Thanks,
>
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit
> Weill Cornell Medicine
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690
>
> On Wed, Aug 15, 2018 at 12:50 

[ovirt-users] Re: oVirt 4.2.5 : VM snapshot creation does not work : command HSMGetAllTasksStatusesVDS failed: Could not acquire resource

2018-08-15 Thread Nir Soffer
On Wed, Aug 15, 2018 at 6:14 PM Алексей Максимов <
aleksey.i.maksi...@yandex.ru> wrote:

> Hello Nir
>
> Thanks for the answer.
> The output of the commands is below.
>
>
>
> *
> > 1. Please share the output of this command on one of the hosts:
> > lvs -o vg_name,lv_name,tags | grep cdf1751b-64d3-42bc-b9ef-b0174c7ea068
>
> *
> # lvs -o vg_name,lv_name,tags | grep cdf1751b-64d3-42bc-b9ef-b0174c7ea068
>
>   VG   LV
>  LV Tags
>   ...
>   6db73566-0f7f-4438-a9ef-6815075f45ea
> 208ece15-1c71-46f2-a019-6a9fce4309b2
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_23,PU_----
>   6db73566-0f7f-4438-a9ef-6815075f45ea
> 4974a4cc-b388-456f-b98e-19d2158f0d58
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_15,PU_----
>   6db73566-0f7f-4438-a9ef-6815075f45ea
> 8c66f617-7add-410c-b546-5214b0200832
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_16,PU_208ece15-1c71-46f2-a019-6a9fce4309b2
>

So we have 3 volumes - 2 are base volumes:

- 208ece15-1c71-46f2-a019-6a9fce4309b2
IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_23,PU_----
- 4974a4cc-b388-456f-b98e-19d2158f0d58
IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_15,PU_----

And one is top volume:
- 8c66f617-7add-410c-b546-5214b0200832
IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_16,PU_208ece15-1c71-46f2-a019-6a9fce4309b2

So according to vdsm, this is the chain:

208ece15-1c71-46f2-a019-6a9fce4309b2 <-
8c66f617-7add-410c-b546-5214b0200832 (top)

The volume 4974a4cc-b388-456f-b98e-19d2158f0d58 is not part of this chain.


>
> *
> > qemu-img info --backing /dev/vg_name/lv_name
>
> *
>
>
> # qemu-img info --backing
> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
>
> image:
> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
> file format: qcow2
> virtual size: 30G (32212254720 bytes)
> disk size: 0
> cluster_size: 65536
> Format specific information:
> compat: 1.1
> lazy refcounts: false
> refcount bits: 16
> corrupt: false
>

This is the base volume according to vdsm and qemu, good.


> # qemu-img info --backing
> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/4974a4cc-b388-456f-b98e-19d2158f0d58
>
> image:
> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/4974a4cc-b388-456f-b98e-19d2158f0d58
> file format: qcow2
> virtual size: 30G (32212254720 bytes)
> disk size: 0
> cluster_size: 65536
> backing file: 208ece15-1c71-46f2-a019-6a9fce4309b2 (actual path:
> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2)
> backing file format: qcow2
> Format specific information:
> compat: 1.1
> lazy refcounts: false
> refcount bits: 16
> corrupt: false
>
> image:
> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
> file format: qcow2
> virtual size: 30G (32212254720 bytes)
> disk size: 0
> cluster_size: 65536
> Format specific information:
> compat: 1.1
> lazy refcounts: false
> refcount bits: 16
> corrupt: false
>

This is the deleted volume according to vdsm metadata. We can see that this
volume
still has a backing file pointing to the base volume.


> # qemu-img info --backing
> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/8c66f617-7add-410c-b546-5214b0200832
>
> image:
> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/8c66f617-7add-410c-b546-5214b0200832
> file format: qcow2
> virtual size: 30G (32212254720 bytes)
> disk size: 0
> cluster_size: 65536
> backing file: 208ece15-1c71-46f2-a019-6a9fce4309b2 (actual path:
> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2)
> backing file format: qcow2
> Format specific information:
> compat: 1.1
> lazy refcounts: false
> refcount bits: 16
> corrupt: false
>
> image:
> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
> file format: qcow2
> virtual size: 30G (32212254720 bytes)
> disk size: 0
> cluster_size: 65536
> Format specific information:
> compat: 1.1
> lazy refcounts: false
> refcount bits: 16
> corrupt: false
>

This is the top volume.

So I think this is what happened:

You had this chain in the past:

208ece15-1c71-46f2-a019-6a9fce4309b2 <- 4974a4cc-b388-456f-b98e-19d2158f0d58 <- 8c66f617-7add-410c-b546-5214b0200832 (top)

You deleted a snapshot in engine, which created the new chain:


[ovirt-users] Data Center non-responsive, storage not loading, and psql errors after importing an ova

2018-08-15 Thread josh
I posted about this a few months ago on Reddit when it happened the first time, 
and I have now repeated it on two more separate installations. I have a simple 
oVirt 4.2 hosted-engine setup, two hosts, with a qnap NAS as the shared NFS 
storage. I copied an ova file (exported from vmware) to the NFS storage. I then 
imported it by logging into the engine gui, going to "Virtual machines", 
selecting the host/file path/datacenter/etc, and starting the import. Once I
started the import, I got an alert that it failed almost immediately, followed
by a notice that the data center is in a non-responsive state. Clicking on the
"storage" tab under Data Center, or going to the "Storage Domain" page directly 
yields the three ". . ." loading animation, which never ends.

I can still see the storage mounts on the hosts, and I can move files to and 
from them.

The engine.log file on the hosted-engine VM contains a lot of the following
lines:

Caused by: org.springframework.dao.DataIntegrityViolationException: PreparedStatementCallback; SQL [select * from getstorage_domains_list_by_imageid(?)]; ERROR: integer out of range
  Where: PL/pgSQL function getstorage_domains_list_by_imageid(uuid) line 3 at RETURN QUERY; nested exception is org.postgresql.util.PSQLException: ERROR: integer out of range
  Where: PL/pgSQL function getstorage_domains_list_by_imageid(uuid) line 3 at RETURN QUERY
        at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:102) [spring-jdbc.jar:4.3.9.RELEASE]
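
One hedged way to confirm it is the stored procedure itself overflowing is to
call it directly, assuming you can reach the engine database (4.2 ships an
SCL-based PostgreSQL, so adjust the psql invocation and credentials) and that
you substitute a real image UUID taken from engine.log:

# Reproduces the failure outside the web UI; "integer out of range" here
# suggests a value stored for that image (for example a disk size imported
# from the ova) does not fit an integer column used by the function:
psql -U engine -h localhost engine \
  -c "select * from getstorage_domains_list_by_imageid('<image-uuid>');"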

What I have tried:

Restarting the hosted-engine service
Restarting nfs and vdsm services
Rebooting the hosted-engine
Rebooting both hosts
Entering and exiting maintenance mode (one host never came out of maintenance,
and refuses to with the error "General command validation failure")

Any idea what might have happened? And what should I have done to try to 
rectify it? Entering maintenance mode and restarting services seems to have 
made things much worse.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O4CPY5EYCOT3WALYBQLCLBQJDHIKKFUF/


[ovirt-users] Re: hosted engine not reachable

2018-08-15 Thread Douglas Duckworth
Same VDSM error

This is the state shown by service after the failed state messages:

● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
preset: enabled)
   Active: active (running) since Wed 2018-08-15 13:07:48 EDT; 4min 10s ago
 Main PID: 18378 (vdsmd)
Tasks: 56
   CGroup: /system.slice/vdsmd.service
   ├─18378 /usr/bin/python2 /usr/share/vdsm/vdsmd
   ├─18495 /usr/libexec/ioprocess --read-pipe-fd 45 --write-pipe-fd
44 --max-threads 10 --max-queued-requests 10
   ├─18504 /usr/libexec/ioprocess --read-pipe-fd 53 --write-pipe-fd
51 --max-threads 10 --max-queued-requests 10
   └─20825 /usr/libexec/ioprocess --read-pipe-fd 60 --write-pipe-fd
59 --max-threads 10 --max-queued-requests 10

Aug 15 13:07:49 ovirt-hv1.pbtech vdsm[18378]: WARN Not ready yet, ignoring
event '|virt|VM_status|c5463d87-c964-4430-9fdb-0e97d56cf812'
args={'c5463d87-c964-4430-9fdb-0e97d56cf812': {'status': 'Up',
'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port':
'5900'}], 'hash': '6802750603520244794', 'cpuUser': '0.00',
'monitorResponse': '0', 'cpuUsage': '0.00', 'elapsedTime': '124', 'cpuSys':
'0.00', 'vcpuPeriod': 10L, 'timeOffset': '0', 'clientIp': '',
'pauseCode': 'NOERR', 'vcpuQuota': '-1'}}
Aug 15 13:07:49 ovirt-hv1.pbtech vdsm[18378]: WARN MOM not available.
Aug 15 13:07:49 ovirt-hv1.pbtech vdsm[18378]: WARN MOM not available, KSM
stats will be missing.
Aug 15 13:07:49 ovirt-hv1.pbtech vdsm[18378]: ERROR failed to retrieve
Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted
Engine setup finished?
Aug 15 13:07:50 ovirt-hv1.pbtech vdsm[18378]: WARN Not ready yet, ignoring
event '|virt|VM_status|c5463d87-c964-4430-9fdb-0e97d56cf812'
args={'c5463d87-c964-4430-9fdb-0e97d56cf812': {'status': 'Up', 'username':
'Unknown', 'memUsage': '40', 'guestFQDN': '', 'memoryStats': {'swap_out':
'0', 'majflt': '0', 'mem_cached': '772684', 'mem_free': '1696572',
'mem_buffers': '9348', 'swap_in': '0', 'pageflt': '3339', 'mem_total':
'3880652', 'mem_unused': '1696572'}, 'session': 'Unknown', 'netIfaces': [],
'guestCPUCount': -1, 'appsList': (), 'guestIPs': '', 'disksUsage': []}}
Aug 15 13:08:04 ovirt-hv1.pbtech vdsm[18378]: ERROR failed to retrieve
Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted
Engine setup finished?
Aug 15 13:08:16 ovirt-hv1.pbtech vdsm[18378]: WARN File:
/var/lib/libvirt/qemu/channels/c5463d87-c964-4430-9fdb-0e97d56cf812.com.redhat.rhevm.vdsm
already removed
Aug 15 13:08:16 ovirt-hv1.pbtech vdsm[18378]: WARN File:
/var/lib/libvirt/qemu/channels/c5463d87-c964-4430-9fdb-0e97d56cf812.org.qemu.guest_agent.0
already removed
Aug 15 13:08:16 ovirt-hv1.pbtech vdsm[18378]: WARN File:
/var/run/ovirt-vmconsole-console/c5463d87-c964-4430-9fdb-0e97d56cf812.sock
already removed
Aug 15 13:08:19 ovirt-hv1.pbtech vdsm[18378]: ERROR failed to retrieve
Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted
Engine setup finished?

Note 'ipAddress': '0' though I see IP was leased out via DHCP server:

Aug 15 13:05:55 server dhcpd: DHCPACK on 10.0.0.178 to 00:16:3e:54:fb:7f
via em1

While I can ping it from my NFS server which provides storage domain:

64 bytes from ovirt-hv1.pbtech (10.0.0.176): icmp_seq=1 ttl=64 time=0.253 ms
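
The repeated "failed to retrieve Hosted Engine HA score" errors usually just
mean the HA broker/agent are not up (or not configured yet) on this host; a
quick check, using the standard hosted-engine service names:

systemctl status ovirt-ha-broker ovirt-ha-agent --no-pager
journalctl -u ovirt-ha-broker -u ovirt-ha-agent --since "-15 min" --no-pager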




Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690

On Wed, Aug 15, 2018 at 12:50 PM, Douglas Duckworth  wrote:

> Ok
>
> I was now able to get to the step:
>
> Engine replied: DB Up!Welcome to Health Status!
>
> By removing a bad entry from /etc/hosts for ovirt-engine.pbtech, which
> pointed to an IP on the local virtualization network.
>
> Though now when trying to connect to engine during deploy:
>
> [ ERROR ] The VDSM host was found in a failed state. Please check engine
> and bootstrap installation logs.
>
> [ ERROR ] Unable to add ovirt-hv1.pbtech to the manager
>
> Then repeating
>
> [ INFO  ] Still waiting for engine to start...
>
> Thanks,
>
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit
> Weill Cornell Medicine
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690
>
> On Wed, Aug 15, 2018 at 10:34 AM, Douglas Duckworth <
> dod2...@med.cornell.edu> wrote:
>
>> Hi
>>
>> I keep getting this error after running
>>
>> sudo hosted-engine --deploy --noansible
>>
>> [ INFO  ] Engine is still not reachable, waiting...
>> [ ERROR ] Failed to execute stage 'Closing up': Engine is still not
>> reachable
>>
>> I do see a VM running
>>
>> 10:20   2:51 /usr/libexec/qemu-kvm -name guest=HostedEngine,debug-threa
>> ds=on
>>
>> Though
>>
>> sudo hosted-engine --vm-status
>> [Errno 2] No such file or directory
>> Cannot connect to the HA daemon, please check the logs
>> An error occured while retrieving vm status, please make sure the HA
>> daemon is ready and reachable.
>> 

[ovirt-users] Timezone error when trying to import VMWare created .ova

2018-08-15 Thread jthomasp
I get:
Cannot import VM. Invalid time zone for given OS type.
Attribute: vm.vmStatic

I found an old thread here: 
https://lists.ovirt.org/pipermail/users/2014-January/019363.html
But it states the issue was resolved in 3.3.1.
Running oVirt 4.2.5.1
The VM has the proper time zone set and the clock synced with a time server.
Hardware compatibility of the VM has been upgraded to VM Version 9 as 
recommended on the oVirt IRC channel.
I appreciate any help!
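
In case it helps to see what the appliance actually declares before import: an
ova is just a tar archive wrapping an OVF descriptor, so something like the
sketch below shows the OS/time-zone related fields (the file names are
placeholders, and the element names vary between VMware and oVirt OVFs):

tar -tvf myvm.ova                              # locate the .ovf descriptor inside
tar -xOf myvm.ova myvm.ovf | grep -iE 'OperatingSystemSection|osinfo|TimeZone'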

Thanks,
Jason
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QKYORZ7VNR3WFFH2BRDHWQE4GEDJ4ZGY/


[ovirt-users] Re: hosted engine not reachable

2018-08-15 Thread Douglas Duckworth
Ok

I was now able to get to the step:

Engine replied: DB Up!Welcome to Health Status!

By removing a bad entry from /etc/hosts for ovirt-engine.pbtech, which
pointed to an IP on the local virtualization network.
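
For anyone hitting the same thing, a quick consistency check between /etc/hosts
and DNS on both the host and the engine VM (hostnames are the ones used in this
thread):

grep -n 'ovirt-engine\|ovirt-hv1' /etc/hosts
getent hosts ovirt-engine.pbtech ovirt-hv1.pbtech    # what the resolver actually returns
ping -c1 ovirt-engine.pbtech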

Though now when trying to connect to engine during deploy:

[ ERROR ] The VDSM host was found in a failed state. Please check engine
and bootstrap installation logs.

[ ERROR ] Unable to add ovirt-hv1.pbtech to the manager

Then repeating

[ INFO  ] Still waiting for engine to start...

Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690

On Wed, Aug 15, 2018 at 10:34 AM, Douglas Duckworth  wrote:

> Hi
>
> I keep getting this error after running
>
> sudo hosted-engine --deploy --noansible
>
> [ INFO  ] Engine is still not reachable, waiting...
> [ ERROR ] Failed to execute stage 'Closing up': Engine is still not
> reachable
>
> I do see a VM running
>
> 10:20   2:51 /usr/libexec/qemu-kvm -name guest=HostedEngine,debug-
> threads=on
>
> Though
>
> sudo hosted-engine --vm-status
> [Errno 2] No such file or directory
> Cannot connect to the HA daemon, please check the logs
> An error occured while retrieving vm status, please make sure the HA
> daemon is ready and reachable.
> Unable to connect the HA Broker
>
> Can someone please help?
>
> Each time this failed I ran "/usr/sbin/ovirt-hosted-engine-cleanup" then
> tried deployment again.
>
> Thanks,
>
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit
> Weill Cornell Medicine
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H4C2SLOSN63K3NA2XJUXBVOBHVCZUEIS/


[ovirt-users] Re: oVirt 4.2.5 : VM snapshot creation does not work : command HSMGetAllTasksStatusesVDS failed: Could not acquire resource

2018-08-15 Thread Алексей Максимов
Hello Nir

Thanks for the answer. 
The output of the commands is below.


*
> 1. Please share the output of this command on one of the hosts:
> lvs -o vg_name,lv_name,tags | grep cdf1751b-64d3-42bc-b9ef-b0174c7ea068
*
# lvs -o vg_name,lv_name,tags | grep cdf1751b-64d3-42bc-b9ef-b0174c7ea068

  VG   LV   LV 
Tags
  ...
  6db73566-0f7f-4438-a9ef-6815075f45ea 208ece15-1c71-46f2-a019-6a9fce4309b2 
IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_23,PU_----
  6db73566-0f7f-4438-a9ef-6815075f45ea 4974a4cc-b388-456f-b98e-19d2158f0d58 
IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_15,PU_----
  6db73566-0f7f-4438-a9ef-6815075f45ea 8c66f617-7add-410c-b546-5214b0200832 
IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_16,PU_208ece15-1c71-46f2-a019-6a9fce4309b2
  ...


*
> 2. For every volume, share the output of qemu-img info:
> If the lv is not active, activate it:
> lvchange -ay vg_name/lv_name
*

# lvdisplay 6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2

  --- Logical volume ---
  LV Path                /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
  LV Name                208ece15-1c71-46f2-a019-6a9fce4309b2
  VG Name                6db73566-0f7f-4438-a9ef-6815075f45ea
  LV UUID                k28hUo-Z6t7-wKdO-x7kz-ceYL-Vuzx-f9jLWi
  LV Write Access        read/write
  LV Creation host, time VM32.sub.holding.com, 2017-12-05 14:46:42 +0300
  LV Status              NOT available
  LV Size                33.00 GiB
  Current LE             264
  Segments               4
  Allocation             inherit
  Read ahead sectors     auto



# lvdisplay 6db73566-0f7f-4438-a9ef-6815075f45ea/4974a4cc-b388-456f-b98e-19d2158f0d58

  --- Logical volume ---
  LV Path                /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/4974a4cc-b388-456f-b98e-19d2158f0d58
  LV Name                4974a4cc-b388-456f-b98e-19d2158f0d58
  VG Name                6db73566-0f7f-4438-a9ef-6815075f45ea
  LV UUID                HnnP01-JGxU-9zne-HB6n-BcaE-2lrM-qr9KPI
  LV Write Access        read/write
  LV Creation host, time VM12.sub.holding.com, 2018-07-31 03:35:20 +0300
  LV Status              NOT available
  LV Size                2.00 GiB
  Current LE             16
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto



# lvdisplay 6db73566-0f7f-4438-a9ef-6815075f45ea/8c66f617-7add-410c-b546-5214b0200832

  --- Logical volume ---
  LV Path                /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/8c66f617-7add-410c-b546-5214b0200832
  LV Name                8c66f617-7add-410c-b546-5214b0200832
  VG Name                6db73566-0f7f-4438-a9ef-6815075f45ea
  LV UUID                MG1VRN-IqRn-mOGm-F4ul-ufbZ-Dywb-M3V14P
  LV Write Access        read/write
  LV Creation host, time VM12.sub.holding.com, 2018-08-01 03:34:31 +0300
  LV Status              NOT available
  LV Size                1.00 GiB
  Current LE             8
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto


# lvchange -ay 6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
# lvchange -ay 6db73566-0f7f-4438-a9ef-6815075f45ea/4974a4cc-b388-456f-b98e-19d2158f0d58
# lvchange -ay 6db73566-0f7f-4438-a9ef-6815075f45ea/8c66f617-7add-410c-b546-5214b0200832


*
> qemu-img info --backing /dev/vg_name/lv_name
*


# qemu-img info --backing /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2

image: 
/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 0
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false



# qemu-img info --backing /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/4974a4cc-b388-456f-b98e-19d2158f0d58

image: 
/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/4974a4cc-b388-456f-b98e-19d2158f0d58
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 0
cluster_size: 65536

[ovirt-users] Re: Issue with NFS and Storage domain setup

2018-08-15 Thread Douglas Duckworth
https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/

Try the script outlined in section "nfs-check-program."
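
A rough manual equivalent of what that check (and vdsm) will attempt, assuming
the export path from this thread and running it on NODE01:

mkdir -p /tmp/nfstest
mount -t nfs ENGINE01:/var/lib/exports/data /tmp/nfstest
sudo -u vdsm touch /tmp/nfstest/write-test && echo "vdsm (36:36) can write"
umount /tmp/nfstest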



Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690

On Mon, Aug 13, 2018 at 11:21 PM, Inquirer Guy 
wrote:

> Adding to the issue below: my NODE01 can see the NFS share I created from
> ENGINE01 (which I don't know how it got through), because when I add a
> storage domain from the oVirt engine I still get the error.
>
>
>
>
>
>
>
> On 14 August 2018 at 10:22, Inquirer Guy  wrote:
>
>> Hi Ovirt,
>>
>> I successfully installed ovirt-engine (ENGINE01) and an ovirt node (NODE01)
>> on separate machines. I also created a FreeNAS box (NAS01) with an NFS share
>> and connected it to NODE01. I haven't set up a DNS server, but hostnames were
>> added manually on every machine and I can look them up and ping them without
>> a problem. I was also able to add NODE01 to ENGINE01.
>>
>> My issue is with creating a storage domain on ENGINE01. I did the steps below
>> before running engine-setup, while also following the guide on the oVirt site:
>> https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/
>> 
>>
>> #touch /etc/exports
>> #systemctl start rpcbind nfs-server
>> #systemctl enable rpcbind nfs-server
>> #engine-setup
>> #mkdir /var/lib/exports/data
>> #chown vdsm:kvm /var/lib/exports/data
>>
>> I added the two entries just in case, but I have also tried each alone; all fail.
>> #vi /etc/exports
>> /var/lib/exports/data   *(rw,sync,no_subtree_check,all
>> _squash,anonuid=36,anongid=36)
>> /var/lib/exports/data   0.0.0.0/0.0.0.0(rw)
>> 
>>
>> #systemctl restart rpc-statd nfs-server
>>
>>
>> Once I started to add my storage domain I get the below error
>>
>>
>>
>> Attached is the engine log for your reference.
>>
>> Hope you guys can help me with this; I'm really interested in this
>> great product. Thanks!
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LKLOBS67TJY23JZ4RA2FVQSUV5BYURVO/


[ovirt-users] Re: Issue with NFS and Storage domain setup

2018-08-15 Thread Benny Zlotnik
Can you attach the vdsm log?

On Wed, Aug 15, 2018 at 5:16 PM Inquirer Guy  wrote:

> Adding to the issue below: my NODE01 can see the NFS share I created from
> ENGINE01 (which I don't know how it got through), because when I add a
> storage domain from the oVirt engine I still get the error.
>
>
>
>
>
>
>
> On 14 August 2018 at 10:22, Inquirer Guy  wrote:
>
>> Hi Ovirt,
>>
>> I successfully installed ovirt-engine (ENGINE01) and an ovirt node (NODE01)
>> on separate machines. I also created a FreeNAS box (NAS01) with an NFS share
>> and connected it to NODE01. I haven't set up a DNS server, but hostnames were
>> added manually on every machine and I can look them up and ping them without
>> a problem. I was also able to add NODE01 to ENGINE01.
>>
>> My issue is with creating a storage domain on ENGINE01. I did the steps below
>> before running engine-setup, while also following the guide on the oVirt site:
>> https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/
>>
>> #touch /etc/exports
>> #systemctl start rpcbind nfs-server
>> #systemctl enable rpcbind nfs-server
>> #engine-setup
>> #mkdir /var/lib/exports/data
>> #chown vdsm:kvm /var/lib/exports/data
>>
>> I added the two entries just in case, but I have also tried each alone; all fail.
>> #vi /etc/exports
>> /var/lib/exports/data
>> *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>> /var/lib/exports/data   0.0.0.0/0.0.0.0(rw)
>>
>> #systemctl restart rpc-statd nfs-server
>>
>>
>> Once I started to add my storage domain I get the below error
>>
>>
>>
>> Attached is the engine log for your reference.
>>
>> Hope you guys can help me with this; I'm really interested in this
>> great product. Thanks!
>>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4UVDHNLSFSDHUZU3VXSZVUYUCUR55YI2/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2QUK3EAPP6R3VG7NZNGTZQAQSGSIGQTQ/


[ovirt-users] Re: Issue with NFS and Storage domain setup

2018-08-15 Thread Inquirer Guy
Adding to the issue below: my NODE01 can see the NFS share I created from
ENGINE01 (which I don't know how it got through), because when I add a
storage domain from the oVirt engine I still get the error.







On 14 August 2018 at 10:22, Inquirer Guy  wrote:

> Hi Ovirt,
>
> I successfully installed ovirt-engine (ENGINE01) and an ovirt node (NODE01)
> on separate machines. I also created a FreeNAS box (NAS01) with an NFS share
> and connected it to NODE01. I haven't set up a DNS server, but hostnames were
> added manually on every machine and I can look them up and ping them without
> a problem. I was also able to add NODE01 to ENGINE01.
>
> My issue is with creating a storage domain on ENGINE01. I did the steps below
> before running engine-setup, while also following the guide on the oVirt site:
> https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/
>
> #touch /etc/exports
> #systemctl start rpcbind nfs-server
> #systemctl enable rpcbind nfs-server
> #engine-setup
> #mkdir /var/lib/exports/data
> #chown vdsm:kvm /var/lib/exports/data
>
> I added the two entries just in case, but I have also tried each alone; all fail.
> #vi /etc/exports
> /var/lib/exports/data   *(rw,sync,no_subtree_check,
> all_squash,anonuid=36,anongid=36)
> /var/lib/exports/data   0.0.0.0/0.0.0.0(rw)
>
> #systemctl restart rpc-statd nfs-server
>
>
> Once I started to add my storage domain I get the below error
>
>
>
> Attached is the engine log for your reference.
>
> Hope you guys can help me with this; I'm really interested in this great
> product. Thanks!
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4UVDHNLSFSDHUZU3VXSZVUYUCUR55YI2/


[ovirt-users] ovirt-ansible download/upload of snapshots for backup

2018-08-15 Thread Николаев Алексей
Hi community! Does the ansible module "ovirt_snapshots" support download/upload of snapshots? According to https://bugzilla.redhat.com/show_bug.cgi?id=1405805, support for this functionality is already implemented in the oVirt API. How can ovirt-ansible be used to implement the following backup strategy: take a snapshot, back up the virtual machine from that snapshot, and save the backup to storage (a data domain, export domain, etc.)?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QDSBYMQFTKYMZJEAXGPKKWLZ4FJLRFGA/


[ovirt-users] Re: oVirt 4.2.5 : VM snapshot creation does not work : command HSMGetAllTasksStatusesVDS failed: Could not acquire resource

2018-08-15 Thread Nir Soffer
On Tue, Aug 14, 2018 at 6:03 PM Алексей Максимов <
aleksey.i.maksi...@yandex.ru> wrote:

> Hello, Nir
>
> Log in attachment.
>

In the log we can see both createVolume and deleteVolume fail for this disk
uuid:
cdf1751b-64d3-42bc-b9ef-b0174c7ea068

1. Please share the output of this command on one of the hosts:

lvs -o vg_name,lv_name,tags | grep cdf1751b-64d3-42bc-b9ef-b0174c7ea068

This will show all the volumes belonging to this disk.

2. For every volume, share the output of qemu-img info:

If the lv is not active, activate it:

lvchange -ay vg_name/lv_name

Then run qemu-img info to find the actual chain:

qemu-img info --backing /dev/vg_name/lv_name

If the lv was not active, deactivate it - we don't want to leave unused lvs
active.

lvchange -an vg_name/lv_name

3. One of the volumes will not be part of the chain.

No other volume will use it as backing file, and it may not have a backing
file, or it may point to another volume in the chain.

Once we find this volume, please check the engine logs for this volume UUID.
You will probably find that the volume was deleted in the past. Maybe you will
not find it, since it may have been deleted months or years ago.

4. To verify that this volume does not have metadata, check the volume's MD_N
tag.
N is the offset, in 512-byte blocks, from the start of the metadata volume.

This will read the volume metadata block:

dd if=/dev/vg_name/metadata bs=512 count=1 skip=N iflag=direct

We expect to see:

NONE=...

5. To remove this volume use:

lvremove vg_name/lv_name

Once the volume is removed, you will be able to create a snapshot.
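
To tie steps 1-3 together, a small sketch that walks every LV belonging to this
disk and prints its format and backing file, so the orphan stands out (the UUIDs
are the ones from this thread; it briefly activates each LV, so avoid running it
while the chain is being modified):

VG=6db73566-0f7f-4438-a9ef-6815075f45ea
TAG=IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068
for lv in $(lvs --noheadings -o lv_name,tags "$VG" | awk -v t="$TAG" '$0 ~ t {print $1}'); do
    lvchange -ay "$VG/$lv"
    echo "== $lv =="
    qemu-img info "/dev/$VG/$lv" | grep -E '^(file format|backing file)'
    lvchange -an "$VG/$lv"    # fails harmlessly if the LV is in use by a running VM
done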



>
> 14.08.2018, 01:30, "Nir Soffer" :
>
> On Mon, Aug 13, 2018 at 1:45 PM Aleksey Maksimov <
> aleksey.i.maksi...@yandex.ru> wrote:
>
> We use oVirt 4.2.5.2-1.el7 (Hosted engine / 4 hosts in cluster / about
> twenty virtual machines)
> Virtual machine disks are located on the Data Domain from FC SAN.
> Snapshots of all virtual machines are created normally. But for one
> virtual machine, we can not create a snapshot.
>
> When we try to create a snapshot in the oVirt web console, we see such
> errors:
>
> Aug 13, 2018, 1:05:06 PM Failed to complete snapshot 'KOM-APP14_BACKUP01'
> creation for VM 'KOM-APP14'.
> Aug 13, 2018, 1:05:01 PM VDSM KOM-VM14 command HSMGetAllTasksStatusesVDS
> failed: Could not acquire resource. Probably resource factory threw an
> exception.: ()
> Aug 13, 2018, 1:05:00 PM Snapshot 'KOM-APP14_BACKUP01' creation for VM
> 'KOM-APP14' was initiated by pe...@sub.holding.com@sub.holding.com-authz.
>
> At this time on the server with the role of "SPM" in the vdsm.log we see
> this:
>
> ...
> 2018-08-13 05:05:06,471-0500 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer]
> RPC call VM.getStats succeeded in 0.00 seconds (__init__:573)
> 2018-08-13 05:05:06,478-0500 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer]
> RPC call Image.deleteVolumes succeeded in 0.05 seconds (__init__:573)
> 2018-08-13 05:05:06,478-0500 INFO  (tasks/3)
> [storage.ThreadPool.WorkerThread] START task
> bb45ae7e-77e9-4fec-9ee2-8e1f0ad3d589 (cmd= >, args=None)
> (threadPool:208)
> 2018-08-13 05:05:07,009-0500 WARN  (tasks/3) [storage.ResourceManager]
> Resource factory failed to create resource '01_img_6db73566-0f7f-4438-a9ef-
> 6815075f45ea.cdf1751b-64d3-42bc-b9ef-b0174c7ea068'. Canceling request.
> (resourceManager:543)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py",
> line 539, in registerResource
> obj = namespaceObj.factory.createResource(name, lockType)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", line
> 193, in createResource
> lockType)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", line
> 122, in __getResourceCandidatesList
> imgUUID=resourceName)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 213,
> in getChain
> if srcVol.isLeaf():
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
> 1430, in isLeaf
> return self._manifest.isLeaf()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
> 139, in isLeaf
> return self.getVolType() == sc.type2name(sc.LEAF_VOL)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
> 135, in getVolType
> self.voltype = self.getMetaParam(sc.VOLTYPE)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
> 119, in getMetaParam
> meta = self.getMetadata()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py",
> line 112, in getMetadata
> md = VolumeMetadata.from_lines(lines)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volumemetadata.py",
> line 103, in from_lines
> "Missing metadata key: %s: found: %s" % (e, md))
> MetaDataKeyNotFoundError: Meta Data key not found error: ("Missing
> metadata key: 'DOMAIN': found: {'NONE':
> 

[ovirt-users] Re: "ISCSI multipathing" tab isn't appear in datacenter settings

2018-08-15 Thread Elad Ben Aharon
Hi,

This behavior is by design.
For the iSCSI multipathing sub tab to appear, the data center should have
an iSCSI domain attached.
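
Independent of the UI, a quick way to see which portals/paths each host is
actually logged in to (standard iscsiadm invocations, run on a host):

iscsiadm -m session -P 1     # active sessions, one per target portal
iscsiadm -m node             # portals the host knows about
multipath -ll                # how many paths each LUN currently has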

Thanks,


Elad Ben Aharon
RHV QE


On Mon, Aug 13, 2018 at 5:54 PM,  wrote:

> Hello.
>
> We have 6 servers in our cluster, which use 2 storage through iSCSI
> connections. Each storage has 2 nodes. Each node has 2 IP addresses in two
> different VLANs. Each host has 2 networks in this VLANs, so, the iSCSI
> traffic is separated from other types of traffic.
> I want to turn on iSCSI multipathing between the hosts and the storage, but
> the "ISCSI multipathing" tab doesn't appear in the data center settings. When
> I add an additional storage domain, the "ISCSI multipathing" tab is displayed.
> If I then detach this additional storage domain, the "ISCSI multipathing" tab
> disappears at once.
> Why is this happening?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/JYHNC3LASD44OXDGMLO4DMX73SGGQO3L/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UWXHJUAUQ3XIXU7ZDP2LAH7AT3LX56NC/


[ovirt-users] ovirt-node-ng update to 4.2.5 failed

2018-08-15 Thread p . staniforth
Hello,
  I tried to update an ovirt-node-ng host from 4.2.4 via the engine and it
failed; I also tried using "yum update ovirt-node-ng-image-update".
What is the correct way to update a node, and how do I delete old layers?

Thanks,
   Paul S.
The output from "nodectl info" is

layers: 
  ovirt-node-ng-4.2.4-0.20180626.0: 
ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.1.9-0.20180124.0: 
ovirt-node-ng-4.1.9-0.20180124.0+1
  ovirt-node-ng-4.2.5.1-0.20180731.0: 
ovirt-node-ng-4.2.5.1-0.20180731.0+1
bootloader: 
  default: ovirt-node-ng-4.2.4-0.20180626.0+1
  entries: 
ovirt-node-ng-4.2.4-0.20180626.0+1: 
  index: 0
  title: ovirt-node-ng-4.2.4-0.20180626.0
  kernel: 
/boot/ovirt-node-ng-4.2.4-0.20180626.0+1/vmlinuz-3.10.0-862.3.3.el7.x86_64
  args: "ro crashkernel=auto 
rd.lvm.lv=onn/ovirt-node-ng-4.2.4-0.20180626.0+1 rd.lvm.lv=onn/swap rhgb quiet 
LANG=en_GB.UTF-8 img.bootid=ovirt-node-ng-4.2.4-0.20180626.0+1"
  initrd: 
/boot/ovirt-node-ng-4.2.4-0.20180626.0+1/initramfs-3.10.0-862.3.3.el7.x86_64.img
  root: /dev/onn/ovirt-node-ng-4.2.4-0.20180626.0+1
ovirt-node-ng-4.1.9-0.20180124.0+1: 
  index: 1
  title: ovirt-node-ng-4.1.9-0.20180124.0
  kernel: 
/boot/ovirt-node-ng-4.1.9-0.20180124.0+1/vmlinuz-3.10.0-693.11.6.el7.x86_64
  args: "ro crashkernel=auto 
rd.lvm.lv=onn/ovirt-node-ng-4.1.9-0.20180124.0+1 rd.lvm.lv=onn/swap rhgb quiet 
LANG=en_GB.UTF-8 img.bootid=ovirt-node-ng-4.1.9-0.20180124.0+1"
  initrd: 
/boot/ovirt-node-ng-4.1.9-0.20180124.0+1/initramfs-3.10.0-693.11.6.el7.x86_64.img
  root: /dev/onn/ovirt-node-ng-4.1.9-0.20180124.0+1
current_layer: ovirt-node-ng-4.2.4-0.20180626.0+1
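
On deleting old layers: my understanding is that layers/bases are managed with
the imgbase tool that ships with oVirt Node, roughly as sketched below; treat
the exact subcommands as an assumption and check "imgbase --help" first, and
keep at least the current and previous layer:

nodectl info        # confirm which layer is current (4.2.4 here)
imgbase layout      # list bases and their layers
# Example: drop the old 4.1.9 base once nothing boots from it any more:
imgbase base --remove ovirt-node-ng-4.1.9-0.20180124.0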
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BGBYX6IFNCJTMIT344XV2U2QVV2CXYWF/


[ovirt-users] Re: oVirt 4.2.6.1 - 4.2.6.2 upgrade fails

2018-08-15 Thread Maton, Brett
4.2.6.3 appears to be working just fine, thanks to all.

On 14 August 2018 at 08:49, Eli Mesika  wrote:

> Got the env from mburman , checking 
>
> On Tue, Aug 14, 2018 at 10:13 AM, Martin Perina 
> wrote:
>
>> Adding Eli
>>
>> On Tue, Aug 14, 2018 at 9:03 AM, Yedidyah Bar David 
>> wrote:
>>
>>> On Tue, Aug 14, 2018 at 9:27 AM, Maton, Brett 
>>> wrote:
>>> >
>>> > Just tried to update my test cluster to 4.2.6.2 :
>>> >
>>> >
>>> > [ INFO  ] Stage: Misc configuration
>>> > [ INFO  ] Running vacuum full on the engine schema
>>> > [ INFO  ] Running vacuum full elapsed 0:00:04.523561
>>> > [ INFO  ] Upgrading CA
>>> > [ INFO  ] Backing up database localhost:ovirt_engine_history to
>>> > '/var/lib/ovirt-engine-dwh/backups/dwh-20180814071815.xVSlda.dump'.
>>> > [ INFO  ] Creating/refreshing DWH database schema
>>> > [ INFO  ] Configuring Image I/O Proxy
>>> > [ INFO  ] Configuring WebSocket Proxy
>>> > [ INFO  ] Backing up database localhost:engine to
>>> > '/var/lib/ovirt-engine/backups/engine-20180814071825.af3Hq2.dump'.
>>> > [ INFO  ] Creating/refreshing Engine database schema
>>> > [ ERROR ] schema.sh: FATAL: Cannot execute sql command:
>>> > --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_1220_
>>> default_all_search_engine_string_fields_to_not_null.sql
>>> > [ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
>>> > refresh failed
>>> > [ INFO  ] Yum Performing yum transaction rollback
>>> >
>>> >
>>> > May or may not be relevant in this case but /tmp and /var/tmp are
>>> mounted
>>> > noexec.
>>>
>>> I do not think this is tested regularly, but I guess it should be ok.
>>>
>>> > Any more logs you need let me know.
>>>
>>> Can you please check/share full setup log? engine-setup should output the
>>> full path, it should be in /var/log/ovirt-engine/setup . Thanks.
>>> --
>>> Didi
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: https://www.ovirt.org/communit
>>> y/about/community-guidelines/
>>> List Archives: https://lists.ovirt.org/archiv
>>> es/list/users@ovirt.org/message/KQGLRAXYPMUBZMIMWDISVUHBNLV4BLHX/
>>>
>>
>>
>>
>> --
>> Martin Perina
>> Associate Manager, Software Engineering
>> Red Hat Czech s.r.o.
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XZKOQCGBQSZIEUL64XE6XQEBTLWWCBAZ/