[ovirt-users] Supporting comments on ovirt-site Blog section

2019-01-15 Thread Roy Golan
It would be very useful to have a comment section on our oVirt site.
It is quite standard to have one on every blog out there, and for a reason:
you get feedback and conversation around the topic without going somewhere
else (users list, IRC, etc.).

What do we need to do to help the middle man with that?

Regards,
  Roy
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AE2VNIVCG26X7WLVLUPKMQEWJTKO7NVK/


[ovirt-users] Re: Disk full

2019-01-15 Thread Sahina Bose
On Tue, Jan 15, 2019 at 9:31 PM  wrote:
>
> Hi Sahina,
>
> I just deleted the volume and created a new one.
> The engine still keeps showing the errors from the old volume under Tasks.
> I ran the command
>
> PGPASSWORD= /usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -u 
> engine -d engine
>
>
> But it didn't clear the messages, nor the locked gluster volume in the
> engine's Volumes window.
> Any idea?

Are there errors in the engine log related to monitoring the gluster
volumes (errors from GlusterSyncJob)? It's possible the database is
not getting updated due to the errors.
Can you check and report back?

We can also try to give you a script to clean up stale entries based on the above.
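
A quick way to check for those GlusterSyncJob errors, assuming the default
engine log location, is something along these lines:

# Gluster monitoring errors in the engine log (default path assumed)
grep -i 'GlusterSyncJob' /var/log/ovirt-engine/engine.log | tail -n 50

# Narrow it down to ERROR-level entries only
grep -i 'GlusterSyncJob' /var/log/ovirt-engine/engine.log | grep ' ERROR ' | tail -n 20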

>
> Thanks
>
> José
>
>
>
> 
> From: "Sahina Bose" 
> To: supo...@logicworks.pt
> Cc: "users" 
> Sent: Tuesday, January 8, 2019 2:17:02 PM
> Subject: Re: [ovirt-users] Re: Disk full
>
> On Tue, Jan 8, 2019 at 6:37 PM  wrote:
> >
> > Hi Sahina,
> >
> > Still have the disk full; the idea is to delete the gluster volume and
> > create a new one.
> > In the engine, when I try to put the gluster volume into maintenance it stays
> > in a locked state and does not go into maintenance. Even when I try to destroy
> > it, it is not allowed because the operation is in progress.
> > I did a gluster volume stop but I don't know if I can do a gluster volume
> > delete.
>
> You can delete the volume, if you do not need the data.
> The other option is to delete the disks from the gluster volume mount point.
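
If you go the mount-point route, a rough sketch of locating the leftover image
before removing it (the path is an assumption based on the usual
/rhev/data-center/mnt/glusterSD layout; <server>, <domain-uuid> and <image-uuid>
are placeholders you must fill in yourself):

# Find where the old gluster storage domain is mounted (path pattern assumed)
mount | grep glusterSD

# List image directories and their sizes to spot the leftover disk
du -sh /rhev/data-center/mnt/glusterSD/<server>:_gv0/<domain-uuid>/images/*

# Only after double-checking that the image UUID is not referenced by any VM:
# rm -rf /rhev/data-center/mnt/glusterSD/<server>:_gv0/<domain-uuid>/images/<image-uuid>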
>
> >
> > Any help?
> >
> > Thanks
> >
> > José Ferradeira
> >
> > 
> > From: supo...@logicworks.pt
> > To: "Sahina Bose" 
> > Cc: "users" 
> > Sent: Thursday, December 20, 2018 12:25:08 PM
> > Subject: [ovirt-users] Re: Disk full
> >
> > We moved the VM disk to the second gluster volume. In ovirt-engine I cannot
> > see the old disk, only the disk attached to the VM on the second gluster volume.
> > We keep getting errors about the disk being full.
> > Using CLI I can see the image on the first gluster volume. So ovirt-engine 
> > was able to move the disk to the second volume but did not delete it from 
> > the first volume.
> >
> > # gluster volume info gv0
> >
> > Volume Name: gv0
> > Type: Distribute
> > Volume ID: 4aaffd24-553b-4a85-8c9b-386b02b30b6f
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 1
> > Transport-type: tcp
> > Bricks:
> > Brick1: gfs1.growtrade.pt:/home/brick1
> > Options Reconfigured:
> > features.shard-block-size: 512MB
> > network.ping-timeout: 30
> > storage.owner-gid: 36
> > storage.owner-uid: 36
> > user.cifs: off
> > features.shard: off
> > cluster.shd-wait-qlength: 1
> > cluster.shd-max-threads: 8
> > cluster.locking-scheme: granular
> > cluster.data-self-heal-algorithm: full
> > cluster.server-quorum-type: server
> > cluster.quorum-type: auto
> > cluster.eager-lock: enable
> > network.remote-dio: enable
> > performance.low-prio-threads: 32
> > performance.stat-prefetch: off
> > performance.io-cache: off
> > performance.read-ahead: off
> > performance.quick-read: off
> > transport.address-family: inet
> > performance.readdir-ahead: on
> > nfs.disable: on
> >
> >
> > Thanks
> >
> > 
> > From: "Sahina Bose" 
> > To: supo...@logicworks.pt, "Krutika Dhananjay" 
> > Cc: "users" 
> > Sent: Thursday, December 20, 2018 11:53:39 AM
> > Subject: Re: [ovirt-users] Disk full
> >
> > Is it possible for you to delete the old disks from the storage domain
> > (you can use the ovirt-engine UI)? Do you continue to see space used
> > despite doing that?
> > I see that you are on a much older version of gluster. Have you
> > considered updating to 3.12?
> >
> > Please also provide output of "gluster volume info "
> >
> > On Thu, Dec 20, 2018 at 3:56 PM  wrote:
> > >
> > > Yes, I can see the image on the volume.
> > > Gluster version:
> > > glusterfs-client-xlators-3.8.12-1.el7.x86_64
> > > glusterfs-cli-3.8.12-1.el7.x86_64
> > > glusterfs-api-3.8.12-1.el7.x86_64
> > > glusterfs-fuse-3.8.12-1.el7.x86_64
> > > glusterfs-server-3.8.12-1.el7.x86_64
> > > glusterfs-libs-3.8.12-1.el7.x86_64
> > > glusterfs-3.8.12-1.el7.x86_64
> > >
> > >
> > > Thanks
> > >
> > > José
> > >
> > > 
> > > From: "Sahina Bose" 
> > > To: supo...@logicworks.pt
> > > Cc: "users" 
> > > Sent: Wednesday, December 19, 2018 4:13:16 PM
> > > Subject: Re: [ovirt-users] Disk full
> > >
> > > Do you see the image on the gluster volume mount? Can you provide the 
> > > gluster volume options and version of gluster?
> > >
> > > On Wed, 19 Dec 2018 at 4:04 PM,  wrote:
> > >>
> > >> Hi,
> > >>
> > >> I have an all-in-one installation with 2 gluster volumes.
> > >> The disk of one VM filled up the brick, which is a partition. That
> > >> partition has 0% free disk space.
> > >> I moved the disk of that VM to the other gluster volume, and the VM is
> > >> working with the disk on the other gluster volume.
> > >> When I moved the disk, it didn't delete it from the brick, and the engine
> > >> keeps complaining that there is no more disk space on that volume.

[ovirt-users] Re: ovirt node ng 4.3.0 rc1 and HCI single host problems

2019-01-15 Thread Nir Soffer
On Tue, Jan 15, 2019 at 4:32 PM Gianluca Cecchi 
wrote:

> The mail was partly scrambled in its contents so I put some clarification
> here:
>
> On Tue, Jan 15, 2019 at 2:38 PM Gianluca Cecchi 
> wrote:
>
>>
>>>
>> So after starting from scratch and using also the info as detailed on
>> thread:
>> https://www.mail-archive.com/users@ovirt.org/msg52879.html
>>
>> the steps now have been:
>>
>> - install from  ovirt-node-ng-installer-4.3.0-2019011010.el7.iso and
>> reboot
>>
>> - connect to cockpit and open terminal
>>
>
> This step is related to ssh daemon
> cd /etc/ssh
> chmod 600 *key
> systemctl restart sshd
>
> The step below is related to ovirt-imageio-daemon
>
>
>> mkdir /var/run/vdsm
>>  chmod 755 /var/run/vdsm
>>  chown vdsm.kvm /var/run/vdsm
>>  mkdir /var/run/vdsm/dhclientmon
>>  chmod 755 /var/run/vdsm/dhclientmon/
>>  chown vdsm.kvm /var/run/vdsm/dhclientmon/
>>  mkdir /var/run/vdsm/trackedInterfaces
>> chmod 755 /var/run/vdsm/trackedInterfaces/
>> chown vdsm.kvm /var/run/vdsm/trackedInterfaces/
>> mkdir /var/run/vdsm/v2v
>> chmod 700 /var/run/vdsm/v2v
>> chown vdsm.kvm /var/run/vdsm/v2v/
>> mkdir /var/run/vdsm/vhostuser
>> chmod 755 /var/run/vdsm/vhostuser/
>> chown vdsm.kvm /var/run/vdsm/vhostuser/
>> mkdir /var/run/vdsm/payload
>> chmod 755 /var/run/vdsm/payload/
>> chown vdsm.kvm /var/run/vdsm/payload/
>>
>> systemctl restart sshd
>>
>
> Actually:
>
> systemctl restart ovirt-imageio-daemon
>
>
>>
>> - put in the newer version of vdsm-api.pickle
>> from vdsm-api-4.30.5-2.gitf824ec2.el7.noarch.rpm
>> in /usr/lib/python2.7/site-packages/vdsm/rpc/vdsm-api.pickle
>>
>
> alternatively, vdsm-api.pickle can be downloaded directly from here:
>
> https://drive.google.com/file/d/1AhakKhm_dzx-Gxt-Y1OojzRUwHs75kot/view?usp=sharing
>
>
>>
>> - run the wizard for the gluster+he setup (the right positioned option)
>> inside the gdeploy text window click edit and add
>> "
>> [diskcount]
>> 1
>>
>> "
>> under the section
>> "
>> [disktype]
>> jbod
>> "
>>
>> In my case, with a single disk, I chose the JBOD option.
>
>
>> - first 2 steps ok
>>
>> - last step fails in finish part
>>
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Fetch Datacenter name]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Add NFS storage domain]
>> [ INFO ] skipping: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Add glusterfs storage domain]
>> [ INFO ] changed: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Add iSCSI storage domain]
>> [ INFO ] skipping: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Add Fibre Channel storage
>> domain]
>> [ INFO ] skipping: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Get storage domain details]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : debug]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Find the appliance OVF]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : debug]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Parse OVF]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Get required size]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : debug]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Remove unsuitable storage
>> domain]
>> [ INFO ] skipping: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : debug]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Check storage domain free
>> space]
>> [ INFO ] skipping: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Activate storage domain]
>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
>> "[]". HTTP response code is 400.
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
>> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
>> is 400."}
>>
>> On engine.log I see
>>
>> 2019-01-15 13:50:35,317+01 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
>> (default task-2) [51725212] START, CreateStoragePoolVDSCommand(HostName =
>> ov4301.localdomain.lo
>> cal,
>> CreateStoragePoolVDSCommandParameters:{hostId='e8f105f1-37ed-4ac4-bfc3-b1e55ed3027f',
>> storagePoolId='96a31a7e-18bb-11e9-9a34-00163e6196f3',
>> storagePoolName='Default', masterDomainId='14ec2fc7-8c2
>> b-487c-8f4f-428644650928',
>> domainsIdList='[14ec2fc7-8c2b-487c-8f4f-428644650928]',
>> masterVersion='1'}), log id: 4baccd53
>> 2019-01-15 13:50:36,345+01 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
>> (default task-2) [51725212] Failed in 'CreateStoragePoolVDS' method
>> 2019-01-15 13:50:36,354+01 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-2) [51725212] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802),
>> VDSM ov4301.localdomain.local command CreateStoragePoolVDS failed: Cannot
>> acquire host id: (u'14ec2fc7-8c2b-487c-8f4f-428644650928',
>> SanlockException(-203, 'Sanlock lockspace add failure', 'Watchdog device
>> error'))

[ovirt-users] [Call for Testing] oVirt 4.3.0

2019-01-15 Thread Sandro Bonazzola
Hi,
we are planning to release a 4.3.0 RC2 tomorrow morning, January 16th 2019.
We have a scheduled final release for oVirt 4.3.0 on January 29th: this is
the time when testing is most effective to ensure the release will be as
stable as possible. Please join us testing the RC2 release this week
and reporting issues to
https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
Please remember this is still pre-release material, we recommend not
installing it on production environments yet.

Thanks,
-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VTKSPOIA3XFU6QXF5I7IKZIQYVL2EE6M/


[ovirt-users] Re: Disk full

2019-01-15 Thread suporte
Hi Sahina, 

I just deleted the volume and created a new one.
The engine still keeps showing the errors from the old volume under Tasks.
I ran the command

PGPASSWORD= /usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -u 
engine -d engine
But it didn't clear the messages, nor the locked gluster volume in the
engine's Volumes window.
Any idea?
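
For the volume that stays locked in the UI, the engine's dbutils directory
also contains an unlock script next to taskcleaner.sh; a hedged sketch of how
it is usually invoked (option names vary between versions, so check -h first):

# Show the available options for your version
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -h

# Query which entities the engine still considers locked (credentials are placeholders)
PGPASSWORD=<engine-db-password> /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh \
    -u engine -d engine -t all -q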

Thanks 

José 




From: "Sahina Bose"  
To: supo...@logicworks.pt 
Cc: "users"  
Sent: Tuesday, January 8, 2019 2:17:02 PM 
Subject: Re: [ovirt-users] Re: Disk full 

On Tue, Jan 8, 2019 at 6:37 PM  wrote: 
> 
> Hi Sahina, 
> 
> Still have the disk full; the idea is to delete the gluster volume and create 
> a new one. 
> In the engine, when I try to put the gluster volume into maintenance it stays in 
> a locked state and does not go into maintenance. Even when I try to destroy it, 
> it is not allowed because the operation is in progress. 
> I did a gluster volume stop but I don't know if I can do a gluster volume 
> delete. 

You can delete the volume, if you do not need the data. 
The other option is to delete the disks from the gluster volume mount point. 

> 
> Any help? 
> 
> Thanks 
> 
> José Ferradeira 
> 
>  
> From: supo...@logicworks.pt 
> To: "Sahina Bose"  
> Cc: "users"  
> Sent: Thursday, December 20, 2018 12:25:08 PM 
> Subject: [ovirt-users] Re: Disk full 
> 
> We moved the VM disk to the second gluster volume. In ovirt-engine I cannot see 
> the old disk, only the disk attached to the VM on the second gluster volume. 
> We keep getting errors about the disk being full. 
> Using CLI I can see the image on the first gluster volume. So ovirt-engine 
> was able to move the disk to the second volume but did not delete it from the 
> first volume. 
> 
> # gluster volume info gv0 
> 
> Volume Name: gv0 
> Type: Distribute 
> Volume ID: 4aaffd24-553b-4a85-8c9b-386b02b30b6f 
> Status: Started 
> Snapshot Count: 0 
> Number of Bricks: 1 
> Transport-type: tcp 
> Bricks: 
> Brick1: gfs1.growtrade.pt:/home/brick1 
> Options Reconfigured: 
> features.shard-block-size: 512MB 
> network.ping-timeout: 30 
> storage.owner-gid: 36 
> storage.owner-uid: 36 
> user.cifs: off 
> features.shard: off 
> cluster.shd-wait-qlength: 1 
> cluster.shd-max-threads: 8 
> cluster.locking-scheme: granular 
> cluster.data-self-heal-algorithm: full 
> cluster.server-quorum-type: server 
> cluster.quorum-type: auto 
> cluster.eager-lock: enable 
> network.remote-dio: enable 
> performance.low-prio-threads: 32 
> performance.stat-prefetch: off 
> performance.io-cache: off 
> performance.read-ahead: off 
> performance.quick-read: off 
> transport.address-family: inet 
> performance.readdir-ahead: on 
> nfs.disable: on 
> 
> 
> Thanks 
> 
>  
> From: "Sahina Bose"  
> To: supo...@logicworks.pt, "Krutika Dhananjay"  
> Cc: "users"  
> Sent: Thursday, December 20, 2018 11:53:39 AM 
> Subject: Re: [ovirt-users] Disk full 
> 
> Is it possible for you to delete the old disks from the storage domain 
> (you can use the ovirt-engine UI)? Do you continue to see space used 
> despite doing that? 
> I see that you are on a much older version of gluster. Have you 
> considered updating to 3.12? 
> 
> Please also provide output of "gluster volume info " 
> 
> On Thu, Dec 20, 2018 at 3:56 PM  wrote: 
> > 
> > Yes, I can see the image on the volume. 
> > Gluster version: 
> > glusterfs-client-xlators-3.8.12-1.el7.x86_64 
> > glusterfs-cli-3.8.12-1.el7.x86_64 
> > glusterfs-api-3.8.12-1.el7.x86_64 
> > glusterfs-fuse-3.8.12-1.el7.x86_64 
> > glusterfs-server-3.8.12-1.el7.x86_64 
> > glusterfs-libs-3.8.12-1.el7.x86_64 
> > glusterfs-3.8.12-1.el7.x86_64 
> > 
> > 
> > Thanks 
> > 
> > José 
> > 
> >  
> > From: "Sahina Bose"  
> > To: supo...@logicworks.pt 
> > Cc: "users"  
> > Sent: Wednesday, December 19, 2018 4:13:16 PM 
> > Subject: Re: [ovirt-users] Disk full 
> > 
> > Do you see the image on the gluster volume mount? Can you provide the 
> > gluster volume options and version of gluster? 
> > 
> > On Wed, 19 Dec 2018 at 4:04 PM,  wrote: 
> >> 
> >> Hi, 
> >> 
> >> I have an all-in-one installation with 2 gluster volumes. 
> >> The disk of one VM filled up the brick, which is a partition. That 
> >> partition has 0% free disk space. 
> >> I moved the disk of that VM to the other gluster volume, and the VM is working 
> >> with the disk on the other gluster volume. 
> >> When I moved the disk, it didn't delete it from the brick, and the engine keeps 
> >> complaining that there is no more disk space on that volume. 
> >> What can I do? 
> >> Is there a way to prevent this in the future? 
> >> 
> >> Many thanks 
> >> 
> >> José 
> >> 
> >> 
> >> 
> >> -- 
> >>  
> >> Jose Ferradeira 
> >> http://www.logicworks.pt 
> >> ___ 
> >> Users mailing list -- users@ovirt.org 
> >> To unsubscribe send an email to users-le...@ovirt.org 
> >> Privacy Statement: 

[ovirt-users] Re: vGPU with NVIDIA M60 mdev_type not showing

2019-01-15 Thread Josep Manel Andrés Moscardó

Hi,
I am using CentOS 7.6 and the latest oVirt release. Is it possible that 
the package vdsm-hook-vfio-mdev is needed? As far as I understand it is 
already deprecated, but I cannot find anything in the documentation.


[root@esxh-03 ~]# yum install vdsm-hook-vfio-mdev
Loaded plugins: enabled_repos_upload, fastestmirror, package_upload,
product-id, search-disabled-repos, subscription-manager, vdsmupgrade
This system is not registered with an entitlement server. You can use 
subscription-manager to register.

Loading mirror speeds from cached hostfile
 * base: mirror2.hs-esslingen.de
 * extras: mirror2.hs-esslingen.de
 * ovirt-4.2: ftp.plusline.net
 * ovirt-4.2-epel: ftp-stud.hs-esslingen.de
 * updates: ftp.fau.de
Package vdsm-hook-vfio-mdev-4.20.35-1.el7.noarch is obsoleted by 
vdsm-4.20.43-1.el7.x86_64 which is already installed

Nothing to do



OS Version:
RHEL - 7 - 6.1810.2.el7.centos
OS Description:
CentOS Linux 7 (Core)
Kernel Version:
3.10.0 - 957.1.3.el7.x86_64
KVM Version:
2.12.0 - 18.el7_6.1.1
LIBVIRT Version:
libvirt-4.5.0-10.el7_6.3
VDSM Version:
vdsm-4.20.43-1.el7
SPICE Version:
0.14.0 - 6.el7
CEPH Version:
librbd1-10.2.5-4.el7
Open vSwitch Version:
openvswitch-2.9.0-4.el7
Kernel Features:
PTI: 1, IBRS: 0, RETP: 1


Cheers
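
Before digging into vdsm itself, it may be worth confirming that the host
driver actually exposes mdev types; a few hedged checks (the NVIDIA service
names below are assumptions based on the usual vGPU host packages):

# The vGPU manager services must be running for mdev types to appear
systemctl status nvidia-vgpud nvidia-vgpu-mgr

# If the driver registered correctly, each GPU should list its supported mdev types
ls /sys/class/mdev_bus/*/mdev_supported_types 2>/dev/null

# The same information per PCI device (first M60 from the nvidia-smi output below)
ls /sys/bus/pci/devices/0000:05:00.0/mdev_supported_types 2>/dev/null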


On 14/1/19 17:45, Josep Manel Andrés Moscardó wrote:

Hi all,
I have a host with two M60s, with the latest supported driver installed and
working, as you can see:


root@esxh-03 vdsm]# lsmod | grep vfio
nvidia_vgpu_vfio   49475  0
nvidia  16633974  1 nvidia_vgpu_vfio
vfio_mdev  12841  0
mdev   20336  2 vfio_mdev,nvidia_vgpu_vfio
vfio_iommu_type1   22300  0
vfio   32656  3 vfio_mdev,nvidia_vgpu_vfio,vfio_iommu_type1
[root@esxh-03 vdsm]# nvidia-smi
Mon Jan 14 17:39:30 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.91       Driver Version: 410.91       CUDA Version: N/A      |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M60           Off  | 00000000:05:00.0 Off |                  Off |
| 16%   27C    P0    41W / 120W |     14MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla M60           Off  | 00000000:06:00.0 Off |                  Off |
| 17%   24C    P0    39W / 120W |     14MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla M60           Off  | 00000000:84:00.0 Off |                  Off |
| 15%   28C    P0    41W / 120W |     14MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla M60           Off  | 00000000:85:00.0 Off |                  Off |
| 16%   25C    P0    40W / 120W |     14MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+




But the issue is that when I run:

# vdsm-client Host hostdevListByCaps

I don't see any "mdev" device. Also, the directory /sys/class/mdev_bus
does not exist.


Am I missing something?

Cheers.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YR4TYFM24TYQP6MMHREKDVMIZ7QTL4YR/



--
Josep Manel Andrés Moscardó
Systems Engineer, IT Operations
EMBL Heidelberg
T +49 6221 387-8394



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KO6FIDW57N3V245FKKQHZTQZDN35A5BM/


[ovirt-users] Re: Ansible and SHE deployment

2019-01-15 Thread Vrgotic, Marko
Hi Martin,

Thank you.

I will start from there and discover how far it gets me.
Once more specific questions arise, I will reach out.

Kind regards,
Marko
From: Martin Perina 
Date: Tuesday, 15 January 2019 at 10:45
To: "Vrgotic, Marko" 
Cc: "users@ovirt.org" , Ondra Machacek , 
Simone Tiraboschi 
Subject: Re: [ovirt-users] Ansible and SHE deployment

Hi Marko,

please take a look at the official oVirt roles: 
https://github.com/ovirt/ovirt-ansible. They should cover everything around 
oVirt installation, data center setup and even daily maintenance tasks like VM 
management.

Regards,

Martin
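
As a concrete starting point, the roles can be pulled in either as an RPM on
the engine machine or from Ansible Galaxy; a minimal sketch (package and role
names are assumptions; check the repository README for the current ones):

# Packaged roles on the engine machine (package name assumed)
yum install ovirt-ansible-roles

# Or straight from Ansible Galaxy (role names assumed)
ansible-galaxy install oVirt.infra oVirt.hosted-engine-setup

# Then run your playbook that includes those roles against an inventory
ansible-playbook -i inventory site.yml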


On Mon, Jan 14, 2019 at 12:12 PM Vrgotic, Marko 
mailto:m.vrgo...@activevideo.com>> wrote:
Dear oVirt team,

I would like to ask for your help in getting some general guidelines, do's & don'ts
for deploying a complete oVirt environment using Ansible.

The first Production deployment I made was done manually:

12 hypervisors – all the exact same HW brand and specs
3/12 used for HA Env for SHE
oVirt version 4.2.1 (now we are at 4.2.7)
4 Gluster nodes, managed externally of oVirt

This is the environment I would like to convert into one deployable by Ansible.

At the moment, I am working on a second Production env for the Eng/Dev department,
and I want to go all the way with Ansible.
I am aware of your playbooks and their location on GitHub, but what I want to ask
for is advice on how to approach using them:

The second Env will have:

7 hypervisors with different specs / all provisioned using Foreman
oVirt version, latest 4.2.x at that point.
3/7 providing HA for the SHE engine
Storage will be NetApp.

Please let me know how to proceed with modifying the Ansible playbooks, what the
recommended execution order should be, and what to look out for. Also, if you
need additional info, I will be happy to provide it.

Kind regards,
Marko Vrgotic


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 
users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D6BMPCYVHL6EP3ICN475TXZ4EWJSY7HZ/


--
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H76SQ53Y7UJWAFMGFUZDJJI6KDD7RCM6/


[ovirt-users] Re: ovirt node ng 4.3.0 rc1 and HCI single host problems

2019-01-15 Thread Gianluca Cecchi
The mail was partly scrambled in its contents so I put some clarification
here:

On Tue, Jan 15, 2019 at 2:38 PM Gianluca Cecchi 
wrote:

>
>>
> So after starting from scratch and using also the info as detailed on
> thread:
> https://www.mail-archive.com/users@ovirt.org/msg52879.html
>
> the steps now have been:
>
> - install from  ovirt-node-ng-installer-4.3.0-2019011010.el7.iso and reboot
>
> - connect to cockpit and open terminal
>

This step is related to ssh daemon
cd /etc/ssh
chmod 600 *key
systemctl restart sshd

The step below is related to ovirt-imageio-daemon


> mkdir /var/run/vdsm
>  chmod 755 /var/run/vdsm
>  chown vdsm.kvm /var/run/vdsm
>  mkdir /var/run/vdsm/dhclientmon
>  chmod 755 /var/run/vdsm/dhclientmon/
>  chown vdsm.kvm /var/run/vdsm/dhclientmon/
>  mkdir /var/run/vdsm/trackedInterfaces
> chmod 755 /var/run/vdsm/trackedInterfaces/
> chown vdsm.kvm /var/run/vdsm/trackedInterfaces/
> mkdir /var/run/vdsm/v2v
> chmod 700 /var/run/vdsm/v2v
> chown vdsm.kvm /var/run/vdsm/v2v/
> mkdir /var/run/vdsm/vhostuser
> chmod 755 /var/run/vdsm/vhostuser/
> chown vdsm.kvm /var/run/vdsm/vhostuser/
> mkdir /var/run/vdsm/payload
> chmod 755 /var/run/vdsm/payload/
> chown vdsm.kvm /var/run/vdsm/payload/
>
> systemctl restart sshd
>

Actually:

systemctl restart ovirt-imageio-daemon
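
Since /var/run is a tmpfs, the directories created above disappear on every
reboot. A persistent alternative is a tmpfiles.d drop-in that recreates them at
boot; this is a sketch only, assuming the same paths, modes and vdsm:kvm
ownership as in the workaround (the file name is arbitrary):

# Recreate the workaround directories automatically at boot (file name arbitrary)
cat > /etc/tmpfiles.d/vdsm-workaround.conf <<'EOF'
d /var/run/vdsm                   0755 vdsm kvm -
d /var/run/vdsm/dhclientmon       0755 vdsm kvm -
d /var/run/vdsm/trackedInterfaces 0755 vdsm kvm -
d /var/run/vdsm/v2v               0700 vdsm kvm -
d /var/run/vdsm/vhostuser         0755 vdsm kvm -
d /var/run/vdsm/payload           0755 vdsm kvm -
EOF

# Apply it immediately without rebooting
systemd-tmpfiles --create /etc/tmpfiles.d/vdsm-workaround.conf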


>
> - put in the newer version of vdsm-api.pickle
> from vdsm-api-4.30.5-2.gitf824ec2.el7.noarch.rpm
> in /usr/lib/python2.7/site-packages/vdsm/rpc/vdsm-api.pickle
>

alternatively, vdsm-api.pickle can be downloaded directly from here:
https://drive.google.com/file/d/1AhakKhm_dzx-Gxt-Y1OojzRUwHs75kot/view?usp=sharing


>
> - run the wizard for the gluster+he setup (the right positioned option)
> inside the gdeploy text window click edit and add
> "
> [diskcount]
> 1
>
> "
> under the section
> "
> [disktype]
> jbod
> "
>
> In my case, with a single disk, I chose the JBOD option.


> - first 2 steps ok
>
> - last step fails in finish part
>
> [ INFO ] TASK [oVirt.hosted-engine-setup : Fetch Datacenter name]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : Add NFS storage domain]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : Add glusterfs storage domain]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : Add iSCSI storage domain]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : Add Fibre Channel storage
> domain]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : Get storage domain details]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : debug]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : Find the appliance OVF]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : debug]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : Parse OVF]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : Get required size]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : debug]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : Remove unsuitable storage
> domain]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : debug]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : Check storage domain free space]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [oVirt.hosted-engine-setup : Activate storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]".
> HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
> is 400."}
>
> On engine.log I see
>
> 2019-01-15 13:50:35,317+01 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (default task-2) [51725212] START, CreateStoragePoolVDSCommand(HostName =
> ov4301.localdomain.lo
> cal,
> CreateStoragePoolVDSCommandParameters:{hostId='e8f105f1-37ed-4ac4-bfc3-b1e55ed3027f',
> storagePoolId='96a31a7e-18bb-11e9-9a34-00163e6196f3',
> storagePoolName='Default', masterDomainId='14ec2fc7-8c2
> b-487c-8f4f-428644650928',
> domainsIdList='[14ec2fc7-8c2b-487c-8f4f-428644650928]',
> masterVersion='1'}), log id: 4baccd53
> 2019-01-15 13:50:36,345+01 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (default task-2) [51725212] Failed in 'CreateStoragePoolVDS' method
> 2019-01-15 13:50:36,354+01 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-2) [51725212] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802),
> VDSM ov4301.localdomain.local command CreateStoragePoolVDS failed: Cannot
> acquire host id: (u'14ec2fc7-8c2b-487c-8f4f-428644650928',
> SanlockException(-203, 'Sanlock lockspace add failure', 'Watchdog device
> error'))
> 2019-01-15 13:50:36,354+01 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (default 
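
The "Sanlock lockspace add failure" / "Watchdog device error" above usually
points at the sanlock and wdmd daemons on the host rather than at the storage
itself; a few hedged checks, assuming the standard service and device names:

# sanlock needs a working watchdog daemon to add a lockspace
systemctl status sanlock wdmd

# wdmd must be able to open a watchdog device
ls -l /dev/watchdog*

# Current lockspaces and resources as seen by sanlock
sanlock client status

# Kernel and daemon messages about the watchdog
journalctl -b | grep -iE 'wdmd|sanlock|watchdog' | tail -n 50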

[ovirt-users] Re: ovirt node ng 4.3.0 rc1 and HCI single host problems

2019-01-15 Thread Gianluca Cecchi
On Fri, Jan 11, 2019 at 6:57 PM Gianluca Cecchi 
wrote:

> On Fri, Jan 11, 2019 at 6:48 PM Gianluca Cecchi 
> wrote:
>
>>
>> The problem now is at final step where I get
>>
>>
>> [ INFO ] TASK [oVirt.hosted-engine-setup : debug]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Fetch Datacenter ID]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Fetch Datacenter name]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Add NFS storage domain]
>> [ INFO ] skipping: [localhost]
>> [ INFO ] TASK [oVirt.hosted-engine-setup : Add glusterfs storage domain]
>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
>> "[Invalid parameter]". HTTP response code is 400.
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
>> reason is \"Operation Failed\". Fault detail is \"[Invalid parameter]\".
>> HTTP response code is 400."}
>>
>>
> On engine VM I see a reference to " CreateStorageDomainVDS failed: Invalid
> parameter: 'block_size=None'" in engine.log
>
>
So after starting from scratch, and also using the info detailed in this
thread:
https://www.mail-archive.com/users@ovirt.org/msg52879.html

the steps now have been:

- install from  ovirt-node-ng-installer-4.3.0-2019011010.el7.iso and reboot

- connect to cockpit and open terminal
mkdir /var/run/vdsm
 chmod 755 /var/run/vdsm
 chown vdsm.kvm /var/run/vdsm
 mkdir /var/run/vdsm/dhclientmon
 chmod 755 /var/run/vdsm/dhclientmon/
 chown vdsm.kvm /var/run/vdsm/dhclientmon/
 mkdir /var/run/vdsm/trackedInterfaces
chmod 755 /var/run/vdsm/trackedInterfaces/
chown vdsm.kvm /var/run/vdsm/trackedInterfaces/
mkdir /var/run/vdsm/v2v
chmod 700 /var/run/vdsm/v2v
chown vdsm.kvm /var/run/vdsm/v2v/
mkdir /var/run/vdsm/vhostuser
chmod 755 /var/run/vdsm/vhostuser/
chown vdsm.kvm /var/run/vdsm/vhostuser/
mkdir /var/run/vdsm/payload
chmod 755 /var/run/vdsm/payload/
chown vdsm.kvm /var/run/vdsm/payload/

systemctl restart sshd

- put in the newer version of vdsm-api.pickle
from vdsm-api-4.30.5-2.gitf824ec2.el7.noarch.rpm
in /usr/lib/python2.7/site-packages/vdsm/rpc/vdsm-api.pickle
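
If you prefer to extract the file from the RPM yourself instead of downloading
the pickle directly, a sketch (assuming the RPM has already been downloaded to
the current directory; back up the shipped file first):

# Keep a copy of the shipped file before overwriting it
cp /usr/lib/python2.7/site-packages/vdsm/rpc/vdsm-api.pickle{,.orig}

# Unpack only the pickle from the vdsm-api RPM without installing it
rpm2cpio vdsm-api-4.30.5-2.gitf824ec2.el7.noarch.rpm | \
    cpio -idmv './usr/lib/python2.7/site-packages/vdsm/rpc/vdsm-api.pickle'

# Put the extracted file in place
cp ./usr/lib/python2.7/site-packages/vdsm/rpc/vdsm-api.pickle \
    /usr/lib/python2.7/site-packages/vdsm/rpc/vdsm-api.pickle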

- run the wizard for the gluster+he setup (the right positioned option)
inside the gdeploy text window click edit and add
"
[diskcount]
1

"
under the section
"
[disktype]
jbod
"

- first 2 steps ok

- last step fails in finish part

[ INFO ] TASK [oVirt.hosted-engine-setup : Fetch Datacenter name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Add NFS storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Add glusterfs storage domain]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Add iSCSI storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Add Fibre Channel storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Get storage domain details]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : debug]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Find the appliance OVF]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : debug]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Parse OVF]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Get required size]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : debug]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Remove unsuitable storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : debug]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Check storage domain free space]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]".
HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
is 400."}

On engine.log I see

2019-01-15 13:50:35,317+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
(default task-2) [51725212] START, CreateStoragePoolVDSCommand(HostName =
ov4301.localdomain.lo
cal,
CreateStoragePoolVDSCommandParameters:{hostId='e8f105f1-37ed-4ac4-bfc3-b1e55ed3027f',
storagePoolId='96a31a7e-18bb-11e9-9a34-00163e6196f3',
storagePoolName='Default', masterDomainId='14ec2fc7-8c2
b-487c-8f4f-428644650928',
domainsIdList='[14ec2fc7-8c2b-487c-8f4f-428644650928]',
masterVersion='1'}), log id: 4baccd53
2019-01-15 13:50:36,345+01 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
(default task-2) [51725212] Failed in 'CreateStoragePoolVDS' method
2019-01-15 13:50:36,354+01 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-2) [51725212] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802),
VDSM 

[ovirt-users] Re: migrate hosted-engine vm to another cluster?

2019-01-15 Thread Douglas Duckworth
Hi

I opened a Bugzilla at https://bugzilla.redhat.com/show_bug.cgi?id=1664777 but 
no steps have been shared on how to resolve it.  Does anyone know how this can be 
fixed without destroying the data center and building a new hosted engine?

Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690
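
One avenue that may be worth testing while the bug is open (a sketch only; the
engine may refuse this for the HostedEngine VM depending on the version, and
global maintenance should be enabled first) is changing the VM's cluster
through the REST API. URL, credentials and <vm-id> below are placeholders:

# Look up the HostedEngine VM id
curl -s -k -u 'admin@internal:PASSWORD' \
    'https://engine.example.com/ovirt-engine/api/vms?search=name%3DHostedEngine'

# Attempt to move it to the target cluster by name (may be rejected for the HE VM)
curl -s -k -u 'admin@internal:PASSWORD' -X PUT \
    -H 'Content-Type: application/xml' \
    -d '<vm><cluster><name>NewCluster</name></cluster></vm>' \
    'https://engine.example.com/ovirt-engine/api/vms/<vm-id>'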


On Wed, Jan 9, 2019 at 10:22 AM Douglas Duckworth 
mailto:dod2...@med.cornell.edu>> wrote:
Hi

Should I open a Bugzilla to resolve this problem?

Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690


On Wed, Dec 19, 2018 at 1:13 PM Douglas Duckworth 
mailto:dod2...@med.cornell.edu>> wrote:
Hello

I am trying to migrate my hosted-engine VM to another cluster in the same data 
center.  Hosts in both clusters have the same logical networks and storage.  
Yet migrating the VM isn't an option.

To get the hosted-engine VM onto the other cluster I started the VM on a host in 
that other cluster using "hosted-engine --vm-start."

However, HostedEngine is still associated with the old cluster, as shown in the 
attachment, so I cannot live migrate the VM.  Does anyone know how to resolve 
this?  With other VMs one can shut them down and then use the "Edit" option, but 
that will not work for HostedEngine.


Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J2GZK5PUBZIQLGLNZ2UUCSIES6HSZLHC/


[ovirt-users] Re: Ansible and SHE deployment

2019-01-15 Thread Martin Perina
Hi Marko,

please take a look at the official oVirt roles:
https://github.com/ovirt/ovirt-ansible. They should cover everything around
oVirt installation, data center setup and even daily maintenance tasks like
VM management.

Regards,

Martin


On Mon, Jan 14, 2019 at 12:12 PM Vrgotic, Marko 
wrote:

> Dear oVirt team,
>
>
>
> I would like to ask for your help in getting some general guidelines, do's & don'ts
> for deploying a complete oVirt environment using Ansible.
>
>
>
> The first Production deployment I made was done manually:
>
>
>
> 12 hypervisors – all the exact same HW brand and specs
>
> 3/12 used for HA Env for SHE
>
> oVirt version 4.2.1 (now we are at 4.2.7)
>
> 4 Gluster nodes, managed externally of oVirt
>
>
>
> This is environment I would like to convert into deployable by Ansible
>
>
>
> Atm, I am working on second Production env, for Eng/Dev department, and I
> want to go all way Ansible.
>
> I am aware of your playbooks, and location on github, but what I want to
> ask is an advice on how to approach using them:
>
>
>
> The second Env will have:
>
>
>
> 7 hypervisors with different specs / all provisioned using Foreman
>
> oVirt version, latest 4.2.x at that point.
>
> 3/7 providing HA for SHE engine
>
> Storage used is to be NetApp.
>
>
>
> Please let me know how to proceed with modifying Ansible playbooks and
> what should be the recommended executing order, and what to look for? Also,
> If you need additional info, I will be happy to provide.
>
>
>
> Kind regards,
>
> Marko Vrgotic
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/D6BMPCYVHL6EP3ICN475TXZ4EWJSY7HZ/
>


-- 
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WUQ2O646AQYBI7FLQELT6WQL3BXSIHF4/


[ovirt-users] Re: Is it possible to migrate self hosted engine to bare metal?

2019-01-15 Thread Florian Schmid
Hi,

is there a more detailed HowTo for doing this? From Red Hat, there is only 
documentation for the other way round (bare metal to self-hosted engine...).

I want to install the engine into an LXD container, because with the self-hosted 
engine I have too many drawbacks at the moment, and backup/restore is also far 
from straightforward, especially when you have a database which was created 
with oVirt 4.0.

Thank you!

BR Florian
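
For the backup/restore part, the engine-backup tool is the same in both
directions; a minimal sketch, assuming default options (check
engine-backup --help on your version for the exact flags):

# On the current (self-)hosted engine VM: take a full backup
engine-backup --mode=backup --scope=all \
    --file=/root/engine-backup.tar.gz --log=/root/engine-backup.log

# On the new standalone engine machine (for example the LXD container), after
# installing the same ovirt-engine version but before running engine-setup:
engine-backup --mode=restore --file=/root/engine-backup.tar.gz \
    --log=/root/engine-restore.log --provision-db --provision-dwh-db \
    --restore-permissions

# Then finish the configuration
engine-setup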

- Original Message -
From: "Tobias Scheinert" 
To: "users" , "abhishek sahni1991" 

Sent: Wednesday, 19 December 2018 11:56:29
Subject: [ovirt-users] Re: Is it possible to migrate self hosted engine to bare 
metal?

Hi,

On 19.12.2018 at 11:21, Abhishek Sahni wrote:
> Do we have some steps where we can migrate self hosted engine to 
> separate bare metal machine.
> 
> I do have a recent backup of DB from the self hosted engine?

yes, it is possible to do this. You can find the necessary information in 
the Red Hat documentation.

--> 


We are running our hosted engine as a virtual machine on a Proxmox host. 
We do this to avoid several bootstrapping problems in case of a disaster 
recovery.


Greetings, Tobias


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/25PJNZIUMJ57B6MBY3NUWYNFYETIBE77/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WAKGP7OKL3E3L365NLTXCOGQXB3LZMYE/


[ovirt-users] Re: ETL service aggregation to hourly tables has encountered an error. Please consult the service log for more details.

2019-01-15 Thread melnyksergii
2018-10-22 17:58:40|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|30
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.00
etlVersion|4.2.4.3
dwhAggregationDebug|false
dwhUuid|69462636-22a6-4aae-9703-70ce55856985
ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbPassword|**
2018-10-23 16:59:59|QUC3MI|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-23 16:59:59| ETL service aggregation to hourly tables has encountered an error. lastHourAgg value =Tue Oct 23 15:00:00 EEST 2018 and runTime = Tue Oct 23 16:59:59 EEST 2018 .Please consult the service log for more details.|42
2018-10-24 17:59:59|xMmYXu|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-24 17:59:59| ETL service aggregation to hourly tables has encountered an error. lastHourAgg value =Wed Oct 24 16:00:00 EEST 2018 and runTime = Wed Oct 24 17:59:59 EEST 2018 .Please consult the service log for more details.|42
2018-10-25 16:59:59|cUcnsD|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-25 16:59:59| ETL service aggregation to hourly tables has encountered an error. lastHourAgg value =Thu Oct 25 15:00:00 EEST 2018 and runTime = Thu Oct 25 16:59:59 EEST 2018 .Please consult the service log for more details.|42
2018-10-25 17:59:59|eJkgGv|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-25 17:59:59| ETL service aggregation to hourly tables has encountered an error. lastHourAgg value =Thu Oct 25 16:00:00 EEST 2018 and runTime = Thu Oct 25 17:59:59 EEST 2018 .Please consult the service log for more details.|42
2018-10-25 20:59:58|Sc5Lfp|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-25 20:59:58| ETL service aggregation to hourly tables has encountered an error. lastHourAgg value =Thu Oct 25 19:00:00 EEST 2018 and runTime = Thu Oct 25 20:59:58 EEST 2018 .Please consult the service log for more details.|42
2018-10-26 01:59:59|TQ4s8m|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-26 01:59:59| ETL service aggregation to hourly tables has encountered an error. lastHourAgg value =Fri Oct 26 00:00:00 EEST 2018 and runTime = Fri Oct 26 01:59:59 EEST 2018 .Please consult the service log for more details.|42
2018-10-26 15:59:55|Tiv1gZ|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-26 15:59:55| ETL service aggregation to hourly tables has encountered an error. lastHourAgg value =Fri Oct 26 14:00:00 EEST 2018 and runTime = Fri Oct 26 15:59:55 EEST 2018 .Please consult the service log for more details.|42
2018-10-27 22:59:59|tsRtk4|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-27 22:59:59| ETL service aggregation to hourly tables has encountered an error. lastHourAgg value =Sat Oct 27 21:00:00 EEST 2018 and runTime = Sat Oct 27 22:59:59 EEST 2018 .Please consult the service log for more details.|42
2018-10-28 03:00:00|We7dPQ|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-28 03:00:00| ETL service aggregation to hourly tables has encountered an error. lastHourAgg value =Sun Oct 28 03:00:00 EET 2018 and runTime = Sun Oct 28 03:00:00 EET 2018 .Please consult the service log for more details.|42
2018-10-28 03:00:14|c3NnXS|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please check your oVirt Engine status.|9704
2018-10-28 03:01:19|3CFZLf|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please check your oVirt Engine status.|9704
2018-10-28 03:02:24|gt6fSU|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please check your oVirt Engine status.|9704
2018-10-28 03:03:29|TzjHed|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please check your oVirt Engine status.|9704
2018-10-28 03:04:34|cK2IFe|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please
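
When the hourly aggregation warnings pile up like this, the usual first checks
are the DWH service itself and clock consistency between the lastHourAgg and
runTime values; a hedged sketch, assuming the default service and log names:

# Is the DWH service running, and when did it last restart?
systemctl status ovirt-engine-dwhd

# Full DWH log with the stack traces behind the aggregation warnings (default path assumed)
tail -n 200 /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log

# Time zone / NTP sanity on the engine machine; aggregation is sensitive to clock jumps
timedatectl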