[ovirt-users] Re: Ovirt 4.4/ Centos 8 issue with nfs?

2020-10-12 Thread Amit Bawer
On Mon, Oct 12, 2020 at 9:33 PM Amit Bawer  wrote:

>
>
> On Mon, Oct 12, 2020 at 9:12 PM Lee Hanel  wrote:
>
>> my /etc/exports looks like:
>>
>> (rw,async,no_wdelay,crossmnt,insecure,no_root_squash,insecure_locks,sec=sys,anonuid=1025,anongid=100)
>>
> The anongid,anonuid options could be failing the qemu user access check,
> Is there a special need to have them for the nfs shares for ovirt?
> I'd suggest to specify the exports for ovirt on their own
> /export/path1   *(rw,sync,no_root_suqash)
> /export/path2   *(rw,sync,no_root_suqash)
>
mind the typo "squash":
/export/path2   *(rw,sync,no_root_squash)
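After reloading the exports (exportfs -ra on the server), a quick way to
approximate vdsm's access check from the host is to try the mount as the vdsm
and qemu users; the path below reuses the placeholder from the traceback, so
substitute your real mount point:

# sudo -u vdsm touch /rhev/data-center/mnt/nfshost:nfs_path/write_test
# sudo -u vdsm rm /rhev/data-center/mnt/nfshost:nfs_path/write_test
# sudo -u qemu ls /rhev/data-center/mnt/nfshost:nfs_path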

> ...
>
>
>> also to note, as vdsm I can create files/directories on the share.
>>
>>
>> On Mon, Oct 12, 2020 at 12:34 PM Amit Bawer  wrote:
>> >
>> >
>> >
>> > On Mon, Oct 12, 2020 at 7:47 PM  wrote:
>> >>
>> >> ok, I think that the selinux context might be wrong?   but I saw
>> nothing in the audit logs about it.
>> >>
>> >> drwxr-xr-x. 1 vdsm kvm system_u:object_r:nfs_t:s0 40 Oct  8 17:19 /data
>> >>
>> >> I don't see in the ovirt docs what the selinux context needs to be.
>> Is what you shared as an example the correct setting?
>> >
>> > It's taken from a working nfs setup,
>> > what is the /etc/exports (or equiv.) options settings on the server?
>> >
>> >> ___
>> >> Users mailing list -- users@ovirt.org
>> >> To unsubscribe send an email to users-le...@ovirt.org
>> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> >> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> >> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5OSGBFRHYC3AMI2E3ZYRRBIAJINYOIBG/
>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VGLS62COSG774PI6JJKP44JTL6QQWRHW/


[ovirt-users] Re: Ovirt 4.4/ Centos 8 issue with nfs?

2020-10-12 Thread Amit Bawer
On Mon, Oct 12, 2020 at 9:12 PM Lee Hanel  wrote:

> my /etc/exports looks like:
>
> (rw,async,no_wdelay,crossmnt,insecure,no_root_squash,insecure_locks,sec=sys,anonuid=1025,anongid=100)
>
The anongid,anonuid options could be failing the qemu user access check,
Is there a special need to have them for the nfs shares for ovirt?
I'd suggest to specify the exports for ovirt on their own
/export/path1   *(rw,sync,no_root_suqash)
/export/path2   *(rw,sync,no_root_suqash)
...


> also to note, as vdsm I can create files/directories on the share.
>
>
> On Mon, Oct 12, 2020 at 12:34 PM Amit Bawer  wrote:
> >
> >
> >
> > On Mon, Oct 12, 2020 at 7:47 PM  wrote:
> >>
> >> ok, I think that the selinux context might be wrong?   but I saw
> nothing in the audit logs about it.
> >>
> >> drwxr-xr-x. 1 vdsm kvm system_u:object_r:nfs_t:s0 40 Oct  8 17:19 /data
> >>
> >> I don't see in the ovirt docs what the selinux context needs to be.  Is
> what you shared as an example the correct setting?
> >
> > It's taken from a working nfs setup,
> > what is the /etc/exports (or equiv.) options settings on the server?
> >
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5OSGBFRHYC3AMI2E3ZYRRBIAJINYOIBG/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WOYGDDLQIADJE53J7TIPUPT62WBFHUMM/


[ovirt-users] Re: Ovirt 4.4/ Centos 8 issue with nfs?

2020-10-12 Thread Amit Bawer
On Mon, Oct 12, 2020 at 7:47 PM  wrote:

> ok, I think that the selinux context might be wrong?   but I saw nothing
> in the audit logs about it.
>
> drwxr-xr-x. 1 vdsm kvm system_u:object_r:nfs_t:s0 40 Oct  8 17:19 /data
>
> I don't see in the ovirt docs what the selinux context needs to be.  Is
> what you shared as an example the correct setting?
>
It's taken from a working NFS setup.
What are the /etc/exports (or equivalent) option settings on the server?

___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5OSGBFRHYC3AMI2E3ZYRRBIAJINYOIBG/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L35IT7ZCIQ2VKHJ5HGLRUTPVC32GSPMX/


[ovirt-users] Re: Ovirt 4.4/ Centos 8 issue with nfs?

2020-10-12 Thread Amit Bawer
On Mon, Oct 12, 2020 at 6:03 PM  wrote:

> Greetings,
>
> I'm trying to upgrade from 4.3 to 4.4.  When trying to mount the original
> nfs items, I'm getting the following error:
>
> vdsm.storage.exception.StorageServerAccessPermissionError: Permission
> settings on the specified path do not allow access to the storage. Verify
> permission settings on the specified storage path.: 'path =
> /rhev/data-center/mnt/nfshost:nfs_path'
>
> with the following stack trace:
>
> 2020-10-08 19:00:17,961+ ERROR (jsonrpc/4) [storage.HSM] Could not
> connect to storageServer (hsm:2421)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 82,
> in validateDirAccess
> getProcPool().fileUtils.validateAccess(dirPath)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/outOfProcess.py",
> line 194, in validateAccess
> raise OSError(errno.EACCES, os.strerror(errno.EACCES))
> PermissionError: [Errno 13] Permission denied
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2418,
> in connectStorageServer
> conObj.connect()
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py",
> line 449, in connect
> return self._mountCon.connect()
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py",
> line 190, in connect
> six.reraise(t, v, tb)
>   File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
> raise value
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py",
> line 183, in connect
> self.getMountObj().getRecord().fs_file)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 93,
> in validateDirAccess
> raise se.StorageServerAccessPermissionError(dirPath)
>
>
> via an ls, it looks like there are the correct permissions:
>
> ls -alh
> total 0
> drwxr-xr-x. 1 vdsm kvm 100 Oct  8 19:15  .
> drwxr-xr-x. 4 vdsm kvm 115 Oct  8 19:02  ..
> drwxr-xr-x. 1 vdsm kvm  52 Oct  1 20:35
> ffe7b7bb-a391-42a9-9bae-480807509778
> d-. 1 vdsm kvm  22 Mar 17  2020 '#recycle'
>
>
The permissions and ownership seem correct; make sure the rest is set as
specified:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/sect-preparing_and_adding_nfs_storage
In addition, make sure you are exporting the shares with
*(rw,sync,no_root_squash) for the relevant export paths on the NFS server
side, and that an SELinux context is set for the shared folders:
# ls -lhZ
...
drwxr-xr-x. 2 vdsm kvm unconfined_u:object_r:default_t:s0  6 Jul 20 18:21
data
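For completeness, a minimal server-side preparation along the lines of the
document above would look something like this (the export path is just an
example):

# mkdir -p /exports/data
# chown 36:36 /exports/data       (vdsm:kvm)
# chmod 0755 /exports/data
# echo '/exports/data *(rw,sync,no_root_squash)' >> /etc/exports
# exportfs -ra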



>
> But, I wrote a short script to check the individual permissions that are
> checked, and it thinks I can't write to the directory:
>
> We can Read From  it:  /rhev/data-center/mnt/nfshost:nfspath
> We can't write to it:  /rhev/data-center/mnt/nfshost:nfspath
> We can execute to it:  /rhev/data-center/mnt/nfshost:nfspath


> This is doing a simple:
>
> print(os.stat(path))
> if os.access(path, os.F_OK):
>   print("It Exists", "/rhev/data-center/mnt/nfshost:nfspath")
> else:
>   print("It Doesn't Exists", "/rhev/data-center/mnt/nfshost:nfspath")
> if os.access(path, os.R_OK):
>   print("We can Read From  it: ", "/rhev/data-center/mnt/nfshost:nfspath")
> else:
>   print("We can't Read from it: ", "/rhev/data-center/mnt/nfshost:nfspath")
> if os.access(path, os.W_OK):
>   print("We can write to it: ", "/rhev/data-center/mnt/nfshost:nfspath")
> else:
>   print("We can't write to it: ", "/rhev/data-center/mnt/nfshost:nfspath")
> if os.access(path, os.X_OK):
>   print("We can execute to it: ", "/rhev/data-center/mnt/nfshost:nfspath")
> else:
>   print("We can can't to it: ", "/rhev/data-center/mnt/nfshost:nfspath")
>
> I took this same checks over to the Centos 7 host, and I it passes the
> checks fine.
>
>
> Does anyone have any ideas?
>
> Thanks,
> Lee
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y33ZNSVWCR6PJCH3TYUMXM72EBTUH4ZQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AKULHVEQA6ZY3UMRTKN72B5CIAWLELTP/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-04 Thread Amit Bawer
On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi 
wrote:

> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:
>
>>
>>
>> Since there wasn't a filter set on the node, the 4.4.2 update added the
>> default filter for the root-lv pv
>> if there was some filter set before the upgrade, it would not have been
>> added by the 4.4.2 update.
>>
>>>
>>>
> Do you mean that I will get the same problem upgrading from 4.4.2 to an
> upcoming 4.4.3, as also now I don't have any filter set?
> This would not be desirable
>
Once you have got back into 4.4.2, it's recommended to set the LVM filter
to fit the PVs you use on your node. For the local root PV you can run:
# vdsm-tool config-lvm-filter -y
For the gluster bricks you'll need to add their UUIDs to the filter as well.
The next upgrade should not set a filter on its own if one is already set.
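For illustration only, a combined filter in /etc/lvm/lvm.conf would look
roughly like this (the lvm-pv-uuid values are placeholders; take the real
ones from /dev/disk/by-id/ or "udevadm info" for each PV):

filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-<root-pv-uuid>$|", "a|^/dev/disk/by-id/lvm-pv-uuid-<gluster-brick-pv-uuid>$|", "r|.*|"]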


>
>
>>
>>> Right now only two problems:
>>>
>>> 1) a long running problem that from engine web admin all the volumes are
>>> seen as up and also the storage domains up, while only the hosted engine
>>> one is up, while "data" and vmstore" are down, as I can verify from the
>>> host, only one /rhev/data-center/ mount:
>>>
>>> [snip]
>
>>
>>> I already reported this, but I don't know if there is yet a bugzilla
>>> open for it.
>>>
>> Did you get any response for the original mail? haven't seen it on the
>> users-list.
>>
>
> I think it was this thread related to 4.4.0 released and question about
> auto-start of VMs.
> A script from Derek that tested if domains were active and got false
> positive, and my comments about the same registered behaviour:
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/25KYZTFKX5Y4UOEL2SNHUUC7M4WAJ5NO/
>
> But I think there was no answer on that particular item/problem.
> Indeed I think you can easily reproduce, I don't know if only with Gluster
> or also with other storage domains.
> I don't know if it can have a part the fact that on the last host during a
> whole shutdown (and the only host in case of single host) you have to run
> the script
> /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
> otherwise you risk not to get a complete shutdown sometimes.
> And perhaps this stop can have an influence on the following startup.
> In any case the web admin gui (and the API access) should not show the
> domains active when they are not. I think there is a bug in the code that
> checks this.
>
If it got no response so far, I think it could be helpful to file a bug
with the details of the setup and the steps involved here so it will get
tracked.


>
>>
>>> 2) I see that I cannot connect to cockpit console of node.
>>>
>>> [snip]
>
>> NOTE: the ost is not resolved by DNS but I put an entry in my hosts
>>> client.
>>>
>> Might be required to set DNS for authenticity, maybe other members on the
>> list could tell better.
>>
>
> It would be the first time I see it. The access to web admin GUI works ok
> even without DNS resolution.
> I'm not sure if I had the same problem with the cockpit host console on
> 4.4.0.
>
Perhaps +Yedidyah Bar David   could help regarding cockpit
web access.


> Gianluca
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VYWJPRKRESPBAR7I45QSVNTCVWNRZ5WQ/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-04 Thread Amit Bawer
On Sun, Oct 4, 2020 at 2:07 AM Gianluca Cecchi 
wrote:

> On Sat, Oct 3, 2020 at 9:42 PM Amit Bawer  wrote:
>
>>
>>
>> On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer  wrote:
>>
>>>
>>>
>>> For the gluster bricks being filtered out in 4.4.2, this seems like [1].
>>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805
>>>
>>
>> Maybe remove the lvm filter from /etc/lvm/lvm.conf while in 4.4.2
>> maintenance mode
>> if the fs is mounted as read only, try
>>
>> mount -o remount,rw /
>>
>> sync and try to reboot 4.4.2.
>>
>>
> Indeed if i run, when in emergency shell in 4.4.2, the command:
>
> lvs --config 'devices { filter = [ "a|.*|" ] }'
>
> I see also all the gluster volumes, so I think the update injected the
> nasty filter.
> Possibly during update the command
> # vdsm-tool config-lvm-filter -y
> was executed and erroneously created the filter?
>
Since there wasn't a filter set on the node, the 4.4.2 update added the
default filter for the root-lv PV. If there was some filter set before the
upgrade, it would not have been added by the 4.4.2 update.


> Anyway remounting read write the root filesystem and removing the filter
> line from lvm.conf and rebooting worked and 4.4.2 booted ok and I was able
> to exit global maintenance and have the engine up.
>
> Thanks Amit for the help and all the insights.
>
> Right now only two problems:
>
> 1) a long running problem that from engine web admin all the volumes are
> seen as up and also the storage domains up, while only the hosted engine
> one is up, while "data" and vmstore" are down, as I can verify from the
> host, only one /rhev/data-center/ mount:
>
> [root@ovirt01 ~]# df -h
> Filesystem  Size  Used Avail
> Use% Mounted on
> devtmpfs 16G 0   16G
> 0% /dev
> tmpfs16G   16K   16G
> 1% /dev/shm
> tmpfs16G   18M   16G
> 1% /run
> tmpfs16G 0   16G
> 0% /sys/fs/cgroup
> /dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1  133G  3.9G  129G
> 3% /
> /dev/mapper/onn-tmp1014M   40M  975M
> 4% /tmp
> /dev/mapper/gluster_vg_sda-gluster_lv_engine100G  9.0G   91G
> 9% /gluster_bricks/engine
> /dev/mapper/gluster_vg_sda-gluster_lv_data  500G  126G  375G
>  26% /gluster_bricks/data
> /dev/mapper/gluster_vg_sda-gluster_lv_vmstore90G  6.9G   84G
> 8% /gluster_bricks/vmstore
> /dev/mapper/onn-home   1014M   40M  975M
> 4% /home
> /dev/sdb2   976M  307M  603M
>  34% /boot
> /dev/sdb1   599M  6.8M  593M
> 2% /boot/efi
> /dev/mapper/onn-var  15G  263M   15G
> 2% /var
> /dev/mapper/onn-var_log 8.0G  541M  7.5G
> 7% /var/log
> /dev/mapper/onn-var_crash10G  105M  9.9G
> 2% /var/crash
> /dev/mapper/onn-var_log_audit   2.0G   79M  2.0G
> 4% /var/log/audit
> ovirt01st.lutwyn.storage:/engine100G   10G   90G
>  10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
> tmpfs   3.2G 0  3.2G
> 0% /run/user/1000
> [root@ovirt01 ~]#
>
> I can also wait 10 minutes and no change. The way I use to exit from this
> stalled situation is power on a VM, so that obviously it fails
> VM f32 is down with error. Exit message: Unable to get volume size for
> domain d39ed9a3-3b10-46bf-b334-e8970f5deca1 volume
> 242d16c6-1fd9-4918-b9dd-0d477a86424c.
> 10/4/20 12:50:41 AM
>
> and suddenly all the data storage domains are deactivated (from engine
> point of view, because actually they were not active...):
> Storage Domain vmstore (Data Center Default) was deactivated by system
> because it's not visible by any of the hosts.
> 10/4/20 12:50:31 AM
>
> and I can go in Data Centers --> Default --> Storage and activate
> "vmstore" and "data" storage domains and suddenly I get them activated and
> filesystems mounted.
>
> [root@ovirt01 ~]# df -h | grep rhev
> ovirt01st.lutwyn.storage:/engine100G   10G   90G
>  10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
> ovirt01st.lutwyn.storage:/data  500G  131G  370G
>  27% /rhev/data-center/mnt/gluste

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Amit Bawer
On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer  wrote:

>
>
> On Sat, Oct 3, 2020 at 7:26 PM Gianluca Cecchi 
> wrote:
>
>> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>>
>>> From the info it seems that startup panics because gluster bricks cannot
>>> be mounted.
>>>
>>>
>> Yes, it is so
>> This is a testbed NUC I use for testing.
>> It has 2 disks, the one named sdb is where ovirt node has been installed.
>> The one named sda is where I configured gluster though the wizard,
>> configuring the 3 volumes for engine, vm, data
>>
>> The filter that you do have in the 4.4.2 screenshot should correspond to
>>> your root pv,
>>> you can confirm that by doing (replace the pv-uuid with the one from
>>> your filter):
>>>
>>> #udevadm info
>>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>> P:
>>> /devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>>> N: sda2
>>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>>
>>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>>
>>
>> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
>> special file created of type /dev/disk/by-id/
>>
> What does "udevadm info" show for /dev/sdb3 on 4.4.2?
>
>
>> See here for udevadm command on 4.4.0 that shows sdb3 that is the
>> partition corresponding to PV of root disk
>>
>> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>>
>>
>>
>>> Can you give the output of lsblk on your node?
>>>
>>
>> Here lsblk as seen by 4.4.0 with gluster volumes on sda:
>>
>> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>>
>> ANd here lsblk as seen from 4.4.2 with an empty sda:
>>
>> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>>
>>
>>> Can you check that the same filter is in initramfs?
>>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>>
>>
>> Here the command from 4.4.0 that shows no filter
>>
>> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>>
>> And here from 4.4.2 emergency mode, where I have to use the path
>> /boot/ovirt-node-ng-4.4.2-0/initramfs-
>> because no initrd file in /boot (in screenshot you also see output of "ll
>> /boot)
>>
>> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>>
>>
>>
>>> We have the following tool on the hosts
>>> # vdsm-tool config-lvm-filter -y
>>> it only sets the filter for local lvm devices, this is run as part of
>>> deployment and upgrade when done from
>>> the engine.
>>>
>>> If you have other volumes which have to be mounted as part of your
>>> startup
>>> then you should add their uuids to the filter as well.
>>>
>>
>> I didn't anything special in 4.4.0: I installed node on the intended
>> disk, that was seen as sdb and then through the single node hci wizard I
>> configured the gluster volumes on sda
>>
>> Any suggestion on what to do on 4.4.2 initrd or running correct dracut
>> command from 4.4.0 to correct initramfs of 4.4.2?
>>
> The initramfs for 4.4.2 doesn't show any (wrong) filter, so i don't see
> what needs to be fixed in this case.
>
>
>> BTW: could in the mean time if necessary also boot from 4.4.0 and let it
>> go with engine in 4.4.2?
>>
> Might work, probably not too tested.
>
> For the gluster bricks being filtered out in 4.4.2, this seems like [1].
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805
>

Maybe remove the lvm filter from /etc/lvm/lvm.conf while in 4.4.2
maintenance mode. If the fs is mounted as read-only, try

mount -o remount,rw /

then sync and try to reboot 4.4.2.
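A rough recap of that recovery sequence from the emergency shell, assuming
the filter line in lvm.conf is the one added by the upgrade:

# mount -o remount,rw /
# vi /etc/lvm/lvm.conf      (delete the "filter = ..." line)
# sync
# reboot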


>
>>
>>
>> Thanks,
>> Gianluca
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZK3JS7OUIPU4H5KJLGOW7C5IPPAIYPTM/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Amit Bawer
On Sat, Oct 3, 2020 at 7:26 PM Gianluca Cecchi 
wrote:

> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>
>> From the info it seems that startup panics because gluster bricks cannot
>> be mounted.
>>
>>
> Yes, it is so
> This is a testbed NUC I use for testing.
> It has 2 disks, the one named sdb is where ovirt node has been installed.
> The one named sda is where I configured gluster though the wizard,
> configuring the 3 volumes for engine, vm, data
>
> The filter that you do have in the 4.4.2 screenshot should correspond to
>> your root pv,
>> you can confirm that by doing (replace the pv-uuid with the one from your
>> filter):
>>
>> #udevadm info
>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>> P:
>> /devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>> N: sda2
>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>
>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>
>
> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
> special file created of type /dev/disk/by-id/
>
What does "udevadm info" show for /dev/sdb3 on 4.4.2?


> See here for udevadm command on 4.4.0 that shows sdb3 that is the
> partition corresponding to PV of root disk
>
> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>
>
>
>> Can you give the output of lsblk on your node?
>>
>
> Here lsblk as seen by 4.4.0 with gluster volumes on sda:
>
> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>
> ANd here lsblk as seen from 4.4.2 with an empty sda:
>
> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>
>
>> Can you check that the same filter is in initramfs?
>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>
>
> Here the command from 4.4.0 that shows no filter
>
> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>
> And here from 4.4.2 emergency mode, where I have to use the path
> /boot/ovirt-node-ng-4.4.2-0/initramfs-
> because no initrd file in /boot (in screenshot you also see output of "ll
> /boot)
>
> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>
>
>
>> We have the following tool on the hosts
>> # vdsm-tool config-lvm-filter -y
>> it only sets the filter for local lvm devices, this is run as part of
>> deployment and upgrade when done from
>> the engine.
>>
>> If you have other volumes which have to be mounted as part of your startup
>> then you should add their uuids to the filter as well.
>>
>
> I didn't anything special in 4.4.0: I installed node on the intended disk,
> that was seen as sdb and then through the single node hci wizard I
> configured the gluster volumes on sda
>
> Any suggestion on what to do on 4.4.2 initrd or running correct dracut
> command from 4.4.0 to correct initramfs of 4.4.2?
>
The initramfs for 4.4.2 doesn't show any (wrong) filter, so I don't see
what needs to be fixed in this case.


> BTW: could in the mean time if necessary also boot from 4.4.0 and let it
> go with engine in 4.4.2?
>
Might work, probably not too tested.

For the gluster bricks being filtered out in 4.4.2, this seems like [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805


>
>
> Thanks,
> Gianluca
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDJHASPYE5PC2HFJC2LJDPGKV2JA7MAV/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Amit Bawer
From the info it seems that startup panics because gluster bricks cannot be
mounted.

The filter that you do have in the 4.4.2 screenshot should correspond to
your root pv,
you can confirm that by doing (replace the pv-uuid with the one from your
filter):

#udevadm info
 /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
P:
/devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
N: sda2
S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ

In this case sda2 is the partition of the root-lv shown by lsblk.

Can you give the output of lsblk on your node?

Can you check that the same filter is in initramfs?
# lsinitrd -f  /etc/lvm/lvm.conf | grep filter

We have the following tool on the hosts:
# vdsm-tool config-lvm-filter -y
It only sets the filter for local LVM devices; this is run as part of
deployment and upgrade when done from the engine.

If you have other volumes which have to be mounted as part of your startup
then you should add their uuids to the filter as well.


On Sat, Oct 3, 2020 at 3:19 PM Gianluca Cecchi 
wrote:

> On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola 
> wrote:
>
>>
>>
>> Il giorno ven 25 set 2020 alle ore 15:32 Gianluca Cecchi <
>> gianluca.cec...@gmail.com> ha scritto:
>>
>>>
>>>
>>> On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
>>> wrote:
>>>
 oVirt Node 4.4.2 is now generally available

 The oVirt project is pleased to announce the general availability of
 oVirt Node 4.4.2 , as of September 25th, 2020.

 This release completes the oVirt 4.4.2 release published on September
 17th

>>>
>>> Thanks fir the news!
>>>
>>> How to prevent hosts entering emergency mode after upgrade from oVirt
 4.4.1

 Due to Bug 1837864 - Host enter emergency mode after upgrading to latest build

 If you have your root file system on a multipath device on your hosts
 you should be aware that after upgrading from 4.4.1 to 4.4.2 you may get
 your host entering emergency mode.

 In order to prevent this be sure to upgrade oVirt Engine first, then on
 your hosts:

    1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
    2. Reboot.
    3. Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
    4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
    5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild initramfs with the correct filter configuration.
    6. Reboot.



>>> What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have
>>> to follow the same steps as if I were in 4.4.1 or what?
>>> I would like to avoid going through 4.4.1 if possible.
>>>
>>
>> I don't think we had someone testing 4.4.0 to 4.4.2 but above procedure
>> should work for the same case.
>> The problematic filter in /etc/lvm/lvm.conf looks like:
>>
>> # grep '^filter = ' /etc/lvm/lvm.conf
>> filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
>>
>>
>>
>>
>>>
>>> Thanks,
>>> Gianluca
>>>
>>
>>
> OK, so I tried on my single host HCI installed with ovirt-node-ng 4.4.0
> and gluster wizard and never update until now.
> Updated self hosted engine to 4.4.2 without problems.
>
> My host doesn't have any filter or global_filter set up in lvm.conf  in
> 4.4.0.
>
> So I update it:
>
> [root@ovirt01 vdsm]# yum update
> Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51 PM
> CEST.
> Dependencies resolved.
>
> 
>  Package ArchitectureVersion
> Repository  Size
>
> 
> Installing:
>  ovirt-node-ng-image-update  noarch  4.4.2-1.el8
> ovirt-4.4  782 M
>  replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8
>
> Transaction Summary
>
> 
> Install  1 Package
>
> Total download size: 782 M
> Is this ok [y/N]: y
> Downloading Packages:
> ovirt-node-ng-image-update-4.4  27% [= ] 6.0 MB/s |
> 145 MB 01:45 ETA
>
>
> 
> Total   5.3
> MB/s | 782 MB 02:28
> Running transaction check
> Transaction check succeeded.
> Running transaction test
> Transaction test succeeded.
> Running transaction
>   Preparing:
>  1/1
>   Running scriptlet: 

[ovirt-users] Re: ovirt-node-4.4.2 grub is not reading new grub.cfg at boot

2020-10-01 Thread Amit Bawer
On Thu, Oct 1, 2020 at 4:12 PM Mike Lindsay  wrote:

> Hey Folks,
>
> I've got a bit of a strange one here. I downloaded and installed
> ovirt-node-ng-installer-4.4.2-2020091810.el8.iso today on an old dev
> laptop and to get it to install I needed to add acpi=off to the kernel
> boot param to get the installing to work (known issue with my old
> laptop). After installation it was still booting with acpi=off, no
> biggie (seen that happen with Centos 5,6,7 before on occasion) right,
> just change the line in /etc/defaults/grub and run grub2-mkconfig (ran
> for both efi and legacy for good measure even knowing EFI isn't used)
> and reboot...done this hundreds of times without any problems.
>
> But this time after rebooting if I hit 'e' to look at the kernel
> params on boot, acpi=off is still there. Basically any changes to
> /etc/default/grub are being ignored or over-ridden but I'll be damned
> if I can't find where.
>

According to RHEL information [1] you should be using "grubby" to update
kernel boot parameters; in your case, to drop the leftover parameter:

# grubby --remove-args=acpi=off --update-kernel=ALL

(use --args=... instead when you want to add a parameter)

more acpi=off info in [2]

[1]
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/configuring-kernel-command-line-parameters_managing-monitoring-and-updating-the-kernel
[2]
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-acpi-ca
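To confirm the resulting command line afterwards, the default boot entry can
be inspected with, e.g.:

# grubby --info=DEFAULT | grep args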


> I know I'm missing something simple here, I do this all the time but
> to be honest this is the first Centos 8 based install I've had time to
> play with. Any suggestions would be greatly appreciated.
>
> The drive layout is a bit weird but had no issues running fedora or
> centos in the past. boot drive is a mSATA (/dev/sdb) and there is a
> SSD data drive at /dev/sda...having sda installed or removed makes no
> difference and /boot is mounted where it should /dev/sdb1very
> strange
>
> Cheers,
> Mike
>
> [root@ovirt-node01 ~]# cat /etc/default/grub
> GRUB_TIMEOUT=5
> GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
> GRUB_DEFAULT=saved
> GRUB_DISABLE_SUBMENU=true
> GRUB_TERMINAL_OUTPUT="console"
> GRUB_CMDLINE_LINUX='crashkernel=auto resume=/dev/mapper/onn-swap
> rd.lvm.lv=onn/ovirt-node-ng-4.4.2-0.20200918.0+1 rd.lvm.lv=onn/swap
> noapic rhgb quiet'
> GRUB_DISABLE_RECOVERY="true"
> GRUB_ENABLE_BLSCFG=true
> GRUB_DISABLE_OS_PROBER='true'
>
>
>
> [root@ovirt-node01 ~]# cat /boot/grub2/grub.cfg
> #
> # DO NOT EDIT THIS FILE
> #
> # It is automatically generated by grub2-mkconfig using templates
> # from /etc/grub.d and settings from /etc/default/grub
> #
>
> ### BEGIN /etc/grub.d/00_header ###
> set pager=1
>
> if [ -f ${config_directory}/grubenv ]; then
>   load_env -f ${config_directory}/grubenv
> elif [ -s $prefix/grubenv ]; then
>   load_env
> fi
> if [ "${next_entry}" ] ; then
>set default="${next_entry}"
>set next_entry=
>save_env next_entry
>set boot_once=true
> else
>set default="${saved_entry}"
> fi
>
> if [ x"${feature_menuentry_id}" = xy ]; then
>   menuentry_id_option="--id"
> else
>   menuentry_id_option=""
> fi
>
> export menuentry_id_option
>
> if [ "${prev_saved_entry}" ]; then
>   set saved_entry="${prev_saved_entry}"
>   save_env saved_entry
>   set prev_saved_entry=
>   save_env prev_saved_entry
>   set boot_once=true
> fi
>
> function savedefault {
>   if [ -z "${boot_once}" ]; then
> saved_entry="${chosen}"
> save_env saved_entry
>   fi
> }
>
> function load_video {
>   if [ x$feature_all_video_module = xy ]; then
> insmod all_video
>   else
> insmod efi_gop
> insmod efi_uga
> insmod ieee1275_fb
> insmod vbe
> insmod vga
> insmod video_bochs
> insmod video_cirrus
>   fi
> }
>
> terminal_output console
> if [ x$feature_timeout_style = xy ] ; then
>   set timeout_style=menu
>   set timeout=5
> # Fallback normal timeout code in case the timeout_style feature is
> # unavailable.
> else
>   set timeout=5
> fi
> ### END /etc/grub.d/00_header ###
>
> ### BEGIN /etc/grub.d/00_tuned ###
> set tuned_params=""
> set tuned_initrd=""
> ### END /etc/grub.d/00_tuned ###
>
> ### BEGIN /etc/grub.d/01_users ###
> if [ -f ${prefix}/user.cfg ]; then
>   source ${prefix}/user.cfg
>   if [ -n "${GRUB2_PASSWORD}" ]; then
> set superusers="root"
> export superusers
> password_pbkdf2 root ${GRUB2_PASSWORD}
>   fi
> fi
> ### END /etc/grub.d/01_users ###
>
> ### BEGIN /etc/grub.d/08_fallback_counting ###
> insmod increment
> # Check if boot_counter exists and boot_success=0 to activate this
> behaviour.
> if [ -n "${boot_counter}" -a "${boot_success}" = "0" ]; then
>   # if countdown has ended, choose to boot rollback deployment,
>   # i.e. default=1 on OSTree-based systems.
>   if  [ "${boot_counter}" = "0" -o "${boot_counter}" = "-1" ]; then
> set default=1
> set boot_counter=-1
>   # otherwise decrement boot_counter
>   else
> decrement boot_counter
>   fi
>   save_env 

[ovirt-users] Re: Source storage domain info in storage live migration

2020-09-29 Thread Amit Bawer
On Tue, Sep 29, 2020 at 11:02 AM Gianluca Cecchi 
wrote:

> Hello,
> having to monitor and check live storage migrations, it seems to me that
> scrolling through events I can see information about the target storage
> domain name, but nothing about the source. Is it correct?
>
> Can it be added? Which kind of rfe bugzilla should I use in case? Against
> what?
>

Unless you have a filter set on your event viewer, it is probably simply not
logged.
RFEs for LSM features are usually filed under product/component:
ovirt-engine / BLL.Storage


> Thanks,
> Gianluca
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JJDHV2QV43Y2BAC5IGSOJZEWLXUM55HZ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MLOMRVX2X7YE3AU5BV2QAJCU3ZBDR4Y2/


[ovirt-users] Re: Setting Up oVirt + NFS Storage Issues

2020-08-03 Thread Amit Bawer
On Mon, Aug 3, 2020 at 4:06 PM Arden Shackelford 
wrote:

> Hello!
>
> Been looking to get setup with oVirt for a few weeks and had a chance the
> past week or so to attempt getting it all setup. Ended up doing a bit of
> troubleshooting but finally got to the point where the Cockpit setup
> prompts for me setting up the storage piece, for which I've opted for NFS
> (path of least resistance for now). However, I constantly run into this
> issue (pulled from the administration portal events page):
>

Are your NFS exports permissions set correctly (probably yes if you can see
something created on your share)?
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/sect-preparing_and_adding_nfs_storage

>
> VDSM asgard.shackfam.net command CreateStorageDomainVDS failed: Could not
> initialize cluster lock: ()
>
Can you share the full error trace from vdsm.log ?

>
> This specifically shows up in the following task for the Ansible piece:
>
>  [ovirt.hosted_engine_setup : Add NFS storage domain]
>
> When I browse the NFS share, I do see a file structure being created,
> which kind of confuses me based on the error provide in the Administration
> portal.
>
Can you list your share contents with ls -lhZ ?

>
> Any suggestions on what would be best steps forward at this point?
>
> For reference, here's my setup:
>
> 1x oVirt Node
> 1x NFS server (storage01: NFS shares mainly managed by ZFS)
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BNVXUH5B26FBFCGYLG62JUSB5SOU2MN7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IL4IUW6NK7J6UUSNFRXYYUYU75SGKUCU/


[ovirt-users] Re: Snapshot and disk size allocation

2020-08-02 Thread Amit Bawer
You may also refer to
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/technical_reference/chap-virtual_machine_snapshots
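If you want to inspect the resulting volume chain on a file storage domain
yourself, qemu-img can walk it; the path below is illustrative, use the
actual image directory of the disk:

# qemu-img info --backing-chain /rhev/data-center/mnt/<server>:<export>/<sd-uuid>/images/<disk-uuid>/<top-volume-uuid>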


On Sun, Aug 2, 2020 at 10:35 AM Strahil Nikolov via Users 
wrote:

> It's quite  simple.
>
> For example , you got vm1 with 10GB OS disk that is fully preallocated
> (actual size is really 10GB).
> Now  imagine that you create a snapshot of this vm1 and you download 1GB
> file from another place.
> You will see 2 disk files on the storage domain:
> - The original one that was made read-only during the snapshot process
> - A new 1GB file that represents the delta (changes between snapshot and
> current state)
>
> The second file is read-write and if left as is - it will grow up to 10GB
> (the actual disk of the vm1). When the Virtualization is looking for some
> data it has to search into 2  places which will reduce performance  a
> little bit.If you decide to create another snapshot, the second disk will
> be made  read-only and another one will be created and so on and so on.
>
> When you delete a snapshot, the disks  will be merged in such way so newer
> snapshot disks remain, while the rest are merged into a single file.
>
> Restoring  a snapshot is simplest - everything after that snapshot is
> deleted and the vm1 will use the snapshot disk till you delete (which will
> merge base disk with snapshot disk) that snapshot.
>
>
> Best Regards,
> Strahil Nikolov
>
> На 2 август 2020 г. 3:53:11 GMT+03:00, jorgevisent...@gmail.com написа:
> >Hello everyone.
> >
> >I would like to know how disk size and snapshot allocation works,
> >because every time I create a new snapshot, it increases 1 GB in the
> >VM's disk size, and when I remove the snap, that space is not returned
> >to Domain Storage.
> >
> >I'm using the oVirt 4.3.10
> >
> >How do I reprovision the VM disk?
> >
> >Thank you all.
> >___
> >Users mailing list -- users@ovirt.org
> >To unsubscribe send an email to users-le...@ovirt.org
> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >oVirt Code of Conduct:
> >https://www.ovirt.org/community/about/community-guidelines/
> >List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3TG24XR77PISMBRZ5S5L4P7DV56SUDON/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7AEVC7U4JUFWPAGQRJDN7WIZ37CZK474/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TV52OBEVPWCXYIHX7CMD66E7HFRWEA4Y/


[ovirt-users] Re: iSCSI multipath issue

2020-07-28 Thread Amit Bawer
On Tue, Jul 28, 2020 at 8:56 AM Nick Kas via Users  wrote:

> Hello evryone,
> setup ovirt 4.4.1 on CentOS 8.2 as an experiment, and I am trying to get
> an iSCSI domain working but have issues. The little experimental cluster
> has 3 hosts. There is an ovirtmgmt network on the default vlan, and two
> iSCSI network (172.27.0/1.X) with vlans 20/21. ovirtmgmt has all the
> functions (Data, display, migration etc), and the iSCSI networks nothing
> yet, and they are not set as required.
> The SAN device is already serving a few iSCSI volumes to a vmware cluster,
> so I know things are fine on this end. It has two controllers, and four
> NICs per controller so a total of 8 NICs, half of the NICS per controller
> on 172.27.0.X and half on 172.27.1.X.
>
> When I create the iSCSI domain, I login to only one of the targets, and
> add the Volume, all is good and I can use the disc fine.
> However when I login to more than one of the targets, then I start having
> issues with the Volume. Even when I enabled multipath in the cluster, and I
> created a single multipath by selecting both of the 172.27.0/1.X networks,
> and all the targets, the end result was the same. The hosts have difficulty
> accessing the volume, they may even swing between 'non-operational' and
> 'up' if I transfer data to the volume. When I ssh to the hosts and i check
> things in the command line I also get inconsistent results between hosts,
> and blocks that appear with lsblk when I first setup iSCSI have dissapeared
> after I try to actively use the volume.
>
maybe /var/log/vdsm/vdsm.log could tell more.

>
> I am new to iSCSI so I am not sure how to debug this. I am not sure if my
> multipath configuration is correct or not. The documentation on this part
> was not very detailed. I also tried to remove the domain, and try to
> experiment with mounting the iSCSI volume from the command line, but I
> cannot even discover the target from the command line, which is very
> bizarre. The command
> iscsiadm --mode discovery --target sendtargets --portal 172.27.0.55
> --discover
>
You are mixing "--target" with "--type"; it should be:
iscsiadm --mode discovery --type sendtargets --portal 172.27.0.55
--interface default
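If discovery then succeeds, a manual login to one of the discovered targets
follows the usual pattern (the IQN below is a placeholder for what discovery
returns):

iscsiadm --mode node --targetname iqn.2001-05.com.example:target0 --portal 172.27.0.55 --login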

> returns the message 'iscsiadm: cannot make connection to 172.27.0.55: No
> route to host'. Yet through ovirt, and if I select only one target,
> everything work fine!
>
> Any suggestions on how to start debugging this would really be appreciated.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/N5DNXQ5MAMPXMA3LOHM4RHUZLYKUUMLO/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B5BISBNF273PDSQTNQRC7DPMOIWWM53A/


[ovirt-users] Re: Q: Which types of tests and tools are used?

2020-05-31 Thread Amit Bawer
You can look into the oVirt vdsm pytest README [1]; it is slightly outdated,
as invocation for the 4.4 branch now uses "tox -e storage" (with no "-pyXX"
suffix, since all tests are Python 3 compatible there).
All up-to-date options are listed in tox.ini [2]. If you need to set up your
vdsm dev/test env you can follow [3].
For an engine tests example, you can look at the Maven tests README for DB
models [4]

[1] https://github.com/oVirt/vdsm/blob/master/tests/README
[2] https://github.com/oVirt/vdsm/blob/master/tox.ini
[3] https://github.com/oVirt/vdsm/blob/master/README.md
[4]
https://github.com/oVirt/ovirt-engine/blob/ede62008318d924556bc9dfc5710d90e9519670d/backend/manager/modules/dal/README
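As a concrete starting point, running the storage suite locally on a 4.4
checkout would look roughly like this (assuming tox and the build
dependencies from [3] are installed):

git clone https://github.com/oVirt/vdsm.git
cd vdsm
tox -e storage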




On Sun, May 31, 2020 at 10:46 PM Juergen Novak  wrote:

> Hi,
>
> can anybody help me to find some information about test types used in
> the project and tools used?
>
> Particularly interesting would be tools and tests used for the Python
> coding, but also any information about Java would be appreciated.
>
>
> I already scanned the documentation, but I mainly found only information
> about Mocking tools.
>
> Thank you!
>
> /juergen
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OVHS6QUEGSNLZRKXIKDQFR6PKYKL4CBE/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HUBVO5J6Q3SJUP53CCIZM45SP6TYBZZH/


[ovirt-users] Re: ovirt imageio problem...

2020-05-31 Thread Amit Bawer
On Sun, May 31, 2020 at 8:09 AM Strahil Nikolov via Users 
wrote:

> And what  about https://bugzilla.redhat.com/show_bug.cgi?id=1787906
> Do we  have any validation of the checksum via the python script ?
>
No integrated checksum.
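As a rough manual workaround, you can compute a digest on both ends and
compare them yourself, e.g.:

sha256sum original.img
sha256sum downloaded.img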

>
>
> Best Regards,
> Strahil Nikolov
>
> На 31 май 2020 г. 0:18:43 GMT+03:00, Carlos C 
> написа:
> >Hi,
> >
> >You can try upload using the python as described here
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GWYRK6LUNHU6FELP7QYSDIPF5SR6YPCK/
> >
> >
> >Carlos
> >___
> >Users mailing list -- users@ovirt.org
> >To unsubscribe send an email to users-le...@ovirt.org
> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >oVirt Code of Conduct:
> >https://www.ovirt.org/community/about/community-guidelines/
> >List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/A2JW27WDLI5X2KWWZ47T7TNZCUOJMD32/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DN3QCX3LOCSUWXPL7X64OHUC6UAQADI4/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KOA5KOQMEFVWHO6DZTKKI6NHMPFPSL5X/


[ovirt-users] Re: ovirt imageio problem...

2020-05-30 Thread Amit Bawer
On Saturday, May 30, 2020, matteo fedeli  wrote:

> Hi! I' installed CentOS 8 and ovirt package following this step:
>
> systemctl enable --now cockpit.socket
> yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
> yum module -y enable javapackages-tools
> yum module -y enable pki-deps
> yum module -y enable postgresql:12
> yum -y install glibc-locale-source glibc-langpack-en
> localedef -v -c -i en_US -f UTF-8 en_US.UTF-8
> yum update
> yum install ovirt-engine
> engine-setup (by keeping all default)
>
> It's possible ovirt-imageio-proxy service is not installed? (service
> ovirt-imageio-proxy status --> not found, yum install ovirt-imageio-proxy
> --> not found) I'm not able to upload iso... I also installed CA cert in
> firefox...


oVirt imageio is undergoing development and is not part of the currently
available release.

You can upload ISO images by creating an NFS ISO domain and copying your ISO
files into the images subfolder there whose name is all '1's
(11111111-1111-1111-1111-111111111111).
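For example, assuming the ISO domain is mounted on the host (paths are
illustrative, <sd-uuid> is your ISO domain's UUID):

# cp CentOS-8.iso /rhev/data-center/mnt/<server>:<export>/<sd-uuid>/images/11111111-1111-1111-1111-111111111111/
# chown 36:36 /rhev/data-center/mnt/<server>:<export>/<sd-uuid>/images/11111111-1111-1111-1111-111111111111/CentOS-8.iso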

___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/E5AK75SOMAVBSWKGVLCJPFFK2RXCZO7N/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CWUAOQAIXCHTFUFLVA42W6FWHWBERFI4/


[ovirt-users] Re: oVirt 4.4 HE on Copy local VM disk to shared storage (NFS) failing

2020-05-27 Thread Amit Bawer
Maybe someone from virt team could refer to this.

On Wed, May 27, 2020 at 2:06 PM Gianluca Cecchi 
wrote:

> On Wed, May 27, 2020 at 11:26 AM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Wed, May 27, 2020 at 10:21 AM Amit Bawer  wrote:
>>
>>> From your vdsm log, it seems as occurrence of an issue discussed at
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JE76KK2WFDCDJL3DF52OGOLGXWHWPEDA/
>>>
>>> 2020-05-26 19:51:53,837-0400 ERROR (vm/90f09a7c) [virt.vm]
>>> (vmId='90f09a7c-3af1-45a9-a210-78a9b0cd4c3d') The vm start process failed
>>> (vm:871)
>>> Traceback (most recent call last):
>>>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 801, in
>>> _startUnderlyingVm
>>> self._run()
>>>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2608, in
>>> _run
>>> dom.createWithFlags(flags)
>>>   File
>>> "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line
>>> 131, in wrapper
>>> ret = f(*args, **kwargs)
>>>   File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line
>>> 94, in wrapper
>>> return func(inst, *args, **kwargs)
>>>   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1166, in
>>> createWithFlags
>>> if ret == -1: raise libvirtError ('virDomainCreateWithFlags()
>>> failed', dom=self)
>>> libvirt.libvirtError: unsupported configuration: unknown CPU feature:
>>> tsx-ctrl
>>>
>>>
>>> On Wed, May 27, 2020 at 9:15 AM Patrick Lomakin <
>>> patrick.loma...@gmail.com> wrote:
>>>
>>>> I think the problem with a storage connection. Verify your IP addresses
>>>> on the storage adapter port. Are they connected? After install my ports for
>>>> storage was deactivated by defaut. Try to connect your iSCSI manually with
>>>> iscsiadm command and then check your storage connection in storage tab
>>>> using a host admin portal.
>>>>
>>>
>> Yes, in that thread I asked if possible to specify in some way an
>> alternate cpu type and/or flags to be setup for self hosted engine vm,
>> without setup to automatically detect/configure it, but I didn't receive an
>> answer.
>> Because the cpu flag is there also in 4.3.9 but doesn't create problems
>> with self hosted engine or host.
>>
>> Gianluca
>>
>
> BTW: is there anything special we can do to help to have this cpu as
> supported?
>
> model name : Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz
>
> So that others could have better and wider options and also RHV benefit
> from it?
> Also, I now find here below that Cascadelake is supposed to be supported:
>
> https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_cockpit_web_interface/#CPU_Requirements_SHE_cockpit_deploy
>
>
> Thanks,
> Gianluca
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TVOTLOCNG44ZEZ7NHKBB7BOPXPCMMN3O/


[ovirt-users] Re: oVirt 4.4 HE on Copy local VM disk to shared storage (NFS) failing

2020-05-27 Thread Amit Bawer
From your vdsm log, it seems to be an occurrence of an issue discussed at
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JE76KK2WFDCDJL3DF52OGOLGXWHWPEDA/

2020-05-26 19:51:53,837-0400 ERROR (vm/90f09a7c) [virt.vm]
(vmId='90f09a7c-3af1-45a9-a210-78a9b0cd4c3d') The vm start process failed
(vm:871)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 801, in
_startUnderlyingVm
self._run()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2608, in
_run
dom.createWithFlags(flags)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
line 131, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94,
in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1166, in
createWithFlags
if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed',
dom=self)
libvirt.libvirtError: unsupported configuration: unknown CPU feature:
tsx-ctrl


On Wed, May 27, 2020 at 9:15 AM Patrick Lomakin 
wrote:

> I think the problem with a storage connection. Verify your IP addresses on
> the storage adapter port. Are they connected? After install my ports for
> storage was deactivated by defaut. Try to connect your iSCSI manually with
> iscsiadm command and then check your storage connection in storage tab
> using a host admin portal.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KNQTADDDEIK2V35TSNPSE7JNC6JNF4N3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VLH5BCT6VOFVTSQAG6ZSLGV5UXMMJJ4U/


[ovirt-users] Re: POWER9 Support: VDSM requiring LVM2 package that's missing

2020-05-14 Thread Amit Bawer
On Fri, May 15, 2020 at 12:19 AM Vinícius Ferrão via Users 
wrote:

> Hello,
>
> I would like to know if this is a bug or not, if yes I will submit to Red
> Hat.
>
Fixed in vdsm-4.30.46

>
> I’m trying to add a ppc64le (POWER9) machine to the hosts pool, but
> there’s missing dependencies on VDSM:
>
> --> Processing Dependency: lvm2 >= 7:2.02.186-7.el7_8.1 for package:
> vdsm-4.30.44-1.el7ev.ppc64le
> --> Finished Dependency Resolution
> Error: Package: vdsm-4.30.44-1.el7ev.ppc64le
> (rhel-7-server-rhv-4-mgmt-agent-for-power-9-rpms)
>Requires: lvm2 >= 7:2.02.186-7.el7_8.1
>Available: 7:lvm2-2.02.171-8.el7.ppc64le
> (rhel-7-for-power-9-rpms)
>lvm2 = 7:2.02.171-8.el7
>Available: 7:lvm2-2.02.177-4.el7.ppc64le
> (rhel-7-for-power-9-rpms)
>lvm2 = 7:2.02.177-4.el7
>Available: 7:lvm2-2.02.180-8.el7.ppc64le
> (rhel-7-for-power-9-rpms)
>lvm2 = 7:2.02.180-8.el7
>Available: 7:lvm2-2.02.180-10.el7_6.1.ppc64le
> (rhel-7-for-power-9-rpms)
>lvm2 = 7:2.02.180-10.el7_6.1
>Available: 7:lvm2-2.02.180-10.el7_6.2.ppc64le
> (rhel-7-for-power-9-rpms)
>lvm2 = 7:2.02.180-10.el7_6.2
>Available: 7:lvm2-2.02.180-10.el7_6.3.ppc64le
> (rhel-7-for-power-9-rpms)
>lvm2 = 7:2.02.180-10.el7_6.3
>Available: 7:lvm2-2.02.180-10.el7_6.7.ppc64le
> (rhel-7-for-power-9-rpms)
>lvm2 = 7:2.02.180-10.el7_6.7
>Available: 7:lvm2-2.02.180-10.el7_6.8.ppc64le
> (rhel-7-for-power-9-rpms)
>lvm2 = 7:2.02.180-10.el7_6.8
>Installing: 7:lvm2-2.02.180-10.el7_6.9.ppc64le
> (rhel-7-for-power-9-rpms)
>lvm2 = 7:2.02.180-10.el7_6.9
>
>
> Thanks,
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/I3YDM2VN7K2GHNLNLWCEXZRSAHI4F4L7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5DQUI7PPI45ZDJJFBHNZ6OK765XLI3CQ/


[ovirt-users] Re: When and how to change VDSM REVISION in multipath.conf

2020-03-22 Thread Amit Bawer
On Thu, Mar 19, 2020 at 10:16 AM Gianluca Cecchi 
wrote:

> Hello,
> I created some hosts at time of 4.3.3 or similar and connecting to iSCSI I
> set this in multipath.conf to specify customization
>
> "
> # VDSM REVISION 1.5
> # VDSM PRIVATE
>
> # This file is managed by vdsm.
> ...
> "
>
> Then I updated the environment gradually to 4.3.8 but of course the file
> remained the same because of its configuration and the "PRIVATE" label.
> Now I install new hosts in 4.3.8 and I see that by default they have these
> lines
>
> "
> # VDSM REVISION 1.8
>
> # This file is managed by vdsm.
> "
>
> So the question is:
> suppose I customize the multipath.conf file using the "# VDSM PRIVATE"
> line, how do I have to manage the REVISION number as time goes on and I
> execute minor updates to hosts?
> What will it change between 1.5 above and 1.8? Any impact if I leave 1.5
> for example across all my 4.3.8 hosts?
>
You are not supposed to edit the revision number of the conf file, as it is
managed internally by vdsm for reconfiguration checks on upgrades.
For the implications, you can compare the 1.5 and 1.8 files to see the diff.
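
If it helps to see what a host currently has, a tiny sketch (mine, not vdsm
code) that prints the markers vdsm looks at in the current file:

# print the "# VDSM REVISION" / "# VDSM PRIVATE" markers from multipath.conf
with open("/etc/multipath.conf") as f:
    for line in f:
        line = line.strip()
        if line.startswith("# VDSM"):
            print(line)

A "# VDSM PRIVATE" line tells vdsm not to touch the file on configure/upgrade,
regardless of the revision number it finds there.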

>
> Thanks in advance for clarification,
> Gianluca
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RHCG3OTTCRXKPZDQAXP4R65NWQK2IXLF/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PZCM7DR7A4ZMX65RBSQUIPWHQIO7WSUQ/


[ovirt-users] Re: Info on python SDK variable definition

2020-02-27 Thread Amit Bawer
Maybe you meant to use a formatted string:

my_var3 = "prefix_%s" % suffix

Or

 my_var3 = "prefix_{}".format(suffix)

where suffix has some runtime string result.
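
For example, with the names from your question (using a stand-in object here,
since I don't have your SDK objects at hand):

class FakeSd(object):       # stand-in for the object you get at runtime
    name = "suffix"

my_var1 = "prefix"
my_var2 = FakeSd()
my_var3 = "%s_%s" % (my_var1, my_var2.name)        # -> "prefix_suffix"
# or equivalently:
my_var3 = "{}_{}".format(my_var1, my_var2.name)
print(my_var3)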

On Thursday, February 27, 2020, Gianluca Cecchi 
wrote:

> Hello,
> suppose in a .py using sdk I have
>
> my_var1 = 'prefix'
>
> then in the workflow I get at runtime another my_var2 that for example has
> the attribute
> my_var2.name that equals "suffix"
>
> Then I want to define a new variable my_var3 with value "prefix_suffix",
> where the string "suffix" is known only at runtime, how can I get it?
>
> my_var3 =  ???
>
> Can I use something similar to what I do when using the print function?
>
> print ("My storage domain is %s..." % sd.name)
> ?
> Thanks,
> Gianluca
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L5KSTMULHMGFQFLJNOZGQEB7B2E3ZBXD/


[ovirt-users] Re: issue connecting 4.3.8 node to nfs domain

2020-02-10 Thread Amit Bawer
On Mon, Feb 10, 2020 at 4:13 PM Jorick Astrego  wrote:

>
> On 2/10/20 1:27 PM, Jorick Astrego wrote:
>
> Hmm, I didn't notice that.
>
> I did a check on the NFS server and I found the
> "1ed0a635-67ee-4255-aad9-b70822350706" in the exportdom path
> (/data/exportdom).
>
> This was an old NFS export domain that has been deleted for a while now. I
> remember finding somewhere an issue with old domains still being active
> after removal but I cannot find it now.
>
> I unexported the directory on the nfs server and now I have to correct
> mount and it activates fine.
>
> Thanks!
>
> Still weird that it picks another nfs mount path to mount that has been
> removed months ago from engine.
>
This is because vdsm scans for domains on the storage itself, i.e. looking up
under /rhev/data-center/mnt/* in the case of nfs domains [1]


> It's not listed in the database on engine:
>
The table lists the valid domains known to the engine; removals/additions of
storage domains update this table.

If you removed the old nfs domain but the nfs storage was not available at
the time (i.e. not mounted), then the storage format could fail silently [2]
and yet this table would still be updated for the SD removal [3].

I haven't tested this out, and one may need to unmount at a very specific
moment to hit [2], but looking around with the kind assistance of +Benny
Zlotnik on the engine side makes this assumption seem plausible.

[1]
https://github.com/oVirt/vdsm/blob/821afbbc238ba379c12666922fc1ac80482ee383/lib/vdsm/storage/fileSD.py#L888
[2]
https://github.com/oVirt/vdsm/blob/master/lib/vdsm/storage/fileSD.py#L628
[3]
https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/storage/domain/RemoveStorageDomainCommand.java#L77
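
If it helps to double-check for other leftovers, here is a small sketch (mine,
not vdsm code) that lists domain-looking directories under the NFS mounts; you
can compare its output against the storage_domain_static table below:

# list UUID-named directories under the NFS mountpoints vdsm scans
import glob, os, re

UUID_RE = re.compile(r"^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$")
for mnt in glob.glob("/rhev/data-center/mnt/*"):
    if not os.path.isdir(mnt):
        continue
    for entry in os.listdir(mnt):
        if UUID_RE.match(entry):
            print(os.path.join(mnt, entry))

Any UUID printed here but missing from the engine table would be a stale domain
directory that vdsm may still pick up when scanning the storage.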

engine=# select * from storage_domain_static ;
>   id  |
> storage|  storage_name  | storage_domain_type |
> storage_type | storage_domain_format_type | _create_date
> | _update_date  | reco
> verable | last_time_used_as_master |storage_description |
> storage_comment | wipe_after_delete | warning_low_space_indicator |
> critical_space_action_blocker | first_metadata_device | vg_metadata_device
> | discard_after_delet
> e | backup | warning_low_confirmed_space_indicator | block_size
>
> --+--++-+--++---+---+-
>
> +--++-+---+-+---+---++
> --++---+
>  782a61af-a520-44c4-8845-74bf92888552 |
> 640ab34d-aa5d-478b-97be-e3f810558628 | ISO_DOMAIN
> |   2 |1 | 0  |
> 2017-11-16 09:49:49.225478+01 |   | t
> |0 | ISO_DOMAIN
> | | f |
> |   |
> || f
>   | f  |   |512
>  072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 |
> ceab03af-7220-4d42-8f5c-9b557f5d29af | ovirt-image-repository
> |   4 |8 | 0  |
> 2016-10-14 20:40:44.700381+02 | 2018-04-06 14:03:31.201898+02 | t
> |0 | Public Glance repository for oVirt
> | | f |
> |   |
> || f
>   | f  |   |512
>  b30bab9d-9a66-44ce-ad17-2eb4ee858d8f |
> 40d191b0-b7f8-48f9-bf6f-327275f51fef | ssd-6
> |   1 |7 | 4  |
> 2017-06-25 12:45:24.52974+02  | 2019-01-24 15:35:57.013832+01 | t
> |1498461838176 |
> | | f |  10
> | 5 |
> || f
>   | f  |   |512
>  95b4e5d2-2974-4d5f-91e4-351f75a15435 |
> f11fed97-513a-4a10-b85c-2afe68f42608 | ssd-3
> |   1 |7 | 4  |
> 2019-01-10 12:15:55.20347+01  | 2019-01-24 15:35:57.013832+01 | t
> |0 |
> | | f |  10
> | 5 |
> || f
>   | f  |10 |512
>  f5d2f7c6-093f-46d6-a844-224d92db5ef9 |
> b8b456f0-27c3-49b9-b5e9-9fa81fb3cdaa | backupnfs
> |   1 |1 | 4  |
> 2018-01-19 13:31:25.899738+01 | 2019-02-14 

[ovirt-users] Re: issue connecting 4.3.8 node to nfs domain

2020-02-10 Thread Amit Bawer
On Mon, Feb 10, 2020 at 2:27 PM Jorick Astrego  wrote:

>
> On 2/10/20 11:09 AM, Amit Bawer wrote:
>
> compared it with host having nfs domain working
> this
>
> On Mon, Feb 10, 2020 at 11:11 AM Jorick Astrego 
> wrote:
>
>>
>> On 2/9/20 10:27 AM, Amit Bawer wrote:
>>
>>
>>
>> On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego 
>> wrote:
>>
>>> Hi,
>>>
>>> Something weird is going on with our ovirt node 4.3.8 install mounting a
>>> nfs share.
>>>
>>> We have a NFS domain for a couple of backup disks and we have a couple
>>> of 4.2 nodes connected to it.
>>>
>>> Now I'm adding a fresh cluster of 4.3.8 nodes and the backupnfs mount
>>> doesn't work.
>>>
>>> (annoying you cannot copy the text from the events view)
>>>
>>> The domain is up and working
>>>
>>> ID:f5d2f7c6-093f-46d6-a844-224d92db5ef9
>>> Size: 10238 GiB
>>> Available:2491 GiB
>>> Used:7747 GiB
>>> Allocated: 3302 GiB
>>> Over Allocation Ratio:37%
>>> Images:7
>>> Path:*.*.*.*:/data/ovirt
>>> NFS Version: AUTO
>>> Warning Low Space Indicator:10% (1023 GiB)
>>> Critical Space Action Blocker:5 GiB
>>>
>>> But somehow the node appears to thin thinks it's an LVM volume? It tries
>>> to find the VGs volume group but fails... which is not so strange as it is
>>> an NFS volume:
>>>
>>> 2020-02-05 14:17:54,190+ WARN  (monitor/f5d2f7c) [storage.LVM]
>>> Reloading VGs failed (vgs=[u'f5d2f7c6-093f-46d6-a844-224d92db5ef9'] rc=5
>>> out=[] err=['  Volume group "f5d2f7c6-093f-46d6-a844-224d92db5ef9" not
>>> found', '  Cannot process volume group
>>> f5d2f7c6-093f-46d6-a844-224d92db5ef9']) (lvm:470)
>>> 2020-02-05 14:17:54,201+ ERROR (monitor/f5d2f7c) [storage.Monitor]
>>> Setting up monitor for f5d2f7c6-093f-46d6-a844-224d92db5ef9 failed
>>> (monitor:330)
>>> Traceback (most recent call last):
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
>>> 327, in _setupLoop
>>> self._setupMonitor()
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
>>> 349, in _setupMonitor
>>> self._produceDomain()
>>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159, in
>>> wrapper
>>> value = meth(self, *a, **kw)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
>>> 367, in _produceDomain
>>> self.domain = sdCache.produce(self.sdUUID)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110,
>>> in produce
>>> domain.getRealDomain()
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51,
>>> in getRealDomain
>>> return self._cache._realProduce(self._sdUUID)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134,
>>> in _realProduce
>>> domain = self._findDomain(sdUUID)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151,
>>> in _findDomain
>>> return findMethod(sdUUID)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176,
>>> in _findUnfetchedDomain
>>> raise se.StorageDomainDoesNotExist(sdUUID)
>>> StorageDomainDoesNotExist: Storage domain does not exist:
>>> (u'f5d2f7c6-093f-46d6-a844-224d92db5ef9',)
>>>
>>> The volume is actually mounted fine on the node:
>>>
>>> On NFS server
>>>
>>> Feb  5 15:47:09 back1en rpc.mountd[4899]: authenticated mount request
>>> from *.*.*.*:673 for /data/ovirt (/data/ovirt)
>>>
>>> On the host
>>>
>>> mount|grep nfs
>>>
>>> *.*.*.*:/data/ovirt on /rhev/data-center/mnt/*.*.*.*:_data_ovirt type
>>> nfs
>>> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=*.*.*.*,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=*.*.*.*)
>>>
>>> And I can see the files:
>>>
>>> ls -alrt /rhev/data-center/mnt/*.*.*.*:_data_ovirt
>>> total 4
>>> drwxr-xr-x. 5 vdsm kvm61 Oct 26  2016
>>> 1ed0a635-67ee-4255-aad9-b70822350706
>>>
>>>
>> What ls -lart for 1ed0a635-67ee-4255-aad9-b70822350706 is showing?
>>
>> ls -arlt 1ed0a6

[ovirt-users] Re: issue connecting 4.3.8 node to nfs domain

2020-02-10 Thread Amit Bawer
Compared it with a host having a working nfs domain, see below.

On Mon, Feb 10, 2020 at 11:11 AM Jorick Astrego  wrote:

>
> On 2/9/20 10:27 AM, Amit Bawer wrote:
>
>
>
> On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego  wrote:
>
>> Hi,
>>
>> Something weird is going on with our ovirt node 4.3.8 install mounting a
>> nfs share.
>>
>> We have a NFS domain for a couple of backup disks and we have a couple of
>> 4.2 nodes connected to it.
>>
>> Now I'm adding a fresh cluster of 4.3.8 nodes and the backupnfs mount
>> doesn't work.
>>
>> (annoying you cannot copy the text from the events view)
>>
>> The domain is up and working
>>
>> ID:f5d2f7c6-093f-46d6-a844-224d92db5ef9
>> Size: 10238 GiB
>> Available:2491 GiB
>> Used:7747 GiB
>> Allocated: 3302 GiB
>> Over Allocation Ratio:37%
>> Images:7
>> Path:*.*.*.*:/data/ovirt
>> NFS Version: AUTO
>> Warning Low Space Indicator:10% (1023 GiB)
>> Critical Space Action Blocker:5 GiB
>>
>> But somehow the node appears to thin thinks it's an LVM volume? It tries
>> to find the VGs volume group but fails... which is not so strange as it is
>> an NFS volume:
>>
>> 2020-02-05 14:17:54,190+ WARN  (monitor/f5d2f7c) [storage.LVM]
>> Reloading VGs failed (vgs=[u'f5d2f7c6-093f-46d6-a844-224d92db5ef9'] rc=5
>> out=[] err=['  Volume group "f5d2f7c6-093f-46d6-a844-224d92db5ef9" not
>> found', '  Cannot process volume group
>> f5d2f7c6-093f-46d6-a844-224d92db5ef9']) (lvm:470)
>> 2020-02-05 14:17:54,201+ ERROR (monitor/f5d2f7c) [storage.Monitor]
>> Setting up monitor for f5d2f7c6-093f-46d6-a844-224d92db5ef9 failed
>> (monitor:330)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
>> 327, in _setupLoop
>> self._setupMonitor()
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
>> 349, in _setupMonitor
>> self._produceDomain()
>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159, in
>> wrapper
>> value = meth(self, *a, **kw)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
>> 367, in _produceDomain
>> self.domain = sdCache.produce(self.sdUUID)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110,
>> in produce
>> domain.getRealDomain()
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51,
>> in getRealDomain
>> return self._cache._realProduce(self._sdUUID)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134,
>> in _realProduce
>> domain = self._findDomain(sdUUID)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151,
>> in _findDomain
>> return findMethod(sdUUID)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176,
>> in _findUnfetchedDomain
>> raise se.StorageDomainDoesNotExist(sdUUID)
>> StorageDomainDoesNotExist: Storage domain does not exist:
>> (u'f5d2f7c6-093f-46d6-a844-224d92db5ef9',)
>>
>> The volume is actually mounted fine on the node:
>>
>> On NFS server
>>
>> Feb  5 15:47:09 back1en rpc.mountd[4899]: authenticated mount request
>> from *.*.*.*:673 for /data/ovirt (/data/ovirt)
>>
>> On the host
>>
>> mount|grep nfs
>>
>> *.*.*.*:/data/ovirt on /rhev/data-center/mnt/*.*.*.*:_data_ovirt type nfs
>> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=*.*.*.*,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=*.*.*.*)
>>
>> And I can see the files:
>>
>> ls -alrt /rhev/data-center/mnt/*.*.*.*:_data_ovirt
>> total 4
>> drwxr-xr-x. 5 vdsm kvm61 Oct 26  2016
>> 1ed0a635-67ee-4255-aad9-b70822350706
>>
>>
> What ls -lart for 1ed0a635-67ee-4255-aad9-b70822350706 is showing?
>
> ls -arlt 1ed0a635-67ee-4255-aad9-b70822350706/
> total 4
> drwxr-xr-x. 2 vdsm kvm93 Oct 26  2016 dom_md
> drwxr-xr-x. 5 vdsm kvm61 Oct 26  2016 .
> drwxr-xr-x. 4 vdsm kvm40 Oct 26  2016 master
> drwxr-xr-x. 5 vdsm kvm  4096 Oct 26  2016 images
> drwxrwxrwx. 3 root root   86 Feb  5 14:37 ..
>
On a working nfs domain host we have the following storage hierarchy;
feece142-9e8d-42dc-9873-d154f60d0aac is the nfs domain in my case:

/rhev/data-center/
├── edefe626-3ada-11ea-9877-525400b37767
...
│   ├── feece142-9e8d-42dc-9873-d154f60d0aac ->
/rhev/data-cent

[ovirt-users] Re: issue connecting 4.3.8 node to nfs domain

2020-02-09 Thread Amit Bawer
On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego  wrote:

> Hi,
>
> Something weird is going on with our ovirt node 4.3.8 install mounting a
> nfs share.
>
> We have a NFS domain for a couple of backup disks and we have a couple of
> 4.2 nodes connected to it.
>
> Now I'm adding a fresh cluster of 4.3.8 nodes and the backupnfs mount
> doesn't work.
>
> (annoying you cannot copy the text from the events view)
>
> The domain is up and working
>
> ID:f5d2f7c6-093f-46d6-a844-224d92db5ef9
> Size: 10238 GiB
> Available:2491 GiB
> Used:7747 GiB
> Allocated: 3302 GiB
> Over Allocation Ratio:37%
> Images:7
> Path:*.*.*.*:/data/ovirt
> NFS Version: AUTO
> Warning Low Space Indicator:10% (1023 GiB)
> Critical Space Action Blocker:5 GiB
>
> But somehow the node appears to thin thinks it's an LVM volume? It tries
> to find the VGs volume group but fails... which is not so strange as it is
> an NFS volume:
>
> 2020-02-05 14:17:54,190+ WARN  (monitor/f5d2f7c) [storage.LVM]
> Reloading VGs failed (vgs=[u'f5d2f7c6-093f-46d6-a844-224d92db5ef9'] rc=5
> out=[] err=['  Volume group "f5d2f7c6-093f-46d6-a844-224d92db5ef9" not
> found', '  Cannot process volume group
> f5d2f7c6-093f-46d6-a844-224d92db5ef9']) (lvm:470)
> 2020-02-05 14:17:54,201+ ERROR (monitor/f5d2f7c) [storage.Monitor]
> Setting up monitor for f5d2f7c6-093f-46d6-a844-224d92db5ef9 failed
> (monitor:330)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
> 327, in _setupLoop
> self._setupMonitor()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
> 349, in _setupMonitor
> self._produceDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159, in
> wrapper
> value = meth(self, *a, **kw)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
> 367, in _produceDomain
> self.domain = sdCache.produce(self.sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110,
> in produce
> domain.getRealDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in
> getRealDomain
> return self._cache._realProduce(self._sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134,
> in _realProduce
> domain = self._findDomain(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151,
> in _findDomain
> return findMethod(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176,
> in _findUnfetchedDomain
> raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist:
> (u'f5d2f7c6-093f-46d6-a844-224d92db5ef9',)
>
> The volume is actually mounted fine on the node:
>
> On NFS server
>
> Feb  5 15:47:09 back1en rpc.mountd[4899]: authenticated mount request from
> *.*.*.*:673 for /data/ovirt (/data/ovirt)
>
> On the host
>
> mount|grep nfs
>
> *.*.*.*:/data/ovirt on /rhev/data-center/mnt/*.*.*.*:_data_ovirt type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=*.*.*.*,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=*.*.*.*)
>
> And I can see the files:
>
> ls -alrt /rhev/data-center/mnt/*.*.*.*:_data_ovirt
> total 4
> drwxr-xr-x. 5 vdsm kvm61 Oct 26  2016
> 1ed0a635-67ee-4255-aad9-b70822350706
>
>
What ls -lart for 1ed0a635-67ee-4255-aad9-b70822350706 is showing?


> -rwxr-xr-x. 1 vdsm kvm 0 Feb  5 14:37 __DIRECT_IO_TEST__
> drwxrwxrwx. 3 root root   86 Feb  5 14:37 .
> drwxr-xr-x. 5 vdsm kvm  4096 Feb  5 14:37 ..
>
>
>
>
>
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
>
> *Netbulae Virtualization Experts *
> --
> Tel: 053 20 30 270 i...@netbulae.eu Staalsteden 4-3A KvK 08198180
> Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
> --
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IFTO5WBLVLGTVWKYN3BGLOHAC453UBD5/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6VO6WD7MJWBJJXG4FTW7PUSOXHIZDHD3/


[ovirt-users] Re: issue connecting 4.3.8 node to nfs domain

2020-02-08 Thread Amit Bawer
I doubt you can use 4.3.8 nodes with a 4.2 cluster without upgrading it
first, but maybe members of this list can say differently.

On Friday, February 7, 2020, Jorick Astrego  wrote:

>
> On 2/6/20 6:22 PM, Amit Bawer wrote:
>
>
>
> On Thu, Feb 6, 2020 at 2:54 PM Jorick Astrego  wrote:
>
>>
>> On 2/6/20 1:44 PM, Amit Bawer wrote:
>>
>>
>>
>> On Thu, Feb 6, 2020 at 1:07 PM Jorick Astrego  wrote:
>>
>>> Here you go, this is from the activation I just did a couple of minutes
>>> ago.
>>>
>> I was hoping to see how it was first connected to host, but it doesn't go
>> that far back. Anyway, the storage domain type is set from engine and vdsm
>> never try to guess it as far as I saw.
>>
>> I put the host in maintenance and activated it again, this should give
>> you some more info. See attached log.
>>
>> Could you query the engine db about the misbehaving domain and paste the
>> results?
>>
>> # su - postgres
>> Last login: Thu Feb  6 07:17:52 EST 2020 on pts/0
>> -bash-4.2$ LD_LIBRARY_PATH=/opt/rh/rh-postgresql10/root/lib64/
>> /opt/rh/rh-postgresql10/root/usr/bin/psql engine
>> psql (10.6)
>> Type "help" for help.
>> engine=# select * from storage_domain_static where id = '
>> f5d2f7c6-093f-46d6-a844-224d92db5ef9' ;
>>
>>
>> engine=# select * from storage_domain_static where id =
>> 'f5d2f7c6-093f-46d6-a844-224d92db5ef9' ;
>>   id  |
>> storage| storage_name | storage_domain_type | storage_type
>> | storage_domain_format_type | _create_date  |
>> _update_date | recoverable | la
>> st_time_used_as_master | storage_description | storage_comment |
>> wipe_after_delete | warning_low_space_indicator |
>> critical_space_action_blocker | first_metadata_device | vg_metadata_device
>> | discard_after_delete | backup | warning_low_co
>> nfirmed_space_indicator | block_size
>> --+-
>> -+--+-+-
>> -++-
>> --+-+-+---
>> ---+-+--
>> ---+---+-+--
>> -+---+--
>> --+--++---
>> +
>>  f5d2f7c6-093f-46d6-a844-224d92db5ef9 | b8b456f0-27c3-49b9-b5e9-9fa81fb3cdaa
>> | backupnfs|   1 |1 |
>> 4  | 2018-01-19 13:31:25.899738+01 | 2019-02-14
>> 14:36:22.3171+01 | t   |
>>  1530772724454 | | |
>> f |  10
>> | 5 |
>> || f| f  |
>>   0 |512
>> (1 row)
>>
>>
>>
> Thanks for sharing,
>
> The storage_type in db is indeed NFS (1), storage_domain_format_type is 4
> - for ovirt 4.3 the storage_domain_format_type is 5 by default and usually
> datacenter upgrade is required for 4.2 to 4.3 migration, which not sure if
> possible in your current setup since you have 4.2 nodes using this storage
> as well.
>
> Regarding the repeating monitor failure for the SD:
>
> 2020-02-05 14:17:54,190+ WARN  (monitor/f5d2f7c) [storage.LVM]
> Reloading VGs failed (vgs=[u'f5d2f7c6-093f-46d6-a844-224d92db5ef9'] rc=5
> out=[] err=['  Volume group "f5d2f7c6-093f-46d6-a844-224d92db5ef9" not
> found', '  Cannot process volume group f5d2f7c6-093f-46d6-a844-224d92db5ef9'])
> (lvm:470)
>
> This error means that the monitor has tried to query the SD as a VG first
> and failed, this is expected for the fallback code called for finding a
> domain missing from SD cache:
>
> def _findUnfetchedDomain(self, sdUUID):
>     ...
>     for mod in (blockSD, glusterSD, localFsSD, nfsSD):
>         try:
>             return mod.findDomain(sdUUID)
>         except se.StorageDomainDoesNotExist:
>             pass
>         except Exception:
>             self.log.error(
>                 "Error while looking for domain `%s`",
>                 sdUUID, exc_info=True)
>     raise se.StorageDomainDoesNotExist(sdUUID)
>
> 2020-02-05 14:17:54,201+ ERROR (monitor/f5d2f7c) [storage.Monitor]
> Setting up monitor for f5d2f7c6-093f-46d6-a844-224d92db5ef9 failed
> (monitor:330)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py"

[ovirt-users] Re: issue connecting 4.3.8 node to nfs domain

2020-02-06 Thread Amit Bawer
On Thu, Feb 6, 2020 at 2:54 PM Jorick Astrego  wrote:

>
> On 2/6/20 1:44 PM, Amit Bawer wrote:
>
>
>
> On Thu, Feb 6, 2020 at 1:07 PM Jorick Astrego  wrote:
>
>> Here you go, this is from the activation I just did a couple of minutes
>> ago.
>>
> I was hoping to see how it was first connected to host, but it doesn't go
> that far back. Anyway, the storage domain type is set from engine and vdsm
> never try to guess it as far as I saw.
>
> I put the host in maintenance and activated it again, this should give you
> some more info. See attached log.
>
> Could you query the engine db about the misbehaving domain and paste the
> results?
>
> # su - postgres
> Last login: Thu Feb  6 07:17:52 EST 2020 on pts/0
> -bash-4.2$ LD_LIBRARY_PATH=/opt/rh/rh-postgresql10/root/lib64/
> /opt/rh/rh-postgresql10/root/usr/bin/psql engine
> psql (10.6)
> Type "help" for help.
> engine=# select * from storage_domain_static where id = '
> f5d2f7c6-093f-46d6-a844-224d92db5ef9' ;
>
>
> engine=# select * from storage_domain_static where id =
> 'f5d2f7c6-093f-46d6-a844-224d92db5ef9' ;
>   id  |
> storage| storage_name | storage_domain_type | storage_type
> | storage_domain_format_type | _create_date  |
> _update_date | recoverable | la
> st_time_used_as_master | storage_description | storage_comment |
> wipe_after_delete | warning_low_space_indicator |
> critical_space_action_blocker | first_metadata_device | vg_metadata_device
> | discard_after_delete | backup | warning_low_co
> nfirmed_space_indicator | block_size
>
> --+--+--+-+--++---+-+-+---
>
> ---+-+-+---+-+---+---++--++---
> +
>  f5d2f7c6-093f-46d6-a844-224d92db5ef9 |
> b8b456f0-27c3-49b9-b5e9-9fa81fb3cdaa | backupnfs|   1
> |1 | 4  | 2018-01-19 13:31:25.899738+01
> | 2019-02-14 14:36:22.3171+01 | t   |
>  1530772724454 | | |
> f |  10
> | 5 |
> || f| f  |
>   0 |512
> (1 row)
>
>
>
Thanks for sharing.

The storage_type in the db is indeed NFS (1) and storage_domain_format_type is
4 - for ovirt 4.3 the storage_domain_format_type is 5 by default, and usually a
datacenter upgrade is required for the 4.2 to 4.3 migration, which I'm not sure
is possible in your current setup since you have 4.2 nodes using this storage
as well.

Regarding the repeating monitor failure for the SD:

2020-02-05 14:17:54,190+ WARN  (monitor/f5d2f7c) [storage.LVM]
Reloading VGs failed (vgs=[u'f5d2f7c6-093f-46d6-a844-224d92db5ef9'] rc=5
out=[] err=['  Volume group "f5d2f7c6-093f-46d6-a844-224d92db5ef9" not
found', '  Cannot process volume group
f5d2f7c6-093f-46d6-a844-224d92db5ef9']) (lvm:470)

This error means that the monitor tried to query the SD as a VG first and
failed; this is expected for the fallback code called to find a domain missing
from the SD cache:

def _findUnfetchedDomain(self, sdUUID):
    ...
    for mod in (blockSD, glusterSD, localFsSD, nfsSD):
        try:
            return mod.findDomain(sdUUID)
        except se.StorageDomainDoesNotExist:
            pass
        except Exception:
            self.log.error(
                "Error while looking for domain `%s`",
                sdUUID, exc_info=True)

    raise se.StorageDomainDoesNotExist(sdUUID)

2020-02-05 14:17:54,201+ ERROR (monitor/f5d2f7c) [storage.Monitor]
Setting up monitor for f5d2f7c6-093f-46d6-a844-224d92db5ef9 failed
(monitor:330)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
327, in _setupLoop
self._setupMonitor()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
349, in _setupMonitor
self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159, in
wrapper
value = meth(self, *a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
367, in _produceDomain
self.domain = sdCache.produce(self.sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, in
produce
domain.getRealDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in
getRealDomain
return self._cache._realProduce(self._sdU

[ovirt-users] Re: issue connecting 4.3.8 node to nfs domain

2020-02-06 Thread Amit Bawer
On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego  wrote:

> Hi,
>
> Something weird is going on with our ovirt node 4.3.8 install mounting a
> nfs share.
>
> We have a NFS domain for a couple of backup disks and we have a couple of
> 4.2 nodes connected to it.
>
> Now I'm adding a fresh cluster of 4.3.8 nodes and the backupnfs mount
> doesn't work.
>
> (annoying you cannot copy the text from the events view)
>
> The domain is up and working
>
> ID:f5d2f7c6-093f-46d6-a844-224d92db5ef9
> Size: 10238 GiB
> Available:2491 GiB
> Used:7747 GiB
> Allocated: 3302 GiB
> Over Allocation Ratio:37%
> Images:7
> Path:*.*.*.*:/data/ovirt
> NFS Version: AUTO
> Warning Low Space Indicator:10% (1023 GiB)
> Critical Space Action Blocker:5 GiB
>
> But somehow the node appears to thin thinks it's an LVM volume? It tries
> to find the VGs volume group but fails... which is not so strange as it is
> an NFS volume:
>

Could you provide the full vdsm.log file covering this flow?


> 2020-02-05 14:17:54,190+ WARN  (monitor/f5d2f7c) [storage.LVM]
> Reloading VGs failed (vgs=[u'f5d2f7c6-093f-46d6-a844-224d92db5ef9'] rc=5
> out=[] err=['  Volume group "f5d2f7c6-093f-46d6-a844-224d92db5ef9" not
> found', '  Cannot process volume group
> f5d2f7c6-093f-46d6-a844-224d92db5ef9']) (lvm:470)
> 2020-02-05 14:17:54,201+ ERROR (monitor/f5d2f7c) [storage.Monitor]
> Setting up monitor for f5d2f7c6-093f-46d6-a844-224d92db5ef9 failed
> (monitor:330)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
> 327, in _setupLoop
> self._setupMonitor()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
> 349, in _setupMonitor
> self._produceDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159, in
> wrapper
> value = meth(self, *a, **kw)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
> 367, in _produceDomain
> self.domain = sdCache.produce(self.sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110,
> in produce
> domain.getRealDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in
> getRealDomain
> return self._cache._realProduce(self._sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134,
> in _realProduce
> domain = self._findDomain(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151,
> in _findDomain
> return findMethod(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176,
> in _findUnfetchedDomain
> raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist:
> (u'f5d2f7c6-093f-46d6-a844-224d92db5ef9',)
>
> The volume is actually mounted fine on the node:
>
> On NFS server
>
> Feb  5 15:47:09 back1en rpc.mountd[4899]: authenticated mount request from
> *.*.*.*:673 for /data/ovirt (/data/ovirt)
>
> On the host
>
> mount|grep nfs
>
> *.*.*.*:/data/ovirt on /rhev/data-center/mnt/*.*.*.*:_data_ovirt type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=*.*.*.*,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=*.*.*.*)
>
> And I can see the files:
>
> ls -alrt /rhev/data-center/mnt/*.*.*.*:_data_ovirt
> total 4
> drwxr-xr-x. 5 vdsm kvm61 Oct 26  2016
> 1ed0a635-67ee-4255-aad9-b70822350706
> -rwxr-xr-x. 1 vdsm kvm 0 Feb  5 14:37 __DIRECT_IO_TEST__
> drwxrwxrwx. 3 root root   86 Feb  5 14:37 .
> drwxr-xr-x. 5 vdsm kvm  4096 Feb  5 14:37 ..
>
>
>
>
>
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
>
> *Netbulae Virtualization Experts *
> --
> Tel: 053 20 30 270 i...@netbulae.eu Staalsteden 4-3A KvK 08198180
> Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
> --
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IFTO5WBLVLGTVWKYN3BGLOHAC453UBD5/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/URHZXVEL4N3DS6JS2JJRFRL37R24OSGX/


[ovirt-users] Re: Can't connect vdsm storage: Command StorageDomain.getInfo with args failed: (code=350, message=Error in storage domain action

2020-02-01 Thread Amit Bawer
On Sat, Feb 1, 2020 at 6:39 PM  wrote:

> Ok, I will try to set 777 permission on the NFS storage. But why does this
> issue start after updating 4.30.32-1 to 4.30.33-1, without any other
> changes?
>

The differing commit in 4.30.33 over 4.30.32 is the transition to block-size
probing done by ioprocess-1.3.0:
https://github.com/oVirt/vdsm/commit/9bd210e340be0855126d1620cdb94840ced56129
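
If you want to see whether the new probe itself hits the permission problem,
something like the following should reproduce it outside vdsm when run as the
vdsm user (a sketch only - the IOProcess constructor arguments are my
assumption, the probe_block_size() call is the one vdsm uses):

from ioprocess import IOProcess

# placeholder path - use your own /rhev/data-center/mnt/... mountpoint
mountpoint = "/rhev/data-center/mnt/server:_export_path"
iop = IOProcess(timeout=10)              # assumed constructor signature
print(iop.probe_block_size(mountpoint))  # "Operation not permitted" here, like
                                         # in the vdsm traceback in this thread,
                                         # points at the export permissions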



> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ICWRJ75Q7DIZDDNYRP757YHDEN4N537V/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RO4FS4ESIR5UEAHBMGSMBWQZM6Z5WFJD/


[ovirt-users] Re: Device /dev/sdb excluded by a filter.\n

2020-02-01 Thread Amit Bawer
Maybe the information in this message thread could help:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/N7M57G7HC46NBQTX6T3KSVHEYV3IDDIP/

On Saturday, February 1, 2020, Steve Watkins  wrote:

> Since I managed to crash my last attempt at installing by uploading an
> ISO,  I wound up just reloading all the nodes and starting from scratch.
> Now one node gets "Device /dev/sdb excluded by a filter.\n" and fails when
> creating the volumes.  Can't seem to get passed that -- the other machiens
> are set up identally and don't fail, and it worked before when installed
> but now...
>
> Any ideas?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/B2ZI4YNZQPKG4PE3VQ4KS6MRURWNQ4FY/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QO65KZQ7CBCXN5MXZWPZT45LAYGMHRNI/


[ovirt-users] Re: Can't connect vdsm storage: Command StorageDomain.getInfo with args failed: (code=350, message=Error in storage domain action

2020-02-01 Thread Amit Bawer
On Saturday, February 1, 2020,  wrote:

> Hi! I trying to upgrade my hosts and have problem with it. After uprgading
> one host i see that this one NonOperational. All was fine with
> vdsm-4.30.24-1.el7 but after upgrading with new version
> vdsm-4.30.40-1.el7.x86_64 and some others i have errors.
> Firtst of all i see in ovirt Events: Host srv02 cannot access the Storage
> Domain(s)  attached to the Data Center Default. Setting Host state
> to Non-Operational. My Default storage domain with HE VM data on NFS
> storage.
>
> In messages log of host:
> srv02 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent
> ERROR Traceback (most recent call last):#012  File "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/agent/a
> gent.py", line 131, in _run_agent#012return action(he)#012  File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
> line 55, in action_proper#012return he.start_monitoring
> ()#012  File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> line 432, in start_monitoring#012self._initialize_broker()#012  File
> "/usr/lib/python2.7/site-packages/
> ovirt_hosted_engine_ha/agent/hosted_engine.py", line 556, in
> _initialize_broker#012m.get('options', {}))#012  File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
> line 8
> 9, in start_monitor#012).format(t=type, o=options,
> e=e)#012RequestError: brokerlink - failed to start monitor via
> ovirt-ha-broker: [Errno 2] No such file or directory, [monitor: 'network',
> options:
> {'tcp_t_address': None, 'network_test': None, 'tcp_t_port': None, 'addr':
> '192.168.2.248'}]
> Feb  1 15:41:42 srv02 journal: ovirt-ha-agent 
> ovirt_hosted_engine_ha.agent.agent.Agent
> ERROR Trying to restart agent
>
> In broker log:
> MainThread::WARNING::2020-02-01 15:43:35,167::storage_broker::
> 97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
> Can't connect vdsm storage: Command StorageDomain.getInfo with ar
> gs {'storagedomainID': 'bbdddea7-9cd6-41e7-ace5-fb9a6795caa8'} failed:
> (code=350, message=Error in storage domain action:
> (u'sdUUID=bbdddea7-9cd6-41e7-ace5-fb9a6795caa8',))
>
> In vdsm.lod
> 2020-02-01 15:44:19,930+0600 INFO  (jsonrpc/0) [vdsm.api] FINISH
> getStorageDomainInfo error=[Errno 1] Operation not permitted
> from=::1,57528, task_id=40683f67-d7b0-4105-aab8-6338deb54b00 (api:52)
> 2020-02-01 15:44:19,930+0600 ERROR (jsonrpc/0) [storage.TaskManager.Task]
> (Task='40683f67-d7b0-4105-aab8-6338deb54b00') Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, in getStorageDomainInfo
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2753,
> in getStorageDomainInfo
> dom = self.validateSdUUID(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 305,
> in validateSdUUID
> sdDom = sdCache.produce(sdUUID=sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110,
> in produce
> domain.getRealDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51,
> in getRealDomain
> return self._cache._realProduce(self._sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134,
> in _realProduce
> domain = self._findDomain(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151,
> in _findDomain
> return findMethod(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line
> 145, in findDomain
> return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line
> 378, in __init__
> manifest.sdUUID, manifest.mountpoint)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line
> 853, in _detect_block_size
> block_size = iop.probe_block_size(mountpoint)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
> line 384, in probe_block_size
> return self._ioproc.probe_block_size(dir_path)
>   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line
> 602, in probe_block_size
> "probe_block_size", {"dir": dir_path}, self.timeout)
>   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line
> 448, in _sendCommand
> raise OSError(errcode, errstr)
> OSError: [Errno 1] Operation not permitted
> 2020-02-01 15:44:19,930+0600 INFO  (jsonrpc/0) [storage.TaskManager.Task]
> (Task='40683f67-d7b0-4105-aab8-6338deb54b00') aborting: Task is aborted:
> u'[Errno 1] Operation not permitted' - code 100 (task:1
> 181)
> 2020-02-01 15:44:19,930+0600 ERROR (jsonrpc/0) [storage.Dispatcher] FINISH
> getStorageDomainInfo error=[Errno 1] Operation 

[ovirt-users] Re: Understanding ovirt memory management which appears incorrect

2020-01-28 Thread Amit Bawer
Maybe this could help:
https://lists.ovirt.org/pipermail/users/2017-August/083692.html

On Tuesday, January 28, 2020,  wrote:

> Hi All,
>
> A question regarding memory management with ovirt. I know memory can
> be complicated hence I'm asking the experts. :)
>
> Two examples of where it looks - to me - that memory management from
> ovirt perspective is incorrect. This is resulting in us not getting as
> much out of a host as we'd expect.
>
> ## Example 1:
>
> host: dev-cluster-04
>
> I understand the mem on the host to be:
> 128G total (physical)
> 68G used
> 53G available
> 56G buff/cache
>
> I understand therefore 53G should still be available to allocate
> (approximately, minus a few things).
>
> ```
>   DEV  [root@dev-cluster-04:~]  # free -m
> totalusedfree  shared  buff/cache
>  available
>   Mem: 128741   6829544294078   56016
>  53422
>   Swap: 121111578   10533
>   DEV  [root@dev-cluster-04:~]  # cat /proc/meminfo
>   MemTotal:   131831292 kB
>   MemFree: 4540852 kB
>   MemAvailable:   54709832 kB
>   Buffers:3104 kB
>   Cached:  5174136 kB
>   SwapCached:   835012 kB
>   Active: 66943552 kB
>   Inactive:5980340 kB
>   Active(anon):   66236968 kB
>   Inactive(anon):  5713972 kB
>   Active(file): 706584 kB
>   Inactive(file):   266368 kB
>   Unevictable:   50036 kB
>   Mlocked:   54132 kB
>   SwapTotal:  12402684 kB
>   SwapFree:   10786688 kB
>   Dirty:   812 kB
>   Writeback: 0 kB
>   AnonPages:  67068548 kB
>   Mapped:   143880 kB
>   Shmem:   4176328 kB
>   Slab:   52183680 kB
>   SReclaimable:   49822156 kB
>   SUnreclaim:  2361524 kB
>   KernelStack:   2 kB
>   PageTables:   213628 kB
>   NFS_Unstable:  0 kB
>   Bounce:0 kB
>   WritebackTmp:  0 kB
>   CommitLimit:78318328 kB
>   Committed_AS:   110589076 kB
>   VmallocTotal:   34359738367 kB
>   VmallocUsed:  859104 kB
>   VmallocChunk:   34291324976 kB
>   HardwareCorrupted: 0 kB
>   AnonHugePages:583680 kB
>   CmaTotal:  0 kB
>   CmaFree:   0 kB
>   HugePages_Total:   0
>   HugePages_Free:0
>   HugePages_Rsvd:0
>   HugePages_Surp:0
>   Hugepagesize:   2048 kB
>   DirectMap4k:  621088 kB
>   DirectMap2M:44439552 kB
>   DirectMap1G:91226112 kB
> ```
>
> The ovirt engine, compute -> hosts view shows s4-dev-cluster-01 as 93%
> memory utilised.
>
> Clicking on the node says:
> Physical Memory: 128741 MB total, 119729 MB used, 9012 MB free
>
> So ovirt engine says 9G free. The OS reports 4G free but 53G
> available. Surely ovirt should be looking at available memory?
>
> This is a problem, for instance, when trying to run a VM, called
> dev-cassandra-01, with mem size 24576, max mem 24576 and mem
> guarantee set to 10240 on this host it fails with:
>
> ```
>   Cannot run VM. There is no host that satisfies current scheduling
>   constraints. See below for details:
>
>   The host dev-cluster-04.fnb.co.za did not satisfy internal filter
>   Memory because its available memory is too low (19884 MB) to run the
>   VM.
> ```
>
> To me this looks blatantly wrong. The host has 53G available according
> to free -m.
>
> Guessing I'm missing something, unless this is some sort of bug?
>
> versions:
>
> ```
>   engine: 4.3.7.2-1.el7
>
>   host:
>   OS Version: RHEL - 7 - 6.1810.2.el7.centos
>   OS Description: CentOS Linux 7 (Core)
>   Kernel Version: 3.10.0 - 957.12.1.el7.x86_64
>   KVM Version: 2.12.0 - 18.el7_6.3.1
>   LIBVIRT Version: libvirt-4.5.0-10.el7_6.7
>   VDSM Version: vdsm-4.30.13-1.el7
>   SPICE Version: 0.14.0 - 6.el7_6.1
>   GlusterFS Version: [N/A]
>   CEPH Version: librbd1-10.2.5-4.el7
>   Open vSwitch Version: openvswitch-2.10.1-3.el7
>   Kernel Features: PTI: 1, IBRS: 0, RETP: 1, SSBD: 3
>   VNC Encryption: Disabled
> ```
>
> ## Example 2:
>
> A ovirt host with two VMs:
>
> According to the host, it has 128G of physical memory of which 56G is
> used, 69G is buff/cache and 65G is available.
>
> As is shown here:
>
> ```
>   LIVE  [root@prod-cluster-01:~]  # cat /proc/meminfo
>   MemTotal:   131326836 kB
>   MemFree: 2630812 kB
>   MemAvailable:   66573596 kB
>   Buffers:2376 kB
>   Cached:  5670628 kB
>   SwapCached:   151072 kB
>   Active: 59106140 kB
>   Inactive:2744176 kB
>   Active(anon):   58099732 kB
>   Inactive(anon):  2327428 kB
>   Active(file):1006408 kB
>   Inactive(file):   416748 kB
>   Unevictable:   40004 kB
>   Mlocked:   42052 kB
>   SwapTotal:   4194300 kB
>   SwapFree:3579492 kB
>   Dirty: 0 kB
>   Writeback: 0 kB
>   AnonPages:  56085040 kB
>   Mapped:   121816 kB
>   Shmem:   4231808 kB
>   Slab:   65143868 kB
>   SReclaimable:   63145684 

[ovirt-users] Re: Cannot put host in maintenance mode

2020-01-28 Thread Amit Bawer
On Tue, Jan 28, 2020 at 6:08 AM Vinícius Ferrão 
wrote:

> Hello,
>
> I’m with an issue on one of my oVirt installs and I wasn’t able to solve
> it. When trying to put a node in maintenance it complains about image
> transfers:
>

Which ovirt-engine version is being used? Was an upgrade involved?


> Error while executing action: Cannot switch Host ovirt2 to Maintenance
> mode. Image transfer is in progress for the following (3) disks:
>
> 8f4c712e-66bb-4bfc-9afb-78407b8b726c,
> eb0ef249-284d-4d77-b1f1-ee8b70718f3d,
> 73245a4c-8f56-4508-a5c5-2d7697f87654
>
> Please wait for the operations to complete and try again.
>

If it is prior to ovirt-engine-4.3.3.2, then it lacks the fix for [1]. In
that case you should use a newer version.


>
> I wasn’t able to find this image transfers. There’s nothing on the engine
> showing any transfer.
>
> How can I debug this?
>

If it is newer than 4.3.3.2, then provide the engine.log,
preferably in debug mode (see the "Enable DEBUG Log - Restart Required" section
in [2] for the how-to).

[1] https://bugzilla.redhat.com/1586126
[2]
https://www.ovirt.org/develop/developer-guide/engine/engine-development-environment.html
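
To see what the engine still considers an active transfer, you can also list
the image transfers through the SDK (a sketch with placeholder credentials,
based on the usual ovirtsdk4 examples):

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",   # placeholder
    username="admin@internal",
    password="password",
    insecure=True,
)
transfers_service = connection.system_service().image_transfers_service()
for transfer in transfers_service.list():
    print(transfer.id, transfer.phase)
connection.close()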


>
> Thanks,
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GXOO6C5HHTJUCGNYZK52LZLBIWUQUV6D/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IEYF3PHY4D7CEQCEL6KARTRFRCGGDUVV/


[ovirt-users] Re: ovirt upgrade

2019-12-18 Thread Amit Bawer
Hi,
You can refer to the 4.3 upgrade guide:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/upgrade_guide/index

On Thursday, December 19, 2019, David David  wrote:

> hi all
>
> how do upgrade from 4.2.5 to 4.3.7 ?
> what steps are necessary, are the same as when upgrading from 4.1 to 4.2?
>
>  # engine-upgrade-check
>  # yum update ovirt*setup*
>  # engine-setup
>
>
> thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/J3J6KDI65N3PLLPNOGHXCLU5HEFOP4QI/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IICWWM7IRUPM652O67YVZSKMCNNN3EQZ/


[ovirt-users] Re: Cannot activate/deactivate storage domain

2019-12-04 Thread Amit Bawer
In a check we ran here, we got similar warnings when using the "Ignore OVF
update failure" option, but the SD was set to inactive at the end of the
process. What is the SD status in your case after this attempt?


On Wed, Dec 4, 2019 at 4:49 PM Albl, Oliver 
wrote:

> Yes.
>
> On 04.12.2019 at 15:47, Amit Bawer  aba...@redhat.com>> wrote:
>
>
>
> On Wed, Dec 4, 2019 at 4:42 PM Albl, Oliver  <mailto:oliver.a...@fabasoft.com>> wrote:
> Hi Amit,
>
>   unfortunately no success.
>
> Dec 4, 2019, 3:41:36 PM
> Storage Domain HOST_LUN_219 (Data Center xxx) was deactivated by system
> because it's not visible by any of the hosts.
>
> Dec 4, 2019, 3:35:09 PM
> Failed to update VMs/Templates OVF data for Storage Domain HOST_LUN_219 in
> Data Center Production.
>
> Dec 4, 2019, 3:35:09 PM
> Failed to update OVF disks 77c64b39-fe50-4d05-b77f-8131ad1f95f9, OVF data
> isn't updated on those OVF stores (Data Center Production, Storage Domain
> HOST_LUN_219).
>
> Have you selected the checkbox for "Ignore OVF update failure" before
> putting into maintenance?
>
>
> All the best,
> Oliver
>
> From: Amit Bawer mailto:aba...@redhat.com>>
> Sent: Wednesday, December 4, 2019 15:20
> To: Albl, Oliver mailto:oliver.a...@fabasoft.com
> >>
> Cc: users@ovirt.org<mailto:users@ovirt.org>; Nir Soffer <
> nsof...@redhat.com<mailto:nsof...@redhat.com>>
> Subject: Re: [ovirt-users] Re: Cannot activate/deactivate storage domain
>
> Hi Oliver,
>
> For deactivating the unresponsive storage domains, you can use the Compute
> -> Data Centers -> Maintenance option with "Ignore OVF update failure"
> checked.
> This will force deactivation of the SD.
>
> Will provide further details about the issue in the ticket.
>
>
> On Tue, Dec 3, 2019 at 12:02 PM Albl, Oliver  <mailto:oliver.a...@fabasoft.com>> wrote:
> Hi,
>
>   does anybody have an advice how to activate or safely remove that
> storage domain?
>
> Thank you!
> Oliver
> -Original Message-
> From: Oliver Albl mailto:oliver.a...@fabasoft.com
> >>
> Sent: Tuesday, November 5, 2019 11:20
> To: users@ovirt.org<mailto:users@ovirt.org>
> Subject: [ovirt-users] Re: Cannot activate/deactivate storage domain
>
> > On Mon, Nov 4, 2019 at 9:18 PM Albl, Oliver  http://fabasoft.com> wrote:
> >
> > What was the last change in the system? upgrade? network change? storage
> change?
> >
>
> Last change was four weeks ago ovirt upgrade from 4.3.3 to 4.3.6.7
> (including CentOS hosts to 7.7 1908)
>
> >
> > This is expected if some domain is not accessible on all hosts.
> >
> >
> > This means sanlock timed out renewing the lockspace
> >
> >
> > If a host cannot access all storage domain in the DC, the system set
> > it to non-operational, and will probably try to reconnect it later.
> >
> >
> > This means reading 4k from start of the metadata lv took 9.6 seconds.
> > Something in
> > the way to storage is bad (kernel, network, storage).
> >
> >
> > We 20 seconds (4 retires, 5 seconds per retry) gracetime in multipath
> > when there are no active paths, before I/O fails, pausing the VM. We
> > also resume paused VMs when storage monitoring works again, so maybe
> > the VM were paused and resumed.
> >
> > However for storage monitoring we have strict 10 seconds timeout. If
> > reading from the metadata lv times out or fail and does not operated
> > normally after
> > 5 minutes, the
> > domain will become inactive.
> >
> >
> > This can explain the read timeouts.
> >
> >
> > This looks the right way to troubleshoot this.
> >
> >
> > We need vdsm logs to understand this failure.
> >
> >
> > This does not mean OVF is corrupted, only that we could not store new
> > data. The older data on the other OVFSTORE disk is probably fine.
> > Hopefuly the system will not try to write to the other OVFSTORE disk
> > overwriting the last good version.
> >
> >
> > This is normal, the first 2048 bytes are always zeroes. This area was
> > used for domain metadata in older versions.
> >
> >
> > Please share more details:
> >
> > - output of "lsblk"
> > - output of "multipath -ll"
> > - output of "/usr/libexec/vdsm/fc-scan -v"
> > - output of "vgs -o +tags problem-domain-id"
> > - output of "lvs -o +tags problem-domain-id"
> > - contents of /etc/multipath.conf
> > - contents of /etc/multipath.conf.d/*.conf
> > - /var/log/messages s

[ovirt-users] Re: Cannot activate/deactivate storage domain

2019-12-04 Thread Amit Bawer
On Wed, Dec 4, 2019 at 4:42 PM Albl, Oliver 
wrote:

> Hi Amit,
>
>
>
>   unfortunately no success.
>
>
>
> Dec 4, 2019, 3:41:36 PM
>
> Storage Domain HOST_LUN_219 (Data Center xxx) was deactivated by system
> because it's not visible by any of the hosts.
>
>
>
> Dec 4, 2019, 3:35:09 PM
>
> Failed to update VMs/Templates OVF data for Storage Domain HOST_LUN_219 in
> Data Center Production.
>
>
>
> Dec 4, 2019, 3:35:09 PM
>
> Failed to update OVF disks 77c64b39-fe50-4d05-b77f-8131ad1f95f9, OVF data
> isn't updated on those OVF stores (Data Center Production, Storage Domain
> HOST_LUN_219).
>

Have you selected the "Ignore OVF update failure" checkbox before
putting the domain into maintenance?


>
> All the best,
>
> Oliver
>
>
>
> *From:* Amit Bawer 
> *Sent:* Wednesday, December 4, 2019 15:20
> *To:* Albl, Oliver 
> *Cc:* users@ovirt.org; Nir Soffer 
> *Subject:* Re: [ovirt-users] Re: Cannot activate/deactivate storage domain
>
>
>
> Hi Oliver,
>
>
>
> For deactivating the unresponsive storage domains, you can use the Compute
> -> Data Centers -> Maintenance option with "Ignore OVF update failure"
> checked.
>
> This will force deactivation of the SD.
>
>
>
> Will provide further details about the issue in the ticket.
>
>
>
>
>
> On Tue, Dec 3, 2019 at 12:02 PM Albl, Oliver 
> wrote:
>
> Hi,
>
>   does anybody have an advice how to activate or safely remove that
> storage domain?
>
> Thank you!
> Oliver
> -Original Message-
> From: Oliver Albl 
> Sent: Tuesday, November 5, 2019 11:20
> To: users@ovirt.org
> Subject: [ovirt-users] Re: Cannot activate/deactivate storage domain
>
> > On Mon, Nov 4, 2019 at 9:18 PM Albl, Oliver  wrote:
> >
> > What was the last change in the system? upgrade? network change? storage
> change?
> >
>
> Last change was four weeks ago ovirt upgrade from 4.3.3 to 4.3.6.7
> (including CentOS hosts to 7.7 1908)
>
> >
> > This is expected if some domain is not accessible on all hosts.
> >
> >
> > This means sanlock timed out renewing the lockspace
> >
> >
> > If a host cannot access all storage domain in the DC, the system set
> > it to non-operational, and will probably try to reconnect it later.
> >
> >
> > This means reading 4k from start of the metadata lv took 9.6 seconds.
> > Something in
> > the way to storage is bad (kernel, network, storage).
> >
> >
> > We 20 seconds (4 retires, 5 seconds per retry) gracetime in multipath
> > when there are no active paths, before I/O fails, pausing the VM. We
> > also resume paused VMs when storage monitoring works again, so maybe
> > the VM were paused and resumed.
> >
> > However for storage monitoring we have strict 10 seconds timeout. If
> > reading from the metadata lv times out or fail and does not operated
> > normally after
> > 5 minutes, the
> > domain will become inactive.
> >
> >
> > This can explain the read timeouts.
> >
> >
> > This looks the right way to troubleshoot this.
> >
> >
> > We need vdsm logs to understand this failure.
> >
> >
> > This does not mean OVF is corrupted, only that we could not store new
> > data. The older data on the other OVFSTORE disk is probably fine.
> > Hopefuly the system will not try to write to the other OVFSTORE disk
> > overwriting the last good version.
> >
> >
> > This is normal, the first 2048 bytes are always zeroes. This area was
> > used for domain metadata in older versions.
> >
> >
> > Please share more details:
> >
> > - output of "lsblk"
> > - output of "multipath -ll"
> > - output of "/usr/libexec/vdsm/fc-scan -v"
> > - output of "vgs -o +tags problem-domain-id"
> > - output of "lvs -o +tags problem-domain-id"
> > - contents of /etc/multipath.conf
> > - contents of /etc/multipath.conf.d/*.conf
> > - /var/log/messages since the issue started
> > - /var/log/vdsm/vdsm.log* since the issue started on one of the hosts
> >
> > A bug is probably the best place to keep these logs and make it easy to
> > track them.
>
> Please see https://bugzilla.redhat.com/show_bug.cgi?id=1768821
>
> >
> > Thanks,
> > Nir
>
> Thank you!
> Oliver
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org Privacy Statement:
> https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QZ5ZN2S7N54JYVV3RWOYOHTEAWFQ23Q7/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AF2GBIQKW45QVGJCEN2O3ZYV2BVTI4YU/


[ovirt-users] Re: Cannot activate/deactivate storage domain

2019-12-04 Thread Amit Bawer
Hi Oliver,

For deactivating the unresponsive storage domains, you can use the Compute
-> Data Centers -> Maintenance option with "Ignore OVF update failure"
checked.
This will force deactivation of the SD.

Will provide further details about the issue in the ticket.


On Tue, Dec 3, 2019 at 12:02 PM Albl, Oliver 
wrote:

> Hi,
>
>   does anybody have an advice how to activate or safely remove that
> storage domain?
>
> Thank you!
> Oliver
> -Ursprüngliche Nachricht-
> Von: Oliver Albl 
> Gesendet: Dienstag, 5. November 2019 11:20
> An: users@ovirt.org
> Betreff: [ovirt-users] Re: Cannot activate/deactivate storage domain
>
> > On Mon, Nov 4, 2019 at 9:18 PM Albl, Oliver  wrote:
> >
> > What was the last change in the system? upgrade? network change? storage
> change?
> >
>
> Last change was four weeks ago ovirt upgrade from 4.3.3 to 4.3.6.7
> (including CentOS hosts to 7.7 1908)
>
> >
> > This is expected if some domain is not accessible on all hosts.
> >
> >
> > This means sanlock timed out renewing the lockspace
> >
> >
> > If a host cannot access all storage domains in the DC, the system sets
> > it to non-operational, and will probably try to reconnect it later.
> >
> >
> > This means reading 4k from start of the metadata lv took 9.6 seconds.
> > Something in
> > the way to storage is bad (kernel, network, storage).
> >
> >
> > We have 20 seconds (4 retries, 5 seconds per retry) grace time in multipath
> > when there are no active paths, before I/O fails, pausing the VM. We
> > also resume paused VMs when storage monitoring works again, so maybe
> > the VMs were paused and resumed.
> >
> > However, for storage monitoring we have a strict 10 second timeout. If
> > reading from the metadata lv times out or fails and the domain does not
> > operate normally again within 5 minutes, the domain will become inactive.
> >
> >
> > This can explain the read timeouts.
> >
> >
> > This looks the right way to troubleshoot this.
> >
> >
> > We need vdsm logs to understand this failure.
> >
> >
> > This does not mean OVF is corrupted, only that we could not store new
> > data. The older data on the other OVFSTORE disk is probably fine.
> > Hopefully the system will not try to write to the other OVFSTORE disk,
> > overwriting the last good version.
> >
> >
> > This is normal, the first 2048 bytes are always zeroes. This area was
> > used for domain metadata in older versions.
> >
> >
> > Please share more details:
> >
> > - output of "lsblk"
> > - output of "multipath -ll"
> > - output of "/usr/libexec/vdsm/fc-scan -v"
> > - output of "vgs -o +tags problem-domain-id"
> > - output of "lvs -o +tags problem-domain-id"
> > - contents of /etc/multipath.conf
> > - contents of /etc/multipath.conf.d/*.conf
> > - /var/log/messages since the issue started
> > - /var/log/vdsm/vdsm.log* since the issue started on one of the hosts
> >
> > A bug is probably the best place to keep these logs and make it easy to
> > track them.
>
> Please see https://bugzilla.redhat.com/show_bug.cgi?id=1768821
>
> >
> > Thanks,
> > Nir
>
> Thank you!
> Oliver
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org Privacy Statement:
> https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QZ5ZN2S7N54JYVV3RWOYOHTEAWFQ23Q7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E7AMRZVLGZALEKSWOG2SWMSYQNDNHTOU/


[ovirt-users] Unable to attach ISO domain to Datacenter

2019-12-02 Thread Amit Bawer
On Tuesday, December 3, 2019, Amit Bawer  wrote:

>
>
> On Tuesday, December 3, 2019, Ivan Apolonio  wrote:
>
>> Hello Amit. Thanks for your reply.
>>
>> This is the content of /etc/sudoers.d/50_vdsm file (it's the default
>> generated by ovirt install):
>>
>> Cmnd_Alias VDSM_LIFECYCLE = \
>> /usr/sbin/dmidecode -s system-uuid
>> Cmnd_Alias VDSM_STORAGE = \
>> /usr/sbin/fsck -p *, \
>> /usr/sbin/tune2fs -j *, \
>> /usr/sbin/mkfs -q -j *, \
>> /usr/bin/kill, \
>> /usr/bin/chown vdsm\:qemu *, \
>> /usr/bin/chown vdsm\:kvm *, \
>> /usr/sbin/iscsiadm *, \
>> /usr/sbin/lvm, \
>> /usr/bin/setsid /usr/bin/ionice -c ? -n ? /usr/bin/su vdsm -s /bin/sh
>> -c /usr/libexec/vdsm/spmprotect.sh*, \
>> /usr/sbin/service vdsmd *, \
>> /usr/sbin/reboot -f
>>
>> vdsm  ALL=(ALL) NOPASSWD: VDSM_LIFECYCLE, VDSM_STORAGE
>> Defaults:vdsm !requiretty
>> Defaults:vdsm !syslog
>
> This line disables sudo logging; it is worth commenting it out during the check. Plus, do
> you have an #includedir setting in your /etc/sudoers file?
>
> The vdsm.log snippet seems later than the error in the engine.log, could
> you provide one covering the failing attempt?
>

+ Setting the vdsm log level to debug could also help here:

vdsm-client Host setLogLevel level=DEBUG
>
>
>> I was pretty curious about the format of the line "/usr/bin/setsid
>> /usr/bin/ionice -c ? -n ? /usr/bin/su vdsm -s /bin/sh -c
>> /usr/libexec/vdsm/spmprotect.sh*", but looking at source code (
>> https://github.com/oVirt/vdsm/blob/master/static/etc/sudoers.d/50_vdsm.in)
>> it looks to be just like that. If I need to change anything on this file,
>> it looks that there's some bug on vdsm package.
>>
>> On the other hand, I watched the /var/log/secure file while I was trying to
>> attach a Datacenter to the ISO Domain and it didn't show anything new,
>> meaning that the referenced "ionice" command was not executed via sudo by
>> vdsm. If that is true, it could explain the "permission denied" error.
>>
>> About the NFS export, it is exactly the same as parameters as Data Domain
>> exports (which works perfectly):
>>
>> exportfs -v
>> /storage/vm 172.31.17.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
>> /storage/vm 172.31.48.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
>> /storage/iso  (sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
>>
>>
>> What else do I need to check?
>> Thanks
>> Ivan
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/communit
>> y/about/community-guidelines/
>> List Archives: https://lists.ovirt.org/archiv
>> es/list/users@ovirt.org/message/BPEKZ4JEDMLLMDXCJWX5IOIKYIU5NRVF/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YIZKNH56KGRPQH4YHEURTUHAAPALECRS/


[ovirt-users] Re: Unable to attach ISO domain to Datacenter

2019-12-02 Thread Amit Bawer
On Tuesday, December 3, 2019, Ivan Apolonio  wrote:

> Hello Amit. Thanks for your reply.
>
> This is the content of /etc/sudoers.d/50_vdsm file (it's the default
> generated by ovirt install):
>
> Cmnd_Alias VDSM_LIFECYCLE = \
> /usr/sbin/dmidecode -s system-uuid
> Cmnd_Alias VDSM_STORAGE = \
> /usr/sbin/fsck -p *, \
> /usr/sbin/tune2fs -j *, \
> /usr/sbin/mkfs -q -j *, \
> /usr/bin/kill, \
> /usr/bin/chown vdsm\:qemu *, \
> /usr/bin/chown vdsm\:kvm *, \
> /usr/sbin/iscsiadm *, \
> /usr/sbin/lvm, \
> /usr/bin/setsid /usr/bin/ionice -c ? -n ? /usr/bin/su vdsm -s /bin/sh
> -c /usr/libexec/vdsm/spmprotect.sh*, \
> /usr/sbin/service vdsmd *, \
> /usr/sbin/reboot -f
>
> vdsm  ALL=(ALL) NOPASSWD: VDSM_LIFECYCLE, VDSM_STORAGE
> Defaults:vdsm !requiretty
> Defaults:vdsm !syslog

This line disables sudo logging; it is worth commenting it out during the check. Plus, do you
have an #includedir setting in your /etc/sudoers file?
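
As a minimal sketch of that check (assuming the default file locations), you could
comment the line out with a backup and watch the sudo log while retrying the attach:

sed -i.bak 's/^Defaults:vdsm !syslog/# Defaults:vdsm !syslog/' /etc/sudoers.d/50_vdsm
tail -f /var/log/secure    # retry the ISO domain attach and look for the ionice entry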

The vdsm.log snippet seems later than the error in the engine.log, could
you provide one covering the failing attempt?


> I was pretty curious about the format of the line "/usr/bin/setsid
> /usr/bin/ionice -c ? -n ? /usr/bin/su vdsm -s /bin/sh -c
> /usr/libexec/vdsm/spmprotect.sh*", but looking at source code (
> https://github.com/oVirt/vdsm/blob/master/static/etc/sudoers.d/50_vdsm.in)
> it looks to be just like that. If I need to change anything on this file,
> it looks that there's some bug on vdsm package.
>
> On the other hand, I watched the /var/log/secure file while I was trying to
> attach a Datacenter to the ISO Domain and it didn't show anything new,
> meaning that the referenced "ionice" command was not executed via sudo by
> vdsm. If that is true, it could explain the "permission denied" error.
>
> About the NFS export, it is exactly the same as parameters as Data Domain
> exports (which works perfectly):
>
> exportfs -v
> /storage/vm 172.31.17.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
> /storage/vm 172.31.48.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
> /storage/iso  (sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
>
>
> What else do I need to check?
> Thanks
> Ivan
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/BPEKZ4JEDMLLMDXCJWX5IOIKYIU5NRVF/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RY7ZDRFLXKCM6WTKWJANQXE6PZVLJLMD/


[ovirt-users] Re: Unable to attach ISO domain to Datacenter

2019-12-02 Thread Amit Bawer
On Mon, Dec 2, 2019 at 9:38 PM Amit Bawer  wrote:

>
>
> On Sun, Nov 17, 2019 at 9:19 AM Ivan de Gusmão Apolonio <
> i...@apolonio.com.br> wrote:
>
>> I'm having trouble creating a storage ISO Domain and attaching it to a
>> Datacenter. It just gives me this error message:
>>
>> Error while executing action Attach Storage Domain: Could not obtain lock
>>
>> Also the oVirt's Engine log files show this error message: "setsid:
>> failed to execute /usr/bin/ionice: Permission denied", but I was unable to
>> identify what exactly it's trying to do to get this permission denied.
>>
>> 2019-11-14 16:46:07,779-03 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (EE-ManagedThreadFactory-engine-Thread-7388)
>> [86161370-2aaa-4eff-9aab-c184bdf5bb98] EVENT_ID:
>> IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command AttachStorageDomainVDS
>> failed: Cannot obtain lock: u"id=e6b34c42-0ca6-41f4-be3e-3c9b2af1747b,
>> rc=1, out=[], err=['setsid: failed to execute /usr/bin/ionice: Permission
>> denied']"
>>
>
> The ionice command is run with elevated permissions via sudo by vdsm;
> check that your host has the /etc/sudoers.d/50_vdsm file with the ionice command
> entry and that /etc/sudoers has the #includedir /etc/sudoers.d directive.
>

also make sure the NFS share used for the ISO domain has proper mount/export
options (no "noexec" option, with "no_root_squash") and the
ownership/permissions you have probably already set [1]
[1]
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html/administration_guide/sect-preparing_and_adding_nfs_storage
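
For reference, a minimal server-side setup along the lines of the linked guide
(the export path is just an example; 36:36 is the vdsm:kvm uid/gid):

# /etc/exports on the NFS server
#   /storage/iso   *(rw,sync,no_subtree_check,no_root_squash)
chown -R 36:36 /storage/iso
chmod 0755 /storage/iso
exportfs -ra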

>
>> This behavior just happens on ISO Domains, while Data Domains works fine.
>> I have read oVirt documentation and searched everywhere but I was unable to
>> find the solution for this issue.
>>
>> I'm using CentOS 7 with last update of all packages (oVirt version
>> 4.3.6.7). Please help!
>>
>> Thanks,
>> Ivan de Gusmão Apolonio
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6Z5LCA77NHWLKHUYJDANIUCAHXX466MD/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BX7IVGX5UVNDDHAGAPEMNLA7ADXP26AS/


[ovirt-users] Re: Unable to attach ISO domain to Datacenter

2019-12-02 Thread Amit Bawer
On Sun, Nov 17, 2019 at 9:19 AM Ivan de Gusmão Apolonio <
i...@apolonio.com.br> wrote:

> I'm having trouble creating a storage ISO Domain and attaching it to a
> Datacenter. It just gives me this error message:
>
> Error while executing action Attach Storage Domain: Could not obtain lock
>
> Also the oVirt's Engine log files show this error message: "setsid: failed
> to execute /usr/bin/ionice: Permission denied", but I was unable to
> identify what exactly it's trying to do to get this permission denied.
>
> 2019-11-14 16:46:07,779-03 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-7388)
> [86161370-2aaa-4eff-9aab-c184bdf5bb98] EVENT_ID:
> IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command AttachStorageDomainVDS
> failed: Cannot obtain lock: u"id=e6b34c42-0ca6-41f4-be3e-3c9b2af1747b,
> rc=1, out=[], err=['setsid: failed to execute /usr/bin/ionice: Permission
> denied']"
>

The ionice command is run with elevated permissions via sudo by vdsm;
check that your host has the /etc/sudoers.d/50_vdsm file with the ionice command
entry and that /etc/sudoers has the #includedir /etc/sudoers.d directive.
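
A quick way to verify both on the host (paths assumed to be the defaults):

grep -E '^#includedir' /etc/sudoers
grep ionice /etc/sudoers.d/50_vdsm
sudo -l -U vdsm | grep ionice    # should print the VDSM_STORAGE ionice rule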
>
>
> This behavior just happens on ISO Domains, while Data Domains works fine.
> I have read oVirt documentation and searched everywhere but I was unable to
> find the solution for this issue.
>
> I'm using CentOS 7 with last update of all packages (oVirt version
> 4.3.6.7). Please help!
>
> Thanks,
> Ivan de Gusmão Apolonio
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6Z5LCA77NHWLKHUYJDANIUCAHXX466MD/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UBUUERGT6PXF2Y6T4JLPHFKOY46K4RHX/


[ovirt-users] Re: NFS Storage Domain on OpenMediaVault

2019-12-02 Thread Amit Bawer
On Mon, Dec 2, 2019 at 7:21 PM Robert Webb  wrote:

> Thanks for the response.
>
> As it turned out, the issue was on the export, where I had to remove
> subtree_check and add no_root_squash, and it started working.
>
> Being new to the setup, I am still trying to work through some config
> issues and deciding if I want to continue. The main thing I am looking for
> is good HA and failover capability. Been using Proxmox VE and I like the
> admin of it, but failover still needs work. Simple things like auto
> migration when rebooting a host does not exist and is something I need.
>
for RHEV, you can check the Cluster scheduling policies
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/sect-scheduling_policies


> Robert
> --
> *From:* Strahil Nikolov 
> *Sent:* Sunday, December 1, 2019 2:54 PM
> *To:* users@ovirt.org ; Robert Webb 
> *Subject:* Re: [ovirt-users] NFS Storage Domain on OpenMediaVault
>
> Does sanlock user has rights on the ./dom_md/ids ?
>
> Check the sanlock.service for issues.
> journalctl -u sanlock.service
>
> Best Regards,
> Strahil Nikolov
>
> В неделя, 1 декември 2019 г., 17:22:21 ч. Гринуич+2, rw...@ropeguru.com <
> rw...@ropeguru.com> написа:
>
>
> I have a clean install with openmediavault as backend NFS and cannot get
> it to work. Keep getting permission errors even though I created a vdsm
> user and kvm group; and they are the owners of the directory on OMV with
> full permissions.
>
> The directory gets created on the NFS side for the host, but then I get the
> permission error and it is removed from the host, but the directory structure
> is left on the NFS server.
>
> Logs:
>
> From the engine:
>
> Error while executing action New NFS Storage Domain: Unexpected exception
>
> From the oVirt node log:
>
> 2019-11-29 10:03:02 136998 [30025]: open error -13 EACCES: no permission
> to open /rhev/data-center/mnt/192.168.1.56:
> _export_Datastore-oVirt/f38b19e4-8060-4467-860b-09cf606ccc15/dom_md/ids
> 2019-11-29 10:03:02 136998 [30025]: check that daemon user sanlock 179
> group sanlock 179 has access to disk or file.
>
> File system on Openmediavault:
>
> drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 .
> drwxr-xr-x  9 root root 4096 Nov 27 20:56 ..
> drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03
> f38b19e4-8060-4467-860b-09cf606ccc15
>
> drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 .
> drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 ..
> drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 dom_md
> drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 images
>
> drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 .
> drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 ..
> -rw-rw+ 1 vdsm kvm 0 Nov 29 10:03 ids
> -rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 inbox
> -rw-rw+ 1 vdsm kvm 0 Nov 29 10:03 leases
> -rw-rw-r--+ 1 vdsm kvm  343 Nov 29 10:03 metadata
> -rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 outbox
> -rw-rw+ 1 vdsm kvm  1302528 Nov 29 10:03 xleases
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ILKNT57F6VUHEVKMOACLLQRAO3J364MC/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KH2H2V254Z7T7GP6NK3TOVDQBAPRBLZS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/27WLJXFNPXK7MNPAIAXSNPR3T3YFLYSQ/


[ovirt-users] Re: Upgrade ovirt from 3.4 to 4.3

2019-12-02 Thread Amit Bawer
On Mon, Dec 2, 2019 at 4:36 PM Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

> On Mon, Dec 2, 2019 at 3:19 PM  wrote:
> >
> > Hi Luca,
> >
> > thanks for your reply. Following this procedure, which are the steps where I
> > must reinstall instead of upgrading the Hypervisors?
> >
>
> You'll need to reinstall the hypervisors when upgrading from 3.6 to
> 4.0, because the OS release changes from centos/rhel 6 to centos/rhel
> 7. And you may need to do the same with the engine server.
>
> An alternative procedure can be to stop all the vms on the 3.4
> cluster, detach the storage volumes and attach the storage volumes on
> a new cluster 4.3, reimporting back all the vms after recreating the
> networks. I'm not sure this can work directly with 3.4 (maybe you
> require to upgrade at least to 3.5), because is a very ancient
> release, but maybe you can upgrade up to 3.6 and this should work.
> With this procedure you can upgrade with a single step, but with a
> longer downtime.
>

You can only import old SD to 4.3 from compatibility version 3.5 and higher:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/sect-importing_existing_storage_domains

>
> Someone with more experience can confirm this?
>
> Luca
>
> --
> "E' assurdo impiegare gli uomini di intelligenza eccellente per fare
> calcoli che potrebbero essere affidati a chiunque se si usassero delle
> macchine"
> Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
>
> "Internet è la più grande biblioteca del mondo.
> Ma il problema è che i libri sono tutti sparsi sul pavimento"
> John Allen Paulos, Matematico (1945-vivente)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
> lorenzetto.l...@gmail.com>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NPZ6GMR53ESWAPZJA2SPYTGCMZTE3E4J/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W4JXN7XIXG6N2SAQZUDREBFEIVMWGEYM/


[ovirt-users] Re: NFS Storage Domain on OpenMediaVault

2019-12-01 Thread Amit Bawer
On Sun, Dec 1, 2019 at 5:22 PM  wrote:

> I have a clean install with openmediavault as backend NFS and cannot get
> it to work. Keep getting permission errors even though I created a vdsm
> user and kvm group; and they are the owners of the directory on OMV with
> full permissions.
>
> The directory gets created on the NFS side for the host, but then I get the
> permission error and it is removed from the host, but the directory structure
> is left on the NFS server.
>
> Logs:
>
> From the engine:
>
> Error while executing action New NFS Storage Domain: Unexpected exception
>
> From the oVirt node log:
>
> 2019-11-29 10:03:02 136998 [30025]: open error -13 EACCES: no permission
> to open /rhev/data-center/mnt/192.168.1.56:
> _export_Datastore-oVirt/f38b19e4-8060-4467-860b-09cf606ccc15/dom_md/ids
> 2019-11-29 10:03:02 136998 [30025]: check that daemon user sanlock 179
> group sanlock 179 has access to disk or file.
>

Make sure the sanlock user is a member of the vdsm and kvm groups:

id -a sanlock

should also list kvm and vdsm.

This is something that

vdsm-tool configure --force
systemctl restart libvirtd
systemctl restart vdsm

should set if not set already.
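
If the groups turn out to be missing, a manual fix would be along these lines
(group names as above; restart sanlock so it picks up the new membership):

usermod -a -G vdsm,kvm sanlock
systemctl restart sanlock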


>
> File system on Openmediavault:
>
> drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 .
> drwxr-xr-x  9 root root 4096 Nov 27 20:56 ..
> drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03
> f38b19e4-8060-4467-860b-09cf606ccc15
>
> drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 .
> drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 ..
> drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 dom_md
> drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 images
>
> drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 .
> drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 ..
> -rw-rw+ 1 vdsm kvm 0 Nov 29 10:03 ids
> -rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 inbox
> -rw-rw+ 1 vdsm kvm 0 Nov 29 10:03 leases
> -rw-rw-r--+ 1 vdsm kvm  343 Nov 29 10:03 metadata
> -rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 outbox
> -rw-rw+ 1 vdsm kvm  1302528 Nov 29 10:03 xleases
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ILKNT57F6VUHEVKMOACLLQRAO3J364MC/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VJOP2HRCE63QKCELUW2KUUH3PFSNRKF5/


[ovirt-users] Re: Disk move succeed but didn't move content

2019-12-01 Thread Amit Bawer
On Sun, Dec 1, 2019 at 1:32 AM jplor...@gmail.com 
wrote:

> Thanks, but it didn't work; it seems that all data on the disk is gone.
> As I still have the original storage domain, I'll try to import the VMs
> back. I can't add it as a POSIX fs, I don't know what I'm doing wrong. The
> oVirt docs are quite sparse, maybe after this I'll write something to add to
> the site.
>
Have you referred to the RHV administration guide?
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/sect-importing_existing_storage_domains


> Any other ideas are welcome
> Regards
>
> El sáb., 30 de noviembre de 2019 3:19 p. m., Amit Bawer 
> escribió:
>
>> Are you able to extend the disks to 1GB+ size ?
>>
>>- Go to “Virtual Machines” tab and select virtual machine
>>- Go to “Disks” sub tab and select disk
>>- Click on “Edit”, pay attention that if disk is locked or VM has
>>other status than “UP”, “PAUSED”, “DOWN” or “SUSPENDED”, editing is not
>>allowed so “Edit” option is grayed out.
>>- Use “Extend Size By(GB)” field to insert the size in GB which
>>should be added to the existing size
>>
>>
>> On Fri, Nov 29, 2019 at 3:48 AM Juan Pablo Lorier 
>> wrote:
>>
>>> Hi,
>>>
> I've a fresh new install of ovirt 4.3 and tried to import a gluster
>>> vmstore. I managed to import via NFS the former data domain. The problem
>>> is that when I moved the disks of the vms to the new ISCSI data domain,
>>> I got a warning that sparse disk type will be converted to qcow2 disks,
>>> and after accepting, the disks were moved with no error.
>>>
>>> The problem is that the disks now figure as <1Gb size instead of the
>>> original size and thus, the vms fail to start.
>>>
>>> Is there any way to recover those disks? I have no backup of the vms :-(
>>>
>>> Regards
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YKK2HIGPFJUZBS5KQHIIWCP5OGC3ZYVY/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/46E4EPYBIL3MAGVV4KDYFXOBNGFDTZIG/


[ovirt-users] Re: Disk move succeed but didn't move content

2019-11-30 Thread Amit Bawer
Are you able to extend the disks to 1GB+ size ?

   - Go to “Virtual Machines” tab and select virtual machine
   - Go to “Disks” sub tab and select disk
   - Click on “Edit”, pay attention that if disk is locked or VM has other
   status than “UP”, “PAUSED”, “DOWN” or “SUSPENDED”, editing is not allowed
   so “Edit” option is grayed out.
   - Use “Extend Size By(GB)” field to insert the size in GB which should
   be added to the existing size


On Fri, Nov 29, 2019 at 3:48 AM Juan Pablo Lorier 
wrote:

> Hi,
>
> I've a fresh new install of ovirt 4.3 and tried to import a gluster
> vmstore. I managed to import via NFS the former data domain. The problem
> is that when I moved the disks of the vms to the new ISCSI data domain,
> I got a warning that sparse disk type will be converted to qcow2 disks,
> and after accepting, the disks were moved with no error.
>
> The problem is that the disks now figure as <1Gb size instead of the
> original size and thus, the vms fail to start.
>
> Is there any way to recover those disks? I have no backup of the vms :-(
>
> Regards
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YKK2HIGPFJUZBS5KQHIIWCP5OGC3ZYVY/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2FAUYLNGON6CXSKDZ3AJRWHYXQB4LNBM/


[ovirt-users] Re: Engine deployment last step.... Can anyone help ?

2019-11-25 Thread Amit Bawer
firewalld is running?

systemctl disable --now firewalld

On Monday, November 25, 2019, Rob  wrote:

> It can’t be DNS, since the engine runs on a separate network anyway ie
> front end, so why can’t it reach the Volume I wonder
>
>
> On 25 Nov 2019, at 12:55, Gobinda Das  wrote:
>
> There could be two reason
> 1- Your gluster service may not be running.
> 2- In Storage Connection  there mentioned  may not exists
>
> can you please paste the output of "gluster volume status" ?
>
> On Mon, Nov 25, 2019 at 5:03 PM Rob 
> wrote:
>
>> [ INFO ] skipping: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
>> "[Failed to fetch Gluster Volume List]". HTTP response code is 400.
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
>> reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster
>> Volume List]\". HTTP response code is 400.”}
>>
>>
>> On 25 Nov 2019, at 09:16, Rob  wrote:
>>
>> Yes,
>>
>> I’ll restart all Nodes after wiping the failed setup of Hosted engine
>> using.
>>
>> * ovirt-hosted-engine-cleanup*
>> *vdsm-tool configure --force*
>> *systemctl restart libvirtd*
>> *systemctl restart vdsm*
>>
>> *although last time I did *
>>
>> *systemctl restart vdsm*
>>
>> *VDSM did **not** restart maybe that is OK as Hosted Engine was then de
>> deployed or is that the issue ?*
>>
>>
>> On 25 Nov 2019, at 09:13, Parth Dhanjal  wrote:
>>
>> Can you please share the error in case it fails again?
>>
>> On Mon, Nov 25, 2019 at 2:42 PM Rob 
>> wrote:
>>
>>> hmm, I’’l try again, that failed last time.
>>>
>>>
>>> On 25 Nov 2019, at 09:08, Parth Dhanjal  wrote:
>>>
>>> Hey!
>>>
>>> For
>>> Storage Connection you can add - :/engine
>>> And for
>>> Mount Options - backup-volfile-servers=:
>>>
>>>
>>> On Mon, Nov 25, 2019 at 2:31 PM  wrote:
>>>
 So...

 I have got to the last step

 3 Machines with Gluster Storage configured however at the last screen

 Deploying the Engine to Gluster and the wizard does not auto fill
 the two fields

 Hosted Engine Deployment

 Storage Connection
 and
 Mount Options

 I also had to expand /tmp as it was not big enough to fit the engine
 before moving...

 What can I do to get the auto complete sorted out ?

 I have tried entering ovirt1.kvm.private:/gluster_lv_engine  - The
 Volume name
 and
 ovirt1.kvm.private:/gluster_bricks/engine

 Ovirt1 being the actual machine I'm running this on.

 Thanks
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
 guidelines/
 List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
 message/NL4HS6MIKWQAGI36NMSXGESBMB433SPL/

>>>
>>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
>> guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
>> message/LQ5YXX7IVV6ZPJA7BAVSKVVDDUBIEYXT/
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
>> guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
>> message/VHQ3LTES6ABPLC6IAFKWTXN52T2C7CS5/
>>
>
>
> --
>
>
> Thanks,
> Gobinda
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VPGWGJ5YDBMWZAVNADI7VMQBHEALWIHJ/


[ovirt-users] Re: Engine deployment last step.... Can anyone help ?

2019-11-25 Thread Amit Bawer
it's plausible that

systemctl restart supervdsm

is required as well

On Mon, Nov 25, 2019 at 12:17 PM Parth Dhanjal  wrote:

> As such there are no errors in the vdsm log.
> Maybe you can try these steps again
>
> *ovirt-hosted-engine-cleanup*
> *vdsm-tool configure --force*
> *systemctl restart libvirtd*
> *systemctl restart vdsm*
>
> And continue with the hosted engine deployment, in case the hosted engine
> deployment fails you can look for errors
> under /var/log/ovirt-hosted-engine-setup/engine.log
>
>
> On Mon, Nov 25, 2019 at 3:13 PM Rob 
> wrote:
>
>>
>>
>> On 25 Nov 2019, at 09:28, Parth Dhanjal  wrote:
>>
>>  /var/log/vdsm/vdsm.log
>>
>>
>> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XD3TXRCVGTSJFULHGWGBUEHV43CRWPE2/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M4NBRK272ZO3DC4HBBAUEAOMUKFTQARK/


[ovirt-users] Re: VMs Locked - vProtect snapshots

2019-11-25 Thread Amit Bawer
On Sat, Nov 23, 2019 at 2:49 PM  wrote:

> Anyone have some insight? Is there an official support channel or dev team
> I can engage to help look at this (even paid support)? Does RHEV Support
> take ad-hoc cases on oVirt setups?
>
If you have a RH subscription you may access the official KB, and there is
also related information per your issue at
https://access.redhat.com/solutions/396753
For subscription information, see https://access.redhat.com/management
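
For the force-unlock part, the engine ships a helper script; a sketch of its use
(flags may differ per version, so check its -h output first):

cd /usr/share/ovirt-engine/setup/dbutils
./unlock_entity.sh -q -t all         # query which entities the engine thinks are locked
./unlock_entity.sh -t vm <vm_id>     # <vm_id> is a placeholder for the locked VM's id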

>
> Basically I think I just need a way to determine what is locked (when the
> unlock_entity.sh utility says 'nothing' is locked) and how to force unlock
> the entities (even if unlock_entity.sh says it has unlocked everything).
>
> Thanks.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VLGGLXLI7P6JDLWFGOIJKV5MOUP4UMCV/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IRDPWANLKRV5EIWMLJWMBKZAPYGCW53O/


[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-10-01 Thread Amit Bawer
On Tue, Oct 1, 2019 at 12:49 PM Vrgotic, Marko 
wrote:

> Thank you very much Amit,
>
>
>
> I hope the result of suggested tests allows us improve the speed for
> specific IO test case as well.
>
>
>
> Apologies for not being more clear, but I was referring  to changing mount
> options for storage where SHE also runs. It cannot be put in Maintenance
> mode since the engine is running on it.
> What to do in this case? It's clear that I need to power it down, but where
> can I then change the settings?
>

You can see a similar question about changing the mnt_options of the hosted
engine storage, and its answer, here [1]
[1] https://lists.ovirt.org/pipermail/users/2018-January/086265.html
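
Roughly, the flow described there is (a sketch assuming the usual hosted-engine file
locations; please verify against the linked thread for your version):

hosted-engine --set-maintenance --mode=global
# edit the mnt_options= line in /etc/ovirt-hosted-engine/hosted-engine.conf on each HE host
systemctl restart ovirt-ha-agent
hosted-engine --set-maintenance --mode=none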

>
>
> Kindly awaiting your reply.
>
>
>
> — — —
> Met vriendelijke groet / Kind regards,
>
> *Marko Vrgotic*
>
>
>
>
>
>
>
> *From: *Amit Bawer 
> *Date: *Saturday, 28 September 2019 at 20:25
> *To: *"Vrgotic, Marko" 
> *Cc: *Tony Brian Albers , "hunter86...@yahoo.com" <
> hunter86...@yahoo.com>, "users@ovirt.org" 
> *Subject: *Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage
>
>
>
>
>
>
>
> On Fri, Sep 27, 2019 at 4:02 PM Vrgotic, Marko 
> wrote:
>
> Hi oVirt gurus,
>
>
>
> Thank s to Tony, who pointed me into discovery process, the performance of
> the IO seems greatly dependent on the flags.
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=10
>
> 10+0 records in
>
> 10+0 records out
>
> 5120 bytes (51 MB) copied, 0.108962 s, *470 MB/s*
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=10
> *oflag=dsync*
>
> 10+0 records in
>
> 10+0 records out
>
> 5120 bytes (51 MB) copied, 322.314 s, *159 kB/s*
>
>
>
> Dsync flag tells dd to ignore all buffers, cache except certain kernel
> buffers and write data physically to the disc, before writing further.
> According to number of sites I looked at, this is the way to test Server
> Latency in regards to IO operations. Difference in performance is huge, as
> you can see (below I have added results from tests with 4k and 8k block)
>
>
>
> Still, certain software component we run tests with writes data in
> this/similar way, which is why I got this complaint in the first place.
>
>
>
> Here is my current NFS mount settings:
>
>
> rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.11,local_lock=none,addr=172.17.28.5
>
>
>
> *If you have any suggestions on possible NFS tuning options, to try to
> increase performance, I would highly appreciate it.*
>
> *Can someone tell me how to change NFS mount options in oVirt for already
> existing/used storage?*
>
>
>
> Taking into account your network configured MTU [1] and Linux version [2],
> you can tune wsize, rsize mount options.
>
> Editing mount options can be done from Storage->Domains->Manage Domain
> menu.
>
>
>
> [1]  https://access.redhat.com/solutions/2440411
>
> [2] https://access.redhat.com/solutions/753853
>
>
>
>
>
> Test results with 4096 and 8192 byte size.
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096 count=10
>
> 10+0 records in
>
> 10+0 records out
>
> 40960 bytes (410 MB) copied, 1.49831 s, *273 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096
> count=10 *oflag=dsync*
>
> 10+0 records in
>
> 10+0 records out
>
> 40960 bytes (410 MB) copied, 349.041 s, *1.2 MB/s*
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192 count=10
>
> 10+0 records in
>
> 10+0 records out
>
> 81920 bytes (819 MB) copied, 11.6553 s, *70.3 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192
> count=10 *oflag=dsync*
>
> 10+0 records in
>
> 10+0 records out
>
> 81920 bytes (819 MB) copied, 393.035 s, *2.1 MB/s*
>
>
>
>
>
> *From: *"Vrgotic, Marko" 
> *Date: *Thursday, 26 September 2019 at 09:51
> *To: *Amit Bawer 
> *Cc: *Tony Brian Albers , "hunter86...@yahoo.com" <
> hunter86...@yahoo.com>, "users@ovirt.org" 
> *Subject: *Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage
>
>
>
> Dear all,
>
>
>
> I very much appreciate all help and suggestions so far.
>
>
>
> Today I will send the test results and current mount settings for NFS4.
> Our production setup is using Netapp based NFS server.
>
>
>
> I am surprised with results from Tony

[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-09-28 Thread Amit Bawer
On Fri, Sep 27, 2019 at 4:02 PM Vrgotic, Marko 
wrote:

> Hi oVirt gurus,
>
>
>
> Thank s to Tony, who pointed me into discovery process, the performance of
> the IO seems greatly dependent on the flags.
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=10
>
> 10+0 records in
>
> 10+0 records out
>
> 5120 bytes (51 MB) copied, 0.108962 s, *470 MB/s*
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=10
> *oflag=dsync*
>
> 10+0 records in
>
> 10+0 records out
>
> 5120 bytes (51 MB) copied, 322.314 s, *159 kB/s*
>
>
>
> Dsync flag tells dd to ignore all buffers, cache except certain kernel
> buffers and write data physically to the disc, before writing further.
> According to number of sites I looked at, this is the way to test Server
> Latency in regards to IO operations. Difference in performance is huge, as
> you can see (below I have added results from tests with 4k and 8k block)
>
>
>
> Still, certain software component we run tests with writes data in
> this/similar way, which is why I got this complaint in the first place.
>
>
>
> Here is my current NFS mount settings:
>
>
> rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.11,local_lock=none,addr=172.17.28.5
>
>
>
> *If you have any suggestions on possible NFS tuning options, to try to
> increase performance, I would highly appreciate it.*
>
*Can someone tell me how to change NFS mount options in oVirt for already
> existing/used storage?*
>

Taking into account your network's configured MTU [1] and Linux version [2],
you can tune the wsize and rsize mount options.
Editing mount options can be done from the Storage->Domains->Manage Domain menu.

[1]  https://access.redhat.com/solutions/2440411
[2] https://access.redhat.com/solutions/753853
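
For example, the larger window sizes from those articles can be entered as the
domain's custom mount options (the values are only a starting point to test with):

rsize=1048576,wsize=1048576

# after re-activating the domain, verify what was negotiated on the host:
mount | grep 'rhev/data-center/mnt'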

>
>
>
>
> Test results with 4096 and 8192 byte size.
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096 count=10
>
> 10+0 records in
>
> 10+0 records out
>
> 40960 bytes (410 MB) copied, 1.49831 s, *273 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096
> count=10 *oflag=dsync*
>
> 10+0 records in
>
> 10+0 records out
>
> 40960 bytes (410 MB) copied, 349.041 s, *1.2 MB/s*
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192 count=10
>
> 10+0 records in
>
> 10+0 records out
>
> 81920 bytes (819 MB) copied, 11.6553 s, *70.3 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192
> count=10 *oflag=dsync*
>
> 10+0 records in
>
> 10+0 records out
>
> 81920 bytes (819 MB) copied, 393.035 s, *2.1 MB/s*
>
>
>
>
>
> *From: *"Vrgotic, Marko" 
> *Date: *Thursday, 26 September 2019 at 09:51
> *To: *Amit Bawer 
> *Cc: *Tony Brian Albers , "hunter86...@yahoo.com" <
> hunter86...@yahoo.com>, "users@ovirt.org" 
> *Subject: *Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage
>
>
>
> Dear all,
>
>
>
> I very much appreciate all help and suggestions so far.
>
>
>
> Today I will send the test results and current mount settings for NFS4.
> Our production setup is using Netapp based NFS server.
>
>
>
> I am surprised with results from Tony’s test.
>
> We also have one setup with Gluster based NFS, and I will run tests on
> those as well.
>
> Sent from my iPhone
>
>
>
> On 25 Sep 2019, at 14:18, Amit Bawer  wrote:
>
>
>
>
>
> On Wed, Sep 25, 2019 at 2:44 PM Tony Brian Albers  wrote:
>
> Guys,
>
> Just for info, this is what I'm getting on a VM that is on shared
> storage via NFSv3:
>
> --snip--
> [root@proj-000 ~]# time dd if=/dev/zero of=testfile bs=4096
> count=100
> 100+0 records in
> 100+0 records out
> 409600 bytes (4.1 GB) copied, 18.0984 s, 226 MB/s
>
> real0m18.171s
> user0m1.077s
> sys 0m4.303s
> [root@proj-000 ~]#
> --snip--
>
> my /etc/exports:
> /data/ovirt
> *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>
> and output from 'mount' on one of the hosts:
>
> sto-001.kac.lokalnet:/data/ovirt on /rhev/data-center/mnt/sto-
> 001.kac.lokalnet:_data_ovirt type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,
> nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.216
> .41,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=172.
> 

[ovirt-users] Re: Managed Block Storage: ceph detach_volume failing after migration

2019-09-25 Thread Amit Bawer
According to the resolution of [1], it's a multipathd/udev configuration issue.
It could be worth tracking this issue.

[1] https://tracker.ceph.com/issues/12763
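
If it is the same issue, the workaround discussed there is to keep multipath away
from the rbd devices - something along these lines (an assumption based on that
tracker; please verify before applying):

# /etc/multipath.conf.d/rbd.conf  (drop-in, so the vdsm-managed multipath.conf is untouched)
blacklist {
    devnode "^rbd[0-9]*"
}
# then reload: systemctl reload multipathd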

On Wed, Sep 25, 2019 at 3:18 PM Dan Poltawski 
wrote:

> On ovirt 4.3.5 we are seeing various problems related to the rbd device
> staying mapped after a guest has been live migrated. This causes problems
> migrating the guest back, as well as rebooting the guest when it starts
> back up on the original host. The error returned is ‘nrbd: unmap failed:
> (16) Device or resource busy’. I’ve pasted the full vdsm log below.
>
>
>
> As far as I can tell this isn’t happening 100% of the time, and seems to
> be more prevalent on busy guests.
>
>
>
> (Not sure if I should create a bug for this, so thought I’d start here
> first)
>
>
>
> Thanks,
>
>
>
> Dan
>
>
>
>
>
> Sep 24 19:26:18 mario vdsm[5485]: ERROR FINISH detach_volume error=Managed
> Volume Helper failed.: ('Error executing helper: Command
> [\'/usr/libexec/vdsm/managedvolume-helper\', \'detach\'] failed with rc=1
> out=\'\' err=\'oslo.privsep.daemon: Running privsep helper: [\\\'sudo\\\',
> \\\'privsep-helper\\\', \\\'--privsep_context\\\',
> \\\'os_brick.privileged.default\\\', \\\'--privsep_sock_path\\\',
> \\\'/tmp/tmptQzb10/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new
> privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon
> starting\\noslo.privsep.daemon: privsep process running with uid/gid:
> 0/0\\noslo.privsep.daemon: privsep process running with capabilities
> (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon:
> privsep daemon running as pid 76076\\nTraceback (most recent call
> last):\\n  File "/usr/libexec/vdsm/managedvolume-helper", line 154, in
> \\nsys.exit(main(sys.argv[1:]))\\n  File
> "/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
> args.command(args)\\n  File "/usr/libexec/vdsm/managedvolume-helper", line
> 149, in detach\\nignore_errors=False)\\n  File
> "/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 121, in
> disconnect_volume\\nrun_as_root=True)\\n  File
> "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in
> _execute\\nresult = self.__execute(*args, **kwargs)\\n  File
> "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line
> 169, in execute\\nreturn execute_root(*cmd, **kwargs)\\n  File
> "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 241,
> in _wrap\\nreturn self.channel.remote_call(name, args, kwargs)\\n  File
> "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 203, in
> remote_call\\nraise
> exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecutionError:
> Unexpected error while running command.\\nCommand: rbd unmap
> /dev/rbd/rbd/volume-0e8c1056-45d6-4740-934d-eb07a9f73160 --conf
> /tmp/brickrbd_LCKezP --id ovirt --mon_host 172.16.10.13:3300 --mon_host
> 172.16.10.14:3300 --mon_host 172.16.10.12:6789\\nExit code: 16\\nStdout:
> u\\\'\\\'\\nStderr: u\\\'rbd: sysfs write failednrbd: unmap failed:
> (16) Device or resource busyn\\\'\\n\'',)#012Traceback (most recent
> call last):#012  File
> "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 124, in
> method#012ret = func(*args, **kwargs)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1766, in
> detach_volume#012return managedvolume.detach_volume(vol_id)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py", line 67,
> in wrapper#012return func(*args, **kwargs)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py", line 135,
> in detach_volume#012run_helper("detach", vol_info)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py", line 179,
> in run_helper#012sub_cmd, cmd_input=cmd_input)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in
> __call__#012return callMethod()#012  File
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in
> #012**kwargs)#012  File "", line 2, in
> managedvolume_run_helper#012  File
> "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
> _callmethod#012raise convert_to_error(kind,
> result)#012ManagedVolumeHelperFailed: Managed Volume Helper failed.:
> ('Error executing helper: Command
> [\'/usr/libexec/vdsm/managedvolume-helper\', \'detach\'] failed with rc=1
> out=\'\' err=\'oslo.privsep.daemon: Running privsep helper: [\\\'sudo\\\',
> \\\'privsep-helper\\\', \\\'--privsep_context\\\',
> \\\'os_brick.privileged.default\\\', \\\'--privsep_sock_path\\\',
> \\\'/tmp/tmptQzb10/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new
> privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon
> starting\\noslo.privsep.daemon: privsep process running with uid/gid:
> 0/0\\noslo.privsep.daemon: privsep process running with capabilities
> (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon:
> privsep 

[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-09-25 Thread Amit Bawer
On Wed, Sep 25, 2019 at 2:44 PM Tony Brian Albers  wrote:

> Guys,
>
> Just for info, this is what I'm getting on a VM that is on shared
> storage via NFSv3:
>
> --snip--
> [root@proj-000 ~]# time dd if=/dev/zero of=testfile bs=4096
> count=100
> 100+0 records in
> 100+0 records out
> 409600 bytes (4.1 GB) copied, 18.0984 s, 226 MB/s
>
> real0m18.171s
> user0m1.077s
> sys 0m4.303s
> [root@proj-000 ~]#
> --snip--
>
> my /etc/exports:
> /data/ovirt
> *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>
> and output from 'mount' on one of the hosts:
>
> sto-001.kac.lokalnet:/data/ovirt on /rhev/data-center/mnt/sto-
> 001.kac.lokalnet:_data_ovirt type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,
> nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.216
> .41,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=172.
> 16.216.41)
>

It is worth comparing these mount options with the slow shared NFSv4 mount.

Window size tuning can be found at the bottom of [1]; although it relates to
NFSv3, it could be relevant to v4 as well.
[1] https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html


> connected via single 10gbit ethernet. Storage on NFS server is 8 x 4TB
> SATA disks in RAID10. NFS server is running CentOS 7.6.
>
> Maybe you can get some inspiration from this.
>
> /tony
>
>
>
> On Wed, 2019-09-25 at 09:59 +, Vrgotic, Marko wrote:
> > Dear Strahil, Amit,
> >
> > Thank you for the suggestion.
> > Test result with block size 4096:
> > Network storage:
> > avshared:
> > [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> > count=10 oflag=dsync
> > 10+0 records in
> > 10+0 records out
> > 40960 bytes (410 MB) copied, 275.522 s, 1.5 MB/s
> >
> > Local storage:
> >
> > avlocal2:
> > [root@mpollocalcheck22 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> > count=10 oflag=dsync
> > 10+0 records in
> > 10+0 records out
> > 40960 bytes (410 MB) copied, 53.093 s, 7.7 MB/s
> > 10:38
> > avlocal3:
> > [root@mpollocalcheck3 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> > count=10 oflag=dsync
> > 10+0 records in
> > 10+0 records out
> > 40960 bytes (410 MB) copied, 46.0392 s, 8.9 MB/s
> >
> > As Amit suggested, I am also going to execute same tests on the
> > BareMetals and between BareMetal and NFS to compare results.
> >
> >
> > — — —
> > Met vriendelijke groet / Kind regards,
> >
> > Marko Vrgotic
> >
> >
> >
> >
> > From: Strahil 
> > Date: Tuesday, 24 September 2019 at 19:10
> > To: "Vrgotic, Marko" , Amit  > .com>
> > Cc: users 
> > Subject: Re: [ovirt-users] Re: Super Low VM disk IO via Shared
> > Storage
> >
> > Why don't you try with 4096 ?
> > Most block devices have a blcok size of 4096 and anything bellow is
> > slowing them down.
> > Best Regards,
> > Strahil Nikolov
> > On Sep 24, 2019 17:40, Amit Bawer  wrote:
> > have you reproduced performance issue when checking this directly
> > with the shared storage mount, outside the VMs?
> >
> > On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko  > .com> wrote:
> > Dear oVirt,
> >
> > I have executed some tests regarding IO disk speed on the VMs,
> > running on shared storage and local storage in oVirt.
> >
> > Results of the tests on local storage domains:
> > avlocal2:
> > [root@mpollocalcheck22 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> > count=10 oflag=dsync
> > 10+0 records in
> > 10+0 records out
> > 5120 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s
> >
> > avlocal3:
> > [root@mpollocalcheck3 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> > count=10 oflag=dsync
> > 10+0 records in
> > 10+0 records out
> > 5120 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s
> >
> > Results of the test on shared storage domain:
> > avshared:
> > [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> > count=10 oflag=dsync
> > 10+0 records in
> > 10+0 records out
> > 5120 bytes (51 MB) copied, 283.499 s, 181 kB/s
> >
> > Why is it so low? Is there anything I can do to tune, configure VDSM
> > or other service to speed this up?
> > Any advice is appreciated.
> >
> > Shared storage is based on Neta

[ovirt-users] Re: Got a RequestError: status: 409 reason: Conflict

2019-09-24 Thread Amit Bawer
On Tue, Sep 24, 2019 at 5:21 PM  wrote:

>
> I'm trying to clone the snapshot into a new vm. The tool I am using is
> ovirtBAckup from the github wefixit-AT The link is here
> https://github.com/wefixit-AT/oVirtBackup
>
> this piece of code snippet throws me error.
>
> if not config.get_dry_run():
> api.vms.add(params.VM(name=vm_clone_name, memory=vm.get_memory(),
> cluster=api.clusters.get(config.get_cluster_name()),
> snapshots=snapshots_param))
> VMTools.wait_for_vm_operation(api, config, "Cloning", vm_from_list)
> print 'hello'
> logger.info("Cloning finished")
>
> The above lines are from line 325 of backup.py in the GitHub repo.
>
> I am getting error as
>
> !!! Got a RequestError:
> status: 409
> reason: Conflict
> detail: Cannot add VM. The VM is performing an operation on a Snapshot.
> Please wait for the operation to finish, and try again.
> How can I further debug the code to find out what is going wrong in my
> program? I am new to Python, please help me.
>

- You can run in debug mode with the --debug flag
- You could also set debug=True in [1]
- An example of how to clone a VM from a snapshot using the oVirt SDK is in [2].

[1]
https://github.com/wefixit-AT/oVirtBackup/blob/5acd25af6b9876b883c02218f653812d5e7845f3/backup.py#L400
[2] https://lists.ovirt.org/pipermail/users/2016-January/037321.html


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QFV2KGHO3Z4YRQ3UC3WVIWT43AP2MAVM/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FIXY2YH3KYH75RYDDEDCM2KCHZO2KXDL/


[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-09-24 Thread Amit Bawer
Have you reproduced the performance issue when checking this directly with the
shared storage mount, outside the VMs?
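
For a quick check you can run the same kind of synchronous write test on the
hypervisor, directly against the mounted storage domain path. A rough Python
sketch of the dd oflag=dsync run is below; the mount path is a placeholder for
your NFS domain's mount point.

# Rough equivalent of `dd if=/dev/zero of=... bs=4096 count=100000 oflag=dsync`,
# run on the hypervisor directly against the NFS-mounted storage domain.
# MOUNT_PATH is a placeholder - point it at your domain's mount point.
import os
import time

MOUNT_PATH = "/rhev/data-center/mnt/<server>:<export>/dsync_test.img"  # placeholder
BLOCK_SIZE = 4096
COUNT = 100000

def dsync_write_test(path, block_size, count):
    """Write `count` zero-filled blocks with O_DSYNC and report throughput."""
    block = b"\0" * block_size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_DSYNC, 0o644)
    start = time.monotonic()
    try:
        for _ in range(count):
            os.write(fd, block)
    finally:
        os.close(fd)
    elapsed = time.monotonic() - start
    total = block_size * count
    print("%d bytes copied, %.2f s, %.2f MB/s" % (total, elapsed, total / elapsed / 1e6))

if __name__ == "__main__":
    dsync_write_test(MOUNT_PATH, BLOCK_SIZE, COUNT)
    os.unlink(MOUNT_PATH)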

On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko 
wrote:

> Dear oVirt,
>
>
>
> I have executed some tests regarding disk IO speed on VMs running on
> shared storage and local storage in oVirt.
>
>
>
> Results of the tests on local storage domains:
>
> avlocal2:
>
> [root@mpollocalcheck22 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> count=100000 oflag=dsync
>
> 100000+0 records in
>
> 100000+0 records out
>
> 51200000 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s
>
>
>
> avlocal3:
>
> [root@mpollocalcheck3 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> count=100000 oflag=dsync
>
> 100000+0 records in
>
> 100000+0 records out
>
> 51200000 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s
>
>
>
> Results of the test on shared storage domain:
>
> *avshared*:
>
> [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> count=100000 oflag=dsync
>
> 100000+0 records in
>
> 100000+0 records out
>
> 51200000 bytes (51 MB) copied, 283.499 s, 181 kB/s
>
>
>
> Why is it so low? Is there anything I can do to tune or configure VDSM or
> another service to speed this up?
>
> Any advice is appreciated.
>
>
>
> Shared storage is based on Netapp, with a 20Gbps LACP path from the
> hypervisor to the Netapp volume, set to MTU 9000. The protocol used is NFS 4.0.
>
> oVirt is 4.3.4.3 SHE.
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZYTH3XEM3NWZJWAXFUDC3J6N4L6WOUI/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X2IUTRS62EF6MB4BBBHN2LTVY2MT5APG/


[ovirt-users] Re: Creating Bricks via the REST API

2019-09-23 Thread Amit Bawer
+Sahina Bose 
Thanks for your clarification, Julian. Creating a new Gluster brick via
oVirt's REST API is not currently supported; it can only be done from the UI.

On Mon, Sep 23, 2019 at 6:25 PM Julian Schill  wrote:

> > This is documented in section 6.92.2. of [1]
> >
> > [1]
> >
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/.
> ..
>
> Section 6.92.2 documents a POST request that lets one add *existing*
> bricks to an existing volume.
> This doesn't answer my question on how one can create *new* bricks via the
> REST API.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QMPFTXA2WX3JMB6AJNIAHS4XLBJPXXVC/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B7E3LSHVCXLW2N7JG2LUETPSQPF5PTAB/


[ovirt-users] Re: Creating Bricks via the REST API

2019-09-23 Thread Amit Bawer
This is documented in section 6.92.2 of [1]:

[1]
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/rest_api_guide/services#services-gluster_bricks-methods-add
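
For reference, a rough sketch of the add call documented there, using
python-requests: note that it only attaches already-existing brick directories
to a volume, and every ID, path and credential below is a placeholder.

# Sketch of the documented add-bricks request; it attaches existing brick
# directories to a gluster volume, it does not create new bricks.
# All IDs, paths and credentials are placeholders.
import requests

ENGINE = "https://engine.example.com/ovirt-engine/api"  # placeholder engine URL
CLUSTER_ID = "567"   # placeholder cluster id
VOLUME_ID = "123"    # placeholder gluster volume id
SERVER_ID = "111"    # placeholder gluster host id

body = """
<bricks>
  <brick>
    <server_id>%s</server_id>
    <brick_dir>/export/data/brick3</brick_dir>
  </brick>
</bricks>
""" % SERVER_ID

resp = requests.post(
    "%s/clusters/%s/glustervolumes/%s/glusterbricks" % (ENGINE, CLUSTER_ID, VOLUME_ID),
    data=body,
    headers={"Content-Type": "application/xml", "Accept": "application/xml"},
    auth=("admin@internal", "password"),  # placeholder credentials
    verify=False,                         # or a path to the engine CA certificate
)
resp.raise_for_status()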

On Sun, Sep 22, 2019 at 11:59 PM Julian Schill  wrote:

> Under "Compute -> Hosts -> Host Name -> Storage Devices" one can create
> new Gluster Bricks on free partitions. How can I do this via the REST API?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/THFM6YI6R6F4LE6W34SJ7REIKVYCPALL/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T3AIKWZSWZPQYN4K4DWK65OAAWLYLUZ4/