[ovirt-users] oVirt Survey 2019 results

2019-04-01 Thread Sandro Bonazzola
Thanks to the 143 participants in the oVirt Survey 2019!
The survey is now closed and results are publicly available at
https://bit.ly/2JYlI7U
We'll analyze the collected data in order to improve oVirt based on your
feedback.

As a first step after reading the results, I'd like to invite the 30 people
who replied that they're willing to contribute code to send an email to
de...@ovirt.org introducing themselves: we'll be more than happy to welcome
them and help them get started.

I would also like to invite the 17 people who replied that they'd like to help
organize oVirt events in their area to either get in touch with me or
introduce themselves to users@ovirt.org so we can discuss event
organization.

Last but not least, I'd like to invite the 38 people willing to contribute
documentation and the one person willing to contribute localization to
introduce themselves to de...@ovirt.org.

Thanks!
-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4N5DYCXY2S6ZAUI7BWD4DEKZ6JL6MSGN/


[ovirt-users] Not being able to create new partition with fdisk

2019-04-01 Thread pawel . zajac
Hi,

I am not able to add a new partition or extend an existing one in a CentOS 7 VM
on oVirt 4.2.7.5-1.el7.

As soon as I try to write the partition table, the VM pauses with an error:

"VM  has been paused due to unknown storage error."

I can't see much in the logs; the messages log doesn't even record it.
The VM just pauses.

The VM's storage is on an NFS share, so it is not a block device like in most of
the similar issues I found.

I searched the web for similar issues, but no luck so far.

Any ideas? 

Thank you.

Best,

Pawel

 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N6ZIRCRNZ52VZL7I7DMRDT7ZJHGVM4HA/


[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-01 Thread Simone Tiraboschi
Can you please also add /var/log/sanlock.log?
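For reference, a minimal sketch of how the requested logs could be bundled on the
affected host (the paths are the usual defaults and should be treated as
assumptions):

    # Collect the logs discussed in this thread into one archive
    tar czf he-debug-logs.tar.gz \
        /var/log/sanlock.log \
        /var/log/glusterfs \
        /var/log/ovirt-hosted-engine-ha/agent.log \
        /var/log/ovirt-hosted-engine-ha/broker.log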

On Mon, Apr 1, 2019 at 3:42 PM Strahil Nikolov 
wrote:

> Hi Simone,
>
> >Sorry, it looks empty.
>
> Sadly it's true. This one should be OK.
>
>
> Best Regards,
> Strahil Nikolov
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UOHGTMV7BBEAQBI772G7IFTOD7GO5HG3/


[ovirt-users] Re: 4.2 / 4.3 : Moving the hosted-engine to another storage

2019-04-01 Thread Simone Tiraboschi
On Mon, Apr 1, 2019 at 5:26 PM 
wrote:

> Hi friends of oVirt,
>
> Roughly 3 years ago a user asked about the options he had to move the
> hosted engine to some other storage.
>
> The answer by Simone Tiraboschi was that it would largely not be possible
> because of references in the database to the node the engine was hosted on.
> This information would prevent a successful move of the engine, even with
> backup/restore.
>
> The situation seems to have improved, but I'm not sure, so I'm asking.
>
> We have to move our engine away from our older Cluster with NFS Storage
> backends (engine, volumes, iso-images).
>
> The engine should be restored on our new cluster that has a gluster volume
> available for the engine. Additionally this 3-node cluster is running
> Guests from a Cinder/Ceph storage Domain.
>
> I want to restore the engine on a different cluster to a different storage
> domain.
>
> Reading the documentation at
> https://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment.html
> I am wondering whether oVirt Nodes (formerly Node-NG) are capable of
> restoring an engine at all. Do I need EL-based Nodes? We are currently
> running on oVirt Nodes.
>

Yes, now you can do it via backup and restore:
take a backup of the engine with engine-backup and restore it on a new
hosted-engine VM on a new storage domain with:
  hosted-engine --deploy --restore-from-file=mybackup.tar.gz
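
As a rough sketch of the whole sequence (file names are placeholders and the
interactive prompts of the deploy flow are omitted):

    # On the current engine VM: take a full backup of configuration and database
    engine-backup --mode=backup --file=mybackup.tar.gz --log=backup.log

    # Copy mybackup.tar.gz to a host in the target cluster, then redeploy the
    # hosted engine there, restoring the backup onto the new storage domain
    hosted-engine --deploy --restore-from-file=mybackup.tar.gz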


>
> - Andreas
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y7EPAGSARSUFGYRABDX7M7BNAVBRAFHS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GYCDAW6SGFMN5OZ4U24LCQAPGCRJGU2U/


[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-01 Thread Simone Tiraboschi
On Mon, Apr 1, 2019 at 6:14 PM Leo David  wrote:

> Thank you Simone.
> I've decided to go for a fresh install from ISO, and I'll keep you posted
> if any trouble arises. But I am still trying to understand which services
> mount the LVM volumes and gluster volumes after configuration. There is
> nothing related in fstab, so I assume there are a couple of .mount files
> somewhere in the filesystem.
> I'm just trying to understand the node's underlying workflow.
>

The hosted-engine configuration is stored
in /etc/ovirt-hosted-engine/hosted-engine.conf; ovirt-ha-broker mounts
the hosted-engine storage domain according to that file, and ovirt-ha-agent
can then start the engine VM.
Everything else is just in the engine DB.
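
A quick way to see this on a node, as a sketch (the commands are standard, but
treat the exact paths as assumptions for your particular install):

    # Where the hosted-engine storage domain is defined
    cat /etc/ovirt-hosted-engine/hosted-engine.conf

    # The HA services that mount it and start the engine VM
    systemctl status ovirt-ha-broker ovirt-ha-agent

    # The resulting mount shows up under /rhev, not in /etc/fstab
    mount | grep /rhev/data-center/mnt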


>
> On Mon, Apr 1, 2019, 10:16 Simone Tiraboschi  wrote:
>
>> Hi,
>> to understand what's failing I'd suggest starting by attaching the setup logs.
>>
>> On Sun, Mar 31, 2019 at 5:06 PM Leo David  wrote:
>>
>>> Hello Everyone,
>>> I'm using a 4.3.2 installation, and after running through the HyperConverged
>>> Setup it fails at the last stage. It seems that the previously created
>>> "engine" volume is not mounted under the "/rhev" path, therefore the setup
>>> cannot finish the deployment.
>>> Any idea which services are responsible for mounting the volumes on the
>>> oVirt Node distribution? I'm thinking that maybe this particular one
>>> failed to start for some reason...
>>> Thank you very much!
>>>
>>> --
>>> Best regards, Leo David
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUXDAQHVNZWF4TIXZ3GIBZHSJ7IC2VHC/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KQJMVXPBOR442DPDYNOOHPUUCZFYCKYX/


[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-01 Thread Leo David
Thank you Simone.
I've decided to go for a fresh install from ISO, and I'll keep you posted
if any trouble arises. But I am still trying to understand which services
mount the LVM volumes and gluster volumes after configuration. There is
nothing related in fstab, so I assume there are a couple of .mount files
somewhere in the filesystem.
I'm just trying to understand the node's underlying workflow.

On Mon, Apr 1, 2019, 10:16 Simone Tiraboschi  wrote:

> Hi,
> to understand what's failing I'd suggest starting by attaching the setup logs.
>
> On Sun, Mar 31, 2019 at 5:06 PM Leo David  wrote:
>
>> Hello Everyone,
>> I'm using a 4.3.2 installation, and after running through the HyperConverged
>> Setup it fails at the last stage. It seems that the previously created
>> "engine" volume is not mounted under the "/rhev" path, therefore the setup
>> cannot finish the deployment.
>> Any idea which services are responsible for mounting the volumes on the
>> oVirt Node distribution? I'm thinking that maybe this particular one
>> failed to start for some reason...
>> Thank you very much!
>>
>> --
>> Best regards, Leo David
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUXDAQHVNZWF4TIXZ3GIBZHSJ7IC2VHC/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WND2O6L77H5CMKG45ZKA5GIMFUGGAHZW/


[ovirt-users] 4.2 / 4.3 : Moving the hosted-engine to another storage

2019-04-01 Thread andreas . elvers+ovirtforum
Hi friends of oVirt,

Roughly 3 years ago a user asked about the options he had to move the hosted 
engine to some other storage.

The answer by Simone Tiraboschi was that it would largely not be possible
because of references in the database to the node the engine was hosted on.
This information would prevent a successful move of the engine, even with
backup/restore.

The situation seems to have improved, but I'm not sure, so I'm asking.

We have to move our engine away from our older Cluster with NFS Storage 
backends (engine, volumes, iso-images). 

The engine should be restored on our new cluster that has a gluster volume 
available for the engine. Additionally this 3-node cluster is running Guests 
from a Cinder/Ceph storage Domain.

I want to restore the engine on a different cluster to a different storage 
domain. 

Reading the documentation at 
https://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment.html
 I am wondering whether oVirt Nodes (formerly Node-NG) are capable of restoring 
an engine at all. Do I need EL-based Nodes? We are currently running on oVirt 
Nodes.

- Andreas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y7EPAGSARSUFGYRABDX7M7BNAVBRAFHS/


[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Benny Zlotnik
I added an example for ceph[1]

[1] - 
https://github.com/oVirt/ovirt-site/blob/468c79a05358e20289e7403d9dd24732ab453a13/source/develop/release-management/features/storage/cinderlib-integration.html.md#create-storage-domain

On Mon, Apr 1, 2019 at 5:24 PM Benny Zlotnik  wrote:
>
> Did you pass the rbd_user when creating the storage domain?
>
> On Mon, Apr 1, 2019 at 5:08 PM Matthias Leopold
>  wrote:
> >
> >
> > > On 01.04.19 at 13:17, Benny Zlotnik wrote:
> > >> OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:
> > >>
> > >> 2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error
> > >> connecting to ceph cluster.
> > >> Traceback (most recent call last):
> > >> File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
> > >> line 337, in _do_conn
> > >>   client.connect()
> > >> File "rados.pyx", line 885, in rados.Rados.connect
> > >> (/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.11/rpm/el7/BUILD/ceph-12.2.11/build/src/pybind/rados/pyrex/rados.c:9785)
> > >> OSError: [errno 95] error connecting to the cluster
> > >> 2019-04-01 11:14:54,930 - root - ERROR - Failure occurred when trying to
> > >> run command 'storage_stats': Bad or unexpected response from the storage
> > >> volume backend API: Error connecting to ceph cluster.
> > >>
> > >> I don't really know what to do with that either.
> > >> BTW, the cinder version on engine host is "pike"
> > >> (openstack-cinder-11.2.0-1.el7.noarch)
> > > Not sure if the version is related (I know it's been tested with
> > > pike), but you can try and install the latest rocky (that's what I use
> > > for development)
> >
> > I upgraded cinder on engine and hypervisors to rocky and installed
> > missing "ceph-common" packages on hypervisors. I set "rbd_keyring_conf"
> > and "rbd_ceph_conf" as indicated and got as far as adding a "Managed
> > Block Storage" domain and creating a disk (which is also visible through
> > "rbd ls"). I used a keyring that is only authorized for the pool I
> > specified with "rbd_pool". When I try to start the VM it fails and I see
> > the following in supervdsm.log on hypervisor:
> >
> > ManagedVolumeHelperFailed: Managed Volume Helper failed.: ('Error
> > executing helper: Command [\'/usr/libexec/vdsm/managedvolume-helper\',
> > \'attach\'] failed with rc=1 out=\'\' err=\'oslo.privsep.daemon: Running
> > privsep helper: [\\\'sudo\\\', \\\'privsep-helper\\\',
> > \\\'--privsep_context\\\', \\\'os_brick.privileged.default\\\',
> > \\\'--privsep_sock_path\\\',
> > \\\'/tmp/tmp5S8zZV/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new
> > privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon
> > starting\\noslo.privsep.daemon: privsep process running with uid/gid:
> > 0/0\\noslo.privsep.daemon: privsep process running with capabilities
> > (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon:
> > privsep daemon running as pid 15944\\nTraceback (most recent call
> > last):\\n  File "/usr/libexec/vdsm/managedvolume-helper", line 154, in
> > \\nsys.exit(main(sys.argv[1:]))\\n  File
> > "/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
> > args.command(args)\\n  File "/usr/libexec/vdsm/managedvolume-helper",
> > line 137, in attach\\nattachment =
> > conn.connect_volume(conn_info[\\\'data\\\'])\\n  File
> > "/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 96,
> > in connect_volume\\nrun_as_root=True)\\n  File
> > "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in
> > _execute\\nresult = self.__execute(*args, **kwargs)\\n  File
> > "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line
> > 169, in execute\\nreturn execute_root(*cmd, **kwargs)\\n  File
> > "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line
> > 207, in _wrap\\nreturn self.channel.remote_call(name, args,
> > kwargs)\\n  File
> > "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in
> > remote_call\\nraise
> > exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecutionError:
> > Unexpected error while running command.\\nCommand: rbd map
> > volume-36f5eb75-329e-4bd2-88d0-6f0bfe5d1040 --pool ovirt-test --conf
> > /tmp/brickrbd_RmBvxA --id None --mon_host xxx.xxx.216.45:6789 --mon_host
> > xxx.xxx.216.54:6789 --mon_host xxx.xxx.216.55:6789\\nExit code:
> > 22\\nStdout: u\\\'In some cases useful info is found in syslog - try
> > "dmesg | tail".n\\\'\\nStderr: u"2019-04-01 15:27:30.743196
> > 7fe0b4632d40 -1 auth: unable to find a keyring on
> > /etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,:
> > (2) No such file or directorynrbd: sysfs write failedn2019-04-01
> > 15:27:30.746987 7fe0b4632d40 -1 auth: unable to find a keyring on
> > /etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/cep

[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Benny Zlotnik
Did you pass the rbd_user when creating the storage domain?
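
For illustration only: the failing command quoted below runs "rbd map ... --id
None", i.e. no RBD client id was set. With rbd_user passed as a driver option the
helper should map with a real id; a manual equivalent on the hypervisor would look
roughly like this (the client name and keyring path are assumptions):

    # Reproduce what managedvolume-helper does, but with an explicit client id
    rbd map volume-36f5eb75-329e-4bd2-88d0-6f0bfe5d1040 --pool ovirt-test \
        --id ovirt --conf /etc/ceph/ceph.conf \
        --keyring /etc/ceph/ceph.client.ovirt.keyring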

On Mon, Apr 1, 2019 at 5:08 PM Matthias Leopold
 wrote:
>
>
> On 01.04.19 at 13:17, Benny Zlotnik wrote:
> >> OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:
> >>
> >> 2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error
> >> connecting to ceph cluster.
> >> Traceback (most recent call last):
> >> File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
> >> line 337, in _do_conn
> >>   client.connect()
> >> File "rados.pyx", line 885, in rados.Rados.connect
> >> (/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.11/rpm/el7/BUILD/ceph-12.2.11/build/src/pybind/rados/pyrex/rados.c:9785)
> >> OSError: [errno 95] error connecting to the cluster
> >> 2019-04-01 11:14:54,930 - root - ERROR - Failure occurred when trying to
> >> run command 'storage_stats': Bad or unexpected response from the storage
> >> volume backend API: Error connecting to ceph cluster.
> >>
> >> I don't really know what to do with that either.
> >> BTW, the cinder version on engine host is "pike"
> >> (openstack-cinder-11.2.0-1.el7.noarch)
> > Not sure if the version is related (I know it's been tested with
> > pike), but you can try and install the latest rocky (that's what I use
> > for development)
>
> I upgraded cinder on engine and hypervisors to rocky and installed
> missing "ceph-common" packages on hypervisors. I set "rbd_keyring_conf"
> and "rbd_ceph_conf" as indicated and got as far as adding a "Managed
> Block Storage" domain and creating a disk (which is also visible through
> "rbd ls"). I used a keyring that is only authorized for the pool I
> specified with "rbd_pool". When I try to start the VM it fails and I see
> the following in supervdsm.log on hypervisor:
>
> ManagedVolumeHelperFailed: Managed Volume Helper failed.: ('Error
> executing helper: Command [\'/usr/libexec/vdsm/managedvolume-helper\',
> \'attach\'] failed with rc=1 out=\'\' err=\'oslo.privsep.daemon: Running
> privsep helper: [\\\'sudo\\\', \\\'privsep-helper\\\',
> \\\'--privsep_context\\\', \\\'os_brick.privileged.default\\\',
> \\\'--privsep_sock_path\\\',
> \\\'/tmp/tmp5S8zZV/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new
> privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon
> starting\\noslo.privsep.daemon: privsep process running with uid/gid:
> 0/0\\noslo.privsep.daemon: privsep process running with capabilities
> (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon:
> privsep daemon running as pid 15944\\nTraceback (most recent call
> last):\\n  File "/usr/libexec/vdsm/managedvolume-helper", line 154, in
> \\nsys.exit(main(sys.argv[1:]))\\n  File
> "/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
> args.command(args)\\n  File "/usr/libexec/vdsm/managedvolume-helper",
> line 137, in attach\\nattachment =
> conn.connect_volume(conn_info[\\\'data\\\'])\\n  File
> "/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 96,
> in connect_volume\\nrun_as_root=True)\\n  File
> "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in
> _execute\\nresult = self.__execute(*args, **kwargs)\\n  File
> "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line
> 169, in execute\\nreturn execute_root(*cmd, **kwargs)\\n  File
> "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line
> 207, in _wrap\\nreturn self.channel.remote_call(name, args,
> kwargs)\\n  File
> "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in
> remote_call\\nraise
> exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecutionError:
> Unexpected error while running command.\\nCommand: rbd map
> volume-36f5eb75-329e-4bd2-88d0-6f0bfe5d1040 --pool ovirt-test --conf
> /tmp/brickrbd_RmBvxA --id None --mon_host xxx.xxx.216.45:6789 --mon_host
> xxx.xxx.216.54:6789 --mon_host xxx.xxx.216.55:6789\\nExit code:
> 22\\nStdout: u\\\'In some cases useful info is found in syslog - try
> "dmesg | tail".n\\\'\\nStderr: u"2019-04-01 15:27:30.743196
> 7fe0b4632d40 -1 auth: unable to find a keyring on
> /etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,:
> (2) No such file or directorynrbd: sysfs write failedn2019-04-01
> 15:27:30.746987 7fe0b4632d40 -1 auth: unable to find a keyring on
> /etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,:
> (2) No such file or directoryn2019-04-01 15:27:30.747896
> 7fe0b4632d40 -1 monclient: authenticate NOTE: no keyring found; disabled
> cephx authenticationn2019-04-01 15:27:30.747903 7fe0b4632d40  0
> librados: client.None authentication error (95) Operation not
> supportednrbd: couldn\\\'t connect to the cluster!nrbd: map
> failed: (22) Invalid argumentn"\\n\'',)
>
> I tried to provide a /etc/c

[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Matthias Leopold


On 01.04.19 at 13:17, Benny Zlotnik wrote:

OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:

2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error
connecting to ceph cluster.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
line 337, in _do_conn
  client.connect()
File "rados.pyx", line 885, in rados.Rados.connect
(/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.11/rpm/el7/BUILD/ceph-12.2.11/build/src/pybind/rados/pyrex/rados.c:9785)
OSError: [errno 95] error connecting to the cluster
2019-04-01 11:14:54,930 - root - ERROR - Failure occurred when trying to
run command 'storage_stats': Bad or unexpected response from the storage
volume backend API: Error connecting to ceph cluster.

I don't really know what to do with that either.
BTW, the cinder version on engine host is "pike"
(openstack-cinder-11.2.0-1.el7.noarch)

Not sure if the version is related (I know it's been tested with
pike), but you can try and install the latest rocky (that's what I use
for development)


I upgraded cinder on engine and hypervisors to rocky and installed 
missing "ceph-common" packages on hypervisors. I set "rbd_keyring_conf" 
and "rbd_ceph_conf" as indicated and got as far as adding a "Managed 
Block Storage" domain and creating a disk (which is also visible through 
"rbd ls"). I used a keyring that is only authorized for the pool I 
specified with "rbd_pool". When I try to start the VM it fails and I see 
the following in supervdsm.log on hypervisor:


ManagedVolumeHelperFailed: Managed Volume Helper failed.: ('Error 
executing helper: Command [\'/usr/libexec/vdsm/managedvolume-helper\', 
\'attach\'] failed with rc=1 out=\'\' err=\'oslo.privsep.daemon: Running 
privsep helper: [\\\'sudo\\\', \\\'privsep-helper\\\', 
\\\'--privsep_context\\\', \\\'os_brick.privileged.default\\\', 
\\\'--privsep_sock_path\\\', 
\\\'/tmp/tmp5S8zZV/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new 
privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon 
starting\\noslo.privsep.daemon: privsep process running with uid/gid: 
0/0\\noslo.privsep.daemon: privsep process running with capabilities 
(eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon: 
privsep daemon running as pid 15944\\nTraceback (most recent call 
last):\\n  File "/usr/libexec/vdsm/managedvolume-helper", line 154, in 
\\nsys.exit(main(sys.argv[1:]))\\n  File 
"/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n 
args.command(args)\\n  File "/usr/libexec/vdsm/managedvolume-helper", 
line 137, in attach\\nattachment = 
conn.connect_volume(conn_info[\\\'data\\\'])\\n  File 
"/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 96, 
in connect_volume\\nrun_as_root=True)\\n  File 
"/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in 
_execute\\nresult = self.__execute(*args, **kwargs)\\n  File 
"/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line 
169, in execute\\nreturn execute_root(*cmd, **kwargs)\\n  File 
"/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 
207, in _wrap\\nreturn self.channel.remote_call(name, args, 
kwargs)\\n  File 
"/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in 
remote_call\\nraise 
exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecutionError: 
Unexpected error while running command.\\nCommand: rbd map 
volume-36f5eb75-329e-4bd2-88d0-6f0bfe5d1040 --pool ovirt-test --conf 
/tmp/brickrbd_RmBvxA --id None --mon_host xxx.xxx.216.45:6789 --mon_host 
xxx.xxx.216.54:6789 --mon_host xxx.xxx.216.55:6789\\nExit code: 
22\\nStdout: u\\\'In some cases useful info is found in syslog - try 
"dmesg | tail".n\\\'\\nStderr: u"2019-04-01 15:27:30.743196 
7fe0b4632d40 -1 auth: unable to find a keyring on 
/etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: 
(2) No such file or directorynrbd: sysfs write failedn2019-04-01 
15:27:30.746987 7fe0b4632d40 -1 auth: unable to find a keyring on 
/etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: 
(2) No such file or directoryn2019-04-01 15:27:30.747896 
7fe0b4632d40 -1 monclient: authenticate NOTE: no keyring found; disabled 
cephx authenticationn2019-04-01 15:27:30.747903 7fe0b4632d40  0 
librados: client.None authentication error (95) Operation not 
supportednrbd: couldn\\\'t connect to the cluster!nrbd: map 
failed: (22) Invalid argumentn"\\n\'',)


I tried to provide an /etc/ceph directory with ceph.conf and the client
keyring on the hypervisors (as configured in the driver options). This didn't
solve it, and it doesn't seem to be the right way anyway, as the mentioned
/tmp/brickrbd_RmBvxA contains the needed keyring data. Please give me
some advice on what's wrong.


th

[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-01 Thread Strahil Nikolov
Hi Simone,

> Sorry, it looks empty.

Sadly it's true. This one should be OK.

Best Regards,
Strahil Nikolov


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DGB7TYSWORVXZAGE7UXXCLZS4ANIH72O/


[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-01 Thread Simone Tiraboschi
On Mon, Apr 1, 2019 at 2:31 PM Strahil Nikolov 
wrote:

> Hi Simone,
>
> I am attaching the gluster logs from ovirt1.
> I hope you see something I missed.
>

Sorry, it looks empty.


>
> Best Regards,
> Strahil Nikolov
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A74BIZ6IZ5RBAJE7QKR25MEQERJY244H/


[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-01 Thread Strahil Nikolov
Hi Simone,
I am attaching the gluster logs from ovirt1. I hope you see something I missed.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RRN34XK24F67EPN5UXGF4NVKWAE5235X/


[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Benny Zlotnik
> OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:
>
> 2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error
> connecting to ceph cluster.
> Traceback (most recent call last):
>File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
> line 337, in _do_conn
>  client.connect()
>File "rados.pyx", line 885, in rados.Rados.connect
> (/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.11/rpm/el7/BUILD/ceph-12.2.11/build/src/pybind/rados/pyrex/rados.c:9785)
> OSError: [errno 95] error connecting to the cluster
> 2019-04-01 11:14:54,930 - root - ERROR - Failure occurred when trying to
> run command 'storage_stats': Bad or unexpected response from the storage
> volume backend API: Error connecting to ceph cluster.
>
> I don't really know what to do with that either.
> BTW, the cinder version on engine host is "pike"
> (openstack-cinder-11.2.0-1.el7.noarch)
Not sure if the version is related (I know it's been tested with
pike), but you can try and install the latest rocky (that's what I use
for development)

> Shall I pass "rbd_secret_uuid" in the driver options? But where is this
> UUID created? Where is the ceph secret key stored in oVirt?
I don't think it's needed: Ceph-based volumes are no longer network
disks like in the Cinder integration; they are attached like regular
block devices.
The only options that are a must now are "rbd_keyring_conf" and
"rbd_ceph_conf" (you don't need the first if the path to the keyring
is configured in the latter).
I think you get the error because one of them is missing or incorrect;
I manually removed the keyring path from my configuration and got the
same error as you.
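
As a minimal sketch of the second variant (keyring path set in ceph.conf so that
rbd_keyring_conf can be omitted) on the engine host; the client name, monitor
addresses and paths are assumptions:

    # ceph.conf referenced by the rbd_ceph_conf driver option
    cat /etc/ceph/ceph.conf
    # [global]
    # mon_host = xxx.xxx.216.45,xxx.xxx.216.54,xxx.xxx.216.55
    #
    # [client.ovirt]
    # keyring = /etc/ceph/ceph.client.ovirt.keyring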
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W75WEWNWMUTKCJ6MQPAO72GBABISI4EG/


[ovirt-users] Re: Backup VMs to external USB Disk

2019-04-01 Thread Arik Hadas
On Mon, Apr 1, 2019 at 10:09 AM  wrote:

> Hi all,
> I'm new to oVirt and have tested a lot of ways to back up my VMs to an
> external USB disk.
>
> How have you solved this problem? Does anybody have a tutorial or something
> similar for me?
>

If you can plug the external USB disk into one of the hosts in the cluster,
then you can export your VMs to OVAs and specify the mount point of that
disk on that host as the target.
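
A short sketch of the host-side preparation, assuming the disk shows up as
/dev/sdb1 (device, mount point and ownership are assumptions; the export itself is
then started from the Administration Portal or the API):

    # On the chosen host: mount the USB disk and make it writable for vdsm
    mkdir -p /mnt/usb-backup
    mount /dev/sdb1 /mnt/usb-backup
    chown 36:36 /mnt/usb-backup   # uid/gid 36 = vdsm:kvm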


>
> Thanks for your helps.
>
> Daniel
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UWRIZGJHQULNV47Y6DQ2CHFL2PGP2423/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VBIW4BNZVPG4ICN2I2IMUYIQBWYF325M/


[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Matthias Leopold



On 01.04.19 at 12:07, Benny Zlotnik wrote:

Hi,

Thanks for trying this out!
We added a separate log file for cinderlib in 4.3.2; it should be
available under /var/log/ovirt-engine/cinderlib/cinderlib.log.
It's not perfect yet, and more improvements are coming, but it might
provide some insight into the issue.



OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:

2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error
connecting to ceph cluster.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 337, in _do_conn
    client.connect()
  File "rados.pyx", line 885, in rados.Rados.connect
(/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.11/rpm/el7/BUILD/ceph-12.2.11/build/src/pybind/rados/pyrex/rados.c:9785)
OSError: [errno 95] error connecting to the cluster
2019-04-01 11:14:54,930 - root - ERROR - Failure occurred when trying to 
run command 'storage_stats': Bad or unexpected response from the storage 
volume backend API: Error connecting to ceph cluster.


I don't really know what to do with that either.
BTW, the cinder version on engine host is "pike" 
(openstack-cinder-11.2.0-1.el7.noarch)




 >Although I don't think this is directly connected there is one other
 >question that comes up for me: how are libvirt "Authentication Keys"
 >handled with Ceph "Managed Block Storage" domains? With "standalone
 >Cinder" setups like we are using now you have to configure a "provider"
 >of type "OpenStack Block Storage" where you can configure these keys
 >that are referenced in cinder.conf as "rbd_secret_uuid". How is this
 >supposed to work now?

Now you are supposed to pass the secret in the driver options, something 
like this (using REST):


  <property>
    <name>rbd_ceph_conf</name>
    <value>/etc/ceph/ceph.conf</value>
  </property>
  <property>
    <name>rbd_keyring_conf</name>
    <value>/etc/ceph/ceph.client.admin.keyring</value>
  </property>




Shall I pass "rbd_secret_uuid" in the driver options? But where is this 
UUID created? Where is the ceph secret key stored in oVirt?


thanks
Matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MSGWGFANCKLT2UK3KJVZW5R6IBNRJEJS/


[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Benny Zlotnik
Hi,

Thanks for trying this out!
We added a separate log file for cinderlib in 4.3.2; it should be available
under /var/log/ovirt-engine/cinderlib/cinderlib.log.
It's not perfect yet, and more improvements are coming, but it might
provide some insight into the issue.
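
For example, one could simply follow that file while retrying the operation (path
taken from above):

    tail -f /var/log/ovirt-engine/cinderlib/cinderlib.log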

>Although I don't think this is directly connected there is one other
>question that comes up for me: how are libvirt "Authentication Keys"
>handled with Ceph "Managed Block Storage" domains? With "standalone
>Cinder" setups like we are using now you have to configure a "provider"
>of type "OpenStack Block Storage" where you can configure these keys
>that are referenced in cinder.conf as "rbd_secret_uuid". How is this
>supposed to work now?

Now you are supposed to pass the secret in the driver options, something
like this (using REST):

  <property>
    <name>rbd_ceph_conf</name>
    <value>/etc/ceph/ceph.conf</value>
  </property>
  <property>
    <name>rbd_keyring_conf</name>
    <value>/etc/ceph/ceph.client.admin.keyring</value>
  </property>
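
As a hedged, illustrative sketch of a full request: the endpoint is the standard
storage domains collection, the element layout is my reconstruction of the example
from the cinderlib feature page referenced elsewhere in this thread, and the engine
URL, credentials, names and the host are placeholders:

    # Create a Managed Block Storage domain via the REST API (values are placeholders)
    curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
      -X POST https://engine.example.com/ovirt-engine/api/storagedomains \
      -d '<storage_domain>
            <name>ceph-mbs</name>
            <type>managed_block_storage</type>
            <storage>
              <type>managed_block_storage</type>
              <driver_options>
                <property><name>volume_driver</name><value>cinder.volume.drivers.rbd.RBDDriver</value></property>
                <property><name>rbd_ceph_conf</name><value>/etc/ceph/ceph.conf</value></property>
                <property><name>rbd_keyring_conf</name><value>/etc/ceph/ceph.client.admin.keyring</value></property>
              </driver_options>
            </storage>
            <host><name>myhost</name></host>
          </storage_domain>'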



On Mon, Apr 1, 2019 at 12:51 PM Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> Hi,
>
> I upgraded my test environment to 4.3.2 and now I'm trying to set up a
> "Managed Block Storage" domain with our Ceph 12.2 cluster. I think I got
> all prerequisites, but when saving the configuration for the domain with
> volume_driver "cinder.volume.drivers.rbd.RBDDriver" (and a couple of
> other options) I get "VolumeBackendAPIException: Bad or unexpected
> response from the storage volume backend API: Error connecting to ceph
> cluster" in engine log (full error below). Unfortunately this is a
> rather generic error message and I don't really know where to look next.
> Accessing the rbd pool from the engine host with rbd CLI and the
> configured "rbd_user" works flawlessly...
>
> Although I don't think this is directly connected there is one other
> question that comes up for me: how are libvirt "Authentication Keys"
> handled with Ceph "Managed Block Storage" domains? With "standalone
> Cinder" setups like we are using now you have to configure a "provider"
> of type "OpenStack Block Storage" where you can configure these keys
> that are referenced in cinder.conf as "rbd_secret_uuid". How is this
> supposed to work now?
>
> Thanks for any advice, we are using oVirt with Ceph heavily and are very
> interested in a tight integration of oVirt and Ceph.
>
> Matthias
>
>
> 2019-04-01 11:14:55,128+02 ERROR
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
> (default task-22) [b6665621-6b85-438e-8c68-266f33e55d79] cinderlib
> execution failed: Traceback (most recent call last):
>File "./cinderlib-client.py", line 187, in main
>  args.command(args)
>File "./cinderlib-client.py", line 275, in storage_stats
>  backend = load_backend(args)
>File "./cinderlib-client.py", line 217, in load_backend
>  return cl.Backend(**json.loads(args.driver))
>File "/usr/lib/python2.7/site-packages/cinderlib/cinderlib.py", line
> 87, in __init__
>  self.driver.check_for_setup_error()
>File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
> line 288, in check_for_setup_error
>  with RADOSClient(self):
>File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
> line 170, in __init__
>  self.cluster, self.ioctx = driver._connect_to_rados(pool)
>File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
> line 346, in _connect_to_rados
>  return _do_conn(pool, remote, timeout)
>File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 799, in
> _wrapper
>  return r.call(f, *args, **kwargs)
>File "/usr/lib/python2.7/site-packages/retrying.py", line 229, in call
>  raise attempt.get()
>File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
>  six.reraise(self.value[0], self.value[1], self.value[2])
>File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
>  attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
>File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
> line 344, in _do_conn
>  raise exception.VolumeBackendAPIException(data=msg)
> VolumeBackendAPIException: Bad or unexpected response from the storage
> volume backend API: Error connecting to ceph cluster.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G2V53GZEMALXSOUHRJ7PRPZSOSOMRURK/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/l

[ovirt-users] trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Matthias Leopold

Hi,

I upgraded my test environment to 4.3.2 and now I'm trying to set up a
"Managed Block Storage" domain with our Ceph 12.2 cluster. I think I have
all the prerequisites, but when saving the configuration for the domain with
volume_driver "cinder.volume.drivers.rbd.RBDDriver" (and a couple of
other options) I get "VolumeBackendAPIException: Bad or unexpected
response from the storage volume backend API: Error connecting to ceph
cluster" in the engine log (full error below). Unfortunately this is a
rather generic error message and I don't really know where to look next.
Accessing the rbd pool from the engine host with the rbd CLI and the
configured "rbd_user" works flawlessly...


Although I don't think this is directly connected there is one other 
question that comes up for me: how are libvirt "Authentication Keys" 
handled with Ceph "Managed Block Storage" domains? With "standalone 
Cinder" setups like we are using now you have to configure a "provider" 
of type "OpenStack Block Storage" where you can configure these keys 
that are referenced in cinder.conf as "rbd_secret_uuid". How is this 
supposed to work now?


Thanks for any advice; we are using oVirt with Ceph heavily and are very
interested in a tight integration of oVirt and Ceph.


Matthias


2019-04-01 11:14:55,128+02 ERROR 
[org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
(default task-22) [b6665621-6b85-438e-8c68-266f33e55d79] cinderlib 
execution failed: Traceback (most recent call last):

  File "./cinderlib-client.py", line 187, in main
args.command(args)
  File "./cinderlib-client.py", line 275, in storage_stats
backend = load_backend(args)
  File "./cinderlib-client.py", line 217, in load_backend
return cl.Backend(**json.loads(args.driver))
  File "/usr/lib/python2.7/site-packages/cinderlib/cinderlib.py", line 
87, in __init__

self.driver.check_for_setup_error()
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", 
line 288, in check_for_setup_error

with RADOSClient(self):
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", 
line 170, in __init__

self.cluster, self.ioctx = driver._connect_to_rados(pool)
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", 
line 346, in _connect_to_rados

return _do_conn(pool, remote, timeout)
  File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 799, in 
_wrapper

return r.call(f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/retrying.py", line 229, in call
raise attempt.get()
  File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
six.reraise(self.value[0], self.value[1], self.value[2])
  File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", 
line 344, in _do_conn

raise exception.VolumeBackendAPIException(data=msg)
VolumeBackendAPIException: Bad or unexpected response from the storage 
volume backend API: Error connecting to ceph cluster.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G2V53GZEMALXSOUHRJ7PRPZSOSOMRURK/


[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-01 Thread Simone Tiraboschi
On Sun, Mar 31, 2019 at 11:49 PM Strahil Nikolov 
wrote:

> Hi Guys,
>
> As I'm still quite new to oVirt, I have some trouble pinpointing the problem
> on this one.
> My Hosted Engine (4.3.2) is constantly dying (even when Global
> Maintenance is enabled).
> My interpretation of the logs indicates some lease problem, but I don't
> get the whole picture yet.
>
> I'm attaching the output of 'journalctl -f | grep -Ev "Started
> Session|session opened|session closed"' after I have tried to power on the
> hosted engine (hosted-engine --vm-start).
>
> The nodes are fully updated and I don't see anything in the gluster v5.5
> logs, but I can double check.
>

Can you please attach the gluster logs for the same time frame?


>
> Any hints are appreciated and thanks in advance.
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TRQL5EOCRLELX46GSLJI4V5KT2QCME7U/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YG3M7FZ7LBT33LWPGASGZM2NNSSZOZB7/


[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-01 Thread Simone Tiraboschi
Hi,
to understand what's failing I'd suggest starting by attaching the setup logs.

On Sun, Mar 31, 2019 at 5:06 PM Leo David  wrote:

> Hello Everyone,
> I'm using a 4.3.2 installation, and after running through the HyperConverged
> Setup it fails at the last stage. It seems that the previously created "engine"
> volume is not mounted under the "/rhev" path, therefore the setup cannot finish
> the deployment.
> Any idea which services are responsible for mounting the volumes on the
> oVirt Node distribution? I'm thinking that maybe this particular one
> failed to start for some reason...
> Thank you very much!
>
> --
> Best regards, Leo David
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUXDAQHVNZWF4TIXZ3GIBZHSJ7IC2VHC/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SMZEMNPTF7EE466BIWX6S2SUF72N4A7M/


[ovirt-users] Backup VMs to external USB Disk

2019-04-01 Thread daniel94 . oeller
Hi all,
I'm new to oVirt and have tested a lot of ways to back up my VMs to an
external USB disk.

How have you solved this problem? Does anybody have a tutorial or something
similar for me?

Thanks for your help.

Daniel
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UWRIZGJHQULNV47Y6DQ2CHFL2PGP2423/