On Tue, Dec 4, 2018 at 11:32 AM Abhishek Sahni wrote:
> Hello Team,
>
>
> We are running a 3-way replica HC Gluster setup configured during the
> initial deployment from the Cockpit console using Ansible.
>
> NODE1
> - /dev/sda (OS)
> - /dev/sdb ( Gluster Bricks )
>* /glu
Hello Team,
We are running a 3-way replica HC Gluster setup configured during the
initial deployment from the Cockpit console using Ansible.
NODE1
- /dev/sda (OS)
- /dev/sdb ( Gluster Bricks )
* /gluster_bricks/engine/engine/
* /gluster_bricks/data/data/
* /g
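For a layout like this, the health of each replica-3 volume can be checked
from any of the nodes; a minimal sketch, assuming the volume names match the
brick paths above (engine, data):

  # Topology check: should report Type: Replicate, Number of Bricks: 1 x 3 = 3
  gluster volume info engine
  # All bricks and self-heal daemons should show Online
  gluster volume status engine
  # Entries still pending heal, per brick
  gluster volume heal engine info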
Hello,
We currently have a self-hosted engine on Gluster with 3 hosts. We want to move
the engine to a single VM on a standalone KVM host.
We did the following steps on our test platform.
- Create a VM on a standalone KVM
- Put the self hosted engine into global maintenance
- Shut the self-hoste
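A minimal sketch of the global-maintenance and shutdown steps described above,
using the standard hosted-engine CLI on one of the HE hosts:

  # Stop the HA agents from restarting the engine VM automatically
  hosted-engine --set-maintenance --mode=global
  # Cleanly shut down the engine VM
  hosted-engine --vm-shutdown
  # Confirm the VM is down and global maintenance is flagged on all hosts
  hosted-engine --vm-status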
Hello Peter,
if you install it on one VM it will only run on one of
the hosts; however, if you had a distributed scanner it could run on multiple
hosts. I think there was some work on a distributed scanner in Docker (I don't
know if there is an OpenShift or Kubernetes version) but it ma
OK, so two things. One, I found this bug report, which I believe is
relevant to my issue: https://bugzilla.redhat.com/show_bug.cgi?id=1497931
And secondly, using a tip from that bug report, I found that I in fact
have two identical LVMs. I need to remove the one that already exists on
the FC
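A minimal sketch for locating the stale copy before removing it; the VG/LV
names below are placeholders, so verify against the bug report's guidance
before running lvremove:

  # List LVs with their backing devices to spot the duplicated image
  lvs -o vg_name,lv_name,lv_size,devices
  # Look for duplicate PV / VG UUID warnings
  pvs -o pv_name,vg_name,vg_uuid
  # Once the stale LV is identified, deactivate and remove it (placeholder names)
  lvchange -an <vg_name>/<stale_lv_name>
  lvremove <vg_name>/<stale_lv_name>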
I believe I have found the relevant error in the engine.log.
Exception: 'VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksSta
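To see the complete exception and the task it refers to, the engine log and
the SPM host's vdsm log can be cross-checked; a minimal sketch using the
default log locations:

  # On the engine machine: full stack trace around the failure
  grep -A 20 'HSMGetAllTasksStatuses' /var/log/ovirt-engine/engine.log
  # On the SPM host: the matching task activity
  grep -i 'getAllTasksStatuses' /var/log/vdsm/vdsm.log
  # Optionally query the host directly (vdsm-client is available on 4.2 hosts)
  vdsm-client Host getAllTasksInfo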
Any thoughts on how I remove an un-imported VM from my Fibre
Channel storage? I cannot move my working VMs back to the FC, and I
believe it is because the older un-imported version is creating a conflict.
On 11/29/2018 03:12 PM, Jacob Green wrote:
Ok, so here is the situation, before m
Hello,
attached is the qemu log.
This is the problem:
Could not open
'/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08':
Permission denied
When I do "su - vdsm -s /bin/bash"
I can hexdump the file !
-bash-4.2$ id
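A minimal sketch for narrowing down that permission error, using the image
path from the log above; ownership should normally be vdsm:kvm (36:36) along
the whole symlink chain:

  IMG=/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08
  # Owner, group and SELinux label of the image itself
  ls -lZ "$IMG"
  # Permissions of every path component and symlink target
  namei -l "$IMG"
  # qemu typically opens the disk as the qemu user, so test as qemu too, not only vdsm
  sudo -u qemu head -c 512 "$IMG" > /dev/null && echo "qemu can read it"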
On Sat, Dec 1, 2018 at 5:32 AM Andrew DeMaria wrote:
> Hi,
>
> I've been testing the hosted-engine restore process and have gotten stuck
> because I cannot remove the old hosted-engine storage domain. Here is the
> process I went through:
>
> 1. Set up two 70G LUNs under an iSCSI target (one for t
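A minimal sketch for finding the id of the stale hosted_storage domain through
the REST API (engine URL and credentials are placeholders); the actual
detach/destroy is normally done from the Administration Portal once the domain
is in maintenance:

  # List all storage domains and note the id of the old hosted-engine domain
  curl -s -k -u 'admin@internal:PASSWORD' -H 'Accept: application/json' \
      https://engine.example.com/ovirt-engine/api/storagedomains | python -m json.tool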
On Sat, Dec 1, 2018 at 3:30 PM Sinan Polat wrote:
> Hi folks,
>
>
> A while ago I installed oVirt 4.2 on my CentOS 7.5 server. It is a single
> node.
>
>
> Everything is working, I can deploy VMs, etc. But it looks like the
> hosted-engine is only partly installed or something.
>
>
> [root@s01
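A minimal sketch of the checks that usually show whether the hosted-engine
pieces are fully deployed on a node (run as root):

  # Does this host have a completed hosted-engine deployment?
  hosted-engine --check-deployed
  # Engine VM state, score and maintenance flags across the cluster
  hosted-engine --vm-status
  # The two HA services that must be running for hosted-engine
  systemctl status ovirt-ha-agent ovirt-ha-broker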
On Mon, Dec 3, 2018 at 2:07 PM Ralf Schenk wrote:
> Hello,
>
> I am trying to deploy the hosted-engine to an NFS share accessible by
> (currently) two hosts. The host is running the latest ovirt-node-ng 4.2.7.
>
> hosted-engine --deploy fails consistently at a late stage when trying to run
> the engine from NFS. It alr
Thanks very much for these suggestions and assistance.
Peter
On Mon., Dec. 3, 2018, 3:26 a.m. Staniforth, Paul <p.stanifo...@leedsbeckett.ac.uk> wrote:
> Hello Peter,
>if you install it on one VM it will only run on one
> of the hosts however if you had a distributed scan
Hello,
I am trying to deploy the hosted-engine to an NFS share accessible by (currently)
two hosts. The host is running the latest ovirt-node-ng 4.2.7.
hosted-engine --deploy fails consistently at a late stage when trying to run
the engine from NFS. It already ran as "HostedEngineLocal" and I think it is
then migrated to
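Late-stage failures when the engine is moved to the NFS domain are often
export-permission problems; a minimal sketch of the usual checks, with
placeholder server and path names (oVirt expects the export to be usable by
UID/GID 36):

  # On the NFS server: export root owned by vdsm:kvm (36:36)
  chown 36:36 /exports/hosted_engine
  chmod 0755 /exports/hosted_engine
  # Example /etc/exports entry, then re-export:
  #   /exports/hosted_engine  *(rw,anonuid=36,anongid=36,all_squash)
  exportfs -ra
  # On the host: mount the share manually and verify the vdsm user can write
  mkdir -p /mnt/he-test
  mount -t nfs nfsserver.example.com:/exports/hosted_engine /mnt/he-test
  su - vdsm -s /bin/bash -c 'touch /mnt/he-test/write-test && rm /mnt/he-test/write-test'
  umount /mnt/he-test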
Hi,
On Sun, Nov 25, 2018 at 02:48:13PM -0500, Alex McWhirter wrote:
> I'm having an odd issue that I find hard to believe could be a
> bug, and not some kind of user error, but I'm at a loss for
> where else to look.
Looks like a bug or bad configuration between host/guest. Let's
see.
> when boot