I wonder if I should add their Ethernet (MAC) address range to the pools of
the gaining Manager...
On Fri, Nov 18, 2016 at 9:35 PM Kenneth Bingham <w...@qrk.us> wrote:
> I imported a guest from its iSCSI storage domain and clicked the green UP
> button, but the guest failed to start. This was the first time vdsm tried
> to create a temporary storage domain for a host other than hosted_engine.
I imported a guest from its iSCSI storage domain and clicked the green UP
button, but the guest failed to start. This was the first time vdsm tried
to create a temporary storage domain for a host other than hosted_engine.
I'm using the same CHAP credential that was used with the same iSCSI
storage domain.
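To rule out the credential itself, the CHAP login can be tested by hand from
the failing host with iscsiadm. This is only a sketch; the portal address,
target IQN, and CHAP user/password below are placeholders:

Test the CHAP login manually
===
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
iscsiadm -m node -T iqn.2016-01.example:target -p 192.0.2.10 -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2016-01.example:target -p 192.0.2.10 -o update -n node.session.auth.username -v chapuser
iscsiadm -m node -T iqn.2016-01.example:target -p 192.0.2.10 -o update -n node.session.auth.password -v chappass
iscsiadm -m node -T iqn.2016-01.example:target -p 192.0.2.10 --login

If the manual login also fails, the problem is on the target side rather
than in vdsm.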
…GitHub to update the following page?
>
> http://www.ovirt.org/develop/release-management/features/infra/pki/
>
> Best regards,
>
> > Best,
> >
> > Daniel
> >
> > From: <users-boun...@ovirt.org> on behalf of …
Please copy the private key of /etc/pki/ovirt-engine/ca.pem to
/etc/pki/ovirt-engine/private/ca.pem and let me know if everything works.
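A minimal sketch of that copy, assuming the CA's private key lives at
/path/to/ca.key (a placeholder) and that the key should be readable only by
the ovirt service user (an assumption about ownership and mode):

cp /path/to/ca.key /etc/pki/ovirt-engine/private/ca.pem
chown ovirt:ovirt /etc/pki/ovirt-engine/private/ca.pem
chmod 600 /etc/pki/ovirt-engine/private/ca.pem
service ovirt-engine restart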
On Thu, Oct 27, 2016 at 2:47 PM, Kenneth Bingham <w...@qrk.us> wrote:
Thanks Ravi, that's helpful and I appreciate the precision and attention…
> cp certificate.p12 /etc/pki/ovirt-engine/keys/apache.p12
> cp apache.cer /etc/pki/ovirt-engine/certs/apache.cer
> cp apache.key.nopass /etc/pki/ovirt-engine/keys/apache.key.nopass
>
> Restart engine and httpd
> ===
> service httpd restart
> service ovirt-engine restart
>
> On Th
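After the restart, the certificate Apache actually serves can be checked
from any client; a quick sketch, with engine.example.com standing in for the
Manager's FQDN:

openssl s_client -connect engine.example.com:443 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates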
I did install a server certificate from a private CA on the engine server
for the oVirt 4 Manager GUI, but haven't figured out how to configure the
engine to trust the same CA, which also issued the server certificate
presented by vdsm. This is important for us because this is the same server…
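One avenue worth trying, though unverified here: the engine keeps its trust
anchors in a Java keystore, assumed below to be
/etc/pki/ovirt-engine/.truststore with the default store password mypass
(both are assumptions to check against your install). The private CA could
then be imported with keytool:

keytool -import -alias private-ca -file /path/to/private-ca.pem -keystore /etc/pki/ovirt-engine/.truststore -storepass mypass
service ovirt-engine restart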
Is it possible to "pivot" guests (up or down) from one instance of Manager to
another instance of Manager by detaching the data storage domain from the
source Manager's data center and attaching it to the destination Manager's
data center? I tried this with a block-type storage domain, but there are…
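For reference, the detach and attach steps themselves can be driven through
the REST API. The sketch below assumes the domain has already been put into
maintenance on the source side and imported on the destination side, and the
hostnames, IDs, and password are placeholders throughout:

Detach from the source data center
===
curl -k -u admin@internal:PASSWORD -X DELETE https://src-engine/ovirt-engine/api/datacenters/DC_ID/storagedomains/SD_ID

Attach to the destination data center
===
curl -k -u admin@internal:PASSWORD -X POST -H 'Content-Type: application/xml' -d '<storage_domain id="SD_ID"/>' https://dst-engine/ovirt-engine/api/datacenters/DC_ID/storagedomains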
Fernando, that is the recommended approach: the ovirtmgmt bridge should not
also be used for VMs. This is for the sake of separation of privilege, if I
understand correctly, and there's no reason that VMs would not be able to
communicate using that bridge. Whether or not you tag depends on your
network's…
…Roy Golan <rgo...@redhat.com> wrote:
> On 30 July 2016 at 02:48, Kenneth Bingham <w...@qrk.us> wrote:
>
>> Aw crap. I did exactly the same thing and this could explain a lot of the
>> issues I've been pulling out my beard over. Every time I did 'hosted-engine
>> --deploy' on the RHEV-M|NODE host I entered the FQDN of *that* host, not
>> the first host, as the origin of the Gluster FS volume…
…mode on a fresh install.
On Sun, Jul 31, 2016 at 3:07 AM Roy Golan <rgo...@redhat.com> wrote:
> On 31 July 2016 at 06:20, Kenneth Bingham <w...@qrk.us> wrote:
>
>> Correction: /data is the master, /engine is stuck in status "Locked", but
>> is also the volume…
The import of the /engine storage domain succeeded this time and the hosted
engine VM now appears in the "Virtual Machines" tab. It was always visible
in 'virsh --readonly list'.
On Sun, Jul 31, 2016 at 3:20 AM Roy Golan <rgo...@redhat.com> wrote:
> On 31 July 2016 at 02:24, Kenneth Bingham <w...@qrk.us> wrote:
I had the same problem until I imported the CA certificate from the engine
as a trusted root (self-signed) authority in my browser.
ENGINE:/etc/pki/ovirt-engine/ca.pem
Further possibilities are mentioned here:
https://access.redhat.com/solutions/718653
On Sat, Jul 30, 2016 at 11:12 PM Anantha
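The CA certificate can also be downloaded straight from the engine's PKI
resource endpoint; engine.example.com below is a placeholder for the
Manager's FQDN:

curl -o ca.pem 'http://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'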
…simultaneously. The volume is still locked.
On Sat, Jul 30, 2016 at 5:24 PM Kenneth Bingham <w...@qrk.us> wrote:
> Please help me determine whether this is a defect or user error.
>
> The master storage domain "hosted_storage" was automatically imported in
> oVirt manager when I created another new storage domain "/data", a Gluster
> FS volume, immediately after doing 'hosted-engine --deploy'…
When I set local maintenance on the self-hosted engine hypervisor host
through the manager GUI it becomes stuck in state "Preparing for
maintenance". The hosted engine virtual guest is automatically migrated
because the HA broker is successfully set to maintenance mode=local. The
host returns to…
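For comparison, the same maintenance transition can be driven from the host
side with the hosted-engine tool; a brief sketch:

hosted-engine --vm-status
hosted-engine --set-maintenance --mode=local
hosted-engine --set-maintenance --mode=none

If the GUI state and --vm-status disagree, that points at the engine rather
than the HA broker.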
Please help me determine whether this is a defect or user error.
The master storage domain "hosted_storage" was automatically imported in
oVirt manager when I created another new storage domain "/data", a Gluster
FS volume, immediately after doing 'hosted-engine --deploy' on the…
Aw crap. I did exactly the same thing and this could explain a lot of the
issues I've been pulling out my beard over. Every time I did 'hosted-engine
--deploy' on the RHEV-M|NODE host I entered the FQDN of *that* host, not
the first host, as the origin of the Gluster FS volume because at the time…
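Which hostname the volume was actually created against can be read back from
the brick list; a quick check, with VOLNAME as a placeholder:

gluster volume info VOLNAME

The Brick1/Brick2 entries show the host:path pairs exactly as Gluster
recorded them.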
Do you know of a way to force vdsmd to forget what it knows about the
network configuration on the hypervisor host? If I synchronize or make any
change through the engine, the host drops off the network because of an
invalid configuration pushed by some component of oVirt which creates bond1
with…
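One hedged suggestion: vdsm persists its network definitions and can roll the
running state back to them. Assuming unified network persistence is in use
(an assumption), the persisted config under /var/lib/vdsm/persistence/netconf/
can be inspected or moved aside before reapplying:

ls /var/lib/vdsm/persistence/netconf/nets/
vdsm-tool restore-nets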