Re: [ovirt-users] unable to pull 2 gluster nodes into ovirt

2016-11-01 Thread Thing
FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@glusterp2 log]# =
On 2 November 2016 at 11:24, Thing <thing.th...@gmail.com> wrote: > I have 3 gluster nodes
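The empty chains with policy ACCEPT above suggest nothing on glusterp2 is filtering traffic at the moment, so the firewall itself is probably not what keeps the nodes out of ovirt. If firewalld is later enabled (or when oVirt's host-deploy takes over the firewall), these are the ports gluster and the engine need; a minimal manual sketch, assuming default gluster port ranges:

# on each gluster node
firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd management
firewall-cmd --permanent --add-port=49152-49251/tcp   # brick ports (one per brick)
firewall-cmd --permanent --add-port=54321/tcp         # VDSM, used by the engine
firewall-cmd --reload
firewall-cmd --list-all                               # verify what is actually open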

[ovirt-users] Setting DNS servers problem on ovirt

2016-10-31 Thread Thing
Hi, I have installed IPA across 3 nodes. In order to point the ovirt server at the new IPA/DNS servers and to clean up, I ran engine-cleanup, aiming to delete the ovirt setup. However, it seems that even though I ran this, something ("vdsm"?) is still running and controlling the networking. So down
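engine-cleanup only removes the engine-side setup; a host that was ever managed by oVirt keeps vdsmd running, and vdsm persists and re-applies its own network configuration. A minimal sketch of taking vdsm out of the loop on such a host so the interfaces can be reconfigured by hand (standard vdsm unit names; double-check before disabling anything):

systemctl status vdsmd supervdsmd      # confirm vdsm really is what owns the network
systemctl stop vdsmd supervdsmd
systemctl disable vdsmd supervdsmd
ls /etc/sysconfig/network-scripts/     # vdsm-written ifcfg files live here; edit or replace as needed
systemctl restart network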

[ovirt-users] messed up gluster attempt

2016-10-27 Thread Thing
Hi, so I was trying to make a 3-way mirror and it reported a failure. Now I get these messages. On glusterp1, =
[root@glusterp1 ~]# gluster peer status
Number of Peers: 1
Hostname: 192.168.1.32
Uuid: ef780f56-267f-4a6d-8412-4f1bb31fd3ac
State: Peer in Cluster (Connected)
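A create that fails part-way can leave the peer list inconsistent (glusterp1 only sees one peer here) and a half-defined volume behind. A minimal cleanup-and-retry sketch, using the addresses from the thread; <volname> is a placeholder for whatever name the failed create used:

gluster peer status                    # run on all three nodes; each should list the other two
gluster peer probe 192.168.1.33        # re-probe whichever peer is missing
gluster volume info                    # check whether the failed volume was partly created
gluster volume stop <volname>          # only if it exists and is started
gluster volume delete <volname>
# brick directories from a failed attempt keep gluster xattrs (e.g. trusted.glusterfs.volume-id);
# remove and recreate the brick directories before retrying the volume create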

[ovirt-users] gluster how to setup a volume across 3 nodes via ovirt

2016-10-27 Thread Thing
Hi, I have 3 gluster nodes running, ==
[root@glusterp1 ~]# gluster peer status
Number of Peers: 2
Hostname: 192.168.1.33
Uuid: 0fde5a5b-6254-4931-b704-40a88d4e89ce
State: Peer in Cluster (Connected)
Hostname: 192.168.1.32
Uuid: ef780f56-267f-4a6d-8412-4f1bb31fd3ac
State: Peer in Cluster
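Once all three peers show as connected, the replica-3 volume can be created from the oVirt UI (Volumes tab of a gluster-enabled cluster) or straight from the CLI. A minimal CLI sketch, assuming a brick filesystem mounted at /gluster/brick1 on each node and a volume name of vmstore (both are assumptions, not from the thread):

gluster volume create vmstore replica 3 \
  glusterp1:/gluster/brick1/vmstore \
  glusterp2:/gluster/brick1/vmstore \
  glusterp3:/gluster/brick1/vmstore
gluster volume start vmstore
gluster volume info vmstore
# commonly recommended when the volume will hold oVirt VM images:
gluster volume set vmstore group virt
gluster volume set vmstore storage.owner-uid 36   # vdsm user
gluster volume set vmstore storage.owner-gid 36   # kvm group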

[ovirt-users] power management configuration.

2016-10-27 Thread Thing
So far from reading, it appears this only applies to "proper" servers? i.e. without an iLO card there is nothing to do?
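Power management in oVirt means fencing through a BMC (iLO, iDRAC, plain IPMI, and similar), so on a box with no such card there is indeed nothing to configure; the host still works, it just cannot be fenced automatically. Where a BMC does exist, the fence agent can be tested from another machine before filling in the Power Management tab; a hedged sketch with placeholder address and credentials:

fence_ipmilan -a 192.168.1.50 -l admin -p secret -o status   # should report the power state
# the same address/user/password then go into Edit Host -> Power Management in the UI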

Re: [ovirt-users] host gaga and ovirt cannot control it.

2016-10-27 Thread Thing
> 'Hosts' tab and logs from /var/log/ovirt-engine/engine.log should help us > to understand the situation there. > > Regards, > Ramesh > > > > > - Original Message - > > From: "Thing" <thing.th...@gmail.com> > > To: "users"

[ovirt-users] host gaga and ovirt cannot control it.

2016-10-26 Thread Thing
Ok, I have struggled with this for 2 hours now; glusterp2 and the ovirt server are basically not talking at all. I have rebooted both, I don't know how many times. Reading via Google, there seems to be no fix for this bar a manual hack of the ovirt server's database to delete the host glusterp2?
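Before hand-editing the engine database, it is worth confirming that vdsmd is actually running on glusterp2 and that the engine can reach it on the VDSM port (TCP 54321). A minimal check, using the host name from the thread:

# on glusterp2
systemctl status vdsmd
journalctl -u vdsmd -n 100             # recent vdsm errors
# on the engine machine
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/glusterp2/54321' && echo "vdsm port reachable"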

Re: [ovirt-users] failed to activate a (gluster) host. ovirt 4.0.4

2016-10-26 Thread Thing
lusterTasksService] (DefaultQuartzScheduler1) [2d374249] No up server in cluster === On 27 October 2016 at 13:45, Thing <thing.th...@gmail.com> wrote: > While trying to figure out how to deploy storage, I put 1 host into > maintenance mode; trying to re-activate it has failed. > > It seems
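"No up server in cluster" generally means the engine does not see any gluster host in the cluster as Up, so it has nowhere to run gluster commands from. Checking glusterd itself on each node narrows things down; a minimal sketch:

# on each gluster node
systemctl status glusterd
gluster peer status
# if glusterd is down, start it and look at its log (exact file name varies by gluster version)
systemctl start glusterd
tail -n 50 /var/log/glusterfs/*glusterd*.log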

[ovirt-users] failed to activate a (gluster) host. ovirt 4.0.4

2016-10-26 Thread Thing
While trying to figure out how to deploy storage, I put 1 host into maintenance mode; trying to re-activate it has failed. It seems to be stuck, neither activated nor in maintenance, so how would I go about fixing this? What log(s) would this be written to?
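For a host stuck between states, the useful logs are on both sides; the paths below are the standard oVirt/VDSM locations:

# on the engine machine
tail -f /var/log/ovirt-engine/engine.log
# on the affected host
tail -f /var/log/vdsm/vdsm.log
tail -f /var/log/vdsm/supervdsm.log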

Re: [ovirt-users] adding 3 machines as gluster nodes to ovirt 4.0.4

2016-10-26 Thread Thing
any idea why the ssh keys are failing? On 27 October 2016 at 11:08, Thing <thing.th...@gmail.com> wrote: > oopsie "Are the three hosts subscribed to the ovirt repos?" > > no, I will try again after doing so. I didn't notice this as a requirement so I > assumed the
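When a host is added, the engine logs in as root over ssh (either with the root password typed into the Add Host dialog or with the engine's public key already present on the host), and host-deploy then needs the oVirt repositories on the host in order to install vdsm. Two quick checks, assuming the 4.0 release RPM from resources.ovirt.org matches the engine version:

# from the engine machine: does plain root ssh to the host work at all?
ssh root@glusterp1 true
grep -i '^PermitRootLogin' /etc/ssh/sshd_config     # on the host; must allow root login
# on each host: make the ovirt 4.0 repos available
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm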

Re: [ovirt-users] adding 3 machines as gluster nodes to ovirt 4.0.4

2016-10-26 Thread Thing
virt-engine > will provide the path to the logs. > > HTH, > sahina > > On Wed, Oct 26, 2016 at 4:06 AM, Thing <thing.th...@gmail.com> wrote: > >> Hi, >> >> I have ovirt 4.0.4 running on a centos 7.2 machine. >> >> I have 3 identical centos 7.2
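The per-host installation attempt is also logged on the engine itself; the host-deploy logs usually show exactly why adding a host failed (missing repos, ssh refusal, and so on):

ls -lt /var/log/ovirt-engine/host-deploy/     # one log per add-host attempt, newest first
less /var/log/ovirt-engine/host-deploy/$(ls -t /var/log/ovirt-engine/host-deploy/ | head -1)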

[ovirt-users] adding 3 machines as gluster nodes to ovirt 4.0.4

2016-10-25 Thread Thing
Hi, I have ovirt 4.0.4 running on a centos 7.2 machine. I have 3 identical centos 7.2 machines that I want to add as a 3-way gluster mirror storage array. The admin guide doesn't seem to show how to do this? I have set up ssh keys for root access. I have set up a 1TB LUN on each, ready for
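Each 1TB LUN still has to be turned into a brick filesystem before gluster (or the oVirt UI) can use it. A common layout is LVM plus XFS mounted under a dedicated brick path; a minimal sketch, assuming the LUN appears as /dev/sdb and the mount point /gluster/brick1 (both assumptions):

pvcreate /dev/sdb
vgcreate vg_gluster /dev/sdb
lvcreate -l 100%FREE -n lv_brick1 vg_gluster
mkfs.xfs -i size=512 /dev/vg_gluster/lv_brick1        # 512-byte inodes leave room for gluster xattrs
mkdir -p /gluster/brick1
echo '/dev/vg_gluster/lv_brick1 /gluster/brick1 xfs defaults 0 0' >> /etc/fstab
mount /gluster/brick1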