Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@glusterp2 log]#
=
On 2 November 2016 at 11:24, Thing <thing.th...@gmail.com> wrote:
> I have 3 gluster nodes
Hi,
I have installed IPA across 3 nodes. In order to point the ovirt server at
the new IPA/DNS servers and to clean up I ran engine-cleanup aiming to
delete the ovirt setup. However, it seems that even though I ran this,
something ("vdsm"?) is still running and controlling the networking.
So down
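For what it's worth, the first thing to check would be whether vdsm itself is
still alive on the box; the service names below are from a stock vdsm install,
so treat this as a sketch rather than a confirmed fix:

# is vdsm still running and enabled?
systemctl status vdsmd supervdsmd
# if engine-cleanup left vdsm behind, stop and disable it
systemctl stop vdsmd supervdsmd
systemctl disable vdsmd supervdsmd
# vdsm normally replaces host networking with an 'ovirtmgmt' bridge; look for it
ip link show ovirtmgmt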
Hi,
So I was trying to make a 3-way mirror and it reported as failed. Now I get
these messages,
On glusterp1,
=
[root@glusterp1 ~]# gluster peer status
Number of Peers: 1
Hostname: 192.168.1.32
Uuid: ef780f56-267f-4a6d-8412-4f1bb31fd3ac
State: Peer in Cluster (Connected)
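For reference, if the pool really has dropped to a single peer, the minimal
recovery is to re-probe the missing node from a connected one; the .33 address
below is assumed from the other outputs in this thread:

gluster peer status                # confirm which peers are still known
gluster peer probe 192.168.1.33    # re-probe the node that dropped out (address assumed)
gluster peer status                # both peers should now show Connected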
Hi,
I have 3 gluster nodes running,
==
[root@glusterp1 ~]# gluster peer status
Number of Peers: 2
Hostname: 192.168.1.33
Uuid: 0fde5a5b-6254-4931-b704-40a88d4e89ce
State: Peer in Cluster (Connected)
Hostname: 192.168.1.32
Uuid: ef780f56-267f-4a6d-8412-4f1bb31fd3ac
State: Peer in Cluster
So far, from reading, it appears this only applies to "proper" servers? i.e.
without an iLo card there is nothing to do?
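(For reference, if a box does have an iLo or similar management card, the
fence device can be sanity-checked by hand before wiring it into the Power
Management tab; the address and credentials below are placeholders:)

# from any host with the fence-agents package installed; all values are placeholders
fence_ilo4 -a 192.168.1.50 -l admin -p secret -o status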
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
> 'Hosts' tab and logs from /var/log/ovirt-engine/engine.log should help us
> to understand the situation there.
>
> Regards,
> Ramesh
>
>
>
>
> ----- Original Message -----
> > From: "Thing" <thing.th...@gmail.com>
> > To: "users"
Ok, I have struggled with this for 2 hours now, glusterp2 and the ovirt
server are basically not talking at all. I have rebooted both, I don't know
how many times. Reading via Google there seems to be no fix for this bar a
manual hack of the ovirt server's database to delete the host glusterp2?
lusterTasksService] (DefaultQuartzScheduler1) [2d374249] No up server in cluster
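For the record, the manual hack people describe amounts to forcing the host's
status back to Maintenance in the engine database. The table names and status
value below are my reading of the engine schema, so back up the database and
verify before trying anything like this:

# on the engine machine, after a database backup; 'engine' is the default DB name
su - postgres -c "psql engine -c \"UPDATE vds_dynamic SET status = 2 WHERE vds_id = (SELECT vds_id FROM vds_static WHERE vds_name = 'glusterp2');\""
# status 2 should map to Maintenance in the VDSStatus enum (assumption)
systemctl restart ovirt-engine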
===
On 27 October 2016 at 13:45, Thing <thing.th...@gmail.com> wrote:
> While trying to figure out how to deploy storage I put 1 host into
> maintenance mode; trying to re-activate it, it's failed.
>
> It seems
While trying to figure out how to deploy storage I put 1 host into
maintenance mode; trying to re-activate it, it's failed.
It seems to be stuck, neither activated nor in maintenance, so how would
I go about fixing this?
So what log(s) would this be written to?
Any idea why the ssh keys are failing?
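For reference, these are the two logs to watch, plus a by-hand test of the
engine's deploy key (the key path is from a stock engine install):

# on the engine, while re-activating the host
tail -f /var/log/ovirt-engine/engine.log
# on the host itself, vdsm's side of the story
tail -f /var/log/vdsm/vdsm.log
# test the engine's ssh key against the host by hand
ssh -i /etc/pki/ovirt-engine/keys/engine_id_rsa root@glusterp2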
On 27 October 2016 at 11:08, Thing <thing.th...@gmail.com> wrote:
> oopsie " Are the three hosts subscribed to the ovirt repos?"
>
> no, I will try again doing so. I didn't notice this as a requirement so I
> assumed the
virt-engine
> will provide the path to the logs.
>
> HTH,
> sahina
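(For reference, on CentOS 7 subscribing a host to the oVirt 4.0 repos is
normally just a matter of installing the release RPM; the URL below is for
the 4.0 series, so check resources.ovirt.org for the current one:)

# on each of the three hosts
yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
yum repolist enabled | grep -i ovirt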
>
> On Wed, Oct 26, 2016 at 4:06 AM, Thing <thing.th...@gmail.com> wrote:
>
>> Hi,
>>
>> I have ovirt 4.0.4 running on a centos 7.2 machine.
>>
>> I have 3 identical centos 7.2
Hi,
I have ovirt 4.0.4 running on a centos 7.2 machine.
I have 3 identical centos 7.2 machines I want to add as a gluster storage
3-way mirror array. The admin guide doesn't seem to show how to do this? I
have set up ssh keys for root access. I have set up a 1TB LUN on each ready
for
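For the record, once the peers are connected, a replica-3 volume over those
LUNs would look roughly like this; the brick mount point, volume name, and
the .31 address for glusterp1 are my assumptions:

# assumes each 1TB LUN carries a filesystem mounted at /bricks/brick1 on its node
gluster volume create gvol0 replica 3 \
  192.168.1.31:/bricks/brick1/gvol0 \
  192.168.1.32:/bricks/brick1/gvol0 \
  192.168.1.33:/bricks/brick1/gvol0
gluster volume start gvol0
gluster volume info gvol0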