Re: [ovirt-users] unable to pull 2 gluster nodes into ovirt

2016-11-01 Thread Thing
DNS responds fine,

==
[root@ovirt1 host-deploy]# host 192.168.1.31
31.1.168.192.in-addr.arpa domain name pointer glusterp1.ods.graywitch.co.nz.
[root@ovirt1 host-deploy]# host 192.168.1.32
32.1.168.192.in-addr.arpa domain name pointer glusterp2.ods.graywitch.co.nz.
[root@ovirt1 host-deploy]# host 192.168.1.33
33.1.168.192.in-addr.arpa domain name pointer glusterp3.ods.graywitch.co.nz.
[root@ovirt1 host-deploy]# host 192.168.1.34
34.1.168.192.in-addr.arpa domain name pointer ovirt1.ods.graywitch.co.nz.
[root@ovirt1 host-deploy]# host glusterp1.ods.graywitch.co.nz
glusterp1.ods.graywitch.co.nz has address 192.168.1.31
[root@ovirt1 host-deploy]# host glusterp2.ods.graywitch.co.nz
glusterp2.ods.graywitch.co.nz has address 192.168.1.32
[root@ovirt1 host-deploy]# host glusterp3.ods.graywitch.co.nz
glusterp3.ods.graywitch.co.nz has address 192.168.1.33
[root@ovirt1 host-deploy]# more /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.31    glusterp1.ods.graywitch.co.nz   glusterp1
192.168.1.32    glusterp2.ods.graywitch.co.nz   glusterp2
192.168.1.33    glusterp3.ods.graywitch.co.nz   glusterp3
192.168.1.34    ovirt1.ods.graywitch.co.nz      ovirt1
[root@ovirt1 host-deploy]# host glusterp2.ods.graywitch.co.nz glusterp1.ods.graywitch.co.nz
Using domain server:
Name: glusterp1.ods.graywitch.co.nz
Address: 192.168.1.31#53
Aliases:

glusterp2.ods.graywitch.co.nz has address 192.168.1.32
[root@ovirt1 host-deploy]# host glusterp1.ods.graywitch.co.nz glusterp1.ods.graywitch.co.nz
Using domain server:
Name: glusterp1.ods.graywitch.co.nz
Address: 192.168.1.31#53
Aliases:

glusterp1.ods.graywitch.co.nz has address 192.168.1.31
[root@ovirt1 host-deploy]# host glusterp3.ods.graywitch.co.nz glusterp1.ods.graywitch.co.nz
Using domain server:
Name: glusterp1.ods.graywitch.co.nz
Address: 192.168.1.31#53
Aliases:

glusterp3.ods.graywitch.co.nz has address 192.168.1.33
[root@ovirt1 host-deploy]# host glusterp2.ods.graywitch.co.nz glusterp2.ods.graywitch.co.nz
Using domain server:
Name: glusterp2.ods.graywitch.co.nz
Address: 192.168.1.32#53
Aliases:

glusterp2.ods.graywitch.co.nz has address 192.168.1.32
[root@ovirt1 host-deploy]# host glusterp1.ods.graywitch.co.nz glusterp2.ods.graywitch.co.nz
Using domain server:
Name: glusterp2.ods.graywitch.co.nz
Address: 192.168.1.32#53
Aliases:

glusterp1.ods.graywitch.co.nz has address 192.168.1.31
[root@ovirt1 host-deploy]# host glusterp3.ods.graywitch.co.nz glusterp2.ods.graywitch.co.nz
Using domain server:
Name: glusterp2.ods.graywitch.co.nz
Address: 192.168.1.32#53
Aliases:

glusterp3.ods.graywitch.co.nz has address 192.168.1.33
[root@ovirt1 host-deploy]# host ovirt1.ods.graywitch.co.nz glusterp2.ods.graywitch.co.nz
Using domain server:
Name: glusterp2.ods.graywitch.co.nz
Address: 192.168.1.32#53
Aliases:

ovirt1.ods.graywitch.co.nz has address 192.168.1.34
[root@ovirt1 host-deploy]#
==

ssh keys work fine from the command line,

==
[root@ovirt1 host-deploy]# uname -a
Linux ovirt1.ods.graywitch.co.nz 3.10.0-327.36.2.el7.x86_64 #1 SMP Mon Oct
10 23:08:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@ovirt1 host-deploy]# ssh glusterp1
Last login: Wed Nov  2 10:46:58 2016 from ovirt1.ods.graywitch.co.nz
[root@glusterp1 ~]# ^C
[root@glusterp1 ~]# logout
Connection to glusterp1 closed.
[root@ovirt1 host-deploy]# ssh glusterp2
Last login: Wed Nov  2 10:14:09 2016
[root@glusterp2 ~]# logout
Connection to glusterp2 closed.
[root@ovirt1 host-deploy]# ssh glusterp3
Last login: Wed Nov  2 10:14:19 2016
[root@glusterp3 ~]# logout
Connection to glusterp3 closed.
[root@ovirt1 host-deploy]#
==

iptables is disabled,


[root@glusterp2 log]# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source   destination

Chain FORWARD (policy ACCEPT)
target prot opt source   destination

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination
[root@glusterp2 log]#
=
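
A few extra checks that may help narrow down why host-deploy still reports an
ssh-keys / root-password failure (paths assume a default CentOS 7 / oVirt layout):

==
# on the gluster host: rule out firewalld and confirm root ssh logins are allowed
systemctl status firewalld
grep -i '^PermitRootLogin' /etc/ssh/sshd_config

# on the engine: the real failure reason is usually near the end of the newest
# host-deploy log
ls -lt /var/log/ovirt-engine/host-deploy/ | head -n 3
tail -n 50 "/var/log/ovirt-engine/host-deploy/$(ls -t /var/log/ovirt-engine/host-deploy/ | head -n 1)"
==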







On 2 November 2016 at 11:24, Thing <thing.th...@gmail.com> wrote:

> I have 3 gluster nodes; nodes 1 and 2 repeatedly fail to come into ovirt,
> but node 3 worked first time.
>
> It appears to be saying the ssh keys / root password are failing, but I can
> ssh in fine from the command line, so this message makes no sense.
>
> attached is a host deploy log
>
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Setting DNS servers problem on ovirt

2016-10-31 Thread Thing
Hi,

I have installed IPA across 3 nodes.  In order to point the ovirt server at
the new IPA/DNS servers, and to clean up, I ran engine-cleanup, aiming to
delete the ovirt setup.  However, it seems that even though I ran this,
something ("vdsm"?) is still running and controlling the networking.

So down under /etc/sysconfig/network-scripts I see,

=
/etc/sysconfig/network-scripts
[root@ovirt1 network-scripts]# ls -l
total 256
-rw-rw-r--. 1 root root   130 Nov  1 10:32 ifcfg-enp0s25
-rw-r--r--. 1 root root   254 Sep 16  2015 ifcfg-lo
-rw-rw-r--. 1 root root   252 Nov  1 10:32 ifcfg-ovirtmgmt
8><-
==

So my first question is: why, when I run engine-cleanup, isn't the networking
cleaned up?  Should I file this as a bugzilla?

After that I can see that ifcfg-ovirtmgmt is still controlling DNS,

==
8><
[root@ovirt1 network-scripts]# tail ifcfg-ovirtmgmt
ONBOOT=yes
IPADDR=192.168.1.34
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
BOOTPROTO=none
MTU=1500
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=no
DNS1=192.168.1.240
[root@ovirt1 network-scripts]#
===

So I tried to set,

DNS1=192.168.1.31
DNS2=192.168.1.32
DNS3=192.168.1.33

rebooted, and now I see,

DNS1=192.168.1.240 again

I also see that the vdsm network service is still running,

==
[root@ovirt1 network-scripts]# systemctl status vdsm-network.service
● vdsm-network.service - Virtual Desktop Server Manager network restoration
   Loaded: loaded (/usr/lib/systemd/system/vdsm-network.service; enabled;
vendor preset: enabled)
   Active: active (exited) since Tue 2016-11-01 10:32:51 NZDT; 2h 0min ago
  Process: 2873 ExecStart=/usr/bin/vdsm-tool restore-nets (code=exited,
status=0/SUCCESS)
  Process: 2848 ExecStartPre=/usr/bin/vdsm-tool --vvverbose --append
--logfile=/var/log/vdsm/upgrade.log upgrade-unified-persistence
(code=exited, status=0/SUCCESS)
 Main PID: 2873 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/vdsm-network.service

Nov 01 10:32:45 ovirt1.ods.graywitch.co.nz python[2887]: DIGEST-MD5 client
step 2
Nov 01 10:32:45 ovirt1.ods.graywitch.co.nz python[2887]: DIGEST-MD5
parse_server_challenge()
Nov 01 10:32:45 ovirt1.ods.graywitch.co.nz python[2887]: DIGEST-MD5
ask_user_info()
Nov 01 10:32:45 ovirt1.ods.graywitch.co.nz python[2887]: DIGEST-MD5 client
step 2
Nov 01 10:32:45 ovirt1.ods.graywitch.co.nz python[2887]: DIGEST-MD5
ask_user_info()
Nov 01 10:32:45 ovirt1.ods.graywitch.co.nz python[2887]: DIGEST-MD5
make_client_response()
Nov 01 10:32:45 ovirt1.ods.graywitch.co.nz python[2887]: DIGEST-MD5 client
step 3
Nov 01 10:32:51 ovirt1.ods.graywitch.co.nz python[2887]: DIGEST-MD5 client
mech dispose
Nov 01 10:32:51 ovirt1.ods.graywitch.co.nz python[2887]: DIGEST-MD5 common
mech dispose
Nov 01 10:32:51 ovirt1.ods.graywitch.co.nz systemd[1]: Started Virtual
Desktop Server Manager network restoration.
[root@ovirt1 network-scripts]#
===

Why, after cleaning up, is this still active?

Next, I have grep'd under /etc/ and cannot find where it's getting its
obsolete DNS information from.

So I need to know where this info is stored, so I can edit it via the CLI.
Is it in a database?
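
For what it's worth, vdsm keeps its own persisted copy of the network
definition and re-applies it at boot (that is what the "vdsm-tool restore-nets"
step in the vdsm-network output above does), which is most likely what keeps
rewriting ifcfg-ovirtmgmt.  A hedged sketch of where to look, assuming vdsm's
default unified-persistence layout:

==
# vdsm's persisted network definitions (JSON), restored by vdsm-tool restore-nets
ls /var/lib/vdsm/persistence/netconf/nets/
cat /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt
==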

There is no web UI running, as engine-cleanup has removed it, so I can't
work via the web UI.

Is there anything else I need to manually stop, disable and remove after
running engine-cleanup?
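
A hedged sketch of the kind of manual cleanup that is usually left to do on
the host side after engine-cleanup (double-check each step before running it;
the service names are as shipped on CentOS 7):

==
# stop and disable the vdsm services so they stop restoring ovirtmgmt at boot
systemctl stop vdsm-network vdsmd supervdsmd
systemctl disable vdsm-network vdsmd supervdsmd

# then move the IP/DNS settings back onto the physical NIC config
# (e.g. ifcfg-enp0s25), remove ifcfg-ovirtmgmt, and restart networking
systemctl restart network
==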

thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] messed up gluster attempt

2016-10-27 Thread Thing
Hi,

So I was trying to make a 3-way mirror and it reported a failure.  Now I get
these messages,

On glusterp1,

=
[root@glusterp1 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.1.32
Uuid: ef780f56-267f-4a6d-8412-4f1bb31fd3ac
State: Peer in Cluster (Connected)
[root@glusterp1 ~]# gluster peer probe glusterp3.graywitch.co.nz
peer probe: failed: glusterp3.graywitch.co.nz is either already part of
another cluster or having volumes configured
[root@glusterp1 ~]# gluster volume info
No volumes present
[root@glusterp1 ~]#
=

on glusterp2,

=
[root@glusterp2 ~]# systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled;
vendor preset: disabled)
   Active: active (running) since Fri 2016-10-28 15:22:34 NZDT; 5min ago
 Main PID: 16779 (glusterd)
   CGroup: /system.slice/glusterd.service
   └─16779 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level
INFO

Oct 28 15:22:32 glusterp2.graywitch.co.nz systemd[1]: Starting GlusterFS, a
clustered file-system server...
Oct 28 15:22:34 glusterp2.graywitch.co.nz systemd[1]: Started GlusterFS, a
clustered file-system server.
[root@glusterp2 ~]# gluster volume info
No volumes present
[root@glusterp2 ~]# gluster peer status
Number of Peers: 2

Hostname: 192.168.1.33
Uuid: 0fde5a5b-6254-4931-b704-40a88d4e89ce
State: Sent and Received peer request (Connected)

Hostname: 192.168.1.31
Uuid: a29a93ee-e03a-46b0-a168-4d5e224d5f02
State: Peer in Cluster (Connected)
[root@glusterp2 ~]#
==

on glusterp3,

==
[root@glusterp3 glusterd]# systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled;
vendor preset: disabled)
   Active: active (running) since Fri 2016-10-28 15:26:40 NZDT; 1min 16s ago
 Main PID: 7033 (glusterd)
   CGroup: /system.slice/glusterd.service
   └─7033 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level
INFO

Oct 28 15:26:37 glusterp3.graywitch.co.nz systemd[1]: Starting GlusterFS, a
clustered file-system server...
Oct 28 15:26:40 glusterp3.graywitch.co.nz systemd[1]: Started GlusterFS, a
clustered file-system server.
[root@glusterp3 glusterd]# gluster volume info
No volumes present
[root@glusterp3 glusterd]# gluster peer probe glusterp1.graywitch.co.nz
peer probe: failed: glusterp1.graywitch.co.nz is either already part of
another cluster or having volumes configured
[root@glusterp3 glusterd]# gluster volume info
No volumes present
[root@glusterp3 glusterd]# gluster peer status
Number of Peers: 1

Hostname: glusterp2.graywitch.co.nz
Uuid: ef780f56-267f-4a6d-8412-4f1bb31fd3ac
State: Sent and Received peer request (Connected)
[root@glusterp3 glusterd]#
===

How do I clean this mess up?
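
One hedged way to get back to a clean peer state, assuming there are no
volumes worth keeping (this wipes the stale peer/volume metadata but keeps
each node's own UUID in glusterd.info):

==
# on every node
systemctl stop glusterd
rm -rf /var/lib/glusterd/peers/* /var/lib/glusterd/vols/*
systemctl start glusterd

# then, from one node only, rebuild the trusted pool
gluster peer probe glusterp2.graywitch.co.nz
gluster peer probe glusterp3.graywitch.co.nz
gluster peer status
==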

thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] gluster how to setup a volume across 3 nodes via ovirt

2016-10-27 Thread Thing
Hi,

I have 3 gluster nodes running,

==
[root@glusterp1 ~]# gluster peer status
Number of Peers: 2

Hostname: 192.168.1.33
Uuid: 0fde5a5b-6254-4931-b704-40a88d4e89ce
State: Peer in Cluster (Connected)

Hostname: 192.168.1.32
Uuid: ef780f56-267f-4a6d-8412-4f1bb31fd3ac
State: Peer in Cluster (Connected)
==

I have a 900GB partition on each of the three nodes ready to go, formatted
xfs.  However, when I go into Hosts -> Storage Devices it says gv_1-lvgv1
is already in use and "create brick" is greyed out.

So how do I get "create brick" un-greyed?

The partition isn't mounted, just set up and formatted with xfs, ready for use.

Or am I better off setting it up via the CLI on glusterp1?  I assume I can
then import it into ovirt for use?
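
If the CLI route is easier, a minimal sketch with hypothetical mount points
and volume name (the brick filesystem has to be mounted first; adjust the
device path to the actual gv_1/lvgv1 LV):

==
# on each node: mount the xfs filesystem and create a brick directory
mkdir -p /bricks/gv0
mount /dev/gv_1/lvgv1 /bricks/gv0
mkdir -p /bricks/gv0/brick

# from one node: create and start a replica-3 volume across the three peers
gluster volume create gv0 replica 3 \
    glusterp1:/bricks/gv0/brick \
    glusterp2:/bricks/gv0/brick \
    glusterp3:/bricks/gv0/brick
gluster volume start gv0
gluster volume info gv0
==

With the hosts managed by ovirt, the started volume should then show up under
the Volumes tab, ready to be used for a storage domain.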
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] power management configuration.

2016-10-27 Thread Thing
So far, from reading, it appears this only applies to "proper" servers, i.e.
without an iLO card there is nothing to do?
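
Broadly yes: oVirt power management drives an out-of-band controller (iLO,
iDRAC, plain IPMI, etc.) through the fence agents, so a box with no such
controller has nothing to fence with.  Where a host does have one, a hedged
way to test the agent by hand (address and credentials are placeholders):

==
fence_ipmilan -a 192.168.1.50 -l admin -p secret -o status
==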
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] host gaga and ovirt can't control it.

2016-10-27 Thread Thing
Hi.  After 2 hours of repeatedly rebooting glusterp2 (8-odd times), and the
ovirt server itself twice, after the second reboot of the ovirt server I
managed to get glusterp2 into maintenance mode and re-install it, which
cleared the issue.  I still don't know why it failed, nor why it took so
long to put right.  :(

Power management isn't working, so fixing that is my next job.  Hopefully
getting it going will be documented and straightforward.


On 27 October 2016 at 16:53, Ramesh Nachimuthu <rnach...@redhat.com> wrote:

> Can you explain state of your setup now. May be a screen shot of the
> 'Hosts' tab and logs from /var/log/ovirt-engine/engine.log should help us
> to understand the situation there.
>
> Regards,
> Ramesh
>
>
>
>
> - Original Message -
> > From: "Thing" <thing.th...@gmail.com>
> > To: "users" <Users@ovirt.org>
> > Sent: Thursday, October 27, 2016 9:03:22 AM
> > Subject: [ovirt-users] host gaga and ovirt can't control it.
> >
> > Ok, I have struggled with this for 2 hours now; glusterp2 and the ovirt
> > server are basically not talking at all. I have rebooted both, I don't
> > know how many times. Reading via Google there seems to be no fix for this
> > bar a manual hack of the ovirt server's database to delete the host
> > glusterp2? Or is it re-install-from-scratch time?
> >
> > If I have to re-install from scratch, is it best to go back a version of
> > ovirt say to 3.6?
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] host gaga and ovirt can't control it.

2016-10-26 Thread Thing
Ok, I have struggled with this for 2 hours now; glusterp2 and the ovirt
server are basically not talking at all.  I have rebooted both, I don't know
how many times.  Reading via Google there seems to be no fix for this bar a
manual hack of the ovirt server's database to delete the host glusterp2?
Or is it re-install-from-scratch time?
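
Before hacking the database directly, it may be worth trying the REST API,
which can sometimes remove a host the web UI refuses to let go of; a hedged
sketch (engine URL and credentials are placeholders, and the host normally
has to be in maintenance first):

==
# list hosts and note the id of the stuck one
curl -k -u admin@internal:PASSWORD https://ovirt1.ods.graywitch.co.nz/ovirt-engine/api/hosts

# then remove it by id
curl -k -u admin@internal:PASSWORD -X DELETE https://ovirt1.ods.graywitch.co.nz/ovirt-engine/api/hosts/HOST-ID
==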

If I have to re-install from scratch, is it best to go back a version of
ovirt say to 3.6?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] failed to activate a (gluster) host. ovirt 4.0.4

2016-10-26 Thread Thing
ead-29) [43df6659] Executing SSH Soft Fencing
command on host '192.168.1.31'
2016-10-27 13:48:27,914 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to /192.168.1.31
2016-10-27 13:48:29,379 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler7) [] START, GetHardwareInfoVDSCommand(HostName =
glusterp1, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='260c0a92-2856-4cd6-a784-01ac95fc41d5',
vds='Host[glusterp1,260c0a92-2856-4cd6-a784-01ac95fc41d5]'}), log id:
46d1a470
2016-10-27 13:48:30,383 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler7) [] FINISH, GetHardwareInfoVDSCommand, log id:
46d1a470
2016-10-27 13:48:30,449 INFO
[org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand]
(DefaultQuartzScheduler7) [a8ef973] Running command:
HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected
:  ID: 260c0a92-2856-4cd6-a784-01ac95fc41d5 Type: VDS
2016-10-27 13:48:30,464 INFO
[org.ovirt.engine.core.bll.InitVdsOnUpCommand] (DefaultQuartzScheduler7)
[1c434304] Running command: InitVdsOnUpCommand internal: true. Entities
affected :  ID: 58098515-0126-0151-02eb-03cb Type: StoragePool
2016-10-27 13:48:30,469 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterHostUUIDVDSCommand]
(DefaultQuartzScheduler7) [1c434304] START,
GetGlusterHostUUIDVDSCommand(HostName = glusterp1,
VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='260c0a92-2856-4cd6-a784-01ac95fc41d5'}), log id: 4ca089d8
2016-10-27 13:48:37,296 INFO
[org.ovirt.engine.core.bll.pm.SshSoftFencingCommand]
(org.ovirt.thread.pool-8-thread-29) [43df6659] Lock freed to object
'EngineLock:{exclusiveLocks='[260c0a92-2856-4cd6-a784-01ac95fc41d5=<VDS_FENCE,
POWER_MANAGEMENT_ACTION_ON_ENTITY_ALREADY_IN_PROGRESS>]',
sharedLocks='null'}'
2016-10-27 13:48:37,328 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-29) [43df6659] Correlation ID: 43df6659,
Job ID: 091c3b5f-d8e6-4200-b8fb-39d0d8b5b330, Call Stack: null, Custom
Event ID: -1, Message: Host glusterp1 is rebooting.
2016-10-27 13:48:37,336 WARN
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(org.ovirt.thread.pool-8-thread-29) [43df6659] Trying to release exclusive
lock which does not exist, lock key:
'260c0a92-2856-4cd6-a784-01ac95fc41d5VDS_FENCE'
2016-10-27 13:48:37,336 INFO
[org.ovirt.engine.core.bll.pm.VdsNotRespondingTreatmentCommand]
(org.ovirt.thread.pool-8-thread-29) [43df6659] Lock freed to object
'EngineLock:{exclusiveLocks='[260c0a92-2856-4cd6-a784-01ac95fc41d5=<VDS_FENCE,
POWER_MANAGEMENT_ACTION_ON_ENTITY_ALREADY_IN_PROGRESS>]',
sharedLocks='null'}'
2016-10-27 13:49:15,787 INFO
[org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService]
(DefaultQuartzScheduler8) [beed7f3] No up server in cluster
2016-10-27 13:50:15,790 INFO
[org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService]
(DefaultQuartzScheduler1) [2d374249] No up server in cluster
===



On 27 October 2016 at 13:45, Thing <thing.th...@gmail.com> wrote:

> While trying to figure out how to deploy storage I put 1 host into
> maintenance mode; trying to re-activate it has failed.
>
> It seems to be stuck as neither activated nor in maintenance, so how would
> I go about fixing this?
>
> So what log(s) would this be written to?
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] failed to activate a (gluster) host. ovirt 4.0.4

2016-10-26 Thread Thing
While trying to figure out how to deploy storage I put 1 host into
maintenance mode; trying to re-activate it has failed.

It seems to be stuck as neither activated nor in maintenance, so how would
I go about fixing this?

So what log(s) would this be written to?
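
For reference, the default locations to watch while activating a host:

==
# on the engine
tail -f /var/log/ovirt-engine/engine.log

# on the host being activated
tail -f /var/log/vdsm/vdsm.log
==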
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] adding 3 machines as gluster nodes to ovirt 4.0.4

2016-10-26 Thread Thing
Any idea why the ssh keys are failing?

On 27 October 2016 at 11:08, Thing <thing.th...@gmail.com> wrote:

> Oopsie: "Are the three hosts subscribed to the ovirt repos?"
>
> No, I will try again after doing so.  I didn't notice this as a requirement,
> so I assumed the ovirt server uploaded the needed packages to each host.
>
>
>
> On 26 October 2016 at 17:52, Sahina Bose <sab...@redhat.com> wrote:
>
>> If you want these hosts to run only gluster service - create a cluster
>> with "Enable gluster service" checked and "Enable Virt Service" unchecked
>> (i.e disabled).
>> You should then add your 3 hosts to this cluster. Are the three hosts
>> subscribed to the ovirt repos? - During installation, the required packages
>> are pulled in from the repos. So install the ovirt-release- rpm on
>> the hosts.
>>
>> After you do this, if the process still fails, please provide the
>> installation logs (host-deploy logs). Installation failure in ovirt-engine
>> will provide the path to the logs.
>>
>> HTH,
>> sahina
>>
>> On Wed, Oct 26, 2016 at 4:06 AM, Thing <thing.th...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have ovirt 4.0.4 running on a centos 7.2 machine.
>>>
>>> I have 3 identical centos 7.2 machines I want to add as a gluster
>>> storage 3-way mirror array.  The admin guide doesn't seem to show how to
>>> do this?  I have set up ssh keys for root access.  I have set up a 1TB LUN
>>> on each, ready for "glusterising".  I have tried to build and set up
>>> gluster beforehand and import the ready-made setup, but this locked up in
>>> "activating" and after 36 hours never completed (I assume they should move
>>> to "up"?)
>>>
>>> Is there any documentation out there on how?  I want these as pure
>>> gluster-storage-only nodes, with no VMs to go on them.  The VMs will go on
>>> 2 or 3 new machines I will add later as a next stage.
>>>
>>> I have tried to add them as hosts (not sure if this is the right method;
>>> I suspect not), but I get install failures in the engine.log
>>>
>>> sample,
>>>
>>> =
>>>
>>> 2016-10-26 11:28:13,899 ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
>>> (org.ovirt.thread.pool-8-thread-21) [2a413474] SSH error running
>>> command root@192.168.1.31:'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}"
>>> mktemp -d -t ovirt-XX)"; trap "chmod -R u+rwX \"${MYTMP}\" >
>>> /dev/null 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar
>>> --warning=no-timestamp -C "${MYTMP}" -x &&  "${MYTMP}"/ovirt-host-deploy
>>> DIALOG/dialect=str:machine DIALOG/customization=bool:True': Command
>>> returned failure code 1 during SSH session 'root@192.168.1.31'
>>> 2016-10-26 11:28:13,899 ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
>>> (org.ovirt.thread.pool-8-thread-21) [2a413474] Exception:
>>> java.io.IOException: Command returned failure code 1 during SSH session '
>>> root@192.168.1.31'
>>> at 
>>> org.ovirt.engine.core.uutils.ssh.SSHClient.executeCommand(SSHClient.java:526)
>>> [uutils.jar:]
>>> at 
>>> org.ovirt.engine.core.uutils.ssh.SSHDialog.executeCommand(SSHDialog.java:317)
>>> [uutils.jar:]
>>> at 
>>> org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase.execute(VdsDeployBase.java:563)
>>> [bll.jar:]
>>> at org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalComma
>>> nd.installHost(InstallVdsInternalCommand.java:169) [bll.jar:]
>>> at org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalComma
>>> nd.executeCommand(InstallVdsInternalCommand.java:90) [bll.jar:]
>>> at 
>>> org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1305)
>>> [bll.jar:]
>>> at org.ovirt.engine.core.bll.CommandBase.executeActionInTransac
>>> tionScope(CommandBase.java:1447) [bll.jar:]
>>> at 
>>> org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2075)
>>> [bll.jar:]
>>> at org.ovirt.engine.core.utils.transaction.TransactionSupport.e
>>> xecuteInSuppressed(TransactionSupport.java:166) [utils.jar:]
>>> at org.ovirt.engine.core.utils.transaction.TransactionSupport.e
>>> xecuteInScope(TransactionSupport.java:105) [utils.jar:]
>>> at org.ovirt.engine.core.bll.CommandBase.execu

Re: [ovirt-users] adding 3 machines as gluster nodes to ovirt 4.0.4

2016-10-26 Thread Thing
Oopsie: "Are the three hosts subscribed to the ovirt repos?"

No, I will try again after doing so.  I didn't notice this as a requirement,
so I assumed the ovirt server uploaded the needed packages to each host.
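
For anyone following along, a hedged sketch of what subscribing a host to
the ovirt repos looks like (the release rpm name depends on the engine
version; this assumes oVirt 4.0):

==
# run on each gluster host; use the release rpm matching your engine version
yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
yum clean all && yum makecache
==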



On 26 October 2016 at 17:52, Sahina Bose <sab...@redhat.com> wrote:

> If you want these hosts to run only gluster service - create a cluster
> with "Enable gluster service" checked and "Enable Virt Service" unchecked
> (i.e disabled).
> You should then add your 3 hosts to this cluster. Are the three hosts
> subscribed to the ovirt repos? - During installation, the required packages
> are pulled in from the repos. So install the ovirt-release- rpm on
> the hosts.
>
> After you do this, if the process still fails, please provide the
> installation logs (host-deploy logs). Installation failure in ovirt-engine
> will provide the path to the logs.
>
> HTH,
> sahina
>
> On Wed, Oct 26, 2016 at 4:06 AM, Thing <thing.th...@gmail.com> wrote:
>
>> Hi,
>>
>> I have ovirt 4.0.4 running on a centos 7.2 machine.
>>
>> I have 3 identical centos 7.2 machines I want to add as a gluster storage
>> 3-way mirror array.  The admin guide doesn't seem to show how to do this?
>> I have set up ssh keys for root access.  I have set up a 1TB LUN on each,
>> ready for "glusterising".  I have tried to build and set up gluster
>> beforehand and import the ready-made setup, but this locked up in
>> "activating" and after 36 hours never completed (I assume they should move
>> to "up"?)
>>
>> Is there any documentation out there on how?  I want these as pure
>> gluster-storage-only nodes, with no VMs to go on them.  The VMs will go on
>> 2 or 3 new machines I will add later as a next stage.
>>
>> I have tried to add them as hosts (not sure if this is the right method;
>> I suspect not), but I get install failures in the engine.log
>>
>> sample,
>>
>> =
>>
>> 2016-10-26 11:28:13,899 ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
>> (org.ovirt.thread.pool-8-thread-21) [2a413474] SSH error running command
>> root@192.168.1.31:'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp
>> -d -t ovirt-XX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null
>> 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp
>> -C "${MYTMP}" -x &&  "${MYTMP}"/ovirt-host-deploy
>> DIALOG/dialect=str:machine DIALOG/customization=bool:True': Command
>> returned failure code 1 during SSH session 'root@192.168.1.31'
>> 2016-10-26 11:28:13,899 ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
>> (org.ovirt.thread.pool-8-thread-21) [2a413474] Exception:
>> java.io.IOException: Command returned failure code 1 during SSH session '
>> root@192.168.1.31'
>> at 
>> org.ovirt.engine.core.uutils.ssh.SSHClient.executeCommand(SSHClient.java:526)
>> [uutils.jar:]
>> at 
>> org.ovirt.engine.core.uutils.ssh.SSHDialog.executeCommand(SSHDialog.java:317)
>> [uutils.jar:]
>> at 
>> org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase.execute(VdsDeployBase.java:563)
>> [bll.jar:]
>> at org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalComma
>> nd.installHost(InstallVdsInternalCommand.java:169) [bll.jar:]
>> at org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalComma
>> nd.executeCommand(InstallVdsInternalCommand.java:90) [bll.jar:]
>> at 
>> org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1305)
>> [bll.jar:]
>> at org.ovirt.engine.core.bll.CommandBase.executeActionInTransac
>> tionScope(CommandBase.java:1447) [bll.jar:]
>> at 
>> org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2075)
>> [bll.jar:]
>> at org.ovirt.engine.core.utils.transaction.TransactionSupport.e
>> xecuteInSuppressed(TransactionSupport.java:166) [utils.jar:]
>> at org.ovirt.engine.core.utils.transaction.TransactionSupport.e
>> xecuteInScope(TransactionSupport.java:105) [utils.jar:]
>> at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1490)
>> [bll.jar:]
>> at 
>> org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:398)
>> [bll.jar:]
>> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner
>> .executeValidatedCommand(PrevalidatingMultipleActionsRunner.java:204)
>> [bll.jar:]
>> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner
>> .runCommands(PrevalidatingMultipleAc

[ovirt-users] adding 3 machines as gluster nodes to ovirt 4.0.4

2016-10-25 Thread Thing
Hi,

I have ovirt 4.0.4 running on a centos 7.2 machine.

I have 3 identical centos 7.2 machines I want to add as a gluster storage
3-way mirror array.  The admin guide doesn't seem to show how to do this?  I
have set up ssh keys for root access.  I have set up a 1TB LUN on each, ready
for "glusterising".  I have tried to build and set up gluster beforehand
and import the ready-made setup, but this locked up in "activating" and
after 36 hours never completed (I assume they should move to "up"?)

Is there any documentation out there on how?  I want these as pure
gluster-storage-only nodes, with no VMs to go on them.  The VMs will go on
2 or 3 new machines I will add later as a next stage.

I have tried to add them as hosts (not sure if this is the right method;
I suspect not), but I get install failures in the engine.log

sample,

=

2016-10-26 11:28:13,899 ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
(org.ovirt.thread.pool-8-thread-21) [2a413474] SSH error running command
root@192.168.1.31:'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d
-t ovirt-XX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1;
rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp -C
"${MYTMP}" -x &&  "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine
DIALOG/customization=bool:True': Command returned failure code 1 during SSH
session 'root@192.168.1.31'
2016-10-26 11:28:13,899 ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
(org.ovirt.thread.pool-8-thread-21) [2a413474] Exception:
java.io.IOException: Command returned failure code 1 during SSH session '
root@192.168.1.31'
at
org.ovirt.engine.core.uutils.ssh.SSHClient.executeCommand(SSHClient.java:526)
[uutils.jar:]
at
org.ovirt.engine.core.uutils.ssh.SSHDialog.executeCommand(SSHDialog.java:317)
[uutils.jar:]
at
org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase.execute(VdsDeployBase.java:563)
[bll.jar:]
at
org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand.installHost(InstallVdsInternalCommand.java:169)
[bll.jar:]
at
org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand.executeCommand(InstallVdsInternalCommand.java:90)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1305)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1447)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2075)
[bll.jar:]
at
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:166)
[utils.jar:]
at
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:105)
[utils.jar:]
at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1490)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:398)
[bll.jar:]
at
org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.executeValidatedCommand(PrevalidatingMultipleActionsRunner.java:204)
[bll.jar:]
at
org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.runCommands(PrevalidatingMultipleActionsRunner.java:176)
[bll.jar:]
at
org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.lambda$invokeCommands$3(PrevalidatingMultipleActionsRunner.java:182)
[bll.jar:]
at
org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:92)
[utils.jar:]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[rt.jar:1.8.0_111]
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[rt.jar:1.8.0_111]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[rt.jar:1.8.0_111]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[rt.jar:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_111]

2016-10-26 11:28:13,900 ERROR
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
(org.ovirt.thread.pool-8-thread-21) [2a413474] Error during host
192.168.1.31 install: java.io.IOException: Command returned failure code 1
during SSH session 'root@192.168.1.31'
at
org.ovirt.engine.core.uutils.ssh.SSHClient.executeCommand(SSHClient.java:526)
[uutils.jar:]
at
org.ovirt.engine.core.uutils.ssh.SSHDialog.executeCommand(SSHDialog.java:317)
[uutils.jar:]
at
org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase.execute(VdsDeployBase.java:563)
[bll.jar:]
at
org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand.installHost(InstallVdsInternalCommand.java:169)
[bll.jar:]
at
org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand.executeCommand(InstallVdsInternalCommand.java:90)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1305)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1447)
[bll.jar:]
at