[ovirt-users] Hosted engine setup Failed

2023-04-24 Thread Fedele Stabile
Good morning,
I have a freshly installed oVirt 4.5 host node and I would like to install the engine
from the terminal, using the command hosted-engine --deploy.
The host node has an IP on 160.97.xx and I want the engine on the same network
(160.97.xx).
The installation seems to go well, but at the end it exits,
leaving the hosted engine running on 192.168.222.x.

The error seems to be here:

2023-04-25 06:28:18,953+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
ignored: [localhost]: FAILED! => {"msg": "The task includes an option with an 
undefined variable. The error was: 'local_vm_ip' is undefined\n\nThe error 
appears to be in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/sync_on_engine_machine.yml':
 line 2, column 3, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Set the 
name for add_host\n  ^ here\n"}


2023-04-25 06:28:19,757+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
ignored: [localhost]: FAILED! => {"censored": "the output has been hidden due 
to the fact that 'no_log: true' was specified for this result"}
2023-04-25 06:28:19,857+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Sync 
on engine machine]
2023-04-25 06:28:19,958+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
{'msg': "The field 'delegate_to' has an invalid value, which includes an 
undefined variable. The error was: 'dict object' has no attribute 
'engine'\n\nThe error appears to be in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/sync_on_engine_machine.yml':
 line 7, column 3, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n  import_tasks: 
add_engine_as_ansible_host.yml\n- name: Sync on engine machine\n  ^ here\n", 
'_ansible_no_log': None}
2023-04-25 06:28:20,058+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
ignored: [localhost]: FAILED! => {"msg": "The field 'delegate_to' has an 
invalid value, which includes an undefined variable. The error was: 'dict 
object' has no attribute 'engine'\n\nThe error appears to be in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/sync_on_engine_machine.yml':
 line 7, column 3, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n  import_tasks: 
add_engine_as_ansible_host.yml\n- name: Sync on engine machine\n  ^ here\n"}
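
As far as I can tell, local_vm_ip is normally filled in by an earlier task from the
DHCP lease that the bootstrap VM gets on the temporary libvirt network, so when that
earlier step fails everything after it sees the variable as undefined. Assuming the
deploy is still using the default libvirt network for the bootstrap VM, the lease can
be checked by hand with:

virsh -r net-list --all
virsh -r net-dhcp-leases default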

Help me, please
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FVM45UDO67FI3CQEATOLGCCMAWII7I7V/


[ovirt-users] hosted-engine-setup --deploy fail on Centos Stream 8

2022-10-10 Thread andrea.crisanti--- via Users
Hi,

 I am trying to install ovirt 4.5 on a 4-host cluster running Centos Stream 8, 
but the engine does not start and the whole process fails.

Here is my procedure

dnf install centos-release-ovirt45
dnf module reset virt
dnf module enable virt:rhel
dnf install ovirt-engine-appliance
dnf install  ovirt-hosted-engine-setup

The latest version of ansible [ansible-core 2.13] uses python3.9 and the 
installation fails because some python3.9 modules are missing 
[python39-netaddr, python39-jmespath] and cannot be installed [conflict 
python3-jmespath]. So I downgraded ansible to ansible-core 2.12

dnf downgrade ansible-core
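
(For reference, assuming the 2.12 stream is still available in the repositories, the
downgrade can also be pinned to it explicitly:)

dnf downgrade 'ansible-core-2.12*'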

Now 

hosted-engine-setup --deploy --4

proceeds further, but stops because it cannot start the engine:

[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the host to be up]   
 
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a 
failure]   
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Host is not 
up, please check logs, perhaps also on the engine machine"}

I looked into the log file
 
/var/log//ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-bootstrap_local_vm-20221007132728-yp7cd1.log
and I found the following error:

2022-10-07 13:28:30,881+0200 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook": 
"/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": false,
"changed": false,
"cmd": [
"virsh",
"net-undefine",
"default"
],
"delta": "0:00:00.039258",
"end": "2022-10-07 13:28:30.710401",
"invocation": {
"module_args": {
"_raw_params": "virsh net-undefine default",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": false
}
},
"msg": "non-zero return code",
"rc": 1,
"start": "2022-10-07 13:28:30.671143",
"stderr": "error: failed to get network 'default'\nerror: Network not 
found: no network with matching name 'default'",
"stderr_lines": [
"error: failed to get network 'default'",
"error: Network not found: no network with matching name 'default'"
],
"stdout": "",
"stdout_lines": []
},
"ansible_task": "Update libvirt default network configuration, undefine",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 0
}
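
As far as I understand, that task is the setup cleaning up libvirt's default network
before it defines its own configuration. If the persistent definition of 'default' is
really missing at the point the task runs, I guess it could be re-created from the
stock XML shipped with libvirt (assuming that file is present), so that the undefine
step has something to remove:

virsh net-define /usr/share/libvirt/networks/default.xml
virsh net-autostart default
virsh net-start default     # skip this if the network is already active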

Needless to say, firewalld and libvirtd are both up, and virsh net-list gives:

 Name            State    Autostart   Persistent
----------------------------------------------------
 ;vdsmdummy;     active   no          no
 default         active   no          yes

I googled around without success. 

Has anyone had similar problems?

At the end of this past July I installed oVirt on another cluster running CentOS
Stream 8, following the procedure I just described, with no problem.

If needed I can post all log files.

Thanks for the help.

Best
Andrea
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JI72US3JIOXBWTMTVVGDLVAZV7UJXBYF/


[ovirt-users] hosted engine setup, iSCSI no LUNs shown

2019-08-20 Thread billyburly
I'm trying to set up the hosted engine on top of iSCSI storage. It successfully
logs in and gets the target; however, the process errors out claiming there are
no LUNs. But if you look on the host, the disks were added to the system.

[ INFO  ] TASK [ovirt.hosted_engine_setup : iSCSI discover with REST API]
[ INFO  ] ok: [localhost]
  The following targets have been found:
[1] iqn.2001-04.com.billdurr.durrnet.vm-int:vmdata
TPGT: 1, portals:
192.168.47.10:3260

  Please select a target (1) [1]: 1
[ INFO  ] Getting iSCSI LUNs list
...
[ INFO  ] TASK [ovirt.hosted_engine_setup : Get iSCSI LUNs]
[ INFO  ] ok: [localhost]
[ ERROR ] Cannot find any LUN on the selected target
[ ERROR ] Unable to get target list

Here's what the config in targetcli looks like
[root@vm1 ~]# targetcli ls
o- / . [...]
  o- backstores .. [...]
  | o- block .. [Storage Objects: 2]
  | | o- p_iscsi_lun1 .. [/dev/drbd0 (62.0GiB) write-thru activated]
  | | | o- alua ... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ... [ALUA state: Active/optimized]
  | | o- p_iscsi_lun2 . [/dev/drbd1 (310.6GiB) write-thru activated]
  | |   o- alua ... [ALUA Groups: 1]
  | | o- default_tg_pt_gp ... [ALUA state: Active/optimized]
  | o- fileio . [Storage Objects: 0]
  | o- pscsi .. [Storage Objects: 0]
  | o- ramdisk  [Storage Objects: 0]
  o- iscsi  [Targets: 1]
  | o- iqn.2001-04.com.billdurr.durrnet.vm-int:vmdata  [TPGs: 1]
  |   o- tpg1 .. [gen-acls, no-auth]
  | o- acls .. [ACLs: 0]
  | o- luns .. [LUNs: 2]
  | | o- lun0 . [block/p_iscsi_lun1 (/dev/drbd0) (default_tg_pt_gp)]
  | | o- lun1 . [block/p_iscsi_lun2 (/dev/drbd1) (default_tg_pt_gp)]
  | o- portals  [Portals: 1]
  |   o- 192.168.47.10:3260 ... [OK]
  o- loopback . [Targets: 0]
  o- srpt . [Targets: 0]

The two LUNs show up on the host after the hosted engine setup tries to 
enumerate the LUNs for the target
[root@vm1 ~]# lsscsi
[0:0:0:0]storage HP   P420i8.32  -
[0:1:0:0]diskHP   LOGICAL VOLUME   8.32  /dev/sda
[0:1:0:1]diskHP   LOGICAL VOLUME   8.32  /dev/sdb
[0:1:0:2]diskHP   LOGICAL VOLUME   8.32  /dev/sdc
[11:0:0:0]   diskLIO-ORG  p_iscsi_lun1 4.0   /dev/sdd
[11:0:0:1]   diskLIO-ORG  p_iscsi_lun2 4.0   /dev/sde
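
To rule out the target side, the discovery and login that the setup performs can be
repeated by hand (portal and IQN taken from the output above):

iscsiadm -m discovery -t sendtargets -p 192.168.47.10:3260
iscsiadm -m node -T iqn.2001-04.com.billdurr.durrnet.vm-int:vmdata -p 192.168.47.10:3260 --login
iscsiadm -m session -P 3    # prints the attached SCSI devices / LUNs per session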
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MGPCIAT7QTH7A7EHIC2RBDTZTH6HB4IH/


[ovirt-users] Hosted Engine setup issues

2018-08-21 Thread Jeremy Tourville
If I try to set up the hosted engine with the option to use Ansible, it eventually
fails on me. See
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/LPK7OGFALSQFAN4UMEIHOION4BS2HJLN/

However, if I run hosted-engine --deploy --noansible the setup does complete.
There is further trouble though: it seems the networks are not being
configured properly. If I browse to Compute > Hosts > Hostname and click on the
Setup Host Networks button, I am presented with a new window that contains an
unassigned logical network of "ovirtmgmt". On the left side pane there are two
columns for Interfaces and Assigned Logical Networks. Both of those columns
are completely empty. If I understand correctly, the Interfaces column
should at least have some info in it.

[inline screenshot of the Setup Host Networks dialog omitted]

So my best guess is that for some unknown reason (at least to me) the Engine
doesn't "know how to get set up properly", either with or without Ansible.
That's about as well as I can describe it. I hope it makes sense. Anyone
have ideas on what is going wrong here? Thanks for any troubleshooting advice
you can provide!
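
In case it helps, here is how I think the host side can be checked from the CLI
(assuming vdsm-client is available on a 4.2 node); it should show whether vdsm ever
created the ovirtmgmt bridge and attached a NIC to it:

ip addr show ovirtmgmt
vdsm-client Host getCapabilities | grep -A5 ovirtmgmt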

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FSZFFTZFQK7AAQXQ5ATF3E4YRODPUCYW/


[ovirt-users] Hosted Engine Setup Issues on Ovirt Node 4.2.5.1

2018-08-20 Thread Jeremy Tourville
If I try to set up the hosted engine with the option to use Ansible, it eventually
fails on me. See
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/LPK7OGFALSQFAN4UMEIHOION4BS2HJLN/

However, if I run hosted-engine --deploy --noansible the setup does complete.
There is further trouble though: it seems the networks are not being
configured properly. If I browse to Compute > Hosts > Hostname and click on the
Setup Host Networks button, I am presented with a new window that contains an
unassigned logical network of "ovirtmgmt". On the left side pane there are two
columns for Interfaces and Assigned Logical Networks. Both of those columns
are completely empty. If I understand correctly, the Interfaces column
should at least have some info in it.

[inline screenshot of the Setup Host Networks dialog omitted]

So my best guess is that for some unknown reason (at least to me) the Engine
doesn't "know how to get set up properly", either with or without Ansible.
That's about as well as I can describe it. I hope it makes sense. Anyone
have ideas on what is going wrong here? Thanks for any troubleshooting advice
you can provide!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H6Z3LYCOT56UI5PCSCOOKDVCG3WO2QRA/


[ovirt-users] hosted engine setup error

2018-05-23 Thread dhy336
Hi,
I deployed ovirt-engine 4.2.2 hosted-engine with #hosted-engine --deploy, but I face
an error that the bridge ovirtmgmt is not configured. What should I do? Thanks.

[ INFO  ] TASK [Get ovirtmgmt route table id]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "ip rule list | grep ovirtmgmt | sed s/[.*]\\ //g | awk '{ print $9 }'", "delta": "0:00:00.010899", "end": "2018-05-23 20:03:21.222559", "rc": 0, "start": "2018-05-23 20:03:21.211660", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO  ] Stage: Clean up
[ INFO  ] Cleaning temporary resources

vdsm.log
2018-05-23 19:55:15,305+0800 INFO  (vm/bfc6f7cf) [virt.vm] (vmId='bfc6f7cf-3e8d-4368-97f8-78a5c74a5175') VM wrapper has started (vm:2619)
2018-05-23 19:55:15,454+0800 INFO  (vm/bfc6f7cf) [virt.vm] (vmId='bfc6f7cf-3e8d-4368-97f8-78a5c74a5175') Starting connection (guestagent:245)
2018-05-23 19:55:15,458+0800 ERROR (vm/bfc6f7cf) [virt.vm] (vmId='bfc6f7cf-3e8d-4368-97f8-78a5c74a5175') Failed to connect to guest agent channel (vm:2403)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2401, in _vmDependentInit
    self.guestAgent.start()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/guestagent.py", line 246, in start
    self._prepare_socket()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/guestagent.py", line 288, in _prepare_socket
    supervdsm.getProxy().prepareVmChannel(self._socketName)
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in
    **kwargs)
  File "", line 2, in prepareVmChannel
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
OSError: [Errno 2] No such file or directory: '/var/lib/libvirt/qemu/channels/bfc6f7cf-3e8d-4368-97f8-78a5c74a5175.com.redhat.rhevm.vdsm'
2018-05-23 19:55:15,480+0800 INFO  (vm/bfc6f7cf) [virt.vm] (vmId='bfc6f7cf-3e8d-4368-97f8-78a5c74a5175') CPU running: domain initialization (vm:5908)
[root@hosted-engine-test1 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:6c:ee:a8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.217/24 brd 192.168.122.255 scope global dynamic eth0
       valid_lft 2936sec preferred_lft 2936sec
    inet6 fe80::834a:9cc1:df2:83f/64 scope link
       valid_lft forever preferred_lft forever
18: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 9a:d1:bc:96:cc:c0 brd ff:ff:ff:ff:ff:ff
19: virbr0:  mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:55:06:26 brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
       valid_lft forever preferred_lft forever
20: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:55:06:26 brd ff:ff:ff:ff:ff:ff
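
As far as I understand, the failing task just greps `ip rule list` for an entry that
references the ovirtmgmt bridge, so the same check can be run by hand to see whether
vdsm ever created the bridge (in the ip a output above there is no ovirtmgmt at all):

ip link show ovirtmgmt
ip rule list | grep ovirtmgmt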




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Hosted Engine Setup Erro

2018-05-23 Thread Sakhi Hadebe
Hi,

I am new to Ansible and trying to deploy an oVirt cluster with Gluster. I am
following this documentation
https://www.ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/,
although the screenshots are not exactly the same. Gluster deployed successfully.

Below is what is installed on my oVirt nodes:
1. ansible --version
ansible 2.5.3
  config file = /etc/ansible/ansible.cfg

2. glusterfs 3.12.9

3. oVirtNode 4.2.3

4. CentOS Linux release 7.4.1708 (Core)

During the hosted engine setup it throws the ERROR below:

[ INFO  ] TASK [Prepare CIDR for "virbr0"]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
option with an undefined variable. The error was: 'dict object' has no
attribute 'ipv4'\n\nThe error appears to have been in
'/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.yml': line
50, column 7, but may\nbe elsewhere in the file depending on the exact
syntax problem.\n\nThe offending line appears to be:\n\n  tags: [
'skip_ansible_lint' ]\n- name: Prepare CIDR for \"{{ virbr_default
}}\"\n  ^ here\nWe could be wrong, but this one looks like it might be
an issue with\nmissing quotes.  Always quote template expression brackets
when they\nstart a value. For instance:\n\nwith_items:\n  - {{ foo
}}\n\nShould be written as:\n\nwith_items:\n  - \"{{ foo }}\"\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing
ansible-playbook

I have tried to use quotes on line 50 in the bootstrap_local_vm.yml file, but
it didn't work.
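
From what I have read, that error usually means the interface referenced by
virbr_default (virbr0) had no IPv4 address in the gathered facts, i.e. the libvirt
default network was not up when the facts were collected. Assuming it is supposed to
be active at that point, a quick check is:

virsh net-list --all
virsh net-start default     # only if it shows as inactive
ip -4 addr show virbr0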

Please help, I have been stuck on this almost the whole day.



-- 
Regards,
Sakhi Hadebe

Engineer: South African National Research Network (SANReN)Competency
Area, Meraka, CSIR

Tel:   +27 12 841 2308 <+27128414213>
Fax:   +27 12 841 4223 <+27128414223>
Cell:  +27 71 331 9622 <+27823034657>
Email: sa...@sanren.ac.za 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Hosted Engine Setup error (v4.2.3)

2018-05-17 Thread ovirt

Engine network config error

Following this blog post: 
https://www.ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/


I get an error saying the hosted engine setup is "trying" to use virbr0
(192.168.xxx.x) even though I have the bridge interface set to "eno1".


Regardless of whether the Edit Hosts File is checked or unchecked, it 
overwrites my engine IP entry from 10.50.235.x to 192.168.xxx.x


The same thing happens whether I set the engine IP to Static or DHCP (I 
don't have DNS, I'm using static entries in /etc/hosts).


Any ideas why it "insists" on using "virbr0" instead of "eno1"?

**also posted this on IRC
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Hosted Engine Setup error (oVirt v4.2.3)

2018-05-15 Thread ovirt

Engine network config error

Following this blog post: 
https://www.ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/


I get an error saying the hosted engine setup is "trying" to use virbr0
(192.168.xxx.x) even though I have the bridge interface set to "eno1".


Regardless of whether the Edit Hosts File is checked or unchecked, it 
overwrites my engine IP entry from 10.50.235.x to 192.168.xxx.x


The same thing happens whether I set the engine IP to Static or DHCP (I 
don't have DNS, I'm using static entries in /etc/hosts).


Any ideas why it "insists" on using "virbr0" instead of "eno1"?

**also posted this on IRC
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


Re: [ovirt-users] Hosted engine setup question

2017-10-03 Thread Demeter Tibor
Dear Charles, 
Thank you for your reply. 

I don't want to make another storage domain; I just want to do a
detach-attach procedure with the existing one.

Also, I have another question. Is it possible to delete snapshots in 4.1 that
were created in 3.5? How safe is this?
I have some VM snapshots in the old system, but I don't want more outage from
deleting them. 3.5 does not support live snapshot deletion, but 4.1 does.

Thanks, 

Tibor 

- 2017. okt.. 2., 19:55, Charles Kozler  írta: 

> I did a 3.6 to 4.1 like this. I moved all of my VMs to a new storage domain 
> (the
> other was hyperconverged gluster) and then took a full outage, shut down all 
> of
> my VMs, detached from 3.6, and imported on 4.1. I had no issues other than
> expected mac address changes, but I think you can manually override this in 
> the
> engine somewhere
> If you are worried, do it with one VM. Create a new storage domain that both
> clusters can "see", move one VM to the domain on 3.6, detach, and import to
> 4.1. Bring the VM up

> If it is Linux VM's older than systemd and using sysvinit, you will hit issues
> where your MAC address will change and udev will move it to eth# wherever # is
> the next available NIC in your VM host

> On Mon, Oct 2, 2017 at 12:54 PM, Demeter Tibor < [ mailto:tdeme...@itsmart.hu 
> |
> tdeme...@itsmart.hu ] > wrote:

>> Hi,
>> Can anyone answer my questions?

>> Thanks in advance,
>> R,

>> Tibor

>> - 2017. szept.. 19., 8:31, Demeter Tibor < [ mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > írta:

>>> - I have a productive ovirt cluster based on 3.5 series. This using a 
>>> shared nfs
>>> storage. Is it possible to migrate VMs from 3.5 to 4.1 with detach shared
>>> storage from the old cluster and attach it to the new cluster?
>>> - If yes what will happend with the VM properies? For example mac addresses,
>>> limits, etc. Those will be migrated or not?

>>> Thanks in advance,
>>> Regard

>>> Tibor

>>> ___
>>> Users mailing list
>>> [ mailto:Users@ovirt.org | Users@ovirt.org ]
>>> [ http://lists.ovirt.org/mailman/listinfo/users |
>>> http://lists.ovirt.org/mailman/listinfo/users ]

>> ___
>> Users mailing list
>> [ mailto:Users@ovirt.org | Users@ovirt.org ]
>> [ http://lists.ovirt.org/mailman/listinfo/users |
>> http://lists.ovirt.org/mailman/listinfo/users ]
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine setup question

2017-10-02 Thread Charles Kozler
I did a 3.6 to 4.1 like this. I moved all of my VMs to a new storage domain
(the other was hyperconverged gluster) and then took a full outage, shut
down all of my VMs, detached from 3.6, and imported on 4.1. I had no issues
other than expected mac address changes, but I think you can manually
override this in the engine somewhere

If you are worried, do it with one VM. Create a new storage domain that
both clusters can "see", move one VM to the domain on 3.6, detach, and
import to 4.1. Bring the VM up

If the Linux VMs are older than systemd and use sysvinit, you will hit
issues where the MAC address will change and udev will move the interface to
eth#, where # is the next available NIC number in your VM

On Mon, Oct 2, 2017 at 12:54 PM, Demeter Tibor  wrote:

> Hi,
> Can anyone answer my questions?
>
> Thanks in advance,
> R,
>
> Tibor
>
> - 2017. szept.. 19., 8:31, Demeter Tibor  írta:
>
>
> - I have a productive ovirt cluster based on 3.5 series. This using a
> shared nfs storage.  Is it possible to migrate VMs from 3.5 to 4.1 with
> detach shared storage from the old cluster and attach it to the new
> cluster?
> - If yes what will happend with the VM properies? For example mac
> addresses, limits, etc. Those will be migrated or not?
>
> Thanks in advance,
> Regard
>
>
> Tibor
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine setup question

2017-10-02 Thread Demeter Tibor
Hi, 
Can anyone answer my questions? 

Thanks in advance, 
R, 

Tibor 

- 2017. szept.. 19., 8:31, Demeter Tibor  írta: 

> - I have a productive ovirt cluster based on 3.5 series. This using a shared 
> nfs
> storage. Is it possible to migrate VMs from 3.5 to 4.1 with detach shared
> storage from the old cluster and attach it to the new cluster?
> - If yes what will happend with the VM properies? For example mac addresses,
> limits, etc. Those will be migrated or not?

> Thanks in advance,
> Regard

> Tibor

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted engine setup question

2017-09-19 Thread Demeter Tibor
Hi, 

I just installed a hosted-engine-based four-node cluster on gluster storage.
It seems to be working fine, but I have some questions about it.

- I would like to make my own cluster and datacenter. Is it possible to remove
a host and re-add it to another cluster while it is running the hosted engine?
- Is it possible to remove the default datacenter without any problems?

- I have a production oVirt cluster that is based on the 3.5 series. It is using a
shared NFS storage. Is it possible to migrate VMs from 3.5 to 4.1 by detaching the
shared storage from the old cluster and attaching it to the new cluster?
- If yes, what will happen to the VM properties? For example MAC addresses,
limits, etc. Will those be migrated or not?

Thanks in advance, 
Regard 


Tibor 








___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-28 Thread Kasturi Narra
Can you please check if you have any additional disk in the system? If you
have an additional disk other than the disk which is being used for the root
partition, then you could specify that disk in the cockpit UI (I hope you are
using the cockpit UI to do the installation), with no partitions on it. That
will take care of the installation and make your life easier, as cockpit +
gdeploy would take care of configuring the gluster bricks and volumes for you.

On Mon, Aug 28, 2017 at 2:55 PM, Anzar Esmail Sainudeen <
an...@it.thumbay.com> wrote:

> Dear Nara,
>
>
>
> All the partitions, pv and vg are created automatically during the initial
> setup time.
>
>
>
> [root@ovirtnode1 ~]# vgs
>
>   VG  #PV #LV #SN Attr   VSize   VFree
>
>   onn   1  12   0 wz--n- 555.73g 14.93g
>
>
>
> All space are mounted to the below location, all free space are mounted in
> /.
>
>
>
> Filesystem  Size  Used Avail
> Use% Mounted on
>
> /dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1  513G  4.2G
> 483G   1% /
>
> devtmpfs 44G 0
> 44G   0% /dev
>
> tmpfs44G  4.0K
> 44G   1% /dev/shm
>
> tmpfs44G   33M
> 44G   1% /run
>
> tmpfs44G 0
> 44G   0% /sys/fs/cgroup
>
> /dev/sda2   976M  135M  774M
> 15% /boot
>
> /dev/mapper/onn-home976M  2.6M
> 907M   1% /home
>
> /dev/mapper/onn-tmp 2.0G  6.3M
> 1.8G   1% /tmp
>
> /dev/sda1   200M  9.5M
> 191M   5% /boot/efi
>
> /dev/mapper/onn-var  15G  1.8G   13G
> 13% /var
>
> /dev/mapper/onn-var--log7.8G  224M
> 7.2G   3% /var/log
>
> /dev/mapper/onn-var--log--audit 2.0G   44M
> 1.8G   3% /var/log/audit
>
> tmpfs   8.7G 0
> 8.7G   0% /run/user/0
>
>
>
> If we need any space we want to reduce the vg size and create new
> one.(This is correct)
>
>
>
>
>
> If the above step is complicated, can you please suggest to setup
> glusterfs datastore in ovirt
>
>
>
> Anzar Esmail Sainudeen
>
> Group Datacenter Incharge| IT Infra Division | Thumbay Group
>
> P.O Box : 4184 | Ajman | United Arab Emirates.
>
> Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303
>
> Email: an...@it.thumbay.com | Website: www.thumbay.com
>
> [image: cid:image001.jpg@01D18D9D.15A17620]
>
>
>
> Disclaimer: This message contains confidential information and is intended
> only for the individual named. If you are not the named addressee, you are
> hereby notified that disclosing, copying, distributing or taking any action
> in reliance on the contents of this e-mail is strictly prohibited. Please
> notify the sender immediately by e-mail if you have received this e-mail by
> mistake, and delete this material. Thumbay Group accepts no liability for
> errors or omissions in the contents of this message, which arise as a
> result of e-mail transmission.
>
>
>
> *From:* Kasturi Narra [mailto:kna...@redhat.com]
> *Sent:* Monday, August 28, 2017 1:14 PM
>
> *To:* Anzar Esmail Sainudeen
> *Cc:* users
> *Subject:* Re: [ovirt-users] hosted engine setup with Gluster fail
>
>
>
> yes, you can create. I do not see any problems there.
>
>
>
> May i know how these vgs are created ? If they are not created using
> gdeploy then you will have to create bricks manually from the new vg you
> have created.
>
>
>
> On Mon, Aug 28, 2017 at 2:10 PM, Anzar Esmail Sainudeen <
> an...@it.thumbay.com> wrote:
>
> Dear Nara,
>
>
>
> Thank you for your great reply.
>
>
>
> 1) can you please check if the disks what would be used for brick creation
> does not have labels or any partitions on them ?
>
>
>
> Yes I agreed there is no labels partition available, my doubt is it
> possible to create required bricks partition from available 406.7G  Linux
> LVM. Following are the physical volume and volume group information.
>
>
>
>
>
> [root@ovirtnode1 ~]# pvdisplay
>
>   --- Physical volume ---
>
>   PV Name   /dev/sda3
>
>   VG Name   onn
>
>   PV Size   555.73 GiB / not usable 2.00 MiB
>
>   Allocatable   yes
>
>   PE Size   4.00 MiB
>
>   Total PE  142267

Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-28 Thread Anzar Esmail Sainudeen
Dear Nara,

 

All the partitions, pv and vg are created automatically during the initial 
setup time.

 

[root@ovirtnode1 ~]# vgs

  VG  #PV #LV #SN Attr   VSize   VFree 

  onn   1  12   0 wz--n- 555.73g 14.93g

 

All the space is mounted at the locations below; all free space is mounted on /.

 

Filesystem  Size  Used Avail Use% 
Mounted on

/dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1  513G  4.2G  483G   1% /

devtmpfs 44G 0   44G   0% 
/dev

tmpfs44G  4.0K   44G   1% 
/dev/shm

tmpfs44G   33M   44G   1% 
/run

tmpfs44G 0   44G   0% 
/sys/fs/cgroup

/dev/sda2   976M  135M  774M  15% 
/boot

/dev/mapper/onn-home976M  2.6M  907M   1% 
/home

/dev/mapper/onn-tmp 2.0G  6.3M  1.8G   1% 
/tmp

/dev/sda1   200M  9.5M  191M   5% 
/boot/efi

/dev/mapper/onn-var  15G  1.8G   13G  13% 
/var

/dev/mapper/onn-var--log7.8G  224M  7.2G   3% 
/var/log

/dev/mapper/onn-var--log--audit 2.0G   44M  1.8G   3% 
/var/log/audit

tmpfs   8.7G 0  8.7G   0% 
/run/user/0

 

If we need any space, we would reduce the VG size and create a new one. (Is this
correct?)

 

 

If the above step is complicated, can you please suggest how to set up a
glusterfs datastore in oVirt?

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: an...@it.thumbay.com <mailto:an...@it.thumbay.com>  | Website: 
www.thumbay.com <http://www.thumbay.com/> 



 

Disclaimer: This message contains confidential information and is intended only 
for the individual named. If you are not the named addressee, you are hereby 
notified that disclosing, copying, distributing or taking any action in 
reliance on the contents of this e-mail is strictly prohibited. Please notify 
the sender immediately by e-mail if you have received this e-mail by mistake, 
and delete this material. Thumbay Group accepts no liability for errors or 
omissions in the contents of this message, which arise as a result of e-mail 
transmission.

 

From: Kasturi Narra [mailto:kna...@redhat.com] 
Sent: Monday, August 28, 2017 1:14 PM
To: Anzar Esmail Sainudeen
Cc: users
Subject: Re: [ovirt-users] hosted engine setup with Gluster fail

 

yes, you can create. I do not see any problems there. 

 

May i know how these vgs are created ? If they are not created using gdeploy 
then you will have to create bricks manually from the new vg you have created.

 

On Mon, Aug 28, 2017 at 2:10 PM, Anzar Esmail Sainudeen <an...@it.thumbay.com 
<mailto:an...@it.thumbay.com> > wrote:

Dear Nara,

 

Thank you for your great reply.

 

1) can you please check if the disks what would be used for brick creation does 
not have labels or any partitions on them ?

 

Yes I agreed there is no labels partition available, my doubt is it possible to 
create required bricks partition from available 406.7G  Linux LVM. Following 
are the physical volume and volume group information.

 

 

[root@ovirtnode1 ~]# pvdisplay 

  --- Physical volume ---

  PV Name   /dev/sda3

  VG Name   onn

  PV Size   555.73 GiB / not usable 2.00 MiB

  Allocatable   yes 

  PE Size   4.00 MiB

  Total PE  142267

  Free PE   3823

  Allocated PE  138444

  PV UUID   v1eGGf-r1he-3XZt-JUOM-8XiT-iGkf-0xClUe

   

[root@ovirtnode1 ~]# vgdisplay 

  --- Volume group ---

  VG Name   onn

  System ID 

  Formatlvm2

  Metadata Areas1

  Metadata Sequence No  48

  VG Access read/write

  VG Status resizable

  MAX LV0

  Cur LV12

  Open LV   7

  Max PV0

  Cur PV1

  Act PV1

  VG Size   555.73 GiB

  PE Size   4.00 MiB

  Total PE  142267

  Alloc PE / Size   138444 / 540.80 GiB

  Free  PE / Size   3823 / 14.93 GiB

  VG UUID   nFfNXN-DcJt-bX1Q-UQ2U-07J5-ceT3-ULFtcy

   

 

I am thinking, to reduce the vg size and create new vg for gluster. Is it a 
good thinking.

   

 

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: an...@it.thumbay.com <mailto:an...

Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-28 Thread Kasturi Narra
yes, you can create. I do not see any problems there.

May I know how these VGs were created? If they were not created using
gdeploy, then you will have to create bricks manually from the new VG you
have created.
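
For example, a minimal manual brick carved out of a dedicated VG could look like
this (the device, VG/LV names and size below are only illustrative):

vgcreate gluster_vg /dev/sdX
lvcreate -L 50G -n gluster_brick1 gluster_vg
mkfs.xfs -i size=512 /dev/gluster_vg/gluster_brick1
mkdir -p /gluster_bricks/engine
mount /dev/gluster_vg/gluster_brick1 /gluster_bricks/engine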

On Mon, Aug 28, 2017 at 2:10 PM, Anzar Esmail Sainudeen <
an...@it.thumbay.com> wrote:

> Dear Nara,
>
>
>
> Thank you for your great reply.
>
>
>
> 1) can you please check if the disks what would be used for brick creation
> does not have labels or any partitions on them ?
>
>
>
> Yes I agreed there is no labels partition available, my doubt is it
> possible to create required bricks partition from available 406.7G  Linux
> LVM. Following are the physical volume and volume group information.
>
>
>
>
>
> [root@ovirtnode1 ~]# pvdisplay
>
>   --- Physical volume ---
>
>   PV Name   /dev/sda3
>
>   VG Name   onn
>
>   PV Size   555.73 GiB / not usable 2.00 MiB
>
>   Allocatable   yes
>
>   PE Size   4.00 MiB
>
>   Total PE  142267
>
>   Free PE   3823
>
>   Allocated PE  138444
>
>   PV UUID   v1eGGf-r1he-3XZt-JUOM-8XiT-iGkf-0xClUe
>
>
>
> [root@ovirtnode1 ~]# vgdisplay
>
>   --- Volume group ---
>
>   VG Name   onn
>
>   System ID
>
>   Formatlvm2
>
>   Metadata Areas1
>
>   Metadata Sequence No  48
>
>   VG Access read/write
>
>   VG Status resizable
>
>   MAX LV0
>
>   Cur LV12
>
>   Open LV   7
>
>   Max PV0
>
>   Cur PV1
>
>   Act PV1
>
>   VG Size   555.73 GiB
>
>   PE Size   4.00 MiB
>
>   Total PE  142267
>
>   Alloc PE / Size   138444 / 540.80 GiB
>
>   Free  PE / Size   3823 / 14.93 GiB
>
>   VG UUID   nFfNXN-DcJt-bX1Q-UQ2U-07J5-ceT3-ULFtcy
>
>
>
>
>
> I am thinking, to reduce the vg size and create new vg for gluster. Is it
> a good thinking.
>
>
>
>
>
>
>
> Anzar Esmail Sainudeen
>
> Group Datacenter Incharge| IT Infra Division | Thumbay Group
>
> P.O Box : 4184 | Ajman | United Arab Emirates.
>
> Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303
>
> Email: an...@it.thumbay.com | Website: www.thumbay.com
>
> [image: cid:image001.jpg@01D18D9D.15A17620]
>
>
>
> Disclaimer: This message contains confidential information and is intended
> only for the individual named. If you are not the named addressee, you are
> hereby notified that disclosing, copying, distributing or taking any action
> in reliance on the contents of this e-mail is strictly prohibited. Please
> notify the sender immediately by e-mail if you have received this e-mail by
> mistake, and delete this material. Thumbay Group accepts no liability for
> errors or omissions in the contents of this message, which arise as a
> result of e-mail transmission.
>
>
>
> *From:* Kasturi Narra [mailto:kna...@redhat.com]
> *Sent:* Monday, August 28, 2017 9:48 AM
> *To:* Anzar Esmail Sainudeen
> *Cc:* users
> *Subject:* Re: [ovirt-users] hosted engine setup with Gluster fail
>
>
>
> Hi,
>
>
>
>If i understand right gdeploy script is failing at [1]. There could be
> two possible reasons why that would fail.
>
>
>
> 1) can you please check if the disks what would be used for brick creation
> does not have lables or any partitions on them ?
>
>
>
> 2) can you please check if the path [1] exists. If it does not can you
> please change the path of the script in gdeploy.conf file
> to /usr/share/gdeploy/scripts/grafton-sanity-check.sh
>
>
>
> [1] /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
>
>
>
> Thanks
>
> kasturi
>
>
>
> On Sun, Aug 27, 2017 at 6:52 PM, Anzar Esmail Sainudeen <
> an...@it.thumbay.com> wrote:
>
> Dear Team Ovirt,
>
>
>
> I am trying to deploy hosted engine setup with Gluster. Hosted engine
> setup was failed. Total number of host is 3 server
>
>
>
>
>
> PLAY [gluster_servers] **
> ***
>
>
>
> TASK [Run a shell script] **
> 
>
> fatal: [ovirtnode4.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
>
> fatal: [ovirtnode3.thumbaytechlabs.i

Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-28 Thread Anzar Esmail Sainudeen
Dear Nara,

 

Thank you for your great reply.

 

1) can you please check if the disks what would be used for brick creation does 
not have labels or any partitions on them ?

 

Yes, I agree there are no labels or partitions on them; my doubt is whether it is
possible to create the required brick partitions from the available 406.7G Linux
LVM partition. Following is the physical volume and volume group information.

 

 

[root@ovirtnode1 ~]# pvdisplay 

  --- Physical volume ---

  PV Name   /dev/sda3

  VG Name   onn

  PV Size   555.73 GiB / not usable 2.00 MiB

  Allocatable   yes 

  PE Size   4.00 MiB

  Total PE  142267

  Free PE   3823

  Allocated PE  138444

  PV UUID   v1eGGf-r1he-3XZt-JUOM-8XiT-iGkf-0xClUe

   

[root@ovirtnode1 ~]# vgdisplay 

  --- Volume group ---

  VG Name   onn

  System ID 

  Formatlvm2

  Metadata Areas1

  Metadata Sequence No  48

  VG Access read/write

  VG Status resizable

  MAX LV0

  Cur LV12

  Open LV   7

  Max PV0

  Cur PV1

  Act PV1

  VG Size   555.73 GiB

  PE Size   4.00 MiB

  Total PE  142267

  Alloc PE / Size   138444 / 540.80 GiB

  Free  PE / Size   3823 / 14.93 GiB

  VG UUID   nFfNXN-DcJt-bX1Q-UQ2U-07J5-ceT3-ULFtcy

   

 

I am thinking of reducing the VG size and creating a new VG for gluster. Is that
a good idea?

   

 

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: an...@it.thumbay.com <mailto:an...@it.thumbay.com>  | Website: 
www.thumbay.com <http://www.thumbay.com/> 



 

Disclaimer: This message contains confidential information and is intended only 
for the individual named. If you are not the named addressee, you are hereby 
notified that disclosing, copying, distributing or taking any action in 
reliance on the contents of this e-mail is strictly prohibited. Please notify 
the sender immediately by e-mail if you have received this e-mail by mistake, 
and delete this material. Thumbay Group accepts no liability for errors or 
omissions in the contents of this message, which arise as a result of e-mail 
transmission.

 

From: Kasturi Narra [mailto:kna...@redhat.com] 
Sent: Monday, August 28, 2017 9:48 AM
To: Anzar Esmail Sainudeen
Cc: users
Subject: Re: [ovirt-users] hosted engine setup with Gluster fail

 

Hi,

 

   If i understand right gdeploy script is failing at [1]. There could be two 
possible reasons why that would fail.

 

1) can you please check if the disks what would be used for brick creation does 
not have lables or any partitions on them ?

 

2) can you please check if the path [1] exists. If it does not can you please 
change the path of the script in gdeploy.conf file to 
/usr/share/gdeploy/scripts/grafton-sanity-check.sh

 

[1] /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

 

Thanks

kasturi

 

On Sun, Aug 27, 2017 at 6:52 PM, Anzar Esmail Sainudeen <an...@it.thumbay.com 
<mailto:an...@it.thumbay.com> > wrote:

Dear Team Ovirt,

 

I am trying to deploy hosted engine setup with Gluster. Hosted engine setup was 
failed. Total number of host is 3 server 

 

 

PLAY [gluster_servers] *

 

TASK [Run a shell script] **

fatal: [ovirtnode4.thumbaytechlabs.int <http://ovirtnode4.thumbaytechlabs.int> 
]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' 
failed. The error was: error while evaluating conditional (result.rc != 0): 
'dict object' has no attribute 'rc'"}

fatal: [ovirtnode3.thumbaytechlabs.int <http://ovirtnode3.thumbaytechlabs.int> 
]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' 
failed. The error was: error while evaluating conditional (result.rc != 0): 
'dict object' has no attribute 'rc'"}

fatal: [ovirtnode2.thumbaytechlabs.int <http://ovirtnode2.thumbaytechlabs.int> 
]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' 
failed. The error was: error while evaluating conditional (result.rc != 0): 
'dict object' has no attribute 'rc'"}

to retry, use: --limit @/tmp/tmp59G7Vc/run-script.retry

 

PLAY RECAP *

ovirtnode2.thumbaytechlabs.int <http://ovirtnode2.thumbaytechlabs.int>  : ok=0  
  changed=0unreachable=0failed=1   

ovirtnode3.thumbaytechlabs.int <http://ovirtnode3.thumbaytechlabs.int>  : ok=0  
  changed=0  

Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-27 Thread Kasturi Narra
Hi,

   If I understand right, the gdeploy script is failing at [1]. There could be
two possible reasons why that would fail.

1) Can you please check that the disks which would be used for brick creation
do not have labels or any partitions on them?

2) Can you please check if the path [1] exists? If it does not, can you
please change the path of the script in the gdeploy.conf file
to /usr/share/gdeploy/scripts/grafton-sanity-check.sh

[1] /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
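
A quick way to check which of the two paths actually exists and, assuming you edit
the gdeploy config that cockpit generated (use whatever path yours lives at), to
point it at the right one:

ls -l /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
ls -l /usr/share/gdeploy/scripts/grafton-sanity-check.sh
sed -i 's|/usr/share/ansible/gdeploy/scripts|/usr/share/gdeploy/scripts|g' /path/to/gdeploy.conf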

Thanks
kasturi

On Sun, Aug 27, 2017 at 6:52 PM, Anzar Esmail Sainudeen <
an...@it.thumbay.com> wrote:

> Dear Team Ovirt,
>
>
>
> I am trying to deploy hosted engine setup with Gluster. Hosted engine
> setup was failed. Total number of host is 3 server
>
>
>
>
>
> PLAY [gluster_servers] **
> ***
>
>
>
> TASK [Run a shell script] **
> 
>
> fatal: [ovirtnode4.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
>
> fatal: [ovirtnode3.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
>
> fatal: [ovirtnode2.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
>
> to retry, use: --limit @/tmp/tmp59G7Vc/run-script.retry
>
>
>
> PLAY RECAP 
> *
>
> ovirtnode2.thumbaytechlabs.int : ok=0changed=0unreachable=0
> failed=1
>
> ovirtnode3.thumbaytechlabs.int : ok=0changed=0unreachable=0
> failed=1
>
> ovirtnode4.thumbaytechlabs.int : ok=0changed=0unreachable=0
> failed=1
>
>
>
>
>
> Please note my finding.
>
>
>
> 1.Still I am doubt with bricks setup ares . because during the ovirt
> node setup time automatically create partition and mount all space. Please
> find below #fdisk –l output
>
> 2.
>
> [root@ovirtnode4 ~]# fdisk –l
>
>
>
> WARNING: fdisk GPT support is currently new, and therefore in an
> experimental phase. Use at your own discretion.
>
>
>
> Disk /dev/sda: 438.0 GB, 437998583808 bytes, 855465984 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
> Disk label type: gpt
>
>
>
>
>
> # Start  EndSize  TypeName
>
> 1 2048   411647200M  EFI System  EFI System Partition
>
> 2   411648  2508799  1G  Microsoft basic
>
>  3  2508800855463935  406.7G  Linux LVM
>
>
>
> Disk /dev/mapper/onn-swap: 25.4 GB, 25367150592 bytes, 49545216 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00_tmeta: 1073 MB, 1073741824 bytes, 2097152
> sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00_tdata: 394.2 GB, 394159718400 bytes, 769843200
> sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00-tpool: 394.2 GB, 394159718400 bytes, 769843200
> sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1: 378.1 GB,
> 378053591040 bytes, 738385920 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00: 394.2 GB, 394159718400 bytes, 769843200
> sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-var: 16.1 GB, 16106127360 bytes, 31457280 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-root: 378.1 GB, 378053591040 bytes, 738385920 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> 

[ovirt-users] hosted engine setup with Gluster fail

2017-08-27 Thread Anzar Esmail Sainudeen
Dear Team Ovirt,

 

I am trying to deploy the hosted engine setup with Gluster. The hosted engine
setup failed. The total number of hosts is 3 servers.

 

 

PLAY [gluster_servers]
*

 

TASK [Run a shell script]
**

fatal: [ovirtnode4.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg":
"The conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}

fatal: [ovirtnode3.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg":
"The conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}

fatal: [ovirtnode2.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg":
"The conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}

to retry, use: --limit @/tmp/tmp59G7Vc/run-script.retry

 

PLAY RECAP
*

ovirtnode2.thumbaytechlabs.int : ok=0changed=0unreachable=0
failed=1   

ovirtnode3.thumbaytechlabs.int : ok=0changed=0unreachable=0
failed=1   

ovirtnode4.thumbaytechlabs.int : ok=0changed=0unreachable=0
failed=1   

 

 

Please note my finding.

 

1. I still have doubts about the brick setup area, because during the oVirt
node setup the installer automatically creates partitions and mounts all the
space. Please find the #fdisk -l output below.

2. 

[root@ovirtnode4 ~]# fdisk -l

 

WARNING: fdisk GPT support is currently new, and therefore in an
experimental phase. Use at your own discretion.

 

Disk /dev/sda: 438.0 GB, 437998583808 bytes, 855465984 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: gpt

 

 

# Start  EndSize  TypeName

1 2048   411647200M  EFI System  EFI System Partition

2   411648  2508799  1G  Microsoft basic 

 3  2508800855463935  406.7G  Linux LVM   

 

Disk /dev/mapper/onn-swap: 25.4 GB, 25367150592 bytes, 49545216 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

 

 

Disk /dev/mapper/onn-pool00_tmeta: 1073 MB, 1073741824 bytes, 2097152
sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

 

 

Disk /dev/mapper/onn-pool00_tdata: 394.2 GB, 394159718400 bytes, 769843200
sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

 

 

Disk /dev/mapper/onn-pool00-tpool: 394.2 GB, 394159718400 bytes, 769843200
sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1: 378.1 GB,
378053591040 bytes, 738385920 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-pool00: 394.2 GB, 394159718400 bytes, 769843200 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-var: 16.1 GB, 16106127360 bytes, 31457280 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-root: 378.1 GB, 378053591040 bytes, 738385920 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-var--log: 8589 MB, 8589934592 bytes, 16777216 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-home: 1073 MB, 1073741824 bytes, 2097152 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-tmp: 2147 MB, 2147483648 bytes, 4194304 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-var--log--audit: 2147 MB, 2147483648 bytes, 4194304
sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size 

Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-06-18 Thread Mike DePaulo
On Thu, May 18, 2017 at 10:03 AM, Sachidananda URS  wrote:

> Hi,
>
> On Thu, May 18, 2017 at 7:08 PM, Sahina Bose  wrote:
>
>>
>>
>> On Thu, May 18, 2017 at 3:20 PM, Mike DePaulo 
>> wrote:
>>
>>> Well, I tried both of the following:
>>> 1. Having only a boot partition and a PV for the OS that does not take
>>> up the entire disk, and then specifying "sda" in Hosted Engine Setup.
>>> 2. Having not only a boot partition and a PV for the OS, but also an
>>> empty (and not formatted) /dev/sda3 PV that I created with fdisk.
>>> Then, specfying "sda3" in Hosted Engine Setup.
>>>
>>> Both attempts resulted in errors like this:
>>> failed: [centerpoint.ad.depaulo.org] (item=/dev/sda3) => {"failed":
>>> true, "failed_when_result": true, "item": "/dev/sda3", "msg": "
>>> Device /dev/sda3 not found (or ignored by filtering).\n", "rc": 5}
>>>
>>
>> Can you provide gdeploy logs? I think, it's at ~/.gdeploy/gdeploy.log
>>
>>
>>>
>>> It seems like having gluster bricks on the same disk as the OS doesn't
>>> work at all.
>>>
>>>
>
> Hi, /dev/sda3 should work, the error here is possibly due to filesystem
> signature.
>
> Can you please set wipefs=yes? For example
>
> [pv]
> action=create
> wipefs=yes
> devices=/dev/sda3
>
> -sac
>
>
Sorry for the long delay.

This worked. Thank you very much.

-Mike
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-18 Thread Sachidananda URS
Hi,

On Thu, May 18, 2017 at 7:08 PM, Sahina Bose  wrote:

>
>
> On Thu, May 18, 2017 at 3:20 PM, Mike DePaulo 
> wrote:
>
>> Well, I tried both of the following:
>> 1. Having only a boot partition and a PV for the OS that does not take
>> up the entire disk, and then specifying "sda" in Hosted Engine Setup.
>> 2. Having not only a boot partition and a PV for the OS, but also an
>> empty (and not formatted) /dev/sda3 PV that I created with fdisk.
>> Then, specfying "sda3" in Hosted Engine Setup.
>>
>> Both attempts resulted in errors like this:
>> failed: [centerpoint.ad.depaulo.org] (item=/dev/sda3) => {"failed":
>> true, "failed_when_result": true, "item": "/dev/sda3", "msg": "
>> Device /dev/sda3 not found (or ignored by filtering).\n", "rc": 5}
>>
>
> Can you provide gdeploy logs? I think, it's at ~/.gdeploy/gdeploy.log
>
>
>>
>> It seems like having gluster bricks on the same disk as the OS doesn't
>> work at all.
>>
>>

Hi, /dev/sda3 should work, the error here is possibly due to filesystem
signature.

Can you please set wipefs=yes? For example

[pv]
action=create
wipefs=yes
devices=/dev/sda3
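
(The same cleanup can also be done by hand before running the deploy; note that
wipefs -a destroys whatever filesystem/RAID signatures are on the partition, so
double-check the device first.)

wipefs -a /dev/sda3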

-sac
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-18 Thread Sahina Bose
On Thu, May 18, 2017 at 3:20 PM, Mike DePaulo  wrote:

> Well, I tried both of the following:
> 1. Having only a boot partition and a PV for the OS that does not take
> up the entire disk, and then specifying "sda" in Hosted Engine Setup.
> 2. Having not only a boot partition and a PV for the OS, but also an
> empty (and not formatted) /dev/sda3 PV that I created with fdisk.
> Then, specfying "sda3" in Hosted Engine Setup.
>
> Both attempts resulted in errors like this:
> failed: [centerpoint.ad.depaulo.org] (item=/dev/sda3) => {"failed":
> true, "failed_when_result": true, "item": "/dev/sda3", "msg": "
> Device /dev/sda3 not found (or ignored by filtering).\n", "rc": 5}
>

Can you provide gdeploy logs? I think, it's at ~/.gdeploy/gdeploy.log


>
> It seems like having gluster bricks on the same disk as the OS doesn't
> work at all.
>
> I am going to buy separate OS SSDs.
>
> -Mike
>
> On Tue, May 9, 2017 at 6:22 AM, Mike DePaulo  wrote:
> > On Mon, May 8, 2017 at 9:00 AM, knarra  wrote:
> >> On 05/07/2017 04:48 PM, Mike DePaulo wrote:
> >>>
> >>> Hi. I am trying to follow this guide. Is it possible to use part of my
> >>> OS disk /dev/sda for the bricks?
> >>>
> >>> https://www.ovirt.org/blog/2017/04/up-and-running-with-
> ovirt-4-1-and-gluster-storage/
> >>>
> >>> I am using oVirt Node 4.1.1.1. I am aware of the manual partitioning
> >>> requirements. I am guessing I have to create an LV for the OS that
> >>> does not take up the entire disk during install, manually create a pv
> >>> like /dev/sda3 afterwards, and then run Hosted Engine Setup and
> >>> specify /sda3 rather than sdb?
> >>>
> >>> Thanks,
> >>> -Mike
> >>> ___
> >>> Users mailing list
> >>> Users@ovirt.org
> >>> http://lists.ovirt.org/mailman/listinfo/users
> >>
> >>
> >> Hi Mike,
> >>
> >> If you create gluster bricks on the same disk as OS it works but we
> do
> >> not recommend setting up gluster bricks on the same disk as the os. When
> >> user tries to create a gluster volume using by specifying the bricks
> from
> >> root partition it displays an error message "Bricks in root parition not
> >> recommended and use force at the end to create volume".
> >>
> >> Thanks
> >>
> >> kasturi
> >>
> >
> > Thank you very much. Is my process for doing this (listed in my
> > original email) correct though?
> >
> > -Mike
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-18 Thread Mike DePaulo
Well, I tried both of the following:
1. Having only a boot partition and a PV for the OS that does not take
up the entire disk, and then specifying "sda" in Hosted Engine Setup.
2. Having not only a boot partition and a PV for the OS, but also an
empty (and not formatted) /dev/sda3 PV that I created with fdisk.
Then, specifying "sda3" in Hosted Engine Setup.

Both attempts resulted in errors like this:
failed: [centerpoint.ad.depaulo.org] (item=/dev/sda3) => {"failed":
true, "failed_when_result": true, "item": "/dev/sda3", "msg": "
Device /dev/sda3 not found (or ignored by filtering).\n", "rc": 5}

It seems like having gluster bricks on the same disk as the OS doesn't
work at all.

I am going to buy separate OS SSDs.

-Mike

On Tue, May 9, 2017 at 6:22 AM, Mike DePaulo  wrote:
> On Mon, May 8, 2017 at 9:00 AM, knarra  wrote:
>> On 05/07/2017 04:48 PM, Mike DePaulo wrote:
>>>
>>> Hi. I am trying to follow this guide. Is it possible to use part of my
>>> OS disk /dev/sda for the bricks?
>>>
>>> https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/
>>>
>>> I am using oVirt Node 4.1.1.1. I am aware of the manual partitioning
>>> requirements. I am guessing I have to create an LV for the OS that
>>> does not take up the entire disk during install, manually create a pv
>>> like /dev/sda3 afterwards, and then run Hosted Engine Setup and
>>> specify /sda3 rather than sdb?
>>>
>>> Thanks,
>>> -Mike
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>> Hi Mike,
>>
> >> Creating gluster bricks on the same disk as the OS works, but we do not
> >> recommend it. When a user tries to create a gluster volume by specifying
> >> bricks from the root partition, it displays the error message "Bricks in
> >> root partition not recommended and use force at the end to create volume".
>>
>> Thanks
>>
>> kasturi
>>
>
> Thank you very much. Is my process for doing this (listed in my
> original email) correct though?
>
> -Mike
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-09 Thread Mike DePaulo
On Mon, May 8, 2017 at 9:00 AM, knarra  wrote:
> On 05/07/2017 04:48 PM, Mike DePaulo wrote:
>>
>> Hi. I am trying to follow this guide. Is it possible to use part of my
>> OS disk /dev/sda for the bricks?
>>
>> https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/
>>
>> I am using oVirt Node 4.1.1.1. I am aware of the manual partitioning
>> requirements. I am guessing I have to create an LV for the OS that
>> does not take up the entire disk during install, manually create a pv
>> like /dev/sda3 afterwards, and then run Hosted Engine Setup and
>> specify /sda3 rather than sdb?
>>
>> Thanks,
>> -Mike
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
> Hi Mike,
>
> Creating gluster bricks on the same disk as the OS works, but we do not
> recommend it. When a user tries to create a gluster volume by specifying
> bricks from the root partition, it displays the error message "Bricks in
> root partition not recommended and use force at the end to create volume".
>
> Thanks
>
> kasturi
>

Thank you very much. Is my process for doing this (listed in my
original email) correct though?

-Mike
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-08 Thread knarra

On 05/07/2017 04:48 PM, Mike DePaulo wrote:

Hi. I am trying to follow this guide. Is it possible to use part of my
OS disk /dev/sda for the bricks?
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/

I am using oVirt Node 4.1.1.1. I am aware of the manual partitioning
requirements. I am guessing I have to create an LV for the OS that
does not take up the entire disk during install, manually create a pv
like /dev/sda3 afterwards, and then run Hosted Engine Setup and
specify /sda3 rather than sdb?

Thanks,
-Mike
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi Mike,

Creating gluster bricks on the same disk as the OS works, but we do not 
recommend it. When a user tries to create a gluster volume by specifying 
bricks from the root partition, it displays the error message "Bricks in 
root partition not recommended and use force at the end to create volume".
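
For reference, creating such a volume anyway (not recommended, as noted above)
would look roughly like this; the host names and brick paths below are only
placeholders, not values from this thread:

gluster volume create engine replica 3 \
  host1:/gluster/engine/brick host2:/gluster/engine/brick \
  host3:/gluster/engine/brick force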


Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-07 Thread Mike DePaulo
Hi. I am trying to follow this guide. Is it possible to use part of my
OS disk /dev/sda for the bricks?
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/

I am using oVirt Node 4.1.1.1. I am aware of the manual partitioning
requirements. I am guessing I have to create an LV for the OS that
does not take up the entire disk during install, manually create a pv
like /dev/sda3 afterwards, and then run Hosted Engine Setup and
specify /sda3 rather than sdb?
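
A rough sketch of that manual step (the start offset below is only an example,
not a value from this thread):

parted -s /dev/sda mkpart primary 100GiB 100%   # carve sda3 out of the free space
partprobe /dev/sda                              # re-read the partition table
# leave sda3 unformatted; Hosted Engine Setup / gdeploy creates the PV on it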

Thanks,
-Mike
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine setup shooting dirty pool

2017-04-13 Thread Sahina Bose
On Wed, Apr 12, 2017 at 11:15 PM, Jamie Lawrence 
wrote:

>
> > On Apr 12, 2017, at 1:31 AM, Evgenia Tokar  wrote:
> >
> > Hi Jamie,
> >
> > Are you trying to setup hosted engine using the "hosted-engine --deploy"
> command, or are you trying to migrate existing he vm?
> >
> > For hosted engine setup you need to provide a clean storage domain,
> which is not a part of your 4.1 setup, this storage domain will be used for
> the hosted engine and will be visible in the UI once the deployment of the
> hosted engine is complete.
> > If your storage domain appears in the UI it means that it is already
> connected to the storage pool and is not "clean".
>
> Hi Jenny,
>
> Thanks for the response.
>
> I’m using `hosted-engine --deploy`, yes. (Actually, the last few attempts
> have been with an answerfile, but the responses are the same.)
>
> I think I may have been unclear.  I understand that it wants an unmolested
> SD. There just doesn’t seem to be a path to provide that with an
> Ovirt-managed Gluster cluster.
>
> I guess my question is how to provide that with an Ovirt-managed gluster
> installation. Or a different way of asking, I guess, would be how do I make
> Ovirt/VDSM ignore a newly created gluster SD so that `hosted-engine` can
> pick it up? I don’t see any options to tell the Gluster cluster to not
> auto-discover or similar. So as soon as I create it, the non-hosted engine
> picks it up. This happens within seconds - I vainly tried to time it with
> running the installer.
>
> This is why I mentioned dismissing the idea of using another Gluster
> installation, unattached to Ovirt. That’s the only way I could think of to
> give it a clean pool. (I dismissed it because I can’t run this in
> production with that sort of dependency.)
>
> Do I need to take this Gluster cluster out of Ovirt control (delete the
> Gluster cluster from the Ovirt GUI, recreate outside of Ovirt manually),
> install on to that, and then re-associate it in the GUI or something
> similar?
>

The gluster cluster being detected in oVirt does not make it a dirty
storage domain. It looks like the gluster volume was previously used as a
storage domain and was not cleaned up? You can try mounting the gluster
volume and checking whether it has any content.
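
A minimal check along those lines (the mount point and volume path are
placeholders, not values from this thread):

mkdir -p /mnt/engine-check
mount -t glusterfs gluster-host:/engine /mnt/engine-check
ls -lA /mnt/engine-check   # anything but an empty listing means the volume was already used
umount /mnt/engine-check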

I'm a bit confused about the setup though - do you already have an
installation of oVirt engine that you use to manage the gluster hosts? Are
you deploying another engine (HE) that's managing the same hosts, or using
a gluster volume from another installation?


> -j
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine setup shooting dirty pool

2017-04-12 Thread Jamie Lawrence

> On Apr 12, 2017, at 1:31 AM, Evgenia Tokar  wrote:
> 
> Hi Jamie, 
> 
> Are you trying to setup hosted engine using the "hosted-engine --deploy" 
> command, or are you trying to migrate existing he vm? 
>  
> For hosted engine setup you need to provide a clean storage domain, which is 
> not a part of your 4.1 setup, this storage domain will be used for the hosted 
> engine and will be visible in the UI once the deployment of the hosted engine 
> is complete.
> If your storage domain appears in the UI it means that it is already 
> connected to the storage pool and is not "clean".

Hi Jenny,

Thanks for the response.

I’m using `hosted-engine --deploy`, yes. (Actually, the last few attempts have 
been with an answerfile, but the responses are the same.)

I think I may have been unclear.  I understand that it wants an unmolested SD. 
There just doesn’t seem to be a path to provide that with an Ovirt-managed 
Gluster cluster.

I guess my question is how to provide that with an Ovirt-managed gluster 
installation. Or a different way of asking, I guess, would be how do I make 
Ovirt/VDSM ignore a newly created gluster SD so that `hosted-engine` can pick 
it up? I don’t see any options to tell the Gluster cluster to not auto-discover 
or similar. So as soon as I create it, the non-hosted engine picks it up. This 
happens within seconds - I vainly tried to time it with running the installer.

This is why I mentioned dismissing the idea of using another Gluster 
installation, unattached to Ovirt. That’s the only way I could think of to give 
it a clean pool. (I dismissed it because I can’t run this in production with 
that sort of dependency.)

Do I need to take this Gluster cluster out of Ovirt control (delete the Gluster 
cluster from the Ovirt GUI, recreate outside of Ovirt manually), install on to 
that, and then re-associate it in the GUI or something similar?

-j
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine setup shooting dirty pool

2017-04-12 Thread Evgenia Tokar
Hi Jamie,

Are you trying to setup hosted engine using the "hosted-engine --deploy"
command, or are you trying to migrate existing he vm?

For hosted engine setup you need to provide a clean storage domain, which
is not a part of your 4.1 setup, this storage domain will be used for the
hosted engine and will be visible in the UI once the deployment of the
hosted engine is complete.
If your storage domain appears in the UI it means that it is already
connected to the storage pool and is not "clean".

Thanks,
Jenny

On Wed, Apr 12, 2017 at 2:47 AM, Jamie Lawrence 
wrote:

> Or at least, refusing to mount a dirty pool.
>
> I have 4.1 set up, configured and functional, currently wired up with two
> VM hosts and three Gluster hosts. It is configured with a (temporary) NFS
> data storage domain, with the end-goal being two data domains on Gluster;
> one for the hosted engine, one for other VMs.
>
> The issue is that `hosted-engine` sees any gluster volumes offered as
> dirty. (I have been creating them via the command line  right before
> attempting the hosted-engine migration; there is nothing in them at that
> stage.)  I *think* what is happening is that ovirt-engine notices a newly
> created volume and has its way with the volume (visible in the GUI; the
> volume appears in the list), and the hosted-engine installer becomes upset
> about that. What I don’t know is what to do about it. Relevant log lines
> below. The installer almost sounds like it is asking me to remove the
> UUID-directory and whatnot, but I’m pretty sure that’s just going to leave
> me with two problems instead of fixing the first one. I’ve considered
> attempting to wire this together in the DB, which also seems like a great
> way to break things. I’ve even thought of using a Gluster installation that
> Ovirt knows nothing about, mainly as an experiment to see if it would even
> work, but decided it doesn’t matter, because I can’t deploy in that state
> anyway and it doesn’t actually get me any closer to getting this working.
>
> I noticed several bugs in the tracker seemingly related, but the bulk of
> those were for past versions and I saw nothing that seemed actionable from
> my end in the others.
>
> So, can anyone spare a clue as to what is going wrong, and what to do
> about that?
>
> -j
>
> - - - - ovirt-hosted-engine-setup.log - - - -
>
> 2017-04-11 16:14:39 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._storageServerConnection:408 connectStorageServer
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._storageServerConnection:475 {'status': {'message': 'Done',
> 'code': 0}, 'items': [{u'status': 0, u'id': u'890e82cf-5570-4507-a9bc-
> c610584dea6e'}]}
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._storageServerConnection:502 {'status': {'message': 'Done',
> 'code': 0}, 'items': [{u'status': 0, u'id': u'cd1a1bb6-e607-4e35-b815-
> 1fd88b84fe14'}]}
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._check_existing_pools:794 _check_existing_pools
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._check_existing_pools:795 getConnectedStoragePoolsList
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._check_existing_pools:797 {'status': {'message': 'Done', 'code':
> 0}}
> 2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage
> storage._misc:956 Creating Storage Domain
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStorageDomain:513 createStorageDomain
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStorageDomain:547 {'status': {'message': 'Done', 'code':
> 0}}
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStorageDomain:549 {'status': {'message': 'Done', 'code':
> 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree':
> u'321929216000', u'disktotal': u'321965260800', u'mdafree': 0}
> 2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage
> storage._misc:959 Creating Storage Pool
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createFakeStorageDomain:553 createFakeStorageDomain
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createFakeStorageDomain:570 {'status': {'message': 'Done',
> 'code': 0}}
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createFakeStorageDomain:572 {'status': {'message': 'Done',
> 'code': 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True,
> u'diskfree': u'1933930496', u'disktotal': u'2046640128', u'mdafree': 0}
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStoragePool:587 createStoragePool
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStoragePool:627 createStoragePool(args=[
> 

[ovirt-users] Hosted engine setup shooting dirty pool

2017-04-11 Thread Jamie Lawrence
Or at least, refusing to mount a dirty pool.

I have 4.1 set up, configured and functional, currently wired up with two VM 
hosts and three Gluster hosts. It is configured with a (temporary) NFS data 
storage domain, with the end-goal being two data domains on Gluster; one for 
the hosted engine, one for other VMs.

The issue is that `hosted-engine` sees any gluster volumes offered as dirty. (I 
have been creating them via the command line  right before attempting the 
hosted-engine migration; there is nothing in them at that stage.)  I *think* 
what is happening is that ovirt-engine notices a newly created volume and has 
its way with the volume (visible in the GUI; the volume appears in the list), 
and the hosted-engine installer becomes upset about that. What I don’t know is 
what to do about it. Relevant log lines below. The installer almost sounds like 
it is asking me to remove the UUID-directory and whatnot, but I’m pretty sure 
that’s just going to leave me with two problems instead of fixing the first 
one. I’ve considered attempting to wire this together in the DB, which also 
seems like a great way to break things. I’ve even thought of using a Gluster 
installation that Ovirt knows nothing about, mainly as an experiment to see if 
it would even work, but decided it doesn’t matter, because I can’t deploy in 
that state anyway and it doesn’t actually get me any closer to getting this 
working.

I noticed several bugs in the tracker seemingly related, but the bulk of those 
were for past versions and I saw nothing that seemed actionable from my end in 
the others.

So, can anyone spare a clue as to what is going wrong, and what to do about 
that?

-j

- - - - ovirt-hosted-engine-setup.log - - - - 

2017-04-11 16:14:39 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storageServerConnection:408 connectStorageServer
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storageServerConnection:475 {'status': {'message': 'Done', 'code': 0}, 
'items': [{u'status': 0, u'id': u'890e82cf-5570-4507-a9bc-c610584dea6e'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storageServerConnection:502 {'status': {'message': 'Done', 'code': 0}, 
'items': [{u'status': 0, u'id': u'cd1a1bb6-e607-4e35-b815-1fd88b84fe14'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._check_existing_pools:794 _check_existing_pools
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._check_existing_pools:795 getConnectedStoragePoolsList
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._check_existing_pools:797 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage 
storage._misc:956 Creating Storage Domain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStorageDomain:513 createStorageDomain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStorageDomain:547 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStorageDomain:549 {'status': {'message': 'Done', 'code': 0}, 
u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': 
u'321929216000', u'disktotal': u'321965260800', u'mdafree': 0}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage 
storage._misc:959 Creating Storage Pool
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createFakeStorageDomain:553 createFakeStorageDomain
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createFakeStorageDomain:570 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createFakeStorageDomain:572 {'status': {'message': 'Done', 'code': 0}, 
u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': 
u'1933930496', u'disktotal': u'2046640128', u'mdafree': 0}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStoragePool:587 createStoragePool
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStoragePool:627 
createStoragePool(args=[storagepoolID=9e399f0c-7c4b-4131-be79-922dda038383, 
name=hosted_datacenter, masterSdUUID=9a5c302b-2a18-4c7e-b75d-29088299988c, 
masterVersion=1, domainList=['9a5c302b-2a18-4c7e-b75d-29088299988c', 
'f26efe61-a2e1-4a85-a212-269d0a047e07'], lockRenewalIntervalSec=None, 
leaseTimeSec=None, ioOpTimeoutSec=None, leaseRetries=None])
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStoragePool:640 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:15:29 INFO otopi.plugins.gr_he_setup.storage.storage 
storage._misc:962 Connecting Storage Pool
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storagePoolConnection:717 connectStoragePool

[ovirt-users] Hosted engine setup shooting dirty pool

2017-04-11 Thread Jamie Lawrence
Or at least, refusing to mount a dirty pool. I’m having trouble getting the 
hosted engine installed.

I have 4.1 set up, configured and functional, currently wired up with two VM 
hosts and three Gluster hosts. It is configured with a (temporary) NFS data 
storage domain, with the end-goal being two data domains on Gluster; one for 
the hosted engine, one for other VMs.

The issue is that `hosted-engine` sees any gluster volumes offered as dirty. (I 
have been creating them via the command line  right before attempting the 
hosted-engine migration; there is nothing in them at that stage.)  I *think* 
what is happening is that ovirt-engine notices a newly created volume and has 
its way with the volume (visible in the GUI; the volume appears in the list), 
and the hosted-engine installer becomes upset about that. What I don’t know is 
what to do about that. Relevant log lines below. The installer almost sounds 
like it is asking me to remove the UUID-directory and whatnot, but I’m pretty 
sure that’s just going to leave me with two problems instead of fixing the 
first one. I’ve considered attempting to wire this together in the DB, which 
also seems like a great way to break things. I’ve even thought of using a 
Gluster cluster that Ovirt knows nothing about, mainly as an experiment to see 
if it would even work, but decided it doesn’t especially matter, as 
architecturally that would not work for production in our environment and I 
just need to get this up.

So, can anyone spare a clue as to what is going wrong, and what to do about 
that?

-j

- - - - ovirt-hosted-engine-setup.log - - - - 

2017-04-11 16:14:39 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storageServerConnection:408 connectStorageServer
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storageServerConnection:475 {'status': {'message': 'Done', 'code': 0}, 
'items': [{u'status': 0, u'id': u'890e82cf-5570-4507-a9bc-c610584dea6e'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storageServerConnection:502 {'status': {'message': 'Done', 'code': 0}, 
'items': [{u'status': 0, u'id': u'cd1a1bb6-e607-4e35-b815-1fd88b84fe14'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._check_existing_pools:794 _check_existing_pools
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._check_existing_pools:795 getConnectedStoragePoolsList
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._check_existing_pools:797 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage 
storage._misc:956 Creating Storage Domain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStorageDomain:513 createStorageDomain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStorageDomain:547 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStorageDomain:549 {'status': {'message': 'Done', 'code': 0}, 
u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': 
u'321929216000', u'disktotal': u'321965260800', u'mdafree': 0}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage 
storage._misc:959 Creating Storage Pool
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createFakeStorageDomain:553 createFakeStorageDomain
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createFakeStorageDomain:570 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createFakeStorageDomain:572 {'status': {'message': 'Done', 'code': 0}, 
u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': 
u'1933930496', u'disktotal': u'2046640128', u'mdafree': 0}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStoragePool:587 createStoragePool
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStoragePool:627 
createStoragePool(args=[storagepoolID=9e399f0c-7c4b-4131-be79-922dda038383, 
name=hosted_datacenter, masterSdUUID=9a5c302b-2a18-4c7e-b75d-29088299988c, 
masterVersion=1, domainList=['9a5c302b-2a18-4c7e-b75d-29088299988c', 
'f26efe61-a2e1-4a85-a212-269d0a047e07'], lockRenewalIntervalSec=None, 
leaseTimeSec=None, ioOpTimeoutSec=None, leaseRetries=None])
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStoragePool:640 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:15:29 INFO otopi.plugins.gr_he_setup.storage.storage 
storage._misc:962 Connecting Storage Pool
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storagePoolConnection:717 connectStoragePool
2017-04-11 16:15:29 DEBUG otopi.context context._executeMethod:142 method 
exception
Traceback (most recent call last):
  

Re: [ovirt-users] hosted-Engine setup: hostname 'node01.example.com' doesn't uniquely match the interface selected for the management bridge

2016-07-14 Thread Yedidyah Bar David
On Tue, Jul 5, 2016 at 5:56 PM, mots  wrote:
> Hello,
>
> I'm trying to install Ovirt 4 on a new set of hosts. During "hosted-engine 
> --deploy" I get the following error: (personal information is replaced with 
> generic placeholders)
>
> [ INFO  ] Stage: Setup validation
> [ ERROR ] Failed to execute stage 'Setup validation': hostname 
> 'node01.example.com' doesn't uniquely match the interface 'ens802f1' selected 
> for the management bridge; it matches also interface with IP 
> set(['192.168.99.10']). Please make sure that the hostname got from the 
> interface for the management network resolves only there.
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file 
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160705144908.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable, 
> please check the issue, fix and redeploy
>   Log file is located at 
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160705144711-tl98lx.log
>
> That IP "192.168.99.10" doesn't resolve to anything, because I haven't added 
> it to the DNS server. It's also not in /etc/hosts.
> It's just the IP for the storage network that doesn't use DNS at all.
>
> From the log:
>
> 2016-07-05 14:49:08 DEBUG otopi.plugins.gr_he_common.network.bridge 
> bridge._get_hostname_from_bridge_if:274 Network info: {'netmask': 
> u'255.255.255.0', 'ipaddr': u'192.168.10.194', 'gateway': u'192.168.10.2'}

Meaning the interface ens802f1 has address 192.168.10.194

> 2016-07-05 14:49:08 DEBUG otopi.plugins.gr_he_common.network.bridge 
> bridge._get_hostname_from_bridge_if:310 hostname: 'node01.example.com', 
> aliaslist: '[]', ipaddrlist: '['192.168.99.10', '192.168.10.194']'

This is the result of:

python -c 'import socket; print(socket.gethostbyaddr("192.168.10.194"));'

> 2016-07-05 14:49:08 DEBUG otopi.context context._executeMethod:142 method 
> exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in 
> _executeMethod
> method['method']()
>   File 
> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/network/bridge.py",
>  line 327, in _get_hostname_from_bridge_if
> o=other_ip,
> RuntimeError: hostname 'node01.example.com' doesn't uniquely match the 
> interface 'ens802f1' selected for the management bridge; it matches also 
> interface with IP set(['192.168.99.10']). Please make sure that the hostname 
> got from the interface for the management network resolves only there.
> 2016-07-05 14:49:08 ERROR otopi.context context._executeMethod:151 Failed to 
> execute stage 'Setup validation': hostname 'node01.example.com' doesn't 
> uniquely match the interface 'ens802f1' selected for the management bridge; 
> it matches also interface with IP set(['192.168.99.10']). Please make sure 
> that the hostname got from the interface for the management network resolves 
> only there.
>
> The output for dig:
>
> [root@node01 ~]# dig node01.example.com
>
> ; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> node01.example.com
> ;; global options: +cmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45269
> ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2
>
> ;; OPT PSEUDOSECTION:
> ; EDNS: version: 0, flags:; udp: 4096
> ;; QUESTION SECTION:
> ;node01.example.com. IN  A
>
> ;; ANSWER SECTION:
> node01.example.com. 3600 IN  A   192.168.10.194
>
> ;; AUTHORITY SECTION:
> example.com   900 IN  NS  dns.example.com.
>
> ;; ADDITIONAL SECTION:
> dns.example.com. 900 IN  A   192.168.10.61
>
> ;; Query time: 3 msec
> ;; SERVER: 192.168.10.61#53(192.168.10.61)
> ;; WHEN: Die Jul 05 15:14:48 CEST 2016
> ;; MSG SIZE  rcvd: 110
>
> Output for nslookup:
>
> [root@node01 ~]# nslookup 192.168.99.10
> Server: 192.168.10.61
> Address:192.168.10.61#53
>
> ** server can't find 10.99.168.192.in-addr.arpa.: NXDOMAIN
>
> Why does the setup script think that my hostname resolves to 192.168.99.10?

Please run above python command and see for yourself.

Perhaps you have other means it uses for name resolution.

Check /etc/nsswitch.conf, getent, mdns, /etc/hosts, etc.
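
For example, something along these lines (a sketch of those checks):

python -c 'import socket; print(socket.gethostbyaddr("192.168.10.194"));'
getent hosts node01.example.com     # what the full resolver stack (not just DNS) returns
grep -n node01 /etc/hosts           # a stray hosts entry is a common cause
grep '^hosts:' /etc/nsswitch.conf   # which sources are consulted, and in what order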

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted-Engine setup: hostname 'node01.example.com' doesn't uniquely match the interface selected for the management bridge

2016-07-05 Thread mots
Hello,

I'm trying to install Ovirt 4 on a new set of hosts. During "hosted-engine 
--deploy" I get the following error: (personal information is replaced with 
generic placeholders)

[ INFO  ] Stage: Setup validation
[ ERROR ] Failed to execute stage 'Setup validation': hostname 
'node01.example.com' doesn't uniquely match the interface 'ens802f1' selected 
for the management bridge; it matches also interface with IP 
set(['192.168.99.10']). Please make sure that the hostname got from the 
interface for the management network resolves only there.
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20160705144908.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please 
check the issue, fix and redeploy
  Log file is located at 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160705144711-tl98lx.log

That IP "192.168.99.10" doesn't resolve to anything, because I haven't added it 
to the DNS server. It's also not in /etc/hosts.
It's just the IP for the storage network that doesn't use DNS at all.

From the log:

2016-07-05 14:49:08 DEBUG otopi.plugins.gr_he_common.network.bridge 
bridge._get_hostname_from_bridge_if:274 Network info: {'netmask': 
u'255.255.255.0', 'ipaddr': u'192.168.10.194', 'gateway': u'192.168.10.2'}
2016-07-05 14:49:08 DEBUG otopi.plugins.gr_he_common.network.bridge 
bridge._get_hostname_from_bridge_if:310 hostname: 'node01.example.com', 
aliaslist: '[]', ipaddrlist: '['192.168.99.10', '192.168.10.194']'
2016-07-05 14:49:08 DEBUG otopi.context context._executeMethod:142 method 
exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in 
_executeMethod
    method['method']()
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/network/bridge.py",
 line 327, in _get_hostname_from_bridge_if
    o=other_ip,
RuntimeError: hostname 'node01.example.com' doesn't uniquely match the 
interface 'ens802f1' selected for the management bridge; it matches also 
interface with IP set(['192.168.99.10']). Please make sure that the hostname 
got from the interface for the management network resolves only there.
2016-07-05 14:49:08 ERROR otopi.context context._executeMethod:151 Failed to 
execute stage 'Setup validation': hostname 'node01.example.com' doesn't 
uniquely match the interface 'ens802f1' selected for the management bridge; it 
matches also interface with IP set(['192.168.99.10']). Please make sure that 
the hostname got from the interface for the management network resolves only 
there.

The output for dig:

[root@node01 ~]# dig node01.example.com

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> node01.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45269
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;node01.example.com. IN  A

;; ANSWER SECTION:
node01.example.com. 3600 IN  A   192.168.10.194

;; AUTHORITY SECTION:
example.com   900 IN  NS  dns.example.com.

;; ADDITIONAL SECTION:
dns.example.com. 900 IN  A   192.168.10.61

;; Query time: 3 msec
;; SERVER: 192.168.10.61#53(192.168.10.61)
;; WHEN: Die Jul 05 15:14:48 CEST 2016
;; MSG SIZE  rcvd: 110

Output for nslookup:

[root@node01 ~]# nslookup 192.168.99.10
Server: 192.168.10.61
Address:    192.168.10.61#53

** server can't find 10.99.168.192.in-addr.arpa.: NXDOMAIN

Why does the setup script think that my hostname resolves to 192.168.99.10?


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine setup fails - system unreliable

2016-05-22 Thread Yedidyah Bar David
On Sat, May 21, 2016 at 8:47 AM, Bill Bill  wrote:
> I’ve tried over & over on fresh installs to set up the hosted engine VM;
> however, each time, it fails. No clue as to what the problem is, as it just
> says “this system is unreliable”.

Please post (a link to?) full logs. Depending on exact point/reason, which
can't be seen in current snippet, this should include at least full
hosted-engine-setup logs, and also perhaps engine (from engine vm) and vdsm
(from host) ones. Thanks.
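
For reference, the usual locations are:

/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log   (on the host running the deploy)
/var/log/ovirt-engine/engine.log                                     (on the engine VM, if it came up)
/var/log/vdsm/vdsm.log                                               (on the host)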

>
>
>
> ///
>
>
>
> Log output:
>
>
>
> ///
>
>
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/nicUUID=str:'58a28a5e-5d0e-4ac3-835a-a1e9b0df6ae6'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/ovfArchive=unicode:'/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-3.6-20160420.1.el7.centos.ova'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/subst=dict:'{'@CDROM@': '/tmp/tmpTyL8IW/seed.iso', '@SD_UUID@':
> '2455aa81-146f-4a6e-bd6c-c368fa1d10b8', '@CONSOLE_UUID@':
> 'bef503b1-4224-4d82-8acd-8b03d21ae60b', '@NAME@': 'HostedEngine',
> '@BRIDGE@': 'ovirtmgmt', '@CDROM_UUID@':
> '4f64e8ba-5253-4b9c-b1a7-bc550e22f097', '@MEM_SIZE@': 4096, '@NIC_UUID@':
> '58a28a5e-5d0e-4ac3-835a-a1e9b0df6ae6', '@BOOT_CDROM@': '', '@VCPUS@': '4',
> '@CPU_TYPE@': 'SandyBridge', '@VM_UUID@':
> '8cc30bbf-8f4b-4fce-ae4a-ffd476baf2b3', '@EMULATED_MACHINE@': 'pc',
> '@BOOT_PXE@': '', '@IMG_UUID@': '4148fb72-73f1-4d8f-8368-5b6e1ddb4e96',
> '@BOOT_DISK@': ',bootOrder:1', '@CONSOLE_TYPE@': 'vnc', '@MAC_ADDR@':
> '00:16:3e:41:21:db', '@SP_UUID@': '----',
> '@VOL_UUID@': '9c175329-7d1a-4b51-8218-8a2512305684'}'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/vmBoot=str:'disk'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/vmCDRom=NoneType:'None'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/vmMACAddr=str:'00:16:3e:41:21:db'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/vmMemSizeMB=int:'4096'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/vmUUID=str:'8cc30bbf-8f4b-4fce-ae4a-ffd476baf2b3'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/vmVCpus=str:'4'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVESETUP_CORE/offlinePackager=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/dnfDisabledPlugins=list:'[]'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/dnfExpireCache=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/dnfRollback=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/dnfpackagerEnabled=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/keepAliveInterval=int:'30'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/yumDisabledPlugins=list:'[]'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/yumEnabledPlugins=list:'[]'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/yumExpireCache=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/yumRollback=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/yumpackagerEnabled=bool:'False'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> SYSTEM/clockMaxGap=int:'5'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> SYSTEM/clockSet=bool:'False'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> SYSTEM/reboot=bool:'False'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> SYSTEM/rebootAllow=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> SYSTEM/rebootDeferTime=int:'10'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:514
> ENVIRONMENT DUMP - END
>
> 2016-05-21 01:42:42 DEBUG otopi.context context._executeMethod:142 Stage
> pre-terminate METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
>
> 2016-05-21 01:42:42 DEBUG otopi.context context._executeMethod:148 condition
> False
>
> 2016-05-21 01:42:42 INFO otopi.context context.runSequence:427 Stage:
> Termination
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.runSequence:431 STAGE
> terminate
>
> 2016-05-21 01:42:42 

[ovirt-users] Hosted engine setup fails - system unreliable

2016-05-20 Thread Bill Bill
I’ve tried over & over on fresh installs to set up the hosted engine VM;
however, each time, it fails. No clue as to what the problem is, as it just
says “this system is unreliable”.

///

Log output:

///

2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/nicUUID=str:'58a28a5e-5d0e-4ac3-835a-a1e9b0df6ae6'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/ovfArchive=unicode:'/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-3.6-20160420.1.el7.centos.ova'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/subst=dict:'{'@CDROM@': '/tmp/tmpTyL8IW/seed.iso', '@SD_UUID@': 
'2455aa81-146f-4a6e-bd6c-c368fa1d10b8', '@CONSOLE_UUID@': 
'bef503b1-4224-4d82-8acd-8b03d21ae60b', '@NAME@': 'HostedEngine', '@BRIDGE@': 
'ovirtmgmt', '@CDROM_UUID@': '4f64e8ba-5253-4b9c-b1a7-bc550e22f097', 
'@MEM_SIZE@': 4096, '@NIC_UUID@': '58a28a5e-5d0e-4ac3-835a-a1e9b0df6ae6', 
'@BOOT_CDROM@': '', '@VCPUS@': '4', '@CPU_TYPE@': 'SandyBridge', '@VM_UUID@': 
'8cc30bbf-8f4b-4fce-ae4a-ffd476baf2b3', '@EMULATED_MACHINE@': 'pc', 
'@BOOT_PXE@': '', '@IMG_UUID@': '4148fb72-73f1-4d8f-8368-5b6e1ddb4e96', 
'@BOOT_DISK@': ',bootOrder:1', '@CONSOLE_TYPE@': 'vnc', '@MAC_ADDR@': 
'00:16:3e:41:21:db', '@SP_UUID@': '----', 
'@VOL_UUID@': '9c175329-7d1a-4b51-8218-8a2512305684'}'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/vmBoot=str:'disk'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/vmCDRom=NoneType:'None'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/vmMACAddr=str:'00:16:3e:41:21:db'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/vmMemSizeMB=int:'4096'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/vmUUID=str:'8cc30bbf-8f4b-4fce-ae4a-ffd476baf2b3'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/vmVCpus=str:'4'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVESETUP_CORE/offlinePackager=bool:'True'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/dnfDisabledPlugins=list:'[]'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/dnfExpireCache=bool:'True'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/dnfRollback=bool:'True'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/dnfpackagerEnabled=bool:'True'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/keepAliveInterval=int:'30'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/yumDisabledPlugins=list:'[]'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/yumEnabledPlugins=list:'[]'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/yumExpireCache=bool:'True'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/yumRollback=bool:'True'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/yumpackagerEnabled=bool:'False'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
SYSTEM/clockMaxGap=int:'5'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
SYSTEM/clockSet=bool:'False'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
SYSTEM/reboot=bool:'False'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
SYSTEM/rebootAllow=bool:'True'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV 
SYSTEM/rebootDeferTime=int:'10'
2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:514 ENVIRONMENT 
DUMP - END
2016-05-21 01:42:42 DEBUG otopi.context context._executeMethod:142 Stage 
pre-terminate METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
2016-05-21 01:42:42 DEBUG otopi.context context._executeMethod:148 condition 
False
2016-05-21 01:42:42 INFO otopi.context context.runSequence:427 Stage: 
Termination
2016-05-21 01:42:42 DEBUG otopi.context context.runSequence:431 STAGE terminate
2016-05-21 01:42:42 DEBUG otopi.context context._executeMethod:142 Stage 
terminate METHOD 
otopi.plugins.ovirt_hosted_engine_setup.core.misc.Plugin._terminate
2016-05-21 01:42:42 ERROR otopi.plugins.ovirt_hosted_engine_setup.core.misc 
misc._terminate:170 Hosted Engine deployment failed: this system is not 
reliable, please check the issue, fix and redeploy
2016-05-21 01:42:42 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND Log file is located at 

Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-03 Thread Langley, Robert
Outstanding! Thank you Sahina! Your assistance is much appreciated.
When I saw your email it reminded me that the versions were different and that
I had been having an issue reaching the internet from the engine storage
servers: a DNS issue between my private DNS server and my organization's DNS
server. I figured out a workaround and found that I also had to install the
oVirt release36 rpm repository. My host server already had GlusterFS 3.7
(since it had the oVirt release36 rpm repository), while my engine storage
servers would only upgrade to the latest 3.6 GlusterFS version. All are now at
GlusterFS 3.7.11 and working, and I got past the storage configuration.
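
For reference, a quick way to confirm that the client and server packages now
agree (a sketch; run on both the hosted-engine node and gsave0.engine.local):

rpm -q glusterfs glusterfs-cli glusterfs-fuse
gluster --version | head -1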

From: Sahina Bose [mailto:sab...@redhat.com]
Sent: Monday, May 2, 2016 11:24 PM
To: Langley, Robert <robert.lang...@ventura.org>; users@ovirt.org
Subject: Re: [ovirt-users] hosted-engine setup Gluster fails to execute

Command that failed to execute from your hosted-engine node - "gluster volume 
info engine-vol --remote-host gsave0.engine.local"

Can you check the glusterfs versions on the hosted-engine node and 
gsave0.engine.local node - are they the same?
On 05/02/2016 11:08 PM, Langley, Robert wrote:
Hi Sahina,

Thank you for your response. Let me know if you'll need any of the log before 
the Storage Configuration section. I looked at this earlier and I was wondering 
why, after choosing to use GlusterFS, there is still reference to NFS (nfs.py)? 
I do believe NFS is disabled in my Gluster config for the engine cluster. 
-Robert

2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND --== STORAGE CONFIGURATION 
==--
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND
2016-05-02 09:16:53 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._early_customization
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND During customization use 
CTRL-D to abort.
2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1100 _check_existing_pools
2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1101 getConnectedStoragePoolsList
2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1103 {'status': {'message': 'OK', 'code': 0}, 
'poollist': []}
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_TYPE
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND Please specify the storage 
you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:RECEIVEglusterfs
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:500 ENVIRONMENT 
DUMP - BEGIN
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_STORAGE/domainType=str:'glusterfs'
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:514 ENVIRONMENT 
DUMP - END
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._customization
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._brick_customization
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:148 condition 
False
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._customization
2016-05-02 09:16:59 INFO otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
nfs._customization:360 Please note that Replica 3 support is required for the 
shared storage.
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_CONNECTION
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND Please specify the full 
shared storage connection path to use (example: host:/path):
2016-05-02 09:17:22 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:RECEIVEgsave0.engine.local:/engine-vol
2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.executeRaw:828 execute: ('/sbin/gluster', '--mode=script', '--xml', 
'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local'), 
executable='None', cwd='None', env=None
2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.executeRaw:878 execute-result: ('/sbin/gluster', '--mode=script', 
'--xml', 'volume', 'info', 'engine-vol', '--re

Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-03 Thread Langley, Robert
Thank you Simone!

-Original Message-
From: Simone Tiraboschi [mailto:stira...@redhat.com] 
Sent: Tuesday, May 3, 2016 12:34 AM
To: Langley, Robert <robert.lang...@ventura.org>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] hosted-engine setup Gluster fails to execute

On Mon, May 2, 2016 at 7:38 PM, Langley, Robert <robert.lang...@ventura.org> 
wrote:
> Hi Sahina,
>
>
>
> Thank you for your response. Let me know if you’ll need any of the log 
> before the Storage Configuration section. I looked at this earlier and 
> I was wondering why, after choosing to use GlusterFS, there is still 
> reference to NFS (nfs.py)?

It's just an implementation detail of hosted-engine-setup:
iSCSI and FC commands are defined in blockd.py, while commands for file based 
storage domains (NFS and gluster) are defined in nfs.py.

> I do believe NFS is disabled in my Gluster config for the engine 
> cluster. -Robert
>
>
>
> 2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND --== STORAGE
> CONFIGURATION ==--
>
> 2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND
>
> 2016-05-02 09:16:53 DEBUG otopi.context context._executeMethod:142 
> Stage customization METHOD 
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._early_
> customization
>
> 2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND During customization use
> CTRL-D to abort.
>
> 2016-05-02 09:16:53 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._check_existing_pools:1100 _check_existing_pools
>
> 2016-05-02 09:16:53 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._check_existing_pools:1101 getConnectedStoragePoolsList
>
> 2016-05-02 09:16:53 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._check_existing_pools:1103 {'status': {'message': 'OK', 
> 'code': 0},
> 'poollist': []}
>
> 2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human
> human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_TYPE
>
> 2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND Please specify the
> storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:
>
> 2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:RECEIVEglusterfs
>
> 2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:500 
> ENVIRONMENT DUMP - BEGIN
>
> 2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:510 
> ENV OVEHOSTED_STORAGE/domainType=str:'glusterfs'
>
> 2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:514 
> ENVIRONMENT DUMP - END
>
> 2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 
> Stage customization METHOD 
> otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._cust
> omization
>
> 2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 
> Stage customization METHOD 
> otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._bric
> k_customization
>
> 2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:148 
> condition False
>
> 2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 
> Stage customization METHOD 
> otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._customizat
> ion
>
> 2016-05-02 09:16:59 INFO 
> otopi.plugins.ovirt_hosted_engine_setup.storage.nfs
> nfs._customization:360 Please note that Replica 3 support is required 
> for the shared storage.
>
> 2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human
> human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_CONNECTION
>
> 2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND Please specify the full
> shared storage connection path to use (example: host:/path):
>
> 2016-05-02 09:17:22 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:RECEIVEgsave0.engine.local:/engine-vol
>
> 2016-05-02 09:17:22 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
> plugin.executeRaw:828
> execute: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info', 
> 'engine-vol', '--remote-host=gsave0.engine.local'), executable='None', 
> cwd='None', env=None
>
> 2016-05-02 09:17:22 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
> plugin.executeRaw:878
> execute-result: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 
> 'info', 'engine-vol', '--remote-host=gsave0.engine.local'), rc=2
>
> 2016-05-02 09:17:22 DEBUG
> otopi.plugins.o

Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-03 Thread Simone Tiraboschi
On Mon, May 2, 2016 at 9:15 PM, Gianluca Cecchi
 wrote:
>
>
> On Mon, May 2, 2016 at 8:39 PM, Gianluca Cecchi 
> wrote:
>>
>> On Mon, May 2, 2016 at 11:14 AM, Simone Tiraboschi 
>> wrote:
>>>
>>>
>>> >>>
>>> >>> Can you please check the entropy value on your host?
>>> >>>  cat /proc/sys/kernel/random/entropy_avail
>>> >>>
>>> >>
>>> >> I have not at hand now the server. I'll check soon and report
>>> >> Do you mean entropy of the physical server that will operate as
>>> >> hypervisor?
>>>
>>> On the hypervisor
>>>
>>> > That's a good question. Simone - do you know if we start the guest with
>>> > virtio-rng?
>>>
>>> AFAIK we are not.
>>>
>>
>> On the only existing hypervisor, just after booting and exiting global
>> maintenance, causing hosted engine to start, I have
>>
>> [root@ovirt01 ~]# uptime
>>  20:34:17 up 6 min,  1 user,  load average: 0.23, 0.20, 0.11
>>
>> [root@ovirt01 ~]# cat /proc/sys/kernel/random/entropy_avail
>> 3084
>>
>> BTW on the self hosted engine VM:
>> [root@ovirt ~]# uptime
>>  18:35:33 up 4 min,  1 user,  load average: 0.06, 0.25, 0.13
>>
>> [root@ovirt ~]# cat /proc/sys/kernel/random/entropy_avail
>> 14
>>
>> On the hypervisor:
>> [root@ovirt01 ~]# ps -ef | grep [q]emu | grep virtio-rng
>> [root@ovirt01 ~]#
>>
>> On engine VM:
>> [root@ovirt ~]# ll /dev/hwrng
>> ls: cannot access /dev/hwrng: No such file or directory
>> [root@ovirt ~]#
>>
>> [root@ovirt ~]# lsmod | grep virtio_rng
>> [root@ovirt ~]#
>>
>> May I change anything so that engine VM has virtio-rng enabled?
>>
>> Gianluca
>>
>>
>
> I verified very slow login time in webadmin after welcome page, with my
> configuration that is for now based on /etc/hosts.
> After reading a previous post, and having after about 30 minutes only 114 as
> entropy in hosted engine vm, I made this in engine VM:

Thanks for your report Gianluca,
adding virtio-rng or adding the haveged daemon to the appliance is indeed
a good idea: could you please file an RFE on bugzilla for that?
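
For reference, a quick way to confirm from the host whether the engine VM was
started with a virtio-rng device (a sketch; HostedEngine is the libvirt domain
name oVirt uses for the engine VM):

virsh -r dumpxml HostedEngine | grep -A2 '<rng'
# no output here matches the empty "ps -ef | grep [q]emu | grep virtio-rng" check quoted above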

> yum install haveged
> systemctl enable haveged
>
> put host in global maintenance
> shutdown engine VM
> exit from maintenance
>
> engine VM starts and immediately I have:
>
> [root@ovirt ~]# uptime
>  19:05:10 up 0 min,  1 user,  load average: 0.68, 0.20, 0.07
>
> [root@ovirt ~]# cat /proc/sys/kernel/random/entropy_avail
> 1369
>
> And login in web admin page now almost immediate
>
> Inside the thread I read:
> http://lists.ovirt.org/pipermail/users/2016-April/038805.html
>
> it wasn't clear if I can edit the engine VM in webadmin (or other mean) and
> enable the random generator option or if the haveged way is the one to go
> with in case of self hosted engine
> Is there a list of what I can change (if any) and what not for the engine
> VM?
> For example I would like to change the time zone that is GMT now (I think
> inherited from the OVF of the appliance?)
>
> Thanks,
> Gianluca
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-03 Thread Simone Tiraboschi
On Mon, May 2, 2016 at 7:38 PM, Langley, Robert
 wrote:
> Hi Sahina,
>
>
>
> Thank you for your response. Let me know if you’ll need any of the log
> before the Storage Configuration section. I looked at this earlier and I was
> wondering why, after choosing to use GlusterFS, there is still reference to
> NFS (nfs.py)?

It's just an implementation detail of hosted-engine-setup:
iSCSI and FC commands are defined in blockd.py, while commands for file-based
storage domains (NFS and Gluster) are defined in nfs.py.

> I do believe NFS is disabled in my Gluster config for the
> engine cluster. -Robert
>
>
>
> 2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND --== STORAGE
> CONFIGURATION ==--
>
> 2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND
>
> 2016-05-02 09:16:53 DEBUG otopi.context context._executeMethod:142 Stage
> customization METHOD
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._early_customization
>
> 2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND During customization use
> CTRL-D to abort.
>
> 2016-05-02 09:16:53 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._check_existing_pools:1100 _check_existing_pools
>
> 2016-05-02 09:16:53 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._check_existing_pools:1101 getConnectedStoragePoolsList
>
> 2016-05-02 09:16:53 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._check_existing_pools:1103 {'status': {'message': 'OK', 'code': 0},
> 'poollist': []}
>
> 2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human
> human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_TYPE
>
> 2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND Please specify the
> storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:
>
> 2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:RECEIVEglusterfs
>
> 2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:500
> ENVIRONMENT DUMP - BEGIN
>
> 2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_STORAGE/domainType=str:'glusterfs'
>
> 2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:514
> ENVIRONMENT DUMP - END
>
> 2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage
> customization METHOD
> otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._customization
>
> 2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage
> customization METHOD
> otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._brick_customization
>
> 2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:148 condition
> False
>
> 2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage
> customization METHOD
> otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._customization
>
> 2016-05-02 09:16:59 INFO otopi.plugins.ovirt_hosted_engine_setup.storage.nfs
> nfs._customization:360 Please note that Replica 3 support is required for
> the shared storage.
>
> 2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human
> human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_CONNECTION
>
> 2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND Please specify the full
> shared storage connection path to use (example: host:/path):
>
> 2016-05-02 09:17:22 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:RECEIVEgsave0.engine.local:/engine-vol
>
> 2016-05-02 09:17:22 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.nfs plugin.executeRaw:828
> execute: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info',
> 'engine-vol', '--remote-host=gsave0.engine.local'), executable='None',
> cwd='None', env=None
>
> 2016-05-02 09:17:22 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.nfs plugin.executeRaw:878
> execute-result: ('/sbin/gluster', '--mode=script', '--xml', 'volume',
> 'info', 'engine-vol', '--remote-host=gsave0.engine.local'), rc=2
>
> 2016-05-02 09:17:22 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.nfs plugin.execute:936
> execute-output: ('/sbin/gluster', '--mode=script', '--xml', 'volume',
> 'info', 'engine-vol', '--remote-host=gsave0.engine.local') stdout:
>
>
>
>
>
> 2016-05-02 09:17:22 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.nfs plugin.execute:941
> execute-output: ('/sbin/gluster', '--mode=script', '--xml', 'volume',
> 'info', 'engine-vol', '--remote-host=gsave0.engine.local') stderr:
>
>
>
>
>
> 2016-05-02 09:17:22 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.nfs nfs._customization:395
> exception
>
> Traceback (most recent call last):
>
>   File
> 

Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-03 Thread Sahina Bose
The command that failed to execute from your hosted-engine node was "gluster 
volume info engine-vol --remote-host gsave0.engine.local".


Can you check the glusterfs versions on the hosted-engine node and 
gsave0.engine.local node - are they the same?
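
To compare them quickly, something like this on both nodes should do (a sketch;
the volume and host names are taken from the log above, and the glusterfs
package names can differ slightly per distribution):

rpm -qa | grep -i glusterfs
gluster --version

# and re-run the exact command the setup executed, to see its real error output
gluster --mode=script --xml volume info engine-vol --remote-host=gsave0.engine.local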


On 05/02/2016 11:08 PM, Langley, Robert wrote:


Hi Sahina,

Thank you for your response. Let me know if you’ll need any of the log 
before the Storage Configuration section. I looked at this earlier and 
I was wondering why, after choosing to use GlusterFS, there is still 
reference to NFS (nfs.py)? I do believe NFS is disabled in my Gluster 
config for the engine cluster. -Robert


2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND --== STORAGE 
CONFIGURATION ==--


2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND


2016-05-02 09:16:53 DEBUG otopi.context context._executeMethod:142 
Stage customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._early_customization


2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND During 
customization use CTRL-D to abort.


2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1100 _check_existing_pools


2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1101 getConnectedStoragePoolsList


2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1103 {'status': {'message': 'OK', 
'code': 0}, 'poollist': []}


2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_TYPE


2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND Please specify the 
storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:


2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:RECEIVEglusterfs


2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:500 
ENVIRONMENT DUMP - BEGIN


2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:510 
ENV OVEHOSTED_STORAGE/domainType=str:'glusterfs'


2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:514 
ENVIRONMENT DUMP - END


2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 
Stage customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._customization


2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 
Stage customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._brick_customization


2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:148 
condition False


2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 
Stage customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._customization


2016-05-02 09:16:59 INFO 
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
nfs._customization:360 Please note that Replica 3 support is required 
for the shared storage.


2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_CONNECTION


2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND Please specify the 
full shared storage connection path to use (example: host:/path):


2016-05-02 09:17:22 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:RECEIVE gsave0.engine.local:/engine-vol


2016-05-02 09:17:22 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.executeRaw:828 execute: ('/sbin/gluster', '--mode=script', 
'--xml', 'volume', 'info', 'engine-vol', 
'--remote-host=gsave0.engine.local'), executable='None', cwd='None', 
env=None


2016-05-02 09:17:22 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.executeRaw:878 execute-result: ('/sbin/gluster', 
'--mode=script', '--xml', 'volume', 'info', 'engine-vol', 
'--remote-host=gsave0.engine.local'), rc=2


2016-05-02 09:17:22 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs plugin.execute:936 
execute-output: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 
'info', 'engine-vol', '--remote-host=gsave0.engine.local') stdout:


2016-05-02 09:17:22 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs plugin.execute:941 
execute-output: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 
'info', 'engine-vol', '--remote-host=gsave0.engine.local') stderr:


2016-05-02 09:17:22 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
nfs._customization:395 exception


Traceback (most recent call last):

  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/nfs.py", 
line 390, in _customization


check_space=False,

  File 

Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-02 Thread Langley, Robert
Correction: I verified that on the Gluster volume "engine-vol", nfs.disable is
off. Not sure if that is significant or not.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Gianluca Cecchi
On Mon, May 2, 2016 at 8:39 PM, Gianluca Cecchi 
wrote:

> On Mon, May 2, 2016 at 11:14 AM, Simone Tiraboschi 
> wrote:
>
>>
>> >>>
>> >>> Can you please check the entropy value on your host?
>> >>>  cat /proc/sys/kernel/random/entropy_avail
>> >>>
>> >>
>> >> I have not at hand now the server. I'll check soon and report
>> >> Do you mean entropy of the physical server that will operate as
>> hypervisor?
>>
>> On the hypervisor
>>
>> > That's a good question. Simone - do you know if we start the guest with
>> > virtio-rng?
>>
>> AFAIK we are not.
>>
>>
> On the only existing hypervisor, just after booting and exiting global
> maintenance, causing hosted engine to start, I have
>
> [root@ovirt01 ~]# uptime
>  20:34:17 up 6 min,  1 user,  load average: 0.23, 0.20, 0.11
>
> [root@ovirt01 ~]# cat /proc/sys/kernel/random/entropy_avail
> 3084
>
> BTW on the self hosted engine VM:
> [root@ovirt ~]# uptime
>  18:35:33 up 4 min,  1 user,  load average: 0.06, 0.25, 0.13
>
> [root@ovirt ~]# cat /proc/sys/kernel/random/entropy_avail
> 14
>
> On the hypervisor:
> [root@ovirt01 ~]# ps -ef | grep [q]emu | grep virtio-rng
> [root@ovirt01 ~]#
>
> On engine VM:
> [root@ovirt ~]# ll /dev/hwrng
> ls: cannot access /dev/hwrng: No such file or directory
> [root@ovirt ~]#
>
> [root@ovirt ~]# lsmod | grep virtio_rng
> [root@ovirt ~]#
>
> May I change anything so that engine VM has virtio-rng enabled?
>
> Gianluca
>
>
>
I verified a very slow login time in webadmin after the welcome page, with my
configuration that is for now based on /etc/hosts.
After reading a previous post, and seeing only 114 as the entropy value in the
hosted engine vm after about 30 minutes, I did this in the engine VM:

yum install haveged
systemctl enable haveged

put host in global maintenance
shutdown engine VM
exit from maintenance
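
(In command form, what I did above was roughly the following; the hosted-engine
maintenance commands are from the 3.6 CLI, so double-check the exact flags on
your version:)

# inside the engine VM, while it is still up
yum install -y haveged
systemctl enable haveged

# then on the host
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
hosted-engine --set-maintenance --mode=none   # the HA agent starts the engine VM again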

engine VM starts and immediately I have:

[root@ovirt ~]# uptime
 19:05:10 up 0 min,  1 user,  load average: 0.68, 0.20, 0.07

[root@ovirt ~]# cat /proc/sys/kernel/random/entropy_avail
1369

And login in the web admin page is now almost immediate

Inside the thread I read:
http://lists.ovirt.org/pipermail/users/2016-April/038805.html

it wasn't clear if I can edit the engine VM in webadmin (or by other means) and
enable the random generator option, or if the haveged way is the one to go
with in the case of a self hosted engine.
Is there a list of what I can change (if any) and what not for the engine
VM?
For example, I would like to change the time zone, which is GMT now (I think
inherited from the OVF of the appliance?)

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Gianluca Cecchi
On Mon, May 2, 2016 at 11:14 AM, Simone Tiraboschi 
wrote:

>
> >>>
> >>> Can you please check the entropy value on your host?
> >>>  cat /proc/sys/kernel/random/entropy_avail
> >>>
> >>
> >> I have not at hand now the server. I'll check soon and report
> >> Do you mean entropy of the physical server that will operate as
> hypervisor?
>
> On the hypervisor
>
> > That's a good question. Simone - do you know if we start the guest with
> > virtio-rng?
>
> AFAIK we are not.
>
>
On the only existing hypervisor, just after booting and exiting global
maintenance, causing hosted engine to start, I have

[root@ovirt01 ~]# uptime
 20:34:17 up 6 min,  1 user,  load average: 0.23, 0.20, 0.11

[root@ovirt01 ~]# cat /proc/sys/kernel/random/entropy_avail
3084

BTW on the self hosted engine VM:
[root@ovirt ~]# uptime
 18:35:33 up 4 min,  1 user,  load average: 0.06, 0.25, 0.13

[root@ovirt ~]# cat /proc/sys/kernel/random/entropy_avail
14

On the hypervisor:
[root@ovirt01 ~]# ps -ef | grep [q]emu | grep virtio-rng
[root@ovirt01 ~]#

On engine VM:
[root@ovirt ~]# ll /dev/hwrng
ls: cannot access /dev/hwrng: No such file or directory
[root@ovirt ~]#

[root@ovirt ~]# lsmod | grep virtio_rng
[root@ovirt ~]#

May I change anything so that engine VM has virtio-rng enabled?

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-02 Thread Langley, Robert
Hi Sahina,

Thank you for your response. Let me know if you'll need any of the log before 
the Storage Configuration section. I looked at this earlier and I was wondering 
why, after choosing to use GlusterFS, there is still reference to NFS (nfs.py)? 
I do believe NFS is disabled in my Gluster config for the engine cluster. 
-Robert

2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND --== STORAGE CONFIGURATION 
==--
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND
2016-05-02 09:16:53 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._early_customization
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND During customization use 
CTRL-D to abort.
2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1100 _check_existing_pools
2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1101 getConnectedStoragePoolsList
2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1103 {'status': {'message': 'OK', 'code': 0}, 
'poollist': []}
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_TYPE
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND Please specify the storage 
you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:RECEIVE glusterfs
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:500 ENVIRONMENT 
DUMP - BEGIN
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_STORAGE/domainType=str:'glusterfs'
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:514 ENVIRONMENT 
DUMP - END
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._customization
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._brick_customization
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:148 condition 
False
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._customization
2016-05-02 09:16:59 INFO otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
nfs._customization:360 Please note that Replica 3 support is required for the 
shared storage.
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_CONNECTION
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND Please specify the full 
shared storage connection path to use (example: host:/path):
2016-05-02 09:17:22 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:RECEIVE gsave0.engine.local:/engine-vol
2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.executeRaw:828 execute: ('/sbin/gluster', '--mode=script', '--xml', 
'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local'), 
executable='None', cwd='None', env=None
2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.executeRaw:878 execute-result: ('/sbin/gluster', '--mode=script', 
'--xml', 'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local'), 
rc=2
2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.execute:936 execute-output: ('/sbin/gluster', '--mode=script', '--xml', 
'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local') stdout:


2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.execute:941 execute-output: ('/sbin/gluster', '--mode=script', '--xml', 
'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local') stderr:


2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
nfs._customization:395 exception
Traceback (most recent call last):
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/nfs.py",
 line 390, in _customization
check_space=False,
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/nfs.py",
 line 302, in _validateDomain
self._check_volume_properties(connection)
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/nfs.py",
 line 179, in _check_volume_properties
raiseOnError=True
  File 

Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Simone Tiraboschi
On Mon, May 2, 2016 at 11:06 AM, Yedidyah Bar David  wrote:
> On Mon, May 2, 2016 at 11:48 AM, Gianluca Cecchi
>  wrote:
>> On Mon, May 2, 2016 at 9:58 AM, Simone Tiraboschi wrote:
>>>
>>>
>>>
>>> hosted-engine-setup creates a fresh VM and injects a cloud-init script
>>> to configure it and execute there engine-setup to configure the engine
>>> as needed.
>>> Since engine-setup is running on the engine VM triggered by
>>> cloud-init, hosted-engine-setup has no way to really control its
>>> process status so we simply gather its output with a timeout of 10
>>> minutes between each single output line.
>>> If nothing happens within 10 minutes (the value is easily
>>> customizable), hosted-engine-setup thinks that engine-setup is stuck.
>>
>>
>>
>> How can one customize the pre-set timeout?

To set 20 minutes you can pass this
OVEHOSTED_ENGINE/engineSetupTimeout=int:1200
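
For example, you can put it in a small config file and append it at deploy time;
a sketch, where the file name is arbitrary and the section header is the usual
otopi answer-file one:

# /root/he-timeout.conf
[environment:default]
OVEHOSTED_ENGINE/engineSetupTimeout=int:1200

hosted-engine --deploy --config-append=/root/he-timeout.conf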


>> Could it be better to ask the user at the end of the timeout if he/she wants to
>> wait again, instead of failing directly?
>
> Perhaps, can you please open a bz?

+1

>>> So the issue we have to understand is why this simple command took
>>> more than 10 minutes in your env:
>>> 2016-04-30 17:56:57 DEBUG
>>> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
>>> plugin.executeRaw:828 execute: ('/usr/bin/ovirt-aaa-jdbc-tool',
>>> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
>>> 'password-reset', 'admin', '--password=env:pass', '--force',
>>> '--password-valid-to=2216-03-13 17:56:57Z'), executable='None',
>>> cwd='None', env={'LANG': 'en_US.UTF-8', 'SHLVL': '1', 'PYTHONPATH':
>>> '/usr/share/ovirt-engine/setup/bin/..::', 'pass': '**FILTERED**',
>>> 'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PWD': '/',
>>> 'OVIRT_ENGINE_JAVA_HOME': u'/usr/lib/jvm/jre', 'PATH':
>>> '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin', 'OTOPI_LOGFILE':
>>>
>>> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20160430175551-dttt2p.log',
>>> 'OVIRT_JBOSS_HOME': '/usr/share/ovirt-engine-wildfly',
>>> 'OTOPI_EXECDIR': '/'}
>>
>>
>>
>>
>> It seemed quite strange to me too (see below further info on this)
>>
>>>
>>> Can you please check the entropy value on your host?
>>>  cat /proc/sys/kernel/random/entropy_avail
>>>
>>
>> I have not at hand now the server. I'll check soon and report
>> Do you mean entropy of the physical server that will operate as hypervisor?

On the hypervisor

> That's a good question. Simone - do you know if we start the guest with
> virtio-rng?

AFAIK we are not.

> This is another case of [1], perhaps we should reopen it.
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1319827
>
>>
>>
>>>
>>> > As a last question how to clean up things in case I have to start from
>>> > scratch.
>>>
>>> I'd recommend redeploying from scratch instead of trying to fix it
>>> but, before that, we need to understand the root issue.
>>>
>>
>> So, trying to restart the setup with the generated answer file I got:
>> 1) if the VM was still powered on, an error about this condition
>> 2) if the VM was powered down, an error about the storage domain already being
>> in place and restart not being supported in this condition.
>>
>> I was able to continue with these steps:
>>
>> a) remove what inside the partially setup self hosted engine storage domain
>> rm -rf /SHE_DOMAIN/*
>> cd SHE_DOMAIN
>> mklost+found
>>
>> b) reboot the hypervisor
>>
>> c) stop vdsmd
>>
>> d) start the setup again with the answer file
>> It seems all went well and this time strangely the step that took more than
>> 10 minutes before lasted less than 2 seconds
>>
>> I was then able to deploy storage and iso domains without problems and self
>> hosted engine domain correctly detected and imported too.
>> Created two CentOS VMs without problems (6.7 and 7.2).
>>
>> See below the full output of deploy command
>>
>>
>> [root@ovirt01 ~]# hosted-engine --deploy
>> --config-append=/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf
>> [ INFO  ] Stage: Initializing
>> [ INFO  ] Generating a temporary VNC password.
>> [ INFO  ] Stage: Environment setup
>>   Configuration files:
>> ['/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf']
>>   Log file:
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160501014326-8frbxk.log
>>   Version: otopi-1.4.1 (otopi-1.4.1-1.el7.centos)
>> [ INFO  ] Hardware supports virtualization
>> [ INFO  ] Bridge ovirtmgmt already created
>> [ INFO  ] Stage: Environment packages setup
>> [ INFO  ] Stage: Programs detection
>> [ INFO  ] Stage: Environment setup
>> [ INFO  ] Stage: Environment customization
>>
>>   --== STORAGE CONFIGURATION ==--
>>
>>   During customization use CTRL-D to abort.
>> [ INFO  ] Installing on first host
>>
>>   --== SYSTEM CONFIGURATION ==--
>>
>>
>>   --== NETWORK CONFIGURATION ==--
>>
>>
>>   --== VM CONFIGURATION ==--
>>
>> [ INFO  ] Checking OVF archive content (could take a few minutes depending
>> on 

Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Yedidyah Bar David
On Mon, May 2, 2016 at 11:48 AM, Gianluca Cecchi
 wrote:
> On Mon, May 2, 2016 at 9:58 AM, Simone Tiraboschi wrote:
>>
>>
>>
>> hosted-engine-setup creates a fresh VM and injects a cloud-init script
>> to configure it and execute there engine-setup to configure the engine
>> as needed.
>> Since engine-setup is running on the engine VM triggered by
>> cloud-init, hosted-engine-setup has no way to really control its
>> process status so we simply gather its output with a timeout of 10
>> minutes between each single output line.
>> If nothing happens within 10 minutes (the value is easily
>> customizable), hosted-engine-setup thinks that engine-setup is stuck.
>
>
>
> How can one customize the pre-set timeout?
> Could it be better to ask the user at the end of the timeout if he/she wants to
> wait again, instead of failing directly?

Perhaps, can you please open a bz?

>
>
>>
>> So the issue we have to understand is why this simple command took
>> more than 10 minutes in your env:
>> 2016-04-30 17:56:57 DEBUG
>> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
>> plugin.executeRaw:828 execute: ('/usr/bin/ovirt-aaa-jdbc-tool',
>> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
>> 'password-reset', 'admin', '--password=env:pass', '--force',
>> '--password-valid-to=2216-03-13 17:56:57Z'), executable='None',
>> cwd='None', env={'LANG': 'en_US.UTF-8', 'SHLVL': '1', 'PYTHONPATH':
>> '/usr/share/ovirt-engine/setup/bin/..::', 'pass': '**FILTERED**',
>> 'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PWD': '/',
>> 'OVIRT_ENGINE_JAVA_HOME': u'/usr/lib/jvm/jre', 'PATH':
>> '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin', 'OTOPI_LOGFILE':
>>
>> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20160430175551-dttt2p.log',
>> 'OVIRT_JBOSS_HOME': '/usr/share/ovirt-engine-wildfly',
>> 'OTOPI_EXECDIR': '/'}
>
>
>
>
> It seemed quite strange to me too (see below further info on this)
>
>>
>> Can you please check the entropy value on your host?
>>  cat /proc/sys/kernel/random/entropy_avail
>>
>
> I have not at hand now the server. I'll check soon and report
> Do you mean entropy of the physical server that will operate as hypervisor?

That's a good question. Simone - do you know if we start the guest with
virtio-rng?

This is another case of [1], perhaps we should reopen it.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1319827

>
>
>>
>> > As a last question how to clean up things in case I have to start from
>> > scratch.
>>
>> I'd recommend redeploying from scratch instead of trying to fix it
>> but, before that, we need to understand the root issue.
>>
>
> So, trying to restart the setup with the generated answer file I got:
> 1) if the VM was still powered on, an error about this condition
> 2) if the VM was powered down, an error about the storage domain already being
> in place and restart not being supported in this condition.
>
> I was able to continue with these steps:
>
> a) remove what inside the partially setup self hosted engine storage domain
> rm -rf /SHE_DOMAIN/*
> cd SHE_DOMAIN
> mklost+found
>
> b) reboot the hypervisor
>
> c) stop vdsmd
>
> d) start the setup again with the answer file
> It seems all went well and this time strangely the step that took more than
> 10 minutes before lasted less than 2 seconds
>
> I was then able to deploy storage and iso domains without problems and self
> hosted engine domain correctly detected and imported too.
> Created two CentOS VMs without problems (6.7 and 7.2).
>
> See below the full output of deploy command
>
>
> [root@ovirt01 ~]# hosted-engine --deploy
> --config-append=/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   Configuration files:
> ['/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf']
>   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160501014326-8frbxk.log
>   Version: otopi-1.4.1 (otopi-1.4.1-1.el7.centos)
> [ INFO  ] Hardware supports virtualization
> [ INFO  ] Bridge ovirtmgmt already created
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
> [ INFO  ] Stage: Environment customization
>
>   --== STORAGE CONFIGURATION ==--
>
>   During customization use CTRL-D to abort.
> [ INFO  ] Installing on first host
>
>   --== SYSTEM CONFIGURATION ==--
>
>
>   --== NETWORK CONFIGURATION ==--
>
>
>   --== VM CONFIGURATION ==--
>
> [ INFO  ] Checking OVF archive content (could take a few minutes depending
> on archive size)
> [ INFO  ] Checking OVF XML content (could take a few minutes depending on
> archive size)
> [WARNING] OVF does not contain a valid image description, using default.
>   Enter root password that will be used for the engine appliance
> (leave it empty to skip):
>   Confirm appliance root password:
> 

Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Simone Tiraboschi
On Sat, Apr 30, 2016 at 10:59 PM, Gianluca Cecchi
 wrote:
> Hello,
> trying to deploy a self hosted engine on an Intel NUC6i5SYB with CentOS 7.2
> using oVirt 3.6.5 and appliance (picked up rpm is
> ovirt-engine-appliance-3.6-20160420.1.el7.centos.noarch)
>
> Near the end of the command
> hosted-engine --deploy
>
> I get
> ...
>   |- [ INFO  ] Initializing PostgreSQL
>   |- [ INFO  ] Creating PostgreSQL 'engine' database
>   |- [ INFO  ] Configuring PostgreSQL
>   |- [ INFO  ] Creating/refreshing Engine database schema
>   |- [ INFO  ] Creating/refreshing Engine 'internal' domain database
> schema
> [ ERROR ] Engine setup got stuck on the appliance
> [ ERROR ] Failed to execute stage 'Closing up': Engine setup is stalled on
> the appliance since 600 seconds ago. Please check its log on the appliance.
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
> please check the issue, fix and redeploy
>
> On host log I indeed see the 10 minutes timeout:
>
> 2016-04-30 19:56:52 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND |- [ INFO  ]
> Creating/refreshing Engine 'internal' domain database schema
> 2016-04-30 20:06:53 ERROR
> otopi.plugins.ovirt_hosted_engine_setup.engine.health health._closeup:140
> Engine setup got stuck on the appliance
>
> On engine I don't see any particular problem but a ten minutes delay in its
> log:
>
> 2016-04-30 17:56:57 DEBUG otopi.context context.dumpEnvironment:514
> ENVIRONMENT DUMP - END
> 2016-04-30 17:56:57 DEBUG otopi.context context._executeMethod:142 Stage
> misc METHOD
> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc.Plugin._setupAdminPassword
> 2016-04-30 17:56:57 DEBUG
> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
> plugin.executeRaw:828 execute: ('/usr/bin/ovirt-aaa-jdbc-tool',
> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
> 'password-reset', 'admin', '--password=env:pass', '--force',
> '--password-valid-to=2216-03-13 17:56:57Z'), executable='None', cwd='None',
> env={'LANG': 'en_US.UTF-8', 'SHLVL': '1', 'PYTHONPATH':
> '/usr/share/ovirt-engine/setup/bin/..::', 'pass': '**FILTERED**',
> 'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PWD': '/', 'OVIRT_ENGINE_JAVA_HOME':
> u'/usr/lib/jvm/jre', 'PATH':
> '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin', 'OTOPI_LOGFILE':
> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20160430175551-dttt2p.log',
> 'OVIRT_JBOSS_HOME': '/usr/share/ovirt-engine-wildfly', 'OTOPI_EXECDIR': '/'}
> 2016-04-30 18:07:06 DEBUG
> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
> plugin.executeRaw:878 execute-result: ('/usr/bin/ovirt-aaa-jdbc-tool',
> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
> 'password-reset', 'admin', '--password=env:pass', '--force',
> '--password-valid-to=2216-03-13 17:56:57Z'), rc=0
>
> and its last lines are:
>
> 2016-04-30 18:07:06 DEBUG
> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
> plugin.execute:936 execute-output: ('/usr/bin/ovirt-aaa-jdbc-tool',
> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
> 'password-reset', 'admin', '--password=env:pass', '--force',
> '--password-valid-to=2216-03-13 17:56:57Z') stdout:
> updating user admin...
> user updated successfully

hosted-engine-setup creates a fresh VM and injects a cloud-init script
to configure it and run engine-setup there to configure the engine
as needed.
Since engine-setup is running on the engine VM triggered by
cloud-init, hosted-engine-setup has no way to really control its
process status, so we simply gather its output with a timeout of 10
minutes between each single output line.
If nothing happens within 10 minutes (the value is easily
customizable), hosted-engine-setup thinks that engine-setup is stuck.

So the issue we have to understand is why this simple command took
more than 10 minutes in your env:
2016-04-30 17:56:57 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
plugin.executeRaw:828 execute: ('/usr/bin/ovirt-aaa-jdbc-tool',
'--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
'password-reset', 'admin', '--password=env:pass', '--force',
'--password-valid-to=2216-03-13 17:56:57Z'), executable='None',
cwd='None', env={'LANG': 'en_US.UTF-8', 'SHLVL': '1', 'PYTHONPATH':
'/usr/share/ovirt-engine/setup/bin/..::', 'pass': '**FILTERED**',
'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PWD': '/',
'OVIRT_ENGINE_JAVA_HOME': u'/usr/lib/jvm/jre', 'PATH':
'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin', 'OTOPI_LOGFILE':
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20160430175551-dttt2p.log',
'OVIRT_JBOSS_HOME': '/usr/share/ovirt-engine-wildfly',
'OTOPI_EXECDIR': '/'}

Can you please check 

Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-01 Thread Sahina Bose
You will need to provide the hosted-engine setup log to see which 
gluster command failed to execute.


On 04/30/2016 10:10 PM, Langley, Robert wrote:


I’m attempting to host the engine within a GlusterFS Replica 3 storage 
volume.


During setup, after entering the server and volume, I’m receiving the 
message that ‘/sbin/gluster’ failed to execute.


Reviewing the gluster cmd log, it looks as though /sbin/gluster does 
execute.


I can successfully mount the volume on the host outside of the 
hosted-engine setup.


Any assistance would be appreciated.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-04-30 Thread Gianluca Cecchi
Hello,
trying to deploy a self hosted engine on an Intel NUC6i5SYB with CentOS 7.2
using oVirt 3.6.5 and the appliance (the picked-up rpm
is ovirt-engine-appliance-3.6-20160420.1.el7.centos.noarch)

Near the end of the command
hosted-engine --deploy

I get
...
  |- [ INFO  ] Initializing PostgreSQL
  |- [ INFO  ] Creating PostgreSQL 'engine' database
  |- [ INFO  ] Configuring PostgreSQL
  |- [ INFO  ] Creating/refreshing Engine database schema
  |- [ INFO  ] Creating/refreshing Engine 'internal' domain
database schema
[ ERROR ] Engine setup got stuck on the appliance
[ ERROR ] Failed to execute stage 'Closing up': Engine setup is stalled on
the appliance since 600 seconds ago. Please check its log on the appliance.
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable,
please check the issue, fix and redeploy

In the host log I indeed see the 10-minute timeout:

2016-04-30 19:56:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND |- [ INFO  ]
Creating/refreshing Engine 'internal' domain database schema
2016-04-30 20:06:53 ERROR
otopi.plugins.ovirt_hosted_engine_setup.engine.health health._closeup:140
Engine setup got stuck on the appliance

On the engine I don't see any particular problem but a ten-minute delay in its
log:

2016-04-30 17:56:57 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-04-30 17:56:57 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD
otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc.Plugin._setupAdminPassword
2016-04-30 17:56:57 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
plugin.executeRaw:828 execute: ('/usr/bin/ovirt-aaa-jdbc-tool',
'--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
'password-reset', 'admin', '--password=env:pass', '--force',
'--password-valid-to=2216-03-13 17:56:57Z'), executable='None', cwd='None',
env={'LANG': 'en_US.UTF-8', 'SHLVL': '1', 'PYTHONPATH':
'/usr/share/ovirt-engine/setup/bin/..::', 'pass': '**FILTERED**',
'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PWD': '/', 'OVIRT_ENGINE_JAVA_HOME':
u'/usr/lib/jvm/jre', 'PATH':
'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin', 'OTOPI_LOGFILE':
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20160430175551-dttt2p.log',
'OVIRT_JBOSS_HOME': '/usr/share/ovirt-engine-wildfly', 'OTOPI_EXECDIR': '/'}
2016-04-30 18:07:06 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
plugin.executeRaw:878 execute-result: ('/usr/bin/ovirt-aaa-jdbc-tool',
'--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
'password-reset', 'admin', '--password=env:pass', '--force',
'--password-valid-to=2216-03-13 17:56:57Z'), rc=0

and its last lines are:

2016-04-30 18:07:06 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
plugin.execute:936 execute-output: ('/usr/bin/ovirt-aaa-jdbc-tool',
'--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
'password-reset', 'admin', '--password=env:pass', '--force',
'--password-valid-to=2216-03-13 17:56:57Z') stdout:
updating user admin...
user updated successfully

2016-04-30 18:07:06 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
plugin.execute:941 execute-output: ('/usr/bin/ovirt-aaa-jdbc-tool',
'--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
'password-reset', 'admin', '--password=env:pass', '--force',
'--password-valid-to=2216-03-13 17:56:57Z') stderr:


2016-04-30 18:07:06 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD
otopi.plugins.ovirt_engine_setup.ovirt_engine.pki.ca.Plugin._miscUpgrade
2016-04-30 18:07:06 INFO
otopi.plugins.ovirt_engine_setup.ovirt_engine.pki.ca ca._miscUpgrade:510
Upgrading CA

Full logs of host and engine here:
https://drive.google.com/file/d/0BwoPbcrMv8mvQm9jeDhpZEdRUjg/view?usp=sharing

I can connect via vnc to the engine and see 277 tables in engine database
(277 rows in output of "\d" command)

Can anyone tell me if I can continue without starting from scratch, and how,
in that case?
I'd also like to understand the reason for this delay, as the NUC is a physical
host with 32 GB of RAM and SSD disks and should be quite fast... faster than a VM
on my laptop where I had no problems in a similar setup...

As a last question, how do I clean things up in case I have to start from
scratch?

I can leave the situation as it is for the moment, so I can work on the live
environment before powering off.

Thanks in advance,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted-engine setup Gluster fails to execute

2016-04-30 Thread Langley, Robert
I'm attempting to host the engine within a GlusterFS Replica 3 storage volume.
During setup, after entering the server and volume, I'm receiving the message 
that '/sbin/gluster' failed to execute.
Reviewing the gluster cmd log, it looks as though /sbin/gluster does execute.
I can successfully mount the volume on the host outside of the hosted-engine 
setup.
Any assistance would be appreciated.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [hosted-engine] Setup broke host's network

2016-03-19 Thread Simone Tiraboschi
On Thu, Mar 17, 2016 at 11:34 AM, Wee Sritippho  wrote:

> Hi,
>
> I setup the host's network while installing CentOS 7 (GUI), so the network
> configuration is like this:
>
> eno1 --> bond0_slave1 --\
>  |--> bond0
> eno2 --> bond0_slave2 --/
>
> After I disabled NetworkManager and ran 'hosted-engine --deploy', the
> setup stuck at this line:
>
> [ INFO  ] Configuring the management bridge
>
> Then the ssh connection is lost. I accessed the console and found this
> line after the line above:
>
> [ ERROR ] Failed to execute stage 'Misc configuration': Connection to
> storage server failed
>

Hi Wee,
from the log it seems that the network configuration worked as expected.
Your issue was different: 'Connection to storage server failed' simply
means that your host lost its connection with the storage server in the
middle of the deployment.

From the logs I saw that you tried to deploy on GlusterFS using the same host
where you are deploying hosted-engine also as a Gluster server: this setup is
called hyper-converged, but it's currently not supported; please wait for
the next major release to deploy in this scenario.

In the meantime you can deploy hosted-engine pointing it to a gluster
volume on other external hosts, or with another storage type.



>
> And the network is kind of broken. I have to 1. delete the MASTER=bond0 and
> SLAVE=yes lines in the ifcfg-eno{1,2} config files 2. re-configure ifcfg-bond0 to
> get a static IP 3. turn off and delete the ovirtmgmt bridge 4. restart the network
> in order to make it live again.
>
> Did this network configuration really break the setup or was it something
> else?  If the network configuration is the cause, how can I proceed to
> install oVirt hosted-engine?
>
> I attached the answer file, installation log, vdsm.log and supervdsm.log
> with this email.
>
> Environment:
> - CentOS Linux release 7.2.1511 (Core)
> - ovirt-release36-003-1.noarch
> - ovirt-hosted-engine-setup-1.3.3.4-1.el7.centos.noarch
> - vdsm-4.17.23-1.el7.noarch
>
> Thank you,
> Wee
>
>
> ---
> ซอฟต์แวร์ Avast แอนตี้ไวรัสตรวจสอบหาไวรัสจากอีเมลนี้แล้ว
> https://www.avast.com/antivirus
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted Engine setup - got " Failed to start service 'ovirt-ha-agent' "

2015-12-18 Thread Willard Dennis
Hi all,

Did a hosted engine setup using a Gluster storage domain; it went well until 
the end, where I got this error:

[ INFO  ] Saving hosted-engine configuration on the shared storage domain
[ INFO  ] Shutting down the engine VM
[ INFO  ] Enabling and starting HA services
[ ERROR ] Failed to execute stage 'Closing up': Failed to start service 
'ovirt-ha-agent’ 
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20151218124259.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination

Full screen output from setup run:
http://pastebin.com/yWkppmjG 

What’s my move now? Hopefully the install can be salvaged….

FYI, I have three hosts I’m using for oVirt; they are named 
“ovirt-node-[01,02,03]”

Thanks,
Will___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine setup - got " Failed to start service 'ovirt-ha-agent' "

2015-12-18 Thread Will Dennis
I yum updated my hosts, and it did update ovirt-hosted-engine-ha on all of them to 
1.3.3.5 (two of my hosts, including the one I did the engine install on, were 
previously 1.3.3.4, and the third one was 1.3.3.3 for some reason).
Shortly thereafter, I began getting ovirt-hosted-engine state machine emails, 
and when I checked the state of the ovirt-ha-[agent,broker] services, they were 
running. When I got the email saying “EngineStarting-EngineUp”, I checked the 
web UI, and it was available, and I could successfully log into the admin site 
:)

Thanks for your help, and onwards!
W.

On Dec 18, 2015, at 4:55 PM, Simone Tiraboschi 
> wrote:

Today we async released ovirt-hosted-engine-ha-1.3.3.5-1 that should fix it.
Can you please check if you are already on that version?
If not, please update it and manually restart the ovirt-ha-broker and ovirt-ha-agent 
services; I'm quite confident that should be enough.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine setup - got " Failed to start service 'ovirt-ha-agent' "

2015-12-18 Thread Simone Tiraboschi
On Fri, Dec 18, 2015 at 7:08 PM, Willard Dennis 
wrote:

> Hi all,
>
> Did a hosted engine setup using a Gluster storage domain, it went well
> until the end, where I got this error:
>
> [ INFO  ] Saving hosted-engine configuration on the shared storage domain
> [ INFO  ] Shutting down the engine VM
> [ INFO  ] Enabling and starting HA services
> [ ERROR ] Failed to execute stage 'Closing up': Failed to start service
> 'ovirt-ha-agent’
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151218124259.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
>
> Full screen output from setup run:
> http://pastebin.com/yWkppmjG
>
> What’s my move now? Hopefully the install can be salvaged….
>

It's hard to say without a detailed log, but yesterday we found an issue
with the HA services' systemd unit files on CentOS 7.2.

Today we async released ovirt-hosted-engine-ha-1.3.3.5-1 that should fix it.
Can you please check if you are already on that version?
If not, please update it and manually restart the ovirt-ha-broker and
ovirt-ha-agent services; I'm quite confident that should be enough.
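
Concretely, something like this on each host should do, as a sketch (assuming
the standard package and service names):

yum update ovirt-hosted-engine-ha
systemctl restart ovirt-ha-broker ovirt-ha-agent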


>
> FYI, I have three hosts I’m using for oVirt; they are named
> “ovirt-node-[01,02,03]”
>
> Thanks,
> Will
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted engine setup using Gluster - what is proper param's for the storage domain names?

2015-12-17 Thread Willard Dennis
Hi all,

Doing the hosted engine setup on Gluster; I am at the point of configuring the 
storage domain / datacenter names, and I'm not sure what my best move is here… 
Here’s what I’m seeing:

——
  --== STORAGE CONFIGURATION ==--

  During customization use CTRL-D to abort.
  Please specify the storage you would like to use (glusterfs, iscsi, 
fc, nfs3, nfs4)[nfs3]: glusterfs
[ INFO  ] Please note that Replica 3 support is required for the shared storage.
  Please specify the full shared storage connection path to use 
(example: host:/path): localhost:/engine
[WARNING] Due to several bugs in mount.glusterfs the validation of GlusterFS 
share cannot be reliable.
[ INFO  ] GlusterFS replica 3 Volume detected
[ INFO  ] Installing on first host
  Please provide storage domain name. [hosted_storage]:
  Local storage datacenter name is an internal name
  and currently will not be shown in engine's admin UI.
  Please enter local datacenter name [hosted_datacenter]:
——

I'm concerned about the "Local storage datacenter name is an internal name and 
currently will not be shown in engine's admin UI” message… I want to use a 
second distributed Gluster volume (name = “vmdata”) for VM storage if I can, 
and don’t want to mess up the install… What should I consider when setting 
the storage domain name and the local datacenter name?

Thanks,
Will
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine setup using Gluster - what is proper param's for the storage domain names?

2015-12-17 Thread Sahina Bose



On 12/17/2015 10:32 PM, Willard Dennis wrote:

Hi all,

Doing the hosted engine setup on Gluster; am at the point of configuring the 
storage domain / datacenter names, and not sure of what my best move is here… 
Here’s what I’m seeing:

——
   --== STORAGE CONFIGURATION ==--

   During customization use CTRL-D to abort.
   Please specify the storage you would like to use (glusterfs, iscsi, 
fc, nfs3, nfs4)[nfs3]: glusterfs
[ INFO  ] Please note that Replica 3 support is required for the shared storage.
   Please specify the full shared storage connection path to use 
(example: host:/path): localhost:/engine
[WARNING] Due to several bugs in mount.glusterfs the validation of GlusterFS 
share cannot be reliable.
[ INFO  ] GlusterFS replica 3 Volume detected
[ INFO  ] Installing on first host
   Please provide storage domain name. [hosted_storage]:
   Local storage datacenter name is an internal name
   and currently will not be shown in engine's admin UI.
   Please enter local datacenter name [hosted_datacenter]:
——

Concerned about the "Local storage datacenter name is an internal name and 
currently will not be shown in engine's admin UI” message… I want to use a second 
distributed Gluster volume (name = “vmdata”) for VM storage if I can, and don’t want 
to mess up the install… What should I consider when setting names for the storage 
domain name and local datacenter names?


You can safely go with the defaults here.

To set up a second storage domain (using gluster volume) - once the 
engine VM is up and running, you can use the user interface to create 
the domain (vmdata).

Note: a replica 3 gluster volume is recommended for use as a storage domain



Thanks,
Will
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted-engine-setup

2015-11-19 Thread Budur Nagaraju
HI

Getting the below error while doing a hosted engine setup; the OS is running on
ESXi 6.0.




[root@he ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
  Continuing will configure this host for serving as hypervisor and
create a VM where you have to install oVirt Engine afterwards.
  Are you sure you want to continue? (Yes, No)[Yes]: yes
  Configuration files: []
  Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151119152307-cx1kgx.log
  Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
  It has been detected that this program is executed through an SSH
connection without using screen.
  Continuing with the installation may lead to broken installation
if the network connection fails.
  It is highly recommended to abort the installation and run it
inside a screen session using command "screen".
  Do you want to continue anyway? (Yes, No)[No]: yes
[ ERROR ] Failed to execute stage 'Environment setup': Hardware does not
support virtualization
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20151119152311.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[root@he ~]#
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted-engine-setup in an ESXi VM fails

2015-11-19 Thread Yedidyah Bar David
On Thu, Nov 19, 2015 at 11:55 AM, Budur Nagaraju  wrote:
> HI
>
> Getting below error while doing a hosted engine setup, OS is running on
> ESXi6.0 version.
>
>
>
>
> [root@he ~]# hosted-engine --deploy
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   Continuing will configure this host for serving as hypervisor and
> create a VM where you have to install oVirt Engine afterwards.
>   Are you sure you want to continue? (Yes, No)[Yes]: yes
>   Configuration files: []
>   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151119152307-cx1kgx.log
>   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
>   It has been detected that this program is executed through an SSH
> connection without using screen.
>   Continuing with the installation may lead to broken installation
> if the network connection fails.
>   It is highly recommended to abort the installation and run it
> inside a screen session using command "screen".
>   Do you want to continue anyway? (Yes, No)[No]: yes
> [ ERROR ] Failed to execute stage 'Environment setup': Hardware does not
> support virtualization

Not sure what needs to be configured in ESXi to let you run kvm inside it.

Can you start a normal kvm vm?

Changing the subject accordingly.

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted-engine-setup

2015-11-19 Thread Martin Sivak
Hello,

hosted-engine has to be executed on a physical host (or a nested host
with all the proper CPU flags) that supports KVM virtualization. That
means a Linux kernel and the vmx flag in /proc/cpuinfo, iirc.
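
A quick check on the machine where you would run the setup, as a sketch:

grep -cE 'vmx|svm' /proc/cpuinfo   # should print a number greater than 0
ls /dev/kvm                        # should exist if KVM is usable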

I am also adding Sandro who is the maintainer of the setup tool as he
might have some additional insights.


Best regards

--
Martin Sivak
SLA / oVirt

On Thu, Nov 19, 2015 at 10:55 AM, Budur Nagaraju  wrote:
> HI
>
> Getting below error while doing a hosted engine setup, OS is running on
> ESXi6.0 version.
>
>
>
>
> [root@he ~]# hosted-engine --deploy
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   Continuing will configure this host for serving as hypervisor and
> create a VM where you have to install oVirt Engine afterwards.
>   Are you sure you want to continue? (Yes, No)[Yes]: yes
>   Configuration files: []
>   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151119152307-cx1kgx.log
>   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
>   It has been detected that this program is executed through an SSH
> connection without using screen.
>   Continuing with the installation may lead to broken installation
> if the network connection fails.
>   It is highly recommended to abort the installation and run it
> inside a screen session using command "screen".
>   Do you want to continue anyway? (Yes, No)[No]: yes
> [ ERROR ] Failed to execute stage 'Environment setup': Hardware does not
> support virtualization
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151119152311.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [root@he ~]#
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted-engine-setup

2015-11-19 Thread Simone Tiraboschi
On Thu, Nov 19, 2015 at 11:16 AM, Martin Sivak  wrote:

> Hello,
>
> hosted-engine has to be executed on a physical host (or a nested host
> with all the proper CPU flags) that supports KVM virtualization. That
> means linux kernel and the vmx flag in /proc/cpuinfo iirc.
>
> I am also adding Sandro who is the maintainer of the setup tool as he
> might have some additional insights.
>

In hosted-engine the engine will run as a VM, so if your host is a virtual
machine too you are going to create a nested deployment. To create a nested
env you need to:
- enable nested virtualization on the external hypervisor; follow here as a
reference if you are using oVirt
http://www.ovirt.org/Vdsm_Developers#Running_Node_as_guest_-_Nested_KVM
- if you are using oVirt as your external hypervisor, disable the no-mac-spoof
filter on the physical hypervisor, otherwise your engine VM will not have
network connectivity at all. You can proceed following these instructions:
https://github.com/oVirt/vdsm/tree/master/vdsm_hooks/macspoof
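
For the nested virtualization part on an Intel-based external hypervisor, the
usual check/enable sequence is roughly the following, as a sketch (for AMD the
module is kvm_amd with the same parameter):

cat /sys/module/kvm_intel/parameters/nested     # Y means nested KVM is already enabled
echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel     # reload with no VMs running, or reboot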



>
>
> Best regards
>
> --
> Martin Sivak
> SLA / oVirt
>
> On Thu, Nov 19, 2015 at 10:55 AM, Budur Nagaraju 
> wrote:
> > HI
> >
> > Getting below error while doing a hosted engine setup, OS is running on
> > ESXi6.0 version.
> >
> >
> >
> >
> > [root@he ~]# hosted-engine --deploy
> > [ INFO  ] Stage: Initializing
> > [ INFO  ] Generating a temporary VNC password.
> > [ INFO  ] Stage: Environment setup
> >   Continuing will configure this host for serving as hypervisor
> and
> > create a VM where you have to install oVirt Engine afterwards.
> >   Are you sure you want to continue? (Yes, No)[Yes]: yes
> >   Configuration files: []
> >   Log file:
> >
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151119152307-cx1kgx.log
> >   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
> >   It has been detected that this program is executed through an
> SSH
> > connection without using screen.
> >   Continuing with the installation may lead to broken
> installation
> > if the network connection fails.
> >   It is highly recommended to abort the installation and run it
> > inside a screen session using command "screen".
> >   Do you want to continue anyway? (Yes, No)[No]: yes
> > [ ERROR ] Failed to execute stage 'Environment setup': Hardware does not
> > support virtualization
> > [ INFO  ] Stage: Clean up
> > [ INFO  ] Generating answer file
> > '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151119152311.conf'
> > [ INFO  ] Stage: Pre-termination
> > [ INFO  ] Stage: Termination
> > [root@he ~]#
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted-engine-setup

2015-11-19 Thread Sandro Bonazzola
On Thu, Nov 19, 2015 at 11:57 AM, Simone Tiraboschi 
wrote:

>
>
> On Thu, Nov 19, 2015 at 11:16 AM, Martin Sivak  wrote:
>
>> Hello,
>>
>> hosted-engine has to be executed on a physical host (or a nested host
>> with all the proper CPU flags) that supports KVM virtualization. That
>> means linux kernel and the vmx flag in /proc/cpuinfo iirc.
>>
>> I am also adding Sandro who is the maintainer of the setup tool as he
>> might have some additional insights.
>>
>
>
Simone already replied


> In hosted-engine the engine will run as a VM so if your host is a virtual
> machine too you are going to create a nested deployment. To create a nested
> env you need to:
> - enable nested virtualization on the external hypervisor, follow here as
> a reference if you are using oVirt
> http://www.ovirt.org/Vdsm_Developers#Running_Node_as_guest_-_Nested_KVM
> - if you are using oVirt as your external hypervisor, disable no-mac-spoof
> filter on the physical hypervisor, otherwise your engine VM will not have
> network connectivity at all. You can proceed following these instructions:
> https://github.com/oVirt/vdsm/tree/master/vdsm_hooks/macspoof
>
>
>

+1


>
>>
>> Best regards
>>
>> --
>> Martin Sivak
>> SLA / oVirt
>>
>> On Thu, Nov 19, 2015 at 10:55 AM, Budur Nagaraju 
>> wrote:
>> > HI
>> >
>> > Getting below error while doing a hosted engine setup, OS is running on
>> > ESXi6.0 version.
>> >
>> >
>> >
>> >
>> > [root@he ~]# hosted-engine --deploy
>> > [ INFO  ] Stage: Initializing
>> > [ INFO  ] Generating a temporary VNC password.
>> > [ INFO  ] Stage: Environment setup
>> >   Continuing will configure this host for serving as hypervisor
>> and
>> > create a VM where you have to install oVirt Engine afterwards.
>> >   Are you sure you want to continue? (Yes, No)[Yes]: yes
>> >   Configuration files: []
>> >   Log file:
>> >
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151119152307-cx1kgx.log
>> >   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
>> >   It has been detected that this program is executed through an
>> SSH
>> > connection without using screen.
>> >   Continuing with the installation may lead to broken
>> installation
>> > if the network connection fails.
>> >   It is highly recommended to abort the installation and run it
>> > inside a screen session using command "screen".
>> >   Do you want to continue anyway? (Yes, No)[No]: yes
>> > [ ERROR ] Failed to execute stage 'Environment setup': Hardware does not
>> > support virtualization
>> > [ INFO  ] Stage: Clean up
>> > [ INFO  ] Generating answer file
>> > '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151119152311.conf'
>> > [ INFO  ] Stage: Pre-termination
>> > [ INFO  ] Stage: Termination
>> > [root@he ~]#
>> >
>> >
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>>
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-27 Thread Yedidyah Bar David
- Original Message -
 From: Martin Sivak msi...@redhat.com
 To: Sven Achtelik sven.achte...@mailpool.us
 Cc: Yedidyah Bar David d...@redhat.com, Roy Golan rgo...@redhat.com, 
 users@ovirt.org
 Sent: Monday, April 27, 2015 11:50:30 AM
 Subject: Re: AW: AW: AW: [ovirt-users] Hosted Engine-Setup issue additional 
 host
 
 Uh this really is weird.
 
 The situation is clear though:
 
 Broker dies when it tries to initialize logging (missing /dev/stdout ???)
 Agent dies because it can't connect to the broker.
 
 My /dev/stdout looks like this:
 
 lrwxrwxrwx. 1 root root 15 Mar 30 17:29 /dev/stdout -> /proc/self/fd/1
 
 And /proc/self/fd/1 is obviously related to the process. But I have an idea.
 
 Can you check whether the /proc/self/fd/1 is there? It might be missing if
 the broker closed its stdout during daemonizing.

If that's the problem, he can't see that - in his shell, /proc/self is its
shell's process.

Not sure how else to check without some small patch...
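
One workaround (a sketch, and it assumes the broker stays alive long enough to be
caught, which is not guaranteed here) is to look at the daemon's own fd 1 by PID
instead of /proc/self, and at how systemd wires up stdout for the unit:

pid=$(pgrep -f ovirt-ha-broker | head -n1)
ls -l /proc/$pid/fd/1

systemctl show ovirt-ha-broker.service -p StandardOutput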

 
 --
 Martin Sivák
 msi...@redhat.com
 Red Hat Czech
 RHEV-M SLA / Brno, CZ
 
 - Original Message -
  Yes,
  
  ---
  [root@ovirt-node2 ~]# systemctl start ovirt-ha-broker.service && systemctl
  start ovirt-ha-agent.service
  Job for ovirt-ha-broker.service failed. See 'systemctl status
  ovirt-ha-broker.service' and 'journalctl -xn' for details.
  [root@ovirt-node2 ~]# journalctl -xn
  -- Logs begin at Sun 2015-04-26 09:14:15 CDT, end at Mon 2015-04-27
  02:49:33
  CDT. --
  Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[29068]:
  File /usr/lib64/python2.7/logging/__init__.py, line 925, in _open
  Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[29068]:
  stream = open(self.baseFilename, self.mode)
  Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[29068]:
  IOError: [Errno 6] No such device or address: '/dev/stdout'
  Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[29068]:
  [FAILED]
  Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd[1]:
  ovirt-ha-broker.service: control process exited, code=exited status=1
  Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start
  oVirt
  Hosted Engine High Availability Communications Broker.
  -- Subject: Unit ovirt-ha-broker.service has failed
  -- Defined-By: systemd
  -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
  --
  -- Unit ovirt-ha-broker.service has failed.
  --
  -- The result is failed.
  Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd[1]: Unit
  ovirt-ha-broker.service entered failed state.
  Apr 27 02:49:33 ovirt-node2.mgmt.asl.local vdsm[3309]: vdsm
  ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Failed to connect to
  broker, the number of errors has exceeded the limit (
  Apr 27 02:49:33 ovirt-node2.mgmt.asl.local vdsm[3309]: vdsm vds ERROR
  failed
  to retrieve Hosted Engine HA info
 Traceback (most
 recent
 call last):
   File
   
  /usr/share/vdsm/API.py,
   line 1703, in
   _getHaInfo
 stats =
 
  instance.get_all_stats()
   File
   
  /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py,
   line 97, in
   get_all_stats
 with
 
  broker.connection():
   File
   
  /usr/lib64/python2.7/contextlib.py,
   line 17, in
   __enter__
 return
 self.gen.next()
   File
   
  /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py,
   line 99, in
   connection
 self.connect()
   File

Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-27 Thread Sven Achtelik
Hi Martin,

on my root login things look like this:

[root@ovirt-node2 ~]# ls -lah /proc/self/fd/1
lrwx------ 1 root root 64 Apr 27 04:02 /proc/self/fd/1 -> /dev/pts/1

Sven

-Original Message-
From: Martin Sivak [mailto:msi...@redhat.com]
Sent: Monday, 27 April 2015 10:51
To: Sven Achtelik
Cc: Yedidyah Bar David; Roy Golan; users@ovirt.org
Subject: Re: AW: AW: AW: [ovirt-users] Hosted Engine-Setup issue additional host

Uh this really is weird.

The situation is clear though:

Broker dies when it tries to initialize logging (missing /dev/stdout ???) Agent 
dies because it can't connect to the broker.

My /dev/stdout looks like this:

lrwxrwxrwx. 1 root root 15 Mar 30 17:29 /dev/stdout -> /proc/self/fd/1

And /proc/self/fd/1 is obviously related to the process. But I have an idea.

Can you check whether the /proc/self/fd/1 is there? It might be missing if the 
broker closed its stdout during daemonizing.

--
Martin Sivák
msi...@redhat.com
Red Hat Czech
RHEV-M SLA / Brno, CZ

- Original Message -
 Yes,

 ---
 [root@ovirt-node2 ~]# systemctl start ovirt-ha-broker.service &&
 systemctl start ovirt-ha-agent.service Job for ovirt-ha-broker.service
 failed. See 'systemctl status ovirt-ha-broker.service' and 'journalctl
 -xn' for details.
 [root@ovirt-node2 ~]# journalctl -xn
 -- Logs begin at Sun 2015-04-26 09:14:15 CDT, end at Mon 2015-04-27
 02:49:33 CDT. -- Apr 27 02:49:27 ovirt-node2.mgmt.asl.local
 systemd-ovirt-ha-broker[29068]:
 File /usr/lib64/python2.7/logging/__init__.py, line 925, in _open
 Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[29068]:
 stream = open(self.baseFilename, self.mode) Apr 27 02:49:27
 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[29068]:
 IOError: [Errno 6] No such device or address: '/dev/stdout'
 Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[29068]:
 [FAILED]
 Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd[1]:
 ovirt-ha-broker.service: control process exited, code=exited status=1
 Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start
 oVirt Hosted Engine High Availability Communications Broker.
 -- Subject: Unit ovirt-ha-broker.service has failed
 -- Defined-By: systemd
 -- Support:
 http://lists.freedesktop.org/mailman/listinfo/systemd-devel
 --
 -- Unit ovirt-ha-broker.service has failed.
 --
 -- The result is failed.
 Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd[1]: Unit
 ovirt-ha-broker.service entered failed state.
 Apr 27 02:49:33 ovirt-node2.mgmt.asl.local vdsm[3309]: vdsm
 ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Failed to
 connect to broker, the number of errors has exceeded the limit ( Apr
 27 02:49:33 ovirt-node2.mgmt.asl.local vdsm[3309]: vdsm vds ERROR
 failed to retrieve Hosted Engine HA info
Traceback (most recent
call last):
  File
  
 /usr/share/vdsm/API.py,
  line 1703, in
  _getHaInfo
stats =

 instance.get_all_stats()
  File
  
 /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py,
  line 97, in
  get_all_stats
with

 broker.connection():
  File
  
 /usr/lib64/python2.7/contextlib.py,
  line 17, in
  __enter__
return
self.gen.next()
  File
  
 /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py,
  line 99, in
  connection
self.connect()
  File
  
 /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py

Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-27 Thread Sven Achtelik
Yes,

---
[root@ovirt-node2 ~]# systemctl start ovirt-ha-broker.service && systemctl 
start ovirt-ha-agent.service 
Job for ovirt-ha-broker.service failed. See 'systemctl status 
ovirt-ha-broker.service' and 'journalctl -xn' for details.
[root@ovirt-node2 ~]# journalctl -xn
-- Logs begin at Sun 2015-04-26 09:14:15 CDT, end at Mon 2015-04-27 02:49:33 
CDT. --
Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[29068]: File 
/usr/lib64/python2.7/logging/__init__.py, line 925, in _open
Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[29068]: 
stream = open(self.baseFilename, self.mode)
Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[29068]: 
IOError: [Errno 6] No such device or address: '/dev/stdout'
Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[29068]: 
[FAILED]
Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd[1]: ovirt-ha-broker.service: 
control process exited, code=exited status=1
Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start oVirt 
Hosted Engine High Availability Communications Broker.
-- Subject: Unit ovirt-ha-broker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit ovirt-ha-broker.service has failed.
-- 
-- The result is failed.
Apr 27 02:49:27 ovirt-node2.mgmt.asl.local systemd[1]: Unit 
ovirt-ha-broker.service entered failed state.
Apr 27 02:49:33 ovirt-node2.mgmt.asl.local vdsm[3309]: vdsm 
ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Failed to connect to 
broker, the number of errors has exceeded the limit (
Apr 27 02:49:33 ovirt-node2.mgmt.asl.local vdsm[3309]: vdsm vds ERROR failed to 
retrieve Hosted Engine HA info
   Traceback (most recent 
call last):
 File 
/usr/share/vdsm/API.py, line 1703, in _getHaInfo
   stats = 
instance.get_all_stats()
 File 
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py, 
line 97, in get_all_stats
   with 
broker.connection():
 File 
/usr/lib64/python2.7/contextlib.py, line 17, in __enter__
   return 
self.gen.next()
 File 
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py, 
line 99, in connection
   self.connect()
 File 
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py, 
line 78, in connect
   raise 
BrokerConnectionError(error_msg)
   BrokerConnectionError: 
Failed to connect to broker, the number of errors has exceeded the limit (5)
Apr 27 02:49:33 ovirt-node2.mgmt.asl.local libvirtd[1678]: metadata not found: 
Requested metadata element is not present



-Original Message-
From: Yedidyah Bar David [mailto:d...@redhat.com] 
Sent: Monday, 27 April 2015 09:46
To: Sven Achtelik; Martin Sivak
Cc: Roy Golan; users@ovirt.org
Subject: Re: AW: AW: [ovirt-users] Hosted Engine-Setup issue additional host

- Original Message -
 From: Sven Achtelik sven.achte...@mailpool.us
 To: Yedidyah Bar David d...@redhat.com
 Cc: Roy Golan rgo...@redhat.com, users@ovirt.org
 Sent: Monday, April 27, 2015 10:34:13 AM
 Subject: AW: AW: [ovirt-users] Hosted Engine-Setup issue additional 
 host
 
 Hi Did,
 
 results are
 ---
 [root@ovirt-node2 ~]# ls -l /dev/stdout
 lrwxrwxrwx 1 root root 15 Apr 26 09:14 /dev/stdout -> /proc/self/fd/1
 [root@ovirt-node2 ~]# echo test > /dev/stdout
 test
 ---
 Looks like everything is working fine.

And it still fails with the same message when you restart ha daemons?

Adding Martin.

Weird.

 
 Sven
 
 
 -Original Message-
 From: Yedidyah Bar David [mailto:d...@redhat.com]
 Sent: Monday, 27 April 2015 08:57
 To: Sven Achtelik
 Cc: Roy Golan; users@ovirt.org
 Subject: Re: AW: [ovirt-users] Hosted Engine-Setup issue additional 
 host
 
 
 
 - Original Message -
  From: Sven Achtelik sven.achte...@mailpool.us
  To: Roy Golan rgo...@redhat.com, users@ovirt.org, Yedidyah Bar 
  David d...@redhat.com
  Sent: Sunday, April 26, 2015 6:57:06 PM
  Subject: AW: [ovirt-users] Hosted Engine-Setup issue additional host
  
  On the node that fails to start the ha-broker and ha-agent I'm using:
  
  ovirt-engine-sdk-python.noarch3.5.2.1-1.el7.centos
  @ovirt-3.5-pre
  ovirt-host-deploy.noarch

Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-27 Thread Yedidyah Bar David


- Original Message -
 From: Sven Achtelik sven.achte...@mailpool.us
 To: Roy Golan rgo...@redhat.com, users@ovirt.org, Yedidyah Bar David 
 d...@redhat.com
 Sent: Sunday, April 26, 2015 6:57:06 PM
 Subject: AW: [ovirt-users] Hosted Engine-Setup issue additional host
 
 On the node that fails to start the ha-broker and ha-agent I'm using:
 
 ovirt-engine-sdk-python.noarch3.5.2.1-1.el7.centos
 @ovirt-3.5-pre
 ovirt-host-deploy.noarch1.3.1-1.el7
 @ovirt-3.5
 ovirt-hosted-engine-ha.noarch  1.2.5-1.el7.centos
 @ovirt-3.5
 ovirt-hosted-engine-setup.noarch1.2.3-1.el7.centos
 @ovirt-3.5-pre
 ovirt-release35.noarch003-1
 @/ovirt-release35
 
 
 From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of
 Roy Golan
 Sent: Sunday, 26 April 2015 16:59
 To: users@ovirt.org; Yedidyah Bar David
 Subject: Re: [ovirt-users] Hosted Engine-Setup issue additional host
 
 On 04/26/2015 05:38 PM, Sven Achtelik wrote:
 Hi All,
 
 after a successful setup of hosted-engine on the first node I'm having
 trouble completing it on an additional node. The Setup fails with:
 -
 [ INFO  ] Waiting for the host to become operational in the engine. This may
 take several minutes...
 [ INFO  ] Still waiting for VDSM host to become operational...
 [ INFO  ] The VDSM Host is now operational
 [ INFO  ] Enabling and starting HA services
 [ ERROR ] Failed to execute stage 'Closing up': Command '/bin/systemctl'
 failed to execute
 [ INFO  ] Stage: Clean up
 [ INFO  ] Generating answer file
 '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150426080028.conf'
 [ INFO  ] Stage: Pre-termination
 [ INFO  ] Stage: Termination
 -
 After that the node is added to the cluster and is operational from the GUI,
 but the hosted  engine broker and agent fail to start with error messages:
 --
 [root@ovirt-node2 ~]# systemctl status ovirt-ha-agent.service -l
 ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring
 Agent
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled)
Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT;
20min ago
   Process: 5373 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-agent start
   (code=exited, status=1/FAILURE)
 
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: hdlr
 = FileHandler(filename, mode)
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File
 /usr/lib64/python2.7/logging/__init__.py, line 902, in __init__
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
 StreamHandler.__init__(self, self._open())
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File
 /usr/lib64/python2.7/logging/__init__.py, line 925, in _open
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
 stream = open(self.baseFilename, self.mode)
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
 IOError: [Errno 6] No such device or address: '/dev/stdout'
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
 [FAILED]
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]:
 ovirt-ha-agent.service: control process exited, code=exited status=1
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start oVirt
 Hosted Engine High Availability Monitoring Agent.
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Unit
 ovirt-ha-agent.service entered failed state.
 -
 And
 -
 [root@ovirt-node2 ~]# systemctl status ovirt-ha-broker
 ovirt-ha-broker.service - oVirt Hosted Engine High Availability
 Communications Broker
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled)
Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT;
21min ago
   Process: 5359 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-broker start
   (code=exited, status=1/FAILURE)
 
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
 hdlr = FileHandler(filename, mode)
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
 File /usr/lib64/python2.7/logging/__init__.py, line ...it__
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
 StreamHandler.__init__(self, self._open())
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
 File /usr/lib64/python2.7/logging/__init__.py, line ...open
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
 stream = open(self.baseFilename, self.mode)
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
 IOError: [Errno 6] No such device or address: '/dev/stdout'
 
 Didi any clue?
 the log says it runs as root so I can rule that out
 

That's weird. Please check

Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-27 Thread Martin Sivak
 of
errors has exceeded
the limit (5)
 Apr 27 02:49:33 ovirt-node2.mgmt.asl.local libvirtd[1678]: metadata not
 found: Requested metadata element is not present
 
 
 
 -Original Message-
 From: Yedidyah Bar David [mailto:d...@redhat.com]
 Sent: Monday, 27 April 2015 09:46
 To: Sven Achtelik; Martin Sivak
 Cc: Roy Golan; users@ovirt.org
 Subject: Re: AW: AW: [ovirt-users] Hosted Engine-Setup issue additional host
 
 - Original Message -
  From: Sven Achtelik sven.achte...@mailpool.us
  To: Yedidyah Bar David d...@redhat.com
  Cc: Roy Golan rgo...@redhat.com, users@ovirt.org
  Sent: Monday, April 27, 2015 10:34:13 AM
  Subject: AW: AW: [ovirt-users] Hosted Engine-Setup issue additional
  host
  
  Hi Did,
  
  results are
  ---
  [root@ovirt-node2 ~]# ls -l /dev/stdout
  lrwxrwxrwx 1 root root 15 Apr 26 09:14 /dev/stdout -> /proc/self/fd/1
  [root@ovirt-node2 ~]# echo test > /dev/stdout
  test
  ---
  Looks like everything is working fine.
 
 And it still fails with the same message when you restart ha daemons?
 
 Adding Martin.
 
 Weird.
 
  
  Sven
  
  
  -Original Message-
  From: Yedidyah Bar David [mailto:d...@redhat.com]
  Sent: Monday, 27 April 2015 08:57
  To: Sven Achtelik
  Cc: Roy Golan; users@ovirt.org
  Subject: Re: AW: [ovirt-users] Hosted Engine-Setup issue additional
  host
  
  
  
  - Original Message -
   From: Sven Achtelik sven.achte...@mailpool.us
   To: Roy Golan rgo...@redhat.com, users@ovirt.org, Yedidyah Bar
   David d...@redhat.com
   Sent: Sunday, April 26, 2015 6:57:06 PM
   Subject: AW: [ovirt-users] Hosted Engine-Setup issue additional host
   
   On the node that fails to start the ha-broker and ha-agent I'm using:
   
   ovirt-engine-sdk-python.noarch3.5.2.1-1.el7.centos
   @ovirt-3.5-pre
   ovirt-host-deploy.noarch1.3.1-1.el7
   @ovirt-3.5
   ovirt-hosted-engine-ha.noarch  1.2.5-1.el7.centos
   @ovirt-3.5
   ovirt-hosted-engine-setup.noarch1.2.3-1.el7.centos
   @ovirt-3.5-pre
   ovirt-release35.noarch003-1
   @/ovirt-release35
   
   
   From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On
   behalf of Roy Golan
   Sent: Sunday, 26 April 2015 16:59
   To: users@ovirt.org; Yedidyah Bar David
   Subject: Re: [ovirt-users] Hosted Engine-Setup issue additional host
   
   On 04/26/2015 05:38 PM, Sven Achtelik wrote:
   Hi All,
   
   after a successful setup of hosted-engine on the first node I'm
   having trouble completing it on an additional node. The Setup fails with:
   -
   [ INFO  ] Waiting for the host to become operational in the engine.
   This may take several minutes...
   [ INFO  ] Still waiting for VDSM host to become operational...
   [ INFO  ] The VDSM Host is now operational [ INFO  ] Enabling and
   starting HA services [ ERROR ] Failed to execute stage 'Closing up':
   Command '/bin/systemctl'
   failed to execute
   [ INFO  ] Stage: Clean up
   [ INFO  ] Generating answer file
   '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150426080028.conf'
   [ INFO  ] Stage: Pre-termination
   [ INFO  ] Stage: Termination
   -
   After that the node is added to the cluster and is operational from
   the GUI, but the hosted  engine broker and agent fail to start with
   error
   messages:
   --
   [root@ovirt-node2 ~]# systemctl status ovirt-ha-agent.service -l
   ovirt-ha-agent.service - oVirt Hosted Engine High Availability
   Monitoring Agent
  Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service;
  enabled)
  Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT;
  20min ago
 Process: 5373 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-agent start
 (code=exited, status=1/FAILURE)
   
   Apr 26 08:00:28 ovirt-node2.mgmt.asl.local
   systemd-ovirt-ha-agent[5373]: hdlr = FileHandler(filename, mode) Apr
   26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
   File /usr/lib64/python2.7/logging/__init__.py, line 902, in
   __init__ Apr 26 08:00:28 ovirt-node2.mgmt.asl.local
   systemd-ovirt-ha-agent[5373]:
   StreamHandler.__init__(self, self._open()) Apr 26 08:00:28
   ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File
   /usr/lib64/python2.7/logging/__init__.py, line 925, in _open Apr
   26
   08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
   stream = open(self.baseFilename, self.mode) Apr 26 08:00:28
   ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
   IOError: [Errno 6] No such device or address: '/dev/stdout'
   Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
   [FAILED]
   Apr 26 08:00:28 ovirt-node2.mgmt.asl.local

Re: [ovirt-users] Hosted-Engine Setup: Failed to setup networks

2015-04-26 Thread Yedidyah Bar David
- Original Message -
 From: Sven Achtelik sven.achte...@mailpool.us
 To: users@ovirt.org
 Sent: Thursday, April 23, 2015 10:58:15 AM
 Subject: Re: [ovirt-users] Hosted-Engine Setup: Failed to setup networks
 
 
 
 Hi All,
 
 
 
 fixed it, vdsm doesn’t like the PREFIX entry in the ifcfg file. After
 changing that to NETMASK it worked.

Thanks for the report!

Dan - is that expected/fixed/tracked?
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-26 Thread Sven Achtelik
On the node that fails to start the ha-broker and ha-agent I'm using:

ovirt-engine-sdk-python.noarch3.5.2.1-1.el7.centos  
@ovirt-3.5-pre
ovirt-host-deploy.noarch1.3.1-1.el7 
@ovirt-3.5
ovirt-hosted-engine-ha.noarch  1.2.5-1.el7.centos @ovirt-3.5
ovirt-hosted-engine-setup.noarch1.2.3-1.el7.centos  @ovirt-3.5-pre
ovirt-release35.noarch003-1 
@/ovirt-release35


From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of 
Roy Golan
Sent: Sunday, 26 April 2015 16:59
To: users@ovirt.org; Yedidyah Bar David
Subject: Re: [ovirt-users] Hosted Engine-Setup issue additional host

On 04/26/2015 05:38 PM, Sven Achtelik wrote:
Hi All,

after a successful setup of hosted-engine on the first node I'm having trouble 
completing it on an additional node. The Setup fails with:
-
[ INFO  ] Waiting for the host to become operational in the engine. This may 
take several minutes...
[ INFO  ] Still waiting for VDSM host to become operational...
[ INFO  ] The VDSM Host is now operational
[ INFO  ] Enabling and starting HA services
[ ERROR ] Failed to execute stage 'Closing up': Command '/bin/systemctl' failed 
to execute
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20150426080028.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
-
After that the node is added to the cluster and is operational from the GUI, 
but the hosted  engine broker and agent fail to start with error messages:
--
[root@ovirt-node2 ~]# systemctl status ovirt-ha-agent.service -l
ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled)
   Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT; 20min 
ago
  Process: 5373 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-agent start 
(code=exited, status=1/FAILURE)

Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: hdlr = 
FileHandler(filename, mode)
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File 
/usr/lib64/python2.7/logging/__init__.py, line 902, in __init__
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: 
StreamHandler.__init__(self, self._open())
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File 
/usr/lib64/python2.7/logging/__init__.py, line 925, in _open
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: stream 
= open(self.baseFilename, self.mode)
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: 
IOError: [Errno 6] No such device or address: '/dev/stdout'
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: 
[FAILED]
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: ovirt-ha-agent.service: 
control process exited, code=exited status=1
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start oVirt 
Hosted Engine High Availability Monitoring Agent.
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Unit 
ovirt-ha-agent.service entered failed state.
-
And
-
[root@ovirt-node2 ~]# systemctl status ovirt-ha-broker
ovirt-ha-broker.service - oVirt Hosted Engine High Availability Communications 
Broker
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled)
   Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT; 21min 
ago
  Process: 5359 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-broker start 
(code=exited, status=1/FAILURE)

Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: hdlr 
= FileHandler(filename, mode)
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: File 
/usr/lib64/python2.7/logging/__init__.py, line ...it__
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: 
StreamHandler.__init__(self, self._open())
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: File 
/usr/lib64/python2.7/logging/__init__.py, line ...open
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: 
stream = open(self.baseFilename, self.mode)
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: 
IOError: [Errno 6] No such device or address: '/dev/stdout'

Didi any clue?
the log says it runs as root so I can rule that out

Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: 
[FAILED]
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: ovirt-ha-broker.service: 
control process exited, code=exited status=1
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start oVirt 
Hosted Engine High Availability Communications

[ovirt-users] Hosted Engine-Setup issue additional host

2015-04-26 Thread Sven Achtelik
Hi All,

after a successful setup of hosted-engine on the first node I'm having trouble 
completing it on an additional node. The Setup fails with:
-
[ INFO  ] Waiting for the host to become operational in the engine. This may 
take several minutes...
[ INFO  ] Still waiting for VDSM host to become operational...
[ INFO  ] The VDSM Host is now operational
[ INFO  ] Enabling and starting HA services
[ ERROR ] Failed to execute stage 'Closing up': Command '/bin/systemctl' failed 
to execute
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20150426080028.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
-
After that the node is added to the cluster and is operational from the GUI, 
but the hosted  engine broker and agent fail to start with error messages:
--
[root@ovirt-node2 ~]# systemctl status ovirt-ha-agent.service -l
ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled)
   Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT; 20min 
ago
  Process: 5373 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-agent start 
(code=exited, status=1/FAILURE)

Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: hdlr = 
FileHandler(filename, mode)
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File 
/usr/lib64/python2.7/logging/__init__.py, line 902, in __init__
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: 
StreamHandler.__init__(self, self._open())
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File 
/usr/lib64/python2.7/logging/__init__.py, line 925, in _open
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: stream 
= open(self.baseFilename, self.mode)
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: 
IOError: [Errno 6] No such device or address: '/dev/stdout'
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: 
[FAILED]
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: ovirt-ha-agent.service: 
control process exited, code=exited status=1
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start oVirt 
Hosted Engine High Availability Monitoring Agent.
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Unit 
ovirt-ha-agent.service entered failed state.
-
And
-
[root@ovirt-node2 ~]# systemctl status ovirt-ha-broker
ovirt-ha-broker.service - oVirt Hosted Engine High Availability Communications 
Broker
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled)
   Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT; 21min 
ago
  Process: 5359 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-broker start 
(code=exited, status=1/FAILURE)

Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: hdlr 
= FileHandler(filename, mode)
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: File 
/usr/lib64/python2.7/logging/__init__.py, line ...it__
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: 
StreamHandler.__init__(self, self._open())
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: File 
/usr/lib64/python2.7/logging/__init__.py, line ...open
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: 
stream = open(self.baseFilename, self.mode)
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: 
IOError: [Errno 6] No such device or address: '/dev/stdout'
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: 
[FAILED]
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: ovirt-ha-broker.service: 
control process exited, code=exited status=1
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start oVirt 
Hosted Engine High Availability Communications Broker.
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Unit 
ovirt-ha-broker.service entered failed state.


The system is a CentOS 7 setup with SELinux switched off, no firewall or 
iptables. How can I find out which version of ovirt I'm running exactly? I've 
had a look at the logs and read through old bug reports.

Thank you,

Sven
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-26 Thread Roy Golan

On 04/26/2015 05:38 PM, Sven Achtelik wrote:


Hi All,

after a successful setup of hosted-engine on the first node I’m having 
trouble completing it on an additional node. The Setup fails with:


-

[ INFO  ] Waiting for the host to become operational in the engine. 
This may take several minutes...


[ INFO  ] Still waiting for VDSM host to become operational...

[ INFO  ] The VDSM Host is now operational

[ INFO  ] Enabling and starting HA services

[ ERROR ] Failed to execute stage 'Closing up': Command 
'/bin/systemctl' failed to execute


[ INFO  ] Stage: Clean up

[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20150426080028.conf'


[ INFO  ] Stage: Pre-termination

[ INFO  ] Stage: Termination

-

After that the node is added to the cluster and is operational from 
the GUI, but the hosted  engine broker and agent fail to start with 
error messages:


--

[root@ovirt-node2 ~]# systemctl status ovirt-ha-agent.service -l

ovirt-ha-agent.service - oVirt Hosted Engine High Availability 
Monitoring Agent


   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; 
enabled)


   Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 
CDT; 20min ago


  Process: 5373 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-agent 
start (code=exited, status=1/FAILURE)


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: hdlr = FileHandler(filename, mode)


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: File 
/usr/lib64/python2.7/logging/__init__.py, line 902, in __init__


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: StreamHandler.__init__(self, self._open())


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: File 
/usr/lib64/python2.7/logging/__init__.py, line 925, in _open


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: stream = open(self.baseFilename, self.mode)


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: IOError: [Errno 6] No such device or 
address: '/dev/stdout'


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: [FAILED]


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: 
ovirt-ha-agent.service: control process exited, code=exited status=1


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start 
oVirt Hosted Engine High Availability Monitoring Agent.


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Unit 
ovirt-ha-agent.service entered failed state.


-

And

-

[root@ovirt-node2 ~]# systemctl status ovirt-ha-broker

ovirt-ha-broker.service - oVirt Hosted Engine High Availability 
Communications Broker


   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; 
enabled)


   Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 
CDT; 21min ago


  Process: 5359 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-broker 
start (code=exited, status=1/FAILURE)


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: hdlr = FileHandler(filename, mode)


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: File 
/usr/lib64/python2.7/logging/__init__.py, line ...it__


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: StreamHandler.__init__(self, self._open())


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: File 
/usr/lib64/python2.7/logging/__init__.py, line ...open


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: stream = open(self.baseFilename, self.mode)


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: IOError: [Errno 6] No such device or 
address: '/dev/stdout'




Didi any clue?
the log says it runs as root so I can rule that out


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: [FAILED]


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: 
ovirt-ha-broker.service: control process exited, code=exited status=1


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start 
oVirt Hosted Engine High Availability Communications Broker.


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Unit 
ovirt-ha-broker.service entered failed state.




 The system is a CentOS 7 setup with SELinux switched off, no firewall 
 or iptables. How can I find out which version of ovirt I’m running 
 exactly? I’ve had a look at the logs and read through old bug reports.




the rpm version of ovirt* will be enough I guess
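
For example, to list the installed oVirt packages and their versions:

rpm -qa 'ovirt*' | sort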


Thank you,

Sven



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org

Re: [ovirt-users] Hosted-Engine Setup: Failed to setup networks

2015-04-23 Thread Sven Achtelik
Hi All,

fixed it, vdsm doesn't like the PREFIX entry in the ifcfg file. After changing 
that to NETMASK it worked.
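
For anyone hitting the same thing, the change boils down to this in the ifcfg
file of the interface used for ovirtmgmt (the values below are just the ones
from this thread, adjust to your network):

# /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE=em1
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.16.1.13
# vdsm at this version rejected the prefix notation:
#PREFIX=25
# ... so use the equivalent netmask instead:
NETMASK=255.255.255.128
GATEWAY=172.16.1.1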

Sven



From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of 
Sven Achtelik
Sent: Wednesday, 22 April 2015 23:51
To: users@ovirt.org
Subject: [ovirt-users] Hosted-Engine Setup: Failed to setup networks

Hi Everyone,

I tried to install oVirt 3.5 - hosted engine and it fails with some VDSM error 
while creating the ovirtmgmt bridge. The Host is running CentOS 7 and the 
interface I want to use is em1 and it's the parent interface from a vlan.

[ ERROR ] Failed to execute stage 'Misc configuration': Failed to setup 
networks {'ovirtmgmt': {'nic': 'em1', 'netmask': '255.255.255.128', 
'bootproto': 'none', 'ipaddr': '172.16.1.13', 'gateway': '172.16.1.1'}}. Error 
code: 16 message: Unexpected exception

2015-04-22 16:33:55 INFO otopi.plugins.ovirt_hosted_engine_setup.network.bridge 
bridge._misc:198 Configuring the management bridge
2015-04-22 16:33:55 DEBUG otopi.context context._executeMethod:152 method 
exception Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/otopi/context.py, line 142, in 
_executeMethod
method['method']()
  File 
/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/network/bridge.py,
 line 207, in _misc
_setupNetworks(conn, networks, {}, {'connectivityCheck': False})
  File 
/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/network/bridge.py,
 line 225, in _setupNetworks
'message: %s' % (networks, code, message))
RuntimeError: Failed to setup networks {'ovirtmgmt': {'nic': 'em1', 'netmask': 
'255.255.255.128', 'bootproto': 'none', 'ipaddr': '172.16.1.13', 'gateway':
'172.16.1.1'}}. Error code: 16 message: Unexpected exception
2015-04-22 16:33:55 ERROR otopi.context context._executeMethod:161 Failed to 
execute stage 'Misc configuration': Failed to setup networks {'ovirtmgmt': {'
nic': 'em1', 'netmask': '255.255.255.128', 'bootproto': 'none', 'ipaddr': 
'172.16.1.13', 'gateway': '172.16.1.1'}}. Error code: 16 message: Unexpected
exception


Is there anything I can do like creating the bridge manually or use older 
version of the packages that don't have that issue ?

Thank you,

Sven
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted-Engine Setup: Failed to setup networks

2015-04-22 Thread Sven Achtelik
Hi Everyone,

I tried to install oVirt 3.5 - hosted engine and it fails with some VDSM error 
while creating the ovirtmgmt bridge. The Host is running CentOS 7 and the 
interface I want to use is em1 and it's the parent interface from a vlan.

[ ERROR ] Failed to execute stage 'Misc configuration': Failed to setup 
networks {'ovirtmgmt': {'nic': 'em1', 'netmask': '255.255.255.128', 
'bootproto': 'none', 'ipaddr': '172.16.1.13', 'gateway': '172.16.1.1'}}. Error 
code: 16 message: Unexpected exception

2015-04-22 16:33:55 INFO otopi.plugins.ovirt_hosted_engine_setup.network.bridge 
bridge._misc:198 Configuring the management bridge
2015-04-22 16:33:55 DEBUG otopi.context context._executeMethod:152 method 
exception Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/otopi/context.py, line 142, in 
_executeMethod
method['method']()
  File 
/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/network/bridge.py,
 line 207, in _misc
_setupNetworks(conn, networks, {}, {'connectivityCheck': False})
  File 
/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/network/bridge.py,
 line 225, in _setupNetworks
'message: %s' % (networks, code, message))
RuntimeError: Failed to setup networks {'ovirtmgmt': {'nic': 'em1', 'netmask': 
'255.255.255.128', 'bootproto': 'none', 'ipaddr': '172.16.1.13', 'gateway':
'172.16.1.1'}}. Error code: 16 message: Unexpected exception
2015-04-22 16:33:55 ERROR otopi.context context._executeMethod:161 Failed to 
execute stage 'Misc configuration': Failed to setup networks {'ovirtmgmt': {'
nic': 'em1', 'netmask': '255.255.255.128', 'bootproto': 'none', 'ipaddr': 
'172.16.1.13', 'gateway': '172.16.1.1'}}. Error code: 16 message: Unexpected
exception


Is there anything I can do like creating the bridge manually or use older 
version of the packages that don't have that issue ?

Thank you,

Sven
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine Setup

2015-02-02 Thread Uwe Laverenz

Hello Michael,

On 02.02.2015 at 00:55, Michael Schefczyk wrote:


- In the web interface of the hosted engine, however (Hosted Engine
Network.pdf, page 3) the required network ovirtmgmt is initially
not connected to bond0 (while it is in reality connected, as ifconfig
shows). When dragging ovirtmgmt to the arrow pointing to bond0, it
does not work. The error message is "Bad bond name, it must begin
with the prefix 'bond' followed by a number." This is easy to
understand, as bond0 is a combination of bond and the number zero.


bond0 is the correct one, the error message refers to your other 
bond: bondC is not a correct name, you should name it bond1 or bond2.
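
For illustration, a minimally valid ifcfg file for a correctly named bond looks
roughly like this (device name, bonding mode and options are just examples):

# /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
BOOTPROTO=none
ONBOOT=yes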


hth,
Uwe
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine setup ovirtmgmt bridge

2015-01-26 Thread Uwe Laverenz

Hi,

On 26.01.2015 at 23:49, Mikola Rose wrote:


On a hosted-engine --deploy on a machine that has 2 network cards
em1 192.168.0.178  General Network
em2 192.168.1.151  Net that NFS server is on,  no dns no gateway

which one would I set as ovirtmgmt bridge

Please indicate a nic to set ovirtmgmt bridge on: (em1, em2) [em1]


The general network would be the correct one (em1).

cu,
Uwe
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted-engine setup ovirtmgmt bridge

2015-01-26 Thread Mikola Rose
Hi there again list users;


On a hosted-engine --deploy on a machine that has 2 network cards
em1 192.168.0.178  General Network
em2 192.168.1.151  Net that NFS server is on,  no dns no gateway

which one would I set as ovirtmgmt bridge

Please indicate a nic to set ovirtmgmt bridge on: (em1, em2) [em1]



Mik








___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted-engine setup/migration features for 3.6

2014-12-03 Thread Yedidyah Bar David
Hi all,

We already have quite a lot of open ovirt-hosted-engine-setup bugs for 3.6 [1].

Yesterday I tried helping someone on irc who planned to migrate to hosted-engine
manually, and without knowing (so it seems) that such a feature exists. He had
an engine set up on a physical host, prepared a VM for it, and asked about 
migrating
the engine to the VM. In principle this works, but the final result will be a
hosted-engine, where the engine manages a VM that runs itself, without knowing it,
and without HA.

The current recommended migration flow is described in [2]. This page is perhaps
a bit outdated, perhaps missing some details etc., but principally works. The 
main
issue with it, AFAICT after discussing this a bit with few people, is that it
requires a new clean host.

I'd like to hear what people here think about such and similar flows.

If you already had an engine and migrated to hosted-engine, what was good, what
was bad, what would you like to change?

If you plan such a migration, what do you find missing currently?

[1] http://red.ht/1vle8Vv
[2] http://www.ovirt.org/Migrate_to_Hosted_Engine

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine setup on second host fails

2014-09-24 Thread Jiri Moskovcak

Hi,
it's getting a little too long, so please forgive the top post. The 
engine emits the message "Host with the same address already exists." 
only if you are trying to add a host with the same hostname; it doesn't have 
any connection to its ID, so please check that your hosts have unique 
hostnames (e.g. I ran into this when I didn't get a hostname from dhcp and 
both of my hosts were localhost.localdomain).
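
A quick way to check and fix this on each host (the FQDN below is only a
placeholder):

# on every host, the reported name must be unique and resolvable
hostname -f
hostnamectl status

# if both hosts report localhost.localdomain, set a proper name, e.g.
hostnamectl set-hostname node2.example.com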


Regards,
Jirka

On 09/24/2014 07:59 AM, Yedidyah Bar David wrote:

- Original Message -

From: Yedidyah Bar David d...@redhat.com
To: Itamar Heim ih...@redhat.com
Cc: Stefan Wendler stefan.wend...@tngtech.com, users@ovirt.org
Sent: Wednesday, September 24, 2014 8:40:58 AM
Subject: Re: [ovirt-users] hosted engine setup on second host fails

- Original Message -

From: Itamar Heim ih...@redhat.com
To: Stefan Wendler stefan.wend...@tngtech.com
Cc: Yedidyah Bar David ybard...@redhat.com, users@ovirt.org
Sent: Tuesday, September 23, 2014 7:07:12 PM
Subject: Re: [ovirt-users] hosted engine setup on second host fails


On Sep 23, 2014 7:03 PM, Stefan Wendler stefan.wend...@tngtech.com wrote:


On 09/23/2014 17:01, Itamar Heim wrote:

On 09/23/2014 05:17 PM, Stefan Wendler wrote:

On 09/22/2014 10:52, Stefan Wendler wrote:

On 09/19/2014 15:58, Itamar Heim wrote:

On 09/19/2014 03:32 PM, Stefan Wendler wrote:

Hi there.

I'm trying to install a hosted-engine on our second node (first
engine
runs on node1).

But I always get the message:

[ ERROR ] Cannot automatically add the host to the Default cluster:
Cannot add Host. Host with the same address already exists.

I'm not entirely sure what I have to do when this message comes, so
I
just press ENTER:

###
To continue make a selection from the options below:
  (1) Continue setup - engine installation is complete
  (2) Power off and restart the VM
  (3) Abort setup

  (1, 2, 3)[1]:


Is there any other interaction required prior to selecting 1?

In the Web Gui I get the following message:

X Adding new Host hosted_engine_2 to Cluster Default

Here is the console output:

# hosted-engine --deploy
[ INFO  ] Stage: Initializing
  Continuing will configure this host for serving as
hypervisor
and create a VM where you have to install oVirt Engine afterwards.
  Are you sure you want to continue? (Yes, No)[Yes]:
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
  Configuration files: []
  Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140919141012-k2lag6.log


  Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Bridge ovirtmgmt already created
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

  --== STORAGE CONFIGURATION ==--

  During customization use CTRL-D to abort.
  Please specify the storage you would like to use (nfs3,
nfs4)[nfs3]:
  Please specify the full shared storage connection path
to use
(example: host:/path): some address:/volume1
  The specified storage location already contains a data
domain.
Is this an additional host setup (Yes, No)[Yes]?
[ INFO  ] Installing on additional host
  Please specify the Host ID [Must be integer, default:
  2]:
  The Host ID is already known. Is this a re-deployment
on an
additional host that was previously set up (Yes, No)[Yes]?


I admit I never tried that. Not sure how exactly it's supposed to work.


A bit more details:

Normally, a host is registered only in the engine's database. A hosted
engine is additionally registered in a special hosted-engine metadata
file managed by the ha daemon [1]. The question above appears if the host id
is found in this metadata file. It seems we never check if it's already
in the engine database - the assumption is that if an existing host is
re-purposed as a hosted-engine, it should first be uninstalled - at least
not be in use (no VMs) and removed from its cluster/dc/the engine.

[1] http://www.ovirt.org/images/d/d5/Fosdem-hosted-engine.pdf pages 17-18
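
A rough way to see which host ids are already registered in that metadata file
is to run, on a host that already has the HA services up:

hosted-engine --vm-status

It prints one block per registered host id (hostname, score, engine status), so
a stale or duplicate id shows up there before retrying the additional-host setup.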





  --== SYSTEM CONFIGURATION ==--

[WARNING] A configuration file must be supplied to deploy Hosted
Engine
on an additional host.
  The answer file may be fetched from the first host
using scp.
  If you do not want to download it automatically you can
abort
the setup answering no to the following question.
  Do you want to scp the answer file from the first host?
(Yes,
No)[Yes]:
  Please provide the FQDN or IP of the first host:
node1.domain
  Enter 'root' user password for host node1.domain:
[ INFO  ] Answer file successfully downloaded

  --== NETWORK CONFIGURATION ==--

  The following CPU types

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-24 Thread Stefan Wendler
Oh well. I think this is fixed. I upgraded to 3.4.4 and the message
seems to be gone. the agents are running :)

Thank you very much !!! :)


On 09/24/2014 15:23, Stefan Wendler wrote:
 Okay, I'm truncating the previous mails here
 
 David's hint was the solution. I had the ovirt hosts already added to the
 cluster and tried to do the hosted-engine-ha setup on them.
 
 After removing the hosts from the cluster and putting the data domain to
 maintenance mode I was able to deploy on all other nodes. I now have a
 HA'd hosted engine. Which can also be migrated \o/
 
 Maybe that is something that could be stated in the documentation more
 clearly?
 
 Unfortunately now I have a new problem. The agents crash rapidly after
 startup. The error is the following:
 (/var/log/ovirt-hosted-engine-ha/agent.log)
 
 AttributeError: 'NoneType' object has no attribute 'iteritems'
 
 And the whole output here - The agents have been started and I tried a
 migration of the hosted engine from ovirt host 1 to host 2 which
 succeeded. But the agents crashed afterwards:
 
 MainThread::INFO::2014-09-24
 15:09:24,839::agent::52::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
 ovirt-hosted-engine-ha agent 1.1.5 started
 MainThread::INFO::2014-09-24
 15:09:24,871::hosted_engine::223::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname)
 Found certificate common name: 10.8.2.101
 MainThread::INFO::2014-09-24
 15:09:25,081::hosted_engine::367::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
 Initializing ha-broker connection
 MainThread::INFO::2014-09-24
 15:09:25,082::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor ping, options {'addr': '10.8.2.1'}
 MainThread::INFO::2014-09-24
 15:09:25,083::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 25293072
 MainThread::INFO::2014-09-24
 15:09:25,083::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor mgmt-bridge, options {'use_ssl': 'true', 'bridge_name':
 'ovirtmgmt', 'address': '0'}
 MainThread::INFO::2014-09-24
 15:09:25,086::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 25294160
 MainThread::INFO::2014-09-24
 15:09:25,086::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor mem-free, options {'use_ssl': 'true', 'address': '0'}
 MainThread::INFO::2014-09-24
 15:09:25,088::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 25293968
 MainThread::INFO::2014-09-24
 15:09:25,088::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor cpu-load-no-engine, options {'use_ssl': 'true',
 'vm_uuid': 'e1ca293f-09e0-4d2e-8915-221839af1489', 'address': '0'}
 MainThread::INFO::2014-09-24
 15:09:25,089::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 25360400
 MainThread::INFO::2014-09-24
 15:09:25,089::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid':
 'e1ca293f-09e0-4d2e-8915-221839af1489', 'address': '0'}
 MainThread::INFO::2014-09-24
 15:09:25,091::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 25509776
 MainThread::INFO::2014-09-24
 15:09:25,091::hosted_engine::391::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
 Broker initialized, all submonitors started
 MainThread::INFO::2014-09-24
 15:09:25,125::hosted_engine::476::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
 Ensuring lease for lockspace hosted-engine, host id 2 is acquired (file:
 /rhev/data-center/mnt/10.8.2.12:_volume1_engine-store/e313da39-594c-46b5-95c9-c445889c745c/ha_agent/hosted-engine.lockspace)
 MainThread::INFO::2014-09-24
 15:09:25,134::state_machine::153::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
 Global metadata: {'maintenance': False}
 MainThread::INFO::2014-09-24
 15:09:25,134::state_machine::158::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
 Host 10.8.2.100 (id 1): {'live-data': True, 'extra':
 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1411564164
 (Wed Sep 24 15:09:24
 2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n',
 'hostname': '10.8.2.100', 'host-id': 1, 'engine-status': {'health':
 'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400, 'maintenance':
 False, 'host-ts': 1411564164}
 MainThread::INFO::2014-09-24
 15:09:25,134::state_machine::158::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
 Host 10.8.2.102 (id 3): {'live-data': False, 'extra':
 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1411562496
 (Wed Sep 24 14:41:36
 

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-24 Thread Itamar Heim
Seems we should consider not adding the host if already there. Please open a 
bug.
Though I really hope in 3.6 to see this done from the gui

On Sep 24, 2014 4:23 PM, Stefan Wendler stefan.wend...@tngtech.com wrote:

 Okay, I'm truncating the previous mails here 

Davids hint was the solution. I had the ovirt hosts already added to the
cluster and tried to do the hosted-engine-ha setup on them.

After removing the hosts from the cluster and putting the data domain into
maintenance mode I was able to deploy on all other nodes. I now have an
HA'd hosted engine, which can also be migrated \o/

Maybe that is something that could be stated in the documentation more
clearly?
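
For reference, the conflict can be spotted up front: the engine rejects an
additional hosted-engine host whose address is already registered, so querying
the engine for an existing host entry before running hosted-engine --deploy on
that node avoids the "Host with the same address already exists" error. Below
is a minimal sketch using the Python ovirt-engine-sdk4; the URL, credentials
and the searched address are placeholders, and this is not part of the setup
tool itself:

import ovirtsdk4 as sdk

# Sketch only: ask the engine whether it already knows a host with this
# address before attempting an additional hosted-engine deployment on it.
# URL, credentials and the searched address below are placeholders.
connection = sdk.Connection(
    url='https://engine.domain/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,  # in a real setup pass ca_file='...' instead
)

hosts_service = connection.system_service().hosts_service()
# The search string uses the same syntax as the Administration Portal search bar.
existing = hosts_service.list(search='address=node2.domain')

if existing:
    print('Already registered as "%s"; remove or reinstall that host entry '
          'in the engine before running hosted-engine --deploy here.'
          % existing[0].name)
else:
    print('No conflicting host entry found.')

connection.close()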

Unfortunately now I have a new problem. The agents crash rapidly after
startup. The error is the following:
(/var/log/ovirt-hosted-engine-ha/agent.log)

AttributeError: 'NoneType' object has no attribute 'iteritems'
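
On Python 2, which the ovirt-hosted-engine-ha agent of that era targets, this
is exactly what happens when a call that normally returns a dict hands back
None and the caller iterates it with .iteritems(). A minimal reproduction of
the failure mode follows; parse_metadata() is a hypothetical stand-in for
whatever produces the per-host metadata dict, not the agent's actual code:

# Minimal reproduction of the failure mode; parse_metadata() is a
# hypothetical stand-in, not the agent's real parser.
def parse_metadata(raw):
    """Return a dict of key=value pairs, or None when the block is empty."""
    if not raw:
        return None  # the unexpected None that triggers the crash
    return dict(line.split('=', 1) for line in raw.splitlines() if '=' in line)

metadata = parse_metadata('')  # e.g. a metadata slot with no data yet

# On Python 2, metadata.iteritems() now raises:
#   AttributeError: 'NoneType' object has no attribute 'iteritems'
# (on Python 3 the equivalent call is .items()).
# A defensive caller treats the missing block as an empty dict instead:
for key, value in (metadata or {}).items():
    print(key, value)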

And here is the whole output. The agents have been started and I tried a
migration of the hosted engine from ovirt host 1 to host 2, which
succeeded. But the agents crashed afterwards:

MainThread::INFO::2014-09-24
15:09:24,839::agent::52::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
ovirt-hosted-engine-ha agent 1.1.5 started
MainThread::INFO::2014-09-24
15:09:24,871::hosted_engine::223::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname)
Found certificate common name: 10.8.2.101
MainThread::INFO::2014-09-24
15:09:25,081::hosted_engine::367::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
Initializing ha-broker connection
MainThread::INFO::2014-09-24
15:09:25,082::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor ping, options {'addr': '10.8.2.1'}
MainThread::INFO::2014-09-24
15:09:25,083::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id 25293072
MainThread::INFO::2014-09-24
15:09:25,083::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor mgmt-bridge, options {'use_ssl': 'true', 'bridge_name':
'ovirtmgmt', 'address': '0'}
MainThread::INFO::2014-09-24
15:09:25,086::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id 25294160
MainThread::INFO::2014-09-24
15:09:25,086::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor mem-free, options {'use_ssl': 'true', 'address': '0'}
MainThread::INFO::2014-09-24
15:09:25,088::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id 25293968
MainThread::INFO::2014-09-24
15:09:25,088::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor cpu-load-no-engine, options {'use_ssl': 'true',
'vm_uuid': 'e1ca293f-09e0-4d2e-8915-221839af1489', 'address': '0'}
MainThread::INFO::2014-09-24
15:09:25,089::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id 25360400
MainThread::INFO::2014-09-24
15:09:25,089::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid':
'e1ca293f-09e0-4d2e-8915-221839af1489', 'address': '0'}
MainThread::INFO::2014-09-24
15:09:25,091::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id 25509776
MainThread::INFO::2014-09-24
15:09:25,091::hosted_engine::391::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
Broker initialized, all submonitors started
MainThread::INFO::2014-09-24
15:09:25,125::hosted_engine::476::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
Ensuring lease for lockspace hosted-engine, host id 2 is acquired (file:
/rhev/data-center/mnt/10.8.2.12:_volume1_engine-store/e313da39-594c-46b5-95c9-c445889c745c/ha_agent/hosted-engine.lockspace)
MainThread::INFO::2014-09-24
15:09:25,134::state_machine::153::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Global metadata: {'maintenance': False}
MainThread::INFO::2014-09-24
15:09:25,134::state_machine::158::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.8.2.100 (id 1): {'live-data': True, 'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1411564164
(Wed Sep 24 15:09:24
2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n',
'hostname': '10.8.2.100', 'host-id': 1, 'engine-status': {'health':
'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400, 'maintenance':
False, 'host-ts': 1411564164}
MainThread::INFO::2014-09-24
15:09:25,134::state_machine::158::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.8.2.102 (id 3): {'live-data': False, 'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1411562496
(Wed Sep 24 14:41:36

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-23 Thread Stefan Wendler
On 09/22/2014 10:52, Stefan Wendler wrote:
 On 09/19/2014 15:58, Itamar Heim wrote:
 On 09/19/2014 03:32 PM, Stefan Wendler wrote:
 Hi there.

 I'm trying to install a hosted-engine on our second node (first engine
 runs on node1).

 But I always get the message:

 [ ERROR ] Cannot automatically add the host to the Default cluster:
 Cannot add Host. Host with the same address already exists.

 I'm not entirely sure what I have to do when this message comes, so I
 just press ENTER:

 ###
 To continue make a selection from the options below:
(1) Continue setup - engine installation is complete
(2) Power off and restart the VM
(3) Abort setup

(1, 2, 3)[1]:
 

 Is there any other interaction required prior to selecting 1?

 In the Web Gui I get the following message:

 X Adding new Host hosted_engine_2 to Cluster Default

 Here is the console output:

 # hosted-engine --deploy
 [ INFO  ] Stage: Initializing
Continuing will configure this host for serving as hypervisor
 and create a VM where you have to install oVirt Engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]:
 [ INFO  ] Generating a temporary VNC password.
 [ INFO  ] Stage: Environment setup
Configuration files: []
Log file:
 /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140919141012-k2lag6.log

Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
 [ INFO  ] Hardware supports virtualization
 [ INFO  ] Bridge ovirtmgmt already created
 [ INFO  ] Stage: Environment packages setup
 [ INFO  ] Stage: Programs detection
 [ INFO  ] Stage: Environment setup
 [ INFO  ] Stage: Environment customization

--== STORAGE CONFIGURATION ==--

During customization use CTRL-D to abort.
Please specify the storage you would like to use (nfs3,
 nfs4)[nfs3]:
Please specify the full shared storage connection path to use
 (example: host:/path): some address:/volume1
The specified storage location already contains a data domain.
 Is this an additional host setup (Yes, No)[Yes]?
 [ INFO  ] Installing on additional host
Please specify the Host ID [Must be integer, default: 2]:
The Host ID is already known. Is this a re-deployment on an
 additional host that was previously set up (Yes, No)[Yes]?

--== SYSTEM CONFIGURATION ==--

 [WARNING] A configuration file must be supplied to deploy Hosted Engine
 on an additional host.
The answer file may be fetched from the first host using scp.
If you do not want to download it automatically you can abort
 the setup answering no to the following question.
Do you want to scp the answer file from the first host? (Yes,
 No)[Yes]:
Please provide the FQDN or IP of the first host:
 node1.domain
Enter 'root' user password for host node1.domain:
 [ INFO  ] Answer file successfully downloaded

--== NETWORK CONFIGURATION ==--

The following CPU types are supported by this host:
   - model_Westmere: Intel Westmere Family
   - model_Nehalem: Intel Nehalem Family
   - model_Penryn: Intel Penryn Family
   - model_Conroe: Intel Conroe Family

--== HOSTED ENGINE CONFIGURATION ==--

Enter the name which will be used to identify this host inside
 the Administrator Portal [hosted_engine_2]:
Enter 'admin@internal' user password that will be used for
 accessing the Administrator Portal:
Confirm 'admin@internal' user password:
   [ INFO  ] Stage: Setup validation

--== CONFIGURATION PREVIEW ==--

Engine FQDN: engine.domain
Bridge name: ovirtmgmt
SSH daemon port: 22
Gateway address: some address
Host name for web application  : hosted_engine_2
Host ID: 2
Image size GB  : 25
Storage connection : some address:/volume1
Console type   : vnc
Memory size MB : 8192
MAC address: 00:16:3e:3b:8d:66
Boot type  : disk
Number of CPUs : 2
CPU Type   : model_Westmere

Please confirm installation settings (Yes, No)[No]: yes
 [ ERROR ] Invalid value

Please confirm installation settings (Yes, No)[No]: Yes
 [ INFO  ] Stage: Transaction setup
 [ INFO  ] Stage: Misc configuration
 [ INFO  ] Stage: Package installation
 [ INFO  ] Stage: Misc configuration
 [ INFO  ] Configuring libvirt
 [ INFO  ] Configuring VDSM
 [ INFO  ] Starting vdsmd
 [ INFO  ] Waiting for VDSM hardware 

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-23 Thread Itamar Heim

On 09/23/2014 05:17 PM, Stefan Wendler wrote:

On 09/22/2014 10:52, Stefan Wendler wrote:

On 09/19/2014 15:58, Itamar Heim wrote:

On 09/19/2014 03:32 PM, Stefan Wendler wrote:

Hi there.

I'm trying to install a hosted-engine on our second node (first engine
runs on node1).

But I always get the message:

[ ERROR ] Cannot automatically add the host to the Default cluster:
Cannot add Host. Host with the same address already exists.

I'm not entirely sure what I have to do when this message comes, so I
just press ENTER:

###
To continue make a selection from the options below:
(1) Continue setup - engine installation is complete
(2) Power off and restart the VM
(3) Abort setup

(1, 2, 3)[1]:


Is there any other interaction required prior to selecting 1?

In the Web Gui I get the following message:

X Adding new Host hosted_engine_2 to Cluster Default

Here is the console output:

# hosted-engine --deploy
[ INFO  ] Stage: Initializing
Continuing will configure this host for serving as hypervisor
and create a VM where you have to install oVirt Engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]:
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140919141012-k2lag6.log

Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Bridge ovirtmgmt already created
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

--== STORAGE CONFIGURATION ==--

During customization use CTRL-D to abort.
Please specify the storage you would like to use (nfs3,
nfs4)[nfs3]:
Please specify the full shared storage connection path to use
(example: host:/path): some address:/volume1
The specified storage location already contains a data domain.
Is this an additional host setup (Yes, No)[Yes]?
[ INFO  ] Installing on additional host
Please specify the Host ID [Must be integer, default: 2]:
The Host ID is already known. Is this a re-deployment on an
additional host that was previously set up (Yes, No)[Yes]?

--== SYSTEM CONFIGURATION ==--

[WARNING] A configuration file must be supplied to deploy Hosted Engine
on an additional host.
The answer file may be fetched from the first host using scp.
If you do not want to download it automatically you can abort
the setup answering no to the following question.
Do you want to scp the answer file from the first host? (Yes,
No)[Yes]:
Please provide the FQDN or IP of the first host:
node1.domain
Enter 'root' user password for host node1.domain:
[ INFO  ] Answer file successfully downloaded

--== NETWORK CONFIGURATION ==--

The following CPU types are supported by this host:
   - model_Westmere: Intel Westmere Family
   - model_Nehalem: Intel Nehalem Family
   - model_Penryn: Intel Penryn Family
   - model_Conroe: Intel Conroe Family

--== HOSTED ENGINE CONFIGURATION ==--

Enter the name which will be used to identify this host inside
the Administrator Portal [hosted_engine_2]:
Enter 'admin@internal' user password that will be used for
accessing the Administrator Portal:
Confirm 'admin@internal' user password:
   [ INFO  ] Stage: Setup validation

--== CONFIGURATION PREVIEW ==--

Engine FQDN: engine.domain
Bridge name: ovirtmgmt
SSH daemon port: 22
Gateway address: some address
Host name for web application  : hosted_engine_2
Host ID: 2
Image size GB  : 25
Storage connection : some address:/volume1
Console type   : vnc
Memory size MB : 8192
MAC address: 00:16:3e:3b:8d:66
Boot type  : disk
Number of CPUs : 2
CPU Type   : model_Westmere

Please confirm installation settings (Yes, No)[No]: yes
[ ERROR ] Invalid value

Please confirm installation settings (Yes, No)[No]: Yes
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Waiting for VDSM hardware 

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-23 Thread Stefan Wendler
On 09/23/2014 17:01, Itamar Heim wrote:
 On 09/23/2014 05:17 PM, Stefan Wendler wrote:
 On 09/22/2014 10:52, Stefan Wendler wrote:
 On 09/19/2014 15:58, Itamar Heim wrote:
 On 09/19/2014 03:32 PM, Stefan Wendler wrote:
 Hi there.

 I'm trying to install a hosted-engine on our second node (first engine
 runs on node1).

 But I always get the message:

 [ ERROR ] Cannot automatically add the host to the Default cluster:
 Cannot add Host. Host with the same address already exists.

 I'm not entirely sure what I have to do when this message comes, so I
 just press ENTER:

 ###
 To continue make a selection from the options below:
 (1) Continue setup - engine installation is complete
 (2) Power off and restart the VM
 (3) Abort setup

 (1, 2, 3)[1]:
 

 Is there any other interaction required prior to selecting 1?

 In the Web Gui I get the following message:

 X Adding new Host hosted_engine_2 to Cluster Default

 Here is the console output:

 # hosted-engine --deploy
 [ INFO  ] Stage: Initializing
 Continuing will configure this host for serving as
 hypervisor
 and create a VM where you have to install oVirt Engine afterwards.
 Are you sure you want to continue? (Yes, No)[Yes]:
 [ INFO  ] Generating a temporary VNC password.
 [ INFO  ] Stage: Environment setup
 Configuration files: []
 Log file:
 /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140919141012-k2lag6.log


 Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
 [ INFO  ] Hardware supports virtualization
 [ INFO  ] Bridge ovirtmgmt already created
 [ INFO  ] Stage: Environment packages setup
 [ INFO  ] Stage: Programs detection
 [ INFO  ] Stage: Environment setup
 [ INFO  ] Stage: Environment customization

 --== STORAGE CONFIGURATION ==--

 During customization use CTRL-D to abort.
 Please specify the storage you would like to use (nfs3,
 nfs4)[nfs3]:
 Please specify the full shared storage connection path
 to use
 (example: host:/path): some address:/volume1
 The specified storage location already contains a data
 domain.
 Is this an additional host setup (Yes, No)[Yes]?
 [ INFO  ] Installing on additional host
 Please specify the Host ID [Must be integer, default: 2]:
 The Host ID is already known. Is this a re-deployment
 on an
 additional host that was previously set up (Yes, No)[Yes]?

 --== SYSTEM CONFIGURATION ==--

 [WARNING] A configuration file must be supplied to deploy Hosted
 Engine
 on an additional host.
 The answer file may be fetched from the first host
 using scp.
 If you do not want to download it automatically you can
 abort
 the setup answering no to the following question.
 Do you want to scp the answer file from the first host?
 (Yes,
 No)[Yes]:
 Please provide the FQDN or IP of the first host:
 node1.domain
 Enter 'root' user password for host node1.domain:
 [ INFO  ] Answer file successfully downloaded

 --== NETWORK CONFIGURATION ==--

 The following CPU types are supported by this host:
- model_Westmere: Intel Westmere Family
- model_Nehalem: Intel Nehalem Family
- model_Penryn: Intel Penryn Family
- model_Conroe: Intel Conroe Family

 --== HOSTED ENGINE CONFIGURATION ==--

 Enter the name which will be used to identify this host
 inside
 the Administrator Portal [hosted_engine_2]:
 Enter 'admin@internal' user password that will be used for
 accessing the Administrator Portal:
 Confirm 'admin@internal' user password:
[ INFO  ] Stage: Setup validation

 --== CONFIGURATION PREVIEW ==--

 Engine FQDN: engine.domain
 Bridge name: ovirtmgmt
 SSH daemon port: 22
 Gateway address: some address
 Host name for web application  : hosted_engine_2
 Host ID: 2
 Image size GB  : 25
 Storage connection : some
 address:/volume1
 Console type   : vnc
 Memory size MB : 8192
 MAC address: 00:16:3e:3b:8d:66
 Boot type  : disk
 Number of CPUs : 2
 CPU Type   : model_Westmere

 Please confirm installation settings (Yes, No)[No]: yes
 [ ERROR ] Invalid value

 Please confirm installation settings (Yes, No)[No]: Yes
 [ INFO  ] Stage: Transaction setup
 [ INFO  ] Stage: Misc configuration
 [ INFO  ] Stage: Package installation
 [ INFO  

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-23 Thread Itamar Heim

On Sep 23, 2014 7:03 PM, Stefan Wendler stefan.wend...@tngtech.com wrote:

 On 09/23/2014 17:01, Itamar Heim wrote: 
  On 09/23/2014 05:17 PM, Stefan Wendler wrote: 
  On 09/22/2014 10:52, Stefan Wendler wrote: 
  On 09/19/2014 15:58, Itamar Heim wrote: 
  On 09/19/2014 03:32 PM, Stefan Wendler wrote: 
  Hi there. 
  
  I'm trying to install a hosted-engine on our second node (first engine 
  runs on node1). 
  
  But I always get the message: 
  
  [ ERROR ] Cannot automatically add the host to the Default cluster: 
  Cannot add Host. Host with the same address already exists. 
  
  I'm not entirely sure what I have to do when this message comes, so I 
  just press ENTER: 
  
  ### 
  To continue make a selection from the options below: 
  (1) Continue setup - engine installation is complete 
  (2) Power off and restart the VM 
  (3) Abort setup 
  
  (1, 2, 3)[1]: 
   
  
  Is there any other interaction required prior to selecting 1? 
  
  In the Web Gui I get the following message: 
  
  X Adding new Host hosted_engine_2 to Cluster Default 
  
  Here is the console output: 
  
  # hosted-engine --deploy 
  [ INFO  ] Stage: Initializing 
  Continuing will configure this host for serving as 
  hypervisor 
  and create a VM where you have to install oVirt Engine afterwards. 
  Are you sure you want to continue? (Yes, No)[Yes]: 
  [ INFO  ] Generating a temporary VNC password. 
  [ INFO  ] Stage: Environment setup 
  Configuration files: [] 
  Log file: 
  /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140919141012-k2lag6.log
   
  
  
  Version: otopi-1.2.3 (otopi-1.2.3-1.el6) 
  [ INFO  ] Hardware supports virtualization 
  [ INFO  ] Bridge ovirtmgmt already created 
  [ INFO  ] Stage: Environment packages setup 
  [ INFO  ] Stage: Programs detection 
  [ INFO  ] Stage: Environment setup 
  [ INFO  ] Stage: Environment customization 
  
  --== STORAGE CONFIGURATION ==-- 
  
  During customization use CTRL-D to abort. 
  Please specify the storage you would like to use (nfs3, 
  nfs4)[nfs3]: 
  Please specify the full shared storage connection path 
  to use 
  (example: host:/path): some address:/volume1 
  The specified storage location already contains a data 
  domain. 
  Is this an additional host setup (Yes, No)[Yes]? 
  [ INFO  ] Installing on additional host 
  Please specify the Host ID [Must be integer, default: 2]: 
  The Host ID is already known. Is this a re-deployment 
  on an 
  additional host that was previously set up (Yes, No)[Yes]? 
  
  --== SYSTEM CONFIGURATION ==-- 
  
  [WARNING] A configuration file must be supplied to deploy Hosted 
  Engine 
  on an additional host. 
  The answer file may be fetched from the first host 
  using scp. 
  If you do not want to download it automatically you can 
  abort 
  the setup answering no to the following question. 
  Do you want to scp the answer file from the first host? 
  (Yes, 
  No)[Yes]: 
  Please provide the FQDN or IP of the first host: 
  node1.domain 
  Enter 'root' user password for host node1.domain: 
  [ INFO  ] Answer file successfully downloaded 
  
  --== NETWORK CONFIGURATION ==-- 
  
  The following CPU types are supported by this host: 
     - model_Westmere: Intel Westmere Family 
     - model_Nehalem: Intel Nehalem Family 
     - model_Penryn: Intel Penryn Family 
     - model_Conroe: Intel Conroe Family 
  
  --== HOSTED ENGINE CONFIGURATION ==-- 
  
  Enter the name which will be used to identify this host 
  inside 
  the Administrator Portal [hosted_engine_2]: 
  Enter 'admin@internal' user password that will be used for 
  accessing the Administrator Portal: 
  Confirm 'admin@internal' user password: 
     [ INFO  ] Stage: Setup validation 
  
  --== CONFIGURATION PREVIEW ==-- 
  
  Engine FQDN    : engine.domain 
  Bridge name    : ovirtmgmt 
  SSH daemon port    : 22 
  Gateway address    : some address 
  Host name for web application  : hosted_engine_2 
  Host ID    : 2 
  Image size GB  : 25 
  Storage connection : some 
  address:/volume1 
  Console type   : vnc 
  Memory size MB : 8192 
  MAC address    : 00:16:3e:3b:8d:66 
  Boot type  : disk 
  Number of CPUs : 2 
  CPU Type 

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-22 Thread Stefan Wendler
On 09/19/2014 15:58, Itamar Heim wrote:
 On 09/19/2014 03:32 PM, Stefan Wendler wrote:
 Hi there.

 I'm trying to install a hosted-engine on our second node (first engine
 runs on node1).

 But I always get the message:

 [ ERROR ] Cannot automatically add the host to the Default cluster:
 Cannot add Host. Host with the same address already exists.

 I'm not entirely sure what I have to do when this message comes, so I
 just press ENTER:

 ###
 To continue make a selection from the options below:
(1) Continue setup - engine installation is complete
(2) Power off and restart the VM
(3) Abort setup

(1, 2, 3)[1]:
 

 Is there any other interaction required prior to selecting 1?

 In the Web Gui I get the following message:

 X Adding new Host hosted_engine_2 to Cluster Default

 Here is the console output:

 # hosted-engine --deploy
 [ INFO  ] Stage: Initializing
Continuing will configure this host for serving as hypervisor
 and create a VM where you have to install oVirt Engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]:
 [ INFO  ] Generating a temporary VNC password.
 [ INFO  ] Stage: Environment setup
Configuration files: []
Log file:
 /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140919141012-k2lag6.log

Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
 [ INFO  ] Hardware supports virtualization
 [ INFO  ] Bridge ovirtmgmt already created
 [ INFO  ] Stage: Environment packages setup
 [ INFO  ] Stage: Programs detection
 [ INFO  ] Stage: Environment setup
 [ INFO  ] Stage: Environment customization

--== STORAGE CONFIGURATION ==--

During customization use CTRL-D to abort.
Please specify the storage you would like to use (nfs3,
 nfs4)[nfs3]:
Please specify the full shared storage connection path to use
 (example: host:/path): some address:/volume1
The specified storage location already contains a data domain.
 Is this an additional host setup (Yes, No)[Yes]?
 [ INFO  ] Installing on additional host
Please specify the Host ID [Must be integer, default: 2]:
The Host ID is already known. Is this a re-deployment on an
 additional host that was previously set up (Yes, No)[Yes]?

--== SYSTEM CONFIGURATION ==--

 [WARNING] A configuration file must be supplied to deploy Hosted Engine
 on an additional host.
The answer file may be fetched from the first host using scp.
If you do not want to download it automatically you can abort
 the setup answering no to the following question.
Do you want to scp the answer file from the first host? (Yes,
 No)[Yes]:
Please provide the FQDN or IP of the first host:
 node1.domain
Enter 'root' user password for host node1.domain:
 [ INFO  ] Answer file successfully downloaded

--== NETWORK CONFIGURATION ==--

The following CPU types are supported by this host:
   - model_Westmere: Intel Westmere Family
   - model_Nehalem: Intel Nehalem Family
   - model_Penryn: Intel Penryn Family
   - model_Conroe: Intel Conroe Family

--== HOSTED ENGINE CONFIGURATION ==--

Enter the name which will be used to identify this host inside
 the Administrator Portal [hosted_engine_2]:
Enter 'admin@internal' user password that will be used for
 accessing the Administrator Portal:
Confirm 'admin@internal' user password:
   [ INFO  ] Stage: Setup validation

--== CONFIGURATION PREVIEW ==--

Engine FQDN: engine.domain
Bridge name: ovirtmgmt
SSH daemon port: 22
Gateway address: some address
Host name for web application  : hosted_engine_2
Host ID: 2
Image size GB  : 25
Storage connection : some address:/volume1
Console type   : vnc
Memory size MB : 8192
MAC address: 00:16:3e:3b:8d:66
Boot type  : disk
Number of CPUs : 2
CPU Type   : model_Westmere

Please confirm installation settings (Yes, No)[No]: yes
 [ ERROR ] Invalid value

Please confirm installation settings (Yes, No)[No]: Yes
 [ INFO  ] Stage: Transaction setup
 [ INFO  ] Stage: Misc configuration
 [ INFO  ] Stage: Package installation
 [ INFO  ] Stage: Misc configuration
 [ INFO  ] Configuring libvirt
 [ INFO  ] Configuring VDSM
 [ INFO  ] Starting vdsmd
 [ INFO  ] Waiting for VDSM hardware info
 [ INFO  ] Waiting for VDSM hardware 

[ovirt-users] hosted engine setup on second host fails

2014-09-19 Thread Stefan Wendler
Hi there.

I'm trying to install a hosted-engine on our second node (first engine
runs on node1).

But I always get the message:

[ ERROR ] Cannot automatically add the host to the Default cluster:
Cannot add Host. Host with the same address already exists.

I'm not entirely sure what I have to do when this message comes, so I
just press ENTER:

###
To continue make a selection from the options below:
  (1) Continue setup - engine installation is complete
  (2) Power off and restart the VM
  (3) Abort setup

  (1, 2, 3)[1]:


Is there any other interaction required prior to selecting 1?

In the Web Gui I get the following message:

X Adding new Host hosted_engine_2 to Cluster Default

Here is the console output:

# hosted-engine --deploy
[ INFO  ] Stage: Initializing
  Continuing will configure this host for serving as hypervisor
and create a VM where you have to install oVirt Engine afterwards.
  Are you sure you want to continue? (Yes, No)[Yes]:
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
  Configuration files: []
  Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140919141012-k2lag6.log
  Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Bridge ovirtmgmt already created
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

  --== STORAGE CONFIGURATION ==--

  During customization use CTRL-D to abort.
  Please specify the storage you would like to use (nfs3,
nfs4)[nfs3]:
  Please specify the full shared storage connection path to use
(example: host:/path): some address:/volume1
  The specified storage location already contains a data domain.
Is this an additional host setup (Yes, No)[Yes]?
[ INFO  ] Installing on additional host
  Please specify the Host ID [Must be integer, default: 2]:
  The Host ID is already known. Is this a re-deployment on an
additional host that was previously set up (Yes, No)[Yes]?

  --== SYSTEM CONFIGURATION ==--

[WARNING] A configuration file must be supplied to deploy Hosted Engine
on an additional host.
  The answer file may be fetched from the first host using scp.
  If you do not want to download it automatically you can abort
the setup answering no to the following question.
  Do you want to scp the answer file from the first host? (Yes,
No)[Yes]:
  Please provide the FQDN or IP of the first host: node1.domain
  Enter 'root' user password for host node1.domain:
[ INFO  ] Answer file successfully downloaded

  --== NETWORK CONFIGURATION ==--

  The following CPU types are supported by this host:
 - model_Westmere: Intel Westmere Family
 - model_Nehalem: Intel Nehalem Family
 - model_Penryn: Intel Penryn Family
 - model_Conroe: Intel Conroe Family

  --== HOSTED ENGINE CONFIGURATION ==--

  Enter the name which will be used to identify this host inside
the Administrator Portal [hosted_engine_2]:
  Enter 'admin@internal' user password that will be used for
accessing the Administrator Portal:
  Confirm 'admin@internal' user password:
 [ INFO  ] Stage: Setup validation

  --== CONFIGURATION PREVIEW ==--

  Engine FQDN: engine.domain
  Bridge name: ovirtmgmt
  SSH daemon port: 22
  Gateway address: some address
  Host name for web application  : hosted_engine_2
  Host ID: 2
  Image size GB  : 25
  Storage connection : some address:/volume1
  Console type   : vnc
  Memory size MB : 8192
  MAC address: 00:16:3e:3b:8d:66
  Boot type  : disk
  Number of CPUs : 2
  CPU Type   : model_Westmere

  Please confirm installation settings (Yes, No)[No]: yes
[ ERROR ] Invalid value

  Please confirm installation settings (Yes, No)[No]: Yes
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Connecting Storage Domain
[ INFO  ] Configuring VM
[ INFO  ] Updating hosted-engine configuration
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up
  To continue make a selection from the options below:

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-19 Thread Itamar Heim

On 09/19/2014 03:32 PM, Stefan Wendler wrote:

Hi there.

I'm trying to install a hosted-engine on our second node (first engine
runs on node1).

But I always get the message:

[ ERROR ] Cannot automatically add the host to the Default cluster:
Cannot add Host. Host with the same address already exists.

I'm not entirely sure what I have to do when this message comes, so I
just press ENTER:

###
To continue make a selection from the options below:
   (1) Continue setup - engine installation is complete
   (2) Power off and restart the VM
   (3) Abort setup

   (1, 2, 3)[1]:


Is there any other interaction required prior to selecting 1?

In the Web Gui I get the following message:

X Adding new Host hosted_engine_2 to Cluster Default

Here is the console output:

# hosted-engine --deploy
[ INFO  ] Stage: Initializing
   Continuing will configure this host for serving as hypervisor
and create a VM where you have to install oVirt Engine afterwards.
   Are you sure you want to continue? (Yes, No)[Yes]:
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
   Configuration files: []
   Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140919141012-k2lag6.log
   Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Bridge ovirtmgmt already created
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

   --== STORAGE CONFIGURATION ==--

   During customization use CTRL-D to abort.
   Please specify the storage you would like to use (nfs3,
nfs4)[nfs3]:
   Please specify the full shared storage connection path to use
(example: host:/path): some address:/volume1
   The specified storage location already contains a data domain.
Is this an additional host setup (Yes, No)[Yes]?
[ INFO  ] Installing on additional host
   Please specify the Host ID [Must be integer, default: 2]:
   The Host ID is already known. Is this a re-deployment on an
additional host that was previously set up (Yes, No)[Yes]?

   --== SYSTEM CONFIGURATION ==--

[WARNING] A configuration file must be supplied to deploy Hosted Engine
on an additional host.
   The answer file may be fetched from the first host using scp.
   If you do not want to download it automatically you can abort
the setup answering no to the following question.
   Do you want to scp the answer file from the first host? (Yes,
No)[Yes]:
   Please provide the FQDN or IP of the first host: node1.domain
   Enter 'root' user password for host node1.domain:
[ INFO  ] Answer file successfully downloaded

   --== NETWORK CONFIGURATION ==--

   The following CPU types are supported by this host:
  - model_Westmere: Intel Westmere Family
  - model_Nehalem: Intel Nehalem Family
  - model_Penryn: Intel Penryn Family
  - model_Conroe: Intel Conroe Family

   --== HOSTED ENGINE CONFIGURATION ==--

   Enter the name which will be used to identify this host inside
the Administrator Portal [hosted_engine_2]:
   Enter 'admin@internal' user password that will be used for
accessing the Administrator Portal:
   Confirm 'admin@internal' user password:
  [ INFO  ] Stage: Setup validation

   --== CONFIGURATION PREVIEW ==--

   Engine FQDN: engine.domain
   Bridge name: ovirtmgmt
   SSH daemon port: 22
   Gateway address: some address
   Host name for web application  : hosted_engine_2
   Host ID: 2
   Image size GB  : 25
   Storage connection : some address:/volume1
   Console type   : vnc
   Memory size MB : 8192
   MAC address: 00:16:3e:3b:8d:66
   Boot type  : disk
   Number of CPUs : 2
   CPU Type   : model_Westmere

   Please confirm installation settings (Yes, No)[No]: yes
[ ERROR ] Invalid value

   Please confirm installation settings (Yes, No)[No]: Yes
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Connecting Storage Domain
[ INFO  ] Configuring VM
[ INFO  ] Updating hosted-engine configuration
[ INFO  ] Stage: Transaction