[ovirt-users] LDAP login extension

2018-07-01 Thread Mariusz Kozakowski
Hello,

We managed to set up oVirt Engine with your help; now we're facing another issue.

I'm trying to configure AD auth for the web portal, but unfortunately I got an error 
during ovirt-engine-extension-aaa-ldap-setup:


  2018-06-27 09:06:21,926+02 INFO
  2018-06-27 09:06:21,926+02 INFO    == Execution ===
  2018-06-27 09:06:21,926+02 INFO
  2018-06-27 09:06:21,927+02 INFO    Iteration: 0
  2018-06-27 09:06:21,928+02 INFO    Profile='ad' authn='ad-authn' authz='ad-authz' mapping='null'
  2018-06-27 09:06:21,928+02 INFO    API: -->Authn.InvokeCommands.AUTHENTICATE_CREDENTIALS profile='ad' user='username'
  2018-06-27 09:06:21,945+02 INFO    API: <--Authn.InvokeCommands.AUTHENTICATE_CREDENTIALS profile='ad' result=SUCCESS
  2018-06-27 09:06:21,948+02 INFO    --- Begin AuthRecord ---
  2018-06-27 09:06:21,949+02 INFO    AAA_AUTHN_AUTH_RECORD_PRINCIPAL: username
  2018-06-27 09:06:21,949+02 INFO    --- End   AuthRecord ---
  2018-06-27 09:06:21,950+02 INFO    API: -->Authz.InvokeCommands.FETCH_PRINCIPAL_RECORD principal='username'
  2018-06-27 09:06:21,952+02 WARNING Ignoring records from pool: 'gc'
  2018-06-27 09:06:21,953+02 SEVERE  Cannot resolve principal 'username'

Do you have any idea what the issue is and what we're missing? It looks like the 
credentials are correct: passing a wrong username fails earlier, so the issue is 
somewhere after authentication.
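
For reference, one way to check the authz side by hand would be an LDAP search 
against an AD Global Catalog (port 3268). This is only a sketch; the server, 
bind DN, base DN, and account name below are placeholders:

  # Ask a Global Catalog whether the bind account can resolve the principal.
  ldapsearch -H ldap://dc.example.com:3268 \
    -D 'CN=svc-ovirt,OU=Service Accounts,DC=example,DC=com' -W \
    -b 'DC=example,DC=com' '(sAMAccountName=username)' dn userPrincipalName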


--

Best regards/Pozdrawiam/MfG

Mariusz Kozakowski

Site Reliability Engineer

Dansk Supermarked Group
Baltic Business Park
ul. 1 Maja 38-39
71-627 Szczecin
dansksupermarked.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6BZXOA6ZXMSN5EPC67LNBUSANJLUBHA7/


[ovirt-users] Re: oVirt hosted-engine-setup issues with getting host facts

2018-06-07 Thread Mariusz Kozakowski
On Tue, 2018-06-05 at 23:26 +0200, Simone Tiraboschi wrote:
I tried to deploy hosted-engine over a VLAN on a bond and everything worked as 
expected, but I also found a case where it fails: SetupNetworks is going to fail 
if bond0.123 is correctly configured with an IPv4 address while the untagged 
bond0 lacks IPv4 configuration.
Simply configuring a static IPv4 address from an unused subnet on the untagged 
bond is a valid workaround.
I just opened https://bugzilla.redhat.com/show_bug.cgi?id=1586280


OK, I removed all bridges, so the IPs are configured on bond0.id; I added a dummy 
IP on bond0 and flushed iptables. And it works. Thank you for your help!
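
Adding the dummy IP amounted to roughly this (a sketch; 192.0.2.10/24 is just a 
placeholder from an unused subnet, as suggested above):

  # Assign a static address from an unused subnet to the untagged bond.
  ip addr add 192.0.2.10/24 dev bond0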


One last question: is the bond0 IP still needed, or was it only an issue during 
install and we can delete it now?


Cheers

--

Best regards/Pozdrawiam/MfG

Mariusz Kozakowski

Site Reliability Engineer

Dansk Supermarked Group
Baltic Business Park
ul. 1 Maja 38-39
71-627 Szczecin
dansksupermarked.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QNJNDRBTNXBPMF57PYRAKJKPUMGAEGCO/


[ovirt-users] Re: oVirt hosted-engine-setup issues with getting host facts

2018-06-07 Thread Mariusz Kozakowski
] 
(EE-ManagedThreadFactory-engineScheduled-Thread-45) [374e0c] START, 
SetVdsStatusVDSCommand(HostName = host01.redacted, 
SetVdsStatusVDSCommandParameters:{hostId='9956cebf-59ab-426b-ace6-25342705445e',
 status='NonOperational', nonOperationalReason='NETWORK_UNREACHABLE', 
stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 460b0642
2018-06-05 13:26:19,106+02 INFO  
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-45) [374e0c] FINISH, 
SetVdsStatusVDSCommand, log id: 460b0642
2018-06-05 13:26:19,161+02 ERROR 
[org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-45) [374e0c] Host 
'host01.redacted' is set to Non-Operational, it is missing the following 
networks: 'ovirtmgmt'
2018-06-05 13:26:19,196+02 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-45) [374e0c] EVENT_ID: 
VDS_SET_NONOPERATIONAL_NETWORK(519), Host host01.redacted does not comply with 
the cluster Default networks, the following networks are missing on host: 
'ovirtmgmt'
2018-06-05 13:26:19,235+02 INFO  
[org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-45) [647a0b6c] Running command: 
HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected :  
ID: 9956cebf-59ab-426b-ace6-25342705445e Type: VDS


Answers used for the Ansible deploy:

OVEHOSTED_NETWORK/bridgeIf=str:br0.
OVEHOSTED_NETWORK/bridgeName=str:ovirtmgmt

Do you have any idea why ovirtmgmt wasn't created during deploy? Or should we 
use different network settings for the deploy script?

On Tue, 2018-06-05 at 11:03 +0200, Simone Tiraboschi wrote:


On Tue, Jun 5, 2018 at 10:16 AM, Mariusz Kozakowski 
<mariusz.kozakow...@sallinggroup.com> wrote:
On Tue, 2018-06-05 at 10:09 +0200, Simone Tiraboschi wrote:
But did you manually create the bridge, or did the engine create it for you, 
triggered by hosted-engine-setup?

Manually. Before, we had br0.. Should we go back to the br0. network 
configuration, with no ovirtmgmt network created?


Yes, I'm pretty sure that the issue is there; see also 
https://bugzilla.redhat.com/show_bug.cgi?id=1317125

It will work for sure if the management bridge was created in the past by the 
engine, but we had a lot of failure reports from trying to consume a management 
bridge that was manually created with wrong options.
Letting the engine create it with the right configuration is by far the safest 
option.


Also, what should we use as answers here?

OVEHOSTED_NETWORK/bridgeIf=str:bond0.
OVEHOSTED_NETWORK/bridgeName=str:ovirtmgmt



Yes, this should be fine.



--

Best regards/Pozdrawiam/MfG

Mariusz Kozakowski

Site Reliability Engineer

Dansk Supermarked Group
Baltic Business Park
ul. 1 Maja 38-39
71-627 Szczecin
dansksupermarked.com


--

Best regards/Pozdrawiam/MfG

Mariusz Kozakowski

Site Reliability Engineer

Dansk Supermarked Group
Baltic Business Park
ul. 1 Maja 38-39
71-627 Szczecin
dansksupermarked.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YJIT26K5UBK4X5JZVOQ4ALKS5RRXCGHW/


[ovirt-users] Re: oVirt hosted-engine-setup issues with getting host facts

2018-06-07 Thread Mariusz Kozakowski
On Tue, 2018-06-05 at 10:09 +0200, Simone Tiraboschi wrote:
But did you manually create the bridge, or did the engine create it for you, 
triggered by hosted-engine-setup?

Manually. Before, we had br0.. Should we go back to the br0. network 
configuration, with no ovirtmgmt network created?

Also, what should we use as answers here?

OVEHOSTED_NETWORK/bridgeIf=str:bond0.
OVEHOSTED_NETWORK/bridgeName=str:ovirtmgmt



--

Best regards/Pozdrawiam/MfG

Mariusz Kozakowski

Site Reliability Engineer

Dansk Supermarked Group
Baltic Business Park
ul. 1 Maja 38-39
71-627 Szczecin
dansksupermarked.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UGEGWBWWSKWEYPHSDSXURNBDM44DBE7U/


[ovirt-users] Re: oVirt hosted-engine-setup issues with getting host facts

2018-06-07 Thread Mariusz Kozakowski
Hi,

we managed to get a bit further, but we still face issues.

2018-06-05 09:38:42,556+02 INFO  
[org.ovirt.engine.core.bll.host.HostConnectivityChecker] 
(EE-ManagedThreadFactory-engine-Thread-1) [2617aebd] Engine managed to 
communicate with VDSM agent on host 'host01.redacted' with address 
'host01.redacted' ('8af21ab3-ce7a-49a5-a526-94b65aa3da29')
2018-06-05 09:38:47,488+02 WARN  
[org.ovirt.engine.core.bll.network.NetworkConfigurator] 
(EE-ManagedThreadFactory-engine-Thread-1) [2617aebd] Failed to find a valid 
interface for the management network of host host01.redacted. If the interface 
ovirtmgmt is a bridge, it should be torn-down manually.
2018-06-05 09:38:47,488+02 ERROR 
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] 
(EE-ManagedThreadFactory-engine-Thread-1) [2617aebd] Exception: 
org.ovirt.engine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException:
 Interface ovirtmgmt is invalid for management network
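
Since the log says a pre-existing ovirtmgmt bridge should be torn down manually, 
a rough sketch of doing that with iproute2 might look like this (if the bridge 
is also defined in ifcfg files or NetworkManager, that configuration would need 
to be removed as well, and the host's IP put back on the VLAN interface itself):

  # Remove the manually created management bridge so the engine can create it.
  ip link set ovirtmgmt down
  ip link delete ovirtmgmt type bridge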

Our network configuration is below; bond0. is bridged into ovirtmgmt:

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
[…]
11: ovirtmgmt:  mtu 1500 qdisc noqueue state 
UP group default qlen 1000
link/ether 5c:f3:fc:da:b6:18 brd ff:ff:ff:ff:ff:ff
inet 1.2.3.42/24 brd 1.2.3.255 scope global noprefixroute ovirtmgmt
   valid_lft forever preferred_lft forever
inet6 fe80::e8dd:fff:fe33:4bba/64 scope link
   valid_lft forever preferred_lft forever
12: bond0:  mtu 1500 qdisc noqueue 
state UP group default qlen 1000
link/ether 5c:f3:fc:da:b6:18 brd ff:ff:ff:ff:ff:ff
inet6 fe80::5ef3:fcff:feda:b618/64 scope link
   valid_lft forever preferred_lft forever
13: bond0.3019@bond0:  mtu 1500 qdisc noqueue master br0.3019 state UP group 
default qlen 1000
link/ether 5c:f3:fc:da:b6:18 brd ff:ff:ff:ff:ff:ff
14: bond0.@bond0:  mtu 1500 qdisc noqueue master ovirtmgmt state UP group 
default qlen 1000
link/ether 5c:f3:fc:da:b6:18 brd ff:ff:ff:ff:ff:ff
15: br0.3019:  mtu 1500 qdisc noqueue state UP 
group default qlen 1000
link/ether 5c:f3:fc:da:b6:18 brd ff:ff:ff:ff:ff:ff
inet 19.2.3.22/16 brd 192.168.255.255 scope global noprefixroute br0.3019
   valid_lft forever preferred_lft forever
inet6 fe80::5ef3:fcff:feda:b618/64 scope link
   valid_lft forever preferred_lft forever
31: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN group 
default qlen 1000
link/ether da:aa:73:7e:d7:93 brd ff:ff:ff:ff:ff:ff
32: virbr0:  mtu 1500 qdisc noqueue state UP 
group default qlen 1000
link/ether 52:54:00:a6:75:67 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
   valid_lft forever preferred_lft forever
33: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 
state DOWN group default qlen 1000
link/ether 52:54:00:a6:75:67 brd ff:ff:ff:ff:ff:ff
40: vnet0:  mtu 1500 qdisc pfifo_fast master 
virbr0 state UNKNOWN group default qlen 1000
link/ether fe:16:3e:2d:0d:55 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe2d:d55/64 scope link
   valid_lft forever preferred_lft forever
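
Given the "Failed to find a valid interface for the management network" error 
above, a quick sanity check might be to confirm which device actually carries 
the address the host name resolves to (a sketch; host01.redacted is the redacted 
name from the logs):

  # Resolve the host name, then find the interface holding that address.
  getent ahostsv4 host01.redacted | awk 'NR==1 {print $1}'
  ip -o -4 addr show | grep -F "$(getent ahostsv4 host01.redacted | awk 'NR==1 {print $1}')"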

On Mon, 2018-05-28 at 12:57 +0200, Simone Tiraboschi wrote:


On Mon, May 28, 2018 at 11:44 AM, Mariusz Kozakowski 
<mariusz.kozakow...@dsg.dk> wrote:



On Fri, May 25, 2018 at 9:20 AM, Mariusz Kozakowski 
<mariusz.kozakow...@dsg.dk> wrote:
On Thu, 2018-05-24 at 14:11 +0200, Simone Tiraboschi wrote:
To better understand what is happening, you have to check the host-deploy logs; 
they are available under /var/log/ovirt-engine/host-deploy/ on your engine VM.

Unfortunately there are no logs under that directory. It's empty.


So it probably failed to reach the host due to a name resolution issue or 
something like that.
Can you please double check it in /var/log/ovirt-engine/engine.log on the 
engine VM ?


Thanks, it helped a bit. At least now we have logs for host-deploy, but still 
no success.

A few parts I found in the engine log:

2018-05-28 11:07:39,473+02 ERROR 
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] 
(EE-ManagedThreadFactory-engine-Thread-1) [1a4cf85e] Exception: 
org.ovirt.engine.core.common.errors.EngineException: EngineException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: 
VDSGenericException: VDSNetworkException: Message timeout which can be caused 
by communication issues (Failed with error VDS_NETWORK_ERROR and code 5022)


2018-05-28 11:07:39,485+02 ERROR 
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] 
(EE-ManagedThreadFactory-engine-Thread-1) [1a4cf85e] Host installation failed 
for host '098c3c99-921d-46f0-bdba-86370a2dc895', 'host01.redacted': Failed 


[ovirt-users] Re: oVirt hosted-engine-setup issues with getting host facts

2018-05-30 Thread Mariusz Kozakowski
On Mon, 2018-05-28 at 12:57 +0200, Simone Tiraboschi wrote:
The issue is in the network configuration: 
you have to check /var/log/vdsm/vdsm.log and /var/log/vdsm/supervdsm.log to 
understand why it failed.

From the same time frame, in vdsm.log. Can this be related?

2018-05-28 11:07:34,481+0200 INFO  (jsonrpc/1) [api.host] START getAllVmStats() 
from=::1,45816 (api:46)
2018-05-28 11:07:34,482+0200 INFO  (jsonrpc/1) [api.host] FINISH getAllVmStats 
return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} 
from=::1,45816 (api:52)
2018-05-28 11:07:34,483+0200 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call 
Host.getAllVmStats succeeded in 0.01 seconds (__init__:573)
2018-05-28 11:07:34,489+0200 INFO  (jsonrpc/2) [api.host] START 
getAllVmIoTunePolicies() from=::1,45816 (api:46)
2018-05-28 11:07:34,489+0200 INFO  (jsonrpc/2) [api.host] FINISH 
getAllVmIoTunePolicies return={'status': {'message': 'Done', 'code': 0}, 
'io_tune_policies_dict': {'405f8ec0-03f9-43cb-a7e1-343a4c30453f': {'policy': 
[], 'current_values': []}}} from=::1,45816 (api:52)
2018-05-28 11:07:34,490+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call 
Host.getAllVmIoTunePolicies succeeded in 0.00 seconds (__init__:573)
2018-05-28 11:07:35,555+0200 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=6f517c47-a9f3-4913-bf9d-661355262c38 (api:46)
2018-05-28 11:07:35,555+0200 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=6f517c47-a9f3-4913-bf9d-661355262c38 (api:52)
2018-05-28 11:07:35,555+0200 INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:707)
2018-05-28 11:07:38,982+0200 WARN  (vdsm.Scheduler) [Executor] Worker blocked: 
 timeout=60, 
duration=180 at 0x2fc3650> task#=1 at 0x3590710>, traceback:
File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap
  self.__bootstrap_inner()
File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
  self.run()
File: "/usr/lib64/python2.7/threading.py", line 765, in run
  self.__target(*self.__args, **self.__kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 194, 
in run
  ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run
  self._execute_task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in 
_execute_task
  task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__
  self._callable()
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 523, in 
__call__
  self._handler(self._ctx, self._req)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 566, in 
_serveRequest
  response = self._handle_request(req, ctx)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in 
_handle_request
  res = method(**params)
File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, in 
_dynamicMethod
  result = fn(*methodArgs)
File: "", line 2, in getCapabilities
File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
  ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1339, in 
getCapabilities
  c = caps.get()
File: "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", line 201, in get
  liveSnapSupported = _getLiveSnapshotSupport(cpuarch.effective())
File: "/usr/lib/python2.7/site-packages/vdsm/common/cache.py", line 41, in 
__call__
  value = self.func(*args)
File: "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", line 92, in 
_getLiveSnapshotSupport
  capabilities = _getCapsXMLStr()
File: "/usr/lib/python2.7/site-packages/vdsm/common/cache.py", line 41, in 
__call__
  value = self.func(*args)
File: "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", line 60, in 
_getCapsXMLStr
  return _getFreshCapsXMLStr()
File: "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", line 55, in 
_getFreshCapsXMLStr
  return libvirtconnection.get().getCapabilities()
File: "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 
130, in wrapper
  ret = f(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in 
wrapper
  return func(inst, *args, **kwargs)
File: "/usr/lib64/python2.7/site-packages/libvirt.py", line 3669, in 
getCapabilities
  ret = libvirtmod.virConnectGetCapabilities(self._o) (executor:363)
2018-05-28 11:07:40,561+0200 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=2ef7c0a3-4cf9-436e-ad6b-ee04e2a0cf3a (api:46)
2018-05-28 11:07:40,561+0200 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=2ef7c0a3-4cf9-436e-ad6b-ee04e2a0cf3a (api:52)
2018-05-28 11:07:40,561+0200 INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:707)

supervdsm has nothing at 11:07:39.
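
Since the blocked worker above is stuck inside libvirt's getCapabilities call, a 
possible next check (just a sketch) would be whether libvirtd itself is 
responsive on the host:

  # Check the libvirt daemon and try a read-only capabilities query.
  systemctl status libvirtd
  virsh -r capabilities | head -n 20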

[ovirt-users] Re: oVirt hosted-engine-setup issues with getting host facts

2018-05-27 Thread Mariusz Kozakowski
On Thu, 2018-05-24 at 14:11 +0200, Simone Tiraboschi wrote:
To better understand what is happening, you have to check the host-deploy logs; 
they are available under /var/log/ovirt-engine/host-deploy/ on your engine VM.

Unfortunately there are no logs under that directory. It's empty.


--

Best regards/Pozdrawiam/MfG

Mariusz Kozakowski

Site Reliability Engineer

Dansk Supermarked Group
Baltic Business Park
ul. 1 Maja 38-39
71-627 Szczecin
dansksupermarked.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] oVirt hosted-engine-setup issues with getting host facts

2018-05-24 Thread Mariusz Kozakowski
Hello,

We've been trying to set up an oVirt environment for a few days, but we have an 
issue with hosted-engine-setup (the Ansible script).
We managed to fix a few small things and have them merged upstream, but 
unfortunately the installation process now fails on getting host facts.
It looks like it cannot proceed because it fails when connecting to the 
ovirt-engine API of the bootstrap VM.

The oVirt API / web panel is working; I tested it via a browser and can log in 
without issues using the admin password chosen earlier in the process.

2018-05-18 15:26:47,800+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:100 TASK [Wait for the host to be up]
2018-05-18 15:39:14,025+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 
{u'_ansible_parsed': True, u'_ansible_no_log': False, u'changed': False, 
u'attempts': 120, u'invocation': {u'module_args': {
u'pattern': u'name=host01.redacted', u'fetch_nested': False, 
u'nested_attributes': []}}, u'ansible_facts': {u'ovirt_hosts': []}}
2018-05-18 15:39:14,127+0200 ERROR 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 
fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": []}, 
"attempts": 120, "changed": false}


May 18 13:34:34 host01 python: ansible-ovirt_hosts_facts Invoked with 
pattern=name=host01.redacted fetch_nested=False nested_attributes=[] 
auth={'timeout': 0, 'url': 'https://ovirt-dev.redacted/ovirt-engine/api', 
'insecure': True, 'kerberos': False, 'compress': True, 'headers': None, 
'token': 'R--token-redacted', 'ca_file': None}
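
One way to see by hand what that facts module gets back might be to query the 
engine's hosts collection directly (a sketch; the URL, user, and password are 
placeholders matching the redacted values above):

  # Ask the engine REST API for the host by name, ignoring the self-signed cert.
  curl -k -u 'admin@internal:PASSWORD' \
    'https://ovirt-dev.redacted/ovirt-engine/api/hosts?search=name%3Dhost01.redacted'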


Do you have any idea what or where the issue is and how to fix it?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org