No, not yet, but first I think I need to understand what OpenShift is
trying to do at this point.
Any Red Hatters out there who understand this?
On 17/01/18 10:56, Joel Pearson wrote:
Have you tried an OpenStack users list? It sounds like you need
someone with in-depth OpenStack knowledge.
On Wed, 17 Jan 2018 at 9:55 pm, Tim Dudgeon <tdudgeon...@gmail.com> wrote:
So what does "complete an install" entail?
Presumably OpenShift/Kubernetes is trying to do something in
OpenStack but this is failing.
But what is it trying to do?
On 17/01/18 10:49, Joel Pearson wrote:
Complete stab in the dark, but maybe your OpenStack account
doesn’t have enough privileges to be able to complete an install?
On Wed, 17 Jan 2018 at 9:46 pm, Tim Dudgeon <tdudgeon...@gmail.com> wrote:
I'm still having problems getting the OpenStack cloud provider running.
I have a minimal OpenShift Origin 3.7 Ansible install that runs OK. But
when I add the definition for the OpenStack cloud provider (just the
cloud provider definition, nothing yet that uses it) the installation
fails like this:
TASK [nickhammond.logrotate : nickhammond.logrotate | Setup logrotate.d scripts] *******************************************************************************************************************
RUNNING HANDLER [openshift_node : restart node] ****************************************************************************************************************************************************
FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (1 retries left).
FAILED - RETRYING: restart node (1 retries left).
FAILED - RETRYING: restart node (1 retries left).
FAILED - RETRYING: restart node (1 retries left).
FAILED - RETRYING: restart node (1 retries left).
fatal: [orndev-node-000]: FAILED! => {"attempts": 3, "changed": false, "msg": "Unable to restart service origin-node: Job for origin-node.service failed because the control process exited with error code. See \"systemctl status origin-node.service\" and \"journalctl -xe\" for details.\n"}
fatal: [orndev-node-001]: FAILED! => {"attempts": 3, "changed": false, "msg": "Unable to restart service origin-node: Job for origin-node.service failed because the control process exited with error code. See \"systemctl status origin-node.service\" and \"journalctl -xe\" for details.\n"}
fatal: [orndev-master-000]: FAILED! => {"attempts": 3, "changed": false, "msg": "Unable to restart service origin-node: Job for origin-node.service failed because the control process exited with error code. See \"systemctl status origin-node.service\" and \"journalctl -xe\" for details.\n"}
fatal: [orndev-node-002]: FAILED! => {"attempts": 3, "changed": false, "msg": "Unable to restart service origin-node: Job for origin-node.service failed because the control process exited with error code. See \"systemctl status origin-node.service\" and \"journalctl -xe\" for details.\n"}
fatal: [orndev-infra-000]: FAILED! => {"attempts": 3, "changed": false, "msg": "Unable to restart service origin-node: Job for origin-node.service failed because the control process exited with error code. See \"systemctl status origin-node.service\" and \"journalctl -xe\" for details.\n"}
RUNNING HANDLER [openshift_node : reload systemd units] ********************************************************************************************************************************************
        to retry, use: --limit @/home/centos/openshift-ansible/playbooks/byo/config.retry
Looking on one of the nodes I see this error in the origin-node.service logs:

Jan 17 09:40:49 orndev-master-000 origin-node[2419]: E0117 09:40:49.746806 2419 kubelet_node_status.go:106] Unable to register node "orndev-master-000" with API server: nodes "orndev-master-000" is forbidden: node 10.0.0.6 cannot modify node orndev-master-000
The /etc/origin/cloudprovider/openstack.conf file has been created OK,
and looks to be what is expected.
But I can't be sure it's specified correctly and will work. In fact, if
I deliberately change the configuration to use an invalid OpenStack
username, the install fails at the same place, but the error message on
the node is different:
Jan 17 10:08:58 orndev-master-000 origin-node[24066]: F0117 10:08:58.474152 24066 start_node.go:159] could not init cloud provider "openstack": Authentication failed

When set back to the right username the node service again fails because of:

Unable to register node "orndev-master-000" with API server: nodes "orndev-master-000" is forbidden: node 10.0.0.6 cannot modify node orndev-master-000
How can this be tested on a node to ensure that the cloud provider is
configured correctly?
Any idea what the "node 10.0.0.6 cannot modify node orndev-master-000"
error is about?
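As a first-pass sanity check (not a full answer to the above), something like this hypothetical snippet could at least confirm the conf file parses and contains the fields the provider generally needs. The key names here are assumptions based on typical openstack.conf examples, so adjust them for your setup:

```python
# Hypothetical sanity check for /etc/origin/cloudprovider/openstack.conf.
# REQUIRED_KEYS is an assumption; your Keystone setup may need different keys.
import configparser

REQUIRED_KEYS = ["auth-url", "username", "password", "tenant-name"]

def check_openstack_conf(path):
    """Return a list of problems found in the cloud provider conf (empty if it looks OK)."""
    parser = configparser.ConfigParser()
    problems = []
    if not parser.read(path):
        return ["could not read {}".format(path)]
    if "Global" not in parser:
        return ["missing [Global] section"]
    for key in REQUIRED_KEYS:
        # flag keys that are absent or present but empty
        if not parser["Global"].get(key):
            problems.append("missing or empty key: {}".format(key))
    return problems

if __name__ == "__main__":
    for problem in check_openstack_conf("/etc/origin/cloudprovider/openstack.conf"):
        print(problem)
```

Of course this only checks the file's shape, not whether the credentials actually authenticate against Keystone, which the origin-node log above seems to verify on startup anyway.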
_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users