Hi All,

I have a 5-node OpenShift cluster split across two AZs, our colocation center
and AWS, with a master in each AZ and the rest being nodes.

We set up the cluster with the Ansible playbooks, and somewhere during the
install the EC2 instances' private hostnames were picked up and registered
as the node names of the AWS nodes. That's a bit annoying, as it deviates
from our hostname conventions and is rather hard to read, and it doesn't
seem to be something that can be changed post-setup.

It doesn't help that some of the admin operations seem to use the EC2
instance's private hostname, so I get errors like this:

# oc logs logging-fluentd-shfnu
Error from server: Get
https://ip-10-20-128-101.us-west-1.compute.internal:10250/containerLogs/logging/logging-fluentd-shfnu/fluentd-elasticsearch:
dial tcp 198.90.20.95:10250: i/o timeout

Scheduling system-related pods on the AWS instances works (router,
fluentd), but any build pod that lands on an EC2 node never gets built and
eventually times out. My suspicion is that the build monitoring depends on
the node's registered hostname, which can't be reached from our colocation
center master (the one we use as the primary), and hence breaks.

I'm unable to find much detail on this behaviour.

1. Can we manually change the hostname of certain nodes?

2. How do we avoid registering EC2 nodes with their private hostnames?
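For what it's worth, the closest lead I've found so far: the openshift-ansible inventory appears to support per-host `openshift_hostname` and `openshift_public_hostname` overrides, which would presumably stop the installer from registering the EC2 private hostname in the first place. A rough sketch of what I mean (the hostname below is a placeholder from our naming convention, not a real host):

```ini
# Hypothetical inventory fragment -- host and override values are placeholders.
[nodes]
awsnode01.example.com openshift_hostname=awsnode01.example.com openshift_public_hostname=awsnode01.example.com
```

I haven't tested whether setting this on an already-installed cluster simply re-registers the node under the new name, or whether the old node object has to be evacuated and deleted first, so confirmation from anyone who has tried it would be appreciated.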

Frank
_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
