Re: Best way to use oc client

2016-12-13 Thread Skarbek, John
Den,

I’m a fan of limiting interactions with the cluster by using specific roles and users, which helps with auditing. A strategy I would recommend in your case would be to create users that have the specific permissions they need, with a password they control. This will remove your need to copy this configuration around everywhere.
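
As a rough sketch (the user, project, and role names here are hypothetical), that looks like:

  # grant a user admin rights on a single project only
  oadm policy add-role-to-user admin alice -n project-a

  # grant read-only, cluster-wide access for auditing
  oadm policy add-cluster-role-to-user cluster-reader bob

  # each user then logs in with their own credentials; no copied kubeconfig needed
  oc login https://192.xx.xx.xx:8443 -u alice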


--
John Skarbek


On December 13, 2016 at 07:44:41, Den Cowboy 
(dencow...@hotmail.com) wrote:

Hi,


I've installed openshift 1.3.2 for the first time with Atomic as the OS. It went fine.
I used one normal CentOS server as the installation server (Ansible was installed there and I executed the playbook from it).


Now for my question: what is the best way to interact with my environment?

I've installed the oc client tools on the CentOS server and I use ./oc login https://192.xx.xx.xx:8443 to authenticate.
But when I want to authenticate as system:admin I need the $KUBECONFIG (admin.kubeconfig). Is it a normal approach to copy this file from my os-master (atomic) to my CentOS server, from which I try to manage everything?

Or do I need to install the client tools on my master itself? What is the most 
common approach?


Thanks



Re: Master/ETCD Migration

2016-12-13 Thread Skarbek, John
Diego,

We’ve done a similar thing in our environment. I’m not sure if the 
openshift-ansible guys have a better way, but this is what we did at that time.

We created a custom playbook to run through all the steps as necessary. And due to the version of openshift-ansible we were running, we had to be careful when we did whichever server was index 0 in the array of hosts. (I think they've resolved that problem now.)

First we created a play that copied the necessary certificates to all the nodes, such that it didn't matter which node was at index 0 of the list of nodes. We had the playbook limited to operate on one node at a time, which dealt with tearing each node down. Then we'd run the deploy on the entire cluster. For the new node, everything was installed as necessary. For the rest of the cluster it was mostly a no-op. We use static addresses, so the only thing that really changed was the underlying host. Certificate regeneration was limited.

For the master nodes, this was pretty easy. For the etcd nodes, we had to do a bit of extra work, as the nodes being added to the cluster had different member IDs than what the cluster thought those nodes ought to have. Following etcd's docs on Member Migration should be able to help you out here.
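
For reference, the member swap follows roughly this shape (a sketch; the member ID, names, and URLs are hypothetical, and a secured cluster also needs the --ca-file/--cert-file/--key-file flags):

  etcdctl -C https://etcd-1.example.com:2379 member list
  etcdctl -C https://etcd-1.example.com:2379 member remove 8211f1d0f64f3269
  etcdctl -C https://etcd-1.example.com:2379 member add etcd-4 https://etcd-4.example.com:2380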

The only major part we had to be careful of was doing the work on the node that was going to be the first node. Due to the way the playbooks operated, it put in place a lot of config and certificate details that would get copied around. If they've addressed this, it shouldn't be an issue, but at the time we got around it by simply adjusting the order in which nodes were defined in our inventory file.

A wee bit laborious, but definitely doable.

In our case, we didn’t experience any downtime, the master nodes cycled through 
the haproxy box appropriately, and the etcd nodes were removed and added to the 
cluster without any major headaches.

Though I’m now more curious if the team at redhat working on openshift-ansible 
may have addressed any of these sorts of issues to make it easier.


--
John Skarbek


On December 13, 2016 at 08:35:54, Diego Castro 
(diego.cas...@getupcloud.com) wrote:

Hello, I have to migrate my production HA masters/etcd servers to new boxes.

Steps Intended:

1) Create new masters and etcd machines using the byo/config playbook.
2) Stop the old masters and move the etcd data directory to the new etcd servers.
3) Start the new masters.
4) Run byo/openshift-cluster/redeploy-certificates.yml against the cluster to update the CA and node configuration.

Question:
- Is this the best or the right way to do it, given that this is a production cluster and I want minimal downtime?


---
Diego Castro / The CloudFather
GetupCloud.com - Eliminamos a Gravidade


Re: Creating new-app from image in openshift repository

2016-12-02 Thread Skarbek, John


--
John Skarbek


On December 2, 2016 at 07:02:18, Thomas Diesler 
(tdies...@redhat.com) wrote:

On 02 Dec 2016, at 12:57, Skarbek, John 
(john.skar...@ca.com) wrote:

On December 2, 2016 at 05:01:52, Thomas Diesler 
(tdies...@redhat.com) wrote:
Folks,

I have a scenario where a maven build creates an image and pushes this to the 
local openshift docker repository. I’m then trying to use `oc new-app …` to 
create an application from that image. This fails because the image cannot be 
found on docker hub.

Is there a way to tell openshift to also look in its local repository, where the image exists already?

Indeed!  From the help docs, you should run new-app like this:

  # Use a MySQL image in a private registry to create an app and override application artifacts' names

  oc new-app --docker-image=myregistry.com/mycompany/mysql

I have the image in my local openshift registry already

[ec2-user@ip-172-30-0-66 wildfly-camel]$ docker images
REPOSITORY                         TAG            IMAGE ID       CREATED        SIZE
wildflyext/wildfly-camel           latest         b0a354ce295d   15 hours ago   1.06 GB
jboss/wildfly                      10.1.0.Final   071c4a43ead0   13 days ago    582.6 MB
openshift/origin-deployer          v1.3.0         5bf464732ca8   11 weeks ago   487.1 MB
openshift/origin-docker-registry   v1.3.0         59d447094a3c   11 weeks ago   345.5 MB
openshift/origin-haproxy-router    v1.3.0         e33d4e33dffb   11 weeks ago   506.2 MB
openshift/origin                   v1.3.0         7b24611e640f   11 weeks ago   487.1 MB
openshift/origin-pod               v1.3.0         35873f68181d   11 weeks ago   1.591 MB

I would like `oc new-app wildflyext/wildfly-camel` to use that image and not go to hub.docker.com

Openshift needs to talk to a registry in order to build an appropriate deployment configuration or image stream.  If you have built your image and pushed it to the openshift docker registry, then you can utilize the service IP or an exposed route pointed at that openshift registry.
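
As a sketch (the registry service IP and project name here are hypothetical):

  # tag and push the locally built image into the integrated registry
  docker login -u $(oc whoami) -p $(oc whoami -t) 172.30.x.x:5000
  docker tag wildflyext/wildfly-camel 172.30.x.x:5000/myproject/wildfly-camel
  docker push 172.30.x.x:5000/myproject/wildfly-camel

  # the push creates an image stream, which new-app can then resolve
  oc new-app --image-stream=myproject/wildfly-camel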

It's not possible to take an image that's built locally and pass it into openshift.  The image stream would be missing critical information, leaving the nodes without a way to pull down the image and start the container.  The screenshot you provided doesn't prove that you've pushed the image into any docker registry.





cheers
— thomas



Re: Creating new-app from image in openshift repository

2016-12-02 Thread Skarbek, John
On December 2, 2016 at 05:01:52, Thomas Diesler 
(tdies...@redhat.com) wrote:
Folks,

I have a scenario where a maven build creates an image and pushes this to the 
local openshift docker repository. I’m then trying to use `oc new-app …` to 
create an application from that image. This fails because the image cannot be 
found on docker hub.

Is there a way to tell openshift to also look in its local repository, where the image exists already?

Indeed!  From the help docs, you should run new-app like this:

  # Use a MySQL image in a private registry to create an app and override application artifacts' names

  oc new-app --docker-image=myregistry.com/mycompany/mysql


cheers
— thomas



Re: Error: error communicating with registry: Get https://registry.example.com/healthz: x509: certificate signed by unknown authority

2016-11-28 Thread Skarbek, John

On November 28, 2016 at 08:19:21, Stéphane Klein 
(cont...@stephane-klein.info) wrote:

Hi,

I can execute this command successfully on my desktop host:

oc adm --token=`oc -n default sa get-token pruner` prune images --confirm 
--registry-url=registry.example.com

On the OpenShift master host, I have this error:
/usr/local/bin/oc adm --token=`/usr/local/bin/oc -n default sa get-token 
pruner` prune images --confirm 
--registry-url=registry.example.com
error: error communicating with registry: Get https://registry.example.com/healthz: x509: certificate signed by unknown authority

I have tried with the --certificate-authority=/etc/origin/master/openshift-registry.crt parameter, but I always get the same error.

The above is the path to the certificate used by the registry, not the authority.  You probably want `/etc/origin/master/ca.crt` here.
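
In other words, something like:

  oc adm --token=`oc -n default sa get-token pruner` prune images --confirm \
    --registry-url=registry.example.com \
    --certificate-authority=/etc/origin/master/ca.crt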


Where is my mistake?

Best regards,
Stéphane
--
Stéphane Klein (cont...@stephane-klein.info)
blog: http://stephane-klein.info
cv: http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Re: Why I don't have debug information in DockerRegistry logs?

2016-10-23 Thread Skarbek, John
I don’t believe this is an auth issue.

If it were an auth issue, you would’ve seen something from the builder indicating a failure to push with incorrect credentials. Instead, you are looking at what appears to be an inability to talk to the registry entirely.

The log output you provided is from two different spans of time. I would suggest gathering logs from the same time range, and then let’s take another look.


--
John Skarbek


On October 23, 2016 at 06:19:37, Stéphane Klein 
(cont...@stephane-klein.info) wrote:

I see some debug messages here:
https://github.com/openshift/origin/blob/master/pkg/dockerregistry/server/token.go#L60

Why don't I see them in the container logs?

2016-10-23 11:41 GMT+02:00 Stéphane Klein 
(cont...@stephane-klein.info):
Hi,

I have some auth issues with my OpenShift DockerRegistry:

I1023 08:54:24.043049   1 docker.go:118] Pushing image 172.30.201.95:5000/openshift/ta-s2i-base-prod:latest ...
E1023 08:54:24.046357   1 dockerutil.go:86] push for image 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
E1023 08:54:29.051732   1 dockerutil.go:86] push for image 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
E1023 08:54:34.054921   1 dockerutil.go:86] push for image 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
E1023 08:54:39.058377   1 dockerutil.go:86] push for image 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
E1023 08:54:44.061671   1 dockerutil.go:86] push for image 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
E1023 08:54:49.064716   1 dockerutil.go:86] push for image 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
E1023 08:54:54.067985   1 dockerutil.go:86] push for image 172.30.201.95:5000/openshift/base-prod:latest failed, will retry in 5s ...
F1023 08:54:59.068275   1 builder.go:204] Error: build error: Failed to push image: unable to ping registry endpoint https://172.30.201.95:5000/v0/

Re: Pod does not have Scale up/down buttons

2016-09-20 Thread Skarbek, John
The reason this is occurring is that you're utilizing a Pod definition. A bare Pod definition spins up one pod and does nothing else.

Check out the documentation on creating a replication controller. Creating a ReplicationController, instead of a Pod, will allow you to perform the scale operation and maintain pods through a lifecycle.
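
As a rough sketch, the same pod template wrapped in a ReplicationController would look like this (skeleton only; the containers and volumes are exactly the ones from your Pod definition below):

  {
      "apiVersion": "v1",
      "kind": "ReplicationController",
      "metadata": { "name": "node-test" },
      "spec": {
          "replicas": 1,
          "selector": { "name": "node-test" },
          "template": {
              "metadata": { "labels": { "name": "node-test" } },
              "spec": { ...containers and volumes from the Pod spec... }
          }
      }
  }

The controller then owns the pods, and the scale operation simply adjusts its replicas count.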


--
John Skarbek


On September 19, 2016 at 15:22:49, Ravi Kapoor 
(ravikapoor...@gmail.com) wrote:

Once more, now with JSON

{
    "kind": "List",
    "apiVersion": "v1beta3",
    "metadata": {},
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {
                "labels": {
                    "name": "node-test"
                },
                "name": "node-test"
            },
            "spec": {
                "containers": [
                    {
                        "image": "node:4.4.7",
                        "imagePullPolicy": "IfNotPresent",
                        "name": "node-test",
                        "command": [
                            "node"
                        ],
                        "args": [
                            "/usr/src/app/server.js"
                        ],
                        "ports": [
                            {
                                "containerPort": 8080,
                                "protocol": "TCP"
                            }
                        ],
                        "volumeMounts": [
                            {
                                "mountPath": "/usr/src/app",
                                "name": "myclaim2"
                            }
                        ],
                        "securityContext": {
                            "capabilities": {},
                            "privileged": false
                        },
                        "terminationMessagePath": "/dev/termination-log"
                    }
                ],
                "volumes": [
                    {
                        "name": "myclaim2",
                        "persistentVolumeClaim": {
                            "claimName": "myclaim2"
                        }
                    }
                ],
                "dnsPolicy": "ClusterFirst",
                "restartPolicy": "Always",
                "serviceAccount": ""
            },
            "status": {}
        },
        {
            "apiVersion": "v1",
            "kind": "Service",
            "metadata": {
                "creationTimestamp": null,
                "name": "node-service"
            },
            "spec": {
                "portalIP": "",
                "ports": [
                    {
                        "name": "web",
                        "port": 8080,
                        "protocol": "TCP"
                    }
                ],
                "selector": {
                    "name": "node-test"
                },
                "sessionAffinity": "None",
                "type": "ClusterIP"
            },
            "status": {
                "loadBalancer": {}
            }
        },
        {
            "apiVersion": "v1",
            "kind": "Route",
            "metadata": {
                "annotations": {},
                "name": "node-route"
            },
            "spec": {
                "to": {
                    "name": "node-service"
                }
            }
        }
    ]
}

On Mon, Sep 19, 2016 at 2:19 PM, Ravi Kapoor 
(ravikapoor...@gmail.com) wrote:

I created the following job definition. It successfully creates a service, pod and a route. I am able to access the website.

It shows 1 Pod running; however, there are no scale up/down buttons in the UI.
How can I scale this application up?




Openshift/Kubernetes Nodes Ready State

2016-09-14 Thread Skarbek, John
Good Morning,

Is there any documentation anywhere within openshift or kubernetes that 
discusses what kubernetes does to determine that a node is Ready? I certainly 
haven’t found any.

The reason why I ask is that every once in a while, I’ll run into an issue where kubernetes is trying to schedule something to a node that, despite being Ready, has something wrong with it. For example, today I’ve got a node that apparently lost its ability to write to any underlying storage mechanism. The origin-node service sees this in the logs, and this is verified by running the mount command. However, kubernetes doesn’t know anything is wrong, and not a single POD will start up on this node because of it.
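
For what it's worth, the conditions the kubelet reports can be inspected directly (a sketch; the node name is a placeholder):

  oc get node <node-name> -o json | jq '.status.conditions'
  # reports conditions such as Ready and OutOfDisk; a broken storage
  # mount like the one above doesn't appear to flip any of them on its own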


--
John Skarbek


Re: Using OpenShift registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.2-12 image

2016-09-13 Thread Skarbek, John
Den,


--
John Skarbek


On September 13, 2016 at 07:03:18, Den Cowboy 
(dencow...@hotmail.com) wrote:

Hi


We are using the image:

registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.2-12

inside our OpenShift environment to deploy some .WARs.

Now we need to edit the dockerfile because one of our .WARs expects an encoding other than UTF-8.

How can I check which encoding is the default inside my container? Where is it stored?

Usually you can find this info by running the command `locale -a`, which lists the available locales.  If `locale` is run without any arguments, you can see what it is set to by default.
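
For example:

  locale -a                  # list every locale available in the container
  locale                     # show the current settings (LANG, LC_CTYPE, ...)
  oc exec <pod> -- locale    # the same check inside a running pod (pod name is a placeholder)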

I don't see it as an environment variable. I only saw:

CATALINA_OPTS=-Djava.security.egd=file:/dev/./urandom

Which is an unreadable file for me.

This is to be expected.  This option is there to assist Java's random number generator; that file produces random numbers.



Thanks



Re: s2i build set hostname

2016-09-09 Thread Skarbek, John
Robson,

What is it that you are trying to accomplish? In your prior thread you mention setting up an ftp docker image. I think there’s a misunderstanding of the purpose of s2i here.

s2i build is not going to build the Dockerfile that you posted. s2i is meant to take an existing docker image that is already built and create a runnable application from it. It’s a builder for an application container.
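
To illustrate (the repo and image names here are hypothetical), the s2i workflow is:

  # layer application source onto an already-built builder image
  s2i build https://github.com/example/app.git my-builder-image my-app-image

  # the result is a runnable application image
  docker run my-app-image

It never runs `docker build` against a Dockerfile; in your setup, the Dockerfile (via make) is what produces the builder image itself.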


--
John Skarbek


On September 9, 2016 at 12:58:27, Robson Ramos Barreto 
(robson.rbarr...@gmail.com) wrote:

Sure,

I'm trying to create an image with ssh because I think sshd is easier to test.

Here is the code so far:

# cat Dockerfile

# ose-ftp-rhel7
FROM rhel7.2

ENV BUILDER_VERSION 1.0
ENV HOSTNAME ose-ftp.example.com

RUN INSTALL_PKGS="openssh-server ipa-client" && KADMIN_PASS="" && HOSTNAME="ose-ftp.example.com" && \
    yum install -y $INSTALL_PKGS && yum clean all -y
#ipa-client-install --server=ipaserver.example.com --password=$KADMIN_PASS --mkhomedir --domain=example.com --unattended

LABEL io.openshift.s2i.scripts-url=image:///usr/local/s2i

COPY ./.s2i/bin/ /usr/local/s2i

USER root

EXPOSE 22

CMD ["/usr/sbin/sshd", "-D"]


# cat .s2i/bin/run
exec /usr/sbin/sshd -D

# make
# docker run -d -P ose-ftp-rhel7-candidate

and so I have the image with the ssh service working, but the hostname is not changed.

Perhaps I should put the hostname setting in .s2i/bin/run.

---

Another thing that I'm figuring out is that when I run:

# s2i build . rhel7 ose-ftp --loglevel=3

it isn't running the RUN section from the Dockerfile to install the packages; that only takes effect if I run the `make` command.

Is that normal?

I'm not getting any error from the s2i build command.

Thank you



2016-09-09 12:28 GMT-03:00 Jonathan Yu 
(jaw...@redhat.com):
Hey Robson,

I see, thanks for the context. I'm unfamiliar with ipa-client, so will just 
share two thoughts:

1. The hostname is fixed for the lifetime of the container (that is, once the 
pod starts, it will keep its hostname until it crashes or is stopped)
2. If you want to choose a hostname, consider using the PetSet feature: 
http://kubernetes.io/docs/user-guide/petset/
 - this requires OpenShift Origin 1.3, or the yet-to-be-released OpenShift 
Container Platform 3.3

However, I'm unsure how this relates to s2i. Would it be possible for you to 
share any code or working example that you have put together so far?

On Fri, Sep 9, 2016 at 7:35 AM, Robson Ramos Barreto 
(robson.rbarr...@gmail.com) wrote:
Hello Jonathan

Thank you for your time.

I'm trying to install the ipa-client, which must run on a host with a fixed hostname.

My final goal is set up a FTP container with centralized authentication as I 
asked for advice:

http://lists.openshift.redhat.com/openshift-archives/users/2016-September/msg00026.html

Thank you

2016-09-08 17:36 GMT-03:00 Jonathan Yu 
(jaw...@redhat.com):
Hey Robson,

Can you elaborate more on what you're trying to do and why you need to change 
the hostname?

I'm no expert, but I believe changing the hostname requires root, and s2i 
builds typically run as a nonprivileged user, so it's likely not possible to 
change the hostname from within the build at build time.

On Thu, Sep 8, 2016 at 1:28 PM, Robson Ramos Barreto 
(robson.rbarr...@gmail.com) wrote:

Openshift, vip-manager, and DHCP

2016-09-08 Thread Skarbek, John
Good Morning,

I’m curious whether anyone is successfully running openshift in an environment where they manage their own dhcp clients and scopes. Our infrastructure recently had an issue and we are struggling to find a root cause. In our environment we run two vip-manager PODs which manage 2 IP addresses.

One of our suspicions has led us to believe that keepalived doesn’t play nice with dhcp. As an example, if the dhcp client dies or renews its IP address, the vip-manager POD recognizes this event. It logs that the VIP it’s managing, as well as the IP assigned to the node, has been removed; however, keepalived continues to send out VRRP advertisements as if it’s still MASTER for that IP.

This puts us in a bad spot, as the BACKUP keepalived never takes this IP address over, and the IP is no longer assigned to anything. Here’s example log output from the POD in which I forced this failure:

10.0.0.1 == address assigned to node via DHCP

10.0.0.2 == address assigned to vip_manager_VIP_1

10.0.0.3 == address assigned to vip_manager_VIP_2

10.1.4.1 == lbr0/tun0

  - Loading ip_vs module ...
  - Checking if ip_vs module is available ...
ip_vs 140944  0
  - Module ip_vs is loaded.
  - Generating and writing config to /etc/keepalived/keepalived.conf
  - Starting failover services ...
Starting Healthcheck child process, pid=136
Initializing ipvs 2.6
Starting VRRP child process, pid=137
Netlink reflector reports IP 10.0.0.1 added
Netlink reflector reports IP 10.0.0.1 added
Netlink reflector reports IP 10.1.4.1 added
Netlink reflector reports IP 10.1.4.1 added
Netlink reflector reports IP 10.1.4.1 added
Netlink reflector reports IP 10.1.4.1 added
Registering Kernel netlink reflector
Registering Kernel netlink reflector
Registering Kernel netlink command channel
Registering Kernel netlink command channel
Registering gratuitous ARP shared channel
Opening file '/etc/keepalived/keepalived.conf'.
Opening file '/etc/keepalived/keepalived.conf'.
Configuration is using : 8733 Bytes
Truncating auth_pass to 8 characters
Truncating auth_pass to 8 characters
Configuration is using : 73522 Bytes
Using LinkWatch kernel netlink reflector...
VRRP_Instance(vip_manager_VIP_1) Entering BACKUP STATE
VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(9,10)]
VRRP_Instance(vip_manager_VIP_2) Transition to MASTER STATE
VRRP_Instance(vip_manager_VIP_2) Entering FAULT STATE
VRRP_Script(chk_vip_manager) succeeded
VRRP_Instance(vip_manager_VIP_2) prio is higher than received advert
VRRP_Instance(vip_manager_VIP_2) Transition to MASTER STATE
VRRP_Instance(vip_manager_VIP_2) Received lower prio advert, forcing new 
election
VRRP_Instance(vip_manager_VIP_2) Entering MASTER STATE
VRRP_Instance(vip_manager_VIP_2) setting protocol VIPs.
Netlink reflector reports IP 10.0.0.3 added
VRRP_Instance(vip_manager_VIP_2) Sending gratuitous ARPs on eno16780032 for 
10.0.0.3
VRRP_Instance(vip_manager_VIP_2) Sending gratuitous ARPs on eno16780032 for 
10.0.0.3

..

Netlink reflector reports IP 10.0.0.1 removed
Netlink reflector reports IP 10.0.0.1 removed
Netlink reflector reports IP 10.0.0.3 removed
Netlink reflector reports IP 10.0.0.3 removed
Netlink reflector reports IP 10.0.0.1 added
Netlink reflector reports IP 10.0.0.1 added


And the other vip-manager pod is still receiving VRRPs for 10.0.0.3 and therefore never takes over this IP address, so effectively half of the traffic (pending DNS round-robin) is being lost at this point.

Our recovery option at this point is to restart the network, which stops the VRRP packets long enough to cause a failover, or to restart the affected POD.

The version of keepalived provided by RHEL is 10 minor revisions behind; I’m curious whether there may be a benefit to getting this package updated. Pending any advice from anyone, my next step in troubleshooting this would be to build my own version of the vip-manager with an upgraded version of keepalived and see if this issue continues.


--
John Skarbek


Re: multiple master multiple etcd

2016-09-07 Thread Skarbek, John
On September 7, 2016 at 11:42:05, Julio Saura 
(jsa...@hiberus.com) wrote:
Hello

I am about to build a new cluster with 2 masters and 3 etcd servers for HA.

My doubt is that I think I read somewhere in the docs that it is not recommended to have the external etcd servers on the same nodes the masters are running on.

Is this true?

Not necessarily; this is totally up to your own definition of how you'd like to run your own infrastructure.  Openshift operates perfectly fine with this configuration.


What is the best approach? 2 masters in native HA + 3 different nodes for etcd, or could it be possible to have just 3 nodes with master + etcd running alongside?


thanks and sorry for the silly question.

Best regards



Re: Openshift SDN considerations

2016-08-31 Thread Skarbek, John
Boris,

Regarding question one, this would be solved by using a route that is exposed 
by said authentication service. This prevents the need for having to join the 
various projects together. Only services between namespaces are locked down. 
The exposed route will still be available to any and all pods from whichever 
project.

Regarding question two, It sounds as if you need some sort of IDS or 
manipulation of iptables/firewalld rules on the openshift nodes. Though that 
can be difficult to manage and what I’d end up doing is probably putting all 
the openshift nodes on a separate network, such that I can put a firewall 
device between the openshift nodes and the rest of the network.


--
John Skarbek


On August 30, 2016 at 15:42:50, Boris Kodel 
(boris.ko...@gmail.com) wrote:

Hello,
I am working in a strict security environment in which we use a firewall to limit the traffic between all of our servers, e.g. application server 'A' can only access DB server 'B' via port 1521 and cannot access app 'C' nor database 'D' on any port.

Since by default openshift can schedule any pod on any host (and we wish to keep it that way), we have difficulty complying with the organizational network security model.

We considered using the ovs-multitenant plug-in but still we have a couple of 
issues:

  1.  Limiting traffic inside openshift - if two projects need to communicate with each other, we have to merge their networks. But if we have some central service (like an authentication service), we will need to merge all of the networks together, thus diminishing the network isolation.
  2.  Limiting outbound traffic - if one of our projects needs access to some external service, we must allow all of the openshift hosts to access it. We wish to limit, or at least monitor, that only this particular project's pods access this service. [In general, a tool that shows network connections between the internal and the external networks would be most helpful.]

Has anyone else ever tackled these issues? I guess that most financial/government organizations have some variation of what we do.

Cheers,
Boris K.


Re: Canary release via Openshift

2016-08-18 Thread Skarbek, John
I don’t believe Openshift itself has documentation covering this deployment method; however, kubernetes certainly does.

http://kubernetes.io/docs/user-guide/managing-deployments/#canary-deployments

Selectors are a key component to enabling this functionality.
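
The gist, as a sketch (names and replica counts here are hypothetical): the service selector matches a label shared by both versions, each replication controller adds its own track label, and the traffic split follows the replica counts:

  # service selector:        app=myapp
  # stable RC pod labels:    app=myapp, track=stable
  # canary RC pod labels:    app=myapp, track=canary
  oc scale rc myapp-stable --replicas=9
  oc scale rc myapp-canary --replicas=1   # roughly 10% of traffic hits the canary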


--
John Skarbek


On August 18, 2016 at 07:51:26, Ronan O Keeffe 
(rona...@donedeal.ie) wrote:

Hi,

Just wondering, is it possible via OpenShift for us to do a canary release of an application? E.g. we put a new version of a component live alongside the old version and push only a (tweakable) subset of traffic to the new version?

Ultimately I suppose we'd be running two different versions of the same service 
simultaneously.

oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5


Cheers,
Ronan.



Re: Node startup Failure on SDN

2016-08-15 Thread Skarbek, John
So I figured it out. Ntp went kaboom on one of our master nodes.

ERROR: [DCli0015 from diagnostic 
ConfigContexts@openshift/origin/pkg/diagnostics/client/config_contexts.go:285]
   For client config context 'default/cluster:8443/system:admin':
   The server URL is 'https://cluster:8443'
   The user authentication is 'system:admin/cluster:8443'
   The current project is 'default'
   (*url.Error) Get https://cluster:8443/api: x509: certificate has expired 
or is not yet valid
   Diagnostics does not have an explanation for what this means. Please 
report this error so one can be added.



I ended up finding that the master node clock just…. I have no idea:

[/etc/origin/master]# date
Wed Feb 14 12:23:13 UTC 2001


I’d like to suggest that diagnostics check the date and time of all the certificates, perhaps do some sort of ntp check, and maybe even go the extra mile and compare the time on the server to …life. I have no idea why my master node decided to go back to Valentine’s Day in 2001. I think I was single way back when.
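
Something as simple as this would have caught it (a sketch, not something oc diagnostics does today; whether chronyc or ntpstat is available depends on the host):

  date                          # sanity-check the clock itself
  chronyc tracking || ntpstat   # is time sync actually working?
  for c in /etc/origin/master/*.crt; do
    echo "$c: $(openssl x509 -noout -enddate -in "$c")"
  done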


--
John Skarbek


On August 15, 2016 at 13:32:13, Skarbek, John 
(john.skar...@ca.com) wrote:

It would appear the certificate is valid until 2018:

[/etc/origin/node]# openssl x509 -enddate -in system:node:node-001.crt
notAfter=Mar 21 15:18:10 2018 GMT

Got any other ideas?


--
John Skarbek


On August 15, 2016 at 13:27:57, Clayton Coleman 
(ccole...@redhat.com) wrote:

The node's client certificate may have expired - that's a common failure mode.

On Aug 15, 2016, at 1:23 PM, Skarbek, John 
(john.skar...@ca.com) wrote:


Good Morning,

We recently had a node go down, upon trying to get it back online, the 
origin-node service fails to start. The rest of the cluster appears to be just 
fine, so with the desire to troubleshoot, what can I look at to determine the 
root cause of the following error:

Aug 15 17:12:59 node-001 origin-node[14536]: E0815 17:12:59.469682   14536 
common.go:194] Failed to obtain ClusterNetwork: the server has asked for the 
client to provide credentials (get clusterNetworks default)
Aug 15 17:12:59 node-001 origin-node[14536]: F0815 17:12:59.469705   14536 
node.go:310] error: SDN node startup failed: the server has asked for the 
client to provide credentials (get clusterNetworks default)



--
John Skarbek


Re: Node startup Failure on SDN

2016-08-15 Thread Skarbek, John
It would appear the certificate is valid until 2018:

[/etc/origin/node]# openssl x509 -enddate -in system:node:node-001.crt
notAfter=Mar 21 15:18:10 2018 GMT

Got any other ideas?


--
John Skarbek


On August 15, 2016 at 13:27:57, Clayton Coleman 
(ccole...@redhat.com) wrote:

The node's client certificate may have expired - that's a common failure mode.

On Aug 15, 2016, at 1:23 PM, Skarbek, John 
(john.skar...@ca.com) wrote:


Good Morning,

We recently had a node go down, upon trying to get it back online, the 
origin-node service fails to start. The rest of the cluster appears to be just 
fine, so with the desire to troubleshoot, what can I look at to determine the 
root cause of the following error:

Aug 15 17:12:59 node-001 origin-node[14536]: E0815 17:12:59.469682   14536 
common.go:194] Failed to obtain ClusterNetwork: the server has asked for the 
client to provide credentials (get clusterNetworks default)
Aug 15 17:12:59 node-001 origin-node[14536]: F0815 17:12:59.469705   14536 
node.go:310] error: SDN node startup failed: the server has asked for the 
client to provide credentials (get clusterNetworks default)



--
John Skarbek


Node startup Failure on SDN

2016-08-15 Thread Skarbek, John
Good Morning,

We recently had a node go down, upon trying to get it back online, the 
origin-node service fails to start. The rest of the cluster appears to be just 
fine, so with the desire to troubleshoot, what can I look at to determine the 
root cause of the following error:

Aug 15 17:12:59 node-001 origin-node[14536]: E0815 17:12:59.469682   14536 
common.go:194] Failed to obtain ClusterNetwork: the server has asked for the 
client to provide credentials (get clusterNetworks default)
Aug 15 17:12:59 node-001 origin-node[14536]: F0815 17:12:59.469705   14536 
node.go:310] error: SDN node startup failed: the server has asked for the 
client to provide credentials (get clusterNetworks default)



--
John Skarbek


Re: Custom Builder Pull Error

2016-08-01 Thread Skarbek, John
Thanks Ben,

That indeed was the root cause.

BTW, if you by chance have any feedback on this repo, it’d be greatly 
appreciated. I know it doesn’t match the s2i’s that red hat maintains, but I 
think having this available would be beneficial. I have a feeling this web 
framework is going to become quite popular quickly.


--
John Skarbek


On July 31, 2016 at 22:24:42, Ben Parees 
(bpar...@redhat.com) wrote:

I think it actually is related to you building it with docker 1.12 and pushing it using that client; there is a known manifest compatibility issue which prevents openshift from pulling images pushed with docker 1.10+.

There are two ways you can work around this for now:
1) rebuild/push the image using docker 1.9

or

2) instead of using an imagestream in your buildconfig, use a DockerImage 
reference and just specify the pull spec 
(docker.io/jtslear/phoenix-builder:latest)
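
For option 2, the relevant fragment of the build config would look something like this (a sketch of the strategy stanza only):

  "strategy": {
    "sourceStrategy": {
      "from": {
        "kind": "DockerImage",
        "name": "docker.io/jtslear/phoenix-builder:latest"
      }
    }
  }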


On Sun, Jul 31, 2016 at 9:29 PM, Skarbek, John 
(john.skar...@ca.com) wrote:

Good Morning,


I'm playing around with the openshift dev preview, in doing so I'm toying 
around with creating a custom s2i builder specifically for the phoenix web 
framework.  While it's not feature complete, I've had a working example up 
until today.  After updating the builder image, I'm struggling to figure out 
why openshift can't pull it down to complete a build.  I've published my custom builder publicly on dockerhub.


After importing the image stream and providing the builder my source code, 
initiating a build will fail.  Here's the log output


I0731 21:11:23.044337   1 builder.go:57] Master version 
"v3.2.1.10-1-g668ed0a", Builder version "v3.2.1.10-1-g668ed0a"
I0731 21:11:23.055778   1 builder.go:145] Running build with cgroup limits: 
api.CGroupLimits{MemoryLimitBytes:536870912, CPUShares:61, CPUPeriod:10, 
CPUQuota:10, MemorySwap:536870912}
I0731 21:11:23.062868   1 sti.go:206] The value of ALLOWED_UIDS is [1-]
I0731 21:11:23.062889   1 sti.go:214] The value of DROP_CAPS is 
[KILL,MKNOD,SETGID,SETUID,SYS_CHROOT]
I0731 21:11:23.068774   1 docker.go:351] Image 
"jtslear/phoenix-builder@sha256:3559cc2a714e5075fff0591e1d33793d175bacdb601c0afd534dcd63f153a18b"
 not available locally, pulling ...
I0731 21:11:23.068813   1 docker.go:373] Pulling Docker image 
jtslear/phoenix-builder@sha256:3559cc2a714e5075fff0591e1d33793d175bacdb601c0afd534dcd63f153a18b
 ...
I0731 21:11:23.972778   1 sti.go:233] Creating a new S2I builder with build 
config: "Builder 
Image:\t\t\tjtslear/phoenix-builder@sha256:3559cc2a714e5075fff0591e1d33793d175bacdb601c0afd534dcd63f153a18b\nSource:\t\t\t\tfile:///tmp/s2i-build707398611/upload/src#master\nOutput
 Image 
Tag:\t\ttest-app/phoenix-example-7:190eb9d9\nEnvironment:\t\t\tOPENSHIFT_BUILD_NAME=phoenix-example-7,OPENSHIFT_BUILD_NAMESPACE=test-app,OPENSHIFT_BUILD_SOURCE=https://github.com/jtslear/phoenix-example.git,OPENSHIFT_BUILD_REFERENCE=master\nIncremental
 Build:\t\tdisabled\nRemove Old Build:\t\tdisabled\nBuilder Pull 
Policy:\t\tif-not-present\nPrevious Image Pull 
Policy:\talways\nQuiet:\t\t\t\tdisabled\nLayered 
Build:\t\t\tdisabled\nWorkdir:\t\t\t/tmp/s2i-build707398611\nDocker 
NetworkMode:\t\tcontainer:6f1accf9aec56c68763142e1c31a21594619f2167a886f396c1fbe740c682b32\nDocker
 Endpoint:\t\tunix:///var/run/docker.sock\n"
I0731 21:11:23.976278   1 docker.go:351] Image 
"jtslear/phoenix-builder@sha256:3559cc2a714e5075fff0591e1d33793d175bacdb601c0afd534dcd63f153a18b"
 not available locally, pulling ...
I0731 21:11:23.976316   1 docker.go:373] Pulling Docker image 
jtslear/phoenix-builder@sha256:3559cc2a714e5075fff0591e1d33793d175bacdb601c0afd534dcd63f153a18b
 ...
F0731 21:11:24.590949   1 builder.go:204] Error: build error: unable to get 
jtslear/phoenix-builder@sha256:3559cc2a714e5075fff0591e1d33793d175bacdb601c0afd534dcd63f153a18b



Unfortunately, openshift doesn't say WHY it was unable to get the image.  This did previously work until I updated the image.  The only thing at this point I can think of is that I recently built the image on version 1.12 of docker.  But I'm kinda brushing that off, cuz I've seen openshift complain in the past about manifest versions; and secondly when I started this project, I was using docker 1.12 beta...

Custom Builder Pull Error

2016-07-31 Thread Skarbek, John
Good Morning,


I'm playing around with the openshift dev preview, in doing so I'm toying 
around with creating a custom s2i builder specifically for the phoenix web 
framework.  While it's not feature complete, I've had a working example up 
until today.  After updating the builder image, I'm struggling to figure out 
why openshift can't pull it down to complete a build.  I've published my custom builder publicly on dockerhub.


After importing the image stream and providing the builder my source code, 
initiating a build will fail.  Here's the log output


I0731 21:11:23.044337   1 builder.go:57] Master version 
"v3.2.1.10-1-g668ed0a", Builder version "v3.2.1.10-1-g668ed0a"
I0731 21:11:23.055778   1 builder.go:145] Running build with cgroup limits: 
api.CGroupLimits{MemoryLimitBytes:536870912, CPUShares:61, CPUPeriod:10, 
CPUQuota:10, MemorySwap:536870912}
I0731 21:11:23.062868   1 sti.go:206] The value of ALLOWED_UIDS is [1-]
I0731 21:11:23.062889   1 sti.go:214] The value of DROP_CAPS is 
[KILL,MKNOD,SETGID,SETUID,SYS_CHROOT]
I0731 21:11:23.068774   1 docker.go:351] Image 
"jtslear/phoenix-builder@sha256:3559cc2a714e5075fff0591e1d33793d175bacdb601c0afd534dcd63f153a18b"
 not available locally, pulling ...
I0731 21:11:23.068813   1 docker.go:373] Pulling Docker image 
jtslear/phoenix-builder@sha256:3559cc2a714e5075fff0591e1d33793d175bacdb601c0afd534dcd63f153a18b
 ...
I0731 21:11:23.972778   1 sti.go:233] Creating a new S2I builder with build 
config: "Builder 
Image:\t\t\tjtslear/phoenix-builder@sha256:3559cc2a714e5075fff0591e1d33793d175bacdb601c0afd534dcd63f153a18b\nSource:\t\t\t\tfile:///tmp/s2i-build707398611/upload/src#master\nOutput
 Image 
Tag:\t\ttest-app/phoenix-example-7:190eb9d9\nEnvironment:\t\t\tOPENSHIFT_BUILD_NAME=phoenix-example-7,OPENSHIFT_BUILD_NAMESPACE=test-app,OPENSHIFT_BUILD_SOURCE=https://github.com/jtslear/phoenix-example.git,OPENSHIFT_BUILD_REFERENCE=master\nIncremental
 Build:\t\tdisabled\nRemove Old Build:\t\tdisabled\nBuilder Pull 
Policy:\t\tif-not-present\nPrevious Image Pull 
Policy:\talways\nQuiet:\t\t\t\tdisabled\nLayered 
Build:\t\t\tdisabled\nWorkdir:\t\t\t/tmp/s2i-build707398611\nDocker 
NetworkMode:\t\tcontainer:6f1accf9aec56c68763142e1c31a21594619f2167a886f396c1fbe740c682b32\nDocker
 Endpoint:\t\tunix:///var/run/docker.sock\n"
I0731 21:11:23.976278   1 docker.go:351] Image 
"jtslear/phoenix-builder@sha256:3559cc2a714e5075fff0591e1d33793d175bacdb601c0afd534dcd63f153a18b"
 not available locally, pulling ...
I0731 21:11:23.976316   1 docker.go:373] Pulling Docker image 
jtslear/phoenix-builder@sha256:3559cc2a714e5075fff0591e1d33793d175bacdb601c0afd534dcd63f153a18b
 ...
F0731 21:11:24.590949   1 builder.go:204] Error: build error: unable to get 
jtslear/phoenix-builder@sha256:3559cc2a714e5075fff0591e1d33793d175bacdb601c0afd534dcd63f153a18b



Unfortunately, openshift doesn't say WHY it was unable to get the image.  This 
did previously work until I updated the image.  The only thing at this point I 
can think of is that I recently built the image on version 1.12 of docker.
But I'm kinda brushing that off, cuz I've seen openshift complain in the past 
about manifest versions; and secondly when I started this project, I was using 
docker 1.12 beta...


For additional reference, my s2i builder source code: 
https://github.com/jtslear/s2i-phoenix


John T Skarbek


Re: Preview Openshift 3 Pod Failure, System error

2016-07-13 Thread Skarbek, John
That never even occurred to me. Thank you sir.


--
John Skarbek


On July 13, 2016 at 09:18:44, Alex Wauck 
(alexwa...@exosite.com) wrote:

Your Dockerfile clobbers /run in the image.  That leads to bad things. Don't 
feel bad; we made the same mistake.

On Wed, Jul 13, 2016 at 7:06 AM, Skarbek, John 
(john.skar...@ca.com) wrote:

Good Morning,

I was messing around with a random quick application on the preview of 
openshift 3 online. I ran into this in the log of a container that won’t start:

Timestamp: 2016-07-13 11:49:38.160398231 + UTC
Code: System error

Message: lstat 
/var/lib/docker/devicemapper/mnt/704986103e760820b33944aaf09c2210b07e6b89f158f2053f3782307de89846/rootfs/run/secrets:
 not a directory

Frames:
---
0: setupRootfs
Package: github.com/opencontainers/runc/libcontainer
File: rootfs_linux.go@40
---
1: Init
Package: github.com/opencontainers/runc/libcontainer.(*linuxStandardInit)
File: standard_init_linux.go@57
---
2: StartInitialization
Package: github.com/opencontainers/runc/libcontainer.(*LinuxFactory)
File: factory_linux.go@242
---
3: initializer
Package: github.com/docker/docker/daemon/execdriver/native
File: init.go@35
---
4: Init
Package: github.com/docker/docker/pkg/reexec
File: reexec.go@26
---
5: main
Package: main
File: docker.go@18
---
6: main
Package: runtime
File: proc.go@63
---
7: goexit
Package: runtime
File: asm_amd64.s@2232


The pod remains in a Crashloop. I fear something might be wrong with the ability to handle secrets, despite me not using any…

For reference here’s my quick and nasty docker image:
https://hub.docker.com/r/jtslear/command-check/


--
John Skarbek





--

Alex Wauck // DevOps Engineer

E X O S I T E
www.exosite.com

Making Machines More Human.


Preview Openshift 3 Pod Failure, System error

2016-07-13 Thread Skarbek, John
Good Morning,

I was messing around with a random quick application on the preview of 
openshift 3 online. I ran into this in the log of a container that won’t start:

Timestamp: 2016-07-13 11:49:38.160398231 + UTC
Code: System error

Message: lstat 
/var/lib/docker/devicemapper/mnt/704986103e760820b33944aaf09c2210b07e6b89f158f2053f3782307de89846/rootfs/run/secrets:
 not a directory

Frames:
---
0: setupRootfs
Package: github.com/opencontainers/runc/libcontainer
File: rootfs_linux.go@40
---
1: Init
Package: github.com/opencontainers/runc/libcontainer.(*linuxStandardInit)
File: standard_init_linux.go@57
---
2: StartInitialization
Package: github.com/opencontainers/runc/libcontainer.(*LinuxFactory)
File: factory_linux.go@242
---
3: initializer
Package: github.com/docker/docker/daemon/execdriver/native
File: init.go@35
---
4: Init
Package: github.com/docker/docker/pkg/reexec
File: reexec.go@26
---
5: main
Package: main
File: docker.go@18
---
6: main
Package: runtime
File: proc.go@63
---
7: goexit
Package: runtime
File: asm_amd64.s@2232


The pod remains in a Crashloop. I fear something might be wrong with the ability to handle secrets, despite me not using any…

For reference here’s my quick and nasty docker image: 
https://hub.docker.com/r/jtslear/command-check/


--
John Skarbek


Re: Evacuation of pods and scheduling

2016-06-09 Thread Skarbek, John
Good Morning,

I’ve continued some research into how this probably could’ve happened, but I’m still left with one remaining question.

What I can’t seem to find is information about how the replication controller and the evacuate command interact. If I mimic what the evac command does via this awesome bash line:

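# list the node's pods (the first 3 header lines are skipped), extract the names, and delete each: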
oadm manage-node node-002.ose.bld.f4tech.com --list-pods -o json | \
  tail -n +4 | jq '.items[].metadata.name' | xargs oc delete pod


I’m able to recreate the problem. This makes me think that when a lot of commands are executed, the replication controller is not able to keep up with the needs of the application. Something I found in the events log during this scenario is a little unnerving.

7:35:23 AM  sample-jvm-app-30-xunju Pod Normal  Scheduled   Successfully 
assigned sample-jvm-app-30-xunju to node-001.ose.bld.f4tech.com
7:35:23 AM  sample-jvm-app-30-g4drx Pod Normal  Scheduled   Successfully 
assigned sample-jvm-app-30-g4drx to node-003.ose.bld.f4tech.com
7:35:22 AM  sample-jvm-app-30-362hb Pod Normal  Scheduled   Successfully 
assigned sample-jvm-app-30-362hb to node-003.ose.bld.f4tech.com
7:35:19 AM  sample-jvm-app-30-qn5nt Pod Normal  Killing Killing container 
with docker id 99a673abe7e3: Need to kill pod.
7:35:19 AM  sample-jvm-app-30-xo9w6 Pod Normal  Killing Killing container 
with docker id 33c23ef1e7ac: Need to kill pod.
7:35:19 AM  sample-jvm-app-30-pcxlr Pod Normal  Killing Killing container 
with docker id f1b3ce10a5c1: Need to kill pod.
7:34:22 AM  sample-jvm-app-30-362hb Pod Warning Failed scheduling   node 
'node-002.ose.bld.f4tech.com' is not in cache
7 times in the last 2 minutes
7:34:22 AM  sample-jvm-app-30-xunju Pod Warning Failed scheduling   node 
'node-002.ose.bld.f4tech.com' is not in cache
7 times in the last 2 minutes
7:34:22 AM  sample-jvm-app-30-g4drx Pod Warning Failed scheduling   node 
'node-002.ose.bld.f4tech.com' is not in cache
7 times in the last 2 minutes


As seen from the above, the newly created pods first appear to be slated for placement on node-002, but node-002 is not found in the cache, which suggests it’s failing to pass the predicate search of available nodes. That is understandable, as it has been marked unschedulable. What I don’t understand is that during this period of time, node-001 and node-003 are available and more than willing to accept these pods. I wonder if the replication controller doesn’t have updated information regarding the availability of nodes until after the pods are finally killed off.

I’m still researching how I can prevent all three pods from ending up on a 
single node.


--
John Skarbek


On June 7, 2016 at 16:05:12, Skarbek, John 
(john.skar...@ca.com) wrote:

Good Morning,

I’d like to ask a question regarding the use of evacuating pods and how 
openshift/kubernetes schedules the replacement.

We have 3 nodes configured to run applications, and we went through a cycle of 
applying patches. So we’ve created an ansible playbook that goes through, 
evacuates the pods and restarts that node, one node at a time.

Prior to starting, we had an application running 3 pods, one on each node. When node1 was forced to evac the pods, kubernetes scheduled the replacement pod on node3. Node2 was next in line; when ansible forced the evac of pods, the final pod was placed on node3. So at this point, all pods were on the same physical node.

When ansible forced the evac of pods on node3, I then had an outage. The three 
pods were put in a “terminating” state, while 3 others were in a “pending” 
state. It took approximately 30 seconds to terminate the pods. The new 
‘pending’ pods sat pending for about 65 seconds, after which they were finally 
scheduled on nodes 1 and 2 and X time to start the containers.

Is this expected behavior? I was hoping that the replication controller would recognize this situation a bit better when scheduling, to ensure pods don’t get shifted to the same physical box when there are two boxes available. I’m also hoping that before pods are term’ed, replacements are brought online.


--
John Skarbek


Evacuation of pods and scheduling

2016-06-07 Thread Skarbek, John
Good Morning,

I’d like to ask a question regarding the use of evacuating pods and how 
openshift/kubernetes schedules the replacement.

We have 3 nodes configured to run applications, and we went through a cycle of 
applying patches. So we’ve created an ansible playbook that goes through, 
evacuates the pods and restarts that node, one node at a time.

Prior to starting, we had an application running 3 pods, one on each node. When node1 was forced to evac the pods, kubernetes scheduled the replacement pod on node3. Node2 was next in line; when ansible forced the evac of pods, the final pod was placed on node3. So at this point, all pods were on the same physical node.

When ansible forced the evac of pods on node3, I then had an outage. The three 
pods were put in a “terminating” state, while 3 others were in a “pending” 
state. It took approximately 30 seconds to terminate the pods. The new 
‘pending’ pods sat pending for about 65 seconds, after which they were finally 
scheduled on nodes 1 and 2 and X time to start the containers.

Is this expected behavior? I was hoping that the replication controller would handle this a bit better when scheduling pods, to ensure they don't get shifted to the same physical box when there are two boxes available. I'm also hoping that before pods are term'ed, replacements are brought online.
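
For reference, a minimal sketch of the per-node cycle our playbook performs, assuming oadm on the control host and ssh access to each node (node names and the readiness poll are illustrative):

for node in node1 node2 node3; do
  oadm manage-node "$node" --schedulable=false   # stop new pods landing here
  oadm manage-node "$node" --evacuate            # push existing pods elsewhere
  ssh "$node" sudo systemctl reboot
  # wait for the node to report Ready again before touching the next one
  until oc get node "$node" | grep -qw Ready; do sleep 10; done
  oadm manage-node "$node" --schedulable=true
done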


--
John Skarbek
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: The Router is so hard to get right

2016-05-29 Thread Skarbek, John
Dean,

Obviously, despite being public, I cannot see this, I guess due to firewalling or security groups, but what is the end resultant behavior? Do we see a 503 page from the haproxy router?


--
John Skarbek


On May 29, 2016 at 23:23:57, Dean Peterson 
(peterson.d...@gmail.com) wrote:

Yes, that works:

 oc get routes
NAME                 HOST/PORT                          PATH   SERVICE                  TERMINATION   LABELS
abecornlandingpage   landing.enterprisewebservice.com          abecornlandingpage:web                 template=abecorn-landing-page-template



On Sun, May 29, 2016 at 10:22 PM, Skarbek, John 
(john.skar...@ca.com) wrote:

That’s weird, that should’ve worked… What about simply oc get routes


--
John Skarbek


On May 29, 2016 at 23:19:40, Dean Peterson 
(peterson.d...@gmail.com) wrote:

Is the command correct? I just get "Error from server: routes "–-all-namespaces" not found"


On Sun, May 29, 2016 at 10:15 PM, Skarbek, John 
(john.skar...@ca.com) wrote:

What do we see when we do a:

oc get routes --all-namespaces


--
John Skarbek


On May 29, 2016 at 23:01:16, Dean Peterson 
(peterson.d...@gmail.com) wrote:

The docker logs look fine:

 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:17:38.582715   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:17:43.579202   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:17:51.216070   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:27:34.602411   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:37:35.603549   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:47:36.601751   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).

I've verified the router service account can access endpoints:

oadm policy who-can list endpoints --all-namespaces
Namespace: 
Verb:  list
Resource:  endpoints

Users:  system:serviceaccount:default:router
system:serviceaccount:management-infra:management-admin
system:serviceaccount:openshift-infra:namespace-controller

Groups: system:cluster-admins
system:cluster-readers
system:masters
system:nodes
system:routers



On Sun, May 29, 2016 at 8:06 PM, Dean Peterson 
(peterson.d...@gmail.com) wrote:
Yes, the route url is pointing at the public ip address of my ec2 instance:

[ec2-user@ip-172-31-15-150 ~]$ sudo su
[root@ip-172-31-15-150 ec2-user]# dig landing.enterprisewebservice.com

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> landing.enterprisewebservice.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41943
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;landing.enterprisewebservice.com.   IN   A

;; ANSWER SECTION:
landing.enterprisewebservice.com.   30   IN   A   52.39.102.120

Re: The Router is so hard to get right

2016-05-29 Thread Skarbek, John
That’s weird, that should’ve worked… What about simply oc get routes


--
John Skarbek


On May 29, 2016 at 23:19:40, Dean Peterson 
(peterson.d...@gmail.com) wrote:

Is the command correct? I just get "Error from server: routes "–-all-namespaces" not found"


On Sun, May 29, 2016 at 10:15 PM, Skarbek, John 
(john.skar...@ca.com) wrote:

What do we see when we do a:

oc get routes --all-namespaces


--
John Skarbek


On May 29, 2016 at 23:01:16, Dean Peterson 
(peterson.d...@gmail.com) wrote:

The docker logs look fine:

 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:17:38.582715   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:17:43.579202   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:17:51.216070   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:27:34.602411   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:37:35.603549   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:47:36.601751   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).

I've verified the router service account can access endpoints:

oadm policy who-can list endpoints --all-namespaces
Namespace: 
Verb:  list
Resource:  endpoints

Users:  system:serviceaccount:default:router
system:serviceaccount:management-infra:management-admin
system:serviceaccount:openshift-infra:namespace-controller

Groups: system:cluster-admins
system:cluster-readers
system:masters
system:nodes
system:routers



On Sun, May 29, 2016 at 8:06 PM, Dean Peterson 
(peterson.d...@gmail.com) wrote:
Yes, the route url is pointing at the public ip address of my ec2 instance:

[ec2-user@ip-172-31-15-150 ~]$ sudo su
[root@ip-172-31-15-150 ec2-user]# dig landing.enterprisewebservice.com

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> landing.enterprisewebservice.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41943
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;landing.enterprisewebservice.com.   IN   A

;; ANSWER SECTION:
landing.enterprisewebservice.com.   30   IN   A   52.39.102.120

;; Query time: 30 msec
;; SERVER: 172.31.15.150#53(172.31.15.150)
;; WHEN: Sun May 29 21:03:52 EDT 2016
;; MSG SIZE  rcvd: 77

I have tried pointing a cname record at the public dns name of the openshift 
master running the router as well with no luck.

On Sun, May 29, 2016 at 7:34 PM, Skarbek, John 
(john.skar...@ca.com) wrote:

Dean,

You should not need to touch the /etc/resolv.conf file at all. Do you have a 
wildcard A or CNAME record pointed to the public IP or FQDN of your instance?

If you were to do an nslookup or dig using the dns name provided by the route, 
does it resolve to the public IP of your instance?


--
John Skarbek


On May 29, 2016 at 20:14:27, Dean Peterson 
(peterson.d...@gmail.com<mailto:peterson.d...@

Re: The Router is so hard to get right

2016-05-29 Thread Skarbek, John
What do we see when we do a:

oc get routes --all-namespaces


--
John Skarbek


On May 29, 2016 at 23:01:16, Dean Peterson 
(peterson.d...@gmail.com) wrote:

The docker logs look fine:

 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:17:38.582715   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:17:43.579202   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:17:51.216070   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:27:34.602411   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:37:35.603549   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0530 02:47:36.601751   1 router.go:310] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).

I've verified the router service account can access endpoints:

oadm policy who-can list endpoints --all-namespaces
Namespace: 
Verb:  list
Resource:  endpoints

Users:  system:serviceaccount:default:router
system:serviceaccount:management-infra:management-admin
system:serviceaccount:openshift-infra:namespace-controller

Groups: system:cluster-admins
system:cluster-readers
system:masters
system:nodes
system:routers



On Sun, May 29, 2016 at 8:06 PM, Dean Peterson 
(peterson.d...@gmail.com) wrote:
Yes, the route url is pointing at the public ip address of my ec2 instance:

[ec2-user@ip-172-31-15-150 ~]$ sudo su
[root@ip-172-31-15-150 ec2-user]# dig landing.enterprisewebservice.com

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> landing.enterprisewebservice.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41943
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;landing.enterprisewebservice.com.   IN   A

;; ANSWER SECTION:
landing.enterprisewebservice.com.   30   IN   A   52.39.102.120

;; Query time: 30 msec
;; SERVER: 172.31.15.150#53(172.31.15.150)
;; WHEN: Sun May 29 21:03:52 EDT 2016
;; MSG SIZE  rcvd: 77

I have tried pointing a cname record at the public dns name of the openshift 
master running the router as well with no luck.

On Sun, May 29, 2016 at 7:34 PM, Skarbek, John 
(john.skar...@ca.com) wrote:

Dean,

You should not need to touch the /etc/resolv.conf file at all. Do you have a 
wildcard A or CNAME record pointed to the public IP or FQDN of your instance?

If you were to do an nslookup or dig using the dns name provided by the route, 
does it resolve to the public IP of your instance?


--
John Skarbek


On May 29, 2016 at 20:14:27, Dean Peterson 
(peterson.d...@gmail.com) wrote:

It seems every time I install openshift everything goes perfectly, right up 
until I add a route and try to reach any of my services.  I installed with 
ansible using the stock openshift ansible playbook.  After I installed I 
completed the setup for AWS configuration.  Everything is running.  I am able 
to get to the actual openshift instance but I am not able to hit the service

Re: integrated docker registry

2016-05-28 Thread Skarbek, John
On May 28, 2016 at 13:07:23, Alan Jones 
(ajo...@diamanti.com) wrote:
Friends,
I'm trying to deploy an integrated docker registry for OpenShift 3.2.
The instructions I'm trying to follow are:
https://docs.openshift.com/enterprise/3.2/install_config/install/docker_registry.html
The example command has environment variables for the OpenShift Enterprise 
image component and version:
$ oadm registry --config=/etc/origin/master/admin.kubeconfig \
    --credentials=/etc/origin/master/openshift-registry.kubeconfig \
    --images='registry.access.redhat.com/openshift3/ose-${component}:${version}'
My guess is the version string is a docker tag, like "v3.2".
I tried component "ose-docker-registry" looking at some old example on the web.
Does anyone know what to put in the component and version here?

You don’t need to substitute these at all; the command you posted would end up 
interpreted by openshift and plug in the correct values.  My suggestion would 
be to run the command without the images option.  Unless you need a custom 
registry, the default option would work just fine and pull the correct version 
of the regsitry associated with your openshift version.

If, by chance, you need to specify this for some reason, the component is "docker-registry", and the version would be exactly what you stated, "v3.2", resulting in `--images='registry.access.redhat.com/openshift3/ose-docker-registry:v3.2'`
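
In other words, the default deployment, letting openshift resolve the registry image and version on its own, is simply:

oadm registry --config=/etc/origin/master/admin.kubeconfig \
    --credentials=/etc/origin/master/openshift-registry.kubeconfig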

Alan

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Order of Deploy/Template Configuration

2016-05-25 Thread Skarbek, John
Good Morning,

Is there some concept of parenting, or timing, or simply ordering of items when 
building a template configuration? Specifically around first time deploys.

I’ve got a multi service application, where the head honcho service requires 
prior services to be up and running. Thus far, I’ve modified the start script 
for the images to simply do a check and wait then repeat for a long time. I 
worry as I continue this, it’ll end up hurting us in the long run.

I’m wondering if I can tell the template that app B depends on app A, such that 
it’ll wait to deploy app B until after the deploy and readiness checks are 
complete on app A.
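
For reference, the check-and-wait currently baked into the start scripts looks roughly like this (the service name, port, and endpoint are illustrative):

# block app B's startup until app A answers its readiness endpoint
until curl -sf http://app-a:8080/healthz; do
  echo "waiting for app-a to become ready..."
  sleep 5
done
exec /usr/local/bin/start-app-b   # hypothetical entrypoint for app B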


--
John Skarbek
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Deploy Failure

2016-05-20 Thread Skarbek, John
Anyone got any tips on troubleshooting this:

In the events log:

Deployment Config   Warning Failed update   Error updating deployment 
default/router-14 status to Pending


And in the log from the deployer pod:

oc logs router-14-deploy
I0520 20:55:31.651525   1 deployer.go:201] Deploying from default/router-10 
to default/router-14 (replicas: 3)
I0520 20:55:32.769105   1 rolling.go:228] RollingUpdater: Continuing update 
with existing controller router-14.
I0520 20:55:32.769140   1 rolling.go:228] RollingUpdater: Scaling up 
router-14 from 0 to 3, scaling down router-10 from 3 to 0 (keep 2 pods 
available, don't exceed 3 pods)
I0520 20:55:33.808723   1 rolling.go:228] RollingUpdater: Scaling router-10 
down to 2
I0520 20:55:35.916581   1 rolling.go:228] RollingUpdater: Scaling router-14 
up to 1
F0520 20:55:43.260379   1 deployer.go:69] Get 
https://172.30.0.1:443/api/v1/namespaces/default/pods?labelSelector=deployment%3Drouter-10%2Cdeploymentconfig%3Drouter%2Crouter%3Dtrue:
 read tcp 172.30.0.1:443: connection reset by peer


I find it hard to believe he had a problem talking to kubernetes. I'm able to deploy other things without issues. This problem appears to be very sporadic, but enough to annoy me.


--
John Skarbek
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Haproxy Routing Balance Implementation Question

2016-05-20 Thread Skarbek, John
Good Morning,

We have an application which terminates its own SSL, therefore we utilize TLS passthrough in the route configuration. This is our preferred method of communicating with this particular application. This forces haproxy to operate in tcp mode, in which the balance method is hard coded to source [1].

The problem comes in that we've got a front door to openshift, so all traffic hits a load balancer external to openshift. Because balance source sees the IP of that external load balancer, all traffic gets routed to the same pod.

Looking through various PR's and commits I cannot find the reason why source was chosen, but did see hints last year of this being touched later on. Would anyone be able to share why this particular balance type was chosen? roundrobin seems like a better choice for us in our particular situation. I also feel like having an external load balancer in front of openshift is not uncommon, and would love to see this be configurable (a rough workaround sketch follows the references below).

  *   [1] 
https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L237-L242
  *   [2] https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#balance
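
The workaround we're weighing is a custom router image carrying an edited template; a rough sketch, assuming the stock template sits at the path used by the origin router image (the pod name is illustrative):

# copy the stock template out of a running router pod
oc exec router-1-abcde -- cat /var/lib/haproxy/conf/haproxy-config.template > haproxy-config.template
# swap the hard-coded balance for roundrobin in the tcp passthrough backends
sed -i 's/balance source/balance roundrobin/' haproxy-config.template
# layer the edited template onto the stock router image
cat > Dockerfile <<'EOF'
FROM openshift/origin-haproxy-router
COPY haproxy-config.template /var/lib/haproxy/conf/haproxy-config.template
EOF
docker build -t myregistry.example.com/custom-router:latest .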


--
John Skarbek
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Grant access for a user authenticated with an identity provider to the namespace/project default

2016-05-20 Thread Skarbek, John
Charles,

You’ve created a new user in the system, and by default he’s not going to 
inherit any permissions. You’ll need to add a role to the user to access any 
projects. A command such as this should provide you admin access to the default 
project:

oc policy add-role-to-user admin admin -n default


That command would need to be run by a user that already has access to manage 
users/policies.

https://docs.openshift.org/latest/admin_guide/manage_users.html 
https://docs.openshift.org/latest/admin_guide/manage_authorization_policy.html
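
To verify the binding took effect, the who-can query can be turned around on any verb/resource the admin role covers, for example:

oadm policy who-can delete pods -n default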


--
John Skarbek


On May 20, 2016 at 07:26:12, Charles Moulliard 
(cmoul...@redhat.com) wrote:

Hi,

I have configured Openshift Origin (version of 18 May 2016) with an external identity provider. The user (admin/admin) can be authenticated and I get an openshift token that I can use with the oc client.

Example :

oc login https://192.168.99.100:8443 --token=g-4GsryPAdD6kttH6JV295xr3exXr46IsKtZjLt0gx4
Logged into "https://192.168.99.100:8443" as "admin" using the token provided.

You don't have any projects. You can try to create a new project, by running

$ oc new-project 

As we can see, I'm connected and authenticated to the platform but no projects 
are assigned to the user 'admin'

If I try to access the project default or create it, then that fails

./oc project default
error: You are not a member of project "default".

./oc new-project default
Error from server: project "default" already exists

What should I do to get/access the projects ?

Regards,

Charles
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Error updating deployment [deploy] status to Pending

2016-05-19 Thread Skarbek, John
Philippe,

Is the node in a Ready state?

The log output you posted makes it seem like something isn’t working properly 
if it keeps reading a config file over and over.

Are you able to start pods that do not utilize a PV?


--
John Skarbek


On May 19, 2016 at 16:43:16, Philippe Lafoucrière 
(philippe.lafoucri...@tech-angels.com) wrote:

If I make this node unschedulable, I get an event: "node 'openshift-node-1' is not in cache".
Does this ring a bell? (and the pod is still pending)
For the record, there's a PV used in this pod, and all pods have the same behavior now on this cluster. Only a restart of origin-node can unlock them.
Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Jenkins setup for OpenShift

2016-05-13 Thread Skarbek, John
On May 13, 2016 at 04:02:28, Den Cowboy 
(dencow...@hotmail.com) wrote:
In that file is something like:

{
  "auths": {
    "172.30.xx.xx:5000": {
      "auth": "XXX",
      "email": "a...@mail.com"
    }
  }
}

What do I need from that:
I use the docker build-publish plugin:
This is my configuration at the moment:

Repository name: openshift/myimage (not sure)
tag: latest
Docker host url: tcp://172.31.xx.xx:2375
server credentials: none (I'm able to use docker)
docker registry url: https://docker-registry.xx.xx-xx.com
Registry credentials: still to add (can add, username and password or username 
and cert (no passwd), etc ..)

You’ll want the blob of text at the `auth` section of that file.  That is the 
password.


My registry is exposed. I'm able to 'visit' it in my browser (empty page, but it accepts the certificate and https).
Where do I have to put my ca.crt on openshift? Is it in /etc/docker/certs/172.30.xx:5000 etc, or /etc/docker/certs/docker-registry.xx.xx-xx.com:5000, or /etc/docker/certs/docker-registry.xx.xx-xx.com:8443?

When you visit the registry using your browser, what url did you utilize?  I suspect it would've been https://docker-registry.xx.xx-xx.com.  If that is the case, it should be placed in /etc/docker/certs.d/docker-registry.xx.xx-xx.com/ca.crt
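
Concretely, that would be something along these lines on the docker host (the hostname is yours; docker reads certs.d per connection, so a daemon restart shouldn't be needed):

sudo mkdir -p /etc/docker/certs.d/docker-registry.xx.xx-xx.com
sudo cp ca.crt /etc/docker/certs.d/docker-registry.xx.xx-xx.com/ca.crt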

From: bpar...@redhat.com
Date: Thu, 12 May 2016 10:03:23 -0400
Subject: Re: Jenkins setup for OpenShift
To: dencow...@hotmail.com
CC: john.skar...@ca.com; users@lists.openshift.redhat.com



On Thu, May 12, 2016 at 4:40 AM, Den Cowboy 
(dencow...@hotmail.com) wrote:
Thanks for the replies.
We can use some docker plugins to build our images. But the main problem 
remains the login into our registry from our external jenkins.

We don't have experience with dockercfg but it seems like an option.
All the Docker plugins give the option to describe the registry and a key, which is fine. But for the openshift registry we still need that token, which is only available after authenticating on openshift itself. Is it possible to configure this in dockercfg?


​if you manually do a docker login to the openshift registry (using the 
openshift token)​, your dockercfg should get populated w/ the key you need to 
supply to the jenkins docker plugin, just take a look at the dockercfg after 
you've done a docker login.  (~/.docker/config.json or ~/.dockercfg)



We're searching for the best way to push an image from an external jenkins to 
the OpenShift registry:


Date: Wed, 11 May 2016 10:44:07 -0400
Subject: Re: Jenkins setup for OpenShift
From: bpar...@redhat.com
To: john.skar...@ca.com
CC: dencow...@hotmail.com; users@lists.openshift.redhat.com



You can also just supply a dockercfg file that already has the right 
credentials in it, just make that file available to your Jenkins job.

Ben Parees | OpenShift

On May 11, 2016 9:30 AM, "Skarbek, John" 
(john.skar...@ca.com) wrote:
On May 11, 2016 at 08:46:18, Den Cowboy 
(dencow...@hotmail.com) wrote:
We are using a Jenkins server which isn't running on openshift.
The main goal at the moment is:
- Get dockerfile out of our git
- Build image
- Push image to OpenShift Docker Registry

We have the dockerfile on our system. We can use docker commands in our Jenkins.
At the moment we are building our images like this:

cd folder/
docker build -t 172.30.xx.xx:5000/image:latest  .

So we have our image. Now we need to push our image to our OpenShift Registry.
We have 2 big issues:

1) Our first issue/question: Do we need to authenticate on our OpenShift environment (to get the necessary token for the next step) and if so, is there a more efficient way than this?:

Re: Jenkins setup for OpenShift

2016-05-11 Thread Skarbek, John
On May 11, 2016 at 08:46:18, Den Cowboy 
(dencow...@hotmail.com) wrote:
We are using a Jenkins server which isn't running on openshift.
The main goal at the moment is:
- Get dockerfile out of our git
- Build image
- Push image to OpenShift Docker Registry

We have the dockerfile on our system. We can use docker commands in our Jenkins.
At the moment we are building our images like this:

cd folder/
docker build -t 172.30.xx.xx:5000/image:latest  .

So we have our image. Now we need to push our image to our OpenShift Registry.
We have 2 big issues:

1) Our first issue/question: Do we need to authenticate on our OpenShift environment (to get the necessary token for the next step) and if so, is there a more efficient way than this?:
prereq: install oc tools on jenkins
oc login -u user -p password https://ec2-xx-xx-xx-xx-xx-1.compute.amazonaws.com:8443 --certificate-authority='/path/to/ca.crt'

We have our ca.crt of our OpenShift stored in a folder on our Jenkins (manually placed on the server...)

You’ll need to get the credentials required to log into the docker registry 
somehow.  And there are options for completing this.

In our environment, we configure a service account for this exact process.  And 
when we build the jenkins server, we’ve got a play that’ll pull the docker 
config secret from the service account and push it into jenkins appropriately.

In your case, it sounds like you are doing this manually; simply grab the credentials from your service account. Look for the associated secret for the docker config. oc get secrets will list the available secrets, and you should see the secret associated with the service account labeled something along the lines of -dockercfg-. Run an oc describe secret -dockercfg- and it'll output the huge preconfigured password for that service account, which you can use to log into the docker registry.

https://docs.openshift.org/latest/dev_guide/service_accounts.html#managing-service-account-credentials

There are some service accounts created per project automatically that you may be able to use, to get away without creating one:

https://docs.openshift.org/latest/admin_guide/service_accounts.html#managed-service-accounts
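
As a rough illustration of pulling that credential out by hand (the secret name below is made up; yours will differ):

# spot the dockercfg secret tied to the service account
oc get secrets | grep dockercfg
# dump it and decode the .dockercfg payload into a usable file
oc get secret application-robot-dockercfg-abc12 -o yaml \
  | grep '\.dockercfg:' | awk '{print $2}' | base64 -d > dockercfg.json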


2) Our second issue is related to the first. It seems strange behaviour to "login" on your openshift from your jenkins and perform the steps from there.

# authenticate for OpenShift Registry
docker login -u user -e a...@mail.com \
-p `oc whoami -t` 172.30.xx.xx:5000

# push image to our registry
docker push 172.30.xx.xx:5000/dev/image:latest



You don’t need to log into openshift in order to push to the registry.  But one 
MUST log into the docker registry before pushing.  Without logging in, the 
docker registry will more than likely deny your request to push.

As a secondary note, to prevent jenkins from sending that huge password in 
clear text to the console, you can do something like this in the jenkins job:

docker build $image .
(set +x; docker login -u nobody -e nob...@nobody.com -p $token $registry)
docker push $registry/$image


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: overwrite parameteres (env) of template

2016-04-27 Thread Skarbek, John
Den,

You are passing the incorrect flags. The templates don’t use the -e flag, but 
rather the --param flag.  Something like this should work:

```
oc new-app mysql-ephemeral \
  --param=MYSQL_USER=activiti \
  --param=MYSQL_PASSWORD=activiti \
  --param=MYSQL_DATABASE=activiti_production
```


--
John Skarbek


On April 27, 2016 at 08:02:11, Den Cowboy 
(dencow...@hotmail.com) wrote:

mysql-ephemeral
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: pod deployment error: couldn't get deployment: dial tcp 172.30.0.1:443: getsockopt: no route to host

2016-04-19 Thread Skarbek, John
Isn’t flushing iptable rules a dangerous option? I thought iptables was heavily 
utilized for destination NAT’ing for the kube service…


--
John Skarbek


On April 19, 2016 at 00:23:39, v (vekt...@gmx.net) 
wrote:

Hey,

I'd try to disable all firewall rules and then see if the error message is 
still there.
For example:
iptables -F
iptables -t nat -F
systemctl restart origin-master origin-node docker openvswitch

Note that all iptables chains have to be set to policy "accept" for this to 
work.
"No route to host" can be caused by "--reject-with icmp-host-prohibited" so you 
can try looking for that in your firewall config too.

Regards,
v

On 2016-04-19 at 07:38, Sebastian Wieseler wrote:
> Hi Clayton,
> Thanks for your reply.
>
> I opened now the firewall and have only the iptables rules from ansible in 
> place.
> 4789 UDP is open for the OVS as I saw.
>
> I ran ansible again and deployed the pod without any success.
> Restarting the OVS daemon everywhere in the masters,nodes doesn’t help either.
>
> What’s the procedure to get it fixed?
> Thanks again in advance.
>
> Greetings,
> Sebastian
>
>
>> On 19 Apr 2016, at 12:06 PM, Clayton Coleman  wrote:
>>
>> This is very commonly a misconfiguration of the network firewall rules
>> and the Openshift SDN. Pods attempt to connect over OVS bridges to
>> the masters, and the OVS traffic is carried over port 4789 (I think
>> that's the port, you may want to double check).
>>
>> https://access.redhat.com/documentation/en/openshift-enterprise/3.1/cluster-administration/chapter-17-troubleshooting-openshift-sdn
>>
>> Covers debugging network configuration issues
>>
>>> On Apr 18, 2016, at 11:28 PM, Sebastian Wieseler 
>>>  wrote:
>>>
>>> Hi community,
>>> We’re having difficulties to deploy pods.
>>> Our setup includes three masters plus three nodes.
>>>
>>> If we deploy a pod in the default project on a master, everything works 
>>> fine.
>>> But when we’re deploying it on a node, we’re getting STATUS Error for the 
>>> pod and the log shows:
>>> F0418 09:07:26.429738 1 deployer.go:70] couldn't get deployment
>>> project/pod-1: Get https://172.30.0.1:443/api/v1/namespaces/project/replicationcontrollers/pod-1:
>>> dial tcp X.X.X.X:443: getsockopt: no route to host
>>>
>>> 172.30.0.1 is the default address for kubernetes.
>>> If I execute curl https://172.30.0.1:443/api/v1/namespaces/project/replicationcontrollers/pod-1
>>> on the master or on the nodes, I'll get a valid response.
>>>
>>> How come the pod doesn’t have a route? I couldn’t find much in the logs.
>>> First I thought it’s a firewall issue, but even with "allow any" it doesn’t 
>>> work.
>>>
>>> Our syslog is also full of these messages, on master and nodes:
>>>
>>> Apr 19 03:15:24 localhost atomic-openshift-master-api: I0419 
>>> 03:15:24.578086 32022 iowatcher.go:103] Unexpected EOF during watch stream 
>>> event decoding: unexpected EOF
>>> Apr 19 03:15:24 localhost atomic-openshift-master-api: I0419 
>>> 03:15:24.947147 32022 iowatcher.go:103] Unexpected EOF during watch stream 
>>> event decoding: unexpected EOF
>>> Apr 19 03:15:24 localhost atomic-openshift-master-api: I0419 
>>> 03:15:24.948047 32022 iowatcher.go:103] Unexpected EOF during watch stream 
>>> event decoding: unexpected EOF
>>> Apr 19 03:15:24 localhost atomic-openshift-master-api: I0419 
>>> 03:15:24.948076 32022 iowatcher.go:103] Unexpected EOF during watch stream 
>>> event decoding: unexpected EOF
>>> Apr 19 03:15:25 localhost atomic-openshift-master-api: I0419 
>>> 03:15:25.576047 32022 iowatcher.go:103] Unexpected EOF during watch stream 
>>> event decoding: unexpected EOF
>>> Apr 19 03:15:26 localhost atomic-openshift-master-api: I0419 
>>> 03:15:26.207263 32022 iowatcher.go:103] Unexpected EOF during watch stream 
>>> event decoding: unexpected EOF
>>> Apr 19 03:15:27 localhost origin-master-controllers: I0419 03:15:27.947460 
>>> 51283 iowatcher.go:103] Unexpected EOF during watch stream event decoding: 
>>> unexpected EOF
>>> Apr 19 03:15:28 localhost origin-master-controllers: I0419 03:15:28.580092 
>>> 51283 iowatcher.go:103] Unexpected EOF during watch stream event decoding: 
>>> unexpected EOF
>>> Apr 19 03:15:28 localhost origin-master-controllers: I041

Re: OpenShift version 1.1.6

2016-04-12 Thread Skarbek, John
Den,

This repo is indeed separate from all things origin. If you run this 3 months 
from now on a brand new cluster, it’ll pull the latest version of openshift 
available.

In order to pin the version of openshift that is installed you could throw this 
in your inventory file:

openshift_pkg_version=-1.1.6


If, however, you run the openshift installer on the same cluster three months 
from now, assuming all is well, it will not proceed to rebuild the cluster 
using a differing version of openshift.
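
For illustration, the pin sits with the rest of the cluster variables in the ansible inventory, e.g.:

[OSEv3:vars]
# pin the package version installed by openshift-ansible
openshift_pkg_version=-1.1.6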


--
John Skarbek


On April 12, 2016 at 02:20:13, Den Cowboy 
(dencow...@hotmail.com) wrote:

Hi,

We have a POC environment of Origin 1.1.6
We've pulled the ansible repo and created it. Now is my question:
Is this an independent repository (so when we will create a new cluster in 3 
moths, will it still be version 1.1.6?) or does it use external resources 
(which aren't in the repo) which could effect the expected version of the 
cluster?
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Router Pod stuck at pending

2016-04-08 Thread Skarbek, John
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Events:
  FirstSeen   LastSeen   Count   From                             SubobjectPath   Type      Reason              Message
  ---------   --------   -----   ----                             -------------   ----      ------              -------
  4m          4m         1       {deploymentconfig-controller }                   Normal    DeploymentCreated   Created new deployment "router-1" for version 1
  4m          4m         1       {deployer }                                      Warning   FailedUpdate        Error updating deployment openshift/router-1 status to Pending


On Thu, Apr 7, 2016 at 12:50 PM, Skarbek, John 
(john.skar...@ca.com) wrote:

Hello,

I ponder if there’s an issue with the labels being utilized by the nodes and 
the pods. Can you run the following command: oc get nodes --show-labels

And then an: oc describe dc router


--
John Skarbek


On April 7, 2016 at 04:26:37, Mfawa Alfred Onen 
(muffycomp...@gmail.com) wrote:

So I enabled scheduling as you pointed out but still no luck:

oc get nodes

NAME               STATUS    AGE
master.dev.local   Ready     8d
node1.dev.local    Ready     8d
node2.dev.local    Ready     8d

oc get pods

docker-registry-2-pbvcf   1/1   Running   0  10h
router-1-bk55a            0/1   Pending   0  1s
router-1-deploy   1/1   Running   0  4s

oc describe pod router-1-bk55a


Events:
  FirstSeen   LastSeen   Count   From                   SubobjectPath   Type      Reason             Message
  ---------   --------   -----   ----                   -------------   ----      ------             -------
  1m          1m         1       {default-scheduler }                   Warning   FailedScheduling   pod (router-1-bk55a) failed to fit in any node
fit failure on node (master.dev.local): PodFitsPorts
fit failure on node (node1.dev.local): Region
fit failure on node (node2.dev.local): MatchNodeSelector

  1m          1m         1       {default-scheduler }                   Warning   FailedScheduling   pod (router-1-bk55a) failed to fit in any node
fit failure on node (node2.dev.local): MatchNodeSelector
fit failure on node (master.dev.local): PodFitsPorts
fit failure on node (node1.dev.local): MatchNodeSelector

  1m          1m         2       {default-scheduler }                   Warning   FailedScheduling   pod (router-1-bk55a) failed to fit in any node
fit failure on node (master.dev.local): PodFitsPorts
fit failure on node (node1.dev.local): Region
fit failure on node (node2.dev.local): Region

  47s         47s        1       {default-scheduler }                   Warning   FailedScheduling   pod (router-1-bk55a) failed to fit in any node
fit failure on node (node1.dev.local): Region
fit failure on node (node2.dev.local): Region
fit failure on node (master.dhcpaas.com): PodFitsPorts

  1m          15s        2       {default-scheduler }                   Warning   FailedScheduling   pod (router-1-bk55a) failed to fit in any node
fit failure on node (master.dev.local): PodFitsPorts
fit failure on node (node1.dev.local): MatchNodeSelector
fit failure on node (node2.dev.local): Region


Regards!


On Thu, Apr 7, 2016 at 8:01 AM, Tobias Florek 
(opensh...@ibotty.net) wrote:
Hi.

I assume your router does not get scheduled on master.dev.local, because
scheduling is disabled there:

> *1. oc get nodes*
>
> NAME STATUS AGE
> master.dev.local   Ready,SchedulingDisabled   8d

Run

oadm manage-node master.dev.local --schedulable=true

to enable pods to run on your master.

Cheers,
 Tobias Florek



--
Mfawa Alfred Onen
System Administrator / GDG Lead, Bingham University
Department of Computer Science,
Bingham University.

E-Mail: muffycomp...@gmail.com
Phone1: +234 805 944 3154
Phone2: +234 803 079 6088
Twitter: @muffycompo
Google+: https://plus.google.com/+MfawaAlfredOnen

Re: Router Pod stuck at pending

2016-04-07 Thread Skarbek, John
Hello,

I ponder if there’s an issue with the labels being utilized by the nodes and 
the pods. Can you run the following command: oc get nodes --show-labels

And then an: oc describe dc router


--
John Skarbek


On April 7, 2016 at 04:26:37, Mfawa Alfred Onen 
(muffycomp...@gmail.com) wrote:

So I enabled scheduling as you pointed out but still no luck:

oc get nodes

NAME               STATUS    AGE
master.dev.local   Ready     8d
node1.dev.local    Ready     8d
node2.dev.local    Ready     8d

oc get pods

docker-registry-2-pbvcf   1/1   Running   0  10h
router-1-bk55a            0/1   Pending   0  1s
router-1-deploy   1/1   Running   0  4s

oc describe pod router-1-bk55a


Events:
  FirstSeen   LastSeen   Count   From                   SubobjectPath   Type      Reason             Message
  ---------   --------   -----   ----                   -------------   ----      ------             -------
  1m          1m         1       {default-scheduler }                   Warning   FailedScheduling   pod (router-1-bk55a) failed to fit in any node
fit failure on node (master.dev.local): PodFitsPorts
fit failure on node (node1.dev.local): Region
fit failure on node (node2.dev.local): MatchNodeSelector

  1m          1m         1       {default-scheduler }                   Warning   FailedScheduling   pod (router-1-bk55a) failed to fit in any node
fit failure on node (node2.dev.local): MatchNodeSelector
fit failure on node (master.dev.local): PodFitsPorts
fit failure on node (node1.dev.local): MatchNodeSelector

  1m          1m         2       {default-scheduler }                   Warning   FailedScheduling   pod (router-1-bk55a) failed to fit in any node
fit failure on node (master.dev.local): PodFitsPorts
fit failure on node (node1.dev.local): Region
fit failure on node (node2.dev.local): Region

  47s         47s        1       {default-scheduler }                   Warning   FailedScheduling   pod (router-1-bk55a) failed to fit in any node
fit failure on node (node1.dev.local): Region
fit failure on node (node2.dev.local): Region
fit failure on node 
(master.dhcpaas.com):
 PodFitsPorts

  1m          15s        2       {default-scheduler }                   Warning   FailedScheduling   pod (router-1-bk55a) failed to fit in any node
fit failure on node (master.dev.local): PodFitsPorts
fit failure on node (node1.dev.local): MatchNodeSelector
fit failure on node (node2.dev.local): Region


Regards!


On Thu, Apr 7, 2016 at 8:01 AM, Tobias Florek 
(opensh...@ibotty.net) wrote:
Hi.

I assume your router does not get scheduled on master.dev.local, because
scheduling is disabled there:

> *1. oc get nodes*
>
> NAME STATUS AGE
> master.dev.local   Ready,SchedulingDisabled   8d

Run

oadm manage-node master.dev.local --schedulable=true

to enable pods to run on your master.

Cheers,
 Tobias Florek



--
Mfawa Alfred Onen
System Administrator / GDG Lead, Bingham University
Department of Computer Science,
Bingham University.

E-Mail: muffycomp...@gmail.com
Phone1: +234 805 944 3154
Phone2: +234 803 079 6088
Twitter: 
@muffycompo
Google+: 
https://plus.google.com/+MfawaAlfredOnen
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: policy for openshift user who can only push to openshift registry.

2016-03-22 Thread Skarbek, John
I now remember why I didn't use this role. The image-pusher role doesn't have the ability to also create an image stream, hence my use of the edit role. If there were a policy strictly for creating image streams, I could possibly combine that and the image-pusher into a role that works for my use case (a rough sketch follows below).
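
For the record, a rough sketch of what such a combined role might look like, modeled loosely on the image-pusher rules (untested; the resource names and verbs are assumptions against this release):

oc create -f - <<'EOF'
apiVersion: v1
kind: ClusterRole
metadata:
  name: image-stream-pusher
rules:
- resources: ["imagestreams"]
  verbs: ["get", "list", "create"]
- resources: ["imagestreams/layers"]
  verbs: ["get", "update"]
EOF
# bind it to the robot account within the target namespace only
oc policy add-role-to-user image-stream-pusher \
  system:serviceaccount:default:application_robot -n default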


--
John Skarbek


On March 18, 2016 at 08:10:52, David Eads 
(de...@redhat.com) wrote:

We created `system:image-pusher` back in 1.1.1 (https://github.com/openshift/origin/releases/tag/v1.1.1) with https://github.com/openshift/origin/pull/5962.  Check to make sure that your policy is up to date: `oadm policy reconcile-cluster-roles`.  By default that makes no changes.  If you approve of the changes it wants to make, you can use `--confirm`.

On Fri, Mar 18, 2016 at 7:17 AM, Skarbek, John 
(john.skar...@ca.com) wrote:

I would love to know a good answer to this as well.

Currently we create a service account called application_robot, similar to their documentation; this robot is dedicated to the appropriate namespace and is applied via the example: system:serviceaccount:default:application_robot.

Our automation rips out that user's auth token and throws it in a jenkins job. This allows us to log into the exposed docker registry using that token. It's a service account, so the auth should last forever. This bypasses the need to log into openshift as you currently do.

But regarding your original question: I think even in my solution the robot account still has too much permission in the namespace, as I only want him to push, but thus far it gets the job done.


--
John Skarbek


On March 18, 2016 at 05:17:44, Lorenz Vanthillo 
(lorenz.vanthi...@outlook.com) wrote:

Hi,

We have an origin 1.1.3 environment which is running a Jenkins CI-server.
In a Jenkins job we're performing the following:

- authenticate in OpenShift env to get token
- login into openshift docker registry
- push image into registry

We don't really like the part where we need to authenticate in our OpenShift environment.
At the moment jenkins is authenticating with a user with the cluster-admin role.
But we want to create an OpenShift user who's only able to push an image to a registry.
Which policy do we have to give?

We checked https://docs.openshift.com/enterprise/3.1/admin_guide/manage_authorization_policy.html
There is a system:image-puller but nothing about pushing

Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: DockerBuild Vs STI

2016-03-19 Thread Skarbek, John
Srinivas,

I’d like to throw another option your way.

Eclipse -> Git -> Jenkins to build and create artifacts -> Jenkins Docker Plug-in to create image -> push image to the built-in openshift docker registry

Something you’ll need before the above pipeline, is a configuration already in 
place on the openshift cluster (deployment config, pointing to the built-in 
registry, service, route, etc…) and the build-in registry needs to be exposed.

We are using this in my environment and it works well. We are using our own 
existing build processes that we’ve already had in place without any extra 
unnecessary work, and simply plugging in openshift to complete the deployment. 
Sending an image to the internal registry is essentially what the openshift 
native build process does; therefore, one is able to abuse the native 
extensions that openshift has built to complete deployments in an automated 
fashion.

We do, however, utilize multiple openshift clusters. One for testing and one 
for prod as an example. So we simply have two deploy jobs that get kicked off 
appropriately. If by chance you utilize the same cluster for both dev and prod, 
you can probably take the above model and utilize tagging appropriately to send 
the final deployment image into production.

We like this as we are able to integrate our existing large and ugly pipelines 
that have already been fine tuned to our liking. It also allows our deployment 
engineers to continue the path they already do without having to learn a new 
system. The downside I see to this is that we may be missing out on some features that openshift may provide in their building process. It also forces some undue troubleshooting, as we are building our own docker containers, which is a little difficult for our developers to test with locally. Things we've had to deal with are documented well within openshift, and most of it concerns the use of SCCs and image file permissions. Though, thus far, it hasn't been terrible.


--
John Skarbek


On March 18, 2016 at 02:57:28, Srinivas Naga Kotaru (skotaru) 
(skot...@cisco.com) wrote:

We’re thinking what is the best approach for our code deployment and promotion.

This is our proposed flow for each approach

Docker build: (Outside of Openshift)
==

Eclipse -> Git -> Jenkins to build and create artifacts -> Jenkins Docker Plug-in to create image and push to corporate repo -> oc import-image and oc deploy --latest

Basically, build & image creation happen outside of Openshift.

OpenShift native:
==

Eclipse -> GIT -> Jenkins to build artifacts -> OC binary deploy by CI/CD tool against each app, as CI/CD has admin access to each project

We have 2 choices here: a) binary build for each life cycle, or b) build for the dev life cycle and promote using docker tag and push to the other life cycles. Option B makes more sense naturally.

This approach uses native openshift for builds and deployments, also using openshift internal registries to store the final build images for each life cycle.

Can you comment on the pros and cons of each, from scaling (hundred thousand deployments) as well as ease of operation and maintenance? Whatever the approach, it should be repeatable and reliable without errors, since we will automate everything as part of the CI/CD pipeline.
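
A sketch of what option (b) might look like in a CI job, assuming both lifecycles expose their registries (all names below are illustrative):

# promote the image blessed in dev by retagging it for prod
docker pull dev-registry.example.com:5000/myproject/myapp:latest
docker tag dev-registry.example.com:5000/myproject/myapp:latest \
    prod-registry.example.com:5000/myproject/myapp:latest
docker push prod-registry.example.com:5000/myproject/myapp:latest
# then roll the prod deployment onto the freshly pushed image
oc deploy myapp --latest -n myproject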

Thanks in advance; feedback appreciated.

--
SrinivasKotaru
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: policy for openshift user who can only push to openshift registry.

2016-03-19 Thread Skarbek, John
I would love to know a good answer to this as well.

Currently we create a service account called application_robot, similar to their documentation; this robot is dedicated to the appropriate namespace and is applied via the example: system:serviceaccount:default:application_robot.

Our automation rips out that user's auth token and throws it in a jenkins job. This allows us to log into the exposed docker registry using that token. It's a service account, so the auth should last forever. This bypasses the need to log into openshift as you currently do.

But regarding your original question: I think even in my solution the robot account still has too much permission in the namespace, as I only want him to push, but thus far it gets the job done.


--
John Skarbek


On March 18, 2016 at 05:17:44, Lorenz Vanthillo 
(lorenz.vanthi...@outlook.com) wrote:

Hi,

We have an origin 1.1.3 environment which is running a Jenkins CI-server.
In a Jenkins job we're performing the following:

- authenticate in OpenShift env to get token
- login into openshift docker registry
- push image into registry

We don't really like the part where we need to authenticate in our OpenShift environment.
At the moment jenkins is authenticating with a user with the cluster-admin role.
But we want to create an OpenShift user who's only able to push an image to a registry.
Which policy do we have to give?

We checked 
https://docs.openshift.com/enterprise/3.1/admin_guide/manage_authorization_policy.html
There is a system:image-puller but nothing about pushing

Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Openshift Routing Haproxy Logging

2016-03-19 Thread Skarbek, John
Good Morning,

Anyone have any advice on plucking the access logs out of the haproxy router?

I'm pushing a TLS feature, and while I love the fact that I get 502 responses, 
at this moment I have zero method to debug this.

My guess is that I need to create a custom haproxy image to add some ability to 
log to some location. The haproxy config running in the container currently 
doesn’t appear to do any logging whatsoever.


--
John Skarbek
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


openshift-ansible release cycle

2016-03-14 Thread Skarbek, John
Hi,

So quick question. What determines when you guys do a release on the 
openshift-ansible repo? There are fixes in master that haven’t been released 
yet. Looking at the history, there’s no pattern. Thank you.


--
John Skarbek
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Kubernetes Update Cadence

2016-03-14 Thread Skarbek, John
Hello!

Is there a particular cadence by which you guys choose to update kubernetes 
for openshift?

I'm pondering hopping on board with the spread 
thing, and there's a 
bug in the version of kubernetes 
currently used with openshift.


--
John Skarbek
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Serious docker upgrade problem -> 1.8 -> 1.9 update breaks system

2016-03-09 Thread Skarbek, John
Andy,

In my case I’m running CentOS7 latest.


--
John Skarbek


On March 9, 2016 at 07:45:59, Andy Goldstein 
(agold...@redhat.com) wrote:

What OS - Fedora/Centos/RHEL?

John/Dan, PTAL as this might be related to forward-journald.

Andy

On Wed, Mar 9, 2016 at 7:44 AM, Skarbek, John 
(john.skar...@ca.com) wrote:

Andy,

David had already filed an 
issue: https://github.com/openshift/openshift-ansible/issues/1573

Mar 09 12:40:07 js-router-001.ose.bld.f4tech.com systemd[1]: Starting Docker 
Application Container Engine...
Mar 09 12:40:07 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
Forwarding stdin to journald using Priority Informational and tag docker
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.104822274Z" level=info msg="Firewalld running: false"
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.136388972Z" level=info msg="Default bridge (docker0) 
is assigned with an IP address 172.17.0.1/16. Daemon option --bip can be used 
to set a preferred IP address"
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.183636904Z" level=info msg="Loading containers: 
start."
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]:
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.183842066Z" level=info msg="Loading containers: done."
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.184069850Z" level=info msg="Daemon has completed 
initialization"
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.184116853Z" level=info msg="Docker daemon" 
commit="185277d/1.9.1" execdriver=native-0.2 graphdriver=devicemapper 
version=1.9.1
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.193532957Z" level=info msg="API listen on 
/var/run/docker.sock"
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com systemd[1]: docker.service: 
Got notification message from PID 18499, but reception only permitted for main 
PID 18498

Re: Serious docker upgrade problem -> 1.8 -> 1.9 update breaks system

2016-03-09 Thread Skarbek, John
Andy,

David had already filed an 
issue: https://github.com/openshift/openshift-ansible/issues/1573

Mar 09 12:40:07 js-router-001.ose.bld.f4tech.com systemd[1]: Starting Docker 
Application Container Engine...
Mar 09 12:40:07 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
Forwarding stdin to journald using Priority Informational and tag docker
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.104822274Z" level=info msg="Firewalld running: false"
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.136388972Z" level=info msg="Default bridge (docker0) 
is assigned with an IP address 172.17.0.1/16. Daemon option --bip can be used 
to set a preferred IP address"
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.183636904Z" level=info msg="Loading containers: 
start."
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]:
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.183842066Z" level=info msg="Loading containers: done."
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.184069850Z" level=info msg="Daemon has completed 
initialization"
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.184116853Z" level=info msg="Docker daemon" 
commit="185277d/1.9.1" execdriver=native-0.2 graphdriver=devicemapper 
version=1.9.1
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com forward-journal[18500]: 
time="2016-03-09T12:40:08.193532957Z" level=info msg="API listen on 
/var/run/docker.sock"
Mar 09 12:40:08 js-router-001.ose.bld.f4tech.com systemd[1]: docker.service: 
Got notification message from PID 18499, but reception only permitted for main 
PID 18498




--
John Skarbek


On March 9, 2016 at 07:41:03, Andy Goldstein 
(agold...@redhat.com) wrote:

What is the output of 'sudo journalctl -u docker -e'?

On Wed, Mar 9, 2016 at 3:38 AM, David Strejc 
(david.str...@gmail.com) wrote:
I don't know where I could find the right person for this issue, so I am trying 
to post it here, as many people are reading this.

Clean installation of Open Shift v3 via ansible is broken by a simple yum 
update, as yum updates Docker from 1.8 to 1.9.1 and Docker no longer starts.

This is the message in the logs:

Mar 09 09:03:45 1.devcloud.cz systemd[1]: docker.service: Got notification 
message from PID 7150, but reception only permitted for main PID 7149


David Strejc
t: +420734270131
e: david.str...@gmail.com

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Serious docker upgrade problem -> 1.8 -> 1.9 update breaks system

2016-03-09 Thread Skarbek, John
I’m seeing the same.

I don’t believe this is specific to Openshift. But perhaps a problem with 
docker’s systemd configuration. Something was added in the docker.service 
config file. Looks as if they added a shell wrapper around starting docker for 
some reason (1.9.x):

ExecStart=/bin/sh -c '/usr/bin/docker daemon $OPTIONS \
  $DOCKER_STORAGE_OPTIONS \
  $DOCKER_NETWORK_OPTIONS \
  $ADD_REGISTRY \
  $BLOCK_REGISTRY \
  $INSECURE_REGISTRY \
  2>&1 | /usr/bin/forward-journald -tag docker'


Where previously (1.8.x) it was simply this:

ExecStart=/usr/bin/docker daemon $OPTIONS \
  $DOCKER_STORAGE_OPTIONS \
  $DOCKER_NETWORK_OPTIONS \
  $ADD_REGISTRY \
  $BLOCK_REGISTRY \
  $INSECURE_REGISTRY


I ponder if there’s a way to pin the docker version in openshift-ansible.
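
Pending an openshift-ansible knob, the version can also be held at the yum layer; a hedged stopgap, assuming the yum-plugin-versionlock package is available:

# Option 1: lock docker at the currently installed version.
$ yum install -y yum-plugin-versionlock
$ yum versionlock docker
# Option 2: exclude docker from updates entirely.
$ echo 'exclude=docker*' >> /etc/yum.conf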


--
John Skarbek


On March 9, 2016 at 03:40:41, David Strejc 
(david.str...@gmail.com) wrote:

I don't know where I could find the right person for this issue, so I am trying 
to post it here, as many people are reading this.

Clean installation of Open Shift v3 via ansible is broken by a simple yum 
update, as yum updates Docker from 1.8 to 1.9.1 and Docker no longer starts.

This is the message in the logs:

Mar 09 09:03:45 1.devcloud.cz systemd[1]: docker.service: Got notification 
message from PID 7150, but reception only permitted for main PID 7149


David Strejc
t: +420734270131
e: david.str...@gmail.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Errors: container "x" in pod/x-1-8vhpi is crash-looping

2016-02-25 Thread Skarbek, John
Lorenz,

The reason for using an arbitrary UID is to prevent the user inside of the 
container from having access to resources outside of the container if somehow 
breached. This includes resources on the host as well as resources accessed by 
other containers.

Since you don’t know what that user is going to be ahead of time, the solution 
would be to make the files needed by the user to be world readable. And if 
necessary world writable.
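
A minimal sketch against the tomcat image from this thread, assuming the files live under /usr/local/tomcat/conf (run during the image build, e.g. in the Dockerfile):

# Make the config world-readable; the capital X adds execute only on
# directories so they can be traversed.
$ chmod -R o+rX /usr/local/tomcat/conf
# Only if the app genuinely must write somewhere, e.g. a logs dir:
$ chmod -R o+rwX /usr/local/tomcat/logs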

I would agree that the change you made is not the greatest, as it allows the 
user specified in the docker image to run, potentially adding a bit of risk to 
the host, which may have a collision with that same username's resources.

If for some reason the container MUST run as a specific user (I've run into a 
couple of these cases), the documentation I linked can assist with that. It 
simply requires an extra bit of work but helps keep things in a safer state; a 
sketch follows.
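
A hedged sketch of that extra bit of work: run the pod under a dedicated service account that is allowed a more permissive SCC (all names are illustrative; older clients may need to create the service account from a YAML manifest instead):

# Create the service account, allow it the anyuid SCC, then point the
# deployment config's pod template at it.
$ oc create serviceaccount useroot
$ oadm policy add-scc-to-user anyuid -z useroot
$ oc patch dc/myapp -p '{"spec":{"template":{"spec":{"serviceAccountName":"useroot"}}}}'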


--
John Skarbek


On February 25, 2016 at 07:09:07, Lorenz Vanthillo 
(lorenz.vanthi...@outlook.com) wrote:

I performed:

1. Edit the restricted SCC:

$ oc edit scc restricted


And changed:

runAsUser:
  type: MustRunAsRange

to

runAsUser:
  type: RunAsAny





But I assume that this is a bad solution, although it's still not very clear 
why OpenShift uses a random user inside a container.



From: lorenz.vanthi...@outlook.com
To: john.skar...@ca.com
CC: users@lists.openshift.redhat.com
Subject: RE: Errors: container "x" in pod/x-1-8vhpi is crash-looping
Date: Thu, 25 Feb 2016 12:11:51 +0100

Hi John,

Thanks for the fast reply.

"Running a container with an arbitrary user ID also has the benefit of ensuring 
that a process which is able to escape the container due to a vulnerability in 
the container framework will not have specific user permissions on the host 
system."

The permissions on the server.xml in the container are: -rw---. 1 root 
root. Hence the permission error in OpenShift.
How would you change these permissions to make the file "world writable"? Isn't 
it unsafe to make it "world writable"?

Thanks


From: john.skar...@ca.com
To: users@lists.openshift.redhat.com; lorenz.vanthi...@outlook.com
Subject: Re: Errors: container "x" in pod/x-1-8vhpi is crash-looping
Date: Thu, 25 Feb 2016 10:58:13 +

Lorenz,
The issue is not that the image is coming from a specific repo, but rather the 
image itself is not fine tuned for use within openshift. CrashLoop indicates 
the container was able to start, but then crashed, and subsequent restarts are 
resulting in the same.
In general your permissions are not set properly for this container to run 
inside of openshift. I suggest modifying those permissions to be world-writable.
For additional information take a look at the Support Arbitrary User IDs 
portion of this 
documentation



--
John Skarbek


On February 25, 2016 at 05:22:21, Lorenz Vanthillo 
(lorenz.vanthi...@outlook.com) wrote:

I'm on Origin 1.1.3
I've pulled an image from a private registry (insecure: self-signed certs + 
basic authentication).

docker pull ec2-xxx:5000/image:2.3

The image is on my node. I create a project where I will run an instance of 
this image:
$ oc new-project image
$ oc new-app --insecure-registry ec2-xxx:5000/image:2.3


W0225 09:55:55.3220356777 pipeline.go:154] Could not find an image stream 
match for "ec2xxx:5000/image:2.3". Make sure that a Docker image with that tag 
is available on the node for the deployment to succeed.

--> Found Docker image 51e260c (20 hours old) from ec2-xxx:5000 for 
"ec2-xxx:5000/image:2.3"



* This image will be deployed in deployment config "image"

* Port 8080/tcp will be load balanced by service "image"

  * Other containers can access this service through the hostname "image"

* WARNING: Image "image" runs as the 'root' user which may not be permitted 
by your cluster administrator



--> Creating resources with label app=image ...

deploymentconfig "image" created

service "image" created

--> Success

Run 'oc status' to view your app.

oc status shows me:
Errors:
  * container "image" in pod/image-1-3J24 is crash-looping

Is it because there is no image-stream for this image at the moment? I already 
did the same steps with another image from the same registry and it did not go 
into a loop.

The logs of the container show:
$ docker logs 457deef27b1
Feb 25, 2016 9:57:27 AM org.apache.catalina.startup.Catalina load
WARNING: Unable to load server configuration from 
[/usr/local/tomcat/conf/server.

Re: Errors: container "x" in pod/x-1-8vhpi is crash-looping

2016-02-25 Thread Skarbek, John
Lorenz,

The issue is not that the image is coming from a specific repo, but rather the 
image itself is not fine tuned for use within openshift. CrashLoop indicates 
the container was able to start, but then crashed, and subsequent restarts are 
resulting in the same.

In general your permissions are not set properly for this container to run 
inside of openshift. I suggest modifying those permissions to be world-writable.

For additional information take a look at the Support Arbitrary User IDs 
portion of this 
documentation


--
John Skarbek


On February 25, 2016 at 05:22:21, Lorenz Vanthillo 
(lorenz.vanthi...@outlook.com) wrote:

I'm on Origin 1.1.3
I've pulled an image from a private registry (insecure: self-signed certs + 
basic authentication).

docker pull ec2-xxx:5000/image:2.3

The image is on my node. I create a project where I will run an instance of 
this image:
$ oc new-project image
$ oc new-app --insecure-registry ec2-xxx:5000/image:2.3

W0225 09:55:55.3220356777 pipeline.go:154] Could not find an image stream 
match for "ec2xxx:5000/image:2.3". Make sure that a Docker image with that tag 
is available on the node for the deployment to succeed.
--> Found Docker image 51e260c (20 hours old) from ec2-xxx:5000 for 
"ec2-xxx:5000/image:2.3"

* This image will be deployed in deployment config "image"
* Port 8080/tcp will be load balanced by service "image"
  * Other containers can access this service through the hostname "image"
* WARNING: Image "image" runs as the 'root' user which may not be permitted 
by your cluster administrator

--> Creating resources with label app=image ...
deploymentconfig "image" created
service "image" created
--> Success
Run 'oc status' to view your app.

oc status shows me:
Errors:
  * container "image" in pod/image-1-3J24 is crash-looping

Is it because there is no image-stream for this image at the moment? I already 
did the same steps with another image from the same registry and it did not go 
into a loop.

The logs of the container show:
$ docker logs 457deef27b1
Feb 25, 2016 9:57:27 AM org.apache.catalina.startup.Catalina load
WARNING: Unable to load server configuration from 
[/usr/local/tomcat/conf/server.xml]
Feb 25, 2016 9:57:27 AM org.apache.catalina.startup.Catalina load
WARNING: Permissions incorrect, read permission is not allowed on the file.
Feb 25, 2016 9:57:27 AM org.apache.catalina.startup.Catalina load
WARNING: Unable to load server configuration from 
[/usr/local/tomcat/conf/server.xml]
Feb 25, 2016 9:57:27 AM org.apache.catalina.startup.Catalina load
WARNING: Permissions incorrect, read permission is not allowed on the file.
Feb 25, 2016 9:57:27 AM org.apache.catalina.startup.Catalina start
SEVERE: Cannot start server. Server instance is not configured.


But when I just perform a 'docker run ec2-xxx:image:2.3', the container runs 
fine. So it's not an issue with the container.
25-Feb-2016 10:16:44.047 INFO [localhost-startStop-1] xxx has finished in 41 ms
25-Feb-2016 10:16:44.056 INFO [main] xxx
25-Feb-2016 10:16:44.062 INFO [main] xxx
25-Feb-2016 10:16:44.064 INFO [main] org.apache.catalina.startup.Catalina.start 
Server startup in 13824 ms

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Running applications that dont use LB

2016-02-20 Thread Skarbek, John
Kevin,

Tis true, haproxy is used for the web traffic, but you can run other arbitrary 
services inside of openshift. I believe the documentation that may help lead 
you in the direction you should go is here: 
https://docs.openshift.org/latest/architecture/core_concepts/pods_and_services.html#services
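
As a concrete, hedged sketch: a Service load balances arbitrary TCP, and only the haproxy router is HTTP/TLS-specific. Assuming a deployment config named myapp listening on raw TCP port 9000:

# Expose the pod's TCP port as a cluster service; kube-proxy balances
# it without haproxy being involved.
$ oc expose dc/myapp --port=9000 --name=myapp-tcp

For traffic originating outside the cluster, a NodePort-type service (or the nodes' host network) is the usual route, since the router only fronts HTTP(S).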


--
John Skarbek


On February 21, 2016 at 00:38:36, kevin parrikar 
(kevin.parker...@gmail.com) wrote:

Hi,

I have successfully containerised my application using docker and it is running 
fine. This application "doesn't" use any HTTP for communication.

Using the same docker image, how can I move it to openshift?

I heard that currently openshift can only port applications that can be load 
balanced using HAProxy. Is this true?

Since my application cannot use HTTP, can openshift still create a pod for it 
and provide an IP address on the data centre network using its router (similar 
to docker's net=host flag)?

Regards,
Kevin

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Hairpin?

2016-02-17 Thread Skarbek, John
Ugh,

Found it after sending an email… https://github.com/openshift/origin/issues/6362


--
John Skarbek


On February 17, 2016 at 07:46:46, Skarbek, John 
(john.skar...@ca.com) wrote:

Anyone know what the following log output means?

7659 manager.go:1841] Hairpin setup failed for pod 
"sample-jvm-app-1-deploy_sample-project(93a68aeb-d50a-11e5-bd72-005056b41fcd)": 
open /sys/devices/virtual/net/veth5b1a753/brport/hairpin_mode: no such file or 
directory




I’ve got some pods that appear to randomly not work and this is the only error 
message I can find. I’m still investigating, but quite curious about this…


--
John Skarbek
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Hairpin?

2016-02-17 Thread Skarbek, John
Anyone know what the following log output means?

7659 manager.go:1841] Hairpin setup failed for pod 
"sample-jvm-app-1-deploy_sample-project(93a68aeb-d50a-11e5-bd72-005056b41fcd)": 
open /sys/devices/virtual/net/veth5b1a753/brport/hairpin_mode: no such file or 
directory



I’ve got some pods that appear to randomly not work and this is the only error 
message I can find. I’m still investigating, but quite curious about this…


--
John Skarbek
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Start container which needs env's

2016-02-17 Thread Skarbek, John
Den,

There’s quite a few ways to set ENV vars.

Have a look at this 
documentation

It is also possible to include environment variables as part of the 
deployment configuration, as well as in templates.
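
A couple of hedged equivalents of the docker invocation below, assuming the resulting deployment config is named nodejs:

# At creation time:
$ oc new-app my-image:73 --name=nodejs -e MOCK="x" -e BRANDS="x" -e PORT=""
# Or on an existing deployment config (this triggers a redeploy):
$ oc env dc/nodejs MOCK="x" BRANDS="x" PORT=""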


--
John Skarbek


On February 17, 2016 at 07:09:05, Den Cowboy 
(dencow...@hotmail.com) wrote:

I've an image. When I want to start the image, I have to define some 
env variables. How do I do this in OpenShift? Can I just add the --env flags 
after the oc command?

$ docker run --restart=always --name "nodejs" --env MOCK="x" --env BRANDS="x" 
--env PORT="" -d my-image:73

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift with Docker method installation on CentOS, error : deployer.go:65] couldn't get deployment default/docker-registry-1: Get https://10.0.2.15:8443/api/v1/namespaces/default/replicationcont

2016-02-10 Thread Skarbek, John
You don’t have any rules for port 8443. We would need to find out which chain 
the rule should go inside But something similar to this should fix the problem:

iptables -I INPUT -p tcp —-dport 8443 -j ACCEPT


Though I’d be more concerned as to why the rule wasn’t put in place from the 
get go.
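
One hedged observation: the chain names in the dump below (IN_public_allow and friends) look firewalld-managed; if firewalld owns the ruleset, opening the port through it keeps the rule from being wiped on reload:

# Persistently allow the master API port through firewalld.
$ firewall-cmd --permanent --add-port=8443/tcp
$ firewall-cmd --reload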


--
John Skarbek


On February 10, 2016 at 05:59:16, Stéphane Klein 
(cont...@stephane-klein.info) wrote:

Do you see my mistake? It's the default iptables config on CentOS.

2016-02-10 11:48 GMT+01:00 Stéphane Klein 
(cont...@stephane-klein.info):


2016-02-10 11:44 GMT+01:00 Clayton Coleman 
(ccole...@redhat.com):
Firewall it is :)


```
iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source   destination
ACCEPT all  --  anywhere anywhere ctstate 
RELATED,ESTABLISHED
ACCEPT all  --  anywhere anywhere
INPUT_direct  all  --  anywhere anywhere
INPUT_ZONES_SOURCE  all  --  anywhere anywhere
INPUT_ZONES  all  --  anywhere anywhere
ACCEPT icmp --  anywhere anywhere
REJECT all  --  anywhere anywhere reject-with 
icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target prot opt source   destination
DOCKER all  --  anywhere anywhere
ACCEPT all  --  anywhere anywhere ctstate 
RELATED,ESTABLISHED
ACCEPT all  --  anywhere anywhere
ACCEPT all  --  anywhere anywhere
ACCEPT all  --  anywhere anywhere ctstate 
RELATED,ESTABLISHED
ACCEPT all  --  anywhere anywhere
FORWARD_direct  all  --  anywhere anywhere
FORWARD_IN_ZONES_SOURCE  all  --  anywhere anywhere
FORWARD_IN_ZONES  all  --  anywhere anywhere
FORWARD_OUT_ZONES_SOURCE  all  --  anywhere anywhere
FORWARD_OUT_ZONES  all  --  anywhere anywhere
ACCEPT icmp --  anywhere anywhere
REJECT all  --  anywhere anywhere reject-with 
icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination
OUTPUT_direct  all  --  anywhere anywhere

Chain DOCKER (1 references)
target prot opt source   destination

Chain FORWARD_IN_ZONES (1 references)
target prot opt source   destination
FWDI_public  all  --  anywhere anywhere[goto]
FWDI_public  all  --  anywhere anywhere[goto]

Chain FORWARD_IN_ZONES_SOURCE (1 references)
target prot opt source   destination

Chain FORWARD_OUT_ZONES (1 references)
target prot opt source   destination
FWDO_public  all  --  anywhere anywhere[goto]
FWDO_public  all  --  anywhere anywhere[goto]

Chain FORWARD_OUT_ZONES_SOURCE (1 references)
target prot opt source   destination

Chain FORWARD_direct (1 references)
target prot opt source   destination

Chain FWDI_public (2 references)
target prot opt source   destination
FWDI_public_log  all  --  anywhere anywhere
FWDI_public_deny  all  --  anywhere anywhere
FWDI_public_allow  all  --  anywhere anywhere

Chain FWDI_public_allow (1 references)
target prot opt source   destination

Chain FWDI_public_deny (1 references)
target prot opt source   destination

Chain FWDI_public_log (1 references)
target prot opt source   destination

Chain FWDO_public (2 references)
target prot opt source   destination
FWDO_public_log  all  --  anywhere anywhere
FWDO_public_deny  all  --  anywhere anywhere
FWDO_public_allow  all  --  anywhere anywhere

Chain FWDO_public_allow (1 references)
target prot opt source   destination

Chain FWDO_public_deny (1 references)
target prot opt source   destination

Chain FWDO_public_log (1 references)
target prot opt source   destination

Chain INPUT_ZONES (1 references)
target prot opt source   destination
IN_public  all  --  anywhere anywhere[goto]
IN_public  all  --  anywhere anywhere[goto]

Chain INPUT_ZONES_SOURCE (1 references)
target prot opt source   destination

Chain INPUT_direct (1 references)
target prot opt source   destination

Chain IN_public (2 references)
target prot opt source   destination
IN_public_log  all  --  anywhere anywhere
IN_public_deny  all  --  anywhere anywhere
IN_public_allow  all  --  anywhere anywhere

Chain IN_public_allow (1 references)
target prot opt source   destination
ACCEPT tcp  --  anywhere anywhere tcp dp

Re: install everything with ansible

2016-01-27 Thread Skarbek, John
Den,

Indeed, the openshift-ansible repo 
(https://github.com/openshift/openshift-ansible) contains the capability to 
stand up entire environments. Check out the various READMEs located at the 
root of the repo.

In our organization we use the same repo to create an entire environment. It 
works very well.
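
As a hedged sketch, the usual entry point is the BYO playbook, run against an inventory listing your masters and nodes (the inventory path is a placeholder); it installs prerequisites such as docker along the way:

# From a checkout of the openshift-ansible repo:
$ ansible-playbook -i ~/openshift-inventory playbooks/byo/config.yml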


On Jan 27, 2016, at 04:57, Den Cowboy 
(dencow...@hotmail.com) wrote:

It's because when you want to install or create big clusters, ansible will do 
that for us, but we still have to install docker etc. manually on each server...


From: dencow...@hotmail.com
To: users@lists.openshift.redhat.com
Subject: install everything with ansible
Date: Wed, 27 Jan 2016 09:43:18 +

Hi,

Is it possible to install the whole OpenShift environment with ansible?
So also the prerequisites, like Docker etc.?



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users