Re: Router Pod stuck at pending

2016-04-08 Thread Mfawa Alfred Onen
Hello Skarbek, I redeployed the router like you mentioned but still got a
pending router pod.

*1. oc get pods*

NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-2-pbvcf   1/1       Running   0          2d
router-2-8uodm            0/1       Pending   0          20s
router-2-deploy           1/1       Running   0          25s

*2. oc describe pod router-2-8uodm*

Name:   router-2-8uodm
Namespace:  openshift
Image(s):   openshift/origin-haproxy-router:v1.1.4
Node:   /
Labels: deployment=router-2,deploymentconfig=router,router=router
Status: Pending
Reason:
Message:
IP:
Controllers:ReplicationController/router-2
Containers:
  router:
Container ID:
Image:  openshift/origin-haproxy-router:v1.1.4
Image ID:
Ports:  80/TCP, 443/TCP, 1936/TCP
QoS Tier:
  cpu:  BestEffort
  memory:   BestEffort
State:  Waiting
Ready:  False
Restart Count:  0
Liveness:   http-get http://localhost:1936/healthz delay=10s
timeout=1s period=10s #success=1 #failure=3
Readiness:  http-get http://localhost:1936/healthz delay=10s
timeout=1s period=10s #success=1 #failure=3
Environment Variables:

*[Truncated Certificate Data for extra clarity]*

  OPENSHIFT_MASTER:
https://master.dev.local:8443
  ROUTER_EXTERNAL_HOST_HOSTNAME:
  ROUTER_EXTERNAL_HOST_HTTPS_VSERVER:
  ROUTER_EXTERNAL_HOST_HTTP_VSERVER:
  ROUTER_EXTERNAL_HOST_INSECURE:false
  ROUTER_EXTERNAL_HOST_PARTITION_PATH:
  ROUTER_EXTERNAL_HOST_PASSWORD:
  ROUTER_EXTERNAL_HOST_PRIVKEY:
/etc/secret-volume/router.pem
  ROUTER_EXTERNAL_HOST_USERNAME:
  ROUTER_SERVICE_HTTPS_PORT:443
  ROUTER_SERVICE_HTTP_PORT: 80
  ROUTER_SERVICE_NAME:  router
  ROUTER_SERVICE_NAMESPACE: openshift
  ROUTER_SUBDOMAIN:
  STATS_PASSWORD:   j4RksqDAD6
  STATS_PORT:   1936
  STATS_USERNAME:   admin
Volumes:
  router-token-qk5ot:
Type:   Secret (a secret that should populate this volume)
SecretName: router-token-qk5ot
Events:
  FirstSeen  LastSeen  Count  From                  SubobjectPath  Type     Reason            Message
  ---------  --------  -----  ----                  -------------  ----     ------            -------
  29s   29s 1   {default-scheduler }
 Warning FailedSchedulingpod (router-2-8uodm) failed to
fit in any node
fit failure on node (master.dev.local): PodFitsPorts
fit failure on node (node1.dev.local): Region
fit failure on node (node2.dev.local): MatchNodeSelector

  28s   28s 1   {default-scheduler }Warning
FailedSchedulingpod (router-2-8uodm) failed to fit in any node
fit failure on node (master.dev.local): PodFitsPorts
fit failure on node (node1.dev.local): MatchNodeSelector
fit failure on node (node2.dev.local): MatchNodeSelector

  22s   22s 1   {default-scheduler }Warning
FailedSchedulingpod (router-2-8uodm) failed to fit in any node
fit failure on node (master.dev.local): PodFitsPorts
fit failure on node (node1.dev.local): Region
fit failure on node (node2.dev.local): Region

  26s   14s 2   {default-scheduler }Warning
FailedSchedulingpod (router-2-8uodm) failed to fit in any node
fit failure on node (master.dev.local): PodFitsPorts
fit failure on node (node1.dev.local): MatchNodeSelector
fit failure on node (node2.dev.local): Region
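
For reference, the three failure reasons above break down as: PodFitsPorts means
the router's host ports 80/443/1936 are already bound on master.dev.local (for
example by an older router pod still running there), while Region and
MatchNodeSelector mean node1/node2 (labeled region=primary, per the labels quoted
below) do not satisfy the pod's placement constraints. A minimal sketch for
checking both, using the openshift project shown in the pod description:

oc get pods --all-namespaces -o wide | grep master.dev.local
oc get dc router -n openshift -o yaml | grep -A 3 nodeSelector
oc get nodes --show-labels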


Kind Regards!

On Fri, Apr 8, 2016 at 11:27 PM, Skarbek, John  wrote:

> I have a feeling that now that you’ve enabled scheduling this ought to
> work. I bet if you ran a deploy, it’ll work now. You’ll need to cancel the
> current running one. So the following commands *might* help out.
>
> oc deploy --cancel dc/router -n default
> oc deploy --latest dc/router -n default
>
>
>
> --
> John Skarbek
>
> On April 8, 2016 at 14:01:06, Mfawa Alfred Onen (muffycomp...@gmail.com)
> wrote:
>
> Hello Tobias, below is the output of the commands you mentioned:
>
> *1. oc get nodes --show-labels*
>
> master.dev.local   Ready 10d
> kubernetes.io/hostname=master.dev.local,region=infra,router=router,zone=default
> 
> node1.dev.local    Ready     10d
> kubernetes.io/hostname=node1.dev.local,region=primary,zone=dhc
> 

Re: Router Pod stuck at pending

2016-04-08 Thread Skarbek, John
I have a feeling that now that you’ve enabled scheduling this ought to work. I 
bet if you ran a deploy, it’ll work now. You’ll need to cancel the current 
running one. So the following commands might help out.

oc deploy --cancel dc/router -n default
oc deploy --latest dc/router -n default



--
John Skarbek


On April 8, 2016 at 14:01:06, Mfawa Alfred Onen 
(muffycomp...@gmail.com) wrote:

Hello Tobias, below is the output of the commands you mentioned:

1. oc get nodes --show-labels

master.dev.local   Ready     10d
kubernetes.io/hostname=master.dev.local,region=infra,router=router,zone=default
node1.dev.local    Ready     10d
kubernetes.io/hostname=node1.dev.local,region=primary,zone=dhc
node2.dev.local    Ready     10d
kubernetes.io/hostname=node2.dev.local,region=primary,zone=dhc

2. oc describe dc router

Name:   router
Created:4 minutes ago
Labels: router=router
Annotations:
Latest Version: 1
Triggers:   Config
Strategy:   Rolling
Template:
  Selector: router=router
  Replicas: 1
  Containers:
  router:
Image:  openshift/origin-haproxy-router:v1.1.4
QoS Tier:
  cpu:  BestEffort
  memory:   BestEffort
Liveness:   http-get 
http://localhost:1936/healthz
 delay=10s timeout=1s period=10s #success=1 #failure=3
Readiness:  http-get 
http://localhost:1936/healthz
 delay=10s timeout=1s period=10s #success=1 #failure=3
Environment Variables:

 [Truncated Certificate Data for extra clarity]

  OPENSHIFT_MASTER: 
https://master.dev.local:8443
  ROUTER_EXTERNAL_HOST_HOSTNAME:
  ROUTER_EXTERNAL_HOST_HTTPS_VSERVER:
  ROUTER_EXTERNAL_HOST_HTTP_VSERVER:
  ROUTER_EXTERNAL_HOST_INSECURE:false
  ROUTER_EXTERNAL_HOST_PARTITION_PATH:
  ROUTER_EXTERNAL_HOST_PASSWORD:
  ROUTER_EXTERNAL_HOST_PRIVKEY: /etc/secret-volume/router.pem
  ROUTER_EXTERNAL_HOST_USERNAME:
  ROUTER_SERVICE_HTTPS_PORT:443
  ROUTER_SERVICE_HTTP_PORT: 80
  ROUTER_SERVICE_NAME:  router
  ROUTER_SERVICE_NAMESPACE: openshift
  ROUTER_SUBDOMAIN:
  STATS_PASSWORD:   Lt1ZhBJc8n
  STATS_PORT:   1936
  STATS_USERNAME:   admin
Deployment #1 (latest):
Name:   router-1
Created:4 minutes ago
Status: Running
Replicas:   1 current / 1 desired
Selector:   
deployment=router-1,deploymentconfig=router,router=router
Labels: 
openshift.io/deployment-config.name=router,router=router
Pods Status:0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Events:
  FirstSeen  LastSeen  Count  From                  SubobjectPath  Type     Reason            Message
  ---------  --------  -----  ----                  -------------  ----     ------            -------

Re: route hostname generation in template

2016-04-08 Thread Aleksandar Lazic
Hi Dale.

I have solved this with the Downward API:

https://docs.openshift.org/latest/dev_guide/downward_api.html

We use the following in the template:

###
DeploymentConfig
  spec
    template
      spec
        containers
          env
          - name: PROJECT
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
###

"parameters": [
{
"name": "PROJECT",
"description": "Project namespace",
"required": true
}


The namespace is then available inside the container as the PROJECT environment variable.
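
A quick way to verify, once a pod from that deployment config is running (the pod
name is a placeholder):

oc exec <pod-name> -- printenv PROJECT

It should print the namespace the pod is running in.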

But I'm not sure if you can use the same syntax for the Routes.
Maybe it would be a good idea to have some default variables in the template
which could be used, such as:

namespace
defaultdomain
...

BR Aleks

From: users-boun...@lists.openshift.redhat.com 
 on behalf of Dale Bewley 

Sent: Friday, April 08, 2016 21:29
To: users@lists.openshift.redhat.com
Subject: route hostname generation in template

I'm creating a template which has 2 services. One is a python gunicorn and one 
is httpd.

I want the first service reachable at app-project.domain/ and the second 
service to be reachable at app-project.domain/static. That works, but I'm 
having trouble automating it in a template.

Unfortunately, if I use the default value of ${APPLICATION_DOMAIN}, it includes the
service name and I wind up with a distinct hostname in each route:
app-static-project.domain and app-py-project.domain

{
  "kind": "Route",
  "apiVersion": "v1",
  "metadata": {
"name": "${NAME}-static"
  },
  "spec": {
"host": "${APPLICATION_DOMAIN}",
"path": "/${STATIC_DIR}",
"to": {
  "kind": "Service",
  "name": "${NAME}-static"
},
"tls": {
  "termination" : "edge"
}
  }
},
{
  "kind": "Route",
  "apiVersion": "v1",
  "metadata": {
"name": "${NAME}-py"
  },
  "spec": {
"host": "${APPLICATION_DOMAIN}",
"to": {
  "kind": "Service",
  "name": "${NAME}-py"
},
"tls": {
  "termination" : "edge"
}
  }
},


I could prompt for a hostname, but I would like to auto-generate the hostname 
to include the project by default. What I would like is 
-. in both routes.


Is there a list somewhere of the variables available to templates?

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: route hostname generation in template

2016-04-08 Thread Ben Parees
On Fri, Apr 8, 2016 at 4:01 PM, Jordan Liggitt  wrote:

> I'm pretty sure it is intentional that the only variables available are
> the ones defined in the template itself.
>

Right, those are not system variables or anything, they are just
parameters that users can provide values for. If you provide no value for
a route hostname, you get a generated default, as seen when you left
APPLICATION_DOMAIN blank.
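
If the goal is one hostname shared by both routes, one workaround (a sketch; the
file name and hostname are placeholders) is to pass the parameter explicitly when
instantiating the template, so both routes get the same host and only the path
differs:

oc new-app -f template.json -p APPLICATION_DOMAIN=app-myproject.example.com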



>
> On Fri, Apr 8, 2016 at 3:29 PM, Dale Bewley  wrote:
>
>> I'm creating a template which has 2 services. One is a python gunicorn
>> and one is httpd.
>>
>> I want the first service reachable at app-project.domain/ and the second
>> service to be reachable at app-project.domain/static. That works, but I'm
>> having trouble automating it in a template.
>>
>> Unfortunately if I use default value of ${APPLICATION_DOMAIN} it includes
>> the service name and I wind up with a distinct hostname in each route:
>> app-static-project.domain and app-py-project.domain
>>
>> {
>>   "kind": "Route",
>>   "apiVersion": "v1",
>>   "metadata": {
>> "name": "${NAME}-static"
>>   },
>>   "spec": {
>> "host": "${APPLICATION_DOMAIN}",
>> "path": "/${STATIC_DIR}",
>> "to": {
>>   "kind": "Service",
>>   "name": "${NAME}-static"
>> },
>> "tls": {
>>   "termination" : "edge"
>> }
>>   }
>> },
>> {
>>   "kind": "Route",
>>   "apiVersion": "v1",
>>   "metadata": {
>> "name": "${NAME}-py"
>>   },
>>   "spec": {
>> "host": "${APPLICATION_DOMAIN}",
>> "to": {
>>   "kind": "Service",
>>   "name": "${NAME}-py"
>> },
>> "tls": {
>>   "termination" : "edge"
>> }
>>   }
>> },
>>
>>
>> I could prompt for a hostname, but I would like to auto-generate the
>> hostname to include the project by default. What I would like is
>> -. in both routes.
>>
>>
>> Is there a list somewhere of the variables available to templates?
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Ben Parees | OpenShift
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: route hostname generation in template

2016-04-08 Thread Jordan Liggitt
I'm pretty sure it is intentional that the only variables available are the
ones defined in the template itself.

On Fri, Apr 8, 2016 at 3:29 PM, Dale Bewley  wrote:

> I'm creating a template which has 2 services. One is a python gunicorn and
> one is httpd.
>
> I want the first service reachable at app-project.domain/ and the second
> service to be reachable at app-project.domain/static. That works, but I'm
> having trouble automating it in a template.
>
> Unfortunately if I use default value of ${APPLICATION_DOMAIN} it includes
> the service name and I wind up with a distinct hostname in each route:
> app-static-project.domain and app-py-project.domain
>
> {
>   "kind": "Route",
>   "apiVersion": "v1",
>   "metadata": {
> "name": "${NAME}-static"
>   },
>   "spec": {
> "host": "${APPLICATION_DOMAIN}",
> "path": "/${STATIC_DIR}",
> "to": {
>   "kind": "Service",
>   "name": "${NAME}-static"
> },
> "tls": {
>   "termination" : "edge"
> }
>   }
> },
> {
>   "kind": "Route",
>   "apiVersion": "v1",
>   "metadata": {
> "name": "${NAME}-py"
>   },
>   "spec": {
> "host": "${APPLICATION_DOMAIN}",
> "to": {
>   "kind": "Service",
>   "name": "${NAME}-py"
> },
> "tls": {
>   "termination" : "edge"
> }
>   }
> },
>
>
> I could prompt for a hostname, but I would like to auto-generate the
> hostname to include the project by default. What I would like is
> -. in both routes.
>
>
> Is there a list somewhere of the variables available to templates?
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


[no subject]

2016-04-08 Thread Marcos Ortiz


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Router Pod stuck at pending

2016-04-08 Thread Mfawa Alfred Onen
Hello Tobias, below is the output of the commands you mentioned:

*1. oc get nodes --show-labels*

master.dev.local   Ready     10d
kubernetes.io/hostname=master.dev.local,region=infra,router=router,zone=default
node1.dev.local    Ready     10d
kubernetes.io/hostname=node1.dev.local,region=primary,zone=dhc
node2.dev.local    Ready     10d
kubernetes.io/hostname=node2.dev.local,region=primary,zone=dhc

*2. oc describe dc router*

Name:   router
Created:4 minutes ago
Labels: router=router
Annotations:
Latest Version: 1
Triggers:   Config
Strategy:   Rolling
Template:
  Selector: router=router
  Replicas: 1
  Containers:
  router:
Image:  openshift/origin-haproxy-router:v1.1.4
QoS Tier:
  cpu:  BestEffort
  memory:   BestEffort
Liveness:   http-get http://localhost:1936/healthz delay=10s timeout=1s
period=10s #success=1 #failure=3
Readiness:  http-get http://localhost:1936/healthz delay=10s timeout=1s
period=10s #success=1 #failure=3
Environment Variables:

 *[Truncated Certificate Data for extra clarity]*

  OPENSHIFT_MASTER:
https://master.dev.local:8443
  ROUTER_EXTERNAL_HOST_HOSTNAME:
  ROUTER_EXTERNAL_HOST_HTTPS_VSERVER:
  ROUTER_EXTERNAL_HOST_HTTP_VSERVER:
  ROUTER_EXTERNAL_HOST_INSECURE:false
  ROUTER_EXTERNAL_HOST_PARTITION_PATH:
  ROUTER_EXTERNAL_HOST_PASSWORD:
  ROUTER_EXTERNAL_HOST_PRIVKEY:
/etc/secret-volume/router.pem
  ROUTER_EXTERNAL_HOST_USERNAME:
  ROUTER_SERVICE_HTTPS_PORT:443
  ROUTER_SERVICE_HTTP_PORT: 80
  ROUTER_SERVICE_NAME:  router
  ROUTER_SERVICE_NAMESPACE: openshift
  ROUTER_SUBDOMAIN:
  STATS_PASSWORD:   Lt1ZhBJc8n
  STATS_PORT:   1936
  STATS_USERNAME:   admin
Deployment #1 (latest):
Name:   router-1
Created:4 minutes ago
Status: Running
Replicas:   1 current / 1 desired
Selector:
deployment=router-1,deploymentconfig=router,router=router
Labels:
openshift.io/deployment-config.name=router,router=router
Pods Status:0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Events:
  FirstSeen  LastSeen  Count  From                  SubobjectPath  Type     Reason            Message
  ---------  --------  -----  ----                  -------------  ----     ------            -------
  4m4m  1   {deploymentconfig-controller }
 Normal  DeploymentCreated   Created new deployment
"router-1" for version 1
  4m4m  1   {deployer }
Warning FailedUpdateError updating
deployment openshift/router-1 status to Pending
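
Since master.dev.local is the node labeled region=infra,router=router, one way to
pin the router there (a sketch; the selector value is an assumption to adjust to
your own labels) is to add a matching node selector to the dc and roll out again:

oc patch dc/router -n openshift -p '{"spec":{"template":{"spec":{"nodeSelector":{"region":"infra"}}}}}'
oc deploy --latest dc/router -n openshift

The PodFitsPorts failure also has to clear, i.e. nothing else may already hold
host ports 80, 443 or 1936 on that node.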


On Thu, Apr 7, 2016 at 12:50 PM, Skarbek, John  wrote:

> Hello,
>
> I ponder if there’s an issue with the labels being utilized by the nodes
> and the pods. Can you run the following command: oc get nodes
> --show-labels
>
> And then an: oc describe dc router
>
>
>
> --
> John Skarbek
>
> On April 7, 2016 at 04:26:37, Mfawa Alfred Onen (muffycomp...@gmail.com)
> wrote:
>
> So I enabled scheduling as you pointed out but still no luck:
>
> *oc get nodes*
>
> NAME               STATUS    AGE
> master.dev.local   Ready     8d
> node1.dev.local    Ready     8d
> node2.dev.local    Ready     8d
>
> *oc get pods*
>
> docker-registry-2-pbvcf   1/1   Running   0   10h
> router-1-bk55a            0/1   Pending   0   1s
> router-1-deploy           1/1   Running   0   4s
>
> *oc describe pod router-1-bk55a*
>
> 
> Events:
>   FirstSeen  LastSeen  Count  From                  SubobjectPath  Type     Reason            Message
>   ---------  --------  -----  ----                  -------------  ----     ------            -------
>   1m1m  1   {default-scheduler }
>  Warning FailedSchedulingpod (router-1-bk55a) failed to
> fit in any node
> fit failure on node (master.dev.local): PodFitsPorts
> fit failure on node (node1.dev.local): Region
> fit failure on node (node2.dev.local): MatchNodeSelector
>
>   1m1m  1   {default-scheduler }Warning
> FailedSchedulingpod (router-1-bk55a) failed to fit in any node
> fit failure on node (node2.dev.local): MatchNodeSelector
> fit failure on node (master.dev.local): PodFitsPorts
> fit failure on node (node1.dev.local): MatchNodeSelector
>
>   1m1m  2   {default-scheduler }Warning
> FailedSchedulingpod (router-1-bk55a) failed to fit in any node
> fit failure on node (master.dev.local): PodFitsPorts
> fit failure on node (node1.dev.local): Region
> fit failure on node (node2.dev.local): Region
>
>   47s   47s 1   {default-scheduler }Warning
> 

Re: RWO mounted on multiple hosts

2016-04-08 Thread Philippe Lafoucrière
Oh, and btw, OpenShift was mentioned MANY times ;)
Thanks for the hard work guys.

http://www.slideshare.net/plafoucriere/rails-monolithtomicroservicesdesign
(With speaker notes:)
https://speakerdeck.com/jipiboily/from-rails-to-microservices-with-go-our-experience-with-gemnasium-enterprise
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: RWO mounted on multiple hosts

2016-04-08 Thread Philippe Lafoucrière
I'm at a conference this week, will try to send you something next week.
Thanks
Philippe
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: accessing secure registry on master isn't possible?

2016-04-08 Thread Maciej Szulik
Have you checked with the --insecure-registry flag as well, to see if the problem still exists?
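
For reference, a minimal sketch of testing that on a RHEL/CentOS-style host (the
registry address is the placeholder used in the error below; adjust the file to
wherever your docker daemon options live):

sudo vi /etc/sysconfig/docker    # add --insecure-registry 172.30.xx.xx:5000 to the daemon options
sudo systemctl restart docker
sudo docker login -u admin -e m...@mail.com -p token 172.30.xx.xx:5000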

On Fri, Apr 8, 2016 at 11:17 AM, Den Cowboy  wrote:

> I'm using the ca.crt from /etc/origin/master/ca.crt and
> /etc/origin/node/ca.crt
>
> --
> Date: Fri, 8 Apr 2016 11:02:19 +0200
>
> Subject: Re: accessing secure registry on master isn't possible?
> From: maszu...@redhat.com
> To: dencow...@hotmail.com
> CC: users@lists.openshift.redhat.com
>
>
>
> On Fri, Apr 8, 2016 at 8:27 AM, Den Cowboy  wrote:
>
> Yes I performed the same steps on my master as on my nodes. This is the
> error:
> sudo docker login -u admin -e m...@mail.com \
> > -p token 172.30.xx.xx:5000
> Error response from daemon: invalid registry endpoint
> https://172.30.109.95:5000/v0/: unable to ping registry endpoint
> https://172.30.xx.xx:5000/v0/
> v2 ping attempt failed with error: Get https://172.30.xx.xx:5000/v2/:
> dial tcp 172.30.xx.xx:5000: i/o timeout
>  v1 ping attempt failed with error: Get
> https://172.30.xx.xx:5000/v1/_ping: dial tcp 172.30.xx.xx:5000: i/o
> timeout. If this private registry supports only HTTP or HTTPS with an
> unknown CA certificate, please add `--insecure-registry 172.30.xx.xx:5000`
> to the daemon's arguments. In the case of HTTPS, if you have access to the
> registry's CA certificate, no need for the flag; simply place the CA
> certificate at /etc/docker/certs.d/172.30.xx.xx:5000/ca.crt
>
>
> Do you have the CA cert in /etc/docker/certs.d/172.30.xx.xx:5000/ca.crt?
> The log you're seeing is the usual one when you're using self-signed certs
> for the registry. In any case, make sure the above CA is the right one.
>
>
> While on all my 3 nodes:
>
> sudo docker login -u admin -e m...@mail.com \
> > -p token 172.30.xx.xx:5000
> WARNING: login credentials saved in /root/.docker/config.json
> Login Succeeded
>
> --
> Date: Thu, 7 Apr 2016 22:02:06 +0200
> Subject: Re: accessing secure registry on master isn't possible?
> From: maszu...@redhat.com
> To: dencow...@hotmail.com
> CC: users@lists.openshift.redhat.com
>
>
> Per
> https://docs.openshift.org/latest/install_config/install/docker_registry.html#securing-the-registry,
> step 11 and 12,
> I assume you've copied CA certificate to the Docker certificates directory
> on all nodes and restarted docker service,
> did you also do that on master as well. Without it any docker operation
> will fail with certificate check failure.
> What is the error you're seeing and what is the operation you're trying to
> do?
>
>
> On Thu, Apr 7, 2016 at 4:20 PM, Den Cowboy  wrote:
>
> I've created a secure registry on 1.1.6.
> For the first time I've created my environment with 1 real master and 3
> nodes (one infra). (The reason for this is that I'm using the community
> ansible aws setup:
> https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md)
> Normally my master is also an unschedulable node. Now I've secured my
> registry.
> I'm able to log in and push to the registry from my nodes but not from my
> master?
> Is this normal, and if yes, why is it that way?
> I don't think it's an issue, because images will always be pulled and
> pushed on my nodes (only the nodes run my containers), but I want to
> know if it's a known thing.
>
> Thanks
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: accessing secure registry on master isn't possible?

2016-04-08 Thread Den Cowboy
I'm using the ca.crt from /etc/origin/master/ca.crt and /etc/origin/node/ca.crt 

Date: Fri, 8 Apr 2016 11:02:19 +0200
Subject: Re: accessing secure registry on master isn't possible?
From: maszu...@redhat.com
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com



On Fri, Apr 8, 2016 at 8:27 AM, Den Cowboy  wrote:



Yes I performed the same steps on my master as on my nodes. This is the error:
sudo docker login -u admin -e m...@mail.com \
> -p token 172.30.xx.xx:5000
Error response from daemon: invalid registry endpoint 
https://172.30.109.95:5000/v0/: unable to ping registry endpoint 
https://172.30.xx.xx:5000/v0/
v2 ping attempt failed with error: Get https://172.30.xx.xx:5000/v2/: dial tcp 
172.30.xx.xx:5000: i/o timeout
 v1 ping attempt failed with error: Get https://172.30.xx.xx:5000/v1/_ping: 
dial tcp 172.30.xx.xx:5000: i/o timeout. If this private registry supports only 
HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry 
172.30.xx.xx:5000` to the daemon's arguments. In the case of HTTPS, if you have 
access to the registry's CA certificate, no need for the flag; simply place the 
CA certificate at /etc/docker/certs.d/172.30.xx.xx:5000/ca.crt


Do you have the CA cert in /etc/docker/certs.d/172.30.xx.xx:5000/ca.crt?
The log you're seeing is the usual one when you're using self-signed certs for
the registry. In any case, make sure the above CA is the right one.
 While on all my 3 nodes:

sudo docker login -u admin -e m...@mail.com \
> -p token 172.30.xx.xx:5000
WARNING: login credentials saved in /root/.docker/config.json
Login Succeeded

Date: Thu, 7 Apr 2016 22:02:06 +0200
Subject: Re: accessing secure registry on master isn't possible?
From: maszu...@redhat.com
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

Per 
https://docs.openshift.org/latest/install_config/install/docker_registry.html#securing-the-registry,
 step 11 and 12,
I assume you've copied CA certificate to the Docker certificates directory on 
all nodes and restarted docker service, 
did you also do that on master as well. Without it any docker operation will 
fail with certificate check failure. 
What is the error you're seeing and what is the operation you're trying to do?


On Thu, Apr 7, 2016 at 4:20 PM, Den Cowboy  wrote:



I've created a secure registry on 1.1.6.
For the first time I've created my environment with 1 real master and 3 nodes
(one infra). (The reason for this is that I'm using the community ansible
aws setup: https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md)
Normally my master is also an unschedulable node. Now I've secured my registry.
I'm able to log in and push to the registry from my nodes but not from my
master?
Is this normal, and if yes, why is it that way?
I don't think it's an issue, because images will always be pulled and pushed
on my nodes (only the nodes run my containers), but I want to know if it's
a known thing.

Thanks

  

___

users mailing list

users@lists.openshift.redhat.com

http://lists.openshift.redhat.com/openshiftmm/listinfo/users



  

  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: accessing secure registry on master isn't possible?

2016-04-08 Thread Maciej Szulik
On Fri, Apr 8, 2016 at 8:27 AM, Den Cowboy  wrote:

> Yes I performed the same steps on my master as on my nodes. This is the
> error:
> sudo docker login -u admin -e m...@mail.com \
> > -p token 172.30.xx.xx:5000
> Error response from daemon: invalid registry endpoint
> https://172.30.109.95:5000/v0/: unable to ping registry endpoint
> https://172.30.xx.xx:5000/v0/
> v2 ping attempt failed with error: Get https://172.30.xx.xx:5000/v2/:
> dial tcp 172.30.xx.xx:5000: i/o timeout
>  v1 ping attempt failed with error: Get
> https://172.30.xx.xx:5000/v1/_ping: dial tcp 172.30.xx.xx:5000: i/o
> timeout. If this private registry supports only HTTP or HTTPS with an
> unknown CA certificate, please add `--insecure-registry 172.30.xx.xx:5000`
> to the daemon's arguments. In the case of HTTPS, if you have access to the
> registry's CA certificate, no need for the flag; simply place the CA
> certificate at /etc/docker/certs.d/172.30.xx.xx:5000/ca.crt
>
>
Do you have the CA cert in /etc/docker/certs.d/172.30.xx.xx:5000/ca.crt?
The log you're seeing is the usual one when you're using self-signed certs for
the registry. In any case, make sure the above CA is the right one.
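
If it is not there yet on the master, a minimal sketch (the registry service IP is
the placeholder from the error above; the CA path assumes the standard origin
location):

sudo mkdir -p /etc/docker/certs.d/172.30.xx.xx:5000
sudo cp /etc/origin/master/ca.crt /etc/docker/certs.d/172.30.xx.xx:5000/ca.crt
sudo systemctl restart docker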


> While on all my 3 nodes:
>
> sudo docker login -u admin -e m...@mail.com \
> > -p token 172.30.xx.xx:5000
> WARNING: login credentials saved in /root/.docker/config.json
> Login Succeeded
>
> --
> Date: Thu, 7 Apr 2016 22:02:06 +0200
> Subject: Re: accessing secure registry on master isn't possible?
> From: maszu...@redhat.com
> To: dencow...@hotmail.com
> CC: users@lists.openshift.redhat.com
>
>
> Per
> https://docs.openshift.org/latest/install_config/install/docker_registry.html#securing-the-registry,
> step 11 and 12,
> I assume you've copied CA certificate to the Docker certificates directory
> on all nodes and restarted docker service,
> did you also do that on master as well. Without it any docker operation
> will fail with certificate check failure.
> What is the error you're seeing and what is the operation you're trying to
> do?
>
>
> On Thu, Apr 7, 2016 at 4:20 PM, Den Cowboy  wrote:
>
> I've created a secure registry on 1.1.6.
> For the first time I've created my environment with 1 real master and 3
> nodes (one infra). (The reason for this is that I'm using the community
> ansible aws setup:
> https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md)
> Normally my master is also an unschedulable node. Now I've secured my
> registry.
> I'm able to log in and push to the registry from my nodes but not from my
> master?
> Is this normal, and if yes, why is it that way?
> I don't think it's an issue, because images will always be pulled and
> pushed on my nodes (only the nodes run my containers), but I want to
> know if it's a known thing.
>
> Thanks
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: ansible run with cert errors (certificate signed by unknown authority)

2016-04-08 Thread Sebastian Wieseler
Dear community,
I think the problem lies here:

$ openssl x509 -in /etc/etcd/peer.crt -text -noout
Subject: CN=xxx.xxx
X509v3 Subject Alternative Name:
IP Address:z.z.z.z

CN - master 1's hostname
IP - master 3's IP

Plus this cert /etc/etcd/peer.crt appears on all three masters, with the same
values.
It should be: (on master1) CN:master1 IP:master1
(on master2) CN:master2 IP:master2
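
A quick way to compare them (a sketch; the hostnames are placeholders for the
three masters):

for h in master1 master2 master3; do
  ssh $h 'hostname; openssl x509 -in /etc/etcd/peer.crt -noout -text | grep -E "Subject:|IP Address"'
done

Each master's peer.crt should show its own hostname as CN and its own IP in the SAN.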

Seems like one of the last commits in this area broke things. It was working
fine before :(
But I can’t find the commit. :(

Really need help with this.
Thanks a lot!
   Sebastian Wieseler



On 8 Apr 2016, at 12:05 PM, Sebastian Wieseler 
> wrote:

Dear community,
I am running the latest ansible playbook version and followed the advanced 
installation guide.
(Updating 6bae443..1b82b1b)


When I execute ansible-playbook ~/openshift-ansible/playbooks/byo/config.yml it 
fails with:
TASK: [openshift_master | Start and enable master api] 
failed: [x.x.x.x] => {"failed": true}
msg: Job for origin-master-api.service failed because the control process 
exited with error code. See "systemctl status origin-master-api.service" and 
"journalctl -xe" for details.



Apr 08 03:47:43   etcd[12180]: dropped MsgAppResp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:43   etcd[12180]: dropped MsgAppResp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:43   etcd[12180]: dropped MsgAppResp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:43   etcd[12180]: dropped MsgHeartbeatResp to 9dc58f8e2290c613 
since pipeline's sending buffer is full
Apr 08 03:47:43   etcd[12180]: dropped MsgProp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:45   etcd[12180]: dropped MsgHeartbeatResp to 9dc58f8e2290c613 
since pipeline's sending buffer is full
Apr 08 03:47:45   etcd[12180]: publish error: etcdserver: request timed out, 
possibly due to connection lost
Apr 08 03:47:45   origin-master-controllers[116866]: E0408 03:47:45.976514  
116866 leaderlease.go:69] unable to check lease 
openshift.io/leases/controllers: 501:
All the given peers are not reachable (failed to propose on members 
[https://xxx.xxx:2379 x509: certificate signed by unknown 
authority]) [0]

Apr 08 03:47:47   etcd[12180]: dropped MsgAppResp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:47   etcd[12180]: the connection to peer af936f5f6ff57c05 is 
unhealthy
Apr 08 03:47:47   etcd[12180]: dropped MsgAppResp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:47   etcd[12180]: dropped MsgAppResp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:47   etcd[12180]: dropped MsgHeartbeatResp to 9dc58f8e2290c613 
since pipeline's sending buffer is full
Apr 08 03:47:47   etcd[12180]: dropped MsgAppResp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:47   etcd[12180]: dropped MsgProp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:47   origin-node[26652]: E0408 03:47:47.708378   26652 
kubelet.go:2761] Error updating node status, will retry: error getting node 
"xxx.xxx": error #0: net/http: TLS handshake timeout
Apr 08 03:47:47   origin-node[26652]: error #1: net/http: TLS handshake timeout
Apr 08 03:47:47   origin-node[26652]: error #2: x509: certificate signed by 
unknown authority
Apr 08 03:47:48   etcd[12180]: dropped MsgAppResp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:48   etcd[12180]: dropped MsgHeartbeatResp to 9dc58f8e2290c613 
since pipeline's sending buffer is full
Apr 08 03:47:48   etcd[12180]: dropped MsgHeartbeatResp to 9dc58f8e2290c613 
since pipeline's sending buffer is full
Apr 08 03:47:49   origin-node[26652]: E0408 03:47:49.187066   26652 
kubelet.go:2761] Error updating node status, will retry: error getting node 
"xxx.xxx": error #0: x509: certificate signed by unknown authority
Apr 08 03:47:49   origin-node[26652]: error #1: x509: certificate signed by 
unknown authority
Apr 08 03:47:49   origin-node[26652]: error #2: x509: certificate signed by 
unknown authority
Apr 08 03:47:49   etcd[12180]: dropped MsgAppResp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:49   etcd[12180]: dropped MsgAppResp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:49   etcd[12180]: dropped MsgAppResp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:49   etcd[12180]: dropped MsgAppResp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:49   etcd[12180]: dropped MsgAppResp to 9dc58f8e2290c613 since 
pipeline's sending buffer is full
Apr 08 03:47:49   etcd[12180]: dropped MsgHeartbeatResp to 9dc58f8e2290c613 
since pipeline's sending buffer is full
Apr 08 03:47:49   etcd[12180]: 

RE: accessing secure registry on master isn't possible?

2016-04-08 Thread Den Cowboy
Yes I performed the same steps on my master as on my nodes. This is the error:
sudo docker login -u admin -e m...@mail.com \
> -p token 172.30.xx.xx:5000
Error response from daemon: invalid registry endpoint 
https://172.30.109.95:5000/v0/: unable to ping registry endpoint 
https://172.30.xx.xx:5000/v0/
v2 ping attempt failed with error: Get https://172.30.xx.xx:5000/v2/: dial tcp 
172.30.xx.xx:5000: i/o timeout
 v1 ping attempt failed with error: Get https://172.30.xx.xx:5000/v1/_ping: 
dial tcp 172.30.xx.xx:5000: i/o timeout. If this private registry supports only 
HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry 
172.30.xx.xx:5000` to the daemon's arguments. In the case of HTTPS, if you have 
access to the registry's CA certificate, no need for the flag; simply place the 
CA certificate at /etc/docker/certs.d/172.30.xx.xx:5000/ca.crt

While on all my 3 nodes:

sudo docker login -u admin -e m...@mail.com \
> -p token 172.30.xx.xx:5000
WARNING: login credentials saved in /root/.docker/config.json
Login Succeeded

Date: Thu, 7 Apr 2016 22:02:06 +0200
Subject: Re: accessing secure registry on master isn't possible?
From: maszu...@redhat.com
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

Per 
https://docs.openshift.org/latest/install_config/install/docker_registry.html#securing-the-registry,
 step 11 and 12,
I assume you've copied CA certificate to the Docker certificates directory on 
all nodes and restarted docker service, 
did you also do that on master as well. Without it any docker operation will 
fail with certificate check failure. 
What is the error you're seeing and what is the operation you're trying to do?


On Thu, Apr 7, 2016 at 4:20 PM, Den Cowboy  wrote:



I've created a secure registry on 1.1.6.
For the first time I've created my environment with 1 real master and 3 nodes
(one infra). (The reason for this is that I'm using the community ansible
aws setup: https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md)
Normally my master is also an unschedulable node. Now I've secured my registry.
I'm able to log in and push to the registry from my nodes but not from my
master?
Is this normal, and if yes, why is it that way?
I don't think it's an issue, because images will always be pulled and pushed
on my nodes (only the nodes run my containers), but I want to know if it's
a known thing.

Thanks

  

___

users mailing list

users@lists.openshift.redhat.com

http://lists.openshift.redhat.com/openshiftmm/listinfo/users



  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users