Re: route not load balancing

2019-12-16 Thread Dan Mace
On Mon, Dec 16, 2019 at 8:12 AM Just Marvin <
marvin.the.cynical.ro...@gmail.com> wrote:

> Hi,
>
> In the openshift-ingress/router-default pod, in
> /var/lib/haproxy/conf/haproxy.config I see:
>
> backend be_edge_http:session-persistence:song-demo
>   mode http
>   option redispatch
>   option forwardfor
>
>   balance leastconn
>   timeout check 5000ms
>   http-request set-header X-Forwarded-Host %[req.hdr(host)]
>   http-request set-header X-Forwarded-Port %[dst_port]
>   http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
>   http-request set-header X-Forwarded-Proto https if { ssl_fc }
>   http-request set-header X-Forwarded-Proto-Version h2 if { ssl_fc_alpn -i
> h2 }
>   # Forwarded header: quote IPv6 addresses and values that may be empty as
> per https://tools.ietf.org/html/rfc7239
>   http-request add-header Forwarded
> for=%[src];host=%[req.hdr(host)];proto=%[req.hdr(X-Forwarded-Proto)];proto-version=\"%[req.hdr(X-Forwarded-Proto-Version)]\"
>   cookie 3e15d2ed54afa750c6c99f3e7974d5f8 insert indirect nocache httponly
> secure
>   server pod:song-demo-6-tw5qr:song-demo:10.128.0.46:9080 10.128.0.46:9080
> cookie 88434433dbc1caefd053e8d5252218f0 weight 256 check inter 5000ms
>   server pod:song-demo-6-nz8c2:song-demo:10.128.0.47:9080 10.128.0.47:9080
> cookie c67ffb0cd9ed5115a63c2f6b0a39151c weight 256 check inter 5000ms
> sh-4.2$
>
> I probably want "roundrobin" instead of that, though I would argue the
> system isn't doing what is implied by "leastconn" either. Is there a way to
> change that setting to what I need?
>
> Regards,
> Marvin
>


Marvin,

You can try the "haproxy.router.openshift.io/balance" annotation on your
Route resource to configure the balancing strategy [1].

[1] https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html#route-specific-annotations
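For example, a sketch against the route from your earlier message (the
annotation key comes from the route-annotations doc linked above;
"roundrobin" is one of its documented values):

```yaml
# Route carrying an explicit load-balancing strategy annotation.
# Without it, edge routes on this router default to "leastconn".
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: song-demo
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
spec:
  to:
    kind: Service
    name: song-demo
  tls:
    termination: edge
```

Equivalently, annotate the existing route in place:
`oc annotate route song-demo haproxy.router.openshift.io/balance=roundrobin`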


On Sun, Dec 15, 2019 at 8:21 PM Just Marvin <
> marvin.the.cynical.ro...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm using CodeReady Containers and I've scaled my pod to two, but
>> when I hit it from a for loop in a shell script, I get back results
>> indicating that it's only being routed to one of the pods. Is there some
>> setting that I need to tweak to make things load balanced even at low load?
>>
>> [zaphod@oc6010654212 session-song-demo]$ oc describe svc song-demo
>> Name:              song-demo
>> Namespace:         session-persistence
>> Labels:            app=song-demo
>>                    app.kubernetes.io/component=song-demo
>>                    app.kubernetes.io/instance=song-demo
>> Annotations:       openshift.io/generated-by: OpenShiftNewApp
>> Selector:          deploymentconfig=song-demo
>> Type:              ClusterIP
>> IP:                172.30.140.43
>> Port:              9080-tcp  9080/TCP
>> TargetPort:        9080/TCP
>> Endpoints:         10.128.0.42:9080,10.128.0.43:9080
>> Port:              9443-tcp  9443/TCP
>> TargetPort:        9443/TCP
>> Endpoints:         10.128.0.42:9443,10.128.0.43:9443
>> Session Affinity:  None
>> Events:
>> [zaphod@oc6010654212 session-song-demo]$ oc describe route song-demo
>> Name: song-demo
>> Namespace: session-persistence
>> Created: 5 hours ago
>> Labels: app=song-demo
>> app.kubernetes.io/component=song-demo
>> app.kubernetes.io/instance=song-demo
>> Annotations: 
>> Requested Host: song-demo.apps-crc.testing
>>  exposed on router default (host apps-crc.testing) 5 hours ago
>> Path: 
>> TLS Termination: edge
>> Insecure Policy: 
>> Endpoint Port: 9080-tcp
>>
>> Service: song-demo
>> Weight: 100 (100%)
>> Endpoints: 10.128.0.42:9443, 10.128.0.43:9443, 10.128.0.42:9080 + 1
>> more...
>> [zaphod@oc6010654212 session-song-demo]$
>>
>> Regards,
>> Marvin
>>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


-- 

Dan Mace

Principal Software Engineer, OpenShift

Red Hat

dm...@redhat.com


Re: Keycloak as oauth provider

2016-09-20 Thread Dan Mace
On Tue, Sep 20, 2016 at 1:23 PM, Charles Moulliard 
wrote:

> Hi,
>
> What is the status about the integration of Keycloak as Oauth provider
> with OpenShift Origin ? Is it done - not done ? Still planned ?
>
> Regards,
>
> Charles
>
>
>
Which authentication protocol are you interested in? You can integrate
OpenShift with Keycloak out of the box today using OpenID Connect [1].

[1] https://docs.openshift.org/latest/install_config/configuring_authentication.html#OpenID
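For reference, a minimal sketch of the corresponding identity provider
entry in master-config.yaml (the realm, hostnames, client ID, and secret
here are placeholders, not values from this thread):

```yaml
# OpenID Connect identity provider pointing at a Keycloak realm.
# Substitute your own realm URLs and the client you registered in Keycloak.
oauthConfig:
  identityProviders:
  - name: keycloak
    challenge: false
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: OpenIDIdentityProvider
      clientID: openshift          # client registered in the Keycloak realm
      clientSecret: <client-secret>
      claims:
        id:
        - sub
        preferredUsername:
        - preferred_username
        name:
        - name
        email:
        - email
      urls:
        authorize: https://keycloak.example.com/auth/realms/demo/protocol/openid-connect/auth
        token: https://keycloak.example.com/auth/realms/demo/protocol/openid-connect/token
```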


Re: Are the identity provider token roles used ?

2016-05-20 Thread Dan Mace
Replies inline. cc’ing Jordan who can correct any inaccuracies on my part
related to authentication.

On Fri, May 20, 2016 at 9:19 AM, Charles Moulliard 
wrote:

> Hi,
>
> I have installed and configured OpenShift v1.3.0-alpha.0-581-gcf6465c with
> Keycloak 1.9.2.Final as identity provider
>
> I can log to the openshift server with the user admin or default created
> within the Openshift Realm of Keycloak
>
>  ./oc login https://192.168.99.100:8443 -u admin -p admin
>> Login successful.
>> You don't have any projects. You can try to create a new project, by
>> running
>> $ oc new-project 
>
>
>
> But the user doesn't belong to the cluster-admin role even if it has been
> added to keycloak realm and passed within the OpenID Token
>
> See the screenshot here :
> https://www.dropbox.com/s/c2n7a671jdkbhs9/Screenshot%202016-05-20%2015.16.56.png?dl=0
>
>  ./oc project default
> error: You are not a member of project "default".
> You are not a member of any projects. You can request a project to be
> created with the 'new-project' command.
>
> ./oc new-project default
> Error from server: project "default" already exists
>
> ./oc describe clusterPolicy default
> Error from server: User "admin" cannot get clusterpolicies at the cluster
> scope
>
> Questions :
> - Is the role passed within the OpenID Token used ?
>

Origin does not currently support mapping identity information to Origin
groups[1]. The role claim on your token is ignored by the system.

[1] https://docs.openshift.org/latest/install_config/configuring_authentication.html#mapping-identities-to-users

- How can we add for a user the cluster-admin role as we can't connect to
> the platform using user 'system:admin' - error: username system:admin is
> invalid for basic auth ?
>

I believe the `oadm policy add-cluster-role-to-user` command targeting
that new user will do what you're looking for.
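Concretely, a sketch of that invocation (run it with credentials that
already have cluster-admin, e.g. the admin kubeconfig on the master; the
username `admin` matches the Keycloak user in your example):

```shell
# Grant the cluster-admin role to the Keycloak-backed "admin" user.
# Must be run as an existing cluster admin, e.g.:
#   oadm --config=/etc/origin/master/admin.kubeconfig ...
oadm policy add-cluster-role-to-user cluster-admin admin
```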


Re: CMD with env vars

2016-02-25 Thread Dan Mace
On Thu, Feb 25, 2016 at 1:43 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> Hi,
>
> Sorry it's not the right place to post that, but I can't manage to get a
> container running when env var are present in the command:
>
>  spec:
>containers:
>- command:
>  - /nsqd
>  - --worker-id=${WORKER_ID:-0}
>  env:
>  - name: WORKER_ID
>value: "1"
>
> I have tried with /bin/sh -c, with or without quotes, etc., and nothing is
> working.
>
> Thanks,
> Philippe
>

Try this syntax:

  --worker-id=$(WORKER_ID)

Notice the parens rather than braces. I don't know that defaulting is
supported.
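A sketch of the corrected container spec (the image name is assumed for
illustration; only the `$(WORKER_ID)` form is expanded by the kubelet, and
the shell-style `${WORKER_ID:-0}` default syntax is not interpreted):

```yaml
# Pod spec fragment using Kubernetes $(VAR) substitution in the command.
spec:
  containers:
  - name: nsqd
    image: nsqio/nsq            # placeholder image for illustration
    command:
    - /nsqd
    - --worker-id=$(WORKER_ID)  # expanded from the env var below
    env:
    - name: WORKER_ID
      value: "1"
```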


Re: pods hanging in "pending" state

2016-02-18 Thread Dan Mace
On Thu, Feb 18, 2016 at 5:14 PM, Candide Kemmler 
wrote:

> I did it all from scratch again (delete/re-created the project).
>
> oc get event:
>
> https://gist.github.com/ckemmler/a4bda6af62dc3b6daa01
>
> logs for intrinsic-pfs deployment are empty ("An error occurred loading the
> log")
>
> Happy to take this to another channel (irc?)
>
> Thank *you* for helping me with this!
>
>
>
A quick IRC session could be easier... I'm `danmace` in #openshift-dev on
Freenode.



> On 18 Feb 2016, at 23:01, Dan Mace  wrote:
>
>
>
> On Thu, Feb 18, 2016 at 4:46 PM, Candide Kemmler 
> wrote:
>
>> Also, here is the template I'm using to try to create the app, if it
>> helps:
>>
>> https://gist.github.com/ckemmler/72738543f9aec97a3bca
>>
>> Just retried to create the app, here's the output of 'oc get pod -o yaml'
>>
>> https://gist.github.com/ckemmler/6de93bc17b9ed8208d4f
>>
>
>
> Looks like the pods associated with your new replicationController (aka
> deployment) are stuck "Pending" and don't have any status indicating
> they've been scheduled on a node. During the time of the hung deployment
> process, do you see any logs from the master related to the scheduling of
> 'intrinsic-pds-1-', or any events? Try:
>
> `oc get event`
>
> From what I have seen so far, the problem could be that the deployer
> process scaled up the RC to 1, but the pod for the RC never becomes ready,
> possibly because of a scheduling issue. Events and logs will hopefully tell
> us more.
>
> Thanks for your patience diagnosing this.
>
>> On 18 Feb 2016, at 22:36, Dan Mace  wrote:
>>
>>
>>
>> On Thu, Feb 18, 2016 at 4:32 PM, Candide Kemmler > > wrote:
>>
>>> Clayton, I don't know that's what I get from running "oc get dc
>>> intrinsic-pds":
>>>
>>> [admin@paas pds]$ oc get dc/intrinsic-pds -o yaml
>>> Error from server: deploymentConfig "\u200bi\u200bntrinsic-pds" not found
>>> [admin@paas pds]$ oc get dc/intrinsic-pds
>>> Error from server: deploymentConfig "\u200bi\u200bntrinsic-pds" not found
>>>
>>> Anyway getting all deploymentconfigs works:
>>>
>>
>> Okay, so next I'm curious to know if the pods or containers for the
>> newly deployed RC are failing to be created or are stuck in a crash-loop.
>> While the deployment is waiting, can you take a look at the
>> replicationControllers:
>>
>> `oc get rc -o yaml`
>>
>> And also the pods:
>>
>> `oc get pod -o yaml`
>>
>>
>>
>>
>>
>>> apiVersion: v1
>>> items:
>>> - apiVersion: v1
>>>   kind: DeploymentConfig
>>>   metadata:
>>> creationTimestamp: 2016-02-18T20:33:44Z
>>> labels:
>>>   template: couchdb-persistent-template
>>> name: couchdb
>>> namespace: intrinsic-dev
>>> resourceVersion: "26314"
>>> selfLink: /oapi/v1/namespaces/intrinsic-dev/deploymentconfigs/couchdb
>>> uid: e79d7892-d67e-11e5-9c86-fa163e3b8107
>>>   spec:
>>> replicas: 1
>>> selector:
>>>   name: couchdb
>>> strategy:
>>>   resources: {}
>>>   type: Recreate
>>> template:
>>>   metadata:
>>> creationTimestamp: null
>>> labels:
>>>   name: couchdb
>>>   spec:
>>> containers:
>>> - image:
>>> docker.io/intrinsic/couchdb@sha256:71fce8ab4ea3c148624e8d85a7cf3b49610b3d20cccfcc15a10572bdf1cca28c
>>>   imagePullPolicy: IfNotPresent
>>>   name: couchdb
>>>   ports:
>>>   - containerPort: 5984
>>> protocol: TCP
>>>   resources: {}
>>>   securityContext:
>>> capabilities: {}
>>> privileged: false
>>>   terminationMessagePath: /dev/termination-log
>>>   volumeMounts:
>>>   - mountPath: /usr/local/var/lib/couchdb
>>> name: couch-data
>>> dnsPolicy: ClusterFirst
>>> restartPolicy: Always
>>> securityContext: {}
>>> terminationGracePeriodSeconds: 30
>>> volumes:
>>> - name: couch-data
>>>   persistentVolumeClaim:
>>> claimName: couchdb
>>> triggers:
>>> - imageChangePa

Re: pods hanging in "pending" state

2016-02-18 Thread Dan Mace
On Thu, Feb 18, 2016 at 4:46 PM, Candide Kemmler 
wrote:

> Also, here is the template I'm using to try to create the app, if it helps:
>
> https://gist.github.com/ckemmler/72738543f9aec97a3bca
>
> Just retried to create the app, here's the output of 'oc get pod -o yaml'
>
> https://gist.github.com/ckemmler/6de93bc17b9ed8208d4f
>


Looks like the pods associated with your new replicationController (aka
deployment) are stuck "Pending" and don't have any status indicating
they've been scheduled on a node. During the time of the hung deployment
process, do you see any logs from the master related to the scheduling of
'intrinsic-pds-1-', or any events? Try:

`oc get event`

From what I have seen so far, the problem could be that the deployer
process scaled up the RC to 1, but the pod for the RC never becomes ready,
possibly because of a scheduling issue. Events and logs will hopefully tell
us more.

Thanks for your patience diagnosing this.

> On 18 Feb 2016, at 22:36, Dan Mace  wrote:
>
>
>
> On Thu, Feb 18, 2016 at 4:32 PM, Candide Kemmler 
> wrote:
>
>> Clayton, I don't know that's what I get from running "oc get dc
>> intrinsic-pds":
>>
>> [admin@paas pds]$ oc get dc/intrinsic-pds -o yaml
>> Error from server: deploymentConfig "\u200bi\u200bntrinsic-pds" not found
>> [admin@paas pds]$ oc get dc/intrinsic-pds
>> Error from server: deploymentConfig "\u200bi\u200bntrinsic-pds" not found
>>
>> Anyway getting all deploymentconfigs works:
>>
>
> Okay, so next I'm curious to know if the pods or containers for the newly
> deployed RC are failing to be created or are stuck in a crash-loop. While
> the deployment is waiting, can you take a look at the
> replicationControllers:
>
> `oc get rc -o yaml`
>
> And also the pods:
>
> `oc get pod -o yaml`
>
>
>
>
>
>> apiVersion: v1
>> items:
>> - apiVersion: v1
>>   kind: DeploymentConfig
>>   metadata:
>> creationTimestamp: 2016-02-18T20:33:44Z
>> labels:
>>   template: couchdb-persistent-template
>> name: couchdb
>> namespace: intrinsic-dev
>> resourceVersion: "26314"
>> selfLink: /oapi/v1/namespaces/intrinsic-dev/deploymentconfigs/couchdb
>> uid: e79d7892-d67e-11e5-9c86-fa163e3b8107
>>   spec:
>> replicas: 1
>> selector:
>>   name: couchdb
>> strategy:
>>   resources: {}
>>   type: Recreate
>> template:
>>   metadata:
>> creationTimestamp: null
>> labels:
>>   name: couchdb
>>   spec:
>> containers:
>> - image:
>> docker.io/intrinsic/couchdb@sha256:71fce8ab4ea3c148624e8d85a7cf3b49610b3d20cccfcc15a10572bdf1cca28c
>>   imagePullPolicy: IfNotPresent
>>   name: couchdb
>>   ports:
>>   - containerPort: 5984
>> protocol: TCP
>>   resources: {}
>>   securityContext:
>> capabilities: {}
>> privileged: false
>>   terminationMessagePath: /dev/termination-log
>>   volumeMounts:
>>   - mountPath: /usr/local/var/lib/couchdb
>> name: couch-data
>> dnsPolicy: ClusterFirst
>> restartPolicy: Always
>> securityContext: {}
>> terminationGracePeriodSeconds: 30
>> volumes:
>> - name: couch-data
>>   persistentVolumeClaim:
>> claimName: couchdb
>> triggers:
>> - imageChangeParams:
>> automatic: true
>> containerNames:
>> - couchdb
>> from:
>>   kind: ImageStreamTag
>>   name: couchdb:latest
>>   namespace: openshift
>> lastTriggeredImage:
>> docker.io/intrinsic/couchdb@sha256:71fce8ab4ea3c148624e8d85a7cf3b49610b3d20cccfcc15a10572bdf1cca28c
>>   type: ImageChange
>> - type: ConfigChange
>>   status:
>> details:
>>   causes:
>>   - type: ConfigChange
>> latestVersion: 1
>> - apiVersion: v1
>>   kind: DeploymentConfig
>>   metadata:
>> creationTimestamp: 2016-02-18T21:19:05Z
>> labels:
>>   app: intrinsic-pds
>> name: intrinsic-pds
>> namespace: intrinsic-dev
>> resourceVersion: "26863"
>> selfLink:
>> /oapi/v1/namespaces/intrinsic-dev/deploymentconfigs/intrinsic-pds
>> uid: 3d99e06b-d685-11e5-9c86-fa163e3b8107
>>

Re: pods hanging in "pending" state

2016-02-18 Thread Dan Mace
>   - containerPort: 8778
> protocol: TCP
>   resources: {}
>   terminationMessagePath: /dev/termination-log
> dnsPolicy: ClusterFirst
> restartPolicy: Always
> securityContext: {}
> terminationGracePeriodSeconds: 30
> triggers:
> - imageChangeParams:
> automatic: true
> containerNames:
> - intrinsic-pds
> from:
>   kind: ImageStreamTag
>   name: intrinsic-pds:latest
> lastTriggeredImage:
> 172.30.122.240:5000/intrinsic-dev/intrinsic-pds@sha256:0d26174694f39e4b2d6996c7ec03fcbd17af981de62393c0462743e9a0f0dac6
>   type: ImageChange
> - type: ConfigChange
>   status:
> details:
>   causes:
>   - imageTrigger:
>   from:
> kind: DockerImage
> name: 172.30.122.240:5000/intrinsic-dev/intrinsic-pds:latest
> type: ImageChange
> latestVersion: 1
> - apiVersion: v1
>   kind: DeploymentConfig
>   metadata:
> creationTimestamp: 2016-02-18T20:26:58Z
> labels:
>   template: mysql-persistent-template
> name: mysql
> namespace: intrinsic-dev
> resourceVersion: "26218"
> selfLink: /oapi/v1/namespaces/intrinsic-dev/deploymentconfigs/mysql
> uid: f58518e0-d67d-11e5-9c86-fa163e3b8107
>   spec:
> replicas: 1
> selector:
>   name: mysql
> strategy:
>   resources: {}
>   type: Recreate
> template:
>   metadata:
> creationTimestamp: null
> labels:
>   name: mysql
>   spec:
> containers:
> - env:
>   - name: MYSQL_USER
> value: ***
>   - name: MYSQL_PASSWORD
> value: ***
>   - name: MYSQL_DATABASE
> value: ***
>   image:
> docker.io/centos/mysql-56-centos7@sha256:5a1d4c653e953c75a283cfecb1016ae57023b52ea12ad35ec0d1f861adb1
>   imagePullPolicy: IfNotPresent
>   name: mysql
>   ports:
>   - containerPort: 3306
> protocol: TCP
>   resources: {}
>   securityContext:
> capabilities: {}
> privileged: false
>   terminationMessagePath: /dev/termination-log
>   volumeMounts:
>   - mountPath: /var/lib/mysql/data
> name: mysql-data
> dnsPolicy: ClusterFirst
> restartPolicy: Always
> securityContext: {}
> terminationGracePeriodSeconds: 30
> volumes:
> - name: mysql-data
>   persistentVolumeClaim:
> claimName: mysql
> triggers:
> - imageChangeParams:
> automatic: true
> containerNames:
> - mysql
> from:
>   kind: ImageStreamTag
>   name: mysql:latest
>       namespace: openshift
> lastTriggeredImage:
> docker.io/centos/mysql-56-centos7@sha256:5a1d4c653e953c75a283cfecb1016ae57023b52ea12ad35ec0d1f861adb1
>   type: ImageChange
> - type: ConfigChange
>   status:
> details:
>   causes:
>   - type: ConfigChange
> latestVersion: 1
> kind: List
> metadata: {}
>
> > On 18 Feb 2016, at 22:28, Clayton Coleman  wrote:
> >
> > What is "i\u200b"?  Is that a unicode character?
> >
> > On Thu, Feb 18, 2016 at 4:23 PM, Candide Kemmler
> >  wrote:
> >> No there is no readiness probe in place.
> >>
> >> Really strange: here's what `oc get dc intrinsic-pds -o yaml` tells me:
> >>
> >> Error from server: deploymentConfig "i\u200bntrinsic-pds" not found
> >>
> >> ???
> >>
> >> On 18 Feb 2016, at 22:10, Dan Mace  wrote:
> >>
> >> On Thu, Feb 18, 2016 at 4:03 PM, Candide Kemmler
> 
> >> wrote:
> >>>
> >>> I have successfully created templates for all 5 microservices in our
> >>> application but now, at the "deployment" phase, the pod will remain
> >>> "pending" and even deleting all related objects will not get rid of it
> and
> >>> it will remain forever at the bottom of the overview list with an
> orange
> >>> circle around it. I can see that the s2i phase completed successfully,
> the
> >>> replicationcontroller duly created the pod which was assigned a node,
> as is
> >>> shown in the logs:
> >>>
> >>>
> >>> 9:53:18 PM  intrinsic-pds-1-7fohj   Pod Scheduled
> >>> Successfully assigned intrinsic-pds-1-7fohj to apps.intrinsic.world
> >>> 9

Re: pods hanging in "pending" state

2016-02-18 Thread Dan Mace
On Thu, Feb 18, 2016 at 4:23 PM, Candide Kemmler 
wrote:

> No there is no readiness probe in place.
>
> Really strange: here's what `oc get dc intrinsic-pds -o yaml` tells me:
>
> Error from server: deploymentConfig "i\u200bntrinsic-pds" not found
>
> ???
> ​
>

You may have missed a slash there:

`oc get dc/​i​ntrinsic-pds -o yaml`

Or just:

`oc get dc -o yaml`

If you don't mind sharing all your deploymentConfigs.


> ​
>
> On 18 Feb 2016, at 22:10, Dan Mace  wrote:
>
> On Thu, Feb 18, 2016 at 4:03 PM, Candide Kemmler 
> wrote:
>
>> I have successfully created templates for all 5 microservices in our
>> application but now, at the "deployment" phase, the pod will remain
>> "pending" and even deleting all related objects will not get rid of it and
>> it will remain forever at the bottom of the overview list with an orange
>> circle around it. I can see that the s2i phase completed successfully, the
>> replicationcontroller duly created the pod which was assigned a node, as is
>> shown in the logs:
>>
>>
>> 9:53:18 PM  intrinsic-pds-1-7fohj   Pod Scheduled
>>  Successfully assigned intrinsic-pds-1-7fohj to apps.intrinsic.world
>> 9:53:18 PM  intrinsic-pds-1 ReplicationController   SuccessfulCreate
>>   Created pod: intrinsic-pds-1-7fohj
>> 9:53:15 PM  intrinsic-pds-1-deploy  Pod Scheduled
>>  Successfully assigned
>> ​​
>> i
>> ​​
>> ntrinsic-pds-1-deploy to apps.intrinsic.world
>>
>> The deployment, which will be forever "running" seems to be stuck saying
>> the following:
>>
>>
>> I0218 20:52:12.846055 1 deployer.go:196] Deploying
>> intrinsic-dev/intrinsic-pds-1 for the first time (replicas: 1)
>> I0218 20:52:12.848446 1 recreate.go:105] Scaling
>> intrinsic-dev/intrinsic-pds-1 to 1 before performing acceptance check
>> I0218 20:52:14.909059 1 recreate.go:110] Performing acceptance check of
>> intrinsic-dev/intrinsic-pds-1
>> I0218 20:52:14.909455 1 lifecycle.go:379] Waiting 600 seconds for pods
>> owned by deployment "intrinsic-dev/intrinsic-pds-1" to become ready
>> (checking every 1 seconds; 0 pods previously accepted)
>>
>> Other than by destroying the entire project (that works), how can I get
>> rid of these buggy pods and more importantly, how can I debug what's
>> affecting my deployments?
>>
>
> The deployment is waiting up to 10 minutes to verify that the newly
> deployed version's first pod is ready before progressing. Do you have
> any livenessProbe or readinessProbe defined on the pod template inside
> your deploymentConfig? The output of "oc get dc/intrinsic-pds -o yaml"
> would be helpful.
>
>
>


Re: pods hanging in "pending" state

2016-02-18 Thread Dan Mace
On Thu, Feb 18, 2016 at 4:03 PM, Candide Kemmler 
wrote:

> I have successfully created templates for all 5 microservices in our
> application but now, at the "deployment" phase, the pod will remain
> "pending" and even deleting all related objects will not get rid of it and
> it will remain forever at the bottom of the overview list with an orange
> circle around it. I can see that the s2i phase completed successfully, the
> replicationcontroller duly created the pod which was assigned a node, as is
> shown in the logs:
>
>
> 9:53:18 PM  intrinsic-pds-1-7fohj   Pod Scheduled
>  Successfully assigned intrinsic-pds-1-7fohj to apps.intrinsic.world
> 9:53:18 PM  intrinsic-pds-1 ReplicationController   SuccessfulCreate
>   Created pod: intrinsic-pds-1-7fohj
> 9:53:15 PM  intrinsic-pds-1-deploy  Pod Scheduled
>  Successfully assigned
> ​​
> i
> ​​
> ntrinsic-pds-1-deploy to apps.intrinsic.world
>
> The deployment, which will be forever "running" seems to be stuck saying
> the following:
>
>
> I0218 20:52:12.846055 1 deployer.go:196] Deploying
> intrinsic-dev/intrinsic-pds-1 for the first time (replicas: 1)
> I0218 20:52:12.848446 1 recreate.go:105] Scaling
> intrinsic-dev/intrinsic-pds-1 to 1 before performing acceptance check
> I0218 20:52:14.909059 1 recreate.go:110] Performing acceptance check of
> intrinsic-dev/intrinsic-pds-1
> I0218 20:52:14.909455 1 lifecycle.go:379] Waiting 600 seconds for pods
> owned by deployment "intrinsic-dev/intrinsic-pds-1" to become ready
> (checking every 1 seconds; 0 pods previously accepted)
>
> Other than by destroying the entire project (that works), how can I get
> rid of these buggy pods and more importantly, how can I debug what's
> affecting my deployments?
>

The deployment is waiting up to 10 minutes to verify that the newly
deployed version's first pod is ready before progressing. Do you have
any livenessProbe or readinessProbe defined on the pod template inside
your deploymentConfig? The output of "oc get dc/intrinsic-pds -o yaml"
would be helpful.
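For reference, a readinessProbe on the pod template inside a
deploymentConfig might look like this (the path, port, and timings are
placeholders, not values from your config):

```yaml
# Pod template fragment with a readinessProbe. Until the probe succeeds,
# the deployer's acceptance check keeps waiting, as in the log above.
spec:
  template:
    spec:
      containers:
      - name: intrinsic-pds
        readinessProbe:
          httpGet:
            path: /healthz      # placeholder health endpoint
            port: 8080          # placeholder container port
          initialDelaySeconds: 5
          timeoutSeconds: 1
```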


Re: unauthorized trying to deploy docker container

2016-02-15 Thread Dan Mace
On Mon, Feb 15, 2016 at 5:01 AM, Candide Kemmler 
wrote:

> I'm trying to deploy a couchdb container
>  on openshift, and have created
> 3 yaml files to this end:
>
> a persistent volume
> a persistent volume claim
> a deployment config
>
> (see attached files)
>
> However, I'm getting the following error upon deployment
>
> Failed to pull image
> "docker.io/centos/klaemo/couchdb@sha256:d33822ecaae2e6247243b24c5c72c525714d77b904105f71679fbad04756bc96":
> unauthorized: access to the requested resource is not authorized
>
> The container is hosted on docker's public registry so wondering what this
> message means...
>

I tried pulling that image myself with `docker pull` and got the same
error, which means access to the image requires docker hub authentication.
You need to create a secret in OpenShift which contains the pull
credentials and then reference that secret in the pod defined in your
deploymentConfig. Here's some more information about image pull secrets in
Kubernetes and OpenShift:

https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/images.md#specifying-imagepullsecrets-on-a-pod
https://docs.openshift.org/latest/dev_guide/image_pull_secrets.html
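As a sketch using the Origin-era `oc secrets` commands (the secret name
and all credentials below are placeholders):

```shell
# Create a dockercfg secret holding the Docker Hub pull credentials.
oc secrets new-dockercfg hub-pull-secret \
    --docker-server=docker.io \
    --docker-username=<user> \
    --docker-password=<password> \
    --docker-email=<email>

# Allow the default service account (which runs the pod) to use it for pulls.
oc secrets add serviceaccount/default secrets/hub-pull-secret --for=pull
```

Alternatively, reference the secret explicitly via `imagePullSecrets` in
the pod template of your deploymentConfig.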

If that's not enough to get you moving, please reach out!

-- Dan