You're missing a "cat" before /var/run/secrets/kubernetes.io/serviceaccount/token, i.e.

-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
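
For what it's worth, the difference is easy to reproduce outside the container. A minimal sketch, using a throwaway temp file standing in for the token (everything here is illustrative; the real path is the one above):

```shell
#!/bin/sh
# Throwaway file standing in for the service-account token.
tmp=$(mktemp)
printf 'abc123' > "$tmp"

# $(cat file) substitutes the file's CONTENTS; in bash, $(<file) is
# equivalent. A bare $(file) instead tries to EXECUTE the file, fails
# with "Permission denied" on a non-executable token, and expands to
# an empty string.
token=$(cat "$tmp")
echo "Bearer $token"    # prints "Bearer abc123"

rm -f "$tmp"
```

That bash-only $(<file) form is also why the $(</var/run/secrets/kubernetes.io/serviceaccount/token) command quoted below works: dropping the "<" turns it into an attempt to execute the token file.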

On Thu, Mar 3, 2016 at 1:26 PM, Dean Peterson <peterson.d...@gmail.com>
wrote:

> I have followed some of Ram's steps last night after recreating the router
> a few times.
>
> 1. oadm router aberouter --replicas=1 \
>     --credentials=/etc/origin/master/openshift-router.kubeconfig \
>     --service-account=system:serviceaccount:default:router
>
> 2. docker ps | grep haproxy
>
> 3.  I grab the container id and type "cid=<container id>" replacing
> container id with what I get from step 2.
>
> 4.  sudo nsenter -m -u -n -i -p -t $(sudo docker inspect --format "{{ .State.Pid }}" "$cid")
>
> 5. curl -k -vvv https://openshift.abecorn.com:8443/api/v1/routes/ -H "Authorization: Bearer $(/var/run/secrets/kubernetes.io/serviceaccount/token)"
>
> Instead of seeing "system:router\" cannot list all routes in the cluster,
> I see "system:anonymous\" cannot list all routes in the cluster.
>
> *It looks like the haproxy container is being created with
> system:anonymous credentials?  How is that possible and how can I force it
> to use system:router when I already used
> --service-account=system:serviceaccount:default:router in step 1?*
>
>
> curl -k -vvv https://openshift.abecorn.com:8443/api/v1/routes/ -H "Authorization: Bearer $(/var/run/secrets/kubernetes.io/serviceaccount/token)"
> -bash: /var/run/secrets/kubernetes.io/serviceaccount/token: Permission denied
> * About to connect() to openshift.abecorn.com port 8443 (#0)
> *   Trying 23.25.149.227...
> * Connected to openshift.abecorn.com (23.25.149.227) port 8443 (#0)
> * Initializing NSS with certpath: sql:/etc/pki/nssdb
> * skipping SSL peer certificate verification
> * NSS: client certificate not found (nickname not specified)
> * SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
> * Server certificate:
> * subject: CN=172.30.0.1
> * start date: Feb 29 03:50:56 2016 GMT
> * expire date: Feb 28 03:50:57 2018 GMT
> * common name: 172.30.0.1
> * issuer: CN=openshift-signer@1456717855
> > GET /api/v1/routes/ HTTP/1.1
> > User-Agent: curl/7.29.0
> > Host: openshift.abecorn.com:8443
> > Accept: */*
> > Authorization: Bearer
> >
> < HTTP/1.1 403 Forbidden
> < Cache-Control: no-store
> < Content-Type: application/json
> < Date: Thu, 03 Mar 2016 18:20:09 GMT
> < Content-Length: 247
> <
> {
>   "kind": "Status",
>   "apiVersion": "v1",
>   "metadata": {},
>   "status": "Failure",
>   "message": "User \"system:anonymous\" cannot list all routes in the
> cluster",
>   "reason": "Forbidden",
>   "details": {
>     "kind": "routes"
>   },
>   "code": 403
> }
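
The bare "Authorization: Bearer" line in the trace above is the tell: the failed substitution expands to an empty string, the request carries no token, and the API server treats a token-less request as system:anonymous. A small sketch of that failure mode, again with a throwaway non-executable file in place of the real token:

```shell
#!/bin/sh
# Stand-in for the token file: readable, but not executable.
tmp=$(mktemp)
printf 'sometoken' > "$tmp"
chmod 600 "$tmp"

# Without "cat", the shell tries to run the file as a command.
# The substitution fails ("Permission denied") and yields an empty
# string, so the Authorization header carries no token at all.
token=$("$tmp" 2>/dev/null)
echo "Authorization: Bearer $token"

rm -f "$tmp"
```

So the haproxy container is not being created with anonymous credentials; the curl command simply never sends the token.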
>
>
> On Thu, Mar 3, 2016 at 10:39 AM, Dean Peterson <peterson.d...@gmail.com>
> wrote:
>
>> Ram actually went through quite a lot with me last night.  Here is a gist
>> of the irc chat:
>>
>> https://gist.github.com/deanpeterson/568f07b032933e9d219b
>>
>> On Thu, Mar 3, 2016 at 10:36 AM, Dean Peterson <peterson.d...@gmail.com>
>> wrote:
>>
>>> The logs only say:  "Router is including routes in all namespaces"
>>>
>>>
>>> On Thu, Mar 3, 2016 at 10:22 AM, Jordan Liggitt <jligg...@redhat.com>
>>> wrote:
>>>
>>>> What is in your router logs?
>>>>
>>>> On Thu, Mar 3, 2016 at 11:21 AM, Dean Peterson <peterson.d...@gmail.com
>>>> > wrote:
>>>>
>>>>> *The service account does exist:*
>>>>>
>>>>>  oc describe serviceaccount router
>>>>> Name:           router
>>>>> Namespace:      default
>>>>> Labels:         <none>
>>>>>
>>>>> Image pull secrets:     router-dockercfg-2d4wd
>>>>>
>>>>> Mountable secrets:      router-token-9p8at
>>>>>                         router-dockercfg-2d4wd
>>>>>
>>>>> Tokens:                 router-token-1le9y
>>>>>                         router-token-9p8at
>>>>>
>>>>> *It is in scc privileged:*
>>>>>
>>>>> users:
>>>>> - system:serviceaccount:openshift-infra:build-controller
>>>>> - system:serviceaccount:management-infra:management-admin
>>>>> - system:serviceaccount:default:router
>>>>> - system:serviceaccount:default:registry
>>>>>
>>>>> *And it has the policy to view endpoints in all namespaces:*
>>>>>
>>>>> oadm policy who-can get endpoints --all-namespaces
>>>>> Namespace: <all>
>>>>> Verb:      get
>>>>> Resource:  endpoints
>>>>>
>>>>> Users:  system:serviceaccount:default:router
>>>>>         system:serviceaccount:management-infra:management-admin
>>>>>
>>>>> Groups: system:cluster-admins
>>>>>         system:cluster-readers
>>>>>         system:masters
>>>>>         system:nodes
>>>>>
>>>>> Still getting 503 errors on all services.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Thu, Mar 3, 2016 at 10:14 AM, Dean Peterson <
>>>>> peterson.d...@gmail.com> wrote:
>>>>>
>>>>>> Now when it displays:
>>>>>>
>>>>>>  oadm policy who-can get endpoints --all-namespaces
>>>>>> Namespace: <all>
>>>>>> Verb:      get
>>>>>> Resource:  endpoints
>>>>>>
>>>>>> Users:  system:serviceaccount:default:router
>>>>>>         system:serviceaccount:management-infra:management-admin
>>>>>>
>>>>>> Groups: system:cluster-admins
>>>>>>         system:cluster-readers
>>>>>>         system:masters
>>>>>>         system:nodes
>>>>>>
>>>>>> But I still get the 503 error even after removing the router and
>>>>>> recreating it.
>>>>>>
>>>>>> On Thu, Mar 3, 2016 at 9:43 AM, Jordan Liggitt <jligg...@redhat.com>
>>>>>> wrote:
>>>>>>
>>>>>>> oadm policy add-cluster-role-to-user system:router
>>>>>>> system:serviceaccount:default:router
>>>>>>>
>>>>>>> On Thu, Mar 3, 2016 at 10:16 AM, Dean Peterson <
>>>>>>> peterson.d...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Yes, it only shows this:
>>>>>>>>
>>>>>>>>  oadm policy who-can get endpoints --all-namespaces
>>>>>>>> Namespace: <all>
>>>>>>>> Verb:      get
>>>>>>>> Resource:  endpoints
>>>>>>>>
>>>>>>>> Users:  router
>>>>>>>>         system:serviceaccount:management-infra:management-admin
>>>>>>>>
>>>>>>>> Groups: system:cluster-admins
>>>>>>>>         system:cluster-readers
>>>>>>>>         system:masters
>>>>>>>>         system:nodes
>>>>>>>>
>>>>>>>> Do I add cluster-admin like this: oadm policy add-cluster-role-to-user cluster-admin router?
>>>>>>>>
>>>>>>>> On Thu, Mar 3, 2016 at 7:03 AM, Jordan Liggitt <jligg...@redhat.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Was that service account given permission only within that
>>>>>>>>> namespace or cluster wide?
>>>>>>>>>
>>>>>>>>> What does this show:
>>>>>>>>> $  oadm policy who-can get endpoints --all-namespaces
>>>>>>>>>
>>>>>>>>> If it doesn't include the router service account, then you need to
>>>>>>>>> grant a cluster role to that user (oadm policy 
>>>>>>>>> add-cluster-role-to-user …)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mar 3, 2016, at 3:47 AM, Ram Ranganathan <rrang...@redhat.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Yeah, that should not matter. The routes + namespaces you would
>>>>>>>>> see are based on the permissions of the service account.
>>>>>>>>>
>>>>>>>>> I was able to get Dean on irc and ssh into his instance; we're seeing something wonky with the permissions.
>>>>>>>>> CCing Jordan and Paul for some help.
>>>>>>>>>
>>>>>>>>> Inside the router container, I tried running this:
>>>>>>>>> curl -k -vvv https://127.0.0.1:8443/api/v1/endpoints -H "Authorization: Bearer $(</var/run/secrets/kubernetes.io/serviceaccount/token)"
>>>>>>>>>
>>>>>>>>> which returns the endpoints if that token has permissions, but instead I get a 403 error back:
>>>>>>>>> "message": "User \"system:serviceaccount:default:router\" cannot list all endpoints in the cluster",
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> but the oadm policy shows that the router service account has
>>>>>>>>> those permissions.
>>>>>>>>>
>>>>>>>>> On the host, running :
>>>>>>>>> $  oadm policy who-can get endpoints
>>>>>>>>>
>>>>>>>>> output has the router service account:
>>>>>>>>> http://fpaste.org/332733/45699454/
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> The token info from inside the router container (/var/run/secrets/kubernetes.io/serviceaccount/token) seems to work if I use it with oc login but not with the curl command, so it feels a bit odd. Any ideas what's amiss here?
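
One quick sanity check here: service-account tokens are JWTs, so you can decode the payload segment and confirm which account a token actually names. A self-contained sketch using a fabricated token (the claim key mirrors the one Kubernetes puts in service-account tokens; on the real system you would decode the contents of /var/run/secrets/kubernetes.io/serviceaccount/token instead):

```shell
#!/bin/sh
# Build a fake JWT so the example is self-contained. b64url encodes a
# string as base64url with the padding stripped, the way JWTs do.
b64url() { printf '%s' "$1" | base64 | tr -d '\n=' | tr '+/' '-_'; }

claims='{"kubernetes.io/serviceaccount/service-account.name":"router"}'
token="$(b64url '{"alg":"RS256"}').$(b64url "$claims").fake-signature"

# Decode the payload (middle) segment: undo the base64url mapping,
# restore the stripped "=" padding, then base64-decode.
seg=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
printf '%s' "$seg" | base64 -d
echo
```

If a real token decodes to a different service account than expected, that would explain the mismatch.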
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>>
>>>>>>>>> Ram//
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Wed, Mar 2, 2016 at 11:56 PM, Dean Peterson <
>>>>>>>>> peterson.d...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> The router is on default namespace but the service pods are
>>>>>>>>>> running on a different namespace.
>>>>>>>>>>
>>>>>>>>>> On Thu, Mar 3, 2016 at 1:53 AM, Julio Saura <jsa...@hiberus.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> It seems your router is running in the default namespace; are your pods also running in the default namespace?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 3 Mar 2016, at 7:58, Dean Peterson <peterson.d...@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> I did do an "oc edit scc privileged" and made sure this was at
>>>>>>>>>>> the end:
>>>>>>>>>>>
>>>>>>>>>>> users:
>>>>>>>>>>> - system:serviceaccount:openshift-infra:build-controller
>>>>>>>>>>> - system:serviceaccount:management-infra:management-admin
>>>>>>>>>>> - system:serviceaccount:default:router
>>>>>>>>>>> - system:serviceaccount:default:registry
>>>>>>>>>>>
>>>>>>>>>>> router has always been a privileged user service account.
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Mar 3, 2016 at 12:55 AM, Ram Ranganathan <
>>>>>>>>>>> rrang...@redhat.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> So you have no app-level backends in that gist (haproxy.config file). That would explain the 503s: there's nothing there for haproxy to route to. Most likely it's because the router service account has no permissions to get the routes/endpoints info from etcd.
>>>>>>>>>>>> Check that the router service account (router default or whatever service account you used to start the router) is part of the privileged SCC and has read permissions to etcd.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Mar 2, 2016 at 10:43 PM, Dean Peterson <
>>>>>>>>>>>> peterson.d...@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> I created a public gist from the output:
>>>>>>>>>>>>> https://gist.github.com/deanpeterson/76aa9abf2c7fa182b56c
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Mar 3, 2016 at 12:35 AM, Ram Ranganathan <
>>>>>>>>>>>>> rrang...@redhat.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> You shouldn't need to restart the router. It should have
>>>>>>>>>>>>>> created a new deployment and redeployed the router.
>>>>>>>>>>>>>> So looks like the cause for your 503 errors is something else.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Can you check that your haproxy.config file is correct (has the correct backends and servers)?
>>>>>>>>>>>>>> Either nsenter into your router docker container and cat the file, or run:
>>>>>>>>>>>>>>     oc exec <router-pod-name> cat /var/lib/haproxy/conf/haproxy.config    # router-pod-name as shown in oc get pods
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ram//
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Mar 2, 2016 at 10:10 PM, Dean Peterson <
>>>>>>>>>>>>>> peterson.d...@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I ran that "oc env dc router RELOAD_INTERVAL=5s" but I
>>>>>>>>>>>>>>> still get the 503 error.  Do I need to restart anything?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Mar 2, 2016 at 11:47 PM, Ram Ranganathan <
>>>>>>>>>>>>>>> rrang...@redhat.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Dean, we did have a recent change to coalesce router reloads (the default interval is 0s), and it looks like with that default we are more aggressive with the reloads, which could be causing this problem.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Could you please try setting an environment variable ala:
>>>>>>>>>>>>>>>>     oc env dc router RELOAD_INTERVAL=5s
>>>>>>>>>>>>>>>>        # or even 2s or 3s - that's the reload interval in seconds, btw
>>>>>>>>>>>>>>>>        # if you have a custom deployment config, replace the dc name "router" with that deployment config's name
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> and see if that helps.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Mar 2, 2016 at 6:21 PM, Dean Peterson <
>>>>>>>>>>>>>>>> peterson.d...@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Is there another place I can look to track down the
>>>>>>>>>>>>>>>>> problem?  The router logs don't say much, just: " Router
>>>>>>>>>>>>>>>>> is including routes in all namespaces"
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Mar 2, 2016 at 7:39 PM, Dean Peterson <
>>>>>>>>>>>>>>>>> peterson.d...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> All it says is: " Router is including routes in all
>>>>>>>>>>>>>>>>>> namespaces"  That's it.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Mar 2, 2016 at 7:38 PM, Clayton Coleman <
>>>>>>>>>>>>>>>>>> ccole...@redhat.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> What do the router logs say?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mar 2, 2016, at 7:43 PM, Dean Peterson <
>>>>>>>>>>>>>>>>>>> peterson.d...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> This is as close to having openshift origin set up
>>>>>>>>>>>>>>>>>>> perfectly as I have gotten.  My builds work great, 
>>>>>>>>>>>>>>>>>>> container deployments
>>>>>>>>>>>>>>>>>>> always work now.  I thought I was finally going to have a 
>>>>>>>>>>>>>>>>>>> smooth running
>>>>>>>>>>>>>>>>>>> Openshift; I just need to get past this last router issue.  
>>>>>>>>>>>>>>>>>>> It makes little
>>>>>>>>>>>>>>>>>>> sense.  I have set up a router many times before and never 
>>>>>>>>>>>>>>>>>>> had this issue.
>>>>>>>>>>>>>>>>>>> I've had issues with other parts of the system but never 
>>>>>>>>>>>>>>>>>>> the router.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Mar 2, 2016 at 6:34 PM, Dean Peterson <
>>>>>>>>>>>>>>>>>>> peterson.d...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I have a number of happy pods.  They are all running
>>>>>>>>>>>>>>>>>>>> normally.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Mar 2, 2016 at 6:28 PM, Mohamed Lrhazi <
>>>>>>>>>>>>>>>>>>>> mohamed.lrh...@georgetown.edu> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Click on a pod and get to its log and events tabs....
>>>>>>>>>>>>>>>>>>>>> see if they are actually happy or stuck on something...
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Mar 2, 2016 at 7:03 PM, Dean Peterson <
>>>>>>>>>>>>>>>>>>>>> peterson.d...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> I have successfully started the HAProxy router.  I
>>>>>>>>>>>>>>>>>>>>>> have a pod running, yet all my routes take me to a 503 
>>>>>>>>>>>>>>>>>>>>>> service unavailable
>>>>>>>>>>>>>>>>>>>>>> error page.  I updated my resolv.conf file to have my 
>>>>>>>>>>>>>>>>>>>>>> master ip as
>>>>>>>>>>>>>>>>>>>>>> nameserver; I've never had this problem on previous 
>>>>>>>>>>>>>>>>>>>>>> versions.  I installed
>>>>>>>>>>>>>>>>>>>>>> openshift origin 1.1.3 with ansible; everything seems to 
>>>>>>>>>>>>>>>>>>>>>> be running
>>>>>>>>>>>>>>>>>>>>>> smoothly like before but I just get 503 service 
>>>>>>>>>>>>>>>>>>>>>> unavailable errors trying
>>>>>>>>>>>>>>>>>>>>>> to visit any route.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>>>>>>>>> users mailing list
>>>>>>>>>>>>>>>>>>>>>> users@lists.openshift.redhat.com
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> Ram//
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> main(O,s){s=--O;10<putchar(3^O?97-(15&7183>>4*s)*(O++?-1:1):10)&&\
>>>>>>>>>>>>>>>> main(++O,s++);}
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
>
>