Re: error on scaling up cluster

2018-02-23 Thread Julio Saura
Solved, sorry.

Just copying ca-bundle.crt to ca.crt did the trick :/
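For reference, a minimal sketch of that workaround (this assumes the default master config directory; back up the original ca.crt first):

cp /etc/origin/master/ca-bundle.crt /etc/origin/master/ca.crt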



> On 23 Feb 2018, at 11:31, Julio Saura wrote:
> 
> Hello
> 
> i have been scaling up or oc origin cluster without any problem so far, but 
> now i get this weird error when running de scale playbook
> 
> running version 
> 
> 
> oc v3.6.0+c4dd4cf
> kubernetes v1.6.1+5115d708d7
> 
> 
> 
> ansible-playbook -i /etc/ansible/hosts 
> ./openshift-ansible/playbooks/byo/openshift-node/scaleup.yml
> 
> 
> TASK [openshift_node_certificates : Generate the node client config]
> 
> ["oc", "adm", "create-api-client-config", 
> "--certificate-authority=/etc/origin/master/ca.crt", 
> "--client-dir=/etc/origin/generated-configs/node03, 
> "--groups=system:nodes", "--master=https://MASTER <https://master/>", 
> "--signer-cert=/etc/origin/master/ca.crt", 
> "--signer-key=/etc/origin/master/ca.key", 
> "--signer-serial=/etc/origin/master/ca.serial.txt", 
> "--user=system:node:node03", "--expire-days=730”],
> 
> 
> 
> error: tls: private key does not match public key", "stderr_lines": ["error: 
> tls: private key does not match public key"], "stdout": "", "stdout_lines": 
> []}
> 
> cluster is running ok . but i am not able to add new nodes anymore
> 
> any clue?
> 
> thanks
> 
> 
> 
> 
> 
> 
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


error on scaling up cluster

2018-02-23 Thread Julio Saura
Hello

I have been scaling up our OpenShift Origin cluster without any problem so far, but now 
I get this weird error when running the scale-up playbook.

running version 


oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7



ansible-playbook -i /etc/ansible/hosts 
./openshift-ansible/playbooks/byo/openshift-node/scaleup.yml


TASK [openshift_node_certificates : Generate the node client config]

["oc", "adm", "create-api-client-config", 
"--certificate-authority=/etc/origin/master/ca.crt", 
"--client-dir=/etc/origin/generated-configs/node03, 
"--groups=system:nodes", "--master=https://MASTER";, 
"--signer-cert=/etc/origin/master/ca.crt", 
"--signer-key=/etc/origin/master/ca.key", 
"--signer-serial=/etc/origin/master/ca.serial.txt", 
"--user=system:node:node03", "--expire-days=730”],



error: tls: private key does not match public key", "stderr_lines": ["error: 
tls: private key does not match public key"], "stdout": "", "stdout_lines": []}
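(A quick way to check whether the signer cert and key actually match, as a sketch assuming the default /etc/origin/master paths, is to compare their public keys:

openssl x509 -noout -pubkey -in /etc/origin/master/ca.crt | sha256sum
openssl rsa -noout -pubout -in /etc/origin/master/ca.key | sha256sum

If the two hashes differ, the ca.crt/ca.key pair is mismatched.)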

The cluster is running OK, but I am not able to add new nodes anymore.

Any clue?

thanks







___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: service account for rest api

2017-10-20 Thread Julio Saura
hello


> On 20 Oct 2017, at 9:57, Frederic Giloux wrote:
> 
> Hi Julio
> 
> a couple of points here:
> - oc policy add-role-to-user admin system:serviceaccounts:project1:inciga -n 
> project1 would have worked for the project.

Did not work :( trust me... I checked it a lot of times.

The same command with the view role did the trick.
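For reference, a sketch of the variant that ended up working, mirroring the command above but with the view role (oc policy add-role-to-user also accepts -z inciga as the usual shorthand for a service account in the target project):

oc policy add-role-to-user view system:serviceaccounts:project1:inciga -n project1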

> If you have used oadm policy add-cluster-role-to-user you should use a 
> cluster role, which view or cluster-admin are and admin is not.

also tried, no luck :(



> - we validated with oc get rc -n project1 
> --as=system:serviceaccounts:project1:inciga that the rights were sufficient 
> for queries specific to the project.

I know... and I am still trying to understand why the view role did the trick 
for me using curl or Python requests but was not needed when using oc get...

> - when you say the token provided by oc login you probably mean the token of 
> a user account, which is shorter than the token of a service account. On the 
> other hand it will expire, which is not the case for a token of a service 
> account.

Right! That is why I decided to move to a service account.
> 
> Happy that it works for you now.

me too :)

thanks all for the support.

> 
> Regards,
> 
> Frédéric
> 
> 
> On Fri, Oct 20, 2017 at 9:40 AM, Julio Saura <jsa...@hiberus.com> wrote:
> python problem solved too
> 
> all working
> 
> view role was the key :/
> 
> 
> 
> 
>> On 20 Oct 2017, at 9:27, Julio Saura <jsa...@hiberus.com> wrote:
>> 
>> problem solved
>> 
>> i do not know why but giving user role view instead of admin make the trick 
>> ..
>> 
>> :/
>> 
>> now i am able to access using curl with the token, but not using python xD i 
>> get a 401 with long token, but i i use the short one that oc login gives 
>> works xD
>> 
>> 
>> 
>> 
>>> On 20 Oct 2017, at 8:59, Frederic Giloux <fgil...@redhat.com> wrote:
>>> 
>>> Julio,
>>> 
>>> have you tried the command with higer log level as per my previous email?
>>> # oc get rc -n project1 --as=system:serviceaccounts:project1:inciga 
>>> --loglevel=8
>>> This gives you the successful rest call, which is made by the OC client to 
>>> the API server. You can then check whether it differs from your curl.
>>> 
>>> Regards,
>>> 
>>> Frédéric
>>> 
>>> On Fri, Oct 20, 2017 at 8:30 AM, Julio Saura <jsa...@hiberus.com> wrote:
>>> headers look ok in curl request
>>> 
>>> * Cipher selection: 
>>> ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
>>> * successfully set certificate verify locations:
>>> *   CAfile: /etc/ssl/certs/ca-certificates.crt
>>>   CApath: none
>>> * TLSv1.2 (OUT), TLS handshake, Client hello (1):
>>> * TLSv1.2 (IN), TLS handshake, Server hello (2):
>>> * NPN, negotiated HTTP1.1
>>> * TLSv1.2 (IN), TLS handshake, Certificate (11):
>>> * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
>>> * TLSv1.2 (IN), TLS handshake, Request CERT (13):
>>> * TLSv1.2 (IN), TLS handshake, Server finished (14):
>>> * TLSv1.2 (OUT), TLS handshake, Certificate (11):
>>> * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
>>> * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
>>> * TLSv1.2 (OUT), TLS handshake, Unknown (67):
>>> * TLSv1.2 (OUT), TLS handshake, Finished (20):
>>> * TLSv1.2 (IN), TLS change cipher, Client hello (1):
>>> * TLSv1.2 (IN), TLS handshake, Finished (20):
>>> * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
>>> * Server certificate:
>>> *  subject: CN=10.1.5.31
>>> *  start date: Sep 21 11:19:56 2017 GMT
>>> *  expire date: Sep 21 11:19:57 2019 GMT
>>> *  issuer: CN=openshift-signer@1505992768
>>> *  SSL certificate verify result: self signed certificate in certificate 
>>> chain (19), continuing anyway.
>>> > GET /api/v1/namespaces/project1/replicationcontrollers HTTP/1.1
>>> > Host: BALANCER:8443
>>> > User-Agent: curl/7.56.0
>>> > Accept: */*
>>> > Authorization: Bearer 
>>> > eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsZHAiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiaW5jaWdhLXRva2VuLTBkNDcyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImluY2lnYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Ln

Re: service account for rest api

2017-10-20 Thread Julio Saura
python problem solved too

all working

view role was the key :/




> On 20 Oct 2017, at 9:27, Julio Saura wrote:
> 
> problem solved
> 
> i do not know why but giving user role view instead of admin make the trick ..
> 
> :/
> 
> now i am able to access using curl with the token, but not using python xD i 
> get a 401 with long token, but i i use the short one that oc login gives 
> works xD
> 
> 
> 
> 
>> On 20 Oct 2017, at 8:59, Frederic Giloux <fgil...@redhat.com> wrote:
>> 
>> Julio,
>> 
>> have you tried the command with higer log level as per my previous email?
>> # oc get rc -n project1 --as=system:serviceaccounts:project1:inciga 
>> --loglevel=8
>> This gives you the successful rest call, which is made by the OC client to 
>> the API server. You can then check whether it differs from your curl.
>> 
>> Regards,
>> 
>> Frédéric
>> 
>> On Fri, Oct 20, 2017 at 8:30 AM, Julio Saura <jsa...@hiberus.com> wrote:
>> headers look ok in curl request
>> 
>> * Cipher selection: 
>> ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
>> * successfully set certificate verify locations:
>> *   CAfile: /etc/ssl/certs/ca-certificates.crt
>>   CApath: none
>> * TLSv1.2 (OUT), TLS handshake, Client hello (1):
>> * TLSv1.2 (IN), TLS handshake, Server hello (2):
>> * NPN, negotiated HTTP1.1
>> * TLSv1.2 (IN), TLS handshake, Certificate (11):
>> * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
>> * TLSv1.2 (IN), TLS handshake, Request CERT (13):
>> * TLSv1.2 (IN), TLS handshake, Server finished (14):
>> * TLSv1.2 (OUT), TLS handshake, Certificate (11):
>> * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
>> * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
>> * TLSv1.2 (OUT), TLS handshake, Unknown (67):
>> * TLSv1.2 (OUT), TLS handshake, Finished (20):
>> * TLSv1.2 (IN), TLS change cipher, Client hello (1):
>> * TLSv1.2 (IN), TLS handshake, Finished (20):
>> * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
>> * Server certificate:
>> *  subject: CN=10.1.5.31
>> *  start date: Sep 21 11:19:56 2017 GMT
>> *  expire date: Sep 21 11:19:57 2019 GMT
>> *  issuer: CN=openshift-signer@1505992768
>> *  SSL certificate verify result: self signed certificate in certificate 
>> chain (19), continuing anyway.
>> > GET /api/v1/namespaces/project1/replicationcontrollers HTTP/1.1
>> > Host: BALANCER:8443
>> > User-Agent: curl/7.56.0
>> > Accept: */*
>> > Authorization: Bearer 
>> > eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsZHAiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiaW5jaWdhLXRva2VuLTBkNDcyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImluY2lnYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjIyMjE0YTI4LWI0ZTMtMTFlNy1hZTBhLTAwNTA1NmE0M2M0MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpsZHA6aW5jaWdhIn0.VfJa8fLQQjSYySjWO3d_hp0kGqVFAnhvFQ2R6jTcLmtFwiA2NouO0QJCI2KZqvhXigAzPsksOKP7-BP_v2c-93UH3UyXW7RhkYKMOO7d1EMZVMGnT6NBKhVkw45wa20kH221ggh98wdv4MZRAoNEOvmN9qXHmsUWEnxfT8uNIjIkAt_aydocQ22hIbYXzd6w5x6zmOWIVWllgF3qGtY8ArTgRf4WxhuwhUJRy_Gm31WhtKioovk2Hpt6XnlPhnfvHhioqtizZsTepVOD0A-yjearxiDBE7yuIzRsMHo014Dq3O2T_qIZ2P2wvEWBzfpi7i1to4ep3jcb_qDM2vQ0IQ
>> > Content-Type: application/json
>> >
>> < HTTP/1.1 403 Forbidden
>> < Cache-Control: no-store
>> < Content-Type: application/json
>> < Date: Fri, 20 Oct 2017 06:28:52 GMT
>> < Content-Length: 295
>> {
>>   "kind": "Status",
>>   "apiVersion": "v1",
>>   "metadata": {},
>>   "status": "Failure",
>>   "message": "User \"system:serviceaccount:ldp:inciga\" cannot list 
>> replicationcontrollers in project \"ldp\"",
>>   "reason": "Forbidden",
>>   "details": {
>> "kind": "replicationcontrollers"
>>   },
>>   "code": 403
>> }
>> 
>> 
>> 
>> 
>>> On 19 Oct 2017, at 18:17, Frederic Giloux <fgil...@redhat.com> wrote:
>>> 
>>> Very good. The issue is with your curl. Next step run the same command with 
>>> --loglevel=8 and check the queries that are sent to the API server. 
>>> 
>>> Regards, 
>>> 
>>> Frédéric 
>>> 
>>> On 19 Oct 2017 18

Re: service account for rest api

2017-10-20 Thread Julio Saura
Problem solved.

I do not know why, but giving the user the view role instead of admin did the trick...

:/

Now I am able to access using curl with the token, but not using Python xD. I 
get a 401 with the long token, but if I use the short one that oc login gives, it works 
xD.




> On 20 Oct 2017, at 8:59, Frederic Giloux wrote:
> 
> Julio,
> 
> have you tried the command with higer log level as per my previous email?
> # oc get rc -n project1 --as=system:serviceaccounts:project1:inciga 
> --loglevel=8
> This gives you the successful rest call, which is made by the OC client to 
> the API server. You can then check whether it differs from your curl.
> 
> Regards,
> 
> Frédéric
> 
> On Fri, Oct 20, 2017 at 8:30 AM, Julio Saura <jsa...@hiberus.com> wrote:
> headers look ok in curl request
> 
> * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
> * successfully set certificate verify locations:
> *   CAfile: /etc/ssl/certs/ca-certificates.crt
>   CApath: none
> * TLSv1.2 (OUT), TLS handshake, Client hello (1):
> * TLSv1.2 (IN), TLS handshake, Server hello (2):
> * NPN, negotiated HTTP1.1
> * TLSv1.2 (IN), TLS handshake, Certificate (11):
> * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
> * TLSv1.2 (IN), TLS handshake, Request CERT (13):
> * TLSv1.2 (IN), TLS handshake, Server finished (14):
> * TLSv1.2 (OUT), TLS handshake, Certificate (11):
> * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
> * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
> * TLSv1.2 (OUT), TLS handshake, Unknown (67):
> * TLSv1.2 (OUT), TLS handshake, Finished (20):
> * TLSv1.2 (IN), TLS change cipher, Client hello (1):
> * TLSv1.2 (IN), TLS handshake, Finished (20):
> * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
> * Server certificate:
> *  subject: CN=10.1.5.31
> *  start date: Sep 21 11:19:56 2017 GMT
> *  expire date: Sep 21 11:19:57 2019 GMT
> *  issuer: CN=openshift-signer@1505992768
> *  SSL certificate verify result: self signed certificate in certificate 
> chain (19), continuing anyway.
> > GET /api/v1/namespaces/project1/replicationcontrollers HTTP/1.1
> > Host: BALANCER:8443
> > User-Agent: curl/7.56.0
> > Accept: */*
> > Authorization: Bearer 
> > eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsZHAiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiaW5jaWdhLXRva2VuLTBkNDcyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImluY2lnYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjIyMjE0YTI4LWI0ZTMtMTFlNy1hZTBhLTAwNTA1NmE0M2M0MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpsZHA6aW5jaWdhIn0.VfJa8fLQQjSYySjWO3d_hp0kGqVFAnhvFQ2R6jTcLmtFwiA2NouO0QJCI2KZqvhXigAzPsksOKP7-BP_v2c-93UH3UyXW7RhkYKMOO7d1EMZVMGnT6NBKhVkw45wa20kH221ggh98wdv4MZRAoNEOvmN9qXHmsUWEnxfT8uNIjIkAt_aydocQ22hIbYXzd6w5x6zmOWIVWllgF3qGtY8ArTgRf4WxhuwhUJRy_Gm31WhtKioovk2Hpt6XnlPhnfvHhioqtizZsTepVOD0A-yjearxiDBE7yuIzRsMHo014Dq3O2T_qIZ2P2wvEWBzfpi7i1to4ep3jcb_qDM2vQ0IQ
> > Content-Type: application/json
> >
> < HTTP/1.1 403 Forbidden
> < Cache-Control: no-store
> < Content-Type: application/json
> < Date: Fri, 20 Oct 2017 06:28:52 GMT
> < Content-Length: 295
> {
>   "kind": "Status",
>   "apiVersion": "v1",
>   "metadata": {},
>   "status": "Failure",
>   "message": "User \"system:serviceaccount:ldp:inciga\" cannot list 
> replicationcontrollers in project \"ldp\"",
>   "reason": "Forbidden",
>   "details": {
> "kind": "replicationcontrollers"
>   },
>   "code": 403
> }
> 
> 
> 
> 
>> On 19 Oct 2017, at 18:17, Frederic Giloux <fgil...@redhat.com> wrote:
>> 
>> Very good. The issue is with your curl. Next step run the same command with 
>> --loglevel=8 and check the queries that are sent to the API server. 
>> 
>> Regards, 
>> 
>> Frédéric 
>> 
>> On 19 Oct 2017 18:11, "Julio Saura" <jsa...@hiberus.com> wrote:
>> umm that works …
>> 
>> weird
>> 

Re: service account for rest api

2017-10-19 Thread Julio Saura
headers look ok in curl request

* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* NPN, negotiated HTTP1.1
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Unknown (67):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* Server certificate:
*  subject: CN=10.1.5.31
*  start date: Sep 21 11:19:56 2017 GMT
*  expire date: Sep 21 11:19:57 2019 GMT
*  issuer: CN=openshift-signer@1505992768
*  SSL certificate verify result: self signed certificate in certificate chain 
(19), continuing anyway.
> GET /api/v1/namespaces/project1/replicationcontrollers HTTP/1.1
> Host: BALANCER:8443
> User-Agent: curl/7.56.0
> Accept: */*
> Authorization: Bearer 
> eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsZHAiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiaW5jaWdhLXRva2VuLTBkNDcyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImluY2lnYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjIyMjE0YTI4LWI0ZTMtMTFlNy1hZTBhLTAwNTA1NmE0M2M0MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpsZHA6aW5jaWdhIn0.VfJa8fLQQjSYySjWO3d_hp0kGqVFAnhvFQ2R6jTcLmtFwiA2NouO0QJCI2KZqvhXigAzPsksOKP7-BP_v2c-93UH3UyXW7RhkYKMOO7d1EMZVMGnT6NBKhVkw45wa20kH221ggh98wdv4MZRAoNEOvmN9qXHmsUWEnxfT8uNIjIkAt_aydocQ22hIbYXzd6w5x6zmOWIVWllgF3qGtY8ArTgRf4WxhuwhUJRy_Gm31WhtKioovk2Hpt6XnlPhnfvHhioqtizZsTepVOD0A-yjearxiDBE7yuIzRsMHo014Dq3O2T_qIZ2P2wvEWBzfpi7i1to4ep3jcb_qDM2vQ0IQ
> Content-Type: application/json
>
< HTTP/1.1 403 Forbidden
< Cache-Control: no-store
< Content-Type: application/json
< Date: Fri, 20 Oct 2017 06:28:52 GMT
< Content-Length: 295
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "User \"system:serviceaccount:ldp:inciga\" cannot list 
replicationcontrollers in project \"ldp\"",
  "reason": "Forbidden",
  "details": {
"kind": "replicationcontrollers"
  },
  "code": 403
}




> On 19 Oct 2017, at 18:17, Frederic Giloux wrote:
> 
> Very good. The issue is with your curl. Next step run the same command with 
> --loglevel=8 and check the queries that are sent to the API server. 
> 
> Regards, 
> 
> Frédéric 
> 
> On 19 Oct 2017 18:11, "Julio Saura" <jsa...@hiberus.com> wrote:
> umm that works …
> 
> weird
> 
> 
>> On 19 Oct 2017, at 18:01, Frederic Giloux <fgil...@redhat.com> wrote:
>> 
>> oc get rc -n project1 --as=system:serviceaccounts:project1:inciga
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: service account for rest api

2017-10-19 Thread Julio Saura
Compiled the latest stable curl version.

Same problem:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "User \"system:serviceaccount:project1:inciga\" cannot list 
replicationcontrollers in project \"project1\"",
  "reason": "Forbidden",
  "details": {
"kind": "replicationcontrollers"
  },
  "code": 403
}

curl-7.56.0

this is weird

> On 19 Oct 2017, at 19:23, Hiberus wrote:
> 
> Yikes !!
> 
> I will check tomorrow 
> 
> Ty!
> 
> On 19 Oct 2017, at 18:16, Cesar Wong <cew...@redhat.com> wrote:
> 
>> 
>> Julio, 
>> 
>> Depending on your version of curl, you may be hitting this:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1260178 
>> <https://bugzilla.redhat.com/show_bug.cgi?id=1260178>
>> 
>> On Thu, Oct 19, 2017 at 12:11 PM, Julio Saura <jsa...@hiberus.com> wrote:
>> umm that works …
>> 
>> weird
>> 
>> 
>>> On 19 Oct 2017, at 18:01, Frederic Giloux <fgil...@redhat.com> wrote:
>>> 
>>> oc get rc -n project1 --as=system:serviceaccounts:project1:inciga
>> 
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: service account for rest api

2017-10-19 Thread Julio Saura
tried

no luck :(



> On 19 Oct 2017, at 21:40, Luke Meyer wrote:
> 
> oc policy add-role-to-user admin

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: service account for rest api

2017-10-19 Thread Julio Saura
umm that works …

weird


> On 19 Oct 2017, at 18:01, Frederic Giloux wrote:
> 
> oc get rc -n project1 --as=system:serviceaccounts:project1:inciga

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: service account for rest api

2017-10-19 Thread Julio Saura
typo yes sorry

> curl -k -H "Authorization: Bearer $(oc sa get-token inciga -n project1)"  -H 
> "Content-Type: application/json" 
> https://MASTER_BALANCER_IP:8443/api/v1/namespaces/project1/replicationcontrollers
>  --insecure


It is not really project1; I changed the project name when I wrote the email, sorry.




> On 19 Oct 2017, at 17:49, Frederic Giloux wrote:
> 
> Hi Julio
> 
> I don't know whether that's a typo when you wrote the email but you get the 
> sa token from project and request rc from project1.
> 
> Regards, 
> 
> Frédéric 
> 
> 
> On 19 Oct 2017 17:41, "Julio Saura" <jsa...@hiberus.com> wrote:
> typed same command than you
> 
> still not working
> 
> i have 3 masters balanced .. maybe is that
> 
> i am doing the curl against the balancer..
> 
> curl -k -H "Authorization: Bearer $(oc sa get-token inciga -n project)"  -H 
> "Content-Type: application/json" 
> https://MASTER_BALANCER_IP:8443/api/v1/namespaces/project1/replicationcontrollers
>  
> <https://master_balancer_ip:8443/api/v1/namespaces/project1/replicationcontrollers>
>  --insecure
> {
>   "kind": "Status",
>   "apiVersion": "v1",
>   "metadata": {},
>   "status": "Failure",
>   "message": "User \"system:serviceaccount:project1:inciga\" cannot list 
> replicationcontrollers in project \"project1\"",
>   "reason": "Forbidden",
>   "details": {
> "kind": "replicationcontrollers"
>   },
>   "code": 403
> }
> 
> 
> 
>> On 19 Oct 2017, at 17:29, Frederic Giloux <fgil...@redhat.com> wrote:
>> 
>> curl -k -H "Authorization: Bearer $(oc sa get-token inciga -n project1)"  -H 
>> "Content-Type: application/json" 
>> https://192.168.42.199:8443/api/v1/namespaces/project1/replicationcontrollers
>>  
>> <https://192.168.42.199:8443/api/v1/namespaces/project1/replicationcontrollers>
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: service account for rest api

2017-10-19 Thread Julio Saura
Typed the same command as you.

Still not working.

I have 3 masters balanced... maybe that is it.

I am doing the curl against the balancer:

curl -k -H "Authorization: Bearer $(oc sa get-token inciga -n project)"  -H 
"Content-Type: application/json" 
https://MASTER_BALANCER_IP:8443/api/v1/namespaces/project1/replicationcontrollers
 --insecure
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "User \"system:serviceaccount:project1:inciga\" cannot list 
replicationcontrollers in project \"project1\"",
  "reason": "Forbidden",
  "details": {
"kind": "replicationcontrollers"
  },
  "code": 403
}



> On 19 Oct 2017, at 17:29, Frederic Giloux wrote:
> 
> curl -k -H "Authorization: Bearer $(oc sa get-token inciga -n project1)"  -H 
> "Content-Type: application/json" 
> https://192.168.42.199:8443/api/v1/namespaces/project1/replicationcontrollers 
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: service account for rest api

2017-10-19 Thread Julio Saura
Yes, of course:

oc create serviceaccount icinga -n project1

oadm policy add-cluster-role-to-user admin 
system:serviceaccounts:project1:icinga

oadm policy reconcile-cluster-roles --confirm

and then dump the token

oc serviceaccounts get-token icing


ty frederic!

I do log in with curl, but I get:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "User \"system:serviceaccount:project1:icinga\" cannot list 
replicationcontrollers in project \"project1\"",
  "reason": "Forbidden",
  "details": {
"kind": "replicationcontrollers"
  },
  "code": 403
}
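(A quick way to double-check which roles the service account actually ended up with in the project is to list the role bindings in that namespace; a sketch:

oc get rolebindings -n project1 )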





> On 19 Oct 2017, at 16:55, Frederic Giloux wrote:
> 
> Hi Julio, 
> 
> Could you copy the commands you have used?
> 
> Regards, 
> 
> Frédéric 
> 
> On 19 Oct 2017 11:43, "Julio Saura" <jsa...@hiberus.com> wrote:
> Hello
> 
> i am trying to create a sa for accessing rest api with token ..
> 
> i have followed the doc steps
> 
> creating the account, applying admin role to that account and getting the 
> token
> 
> trying to access replicacioncontroller info with bearer in curl, i can auth 
> into but i get i have no permission to list rc on the project
> 
> i also did a reconciliate role on cluster
> 
> i also logged in with oc login passing token as parameter, i log in but it 
> says i have no projects ..
> 
> what else i am missing?
> 
> ty
> 
> 
> 
> ___
> users mailing list
> users@lists.openshift.redhat.com <mailto:users@lists.openshift.redhat.com>
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
> <http://lists.openshift.redhat.com/openshiftmm/listinfo/users>

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


service account for rest api

2017-10-19 Thread Julio Saura
Hello

I am trying to create a service account for accessing the REST API with a token.

I have followed the doc steps: creating the account, applying the admin role to that account, and getting the token.

Trying to access replicationcontroller info with the bearer token in curl, I can authenticate, 
but I get that I have no permission to list RCs in the project.

I also did a cluster role reconcile.

I also logged in with oc login passing the token as a parameter; I log in, but it says 
I have no projects.

What else am I missing?

ty



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: origin 1.2 bad certificate

2017-06-13 Thread Julio Saura
More clues:

The etcd nodes have two IPs, public and private.

For some reason OpenShift is creating the certificates using the public IP 
instead of the private one.

So connecting to etcd gives me an error saying the certificate is generated for 
this IP and not for that IP.

So it fails for that reason after regenerating them.
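One way to see which IPs actually ended up in the etcd server certificate (a sketch, assuming the default cert location on the etcd host):

openssl x509 -in /etc/etcd/server.crt -noout -text | grep -A1 'Subject Alternative Name'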

any clue ?

best regards



> On 13 Jun 2017, at 13:53, Julio Saura wrote:
> 
> more info
> 
> i managed to connect with curl to the etcd server and queried about 
> controller keys
> 
> {"action":"get","node":{"key":"/openshift.io/leases/controllers 
> <http://openshift.io/leases/controllers>","value":"master-lyy7bxfg","expiration":"2017-05-31T10:26:28.833756573Z","ttl":-1128220,"modifiedIndex":20547532,"createdIndex":18120566}
> 
> 
> looks that what is expired is the key on the etcd BBDD..
> 
> how can i solve this?
> 
> best regards
> 
> 
> 
>> On 13 Jun 2017, at 13:46, Julio Saura <jsa...@hiberus.com> wrote:
>> 
>> sorry about wget
>> 
>> connecting to etcd nodes using openssl and passing client certs looks good
>> 
>> openssl s_client -cert master.etcd-client.crt  -key master.etcd-client.key 
>> -connect etcd-node1:2379 -debug
>> 
>> connects without problem
>> 
>> but api service does not
>> 
>> 
>> Jun 13 15:25:04 openshift-master01 origin-master-controllers: E0613 
>> 15:25:04.9978612391 leaderlease.go:69] unable to check lease 
>> openshift.io/leases/controllers: <http://openshift.io/leases/controllers:> 
>> 501: All the given peers are not reachable (failed to propose on members 
>> [https://etcd-node02l:2379 https:/etcd-node01:2379 
>> <https://etcd-node02l:2379 https:/etcd-node01:2379>] twice [last error: Put 
>> https://etcd-node02:2379/v2/keys/openshift.io/leases/controllers?prevExist=false:
>>  
>> <https://etcd-node02:2379/v2/keys/openshift.io/leases/controllers?prevExist=false:>
>>  remote error: bad certificate
>> 
>> 
>> 
>>> On 13 Jun 2017, at 13:28, Julio Saura <jsa...@hiberus.com> wrote:
>>> 
>>> Hello
>>> 
>>> i have a problem in a 1.2.0 cluster with etcd ca and certificates, mainly 
>>> they did expire
>>> 
>>> i followed the doc regarding this and after update my openshift-ansible i 
>>> got the needed playbook
>>> 
>>> after running em i see etcd certs and ca are updated on my nodes, and 
>>> dumping them with openssl looks good.
>>> 
>>> ansible-playbook -v -i /etc/ansible/hosts 
>>> ./playbooks/byo/openshift-cluster/redeploy-certificates.yml
>>> 
>>> i see the ca and certs have been updates nicely on my etcd nodes, they do 
>>> start but i still get bad certificate when api/master tries to connect to 
>>> ectd
>>> 
>>> i did check connecting with wget for example but it says bad certificate
>>> 
>>> OpenSSL: error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad 
>>> certificate
>>> 
>>> any clue? my cluster is down right now :/
>>> 
>>> best regards
>>> 
>> 
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: origin 1.2 bad certificate

2017-06-13 Thread Julio Saura
more info

I managed to connect with curl to the etcd server and queried the controller 
keys.
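(For reference, a sketch of such a query, assuming the master's etcd client certificate files under /etc/origin/master:

curl --cacert /etc/origin/master/master.etcd-ca.crt --cert /etc/origin/master/master.etcd-client.crt --key /etc/origin/master/master.etcd-client.key https://etcd-node01:2379/v2/keys/openshift.io/leases/controllers )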

{"action":"get","node":{"key":"/openshift.io/leases/controllers","value":"master-lyy7bxfg","expiration":"2017-05-31T10:26:28.833756573Z","ttl":-1128220,"modifiedIndex":20547532,"createdIndex":18120566}


It looks like what is expired is the key in the etcd database...

How can I solve this?

best regards



> On 13 Jun 2017, at 13:46, Julio Saura wrote:
> 
> sorry about wget
> 
> connecting to etcd nodes using openssl and passing client certs looks good
> 
> openssl s_client -cert master.etcd-client.crt  -key master.etcd-client.key 
> -connect etcd-node1:2379 -debug
> 
> connects without problem
> 
> but api service does not
> 
> 
> Jun 13 15:25:04 openshift-master01 origin-master-controllers: E0613 
> 15:25:04.9978612391 leaderlease.go:69] unable to check lease 
> openshift.io/leases/controllers: <http://openshift.io/leases/controllers:> 
> 501: All the given peers are not reachable (failed to propose on members 
> [https://etcd-node02l:2379 https:/etcd-node01:2379 <https://etcd-node02l:2379 
> https:/etcd-node01:2379>] twice [last error: Put 
> https://etcd-node02:2379/v2/keys/openshift.io/leases/controllers?prevExist=false:
>  
> <https://etcd-node02:2379/v2/keys/openshift.io/leases/controllers?prevExist=false:>
>  remote error: bad certificate
> 
> 
> 
>> On 13 Jun 2017, at 13:28, Julio Saura <jsa...@hiberus.com> wrote:
>> 
>> Hello
>> 
>> i have a problem in a 1.2.0 cluster with etcd ca and certificates, mainly 
>> they did expire
>> 
>> i followed the doc regarding this and after update my openshift-ansible i 
>> got the needed playbook
>> 
>> after running em i see etcd certs and ca are updated on my nodes, and 
>> dumping them with openssl looks good.
>> 
>> ansible-playbook -v -i /etc/ansible/hosts 
>> ./playbooks/byo/openshift-cluster/redeploy-certificates.yml
>> 
>> i see the ca and certs have been updates nicely on my etcd nodes, they do 
>> start but i still get bad certificate when api/master tries to connect to 
>> ectd
>> 
>> i did check connecting with wget for example but it says bad certificate
>> 
>> OpenSSL: error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad 
>> certificate
>> 
>> any clue? my cluster is down right now :/
>> 
>> best regards
>> 
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: origin 1.2 bad certificate

2017-06-13 Thread Julio Saura
Sorry about the wget.

Connecting to the etcd nodes using openssl and passing the client certs looks good:

openssl s_client -cert master.etcd-client.crt  -key master.etcd-client.key 
-connect etcd-node1:2379 -debug

It connects without problem.

But the API service does not:


Jun 13 15:25:04 openshift-master01 origin-master-controllers: E0613 
15:25:04.9978612391 leaderlease.go:69] unable to check lease 
openshift.io/leases/controllers: 501: All the given peers are not reachable 
(failed to propose on members [https://etcd-node02l:2379 
https:/etcd-node01:2379] twice [last error: Put 
https://etcd-node02:2379/v2/keys/openshift.io/leases/controllers?prevExist=false:
 remote error: bad certificate
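(One more check that may help: verify that the master's etcd client cert still chains to the CA the etcd servers now use; a sketch, assuming the default file names under /etc/origin/master:

openssl verify -CAfile /etc/origin/master/master.etcd-ca.crt /etc/origin/master/master.etcd-client.crt )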



> On 13 Jun 2017, at 13:28, Julio Saura wrote:
> 
> Hello
> 
> i have a problem in a 1.2.0 cluster with etcd ca and certificates, mainly 
> they did expire
> 
> i followed the doc regarding this and after update my openshift-ansible i got 
> the needed playbook
> 
> after running em i see etcd certs and ca are updated on my nodes, and dumping 
> them with openssl looks good.
> 
> ansible-playbook -v -i /etc/ansible/hosts 
> ./playbooks/byo/openshift-cluster/redeploy-certificates.yml
> 
> i see the ca and certs have been updates nicely on my etcd nodes, they do 
> start but i still get bad certificate when api/master tries to connect to ectd
> 
> i did check connecting with wget for example but it says bad certificate
> 
> OpenSSL: error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad 
> certificate
> 
> any clue? my cluster is down right now :/
> 
> best regards
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


origin 1.2 bad certificate

2017-06-13 Thread Julio Saura
Hello

I have a problem in a 1.2.0 cluster with the etcd CA and certificates; mainly, they 
expired.

I followed the doc regarding this, and after updating my openshift-ansible I got 
the needed playbook.

After running it I see the etcd certs and CA are updated on my nodes, and dumping 
them with openssl looks good.

ansible-playbook -v -i /etc/ansible/hosts 
./playbooks/byo/openshift-cluster/redeploy-certificates.yml

I see the CA and certs have been updated nicely on my etcd nodes, and they do start, 
but I still get a bad certificate error when the API/master tries to connect to etcd.

I did check connecting with wget, for example, but it says bad certificate:

OpenSSL: error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad certificate

any clue? my cluster is down right now :/

best regards


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: upgrading haproxy version on origin 1.2

2017-03-03 Thread Julio Saura
thank you!!!



> On 2 Mar 2017, at 22:56, Aleksandar Lazic wrote:
> 
> Hi.
> 
> I have written a blog entry how youc an build a openshift router with newer 
> haproxy.
> 
> https://me2digital.online/2017/03/02/how-to-use-haproxy-1-7-in-openshift-router/
> 
> 
> Best regards
> aleks
> 
> 
>  On Thu, 09 Feb 2017 10:33:45 +0100 Julio Saura  
> wrote  
>> On 9 Feb 2017, at 10:32, Aleksandar Lazic wrote:
>> Hi.
>> 
>>  On Thu, 09 Feb 2017 10:21:28 +0100 Julio Saura  
>> wrote  
>> nah no luck
>> just pulled latest image and run it on a local docker engine
>> [I have no name!@f9de398a0238 conf]$ /usr/sbin/haproxy -version
>> HA-Proxy version 1.5.18 2016/05/10
>> Copyright 2000-2016 Willy Tarreau 
>> 
>> :(
>> 
>> As you have seen it's not that easy.
>> You will need to build your own image!
>> 
>> yeah.. it would have been so lovely! :(
>> :P
>> 
>> 
>> BR
>> Aleks
>> 
>> On 9 Feb 2017, at 10:19, Julio Saura wrote:
>> hello
>> thanks for the answer
>> i was thinking it editing de DC and point the image to the latest in the 
>> router images repo . .but i am concerned about dependencies or maybe 
>> something i am missing right know that could break my environment..
>> the latest version in repo es v1.4.1 but i don’t know that haproxy is inside 
>> that image …
>> could it be so easy as editing de dc an pointing to the latest repo image?
>> Thanks!
>> 
>> 
>> On 9 Feb 2017, at 10:14, Aleksandar Lazic wrote:
>> Hi.
>> 
>> i think fastest way is to follow this steps.
>> 
>> https://docs.opens

Re: haproxy logs

2017-02-23 Thread Julio Saura
thanks mate

But iptables is not the problem.

I can reach the syslog server from the nodes:

[root@openshift-balancer01 ~]# echo '<14> user test message from router pod' | 
nc -w 2 -u SERVER 514


root@syslog:/var/log# tcpdump  -i eth0 udp port 514

12:51:43.927424 IP 10.1.5.40.40857 > syslog.SERVER.syslog: SYSLOG user.info, 
length: 39


There is no traffic from the haproxy pods to the syslog server...

It seems haproxy is not sending anything to the syslog server.

If I describe the router pods I see the variables are right:

  ROUTER_SYSLOG_ADDRESS:10.1.5.12
  ROUTER_LOG_LEVEL: debug

but nothing is sent ..



> On 22 Feb 2017, at 1:16, Ram Ranganathan wrote:
> 
> As Phil mentioned, you can check if the iptables rule is blocking it. 
> 
> Simple test would be to rsh into the router pod and use netcat to send a 
> message.
> $ oc rsh <router-pod>
> pod> echo '<14> user test message from router pod' | nc -w 2 -u <syslog-server> 514
> 
> And maybe try from the host (openshift-node) or another node as well. 
> $ echo '<14> user test message from the node' | nc -w 2 -u <syslog-server> 514
> 
> And if you need to open the port you could use something like:
> $ sudo iptables -I INPUT -p udp -s <source-ip> --dport 514 -j ACCEPT
> $ sudo service iptables save
> $ sudo service iptables restart
> 
> HTH
> 
> On Mon, Feb 20, 2017 at 3:16 AM, Julio Saura <jsa...@hiberus.com> wrote:
> hello
> 
> any clue please?
> 
> thanks
> 
> 
>> On 17 Feb 2017, at 10:04, Julio Saura <jsa...@hiberus.com> wrote:
>> 
>> Hello
>> 
>> i need to enable haproxy access logs on my openshift routers..
>> 
>> i followed the guide and enabled a syslog server on my net ..
>> 
>> after adding env variables on my router dc for poiting to my syslog server i 
>> don’t see any packet sent to my syslog server ( tcpdump on my syslog servers 
>> shows no traffic on syslog port tcp or udp ) y put haproxy log level to 
>> debug for being sure it generates logs.
>> 
>> if i describe my router pods y see env variables are passed and filled with 
>> the right values ,  and the routers have been redeployed by the router DC ..
>> 
>> anything else i am missing?
>> 
>> thanks
>> 
>> Best regards
>> 
>> 
> 
> 
> ___
> users mailing list
> users@lists.openshift.redhat.com <mailto:users@lists.openshift.redhat.com>
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
> <http://lists.openshift.redhat.com/openshiftmm/listinfo/users>
> 
> 
> 
> 
> -- 
> Ram//

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: haproxy logs

2017-02-20 Thread Julio Saura
hello

any clue please?

thanks

> On 17 Feb 2017, at 10:04, Julio Saura wrote:
> 
> Hello
> 
> i need to enable haproxy access logs on my openshift routers..
> 
> i followed the guide and enabled a syslog server on my net ..
> 
> after adding env variables on my router dc for poiting to my syslog server i 
> don’t see any packet sent to my syslog server ( tcpdump on my syslog servers 
> shows no traffic on syslog port tcp or udp ) y put haproxy log level to debug 
> for being sure it generates logs.
> 
> if i describe my router pods y see env variables are passed and filled with 
> the right values ,  and the routers have been redeployed by the router DC ..
> 
> anything else i am missing?
> 
> thanks
> 
> Best regards
> 
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


haproxy logs

2017-02-17 Thread Julio Saura
Hello

I need to enable haproxy access logs on my OpenShift routers.

I followed the guide and enabled a syslog server on my network.

After adding env variables on my router dc pointing to my syslog server, I don't see 
any packet sent to my syslog server (tcpdump on my syslog server shows no traffic on 
the syslog port, TCP or UDP). I put the haproxy log level to debug to be sure it 
generates logs.
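(For reference, a sketch of how those variables are typically set, assuming the router dc is named "router" in the default project; the syslog address below is a placeholder:

oc env dc/router ROUTER_SYSLOG_ADDRESS=<syslog-server-ip> ROUTER_LOG_LEVEL=debug -n default )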

If I describe my router pods I see the env variables are passed and filled with the 
right values, and the routers have been redeployed by the router DC.

Anything else I am missing?

thanks

Best regards


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: upgrading haproxy version on origin 1.2

2017-02-09 Thread Julio Saura

> El 9 feb 2017, a las 10:32, Aleksandar Lazic  escribió:
> 
> Hi.
> 
>  On Thu, 09 Feb 2017 10:21:28 +0100 Julio Saura  
> wrote  
>> nah no luck
>> just pulled latest image and run it on a local docker engine
>> [I have no name!@f9de398a0238 conf]$ /usr/sbin/haproxy -version
>> HA-Proxy version 1.5.18 2016/05/10
>> Copyright 2000-2016 Willy Tarreau 
>> 
>> :(
> 
> As you have seen it's not that easy.
> You will need to build your own image!

yeah.. it would have been so lovely! :(

:P
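
for reference, a rough, untested sketch of what such a custom build could look like, 
using the existing router image as the base ( the haproxy version, download url, build 
flags and package names below are assumptions, nothing from this thread ):

FROM openshift/origin-haproxy-router:v1.2.1

USER root
# build tools and libs needed to compile haproxy from source ( package names assumed )
RUN yum install -y gcc make pcre-devel openssl-devel zlib-devel && yum clean all
# fetch and build a newer haproxy, then replace the binary shipped in the image
RUN curl -sSL http://www.haproxy.org/download/1.6/src/haproxy-1.6.14.tar.gz | tar xz -C /tmp \
 && make -C /tmp/haproxy-1.6.14 TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 \
 && cp /tmp/haproxy-1.6.14/haproxy /usr/sbin/haproxy \
 && rm -rf /tmp/haproxy-1.6.14
# switch back to a non-root user here if the base image does not run as root

the router dc then has to point at the resulting image instead of the stock one.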


> 
> BR
> Aleks
> 
>> El 9 feb 2017, a las 10:19, Julio Saura  escribió:
>> hello
>> thanks for the answer
>> i was thinking it editing de DC and point the image to the latest in the 
>> router images repo . .but i am concerned about dependencies or maybe 
>> something i am missing right know that could break my environment..
>> the latest version in repo es v1.4.1 but i don’t know that haproxy is inside 
>> that image …
>> could it be so easy as editing de dc an pointing to the latest repo image?
>> Thanks!
>> 
>> 
>> El 9 feb 2017, a las 10:14, Aleksandar Lazic  escribió:
>> Hi.
>> 
>> i think fastest way is to follow this steps.
>> 
>> https://docs.openshift.org/1.2/install_config/install/deploy_router.html#rebuilding-your-router
>> 
>> I think you will need to build the haproxy from the sources but you can use 
>> the router images as Base docker image.
>> 
>> You can use my haproxy docker files to see which packages you will need to 
>> add.
>> 
>> https://gitlab.com/aleks001/haproxy
>> 
>> Hth
>> 
>> --- 
>> Aleksandar Lazic - ME2Digital e. U.
>> https://me2digital.online/
>> 
>> 
>> ---- On Thu, 09 Feb 2017 10:04:44 +0100 Julio Saura  
>> wrote  
>> ok 
>> auto answer, me retarded
>> i saw that editing de router DC i am able to point to other image
>> image: openshift/origin-haproxy-router:v1.2.0-rc1
>> now my question is, is safe to choose a newer image for my router DC without 
>> full upgrading open shift? for example latest ?
>> Best regards
>> 

Re: upgrading haproxy version on origin 1.2

2017-02-09 Thread Julio Saura
nah no luck

just pulled the latest image and ran it on a local docker engine

[I have no name!@f9de398a0238 conf]$ /usr/sbin/haproxy -version
HA-Proxy version 1.5.18 2016/05/10
Copyright 2000-2016 Willy Tarreau 

:(


Julio Saura Alejandre
Responsable Servicios Gestionados
hiberus TRAVEL
Tel.: + 34 902 87 73 92 Ext. 659
Parque Empresarial PLAZA
Edificio EXPOINNOVACIÓN
C/. Bari 25 Duplicado, Escalera 1, Planta 2ª. 50197 Zaragoza
www.hiberus.com <http://www.hiberus.com/>
Crecemos contigo


> El 9 feb 2017, a las 10:19, Julio Saura  escribió:
> 
> hello
> 
> thanks for the answer
> 
> i was thinking it editing de DC and point the image to the latest in the 
> router images repo . .but i am concerned about dependencies or maybe 
> something i am missing right know that could break my environment..
> 
> the latest version in repo es v1.4.1 but i don’t know that haproxy is inside 
> that image …
> 
> could it be so easy as editing de dc an pointing to the latest repo image?
> 
> Thanks!
> 
> 
> 
>> El 9 feb 2017, a las 10:14, Aleksandar Lazic > <mailto:al...@me2digital.eu>> escribió:
>> 
>> Hi.
>> 
>> i think fastest way is to follow this steps.
>> 
>> https://docs.openshift.org/1.2/install_config/install/deploy_router.html#rebuilding-your-router
>>  
>> <https://docs.openshift.org/1.2/install_config/install/deploy_router.html#rebuilding-your-router>
>> 
>> I think you will need to build the haproxy from the sources but you can use 
>> the router images as Base docker image.
>> 
>> You can use my haproxy docker files to see which packages you will need to 
>> add.
>> 
>> https://gitlab.com/aleks001/haproxy
>> 
>> Hth
>> 
>> --- 
>> Aleksandar Lazic - ME2Digital e. U.
>> https://me2digital.online/
>> 
>> 
>>  On Thu, 09 Feb 2017 10:04:44 +0100 Julio Saura  
>> wrote  
>>> ok 
>>> auto answer, me retarded
>>> i saw that editing de router DC i am able to point to other image
>>> image: openshift/origin-haproxy-router:v1.2.0-rc1
>>> now my question is, is safe to choose a newer image for my router DC 
>>> without full upgrading open shift? for example latest ?
>>> Best regards
>>> 
>>> El 9 feb 2017, a las 10:00, Julio Saura  escribió:
>>> Hello
>>> 
>>> 
>>> is there any way yo just update haproxy version for routers without a full 
>>> openshift upgrade?
>>> 
>>> currently origin 1.2 is using openshift/origin-haproxy-router:v1.2.1 image 
>>> .. but we are facing a weird bug due to haproxy version ( 1.5.8 ) and i 
>>> would love to upgrade it to al least 1.6.x where it seems the bug is solved.
>>> 
>>> is it possible? can i instruct open shift to just download a newer 
>>> origin-haproxy image?
>>> 
>>> thanks
>>> 
>>> Best regards
>>> 
>>> 
>>> ___ 
>>> users mailing list 
>>> users@lists.openshift.redhat.com 
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
>>> 
>> 
>> 
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: upgrading haproxy version on origin 1.2

2017-02-09 Thread Julio Saura
hello

thanks for the answer

i was thinking of editing the DC and pointing the image to the latest one in the router 
images repo, but i am concerned about dependencies or maybe something i am 
missing right now that could break my environment..

the latest version in the repo is v1.4.1 but i don’t know which haproxy is inside 
that image …

could it be as easy as editing the dc and pointing it to the latest repo image?

Thanks!



> El 9 feb 2017, a las 10:14, Aleksandar Lazic  escribió:
> 
> Hi.
> 
> i think fastest way is to follow this steps.
> 
> https://docs.openshift.org/1.2/install_config/install/deploy_router.html#rebuilding-your-router
> 
> I think you will need to build the haproxy from the sources but you can use 
> the router images as Base docker image.
> 
> You can use my haproxy docker files to see which packages you will need to 
> add.
> 
> https://gitlab.com/aleks001/haproxy
> 
> Hth
> 
> --- 
> Aleksandar Lazic - ME2Digital e. U.
> https://me2digital.online/
> 
> 
>  On Thu, 09 Feb 2017 10:04:44 +0100 Julio Saura  
> wrote  
>> ok 
>> auto answer, me retarded
>> i saw that editing de router DC i am able to point to other image
>> image: openshift/origin-haproxy-router:v1.2.0-rc1
>> now my question is, is safe to choose a newer image for my router DC without 
>> full upgrading open shift? for example latest ?
>> Best regards
>> 
>> El 9 feb 2017, a las 10:00, Julio Saura  escribió:
>> Hello
>> 
>> 
>> is there any way yo just update haproxy version for routers without a full 
>> openshift upgrade?
>> 
>> currently origin 1.2 is using openshift/origin-haproxy-router:v1.2.1 image 
>> .. but we are facing a weird bug due to haproxy version ( 1.5.8 ) and i 
>> would love to upgrade it to al least 1.6.x where it seems the bug is solved.
>> 
>> is it possible? can i instruct open shift to just download a newer 
>> origin-haproxy image?
>> 
>> thanks
>> 
>> Best regards
>> 
>> 
>> ___ 
>> users mailing list 
>> users@lists.openshift.redhat.com 
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
>> 
> 
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: upgrading haproxy version on origin 1.2

2017-02-09 Thread Julio Saura
ok 

answering my own question, my bad

i saw that by editing the router DC i am able to point it to another image

image: openshift/origin-haproxy-router:v1.2.0-rc1

now my question is: is it safe to choose a newer image for my router DC without 
fully upgrading open shift? for example latest ?
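
for reference, swapping the image on an existing router dc can be done like this ( a 
sketch, assuming the dc is called router and lives in the default project ):

oc patch dc/router -n default -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"router","image":"openshift/origin-haproxy-router:v1.4.1"}]}}}}'
# or simply: oc edit dc/router -n default   and change the image: line by hand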

Best regards


Julio Saura Alejandre
Responsable Servicios Gestionados
hiberus TRAVEL
Tel.: + 34 902 87 73 92 Ext. 659
Parque Empresarial PLAZA
Edificio EXPOINNOVACIÓN
C/. Bari 25 Duplicado, Escalera 1, Planta 2ª. 50197 Zaragoza
www.hiberus.com <http://www.hiberus.com/>
Crecemos contigo


> El 9 feb 2017, a las 10:00, Julio Saura  escribió:
> 
> Hello
> 
> 
> is there any way yo just update haproxy version for routers without a full 
> openshift upgrade?
> 
> currently origin 1.2 is using openshift/origin-haproxy-router:v1.2.1 image .. 
> but we are facing a weird bug due to haproxy version ( 1.5.8 ) and i would 
> love to upgrade it to al least 1.6.x where it seems the bug is solved.
> 
> is it possible? can i instruct open shift to just download a newer 
> origin-haproxy image?
> 
> thanks
> 
> Best regards
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


upgrading haproxy version on origin 1.2

2017-02-09 Thread Julio Saura
Hello


is there any way to just update the haproxy version for the routers without a full 
openshift upgrade?

currently origin 1.2 is using the openshift/origin-haproxy-router:v1.2.1 image .. 
but we are facing a weird bug due to the haproxy version ( 1.5.8 ) and i would love 
to upgrade it to at least 1.6.x where it seems the bug is solved.

is it possible? can i instruct open shift to just download a newer 
origin-haproxy image?

thanks

Best regards


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: pod disk size

2016-11-17 Thread Julio Saura
yes i did that

i removed my custom image and pulled it back from scratch

i think i will use PV for this issue …

thanks again

best regards



> El 17 nov 2016, a las 13:38, Frederic Giloux  escribió:
> 
> Hi Julio
> 
> have you looked at this point in the blog?
> "Why is the container still showing 10GB of container rootfs size? Shouldn’t 
> we be getting 20 GB? This is expected behavior. Since our new container is 
> based on our old Fedora image, which is based on the old base device size, 
> the new container would not get a 20-GB device size  unless we update the 
> image.
> 
> So let’s remove the existing Fedora image and update it from the registry."
> 
> Also see limitations at the end of the blog.
> 
> Hope this helps.
> 
> 
> 
> Frédéric
> 
> 
> On Thu, Nov 17, 2016 at 1:30 PM, Julio Saura  <mailto:jsa...@hiberus.com>> wrote:
> wopss
> 
> no sorry not working, my mistake..
> 
> still 10 gb 
> 
> /dev/mapper/docker-253:5-33569780-2c3028e6722088dd70791f7a34128f61a07f96e75523529503d7febc4d27275f
> 10G   1,2G  8,9G  12% /
> 
> but docker info says 20 gb 
> 
> 
> 
>  Base Device Size: 21.47 GB
> 
> 
> i have restarted docker daemon on that node an pulled my image again 
> 
> 
>> El 17 nov 2016, a las 12:52, Julio Saura > <mailto:jsa...@hiberus.com>> escribió:
>> 
>> hello
>> 
>> yes, that was the point i needed .
>> 
>> applied and working ;)
>> 
>> thanks!
>> 
>> i was looking for an openshift flag/option instead of directly docker :/
>> 
>> 
>> 
>> 
>>> El 17 nov 2016, a las 12:39, Frederic Giloux >> <mailto:fgil...@redhat.com>> escribió:
>>> 
>>> Hi Julio
>>> 
>>> I hope I understand your question correctly. The first time docker is 
>>> started, it sets up a base device with a default size specified in "Base 
>>> Device Size", visible with the command "#docker info". All future images 
>>> and containers will be a snapshot of this base device. Base size is the 
>>> maximum size that a container/image can grow to.
>>> Information on how to increase the size is available in this blog entry:
>>> http://www.projectatomic.io/blog/2016/03/daemon_option_basedevicesize/ 
>>> <http://www.projectatomic.io/blog/2016/03/daemon_option_basedevicesize/>
>>> 
>>> Best Regards,
>>> 
>>> Frédéric
>>> 
>>> On Thu, Nov 17, 2016 at 10:41 AM, Julio Saura >> <mailto:jsa...@hiberus.com>> wrote:
>>> Hello
>>> 
>>> i have noticed all my pods are started with 10 gb disk .. and i don’t know 
>>> why, the problem is that i need more disk per pod, how do i increase the 
>>> size of de pod disk? i don’t find any doc regarding this issue.
>>> 
>>> i have tried to mount a host mount on my pod just to get a jmap out of the 
>>> pod but i was not able to make it run ..
>>> 
>>> if pod disk size increase is not possible i will try to use PV using nfs, 
>>> but i guess increasing the pod disk is possible right?
>>> 
>>> best regards
>>> 
>>> thanks.
>>> 
>>> 
>>> 
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com <mailto:users@lists.openshift.redhat.com>
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
>>> <http://lists.openshift.redhat.com/openshiftmm/listinfo/users>
>>> 
>>> 
>>> 
>>> -- 
>>> Frédéric Giloux
>>> Senior Middleware Consultant
>>> 
>>> Red Hat GmbH 
>>> MesseTurm, Friedrich-Ebert-Anlage 49, 60308 Frankfurt am Main 
>>> 
>>> Mobile: +49 (0) 174 1724661 
>>> E-Mail: fgil...@redhat.com <mailto:fgil...@redhat.com>, 
>>> http://www.redhat.de/  <http://www.redhat.de/>
>>> 
>>> Delivering value year after year 
>>> Red Hat ranks # 1 in value among software vendors 
>>> http://www.redhat.com/promo/vendor/ <http://www.redhat.com/promo/vendor/> 
>>> 
>>> Freedom...Courage...Commitment...Accountability 
>>>  
>>> Red Hat GmbH, http://www.de.redhat.com/ <http://www.de.redhat.com/> Sitz: 
>>> Grasbrunn, 
>>> Handelsregister: Amtsgericht München, HRB 153243 
>>> Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham, Michael 
>>> O'

Re: pod disk size

2016-11-17 Thread Julio Saura
wopss

no sorry not working, my mistake..

still 10 gb 

/dev/mapper/docker-253:5-33569780-2c3028e6722088dd70791f7a34128f61a07f96e75523529503d7febc4d27275f
10G   1,2G  8,9G  12% /

but docker info says 20 gb 



 Base Device Size: 21.47 GB


i have restarted the docker daemon on that node and pulled my image again 
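
for reference, the docker-level change being applied here is the base device size 
storage option; a minimal sketch on centos 7 with the devicemapper driver ( the image 
name is a placeholder ). per the blog linked in the quoted reply below, restarting 
docker and pulling is not enough while the old image is still cached: it has to be 
removed first and then pulled again.

# /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS="--storage-opt dm.basesize=20G"

systemctl restart docker
docker rmi my-custom-image:latest    # drop the locally cached image
docker pull my-custom-image:latest   # re-pull so new containers get the bigger rootfs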


> El 17 nov 2016, a las 12:52, Julio Saura  escribió:
> 
> hello
> 
> yes, that was the point i needed .
> 
> applied and working ;)
> 
> thanks!
> 
> i was looking for an openshift flag/option instead of directly docker :/
> 
> 
> 
> 
>> El 17 nov 2016, a las 12:39, Frederic Giloux > <mailto:fgil...@redhat.com>> escribió:
>> 
>> Hi Julio
>> 
>> I hope I understand your question correctly. The first time docker is 
>> started, it sets up a base device with a default size specified in "Base 
>> Device Size", visible with the command "#docker info". All future images and 
>> containers will be a snapshot of this base device. Base size is the maximum 
>> size that a container/image can grow to.
>> Information on how to increase the size is available in this blog entry:
>> http://www.projectatomic.io/blog/2016/03/daemon_option_basedevicesize/ 
>> <http://www.projectatomic.io/blog/2016/03/daemon_option_basedevicesize/>
>> 
>> Best Regards,
>> 
>> Frédéric
>> 
>> On Thu, Nov 17, 2016 at 10:41 AM, Julio Saura > <mailto:jsa...@hiberus.com>> wrote:
>> Hello
>> 
>> i have noticed all my pods are started with 10 gb disk .. and i don’t know 
>> why, the problem is that i need more disk per pod, how do i increase the 
>> size of de pod disk? i don’t find any doc regarding this issue.
>> 
>> i have tried to mount a host mount on my pod just to get a jmap out of the 
>> pod but i was not able to make it run ..
>> 
>> if pod disk size increase is not possible i will try to use PV using nfs, 
>> but i guess increasing the pod disk is possible right?
>> 
>> best regards
>> 
>> thanks.
>> 
>> 
>> 
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com <mailto:users@lists.openshift.redhat.com>
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
>> <http://lists.openshift.redhat.com/openshiftmm/listinfo/users>
>> 
>> 
>> 
>> -- 
>> Frédéric Giloux
>> Senior Middleware Consultant
>> 
>> Red Hat GmbH 
>> MesseTurm, Friedrich-Ebert-Anlage 49, 60308 Frankfurt am Main 
>> 
>> Mobile: +49 (0) 174 1724661 
>> E-Mail: fgil...@redhat.com <mailto:fgil...@redhat.com>, 
>> http://www.redhat.de/  <http://www.redhat.de/>
>> 
>> Delivering value year after year 
>> Red Hat ranks # 1 in value among software vendors 
>> http://www.redhat.com/promo/vendor/ <http://www.redhat.com/promo/vendor/> 
>> 
>> Freedom...Courage...Commitment...Accountability 
>>  
>> Red Hat GmbH, http://www.de.redhat.com/ <http://www.de.redhat.com/> Sitz: 
>> Grasbrunn, 
>> Handelsregister: Amtsgericht München, HRB 153243 
>> Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham, Michael 
>> O'Neill
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: pod disk size

2016-11-17 Thread Julio Saura
hello

yes, that was the point i needed .

applied and working ;)

thanks!

i was looking for an openshift flag/option instead of directly docker :/




> El 17 nov 2016, a las 12:39, Frederic Giloux  escribió:
> 
> Hi Julio
> 
> I hope I understand your question correctly. The first time docker is 
> started, it sets up a base device with a default size specified in "Base 
> Device Size", visible with the command "#docker info". All future images and 
> containers will be a snapshot of this base device. Base size is the maximum 
> size that a container/image can grow to.
> Information on how to increase the size is available in this blog entry:
> http://www.projectatomic.io/blog/2016/03/daemon_option_basedevicesize/ 
> <http://www.projectatomic.io/blog/2016/03/daemon_option_basedevicesize/>
> 
> Best Regards,
> 
> Frédéric
> 
> On Thu, Nov 17, 2016 at 10:41 AM, Julio Saura  <mailto:jsa...@hiberus.com>> wrote:
> Hello
> 
> i have noticed all my pods are started with 10 gb disk .. and i don’t know 
> why, the problem is that i need more disk per pod, how do i increase the size 
> of de pod disk? i don’t find any doc regarding this issue.
> 
> i have tried to mount a host mount on my pod just to get a jmap out of the 
> pod but i was not able to make it run ..
> 
> if pod disk size increase is not possible i will try to use PV using nfs, but 
> i guess increasing the pod disk is possible right?
> 
> best regards
> 
> thanks.
> 
> 
> 
> ___
> users mailing list
> users@lists.openshift.redhat.com <mailto:users@lists.openshift.redhat.com>
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
> <http://lists.openshift.redhat.com/openshiftmm/listinfo/users>
> 
> 
> 
> -- 
> Frédéric Giloux
> Senior Middleware Consultant
> 
> Red Hat GmbH 
> MesseTurm, Friedrich-Ebert-Anlage 49, 60308 Frankfurt am Main 
> 
> Mobile: +49 (0) 174 1724661 
> E-Mail: fgil...@redhat.com <mailto:fgil...@redhat.com>, http://www.redhat.de/ 
>  <http://www.redhat.de/>
> 
> Delivering value year after year 
> Red Hat ranks # 1 in value among software vendors 
> http://www.redhat.com/promo/vendor/ <http://www.redhat.com/promo/vendor/> 
> 
> Freedom...Courage...Commitment...Accountability 
>  
> Red Hat GmbH, http://www.de.redhat.com/ <http://www.de.redhat.com/> Sitz: 
> Grasbrunn, 
> Handelsregister: Amtsgericht München, HRB 153243 
> Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham, Michael 
> O'Neill

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


pod disk size

2016-11-17 Thread Julio Saura
Hello

i have noticed all my pods are started with a 10 gb disk .. and i don’t know why. 
the problem is that i need more disk per pod. how do i increase the size of the 
pod disk? i can’t find any doc regarding this issue.

i have tried to mount a host mount on my pod just to get a jmap out of the pod 
but i was not able to make it work ..

if increasing the pod disk size is not possible i will try to use a PV using nfs, but i 
guess increasing the pod disk is possible, right?

best regards

thanks.



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: ansible origin playbook errata?

2016-10-06 Thread Julio Saura
naaah my fault

i already did the uninstall yes.


i did not put 
openshift_image_tag=v1.2.0
openshift_pkg_version=-1.2.0

on the ansible file besides openshift_release
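
i.e. the inventory ends up with all three pins together, something like:

[OSEv3:vars]
openshift_release=v1.2
openshift_image_tag=v1.2.0
openshift_pkg_version=-1.2.0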


installed now  without problems 

sorry ..

thanks

> El 6 oct 2016, a las 13:50, Scott Dodson  escribió:
> 
> Can you run the uninstall playbook at playbooks/adhoc/uninstall.yml
> first to clean up any previous installs? We record the version the
> first time we install cluster components in order to ensure the rest
> of the cluster matches that version, so we'd need to be starting from
> a clean environment.
> 
> --
> Scott
> 
> On Thu, Oct 6, 2016 at 7:16 AM, Julio Saura  wrote:
>> i have moved to branch release-1.2 but still the same error :/
>> 
>> 
>>> El 6 oct 2016, a las 12:07, Julio Saura  escribió:
>>> 
>>> hello
>>> 
>>> thanks for the input
>>> 
>>> after typing in my ansible file the version i get this error
>>> 
>>> fatal: [openshift-master01]: FAILED! => {"changed": false, "failed": true, 
>>> "msg": "Detected OpenShift version 1.3.0 does not match requested 
>>> openshift_release 1.2. You may need to adjust your yum repositories, 
>>> inventory, or run the appropriate OpenShift upgrade playbook.”}
>>> 
>>> do i need to change anything else for installing 1.2?
>>> 
>>> i have check yum repos but i  don’t see any reference to version..
>>> 
>>> thanks again
>>> 
>>>> El 5 oct 2016, a las 20:43, Scott Dodson  escribió:
>>>> 
>>>> We maintain branches in the github repo, release-1.2, release-1.3, etc
>>>> that are updated less frequently meaning they often don't get the
>>>> latest installer features but should be more stable. The master branch
>>>> shouldn't be used for 1.2 installs at this point, we're only testing
>>>> it against the latest stable release and current development releases.
>>>> Setting openshift_release=v1.2 should force installation of 1.2,
>>>> though i'm not certain if that feature exists in the release-1.2
>>>> branch or not.
>>>> 
>>>> The error you ran into is fixed on master via
>>>> https://github.com/openshift/openshift-ansible/pull/2552 the problem
>>>> was introduced in
>>>> https://github.com/openshift/openshift-ansible/pull/2511
>>>> 
>>>> On Wed, Oct 5, 2016 at 11:55 AM, Julio Saura  wrote:
>>>>> hello
>>>>> 
>>>>> installing a new brand cluster on centos this afternoon y realized that 
>>>>> is installing now version 1.3.0 ( ok so far )
>>>>> 
>>>>> when deploying the playbook from git master branch i got an error when 
>>>>> checking if master API is up through a native master cluster ( haproxy )
>>>>> 
>>>>> the playbook is trying to connect to port 8443 on master load balancer 
>>>>> ..but the haproxy.conf is set to listen on port 8843 .
>>>>> 
>>>>> taking a look on the haproxy.config deployed i also see that is trying to 
>>>>> connect with masters using port 8843 but masters service is up and 
>>>>> running on 8443 .. so checks fail and so masters are “down” from haproxy 
>>>>> perspective.
>>>>> 
>>>>> i did realized that on the lb playbook ( 
>>>>> playbooks/common/openshift-loadbalancer/config.yml ) the default port is 
>>>>> set as 8843, changed it to 8443 and installation completes without any 
>>>>> problem.
>>>>> 
>>>>> just in case it may help people installing right now because i think is 
>>>>> an error on the playbooks
>>>>> 
>>>>> btw: is possible to install version 1.2.0 instead on 1.3.0 using the 
>>>>> ansible procedure? i do not really trust 1.3.0 right now for my 
>>>>> production environments :( and my staging environments are based on 1.2.x 
>>>>> and i want them to be on the same version.
>>>>> 
>>>>> thanks
>>>>> 
>>>>> best regards
>>>>> 
>>>>> ___
>>>>> users mailing list
>>>>> users@lists.openshift.redhat.com
>>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>> 
>>> 
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> 


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: ansible origin playbook errata?

2016-10-06 Thread Julio Saura
i have moved to branch release-1.2 but still the same error :/


> El 6 oct 2016, a las 12:07, Julio Saura  escribió:
> 
> hello
> 
> thanks for the input
> 
> after typing in my ansible file the version i get this error
> 
> fatal: [openshift-master01]: FAILED! => {"changed": false, "failed": true, 
> "msg": "Detected OpenShift version 1.3.0 does not match requested 
> openshift_release 1.2. You may need to adjust your yum repositories, 
> inventory, or run the appropriate OpenShift upgrade playbook.”}
> 
> do i need to change anything else for installing 1.2? 
> 
> i have check yum repos but i  don’t see any reference to version..
> 
> thanks again
> 
>> El 5 oct 2016, a las 20:43, Scott Dodson  escribió:
>> 
>> We maintain branches in the github repo, release-1.2, release-1.3, etc
>> that are updated less frequently meaning they often don't get the
>> latest installer features but should be more stable. The master branch
>> shouldn't be used for 1.2 installs at this point, we're only testing
>> it against the latest stable release and current development releases.
>> Setting openshift_release=v1.2 should force installation of 1.2,
>> though i'm not certain if that feature exists in the release-1.2
>> branch or not.
>> 
>> The error you ran into is fixed on master via
>> https://github.com/openshift/openshift-ansible/pull/2552 the problem
>> was introduced in
>> https://github.com/openshift/openshift-ansible/pull/2511
>> 
>> On Wed, Oct 5, 2016 at 11:55 AM, Julio Saura  wrote:
>>> hello
>>> 
>>> installing a new brand cluster on centos this afternoon y realized that is 
>>> installing now version 1.3.0 ( ok so far )
>>> 
>>> when deploying the playbook from git master branch i got an error when 
>>> checking if master API is up through a native master cluster ( haproxy )
>>> 
>>> the playbook is trying to connect to port 8443 on master load balancer 
>>> ..but the haproxy.conf is set to listen on port 8843 .
>>> 
>>> taking a look on the haproxy.config deployed i also see that is trying to 
>>> connect with masters using port 8843 but masters service is up and running 
>>> on 8443 .. so checks fail and so masters are “down” from haproxy 
>>> perspective.
>>> 
>>> i did realized that on the lb playbook ( 
>>> playbooks/common/openshift-loadbalancer/config.yml ) the default port is 
>>> set as 8843, changed it to 8443 and installation completes without any 
>>> problem.
>>> 
>>> just in case it may help people installing right now because i think is an 
>>> error on the playbooks
>>> 
>>> btw: is possible to install version 1.2.0 instead on 1.3.0 using the 
>>> ansible procedure? i do not really trust 1.3.0 right now for my production 
>>> environments :( and my staging environments are based on 1.2.x and i want 
>>> them to be on the same version.
>>> 
>>> thanks
>>> 
>>> best regards
>>> 
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> 
> 
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: ansible origin playbook errata?

2016-10-06 Thread Julio Saura
hello

thanks for the input

after typing in my ansible file the version i get this error

fatal: [openshift-master01]: FAILED! => {"changed": false, "failed": true, 
"msg": "Detected OpenShift version 1.3.0 does not match requested 
openshift_release 1.2. You may need to adjust your yum repositories, inventory, 
or run the appropriate OpenShift upgrade playbook.”}

do i need to change anything else for installing 1.2? 

i have checked the yum repos but i don’t see any reference to the version..

thanks again

> El 5 oct 2016, a las 20:43, Scott Dodson  escribió:
> 
> We maintain branches in the github repo, release-1.2, release-1.3, etc
> that are updated less frequently meaning they often don't get the
> latest installer features but should be more stable. The master branch
> shouldn't be used for 1.2 installs at this point, we're only testing
> it against the latest stable release and current development releases.
> Setting openshift_release=v1.2 should force installation of 1.2,
> though i'm not certain if that feature exists in the release-1.2
> branch or not.
> 
> The error you ran into is fixed on master via
> https://github.com/openshift/openshift-ansible/pull/2552 the problem
> was introduced in
> https://github.com/openshift/openshift-ansible/pull/2511
> 
> On Wed, Oct 5, 2016 at 11:55 AM, Julio Saura  wrote:
>> hello
>> 
>> installing a new brand cluster on centos this afternoon y realized that is 
>> installing now version 1.3.0 ( ok so far )
>> 
>> when deploying the playbook from git master branch i got an error when 
>> checking if master API is up through a native master cluster ( haproxy )
>> 
>> the playbook is trying to connect to port 8443 on master load balancer ..but 
>> the haproxy.conf is set to listen on port 8843 .
>> 
>> taking a look on the haproxy.config deployed i also see that is trying to 
>> connect with masters using port 8843 but masters service is up and running 
>> on 8443 .. so checks fail and so masters are “down” from haproxy perspective.
>> 
>> i did realized that on the lb playbook ( 
>> playbooks/common/openshift-loadbalancer/config.yml ) the default port is set 
>> as 8843, changed it to 8443 and installation completes without any problem.
>> 
>> just in case it may help people installing right now because i think is an 
>> error on the playbooks
>> 
>> btw: is possible to install version 1.2.0 instead on 1.3.0 using the ansible 
>> procedure? i do not really trust 1.3.0 right now for my production 
>> environments :( and my staging environments are based on 1.2.x and i want 
>> them to be on the same version.
>> 
>> thanks
>> 
>> best regards
>> 
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


ansible origin playbook errata?

2016-10-05 Thread Julio Saura
hello

installing a brand new cluster on centos this afternoon i realized that it is now 
installing version 1.3.0 ( ok so far )

when deploying the playbook from the git master branch i got an error when checking 
if the master API is up through a native master cluster ( haproxy )

the playbook is trying to connect to port 8443 on the master load balancer .. but 
the haproxy.conf is set to listen on port 8843 .

taking a look at the haproxy.config deployed i also see that it is trying to 
connect to the masters using port 8843, but the masters service is up and running on 
8443 .. so the checks fail and the masters are “down” from the haproxy perspective.

i realized that in the lb playbook ( 
playbooks/common/openshift-loadbalancer/config.yml ) the default port is set as 
8843; i changed it to 8443 and the installation completes without any problem.

just in case it may help people installing right now, because i think it is an 
error in the playbooks

btw: is it possible to install version 1.2.0 instead of 1.3.0 using the ansible 
procedure? i do not really trust 1.3.0 right now for my production environments 
:( and my staging environments are based on 1.2.x and i want them to be on the 
same version.

thanks

best regards

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: docker version doubt

2016-09-27 Thread Julio Saura
ok

one week up and running

no problem at all, so it seems it is not so unsupported :(

best regards


> El 20 sept 2016, a las 22:39, Hiberus  escribió:
> 
> Ummm weird
> 
> I am still puzzled then with the unsupported word
> 
> I will deploy my apps on that cluster tomorrow. Hope it works without 
> problems :-/
> 
> 
> El 20 sept 2016, a las 22:29, Alex Wauck  <mailto:alexwa...@exosite.com>> escribió:
> 
>> Oh, I didn't notice the "unsupported" part.  Mine says that, too, though.  
>> Interestingly enough, I *don't* see it on my laptop or a Debian server here 
>> at work.  On the Debian server, it's 1.12, and it comes straight from Docker 
>> themselves.  On my laptop, it comes from the Arch Linux community package, 
>> which compiles the docker binary instead of downloading a pre-built binary 
>> from Docker.  So, I guess my initial theory that binaries that *don't* come 
>> straight from Docker themselves have that "unsupported" bit is false.  I 
>> also don't see it on my personal Debian server, where I installed Docker 
>> 1.6.2 from the Debian repository, so it's not phoning home and asking if 
>> it's still supported.
>> 
>> So, no idea why it says that.  Sorry.
>> 
>> On Tue, Sep 20, 2016 at 10:23 AM, Julio Saura > <mailto:jsa...@hiberus.com>> wrote:
>> nice to hear
>> 
>> is a weird version name although xD
>> 
>> thanks alex.
>> 
>> best regards
>> 
>>> El 20 sept 2016, a las 17:16, Alex Wauck >> <mailto:alexwa...@exosite.com>> escribió:
>>> 
>>> I've seen the same thing myself.  It seems to cause some bad interactions 
>>> with image stream tags (i.e. sha256-based references result in pull 
>>> failures), but on the plus side, you can use all those images on Docker Hub 
>>> that were pushed with 1.10 or later.  On balance, I'd say it solves more 
>>> problems than it creates.  We're running our production OpenShift cluster 
>>> with 1.10, and it's worked out pretty well for us.
>>> 
>>> On Tue, Sep 20, 2016 at 3:50 AM, Julio Saura >> <mailto:jsa...@hiberus.com>> wrote:
>>> Hello
>>> 
>>> i am installing a new brand open shift origin cluster with centos 7.
>>> 
>>> after installing docker engine ( from centos repo ) i check the version and 
>>> i am concerned about the result
>>> 
>>> docker --version
>>> Docker version 1.10.3, build cb079f6-unsupported
>>> 
>>> unsupported¿?
>>> 
>>> is this normal?
>>> 
>>> thanks
>>> 
>>> 
>>> 
>>> 
>>> 
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com <mailto:users@lists.openshift.redhat.com>
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
>>> <http://lists.openshift.redhat.com/openshiftmm/listinfo/users>
>>> 
>>> 
>>> 
>>> -- 
>>> Alex Wauck // DevOps Engineer
>>> 
>>> E X O S I T E 
>>> www.exosite.com <http://www.exosite.com/> 
>>> 
>>> Making Machines More Human.
>> 
>> 
>> 
>> 
>> -- 
>> Alex Wauck // DevOps Engineer
>> 
>> E X O S I T E 
>> www.exosite.com <http://www.exosite.com/> 
>> 
>> Making Machines More Human.
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: docker version doubt

2016-09-20 Thread Julio Saura
nice to hear

it is a weird version name though xD

thanks alex.

best regards

> El 20 sept 2016, a las 17:16, Alex Wauck  escribió:
> 
> I've seen the same thing myself.  It seems to cause some bad interactions 
> with image stream tags (i.e. sha256-based references result in pull 
> failures), but on the plus side, you can use all those images on Docker Hub 
> that were pushed with 1.10 or later.  On balance, I'd say it solves more 
> problems than it creates.  We're running our production OpenShift cluster 
> with 1.10, and it's worked out pretty well for us.
> 
> On Tue, Sep 20, 2016 at 3:50 AM, Julio Saura  <mailto:jsa...@hiberus.com>> wrote:
> Hello
> 
> i am installing a new brand open shift origin cluster with centos 7.
> 
> after installing docker engine ( from centos repo ) i check the version and i 
> am concerned about the result
> 
> docker --version
> Docker version 1.10.3, build cb079f6-unsupported
> 
> unsupported¿?
> 
> is this normal?
> 
> thanks
> 
> 
> 
> 
> 
> ___
> users mailing list
> users@lists.openshift.redhat.com <mailto:users@lists.openshift.redhat.com>
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
> <http://lists.openshift.redhat.com/openshiftmm/listinfo/users>
> 
> 
> 
> -- 
> Alex Wauck // DevOps Engineer
> 
> E X O S I T E 
> www.exosite.com <http://www.exosite.com/> 
> 
> Making Machines More Human.

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


docker version doubt

2016-09-20 Thread Julio Saura
Hello

i am installing a brand new open shift origin cluster with centos 7.

after installing the docker engine ( from the centos repo ) i checked the version and i 
am concerned about the result

docker --version
Docker version 1.10.3, build cb079f6-unsupported

unsupported?

is this normal?

thanks





___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: open shift env variables with slash characters

2016-09-12 Thread Julio Saura
ok seems solved

i had to put an export for each env variable java did not recognize into the 
/etc/profile file in the docker OS..

i did that in my entry point script before starting the jboss process, and that solved it
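
for anyone hitting the same thing, a minimal sketch of the two usual options ( the 
variable names come from this thread, the jboss paths are assumptions ):

# option 1: re-export the variables so the jboss user's login shell picks them up
{
  echo "export AUTH_SQL_URL=\"$AUTH_SQL_URL\""
  echo "export ASYNC_POOL=\"$ASYNC_POOL\""
} >> /etc/profile
sudo -u jbossadmin -i /opt/jboss/jboss-as/bin/standalone.sh

# option 2: keep the caller's environment across the user switch instead of touching
# /etc/profile ( -E preserves the environment, subject to the sudoers policy )
sudo -E -u jbossadmin /opt/jboss/jboss-as/bin/standalone.sh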

thanks all!

best regards
> El 12 sept 2016, a las 12:14, Julio Saura  escribió:
> 
> for example
> 
> in the docker shell
> 
> root@apis-rc-y7pox:/# echo $AUTH_SQL_URL
> jdbc:oracle:thin:@bbdd:1521/sid
> 
> but on java when i recover that value is empty so  i get a null pointer 
> exception.
> 
> 
>> El 12 sept 2016, a las 12:04, Julio Saura > <mailto:jsa...@hiberus.com>> escribió:
>> 
>> hello, no i don’t use official jboss images..
>> 
>> i use a custom image build by me ..
>> 
>> on my dockerfile there is nothing related to env variables ..  is on my 
>> entry point script where i read em and do some substitutions in jboss config 
>> files before launching jboss process  .. that part works ok
>> 
>> but after that reading them from our java app does not work , their value is 
>> null, .. and starting that same docker image in a docker standalone engine , 
>> and passing the same variables, works ok ..
>> 
>> my docker file is simple
>> 
>> FROM debian
>> 
>> RUN apt-get update -y
>> RUN apt-get install -y vim
>> RUN apt-get install -y sudo
>> USER root
>> COPY jboss.tar.gz / ( custom jboss tar.gz )
>> COPY jboss-as.conf /etc/init.d
>> COPY jboss-as /etc/init.d
>> RUN useradd -ms /bin/bash jbossadmin
>> RUN chmod +x /etc/init.d/jboss-as && chown jbossadmin: /etc/init.d/jboss-as
>> RUN cd / && tar xvfz jboss.tar.gz && rm jboss.tar.gz && chown -R jbossadmin: 
>> /opt
>> RUN /opt/jboss/jboss-as/bin/add-user.sh admin Developer#2015 --silent
>> 
>> EXPOSE 8080 
>> 
>> ENTRYPOINT ["/etc/init.d/jboss-as", "start”]
>> 
>> 
>> and in my entry point script i just do some replaces before starting jboss.
>> 
>> for example ( i have ASYNC_POOL word in the value entry of the xml inside 
>> the jboss.tar.gz i use,  so it get replaced ok )
>> 
>> sed -i -e 's/ASYNC_POOL/'"$ASYNC_POOL"'/g' 
>> /opt/jboss/jboss-as/standalone/configuration/standalone.xml
>> 
>> 
>> and in my RC i set it as follows :
>> 
>>"env": [
>>  {
>>"name”:"ASYNC_POOL",
>>"value”:"300"
>>  },
>> 
>> etc etc
>> 
>> that works like a charm, but all this variables are NULL when recovering 
>> them from my java app.
>> 
>> 
>> best regards
>> 
>> 
>>> El 12 sept 2016, a las 11:37, Aleksandar Kostadinov >> <mailto:akost...@redhat.com>> escribió:
>>> 
>>> Do you use official Wildfly/JBossEAP images for OpenShift? OpenShift does 
>>> not run containers as root by default (this is the reason most standard 
>>> docker images need extra configuration to run on OpenShift).
>>> 
>>> To check why env vars are not visible, I think you'd need to post your 
>>> dockerfile somewhere.
>>> 
>>> Julio Saura wrote on 09/12/16 12:11:
>>>> hello
>>>> 
>>>> i have just started the process as root inside the docker just to be
>>>> sure that  was the problem ,but still the same issue
>>>> 
>>>> java process running inside jboss is not reading variable values but
>>>> when connected to the docker shell , echoing them shows the values
>>>> properly ..
>>>> 
>>>> what i am doing wrong? shell processes are able to read values but app
>>>> inside jboss no..
>>>> 
>>>> best regards
>>>> 
>>>> thanks again.
>>>> 
>>>> 
>>>>> El 12 sept 2016, a las 11:04, Julio Saura >>>> <mailto:jsa...@hiberus.com>
>>>>> <mailto:jsa...@hiberus.com <mailto:jsa...@hiberus.com>>> escribió:
>>>>> 
>>>>> do i need to do something to make the env variables i pass on son when
>>>>> creating the RC to all users? or shall i do it manually dumping them
>>>>> into /etc/profile or user bash_profile file prior launching jboss?
>>>>> 
>>>>> thanks
>>>>> 
>>>>>> El 12 sept 2016, a las 10:45, Julio Saura >>>>> <mailto:jsa...@hiberus.com>
>>>>>> <mailto:jsa...@hiberus.com <mailto:jsa...@hiberus.c

Re: open shift env variables with slash characters

2016-09-12 Thread Julio Saura
for example

in the docker shell

root@apis-rc-y7pox:/# echo $AUTH_SQL_URL
jdbc:oracle:thin:@bbdd:1521/sid

but in java, when i recover that value it is empty, so i get a null pointer 
exception.


> El 12 sept 2016, a las 12:04, Julio Saura  escribió:
> 
> hello, no i don’t use official jboss images..
> 
> i use a custom image build by me ..
> 
> on my dockerfile there is nothing related to env variables ..  is on my entry 
> point script where i read em and do some substitutions in jboss config files 
> before launching jboss process  .. that part works ok
> 
> but after that reading them from our java app does not work , their value is 
> null, .. and starting that same docker image in a docker standalone engine , 
> and passing the same variables, works ok ..
> 
> my docker file is simple
> 
> FROM debian
> 
> RUN apt-get update -y
> RUN apt-get install -y vim
> RUN apt-get install -y sudo
> USER root
> COPY jboss.tar.gz / ( custom jboss tar.gz )
> COPY jboss-as.conf /etc/init.d
> COPY jboss-as /etc/init.d
> RUN useradd -ms /bin/bash jbossadmin
> RUN chmod +x /etc/init.d/jboss-as && chown jbossadmin: /etc/init.d/jboss-as
> RUN cd / && tar xvfz jboss.tar.gz && rm jboss.tar.gz && chown -R jbossadmin: 
> /opt
> RUN /opt/jboss/jboss-as/bin/add-user.sh admin Developer#2015 --silent
> 
> EXPOSE 8080 
> 
> ENTRYPOINT ["/etc/init.d/jboss-as", "start”]
> 
> 
> and in my entry point script i just do some replaces before starting jboss.
> 
> for example ( i have ASYNC_POOL word in the value entry of the xml inside the 
> jboss.tar.gz i use,  so it get replaced ok )
> 
> sed -i -e 's/ASYNC_POOL/'"$ASYNC_POOL"'/g' 
> /opt/jboss/jboss-as/standalone/configuration/standalone.xml
> 
> 
> and in my RC i set it as follows :
> 
>"env": [
>  {
>"name”:"ASYNC_POOL",
>"value”:"300"
>  },
> 
> etc etc
> 
> that works like a charm, but all this variables are NULL when recovering them 
> from my java app.
> 
> 
> best regards
> 
> 
>> El 12 sept 2016, a las 11:37, Aleksandar Kostadinov > <mailto:akost...@redhat.com>> escribió:
>> 
>> Do you use official Wildfly/JBossEAP images for OpenShift? OpenShift does 
>> not run containers as root by default (this is the reason most standard 
>> docker images need extra configuration to run on OpenShift).
>> 
>> To check why env vars are not visible, I think you'd need to post your 
>> dockerfile somewhere.
>> 
>> Julio Saura wrote on 09/12/16 12:11:
>>> hello
>>> 
>>> i have just started the process as root inside the docker just to be
>>> sure that  was the problem ,but still the same issue
>>> 
>>> java process running inside jboss is not reading variable values but
>>> when connected to the docker shell , echoing them shows the values
>>> properly ..
>>> 
>>> what i am doing wrong? shell processes are able to read values but app
>>> inside jboss no..
>>> 
>>> best regards
>>> 
>>> thanks again.
>>> 
>>> 
>>>> El 12 sept 2016, a las 11:04, Julio Saura >>> <mailto:jsa...@hiberus.com>
>>>> <mailto:jsa...@hiberus.com <mailto:jsa...@hiberus.com>>> escribió:
>>>> 
>>>> do i need to do something to make the env variables i pass on son when
>>>> creating the RC to all users? or shall i do it manually dumping them
>>>> into /etc/profile or user bash_profile file prior launching jboss?
>>>> 
>>>> thanks
>>>> 
>>>>> El 12 sept 2016, a las 10:45, Julio Saura >>>> <mailto:jsa...@hiberus.com>
>>>>> <mailto:jsa...@hiberus.com <mailto:jsa...@hiberus.com>>> escribió:
>>>>> 
>>>>> hi
>>>>> 
>>>>> we have found the problem
>>>>> 
>>>>> the process inside the docker is launched with sudo to a non
>>>>> privileged user por starting the jboss server  , don’t want the
>>>>> process to be launched as root.
>>>>> 
>>>>> env variables are only available for the root user of the docker, not
>>>>> for the user that launches jboss process, and so code finds em as null
>>>>> 
>>>>> Best regards
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>>> El 9 sept 2016, a las 17:49, Julio Saura >>>>> <

Re: open shift env variables with slash characters

2016-09-12 Thread Julio Saura
hello, no i don’t use the official jboss images..

i use a custom image built by me ..

in my dockerfile there is nothing related to env variables .. it is in my entry 
point script where i read them and do some substitutions in the jboss config files 
before launching the jboss process .. that part works ok

but after that, reading them from our java app does not work , their value is 
null .. and starting that same docker image in a standalone docker engine , 
and passing the same variables, works ok ..

my docker file is simple

FROM debian

RUN apt-get update -y
RUN apt-get install -y vim
RUN apt-get install -y sudo
USER root
# custom jboss tar.gz
COPY jboss.tar.gz /
COPY jboss-as.conf /etc/init.d
COPY jboss-as /etc/init.d
RUN useradd -ms /bin/bash jbossadmin
RUN chmod +x /etc/init.d/jboss-as && chown jbossadmin: /etc/init.d/jboss-as
RUN cd / && tar xvfz jboss.tar.gz && rm jboss.tar.gz && chown -R jbossadmin: /opt
RUN /opt/jboss/jboss-as/bin/add-user.sh admin Developer#2015 --silent

EXPOSE 8080

ENTRYPOINT ["/etc/init.d/jboss-as", "start"]


and in my entry point script i just do some replaces before starting jboss.

for example ( i have the ASYNC_POOL word in the value entry of the xml inside the 
jboss.tar.gz i use, so it gets replaced ok )

sed -i -e 's/ASYNC_POOL/'"$ASYNC_POOL"'/g' /opt/jboss/jboss-as/standalone/configuration/standalone.xml


and in my RC i set it as follows:

   "env": [
     {
       "name": "ASYNC_POOL",
       "value": "300"
     },

etc etc

that works like a charm, but all these variables are NULL when recovering them 
from my java app.
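
a quick way to check which environment the running java process actually sees ( as 
opposed to the shell you exec into ) is to read it from /proc; a small sketch, assuming 
a single java process and that pgrep/procps is available in the image:

PID=$(pgrep -f java | head -n 1)
tr '\0' '\n' < /proc/$PID/environ | sort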


best regards


> El 12 sept 2016, a las 11:37, Aleksandar Kostadinov  
> escribió:
> 
> Do you use official Wildfly/JBossEAP images for OpenShift? OpenShift does not 
> run containers as root by default (this is the reason most standard docker 
> images need extra configuration to run on OpenShift).
> 
> To check why env vars are not visible, I think you'd need to post your 
> dockerfile somewhere.
> 
> Julio Saura wrote on 09/12/16 12:11:
>> hello
>> 
>> i have just started the process as root inside the docker just to be
>> sure that  was the problem ,but still the same issue
>> 
>> java process running inside jboss is not reading variable values but
>> when connected to the docker shell , echoing them shows the values
>> properly ..
>> 
>> what i am doing wrong? shell processes are able to read values but app
>> inside jboss no..
>> 
>> best regards
>> 
>> thanks again.
>> 
>> 
>>> El 12 sept 2016, a las 11:04, Julio Saura >> <mailto:jsa...@hiberus.com <mailto:jsa...@hiberus.com>>> escribió:
>>> 
>>> do i need to do something to make the env variables i pass on son when
>>> creating the RC to all users? or shall i do it manually dumping them
>>> into /etc/profile or user bash_profile file prior launching jboss?
>>> 
>>> thanks
>>> 
>>>> El 12 sept 2016, a las 10:45, Julio Saura >>> <mailto:jsa...@hiberus.com>
>>>> <mailto:jsa...@hiberus.com <mailto:jsa...@hiberus.com>>> escribió:
>>>> 
>>>> hi
>>>> 
>>>> we have found the problem
>>>> 
>>>> the process inside the docker is launched with sudo to a non
>>>> privileged user por starting the jboss server  , don’t want the
>>>> process to be launched as root.
>>>> 
>>>> env variables are only available for the root user of the docker, not
>>>> for the user that launches jboss process, and so code finds em as null
>>>> 
>>>> Best regards
>>>> 
>>>> 
>>>> 
>>>> 
>>>>> El 9 sept 2016, a las 17:49, Julio Saura >>>> <mailto:jsa...@hiberus.com>
>>>>> <mailto:jsa...@hiberus.com <mailto:jsa...@hiberus.com>>> escribió:
>>>>> 
>>>>>> 
>>>>>> El 9 sept 2016, a las 17:47, Aleksandar Kostadinov
>>>>>> mailto:akost...@redhat.com> 
>>>>>> <mailto:akost...@redhat.com <mailto:akost...@redhat.com>>> escribió:
>>>>>> 
>>>>>> Julio Saura wrote on 09/09/16 18:41:
>>>>>>> 
>>>>>>>> El 9 sept 2016, a las 17:39, Aleksandar Kostadinov
>>>>>>>> mailto:akost...@redhat.com> 
>>>>>>>> <mailto:akost...@redhat.com <mailto:akost...@redhat.com>>> escribió:
>>>>>>>> 
>>>>>>>> J

Re: open shift env variables with slash characters

2016-09-12 Thread Julio Saura
hello

i have just started the process as root inside the docker, just to be sure that 
was the problem, but still the same issue

the java process running inside jboss is not reading the variable values, but when 
connected to the docker shell, echoing them shows the values properly ..

what am i doing wrong? shell processes are able to read the values but the app inside 
jboss cannot..

best regards

thanks again.


> El 12 sept 2016, a las 11:04, Julio Saura  escribió:
> 
> do i need to do something to make the env variables i pass on son when 
> creating the RC to all users? or shall i do it manually dumping them into 
> /etc/profile or user bash_profile file prior launching jboss?
> 
> thanks
> 
>> El 12 sept 2016, a las 10:45, Julio Saura > <mailto:jsa...@hiberus.com>> escribió:
>> 
>> hi
>> 
>> we have found the problem
>> 
>> the process inside the docker is launched with sudo to a non privileged user 
>> por starting the jboss server  , don’t want the process to be launched as 
>> root.
>> 
>> env variables are only available for the root user of the docker, not for 
>> the user that launches jboss process, and so code finds em as null
>> 
>> Best regards
>> 
>> 
>> 
>> 
>>> El 9 sept 2016, a las 17:49, Julio Saura >> <mailto:jsa...@hiberus.com>> escribió:
>>> 
>>>> 
>>>> El 9 sept 2016, a las 17:47, Aleksandar Kostadinov >>> <mailto:akost...@redhat.com>> escribió:
>>>> 
>>>> Julio Saura wrote on 09/09/16 18:41:
>>>>> 
>>>>>> El 9 sept 2016, a las 17:39, Aleksandar Kostadinov >>>>> <mailto:akost...@redhat.com>> escribió:
>>>>>> 
>>>>>> Julio Saura wrote on 09/09/16 17:09:
>>>>>>> Hello
>>>>>>> 
>>>>>>> just to be sure ..
>>>>>>> 
>>>>>>> i need to pass some env variables containing @ and / characters
>>>>>>> 
>>>>>>> so far i did always scape them
>>>>>>> 
>>>>>>> for example
>>>>>>> 
>>>>>>>   {
>>>>>>>  "name”:"URL",
>>>>>>>  "value”:”http:\/\/www.example.com <http://www.example.com/>\/uri"
>>>>>>>   },
>>>>>> 
>>>>>> This is not OpenShift specific. You need just correct JSON. As far as 
>>>>>> ruby JSON parser says, there is no need to escape anything in the above 
>>>>>> URL.
>>>>> 
>>>>> yeah i know is not open shift specific but since this an open shift group 
>>>>> i though it would be the right way to answer this :)
>>>>> 
>>>>> ok, i will investigate further then why i am having problems in the app 
>>>>> recovering variable values
>>>> 
>>>> You need to provide to the list more details of what you are seeing 
>>>> exactly. I said JSON escaping is not specific to OpenSHift. But issue 
>>>> might not be escaping itself.
>>> 
>>> yeah agree. let me do some investigations in the code i had .. i am sure is 
>>> a code problem if escaping is not needed .. 
>>> 
>>> i am getting is a null pointer exception when recovering the env variables 
>>> values.
>>> 
>>> i will share conclusions as soon as i have 
>>> 
>>> thanks
>>> 
>>>> 
>>>>> thanks aleksandar
>>>>> 
>>>>> 
>>>>>> 
>>>>>>> 
>>>>>>> this is the right way ? i am having problems recovering this variable 
>>>>>>> on a java app in my pods
>>>>>>> 
>>>>>>> on shell if i echo the variable’s value shows the value ok.
>>>>>>> 
>>>>>>> thanks in advance
>>>>>>> Best regards.
>>>>>>> 
>>>>>>> ___
>>>>>>> users mailing list
>>>>>>> users@lists.openshift.redhat.com 
>>>>>>> <mailto:users@lists.openshift.redhat.com>
>>>>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
>>>>>>> <http://lists.openshift.redhat.com/openshiftmm/listinfo/users>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com <mailto:users@lists.openshift.redhat.com>
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
>> <http://lists.openshift.redhat.com/openshiftmm/listinfo/users>
> 
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: open shift env variables with slash characters

2016-09-12 Thread Julio Saura
do i need to do something to make the env variables i pass in the json when creating 
the RC available to all users? or shall i do it manually, dumping them into /etc/profile 
or the user's bash_profile prior to launching jboss?

thanks

> El 12 sept 2016, a las 10:45, Julio Saura  <mailto:jsa...@hiberus.com>> escribió:
> 
> hi
> 
> we have found the problem
> 
> the process inside the docker is launched with sudo to a non privileged user 
> por starting the jboss server  , don’t want the process to be launched as 
> root.
> 
> env variables are only available for the root user of the docker, not for the 
> user that launches jboss process, and so code finds em as null
> 
> Best regards
> 
> 
> 
> 
>> El 9 sept 2016, a las 17:49, Julio Saura > <mailto:jsa...@hiberus.com>> escribió:
>> 
>>> 
>>> El 9 sept 2016, a las 17:47, Aleksandar Kostadinov >> <mailto:akost...@redhat.com>> escribió:
>>> 
>>> Julio Saura wrote on 09/09/16 18:41:
>>>> 
>>>>> El 9 sept 2016, a las 17:39, Aleksandar Kostadinov >>>> <mailto:akost...@redhat.com>> escribió:
>>>>> 
>>>>> Julio Saura wrote on 09/09/16 17:09:
>>>>>> Hello
>>>>>> 
>>>>>> just to be sure ..
>>>>>> 
>>>>>> i need to pass some env variables containing @ and / characters
>>>>>> 
>>>>>> so far i did always scape them
>>>>>> 
>>>>>> for example
>>>>>> 
>>>>>>   {
>>>>>>  "name”:"URL",
>>>>>>  "value”:”http:\/\/www.example.com <http://www.example.com/>\/uri"
>>>>>>   },
>>>>> 
>>>>> This is not OpenShift specific. You need just correct JSON. As far as 
>>>>> ruby JSON parser says, there is no need to escape anything in the above 
>>>>> URL.
>>>> 
>>>> yeah i know is not open shift specific but since this an open shift group 
>>>> i though it would be the right way to answer this :)
>>>> 
>>>> ok, i will investigate further then why i am having problems in the app 
>>>> recovering variable values
>>> 
>>> You need to provide to the list more details of what you are seeing 
>>> exactly. I said JSON escaping is not specific to OpenSHift. But issue might 
>>> not be escaping itself.
>> 
>> yeah agree. let me do some investigations in the code i had .. i am sure is 
>> a code problem if escaping is not needed .. 
>> 
>> i am getting is a null pointer exception when recovering the env variables 
>> values.
>> 
>> i will share conclusions as soon as i have 
>> 
>> thanks
>> 
>>> 
>>>> thanks aleksandar
>>>> 
>>>> 
>>>>> 
>>>>>> 
>>>>>> this is the right way ? i am having problems recovering this variable on 
>>>>>> a java app in my pods
>>>>>> 
>>>>>> on shell if i echo the variable’s value shows the value ok.
>>>>>> 
>>>>>> thanks in advance
>>>>>> Best regards.
>>>>>> 
>>>>>> ___
>>>>>> users mailing list
>>>>>> users@lists.openshift.redhat.com 
>>>>>> <mailto:users@lists.openshift.redhat.com>
>>>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
>>>>>> <http://lists.openshift.redhat.com/openshiftmm/listinfo/users>
> ___
> users mailing list
> users@lists.openshift.redhat.com <mailto:users@lists.openshift.redhat.com>
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: open shift env variables with slash characters

2016-09-12 Thread Julio Saura
hi

we have found the problem

the process inside the docker is launched with sudo as a non-privileged user for 
starting the jboss server , we don’t want the process to be launched as root.

the env variables are only available to the root user of the docker, not to the 
user that launches the jboss process, and so the code finds them as null
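
for anyone hitting the same thing: sudo resets the environment by default ( the env_reset option ), so variables injected into the container are only visible to the entrypoint user, not to the command started via sudo. a rough sketch of two possible workarounds in the entrypoint, assuming the variable is called URL and the target user is jboss ( both just placeholders from this thread ):

# option 1: whitelist the variables in a sudoers drop-in so sudo keeps them,
# e.g. a file /etc/sudoers.d/jboss containing:
#   Defaults env_keep += "URL"
sudo -u jboss /etc/init.d/jboss-as start

# option 2: re-inject the variables explicitly when switching user
sudo -u jboss env URL="$URL" /etc/init.d/jboss-as start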

Best regards




> El 9 sept 2016, a las 17:49, Julio Saura  escribió:
> 
>> 
>> El 9 sept 2016, a las 17:47, Aleksandar Kostadinov  
>> escribió:
>> 
>> Julio Saura wrote on 09/09/16 18:41:
>>> 
>>>> El 9 sept 2016, a las 17:39, Aleksandar Kostadinov  
>>>> escribió:
>>>> 
>>>> Julio Saura wrote on 09/09/16 17:09:
>>>>> Hello
>>>>> 
>>>>> just to be sure ..
>>>>> 
>>>>> i need to pass some env variables containing @ and / characters
>>>>> 
>>>>> so far i did always scape them
>>>>> 
>>>>> for example
>>>>> 
>>>>>   {
>>>>>  "name”:"URL",
>>>>>  "value”:”http:\/\/www.example.com\/uri"
>>>>>   },
>>>> 
>>>> This is not OpenShift specific. You need just correct JSON. As far as ruby 
>>>> JSON parser says, there is no need to escape anything in the above URL.
>>> 
>>> yeah i know is not open shift specific but since this an open shift group i 
>>> though it would be the right way to answer this :)
>>> 
>>> ok, i will investigate further then why i am having problems in the app 
>>> recovering variable values
>> 
>> You need to provide to the list more details of what you are seeing exactly. 
>> I said JSON escaping is not specific to OpenSHift. But issue might not be 
>> escaping itself.
> 
> yeah agree. let me do some investigations in the code i had .. i am sure is a 
> code problem if escaping is not needed .. 
> 
> i am getting is a null pointer exception when recovering the env variables 
> values.
> 
> i will share conclusions as soon as i have 
> 
> thanks
> 
>> 
>>> thanks aleksandar
>>> 
>>> 
>>>> 
>>>>> 
>>>>> this is the right way ? i am having problems recovering this variable on 
>>>>> a java app in my pods
>>>>> 
>>>>> on shell if i echo the variable’s value shows the value ok.
>>>>> 
>>>>> thanks in advance
>>>>> Best regards.
>>>>> 
>>>>> ___
>>>>> users mailing list
>>>>> users@lists.openshift.redhat.com <mailto:users@lists.openshift.redhat.com>
>>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
>>>>> <http://lists.openshift.redhat.com/openshiftmm/listinfo/users>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


open shift env variables with slash characters

2016-09-09 Thread Julio Saura
Hello

just to be sure ..

i need to pass some env variables containing @ and / characters

so far i have always escaped them 

for example

 {
"name”:"URL",
"value”:”http:\/\/www.example.com\/uri"
 },


is this the right way? i am having problems recovering this variable in a java 
app in my pods

on the shell, echoing the variable shows the value ok.
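
( as the replies above point out, the escaping is not actually required; a plain entry like this sketch, with the same example URL, is already valid JSON: )

 {
    "name": "URL",
    "value": "http://www.example.com/uri"
 },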

thanks in advance
Best regards.

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: multiple master multiple etcd

2016-09-08 Thread Julio Saura
thanks john!


> El 7 sept 2016, a las 19:25, Skarbek, John  escribió:
> 
> On September 7, 2016 at 11:42:05, Julio Saura (jsa...@hiberus.com 
> <mailto:jsa...@hiberus.com>) wrote:
>> Hello 
>> 
>> i am about building a new cluster with 2 masters and 3 etcd servers por HA 
>> .. 
>> 
>> my doubt is that i think i read somewhere in doc it is not recommended to 
>> have the external etcd servers in the same nodes than masters are running 
>> 
>> is this true? 
> 
> Not necessarily, this it totally up to your own definition of how you'd like 
> to run your own infrastructure.  Openshift operates perfectly fine with this 
> configuration.
> 
>> 
>> 
>> what is the best approach? 2 masters in native HA + 3 different nodes por 
>> ectd or could it be possible to have just 3 nodes por master + etcd running 
>> along? 
> 
>> 
>> 
>> thanks and sorry for the silly question. 
>> 
>> Best regards 
>> 
>> ___ 
>> users mailing list 
>> users@lists.openshift.redhat.com <mailto:users@lists.openshift.redhat.com> 
>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openshift.redhat.com_openshiftmm_listinfo_users&d=DQICAg&c=_hRq4mqlUmqpqlyQ5hkoDXIVh6I6pxfkkNxQuL0p-Z0&r=8IlWeJZqFtf8Tvx1PDV9NsLfM_M0oNfzEXXNp-tpx74&m=D7ScPrp6NR9vhBqREtBBwEPlRAno4LhgSIRHfJVzJSY&s=3m73tyKEVo0D-c2RYOx-hwObkVoHiFqHE4RF1Ef7vEo&e=
>>  
>> <https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openshift.redhat.com_openshiftmm_listinfo_users&d=DQICAg&c=_hRq4mqlUmqpqlyQ5hkoDXIVh6I6pxfkkNxQuL0p-Z0&r=8IlWeJZqFtf8Tvx1PDV9NsLfM_M0oNfzEXXNp-tpx74&m=D7ScPrp6NR9vhBqREtBBwEPlRAno4LhgSIRHfJVzJSY&s=3m73tyKEVo0D-c2RYOx-hwObkVoHiFqHE4RF1Ef7vEo&e=>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


multiple master multiple etcd

2016-09-07 Thread Julio Saura
Hello

i am about to build a new cluster with 2 masters and 3 etcd servers for HA ..

my doubt is that i think i read somewhere in the doc that it is not recommended to have 
the external etcd servers on the same nodes the masters are running on

is this true?

what is the best approach? 2 masters in native HA + 3 different nodes for etcd, 
or would it be possible to have just 3 nodes with master + etcd running together?
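
for illustration, these are the openshift-ansible inventory groups involved; a minimal sketch with placeholder hostnames, where the choice between the two layouts is just which hosts end up under [etcd]:

[OSEv3:children]
masters
etcd
nodes

[masters]
master01.example.com
master02.example.com

[etcd]
etcd01.example.com
etcd02.example.com
etcd03.example.com

[nodes]
node01.example.com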

thanks and sorry for the silly question.

Best regards

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: weird issue with etcd

2016-06-21 Thread Julio Saura
etcdctl -C https://openshift-balancer01:2379,https://openshift-balancer02:2379 
--ca-file=/etc/origin/master/master.etcd-ca.crt 
--cert-file=/etc/origin/master/master.etcd-client.crt 
--key-file=/etc/origin/master/master.etcd-client.key member list


12c8a31c8fcae0d4: name=openshift-balancer02 peerURLs=https://:2380 
clientURLs=https://:2379
bf80ee3a26e8772c: name=openshift-balancer01 peerURLs=https://:2380 
clientURLs=https://:2379



member list is ok

cluster health tells me what i already know :(

etcdctl -C https://openshift-balancer01:2379,https://openshift-balancer02:2379 
--ca-file=/etc/origin/master/master.etcd-ca.crt 
--cert-file=/etc/origin/master/master.etcd-client.crt 
--key-file=/etc/origin/master/master.etcd-client.key cluster-health

member 12c8a31c8fcae0d4 is unhealthy: got unhealthy result from 
https://:2379
failed to check the health of member bf80ee3a26e8772c on https://:2379: Get 
https://:2379/health: dial tcp :2379: i/o timeout
member bf80ee3a26e8772c is unreachable: [https://:2379] are all unreachable

the "main etcd" is halted right now 

Thanks!





> El 21 jun 2016, a las 17:45, Julio Saura  escribió:
> 
> regarding the certs, i used ansible to install origin so i guess ansible 
> should have done it right …
> 
> 
>> El 21 jun 2016, a las 15:29, Julio Saura > <mailto:jsa...@hiberus.com>> escribió:
>> 
>> hello
>> 
>> yes, they are synced with and internal NTP server .. 
>> 
>> gonna try ectdctl thanks!
>> 
>> 
>>> El 21 jun 2016, a las 15:20, Jason DeTiberus >> <mailto:jdeti...@redhat.com>> escribió:
>>> 
>>> On Tue, Jun 21, 2016 at 7:28 AM, Julio Saura >> <mailto:jsa...@hiberus.com>> wrote:
>>>> yes
>>>> 
>>>> working
>>>> 
>>>> [root@openshift-master01 ~]# telnet X 2380
>>>> Trying ...
>>>> Connected to .
>>>> Escape character is '^]'.
>>>> ^CConnection closed by foreign host.
>>> 
>>> 
>>> Have you verified that time is syncd between the hosts? I'd also check
>>> the peer certs between the hosts... Can you connect to the hosts using
>>> etcdctl? There should be a status command that will give you more
>>> information.
>>> 
>>>> 
>>>> 
>>>> El 21 jun 2016, a las 13:21, Jason DeTiberus >>> <mailto:jdeti...@redhat.com>> escribió:
>>>> 
>>>> Did you verify connectivity over the peering port as well (2380)?
>>>> 
>>>> On Jun 21, 2016 7:17 AM, "Julio Saura" >>> <mailto:jsa...@hiberus.com>> wrote:
>>>>> 
>>>>> hello
>>>>> 
>>>>> same problem
>>>>> 
>>>>> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]:
>>>>> F0621 13:11:03.155246   59618 auth.go:141] error #0: dial tcp :2379:
>>>>> connection refused ( the one i rebooted )
>>>>> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]:
>>>>> error #1: client: etcd member https://:2379 <https://:2379/> has 
>>>>> no leader
>>>>> 
>>>>> i rebooted the etcd server and my master is not able to use other one
>>>>> 
>>>>> still able to connect from both masters using telnet to the etcd port ..
>>>>> 
>>>>> any clue? this is weird.
>>>>> 
>>>>> 
>>>>>> El 14 jun 2016, a las 9:28, Julio Saura >>>>> <mailto:jsa...@hiberus.com>> escribió:
>>>>>> 
>>>>>> hello
>>>>>> 
>>>>>> yes is correct .. it was the first thing i checked ..
>>>>>> 
>>>>>> first master
>>>>>> 
>>>>>> etcdClientInfo:
>>>>>> ca: master.etcd-ca.crt
>>>>>> certFile: master.etcd-client.crt
>>>>>> keyFile: master.etcd-client.key
>>>>>> urls:
>>>>>>  - https://openshift-balancer01:2379 <https://openshift-balancer01:2379/>
>>>>>>  - https://openshift-balancer02:2379 <https://openshift-balancer02:2379/>
>>>>>> 
>>>>>> 
>>>>>> second master
>>>>>> 
>>>>>> etcdClientInfo:
>>>>>> ca: master.etcd-ca.crt
>>>>>> certFile: master.etcd-client.crt
>>>>>>

Re: weird issue with etcd

2016-06-21 Thread Julio Saura
regarding the certs, i used ansible to install origin so i guess ansible should 
have done it right …


> El 21 jun 2016, a las 15:29, Julio Saura  escribió:
> 
> hello
> 
> yes, they are synced with and internal NTP server .. 
> 
> gonna try ectdctl thanks!
> 
> 
>> El 21 jun 2016, a las 15:20, Jason DeTiberus > <mailto:jdeti...@redhat.com>> escribió:
>> 
>> On Tue, Jun 21, 2016 at 7:28 AM, Julio Saura > <mailto:jsa...@hiberus.com>> wrote:
>>> yes
>>> 
>>> working
>>> 
>>> [root@openshift-master01 ~]# telnet X 2380
>>> Trying ...
>>> Connected to .
>>> Escape character is '^]'.
>>> ^CConnection closed by foreign host.
>> 
>> 
>> Have you verified that time is syncd between the hosts? I'd also check
>> the peer certs between the hosts... Can you connect to the hosts using
>> etcdctl? There should be a status command that will give you more
>> information.
>> 
>>> 
>>> 
>>> El 21 jun 2016, a las 13:21, Jason DeTiberus >> <mailto:jdeti...@redhat.com>> escribió:
>>> 
>>> Did you verify connectivity over the peering port as well (2380)?
>>> 
>>> On Jun 21, 2016 7:17 AM, "Julio Saura" >> <mailto:jsa...@hiberus.com>> wrote:
>>>> 
>>>> hello
>>>> 
>>>> same problem
>>>> 
>>>> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]:
>>>> F0621 13:11:03.155246   59618 auth.go:141] error #0: dial tcp :2379:
>>>> connection refused ( the one i rebooted )
>>>> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]:
>>>> error #1: client: etcd member https://:2379 <https://:2379/> has 
>>>> no leader
>>>> 
>>>> i rebooted the etcd server and my master is not able to use other one
>>>> 
>>>> still able to connect from both masters using telnet to the etcd port ..
>>>> 
>>>> any clue? this is weird.
>>>> 
>>>> 
>>>>> El 14 jun 2016, a las 9:28, Julio Saura >>>> <mailto:jsa...@hiberus.com>> escribió:
>>>>> 
>>>>> hello
>>>>> 
>>>>> yes is correct .. it was the first thing i checked ..
>>>>> 
>>>>> first master
>>>>> 
>>>>> etcdClientInfo:
>>>>> ca: master.etcd-ca.crt
>>>>> certFile: master.etcd-client.crt
>>>>> keyFile: master.etcd-client.key
>>>>> urls:
>>>>>  - https://openshift-balancer01:2379 <https://openshift-balancer01:2379/>
>>>>>  - https://openshift-balancer02:2379 <https://openshift-balancer02:2379/>
>>>>> 
>>>>> 
>>>>> second master
>>>>> 
>>>>> etcdClientInfo:
>>>>> ca: master.etcd-ca.crt
>>>>> certFile: master.etcd-client.crt
>>>>> keyFile: master.etcd-client.key
>>>>> urls:
>>>>>  - https://openshift-balancer01:2379 <https://openshift-balancer01:2379/>
>>>>>  - https://openshift-balancer02:2379 <https://openshift-balancer02:2379/>
>>>>> 
>>>>> dns names resolve in both masters
>>>>> 
>>>>> Best regards and thanks!
>>>>> 
>>>>> 
>>>>>> El 13 jun 2016, a las 18:45, Scott Dodson >>>>> <mailto:sdod...@redhat.com>>
>>>>>> escribió:
>>>>>> 
>>>>>> Can you verify the connection information etcdClientInfo section in
>>>>>> /etc/origin/master/master-config.yaml is correct?
>>>>>> 
>>>>>> On Mon, Jun 13, 2016 at 11:56 AM, Julio Saura >>>>> <mailto:jsa...@hiberus.com>>
>>>>>> wrote:
>>>>>>> hello
>>>>>>> 
>>>>>>> yes.. i have a external balancer in front of my masters for HA as doc
>>>>>>> says.
>>>>>>> 
>>>>>>> i don’t have any balancer in front of my etcd servers for masters
>>>>>>> connection, it’s not necessary right? masters will try all etcd 
>>>>>>> availables
>>>>>>> it one is down right?
>>>>>>> 
>>>>>>> i don’t know why but none of my masters were able to connect to the
>>

Re: weird issue with etcd

2016-06-21 Thread Julio Saura
hello

yes, i only have two .. i know 3 is the recommended number but i guessed that it might also 
work with 2 :/

yes, both etcd servers can connect to each other on the peer port .. i have 
also checked that ..

so maybe it is because i don't have three etcd?
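
for what it's worth, etcd needs a majority of its members up to elect a leader, so a two-member cluster cannot tolerate losing either member:

members   quorum   failures tolerated
   2         2            0
   3         2            1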

thanks scott!!


> El 21 jun 2016, a las 15:19, Scott Dodson  escribió:
> 
> Julio,
> 
> First, it looks like you've only got two etcd hosts, in order to
> tolerate failure of a single host you'll want three.
> From your master config it looks like your two etcd hosts are
> openshift-balancer01 and openshift-balancer02, can each of those hosts
> connect to each other on port 2380? They will connect directly to each
> other for clustering purposes, then the masters will connect to each
> of the etcd hosts on port 2379 for client connectivity.
> 
> --
> Scott
> 
> On Tue, Jun 21, 2016 at 7:28 AM, Julio Saura  wrote:
>> yes
>> 
>> working
>> 
>> [root@openshift-master01 ~]# telnet X 2380
>> Trying ...
>> Connected to .
>> Escape character is '^]'.
>> ^CConnection closed by foreign host.
>> 
>> 
>> El 21 jun 2016, a las 13:21, Jason DeTiberus  escribió:
>> 
>> Did you verify connectivity over the peering port as well (2380)?
>> 
>> On Jun 21, 2016 7:17 AM, "Julio Saura"  wrote:
>>> 
>>> hello
>>> 
>>> same problem
>>> 
>>> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]:
>>> F0621 13:11:03.155246   59618 auth.go:141] error #0: dial tcp :2379:
>>> connection refused ( the one i rebooted )
>>> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]:
>>> error #1: client: etcd member https://YYYY:2379 has no leader
>>> 
>>> i rebooted the etcd server and my master is not able to use other one
>>> 
>>> still able to connect from both masters using telnet to the etcd port ..
>>> 
>>> any clue? this is weird.
>>> 
>>> 
>>>> El 14 jun 2016, a las 9:28, Julio Saura  escribió:
>>>> 
>>>> hello
>>>> 
>>>> yes is correct .. it was the first thing i checked ..
>>>> 
>>>> first master
>>>> 
>>>> etcdClientInfo:
>>>> ca: master.etcd-ca.crt
>>>> certFile: master.etcd-client.crt
>>>> keyFile: master.etcd-client.key
>>>> urls:
>>>>  - https://openshift-balancer01:2379
>>>>  - https://openshift-balancer02:2379
>>>> 
>>>> 
>>>> second master
>>>> 
>>>> etcdClientInfo:
>>>> ca: master.etcd-ca.crt
>>>> certFile: master.etcd-client.crt
>>>> keyFile: master.etcd-client.key
>>>> urls:
>>>>  - https://openshift-balancer01:2379
>>>>  - https://openshift-balancer02:2379
>>>> 
>>>> dns names resolve in both masters
>>>> 
>>>> Best regards and thanks!
>>>> 
>>>> 
>>>>> El 13 jun 2016, a las 18:45, Scott Dodson 
>>>>> escribió:
>>>>> 
>>>>> Can you verify the connection information etcdClientInfo section in
>>>>> /etc/origin/master/master-config.yaml is correct?
>>>>> 
>>>>> On Mon, Jun 13, 2016 at 11:56 AM, Julio Saura 
>>>>> wrote:
>>>>>> hello
>>>>>> 
>>>>>> yes.. i have a external balancer in front of my masters for HA as doc
>>>>>> says.
>>>>>> 
>>>>>> i don’t have any balancer in front of my etcd servers for masters
>>>>>> connection, it’s not necessary right? masters will try all etcd 
>>>>>> availables
>>>>>> it one is down right?
>>>>>> 
>>>>>> i don’t know why but none of my masters were able to connect to the
>>>>>> second etcd instance, but using telnet from their shell worked .. so it 
>>>>>> was
>>>>>> not a net o fw issue..
>>>>>> 
>>>>>> 
>>>>>> best regards.
>>>>>> 
>>>>>>> El 13 jun 2016, a las 17:53, Clayton Coleman 
>>>>>>> escribió:
>>>>>>> 
>>>>>>> I have not seen that particular issue.  Do you have a load balancer
>>>>>>> in
>>>>>>> between your masters and etcd?
>>>>>>> 
>>>>>>> On Fri, Jun 10, 2016 at 5:5

Re: weird issue with etcd

2016-06-21 Thread Julio Saura
hello

yes, they are synced with an internal NTP server .. 

gonna try etcdctl, thanks!


> El 21 jun 2016, a las 15:20, Jason DeTiberus  escribió:
> 
> On Tue, Jun 21, 2016 at 7:28 AM, Julio Saura  <mailto:jsa...@hiberus.com>> wrote:
>> yes
>> 
>> working
>> 
>> [root@openshift-master01 ~]# telnet X 2380
>> Trying ...
>> Connected to .
>> Escape character is '^]'.
>> ^CConnection closed by foreign host.
> 
> 
> Have you verified that time is syncd between the hosts? I'd also check
> the peer certs between the hosts... Can you connect to the hosts using
> etcdctl? There should be a status command that will give you more
> information.
> 
>> 
>> 
>> El 21 jun 2016, a las 13:21, Jason DeTiberus  escribió:
>> 
>> Did you verify connectivity over the peering port as well (2380)?
>> 
>> On Jun 21, 2016 7:17 AM, "Julio Saura"  wrote:
>>> 
>>> hello
>>> 
>>> same problem
>>> 
>>> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]:
>>> F0621 13:11:03.155246   59618 auth.go:141] error #0: dial tcp :2379:
>>> connection refused ( the one i rebooted )
>>> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]:
>>> error #1: client: etcd member https://:2379 has no leader
>>> 
>>> i rebooted the etcd server and my master is not able to use other one
>>> 
>>> still able to connect from both masters using telnet to the etcd port ..
>>> 
>>> any clue? this is weird.
>>> 
>>> 
>>>> El 14 jun 2016, a las 9:28, Julio Saura  escribió:
>>>> 
>>>> hello
>>>> 
>>>> yes is correct .. it was the first thing i checked ..
>>>> 
>>>> first master
>>>> 
>>>> etcdClientInfo:
>>>> ca: master.etcd-ca.crt
>>>> certFile: master.etcd-client.crt
>>>> keyFile: master.etcd-client.key
>>>> urls:
>>>>  - https://openshift-balancer01:2379
>>>>  - https://openshift-balancer02:2379
>>>> 
>>>> 
>>>> second master
>>>> 
>>>> etcdClientInfo:
>>>> ca: master.etcd-ca.crt
>>>> certFile: master.etcd-client.crt
>>>> keyFile: master.etcd-client.key
>>>> urls:
>>>>  - https://openshift-balancer01:2379
>>>>  - https://openshift-balancer02:2379
>>>> 
>>>> dns names resolve in both masters
>>>> 
>>>> Best regards and thanks!
>>>> 
>>>> 
>>>>> El 13 jun 2016, a las 18:45, Scott Dodson 
>>>>> escribió:
>>>>> 
>>>>> Can you verify the connection information etcdClientInfo section in
>>>>> /etc/origin/master/master-config.yaml is correct?
>>>>> 
>>>>> On Mon, Jun 13, 2016 at 11:56 AM, Julio Saura 
>>>>> wrote:
>>>>>> hello
>>>>>> 
>>>>>> yes.. i have a external balancer in front of my masters for HA as doc
>>>>>> says.
>>>>>> 
>>>>>> i don’t have any balancer in front of my etcd servers for masters
>>>>>> connection, it’s not necessary right? masters will try all etcd 
>>>>>> availables
>>>>>> it one is down right?
>>>>>> 
>>>>>> i don’t know why but none of my masters were able to connect to the
>>>>>> second etcd instance, but using telnet from their shell worked .. so it 
>>>>>> was
>>>>>> not a net o fw issue..
>>>>>> 
>>>>>> 
>>>>>> best regards.
>>>>>> 
>>>>>>> El 13 jun 2016, a las 17:53, Clayton Coleman 
>>>>>>> escribió:
>>>>>>> I have not seen that particular issue.  Do you have a load balancer
>>>>>>> in
>>>>>>> between your masters and etcd?
>>>>>>> 
>>>>>>> On Fri, Jun 10, 2016 at 5:55 AM, Julio Saura 
>>>>>>> wrote:
>>>>>>>> hello
>>>>>>>> 
>>>>>>>> i have an origin 3.1 installation working cool so far
>>>>>>>> 
>>>>>>>> today one of my etcd nodes ( 1 of 2 ) crashed and i started having
>>>>>>>> problems..
>>>

Re: weird issue with etcd

2016-06-21 Thread Julio Saura
yes

working

[root@openshift-master01 ~]# telnet X 2380
Trying ...
Connected to .
Escape character is '^]'.
^CConnection closed by foreign host.


> El 21 jun 2016, a las 13:21, Jason DeTiberus  <mailto:jdeti...@redhat.com>> escribió:
> 
> Did you verify connectivity over the peering port as well (2380)?
> 
> On Jun 21, 2016 7:17 AM, "Julio Saura"  <mailto:jsa...@hiberus.com>> wrote:
> hello
> 
> same problem
> 
> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]: F0621 
> 13:11:03.155246   59618 auth.go:141] error #0: dial tcp :2379: connection 
> refused ( the one i rebooted )
> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]: error 
> #1: client: etcd member https://:2379 <https://:2379/> has no leader
> 
> i rebooted the etcd server and my master is not able to use other one
> 
> still able to connect from both masters using telnet to the etcd port ..
> 
> any clue? this is weird.
> 
> 
> > El 14 jun 2016, a las 9:28, Julio Saura  > <mailto:jsa...@hiberus.com>> escribió:
> >
> > hello
> >
> > yes is correct .. it was the first thing i checked ..
> >
> > first master
> >
> > etcdClientInfo:
> > ca: master.etcd-ca.crt
> > certFile: master.etcd-client.crt
> > keyFile: master.etcd-client.key
> > urls:
> >   - https://openshift-balancer01:2379 <https://openshift-balancer01:2379/>
> >   - https://openshift-balancer02:2379 <https://openshift-balancer02:2379/>
> >
> >
> > second master
> >
> > etcdClientInfo:
> > ca: master.etcd-ca.crt
> > certFile: master.etcd-client.crt
> > keyFile: master.etcd-client.key
> > urls:
> >   - https://openshift-balancer01:2379 <https://openshift-balancer01:2379/>
> >   - https://openshift-balancer02:2379 <https://openshift-balancer02:2379/>
> >
> > dns names resolve in both masters
> >
> > Best regards and thanks!
> >
> >
> >> El 13 jun 2016, a las 18:45, Scott Dodson  >> <mailto:sdod...@redhat.com>> escribió:
> >>
> >> Can you verify the connection information etcdClientInfo section in
> >> /etc/origin/master/master-config.yaml is correct?
> >>
> >> On Mon, Jun 13, 2016 at 11:56 AM, Julio Saura  >> <mailto:jsa...@hiberus.com>> wrote:
> >>> hello
> >>>
> >>> yes.. i have a external balancer in front of my masters for HA as doc 
> >>> says.
> >>>
> >>> i don’t have any balancer in front of my etcd servers for masters 
> >>> connection, it’s not necessary right? masters will try all etcd 
> >>> availables it one is down right?
> >>>
> >>> i don’t know why but none of my masters were able to connect to the 
> >>> second etcd instance, but using telnet from their shell worked .. so it 
> >>> was not a net o fw issue..
> >>>
> >>>
> >>> best regards.
> >>>
> >>>> El 13 jun 2016, a las 17:53, Clayton Coleman  >>>> <mailto:ccole...@redhat.com>> escribió:
> >>>>
> >>>> I have not seen that particular issue.  Do you have a load balancer in
> >>>> between your masters and etcd?
> >>>>
> >>>> On Fri, Jun 10, 2016 at 5:55 AM, Julio Saura  >>>> <mailto:jsa...@hiberus.com>> wrote:
> >>>>> hello
> >>>>>
> >>>>> i have an origin 3.1 installation working cool so far
> >>>>>
> >>>>> today one of my etcd nodes ( 1 of 2 ) crashed and i started having 
> >>>>> problems..
> >>>>>
> >>>>> i noticed on one of my master nodes that it was not able to connect to 
> >>>>> second etcd server and that the etcd server was not able to promote as 
> >>>>> leader..
> >>>>>
> >>>>>
> >>>>> un 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 is 
> >>>>> starting a new election at term 10048
> >>>>> jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 
> >>>>> became candidate at term 10049
> >>>>> jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 
> >>>>> received vote from 12c8a31c8fcae0d4 at term 10049
> >>>>> jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 
> >>>>> [logterm: 8, index: 460

Re: weird issue with etcd

2016-06-21 Thread Julio Saura
hello

same problem

jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]: F0621 
13:11:03.155246   59618 auth.go:141] error #0: dial tcp :2379: connection 
refused ( the one i rebooted )
jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]: error 
#1: client: etcd member https://:2379 has no leader

i rebooted the etcd server and my master is not able to use the other one

still able to connect from both masters using telnet to the etcd port ..

any clue? this is weird.


> El 14 jun 2016, a las 9:28, Julio Saura  escribió:
> 
> hello
> 
> yes is correct .. it was the first thing i checked ..
> 
> first master
> 
> etcdClientInfo:
> ca: master.etcd-ca.crt
> certFile: master.etcd-client.crt
> keyFile: master.etcd-client.key
> urls:
>   - https://openshift-balancer01:2379
>   - https://openshift-balancer02:2379
> 
> 
> second master
> 
> etcdClientInfo:
> ca: master.etcd-ca.crt
> certFile: master.etcd-client.crt
> keyFile: master.etcd-client.key
> urls:
>   - https://openshift-balancer01:2379
>   - https://openshift-balancer02:2379
> 
> dns names resolve in both masters
> 
> Best regards and thanks!
> 
> 
>> El 13 jun 2016, a las 18:45, Scott Dodson  escribió:
>> 
>> Can you verify the connection information etcdClientInfo section in
>> /etc/origin/master/master-config.yaml is correct?
>> 
>> On Mon, Jun 13, 2016 at 11:56 AM, Julio Saura  wrote:
>>> hello
>>> 
>>> yes.. i have a external balancer in front of my masters for HA as doc says.
>>> 
>>> i don’t have any balancer in front of my etcd servers for masters 
>>> connection, it’s not necessary right? masters will try all etcd availables 
>>> it one is down right?
>>> 
>>> i don’t know why but none of my masters were able to connect to the second 
>>> etcd instance, but using telnet from their shell worked .. so it was not a 
>>> net o fw issue..
>>> 
>>> 
>>> best regards.
>>> 
>>>> El 13 jun 2016, a las 17:53, Clayton Coleman  
>>>> escribió:
>>>> 
>>>> I have not seen that particular issue.  Do you have a load balancer in
>>>> between your masters and etcd?
>>>> 
>>>> On Fri, Jun 10, 2016 at 5:55 AM, Julio Saura  wrote:
>>>>> hello
>>>>> 
>>>>> i have an origin 3.1 installation working cool so far
>>>>> 
>>>>> today one of my etcd nodes ( 1 of 2 ) crashed and i started having 
>>>>> problems..
>>>>> 
>>>>> i noticed on one of my master nodes that it was not able to connect to 
>>>>> second etcd server and that the etcd server was not able to promote as 
>>>>> leader..
>>>>> 
>>>>> 
>>>>> un 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 is 
>>>>> starting a new election at term 10048
>>>>> jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 became 
>>>>> candidate at term 10049
>>>>> jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 
>>>>> received vote from 12c8a31c8fcae0d4 at term 10049
>>>>> jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 
>>>>> [logterm: 8, index: 4600461] sent vote request to bf80ee3a26e8772c at 
>>>>> term 10049
>>>>> jun 10 11:09:56 openshift-balancer02 etcd[47218]: got unexpected response 
>>>>> error (etcdserver: request timed out)
>>>>> 
>>>>> my masters logged that they were not able to connect to the etcd
>>>>> 
>>>>> er.go:218] unexpected ListAndWatch error: pkg/storage/cacher.go:161: 
>>>>> Failed to list *extensions.Job: error #0: dial tcp X.X.X.X:2379: 
>>>>> connection refused
>>>>> 
>>>>> so i tried a simple test, just telnet from masters to the etcd node port 
>>>>> ..
>>>>> 
>>>>> [root@openshift-master01 log]# telnet X.X.X.X 2379
>>>>> Trying X.X.X.X...
>>>>> Connected to X.X.X.X.
>>>>> Escape character is '^]’
>>>>> 
>>>>> so i was able to connect from masters.
>>>>> 
>>>>> i was not able to recover my oc masters until the first etcd node 
>>>>> rebooted .. so it seems my etcd “cluster” is not working without the 
>>>>> first node ..
>>>>> 
>>>>> any clue?
>>>>> 
>>>>> thanks
>>>>> 
>>>>> 
>>>>> ___
>>>>> users mailing list
>>>>> users@lists.openshift.redhat.com
>>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>> 
>>> 
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> 
> 
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: weird issue with etcd

2016-06-14 Thread Julio Saura
hello

yes, it is correct .. it was the first thing i checked ..

first master

etcdClientInfo:
 ca: master.etcd-ca.crt
 certFile: master.etcd-client.crt
 keyFile: master.etcd-client.key
 urls:
   - https://openshift-balancer01:2379
   - https://openshift-balancer02:2379


second master

etcdClientInfo:
 ca: master.etcd-ca.crt
 certFile: master.etcd-client.crt
 keyFile: master.etcd-client.key
 urls:
   - https://openshift-balancer01:2379
   - https://openshift-balancer02:2379

dns names resolve in both masters

Best regards and thanks!


> El 13 jun 2016, a las 18:45, Scott Dodson  escribió:
> 
> Can you verify the connection information etcdClientInfo section in
> /etc/origin/master/master-config.yaml is correct?
> 
> On Mon, Jun 13, 2016 at 11:56 AM, Julio Saura  wrote:
>> hello
>> 
>> yes.. i have a external balancer in front of my masters for HA as doc says.
>> 
>> i don’t have any balancer in front of my etcd servers for masters 
>> connection, it’s not necessary right? masters will try all etcd availables 
>> it one is down right?
>> 
>> i don’t know why but none of my masters were able to connect to the second 
>> etcd instance, but using telnet from their shell worked .. so it was not a 
>> net o fw issue..
>> 
>> 
>> best regards.
>> 
>>> El 13 jun 2016, a las 17:53, Clayton Coleman  escribió:
>>> 
>>> I have not seen that particular issue.  Do you have a load balancer in
>>> between your masters and etcd?
>>> 
>>> On Fri, Jun 10, 2016 at 5:55 AM, Julio Saura  wrote:
>>>> hello
>>>> 
>>>> i have an origin 3.1 installation working cool so far
>>>> 
>>>> today one of my etcd nodes ( 1 of 2 ) crashed and i started having 
>>>> problems..
>>>> 
>>>> i noticed on one of my master nodes that it was not able to connect to 
>>>> second etcd server and that the etcd server was not able to promote as 
>>>> leader..
>>>> 
>>>> 
>>>> un 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 is 
>>>> starting a new election at term 10048
>>>> jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 became 
>>>> candidate at term 10049
>>>> jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 
>>>> received vote from 12c8a31c8fcae0d4 at term 10049
>>>> jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 
>>>> [logterm: 8, index: 4600461] sent vote request to bf80ee3a26e8772c at term 
>>>> 10049
>>>> jun 10 11:09:56 openshift-balancer02 etcd[47218]: got unexpected response 
>>>> error (etcdserver: request timed out)
>>>> 
>>>> my masters logged that they were not able to connect to the etcd
>>>> 
>>>> er.go:218] unexpected ListAndWatch error: pkg/storage/cacher.go:161: 
>>>> Failed to list *extensions.Job: error #0: dial tcp X.X.X.X:2379: 
>>>> connection refused
>>>> 
>>>> so i tried a simple test, just telnet from masters to the etcd node port ..
>>>> 
>>>> [root@openshift-master01 log]# telnet X.X.X.X 2379
>>>> Trying X.X.X.X...
>>>> Connected to X.X.X.X.
>>>> Escape character is '^]’
>>>> 
>>>> so i was able to connect from masters.
>>>> 
>>>> i was not able to recover my oc masters until the first etcd node rebooted 
>>>> .. so it seems my etcd “cluster” is not working without the first node ..
>>>> 
>>>> any clue?
>>>> 
>>>> thanks
>>>> 
>>>> 
>>>> ___
>>>> users mailing list
>>>> users@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> 
>> 
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: weird issue with etcd

2016-06-13 Thread Julio Saura
hello

yes.. i have an external balancer in front of my masters for HA, as the doc says.

i don’t have any balancer in front of my etcd servers for the masters' connection, 
it’s not necessary right? the masters will try all the available etcd servers if one is down, 
right?

i don’t know why but none of my masters were able to connect to the second etcd 
instance, while using telnet from their shell worked .. so it was not a net or fw 
issue..


best regards.

> El 13 jun 2016, a las 17:53, Clayton Coleman  escribió:
> 
> I have not seen that particular issue.  Do you have a load balancer in
> between your masters and etcd?
> 
> On Fri, Jun 10, 2016 at 5:55 AM, Julio Saura  wrote:
>> hello
>> 
>> i have an origin 3.1 installation working cool so far
>> 
>> today one of my etcd nodes ( 1 of 2 ) crashed and i started having problems..
>> 
>> i noticed on one of my master nodes that it was not able to connect to 
>> second etcd server and that the etcd server was not able to promote as 
>> leader..
>> 
>> 
>> un 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 is 
>> starting a new election at term 10048
>> jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 became 
>> candidate at term 10049
>> jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 received 
>> vote from 12c8a31c8fcae0d4 at term 10049
>> jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 [logterm: 
>> 8, index: 4600461] sent vote request to bf80ee3a26e8772c at term 10049
>> jun 10 11:09:56 openshift-balancer02 etcd[47218]: got unexpected response 
>> error (etcdserver: request timed out)
>> 
>> my masters logged that they were not able to connect to the etcd
>> 
>> er.go:218] unexpected ListAndWatch error: pkg/storage/cacher.go:161: Failed 
>> to list *extensions.Job: error #0: dial tcp X.X.X.X:2379: connection refused
>> 
>> so i tried a simple test, just telnet from masters to the etcd node port ..
>> 
>> [root@openshift-master01 log]# telnet X.X.X.X 2379
>> Trying X.X.X.X...
>> Connected to X.X.X.X.
>> Escape character is '^]’
>> 
>> so i was able to connect from masters.
>> 
>> i was not able to recover my oc masters until the first etcd node rebooted 
>> .. so it seems my etcd “cluster” is not working without the first node ..
>> 
>> any clue?
>> 
>> thanks
>> 
>> 
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


weird issue with etcd

2016-06-10 Thread Julio Saura
hello

i have an origin 3.1 installation working cool so far

today one of my etcd nodes ( 1 of 2 ) crashed and i started having problems..

i noticed on one of my master nodes that it was not able to connect to the second 
etcd server and that the etcd server was not able to be promoted to leader..


jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 is starting a 
new election at term 10048
jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 became 
candidate at term 10049
jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 received 
vote from 12c8a31c8fcae0d4 at term 10049
jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 [logterm: 8, 
index: 4600461] sent vote request to bf80ee3a26e8772c at term 10049
jun 10 11:09:56 openshift-balancer02 etcd[47218]: got unexpected response error 
(etcdserver: request timed out)

my masters logged that they were not able to connect to the etcd

er.go:218] unexpected ListAndWatch error: pkg/storage/cacher.go:161: Failed to 
list *extensions.Job: error #0: dial tcp X.X.X.X:2379: connection refused

so i tried a simple test, just telnet from masters to the etcd node port ..

[root@openshift-master01 log]# telnet X.X.X.X 2379
Trying X.X.X.X...
Connected to X.X.X.X.
Escape character is '^]’

so i was able to connect from masters. 

i was not able to recover my oc masters until the first etcd node rebooted .. 
so it seems my etcd “cluster” is not working without the first node ..

any clue?

thanks


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Openshift failing to start on second master potential HAProxy issue

2016-05-24 Thread Julio Saura
hello

just in case it helps, i have just deployed a new installation like this and i can 
confirm that using one of the masters as the LB does not work .. you have to deploy the LB on 
another machine rather than on a master .. 

i used a small VM in my environment as a dedicated load balancer for the masters and 
it worked without any problem
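
for reference, in the openshift-ansible inventory that just means a dedicated host in the [lb] group ( with lb also listed under [OSEv3:children] ); a sketch with a placeholder hostname:

[lb]
lb01.example.com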

hope it helps!

Best regards

> El 24 may 2016, a las 16:52, Scott Dodson  escribió:
> 
> Ok, so the master that's failing is also your load balancer defined in your 
> [lb] group, correct? I'd suggest not using one of your masters as the load 
> balancer if at all possible. There may be enough knobs in the inventory to 
> allow this to work but I don't think this is a setup we've tested.
> 
> On Tue, May 24, 2016 at 10:16 AM, Ronan O Keeffe  > wrote:
> HI Scott, 
> 
> I have deleted the installs and am starting fresh. What I ran earlier was
> lsof -i :8443, which I assume will be identical to lsof -i4 :8443 as IPv6 is 
> disabled. 
> 
> I am re-installing now from a template and can test that exact command in a 
> few mins. 
> 
> lsof -i :8443
> 
> COMMAND   PIDUSER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
> haproxy 16621 haproxy6u  IPv4   62049  0t0  TCP *:pcsync-https 
> (LISTEN)
> haproxy 16621 haproxy   26u  IPv4 8617543  0t0  TCP 
> master2.openshift:pcsync-https->master2.openshift:36691 (CLOSE_WAIT)
> haproxy 16621 haproxy   27u  IPv4 8617545  0t0  TCP 
> master2.openshift:57968->master2.openshift:pcsync-https (FIN_WAIT2)
> haproxy 16621 haproxy   28u  IPv4 8617546  0t0  TCP 
> master2.openshift:57969->master2.openshift:pcsync-https (FIN_WAIT2)
> haproxy 16621 haproxy   29u  IPv4 8617547  0t0  TCP 
> master2.openshift:57970->master2.openshift:pcsync-https (FIN_WAIT2)
> haproxy 16621 haproxy   30u  IPv4 8617548  0t0  TCP 
> master2.openshift:57971->master2.openshift:pcsync-https (FIN_WAIT2)
> haproxy 16621 haproxy   31u  IPv4 8617549  0t0  TCP 
> master2.openshift:57972->master2.openshift:pcsync-https (FIN_WAIT2)
> haproxy 16621 haproxy *792u  IPv4 8657802  0t0  TCP 
> master2.openshift:49633->master2.openshift:pcsync-https (FIN_WAIT2)
> haproxy 16621 haproxy *793u  IPv4 8657803  0t0  TCP 
> master2.openshift:49634->master2.openshift:pcsync-https (FIN_WAIT2)
> haproxy 16621 haproxy *794u  IPv4 8657804  0t0  TCP 
> master2.openshift:49635->master2.openshift:pcsync-https (FIN_WAIT2)
> haproxy 16621 haproxy *795u  IPv4 8657805  0t0  TCP 
> master2.openshift:49636->master2.openshift:pcsync-https (FIN_WAIT2)
> haproxy 16621 haproxy *796u  IPv4 8657806  0t0  TCP 
> master2.openshift:49637->master2.openshift:pcsync-https (FIN_WAIT2)
> haproxy 16621 haproxy *797u  IPv4 8657807  0t0  TCP 
> master2.openshift:pcsync-https->master2.openshift:49615 (CLOSE_WAIT)
> haproxy 16621 haproxy *798u  IPv4 8657808  0t0  TCP 
> master2.openshift:pcsync-https->master2.openshift:49616 (CLOSE_WAIT)
> haproxy 16621 haproxy *799u  IPv4 8657809  0t0  TCP 
> master2.openshift:pcsync-https->master2.openshift:49617 (CLOSE_WAIT)
> haproxy 16621 haproxy *800u  IPv4 8657810  0t0  TCP 
> master2.openshift:pcsync-https->master2.openshift:49618 (CLOSE_WAIT)
> haproxy 16621 haproxy *801u  IPv4 8657811  0t0  TCP 
> master2.openshift:pcsync-https->master2.openshift:49619 (CLOSE_WAIT)
> haproxy 16621 haproxy *802u  IPv4 8657812  0t0  TCP 
> master2.openshift:pcsync-https->master2.openshift:49620 (CLOSE_WAIT)
> .
> .
> .
> 
> lsof -i :8443 | wc -l
> 39983
> 
> All the same for the almost 40000 entries. Nothing else on port 8443 that I 
> can see. 
> 
> 
> Ronan O Keeffe
> System Administrator
> 
> DoneDeal
> Email: rona...@donedeal.ie 
> 
> Have you DoneDealed yet? <>
> www.DoneDeal.ie 
> Where to find us
> 
> 
>> On 24 May 2016, at 15:02, Scott Dodson > > wrote:
>> 
>> Can you see what's using port 8443 and remove that conflict? `lsof
>> -i4:8443` should show you what's listening on that port.
>> 
>> On Tue, May 24, 2016 at 7:47 AM, Ronan O Keeffe > > wrote:
>>> Hi,
>>> 
>>> We are currently attempting to install Openshift Origin via the advanced
>>> install method using Ansible. Initially this will be for testing but we hope
>>> to move this to production eventually.
>>> We are installing two masters and one node as a test.
>>> master1.openshift
>>> master2.openshift
>>> node1.openshift
>>> 
>>> Specifically we are following this method:
>>> https://docs.openshift.org/latest/install_config/install/advanced_install.html
>>>  
>>> 
>>> but using the 7.6 EPEL repo as the 7.5 one is not available.
>>> 
>>> Running: ansible-playbook
>>> ~/openshift-ansible-master/playbook

Re: Problem deploying origin on centOS 7.2 and ansible 2.0.2 ( epel 7-6 )

2016-05-20 Thread Julio Saura
so far so good!!!

looks like it is working like a charm!

thanks mate ;)
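
for anyone else hitting this before the fix is merged, the workaround Scott describes below ( install yum-utils on all hosts so repoquery exists ) can be applied in one shot with an ad-hoc ansible run; a sketch, with the inventory path as a placeholder:

ansible all -i /etc/ansible/hosts -m yum -a "name=yum-utils state=present"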


> El 20 may 2016, a las 16:43, Scott Dodson  escribió:
> 
> Julio,
> 
> We're fixing this, you can install yum-utils on all hosts as a
> workaround until this is merged
> https://github.com/openshift/openshift-ansible/pull/1924
> 
> Ansible 1.9 installed this automatically whereas ansible 2.0 doesn't.
> 
> --
> Scott
> 
> On Fri, May 20, 2016 at 10:30 AM, Julio Saura  wrote:
>> hello
>> 
>> i have been deploying a new open shift origin installation from scratch this
>> morning
>> 
>> i have another 3 installations working great but i have faced this problem
>> in this new installation
>> 
>> when running the playbook i get this error
>> 
>> TASK [docker : Get current installed version if docker_version is specified]
>> ***
>> fatal: [openshift-master01l]: FAILED! => {"changed": false, "cmd":
>> "repoquery --installed --qf '%{version}' docker", "failed": true, "msg":
>> "[Errno 2] file or directory does not exist", "rc": 2}
>> 
>> 
>> the only difference i found between the working ones and this one is that in
>> the running ones ansible is 1.9.2 and in this one is 2.0.2 ..
>> 
>> ansible is installed from fedora repo
>> 
>> https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-6.noarch.rpm
>> 
>> 
>> official documentation says 7-5 but seems is no longer available so i had to
>> use 7-6 repo..
>> 
>> any clue? is an ansible problem?
>> 
>> thanks
>> 
>> best regards
>> 
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> 


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Problem deploying origin on centOS 7.2 and ansible 2.0.2 ( epel 7-6 )

2016-05-20 Thread Julio Saura
hello

i have been deploying a new open shift origin installation from scratch this 
morning

i have another 3 installations working great but i have faced this problem in 
this new installation

when running the playbook i get this error

TASK [docker : Get current installed version if docker_version is specified] ***
fatal: [openshift-master01l]: FAILED! => {"changed": false, "cmd": "repoquery 
--installed --qf '%{version}' docker", "failed": true, "msg": "[Errno 2] file 
or directory does not exist", "rc": 2}


the only difference i found between the working ones and this one is that in 
the running ones ansible is 1.9.2 and in this one is 2.0.2 ..

ansible is installed from fedora repo

https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-6.noarch.rpm

official documentation says 7-5 but it seems that one is no longer available, so i had to 
use the 7-6 repo..

any clue? is an ansible problem?

thanks

best regards

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Problem in Replication controller

2016-03-28 Thread Julio Saura
hello

i solved it by giving the oc user the cluster-admin role .. i do not really like doing it this 
way but it works :/

Best regards

> El 22 mar 2016, a las 12:03, Julio Saura  escribió:
> 
> i forgot saying that my pod is running process under user jboss and that unix 
> user exists in my nodes ..
> 
> so i create pods under unix user developer ( loged in Openshift master as 
> developer too )  but the pod has runAs jboss..
> 
> thanks in advance
> 
>> El 22 mar 2016, a las 11:46, Julio Saura  escribió:
>> 
>> sorry for the necro post but i have the problem again
>> 
>> now i don’t want to use root account ( cluster admin ) for deploying pods, 
>> so i created a new unix user ( developer ) for deploying on a new project ( 
>> project1 )
>> 
>> the pods running under this new unix user are owned by 
>> 
>> UID PID   PPID  C STIME TTY  TIME CMD
>> 140+  1  0  0 11:15 ?00:00:00 /bin/bash 
>> /etc/init.d/jboss-as start
>> 140+ 63  0  1 11:15 ?00:00:00 bash
>> 140+ 71  1  0 11:15 ?00:00:00 sleep 1
>> 140+ 72 63  0 11:15 ?00:00:00 ps -ef
>> 
>> and due to permission problems my pods dies ..
>> 
>> i have tried to  use the command
>> 
>> oadm policy add-ssc-to-user anyuid -z developer
>> oadm policy add-ssc-to-user anyuid -z project1
>> 
>> and still no luck .. seems that developer unix user is still no able to run 
>> pods with other user ..
>> 
>> i am missing something?
>> 
>> oc get scc
>> NAME   PRIV  CAPS  HOSTDIR   EMPTYDIR   SELINUX 
>> RUNASUSER  FSGROUPSUPGROUP   PRIORITY
>> anyuid false []false true   MustRunAs   
>> RunAsAny   RunAsAny   RunAsAny   10
>> hostaccess false []true  true   MustRunAs   
>> MustRunAsRange RunAsAny   RunAsAny   
>> hostmount-anyuid   false []true  true   MustRunAs   
>> RunAsAny   RunAsAny   RunAsAny   
>> nonrootfalse []false true   MustRunAs   
>> MustRunAsNonRoot   RunAsAny   RunAsAny   
>> privileged true  []true  true   RunAsAny
>> RunAsAny   RunAsAny   RunAsAny   
>> restricted false []false true   MustRunAs   
>> MustRunAsRange RunAsAny   RunAsAny   
>> 
>> thanks
>> 
>> 
>> 
>> 
>>>>> 
>> 
>>> El 3 mar 2016, a las 17:27, Clayton Coleman  escribió:
>>> 
>>> When you create a pod directly as a cluster admin, you have permission
>>> to run as any user.  So the check allows you to create that process.
>>> When you run under a replication controller, permission has to be
>>> delegated to ensure that the controller (which is acting on your
>>> behalf) can create a pod that runs that way.  The service account is
>>> what is delegated.
>>> 
>>>> On Mar 1, 2016, at 9:37 AM, Julio Saura  wrote:
>>>> 
>>>> hello
>>>> 
>>>> thanks for answering
>>>> 
>>>> but why is running without problem if i run my image as a POD without 
>>>> doing that and failing when i use RC instead of POD?
>>>> 
>>>> thanks
>>>> 
>>>> 
>>>>> El 1 mar 2016, a las 16:21, Clayton Coleman  
>>>>> escribió:
>>>>> 
>>>>> Regular Openshift users don't have permission to run as arbitrary
>>>>> UIDs.  You can read more here:
>>>>> https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints
>>>>> 
>>>>> To give yourself access as a root user (if you are an admin) run
>>>>> 
>>>>> oadm policy add-scc-to-user anyuid -z default
>>>>> 
>>>>> Or to let your pods run as any non-root user, run
>>>>> 
>>>>> oadm policy add-scc-to-user nonroot -z default
>>>>> 
>>>>>> On Mar 1, 2016, at 9:04 AM, Julio Saura  wrote:
>>>>>> 
>>>>>> Hello
>>>>>> 
>>>>>> i have a working open shift running and maybe is my misunderstanding but 
>>>>>> i have a problem with RC
>>>>>> 
>>>>>> so,
>>>>>> 
>>>>>> i have an own docker image for my app, my entry point in my docker file 
>>>>>

Re: Problem in Replication controller

2016-03-22 Thread Julio Saura
i forgot to say that my pod is running its process under user jboss and that unix 
user exists on my nodes ..

so i create pods under unix user developer ( logged in to the Openshift master as 
developer too ) but the pod has runAs jboss..

thanks in advance

> El 22 mar 2016, a las 11:46, Julio Saura  escribió:
> 
> sorry for the necro post but i have the problem again
> 
> now i don’t want to use root account ( cluster admin ) for deploying pods, so 
> i created a new unix user ( developer ) for deploying on a new project ( 
> project1 )
> 
> the pods running under this new unix user are owned by 
> 
> UID PID   PPID  C STIME TTY  TIME CMD
> 140+  1  0  0 11:15 ?00:00:00 /bin/bash 
> /etc/init.d/jboss-as start
> 140+ 63  0  1 11:15 ?00:00:00 bash
> 140+ 71  1  0 11:15 ?00:00:00 sleep 1
> 140+ 72 63  0 11:15 ?00:00:00 ps -ef
> 
> and due to permission problems my pods dies ..
> 
> i have tried to  use the command
> 
> oadm policy add-ssc-to-user anyuid -z developer
> oadm policy add-ssc-to-user anyuid -z project1
> 
> and still no luck .. seems that developer unix user is still no able to run 
> pods with other user ..
> 
> i am missing something?
> 
> oc get scc
> NAME   PRIV  CAPS  HOSTDIR   EMPTYDIR   SELINUX 
> RUNASUSER  FSGROUPSUPGROUP   PRIORITY
> anyuid false []false true   MustRunAs   
> RunAsAny   RunAsAny   RunAsAny   10
> hostaccess false []true  true   MustRunAs   
> MustRunAsRange RunAsAny   RunAsAny   
> hostmount-anyuid   false []true  true   MustRunAs   
> RunAsAny   RunAsAny   RunAsAny   
> nonrootfalse []false true   MustRunAs   
> MustRunAsNonRoot   RunAsAny   RunAsAny   
> privileged true  []true  true   RunAsAny
> RunAsAny   RunAsAny   RunAsAny   
> restricted false []false true   MustRunAs   
> MustRunAsRange RunAsAny   RunAsAny   
> 
> thanks
> 
> 
> 
> 
>>>> 
> 
>> El 3 mar 2016, a las 17:27, Clayton Coleman  escribió:
>> 
>> When you create a pod directly as a cluster admin, you have permission
>> to run as any user.  So the check allows you to create that process.
>> When you run under a replication controller, permission has to be
>> delegated to ensure that the controller (which is acting on your
>> behalf) can create a pod that runs that way.  The service account is
>> what is delegated.
>> 
>>> On Mar 1, 2016, at 9:37 AM, Julio Saura  wrote:
>>> 
>>> hello
>>> 
>>> thanks for answering
>>> 
>>> but why is running without problem if i run my image as a POD without doing 
>>> that and failing when i use RC instead of POD?
>>> 
>>> thanks
>>> 
>>> 
>>>> El 1 mar 2016, a las 16:21, Clayton Coleman  escribió:
>>>> 
>>>> Regular Openshift users don't have permission to run as arbitrary
>>>> UIDs.  You can read more here:
>>>> https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints
>>>> 
>>>> To give yourself access as a root user (if you are an admin) run
>>>> 
>>>> oadm policy add-scc-to-user anyuid -z default
>>>> 
>>>> Or to let your pods run as any non-root user, run
>>>> 
>>>> oadm policy add-scc-to-user nonroot -z default
>>>> 
>>>>> On Mar 1, 2016, at 9:04 AM, Julio Saura  wrote:
>>>>> 
>>>>> Hello
>>>>> 
>>>>> i have a working open shift running and maybe is my misunderstanding but 
>>>>> i have a problem with RC
>>>>> 
>>>>> so,
>>>>> 
>>>>> i have an own docker image for my app, my entry point in my docker file 
>>>>> creates some directories that are needed for my app to work and starts a 
>>>>> jboss,, so far so good
>>>>> 
>>>>> the image is running if i define it as a POD, but when i try to create a 
>>>>> RC using that image i am having some weird permission denied when 
>>>>> creating the directories and so my pod dies.
>>>>> 
>>>>> i have noticed that when i run it as POD my process is running under the 
>>>>> user i define in a step inside my docker file when building the image, 
>>>>> b

Re: Problem in Replication controller

2016-03-22 Thread Julio Saura
sorry for the necro post but i have the problem again

now i don’t want to use the root account ( cluster admin ) for deploying pods, so i 
created a new unix user ( developer ) for deploying on a new project ( project1 )

the pods running under this new unix user are owned by 

UID PID   PPID  C STIME TTY  TIME CMD
140+  1  0  0 11:15 ?00:00:00 /bin/bash 
/etc/init.d/jboss-as start
140+ 63  0  1 11:15 ?00:00:00 bash
140+ 71  1  0 11:15 ?00:00:00 sleep 1
140+ 72 63  0 11:15 ?00:00:00 ps -ef

and due to permission problems my pods dies ..

I have tried the following commands:

oadm policy add-ssc-to-user anyuid -z developer
oadm policy add-ssc-to-user anyuid -z project1

and still no luck .. it seems the developer unix user is still not able to run
pods as another user ..

Am I missing something?

oc get scc
NAME   PRIV  CAPS  HOSTDIR   EMPTYDIR   SELINUX 
RUNASUSER  FSGROUPSUPGROUP   PRIORITY
anyuid false []false true   MustRunAs   
RunAsAny   RunAsAny   RunAsAny   10
hostaccess false []true  true   MustRunAs   
MustRunAsRange RunAsAny   RunAsAny   
hostmount-anyuid   false []true  true   MustRunAs   
RunAsAny   RunAsAny   RunAsAny   
nonrootfalse []false true   MustRunAs   
MustRunAsNonRoot   RunAsAny   RunAsAny   
privileged true  []true  true   RunAsAny
RunAsAny   RunAsAny   RunAsAny   
restricted false []false true   MustRunAs   
MustRunAsRange RunAsAny   RunAsAny   

thanks
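
For the archive: re-reading my own paste above, I notice I typed "add-ssc-to-user"
instead of "add-scc-to-user", and as far as I can tell the -z flag refers to a
service account in the current project, not to a unix user or a project name. A
minimal sketch of what I believe the correct syntax looks like (service account
and project names are only placeholders):

# grant the anyuid SCC to the "default" service account of project1,
# which is the account RC-created pods use unless the pod template
# names another one
oadm policy add-scc-to-user anyuid -z default -n project1

# equivalent long form, usable from any project
oadm policy add-scc-to-user anyuid system:serviceaccount:project1:default

# confirm which accounts ended up in the SCC
oc get scc anyuid -o yaml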




>>> 

> El 3 mar 2016, a las 17:27, Clayton Coleman  escribió:
> 
> When you create a pod directly as a cluster admin, you have permission
> to run as any user.  So the check allows you to create that process.
> When you run under a replication controller, permission has to be
> delegated to ensure that the controller (which is acting on your
> behalf) can create a pod that runs that way.  The service account is
> what is delegated.
> 
>> On Mar 1, 2016, at 9:37 AM, Julio Saura  wrote:
>> 
>> hello
>> 
>> thanks for answering
>> 
>> but why is running without problem if i run my image as a POD without doing 
>> that and failing when i use RC instead of POD?
>> 
>> thanks
>> 
>> 
>>> El 1 mar 2016, a las 16:21, Clayton Coleman  escribió:
>>> 
>>> Regular Openshift users don't have permission to run as arbitrary
>>> UIDs.  You can read more here:
>>> https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints
>>> 
>>> To give yourself access as a root user (if you are an admin) run
>>> 
>>>  oadm policy add-scc-to-user anyuid -z default
>>> 
>>> Or to let your pods run as any non-root user, run
>>> 
>>>  oadm policy add-scc-to-user nonroot -z default
>>> 
>>>> On Mar 1, 2016, at 9:04 AM, Julio Saura  wrote:
>>>> 
>>>> Hello
>>>> 
>>>> i have a working open shift running and maybe is my misunderstanding but i 
>>>> have a problem with RC
>>>> 
>>>> so,
>>>> 
>>>> i have an own docker image for my app, my entry point in my docker file 
>>>> creates some directories that are needed for my app to work and starts a 
>>>> jboss,, so far so good
>>>> 
>>>> the image is running if i define it as a POD, but when i try to create a 
>>>> RC using that image i am having some weird permission denied when creating 
>>>> the directories and so my pod dies.
>>>> 
>>>> i have noticed that when i run it as POD my process is running under the 
>>>> user i define in a step inside my docker file when building the image, but 
>>>> if i run it on a RC the process is running under an unknown UID
>>>> 
>>>> UID PID   PPID  C STIME TTY  TIME CMD
>>>> 1000120+  1  0  0 17:02 ?00:00:00 /bin/bash 
>>>> /etc/init.d/jboss-as st
>>>> 
>>>> and so when that entry point is trying to create the directories i need i 
>>>> get permission denied errors, logically the process dies and so does my 
>>>> pod inside de RC ..
>>>> 
>>>> why is this happening? on my dockerfile i add a unix user as the process 
>>>> proprietary and in my entry point command script i am changing the user 
>>>> when starting .. running on the RC the user is not created and not used, 
>>>> but running it as a POD works like a charm..
>>>> 
>>>> i am missing something?
>>>> 
>>>> best regards
>>>> thanks all!
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> ___
>>>> users mailing list
>>>> users@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> 


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Imported image-stream in list of builder images

2016-03-09 Thread Julio Saura
I think you have to tag the image with “builder” to see it on that list.
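
If I remember correctly the console decides this from the "tags" annotation on
the image stream tag, so that annotation is also the place to remove "builder"
from if you want an image hidden. A rough, unverified sketch (image stream and
project names are just examples):

# look at spec.tags[].annotations on the image stream
oc get is myimage -n myproject -o yaml

# a tag is listed as a builder when its annotations look roughly like:
#   tags:
#   - name: latest
#     annotations:
#       tags: builder
#
# edit the image stream and drop "builder" from that annotation
oc edit is myimage -n myproject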


> El 9 mar 2016, a las 10:45, Lorenz Vanthillo  
> escribió:
> 
> I create an image stream by pushing an image to my openshift registry. But I 
> don't like the behaviour that every image-stream is in the builder-images 
> list of its project. So you can click on it and than you have to define a 
> git-repo while it isn't a builder image. How can you unable this?
> ___
> users mailing list
> users@lists.openshift.redhat.com 
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
> 
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Problem in Replication controller

2016-03-03 Thread Julio Saura
Ahh, I see.

Thank you very much, now I see the difference.

Thanks!
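
For anyone else who hits this, a quick way to check afterwards which SCC and UID
an RC-created pod actually got (the openshift.io/scc annotation and the exec
syntax are what worked on my install, so treat this as a sketch):

# which SCC admitted the pod, and which service account it ran under
oc get pod mypod -o yaml | grep -E 'openshift.io/scc|serviceAccount|runAsUser'

# effective UID inside the running container
oc exec mypod -- id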

> El 3 mar 2016, a las 17:27, Clayton Coleman  escribió:
> 
> When you create a pod directly as a cluster admin, you have permission
> to run as any user.  So the check allows you to create that process.
> When you run under a replication controller, permission has to be
> delegated to ensure that the controller (which is acting on your
> behalf) can create a pod that runs that way.  The service account is
> what is delegated.
> 
>> On Mar 1, 2016, at 9:37 AM, Julio Saura  wrote:
>> 
>> hello
>> 
>> thanks for answering
>> 
>> but why is running without problem if i run my image as a POD without doing 
>> that and failing when i use RC instead of POD?
>> 
>> thanks
>> 
>> 
>>> El 1 mar 2016, a las 16:21, Clayton Coleman  escribió:
>>> 
>>> Regular Openshift users don't have permission to run as arbitrary
>>> UIDs.  You can read more here:
>>> https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints
>>> 
>>> To give yourself access as a root user (if you are an admin) run
>>> 
>>>  oadm policy add-scc-to-user anyuid -z default
>>> 
>>> Or to let your pods run as any non-root user, run
>>> 
>>>  oadm policy add-scc-to-user nonroot -z default
>>> 
>>>> On Mar 1, 2016, at 9:04 AM, Julio Saura  wrote:
>>>> 
>>>> Hello
>>>> 
>>>> i have a working open shift running and maybe is my misunderstanding but i 
>>>> have a problem with RC
>>>> 
>>>> so,
>>>> 
>>>> i have an own docker image for my app, my entry point in my docker file 
>>>> creates some directories that are needed for my app to work and starts a 
>>>> jboss,, so far so good
>>>> 
>>>> the image is running if i define it as a POD, but when i try to create a 
>>>> RC using that image i am having some weird permission denied when creating 
>>>> the directories and so my pod dies.
>>>> 
>>>> i have noticed that when i run it as POD my process is running under the 
>>>> user i define in a step inside my docker file when building the image, but 
>>>> if i run it on a RC the process is running under an unknown UID
>>>> 
>>>> UID PID   PPID  C STIME TTY  TIME CMD
>>>> 1000120+  1  0  0 17:02 ?00:00:00 /bin/bash 
>>>> /etc/init.d/jboss-as st
>>>> 
>>>> and so when that entry point is trying to create the directories i need i 
>>>> get permission denied errors, logically the process dies and so does my 
>>>> pod inside de RC ..
>>>> 
>>>> why is this happening? on my dockerfile i add a unix user as the process 
>>>> proprietary and in my entry point command script i am changing the user 
>>>> when starting .. running on the RC the user is not created and not used, 
>>>> but running it as a POD works like a charm..
>>>> 
>>>> i am missing something?
>>>> 
>>>> best regards
>>>> thanks all!
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> ___
>>>> users mailing list
>>>> users@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> 


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: 503 service unavailable

2016-03-03 Thread Julio Saura
I know that it should not matter, but since it works when it is in the same
namespace as the pods, it is clear that it was a permission problem :)
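
In case it helps anyone searching the archive, these are the checks I would run
on the router service account (role and SCC names taken from the docs, not
verified on Dean's cluster):

# the router service account needs the system:router cluster role to read
# routes and endpoints, and has to be listed in the privileged SCC
oadm policy add-cluster-role-to-user system:router system:serviceaccount:default:router
oadm policy add-scc-to-user privileged system:serviceaccount:default:router

# verify who can read endpoints cluster-wide (as Ram did below)
oadm policy who-can get endpoints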


> El 3 mar 2016, a las 9:47, Ram Ranganathan  escribió:
> 
> Yeah, that should not matter. The routes + namespaces you would see are based 
> on the permissions of the service account.
> 
> I was able to get Dean on irc and ssh into his instance seeing something 
> wonky with the permissions.
> CCing Jordan and Paul  for some help. 
> 
> Inside the router container, I tried running this: 
> curl -k -vvv https://127.0.0.1:8443/api/v1/endpoints -H "Authorization: Bearer 
> $(</var/run/secrets/kubernetes.io/serviceaccount/token)"
> 
> which returns the endpoints if that token has permissions and I get a 403 
> error back : 
> "message": "User \"system:serviceaccount:default:router\" cannot list all 
> endpoints in the cluster",
> 
> 
> but the oadm policy shows that the router service account has those 
> permissions. 
> 
> On the host, running :  
> $  oadm policy who-can get endpoints
> 
> output has the router service account:  http://fpaste.org/332733/45699454/ 
> <http://fpaste.org/332733/45699454/>
> 
> 
> The token info from inside the router container 
> (/var/run/secrets/kubernetes.io/serviceaccount/token) seems to work if I use it 
> with oc login but not with the curl command - so it feels a bit odd.   Any 
> ideas what's amiss here? 
> 
> Thanks,
> 
> Ram//
> 
> 
> 
> On Wed, Mar 2, 2016 at 11:56 PM, Dean Peterson  <mailto:peterson.d...@gmail.com>> wrote:
> The router is on default namespace but the service pods are running on a 
> different namespace.
> 
> On Thu, Mar 3, 2016 at 1:53 AM, Julio Saura  <mailto:jsa...@hiberus.com>> wrote:
> seems your router is running on default namespace, your pods are also running 
> on namespace default?
> 
> 
>> El 3 mar 2016, a las 7:58, Dean Peterson > <mailto:peterson.d...@gmail.com>> escribió:
>> 
>> I did do an "oc edit scc privileged" and made sure this was at the end:
>> 
>> users:
>> - system:serviceaccount:openshift-infra:build-controller
>> - system:serviceaccount:management-infra:management-admin
>> - system:serviceaccount:default:router
>> - system:serviceaccount:default:registry
>> 
>> router has always been a privileged user service account.
>> 
>> On Thu, Mar 3, 2016 at 12:55 AM, Ram Ranganathan > <mailto:rrang...@redhat.com>> wrote:
>> So you have no app level backends in that gist (haproxy.config file). That 
>> would explain the 503s - there's nothing there for haproxy to route to.  
>> Most likely its due to the router service account has no permissions to get 
>> the routes/endpoints info from etcd. 
>> Check that the router service account (router default or whatever service 
>> account you used to start the router) is
>> part of the privileged SCC and has read permissions to etcd.
>> 
>> 
>> On Wed, Mar 2, 2016 at 10:43 PM, Dean Peterson > <mailto:peterson.d...@gmail.com>> wrote:
>> I created a public gist from the output: 
>> https://gist.github.com/deanpeterson/76aa9abf2c7fa182b56c 
>> <https://gist.github.com/deanpeterson/76aa9abf2c7fa182b56c>
>> 
>> On Thu, Mar 3, 2016 at 12:35 AM, Ram Ranganathan > <mailto:rrang...@redhat.com>> wrote:
>> You shouldn't need to restart the router. It should have created a new 
>> deployment and redeployed the router. 
>> So looks like the cause for your 503 errors is something else.
>> 
>> Can you check that your haproxy.config file is correct (has the correct 
>> backends and servers). 
>> Either nsenter into your router docker container and cat the file or 
>> then run:  
>> oc exec  cat /var/lib/haproxy/conf/haproxy.config#  
>> router-pod-name as shown in oc get pods
>> 
>> Ram//
>> 
>> On Wed, Mar 2, 2016 at 10:10 PM, Dean Peterson > <mailto:peterson.d...@gmail.com>> wrote:
>> I ran that "oc env dc router RELOAD_INTERVAL=5s" but I still get the 503 
>> error.  Do I need to restart anything?
>> 
>> On Wed, Mar 2, 2016 at 11:47 PM, Ram Ranganathan > <mailto:rrang...@redhat.com>> wrote:
>> Dean, we did have a recent change to coalesce router reloads (default is 0s) 
>> and it looks like with that default we are more aggressive with the reloads 
>> which could be causing this problem.   
>> 
>> Could you please try setting an environm

Re: 503 service unavailable

2016-03-03 Thread Julio Saura
Umm,

could you please move the router, or create a new one in the same namespace as
the pods, and try again ..

just to check
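
Something like this should do it; the flags are from memory, so double-check
"oadm router --help", and the router name and kubeconfig path are only examples:

# create a second router scoped to the project that holds the pods
oadm router router-project1 -n project1 \
--credentials=/etc/origin/master/openshift-router.kubeconfig \
--service-account=router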

best regards


> El 3 mar 2016, a las 8:56, Dean Peterson  escribió:
> 
> The router is on default namespace but the service pods are running on a 
> different namespace.
> 
> On Thu, Mar 3, 2016 at 1:53 AM, Julio Saura  <mailto:jsa...@hiberus.com>> wrote:
> seems your router is running on default namespace, your pods are also running 
> on namespace default?
> 
> 
>> El 3 mar 2016, a las 7:58, Dean Peterson > <mailto:peterson.d...@gmail.com>> escribió:
>> 
>> I did do an "oc edit scc privileged" and made sure this was at the end:
>> 
>> users:
>> - system:serviceaccount:openshift-infra:build-controller
>> - system:serviceaccount:management-infra:management-admin
>> - system:serviceaccount:default:router
>> - system:serviceaccount:default:registry
>> 
>> router has always been a privileged user service account.
>> 
>> On Thu, Mar 3, 2016 at 12:55 AM, Ram Ranganathan > <mailto:rrang...@redhat.com>> wrote:
>> So you have no app level backends in that gist (haproxy.config file). That 
>> would explain the 503s - there's nothing there for haproxy to route to.  
>> Most likely its due to the router service account has no permissions to get 
>> the routes/endpoints info from etcd. 
>> Check that the router service account (router default or whatever service 
>> account you used to start the router) is
>> part of the privileged SCC and has read permissions to etcd.
>> 
>> 
>> On Wed, Mar 2, 2016 at 10:43 PM, Dean Peterson > <mailto:peterson.d...@gmail.com>> wrote:
>> I created a public gist from the output: 
>> https://gist.github.com/deanpeterson/76aa9abf2c7fa182b56c 
>> <https://gist.github.com/deanpeterson/76aa9abf2c7fa182b56c>
>> 
>> On Thu, Mar 3, 2016 at 12:35 AM, Ram Ranganathan > <mailto:rrang...@redhat.com>> wrote:
>> You shouldn't need to restart the router. It should have created a new 
>> deployment and redeployed the router. 
>> So looks like the cause for your 503 errors is something else.
>> 
>> Can you check that your haproxy.config file is correct (has the correct 
>> backends and servers). 
>> Either nsenter into your router docker container and cat the file or 
>> then run:  
>> oc exec  cat /var/lib/haproxy/conf/haproxy.config#  
>> router-pod-name as shown in oc get pods
>> 
>> Ram//
>> 
>> On Wed, Mar 2, 2016 at 10:10 PM, Dean Peterson > <mailto:peterson.d...@gmail.com>> wrote:
>> I ran that "oc env dc router RELOAD_INTERVAL=5s" but I still get the 503 
>> error.  Do I need to restart anything?
>> 
>> On Wed, Mar 2, 2016 at 11:47 PM, Ram Ranganathan > <mailto:rrang...@redhat.com>> wrote:
>> Dean, we did have a recent change to coalesce router reloads (default is 0s) 
>> and it looks like with that default we are more aggressive with the reloads 
>> which could be causing this problem.   
>> 
>> Could you please try setting an environment variable ala: 
>> oc env dc router RELOAD_INTERVAL=5s
>>#  or even 2s or 3s  - that's reload interval in seconds btw
>># if you have a custom deployment config then replace the dc name 
>> router to that deployment config name.
>> 
>> and see if that helps.
>> 
>> 
>> On Wed, Mar 2, 2016 at 6:21 PM, Dean Peterson > <mailto:peterson.d...@gmail.com>> wrote:
>> Is there another place I can look to track down the problem?  The router 
>> logs don't say much, just: " Router is including routes in all namespaces"
>> 
>> On Wed, Mar 2, 2016 at 7:39 PM, Dean Peterson > <mailto:peterson.d...@gmail.com>> wrote:
>> All it says is: " Router is including routes in all namespaces"  That's it.
>> 
>> On Wed, Mar 2, 2016 at 7:38 PM, Clayton Coleman > <mailto:ccole...@redhat.com>> wrote:
>> What do the router logs say?
>> 
>> On Mar 2, 2016, at 7:43 PM, Dean Peterson > <mailto:peterson.d...@gmail.com>> wrote:
>> 
>>> This is as close to having openshift origin set up perfectly as I have 
>>> gotten.  My builds work great, container deployments always work now.  I 
>>> thought I was finally going to have a smooth running Openshift; I just need 
>>> to get past this last router issue.  It makes little sense.  I have set up 
>>> a router many times before and never had this issue.  I've had issues with 
>>> other parts of t

Re: 503 service unavailable

2016-03-02 Thread Julio Saura
It seems your router is running in the default namespace; are your pods also
running in the default namespace?
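
A quick way to check (the project name is just an example):

# where is the router running?
oc get pods -n default | grep router

# which namespaces do the routes and the service endpoints live in?
oc get routes --all-namespaces
oc get endpoints -n myproject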


> El 3 mar 2016, a las 7:58, Dean Peterson  escribió:
> 
> I did do an "oc edit scc privileged" and made sure this was at the end:
> 
> users:
> - system:serviceaccount:openshift-infra:build-controller
> - system:serviceaccount:management-infra:management-admin
> - system:serviceaccount:default:router
> - system:serviceaccount:default:registry
> 
> router has always been a privileged user service account.
> 
> On Thu, Mar 3, 2016 at 12:55 AM, Ram Ranganathan  > wrote:
> So you have no app level backends in that gist (haproxy.config file). That 
> would explain the 503s - there's nothing there for haproxy to route to.  Most 
> likely its due to the router service account has no permissions to get the 
> routes/endpoints info from etcd. 
> Check that the router service account (router default or whatever service 
> account you used to start the router) is
> part of the privileged SCC and has read permissions to etcd.
> 
> 
> On Wed, Mar 2, 2016 at 10:43 PM, Dean Peterson  > wrote:
> I created a public gist from the output: 
> https://gist.github.com/deanpeterson/76aa9abf2c7fa182b56c 
> 
> 
> On Thu, Mar 3, 2016 at 12:35 AM, Ram Ranganathan  > wrote:
> You shouldn't need to restart the router. It should have created a new 
> deployment and redeployed the router. 
> So looks like the cause for your 503 errors is something else.
> 
> Can you check that your haproxy.config file is correct (has the correct 
> backends and servers). 
> Either nsenter into your router docker container and cat the file or 
> then run:  
> oc exec  cat /var/lib/haproxy/conf/haproxy.config#  
> router-pod-name as shown in oc get pods
> 
> Ram//
> 
> On Wed, Mar 2, 2016 at 10:10 PM, Dean Peterson  > wrote:
> I ran that "oc env dc router RELOAD_INTERVAL=5s" but I still get the 503 
> error.  Do I need to restart anything?
> 
> On Wed, Mar 2, 2016 at 11:47 PM, Ram Ranganathan  > wrote:
> Dean, we did have a recent change to coalesce router reloads (default is 0s) 
> and it looks like with that default we are more aggressive with the reloads 
> which could be causing this problem.   
> 
> Could you please try setting an environment variable ala: 
> oc env dc router RELOAD_INTERVAL=5s
>#  or even 2s or 3s  - that's reload interval in seconds btw
># if you have a custom deployment config then replace the dc name 
> router to that deployment config name.
> 
> and see if that helps.
> 
> 
> On Wed, Mar 2, 2016 at 6:21 PM, Dean Peterson  > wrote:
> Is there another place I can look to track down the problem?  The router logs 
> don't say much, just: " Router is including routes in all namespaces"
> 
> On Wed, Mar 2, 2016 at 7:39 PM, Dean Peterson  > wrote:
> All it says is: " Router is including routes in all namespaces"  That's it.
> 
> On Wed, Mar 2, 2016 at 7:38 PM, Clayton Coleman  > wrote:
> What do the router logs say?
> 
> On Mar 2, 2016, at 7:43 PM, Dean Peterson  > wrote:
> 
>> This is as close to having openshift origin set up perfectly as I have 
>> gotten.  My builds work great, container deployments always work now.  I 
>> thought I was finally going to have a smooth running Openshift; I just need 
>> to get past this last router issue.  It makes little sense.  I have set up a 
>> router many times before and never had this issue.  I've had issues with 
>> other parts of the system but never the router. 
>> 
>> On Wed, Mar 2, 2016 at 6:34 PM, Dean Peterson > > wrote:
>> I have a number of happy pods.  They are all running normally.
>> 
>> On Wed, Mar 2, 2016 at 6:28 PM, Mohamed Lrhazi 
>> mailto:mohamed.lrh...@georgetown.edu>> wrote:
>> Click on a pod and get to its log and events tabs see if they are 
>> actually happy or stuck on something...
>> 
>> On Wed, Mar 2, 2016 at 7:03 PM, Dean Peterson > > wrote:
>> I have successfully started the ha proxy router.  I have a pod running, yet 
>> all my routes take me to a 503 service unavailable error page.  I updated my 
>> resolv.conf file to have my master ip as nameserver; I've never had this 
>> problem on previous versions.  I installed openshift origin 1.1.3 with 
>> ansible; everything seems to be running smoothly like before but I just get 
>> 503 service unavailable errors trying to visit any route.
>> 
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com 
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
>> 

Re: Problem in Replication controller

2016-03-01 Thread Julio Saura
btw, now it is working inside the RC with the policies you told me, thanks ;)


but I do not understand what the difference is when running in an RC; it is
supposed to be just a controller for handling pods, and run directly as a pod it
worked fine..
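
As a side note for the archive, the other approach (which keeps the default
restricted SCC) is to make the image tolerant of the random UID OpenShift
assigns; a rough sketch of the usual pattern, with directory names as
placeholders for the ones my entrypoint creates:

# in the image build (Dockerfile RUN steps): create the directories the
# entrypoint needs and make them writable by the root group, since the
# random UID assigned at run time is a member of group 0
mkdir -p /opt/myapp/data /opt/myapp/logs
chgrp -R 0 /opt/myapp
chmod -R g+rwX /opt/myapp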


> El 1 mar 2016, a las 16:28, Julio Saura  escribió:
> 
> hello
> 
> thanks for answering
> 
> but why is running without problem if i run my image as a POD without doing 
> that and failing when i use RC instead of POD?
> 
> thanks
> 
> 
>> El 1 mar 2016, a las 16:21, Clayton Coleman  escribió:
>> 
>> Regular Openshift users don't have permission to run as arbitrary
>> UIDs.  You can read more here:
>> https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints
>> 
>> To give yourself access as a root user (if you are an admin) run
>> 
>>   oadm policy add-scc-to-user anyuid -z default
>> 
>> Or to let your pods run as any non-root user, run
>> 
>>   oadm policy add-scc-to-user nonroot -z default
>> 
>>> On Mar 1, 2016, at 9:04 AM, Julio Saura  wrote:
>>> 
>>> Hello
>>> 
>>> i have a working open shift running and maybe is my misunderstanding but i 
>>> have a problem with RC
>>> 
>>> so,
>>> 
>>> i have an own docker image for my app, my entry point in my docker file 
>>> creates some directories that are needed for my app to work and starts a 
>>> jboss,, so far so good
>>> 
>>> the image is running if i define it as a POD, but when i try to create a RC 
>>> using that image i am having some weird permission denied when creating the 
>>> directories and so my pod dies.
>>> 
>>> i have noticed that when i run it as POD my process is running under the 
>>> user i define in a step inside my docker file when building the image, but 
>>> if i run it on a RC the process is running under an unknown UID
>>> 
>>> UID PID   PPID  C STIME TTY  TIME CMD
>>> 1000120+  1  0  0 17:02 ?00:00:00 /bin/bash 
>>> /etc/init.d/jboss-as st
>>> 
>>> and so when that entry point is trying to create the directories i need i 
>>> get permission denied errors, logically the process dies and so does my pod 
>>> inside de RC ..
>>> 
>>> why is this happening? on my dockerfile i add a unix user as the process 
>>> proprietary and in my entry point command script i am changing the user 
>>> when starting .. running on the RC the user is not created and not used, 
>>> but running it as a POD works like a charm..
>>> 
>>> i am missing something?
>>> 
>>> best regards
>>> thanks all!
>>> 
>>> 
>>> 
>>> 
>>> 
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> 


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Problem in Replication controller

2016-03-01 Thread Julio Saura
hello

thanks for answering

but why does it run without problems if I run my image as a POD without doing
that, and fail when I use an RC instead of a POD?

thanks


> El 1 mar 2016, a las 16:21, Clayton Coleman  escribió:
> 
> Regular Openshift users don't have permission to run as arbitrary
> UIDs.  You can read more here:
> https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints
> 
> To give yourself access as a root user (if you are an admin) run
> 
>oadm policy add-scc-to-user anyuid -z default
> 
> Or to let your pods run as any non-root user, run
> 
>oadm policy add-scc-to-user nonroot -z default
> 
>> On Mar 1, 2016, at 9:04 AM, Julio Saura  wrote:
>> 
>> Hello
>> 
>> i have a working open shift running and maybe is my misunderstanding but i 
>> have a problem with RC
>> 
>> so,
>> 
>> i have an own docker image for my app, my entry point in my docker file 
>> creates some directories that are needed for my app to work and starts a 
>> jboss,, so far so good
>> 
>> the image is running if i define it as a POD, but when i try to create a RC 
>> using that image i am having some weird permission denied when creating the 
>> directories and so my pod dies.
>> 
>> i have noticed that when i run it as POD my process is running under the 
>> user i define in a step inside my docker file when building the image, but 
>> if i run it on a RC the process is running under an unknown UID
>> 
>> UID PID   PPID  C STIME TTY  TIME CMD
>> 1000120+  1  0  0 17:02 ?00:00:00 /bin/bash 
>> /etc/init.d/jboss-as st
>> 
>> and so when that entry point is trying to create the directories i need i 
>> get permission denied errors, logically the process dies and so does my pod 
>> inside de RC ..
>> 
>> why is this happening? on my dockerfile i add a unix user as the process 
>> proprietary and in my entry point command script i am changing the user when 
>> starting .. running on the RC the user is not created and not used, but 
>> running it as a POD works like a charm..
>> 
>> i am missing something?
>> 
>> best regards
>> thanks all!
>> 
>> 
>> 
>> 
>> 
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Problem in Replication controller

2016-03-01 Thread Julio Saura
Hello

I have a working OpenShift cluster running, and maybe it is my misunderstanding,
but I have a problem with an RC.

so,

I have my own docker image for my app; the entry point in my Dockerfile creates
some directories that are needed for my app to work and starts a JBoss, so far
so good.

The image runs fine if I define it as a POD, but when I try to create an RC
using that image I get some weird permission denied errors when creating the
directories, and so my pod dies.

I have noticed that when I run it as a POD the process runs under the user I
define in a step inside my Dockerfile when building the image, but if I run it
in an RC the process runs under an unknown UID:

UID PID   PPID  C STIME TTY  TIME CMD
1000120+  1  0  0 17:02 ?00:00:00 /bin/bash 
/etc/init.d/jboss-as st

and so when that entry point tries to create the directories I need, I get
permission denied errors; logically the process dies and so does my pod inside
the RC ..

Why is this happening? In my Dockerfile I add a unix user as the process owner,
and in my entry point script I switch to that user when starting .. running in
the RC the user is not created and not used, but running it as a POD works like
a charm..

Am I missing something?

best regards
thanks all!





___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users