Re: Cannot pull images from internal registry when creating a pod

2017-12-06 Thread Yu Wei
Could you access the registry web console?


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux


From: users-boun...@lists.openshift.redhat.com 
 on behalf of Andreas Mather 

Sent: Friday, December 1, 2017 9:01:34 PM
To: users@lists.openshift.redhat.com
Subject: Cannot pull images from internal registry when creating a pod

Hi All!

I'm facing an issue where, even though I can push images from my client into 
the internal registry, creating a pod which uses internal images fails with 
'image not found'. Further debugging indicated an authentication problem.
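For reference, the pull error itself shows up in the pod events and on the
registry side; e.g. (the pod name below is a placeholder):

$ oc describe pod <failing-pod>   # the Events section shows the exact pull failure
$ oc logs dc/docker-registry      # registry side of the failed pull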

I've created following issue where I described all the details:
https://github.com/openshift/origin/issues/17523

The issue was closed without any reason given, so I hope someone here can help.

In the meantime, I've tried installing the cluster with the following 
openshift-ansible checkouts/configurations and hit the problem in all setups:

openshift-ansible checkout openshift-ansible-3.7.2-1-8-g56b529e:
installs the cluster without issues

openshift-ansible checkout master:
installs the cluster but then fails at "Reconcile with RBAC file"
(that's the reason I usually used the above checkout)

openshift-ansible checkout master with openshift_repos_enable_testing=true in 
[OSEv3:vars]:
installs the cluster but then fails at "Verify that TSB is running"

So it doesn't seem to be correlated with the openshift-ansible version I check 
out or the openshift/kubernetes version the cluster installs with.

Another notable detail: since my nodes and master communicate via host-to-host 
IPsec, I had to set the MTU to 1350 in /etc/origin/node/node-config.yaml and 
reboot all nodes and the master prior to installing the registry. I had TLS and 
networking issues before, but setting the MTU resolved all of them.
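For reference, that amounts to something like the following in 
/etc/origin/node/node-config.yaml (the plugin name depends on which SDN is in 
use):

networkConfig:
  mtu: 1350
  networkPluginName: redhat/openshift-ovs-subnet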

Maybe I'm missing a configuration step, so here's the complete list of commands 
I issue to set up the registry, push the image, and create the pod:

# create registry
# on master as root (whoami: system:admin):
$ cd /etc/origin/master
$ oadm registry --config=admin.kubeconfig --service-account=registry
$ oc get svc docker-registry # get service IP address
$ oadm ca create-server-cert \
    --signer-cert=/etc/origin/master/ca.crt \
    --signer-key=/etc/origin/master/ca.key \
    --signer-serial=/etc/origin/master/ca.serial.txt \
    --hostnames='registry.mycompany.com,docker-registry.default.svc.cluster.local,172.30.185.69' \
    --cert=/etc/secrets/registry.crt \
    --key=/etc/secrets/registry.key
$ oc rollout pause dc/docker-registry
$ oc secrets new registry-certificates \
    /etc/secrets/registry.crt /etc/secrets/registry.key
$ oc secrets link registry registry-certificates
$ oc secrets link default registry-certificates
$ oc volume dc/docker-registry --add --type=secret \
    --secret-name=registry-certificates -m /etc/secrets
$ oc set env dc/docker-registry \
    REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt \
    REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key
$ oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{"name":"registry","livenessProbe": {"httpGet": {"scheme":"HTTPS"}}}]}}}}'
$ oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{"name":"registry","readinessProbe": {"httpGet": {"scheme":"HTTPS"}}}]}}}}'
$ oc rollout resume dc/docker-registry
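# optional sanity check (not part of the original steps): confirm the registry
# now answers on HTTPS with the new certificate; /healthz is the path the
# default liveness probe uses, and the service hostname only resolves where
# cluster DNS is configured
$ curl -v --cacert /etc/origin/master/ca.crt \
    https://docker-registry.default.svc.cluster.local:5000/healthz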

# deploy registry certs
$ cat deploy_docker_certs.sh
for h in kubmaster1 kubnode1 kubnode2
do
  ssh="ssh -o StrictHostKeyChecking=no $h"

  for dir in docker-registry.default.svc.cluster.local:5000 \
             172.30.185.69:5000 \
             registry.mycompany.com:5000
  do
    $ssh "mkdir /etc/docker/certs.d/${dir}" 2>/dev/null
    scp -o StrictHostKeyChecking=no /etc/origin/master/ca.crt \
        ${h}:/etc/docker/certs.d/${dir}/
  done
  $ssh sudo systemctl daemon-reload
  $ssh sudo systemctl restart docker
done
$ ./deploy_docker_certs.sh

# external route
$ oc create route reencrypt --service=docker-registry \
    --cert=/server/tls/mywildcard.cer --key=/server/tls/mywildcard.key \
    --ca-cert=/server/tls/mywildcard_ca.cer \
    --dest-ca-cert=/etc/origin/master/ca.crt \
    --hostname=registry.mycompany.com

# create user
$ newuser=amather
$ htpasswd htpasswd $newuser  # htpasswd auth and file location configured in ansible hosts file
$ oc create user $newuser
$ oc create identity htpasswd_auth:$newuser
$ oc create useridentitymapping htpasswd_auth:$newuser $newuser
$ oadm policy add-role-to-user system:registry $newuser # registry login
$ oadm policy add-role-to-user admin $newuser # project admin
$ oadm policy add-role-to-user system:image-builder $newuser # image pusher
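# optional check (not part of the original steps): confirm the new user shows up
# with the expected access in the project; 'myproject' is a placeholder
$ oadm policy who-can get imagestreams -n myproject | grep $newuser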

# on my client (os x)
$ oc login
$ oc whoami
amather
$ docker login -u $(oc whoami) -p $(oc whoami -t) registry.mycompany.com
WARNING! Using 

Re: Service Catalog and Openshift Origin 3.7

2017-12-06 Thread Luke Meyer
As Aleksander said, more information would help.

The service broker waits on the service catalog API to come up. It may be
that the service catalog was deployed but the pods are not actually
starting for some reason (e.g. not available at requested version). Check
the pods in the namespace.

$ oc get pods,ds  -n kube-service-catalog
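If the pods are there but not running, the events and logs usually say why
(the pod name below is a placeholder):

$ oc describe pod <catalog-apiserver-pod> -n kube-service-catalog
$ oc logs <catalog-apiserver-pod> -n kube-service-catalog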

On Tue, Dec 5, 2017 at 6:26 PM, Aleksandar Lazic 
wrote:

> Hi.
>
> -- Original message --
> From: "Marcello Lorenzi"
> To: "users"
> Sent: 05.12.2017 16:55:22
> Subject: Service Catalog and Openshift Origin 3.7
>
> Hi All,
>> we tried to install the newer version of Openshift Origin 3.7 but during
>> the playbook execution we noticed this error:
>>
>> FAILED - RETRYING: wait for api server to be ready (120 retries left).
>>
>> The issue seems to be related to the service catalog, but we don't know
>> where this is running.
>>
> Why do you assume this?
> Could you please share some more data, such as:
>
> * inventory file
> * ansible version
> * playbook version
> * os
> * some logs
>
> Has anyone else noticed this issue?
>>
>> Thanks,
>> Marcello
>>
>
> Regards
> Aleks


How to Create Multiple Project Templates

2017-12-06 Thread Walters, Todd
 I asked this question before but had no replies. I'd like to know if it's 
possible to create a new project with the oc new-project command and use a 
different template than the 'default' that is defined in the 
/etc/origin/master/master-config.yaml file. For example, I'd like to use a 
project template with different quota/limit sizes.

Currently, our CI/CD process uses a Jenkins service account to create new 
projects. This account can do this, but it cannot edit the quotas/limits that 
the default project template sets. Without changing permissions on this SA 
(impersonation, granting cluster-admin, etc.), what is the best way to apply 
differently sized quotas/limits using a project template? Or can this even be 
done without cluster admin? Ideally, the deployment job would offer a choice of 
project size (small, medium, large) and then apply the corresponding 
quotas/limits.
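A rough sketch of what I mean is below: a parameterized quota template per 
size. The file and parameter names are made up, and applying it would still 
need something that is allowed to manage quotas in the project:

$ cat quota-size.yaml
apiVersion: v1
kind: Template
metadata:
  name: project-quota
parameters:
- name: PROJECT
- name: CPU_LIMIT
  value: "2"
- name: MEMORY_LIMIT
  value: 4Gi
objects:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: compute-quota
    namespace: ${PROJECT}
  spec:
    hard:
      limits.cpu: ${CPU_LIMIT}
      limits.memory: ${MEMORY_LIMIT}
$ oc process -f quota-size.yaml -p PROJECT=myproject -p CPU_LIMIT=4 \
    -p MEMORY_LIMIT=8Gi | oc create -f -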

Thank you,

Todd Walters





Re: S2I with wildfly: keycloak adapter

2017-12-06 Thread Steven Pousty
I think the wildfly builder also understands that you can put modules in
your source repo under modules/ and they will get copied into the right
location:

https://github.com/openshift/openshift-jee-sample/blob/master/.s2i/bin/assemble.ignore#L59
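Roughly, that implies a source layout like the one below (I haven't verified
this against the current builder, and the directory names under modules/ are
only illustrative):

myapp/
  pom.xml
  src/...
  modules/          <- copied into WildFly's modules directory during assemble
    org/keycloak/keycloak-adapter-subsystem/main/module.xml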

On Wed, Dec 6, 2017 at 7:37 AM, Ben Parees  wrote:

>
>
>
>
> On Dec 6, 2017 1:47 AM, "jelmer van amen"  wrote:
>
> The keycloak adapter subsystem needs more than just some XML configuration
> (as described at
> http://www.keycloak.org/docs/latest/securing_apps/index.html#_jboss_adapter).
> It also needs some layers extracted into the wildfly root dir, besides the
> XML change. How would one go about solving this using the standard S2I
> pipeline?
>
>
> If the default assemble script doesn't allow you to override the
> configuration you need to override, you'll have to provide a custom
> assemble script that does.
>
>
> On 5 December 2017 at 14:38, Ben Parees  wrote:
>
>> The wildfly image allows you to supply your own standalone.xml config as
>> part of your source.
>>
>>
>> Ben Parees | OpenShift
>>
>> On Dec 5, 2017 05:10, "jelmer van amen"  wrote:
>>
>>> When using the standard s2i pipeline, no configuration seems to be
>>> present to add a keycloak security subsystem. How would one go about adding
>>> a keycloak adapter (as subsystem in wildfly) using the standard S2I image
>>> stream for wildfly?
>>>
>>> Kind regards,
>>> Jelmer
>>>
>>> On 5 December 2017 at 07:15, Steven Pousty  wrote:
>>>
 Why do you think it doesn't support it? There should be no problem
 adding it. Which part are you stuck on?
 Thanks
 Steve

 On Mon, Dec 4, 2017 at 10:11 PM, jelmer van amen <
 jelmervana...@gmail.com> wrote:

> Hi,
>
>
>
> We’re migrating our software to OpenShift. We have a maven (well,
> actually gradle, but we’re ok with moving to maven) J2EE war application
> secured using keycloak adapter in a wildfly instance.
>
>
>
> We’d like to use s2i for this application. Our first guess would be
> https://github.com/openshift-s2i/s2i-wildfly , but that does not
> (seem to) support adding the keycloak adapter (
> http://www.keycloak.org/docs/3.0/securing_apps/topics/oidc/
> java/jboss-adapter.html#_jboss_adapter).
>
>
>
> What would be the best way to go?
>
>
>
> Thanks!
>


Re: S2I with wildfly: keycloak adapter

2017-12-06 Thread Ben Parees
On Dec 6, 2017 1:47 AM, "jelmer van amen"  wrote:

The keycloak adapter subsystem needs more than just some XML configuration
(as described at
http://www.keycloak.org/docs/latest/securing_apps/index.html#_jboss_adapter).
It also needs some layers extracted into the wildfly root dir, besides the XML
change. How would one go about solving this using the standard S2I pipeline?


If the default assemble script doesn't allow you to override the
configuration you need to override, you'll have to provide a custom
assemble script that does.
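
Untested, but a custom assemble script for that usually just runs the image's
stock assemble and then lays the adapter down over the server root. The stock
assemble path, the archive name, and JBOSS_HOME being set are assumptions about
the wildfly image:

$ cat .s2i/bin/assemble
#!/bin/bash
set -e
# run the builder image's default assemble first
/usr/libexec/s2i/assemble
# then unpack the keycloak wildfly adapter over the server installation
unzip -qo ./adapters/keycloak-wildfly-adapter-dist.zip -d "$JBOSS_HOME"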


On 5 December 2017 at 14:38, Ben Parees  wrote:

> The wildfly image allows you to supply your own standalone.xml config as
> part of your source.
>
>
> Ben Parees | OpenShift
>
> On Dec 5, 2017 05:10, "jelmer van amen"  wrote:
>
>> When using the standard s2i pipeline, no configuration seems to be
>> present to add a keycloak security subsystem. How would one go about adding
>> a keycloak adapter (as subsystem in wildfly) using the standard S2I image
>> stream for wildfly?
>>
>> Kind regards,
>> Jelmer
>>
>> On 5 December 2017 at 07:15, Steven Pousty  wrote:
>>
>>> Why do you think it doesn't support it? There should be no problem
>>> adding it. Which part are you stuck on?
>>> Thanks
>>> Steve
>>>
>>> On Mon, Dec 4, 2017 at 10:11 PM, jelmer van amen <
>>> jelmervana...@gmail.com> wrote:
>>>
 Hi,



 We’re migrating our software to OpenShift. We have a maven (well,
 actually gradle, but we’re ok with moving to maven) J2EE war application
 secured using keycloak adapter in a wildfly instance.



 We’d like to use s2i for this application. Our first guess would be
 https://github.com/openshift-s2i/s2i-wildfly , but that does not (seem
 to) support adding the keycloak adapter (http://www.keycloak.org/docs/
 3.0/securing_apps/topics/oidc/java/jboss-adapter.html#_jboss_adapter).



 What would be the best way to go?



 Thanks!



router certificate question

2017-12-06 Thread Feld, Michael (IMS)
Hey all,

I have a cluster where we use an external HAProxy to terminate SSL and send 
traffic to the routers in the OpenShift cluster, so the routes within the 
cluster do not use TLS. It looks like when this cluster was set up, default 
certificates were given to the routers and are expiring soon (I get this when 
running the ansible easy-mode.yaml):

"router": [
{
  "cert_cn": "OU=Domain Control Validated:, CN=*..com:, 
DNS:*. .com, DNS: .com",
  "days_remaining": 11,
  "expiry": "2017-12-17 20:13:24",
  "health": "warning",
  "path": "/api/v1/namespaces/default/secrets/router-certs",
  "serial": ,
  "serial_hex": ""
}
  ]

My question is, is it OK to let this expire without taking any action? How can 
I safely remove the default certificates to remove the warnings in the future?
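
For what it's worth, one way to check what's actually in that secret and when
it expires (the tls.crt key name is an assumption about the secret's layout):

$ oc get secret router-certs -n default -o jsonpath='{.data.tls\.crt}' \
    | base64 -d | openssl x509 -noout -subject -enddate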

Thanks
Mike





Re: Node not getting taint

2017-12-06 Thread Marko Lukša

You can verify by using oc get nodes/node-dev1 -o yaml

You can also try seeing if something is overriding the taint by running:

oc adm taint nodes node-dev1 mode=testing:NoSchedule && oc get nodes/node-dev1 
-o yaml

It's likely you'll see the taint, but it will get removed later.
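
Something like this should show the taint wherever it ends up (spec field or
annotation, depending on the kubernetes version) and whether it survives:

$ oc get node node-dev1 -o yaml | grep -i -A3 taint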

M.

On 06. 12. 2017 11:29, Daniel Kučera wrote:

Hi,

I'm trying to set taint on node:

$ oc adm taint nodes node-dev1 mode=testing:NoSchedule
node "node-dev1" tainted

However if I run describe, it shows no taints:

$ oc describe nodes/node-dev1
Name:               node-dev1
Role:
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=node-dev1
Annotations:
Taints:
CreationTimestamp: Wed, 06 Dec 2017 11:01:41 +0100
Phase:

$ oc get nodes/node-dev1
NAME        STATUS    AGE       VERSION
node-dev1   Ready     27m       v1.5.2+43a9be4

Bug again? How can I tell whether it's not set at all or just not displayed by describe?





Node not getting taint

2017-12-06 Thread Daniel Kučera
Hi,

I'm trying to set taint on node:

$ oc adm taint nodes node-dev1 mode=testing:NoSchedule
node "node-dev1" tainted

However if I run describe, it shows no taints:

$ oc describe nodes/node-dev1
Name:               node-dev1
Role:
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=node-dev1
Annotations:
Taints:
CreationTimestamp: Wed, 06 Dec 2017 11:01:41 +0100
Phase:

$ oc get nodes/node-dev1
NAME        STATUS    AGE       VERSION
node-dev1   Ready     27m       v1.5.2+43a9be4

Bug again? How can I tell whether it's not set at all or just not displayed by describe?

-- 

S pozdravom / Best regards
Daniel Kucera.
