Re: oadm from a client

2016-07-08 Thread Clayton Coleman
Almost all oadm commands can be run from anywhere you can run oc.  Some
commands require access to the master configuration (the config / ca / node
commands), but don't require being ON the master.

Put another way, no interaction occurs between oadm and the master other
than a) REST API calls or b) direct manipulation of config on disk.
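
For illustration, a hedged sketch of both modes from a client machine (the
server URL, user, token, and file paths below are assumptions):

# a) REST API: log in to the master from any machine that has oc/oadm,
#    then run admin commands remotely.
oc login https://master.example.com:8443 --token=<token>
oadm policy add-cluster-role-to-user cluster-admin bob

# b) Config on disk: the config/ca commands only need the CA files locally
#    (e.g. copied from the master); they still don't have to run ON it.
oadm ca create-server-cert \
  --signer-cert=./ca.crt \
  --signer-key=./ca.key \
  --signer-serial=./ca.serial.txt \
  --hostnames='registry.example.com' \
  --cert=./registry.crt --key=./registry.key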

On Fri, Jul 8, 2016 at 9:48 PM, Luis Pabón  wrote:

> Is it possible to install oadm on a client machine instead of running it
> from the master?
>
> - Luis
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


oadm from a client

2016-07-08 Thread Luis Pabón
Is it possible to install oadm on a client machine instead of running it from 
the master?

- Luis

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Error message when exposing service with HA router

2016-07-08 Thread Clayton Coleman
The error message may be incorrect - we probably need to tone it down
because it is not actionable in all cases.  Would you please file an issue
regarding this with info about your router?

On Jul 7, 2016, at 2:47 PM, Tony Saxon  wrote:

I set up an HA router based on the docs at
https://docs.openshift.org/latest/admin_guide/high_availability.html#admin-guide-high-availability

Everything seemed to work fine, however when I exposed a deployed example I
get an error message in the status:

[root@oso-master ~]# oc status -v
In project test on server https://oso-master.libvirt:8443

http://deployment-example-test.router.default.svc.cluster.local to pod port
8080-tcp (svc/deployment-example)
  dc/deployment-example deploys istag/deployment-example:latest
deployment #1 deployed 40 hours ago - 1 pod

Errors:
  * route/deployment-example is routing traffic to svc/deployment-example,
but either the administrator has not installed a router or the router is
not selecting this route.
try: oc adm router -h
Warnings:
  * dc/deployment-example has no readiness probe to verify pods are ready
to accept traffic or ensure deployment is successful.
try: oc set probe dc/deployment-example --readiness ...

View details with 'oc describe <resource>/<name>' or list everything with
'oc get all'.


After spending a few hours trying to figure out what the issue was, I
finally just tried to test access to the service from outside through the
HA router that was set up and it seemed to work. Can anyone point me to
where I would look to determine what is actually causing the error?
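
For the readiness warning in that status output, a probe can be added along
these lines (a sketch; the health-check URL and port are assumptions about
the example app):

# Attach an HTTP GET readiness probe to the deployment config.
oc set probe dc/deployment-example --readiness --get-url=http://:8080/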

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Determine when a deployment finishes programmatically

2016-07-08 Thread Clayton Coleman
Ultimately this is a gap that we'd like to fix by a) making the API expose
information on the deployment config that reflects the last successful
deployment and b) adding CLI commands to handle that info.

On Fri, Jul 8, 2016 at 3:58 PM, Rodolfo Carvalho 
wrote:

> On Fri, Jul 8, 2016 at 9:19 PM, Alex Wauck  wrote:
>
>> On Fri, Jul 8, 2016 at 12:25 PM, Rodolfo Carvalho 
>> wrote:
>>
>>> @Alex, when using `oc get --watch`, probably you want to combine that
>>> with an alternative output format, like json, jsonpath or template.
>>> Then act upon every new output.
>>>
>>
>> That will probably be OK.  I see that using a different output format
>> tells me how many replicas I should expect, so I can just wait for
>> status.replicas to match spec.replicas.
>>
>>
>>> Or maybe I just interpreted you wrong and all you want is some
>>> programmatic way to get the current deployment state (complete, failed,
>>> running), and not *wait for it to finish*?
>>>
>>
>> Well, if I can get the current deployment state, I can wait for it to
>> finish by polling until the state is "Complete".  This is intended for use
>> in automated tests, since we don't really want to start testing until the
>> new version is fully deployed in our staging environment.
>>
>
>
>
>
> In that case, use and abuse the knowledge from OpenShift Origin tests.
>
> For instance, this is a Bash helper to wait for a registry to be deployed:
>
>
> https://github.com/openshift/origin/blob/4f6e3a896831f2ec34b9daf0ced341be382daf21/hack/util.sh#L669-L674
>
> You could do the same for any other deployment config.
>
>
>
>
> Rodolfo Carvalho | OpenShift
>
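
Adapted to a generic deployment config, a hedged bash sketch (the dc name is
a placeholder, and the exact jsonpath/template syntax may vary by client
version):

# Poll until the latest deployment of a dc reports Complete, or give up
# after ~5 minutes. The deployer records the phase in an annotation on the
# deployment's replication controller, which is named <dc>-<latestVersion>.
wait_for_deployment() {
  local dc=$1
  for _ in $(seq 1 60); do
    version=$(oc get dc/$dc -o jsonpath='{.status.latestVersion}')
    phase=$(oc get rc/$dc-$version \
      -o go-template='{{index .metadata.annotations "openshift.io/deployment.phase"}}')
    [ "$phase" = "Complete" ] && return 0
    [ "$phase" = "Failed" ] && return 1
    sleep 5
  done
  return 1
}
wait_for_deployment myapp
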
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Deploy wordpress with persistent volume

2016-07-08 Thread Ben Parees
On Fri, Jul 8, 2016 at 3:33 PM, Robson Ramos Barreto <
robson.rbarr...@gmail.com> wrote:

> Hello Ben
>
> Thank you for your time
>
> I'm using this one:
> https://github.com/rrbarreto/ose-wordpress/blob/master/wordpress-template.json
>
> I see. So the files are being replaced when the PV is mounted because the
> image with the contents is created before, right?
>

Right.


>
> > Your goal should be to have an image that is immutable w/ respect to
> application logic.
> I'm a newbie with openshift / docker, so I'd appreciate it very much if you
> could give me an example of how I can do that
>

Well, that template creates an image that is immutable w/ respect to
application logic.  The template defines a buildconfig which produces
a wordpress image that includes the wordpress code as part of the image.

If you are interested in changing the wordpress code that's included in the
image, you should change what the buildconfig is building by changing the
code in the repository referenced here:
https://github.com/rrbarreto/ose-wordpress/blob/master/wordpress-template.json#L75

That parameter is defined here with a default value:
https://github.com/rrbarreto/ose-wordpress/blob/master/wordpress-template.json#L378

Obviously you can't change the https://github.com/wordpress/wordpress.git
repository, so you'd want to:
1) fork the wordpress repo
2) make your changes in your fork
3) update the template to change the default parameter value to point to
your fork, or at least when instantiating the template, explicitly set the
parameter value to point to your fork instead of accepting the default
parameter value.





>
> Thank you
>
> Regards
>
> Robson
>
> 2016-07-08 13:18 GMT-03:00 Ben Parees :
>
>> Not sure which wordpress image/example you're using since there are two
>> in there, but if you started from this one:
>> https://github.com/openshift/origin/blob/master/examples/wordpress/template/wordpress-mysql.json,
>> then the wordpress source lives in /opt/app-root/src within the image and
>> since you're mounting a PV to that path, you're replacing the
>> /opt/app-root/src image content with the contents of your PV.
>>
>> Fundamentally that wordpress image isn't designed to have
>> /opt/app-root/src be a volume.
>>
>> If you want the source code to be on a PV so you can safely edit it and
>> have your changes persisted, you need an image that's going to, on startup,
>> copy its source from a location within the image, to a location that you're
>> mounting the PV at, and then run the source from that PV directory.
>>
>> But in general that would not be a recommended pattern.  Your goal should
>> be to have an image that is immutable w/ respect to application logic.
>>
>>
>>
>> On Fri, Jul 8, 2016 at 11:29 AM, Robson Ramos Barreto <
>> robson.rbarr...@gmail.com> wrote:
>>
>>> Hello Guys,
>>>
>>> I'm trying to deploy wordpress with persistent volume on openshift
>>> enterprise 3.2 (30 Day Self-Supported) as in the example [1] but the git
>>> files aren't being written in the NFS path. MySQL is being deployed
>>> properly in the NFS persistent volume
>>>
>>> # ls -ld /exports/wordpress/mysql/
>>> drwxrwxrwx. 5 nfsnobody nfsnobody 4096 Jul  8 10:35
>>> /exports/wordpress/mysql/
>>>
>>> # ls -lr /exports/wordpress/mysql/
>>> total 88084
>>> drwx------. 2 27 27       19 Jul  8 09:48 wordpress
>>> drwx------. 2 27 27     4096 Jul  8 09:48 performance_schema
>>> -rw-rw----. 1 27 27        2 Jul  8 10:35 mysql-1-ijptl.pid
>>> -rw-rw----. 1 27 27        2 Jul  8 09:48 mysql-1-1soui.pid
>>> drwx------. 2 27 27     4096 Jul  8 09:48 mysql
>>> -rw-rw----. 1 27 27 38797312 Jul  8 09:48 ib_logfile1
>>> -rw-rw----. 1 27 27 38797312 Jul  8 10:35 ib_logfile0
>>> -rw-rw----. 1 27 27 12582912 Jul  8 10:35 ibdata1
>>> -rw-rw----. 1 27 27       56 Jul  8 09:48 auto.cnf
>>>
>>> # ls -ld /exports/wordpress/wp/
>>> drwxrwxrwx. 2 nfsnobody nfsnobody 26 Jul  7 18:43 /exports/wordpress/wp/
>>>
>>> # ls -lr /exports/wordpress/wp/
>>> total 0
>>>
>>> $ oc get pods
>>> NAME  READY STATUS  RESTARTS   AGE
>>> mysql-1-ijptl 1/1   Running 0  44m
>>> wordpress-mysql-example-1-1clom   1/1   Running 0  41m
>>> wordpress-mysql-example-1-build   0/1   Completed   0  44m
>>>
>>> $ oc rsh wordpress-mysql-example-1-1clom
>>> sh-4.2$ pwd
>>> /opt/app-root/src
>>> sh-4.2$ df -h /opt/app-root/src
>>> Filesystem Size  Used Avail Use% Mounted on
>>> 192.168.0.9:/exports/wordpress/wp   50G   11G   40G  22%
>>> /opt/app-root/src
>>> sh-4.2$ ls
>>> sh-4.2$ echo "Create file from pod" > teste.txt
>>>
>>> # ls -lr /exports/wordpress/wp/
>>> total 4
>>> -rw-r--r--. 1 1001 nfsnobody 21 Jul  8 11:21 teste.txt
>>>
>>> # cat /exports/wordpress/wp/teste.txt
>>> Create file from pod
>>>
>>> $ oc get pvc
>>> NAME          STATUS    VOLUME      CAPACITY   ACCESSMODES   AGE
>>> claim-mysql   Bound     nfs-pv007   5Gi        RWO           19h
>>> claim-wp      Bound     nfs-pv008   2Gi        RWO,RWX       19h

Re: Deploy wordpress with persistent volume

2016-07-08 Thread Robson Ramos Barreto
Hello Ben

Thank you for your time

I'm using this one:
https://github.com/rrbarreto/ose-wordpress/blob/master/wordpress-template.json

I see. So the files are being replaced when the PV is mounted because the
image with the contents is created before, right?

> Your goal should be to have an image that is immutable w/ respect to
application logic.
I'm a newbie with openshift / docker, so I'd appreciate it very much if you
could give me an example of how I can do that

Thank you

Regards

Robson

2016-07-08 13:18 GMT-03:00 Ben Parees :

> Not sure which wordpress image/example you're using since there are two in
> there, but if you started from this one:
> https://github.com/openshift/origin/blob/master/examples/wordpress/template/wordpress-mysql.json,
> then the wordpress source lives in /opt/app-root/src within the image and
> since you're mounting a PV to that path, you're replacing the
> /opt/app-root/src image content with the contents of your PV.
>
> Fundamentally that wordpress image isn't designed to have
> /opt/app-root/src be a volume.
>
> If you want the source code to be on a PV so you can safely edit it and
> have your changes persisted, you need an image that's going to, on startup,
> copy its source from a location within the image, to a location that you're
> mounting the PV at, and then run the source from that PV directory.
>
> But in general that would not be a recommended pattern.  Your goal should
> be to have an image that is immutable w/ respect to application logic.
>
>
>
> On Fri, Jul 8, 2016 at 11:29 AM, Robson Ramos Barreto <
> robson.rbarr...@gmail.com> wrote:
>
>> Hello Guys,
>>
>> I'm trying to deploy wordpress with persistent volume on openshift
>> enterprise 3.2 (30 Day Self-Supported) as in the example [1] but the git
>> files aren't being written in the NFS path. MySQL is being deployed
>> properly in the NFS persistent volume
>>
>> # ls -ld /exports/wordpress/mysql/
>> drwxrwxrwx. 5 nfsnobody nfsnobody 4096 Jul  8 10:35
>> /exports/wordpress/mysql/
>>
>> # ls -lr /exports/wordpress/mysql/
>> total 88084
>> drwx------. 2 27 27       19 Jul  8 09:48 wordpress
>> drwx------. 2 27 27     4096 Jul  8 09:48 performance_schema
>> -rw-rw----. 1 27 27        2 Jul  8 10:35 mysql-1-ijptl.pid
>> -rw-rw----. 1 27 27        2 Jul  8 09:48 mysql-1-1soui.pid
>> drwx------. 2 27 27     4096 Jul  8 09:48 mysql
>> -rw-rw----. 1 27 27 38797312 Jul  8 09:48 ib_logfile1
>> -rw-rw----. 1 27 27 38797312 Jul  8 10:35 ib_logfile0
>> -rw-rw----. 1 27 27 12582912 Jul  8 10:35 ibdata1
>> -rw-rw----. 1 27 27       56 Jul  8 09:48 auto.cnf
>>
>> # ls -ld /exports/wordpress/wp/
>> drwxrwxrwx. 2 nfsnobody nfsnobody 26 Jul  7 18:43 /exports/wordpress/wp/
>>
>> # ls -lr /exports/wordpress/wp/
>> total 0
>>
>> $ oc get pods
>> NAME  READY STATUS  RESTARTS   AGE
>> mysql-1-ijptl 1/1   Running 0  44m
>> wordpress-mysql-example-1-1clom   1/1   Running 0  41m
>> wordpress-mysql-example-1-build   0/1   Completed   0  44m
>>
>> $ oc rsh wordpress-mysql-example-1-1clom
>> sh-4.2$ pwd
>> /opt/app-root/src
>> sh-4.2$ df -h /opt/app-root/src
>> Filesystem Size  Used Avail Use% Mounted on
>> 192.168.0.9:/exports/wordpress/wp   50G   11G   40G  22%
>> /opt/app-root/src
>> sh-4.2$ ls
>> sh-4.2$ echo "Create file from pod" > teste.txt
>>
>> # ls -lr /exports/wordpress/wp/
>> total 4
>> -rw-r--r--. 1 1001 nfsnobody 21 Jul  8 11:21 teste.txt
>>
>> # cat /exports/wordpress/wp/teste.txt
>> Create file from pod
>>
>> $ oc get pvc
>> NAME          STATUS    VOLUME      CAPACITY   ACCESSMODES   AGE
>> claim-mysql   Bound     nfs-pv007   5Gi        RWO           19h
>> claim-wp      Bound     nfs-pv008   2Gi        RWO,RWX       19h
>>
>> $ oc volumes dc --all
>> deploymentconfigs/mysql
>>   pvc/claim-mysql (allocated 5GiB) as mysql-data
>> mounted at /var/lib/mysql/data
>> deploymentconfigs/wordpress-mysql-example
>>   pvc/claim-wp (allocated 2GiB) as wordpress-mysql-example-data
>> mounted at /opt/app-root/src
>>
>> Template
>>
>>   spec: {
>>     volumes: [
>>       {
>>         name: ${APP_NAME}-data,
>>         persistentVolumeClaim: {
>>           claimName: ${CLAIM_WP_NAME}
>>         }
>>       }
>>     ],
>>     containers: [
>>       {
>>         name: ${APP_NAME},
>>         image: ${APP_NAME},
>>         ports: [
>>           {
>>             containerPort: 8080,
>>             name: wp-server
>>           }
>>         ],
>>         volumeMounts: [
>>           {
>>             name: ${APP_NAME}-data,
>>             mountPath: ${WP_PATH}
>>           }
>>         ],
>>
>>
>> Any help would be greatly appreciated
>>
>> Thank you
>>
>> [1] https://github.com/openshift/origin/tree/master/examples/wordpress/

Re: Determine when a deployment finishes programmatically

2016-07-08 Thread Alex Wauck
On Fri, Jul 8, 2016 at 12:25 PM, Rodolfo Carvalho 
wrote:

> @Alex, when using `oc get --watch`, probably you want to combine that with
> an alternative output format, like json, jsonpath or template.
> Then act upon every new output.
>

That will probably be OK.  I see that using a different output format tells
me how many replicas I should expect, so I can just wait for
status.replicas to match spec.replicas.
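
A minimal sketch of that polling approach (the rc name is a placeholder):

# Block until the rc reports as many replicas as it wants.
while true; do
  read spec status <<< "$(oc get rc/watcher-17 \
    -o jsonpath='{.spec.replicas} {.status.replicas}')"
  [ -n "$spec" ] && [ "$spec" = "$status" ] && break
  sleep 2
done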


> Or maybe I just interpreted you wrong and all you want is some
> programmatic way to get the current deployment state (complete, failed,
> running), and not *wait for it to finish*?
>

Well, if I can get the current deployment state, I can wait for it to
finish by polling until the state is "Complete".  This is intended for use
in automated tests, since we don't really want to start testing until the
new version is fully deployed in our staging environment.

-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Create selfsigned certs for securing openshift registry

2016-07-08 Thread Den Cowboy
I've created the certificate with my wildcard hostname too, and I've exposed
it. I created pusher service accounts in some projects because we are working
with an external Jenkins which builds images. Everything works fine now. Thanks

Date: Fri, 8 Jul 2016 09:05:14 -0400
Subject: Re: Create selfsigned certs for securing openshift registry
From: jdeti...@redhat.com
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com



On Jul 8, 2016 1:52 AM, "Den Cowboy"  wrote:
>
> I try to secure my openshift registry:
>
> $ oadm ca create-server-cert \
> --signer-cert=/etc/origin/master/ca.crt \
> --signer-key=/etc/origin/master/ca.key \
> --signer-serial=/etc/origin/master/ca.serial.txt \
> --hostnames='docker-registry.default.svc.cluster.local,172.30.124.220' \
> --cert=/etc/secrets/registry.crt \
> --key=/etc/secrets/registry.key
>
> Which hostnames do I have to use?
> The service IP of my docker registry of course but what then?:

Currently everything internal should be using just the service IP.

> docker-registry.default.svc.cluster.local

This would cover the created service. We have plans to eventually use the
registry service name instead of IP.

> OR/AND
> docker-registry.dev.wildcard.com

This would only be needed if you intend to expose the registry using a route
for access external to the cluster.

> Thanks

  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Determine when a deployment finishes programmatically

2016-07-08 Thread Rodolfo Carvalho
On Fri, Jul 8, 2016 at 7:10 PM, Rich Megginson  wrote:

> On 07/08/2016 11:00 AM, Rodolfo Carvalho wrote:
>
> I'm testing my proposed solution using `oc get --watch`
>
> Then I just remembered another trick I've used:
>
>
> oc logs -f dc/ruby-hello-world
>
>
> If the deployment has started, you can stream the logs and the oc call
> will block until the deployment finishes.
>
>
> I was told that, in general, it is better to poll for status with a
> timeout [1] than rely on `oc logs -f ...something...` [2].
>
> [1]
> https://github.com/openshift/origin/blob/719eb73481e0270d31f49eb53a26c333a6496943/hack/lib/cmd.sh#L132
> [2] https://github.com/openshift/origin-aggregated-logging/pull/192 - see
> the Conversation and the followups under comment 3.  You'll have to "Show 5
> comments" to see them.
>



I'd say it depends on what the end use will be.

If the only concern is about limiting the execution with a timeout, then
there's the `timeout` command (from the coreutils rpm).
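
For example (a sketch; the dc name and limit are arbitrary):

# Give the deployment at most 10 minutes, then kill the log stream.
timeout 600 oc logs -f dc/ruby-hello-world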



@Alex, when using `oc get --watch`, probably you want to combine that with
an alternative output format, like json, jsonpath or template.
Then act upon every new output.


Or maybe I just interpreted you wrong and all you want is some programmatic
way to get the current deployment state (complete, failed, running),
and not *wait for it to finish*?
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Determine when a deployment finishes programmatically

2016-07-08 Thread Alex Wauck
That's kind of helpful.  I tried --watch for one of my deployments, and I
got this result:

$ oc get rc/watcher-17 --watch
NAME DESIRED   CURRENT   AGE
watcher-17   1 1 12s
watcher-17   2 1 55s
watcher-17   2 1 55s
watcher-17   2 2 55s
watcher-17   2 2 1m
watcher-17   2 2 1m

So, if I don't know how many replicas it's supposed to have, then this
output doesn't tell me when it's done.  Not very helpful.  Also, the CLI
seems to think that the old rc has 0 replicas before the web UI does.  Kind
of strange.

On Fri, Jul 8, 2016 at 11:20 AM, Rodolfo Carvalho 
wrote:

> Hi Alex,
>
> The way our tests wait for a deployment to finish is like this:
>
>
> https://github.com/openshift/origin/blob/69bd3991df7256befa0c979b6620153c44b428c1/test/extended/util/framework.go#L484-L487
> https://github.com/openshift/origin/blob/69bd3991df7256befa0c979b6620153c44b428c1/test/extended/util/framework.go#L370-L482
>
>
> The key part there is using the watch API.
>
>
> I think there's no CLI command that would give you as much flexibility as
> the API today, but you could try to do something on top of
>
> $ oc get dc/... --watch / --watch-only
>
>
> You'd react to every new output until you see the desired state.
>
>
> Rodolfo Carvalho | OpenShift
>
> On Fri, Jul 8, 2016 at 6:02 PM, Alex Wauck  wrote:
>
>> No luck:
>> $ oc get rc -l deploymentconfig=$PROJECT,deployment=$PROJECT-12
>> $ oc describe rc/$PROJECT-12
>> Name: $PROJECT-12
>> Namespace: $PROJECT
>> Image(s):
>> 172.30.151.60:5000/$PROJECT/$PROJECT@sha256:91a0d57c0dca1a985c6c5f78ccad4c1e1c79db4a98832d2f5749326da0154c88
>> Selector: app=$PROJECT,deployment=$PROJECT-12,deploymentconfig=$PROJECT
>> Labels: app=$PROJECT,openshift.io/deployment-config.name=$PROJECT
>> Replicas: 2 current / 2 desired
>> Pods Status: 1 Running / 1 Waiting / 0 Succeeded / 0 Failed
>> No volumes.
>> Events:
>>   FirstSeen LastSeen Count From SubobjectPath Type Reason Message
>>   -  -  -  -- ---
>>   1m 1m 1 {replication-controller } Normal SuccessfulCreate Created pod:
>> $PROJECT-12-7udpy
>>   38s 38s 1 {replication-controller } Normal SuccessfulCreate Created
>> pod: $PROJECT-12-dvjp8
>>
>>
>> On Fri, Jul 8, 2016 at 9:57 AM, Clayton Coleman 
>> wrote:
>>
>>> oc get rc -l deploymentconfig=NAME,deployment=# should show you
>>>
>>> On Jul 8, 2016, at 10:07 AM, Alex Wauck  wrote:
>>>
>>> Is there any decent way to determine when a deployment has completed?
>>> I've tried `oc get deployments`, which never shows me anything, even when I
>>> have a deployment in progress.  I can go into the web UI and see a list of
>>> deployments, but I can't find any way to access that information via the
>>> CLI aside from parsing the very machine-unfriendly output of `oc describe
>>> dc/whatever`.
>>>
>>> How does the web UI get that information?  It doesn't have any special
>>> access that the CLI doesn't, does it?
>>>
>>> --
>>>
>>> Alex Wauck // DevOps Engineer
>>>
>>> *E X O S I T E*
>>> *www.exosite.com *
>>>
>>> Making Machines More Human.
>>>
>>>
>>
>>
>> --
>>
>> Alex Wauck // DevOps Engineer
>>
>> *E X O S I T E*
>> *www.exosite.com *
>>
>> Making Machines More Human.
>>
>>
>>
>>
>


-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Determine when a deployment finishes programmatically

2016-07-08 Thread Rodolfo Carvalho
Hi Alex,

The way our tests wait for a deployment to finish is like this:

https://github.com/openshift/origin/blob/69bd3991df7256befa0c979b6620153c44b428c1/test/extended/util/framework.go#L484-L487
https://github.com/openshift/origin/blob/69bd3991df7256befa0c979b6620153c44b428c1/test/extended/util/framework.go#L370-L482


The key part there is using the watch API.


I think there's no CLI command that would give you as much flexibility as
the API today, but you could try to do something on top of

$ oc get dc/... --watch / --watch-only


You'd react to every new output until you see the desired state.
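
For example, a hedged sketch (the rc name and desired count are assumptions,
and --watch support for custom output may vary by client version):

# Stream replica counts for the new deployment's rc and stop once it
# reaches 2; oc exits on SIGPIPE after the loop breaks.
oc get rc/myapp-12 --watch -o jsonpath='{.status.replicas}{"\n"}' |
while read n; do
  [ "$n" = "2" ] && break
done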


Rodolfo Carvalho | OpenShift

On Fri, Jul 8, 2016 at 6:02 PM, Alex Wauck  wrote:

> No luck:
> $ oc get rc -l deploymentconfig=$PROJECT,deployment=$PROJECT-12
> $ oc describe rc/$PROJECT-12
> Name: $PROJECT-12
> Namespace: $PROJECT
> Image(s):
> 172.30.151.60:5000/$PROJECT/$PROJECT@sha256:91a0d57c0dca1a985c6c5f78ccad4c1e1c79db4a98832d2f5749326da0154c88
> Selector: app=$PROJECT,deployment=$PROJECT-12,deploymentconfig=$PROJECT
> Labels: app=$PROJECT,openshift.io/deployment-config.name=$PROJECT
> Replicas: 2 current / 2 desired
> Pods Status: 1 Running / 1 Waiting / 0 Succeeded / 0 Failed
> No volumes.
> Events:
>   FirstSeen LastSeen Count From SubobjectPath Type Reason Message
>   -  -  -  -- ---
>   1m 1m 1 {replication-controller } Normal SuccessfulCreate Created pod:
> $PROJECT-12-7udpy
>   38s 38s 1 {replication-controller } Normal SuccessfulCreate Created
> pod: $PROJECT-12-dvjp8
>
>
> On Fri, Jul 8, 2016 at 9:57 AM, Clayton Coleman 
> wrote:
>
>> oc get rc -l deploymentconfig=NAME,deployment=# should show you
>>
>> On Jul 8, 2016, at 10:07 AM, Alex Wauck  wrote:
>>
>> Is there any decent way to determine when a deployment has completed?
>> I've tried `oc get deployments`, which never shows me anything, even when I
>> have a deployment in progress.  I can go into the web UI and see a list of
>> deployments, but I can't find any way to access that information via the
>> CLI aside from parsing the very machine-unfriendly output of `oc describe
>> dc/whatever`.
>>
>> How does the web UI get that information?  It doesn't have any special
>> access that the CLI doesn't, does it?
>>
>> --
>>
>> Alex Wauck // DevOps Engineer
>>
>> *E X O S I T E*
>> *www.exosite.com *
>>
>> Making Machines More Human.
>>
>>
>>
>
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com *
>
> Making Machines More Human.
>
>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Deploy wordpress with persistent volume

2016-07-08 Thread Ben Parees
Not sure which wordpress image/example you're using since there are two in
there, but if you started from this one:
https://github.com/openshift/origin/blob/master/examples/wordpress/template/wordpress-mysql.json,
then the wordpress source lives in /opt/app-root/src within the image and
since you're mounting a PV to that path, you're replacing the
/opt/app-root/src image content with the contents of your PV.

Fundamentally that wordpress image isn't designed to have /opt/app-root/src
be a volume.

If you want the source code to be on a PV so you can safely edit it and
have your changes persisted, you need an image that's going to, on startup,
copy its source from a location within the image, to a location that you're
mounting the PV at, and then run the source from that PV directory.

But in general that would not be a recommended pattern.  Your goal should
be to have an image that is immutable w/ respect to application logic.



On Fri, Jul 8, 2016 at 11:29 AM, Robson Ramos Barreto <
robson.rbarr...@gmail.com> wrote:

> Hello Guys,
>
> I'm trying to deploy wordpress with persistent volume on openshift
> enterprise 3.2 (30 Day Self-Supported) as in the example [1] but the git
> files aren't being written in the NFS path. MySQL is being deployed
> properly in the NFS persistent volume
>
> # ls -ld /exports/wordpress/mysql/
> drwxrwxrwx. 5 nfsnobody nfsnobody 4096 Jul  8 10:35
> /exports/wordpress/mysql/
>
> # ls -lr /exports/wordpress/mysql/
> total 88084
> drwx------. 2 27 27       19 Jul  8 09:48 wordpress
> drwx------. 2 27 27     4096 Jul  8 09:48 performance_schema
> -rw-rw----. 1 27 27        2 Jul  8 10:35 mysql-1-ijptl.pid
> -rw-rw----. 1 27 27        2 Jul  8 09:48 mysql-1-1soui.pid
> drwx------. 2 27 27     4096 Jul  8 09:48 mysql
> -rw-rw----. 1 27 27 38797312 Jul  8 09:48 ib_logfile1
> -rw-rw----. 1 27 27 38797312 Jul  8 10:35 ib_logfile0
> -rw-rw----. 1 27 27 12582912 Jul  8 10:35 ibdata1
> -rw-rw----. 1 27 27       56 Jul  8 09:48 auto.cnf
>
> # ls -ld /exports/wordpress/wp/
> drwxrwxrwx. 2 nfsnobody nfsnobody 26 Jul  7 18:43 /exports/wordpress/wp/
>
> # ls -lr /exports/wordpress/wp/
> total 0
>
> $ oc get pods
> NAME  READY STATUS  RESTARTS   AGE
> mysql-1-ijptl 1/1   Running 0  44m
> wordpress-mysql-example-1-1clom   1/1   Running 0  41m
> wordpress-mysql-example-1-build   0/1   Completed   0  44m
>
> $ oc rsh wordpress-mysql-example-1-1clom
> sh-4.2$ pwd
> /opt/app-root/src
> sh-4.2$ df -h /opt/app-root/src
> Filesystem Size  Used Avail Use% Mounted on
> 192.168.0.9:/exports/wordpress/wp   50G   11G   40G  22% /opt/app-root/src
> sh-4.2$ ls
> sh-4.2$ echo "Create file from pod" > teste.txt
>
> # ls -lr /exports/wordpress/wp/
> total 4
> -rw-r--r--. 1 1001 nfsnobody 21 Jul  8 11:21 teste.txt
>
> # cat /exports/wordpress/wp/teste.txt
> Create file from pod
>
> $ oc get pvc
> NAME          STATUS    VOLUME      CAPACITY   ACCESSMODES   AGE
> claim-mysql   Bound     nfs-pv007   5Gi        RWO           19h
> claim-wp      Bound     nfs-pv008   2Gi        RWO,RWX       19h
>
> $ oc volumes dc --all
> deploymentconfigs/mysql
>   pvc/claim-mysql (allocated 5GiB) as mysql-data
> mounted at /var/lib/mysql/data
> deploymentconfigs/wordpress-mysql-example
>   pvc/claim-wp (allocated 2GiB) as wordpress-mysql-example-data
> mounted at /opt/app-root/src
>
> Template
>
>   spec: {
>     volumes: [
>       {
>         name: ${APP_NAME}-data,
>         persistentVolumeClaim: {
>           claimName: ${CLAIM_WP_NAME}
>         }
>       }
>     ],
>     containers: [
>       {
>         name: ${APP_NAME},
>         image: ${APP_NAME},
>         ports: [
>           {
>             containerPort: 8080,
>             name: wp-server
>           }
>         ],
>         volumeMounts: [
>           {
>             name: ${APP_NAME}-data,
>             mountPath: ${WP_PATH}
>           }
>         ],
>
>
> Any help would be greatly appreciated
>
> Thank you
>
> [1] https://github.com/openshift/origin/tree/master/examples/wordpress/
>
> Regards
>
> Robson
>
>


-- 
Ben Parees | OpenShift
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Determine when a deployment finishes programmatically

2016-07-08 Thread Clayton Coleman
oh, you'll need to use -l openshift.io/deployment-config.name=$PROJECT

On Fri, Jul 8, 2016 at 12:02 PM, Alex Wauck  wrote:

> No luck:
> $ oc get rc -l deploymentconfig=$PROJECT,deployment=$PROJECT-12
> $ oc describe rc/$PROJECT-12
> Name: $PROJECT-12
> Namespace: $PROJECT
> Image(s):
> 172.30.151.60:5000/$PROJECT/$PROJECT@sha256:91a0d57c0dca1a985c6c5f78ccad4c1e1c79db4a98832d2f5749326da0154c88
> Selector: app=$PROJECT,deployment=$PROJECT-12,deploymentconfig=$PROJECT
> Labels: app=$PROJECT,openshift.io/deployment-config.name=$PROJECT
> Replicas: 2 current / 2 desired
> Pods Status: 1 Running / 1 Waiting / 0 Succeeded / 0 Failed
> No volumes.
> Events:
>   FirstSeen LastSeen Count From SubobjectPath Type Reason Message
>   -  -  -  -- ---
>   1m 1m 1 {replication-controller } Normal SuccessfulCreate Created pod:
> $PROJECT-12-7udpy
>   38s 38s 1 {replication-controller } Normal SuccessfulCreate Created
> pod: $PROJECT-12-dvjp8
>
>
> On Fri, Jul 8, 2016 at 9:57 AM, Clayton Coleman 
> wrote:
>
>> oc get rc -l deploymentconfig=NAME,deployment=# should show you
>>
>> On Jul 8, 2016, at 10:07 AM, Alex Wauck  wrote:
>>
>> Is there any decent way to determine when a deployment has completed?
>> I've tried `oc get deployments`, which never shows me anything, even when I
>> have a deployment in progress.  I can go into the web UI and see a list of
>> deployments, but I can't find any way to access that information via the
>> CLI aside from parsing the very machine-unfriendly output of `oc describe
>> dc/whatever`.
>>
>> How does the web UI get that information?  It doesn't have any special
>> access that the CLI doesn't, does it?
>>
>> --
>>
>> Alex Wauck // DevOps Engineer
>>
>> *E X O S I T E*
>> *www.exosite.com *
>>
>> Making Machines More Human.
>>
>>
>
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com *
>
> Making Machines More Human.
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Determine when a deployment finishes programmatically

2016-07-08 Thread Alex Wauck
OK, that gets me a list of replication controllers.  I then have to dig
through those, find the latest two, and then check for the second-latest
one going to zero?
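
One hedged way to do that digging, assuming your client supports --sort-by
(the dc name is a placeholder):

# List the dc's replication controllers oldest-first; the last two lines
# are the previous and current deployments.
oc get rc -l openshift.io/deployment-config.name=myapp \
  --sort-by=.metadata.creationTimestamp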

On Fri, Jul 8, 2016 at 11:11 AM, Clayton Coleman 
wrote:

> oh, you'll need to use -l openshift.io/deployment-config.name=$PROJECT
>
> On Fri, Jul 8, 2016 at 12:02 PM, Alex Wauck  wrote:
>
>> No luck:
>> $ oc get rc -l deploymentconfig=$PROJECT,deployment=$PROJECT-12
>> $ oc describe rc/$PROJECT-12
>> Name: $PROJECT-12
>> Namespace: $PROJECT
>> Image(s):
>> 172.30.151.60:5000/$PROJECT/$PROJECT@sha256:91a0d57c0dca1a985c6c5f78ccad4c1e1c79db4a98832d2f5749326da0154c88
>> Selector: app=$PROJECT,deployment=$PROJECT-12,deploymentconfig=$PROJECT
>> Labels: app=$PROJECT,openshift.io/deployment-config.name=$PROJECT
>> Replicas: 2 current / 2 desired
>> Pods Status: 1 Running / 1 Waiting / 0 Succeeded / 0 Failed
>> No volumes.
>> Events:
>>   FirstSeen LastSeen Count From SubobjectPath Type Reason Message
>>   -  -  -  -- ---
>>   1m 1m 1 {replication-controller } Normal SuccessfulCreate Created pod:
>> $PROJECT-12-7udpy
>>   38s 38s 1 {replication-controller } Normal SuccessfulCreate Created
>> pod: $PROJECT-12-dvjp8
>>
>>
>> On Fri, Jul 8, 2016 at 9:57 AM, Clayton Coleman 
>> wrote:
>>
>>> oc get rc -l deploymentconfig=NAME,deployment=# should show you
>>>
>>> On Jul 8, 2016, at 10:07 AM, Alex Wauck  wrote:
>>>
>>> Is there any decent way to determine when a deployment has completed?
>>> I've tried `oc get deployments`, which never shows me anything, even when I
>>> have a deployment in progress.  I can go into the web UI and see a list of
>>> deployments, but I can't find any way to access that information via the
>>> CLI aside from parsing the very machine-unfriendly output of `oc describe
>>> dc/whatever`.
>>>
>>> How does the web UI get that information?  It doesn't have any special
>>> access that the CLI doesn't, does it?
>>>
>>> --
>>>
>>> Alex Wauck // DevOps Engineer
>>>
>>> *E X O S I T E*
>>> *www.exosite.com *
>>>
>>> Making Machines More Human.
>>>
>>>
>>
>>
>> --
>>
>> Alex Wauck // DevOps Engineer
>>
>> *E X O S I T E*
>> *www.exosite.com *
>>
>> Making Machines More Human.
>>
>>
>


-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Deploy wordpress with persistent volume

2016-07-08 Thread Robson Ramos Barreto
Hello Guys,

I'm trying to deploy wordpress with persistent volume on openshift
enterprise 3.2 (30 Day Self-Supported) as in the example [1] but the git
files aren't being written in the NFS path. MySQL is being deployed
properly in the NFS persistent volume

# ls -ld /exports/wordpress/mysql/
drwxrwxrwx. 5 nfsnobody nfsnobody 4096 Jul  8 10:35
/exports/wordpress/mysql/

# ls -lr /exports/wordpress/mysql/
total 88084
drwx------. 2 27 27       19 Jul  8 09:48 wordpress
drwx------. 2 27 27     4096 Jul  8 09:48 performance_schema
-rw-rw----. 1 27 27        2 Jul  8 10:35 mysql-1-ijptl.pid
-rw-rw----. 1 27 27        2 Jul  8 09:48 mysql-1-1soui.pid
drwx------. 2 27 27     4096 Jul  8 09:48 mysql
-rw-rw----. 1 27 27 38797312 Jul  8 09:48 ib_logfile1
-rw-rw----. 1 27 27 38797312 Jul  8 10:35 ib_logfile0
-rw-rw----. 1 27 27 12582912 Jul  8 10:35 ibdata1
-rw-rw----. 1 27 27       56 Jul  8 09:48 auto.cnf

# ls -ld /exports/wordpress/wp/
drwxrwxrwx. 2 nfsnobody nfsnobody 26 Jul  7 18:43 /exports/wordpress/wp/

# ls -lr /exports/wordpress/wp/
total 0

$ oc get pods
NAME  READY STATUS  RESTARTS   AGE
mysql-1-ijptl 1/1   Running 0  44m
wordpress-mysql-example-1-1clom   1/1   Running 0  41m
wordpress-mysql-example-1-build   0/1   Completed   0  44m

$ oc rsh wordpress-mysql-example-1-1clom
sh-4.2$ pwd
/opt/app-root/src
sh-4.2$ df -h /opt/app-root/src
Filesystem Size  Used Avail Use% Mounted on
192.168.0.9:/exports/wordpress/wp   50G   11G   40G  22% /opt/app-root/src
sh-4.2$ ls
sh-4.2$ echo "Create file from pod" > teste.txt

# ls -lr /exports/wordpress/wp/
total 4
-rw-r--r--. 1 1001 nfsnobody 21 Jul  8 11:21 teste.txt

# cat /exports/wordpress/wp/teste.txt
Create file from pod

$ oc get pvc
NAME          STATUS    VOLUME      CAPACITY   ACCESSMODES   AGE
claim-mysql   Bound     nfs-pv007   5Gi        RWO           19h
claim-wp      Bound     nfs-pv008   2Gi        RWO,RWX       19h

$ oc volumes dc --all
deploymentconfigs/mysql
  pvc/claim-mysql (allocated 5GiB) as mysql-data
mounted at /var/lib/mysql/data
deploymentconfigs/wordpress-mysql-example
  pvc/claim-wp (allocated 2GiB) as wordpress-mysql-example-data
mounted at /opt/app-root/src

Template

  spec: {
    volumes: [
      {
        name: ${APP_NAME}-data,
        persistentVolumeClaim: {
          claimName: ${CLAIM_WP_NAME}
        }
      }
    ],
    containers: [
      {
        name: ${APP_NAME},
        image: ${APP_NAME},
        ports: [
          {
            containerPort: 8080,
            name: wp-server
          }
        ],
        volumeMounts: [
          {
            name: ${APP_NAME}-data,
            mountPath: ${WP_PATH}
          }
        ],


Any help would be greatly appreciated

Thank you

[1] https://github.com/openshift/origin/tree/master/examples/wordpress/

Regards

Robson
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Determine when a deployment finishes programmatically

2016-07-08 Thread Clayton Coleman
oc get rc -l deploymentconfig=NAME,deployment=# should show you

On Jul 8, 2016, at 10:07 AM, Alex Wauck  wrote:

Is there any decent way to determine when a deployment has completed?  I've
tried `oc get deployments`, which never shows me anything, even when I have
a deployment in progress.  I can go into the web UI and see a list of
deployments, but I can't find any way to access that information via the
CLI aside from parsing the very machine-unfriendly output of `oc describe
dc/whatever`.

How does the web UI get that information?  It doesn't have any special
access that the CLI doesn't, does it?

-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Determine when a deployment finishes programmatically

2016-07-08 Thread Alex Wauck
Is there any decent way to determine when a deployment has completed?  I've
tried `oc get deployments`, which never shows me anything, even when I have
a deployment in progress.  I can go into the web UI and see a list of
deployments, but I can't find any way to access that information via the
CLI aside from parsing the very machine-unfriendly output of `oc describe
dc/whatever`.

How does the web UI get that information?  It doesn't have any special
access that the CLI doesn't, does it?

-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logs from container app stored to local disk on nodes

2016-07-08 Thread Ronan O Keeffe
Cheers Luke, 

chcon -Rt svirt_sandbox_file_t / did the trick. 

Appreciate yours and Clayton's replies.

Ronan. 
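
For reference, a condensed sketch of the sequence that worked here (the
project, dc name, and log path are assumptions based on the thread):

# 1. Let the pod's service account use host volumes (privileged, or a
#    narrower SCC such as hostaccess/hostmount-anyuid).
oadm policy add-scc-to-user privileged -n staging -z default

# 2. Label the host directory so containers may write to it.
chcon -Rt svirt_sandbox_file_t /var/log/webapp

# 3. Mount the host directory into the container.
oc volume dc/webapp --add --name=logging --type=hostPath \
  --path=/var/log/webapp --mount-path=/var/log/webapp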

> On 6 Jul 2016, at 18:39, Luke Meyer  wrote:
> 
> You may need to modify the file permissions and/or selinux context for the 
> volume so that the container user can write to it. Under the default SCC the 
> container user/group are randomized. Under the privileged SCC it will 
> probably be whatever user the Dockerfile indicates (and you can choose an 
> selinux context in the pod security context if needed).
> 
> On Wed, Jul 6, 2016 at 3:49 AM, Ronan O Keeffe  wrote:
> Hi Clayton, 
> 
> Much appreciated. I have run the following: 
> 
> oadm policy add-scc-to-user privileged -n staging -z default (It's a test box 
> and we're deploying our own images, I can edit the scc to hostaccess or 
> hostmount-anyuid later). 
> 
> I have then run 
> oc volume dc/ --add --name=logging --type=hostPath 
> --mount-path=/var/log/
> 
> The app deploys alright and is up and running successfully, but there is
> nothing logging to the node.
> 
> In case it matters I created the log storage by adding a 10Gb disk to the VM 
> the node lives on, created an xfs partition on it and mounted it in the 
> folder that the webapps should log to. 
> 
> Any pointers would be appreciated. 
> 
> Regards, 
> Ronan. 
> 
>> On 5 Jul 2016, at 01:44, Clayton Coleman  wrote:
>> 
>> In the future there is an ongoing design to have a specific "log volume" 
>> defined on a per pod basis that will be respected by the system.
>> 
>> For now, the correct way is to use hostPath, but there's a catch - security. 
>>  The reason why it failed to deploy is because users have to be granted the 
>> permission to access the host (for security reasons).  You'll want to grant 
>> access to an SCC that allows host volumes to your service account (do "oc 
>> get scc" to see the full list, then "oadm policy add-scc-to-user NAME -z 
>> default" to grant access to that SCC to a named service account).
>> 
>> On Mon, Jul 4, 2016 at 5:26 AM, Ronan O Keeffe  wrote:
>> Hi, 
>> 
>> Just wondering is it possible to have an app living in a container log back 
>> to the box the container lives on. 
>> 
>> Our test set up is as follows: 
>> 
>> All web apps identical
>> webapp1 > node1
>> webapp2 > node2
>> webapp3 > node3
>> webapp4 > node4
>> 
>> Ideally we'd like logs from the webapp inside a container on node1 to log to 
>> a dedicated logging partition on the host OS of node1 and so on for the 
>> other nodes. 
>> Ultimately we'd like the logs to persist beyond the life of the container I 
>> suppose. 
>> 
>> We've tried oc edit dc/webapp and specifying a volume to log to
>> oc volume dc/ --add --name=v1 --type=hostPath 
>> --path=/var/log/
>> 
>> And then specifying that the webapp log to the above partition. 
>> 
>> However the webapp fails to deploy. I'll need to dig in to why that is, but 
>> in the meantime is this vaguely the correct way to go about logging?
>> 
>> Cheers, 
>> Ronan. 
>> 
>> 
>> P.S. I went to thank Scott Dodson and for help with a previous matter 
>> recently but for some reason the mail has not been received on the list. 
>> 
>> 
>> 
> 
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Create selfsigned certs for securing openshift registry

2016-07-08 Thread Jason DeTiberus
On Jul 8, 2016 1:52 AM, "Den Cowboy"  wrote:
>
> I try to secure my openshift registry:
>
> $ oadm ca create-server-cert \
> --signer-cert=/etc/origin/master/ca.crt \
> --signer-key=/etc/origin/master/ca.key \
> --signer-serial=/etc/origin/master/ca.serial.txt \
> --hostnames='docker-registry.default.svc.cluster.local,172.30.124.220' \
> --cert=/etc/secrets/registry.crt \
> --key=/etc/secrets/registry.key
>
>
> Which hostnames do I have to use?
> The service IP of my docker registry of course but what then?:

Currently everything internal should be using just the service IP.

>
> docker-registry.default.svc.cluster.local

This would cover the created service. We have plans to eventually use the
registry service name instead of IP.

> OR/AND
> docker-registry.dev.wildcard.com

This would only be needed if you intend to expose the registry using a
route for access external to the cluster.
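
Putting those answers together, a certificate covering both internal and
route-based access might be generated like this (a sketch; the wildcard
hostname is an assumption):

oadm ca create-server-cert \
  --signer-cert=/etc/origin/master/ca.crt \
  --signer-key=/etc/origin/master/ca.key \
  --signer-serial=/etc/origin/master/ca.serial.txt \
  --hostnames='docker-registry.default.svc.cluster.local,172.30.124.220,docker-registry.dev.wildcard.com' \
  --cert=/etc/secrets/registry.crt \
  --key=/etc/secrets/registry.key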

>
> Thanks
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users