Re: configuring periodic import of images

2016-08-16 Thread Tony Saxon
So I got it working, and I'm trying to figure out why what I did worked.

I took a closer look at the packet captures where I saw what I thought was
the polling. It wasn't actually useful traffic, but rather the master
thinking it still had a connection open to the registry server. Checking
netstat on both machines showed an open connection on the master, but
nothing on the registry server. I then restarted the master while taking a
capture. This showed it trying to close the open connection and just
getting RST packets back. After restarting it I didn't see any traffic to
the docker registry at all. I took a closer look at my master config file.
Originally it did not have an imagePolicyConfig section at all; after the
information Clayton gave earlier about that section, I added it. Since I
still wasn't seeing the traffic, I changed
scheduledImageImportMinimumIntervalSeconds to 30 instead of 900. After
restarting again, it pulled down the image without issue.
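
For anyone following along, a minimal sketch of what that section of the
master config ended up looking like (the file path is the usual default,
/etc/origin/master/master-config.yaml, and the 30-second interval was just
for testing; the other values are the defaults quoted elsewhere in this
thread):

  imagePolicyConfig:
    disableScheduledImport: false
    maxImagesBulkImportedPerRepository: 5
    maxScheduledImageImportsPerMinute: 60
    scheduledImageImportMinimumIntervalSeconds: 30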

As a final test I changed the code, rebuilt the app and docker image and
repushed it to the repository. Within 30 seconds it was detected, pulled
and started deploying.
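
For reference, that test loop was roughly this (the image name comes from
the oc tag command further down the thread; the build context path is
illustrative):

  docker build -t docker-lab.example.com:5000/testwebapp:latest .
  docker push docker-lab.example.com:5000/testwebapp:latest
  # then watch the imagestream and the deployment pick up the new sha
  oc describe is testwebapp
  oc get pods -w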

On Tue, Aug 16, 2016 at 4:36 PM, Tony Saxon  wrote:

> Ok, I ran a packet capture on both the master and the docker registry and
> I see periodic traffic from the master to the docker container on port
> 5000. It's about every 30 seconds or so, so I'm assuming that it's the
> periodic polling. It doesn't appear that it's picking up a difference
> between the registry and the imagestream.
>
> However if I perform another 'oc tag' command, it updates the imagestream
> and pulls down the latest tag and deploys to the application.
>
> On Tue, Aug 16, 2016 at 3:29 PM, Clayton Coleman 
> wrote:
>
>> Yes, scheduled=true will poll the upstream registry.
>>
>> On Tue, Aug 16, 2016 at 1:59 PM, Tony Saxon  wrote:
>>
>>> Ok, that makes sense. But am I understanding correctly that when you use
>>> scheduled=true it should periodically poll the source registry and
>>> pull the latest tag configured if it's newer than what is currently pulled?
>>>
>>> On Tue, Aug 16, 2016 at 1:48 PM, Clayton Coleman 
>>> wrote:
>>>


 On Aug 16, 2016, at 1:40 PM, Tony Saxon  wrote:

 Can someone tell me if I'm understanding the difference between
 alias=true and scheduled=true for tagging imagestreams as documented at
 https://docs.openshift.org/latest/dev_guide/managing_images.
 html#adding-tag ?

 The way I read it is that alias true will track the source image tag
 and update the destination when the source is updated, whereas scheduled
 does the same thing but only on a periodic basis. Am I off on that?


 Alias simply points to the destination, and will *not* update when the
 destination changes.  I.e. "a:latest" points to "b:1.0" - updating "b:1.0"
 only triggers deployments based on b:1.0.  A deployment created from
 "a:latest" will use "b:1.0" in its pods.

 Alias is really for when you want to drive OpenShift based on external
 versioned tags in another repo (my:latest updating from MySQL:5.1,
 MySQL:5.2, etc).



 On Mon, Aug 15, 2016 at 4:04 PM, Tony Saxon 
 wrote:

> I'm using a registry deployed from a docker compose:
>
> registry:
>   restart: always
>   image: registry:2.2.1
>   ports:
> - 5000:5000
>   environment:
> REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
> REGISTRY_HTTP_TLS_KEY: /certs/domain.key
> REGISTRY_AUTH: htpasswd
> REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
> REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
>   volumes:
> - /var/docker/registry/data:/var/lib/registry
> - /var/docker/registry/certs:/certs
> - /var/docker/registry/auth:/auth
>
>
> It was originally using "image:2" but that was the one that I had
> problems even importing the docker image due to the schema v1/v2 issue.
> After changing it to 2.2.1 and repushing the image it worked.
>
> On Mon, Aug 15, 2016 at 4:00 PM, Clayton Coleman 
> wrote:
>
>> Did a test, but the import looks like it works correctly for hub
>> images.  In this case are you using a regular Docker registry, the
>> integrated registry, or a third party Docker registry?
>>
>> On Mon, Aug 15, 2016 at 3:34 PM, Clayton Coleman > > wrote:
>>
>>> It's currently 15 minutes:
>>>
>>> imagePolicyConfig:
>>>   disableScheduledImport: false
>>>   maxImagesBulkImportedPerRepository: 5
>>>   maxScheduledImageImportsPerMinute: 60
>>>   scheduledImageImportMinimumIntervalSeconds: 900
>>>
>>> Will take a look and see if I can recreate this issue.
>>>
>>>
>>> On 

Re: configuring periodic import of images

2016-08-16 Thread Tony Saxon
Ok, I ran a packet capture on both the master and the docker registry and I
see periodic traffic from the master to the docker container on port 5000.
It's about every 30 seconds or so, so I'm assuming that it's the periodic
polling. It doesn't appear that it's picking up a difference between the
registry and the imagestream.

However if I perform another 'oc tag' command, it updates the imagestream
and pulls down the latest tag and deploys to the application.

On Tue, Aug 16, 2016 at 3:29 PM, Clayton Coleman 
wrote:

> Yes, scheduled=true will poll the upstream registry.
>
> On Tue, Aug 16, 2016 at 1:59 PM, Tony Saxon  wrote:
>
>> Ok, that makes sense. But am I understanding correctly that when you use
>> scheduled=true it should periodically poll the source registry and
>> pull the latest tag configured if it's newer than what is currently pulled?
>>
>> On Tue, Aug 16, 2016 at 1:48 PM, Clayton Coleman 
>> wrote:
>>
>>>
>>>
>>> On Aug 16, 2016, at 1:40 PM, Tony Saxon  wrote:
>>>
>>> Can someone tell me if I'm understanding the difference between
>>> alias=true and scheduled=true for tagging imagestreams as documented at
>>> https://docs.openshift.org/latest/dev_guide/managing_images.
>>> html#adding-tag ?
>>>
>>> The way I read it is that alias true will track the source image tag and
>>> update the destination when the source is updated, whereas scheduled does
>>> the same thing but only on a periodic basis. Am I off on that?
>>>
>>>
>>> Alias simply points to the destination, and will *not* update when the
>>> destination changes.  I.e. "a:latest" points to "b:1.0" - updating "b:1.0"
>>> only triggers deployments based on b:1.0.  A deployment created from
>>> "a:latest" will use "b:1.0" in its pods.
>>>
>>> Alias is really for when you want to drive OpenShift based on external
>>> versioned tags in another repo (my:latest updating from MySQL:5.1,
>>> MySQL:5.2, etc).
>>>
>>>
>>>
>>> On Mon, Aug 15, 2016 at 4:04 PM, Tony Saxon 
>>> wrote:
>>>
 I'm using a registry deployed from a docker compose:

 registry:
   restart: always
   image: registry:2.2.1
   ports:
 - 5000:5000
   environment:
 REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
 REGISTRY_HTTP_TLS_KEY: /certs/domain.key
 REGISTRY_AUTH: htpasswd
 REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
 REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
   volumes:
 - /var/docker/registry/data:/var/lib/registry
 - /var/docker/registry/certs:/certs
 - /var/docker/registry/auth:/auth


 It was originally using "image:2" but that was the one that I had
 problems even importing the docker image due to the schema v1/v2 issue.
 After changing it to 2.2.1 and repushing the image it worked.

 On Mon, Aug 15, 2016 at 4:00 PM, Clayton Coleman 
 wrote:

> Did a test, but the import looks like it works correctly for hub
> images.  In this case are you using a regular Docker registry, the
> integrated registry, or a third party Docker registry?
>
> On Mon, Aug 15, 2016 at 3:34 PM, Clayton Coleman 
> wrote:
>
>> It's currently 15 minutes:
>>
>> imagePolicyConfig:
>>   disableScheduledImport: false
>>   maxImagesBulkImportedPerRepository: 5
>>   maxScheduledImageImportsPerMinute: 60
>>   scheduledImageImportMinimumIntervalSeconds: 900
>>
>> Will take a look and see if I can recreate this issue.
>>
>>
>> On Mon, Aug 15, 2016 at 2:33 PM, Tony Saxon 
>> wrote:
>>
>>>
>>> So I've found that if I tag the imagestream manually, that it is
>>> able to pull down the latest changes and deploys them to my app:
>>>
>>> oc tag --source=docker --scheduled=true
>>> docker-lab.example.com:5000/testwebapp:latest testwebapp:latest
>>>
>>> [root@os-master ~]# oc describe is
>>> Name:   testwebapp
>>> Created:4 days ago
>>> Labels: 
>>> Annotations:        openshift.io/image.dockerRepositoryCheck=2016-08-15T17:49:36Z
>>> Docker Pull Spec:   172.30.11.167:5000/testwebapp/testwebapp
>>>
>>> Tag     Spec                                            Created            PullSpec                                                        Image
>>> latest  docker-lab.example.com:5000/testwebapp:latest * 38 minutes ago     docker-lab.example.com:5000/testwebapp@sha256:dd75ff58184489...
>>>                                                         About an hour ago  docker-lab.example.com:5000/testwebapp@sha256:2a4f9e1262e377...
>>>                                                         4 days ago         docker-lab.example.com:5000/testwebapp@sha256:c1c8c6c3e1c672...
>>>

Re: configuring periodic import of images

2016-08-16 Thread Clayton Coleman
Yes, scheduled=true will poll the upstream registry.
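
For reference, the two forms being discussed look roughly like this (image
names are taken from elsewhere in the thread; the "stable" tag is purely
illustrative):

  # scheduled: periodically re-import the external tag, subject to the
  # master's scheduledImageImportMinimumIntervalSeconds setting
  oc tag --source=docker --scheduled=true \
      docker-lab.example.com:5000/testwebapp:latest testwebapp:latest

  # alias: testwebapp:stable just points at testwebapp:latest
  oc tag --alias=true testwebapp:latest testwebapp:stable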

On Tue, Aug 16, 2016 at 1:59 PM, Tony Saxon  wrote:

> Ok, that makes sense. But am I understanding correctly that when you use
> scheduled=true it should periodically poll the source registry and
> pull the latest tag configured if it's newer than what is currently pulled?
>
> On Tue, Aug 16, 2016 at 1:48 PM, Clayton Coleman 
> wrote:
>
>>
>>
>> On Aug 16, 2016, at 1:40 PM, Tony Saxon  wrote:
>>
>> Can someone tell me if I'm understanding the difference between
>> alias=true and scheduled=true for tagging imagestreams as documented at
>> https://docs.openshift.org/latest/dev_guide/managing_images.
>> html#adding-tag ?
>>
>> The way I read it is that alias true will track the source image tag and
>> update the destination when the source is updated, whereas scheduled does
>> the same thing but only on a periodic basis. Am I off on that?
>>
>>
>> Alias simply points to the destination, and will *not* update when the
>> destination changes.  I.e. "a:latest" points to "b:1.0" - updating "b:1.0"
>> only triggers deployments based on b:1.0.  A deployment created from
>> "a:latest" will use "b:1.0" in its pods.
>>
>> Alias is really for when you want to drive OpenShift based on external
>> versioned tags in another repo (my:latest updating from MySQL:5.1,
>> MySQL:5.2, etc).
>>
>>
>>
>> On Mon, Aug 15, 2016 at 4:04 PM, Tony Saxon  wrote:
>>
>>> I'm using a registry deployed from a docker compose:
>>>
>>> registry:
>>>   restart: always
>>>   image: registry:2.2.1
>>>   ports:
>>> - 5000:5000
>>>   environment:
>>> REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
>>> REGISTRY_HTTP_TLS_KEY: /certs/domain.key
>>> REGISTRY_AUTH: htpasswd
>>> REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
>>> REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
>>>   volumes:
>>> - /var/docker/registry/data:/var/lib/registry
>>> - /var/docker/registry/certs:/certs
>>> - /var/docker/registry/auth:/auth
>>>
>>>
>>> It was originally using "image:2" but that was the one that I had
>>> problems even importing the docker image due to the schema v1/v2 issue.
>>> After changing it to 2.2.1 and repushing the image it worked.
>>>
>>> On Mon, Aug 15, 2016 at 4:00 PM, Clayton Coleman 
>>> wrote:
>>>
 Did a test, but the import looks like it works correctly for hub
 images.  In this case are you using a regular Docker registry, the
 integrated registry, or a third party Docker registry?

 On Mon, Aug 15, 2016 at 3:34 PM, Clayton Coleman 
 wrote:

> It's currently 15 minutes:
>
> imagePolicyConfig:
>   disableScheduledImport: false
>   maxImagesBulkImportedPerRepository: 5
>   maxScheduledImageImportsPerMinute: 60
>   scheduledImageImportMinimumIntervalSeconds: 900
>
> Will take a look and see if I can recreate this issue.
>
>
> On Mon, Aug 15, 2016 at 2:33 PM, Tony Saxon 
> wrote:
>
>>
>> So I've found that if I tag the imagestream manually, that it is able
>> to pull down the latest changes and deploys them to my app:
>>
>> oc tag --source=docker --scheduled=true
>> docker-lab.example.com:5000/testwebapp:latest testwebapp:latest
>>
>> [root@os-master ~]# oc describe is
>> Name:   testwebapp
>> Created:4 days ago
>> Labels: 
>> Annotations:        openshift.io/image.dockerRepositoryCheck=2016-08-15T17:49:36Z
>> Docker Pull Spec:   172.30.11.167:5000/testwebapp/testwebapp
>>
>> Tag     Spec                                            Created            PullSpec                                                        Image
>> latest  docker-lab.example.com:5000/testwebapp:latest * 38 minutes ago     docker-lab.example.com:5000/testwebapp@sha256:dd75ff58184489...
>>                                                         About an hour ago  docker-lab.example.com:5000/testwebapp@sha256:2a4f9e1262e377...
>>                                                         4 days ago         docker-lab.example.com:5000/testwebapp@sha256:c1c8c6c3e1c672...
>>
>>   * tag is scheduled for periodic import
>>   ! tag is insecure and can be imported over HTTP or self-signed HTTPS
>>
>>
>> This updates the tags, redeploys the pods and all my new changes are
>> visible once the new containers are up. It appears that it's not doing 
>> the
>> periodic import despite being configured to. What is the default period
>> that it uses to check the source registry?
>>
>>
>> On Mon, Aug 15, 2016 at 2:29 PM, Tony Saxon 
>> wrote:
>>
>>> So I've found that if I tag the imagestream manually, that it is
>>> able to pull down the latest changes and 

Re: oc get route

2016-08-16 Thread Rajat Chopra
Also check 'oc get routes --all-namespaces' just to rule out that the route
got created in another namespace. Also, the traffic needs to be directed to
the node where haproxy is running (not the master).
If the route was not created then one needs to create it; I'm not sure
whether the app you created had a 'route' resource in its template/json/yaml.
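
A quick sketch of those checks (the service name and hostname below are
placeholders, not from your setup):

  oc get routes --all-namespaces

  # if nothing comes back, expose the app's service to create a route
  oc expose service <your-service> --hostname=app.apps.example.com

  # and make sure DNS for that hostname points at the node running the
  # haproxy router, not the master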

/Rajat

On Tue, Aug 16, 2016 at 11:26 AM, Sco  wrote:

> Have been trying to get a working version of OpenShift M5 up and running,
> and have been running into trouble.
> The Ansible installer runs through with no errors, and I can log in and
> create apps but cannot get to them. I can see the traffic hit the master
> but the browser only gets "refused to connect".
> oc get route comes back with nothing.
> docker ps on the node does show that the haproxy router is running.
>
> Any ideas on where to look?
>


Re: oc get route

2016-08-16 Thread Jonathan Yu
Hi there,

For development/test purposes, I think "oc cluster up" is the easiest way
to get a cluster running. It simply requires a running docker daemon (and
access to the docker socket, so you either need to be root or have your
user added to the "docker" group).  There's a demonstration here:
https://asciinema.org/a/49402

Simply download our openshift client (oc binary) from the releases page:
https://github.com/openshift/origin/releases - extract it and run "oc
cluster up"; it should take care of the rest. You will need to adjust your
Docker config (e.g. the /etc/sysconfig/docker file), but the "oc cluster
up" command will check and return an error if this isn't done.

This will allow you to try out the latest and greatest features from
OpenShift Origin :)

Jonathan

On Tue, Aug 16, 2016 at 11:26 AM, Sco  wrote:

> Have been trying to get a working version of OpenShift M5 up and running,
> and have been running into trouble.
> The Ansible installer runs through with no errors, and I can log in and
> create apps but cannot get to them. I can see the traffic hit the master
> but the browser only gets "refused to connect".
> oc get route comes back with nothing.
> docker ps on the node does show that the haproxy router is running.
>
> Any ideas on where to look?
>


-- 
Jonathan Yu, P.Eng. / Software Engineer, OpenShift by Red Hat / Twitter
(@jawnsy) is the quickest way to my heart 

*“A master in the art of living draws no sharp distinction between his work
and his play; his labor and his leisure; his mind and his body; his
education and his recreation. He hardly knows which is which. He simply
pursues his vision of excellence through whatever he is doing, and leaves
others to determine whether he is working or playing. To himself, he always
appears to be doing both.”* — L. P. Jacks, Education through Recreation
(1932), p. 1


oc get route

2016-08-16 Thread Sco
Have been trying to get a working version of OpenShift M5 up and running,
and have been running into trouble.
The Ansible installer runs through with no errors, and I can log in and
create apps but cannot get to them. I can see the traffic hit the master
but the browser only gets "refused to connect".
oc get route comes back with nothing.
docker ps on the node does show that the haproxy router is running.

Any ideas on where to look?


Re: configuring periodic import of images

2016-08-16 Thread Tony Saxon
Ok, that makes sense. But am I understanding correctly that when you use
scheduled=true it should periodically poll the source registry and
pull the latest tag configured if it's newer than what is currently pulled?

On Tue, Aug 16, 2016 at 1:48 PM, Clayton Coleman 
wrote:

>
>
> On Aug 16, 2016, at 1:40 PM, Tony Saxon  wrote:
>
> Can someone tell me if I'm understanding the difference between alias=true
> and scheduled=true for tagging imagestreams as documented at
> https://docs.openshift.org/latest/dev_guide/managing_
> images.html#adding-tag ?
>
> The way I read it is that alias true will track the source image tag and
> update the destination when the source is updated, whereas scheduled does
> the same thing but only on a periodic basis. Am I off on that?
>
>
> Alias simply points to the destination, and will *not* update when the
> destination changes.  I.e. "a:latest" points to "b:1.0" - updating "b:1.0"
> only triggers deployments based on b:1.0.  A deployment created from
> "a:latest" will use "b:1.0" in its pods.
>
> Alias is really for when you want to drive OpenShift based on external
> versioned tags in another repo (my:latest updating from MySQL:5.1,
> MySQL:5.2, etc).
>
>
>
> On Mon, Aug 15, 2016 at 4:04 PM, Tony Saxon  wrote:
>
>> I'm using a registry deployed from a docker compose:
>>
>> registry:
>>   restart: always
>>   image: registry:2.2.1
>>   ports:
>> - 5000:5000
>>   environment:
>> REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
>> REGISTRY_HTTP_TLS_KEY: /certs/domain.key
>> REGISTRY_AUTH: htpasswd
>> REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
>> REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
>>   volumes:
>> - /var/docker/registry/data:/var/lib/registry
>> - /var/docker/registry/certs:/certs
>> - /var/docker/registry/auth:/auth
>>
>>
>> It was originally using "image:2" but that was the one that I had
>> problems even importing the docker image due to the schema v1/v2 issue.
>> After changing it to 2.2.1 and repushing the image it worked.
>>
>> On Mon, Aug 15, 2016 at 4:00 PM, Clayton Coleman 
>> wrote:
>>
>>> Did a test, but the import looks like it works correctly for hub
>>> images.  In this case are you using a regular Docker registry, the
>>> integrated registry, or a third party Docker registry?
>>>
>>> On Mon, Aug 15, 2016 at 3:34 PM, Clayton Coleman 
>>> wrote:
>>>
 It's currently 15 minutes:

 imagePolicyConfig:
   disableScheduledImport: false
   maxImagesBulkImportedPerRepository: 5
   maxScheduledImageImportsPerMinute: 60
   scheduledImageImportMinimumIntervalSeconds: 900

 Will take a look and see if I can recreate this issue.


 On Mon, Aug 15, 2016 at 2:33 PM, Tony Saxon 
 wrote:

>
> So I've found that if I tag the imagestream manually, that it is able
> to pull down the latest changes and deploys them to my app:
>
> oc tag --source=docker --scheduled=true docker-lab.example.com:5000/testwebapp:latest testwebapp:latest
>
> [root@os-master ~]# oc describe is
> Name:   testwebapp
> Created:4 days ago
> Labels: 
> Annotations:        openshift.io/image.dockerRepositoryCheck=2016-08-15T17:49:36Z
> Docker Pull Spec:   172.30.11.167:5000/testwebapp/testwebapp
>
> Tag     Spec                                            Created            PullSpec                                                        Image
> latest  docker-lab.example.com:5000/testwebapp:latest * 38 minutes ago     docker-lab.example.com:5000/testwebapp@sha256:dd75ff58184489...
>                                                         About an hour ago  docker-lab.example.com:5000/testwebapp@sha256:2a4f9e1262e377...
>                                                         4 days ago         docker-lab.example.com:5000/testwebapp@sha256:c1c8c6c3e1c672...
>
>   * tag is scheduled for periodic import
>   ! tag is insecure and can be imported over HTTP or self-signed HTTPS
>
>
> This updates the tags, redeploys the pods and all my new changes are
> visible once the new containers are up. It appears that it's not doing the
> periodic import despite being configured to. What is the default period
> that it uses to check the source registry?
>
>
> On Mon, Aug 15, 2016 at 2:29 PM, Tony Saxon 
> wrote:
>
>> So I've found that if I tag the imagestream manually, that it is able
>> to pull down the latest changes and deploys them to my app:
>>
>> On Mon, Aug 15, 2016 at 8:46 AM, Tony Saxon 
>> wrote:
>>
>>> There are logs showing that it's detecting that the imagestream has

Re: configuring periodic import of images

2016-08-16 Thread Tony Saxon
Can someone tell me if I'm understanding the difference between alias=true
and scheduled=true for tagging imagestreams as documented at
https://docs.openshift.org/latest/dev_guide/managing_images.html#adding-tag
?

The way I read it is that alias=true will track the source image tag and
update the destination when the source is updated, whereas scheduled=true
does the same thing but only on a periodic basis. Am I off on that?

On Mon, Aug 15, 2016 at 4:04 PM, Tony Saxon  wrote:

> I'm using a registry deployed from a docker compose:
>
> registry:
>   restart: always
>   image: registry:2.2.1
>   ports:
> - 5000:5000
>   environment:
> REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
> REGISTRY_HTTP_TLS_KEY: /certs/domain.key
> REGISTRY_AUTH: htpasswd
> REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
> REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
>   volumes:
> - /var/docker/registry/data:/var/lib/registry
> - /var/docker/registry/certs:/certs
> - /var/docker/registry/auth:/auth
>
>
> It was originally using "image:2" but that was the one that I had problems
> even importing the docker imaged due to the schema v1/v2 issue. After
> changing it to 2.2.1 and repushing the image it worked.
>
> On Mon, Aug 15, 2016 at 4:00 PM, Clayton Coleman 
> wrote:
>
>> Did a test, but the import looks like it works correctly for hub images.
>> In this case are you using a regular Docker registry, the integrated
>> registry, or a third party Docker registry?
>>
>> On Mon, Aug 15, 2016 at 3:34 PM, Clayton Coleman 
>> wrote:
>>
>>> It's currently 15 minutes:
>>>
>>> imagePolicyConfig:
>>>   disableScheduledImport: false
>>>   maxImagesBulkImportedPerRepository: 5
>>>   maxScheduledImageImportsPerMinute: 60
>>>   scheduledImageImportMinimumIntervalSeconds: 900
>>>
>>> Will take a look and see if I can recreate this issue.
>>>
>>>
>>> On Mon, Aug 15, 2016 at 2:33 PM, Tony Saxon 
>>> wrote:
>>>

 So I've found that if I tag the imagestream manually, that it is able
 to pull down the latest changes and deploys them to my app:

 oc tag --source=docker --scheduled=true docker-lab.example.com:5000/testwebapp:latest testwebapp:latest

 [root@os-master ~]# oc describe is
 Name:   testwebapp
 Created:4 days ago
 Labels: 
 Annotations:        openshift.io/image.dockerRepositoryCheck=2016-08-15T17:49:36Z
 Docker Pull Spec:   172.30.11.167:5000/testwebapp/testwebapp

 Tag     Spec                                            Created            PullSpec                                                        Image
 latest  docker-lab.example.com:5000/testwebapp:latest * 38 minutes ago     docker-lab.example.com:5000/testwebapp@sha256:dd75ff58184489...
                                                         About an hour ago  docker-lab.example.com:5000/testwebapp@sha256:2a4f9e1262e377...
                                                         4 days ago         docker-lab.example.com:5000/testwebapp@sha256:c1c8c6c3e1c672...

   * tag is scheduled for periodic import
   ! tag is insecure and can be imported over HTTP or self-signed HTTPS


 This updates the tags, redeploys the pods and all my new changes are
 visible once the new containers are up. It appears that it's not doing the
 periodic import despite being configured to. What is the default period
 that it uses to check the source registry?


 On Mon, Aug 15, 2016 at 2:29 PM, Tony Saxon 
 wrote:

> So I've found that if I tag the imagestream manually, that it is able
> to pull down the latest changes and deploys them to my app:
>
> On Mon, Aug 15, 2016 at 8:46 AM, Tony Saxon 
> wrote:
>
>> There are logs showing that it's detecting that the imagestream has
>> changed, but doesn't seem like there's any explanation of why it can't 
>> get
>> it:
>>
>> Aug 15 08:18:10 os-master origin-master: I0815 08:18:10.446822
>> 77042 image_change_controller.go:47] Build image change controller 
>> detected
>> ImageStream change 172.30.11.167:5000/testwebapp/testwebapp
>> Aug 15 08:20:01 os-master origin-master:
>> ation":2}]},{"tag":"8.1","items":[{"created":"2016-08-02T18:
>> 21:31Z","dockerImageReference":"openshift/wildfly-81-centos7
>> @sha256:68a27d407fd1ead3b8a9e33aa2054c948ad3a54556d28bb4caaf
>> 704a0f651f96","image":"sha256:68a27d407fd1ead3b8a9e33aa2054c
>> 948ad3a54556d28bb4caaf704a0f651f96","generation":2}]},{"tag"
>> :"9.0","items":[{"created":"2016-08-02T18:21:31Z","dockerIma
>> geReference":"openshift/wildfly-90-centos7@sha256:212d8e093d
>> 

Re: Kibana Logs Empty

2016-08-16 Thread Eric Wolinetz
Realized I never replied-all... Re-adding users_list

On Mon, Aug 15, 2016 at 10:58 AM, Eric Wolinetz  wrote:

> Fluentd tries to connect to both "logging-es" | "logging-es-ops" in the
> logging namespace (if you're using the ops deployment) and "kubernetes" in
> the default namespace.  I think in this case it is having trouble
> connecting to the kubernetes service to look up metadata for your
> containers.
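
If it helps, a quick way to sanity-check that lookup from the node in
question (the pod name is a placeholder, and this assumes getent is
available inside the fluentd image):

  oc get svc kubernetes -n default
  oc get pods -n logging -o wide | grep fluentd
  oc exec logging-fluentd-xxxxx -n logging -- \
      getent hosts kubernetes.default.svc.cluster.local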
>
>
> On Mon, Aug 15, 2016 at 10:54 AM, Frank Liauw  wrote:
>
>> Oh stupid me; I was confused by my own namespaces; was looking at the
>> wrong namespace, thinking that's the one with pods that have an active log
>> stream. The logs are ingested fine, thanks for your assistance! :)
>>
>> On the possible DNS issue of fluentd on one of my nodes, what hostname is
>> fluentd trying to reach when starting up? We did perform some network
>> changes to this particular node to aid public routing, but as far as the
>> routing table is concerned, it should not have made a difference for local
>> traffic.
>>
>> Normal functioning node without public routing changes
>>
>> [root@node1 network-scripts]# route -n
>> Kernel IP routing table
>> Destination Gateway Genmask Flags Metric RefUse
>> Iface
>> 0.0.0.0 10.10.0.5   0.0.0.0 UG10000
>> ens160
>> 10.1.0.00.0.0.0 255.255.0.0 U 0  00
>> tun0
>> 10.10.0.0   0.0.0.0 255.255.0.0 U 10000
>> ens160
>> 172.30.0.0  0.0.0.0 255.255.0.0 U 0  00
>> tun0
>>
>> Malfunctioning node with public routing changes
>>
>> [root@node2 network-scripts]# route -n
>> Kernel IP routing table
>> Destination Gateway Genmask Flags Metric RefUse
>> Iface
>> 0.0.0.0 199.27.105.10.0.0.0 UG10000
>> ens192
>> 10.0.0.010.10.0.5   255.0.0.0   UG10000
>> ens160
>> 10.1.0.00.0.0.0 255.255.0.0 U 0  00
>> tun0
>> 10.10.0.0   0.0.0.0 255.255.0.0 U 10000
>> ens160
>> 172.30.0.0  0.0.0.0 255.255.0.0 U 0  00
>> tun0
>> 199.27.105.00.0.0.0 255.255.255.128 U 10000
>> ens192
>>
>> Frank
>> Systems Engineer
>>
>> VSee: fr...@vsee.com  | Cell: +65 9338 0035
>>
>> Join me on VSee for Free 
>>
>>
>>
>>
>> On Mon, Aug 15, 2016 at 11:23 PM, Eric Wolinetz 
>> wrote:
>>
>>> Correct, the way Fluentd pulls in the logs for your other containers is
>>> the same pipeline used for collecting logs for the below shown Kibana pod.
>>>
>>> Going back to your ES logs, can you verify the date portion of a
>>> microsvc index line?
>>> We can then update time range in the upper-right corner of Kibana to
>>> change from the last hour to something like the last month (something that
>>> would encompass the date for the index).
>>>
>>>
>>> On Mon, Aug 15, 2016 at 10:15 AM, Frank Liauw  wrote:
>>>
 Screencap is as follows:


 The query is as simple as it gets, *. I see my namespaces / projects as
 indexes.

 I see logs for logging project just fine:



 Fluentd is not ingesting the logs for pods in my namespaces. I'm yet to
 pull apart how fluentd does that, though there's no reason why logs for my
 other pods aren't getting indexed while the kibana logs are, if they are
 both ingested by fluentd; that assumes the kibana logs use the same
 pipeline as all other pod logs.

 Frank
 Systems Engineer

 VSee: fr...@vsee.com  | Cell: +65 9338 0035

 Join me on VSee for Free 




 On Mon, Aug 15, 2016 at 10:59 PM, Eric Wolinetz 
 wrote:

> Can you either send a screencap of your Kibana console? Or describe
> how you are accessing Kibana and what you are seeing? (e.g. your query
> string, the index you're querying on, the time range for fetched 
> responses)
>
> On Mon, Aug 15, 2016 at 9:55 AM, Frank Liauw  wrote:
>
>> I can see indexes of my namespaces, but nothing going on in actual
>> logs in kibana though.
>>
>> Frank
>> Systems Engineer
>>
>> VSee: fr...@vsee.com  | Cell: +65 9338 0035
>>
>> Join me on VSee for Free 
>>
>>
>>
>>
>> On Mon, Aug 15, 2016 at 10:37 PM, Eric Wolinetz 
>> wrote:
>>
>>> True, we should be able to.  You should be able to see entries in
>>> the master ES node's logs that indices were created.  Based on your log
>>> snippet it should be "One Above All" in this pod: 
>>> logging-es-0w45va6n-2-8m8
>>> 5p
>>>
>>> If we don't see anything 

Re: Kibana Logs Empty

2016-08-16 Thread Luke Meyer
On Mon, Aug 15, 2016 at 3:54 AM, Frank Liauw  wrote:

> Hi All,
>
> I followed through the instructions on https://docs.openshift.org/
> latest/install_config/aggregate_logging.html and have setup a 3 node ES
> cluster. Fluentd is also deployed on all my nodes.
>
> I am getting kibana logs on the logging project, but all my other projects
> do not have any logs; kibana shows "No results found", with occasional
> errors reading "Discover: An error occurred with your request. Reset your
> inputs and try again."
>

Just to make sure... the default time period in Kibana is to look only 15
minutes in the past - are you sure your projects had logs in the last 15
minutes?
That wouldn't have anything to do with the errors you're seeing though.


>
> Probing the requests made by kibana, some calls to
> /elasticsearch/_msearch?timeout=0_unavailable=true
> =1471245075265 are failing from time to time.
>

That certainly shouldn't be happening. Do you have any more details on how
they're failing? Do they fail to connect, or just get back an error
response code? Not sure if you can tell...


>
> Looking into the ES logs for all 3 cluster pods, I don't see much errors
> to be concerned, with the last error of 2 nodes similar to the following
> which seems to be a known issue with Openshift's setup (
> https://lists.openshift.redhat.com/openshift-archives/users
> /2015-December/msg00078.html) and possibly explains the failed requests
> made by kibana on auto-refresh, but that's a problem for another day:
>
> [2016-08-15 06:53:49,130][INFO ][cluster.service  ] [Gremlin]
> added {[Quicksilver][t2l6Oz8uT-WS8Fa7S7jzfQ][logging-es-d7r1t3dm-
> 2-a0cf0][inet[/10.1.3.3:9300]],}, reason: zen-disco-receive(from master
> [[One Above All][CyFgyTTtS_S85yYRom2wVQ][logging-es-0w45va6n-2-8m85p][in
> et[/10.1.2.5:9300]]])
>

This is good, means your cluster is forming...


> [2016-08-15 
> 06:59:27,727][ERROR][com.floragunn.searchguard.filter.SearchGuardActionFilter]
> Error while apply() due to com.floragunn.searchguard.toke
> neval.MalformedConfigurationException: no bypass or execute filters at
> all for action indices:admin/mappings/fields/get
> com.floragunn.searchguard.tokeneval.MalformedConfigurationException: no
> bypass or execute filters at all
>

Unfortunate SearchGuard behavior while the cluster is starting, but nothing
to be concerned about as long as it doesn't continue.


>
> Looking into fluentd logs, one of my nodes is complaining of a
> "getaddrinfo" error:
>
> 2016-08-15 03:45:18 -0400 [error]: unexpected error error="getaddrinfo: Name or service not known"
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/net/http.rb:878:in `initialize'
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/net/http.rb:878:in `open'
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/net/http.rb:878:in `block in connect'
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/timeout.rb:52:in `timeout'
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/net/http.rb:877:in `connect'
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/net/http.rb:862:in `do_start'
>   2016-08-15 03:45:18 -0400 [error]: /usr/share/ruby/net/http.rb:851:in `start'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/rest-client-2.0.0/lib/restclient/request.rb:766:in `transmit'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/rest-client-2.0.0/lib/restclient/request.rb:215:in `execute'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/rest-client-2.0.0/lib/restclient/request.rb:52:in `execute'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/rest-client-2.0.0/lib/restclient/resource.rb:51:in `get'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/kubeclient-1.1.4/lib/kubeclient/common.rb:328:in `block in api'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/kubeclient-1.1.4/lib/kubeclient/common.rb:58:in `handle_exception'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/kubeclient-1.1.4/lib/kubeclient/common.rb:327:in `api'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/kubeclient-1.1.4/lib/kubeclient/common.rb:322:in `api_valid?'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/fluent-plugin-kubernetes_metadata_filter-0.24.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:167:in `configure'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/fluentd-0.12.23/lib/fluent/agent.rb:144:in `add_filter'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/fluentd-0.12.23/lib/fluent/agent.rb:61:in `block in configure'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/fluentd-0.12.23/lib/fluent/agent.rb:57:in `each'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/fluentd-0.12.23/lib/fluent/agent.rb:57:in `configure'
>   2016-08-15 03:45:18 -0400 [error]: /opt/app-root/src/gems/fluentd
> 

Re: Multi-Tenant Cluster: Create a separate docker registry for every project?

2016-08-16 Thread Robson Ramos Barreto
Hello Guys

I found that project very interesting.

I'm looking forward to the OpenShift integration too.

Thank you


2016-08-16 10:59 GMT-03:00 v :

> Hello Jonathan,
>
> thank you for your input, it is highly valued.
>
> I have to admit that I did not know of Atomic Registry - looks like a very
> promising project. I am looking forward to seeing Atomic Registry being
> integrated into OpenShift. :)
>
> We're using one-registry-per-project because this allows us to sell to
> customers things like "100 GiB of Registry Storage". Each project registry
> gets its own PersistentVolume via NFS and the NFS storage consists of a
> thinly provisioned Logical Volume limited to 100 GiB at max.
>
> I haven't found out how to do something similar (i.e. selling
> 100 GiB of registry storage to a customer and making sure he can't use more
> than 100 GiB) using the integrated docker registry. Will check out whether
> this is possible with Atomic Registry.
>
> Regards
> v
>
>
>
> Am 2016-08-14 um 00:41 schrieb Jonathan Yu:
>
> Hi there,
>
> I'm not an expert regarding the registry by any means, but since nobody
> else has replied, I just wanted to share some thoughts.
>
> On Wed, Aug 3, 2016 at 5:59 AM, v  wrote:
>
>> Hello,
>>
>> we would like to deploy our first multi-tenant OpenShift Cluster and we'd
>> like to have all customers as separated as possible. For this reason we
>> would like to give each customer his/her own docker registry and registry
>> storage.
>>
>
> OpenShift is designed for multi-tenancy, including the networking stack
> (that's why we have OpenShift SDN) and the registry (the registry itself is
> shared, but our registry has integrated security for the multi-tenant use
> case).  It's worth noting that the code in OpenShift Origin eventually
> becomes the code that we run on our multi-tenant public cloud, OpenShift
> Online.
>
> There is some ongoing work to integrate our Atomic Registry into OpenShift
> as this would provide better management capabilities:
> https://trello.com/c/0hsX6B4G/210-enterprise-image-registry-
> atomic-registry
>
> That said, configuring individual registries would indeed give the best
> isolation, though at the cost of (potentially significant) administrative
> overhead, as you will need to monitor/manage many registries instead of
> just one.  On the other hand, the advantage is that downtime of the
> registry would only affect individual tenants, so it really depends on your
> use case.
>
>
>> We thought we'd create a registry-pod in every project. Should we just
>> use the official docker registry image from the docker hub or should we use
>> openshift/origin-docker-registry?
>>
>
> I'm biased, since I work for Red Hat, but I'd suggest trying the Atomic
> Registry as it comes with sophisticated security controls and a management
> interface.  Just a personal preference - I like graphical interfaces versus
> doing stuff on the command line.
>
> If you do go the route of having individual registries for each project,
> you'll need to tell projects where to push images:
> https://docs.openshift.org/latest/dev_guide/builds.html#build-output
>
> I think this also means that our built-in tools for managing registries
> will be unusable: https://docs.openshift.org/
> latest/admin_guide/monitoring_images.html
>
>
>> What is the "best practice" concerning the handling of docker registries
>> in multi-tenant clusters?
>>
>
> I'd suggest just sticking to what we ship (a multi-tenant-aware registry)
> and using that, as it's well-tested and fairly flexible:
> https://docs.openshift.org/latest/install_config/install/
> docker_registry.html#advanced-overriding-the-registry-configuration
>
> --
> Jonathan Yu, P.Eng. / Software Engineer, OpenShift by Red Hat / Twitter
> (@jawnsy) is the quickest way to my heart 
>
> *“A master in the art of living draws no sharp distinction between his
> work and his play; his labor and his leisure; his mind and his body; his
> education and his recreation. He hardly knows which is which. He simply
> pursues his vision of excellence through whatever he is doing, and leaves
> others to determine whether he is working or playing. To himself, he always
> appears to be doing both.”* — L. P. Jacks, Education through Recreation
> (1932), p. 1
>
>


Re: Multi-Tenant Cluster: Create a separate docker registry for every project?

2016-08-16 Thread v

Hello Jonathan,

thank you for your input, it is highly valued.

I have to admit that I did not know of Atomic Registry - looks like a very 
promising project. I am looking forward to seeing Atomic Registry being 
integrated into OpenShift. :)

We're using one-registry-per-project because this allows us to sell customers things like 
"100 GiB of Registry Storage". Each project registry gets its own 
PersistentVolume via NFS and the NFS storage consists of a thinly provisioned Logical 
Volume limited to 100 GiB at max.
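
For context, each per-project registry gets bound to a PersistentVolume 
roughly like the following (server, path and names are illustrative; only 
the NFS backing and the 100 GiB size come from the setup described above):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: registry-customer1
  spec:
    capacity:
      storage: 100Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Retain
    nfs:
      server: nfs.example.com
      path: /exports/registry/customer1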

I haven't found out how to do something similar (i.e. selling 100 GiB 
of registry storage to a customer and making sure he can't use more than 100 
GiB) using the integrated docker registry. Will check out whether this is 
possible with Atomic Registry.

Regards
v


Am 2016-08-14 um 00:41 schrieb Jonathan Yu:

Hi there,

I'm not an expert regarding the registry by any means, but since nobody else 
has replied, I just wanted to share some thoughts.

On Wed, Aug 3, 2016 at 5:59 AM, v > 
wrote:

Hello,

we would like to deploy our first multi-tenant OpenShift Cluster and we'd 
like to have all customers as separated as possible. For this reason we would 
like to give each customer his/her own docker registry and registry storage.


OpenShift is designed for multi-tenancy, including the networking stack (that's 
why we have OpenShift SDN) and the registry (the registry itself is shared, but 
our registry has integrated security for the multi-tenant use case).  It's 
worth noting that the code in OpenShift Origin eventually becomes the code that 
we run on our multi-tenant public cloud, OpenShift Online.

There is some ongoing work to integrate our Atomic Registry into OpenShift as 
this would provide better management capabilities: 
https://trello.com/c/0hsX6B4G/210-enterprise-image-registry-atomic-registry

That said, configuring individual registries would indeed give the best 
isolation, though at the cost of (potentially significant) administrative 
overhead, as you will need to monitor/manage many registries instead of just 
one.  On the other hand, the advantage is that downtime of the registry would 
only affect individual tenants, so it really depends on your use case.

We thought we'd create a registry-pod in every project. Should we just use 
the official docker registry image from the docker hub or should we use 
openshift/origin-docker-registry?


I'm biased, since I work for Red Hat, but I'd suggest trying the Atomic 
Registry as it comes with sophisticated security controls and a management 
interface.  Just a personal preference - I like graphical interfaces versus 
doing stuff on the command line.

If you do go the route of having individual registries for each project, you'll 
need to tell projects where to push images: 
https://docs.openshift.org/latest/dev_guide/builds.html#build-output
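
That boils down to a BuildConfig output stanza along these lines (the 
registry host, project and secret name are made up for illustration):

  spec:
    output:
      to:
        kind: DockerImage
        name: registry.customer1.example.com:5000/myproject/myapp:latest
      pushSecret:
        name: customer1-registry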

I think this also means that our built-in tools for managing registries will be 
unusable: https://docs.openshift.org/latest/admin_guide/monitoring_images.html

What is the "best practice" concerning the handling of docker registries in 
multi-tenant clusters?


I'd suggest just sticking to what we ship (a multi-tenant-aware registry) and 
using that, as it's well-tested and fairly flexible: 
https://docs.openshift.org/latest/install_config/install/docker_registry.html#advanced-overriding-the-registry-configuration
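
For reference, deploying the shipped integrated registry is a one-liner 
along these lines (flags from memory of the linked docs, so double-check 
them against your version):

  oadm registry --service-account=registry \
      --config=/etc/origin/master/admin.kubeconfig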

--
Jonathan Yu, P.Eng. / Software Engineer, OpenShift by Red Hat / Twitter (@jawnsy) is 
the quickest way to my heart 

/“A master in the art of living draws no sharp distinction between his work and 
his play; his labor and his leisure; his mind and his body; his education and 
his recreation. He hardly knows which is which. He simply pursues his vision of 
excellence through whatever he is doing, and leaves others to determine whether 
he is working or playing. To himself, he always appears to be doing both.”/ — 
L. P. Jacks, Education through Recreation (1932), p. 1




Re: Node startup Failure on SDN

2016-08-16 Thread Jonathan Yu
On Aug 15, 2016 11:08, "Skarbek, John"  wrote:
>
> So I figured it out. NTP went kaboom on one of our master nodes.
>
> ERROR: [DCli0015 from diagnostic ConfigContexts@openshift/origin/pkg/diagnostics/client/config_contexts.go:285]
> For client config context 'default/cluster:8443/system:admin': The server
> URL is 'https://cluster:8443' The user authentication is
> 'system:admin/cluster:8443' The current project is 'default' (*url.Error)
> Get https://cluster:8443/api: x509: certificate has expired or is not yet
> valid Diagnostics does not have an explanation for what this means. Please
> report this error so one can be added.
>
> I ended up finding that the master node clock just…. I have no idea:
>
> [/etc/origin/master]# date
> Wed Feb 14 12:23:13 UTC 2001
>
> I’d like to suggest that diagnostics checks the date and time of all the
certificates and perhaps do some sort of ntp check and maybe even go the
extra mile and compare the time on the server to …life. I have no idea why
my master node decided to back to Valentines day in 2001. I think I was
single way back when.

Good idea. At minimum it seems like a good idea to record the build date
for the binary and check against that. I think Chrome does something
similar - perhaps figuring out how Chrome handles this is a reasonable
starting point.
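
For anyone hitting this, a quick manual version of those checks (the
certificate paths are the usual defaults and may differ on your install):

  # is the clock sane and NTP synced?
  timedatectl status
  chronyc tracking || ntpstat

  # are the master/node certificates valid for the current date?
  for crt in /etc/origin/master/*.crt /etc/origin/node/*.crt; do
      echo "$crt: $(openssl x509 -noout -enddate -in "$crt")"
  done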
>
>
>
> --
> John Skarbek
>
> On August 15, 2016 at 13:32:13, Skarbek, John (john.skar...@ca.com) wrote:
>>
>> It would appear the certificate is valid until 2018:
>>
>> [/etc/origin/node]# openssl x509 -enddate -in system:node:node-001.crt
>> notAfter=Mar 21 15:18:10 2018 GMT
>>
>> Got any other ideas?
>>
>>
>>
>> --
>> John Skarbek
>>
>> On August 15, 2016 at 13:27:57, Clayton Coleman (ccole...@redhat.com)
wrote:
>>>
>>> The node's client certificate may have expired - that's a common failure
>>> mode.
>>>
>>> On Aug 15, 2016, at 1:23 PM, Skarbek, John  wrote:
>>>
 Good Morning,

 We recently had a node go down, upon trying to get it back online, the
origin-node service fails to start. The rest of the cluster appears to be
just fine, so with the desire to troubleshoot, what can I look at to
determine the root cause of the following error:

 Aug 15 17:12:59 node-001 origin-node[14536]: E0815 17:12:59.469682 14536 common.go:194] Failed to obtain ClusterNetwork: the server has asked for the client to provide credentials (get clusterNetworks default)
 Aug 15 17:12:59 node-001 origin-node[14536]: F0815 17:12:59.469705 14536 node.go:310] error: SDN node startup failed: the server has asked for the client to provide credentials (get clusterNetworks default)



 --
 John Skarbek
