Re: configuring periodic import of images

2016-08-11 Thread Philippe Lafoucrière
https://docs.openshift.com/enterprise/3.2/install_config/install/docker_registry.html

"The manifest v2 schema 2 (*schema2*) is not yet supported."

Sorry :)
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Adding master to 3 node install

2016-08-11 Thread Jason DeTiberus
On Aug 11, 2016 9:15 AM, "Philippe Lafoucrière" <
philippe.lafoucri...@tech-angels.com> wrote:
>
> Just for the record, we added a new node this week using the scaleup.yml
playbook, and it went pretty well.
>
> We also upgraded from 1.2.0 to 1.2.1 along with a CentOS Atomic upgrade,
and it didn't go well :(
> All the images created by builders were "missing", and we had to rebuild
everything in every project, leading to a long outage (fortunately during
the night).

This sounds like your registry was using ephemeral storage rather than
being backed by a PV or object storage.

The docs provide some additional details for this if manually deploying the
registry:
https://docs.openshift.org/latest/install_config/install/docker_registry.html

If using openshift-ansible for deployment, the example inventory file
provides some variables that allow for configuring an NFS volume, an
OpenStack Cinder volume, or an S3 bucket:
https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.origin.example#L290
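For reference, the NFS-backed case looks roughly like this in the inventory. The variable names below are recalled from the example file and may differ between openshift-ansible versions, so treat them as illustrative rather than authoritative:

```ini
[OSEv3:vars]
# Back the hosted registry with an NFS-exported persistent volume
# (illustrative values; verify the names against your example inventory).
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
```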

--
Jason DeTiberus

> So if you run OpenShift on top of a virtualization platform, you should
definitely snapshot before each upgrade run...
> ​
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


Re: configuring periodic import of images

2016-08-11 Thread Clayton Coleman
It is only in alpha.3

On Aug 11, 2016, at 7:12 PM, Andrew Lau  wrote:

I'm not sure if it's only in 1.3.0-alpha3, but

https://docs.openshift.org/latest/install_config/install/docker_registry.html#docker-registry-configuration-reference-middleware

specifically, acceptschema2: false

?
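For context, that option lives in the openshift middleware section of the registry's config.yml; a sketch abridged from the docs page above:

```yaml
middleware:
  repository:
    - name: openshift
      options:
        # Reject schema2 manifests on push so older Docker daemons
        # (e.g. 1.10) can still pull the images by digest.
        acceptschema2: false
```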

On Fri, 12 Aug 2016 at 09:01 Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

>
> On Thu, Aug 11, 2016 at 5:15 PM, Tony Saxon  wrote:
>
>> Damn, I just went through having to downgrade my registry because I was
>> pushing with 1.12 and openshift (running docker 1.10) wasn't able to pull
>> the image due to the sha256 hash that it was referencing not existing
>> because of the v1/v2 issues. I guess my only option if I don't want to
>> upgrade my openshift is to push from a machine running docker 1.10?
>
>
> I'm afraid so. We're running into the same issue, and haven't found a
> solution yet :(
> You could also try the 1.3.0-alpha3, but I wouldn't recommend it for
> production of course...
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


Re: configuring periodic import of images

2016-08-11 Thread Philippe Lafoucrière
On Thu, Aug 11, 2016 at 5:15 PM, Tony Saxon  wrote:

> Damn, I just went through having to downgrade my registry because I was
> pushing with 1.12 and openshift (running docker 1.10) wasn't able to pull
> the image due to the sha256 hash that it was referencing not existing
> because of the v1/v2 issues. I guess my only option if I don't want to
> upgrade my openshift is to push from a machine running docker 1.10?


I'm afraid so. We're running into the same issue, and haven't found a
solution yet :(
You could also try the 1.3.0-alpha3, but I wouldn't recommend it for
production of course...


Re: configuring periodic import of images

2016-08-11 Thread Tony Saxon
Damn, I just went through having to downgrade my registry because I was
pushing with 1.12 and openshift (running docker 1.10) wasn't able to pull
the image due to the sha256 hash that it was referencing not existing
because of the v1/v2 issues. I guess my only option if I don't want to
upgrade my openshift is to push from a machine running docker 1.10?

On Thu, Aug 11, 2016 at 4:30 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> If you are using different versions of Docker on OpenShift and on the
> server where the image was built, you will run into this (known) problem.
> Check out https://trello.com/c/CJiUnVUm/136-3-docker-1-10-push-force-schema1-manifest-via-daemon-flag
>
> Hopefully, this seems to be fixed in the upcoming 1.3. Check the section
> "Upgrading to Docker Registry 2.4, cross-repository linking, and better
> usage tooling"
> in https://github.com/openshift/origin/releases/tag/v1.3.0-alpha.3
>
>


Re: configuring periodic import of images

2016-08-11 Thread Philippe Lafoucrière
If you are using different versions of Docker on OpenShift and on the server
where the image was built, you will run into this (known) problem.
Check out https://trello.com/c/CJiUnVUm/136-3-docker-1-10-push-force-schema1-manifest-via-daemon-flag

Hopefully, this seems to be fixed in the upcoming 1.3. Check the section
"Upgrading to Docker Registry 2.4, cross-repository linking, and better
usage tooling"
in https://github.com/openshift/origin/releases/tag/v1.3.0-alpha.3


Re: unexpected fault address 0x0

2016-08-11 Thread Philippe Lafoucrière
Hmm, indeed, the container will always show as Up:

# cat /usr/local/bin/origin-node-run.sh
#!/bin/sh

set -eu

conf=${CONFIG_FILE:-/etc/origin/node/node-config.yaml}
opts=${OPTIONS:---loglevel=2}

function quit {
    pkill -g 0 openshift
    exit 0
}

trap quit SIGTERM

if [ ! -f ${HOST_ETC}/systemd/system/docker.service.d/docker-sdn-ovs.conf ]; then
    mkdir -p ${HOST_ETC}/systemd/system/docker.service.d
    cp /usr/lib/systemd/system/docker.service.d/docker-sdn-ovs.conf \
        ${HOST_ETC}/systemd/system/docker.service.d
fi

/usr/bin/openshift start node "--config=${conf}" "${opts}" &

while true; do sleep 5; done


The while loop at the end runs forever, even if openshift has exited. A
strange idea: what's the point of it?
Apparently, just to handle the SIGTERM signal and forward it to all
processes in the same group:

# ps x -o  "%p %r %c" | grep 9003
 9003  9003 origin-node-run
 9012  9003 openshift
32347  9003 sleep

Maybe it lacks a test in the while loop to ensure at least "openshift" is
running?
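A minimal sketch of that suggestion: poll the child instead of sleeping unconditionally, so the container exits when the supervised process dies. Here `sleep 2` is a stand-in for the real `openshift start node` process, not the actual script:

```shell
#!/bin/sh
# Supervise a child process and exit once it is gone (illustrative).
sleep 2 &            # stand-in for: /usr/bin/openshift start node ... &
child=$!

# kill -0 sends no signal; it only probes whether the process still exists.
while kill -0 "$child" 2>/dev/null; do
    sleep 1
done
echo "supervised process exited; container can stop now"
```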



Re: Known-good iconClass listing?

2016-08-11 Thread N. Harrison Ripps

This is awesome, thanks Jessica!

On Thu, Aug 11, 2016 at 11:35 AM, Jessica Forrester 
 wrote:
If you don't set your own, it will fall back to the generic template
icon. In the new navigation the "application" icon is "fa fa-cubes".


You can use anything in fontawesome http://fontawesome.io/icons/
Anything in patternfly https://www.patternfly.org/styles/icons/#_
Or anything you see here:
https://github.com/openshift/origin-web-console/blob/master/app/styles/_openshift-logos-icon.less#L31


Keep in mind that we lock to a particular version of patternfly
and fontawesome every release, so if you aren't seeing your icon of
choice it may not be in the releases we are using; see
https://github.com/openshift/origin-web-console/blob/master/bower.json
for the versions


On Thu, Aug 11, 2016 at 10:26 AM, N. Harrison Ripps  
wrote:

Hey there--
I am working on creating a template and was curious about the 
iconClass setting. I can grep for it in the origin codebase and I 
see several different icon names, but:


1) Is there an official list of valid iconClass values?
2) Is there a generic "Application" icon?

Thanks,
Harrison





Re: Resolving localhost (IPv6 issue?)

2016-08-11 Thread Clayton Coleman
Tried this with Fedora 24 and very similar config (but centos7 image) and
I'm able to ping localhost.

$ cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
172.17.0.2 centoscentos7-debug

I have net.ipv6.conf.all.disable_ipv6 = 1 set, so it should be possible.

Can you provide your /etc/resolv.conf from inside that image?
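For anyone comparing the failing and working setups, a few checks worth running inside the affected container (a sketch; `getent` shows what glibc's resolver actually returns for the name):

```shell
# Dump the name-resolution inputs and the resolver's answer for localhost.
echo "--- /etc/hosts ---"
cat /etc/hosts
echo "--- /etc/resolv.conf ---"
cat /etc/resolv.conf
echo "--- resolver result for localhost ---"
getent hosts localhost
```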

On Thu, Aug 11, 2016 at 11:29 AM, Ulf Lilleengen  wrote:

> Host:
> OS: Fedora 24
> Docker: 1.10.3
> glibc-2.23.1-8
>
> Docker image:
> Name: gordons/qdrouterd:v10 (based on Fedora 23)
> Glibc:glibc-2.22-11
>
> Nothing special in the images other than that. The issue appeared without
> any significant change other than running the latest openshift/origin image.
>
> On 08/11/2016 04:58 PM, Clayton Coleman wrote:
>
>> That is very strange.  Anything special about the container (what OS,
>> libraries, glibc version, musl)?  What version of Docker was running?
>>
>> On Wed, Aug 10, 2016 at 3:45 AM, Ulf Lilleengen > > wrote:
>>
>> Hi,
>>
>> We were debugging an issue yesterday where 'localhost' could not be
>> resolved inside a container in openshift origin v1.3.0-alpha.3. I'm
>> not sure if this is openshift or kubernetes-related, but thought I'd
>> ask here first.
>>
>> We have two containers running on a pod, and one container is
>> connecting to the other using 'localhost'. This has worked fine for
>> several months, but stopped working yesterday. We resolved the issue
>> by using 127.0.0.1. We were also able to use the pod hostname as well.
>>
>> I'm thinking this might be related to IPv6, given that /etc/hosts
>> seemed to contain IPv6 records for localhost, and the other
>> container may be listening on IPv4 only. I tried disabling it with
>> sysctl net.ipv6.conf.all.disable_ipv6=1 to verify, but I still saw
>> the same issue.
>>
>> sh-4.3$ cat /etc/hosts
>> # Kubernetes-managed hosts file.
>> 127.0.0.1   localhost
>> ::1 localhost ip6-localhost ip6-loopback
>> fe00::0 ip6-localnet
>> fe00::0 ip6-mcastprefix
>> fe00::1 ip6-allnodes
>> fe00::2 ip6-allrouters
>> 172.17.0.6  controller-queue1-tnvav
>>
>> --
>> Ulf Lilleengen
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> 
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> 
>>
>>
>>
> --
> Ulf
>


Re: Using a Jenkins Slave in openshift

2016-08-11 Thread Akshaya Khare
By execute permissions in the repo, I hope you mean in the Dockerfile.

These are the current commands in my Dockerfile:

mkdir -p /var/lib/jenkins && \
    chown -R 1001:0 /var/lib/jenkins && \
    chmod -R g+w /var/lib/jenkins

So if I add 'chmod -R g+x' to /var/lib/jenkins, it should do the job?
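Another option is to record the executable bit in git itself, since a Docker COPY preserves the mode git checked out. A sketch using a throwaway repo (the `configuration/run-jnlp-client` path mirrors the thread; the real repository is not shown here):

```shell
# Create a demo repo mirroring the layout described in the thread.
mkdir -p demo-repo/configuration
cd demo-repo
git init -q .
printf '#!/bin/sh\necho "jnlp client starting"\n' > configuration/run-jnlp-client

# Mark the script executable BEFORE adding it; git records mode 100755.
chmod +x configuration/run-jnlp-client
git add configuration/run-jnlp-client

# Show the staged mode; 100755 means the image build will COPY it executable.
git ls-files -s configuration/run-jnlp-client
```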


On Thu, Aug 11, 2016 at 11:20 AM, Ben Parees  wrote:

>
>
> On Thu, Aug 11, 2016 at 11:18 AM, Akshaya Khare 
> wrote:
>
>> I'm adding the run-jnlp-client
>> 
>> file into my github repository under configuration folder.
>>
>
> ​does it have execute permissions in your repo?​
>
>
>
>> Then I'm using my github link to use in my jenkins-slave-builder url, and
>> then openshift builds an image for me...
>>
>> On Thu, Aug 11, 2016 at 11:15 AM, Ben Parees  wrote:
>>
>>>
>>>
>>> On Thu, Aug 11, 2016 at 11:10 AM, Akshaya Khare 
>>> wrote:
>>>
 Hi,

 Thanks for the detailed explanation, and I did get far but got stuck
 again.
 So I was able to build a slave Jenkins image and created a buildconfig.
 After updating the Kubernetes plugin configurations, I was able to
 spawn a new pod, but the pod fails with the error "ContainerCannotRun".
 On seeing the logs of the pod, it shows:

 *exec: "/var/lib/jenkins/run-jnlp-client": permission denied*

>>>
>>> ​sounds like /var/lib/jenkins/run-jnlp-client ​ doesn't have the right
>>> read/execute permissions set.  How are you building the slave image?
>>>
>>>
>>>
>>>

 I tried giving admin privileges to my user, and also edit privileges to
 the serviceaccount in my project:


 *oc policy add-role-to-group edit system:serviceaccounts -n jenkinstin2*
 How can I make sure that the pod runs without any permissions issues?

 On Mon, Aug 8, 2016 at 3:55 PM, Ben Parees  wrote:

> The sample defines a buildconfig which ultimately uses this directory
> as the context for a docker build:
> https://github.com/siamaksade/jenkins-s2i-example/tree/master/slave
>
> it does that by pointing the buildconfig to this repo:
> https://github.com/siamaksade/jenkins-s2i-example
>
> and the context directory named "slave" within that repo:
> https://github.com/siamaksade/jenkins-s2i-example/tree/master/slave
>
> which you can see defined here:
> https://github.com/siamaksade/jenkins-s2i-example/blob/maste
> r/jenkins-slave-builder-template.yaml#L36-L40
>
> https://github.com/siamaksade/jenkins-s2i-example/blob/maste
> r/jenkins-slave-builder-template.yaml#L61-L68
>
> If you are trying to build your own slave image, you need to point to
> a repo (and optionally a contextdir within that repo) that contains an
> appropriate Dockerfile, as the example does.
>
>
>
> On Mon, Aug 8, 2016 at 2:43 PM, Akshaya Khare 
> wrote:
>
>> Hi Ben,
>>
>> So after making changes to the imagestream, I wasn't able to get the
>> build running initially.
>> But that was because already there were failed builds and
>> buildconfigs which were preventing the build to run successfully.
>>
>> Once I deleted the old failed builds, I was able to get the new build
>> running, but it failed once I tried running my Jenkins job.
>> I gave my github repository as the repository url for the build, and
>> this is the log i get for the failed pod:
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> *I0808 14:06:51.779594   1 source.go:96] git ls-remote
>> https://github.com/akshayakhare/ims/ 
>> 
>> --headsI0808 14:06:51.779659   1 repository.go:275] Executing git
>> ls-remote https://github.com/akshayakhare/ims/
>>  --headsI0808 14:07:06.989568  
>>  1
>> source.go:189] Cloning source from https://github.com/akshayakhare/ims/
>> I0808 14:07:06.989649   1
>> repository.go:275] Executing git clone --recursive
>> https://github.com/akshayakhare/ims/ 
>> 
>> /tmp/docker-build543901321...I0808 14:07:35.174676   1
>> repository.go:300] Out: Merge pull request #28 from
>> chemistry-sourabh/LoggingI0808 14:07:35.174708   1 common.go:78]
>> Setting build revision to
>> {Commit:"79ed71a8470c973c6f6cad380657c2df93948345",
>> Author:api.SourceControlUser{Name:"Akshaya Khare",
>> Email:"akshayakh...@gmail.com "},
>> Committer:api.SourceControlUser{Name:"GitHub", Email:"nore...@github.com
>> "}, Message:"Merge pull request #28 from
>> chemistry-sourabh/Logging"}F0808 

Re: Known-good iconClass listing?

2016-08-11 Thread Jessica Forrester
If you don't set your own, it will fall back to the generic template icon.
In the new navigation the "application" icon is "fa fa-cubes".

You can use anything in fontawesome http://fontawesome.io/icons/
Anything in patternfly https://www.patternfly.org/styles/icons/#_
Or anything you see here:
https://github.com/openshift/origin-web-console/blob/master/app/styles/_openshift-logos-icon.less#L31

Keep in mind that we lock to a particular version of patternfly and
fontawesome every release, so if you aren't seeing your icon of choice it
may not be in the releases we are using; see
https://github.com/openshift/origin-web-console/blob/master/bower.json
for the versions
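For completeness, the icon is set on a template via the iconClass annotation; an illustrative header (the name and description are placeholders):

```yaml
apiVersion: v1
kind: Template
metadata:
  name: my-app-template          # placeholder name
  annotations:
    description: "Example application template"
    iconClass: fa fa-cubes       # any FontAwesome/PatternFly/logo class
```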

On Thu, Aug 11, 2016 at 10:26 AM, N. Harrison Ripps  wrote:

> Hey there--
> I am working on creating a template and was curious about the iconClass
> setting. I can grep for it in the origin codebase and I see several
> different icon names, but:
>
> 1) Is there an official list of valid iconClass values?
> 2) Is there a generic "Application" icon?
>
> Thanks,
> Harrison
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


Re: Resolving localhost (IPv6 issue?)

2016-08-11 Thread Ulf Lilleengen

Host:
OS: Fedora 24
Docker: 1.10.3
glibc-2.23.1-8

Docker image:
Name: gordons/qdrouterd:v10 (based on Fedora 23)
Glibc:glibc-2.22-11

Nothing special in the images other than that. The issue appeared 
without any significant change other than running the latest 
openshift/origin image.


On 08/11/2016 04:58 PM, Clayton Coleman wrote:

That is very strange.  Anything special about the container (what OS,
libraries, glibc version, musl)?  What version of Docker was running?

On Wed, Aug 10, 2016 at 3:45 AM, Ulf Lilleengen wrote:

Hi,

We were debugging an issue yesterday where 'localhost' could not be
resolved inside a container in openshift origin v1.3.0-alpha.3. I'm
not sure if this is openshift or kubernetes-related, but thought I'd
ask here first.

We have two containers running on a pod, and one container is
connecting to the other using 'localhost'. This has worked fine for
several months, but stopped working yesterday. We resolved the issue
by using 127.0.0.1. We were also able to use the pod hostname as well.

I'm thinking this might be related to IPv6, given that /etc/hosts
seemed to contain IPv6 records for localhost, and the other
container may be listening on IPv4 only. I tried disabling it with
sysctl net.ipv6.conf.all.disable_ipv6=1 to verify, but I still saw
the same issue.

sh-4.3$ cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
172.17.0.6  controller-queue1-tnvav

--
Ulf Lilleengen






--
Ulf



Re: Using a Jenkins Slave in openshift

2016-08-11 Thread Ben Parees
On Thu, Aug 11, 2016 at 11:18 AM, Akshaya Khare 
wrote:

> I'm adding the run-jnlp-client
> 
> file into my github repository under configuration folder.
>

does it have execute permissions in your repo?



> Then I'm using my github link to use in my jenkins-slave-builder url, and
> then openshift builds an image for me...
>
> On Thu, Aug 11, 2016 at 11:15 AM, Ben Parees  wrote:
>
>>
>>
>> On Thu, Aug 11, 2016 at 11:10 AM, Akshaya Khare 
>> wrote:
>>
>>> Hi,
>>>
>>> Thanks for the detailed explanation, and I did get far but got stuck
>>> again.
>>> So I was able to build a slave Jenkins image and created a buildconfig.
>>> After updating the Kubernetes plugin configurations, I was able to spawn
>>> a new pod, but the pod fails with the error "ContainerCannotRun".
>>> On seeing the logs of the pod, it shows:
>>>
>>> *exec: "/var/lib/jenkins/run-jnlp-client": permission denied*
>>>
>>
>> ​sounds like /var/lib/jenkins/run-jnlp-client ​ doesn't have the right
>> read/execute permissions set.  How are you building the slave image?
>>
>>
>>
>>
>>>
>>> I tried giving admin privileges to my user, and also edit privileges to
>>> the serviceaccount in my project:
>>>
>>>
>>> *oc policy add-role-to-group edit system:serviceaccounts -n jenkinstin2*
>>> How can I make sure that the pod runs without any permissions issues?
>>>
>>> On Mon, Aug 8, 2016 at 3:55 PM, Ben Parees  wrote:
>>>
 The sample defines a buildconfig which ultimately uses this directory
 as the context for a docker build:
 https://github.com/siamaksade/jenkins-s2i-example/tree/master/slave

 it does that by pointing the buildconfig to this repo:
 https://github.com/siamaksade/jenkins-s2i-example

 and the context directory named "slave" within that repo:
 https://github.com/siamaksade/jenkins-s2i-example/tree/master/slave

 which you can see defined here:
 https://github.com/siamaksade/jenkins-s2i-example/blob/maste
 r/jenkins-slave-builder-template.yaml#L36-L40

 https://github.com/siamaksade/jenkins-s2i-example/blob/maste
 r/jenkins-slave-builder-template.yaml#L61-L68

 If you are trying to build your own slave image, you need to point to a
 repo (and optionally a contextdir within that repo) that contains an
 appropriate Dockerfile, as the example does.



 On Mon, Aug 8, 2016 at 2:43 PM, Akshaya Khare 
 wrote:

> Hi Ben,
>
> So after making changes to the imagestream, I wasn't able to get the
> build running initially.
> But that was because already there were failed builds and buildconfigs
> which were preventing the build to run successfully.
>
> Once I deleted the old failed builds, I was able to get the new build
> running, but it failed once I tried running my Jenkins job.
> I gave my github repository as the repository url for the build, and
> this is the log i get for the failed pod:
>
>
>
>
>
>
>
>
>
>
>
>
> *I0808 14:06:51.779594   1 source.go:96] git ls-remote
> https://github.com/akshayakhare/ims/ 
> 
> --headsI0808 14:06:51.779659   1 repository.go:275] Executing git
> ls-remote https://github.com/akshayakhare/ims/
>  --headsI0808 14:07:06.989568   
> 1
> source.go:189] Cloning source from https://github.com/akshayakhare/ims/
> I0808 14:07:06.989649   1
> repository.go:275] Executing git clone --recursive
> https://github.com/akshayakhare/ims/ 
> 
> /tmp/docker-build543901321...I0808 14:07:35.174676   1
> repository.go:300] Out: Merge pull request #28 from
> chemistry-sourabh/LoggingI0808 14:07:35.174708   1 common.go:78]
> Setting build revision to
> {Commit:"79ed71a8470c973c6f6cad380657c2df93948345",
> Author:api.SourceControlUser{Name:"Akshaya Khare",
> Email:"akshayakh...@gmail.com "},
> Committer:api.SourceControlUser{Name:"GitHub", Email:"nore...@github.com
> "}, Message:"Merge pull request #28 from
> chemistry-sourabh/Logging"}F0808 14:07:35.200435   1 builder.go:185]
> Error: build error: open /tmp/docker-build543901321/Dockerfile: no such
> file or directory*
> Do i need to create a docker file in my repository to run
> successfully?
> You mentioned that the sample git given in the blog uses a "slave" sub
> directory, will I have to create a similar structure in my repository?
>
> Looking at the sample Docker file given in the blog below, makes me
> believe that it copies the workspace from the 

Re: Using a Jenkins Slave in openshift

2016-08-11 Thread Ben Parees
On Thu, Aug 11, 2016 at 11:10 AM, Akshaya Khare 
wrote:

> Hi,
>
> Thanks for the detailed explanation, and I did get far but got stuck again.
> So I was able to build a slave Jenkins image and created a buildconfig.
> After updating the Kubernetes plugin configurations, I was able to spawn a
> new pod, but the pod fails with the error "ContainerCannotRun".
> On seeing the logs of the pod, it shows:
>
> *exec: "/var/lib/jenkins/run-jnlp-client": permission denied*
>

sounds like /var/lib/jenkins/run-jnlp-client doesn't have the right
read/execute permissions set.  How are you building the slave image?




>
> I tried giving admin privileges to my user, and also edit privileges to
> the serviceaccount in my project:
>
>
> *oc policy add-role-to-group edit system:serviceaccounts -n jenkinstin2*
> How can I make sure that the pod runs without any permissions issues?
>
> On Mon, Aug 8, 2016 at 3:55 PM, Ben Parees  wrote:
>
>> The sample defines a buildconfig which ultimately uses this directory as
>> the context for a docker build:
>> https://github.com/siamaksade/jenkins-s2i-example/tree/master/slave
>>
>> it does that by pointing the buildconfig to this repo:
>> https://github.com/siamaksade/jenkins-s2i-example
>>
>> and the context directory named "slave" within that repo:
>> https://github.com/siamaksade/jenkins-s2i-example/tree/master/slave
>>
>> which you can see defined here:
>> https://github.com/siamaksade/jenkins-s2i-example/blob/maste
>> r/jenkins-slave-builder-template.yaml#L36-L40
>>
>> https://github.com/siamaksade/jenkins-s2i-example/blob/maste
>> r/jenkins-slave-builder-template.yaml#L61-L68
>>
>> If you are trying to build your own slave image, you need to point to a
>> repo (and optionally a contextdir within that repo) that contains an
>> appropriate Dockerfile, as the example does.
>>
>>
>>
>> On Mon, Aug 8, 2016 at 2:43 PM, Akshaya Khare 
>> wrote:
>>
>>> Hi Ben,
>>>
>>> So after making changes to the imagestream, I wasn't able to get the
>>> build running initially.
>>> But that was because already there were failed builds and buildconfigs
>>> which were preventing the build to run successfully.
>>>
>>> Once I deleted the old failed builds, I was able to get the new build
>>> running, but it failed once I tried running my Jenkins job.
>>> I gave my github repository as the repository url for the build, and
>>> this is the log i get for the failed pod:
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> *I0808 14:06:51.779594   1 source.go:96] git ls-remote
>>> https://github.com/akshayakhare/ims/ 
>>> --headsI0808 14:06:51.779659   1 repository.go:275] Executing git
>>> ls-remote https://github.com/akshayakhare/ims/
>>>  --headsI0808 14:07:06.989568   1
>>> source.go:189] Cloning source from https://github.com/akshayakhare/ims/
>>> I0808 14:07:06.989649   1
>>> repository.go:275] Executing git clone --recursive
>>> https://github.com/akshayakhare/ims/ 
>>> /tmp/docker-build543901321...I0808 14:07:35.174676   1
>>> repository.go:300] Out: Merge pull request #28 from
>>> chemistry-sourabh/LoggingI0808 14:07:35.174708   1 common.go:78]
>>> Setting build revision to
>>> {Commit:"79ed71a8470c973c6f6cad380657c2df93948345",
>>> Author:api.SourceControlUser{Name:"Akshaya Khare",
>>> Email:"akshayakh...@gmail.com "},
>>> Committer:api.SourceControlUser{Name:"GitHub", Email:"nore...@github.com
>>> "}, Message:"Merge pull request #28 from
>>> chemistry-sourabh/Logging"}F0808 14:07:35.200435   1 builder.go:185]
>>> Error: build error: open /tmp/docker-build543901321/Dockerfile: no such
>>> file or directory*
>>> Do i need to create a docker file in my repository to run successfully?
>>> You mentioned that the sample git given in the blog uses a "slave" sub
>>> directory, will I have to create a similar structure in my repository?
>>>
>>> Looking at the sample Docker file given in the blog below, makes me
>>> believe that it copies the workspace from the current image to its own
>>> container and then runs it:
>>> https://github.com/siamaksade/jenkins-s2i-example/blob/maste
>>> r/slave/Dockerfile
>>>
>>> Is my understanding correct?
>>>
>>>
>>> On Fri, Aug 5, 2016 at 4:39 PM, Ben Parees  wrote:
>>>
 You'll need to define the imagestream you've got the build pushing to,
 the sample does that here:
 https://github.com/siamaksade/jenkins-s2i-example/blob/maste
 r/jenkins-slave-builder-template.yaml#L12-L21

 you'll need to name the imagestream "jdk8-jenkins-slave" in your case.


 On Fri, Aug 5, 2016 at 4:06 PM, Akshaya Khare 
 wrote:

> I've attached the buildconfig, and the project name is "jenkinstin2"...
>
> On 

Re: Using a Jenkins Slave in openshift

2016-08-11 Thread Akshaya Khare
Hi,

Thanks for the detailed explanation, and I did get far but got stuck again.
So I was able to build a slave Jenkins image and created a buildconfig.
After updating the Kubernetes plugin configurations, I was able to spawn a
new pod, but the pod fails with the error "ContainerCannotRun".
On seeing the logs of the pod, it shows:


exec: "/var/lib/jenkins/run-jnlp-client": permission denied

I tried giving admin privileges to my user, and also edit privileges to the
serviceaccount in my project:

oc policy add-role-to-group edit system:serviceaccounts -n jenkinstin2

How can I make sure that the pod runs without any permissions issues?

On Mon, Aug 8, 2016 at 3:55 PM, Ben Parees  wrote:

> The sample defines a buildconfig which ultimately uses this directory as
> the context for a docker build:
> https://github.com/siamaksade/jenkins-s2i-example/tree/master/slave
>
> it does that by pointing the buildconfig to this repo:
> https://github.com/siamaksade/jenkins-s2i-example
>
> and the context directory named "slave" within that repo:
> https://github.com/siamaksade/jenkins-s2i-example/tree/master/slave
>
> which you can see defined here:
> https://github.com/siamaksade/jenkins-s2i-example/blob/
> master/jenkins-slave-builder-template.yaml#L36-L40
>
> https://github.com/siamaksade/jenkins-s2i-example/blob/
> master/jenkins-slave-builder-template.yaml#L61-L68
>
> If you are trying to build your own slave image, you need to point to a
> repo (and optionally a contextdir within that repo) that contains an
> appropriate Dockerfile, as the example does.
>
>
>
> On Mon, Aug 8, 2016 at 2:43 PM, Akshaya Khare 
> wrote:
>
>> Hi Ben,
>>
>> So after making changes to the imagestream, I wasn't able to get the
>> build running initially.
>> But that was because already there were failed builds and buildconfigs
>> which were preventing the build to run successfully.
>>
>> Once I deleted the old failed builds, I was able to get the new build
>> running, but it failed once I tried running my Jenkins job.
>> I gave my github repository as the repository url for the build, and this
>> is the log i get for the failed pod:
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> *I0808 14:06:51.779594   1 source.go:96] git ls-remote
>> https://github.com/akshayakhare/ims/ 
>> --headsI0808 14:06:51.779659   1 repository.go:275] Executing git
>> ls-remote https://github.com/akshayakhare/ims/
>>  --headsI0808 14:07:06.989568   1
>> source.go:189] Cloning source from https://github.com/akshayakhare/ims/
>> I0808 14:07:06.989649   1
>> repository.go:275] Executing git clone --recursive
>> https://github.com/akshayakhare/ims/ 
>> /tmp/docker-build543901321...I0808 14:07:35.174676   1
>> repository.go:300] Out: Merge pull request #28 from
>> chemistry-sourabh/LoggingI0808 14:07:35.174708   1 common.go:78]
>> Setting build revision to
>> {Commit:"79ed71a8470c973c6f6cad380657c2df93948345",
>> Author:api.SourceControlUser{Name:"Akshaya Khare",
>> Email:"akshayakh...@gmail.com "},
>> Committer:api.SourceControlUser{Name:"GitHub", Email:"nore...@github.com
>> "}, Message:"Merge pull request #28 from
>> chemistry-sourabh/Logging"}F0808 14:07:35.200435   1 builder.go:185]
>> Error: build error: open /tmp/docker-build543901321/Dockerfile: no such
>> file or directory*
>> Do i need to create a docker file in my repository to run successfully?
>> You mentioned that the sample git given in the blog uses a "slave" sub
>> directory, will I have to create a similar structure in my repository?
>>
>> Looking at the sample Docker file given in the blog below, makes me
>> believe that it copies the workspace from the current image to its own
>> container and then runs it:
>> https://github.com/siamaksade/jenkins-s2i-example/blob/maste
>> r/slave/Dockerfile
>>
>> Is my understanding correct?
>>
>>
>> On Fri, Aug 5, 2016 at 4:39 PM, Ben Parees  wrote:
>>
>>> You'll need to define the imagestream you've got the build pushing to,
>>> the sample does that here:
>>> https://github.com/siamaksade/jenkins-s2i-example/blob/maste
>>> r/jenkins-slave-builder-template.yaml#L12-L21
>>>
>>> you'll need to name the imagestream "jdk8-jenkins-slave" in your case.
>>>
>>>
>>> On Fri, Aug 5, 2016 at 4:06 PM, Akshaya Khare 
>>> wrote:
>>>
 I've attached the buildconfig, and the project name is "jenkinstin2"...

 On Fri, Aug 5, 2016 at 2:38 PM, Ben Parees  wrote:

>
>
> On Fri, Aug 5, 2016 at 2:28 PM, Akshaya Khare 
> wrote:
>
>> Hi,
>>
>> I have a project configured in a jenkins container (thanks to Ben Parees
>> for suggesting s2i, it works like a charm) running on openshift which I

Re: Resolving localhost (IPv6 issue?)

2016-08-11 Thread Clayton Coleman
That is very strange.  Anything special about the container (what OS,
libraries, glibc version, musl)?  What version of Docker was running?

On Wed, Aug 10, 2016 at 3:45 AM, Ulf Lilleengen  wrote:

> Hi,
>
> We were debugging an issue yesterday where 'localhost' could not be
> resolved inside a container in openshift origin v1.3.0-alpha.3. I'm not
> sure if this is openshift or kubernetes-related, but thought I'd ask here
> first.
>
> We have two containers running on a pod, and one container is connecting
> to the other using 'localhost'. This has worked fine for several months,
> but stopped working yesterday. We resolved the issue by using 127.0.0.1. We
> were also able to use the pod hostname.
>
> I'm thinking this might be related to IPv6, given that /etc/hosts seemed
> to contain IPv6 records for localhost, and the other container may be
> listening on IPv4 only. I tried disabling it with sysctl
> net.ipv6.conf.all.disable_ipv6=1 to verify, but I still saw the same
> issue.
>
> sh-4.3$ cat /etc/hosts
> # Kubernetes-managed hosts file.
> 127.0.0.1   localhost
> ::1 localhost ip6-localhost ip6-loopback
> fe00::0 ip6-localnet
> fe00::0 ip6-mcastprefix
> fe00::1 ip6-allnodes
> fe00::2 ip6-allrouters
> 172.17.0.6  controller-queue1-tnvav
>
> --
> Ulf Lilleengen
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Problem authenticating to private docker registry

2016-08-11 Thread Tony Saxon
That worked. I downgraded the registry to version 2.2.1. Annoyingly, I still
had to repush the image, as I couldn't pull the image pushed before. The
docker registry was deployed from the registry container via docker-compose,
so I was hoping I'd just be able to change the tag that was referenced, but
no go. The docker-compose file that I used for it is:

# cat docker-compose.yaml
registry:
  restart: always
  image: registry:2.2.1
  ports:
- 5000:5000
  environment:
REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
REGISTRY_HTTP_TLS_KEY: /certs/domain.key
REGISTRY_AUTH: htpasswd
REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
  volumes:
- /var/docker/registry/data:/var/lib/registry
- /var/docker/registry/certs:/certs
- /var/docker/registry/auth:/auth

All I did was change the image definition from registry:2 to registry:2.2.1
and repush the image and it worked.

Thanks for everyone's help!

On Thu, Aug 11, 2016 at 3:07 AM, Michal Minář  wrote:

> Tony,
>
> your docker-lab.example.com registry seems to be of version 2.3.0 or higher
> which supports manifest v2 schema 2 [1]. If you use Docker client >=
> 1.10 (which also supports this schema) to push any image there, the
> registry will store schema 2. If you then pull it using older client
> (supporting v1) such as origin 1.2 / ose 3.2, registry will convert it
> on-the-fly to schema 1, which has a different digest. The digest doesn't
> actually exist on the registry and therefore the pull by digest (e.g.
> docker pull docker-lab.example.com:5000/testwebapp@sha256:
> 9799a25cd6fd7f7908bad740fc0c85823e38aa22afb22f687a5b8a3ed2bf9ec3)
> will fail.
>
> What you can do to address this:
>
> 1. re-push your images to docker-lab.example.com with docker 1.9
> 2. downgrade your docker-lab.example.com registry to version 2.2.1 and
>    re-push your images with whatever Docker you have
> 3. update your origin to latest master so you can pull schema 2
>
> I'd recommend one of the first two options which will ensure that
> registry stores and serves only manifest v2 schema 1 which is pull-able
> without a problem.
>
> Hope that helps,
> Michal
>
> [1] https://github.com/docker/distribution/blob/master/docs/
> spec/manifest-v2-2.md
>
> On 10.8.2016 20:26, Tony Saxon wrote:
>> No worries. Thanks everyone for the help so far. Let me know if there's
>> any other helpful information I can provide.
>>
>> I am able to pull the image down without any issues if I use the latest
>> tag in case that helps:
>>
>> [root@os-node1 ~]# docker pull docker-lab.example.com:5000/testwebapp:latest
>> Trying to pull repository docker-lab.example.com:5000/testwebapp ...
>> latest: Pulling from docker-lab.example.com:5000/testwebapp
>> 3d8673bd162a: Pull complete
>> 855e002c7563: Pull complete
>> 2c2e00e4aa2a: Pull complete
>> Digest: sha256:5216e79273fc6221f8f5896632e4de633eeaa66347d0500c39d9d0006912e42d
>> Status: Downloaded newer image for docker-lab.example.com:5000/testwebapp:latest
>>
>> On Wed, Aug 10, 2016 at 2:21 PM, Andy Goldstein wrote:
>>> Ok, thanks. I'm not really involved with the registry any more, so I'll
>>> have to defer to Maciej and Michal. We may need to try to reproduce to
>>> see what's going on. Sorry I couldn't be more helpful.
>>>
>>> Andy
>>>
>>> On Wed, Aug 10, 2016 at 2:18 PM, Tony Saxon wrote:
>>>> [root@os-node1 ~]# docker pull docker-lab.example.com:5000/testwebapp@sha256:9799a25cd6fd7f7908bad740fc0c85823e38aa22afb22f687a5b8a3ed2bf9ec3
>>>> Trying to pull repository docker-lab.example.com:5000/testwebapp ...
>>>> manifest unknown: manifest unknown
>>>>
>>>> On Wed, Aug 10, 2016 at 2:07 PM, Andy Goldstein wrote:
>>>>> Tony, can you show the output when you try to manually 'docker pull'?
>>>>>
>>>>> On Wed, Aug 10, 2016 at 2:04 PM, Cesar Wong wrote:
>>>>>> Hmm, I didn't know the issue existed between 1.10 and 1.12 as well.
>>>>>>
>>>>>> Andy, what would you recommend?
>>>>>>
>>>>>> On Aug 10, 2016, at 1:58 PM, Tony Saxon wrote:
>>>>>>> Ok, maybe that is the issue. I can not do the docker pull
>>>>>>> referencing the sha256 hash on the node.
>>>>>>>
>>>>>>> The docker version running on the node is docker 1.10.3, and the
>>>>>>> docker version on the machine that pushed the image is 1.12.0. Is
>>>>>>> there a potential workaround for this, or do I need to get the
>>>>>>> docker version updated on the nodes? For reference, I installed the
>>>>>>> openshift platform using the ansible advanced installation
>>>>>>> referenced in the documentation.
>>>>>>>
>>>>>>> On Wed, Aug 10, 2016 at 1:46 PM, Cesar Wong wrote:

Known-good iconClass listing?

2016-08-11 Thread N. Harrison Ripps

Hey there--
I am working on creating a template and was curious about the iconClass 
setting. I can grep for it in the origin codebase and I see several 
different icon names, but:


1) Is there an official list of valid iconClass values?
2) Is there a generic "Application" icon?

Thanks,
Harrison



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Adding master to 3 node install

2016-08-11 Thread Philippe Lafoucrière
Did you check this out?
https://docs.openshift.com/enterprise/3.1/install_config/install/advanced_install.html#adding-nodes-advanced


​
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: cluster up - reuse registry address

2016-08-11 Thread Clayton Coleman
This is the string flag bug again.  We really need to fix this.

On Aug 10, 2016, at 4:04 AM, Cesar Wong  wrote:

Lionel,

So is it working for you now?

On Aug 9, 2016, at 11:10 PM, Lionel Orellana  wrote:

Digging through the go libraries used for parsing the command options I
found that setting the no_proxy variable like this works:

-e \"no_proxy=172.17.0.3,172.17.0.4\"

It all comes down to https://golang.org/pkg/encoding/csv

which is used by the pflag package.
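
The splitting behavior can be reproduced directly with encoding/csv. A small
sketch (the parse helper is illustrative, not code from oc itself; the IPs are
the example addresses from this thread):

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// parse mimics how pflag-style string-slice values (as used by
// `oc cluster up -e`) are split: the flag value is run through an
// encoding/csv reader, so a bare comma separates entries while a
// double-quoted value stays whole.
func parse(value string) []string {
	fields, err := csv.NewReader(strings.NewReader(value)).Read()
	if err != nil {
		panic(err)
	}
	return fields
}

func main() {
	// Unquoted: csv splits on the comma, yielding two separate entries.
	fmt.Println(parse(`no_proxy=172.17.0.3,172.17.0.4`))
	// Quoted (what `-e \"no_proxy=...\"` delivers after shell processing):
	// csv keeps it as a single entry containing the comma.
	fmt.Println(parse(`"no_proxy=172.17.0.3,172.17.0.4"`))
}
```

This is why the escaped-quote form works: the shell strips the backslashes and
hands pflag a quoted CSV field, which survives as one env-var assignment.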
On Tue, 9 Aug 2016 at 10:31 PM, Lionel Orellana  wrote:

> Setting the log level to 4 I found the following
>
>   Starting OpenShift using container 'origin'
> I0809 22:21:26.415373   20151 run.go:143] Creating container named "origin"
> config:
>   image: openshift/origin:v1.3.0-alpha.2
>   command:
> start
>
> --master-config=/var/lib/origin/openshift.local.config/master/master-config.yaml
> --node-config=/var/lib/origin/openshift.local.config/
> node-poc-docker03.aipo.gov.au/node-config.yaml
>   environment:
> http_proxy=http://proxy.aipo.gov.au:3128
> https_proxy=http://proxy.aipo.gov.au:3128
>* no_proxy=172.17.0.3*
> *172.17.0.4*
>
> I've tried different ways of setting multiple ip's in no_proxy but they
> always seem to be getting split on the comma.
>
> -e "no_proxy=172.17.0.3,172.17.0.4"
> -e no_proxy="172.17.0.3\,172.17.0.4"
> -e no_proxy=’172.17.0.3,172.17.0.4’
> -e no_proxy=172.17.0.3,172.17.0.4
>
> This might be causing some of my problems. The fact that I can't set more
> than one ip address in no_proxy.
>
>
>
>
>
>
>
> On 9 August 2016 at 11:18, Lionel Orellana  wrote:
>
>> I guess what I need is a way to configure the proxy as per
>> https://docs.openshift.org/latest/install_config/http_proxies.html#configuring-hosts-for-proxies
>>
>>
>> On Tue, 9 Aug 2016 at 10:05 AM, Lionel Orellana 
>> wrote:
>>
>>> It's been difficult to get a functional poc going with oc cluster up
>>> behind a proxy.
>>>
>>> I need to maintain the registry's address so I can add it to the
>>> no_proxy variable of the docker daemon. Clayton's procedure works for
>>> reusing the address. I will try --use-existing-config.
>>>
>>> But I also need to add the registry's internal address (which always
>>> seems to be initially set to 172.17.0.4) to the no_proxy variable of the
>>> cluster up command itself. Otherwise the health checks try to go through
>>> the proxy and fail.
>>>
>>> When I recreate the registry (in order to set a known service ip) the
>>> pod ip changes and the health checks start to fail again.
>>>
>>> Obviously I am making this harder than it should be. But I just can't
>>> get the right combination to run a cluster behind a proxy where I can login
>>> to the registry (docker login). Maybe I should have said that's what I'm
>>> trying to do from the beginning.
>>>
>>> Cheers
>>>
>>>
>>> Lionel.
>>>
>>> On Tue, 9 Aug 2016 at 1:16 AM, Clayton Coleman 
>>> wrote:
>>>
 Generally deep configuration is not the goal of oc cluster up - that's
more the Ansible install's responsibility.  oc cluster up is about getting a
running cluster up for test / dev as quickly as possible, but we don't want
to add fine-grained tuning to it.

 On Mon, Aug 8, 2016 at 10:49 AM, Cesar Wong  wrote:

> Hi Lionel,
>
> You can always reuse the same data/config dirs and keep your service
> ips:
>
> oc cluster up --host-data-dir=blah --host-config-dir=blah
> --use-existing-config
>
> On Aug 7, 2016, at 9:17 PM, Lionel Orellana 
> wrote:
>
> Thanks Clayton.
>
> Would be nice to have a way of setting the address when using cluster
> up though.
> On Mon, 8 Aug 2016 at 11:03 AM, Clayton Coleman 
> wrote:
>
>> When you create the registry you can specify the service IP that is
>> assigned (as long as another service hasn't claimed it).
>>
>> $ oadm registry -o yaml > registry.yaml
>> $ vi registry.yaml
>> # Set the registry service `spec.clusterIP` field to a valid
>> service IP (must be within the service CIDR, typically 172.30.0.0/16)
>> $ oc create -f registry.yaml
>>
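
For anyone following along, the edit Clayton describes amounts to pinning
spec.clusterIP in the service definition. A sketch of the relevant part of
registry.yaml (the IP shown is an arbitrary free address in the default
service CIDR, not a recommended value):

```yaml
# Hypothetical edited fragment of registry.yaml: pin the registry's service
# IP so it can be added to the Docker daemon's NO_PROXY ahead of time.
apiVersion: v1
kind: Service
metadata:
  name: docker-registry
spec:
  clusterIP: 172.30.1.1   # any unclaimed IP inside the service CIDR
  ports:
  - name: 5000-tcp
    port: 5000
    targetPort: 5000
```

Any unclaimed address inside the cluster's service CIDR works; the API server
rejects an address outside the range or one already claimed by another service.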
>>
>> On Sun, Aug 7, 2016 at 8:55 PM, Lionel Orellana 
>> wrote:
>>
>>> Hi
>>>
>>> I'm facing a similar problem to this:
>>> https://github.com/openshift/origin/issues/7879
>>>
>>> Basically I need to configure the NO_PROXY variable of the Docker
>>> daemon to include the registry address. Problem is with cluster up I
>>> can't control the ip address that will be assigned to the registry. Or
>>> at least I can't find a way to do it. Is there an option that I'm not
>>> seeing?
>>>
>>> Thanks
>>>
>>> Lionel.
>>>
>>> 

Re: cluster up - reuse registry address

2016-08-11 Thread Lionel Orellana
Yes.

On 10 August 2016 at 18:04, Cesar Wong  wrote:

> Lionel,
>
> So is it working for you now?
>
> On Aug 9, 2016, at 11:10 PM, Lionel Orellana  wrote:
>
> Digging through the go libraries used for parsing the command options I
> found that setting the no_proxy variable like this works:
>
> -e \"no_proxy=172.17.0.3,172.17.0.4\"
>
> It all comes down to https://golang.org/pkg/encoding/csv
>
> which is used by the pflag package.
> On Tue, 9 Aug 2016 at 10:31 PM, Lionel Orellana 
> wrote:
>
>> Setting the log level to 4 I found the following
>>
>>   Starting OpenShift using container 'origin'
>> I0809 22:21:26.415373   20151 run.go:143] Creating container named
>> "origin"
>> config:
>>   image: openshift/origin:v1.3.0-alpha.2
>>   command:
>> start
>> --master-config=/var/lib/origin/openshift.local.config/
>> master/master-config.yaml
>> --node-config=/var/lib/origin/openshift.local.config/node-
>> poc-docker03.aipo.gov.au/node-config.yaml
>>   environment:
>> http_proxy=http://proxy.aipo.gov.au:3128
>> https_proxy=http://proxy.aipo.gov.au:3128
>>* no_proxy=172.17.0.3*
>> *172.17.0.4*
>>
>> I've tried different ways of setting multiple ip's in no_proxy but they
>> always seem to be getting split on the comma.
>>
>> -e "no_proxy=172.17.0.3,172.17.0.4"
>> -e no_proxy="172.17.0.3\,172.17.0.4"
>> -e no_proxy=’172.17.0.3,172.17.0.4’
>> -e no_proxy=172.17.0.3,172.17.0.4
>>
>> This might be causing some of my problems. The fact that I can't set more
>> than one ip address in no_proxy.
>>
>>
>>
>>
>>
>>
>>
>> On 9 August 2016 at 11:18, Lionel Orellana  wrote:
>>
>>> I guess what I need is a way to configure the proxy as per
>>> https://docs.openshift.org/latest/install_config/http_
>>> proxies.html#configuring-hosts-for-proxies
>>>
>>>
>>> On Tue, 9 Aug 2016 at 10:05 AM, Lionel Orellana 
>>> wrote:
>>>
 It's been difficult to get a functional poc going with oc cluster up
 behind a proxy.

 I need to maintain the registry's address so I can add it to the
no_proxy variable of the docker daemon. Clayton's procedure works for
reusing the address. I will try --use-existing-config.

 But I also need to add the registry's internal address (which always
 seems to be initially set to 172.17.0.4) to the no_proxy variable of the
 cluster up command itself. Otherwise the health checks try to go through
 the proxy and fail.

 When I recreate the registry (in order to set a known service ip) the
 pod ip changes and the health checks start to fail again.

 Obviously I am making this harder than it should be. But I just can't
 get the right combination to run a cluster behind a proxy where I can login
 to the registry (docker login). Maybe I should have said that's what I'm
 trying to do from the beginning.

 Cheers


 Lionel.

 On Tue, 9 Aug 2016 at 1:16 AM, Clayton Coleman 
 wrote:

> Generally deep configuration is not the goal of oc cluster up - that's
> more the Ansible install's responsibility. oc cluster up is about getting
> a running cluster up for test / dev as quickly as possible, but we don't
> want to add fine-grained tuning to it.
>
> On Mon, Aug 8, 2016 at 10:49 AM, Cesar Wong  wrote:
>
>> Hi Lionel,
>>
>> You can always reuse the same data/config dirs and keep your service
>> ips:
>>
>> oc cluster up --host-data-dir=blah --host-config-dir=blah
>> --use-existing-config
>>
>> On Aug 7, 2016, at 9:17 PM, Lionel Orellana 
>> wrote:
>>
>> Thanks Clayton.
>>
>> Would be nice to have a way of setting the address when using cluster
>> up though.
>> On Mon, 8 Aug 2016 at 11:03 AM, Clayton Coleman 
>> wrote:
>>
>>> When you create the registry you can specify the service IP that is
>>> assigned (as long as another service hasn't claimed it).
>>>
>>> $ oadm registry -o yaml > registry.yaml
>>> $ vi registry.yaml
>>> # Set the registry service `spec.clusterIP` field to a valid
>>> service IP (must be within the service CIDR, typically 172.30.0.0/16
>>> )
>>> $ oc create -f registry.yaml
>>>
>>>
>>> On Sun, Aug 7, 2016 at 8:55 PM, Lionel Orellana 
>>> wrote:
>>>
 Hi

 I'm facing a similar problem to this: https://github.com/openshift/
 origin/issues/7879

 Basically I need to configure the NO_PROXY variable of the Docker
daemon to include the registry address. Problem is with cluster up I can't
control the ip address that will be assigned to the registry. Or at least I
can't find a way to do it. Is there an 

Adding master to 3 node install

2016-08-11 Thread David Strejc
I have a basic setup with 3 physical nodes running OpenShift nodes, with
the master installed on the first node.

Is there a way to add another master server to this scenario?

I would like to have an HA setup.

I've used openshift ansible for setup.

David Strejc
https://octopussystems.cz
t: +420734270131
e: david.str...@gmail.com

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Problem authenticating to private docker registry

2016-08-11 Thread Michal Minář

Tony,

your docker-lab.example.com registry seems to be of version 2.3.0 or higher
which supports manifest v2 schema 2 [1]. If you use Docker client >=
1.10 (which also supports this schema) to push any image there, the
registry will store schema 2. If you then pull it using older client
(supporting v1) such as origin 1.2 / ose 3.2, registry will convert it
on-the-fly to schema 1, which has a different digest. The digest doesn't
actually exist on the registry and therefore the pull by digest (e.g.
docker pull 
docker-lab.example.com:5000/testwebapp@sha256:9799a25cd6fd7f7908bad740fc0c85823e38aa22afb22f687a5b8a3ed2bf9ec3)
will fail.

What you can do to address this:

1. re-push your images to docker-lab.example.com with docker 1.9
2. downgrade your docker-lab.example.com registry to version 2.2.1 and
   re-push your images with whatever Docker you have
3. update your origin to latest master so you can pull schema 2

I'd recommend one of the first two options which will ensure that
registry stores and serves only manifest v2 schema 1 which is pull-able
without a problem.

Hope that helps,
Michal

[1] 
https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-2.md
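
A quick way to see which schema a registry will serve is to request the
manifest with the schema 2 media type in the Accept header and check the
Content-Type of the response. A sketch in Go (the host, repository, and tag
are the example values from this thread; a real registry will typically also
need credentials and possibly custom TLS settings):

```go
package main

import (
	"fmt"
	"net/http"
)

// newManifestRequest builds a Docker Registry HTTP API v2 request asking for
// manifest v2 schema 2. A registry >= 2.3.0 that has schema 2 stored answers
// with Content-Type application/vnd.docker.distribution.manifest.v2+json;
// otherwise it falls back to schema 1 (...manifest.v1+prettyjws), whose
// digest differs from the schema 2 digest.
func newManifestRequest(registry, repo, tag string) (*http.Request, error) {
	url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repo, tag)
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", "application/vnd.docker.distribution.manifest.v2+json")
	return req, nil
}

func main() {
	req, err := newManifestRequest("docker-lab.example.com:5000", "testwebapp", "latest")
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req) // add auth/TLS config as needed
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("manifest served as:", resp.Header.Get("Content-Type"))
}
```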

On 10.8.2016 20:26, Tony Saxon wrote:
> No worries. Thanks everyone for the help so far. Let me know if there's
> any other helpful information I can provide.
>
> I am able to pull the image down without any issues if I use the latest
> tag in case that helps:
>
> [root@os-node1 ~]# docker pull docker-lab.example.com:5000/testwebapp:latest
> Trying to pull repository docker-lab.example.com:5000/testwebapp ...
> latest: Pulling from docker-lab.example.com:5000/testwebapp
> 3d8673bd162a: Pull complete
> 855e002c7563: Pull complete
> 2c2e00e4aa2a: Pull complete
> Digest: sha256:5216e79273fc6221f8f5896632e4de633eeaa66347d0500c39d9d0006912e42d
> Status: Downloaded newer image for docker-lab.example.com:5000/testwebapp:latest
>
> On Wed, Aug 10, 2016 at 2:21 PM, Andy Goldstein wrote:
>> Ok, thanks. I'm not really involved with the registry any more, so I'll
>> have to defer to Maciej and Michal. We may need to try to reproduce to
>> see what's going on. Sorry I couldn't be more helpful.
>>
>> Andy
>>
>> On Wed, Aug 10, 2016 at 2:18 PM, Tony Saxon wrote:
>>> [root@os-node1 ~]# docker pull docker-lab.example.com:5000/testwebapp@sha256:9799a25cd6fd7f7908bad740fc0c85823e38aa22afb22f687a5b8a3ed2bf9ec3
>>> Trying to pull repository docker-lab.example.com:5000/testwebapp ...
>>> manifest unknown: manifest unknown
>>>
>>> On Wed, Aug 10, 2016 at 2:07 PM, Andy Goldstein wrote:
>>>> Tony, can you show the output when you try to manually 'docker pull'?
>>>>
>>>> On Wed, Aug 10, 2016 at 2:04 PM, Cesar Wong wrote:
>>>>> Hmm, I didn't know the issue existed between 1.10 and 1.12 as well.
>>>>>
>>>>> Andy, what would you recommend?
>>>>>
>>>>> On Aug 10, 2016, at 1:58 PM, Tony Saxon wrote:
>>>>>> Ok, maybe that is the issue. I can not do the docker pull
>>>>>> referencing the sha256 hash on the node.
>>>>>>
>>>>>> The docker version running on the node is docker 1.10.3, and the
>>>>>> docker version on the machine that pushed the image is 1.12.0. Is
>>>>>> there a potential workaround for this, or do I need to get the
>>>>>> docker version updated on the nodes? For reference, I installed the
>>>>>> openshift platform using the ansible advanced installation
>>>>>> referenced in the documentation.
>>>>>>
>>>>>> On Wed, Aug 10, 2016 at 1:46 PM, Cesar Wong wrote:
>>>>>>> Tony,
>>>>>>>
>>>>>>> The only other time that I've seen the manifest not found error was
>>>>>>> when there was a version mismatch between the Docker version that
>>>>>>> pushed the image vs the version that was consuming the image (ie.
>>>>>>> images pushed with Docker 1.9 and pulled with Docker 1.10). Are you
>>>>>>> able to pull the image spec directly from your node using the
>>>>>>> Docker cli?
>>>>>>>
>>>>>>> $ docker pull docker-lab.example.com:5000/testwebapp@sha256:9799a25cd6fd7f7908bad740fc0c85823e38aa22afb22f687a5b8a3ed2bf9ec3
>>>>>>>
>>>>>>> On Aug 10, 2016, at 1:02 PM, Tony Saxon wrote:
>>>>>>>> I'm not sure if this has anything to do with it, but I looked at
>>>>>>>> the details of the imagestream that I imported and see that it has
>>>>>>>> this as the docker image reference:
>>>>>>>>
>>>>>>>> status:
>>>>>>>>   dockerImageRepository: 172.30.11.167:5000/testwebapp/testwebapp
>>>>>>>>   tags:
>>>>>>>>   - items:
>>>>>>>>     - created: 2016-08-10T13:26:01Z
>>>>>>>>       dockerImageReference: docker-lab.example.com:5000/testwebapp@sha256:9799a25cd6fd7f7908bad740fc0c85823e38aa22afb22f687a5b8a3ed2bf9ec3