Re: cluster up - reuse registry address

2016-08-12 Thread v

Hey,

for all the curious people out there:
Just wanted to say that pulling via oc new-app or oc run no longer worked on my
v1.2.1 cluster after I used this trick.

On top of that, this trick has permanently broken something in my cluster: "oadm prune images
--confirm" now randomly tries to connect to one of the IPs that I tried to use for "spec.clusterIP".
To get "oadm prune images --confirm" working again I now have to run "oadm prune images --confirm
--registry-url=SERVICEIP".

Regards
v

On 2016-08-08 at 03:03, Clayton Coleman wrote:

When you create the registry you can specify the service IP that is assigned 
(as long as another service hasn't claimed it).

$ oadm registry -o yaml > registry.yaml
$ vi registry.yaml
# Set the registry service's `spec.clusterIP` field to a valid service IP
# (it must be within the service CIDR, typically 172.30.0.0/16)
$ oc create -f registry.yaml
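
For illustration, the edited service entry in registry.yaml might end up looking roughly
like the sketch below; the clusterIP shown is only an example inside the default CIDR.

apiVersion: v1
kind: Service
metadata:
  name: docker-registry
spec:
  clusterIP: 172.30.1.1        # example value: any unclaimed IP inside your service CIDR
  ports:
  - name: 5000-tcp
    port: 5000
    targetPort: 5000
  selector:
    docker-registry: default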


On Sun, Aug 7, 2016 at 8:55 PM, Lionel Orellana <lione...@gmail.com> wrote:

Hi I'm facing a similar problem to this: 
https://github.com/openshift/origin/issues/7879 
 Basically I need to configure 
the NO_PROXY variable of the Docker deamon to include the registry address. Problem 
is with cluster up I can't control the ip address that will be assigned to the 
registry. Or at least I can't find a way to do it. Is there an option that I'm not 
seeing? Thanks, Lionel.


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


cluster dns

2016-08-12 Thread James Eckersall

Hi,

I believe I've identified a couple of bugs, but would like to ask for 
other opinions before raising them officially.  These might also be more 
related to Kubernetes than Openshift.



First one:

We have a cluster running Openshift Origin v1.2.1 with kubernetes 
v1.2.0-36-g4a3f9c5.  This cluster has 3 masters and 4+ nodes.


We've noticed that if we take master01 offline for maintenance, DNS 
lookups inside the cluster are affected.  It only affects lookups to the 
172.30.0.1 cluster IP which translates to 53 tcp/udp on the three 
masters.  Due to the iptables rules using the probability module, 1 in 3 
DNS lookups fails as it is directed to the offline master.  I guess 
what's really needed here is for the master to be removed from the 
service endpoint so that the iptables rules are amended to prevent 
traffic hitting a master that isn't working.  I would like to see 
healthchecks here too.
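
(For reference, kube-proxy's iptables mode spreads a service across its endpoints with
"statistic" rules roughly like the sketch below; the chain names and probabilities are
illustrative, not copied from this cluster. With master01 down, its endpoint chain is
still present, so roughly a third of the packets keep being sent to a dead backend.)

-A KUBE-SERVICES -d 172.30.0.1/32 -p udp --dport 53 -j KUBE-SVC-DNS-UDP
-A KUBE-SVC-DNS-UDP -m statistic --mode random --probability 0.33333 -j KUBE-SEP-MASTER01
-A KUBE-SVC-DNS-UDP -m statistic --mode random --probability 0.50000 -j KUBE-SEP-MASTER02
-A KUBE-SVC-DNS-UDP -j KUBE-SEP-MASTER03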



Second one:

This cluster is running Openshift Origin v1.3.0-alpha.2+3da4944 with 
kubernetes v1.3.0+57fb9ac.  Again 3 masters and a bunch (~20) nodes.


The first bug also applies to this cluster but the behaviour is somewhat 
different due to the introduction of iptables recent module rules.  I'm 
not 100% clear on the behaviour of this, but what seems to happen is 
that the first "recent" rule is always matched and hence all traffic 
from internal pods to cluster IP's always hits the first endpoint.  This 
means that ~100% of all DNS lookups against service names fail from 
other pods while master01 is down.
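
(Again purely illustrative: with the newer proxier, services that have client-IP session
affinity get "recent" rules of roughly this shape ahead of the probability rules, so a pod
that recently talked to an endpoint keeps being steered back to that same endpoint, dead
or not.)

-A KUBE-SVC-DNS-UDP -m recent --name KUBE-SEP-MASTER01 --rcheck --seconds 10800 --reap -j KUBE-SEP-MASTER01
-A KUBE-SVC-DNS-UDP -m recent --name KUBE-SEP-MASTER02 --rcheck --seconds 10800 --reap -j KUBE-SEP-MASTER02
-A KUBE-SVC-DNS-UDP -m recent --name KUBE-SEP-MASTER03 --rcheck --seconds 10800 --reap -j KUBE-SEP-MASTER03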



If anyone has any information to share on this, I'd be grateful. I can 
also provide further details if required.



Thanks


J

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Known-good iconClass listing?

2016-08-12 Thread Slava Semushin
Hi,


for the first question I created an issue some time ago: 
https://github.com/openshift/openshift-docs/issues/1329


-- 
Slava Semushin | OpenShift

- Original Message -
From: "N. Harrison Ripps" 
To: "Origin Users List" 
Sent: Thursday, August 11, 2016 4:26:20 PM
Subject: Known-good iconClass listing?

Hey there-- 
I am working on creating a template and was curious about the iconClass 
setting. I can grep for it in the origin codebase and I see several different 
icon names, but: 

1) Is there an official list of valid iconClass values? 
2) Is there a generic "Application" icon? 

Thanks, 
Harrison 



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



Re: Known-good iconClass listing?

2016-08-12 Thread Jessica Forrester
Thanks Slava! We can move any discussion about doc'ing this better to the
issue.

On Fri, Aug 12, 2016 at 7:57 AM, Slava Semushin  wrote:

> Hi,
>
>
> for the first question I created an issue some time ago:
> https://github.com/openshift/openshift-docs/issues/1329
>
>
> --
> Slava Semushin | OpenShift
>
> - Original Message -
> From: "N. Harrison Ripps" 
> To: "Origin Users List" 
> Sent: Thursday, August 11, 2016 4:26:20 PM
> Subject: Known-good iconClass listing?
>
> Hey there--
> I am working on creating a template and was curious about the iconClass
> setting. I can grep for it in the origin codebase and I see several
> different icon names, but:
>
> 1) Is there an official list of valid iconClass values?
> 2) Is there a generic "Application" icon?
>
> Thanks,
> Harrison
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: configuring periodic import of images

2016-08-12 Thread Tony Saxon
Ok, so I'm a little confused. If my problem is the manifest schema, I had
thought that I already fixed that by downgrading my private registry to an
older version that didn't support schema 2 (
http://lists.openshift.redhat.com/openshift-archives/users/2016-August/msg00081.html
).

Basically I downgraded my registry to version 2.2.1 just so that I could
deploy an application from an imagestream that pulled from my private
registry. That works successfully.

Does the internal registry that is used by docker support schema 2? If I
reconfigure that to be secure and expose it externally and push my images
to that will I still run into this problem?
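
(One way to see which manifest schema a registry hands back for a tag is to ask for it
explicitly; this is a hypothetical command using the registry/repository names from this
thread, and depending on your setup an auth token may be required:)

$ curl -sI \
    -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
    http://docker-lab.example.com:5000/v2/testwebapp/manifests/latest \
    | grep -iE 'content-type|docker-content-digest'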

On Thu, Aug 11, 2016 at 9:26 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> https://docs.openshift.com/enterprise/3.2/install_config/
> install/docker_registry.html
>
> " The manifest v2 schema 2
> 
>  (*schema2*) is not yet supported."
>
> Sorry :)
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Adding master to 3 node install

2016-08-12 Thread Philippe Lafoucrière
On Thu, Aug 11, 2016 at 8:42 PM, Jason DeTiberus 
wrote:

> This sounds like your registry was using ephemeral storage rather than
> being backed by a PV or object storage.


It shouldn't be.
We're using:

docker_register_volume_source='{"nfs": { "server": "10.x.x.x", "path":
"/zpool-1234/registry/"}}'

Anyway, it seems this variable isn't used anymore :( (in favor of the
portion you mentioned in your link).
I will investigate that.

Yet, I can see the volume present in the yaml manifest:

spec:
  volumes:
-
  name: registry-storage
  nfs:
server: 10.x.x.x
path: /zpool-1234/registry/
-
  name: registry-token-xmulp
  secret:
secretName: registry-token-xmulp
[...]

volumeMounts:
  -
name: registry-storage
mountPath: /registry
  -
name: registry-token-xmulp
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount


but not in the console:

[image: Inline image 1]

Weird :)
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: cluster dns

2016-08-12 Thread Scott Dodson
https://github.com/openshift/origin/pull/9469 should resolve both
items but it's not in 1.2 codebases. It should be in the latest 1.3
alpha.

On Fri, Aug 12, 2016 at 4:37 AM, James Eckersall  wrote:
> Hi,
>
> I believe I've identified a couple of bugs, but would like to ask for other
> opinions before raising them officially.  These might also be more related
> to Kubernetes than Openshift.
>
>
> First one:
>
> We have a cluster running Openshift Origin v1.2.1 with kubernetes
> v1.2.0-36-g4a3f9c5.  This cluster has 3 masters and 4+ nodes.
>
> We've noticed that if we take master01 offline for maintenance, DNS lookups
> inside the cluster are affected.  It only affects lookups to the 172.30.0.1
> cluster IP which translates to 53 tcp/udp on the three masters.  Due to the
> iptables rules using the probability module, 1 in 3 DNS lookups fails as it
> is directed to the offline master.  I guess what's really needed here is for
> the master to be removed from the service endpoint so that the iptables
> rules are amended to prevent traffic hitting a master that isn't working.  I
> would like to see healthchecks here too.
>
>
> Second one:
>
> This cluster is running Openshift Origin v1.3.0-alpha.2+3da4944 with
> kubernetes v1.3.0+57fb9ac.  Again 3 masters and a bunch (~20) nodes.
>
> The first bug also applies to this cluster but the behaviour is somewhat
> different due to the introduction of iptables recent module rules.  I'm not
> 100% clear on the behaviour of this, but what seems to happen is that the
> first "recent" rule is always matched and hence all traffic from internal
> pods to cluster IP's always hits the first endpoint.  This means that ~100%
> of all DNS lookups against service names fail from other pods while master01
> is down.
>
>
> If anyone has any information to share on this, I'd be grateful. I can also
> provide further details if required.
>
>
> Thanks
>
>
> J
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: configuring periodic import of images

2016-08-12 Thread Clayton Coleman
To have openshift import an image's metadata from another registry (which
finds the digest ID of the image, so that internally you can trigger
deployments that use the latest digest ID), OpenShift needs to be able to
get the correct digest ID.  When Docker 1.10+ tries to push an image, it
first tries to push as a v2schema, and if that fails pushes as a v1schema.
Because v1schema and v2schema have different digest IDs, when a v2schema is
pushed the Docker registry tells OpenShift 1.2 that the digest is the
v1schema value, but in reality only the v2schema value can be pulled.

OpenShift 1.3 adds support for using the newer registry client so that it
gets the v2schema value.  We hope to cut an rc very soon, but until then,
if you want to have openshift import images by digest (what most of the
tools do by default) you need to push your images using Docker 1.9.  If you
want to bypass the import by digest, you can use the `--reference` flag
which only imports the tag name (but includes none of the metadata):

oc tag --reference --source=docker SOME_DOCKER_TAG IMAGESTREAM:TAG
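
For example (hypothetical image stream name, reusing the registry/repository from this thread):

oc tag --reference --source=docker docker-lab.example.com:5000/testwebapp:latest testwebapp:latest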



On Fri, Aug 12, 2016 at 8:58 AM, Tony Saxon  wrote:

> Ok, so I'm a little confused. If my problem is the manifest schema, I had
> thought that I already fixed that by downgrading my private registry to an
> older version that didn't support schema 2 (http://lists.openshift.
> redhat.com/openshift-archives/users/2016-August/msg00081.html).
>
> Basically I downgraded my registry to version 2.2.1 just so that I could
> deploy an application from an imagestream that pulled from my private
> registry. That works successfully.
>
> Does the internal registry that is used by docker support schema 2? If I
> reconfigure that to be secure and expose it externally and push my images
> to that will I still run into this problem?
>
> On Thu, Aug 11, 2016 at 9:26 PM, Philippe Lafoucrière <
> philippe.lafoucri...@tech-angels.com> wrote:
>
>> https://docs.openshift.com/enterprise/3.2/install_config/ins
>> tall/docker_registry.html
>>
>> " The manifest v2 schema 2
>> 
>>  (*schema2*) is not yet supported."
>>
>> Sorry :)
>> ​
>>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: configuring periodic import of images

2016-08-12 Thread Tony Saxon
Right, I get the v1schema vs v2schema issue. What I'm saying is that I've
already been able to import the image from the private docker repository
into an imagestream:

[root@os-master ~]# oc describe is
Name:   testwebapp
Created:24 hours ago
Labels: 
Annotations:
openshift.io/image.dockerRepositoryCheck=2016-08-11T13:02:27Z
Docker Pull Spec:   172.30.11.167:5000/testwebapp/testwebapp

Tag     Spec                                              Created        PullSpec                                                           Image
latest  docker-lab.example.com:5000/testwebapp:latest *   24 hours ago   docker-lab.example.com:5000/testwebapp@sha256:c1c8c6c3e1c672...


  * tag is scheduled for periodic import
  ! tag is insecure and can be imported over HTTP or self-signed HTTPS


[root@os-master ~]# oc describe dc/testwebapp
Name:   testwebapp
Created:24 hours ago
Labels: app=testwebapp
Annotations:openshift.io/generated-by=OpenShiftNewApp
Latest Version: 3
Selector:   app=testwebapp,deploymentconfig=testwebapp
Replicas:   3
Triggers:   Config, Image(testwebapp@latest, auto=true)
Strategy:   Rolling
Template:
  Labels:   app=testwebapp,deploymentconfig=testwebapp
  Annotations:
openshift.io/container.testwebapp.image.entrypoint=["/bin/sh","-c","/usr/local/tomcat/bin/startup.sh
\u0026\u0026 tail -f /usr/local/tomcat/logs/catalina.out"],
openshift.io/generated-by=OpenShiftNewApp
  Containers:
  testwebapp:
Image:
docker-lab.example.com:5000/testwebapp@sha256:c1c8c6c3e1c6729d1366acaf54c9772b4849f35d971e73449cf9044f3af06074
Port:
QoS Tier:
  cpu:  BestEffort
  memory:   BestEffort
Environment Variables:
  No volumes.

Deployment #3 (latest):
Name:   testwebapp-3
Created:18 hours ago
Status: Complete
Replicas:   3 current / 3 desired
Selector:
app=testwebapp,deployment=testwebapp-3,deploymentconfig=testwebapp
Labels: app=testwebapp,
openshift.io/deployment-config.name=testwebapp
Pods Status:3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Deployment #2:
Created:21 hours ago
Status: Complete
Replicas:   0 current / 0 desired
Deployment #1:
Created:24 hours ago
Status: Complete
Replicas:   0 current / 0 desired

No events.

All updated images have been pushed to the registry from the same docker
client. If the issue was the manifest 2 vs 1 issue wouldn't I have been
unable to deploy the app initially as well?

On Fri, Aug 12, 2016 at 9:30 AM, Clayton Coleman 
wrote:

> To have openshift import an image's metadata from another registry (which
> finds the digest ID of the image, so that internally you can trigger
> deployments that use the latest digest ID), OpenShift needs to be able to
> get the correct digest ID.  When Docker 1.10+ tries to push an image, it
> first tries to push as a v2schema, and if that fails pushes as a v1schema.
> Because v1schema and v2schema have different digest IDs, when a v2schema is
> pushed the Docker registry tells OpenShift 1.2 that the digest is the
> v1schema value, but in reality only the v2schema value can be pulled.
>
> OpenShift 1.3 adds support for using the newer registry client so that it
> gets the v2schema value.  We hope to cut an rc very soon, but until then,
> if you want to have openshift import images by digest (what most of the
> tools do by default) you need to push your images using Docker 1.9.  If you
> want to bypass the import by digest, you can use the `--reference` flag
> which only imports the tag name (but includes none of the metadata):
>
> oc tag --reference --source=docker SOME_DOCKER_TAG IMAGESTREAM:TAG
>
>
>
> On Fri, Aug 12, 2016 at 8:58 AM, Tony Saxon  wrote:
>
>> Ok, so I'm a little confused. If my problem is the manifest schema, I had
>> thought that I already fixed that by downgrading my private registry to an
>> older version that didn't support schema 2 (http://lists.openshift.redhat
>> .com/openshift-archives/users/2016-August/msg00081.html).
>>
>> Basically I downgraded my registry to version 2.2.1 just so that I could
>> deploy an application from an imagestream that pulled from my private
>> registry. That works successfully.
>>
>> Does the internal registry that is used by docker support schema 2? If I
>> reconfigure that to be secure and expose it externally and push my images
>> to that will I still run into this problem?
>>
>> On Thu, Aug 11, 2016 at 9:26 PM, Philippe Lafoucrière <
>> philippe.lafoucri...@tech-angels.com> wrote:
>>
>>> https://docs.openshift.com/enterprise/3.2/install_config/ins
>>> tall/docker_registry.html
>>>
>>> " The manifest v2 schema 2
>>> 
>>>  (*schema2*) is not yet supported."
>>>
>>> Sorry :)
>>> ​
>>>
>>
>>
>> ___
>> users mailing list
>> users@lists

Re: configuring periodic import of images

2016-08-12 Thread Clayton Coleman
When you restart your server it should attempt to import everything.  Can
you restart the openshift controllers process (or master, if you aren't
running the separate controllers process) with --loglevel=5 and search for "
172.30.11.167:5000/testwebapp/testwebapp"?  You should see log lines about
importing the image and a result about why it isn't imported.
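
(As a rough sketch, assuming an RPM/systemd install; the sysconfig path and unit name vary
by install type, e.g. origin-master vs. origin-master-controllers:)

# bump the verbosity, e.g. set OPTIONS=--loglevel=5 in /etc/sysconfig/origin-master, then:
$ systemctl restart origin-master
$ journalctl -u origin-master -f | grep "172.30.11.167:5000/testwebapp/testwebapp"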

On Fri, Aug 12, 2016 at 9:57 AM, Tony Saxon  wrote:

> Right, I get the v1schema vs v2schema issue. What I'm saying is that I've
> already been able to import the image from the private docker repository
> into an imagestream:
>
> [root@os-master ~]# oc describe is
> Name:   testwebapp
> Created:24 hours ago
> Labels: 
> Annotations:openshift.io/image.dockerRepositoryCheck=2016-08-
> 11T13:02:27Z
> Docker Pull Spec:   172.30.11.167:5000/testwebapp/testwebapp
>
> Tag Spec
> Created PullSpec
>   Image
> latest  docker-lab.example.com:5000/testwebapp:latest *24 hours
> agodocker-lab.example.com:5000/testwebapp@sha256:c1c8c6c3e1c672...
> 
>
>   * tag is scheduled for periodic import
>   ! tag is insecure and can be imported over HTTP or self-signed HTTPS
>
>
> [root@os-master ~]# oc describe dc/testwebapp
> Name:   testwebapp
> Created:24 hours ago
> Labels: app=testwebapp
> Annotations:openshift.io/generated-by=OpenShiftNewApp
> Latest Version: 3
> Selector:   app=testwebapp,deploymentconfig=testwebapp
> Replicas:   3
> Triggers:   Config, Image(testwebapp@latest, auto=true)
> Strategy:   Rolling
> Template:
>   Labels:   app=testwebapp,deploymentconfig=testwebapp
>   Annotations:  openshift.io/container.testwebapp.image.entrypoint=[
> "
> /bin/sh","-c","/usr/local/tomcat/bin/startup.sh \u0026\u0026 tail -f
> /usr/local/tomcat/logs/catalina.out"],openshift.io/
> generated-by=OpenShiftNewApp
>   Containers:
>   testwebapp:
> Image:  docker-lab.example.com:5000/testwebapp@sha256:
> c1c8c6c3e1c6729d1366acaf54c9772b4849f35d971e73449cf9044f3af06074
> Port:
> QoS Tier:
>   cpu:  BestEffort
>   memory:   BestEffort
> Environment Variables:
>   No volumes.
>
> Deployment #3 (latest):
> Name:   testwebapp-3
> Created:18 hours ago
> Status: Complete
> Replicas:   3 current / 3 desired
> Selector:   app=testwebapp,deployment=
> testwebapp-3,deploymentconfig=testwebapp
> Labels: app=testwebapp,openshift.io/
> deployment-config.name=testwebapp
> Pods Status:3 Running / 0 Waiting / 0 Succeeded / 0 Failed
> Deployment #2:
> Created:21 hours ago
> Status: Complete
> Replicas:   0 current / 0 desired
> Deployment #1:
> Created:24 hours ago
> Status: Complete
> Replicas:   0 current / 0 desired
>
> No events.
>
> All updated images have been pushed to the registry from the same docker
> client. If the issue was the manifest 2 vs 1 issue wouldn't I have been
> unable to deploy the app initially as well?
>
> On Fri, Aug 12, 2016 at 9:30 AM, Clayton Coleman 
> wrote:
>
>> To have openshift import an image's metadata from another registry (which
>> finds the digest ID of the image, so that internally you can trigger
>> deployments that use the latest digest ID), OpenShift needs to be able to
>> get the correct digest ID.  When Docker 1.10+ tries to push an image, it
>> first tries to push as a v2schema, and if that fails pushes as a v1schema.
>> Because v1schema and v2schema have different digest IDs, when a v2schema is
>> pushed the Docker registry tells OpenShift 1.2 that the digest is the
>> v1schema value, but in reality only the v2schema value can be pulled.
>>
>> OpenShift 1.3 adds support for using the newer registry client so that it
>> gets the v2schema value.  We hope to cut an rc very soon, but until then,
>> if you want to have openshift import images by digest (what most of the
>> tools do by default) you need to push your images using Docker 1.9.  If you
>> want to bypass the import by digest, you can use the `--reference` flag
>> which only imports the tag name (but includes none of the metadata):
>>
>> oc tag --reference --source=docker SOME_DOCKER_TAG IMAGESTREAM:TAG
>>
>>
>>
>> On Fri, Aug 12, 2016 at 8:58 AM, Tony Saxon  wrote:
>>
>>> Ok, so I'm a little confused. If my problem is the manifest schema, I
>>> had thought that I already fixed that by downgrading my private registry to
>>> an older version that didn't support schema 2 (
>>> http://lists.openshift.redhat.com/openshift-archives/users/
>>> 2016-August/msg00081.html).
>>>
>>> Basically I downgraded my registry to version 2.2.1 just so that I could
>>> deploy an application from an imagestream that pulled from my private
>>> registry. That works 

Re: Adding master to 3 node install

2016-08-12 Thread Jason DeTiberus
On Thu, Aug 11, 2016 at 3:28 AM, David Strejc 
wrote:

> I got basic setup with 3 physical nodes running open shift nodes and
> on first node there is installed master server.
>
> Is there a way how I can add master server into this scenario?
>

Apologies for missing the original question in my earlier reply. We don't
really have a good way to go from non-HA to HA currently. We do have a
playbook to add master hosts to a cluster, but it requires that the
environment already be configured as an HA environment. Switching from
non-HA to HA requires introducing a load balancer, switching from using the
embedded etcd to an external etcd cluster, and re-keying certificates for
the master hosts.
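
(For reference, the HA-relevant part of an openshift-ansible inventory looks roughly like the
sketch below; host names are placeholders and the exact variables can differ between releases,
so check the docs for your version.)

[OSEv3:children]
masters
etcd
lb
nodes

[OSEv3:vars]
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift.example.com

[masters]
master1.example.com
master2.example.com
master3.example.com

[etcd]
master1.example.com
master2.example.com
master3.example.com

[lb]
lb.example.com

[nodes]
master1.example.com
master2.example.com
master3.example.com
node1.example.com
node2.example.com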


>
> I would like to have HA setup.
>
> I've used openshift ansible for setup.
>
> David Strejc
> https://octopussystems.cz
> t: +420734270131
> e: david.str...@gmail.com
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>



-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Adding master to 3 node install

2016-08-12 Thread Jason DeTiberus
On Fri, Aug 12, 2016 at 8:58 AM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

>
> On Thu, Aug 11, 2016 at 8:42 PM, Jason DeTiberus 
> wrote:
>
>> This sounds like your registry was using ephemeral storage rather than
>> being backed by a PV or object storage.
>
>
> It's should not.
> We're using:
>
> docker_register_volume_source='{"nfs": { "server": "10.x.x.x", "path":
> "/zpool-1234/registry/"}}'
>
> Anyway, it seems this variable isn't used anymore :( (in favor of the
> portion you mentionned in your link)
> I will investigate that.
>
> Yet, I can see the volume present in the yaml manifest:
>
> spec:
>   volumes:
> -
>   name: registry-storage
>   nfs:
> server: 10.x.x.x
> path: /zpool-1234/registry/
> -
>   name: registry-token-xmulp
>   secret:
> secretName: registry-token-xmulp
> [...]
>
> volumeMounts:
>   -
> name: registry-storage
> mountPath: /registry
>   -
> name: registry-token-xmulp
> readOnly: true
> mountPath: /var/run/secrets/kubernetes.io/serviceaccount
>
>
> but not in the console:
>
> [image: Inline image 1]
>
> Weird :)
>


Indeed. Were there any errors in the node logs? I would expect problems mounting or
accessing the volume to have prevented the pod from starting. I also have to admit I'm
not overly familiar with the changes introduced between 1.2.0 and 1.2.1, or whether
anything there could have an impact.

-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


s2i maven proxy

2016-08-12 Thread Lionel Orellana
Hi,

I'm trying to run the Wildfly 10 template from the console.

I found how to set the proxy variables for git and it is pulling down the
repo fine.

But maven can't connect to central.

Unknown host repo1.maven.org: unknown error


I've tried different environment variables with no luck


strategy:
  type: Source
  sourceStrategy:
    from:
      kind: ImageStreamTag
      namespace: openshift
      name: 'wildfly:10.0'
    env:
      -
        name: HTTPS_PROXY
        value: 'http://: '
      -
        name: HTTP_PROXY
        value: 'http://: '
      -
        name: JAVA_OPTS
        value: '-Dhttp.proxyHost='http:// ' -Dhttp.proxyPort='


Is there a way to set this here, or can it only be done in the s2i script
and the build image?



Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Using a Jenkins Slave in openshift

2016-08-12 Thread Akshaya Khare
Apparently your suggestion was right; I had to add execute permissions to the file.
I got confused because I was using GitHub as the source and didn't realize we
could use git as well.
I copied the code to my local git, changed permissions and was able to run
the pod successfully.

There were some configurations which needed to be done in the Kubernetes
plugin in jenkins, after which the slave pod was working smoothly, it looks
awesome now :)
Thanks for your help!!

Regards,
Akshaya

On Thu, Aug 11, 2016 at 11:53 AM, Ben Parees  wrote:

> No I mean in the repo. Does the file, as stored in your repo, have execute
> permissions.
>
> But what you suggest would also work.
>
> Ben Parees | OpenShift
>
> On Aug 11, 2016 11:47 AM, "Akshaya Khare"  wrote:
>
>> By execute permissions in the repo I hope you mean in the Dockerfile...
>>
>> these are the current commands in my docker file:
>>
>>
>>
>> mkdir -p /var/lib/jenkins && \
>>     chown -R 1001:0 /var/lib/jenkins && \
>>     chmod -R g+w /var/lib/jenkins
>>
>> So if I add 'chmod -R g+x' to the /var/lib/jenkins, it should do the job?
>>
>>
>> On Thu, Aug 11, 2016 at 11:20 AM, Ben Parees  wrote:
>>
>>>
>>>
>>> On Thu, Aug 11, 2016 at 11:18 AM, Akshaya Khare 
>>> wrote:
>>>
 I'm adding the run-jnlp-client
 
 file into my github repository under configuration folder.

>>>
>>> ​does it have execute permissions in your repo?​
>>>
>>>
>>>
 Then I'm using my github link to use in my jenkins-slave-builder url,
 and then openshift builds an image for me...

 On Thu, Aug 11, 2016 at 11:15 AM, Ben Parees 
 wrote:

>
>
> On Thu, Aug 11, 2016 at 11:10 AM, Akshaya Khare <
> khare...@husky.neu.edu> wrote:
>
>> Hi,
>>
>> Thanks for the detailed explanation, and I did get far but got stuck
>> again.
>> So I was able to build a slave Jenkins image and created a
>> buildconfig.
>> After updating the Kubernetes plugin configurations, I was able to
>> spawn a new pod, but the pod fails with the error "ContainerCannotRun".
>> On seeing the logs of the pod, it shows:
>>
>> *exec: "/var/lib/jenkins/run-jnlp-client": permission denied*
>>
>
> ​sounds like /var/lib/jenkins/run-jnlp-client ​ doesn't have the
> right read/execute permissions set.  How are you building the slave image?
>
>
>
>
>>
>> I tried giving admin privileges to my user, and also edit privileges
>> to the serviceaccount in my project:
>>
>>
>> *oc policy add-role-to-group edit system:serviceaccounts -n
>> jenkinstin2*
>> How can I make sure that the pod runs without any permissions issues?
>>
>> On Mon, Aug 8, 2016 at 3:55 PM, Ben Parees 
>> wrote:
>>
>>> The sample defines a buildconfig which ultimately uses this
>>> directory as the context for a docker build:
>>> https://github.com/siamaksade/jenkins-s2i-example/tree/master/slave
>>>
>>> it does that by pointing the buildconfig to this repo:
>>> https://github.com/siamaksade/jenkins-s2i-example
>>>
>>> and the context directory named "slave" within that repo:
>>> https://github.com/siamaksade/jenkins-s2i-example/tree/master/slave
>>>
>>> which you can see defined here:
>>> https://github.com/siamaksade/jenkins-s2i-example/blob/maste
>>> r/jenkins-slave-builder-template.yaml#L36-L40
>>>
>>> https://github.com/siamaksade/jenkins-s2i-example/blob/maste
>>> r/jenkins-slave-builder-template.yaml#L61-L68
>>>
>>> If you are trying to build your own slave image, you need to point
>>> to a repo (and optionally a contextdir within that repo) that contains 
>>> an
>>> appropriate Dockerfile, as the example does.
>>>
>>>
>>>
>>> On Mon, Aug 8, 2016 at 2:43 PM, Akshaya Khare <
>>> khare...@husky.neu.edu> wrote:
>>>
 Hi Ben,

 So after making changes to the imagestream, I wasn't able to get
 the build running initially.
 But that was because already there were failed builds and
 buildconfigs which were preventing the build to run successfully.

 Once I deleted the old failed builds, I was able to get the new
 build running, but it failed once I tried running my Jenkins job.
 I gave my github repository as the repository url for the build,
 and this is the log I get for the failed pod:


 I0808 14:06:51.779594   1 source.go:96] git ls-remote https://github.com/akshayakhare/ims/ --heads
 I0808 14:06:51.779659   1 repository.go:275] Executing git ls-remote https://github.com/akshayakhare/ims/ --heads
 I0

Re: s2i maven proxy

2016-08-12 Thread Lionel Orellana
Got it.

name: MAVEN_OPTS
value: '-DproxyHost=http://  -DproxyPort='

On 13 August 2016 at 08:08, Lionel Orellana  wrote:

> Hi,
>
> I'm trying to run the Wildfly 10 template from the console.
>
> I found how to set the proxy variables for git and it is pulling down the
> repo fine.
>
> But maven can't connect to central.
>
> Unknown host repo1.maven.org: unknown error
>
>
> I've tried different environment variables with no luck
>
>
> strategy:
>
> type: Source
>
> sourceStrategy:
>
>   from:
>
> kind: ImageStreamTag
>
> namespace: openshift
>
> name: 'wildfly:10.0'
>
>   env:
>
> -
>
>   name: HTTPS_PROXY
>
>   value: 'http://: 
> '
>
> -
>
>   name: HTTP_PROXY
>
>   value: 'http://: 
> '
>
> -
>
>   name: JAVA_OPTS
>
>   value: '-Dhttp.proxyHost='http://
> ' -Dhttp.proxyPort='
>
>
> Is there a way to set this here or can it only be done in the s2i script
> and the build image ?
>
>
>
> Thanks
>
>
>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: s2i maven proxy

2016-08-12 Thread Ben Parees
There is also this logic you can leverage via env variables:
https://github.com/openshift-s2i/s2i-wildfly/blob/master/10.0/s2i/bin/assemble#L154
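
(A rough sketch of what that could look like in the build config. The variable names below
are an assumption on my part — check the linked assemble script for the exact names it reads,
e.g. the HTTP_PROXY_HOST / HTTP_PROXY_PORT family.)

sourceStrategy:
  env:
    -
      name: HTTP_PROXY_HOST     # assumed name; verify against the assemble script
      value: proxy.example.com
    -
      name: HTTP_PROXY_PORT     # assumed name
      value: '3128'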

Ben Parees | OpenShift

On Aug 12, 2016 7:00 PM, "Lionel Orellana"  wrote:

> Got it.
>
> name: MAVEN_OPTS
>
> value: '-DproxyHost=http:// 
>  -DproxyPort='
>
> On 13 August 2016 at 08:08, Lionel Orellana  wrote:
>
>> Hi,
>>
>> I'm trying to run the Wildfly 10 template from the console.
>>
>> I found how to set the proxy variables for git and it is pulling down the
>> repo fine.
>>
>> But maven can't connect to central.
>>
>> Unknown host repo1.maven.org: unknown error
>>
>>
>> I've tried different environment variables with no luck
>>
>>
>> strategy:
>>
>> type: Source
>>
>> sourceStrategy:
>>
>>   from:
>>
>> kind: ImageStreamTag
>>
>> namespace: openshift
>>
>> name: 'wildfly:10.0'
>>
>>   env:
>>
>> -
>>
>>   name: HTTPS_PROXY
>>
>>   value: 'http://: 
>> '
>>
>> -
>>
>>   name: HTTP_PROXY
>>
>>   value: 'http://: <
>> port>'
>>
>> -
>>
>>   name: JAVA_OPTS
>>
>>   value: '-Dhttp.proxyHost='http://
>> ' -Dhttp.proxyPort='
>>
>>
>> Is there a way to set this here or can it only be done in the s2i script
>> and the build image ?
>>
>>
>>
>> Thanks
>>
>>
>>
>>
>>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users