Re: jboss-eap64-openshift quickstart maven proxy

2016-10-13 Thread Ben Parees
also, the wildfly image contains the same logic for configuring maven proxies,
meaning it expects the same non-standard env variable:
https://github.com/openshift-s2i/s2i-wildfly/blob/master/10.0/s2i/bin/assemble#L154

so i'm not sure why you didn't have the same problem with it.


On Thu, Oct 13, 2016 at 10:43 PM, Ben Parees  wrote:

>
>
> On Thu, Oct 13, 2016 at 10:39 PM, Lionel Orellana 
> wrote:
>
>> The Wildfly quickstarts work out of the box. Are the Wildfly and JBoss
>> builder images completely different? What's the relationship
>> between jboss-eap-6/eap64-openshift and openshift/wildfly-100-centos7?
>>
>
> ​yes they are completely different, maintained by different teams.  There
> is a similarity of philosophy and original design, but otherwise they are
> independent.
>
>
>
>
>>
>> On 14 October 2016 at 11:00, Ben Parees  wrote:
>>
>>>
>>>
>>> On Thu, Oct 13, 2016 at 6:14 PM, Lionel Orellana 
>>> wrote:
>>>
 Thanks Jim.

 It worked by setting

 HTTP_PROXY_HOST=proxy.server.name

 and

 HTTP_PROXY_PORT=port


 Are these supposed to be set globally by the Ansible scripts? I have
 openshift_http_proxy, openshift_https_proxy, openshift_no_proxy and
 openshift_generate_no_proxy_hosts in my inventory file.

>>>
>>> ​no, unfortunately those values are very specific to those particular
>>> images.  That said, we have an open issue to have those images updated to
>>> respect the more standard proxy env variables.
>>>
>>>
>>>

 On 13 October 2016 at 19:25, Jim Minter  wrote:

> Hi Lionel,
>
> It should be a case of setting the HTTP_PROXY_HOST environment
> variable.  (I'm not sure why it's not just plain HTTP_PROXY - sorry).  See
> [1].
>
> Also note that you can predefine this and other environment variables
> used at build time globally across the cluster if you like [2].
>
> [1] https://access.redhat.com/documentation/en/red-hat-xpaas/0/s
> ingle/red-hat-xpaas-eap-image/#environment_variables_3
> [2] https://docs.openshift.com/container-platform/3.3/install_co
> nfig/build_defaults_overrides.html
>
> Cheers,
>
> Jim
>
> --
> Jim Minter
> Principal Software Engineer, Red Hat UK
>
>
> On 13/10/16 08:27, Lionel Orellana wrote:
>
>> Hi
>>
>> I'm trying to run the jboss-eap64-openshift quickstart but the build
>> is
>> failing to download from maven central.
>>
>> [ERROR] Non-resolvable import POM: Could not transfer artifact
>> org.jboss.bom.eap:jboss-javaee-6.0-with-tools:pom:6.4.0.GA
>>  from/to central
>> (https://repo1.maven.org/maven2):Connection to
>> https://repo1.maven.org
>> refused @ line 71, column 25
>>
>> I had no problems with the Wildfly quickstarts.
>>
>> I tried setting proxyHost and proxyPort in MAVEN_OPTS but that did
>> nothing.
>>
>> Do I really have to clone the repo and modify the maven settings.xml
>> file to run the quickstart? How come Wildfly works?
>>
>> Thanks
>>
>>
>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>

 ___
 users mailing list
 users@lists.openshift.redhat.com
 http://lists.openshift.redhat.com/openshiftmm/listinfo/users


>>>
>>>
>>> --
>>> Ben Parees | OpenShift
>>>
>>>
>>
>
>
> --
> Ben Parees | OpenShift
>
>


-- 
Ben Parees | OpenShift
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: jboss-eap64-openshift quickstart maven proxy

2016-10-13 Thread Ben Parees
On Thu, Oct 13, 2016 at 10:39 PM, Lionel Orellana 
wrote:

> The Wildfly quickstarts work out of the box. Are the Wildfly and JBoss
> builder images completely different? What's the relationship
> between jboss-eap-6/eap64-openshift and openshift/wildfly-100-centos7?
>

​yes they are completely different, maintained by different teams.  There
is a similarity of philosophy and original design, but otherwise they are
independent.




>
> On 14 October 2016 at 11:00, Ben Parees  wrote:
>
>>
>>
>> On Thu, Oct 13, 2016 at 6:14 PM, Lionel Orellana 
>> wrote:
>>
>>> Thanks Jim.
>>>
>>> It worked by setting
>>>
>>> HTTP_PROXY_HOST=proxy.server.name
>>>
>>> and
>>>
>>> HTTP_PROXY_PORT=port
>>>
>>>
>>> Are these supposed to be set globally by the Ansible scripts? I have
>>> openshift_http_proxy, openshift_https_proxy, openshift_no_proxy and
>>> openshift_generate_no_proxy_hosts in my inventory file.
>>>
>>
>> ​no, unfortunately those values are very specific to those particular
>> images.  That said, we have an open issue to have those images updated to
>> respect the more standard proxy env variables.
>>
>>
>>
>>>
>>> On 13 October 2016 at 19:25, Jim Minter  wrote:
>>>
 Hi Lionel,

 It should be a case of setting the HTTP_PROXY_HOST environment
 variable.  (I'm not sure why it's not just plain HTTP_PROXY - sorry).  See
 [1].

 Also note that you can predefine this and other environment variables
 used at build time globally across the cluster if you like [2].

 [1] https://access.redhat.com/documentation/en/red-hat-xpaas/0/s
 ingle/red-hat-xpaas-eap-image/#environment_variables_3
 [2] https://docs.openshift.com/container-platform/3.3/install_co
 nfig/build_defaults_overrides.html

 Cheers,

 Jim

 --
 Jim Minter
 Principal Software Engineer, Red Hat UK


 On 13/10/16 08:27, Lionel Orellana wrote:

> Hi
>
> I'm trying to run the jboss-eap64-openshift quickstart but the build is
> failing to download from maven central.
>
> [ERROR] Non-resolvable import POM: Could not transfer artifact
> org.jboss.bom.eap:jboss-javaee-6.0-with-tools:pom:6.4.0.GA
>  from/to central
> (https://repo1.maven.org/maven2):Connection to https://repo1.maven.org
> refused @ line 71, column 25
>
> I had no problems with the Wildfly quickstarts.
>
> I tried setting proxyHost and proxyPort in MAVEN_OPTS but that did
> nothing.
>
> Do I really have to clone the repo and modify the maven settings.xml
> file to run the quickstart? How come Wildfly works?
>
> Thanks
>
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>>
>> --
>> Ben Parees | OpenShift
>>
>>
>


-- 
Ben Parees | OpenShift
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: jboss-eap64-openshift quickstart maven proxy

2016-10-13 Thread Lionel Orellana
The Wildfly quickstarts work out of the box. Are the Wildfly and JBoss
builder images completely different? What's the relationship
between jboss-eap-6/eap64-openshift and openshift/wildfly-100-centos7?

On 14 October 2016 at 11:00, Ben Parees  wrote:

>
>
> On Thu, Oct 13, 2016 at 6:14 PM, Lionel Orellana 
> wrote:
>
>> Thanks Jim.
>>
>> It worked by setting
>>
>> HTTP_PROXY_HOST=proxy.server.name
>>
>> and
>>
>> HTTP_PROXY_PORT=port
>>
>>
>> Are these supposed to be set globally by the Ansible scripts? I have
>> openshift_http_proxy, openshift_https_proxy, openshift_no_proxy and
>> openshift_generate_no_proxy_hosts in my inventory file.
>>
>
> ​no, unfortunately those values are very specific to those particular
> images.  That said, we have an open issue to have those images updated to
> respect the more standard proxy env variables.
>
>
>
>>
>> On 13 October 2016 at 19:25, Jim Minter  wrote:
>>
>>> Hi Lionel,
>>>
>>> It should be a case of setting the HTTP_PROXY_HOST environment
>>> variable.  (I'm not sure why it's not just plain HTTP_PROXY - sorry).  See
>>> [1].
>>>
>>> Also note that you can predefine this and other environment variables
>>> used at build time globally across the cluster if you like [2].
>>>
>>> [1] https://access.redhat.com/documentation/en/red-hat-xpaas/0/s
>>> ingle/red-hat-xpaas-eap-image/#environment_variables_3
>>> [2] https://docs.openshift.com/container-platform/3.3/install_co
>>> nfig/build_defaults_overrides.html
>>>
>>> Cheers,
>>>
>>> Jim
>>>
>>> --
>>> Jim Minter
>>> Principal Software Engineer, Red Hat UK
>>>
>>>
>>> On 13/10/16 08:27, Lionel Orellana wrote:
>>>
 Hi

 I'm trying to run the jboss-eap64-openshift quickstart but the build is
 failing to download from maven central.

 [ERROR] Non-resolvable import POM: Could not transfer artifact
 org.jboss.bom.eap:jboss-javaee-6.0-with-tools:pom:6.4.0.GA
  from/to central
 (https://repo1.maven.org/maven2):Connection to https://repo1.maven.org
 refused @ line 71, column 25

 I had no problems with the Wildfly quickstarts.

 I tried setting proxyHost and proxyPort in MAVEN_OPTS but that did
 nothing.

 Do I really have to clone the repo and modify the maven settings.xml
 file to run the quickstart? How come Wildfly works?

 Thanks




 ___
 users mailing list
 users@lists.openshift.redhat.com
 http://lists.openshift.redhat.com/openshiftmm/listinfo/users


>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>
> --
> Ben Parees | OpenShift
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Does NFS PVC wipe existing data before attaching?

2016-10-13 Thread Dean Peterson
Awesome, thanks for the answers!

On Thu, Oct 13, 2016 at 3:59 PM, Lionel Orellana  wrote:

> In my limited experimentation I had problems with NFS PVs getting wiped
> out even though the policy was set to Retain. In fact I ended up in this
> situation where if I created a file in the NFS volume it was deleted in
> front of my eyes in a few seconds. Obviously I did something very wrong
> with the PV.  I've found mounting the NFS volume directly into the pod as
> Seth suggested a lot easier.  I also was unable to force a particular PV to
> be bound to a PVC. They seem to work like a pool, and you get what you get.
> So if you have an existing NFS volume with data you want to mount into a
> particular pod, there might not be a way of doing that with PVs, but I would
> love to be proven wrong by others.
>
> On 14 October 2016 at 07:32, Seth Jennings  wrote:
>
>> NFS mounts can be mounted directly into pods without being PVs like this:
>>
>> volumes:
>> - name: shared
>>   nfs:
>>     server: <server>
>>     path: <path>
>>
>> If you are using NFS PVs, then the persistentVolumeReclaimPolicy
>> determines if the data is wiped when the PVC is released.  The default
>> value is "Retain".  It will not delete the data unless you set it to
>> "Recycle".
>>
>> https://docs.openshift.com/enterprise/3.0/admin_guide/persis
>> tent_storage_nfs.html#reclaiming-resources
>>
>> Hope that answers your question!
>>
>> On Thu, Oct 13, 2016 at 10:19 AM, Dean Peterson 
>> wrote:
>> > If I create a persistent volume claim using an NFS share that has
>> existing
>> > data, will the data be wiped? Same thing with creating the persistent
>> > volume. Will the existing data be deleted? I want to make existing data
>> > accessible to multiple pods/containers in an NFS share. If I make a
>> > persistent volume pointing to that existing path, how do I get the
>> > persistent volume claim to access that existing path and make the
>> containers
>> > with the claim see it?
>> >
>> > ___
>> > users mailing list
>> > users@lists.openshift.redhat.com
>> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> >
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: jboss-eap64-openshift quickstart maven proxy

2016-10-13 Thread Ben Parees
On Thu, Oct 13, 2016 at 6:14 PM, Lionel Orellana  wrote:

> Thanks Jim.
>
> It worked by setting
>
> HTTP_PROXY_HOST=proxy.server.name
>
> and
>
> HTTP_PROXY_PORT=port
>
>
> Are these supposed to be set globally by the Ansible scripts? I have
> openshift_http_proxy, openshift_https_proxy, openshift_no_proxy and
> openshift_generate_no_proxy_hosts in my inventory file.
>

​no, unfortunately those values are very specific to those particular
images.  That said, we have an open issue to have those images updated to
respect the more standard proxy env variables.
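
For reference, the conventional variables look something like this (a
sketch; host, port, and the no-proxy list are placeholders):

HTTP_PROXY=http://proxy.server.name:8080
HTTPS_PROXY=http://proxy.server.name:8080
NO_PROXY=localhost,.cluster.local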



>
> On 13 October 2016 at 19:25, Jim Minter  wrote:
>
>> Hi Lionel,
>>
>> It should be a case of setting the HTTP_PROXY_HOST environment variable.
>> (I'm not sure why it's not just plain HTTP_PROXY - sorry).  See [1].
>>
>> Also note that you can predefine this and other environment variables
>> used at build time globally across the cluster if you like [2].
>>
>> [1] https://access.redhat.com/documentation/en/red-hat-xpaas/0/s
>> ingle/red-hat-xpaas-eap-image/#environment_variables_3
>> [2] https://docs.openshift.com/container-platform/3.3/install_co
>> nfig/build_defaults_overrides.html
>>
>> Cheers,
>>
>> Jim
>>
>> --
>> Jim Minter
>> Principal Software Engineer, Red Hat UK
>>
>>
>> On 13/10/16 08:27, Lionel Orellana wrote:
>>
>>> Hi
>>>
>>> I'm trying to run the jboss-eap64-openshift quickstart but the build is
>>> failing to download from maven central.
>>>
>>> [ERROR] Non-resolvable import POM: Could not transfer artifact
>>> org.jboss.bom.eap:jboss-javaee-6.0-with-tools:pom:6.4.0.GA
>>>  from/to central
>>> (https://repo1.maven.org/maven2):Connection to https://repo1.maven.org
>>> refused @ line 71, column 25
>>>
>>> I had no problems with the Wildfly quickstarts.
>>>
>>> I tried setting proxyHost and proxyPort in MAVEN_OPTS but that did
>>> nothing.
>>>
>>> Do I really have to clone the repo and modify the maven settings.xml
>>> file to run the quickstart? How come Wildfly works?
>>>
>>> Thanks
>>>
>>>
>>>
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Ben Parees | OpenShift
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: jboss-eap64-openshift quickstart maven proxy

2016-10-13 Thread Lionel Orellana
Thanks Jim.

It worked by setting

HTTP_PROXY_HOST=proxy.server.name

and

HTTP_PROXY_PORT=port
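
For anyone else hitting this, a minimal sketch of passing these to a
build (assuming an s2i BuildConfig named eap-app; the port value here is
a placeholder):

oc env bc/eap-app HTTP_PROXY_HOST=proxy.server.name HTTP_PROXY_PORT=8080
oc start-build eap-app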


Are these supposed to be set globally by the Ansible scripts? I have
openshift_http_proxy, openshift_https_proxy, openshift_no_proxy and
openshift_generate_no_proxy_hosts in my inventory file.

On 13 October 2016 at 19:25, Jim Minter  wrote:

> Hi Lionel,
>
> It should be a case of setting the HTTP_PROXY_HOST environment variable.
> (I'm not sure why it's not just plain HTTP_PROXY - sorry).  See [1].
>
> Also note that you can predefine this and other environment variables used
> at build time globally across the cluster if you like [2].
>
> [1] https://access.redhat.com/documentation/en/red-hat-xpaas/0/
> single/red-hat-xpaas-eap-image/#environment_variables_3
> [2] https://docs.openshift.com/container-platform/3.3/install_
> config/build_defaults_overrides.html
>
> Cheers,
>
> Jim
>
> --
> Jim Minter
> Principal Software Engineer, Red Hat UK
>
>
> On 13/10/16 08:27, Lionel Orellana wrote:
>
>> Hi
>>
>> I'm trying to run the jboss-eap64-openshift quickstart but the build is
>> failing to download from maven central.
>>
>> [ERROR] Non-resolvable import POM: Could not transfer artifact
>> org.jboss.bom.eap:jboss-javaee-6.0-with-tools:pom:6.4.0.GA
>>  from/to central
>> (https://repo1.maven.org/maven2):Connection to https://repo1.maven.org
>> refused @ line 71, column 25
>>
>> I had no problems with the Wildfly quickstarts.
>>
>> I tried setting proxyHost and proxyPort in MAVEN_OPTS but that did
>> nothing.
>>
>> Do I really have to clone the repo and modify the maven settings.xml
>> file to run the quickstart? How come Wildfly works?
>>
>> Thanks
>>
>>
>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Managing OpenShift Configuration with Puppet/Ansible… what are your best practices?

2016-10-13 Thread Jason DeTiberus
On Thu, Oct 13, 2016 at 2:53 PM, Rich Megginson  wrote:

> On 10/13/2016 07:52 AM, Philippe Lafoucrière wrote:
>
>> Just to clarify our need here:
>>
>> We want the projects' config inside a configuration tool. There's
>> currently nothing preventing someone from modifying the config of a
>> project (let's say, a DC), and no one will be notified of the change.
>>
>
> Do you mean, if someone does 'oc edit dc my-project-dc', you want to be
> able to sync those changes back to some config file, so that if you
> redeploy, it will use the changes you made when you did the 'oc edit'?
>


I believe he is looking to have the external config be the source of truth
in this case, which would be covered by the future Ansible module work (we
aren't looking to provide additional configuration management support
beyond Ansible, as far as I know).


>
> We're looking for something to keep track of changes,
>
>
It is possible to do this part currently using watch, either through the
API or through the command-line tooling.
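
For example, the command-line form could be something along these lines
(a sketch; assumes you are watching deploymentconfigs in the current
project):

# stream deploymentconfig changes as they happen
oc get dc --watch -o yaml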


> and make sure the config deployed is the config we have in our git repo.
>>
>
This is the trickier part, which the Ansible modules would help address.

--
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Satellite instead of subscription-manager PLEASE HELP (BLOCKED)

2016-10-13 Thread Jason DeTiberus
On Thu, Oct 13, 2016 at 4:48 PM, Dean Peterson 
wrote:

> Our machines use RHN Classic. If I try to run subscription-manager
> register, it says I am already registered with RHN Classic. However, this
> does not seem to be compatible with Docker and OpenShift. Operations wants
> to stick with RHN Classic and Satellite. Is this possible?
>

I don't think this is currently possible, the entitlement/subscription
mapping is done through a set of plugins that are specific to
subscription-manager. With RHN Classic approaching end of life (
https://access.redhat.com/rhn-to-rhsm) I don't really see that changing,
but you could always reach out to support to file a formal RFE.

--
Jason DeTiberus


>
> On Thu, Oct 13, 2016 at 3:29 PM, Kent Perrier  wrote:
>
>> subscription-manager is used to register your host to your local
>> satellite as well. How are you patching your hosts if they are not
>> registered?
>>
>> Kent
>>
>> On Thu, Oct 13, 2016 at 3:05 PM, Dean Peterson 
>> wrote:
>>
>>> Can anyone please help? We use Satellite for access to our software. We
>>> do not use subscription-manager. Unfortunately, when running docker builds,
>>> the containers cannot access the host's registries because they expect to
>>> access auto-attached subscription-manager subscriptions.
>>> How is OpenShift supposed to work with Satellite instead of
>>> subscription-manager?
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>>
>> --
>> Kent Perrier
>> Technical Account Manager
>>
>>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Does NFS PVC wipe existing data before attaching?

2016-10-13 Thread Seth Jennings
NFS mounts can be mounted directly into pods without being PVs like this:

volumes:
- name: shared
  nfs:
    server: <server>
    path: <path>

If you are using NFS PVs, then the persistentVolumeReclaimPolicy
determines if the data is wiped when the PVC is released.  The default
value is "Retain".  It will not delete the data unless you set it to
"Recycle".

https://docs.openshift.com/enterprise/3.0/admin_guide/persistent_storage_nfs.html#reclaiming-resources
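
For illustration, a minimal PV sketch with the policy spelled out (name,
size, server, and path are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com
    path: /exports/shared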

Hope that answers your question!

On Thu, Oct 13, 2016 at 10:19 AM, Dean Peterson  wrote:
> If I create a persistent volume claim using an NFS share that has existing
> data, will the data be wiped? Same thing with creating the persistent
> volume. Will the existing data be deleted? I want to make existing data
> accessible to multiple pods/containers in an NFS share. If I make a
> persistent volume pointing to that existing path, how do I get the
> persistent volume claim to access that existing path and make the containers
> with the claim see it?
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Managing OpenShift Configuration with Puppet/Ansible… what are your best practices?

2016-10-13 Thread Rich Megginson

On 10/13/2016 07:52 AM, Philippe Lafoucrière wrote:

Just to clarify our need here:

We want the projects' config inside a configuration tool. There's
currently nothing preventing someone from modifying the config of a
project (let's say, a DC), and no one will be notified of the change.


Do you mean, if someone does 'oc edit dc my-project-dc', you want to be 
able to sync those changes back to some config file, so that if you 
redeploy, it will use the changes you made when you did the 'oc edit'?


We're looking for something to keep track of changes, and make sure 
the config deployed is the config we have in our git repo.


Thanks



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Does NFS PVC wipe existing data before attaching?

2016-10-13 Thread Dean Peterson
If I create a persistent volume claim using an NFS share that has existing
data, will the data be wiped? Same thing with creating the persistent
volume. Will the existing data be deleted? I want to make existing data
accessible to multiple pods/containers in an NFS share. If I make a
persistent volume pointing to that existing path, how do I get the
persistent volume claim to access that existing path and make the
containers with the claim see it?
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Managing OpenShift Configuration with Puppet/Ansible… what are your best practices?

2016-10-13 Thread Philippe Lafoucrière
Thanks Clayton!
Really looking forward to seeing this released :)
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Managing OpenShift Configuration with Puppet/Ansible… what are your best practices?

2016-10-13 Thread Clayton Coleman
There are a number of lower level modules in use by the ansible tools
that are targeted at creating / updating config objects on OpenShift.

We've been discussing increasing and enhancing those tools to make it
even easier to manage openshift with ansible (for both platform tools
as well as for app delivery).  Jason DeTiberus and Kenny Woodson have
been heavily involved in several efforts in this direction.

> On Oct 13, 2016, at 9:54 AM, Philippe Lafoucrière 
>  wrote:
>
> Just to clarify our need here:
>
> We want the projects' config inside a configuration tool. There's currently
> nothing preventing someone from modifying the config of a project (let's
> say, a DC), and no one will be notified of the change.
> We're looking for something to keep track of changes, and make sure the 
> config deployed is the config we have in our git repo.
>
> Thanks
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Managing OpenShift Configuration with Puppet/Ansible… what are your best practices?

2016-10-13 Thread Philippe Lafoucrière
Just to clarify our need here:

We want the projects' config inside a configuration tool. There's currently
nothing preventing someone from modifying the config of a project (let's
say, a DC), and no one will be notified of the change.
We're looking for something to keep track of changes, and make sure the
config deployed is the config we have in our git repo.

Thanks

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Managing OpenShift Configuration with Puppet/Ansible… what are your best practices?

2016-10-13 Thread Stéphane Klein
2016-10-12 17:41 GMT+02:00 Alex Wauck :

> we do the actual OpenShift installation using openshift-ansible (which
> Rich Megginson mentioned)
>

Thanks, but my question isn't about OpenShift cluster installation and
upgrades.

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Managing OpenShift Configuration with Puppet/Ansible… what are your best practices?

2016-10-13 Thread Stéphane Klein
2016-10-12 17:10 GMT+02:00 Rich Megginson :

> On 10/12/2016 03:15 AM, Stéphane Klein wrote:
>>
>> * are there some Ansible or Puppet tools for OpenShift (I found nothing)?
>>
>
> https://github.com/openshift/openshift-ansible


I know that and I use it, but it's only for installing and upgrading an
OpenShift cluster.

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Container can ping external network but cannot telnet

2016-10-13 Thread David Strejc
I've experienced an issue with the following setup:

A hypervisor with 3 virtual machines installed.

On top of it runs OpenShift Origin v1.3.

When I create a container it can ping external IPs, but when I try to
telnet to an external service it times out.

Thank you for any advice.

David Strejc
https://octopussystems.cz
t: +420734270131
e: david.str...@gmail.com

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: jboss-eap64-openshift quickstart maven proxy

2016-10-13 Thread Jim Minter

Hi Lionel,

It should be a case of setting the HTTP_PROXY_HOST environment variable. 
 (I'm not sure why it's not just plain HTTP_PROXY - sorry).  See [1].


Also note that you can predefine this and other environment variables 
used at build time globally across the cluster if you like [2].


[1] 
https://access.redhat.com/documentation/en/red-hat-xpaas/0/single/red-hat-xpaas-eap-image/#environment_variables_3
[2] 
https://docs.openshift.com/container-platform/3.3/install_config/build_defaults_overrides.html
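
For example, [2] describes a BuildDefaults stanza in master-config.yaml
roughly like the following (a sketch; host and port are placeholders):

admissionConfig:
  pluginConfig:
    BuildDefaults:
      configuration:
        apiVersion: v1
        kind: BuildDefaultsConfig
        env:
        - name: HTTP_PROXY_HOST
          value: proxy.server.name
        - name: HTTP_PROXY_PORT
          value: "8080"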


Cheers,

Jim

--
Jim Minter
Principal Software Engineer, Red Hat UK


On 13/10/16 08:27, Lionel Orellana wrote:

Hi

I'm trying to run the jboss-eap64-openshift quickstart but the build is
failing to download from maven central.

[ERROR] Non-resolvable import POM: Could not transfer artifact
org.jboss.bom.eap:jboss-javaee-6.0-with-tools:pom:6.4.0.GA
 from/to central
(https://repo1.maven.org/maven2):Connection to https://repo1.maven.org
refused @ line 71, column 25

I had no problems with the Wildfly quickstarts.

I tried setting proxyHost and proxyPort in MAVEN_OPTS but that did nothing.

Do I really have to clone the repo and modify the maven settings.xml
file to run the quickstart? How come Wildfly works?

Thanks




___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Complete cluster meltdown due to "Kubelet stopped posting node status"

2016-10-13 Thread v

Hello,

It seems that manual intervention (logging on and evacuating the node) is the
price we have to pay if we don't want our master to wreak havoc in our cluster
when it has connectivity problems.

Maybe this whole mechanism could be built in a more defensive way. What is
missing for us is an option to just re-create the pods that were on that node
somewhere else if the node can't be reached for 5 minutes, and only evacuate
the node after, say, 4 hours. The node might still be working properly and
serving requests; it might just not be reachable by the master, as was the
case for us.

Such an option would be great to have, because all our services are built in a 
way that they are allowed to exist multiple times in the network.

Best Regards & thanks for your support Clayton!
v


On 2016-10-12 at 17:44, Clayton Coleman wrote:

Yeah, if you make this change you'll be responsible for triggering evacuation of down 
nodes.  You can do that via "oadm manage-node NODE_NAME --evacuate"

On Mon, Oct 10, 2016 at 8:06 AM, v wrote:

Hello Clayton,

thank you for replying!
I'm not sure whether changing the node failure detection threshold is the
right way to go. I have found this:


https://docs.openshift.com/enterprise/3.1/install_config/master_node_configuration.html
 

masterIP: 10.0.2.15
podEvictionTimeout: 5m
schedulerConfigFile: ""

I think that podEvictionTimeout is the thing that bit us. After changing
that to "24h" I don't see any "Evicting pods on node" or "Recording
Deleting all Pods from Node" messages in the master logs any more.

Regards
v

On 2016-10-10 at 15:21, Clayton Coleman wrote:

Network segmentation mode is in 1.3.  In 1.1 or 1.2 you can also
increase the node failure detection threshold (80s by default) as high
as you want by setting the extended controller argument for it, which
will delay evictions (you could set 24h and use external tooling to
handle node down).
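
(A sketch of what that could look like in master-config.yaml, assuming
the controller manager's node-monitor-grace-period flag is the one meant
here:)

kubernetesMasterConfig:
  controllerArguments:
    node-monitor-grace-period:
    - "24h"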

If you are concerned about external traffic causing DDoS, add a proxy
configuration for your masters that rate limits traffic by cookie or
source ip.




On Oct 10, 2016, at 2:56 AM, v   
wrote:

Hello,

we just had our whole Openshift cluster go down hard due to a "feature" in 
the Openshift master that deletes all pods from a node if the node doesn't report back to 
the master on a regular basis.

Turns out we're not the only ones who have been bitten by this "feature":
https://github.com/kubernetes/kubernetes/issues/30972#issuecomment-241077740 

https://github.com/kubernetes/kubernetes/issues/24200 


I am writing here to find out whether it is possible to disable this 
feature completely. We don't need it and we don't want our master to ever do 
something like that again.

Please note how easily this feature can be abused: At the moment anyone can 
bring down your whole Openshift cluster just by DDoSing the master(s) for a few 
minutes.







The logs (they were the same for all nodes):
Okt 09 21:47:10 openshiftmaster.com origin-master[919215]: I1004 21:47:10.804666  919215 nodecontroller.go:697] node openshiftnode.com hasn't been updated for 5m17.169004459s. Last out of disk condition is: &{Type:OutOfDisk Status:Unknown LastHeartbeatTime:2016-10-04 21:41:53 +0200 CEST LastTransitionTime:2016-10-04 21:42:33 +0200 CEST Reason:NodeStatusUnknown Message:Kubelet stopped posting node status.}
Okt 09 21:47:10 openshiftmaster.com origin-master[919215]: I1004 21:47:10.804742  919215 nodecontroller.go:451] Evicting pods on node openshiftnode.com: 2016-10-04 21:47:10.80472667 +0200 CEST is later than 2016-10-04 21:42:33.779813315 +0200 CEST + 4m20s
Okt 09 21:47:10 openshiftmaster.com origin-master[919215]: I1004 21:47:10.945766  919215 nodecontroller.go:540] Recording Deleting all Pods from Node openshiftnode.com. event message for node openshiftnode.com


Regards
v

___
users mailing list
users@lists.openshift.redhat.com 
http://lists.openshift.redhat.com/openshiftmm/listinfo/users 


___
users mailing list
users@lists.openshift.redhat.com 
http://lists.openshift.redhat.com/openshiftmm/listinfo/users