Re: PV manual reclamation and recycling

2016-11-17 Thread Lionel Orellana
Files in other directories on the same NFS server don't get deleted (e.g. /poc_runtime/test/).

There is something on my OpenShift node deleting files in /poc_runtime/evs as soon as I put them there!

On 18 November 2016 at 18:04, Lionel Orellana  wrote:

>
> In fact, whatever is deleting my files is still doing it:
>
> [root@poc-docker03 evs]# touch x
> [root@poc-docker03 evs]# ls
> [root@poc-docker03 evs]#
>
> evs is a path on an NFS volume that I have added directly to some
> deployment configs
>
>  -
>   name: evs
>   nfs:
> server: 
> path: /poc_runtime/evs
>
> If I stop the origin-node service on one particular node the file doesn't
> disappear.
>
> [root@poc-docker03 evs]# touch x
> [root@poc-docker03 evs]# ls
> x
> [root@poc-docker03 evs]#
>
> When I restart the origin-node service I see a lot of errors like this
>
>  Failed cleaning pods: [remove /var/lib/origin/openshift.
> local.volumes/pods/1b7e3a16-ab08-11e6-8618-005056915814/volumes/
> kubernetes.io~nfs device or resource bus
>  Failed to remove orphaned pod x dir; err: remove
> /var/lib/origin/openshift.local.volumes/pods//volumes/kubernetes.io
> ~nfs/*evs*: device or resource bus
>
> Despite the fact that the error says that it couldn't remove it, what
> exactly is it trying to do here? Is it possible that this process
> previously deleted the data in the evs folder?
>
>
>
>
> On 18 November 2016 at 16:45, Lionel Orellana  wrote:
>
>> What about NFS volumes added directly in build configs.
>>
>> volumes:
>> -
>>   name: jenkins-volume-1
>>   nfs:
>> server: 
>> path: /poc_runtime/jenkins/home
>>
>>
>> We just restarted all the servers hosting my OpenShift cluster and all
>> the data in the path above disappeared. Simply by restarting the host VM!
>>
>>
>>
>> On 18 November 2016 at 16:19, Lionel Orellana  wrote:
>>
>>> Thanks Mark
>>>
>>> On 18 November 2016 at 15:09, Mark Turansky  wrote:
>>>


 On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
 wrote:

> Hi,
>
> Couple of questions regarding Persistent Volumes, in particular NFS
> ones.
>
> 1) If I have a PV configured with the Retain policy it is not clear to
> me how this PV can be reused after the bound PVC is deleted. Deleting the
> PVC makes the PV status "Released". How do I make it "Available" again
> without losing the data?
>

 You can keep the PVC around longer if you intend to reuse it between
 pods. There is no way for a PV to go from Released to Available again in
 your scenario. You would have to delete and recreate the PV. It's a pointer
 to real storage (the NFS share), so you're just recreating the pointer. The
 data in the NFS volume itself is untouched.
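
 A minimal sketch of the delete/recreate step described above, assuming a
 PV named pv-evs backed by the same NFS share (the name and server below
 are placeholders, not taken from this thread):

   # pv-evs.yaml
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: pv-evs
   spec:
     capacity:
       storage: 10Gi
     accessModes:
     - ReadWriteMany
     persistentVolumeReclaimPolicy: Retain
     nfs:
       server: nfs.example.com
       path: /poc_runtime/evs

   # drop the Released PV object and recreate the pointer;
   # the data on the NFS share itself is not touched
   oc delete pv pv-evs
   oc create -f pv-evs.yaml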



>
> 2) Is there anything (e.g. all nodes crashing due to some underlying
> infrastructure failure) that would cause the data in a "Retain" volume to
> be wiped out? We had a problem with all our vmware servers  (where I host
> my openshift POC)  and all my NFS mounted volumes were wiped out. The
> storage guys assure me that nothing at their end caused that and it must
> have been a running process that did it.
>

 "Retain" is just a flag to the recycling process to leave that PV alone
 when it's Released. The PV's retention policy wouldn't cause everything to
 be deleted. NFS volumes on the node are no different than if you called
 "mount" yourself. There is nothing inherent in OpenShift itself that is
 running in that share that would wipe out data.



>
> Thanks
>
> Lionel.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>

>>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: PV manual reclamation and recycling

2016-11-17 Thread Lionel Orellana
In fact, whatever is deleting my files is still doing it:

[root@poc-docker03 evs]# touch x
[root@poc-docker03 evs]# ls
[root@poc-docker03 evs]#

evs is a path on an NFS volume that I have added directly to some
deployment configs

 -
  name: evs
  nfs:
    server: 
    path: /poc_runtime/evs
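
For context, an inline volume like this is normally paired with a
volumeMount in the container spec. A rough sketch (the container name,
mount path and server are assumptions, not taken from the actual config):

  spec:
    containers:
    - name: app
      volumeMounts:
      - name: evs
        mountPath: /opt/evs
    volumes:
    - name: evs
      nfs:
        server: nfs.example.com   # placeholder
        path: /poc_runtime/evs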

If I stop the origin-node service on one particular node the file doesn't
disappear.

[root@poc-docker03 evs]# touch x
[root@poc-docker03 evs]# ls
x
[root@poc-docker03 evs]#

When I restart the origin-node service I see a lot of errors like this

 Failed cleaning pods: [remove
/var/lib/origin/openshift.local.volumes/pods/1b7e3a16-ab08-11e6-8618-005056915814/volumes/
kubernetes.io~nfs device or resource bus
 Failed to remove orphaned pod x dir; err: remove
/var/lib/origin/openshift.local.volumes/pods//volumes/kubernetes.io~nfs/
*evs*: device or resource bus

The error says it couldn't remove it, but what exactly is it trying to do
here? Is it possible that this process previously deleted the data in the
evs folder?
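
Those "device or resource busy" messages suggest the share is still mounted
under the pod volume directory. A quick way to check that on the node
(paths taken from the errors above) is something like:

  mount | grep 'kubernetes.io~nfs'
  findmnt -t nfs,nfs4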




On 18 November 2016 at 16:45, Lionel Orellana  wrote:

> What about NFS volumes added directly in build configs.
>
> volumes:
> -
>   name: jenkins-volume-1
>   nfs:
> server: 
> path: /poc_runtime/jenkins/home
>
>
> We just restarted all the servers hosting my OpenShift cluster and all the
> data in the path above disappeared. Simply by restarting the host VM!
>
>
>
> On 18 November 2016 at 16:19, Lionel Orellana  wrote:
>
>> Thanks Mark
>>
>> On 18 November 2016 at 15:09, Mark Turansky  wrote:
>>
>>>
>>>
>>> On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
>>> wrote:
>>>
 Hi,

 Couple of questions regarding Persistent Volumes, in particular NFS
 ones.

 1) If I have a PV configured with the Retain policy it is not clear to
 me how this PV can be reused after the bound PVC is deleted. Deleting the
 PVC makes the PV status "Released". How do I make it "Available" again
 without losing the data?

>>>
>>> You can keep the PVC around longer if you intend to reuse it between
>>> pods. There is no way for a PV to go from Released to Available again in
>>> your scenario. You would have to delete and recreate the PV. It's a pointer
>>> to real storage (the NFS share), so you're just recreating the pointer. The
>>> data in the NFS volume itself is untouched.
>>>
>>>
>>>

 2) Is there anything (e.g. all nodes crashing due to some underlying
 infrastructure failure) that would cause the data in a "Retain" volume to
 be wiped out? We had a problem with all our vmware servers  (where I host
 my openshift POC)  and all my NFS mounted volumes were wiped out. The
 storage guys assure me that nothing at their end caused that and it must
 have been a running process that did it.

>>>
>>> "Retain" is just a flag to the recycling process to leave that PV alone
>>> when it's Released. The PV's retention policy wouldn't cause everything to
>>> be deleted. NFS volumes on the node are no different than if you called
>>> "mount" yourself. There is nothing inherent in OpenShift itself that is
>>> running in that share that would wipe out data.
>>>
>>>
>>>

 Thanks

 Lionel.

 ___
 users mailing list
 users@lists.openshift.redhat.com
 http://lists.openshift.redhat.com/openshiftmm/listinfo/users


>>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: PV manual reclamation and recycling

2016-11-17 Thread Lionel Orellana
What about NFS volumes added directly in build configs?

volumes:
-
  name: jenkins-volume-1
  nfs:
    server: 
    path: /poc_runtime/jenkins/home


We just restarted all the servers hosting my OpenShift cluster and all the
data in the path above disappeared. Simply by restarting the host VM!



On 18 November 2016 at 16:19, Lionel Orellana  wrote:

> Thanks Mark
>
> On 18 November 2016 at 15:09, Mark Turansky  wrote:
>
>>
>>
>> On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
>> wrote:
>>
>>> Hi,
>>>
>>> Couple of questions regarding Persistent Volumes, in particular NFS
>>> ones.
>>>
>>> 1) If I have a PV configured with the Retain policy it is not clear to
>>> me how this PV can be reused after the bound PVC is deleted. Deleting the
>>> PVC makes the PV status "Released". How do I make it "Available" again
>>> without losing the data?
>>>
>>
>> You can keep the PVC around longer if you intend to reuse it between
>> pods. There is no way for a PV to go from Released to Available again in
>> your scenario. You would have to delete and recreate the PV. It's a pointer
>> to real storage (the NFS share), so you're just recreating the pointer. The
>> data in the NFS volume itself is untouched.
>>
>>
>>
>>>
>>> 2) Is there anything (e.g. all nodes crashing due to some underlying
>>> infrastructure failure) that would cause the data in a "Retain" volume to
>>> be wiped out? We had a problem with all our vmware servers  (where I host
>>> my openshift POC)  and all my NFS mounted volumes were wiped out. The
>>> storage guys assure me that nothing at their end caused that and it must
>>> have been a running process that did it.
>>>
>>
>> "Retain" is just a flag to the recycling process to leave that PV alone
>> when it's Released. The PV's retention policy wouldn't cause everything to
>> be deleted. NFS volumes on the node are no different than if you called
>> "mount" yourself. There is nothing inherent in OpenShift itself that is
>> running in that share that would wipe out data.
>>
>>
>>
>>>
>>> Thanks
>>>
>>> Lionel.
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


The API version v1 for kind BuildConfig is not supported by this server.

2016-11-17 Thread irvan hendrik
Hi,
I am completely new to OpenShift and Docker. I set up OpenShift with
1 master, 2 nodes and 1 registry. I followed the documentation and I think I
got it right, but I keep getting this error in my OpenShift web console.



I am the cluster admin with access to the master and nodes.
Is there a configuration that I missed when setting up the OpenShift
Container Platform?

Any help would be extremely appreciated.
Thank you.
Irvan Hendrik
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: PV manual reclamation and recycling

2016-11-17 Thread Lionel Orellana
Thanks Mark

On 18 November 2016 at 15:09, Mark Turansky  wrote:

>
>
> On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
> wrote:
>
>> Hi,
>>
>> Couple of questions regarding Persistent Volumes, in particular NFS ones.
>>
>> 1) If I have a PV configured with the Retain policy it is not clear to me
>> how this PV can be reused after the bound PVC is deleted. Deleting the PVC
>> makes the PV status "Released". How do I make it "Available" again without
>> losing the data?
>>
>
> You can keep the PVC around longer if you intend to reuse it between pods.
> There is no way for a PV to go from Released to Available again in your
> scenario. You would have to delete and recreate the PV. It's a pointer to
> real storage (the NFS share), so you're just recreating the pointer. The
> data in the NFS volume itself is untouched.
>
>
>
>>
>> 2) Is there anything (e.g. all nodes crashing due to some underlying
>> infrastructure failure) that would cause the data in a "Retain" volume to
>> be wiped out? We had a problem with all our vmware servers  (where I host
>> my openshift POC)  and all my NFS mounted volumes were wiped out. The
>> storage guys assure me that nothing at their end caused that and it must
>> have been a running process that did it.
>>
>
> "Retain" is just a flag to the recycling process to leave that PV alone
> when it's Released. The PV's retention policy wouldn't cause everything to
> be deleted. NFS volumes on the node are no different than if you called
> "mount" yourself. There is nothing inherent in OpenShift itself that is
> running in that share that would wipe out data.
>
>
>
>>
>> Thanks
>>
>> Lionel.
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


PV manual reclamation and recycling

2016-11-17 Thread Lionel Orellana
Hi,

Couple of questions regarding Persistent Volumes, in particular NFS ones.

1) If I have a PV configured with the Retain policy it is not clear to me
how this PV can be reused after the bound PVC is deleted. Deleting the PVC
makes the PV status "Released". How do I make it "Available" again without
losing the data?

2) Is there anything (e.g. all nodes crashing due to some underlying
infrastructure failure) that would cause the data in a "Retain" volume to
be wiped out? We had a problem with all our VMware servers (where I host
my OpenShift POC) and all my NFS-mounted volumes were wiped out. The
storage guys assure me that nothing at their end caused that and it must
have been a running process that did it.

Thanks

Lionel.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: JBoss cluster

2016-11-17 Thread Lionel Orellana
I deployed a distributable war and got the output I was looking for. All
good with the world. Thanks.

On 18 November 2016 at 08:29, Lionel Orellana  wrote:

> But I can't tell if the second replica joined the cluster created by the
> first.
>
> I'm expecting to see "Number of cluster members: x" in the logs but it's
> not showing in any of the two instances.
>
> On 17 November 2016 at 22:55, Lionel Orellana  wrote:
>
>> Thanks Frederick. I have done that.
>>
>> On Thu., 17 Nov. 2016 at 10:48 pm, Frederic Giloux 
>> wrote:
>>
>>> Hi Lionel,
>>>
>>> JBoss on OpenShift uses JGroups Kube_ping to discover the other members
>>> of the cluster. This actually calls the Kubernetes REST API. Therefore you
>>> need to have the proper rights allocated to your service account. More
>>> information here: https://access.redhat.com/docu
>>> mentation/en/red-hat-xpaas/0/single/red-hat-xpaas-eap-image/#clustering
>>>
>>> Best Regards,
>>>
>>> Frédéric
>>>
>>>
>>> On Thu, Nov 17, 2016 at 11:58 AM, Lionel Orellana 
>>> wrote:
>>>
>>> Hi,
>>>
>>> I'm trying to run Jboss cluster using the eap64-basic-s2i v1.3.2
>>> template on Origin 1.3.
>>>
>>> The application built and deployed fine. The second one started fine
>>> but I can't tell if it joined the existing cluster. I was expecting to see
>>> an output along the lines of "number of members in the cluster: 2" which I
>>> have seen in the past with Jboss.
>>>
>>> There is a warning at the beginning of the startup logs:
>>>
>>> "WARNING: Service account unable to test permissions to view pods in
>>> kubernetes (HTTP 000). Clustering will be unavailable. Please refer to the
>>> documentation for configuration."
>>>
>>> But looking at the ha.sh launch script this is just
>>> the check_view_pods_permission function using curl without --no-proxy. As
>>> far as I can tell this doesn't affect anything.
>>>
>>> So what do I need to look for to confirm that the second instance joined
>>> the cluster correctly? I am concerned that the proxy might be getting in
>>> the way somewhere else.
>>>
>>> Thanks
>>>
>>> Lionel
>>>
>>>
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>>
>>>
>>> --
>>> *Frédéric Giloux*
>>> Senior Middleware Consultant
>>>
>>> Red Hat GmbH
>>> MesseTurm, Friedrich-Ebert-Anlage 49, 60308 Frankfurt am Main
>>>
>>> Mobile: +49 (0) 174 1724661 
>>> E-Mail: fgil...@redhat.com, http://www.redhat.de/
>>>
>>> Delivering value year after year
>>> Red Hat ranks # 1 in value among software vendors
>>> http://www.redhat.com/promo/vendor/
>>>
>>> Freedom...Courage...Commitment...Accountability
>>> 
>>>
>>> Red Hat GmbH, http://www.de.redhat.com/ Sitz: Grasbrunn,
>>> Handelsregister: Amtsgericht München, HRB 153243
>>> Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham,
>>> Michael O'Neill
>>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: error during install: subnet id does not exist

2016-11-17 Thread Ravi


It only asks for the VPC subnet ID. I have set that and that is what is giving 
trouble. There is no place to put the VPC ID itself.



On 11/17/2016 1:06 PM, Alex Wauck wrote:


On Thu, Nov 17, 2016 at 2:52 PM, Ravi Kapoor > wrote:

The instructions do not ask for availability zone or VPC. They only
ask for a subnet and I have specified that.
Maybe it is picking some other VPC where the subnet is not available.


Have you read this?
 https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md

According to that document, you are supposed to specify a whole bunch of
stuff in environment variables, including the VPC.  I've never tried it
myself, so I'm not sure how well it works.


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: JBoss cluster

2016-11-17 Thread Lionel Orellana
But I can't tell if the second replica joined the cluster created by the
first.

I'm expecting to see "Number of cluster members: x" in the logs but it's
not showing in any of the two instances.

On 17 November 2016 at 22:55, Lionel Orellana  wrote:

> Thanks Frederick. I have done that.
>
> On Thu., 17 Nov. 2016 at 10:48 pm, Frederic Giloux 
> wrote:
>
>> Hi Lionel,
>>
>> JBoss on OpenShift uses JGroups Kube_ping to discover the other members
>> of the cluster. This actually calls the Kubernetes REST API. Therefore you
>> need to have the proper rights allocated to your service account. More
>> information here: https://access.redhat.com/documentation/en/red-hat-
>> xpaas/0/single/red-hat-xpaas-eap-image/#clustering
>>
>> Best Regards,
>>
>> Frédéric
>>
>>
>> On Thu, Nov 17, 2016 at 11:58 AM, Lionel Orellana 
>> wrote:
>>
>> Hi,
>>
>> I'm trying to run Jboss cluster using the eap64-basic-s2i v1.3.2 template
>> on Origin 1.3.
>>
>> The application built and deployed fine. The second one started fine but
>> I can't tell if it joined the existing cluster. I was expecting to see an
>> output along the lines of "number of members in the cluster: 2" which I
>> have seen in the past with Jboss.
>>
>> There is a warning at the beginning of the startup logs:
>>
>> "WARNING: Service account unable to test permissions to view pods in
>> kubernetes (HTTP 000). Clustering will be unavailable. Please refer to the
>> documentation for configuration."
>>
>> But looking at the ha.sh launch script this is just
>> the check_view_pods_permission function using curl without --no-proxy. As
>> far as I can tell this doesn't affect anything.
>>
>> So what do I need to look for to confirm that the second instance joined
>> the cluster correctly? I am concerned that the proxy might be getting in
>> the way somewhere else.
>>
>> Thanks
>>
>> Lionel
>>
>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>>
>>
>> --
>> *Frédéric Giloux*
>> Senior Middleware Consultant
>>
>> Red Hat GmbH
>> MesseTurm, Friedrich-Ebert-Anlage 49, 60308 Frankfurt am Main
>>
>> Mobile: +49 (0) 174 1724661 
>> E-Mail: fgil...@redhat.com, http://www.redhat.de/
>>
>> Delivering value year after year
>> Red Hat ranks # 1 in value among software vendors
>> http://www.redhat.com/promo/vendor/
>>
>> Freedom...Courage...Commitment...Accountability
>> 
>> Red Hat GmbH, http://www.de.redhat.com/ Sitz: Grasbrunn,
>> Handelsregister: Amtsgericht München, HRB 153243
>> Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham,
>> Michael O'Neill
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: error during install: subnet id does not exist

2016-11-17 Thread Alex Wauck
On Thu, Nov 17, 2016 at 2:52 PM, Ravi Kapoor 
wrote:

> The instructions do not ask for availability zone or VPC. They only ask
> for a subnet and I have specified that.
> Maybe it is picking some other VPC where the subnet is not available.
>

Have you read this?
https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md

According to that document, you are supposed to specify a whole bunch of
stuff in environment variables, including the VPC.  I've never tried it
myself, so I'm not sure how well it works.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: error during install: subnet id does not exist

2016-11-17 Thread Ravi Kapoor
> Are you using openshift-ansible's AWS support to create EC2 instances for
you? We create our instances by other means and then run openshift-ansible
on them using the BYO playbooks,
I am not opposed to it; it's just that I am a beginner trying to get something
up and running. I can create instances manually and run ansible on them, but
I am not able to find instructions.
OpenShift's "advanced install" instructions are way too advanced.

I have a single-node OpenShift working, but to add a node the instructions
only point to ansible (oadm does not have a command). So I am thinking the
fastest way to a working cluster (with the ability to add nodes) is to use
ansible, hence this path.
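
For what it's worth, the BYO path mentioned above just needs an inventory
describing the existing hosts. A minimal sketch (hostnames are placeholders
and variable names can differ between releases):

  [OSEv3:children]
  masters
  nodes

  [OSEv3:vars]
  ansible_ssh_user=root
  deployment_type=origin

  [masters]
  master.example.com

  [nodes]
  master.example.com openshift_schedulable=false
  node01.example.com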

> Do you have the availability zone or VPC set in your inventory file?  If
so, does it match the subnet you specified?
The instructions do not ask for availability zone or VPC. They only ask for
a subnet and I have specified that.
Maybe it is picking some other VPC where the subnet is not available.





On Thu, Nov 17, 2016 at 11:19 AM, Alex Wauck  wrote:

>
>
> On Thu, Nov 17, 2016 at 12:09 PM, Ravi Kapoor 
> wrote:
>
> Question1: Is this best way to install? So far I have been using "oc
>> cluster up" while it works it crashes once in a while (at least UI crashes,
>> so I am forced to restart it which kills all pods)
>>
>
> We used openshift-ansible to install our OpenShift cluster, and we fairly
> regularly use it to create temporary clusters for testing purposes.  I
> would consider it the best way to install.
>
>
>> Question2:
>> After I did all the configurations, my install still fails with following
>> error:
>>
>> exception occurred during task execution. To see the full traceback, use
>> -vvv. The error was: <Errors><Error><Code>InvalidSubnetID.NotFound</Code>
>> <Message>The subnet ID 'subnet-c7372dfd' does not exist</Message></Error>
>> </Errors><RequestID>2b4d4256-7204-4ced-9af3-318d86a759f0</RequestID>
>>
>
> Are you using openshift-ansible's AWS support to create EC2 instances for
> you?  We create our instances by other means and then run openshift-ansible
> on them using the BYO playbooks, so I'm not familiar with
> openshift-ansible's AWS support.  Do you have the availability zone or VPC
> set in your inventory file?  If so, does it match the subnet you specified?
>
>
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com *
>
> Making Machines More Human.
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Ansible installation OpenShift origin 1.2.0 failed

2016-11-17 Thread Andrew Butcher
You're right, there's no ansible-2.1 package in EPEL. The latest ansible
will work with openshift-ansible's release-1.2 branch where we've fixed
these templating issues. I'd recommend using that if you can.

Be sure to set the following inventory variables to get the right packages
if you go the release-1.2 branch route.

openshift_release=1.2
openshift_pkg_version=-1.2.1-1.el7
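
Roughly, that means checking out the branch and putting the two variables
in the [OSEv3:vars] section of the inventory (a sketch, assuming an
otherwise working inventory):

  git clone https://github.com/openshift/openshift-ansible
  cd openshift-ansible && git checkout release-1.2

  # inventory
  [OSEv3:vars]
  openshift_release=1.2
  openshift_pkg_version=-1.2.1-1.el7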

On Thu, Nov 17, 2016 at 1:04 PM, Den Cowboy  wrote:

> Seems not to help. If you list the possible packages it only shows the
> newest:
>
>
> yum -y --enablerepo=epel --showduplicates list ansible
> shows only ansible.noarch 2.2.0.0-3.el7 epel
> --
> *From:* users-boun...@lists.openshift.redhat.com  openshift.redhat.com> on behalf of Rich Megginson 
> *Sent:* Thursday, 17 November 2016 17:54:26
> *To:* users@lists.openshift.redhat.com
> *Subject:* Re: Ansible installation OpenShift origin 1.2.0 failed
>
> On 11/17/2016 10:36 AM, Den Cowboy wrote:
> >
> > Thanks. This could probably be the issue.
> >
> >
> > # yum -y --enablerepo=epel --showduplicates list ansible
> > Failed to set locale, defaulting to C
> > Loaded plugins: fastestmirror
> > Loading mirror speeds from cached hostfile
> >  * base: mirror2.hs-esslingen.de
> >  * epel: epel.mirrors.ovh.net
> >  * extras: it.centos.contactlab.it
> >  * updates: mirror.netcologne.de
> > Available Packages
> > ansible.noarch 2.2.0.0-3.el7 epel
> >
> >
> > I always installed 2.2 at the moment. Is there a way to install
> > 2.1.0.0-1.el7 using yum?
> >
>
> You could try to yum downgrade ansible and see if that gets you an older
> version.
>
> >
> > I found this website:
> > https://www.rpmfind.net/linux/rpm2html/search.php?query=ansible but
> > I'm not really familiar with rpm
> >
> > RPM resource ansible - Rpmfind mirror
> > 
> > www.rpmfind.net
> > RPM resource ansible. Ansible is a radically simple model-driven
> > configuration management, multi-node deployment, and remote task
> > execution system.
> >
> >
> > 
> > *From:* Andrew Butcher 
> > *Sent:* Thursday, 17 November 2016 15:49:50
> > *To:* Den Cowboy
> > *CC:* users@lists.openshift.redhat.com
> > *Subject:* Re: Ansible installation OpenShift origin 1.2.0 failed
> > Hey,
> >
> > What version of ansible are you using? There is an untemplated
> > with_items for g_all_hosts | default([]) which isn't being
> > interpreted. Untemplated with_items would have been okay with previous
> > ansible versions but not with the latest 2.2 packages.
> >
> > Like this line but without the jinja template wrapping "{{ }}".
> >
> > https://github.com/openshift/openshift-ansible/blob/
> cd922e0f4a1370118c0e2fd60230a68d74b47095/playbooks/byo/
> openshift-cluster/config.yml#L16
> >
> > On Thu, Nov 17, 2016 at 10:30 AM, Den Cowboy  > >> wrote:
> >
> > Hi,
> >
> >
> > I forked the repo of openshift when it was version 1.2.0.
> >
> > Now I did all the prerequisites and I was able to ssh from my
> > master to itself and to every node (using the names I specified in
> > /etc/hosts).
> >
> > I created my hosts file and I start the installation but it ends
> > pretty quick with this error. I don't understand why. I have some
> > experience with installing version 1.2.0 with ansible.
> >
> >
> > TASK [Evaluate oo_nodes_to_config]
> > *
> > changed: [localhost] => (item=master.xxx.com  >)
> > changed: [localhost] => (item=node01.xxx.com  >)
> > changed: [localhost] => (item=node02.xxx.com  >)
> > changed: [localhost] => (item=node03.xxx.com  >)
> > changed: [localhost] => (item=node04.xxx.com  >)
> >
> > TASK [Evaluate oo_nodes_to_config]
> > *
> > skipping: [localhost] => (item=master.xxx.com  >)
> >
> > TASK [Evaluate oo_first_etcd]
> > **
> > changed: [localhost]
> >
> > TASK [Evaluate oo_first_master]
> > 
> > changed: [localhost]
> >
> > TASK [Evaluate oo_lb_to_config]
> > 
> >
> > TASK [Evaluate oo_nfs_to_config]
> > ***
> >
> > PLAY [Initialize host facts]
> > ***
> >
> > TASK [setup]
> > ***
> > fatal: [g_all_hosts | default([])]: UNREACHABLE! => {"changed":
> > 

Re: error during install: subnet id does not exist

2016-11-17 Thread Alex Wauck
On Thu, Nov 17, 2016 at 12:09 PM, Ravi Kapoor 
wrote:

Question1: Is this best way to install? So far I have been using "oc
> cluster up" while it works it crashes once in a while (at least UI crashes,
> so I am forced to restart it which kills all pods)
>

We used openshift-ansible to install our OpenShift cluster, and we fairly
regularly use it to create temporary clusters for testing purposes.  I
would consider it the best way to install.


> Question2:
> After I did all the configurations, my install still fails with following
> error:
>
> exception occurred during task execution. To see the full traceback, use
> -vvv. The error was: <Errors><Error><Code>InvalidSubnetID.NotFound</Code>
> <Message>The subnet ID 'subnet-c7372dfd' does not exist</Message></Error>
> </Errors><RequestID>2b4d4256-7204-4ced-9af3-318d86a759f0</RequestID>
>

Are you using openshift-ansible's AWS support to create EC2 instances for
you?  We create our instances by other means and then run openshift-ansible
on them using the BYO playbooks, so I'm not familiar with
openshift-ansible's AWS support.  Do you have the availability zone or VPC
set in your inventory file?  If so, does it match the subnet you specified?



-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


error during install: subnet id does not exist

2016-11-17 Thread Ravi Kapoor
I am trying to install openshift using instructions at
https://github.com/openshift/openshift-ansible
Question1: Is this best way to install? So far I have been using "oc
cluster up" while it works it crashes once in a while (at least UI crashes,
so I am forced to restart it which kills all pods)


Question2:
After I did all the configurations, my install still fails with following
error:

exception occurred during task execution. To see the full traceback, use
-vvv. The error was:
<Errors><Error><Code>InvalidSubnetID.NotFound</Code><Message>The
subnet ID 'subnet-c7372dfd' does not
exist</Message></Error></Errors><RequestID>2b4d4256-7204-4ced-9af3-318d86a759f0</RequestID>


The subnet id is correct, here is a screenshot.
[image: Inline image 1]

Thanks for any help.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Ansible installation OpenShift origin 1.2.0 failed

2016-11-17 Thread Rich Megginson

On 11/17/2016 10:36 AM, Den Cowboy wrote:


Thanks. This could probably be the issue.


# yum -y --enablerepo=epel --showduplicates list ansible
Failed to set locale, defaulting to C
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror2.hs-esslingen.de
 * epel: epel.mirrors.ovh.net
 * extras: it.centos.contactlab.it
 * updates: mirror.netcologne.de
Available Packages
ansible.noarch 2.2.0.0-3.el7 epel


So far I have always installed 2.2. Is there a way to install 
2.1.0.0-1.el7 using yum?




You could try to yum downgrade ansible and see if that gets you an older 
version.




I found this website: 
https://www.rpmfind.net/linux/rpm2html/search.php?query=ansible but 
I'm not really familiar with rpm


RPM resource ansible - Rpmfind mirror 


www.rpmfind.net
RPM resource ansible. Ansible is a radically simple model-driven 
configuration management, multi-node deployment, and remote task 
execution system.




*From:* Andrew Butcher 
*Sent:* Thursday, 17 November 2016 15:49:50
*To:* Den Cowboy
*CC:* users@lists.openshift.redhat.com
*Subject:* Re: Ansible installation OpenShift origin 1.2.0 failed
Hey,

What version of ansible are you using? There is an untemplated 
with_items for g_all_hosts | default([]) which isn't being 
interpreted. Untemplated with_items would have been okay with previous 
ansible versions but not with the latest 2.2 packages.


Like this line but without the jinja template wrapping "{{ }}".

https://github.com/openshift/openshift-ansible/blob/cd922e0f4a1370118c0e2fd60230a68d74b47095/playbooks/byo/openshift-cluster/config.yml#L16

On Thu, Nov 17, 2016 at 10:30 AM, Den Cowboy > wrote:


Hi,


I forked the repo of openshift when it was version 1.2.0.

Now I did all the prerequisites and I was able to ssh from my
master to itself and to every node (using the names I specified in
/etc/hosts).

I created my hosts file and I start the installation but it ends
pretty quick with this error. I don't understand why. I have some
experience with installing version 1.2.0 with ansible.


TASK [Evaluate oo_nodes_to_config]
*
changed: [localhost] => (item=master.xxx.com )
changed: [localhost] => (item=node01.xxx.com )
changed: [localhost] => (item=node02.xxx.com )
changed: [localhost] => (item=node03.xxx.com )
changed: [localhost] => (item=node04.xxx.com )

TASK [Evaluate oo_nodes_to_config]
*
skipping: [localhost] => (item=master.xxx.com )

TASK [Evaluate oo_first_etcd]
**
changed: [localhost]

TASK [Evaluate oo_first_master]

changed: [localhost]

TASK [Evaluate oo_lb_to_config]


TASK [Evaluate oo_nfs_to_config]
***

PLAY [Initialize host facts]
***

TASK [setup]
***
fatal: [g_all_hosts | default([])]: UNREACHABLE! => {"changed":
false, "msg": "Failed to connect to the host via ssh: ssh: Could
not resolve hostname g_all_hosts | default([]): Name or service
not known\r\n", "unreachable": true}

NO MORE HOSTS LEFT
*
to retry, use: --limit
@/root/openshift-ansible/playbooks/byo/config.retry

PLAY RECAP
*
g_all_hosts                : ok=1    changed=0    unreachable=0    failed=0
g_all_hosts | default([])  : ok=0    changed=0    unreachable=1    failed=0
localhost                  : ok=9    changed=8    unreachable=0    failed=0



___
users mailing list
users@lists.openshift.redhat.com

http://lists.openshift.redhat.com/openshiftmm/listinfo/users





___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Ansible installation OpenShift origin 1.2.0 failed

2016-11-17 Thread Den Cowboy
Thanks. This could probably be the issue.


# yum -y --enablerepo=epel --showduplicates list ansible
Failed to set locale, defaulting to C
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror2.hs-esslingen.de
 * epel: epel.mirrors.ovh.net
 * extras: it.centos.contactlab.it
 * updates: mirror.netcologne.de
Available Packages
ansible.noarch                  2.2.0.0-3.el7                  epel


So far I have always installed 2.2. Is there a way to install 2.1.0.0-1.el7
using yum?


I found this website: 
https://www.rpmfind.net/linux/rpm2html/search.php?query=ansible but I'm not 
really familiar with rpm

RPM resource ansible - Rpmfind 
mirror
www.rpmfind.net
RPM resource ansible. Ansible is a radically simple model-driven configuration 
management, multi-node deployment, and remote task execution system.




From: Andrew Butcher 
Sent: Thursday, 17 November 2016 15:49:50
To: Den Cowboy
CC: users@lists.openshift.redhat.com
Subject: Re: Ansible installation OpenShift origin 1.2.0 failed

Hey,

What version of ansible are you using? There is an untemplated with_items for 
g_all_hosts | default([]) which isn't being interpreted. Untemplated with_items 
would have been okay with previous ansible versions but not with the latest 2.2 
packages.

Like this line but without the jinja template wrapping "{{ }}".

https://github.com/openshift/openshift-ansible/blob/cd922e0f4a1370118c0e2fd60230a68d74b47095/playbooks/byo/openshift-cluster/config.yml#L16

On Thu, Nov 17, 2016 at 10:30 AM, Den Cowboy 
> wrote:

Hi,


I forked the repo of openshift when it was version 1.2.0.

Now I did all the prerequisites and I was able to ssh from my master to
itself and to every node (using the names I specified in /etc/hosts).

I created my hosts file and I start the installation but it ends pretty quick 
with this error. I don't understand why. I have some experience with 
installing version 1.2.0 with ansible.


TASK [Evaluate oo_nodes_to_config] *
changed: [localhost] => (item=master.xxx.com)
changed: [localhost] => (item=node01.xxx.com)
changed: [localhost] => (item=node02.xxx.com)
changed: [localhost] => (item=node03.xxx.com)
changed: [localhost] => (item=node04.xxx.com)

TASK [Evaluate oo_nodes_to_config] *
skipping: [localhost] => (item=master.xxx.com)

TASK [Evaluate oo_first_etcd] **
changed: [localhost]

TASK [Evaluate oo_first_master] 
changed: [localhost]

TASK [Evaluate oo_lb_to_config] 

TASK [Evaluate oo_nfs_to_config] ***

PLAY [Initialize host facts] ***

TASK [setup] ***
fatal: [g_all_hosts | default([])]: UNREACHABLE! => {"changed": false, "msg": 
"Failed to connect to the host via ssh: ssh: Could not resolve hostname 
g_all_hosts | default([]): Name or service not known\r\n", "unreachable": true}

NO MORE HOSTS LEFT *
to retry, use: --limit @/root/openshift-ansible/playbooks/byo/config.retry

PLAY RECAP *
g_all_hosts                : ok=1    changed=0    unreachable=0    failed=0
g_all_hosts | default([])  : ok=0    changed=0    unreachable=1    failed=0
localhost                  : ok=9    changed=8    unreachable=0    failed=0


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Ansible installation OpenShift origin 1.2.0 failed

2016-11-17 Thread Andrew Butcher
Hey,

What version of ansible are you using? There is an untemplated with_items
for g_all_hosts | default([]) which isn't being interpreted. Untemplated
with_items would have been okay with previous ansible versions but not with
the latest 2.2 packages.

Like this line but without the jinja template wrapping "{{ }}".

https://github.com/openshift/openshift-ansible/blob/cd922e0f4a1370118c0e2fd60230a68d74b47095/playbooks/byo/openshift-cluster/config.yml#L16
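
To illustrate the difference (a sketch, not the actual file contents):

  # breaks on ansible 2.2: the expression is treated as a literal string
  with_items: g_all_hosts | default([])

  # what the fixed playbooks use: a proper jinja2 template
  with_items: "{{ g_all_hosts | default([]) }}"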

On Thu, Nov 17, 2016 at 10:30 AM, Den Cowboy  wrote:

> Hi,
>
>
> I forked the repo of openshift when it was version 1.2.0.
>
> Now I did all the prerequisites and I was able to ssh from my master to
> itself and to every node (using the names I specified in /etc/hosts).
>
> I created my hosts file and I start the installation but it ends pretty
> quick with this error. I don't understand why. I have some experience with
> installing version 1.2.0 with ansible.
>
>
> TASK [Evaluate oo_nodes_to_config] **
> ***
> changed: [localhost] => (item=master.xxx.com)
> changed: [localhost] => (item=node01.xxx.com)
> changed: [localhost] => (item=node02.xxx.com)
> changed: [localhost] => (item=node03.xxx.com)
> changed: [localhost] => (item=node04.xxx.com)
>
> TASK [Evaluate oo_nodes_to_config] **
> ***
> skipping: [localhost] => (item=master.xxx.com)
>
> TASK [Evaluate oo_first_etcd] **
> 
> changed: [localhost]
>
> TASK [Evaluate oo_first_master] **
> **
> changed: [localhost]
>
> TASK [Evaluate oo_lb_to_config] **
> **
>
> TASK [Evaluate oo_nfs_to_config] **
> *
>
> PLAY [Initialize host facts] **
> *
>
> TASK [setup] 
> ***
> fatal: [g_all_hosts | default([])]: UNREACHABLE! => {"changed": false,
> "msg": "Failed to connect to the host via ssh: ssh: Could not resolve
> hostname g_all_hosts | default([]): Name or service not known\r\n",
> "unreachable": true}
>
> NO MORE HOSTS LEFT **
> ***
> to retry, use: --limit @/root/openshift-ansible/
> playbooks/byo/config.retry
>
> PLAY RECAP 
> *
> g_all_hosts                : ok=1    changed=0    unreachable=0    failed=0
> g_all_hosts | default([])  : ok=0    changed=0    unreachable=1    failed=0
> localhost                  : ok=9    changed=8    unreachable=0    failed=0
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift use Github issue and Trello, why not use a service like https://waffle.io/ to avoid using two systems and create confusion ?

2016-11-17 Thread Stéphane Klein
2016-11-16 15:28 GMT+01:00 John Lamb :

> What confusion?
>

Where are feature requests and bug reports? In hidden Trello or in Github
issues?
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: s2i build on OSX => fatal error: unexpected signal during runtime execution

2016-11-17 Thread Stéphane Klein
Done: https://github.com/openshift/source-to-image/issues/639

2016-11-17 15:23 GMT+01:00 Ben Parees :

> please open an issue on github.
>
> On Thu, Nov 17, 2016 at 9:08 AM, Stéphane Klein <
> cont...@stephane-klein.info> wrote:
>
>> I have this error:
>>
>> https://gist.github.com/harobed/a3acf12956d073f1f8378379aea46764
>>
>> Information about my host:
>>
>> $ s2i version
>> s2i v1.1.3
>>
>> $ docker version
>> Client:
>>  Version:  1.12.3
>>  API version:  1.24
>>  Go version:   go1.6.3
>>  Git commit:   6b644ec
>>  Built:Wed Oct 26 23:26:11 2016
>>  OS/Arch:  darwin/amd64
>>
>> Server:
>>  Version:  1.12.3
>>  API version:  1.24
>>  Go version:   go1.6.3
>>  Git commit:   6b644ec
>>  Built:Wed Oct 26 23:26:11 2016
>>  OS/Arch:  linux/amd64
>>
>> $ docker info
>> Containers: 15
>>  Running: 0
>>  Paused: 0
>>  Stopped: 15
>> Images: 71
>> Server Version: 1.12.3
>> Storage Driver: aufs
>>  Root Dir: /var/lib/docker/aufs
>>  Backing Filesystem: extfs
>>  Dirs: 132
>>  Dirperm1 Supported: true
>> Logging Driver: json-file
>> Cgroup Driver: cgroupfs
>> Plugins:
>>  Volume: local
>>  Network: bridge host null overlay
>> Swarm: inactive
>> Runtimes: runc
>> Default Runtime: runc
>> Security Options: seccomp
>> Kernel Version: 4.4.27-moby
>> Operating System: Alpine Linux v3.4
>> OSType: linux
>> Architecture: x86_64
>> CPUs: 4
>> Total Memory: 1.951 GiB
>> Name: moby
>> ID: EINF:6OM6:4537:3WUL:3GJE:W42O:HJGQ:U22H:4VBP:PXMP:EQGO:43OL
>> Docker Root Dir: /var/lib/docker
>> Debug Mode (client): false
>> Debug Mode (server): true
>>  File Descriptors: 16
>>  Goroutines: 29
>>  System Time: 2016-11-17T14:06:42.466914005Z
>>  EventsListeners: 1
>> No Proxy: *.local, 169.254/16
>> Registry: https://index.docker.io/v1/
>> WARNING: No kernel memory limit support
>> Insecure Registries:
>>  127.0.0.0/8
>>
>> --
>> Stéphane Klein 
>> blog: http://stephane-klein.info
>> cv : http://cv.stephane-klein.info
>> Twitter: http://twitter.com/klein_stephane
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>
> --
> Ben Parees | OpenShift
>
>


-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


s2i build on OSX => fatal error: unexpected signal during runtime execution

2016-11-17 Thread Stéphane Klein
I have this error:

https://gist.github.com/harobed/a3acf12956d073f1f8378379aea46764

Information about my host:

$ s2i version
s2i v1.1.3

$ docker version
Client:
 Version:  1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:Wed Oct 26 23:26:11 2016
 OS/Arch:  darwin/amd64

Server:
 Version:  1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:Wed Oct 26 23:26:11 2016
 OS/Arch:  linux/amd64

$ docker info
Containers: 15
 Running: 0
 Paused: 0
 Stopped: 15
Images: 71
Server Version: 1.12.3
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 132
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.27-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.951 GiB
Name: moby
ID: EINF:6OM6:4537:3WUL:3GJE:W42O:HJGQ:U22H:4VBP:PXMP:EQGO:43OL
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 16
 Goroutines: 29
 System Time: 2016-11-17T14:06:42.466914005Z
 EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
WARNING: No kernel memory limit support
Insecure Registries:
 127.0.0.0/8

-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: pod disk size

2016-11-17 Thread Frederic Giloux
Hi Julio

have you looked at this point in the blog?

"Why is the container still showing 10GB of container rootfs size?
Shouldn’t we be getting 20 GB? This is expected behavior. Since our new
container is based on our old Fedora image, which is based on the old base
device size, the new container would not get a 20-GB device size unless we
update the image.

So let’s remove the existing Fedora image and update it from the registry."

Also see limitations at the end of the blog.
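
For reference, the knob that blog describes is the devicemapper base size.
On a RHEL/CentOS node it is usually set through the docker storage options
followed by a daemon restart, roughly (a sketch; the exact file and options
depend on your storage setup):

  # /etc/sysconfig/docker-storage
  DOCKER_STORAGE_OPTIONS="--storage-opt dm.basesize=20G"

  systemctl restart docker

After that, as the quoted text says, the image has to be removed and pulled
again before containers based on it pick up the larger size.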

Hope this helps.


Frédéric

On Thu, Nov 17, 2016 at 1:30 PM, Julio Saura  wrote:

> wopss
>
> no sorry not working, my mistake..
>
> still 10 gb
>
> /dev/mapper/docker-253:5-33569780-2c3028e6722088dd70791f7a34128f61a07f96e75523529503d7febc4d27275f
> 10G   1,2G  8,9G  12% /
>
> but docker info says 20 gb
>
>
>
>  Base Device Size: 21.47 GB
>
>
> I have restarted the docker daemon on that node and pulled my image again
>
>
> On 17 Nov 2016, at 12:52, Julio Saura  wrote:
>
> hello
>
> yes, that was the point i needed .
>
> applied and working ;)
>
> thanks!
>
> i was looking for an openshift flag/option instead of directly docker :/
>
>
>
>
> El 17 nov 2016, a las 12:39, Frederic Giloux 
> escribió:
>
> Hi Julio
>
> I hope I understand your question correctly. The first time docker is
> started, it sets up a base device with a default size specified in "Base
> Device Size", visible with the command "#docker info". All future images
> and containers will be a snapshot of this base device. Base size is the
> maximum size that a container/image can grow to.
> Information on how to increase the size is available in this blog entry:
> http://www.projectatomic.io/blog/2016/03/daemon_option_basedevicesize/
>
> Best Regards,
>
> Frédéric
>
> On Thu, Nov 17, 2016 at 10:41 AM, Julio Saura  wrote:
>
>> Hello
>>
>> i have noticed all my pods are started with 10 gb disk .. and i don’t
>> know why, the problem is that i need more disk per pod, how do i increase
>> the size of the pod disk? i don’t find any doc regarding this issue.
>>
>> i have tried to mount a host mount on my pod just to get a jmap out of
>> the pod but i was not able to make it run ..
>>
>> if pod disk size increase is not possible i will try to use PV using nfs,
>> but i guess increasing the pod disk is possible right?
>>
>> best regards
>>
>> thanks.
>>
>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
>
>
> --
> *Frédéric Giloux*
> Senior Middleware Consultant
>
> Red Hat GmbH
> MesseTurm, Friedrich-Ebert-Anlage 49, 60308 Frankfurt am Main
>
> Mobile: +49 (0) 174 1724661 
> E-Mail: fgil...@redhat.com, http://www.redhat.de/
>
> Delivering value year after year
> Red Hat ranks # 1 in value among software vendors
> http://www.redhat.com/promo/vendor/
>
> Freedom...Courage...Commitment...Accountability
> 
> Red Hat GmbH, http://www.de.redhat.com/ Sitz: Grasbrunn,
> Handelsregister: Amtsgericht München, HRB 153243
> Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham, Michael
> O'Neill
>
>
>
>


-- 
*Frédéric Giloux*
Senior Middleware Consultant

Red Hat GmbH
MesseTurm, Friedrich-Ebert-Anlage 49, 60308 Frankfurt am Main

Mobile: +49 (0) 174 1724661 
E-Mail: fgil...@redhat.com, http://www.redhat.de/

Delivering value year after year
Red Hat ranks # 1 in value among software vendors
http://www.redhat.com/promo/vendor/

Freedom...Courage...Commitment...Accountability

Red Hat GmbH, http://www.de.redhat.com/ Sitz: Grasbrunn,
Handelsregister: Amtsgericht München, HRB 153243
Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham, Michael
O'Neill
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: pod disk size

2016-11-17 Thread Julio Saura
Whoops,

no, sorry, not working, my mistake..

still 10 GB

/dev/mapper/docker-253:5-33569780-2c3028e6722088dd70791f7a34128f61a07f96e75523529503d7febc4d27275f
10G   1,2G  8,9G  12% /

but docker info says 20 gb 



 Base Device Size: 21.47 GB


I have restarted the docker daemon on that node and pulled my image again.


> On 17 Nov 2016, at 12:52, Julio Saura  wrote:
> 
> hello
> 
> yes, that was the point i needed .
> 
> applied and working ;)
> 
> thanks!
> 
> i was looking for an openshift flag/option instead of directly docker :/
> 
> 
> 
> 
>> On 17 Nov 2016, at 12:39, Frederic Giloux > > wrote:
>> 
>> Hi Julio
>> 
>> I hope I understand your question correctly. The first time docker is 
>> started, it sets up a base device with a default size specified in "Base 
>> Device Size", visible with the command "#docker info". All future images and 
>> containers will be a snapshot of this base device. Base size is the maximum 
>> size that a container/image can grow to.
>> Information on how to increase the size is available in this blog entry:
>> http://www.projectatomic.io/blog/2016/03/daemon_option_basedevicesize/ 
>> 
>> 
>> Best Regards,
>> 
>> Frédéric
>> 
>> On Thu, Nov 17, 2016 at 10:41 AM, Julio Saura > > wrote:
>> Hello
>> 
>> i have noticed all my pods are started with 10 gb disk .. and i don’t know 
>> why, the problem is that i need more disk per pod, how do i increase the 
>> size of the pod disk? i don’t find any doc regarding this issue.
>> 
>> i have tried to mount a host mount on my pod just to get a jmap out of the 
>> pod but i was not able to make it run ..
>> 
>> if pod disk size increase is not possible i will try to use PV using nfs, 
>> but i guess increasing the pod disk is possible right?
>> 
>> best regards
>> 
>> thanks.
>> 
>> 
>> 
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com 
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
>> 
>> 
>> 
>> 
>> -- 
>> Frédéric Giloux
>> Senior Middleware Consultant
>> 
>> Red Hat GmbH 
>> MesseTurm, Friedrich-Ebert-Anlage 49, 60308 Frankfurt am Main 
>> 
>> Mobile: +49 (0) 174 1724661 
>> E-Mail: fgil...@redhat.com , 
>> http://www.redhat.de/  
>> 
>> Delivering value year after year 
>> Red Hat ranks # 1 in value among software vendors 
>> http://www.redhat.com/promo/vendor/  
>> 
>> Freedom...Courage...Commitment...Accountability 
>>  
>> Red Hat GmbH, http://www.de.redhat.com/  Sitz: 
>> Grasbrunn, 
>> Handelsregister: Amtsgericht München, HRB 153243 
>> Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham, Michael 
>> O'Neill
> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: JBoss cluster

2016-11-17 Thread Lionel Orellana
Thanks Frederic. I have done that.
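
For reference, the grant described in the linked documentation looks
roughly like this, assuming the deployment uses the default service account
in the current project:

  oc policy add-role-to-user view \
    system:serviceaccount:$(oc project -q):default -n $(oc project -q)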
On Thu., 17 Nov. 2016 at 10:48 pm, Frederic Giloux 
wrote:

> Hi Lionel,
>
> JBoss on OpenShift uses JGroups Kube_ping to discover the other members of
> the cluster. This actually calls the Kubernetes REST API. Therefore you
> need to have the proper rights allocated to your service account. More
> information here:
> https://access.redhat.com/documentation/en/red-hat-xpaas/0/single/red-hat-xpaas-eap-image/#clustering
>
> Best Regards,
>
> Frédéric
>
>
> On Thu, Nov 17, 2016 at 11:58 AM, Lionel Orellana 
> wrote:
>
> Hi,
>
> I'm trying to run Jboss cluster using the eap64-basic-s2i v1.3.2 template
> on Origin 1.3.
>
> The application built and deployed fine. The second one started fine but
> I can't tell if it joined the existing cluster. I was expecting to see an
> output along the lines of "number of members in the cluster: 2" which I
> have seen in the past with Jboss.
>
> There is a warning at the beginning of the startup logs:
>
> "WARNING: Service account unable to test permissions to view pods in
> kubernetes (HTTP 000). Clustering will be unavailable. Please refer to the
> documentation for configuration."
>
> But looking at the ha.sh launch script this is just
> the check_view_pods_permission function using curl without --no-proxy. As
> far as I can tell this doesn't affect anything.
>
> So what do I need to look for to confirm that the second instance joined
> the cluster correctly? I am concerned that the proxy might be getting in
> the way somewhere else.
>
> Thanks
>
> Lionel
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
>
>
> --
> *Frédéric Giloux*
> Senior Middleware Consultant
>
> Red Hat GmbH
> MesseTurm, Friedrich-Ebert-Anlage 49, 60308 Frankfurt am Main
>
> Mobile: +49 (0) 174 1724661 
> E-Mail: fgil...@redhat.com, http://www.redhat.de/
>
> Delivering value year after year
> Red Hat ranks # 1 in value among software vendors
> http://www.redhat.com/promo/vendor/
>
> Freedom...Courage...Commitment...Accountability
> 
> Red Hat GmbH, http://www.de.redhat.com/ Sitz: Grasbrunn,
> Handelsregister: Amtsgericht München, HRB 153243
> Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham, Michael
> O'Neill
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


JBoss cluster

2016-11-17 Thread Lionel Orellana
Hi,

I'm trying to run a JBoss cluster using the eap64-basic-s2i v1.3.2 template
on Origin 1.3.

The application built and deployed fine. The second one started fine but I
can't tell if it joined the existing cluster. I was expecting to see an
output along the lines of "number of members in the cluster: 2" which I
have seen in the past with Jboss.

There is a warning at the beginning of the startup logs:

"WARNING: Service account unable to test permissions to view pods in
kubernetes (HTTP 000). Clustering will be unavailable. Please refer to the
documentation for configuration."

But looking at the ha.sh launch script this is just
the check_view_pods_permission function using curl without --no-proxy. As
far as I can tell this doesn't affect anything.

So what do I need to look for to confirm that the second instance joined
the cluster correctly? I am concerned that the proxy might be getting in
the way somewhere else.

Thanks

Lionel
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Behaviour of scheduling in OpenShift Origin

2016-11-17 Thread Frederic Giloux
Hi Lorenz

it seems that inter-pod anti-affinity has been introduced in Kubernetes 1.4
(alpha) [1]. For your scenario you may however rely on setting appropriate
resource requests for your pods. The scheduler takes them per default in
consideration and won't schedule app2/pod2 on the same node as app1/pod1 if
there is not enough resources left. The scheduler looks at the sum of the
resources that have been requested by all the pods running on the node.

[1] https://github.com/kubernetes/kubernetes.github.io/pull/1148
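
As an illustration of the resource-request approach, requests go on each
container in the deployment config's pod template (the values here are
made up):

  spec:
    template:
      spec:
        containers:
        - name: app1
          resources:
            requests:
              cpu: "1"
              memory: 2Gi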

Best Regards,

Frédéric

On Thu, Nov 17, 2016 at 10:27 AM, Lorenz Vanthillo <
lorenz.vanthi...@outlook.com> wrote:

> We're using OpenShift Origin v1.2.0.
>
> We have a cluster with 5 nodes. 2 of them are infra nodes. 3 of them are
> "primary" nodes.
>
> Our own applications are deployed on the nodes with the label "primary".
>
> Now we have 2 applications/pods which are consuming a lot of resources.
> When we run our process through our pipeline (multiple pods) it works when
> those 2 pods aren't on the same node. We are not able to increase the
> resources on our nodes.
>
>
> Is there a way to tell OpenShift:
>
> Deploy pod X only on nodes where pod Y isn't present (and vice versa)?
>
> We know we can assign labels to nodes and use node selectors to keep them
> off the same node, but that limits the power of scheduling.
>
>
> Thanks
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
*Frédéric Giloux*
Senior Middleware Consultant

Red Hat GmbH
MesseTurm, Friedrich-Ebert-Anlage 49, 60308 Frankfurt am Main

Mobile: +49 (0) 174 1724661 
E-Mail: fgil...@redhat.com, http://www.redhat.de/

Delivering value year after year
Red Hat ranks # 1 in value among software vendors
http://www.redhat.com/promo/vendor/

Freedom...Courage...Commitment...Accountability

Red Hat GmbH, http://www.de.redhat.com/ Sitz: Grasbrunn,
Handelsregister: Amtsgericht München, HRB 153243
Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham, Michael
O'Neill
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


pod disk size

2016-11-17 Thread Julio Saura
Hello

I have noticed all my pods are started with a 10 GB disk and I don't know why.
The problem is that I need more disk per pod; how do I increase the size of the
pod disk? I don't find any doc regarding this issue.

I have tried to mount a host mount on my pod just to get a jmap out of the pod,
but I was not able to make it run.

If increasing the pod disk size is not possible I will try to use a PV using NFS,
but I guess increasing the pod disk is possible, right?

best regards

thanks.



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Error from server: User "system:serviceaccount:default:pruner" cannot list all images in the cluster

2016-11-17 Thread Stéphane Klein
Yes, thanks it's that:

$ oc adm policy add-cluster-role-to-user system:image-pruner
system:serviceaccount:default:pruner
# oadm --token=`oc sa get-token pruner` prune images --confirm

but now I get:

error: error communicating with registry: Get
http://172.30.154.75:5000/healthz: dial tcp 172.30.154.75:5000: i/o timeout
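
A couple of quick sanity checks for that timeout, assuming the registry
service lives in the default project and that the IP/port in the error
match what the service reports:

  oc get svc docker-registry -n default
  curl -v http://172.30.154.75:5000/healthz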

2016-11-16 17:54 GMT+01:00 Jordan Liggitt :

> When granting the cluster role, the username for the service account is
> not "pruner", it is "system:serviceaccount:default:pruner"
>
> On Nov 16, 2016, at 11:29 AM, Stéphane Klein 
> wrote:
>
> Hi,
>
> oc adm policy add-cluster-role-to-user system:image-pruner pruner
>
> oadm --token=`oc sa get-token pruner` prune images --confirm
> Error from server: User "system:serviceaccount:default:pruner" cannot
> list all images in the cluster
>
> What role I forget to grant to pruner ServiceAccount ?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein 
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users