Re: PV manual reclamation and recycling

2016-11-29 Thread Lionel Orellana
Thanks Clayton. Keep us posted.
On Wed., 30 Nov. 2016 at 2:48 am, Clayton Coleman 
wrote:

> It's likely; don't have an ETA yet while the scope of the pick is assessed.
>
> On Thu, Nov 24, 2016 at 5:52 PM, Lionel Orellana 
> wrote:
>
> This is a pretty bad issue in Kubernetes. We are talking about deleting
> data from NFS volumes. Lucky for me I'm just doing a POC. Is this not
> considered bad enough to warrant a patch release for Origin 1.3.x?
>
> Cheers
>
> Lionel.
>
> On 19 November 2016 at 07:38, Lionel Orellana  wrote:
>
> The only "released" version of Openshift that includes Kubernetes 1.3.6 is
> v1.4.0.-alpha1. I don't want to upgrade to an alpha1 release.
>
> Can I request a patch of Openshift Origin to include Kubernetes 1.3.6 or
> higher? ( the Kubernetes 1.3 branch is up to 1.3.10).
>
> On 19 November 2016 at 07:26, Alex Wauck  wrote:
>
> OpenShift is a distribution of Kubernetes, so I don't think you can
> upgrade Kubernetes without upgrading OpenShift.
>
> On Fri, Nov 18, 2016 at 1:52 PM, Lionel Orellana 
> wrote:
>
> So the fix is on Kubernetes 1.3.6. The upgrade guide you mention is for
> Openshift as a whole unless I'm missing something.
> On Sat., 19 Nov. 2016 at 12:29 am, Mark Turansky 
> wrote:
>
> Good find on that bug. Our upgrade guide can help you get started on a
> fix.
>
>
> https://docs.openshift.com/container-platform/3.3/install_config/upgrading/index.html
>
> Mark
>
> On Fri, Nov 18, 2016 at 3:13 AM, Lionel Orellana 
> wrote:
>
>
> This sounds very very familiar:
> https://github.com/kubernetes/kubernetes/issues/30637
>
> Particularly comment:
> https://github.com/kubernetes/kubernetes/issues/30637#issuecomment-243276076
>
> That is a nasty bug. How can I upgrade Kubernetes in my cluster?
>
> My current versions are
>
> -bash-4.2$ oc version
> oc v1.3.0
> kubernetes v1.3.0+52492b4
> features: Basic-Auth GSSAPI Kerberos SPNEGO
>
> Server https://poc-docker01.aipo.gov.au:8443
> openshift v1.3.0
> kubernetes v1.3.0+52492b4
>
>
> On 18 November 2016 at 18:18, Lionel Orellana  wrote:
>
> Files in other dirs in the same NFS server don't get deleted (e.g. <server name>/poc_runtime/test/)
>
> There is something in my Openshift node deleting files in <server name>/poc_runtime/evs as soon as I put them there!
>
> On 18 November 2016 at 18:04, Lionel Orellana  wrote:
>
>
> In fact, whatever is deleting my files is still doing it:
>
> [root@poc-docker03 evs]# touch x
> [root@poc-docker03 evs]# ls
> [root@poc-docker03 evs]#
>
> evs is a path on an NFS volume that I have added directly to some
> deployment configs
>
>  -
>    name: evs
>    nfs:
>      server: <server name>
>      path: /poc_runtime/evs
>
> If I stop the origin-node service on one particular node the file doesn't
> disappear.
>
> [root@poc-docker03 evs]# touch x
> [root@poc-docker03 evs]# ls
> x
> [root@poc-docker03 evs]#
>
> When I restart the origin-node service I see a lot of errors like this
>
>  Failed cleaning pods: [remove
> /var/lib/origin/openshift.local.volumes/pods/1b7e3a16-ab08-11e6-8618-005056915814/volumes/kubernetes.io~nfs
> device or resource busy
>  Failed to remove orphaned pod x dir; err: remove
> /var/lib/origin/openshift.local.volumes/pods//volumes/kubernetes.io~nfs/*evs*:
> device or resource busy
>
> Despite the fact that the error says that it couldn't remove it, what
> exactly is it trying to do here? Is it possible that this process
> previously deleted the data in the evs folder?
>
>
>
>
> On 18 November 2016 at 16:45, Lionel Orellana  wrote:
>
> What about NFS volumes added directly in build configs.
>
> volumes:
>   -
>     name: jenkins-volume-1
>     nfs:
>       server: <server name>
>       path: /poc_runtime/jenkins/home
>
>
> We just restarted all the servers hosting my openshift cluster and all the
> data in the path above disappeared. Simply by restarting the host VM!
>
>
>
> On 18 November 2016 at 16:19, Lionel Orellana  wrote:
>
> Thanks Mark
>
> On 18 November 2016 at 15:09, Mark Turansky  wrote:
>
>
>
> On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
> wrote:
>
> Hi,
>
> Couple of questions regarding Persistent Volumes, in particular NFS ones.
>
> 1) If I have a PV configured with the Retain policy it is not clear to me
> how this PV can be reused after the bound PVC is deleted. Deleting the PVC
> makes the PV status "Released". How do I make it "Available" again without
> losing the data?
>
>
> You can keep the PVC around longer if you intend to reuse it between pods.
> There is no way for a PV to go from Released to Available again in your
> scenario. You would have to delete and recreate the PV. It's a pointer to
> real storage (the NFS share), so you're just recreating the pointer. The
> data in the NFS volume itself is untouched.
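
For reference, a minimal sketch of that "recreate the pointer" workflow, assuming an NFS-backed PV that uses the Retain policy (the PV name, size and NFS server below are placeholders, not taken from this thread):

  # pv-evs.yaml -- example PV definition with the Retain policy
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-evs
  spec:
    capacity:
      storage: 10Gi
    accessModes:
      - ReadWriteMany
    persistentVolumeReclaimPolicy: Retain
    nfs:
      server: nfs.example.com
      path: /poc_runtime/evs

  # Once the bound PVC is deleted the PV shows "Released".
  # Deleting and recreating the PV object makes it "Available" again;
  # only the API object is recreated, the NFS export itself is not touched.
  oc delete pv pv-evs
  oc create -f pv-evs.yaml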

Re: PV manual reclamation and recycling

2016-11-29 Thread Clayton Coleman
It's likely; don't have an ETA yet while the scope of the pick is assessed.

On Thu, Nov 24, 2016 at 5:52 PM, Lionel Orellana  wrote:

> This is a pretty bad issue in Kubernetes. We are talking about deleting
> data from NFS volumes. Lucky for me I'm just doing a POC. Is this not
> considered bad enough to warrant a patch release for Origin 1.3.x?
>
> Cheers
>
> Lionel.
>
> On 19 November 2016 at 07:38, Lionel Orellana  wrote:
>
>> The only "released" version of Openshift that includes Kubernetes 1.3.6
>> is v1.4.0-alpha.1. I don't want to upgrade to an alpha release.
>>
>> Can I request a patch of Openshift Origin to include Kubernetes 1.3.6 or
>> higher? (The Kubernetes 1.3 branch is up to 1.3.10.)
>>
>> On 19 November 2016 at 07:26, Alex Wauck  wrote:
>>
>>> OpenShift is a distribution of Kubernetes, so I don't think you can
>>> upgrade Kubernetes without upgrading OpenShift.
>>>
>>> On Fri, Nov 18, 2016 at 1:52 PM, Lionel Orellana 
>>> wrote:
>>>
 So the fix is on Kubernetes 1.3.6. The upgrade guide you mention is for
 Openshift as a whole unless I'm missing something.
 On Sat., 19 Nov. 2016 at 12:29 am, Mark Turansky 
 wrote:

> Good find on that bug. Our upgrade guide can help you get started on a
> fix.
>
> https://docs.openshift.com/container-platform/3.3/install_config/upgrading/index.html
>
> Mark
>
> On Fri, Nov 18, 2016 at 3:13 AM, Lionel Orellana 
> wrote:
>
>
> This sounds very very familiar: https://github.com/kubernetes/kubernetes/issues/30637
>
> Particularly comment: https://github.com/kubernetes/kubernetes/issues/30637#issuecomment-243276076
>
> That is a nasty bug. How can I upgrade Kubernetes in my cluster?
>
> My current versions are
>
> -bash-4.2$ oc version
> oc v1.3.0
> kubernetes v1.3.0+52492b4
> features: Basic-Auth GSSAPI Kerberos SPNEGO
>
> Server https://poc-docker01.aipo.gov.au:8443
> openshift v1.3.0
> kubernetes v1.3.0+52492b4
>
>
> On 18 November 2016 at 18:18, Lionel Orellana 
> wrote:
>
> Files in other dirs in the same NFS server don't get deleted (e.g.
> <server name>/poc_runtime/test/)
>
> There is something in my Openshift node deleting files in <server name>/poc_runtime/evs as soon as I put them there!
>
> On 18 November 2016 at 18:04, Lionel Orellana 
> wrote:
>
>
> In fact, whatever is deleting my files is still doing it:
>
> [root@poc-docker03 evs]# touch x
> [root@poc-docker03 evs]# ls
> [root@poc-docker03 evs]#
>
> evs is a path on an NFS volume that I have added directly to some
> deployment configs
>
>  -
>   name: evs
>   nfs:
> server: 
> path: /poc_runtime/evs
>
> If I stop the origin-node service on one particular node the file doesn't
> disappear.
>
> [root@poc-docker03 evs]# touch x
> [root@poc-docker03 evs]# ls
> x
> [root@poc-docker03 evs]#
>
> When I restart the origin-node service I see a lot of errors like this
>
>  Failed cleaning pods: [remove
> /var/lib/origin/openshift.local.volumes/pods/1b7e3a16-ab08-11e6-8618-005056915814/volumes/kubernetes.io~nfs
> device or resource busy
>  Failed to remove orphaned pod x dir; err: remove
> /var/lib/origin/openshift.local.volumes/pods//volumes/kubernetes.io~nfs/*evs*:
> device or resource busy
>
> Despite the fact that the error says that it couldn't remove it, what
> exactly is it trying to do here? Is it possible that this process
> previously deleted the data in the evs folder?
>
>
>
>
> On 18 November 2016 at 16:45, Lionel Orellana 
> wrote:
>
> What about NFS volumes added directly in build configs.
>
> volumes:
> -
>   name: jenkins-volume-1
>   nfs:
> server: 
> path: /poc_runtime/jenkins/home
>
>
> We just restarted all the servers hosting my openshift cluster and all
> the data in the path above disappeared. Simply by restarting the host VM!
>
>
>
> On 18 November 2016 at 16:19, Lionel Orellana 
> wrote:
>
> Thanks Mark
>
> On 18 November 2016 at 15:09, Mark Turansky 
> wrote:
>
>
>
> On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
> wrote:
>
> Hi,
>
> Couple of questions regarding Persistent Volumes, in particular NFS
> ones.
>
> 1) If I have a PV configured with the Retain policy it is not clear to
> me how this PV can be reused after the bound PVC is deleted. Deleting the

Re: PV manual reclamation and recycling

2016-11-24 Thread Lionel Orellana
This is a pretty bad issue in Kubernetes. We are talking about deleting
data from NFS volumes. Lucky for me I'm just doing a POC. Is this not
considered bad enough to warrant a patch release for Origin 1.3.x?

Cheers

Lionel.

On 19 November 2016 at 07:38, Lionel Orellana  wrote:

> The only "released" version of Openshift that includes Kubernetes 1.3.6 is
> v1.4.0-alpha.1. I don't want to upgrade to an alpha release.
>
> Can I request a patch of Openshift Origin to include Kubernetes 1.3.6 or
> higher? (The Kubernetes 1.3 branch is up to 1.3.10.)
>
> On 19 November 2016 at 07:26, Alex Wauck  wrote:
>
>> OpenShift is a distribution of Kubernetes, so I don't think you can
>> upgrade Kubernetes without upgrading OpenShift.
>>
>> On Fri, Nov 18, 2016 at 1:52 PM, Lionel Orellana 
>> wrote:
>>
>>> So the fix is on Kubernetes 1.3.6. The upgrade guide you mention is for
>>> Openshift as a whole unless I'm missing something.
>>> On Sat., 19 Nov. 2016 at 12:29 am, Mark Turansky 
>>> wrote:
>>>
 Good find on that bug. Our upgrade guide can help you get started on a
 fix.

 https://docs.openshift.com/container-platform/3.3/install_config/upgrading/index.html

 Mark

 On Fri, Nov 18, 2016 at 3:13 AM, Lionel Orellana 
 wrote:


 This sounds very very familiar: https://github.com/kubernetes/kubernetes/issues/30637

 Particularly comment: https://github.com/kubernetes/kubernetes/issues/30637#issuecomment-243276076

 That is a nasty bug. How can I upgrade Kubernetes in my cluster?

 My current versions are

 -bash-4.2$ oc version
 oc v1.3.0
 kubernetes v1.3.0+52492b4
 features: Basic-Auth GSSAPI Kerberos SPNEGO

 Server https://poc-docker01.aipo.gov.au:8443
 openshift v1.3.0
 kubernetes v1.3.0+52492b4


 On 18 November 2016 at 18:18, Lionel Orellana 
 wrote:

 Files in other dirs in the same NFS server don't get deleted (e.g.
 <server name>/poc_runtime/test/)

 There is something in my Openshift node deleting files in <server name>/poc_runtime/evs as soon as I put them there!

 On 18 November 2016 at 18:04, Lionel Orellana 
 wrote:


 In fact, whatever is deleting my files is still doing it:

 [root@poc-docker03 evs]# touch x
 [root@poc-docker03 evs]# ls
 [root@poc-docker03 evs]#

 evs is a path on an NFS volume that I have added directly to some
 deployment configs

  -
   name: evs
   nfs:
 server: 
 path: /poc_runtime/evs

 If I stop the origin-node service on one particular node the file doesn't
 disappear.

 [root@poc-docker03 evs]# touch x
 [root@poc-docker03 evs]# ls
 x
 [root@poc-docker03 evs]#

 When I restart the origin-node service I see a lot of errors like this

  Failed cleaning pods: [remove
 /var/lib/origin/openshift.local.volumes/pods/1b7e3a16-ab08-11e6-8618-005056915814/volumes/kubernetes.io~nfs
 device or resource busy
  Failed to remove orphaned pod x dir; err: remove
 /var/lib/origin/openshift.local.volumes/pods//volumes/kubernetes.io~nfs/*evs*:
 device or resource busy

 Despite the fact that the error says that it couldn't remove it, what
 exactly is it trying to do here? Is it possible that this process
 previously deleted the data in the evs folder?




 On 18 November 2016 at 16:45, Lionel Orellana 
 wrote:

 What about NFS volumes added directly in build configs.

 volumes:
 -
   name: jenkins-volume-1
   nfs:
 server: 
 path: /poc_runtime/jenkins/home


 We just restarted all the servers hosting my openshift cluster and all
 the data in the path above disappeared. Simply by restarting the host VM!



 On 18 November 2016 at 16:19, Lionel Orellana 
 wrote:

 Thanks Mark

 On 18 November 2016 at 15:09, Mark Turansky 
 wrote:



 On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
 wrote:

 Hi,

 Couple of questions regarding Persistent Volumes, in particular NFS
 ones.

 1) If I have a PV configured with the Retain policy it is not clear to
 me how this PV can be reused after the bound PVC is deleted. Deleting the
 PVC makes the PV status "Released". How do I make it "Available" again
 without losing the data?


 You can keep the PVC around longer if you intend to reuse it between
 pods. There is no way for a PV to go from Released to Available again in
 your scenario. You would have to delete and recreate the PV.

Re: PV manual reclamation and recycling

2016-11-18 Thread Lionel Orellana
The only "released" version of Openshift that includes Kubernetes 1.3.6 is
v1.4.0.-alpha1. I don't want to upgrade to an alpha1 release.

Can I request a patch of Openshift Origin to include Kubernetes 1.3.6 or
higher? ( the Kubernetes 1.3 branch is up to 1.3.10).

On 19 November 2016 at 07:26, Alex Wauck  wrote:

> OpenShift is a distribution of Kubernetes, so I don't think you can
> upgrade Kubernetes without upgrading OpenShift.
>
> On Fri, Nov 18, 2016 at 1:52 PM, Lionel Orellana 
> wrote:
>
>> So the fix is on Kubernetes 1.3.6. The upgrade guide you mention is for
>> Openshift as a whole unless I'm missing something.
>> On Sat., 19 Nov. 2016 at 12:29 am, Mark Turansky 
>> wrote:
>>
>>> Good find on that bug. Our upgrade guide can help you get started on a
>>> fix.
>>>
>>> https://docs.openshift.com/container-platform/3.3/install_config/upgrading/index.html
>>>
>>> Mark
>>>
>>> On Fri, Nov 18, 2016 at 3:13 AM, Lionel Orellana 
>>> wrote:
>>>
>>>
>>> This sounds very very familiar: https://github.com/kubernetes/kubernetes/issues/30637
>>>
>>> Particularly comment: https://github.com/kubernetes/kubernetes/issues/30637#issuecomment-243276076
>>>
>>> That is a nasty bug. How can I upgrade Kubernetes in my cluster?
>>>
>>> My current versions are
>>>
>>> -bash-4.2$ oc version
>>> oc v1.3.0
>>> kubernetes v1.3.0+52492b4
>>> features: Basic-Auth GSSAPI Kerberos SPNEGO
>>>
>>> Server https://poc-docker01.aipo.gov.au:8443
>>> openshift v1.3.0
>>> kubernetes v1.3.0+52492b4
>>>
>>>
>>> On 18 November 2016 at 18:18, Lionel Orellana 
>>> wrote:
>>>
>>> Files in other dirs in the same NFS server don't get deleted (e.g.
>>> <server name>/poc_runtime/test/)
>>>
>>> There is something in my Openshift node deleting files in <server name>/poc_runtime/evs as soon as I put them there!
>>>
>>> On 18 November 2016 at 18:04, Lionel Orellana 
>>> wrote:
>>>
>>>
>>> In fact, whatever is deleting my files is still doing it:
>>>
>>> [root@poc-docker03 evs]# touch x
>>> [root@poc-docker03 evs]# ls
>>> [root@poc-docker03 evs]#
>>>
>>> evs is a path on an NFS volume that I have added directly to some
>>> deployment configs
>>>
>>>  -
>>>   name: evs
>>>   nfs:
>>> server: 
>>> path: /poc_runtime/evs
>>>
>>> If I stop the origin-node service on one particular node the file doesn't
>>> disappear.
>>>
>>> [root@poc-docker03 evs]# touch x
>>> [root@poc-docker03 evs]# ls
>>> x
>>> [root@poc-docker03 evs]#
>>>
>>> When I restart the origin-node service I see a lot of errors like this
>>>
>>>  Failed cleaning pods: [remove
>>> /var/lib/origin/openshift.local.volumes/pods/1b7e3a16-ab08-11e6-8618-005056915814/volumes/kubernetes.io~nfs
>>> device or resource busy
>>>  Failed to remove orphaned pod x dir; err: remove
>>> /var/lib/origin/openshift.local.volumes/pods//volumes/kubernetes.io~nfs/*evs*:
>>> device or resource busy
>>>
>>> Despite the fact that the error says that it couldn't remove it, what
>>> exactly is it trying to do here? Is it possible that this process
>>> previously deleted the data in the evs folder?
>>>
>>>
>>>
>>>
>>> On 18 November 2016 at 16:45, Lionel Orellana 
>>> wrote:
>>>
>>> What about NFS volumes added directly in build configs.
>>>
>>> volumes:
>>> -
>>>   name: jenkins-volume-1
>>>   nfs:
>>> server: 
>>> path: /poc_runtime/jenkins/home
>>>
>>>
>>> We just restarted all the servers hosting my openshift cluster and all
>>> the data in the path above disappeared. Simply by restarting the host VM!
>>>
>>>
>>>
>>> On 18 November 2016 at 16:19, Lionel Orellana 
>>> wrote:
>>>
>>> Thanks Mark
>>>
>>> On 18 November 2016 at 15:09, Mark Turansky  wrote:
>>>
>>>
>>>
>>> On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
>>> wrote:
>>>
>>> Hi,
>>>
>>> Couple of questions regarding Persistent Volumes, in particular NFS
>>> ones.
>>>
>>> 1) If I have a PV configured with the Retain policy it is not clear to
>>> me how this PV can be reused after the bound PVC is deleted. Deleting the
>>> PVC makes the PV status "Released". How do I make it "Available" again
>>> without losing the data?
>>>
>>>
>>> You can keep the PVC around longer if you intend to reuse it between
>>> pods. There is no way for a PV to go from Released to Available again in
>>> your scenario. You would have to delete and recreate the PV. It's a pointer
>>> to real storage (the NFS share), so you're just recreating the pointer. The
>>> data in the NFS volume itself is untouched.
>>>
>>>
>>>
>>>
>>> 2) Is there anything (e.g. all nodes crashing due to some underlying
>>> infrastructure failure) that would cause the data in a "Retain" volume to
>>> be wiped out? We had a problem with all our vmware servers  (where I host
>>> my openshift POC) and all my NFS mounted volumes were wiped out.

Re: PV manual reclamation and recycling

2016-11-18 Thread Lionel Orellana
So the fix is on Kubernetes 1.3.6. The upgrade guide you mention is for
Openshift as a whole unless I'm missing something.
On Sat., 19 Nov. 2016 at 12:29 am, Mark Turansky 
wrote:

> Good find on that bug. Our upgrade guide can help you get started on a
> fix.
>
>
> https://docs.openshift.com/container-platform/3.3/install_config/upgrading/index.html
>
> Mark
>
> On Fri, Nov 18, 2016 at 3:13 AM, Lionel Orellana 
> wrote:
>
>
> This sounds very very familiar:
> https://github.com/kubernetes/kubernetes/issues/30637
>
> Particularly comment:
> https://github.com/kubernetes/kubernetes/issues/30637#issuecomment-243276076
>
> That is a nasty bug. How can I upgrade Kubernetes in my cluster?
>
> My current versions are
>
> -bash-4.2$ oc version
> oc v1.3.0
> kubernetes v1.3.0+52492b4
> features: Basic-Auth GSSAPI Kerberos SPNEGO
>
> Server https://poc-docker01.aipo.gov.au:8443
> openshift v1.3.0
> kubernetes v1.3.0+52492b4
>
>
> On 18 November 2016 at 18:18, Lionel Orellana  wrote:
>
> Files in other dirs in the same NFS server don't get deleted (e.g. <server name>/poc_runtime/test/)
>
> There is something in my Openshift node deleting files in <server name>/poc_runtime/evs as soon as I put them there!
>
> On 18 November 2016 at 18:04, Lionel Orellana  wrote:
>
>
> In fact, whatever is deleting my files is still doing it:
>
> [root@poc-docker03 evs]# touch x
> [root@poc-docker03 evs]# ls
> [root@poc-docker03 evs]#
>
> evs is a path on an NFS volume that I have added directly to some
> deployment configs
>
>  -
>   name: evs
>   nfs:
> server: 
> path: /poc_runtime/evs
>
> If I stop the origin-node service on one particular node the file doesn't
> disappear.
>
> [root@poc-docker03 evs]# touch x
> [root@poc-docker03 evs]# ls
> x
> [root@poc-docker03 evs]#
>
> When I restart the origin-node service I see a lot of errors like this
>
>  Failed cleaning pods: [remove
> /var/lib/origin/openshift.local.volumes/pods/1b7e3a16-ab08-11e6-8618-005056915814/volumes/kubernetes.io~nfs
> device or resource busy
>  Failed to remove orphaned pod x dir; err: remove
> /var/lib/origin/openshift.local.volumes/pods//volumes/kubernetes.io~nfs/*evs*:
> device or resource busy
>
> Despite the fact that the error says that it couldn't remove it, what
> exactly is it trying to do here? Is it possible that this process
> previously deleted the data in the evs folder?
>
>
>
>
> On 18 November 2016 at 16:45, Lionel Orellana  wrote:
>
> What about NFS volumes added directly in build configs.
>
> volumes:
> -
>   name: jenkins-volume-1
>   nfs:
> server: 
> path: /poc_runtime/jenkins/home
>
>
> We just restarted all the servers hosting my openshift cluster and all the
> data in the path above disappeared. Simply by restarting the host VM!
>
>
>
> On 18 November 2016 at 16:19, Lionel Orellana  wrote:
>
> Thanks Mark
>
> On 18 November 2016 at 15:09, Mark Turansky  wrote:
>
>
>
> On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
> wrote:
>
> Hi,
>
> Couple of questions regarding Persistent Volumes, in particular NFS ones.
>
> 1) If I have a PV configured with the Retain policy it is not clear to me
> how this PV can be reused after the bound PVC is deleted. Deleting the PVC
> makes the PV status "Released". How do I make it "Available" again without
> losing the data?
>
>
> You can keep the PVC around longer if you intend to reuse it between pods.
> There is no way for a PV to go from Released to Available again in your
> scenario. You would have to delete and recreate the PV. It's a pointer to
> real storage (the NFS share), so you're just recreating the pointer. The
> data in the NFS volume itself is untouched.
>
>
>
>
> 2) Is there anything (e.g. all nodes crashing due to some underlying
> infrastructure failure) that would cause the data in a "Retain" volume to
> be wiped out? We had a problem with all our vmware servers  (where I host
> my openshift POC)  and all my NFS mounted volumes were wiped out. The
> storage guys assure me that nothing at their end caused that and it must
> have been a running process that did it.
>
>
> "Retain" is just a flag to the recycling process to leave that PV alone
> when it's Released. The PV's retention policy wouldn't cause everything to
> be deleted. NFS volumes on the node are no different than if you called
> "mount" yourself. There is nothing inherent in OpenShift itself that is
> running in that share that would wipe out data.
>
>
>
>
> Thanks
>
> Lionel.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
>
>
>
>
>
>
>
___
users mailing list
users@lists.openshift.redhat.com

Re: PV manual reclamation and recycling

2016-11-18 Thread Lionel Orellana
This sounds very very familiar:
https://github.com/kubernetes/kubernetes/issues/30637

Particularly comment:
https://github.com/kubernetes/kubernetes/issues/30637#issuecomment-243276076

That is a nasty bug. How can I upgrade Kubernetes in my cluster?

My current versions are

-bash-4.2$ oc version
oc v1.3.0
kubernetes v1.3.0+52492b4
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://poc-docker01.aipo.gov.au:8443
openshift v1.3.0
kubernetes v1.3.0+52492b4


On 18 November 2016 at 18:18, Lionel Orellana  wrote:

> Files in other dirs in the same NFS server don't get deleted (e.g. <server name>/poc_runtime/test/)
>
> There is something in my Openshift node deleting files in <server name>/poc_runtime/evs as soon as I put them there!
>
> On 18 November 2016 at 18:04, Lionel Orellana  wrote:
>
>>
>> In fact, whatever is deleting my files is still doing it:
>>
>> [root@poc-docker03 evs]# touch x
>> [root@poc-docker03 evs]# ls
>> [root@poc-docker03 evs]#
>>
>> evs is a path on an NFS volume that I have added directly to some
>> deployment configs
>>
>>  -
>>   name: evs
>>   nfs:
>> server: 
>> path: /poc_runtime/evs
>>
>> If I stop the origin-node service on one particular node the file doesn't
>> disappear.
>>
>> [root@poc-docker03 evs]# touch x
>> [root@poc-docker03 evs]# ls
>> x
>> [root@poc-docker03 evs]#
>>
>> When I restart the origin-node service I see a lot of errors like this
>>
>>  Failed cleaning pods: [remove
>> /var/lib/origin/openshift.local.volumes/pods/1b7e3a16-ab08-11e6-8618-005056915814/volumes/kubernetes.io~nfs
>> device or resource busy
>>  Failed to remove orphaned pod x dir; err: remove
>> /var/lib/origin/openshift.local.volumes/pods//volumes/kubernetes.io~nfs/*evs*:
>> device or resource busy
>>
>> Despite the fact that the error says that it couldn't remove it, what
>> exactly is it trying to do here? Is it possible that this process
>> previously deleted the data in the evs folder?
>>
>>
>>
>>
>> On 18 November 2016 at 16:45, Lionel Orellana  wrote:
>>
>>> What about NFS volumes added directly in build configs.
>>>
>>> volumes:
>>> -
>>>   name: jenkins-volume-1
>>>   nfs:
>>> server: 
>>> path: /poc_runtime/jenkins/home
>>>
>>>
>>> We just restarted all the servers hosting my openshift cluster and all
>>> the data in the path above disappeared. Simply by restarting the host VM!
>>>
>>>
>>>
>>> On 18 November 2016 at 16:19, Lionel Orellana 
>>> wrote:
>>>
 Thanks Mark

 On 18 November 2016 at 15:09, Mark Turansky 
 wrote:

>
>
> On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
> wrote:
>
>> Hi,
>>
>> Couple of questions regarding Persistent Volumes, in particular NFS
>> ones.
>>
>> 1) If I have a PV configured with the Retain policy it is not clear
>> to me how this PV can be reused after the bound PVC is deleted. Deleting
>> the PVC makes the PV status "Released". How do I make it "Available" 
>> again
>> without losing the data?
>>
>
> You can keep the PVC around longer if you intend to reuse it between
> pods. There is no way for a PV to go from Released to Available again in
> your scenario. You would have to delete and recreate the PV. It's a 
> pointer
> to real storage (the NFS share), so you're just recreating the pointer. 
> The
> data in the NFS volume itself is untouched.
>
>
>
>>
>> 2) Is there anything (e.g. all nodes crashing due to some underlying
>> infrastructure failure) that would cause the data in a "Retain" volume to
>> be wiped out? We had a problem with all our vmware servers  (where I host
>> my openshift POC)  and all my NFS mounted volumes were wiped out. The
>> storage guys assure me that nothing at their end caused that and it must
>> have been a running process that did it.
>>
>
> "Retain" is just a flag to the recycling process to leave that PV
> alone when it's Released. The PV's retention policy wouldn't cause
> everything to be deleted. NFS volumes on the node are no different than if
> you called "mount" yourself. There is nothing inherent in OpenShift itself
> that is running in that share that would wipe out data.
>
>
>
>>
>> Thanks
>>
>> Lionel.
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>

>>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: PV manual reclamation and recycling

2016-11-17 Thread Lionel Orellana
Files in other dirs in the same NFS server don't get deleted (e.g. <server name>/poc_runtime/test/)

There is something in my Openshift node deleting files in <server name>/poc_runtime/evs as soon as I put them there!

On 18 November 2016 at 18:04, Lionel Orellana  wrote:

>
> In fact, whatever is deleting my files is still doing it:
>
> [root@poc-docker03 evs]# touch x
> [root@poc-docker03 evs]# ls
> [root@poc-docker03 evs]#
>
> evs is a path on an NFS volume that I have added directly to some
> deployment configs
>
>  -
>   name: evs
>   nfs:
> server: 
> path: /poc_runtime/evs
>
> If I stop the origin-node service on one particular node the file doesn't
> disappear.
>
> [root@poc-docker03 evs]# touch x
> [root@poc-docker03 evs]# ls
> x
> [root@poc-docker03 evs]#
>
> When I restart the origin-node service I see a lot of errors like this
>
>  Failed cleaning pods: [remove
> /var/lib/origin/openshift.local.volumes/pods/1b7e3a16-ab08-11e6-8618-005056915814/volumes/kubernetes.io~nfs
> device or resource busy
>  Failed to remove orphaned pod x dir; err: remove
> /var/lib/origin/openshift.local.volumes/pods//volumes/kubernetes.io~nfs/*evs*:
> device or resource busy
>
> Despite the fact that the error says that it couldn't remove it, what
> exactly is it trying to do here? Is it possible that this process
> previously deleted the data in the evs folder?
>
>
>
>
> On 18 November 2016 at 16:45, Lionel Orellana  wrote:
>
>> What about NFS volumes added directly in build configs.
>>
>> volumes:
>> -
>>   name: jenkins-volume-1
>>   nfs:
>> server: 
>> path: /poc_runtime/jenkins/home
>>
>>
>> We just restarted all the servers hosting my openshift cluster and all
>> the data in the path above disappeared. Simply by restarting the host VM!
>>
>>
>>
>> On 18 November 2016 at 16:19, Lionel Orellana  wrote:
>>
>>> Thanks Mark
>>>
>>> On 18 November 2016 at 15:09, Mark Turansky  wrote:
>>>


 On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
 wrote:

> Hi,
>
> Couple of questions regarding Persistent Volumes, in particular NFS
> ones.
>
> 1) If I have a PV configured with the Retain policy it is not clear to
> me how this PV can be reused after the bound PVC is deleted. Deleting the
> PVC makes the PV status "Released". How do I make it "Available" again
> without losing the data?
>

 You can keep the PVC around longer if you intend to reuse it between
 pods. There is no way for a PV to go from Released to Available again in
 your scenario. You would have to delete and recreate the PV. It's a pointer
 to real storage (the NFS share), so you're just recreating the pointer. The
 data in the NFS volume itself is untouched.



>
> 2) Is there anything (e.g. all nodes crashing due to some underlying
> infrastructure failure) that would cause the data in a "Retain" volume to
> be wiped out? We had a problem with all our vmware servers  (where I host
> my openshift POC)  and all my NFS mounted volumes were wiped out. The
> storage guys assure me that nothing at their end caused that and it must
> have been a running process that did it.
>

 "Retain" is just a flag to the recycling process to leave that PV alone
 when it's Released. The PV's retention policy wouldn't cause everything to
 be deleted. NFS volumes on the node are no different than if you called
 "mount" yourself. There is nothing inherent in OpenShift itself that is
 running in that share that would wipe out data.



>
> Thanks
>
> Lionel.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>

>>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: PV manual reclamation and recycling

2016-11-17 Thread Lionel Orellana
In fact, whatever is deleting my files is still doing it:

[root@poc-docker03 evs]# touch x
[root@poc-docker03 evs]# ls
[root@poc-docker03 evs]#

evs is a path on an NFS volume that I have added directly to some
deployment configs

 -
   name: evs
   nfs:
     server: <server name>
     path: /poc_runtime/evs
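
For context, a volume entry like the one above only takes effect where it is also mounted into a container. A rough sketch of the surrounding deployment config fragment (the container name, image and mount path here are placeholders; only the volume itself is taken from above):

  spec:
    template:
      spec:
        containers:
          - name: app                    # placeholder container
            image: example/app:latest    # placeholder image
            volumeMounts:
              - name: evs
                mountPath: /data/evs     # placeholder mount path
        volumes:
          - name: evs
            nfs:
              server: nfs.example.com    # placeholder NFS server
              path: /poc_runtime/evs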

If I stop the origin-node service on one particular node the file doesn't
disappear.

[root@poc-docker03 evs]# touch x
[root@poc-docker03 evs]# ls
x
[root@poc-docker03 evs]#

When I restart the origin-node service I see a lot of errors like this

 Failed cleaning pods: [remove
/var/lib/origin/openshift.local.volumes/pods/1b7e3a16-ab08-11e6-8618-005056915814/volumes/kubernetes.io~nfs
device or resource busy
 Failed to remove orphaned pod x dir; err: remove
/var/lib/origin/openshift.local.volumes/pods//volumes/kubernetes.io~nfs/*evs*:
device or resource busy

Despite the fact that the error says that it couldn't remove it, what
exactly is it trying to do here? Is it possible that this process
previously deleted the data in the evs folder?
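
A quick way to check whether that cleanup is walking into a still-mounted NFS directory (rather than an already-unmounted, empty one) is to look for mounts under the kubelet volume root on the node. The commands below are only a rough sketch, with the pod UID and volume name copied from the log lines above for illustration:

  # list NFS volume mounts still held under the node's volume root
  mount | grep 'kubernetes.io~nfs'

  # check whether one specific pod volume directory is still a mount point
  mountpoint /var/lib/origin/openshift.local.volumes/pods/1b7e3a16-ab08-11e6-8618-005056915814/volumes/kubernetes.io~nfs/evs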




On 18 November 2016 at 16:45, Lionel Orellana  wrote:

> What about NFS volumes added directly in build configs.
>
> volumes:
> -
>   name: jenkins-volume-1
>   nfs:
> server: 
> path: /poc_runtime/jenkins/home
>
>
> We just restarted all the servers hosting my openshift cluster and all the
> data in the path above disappeared. Simply by restarting the host VM!
>
>
>
> On 18 November 2016 at 16:19, Lionel Orellana  wrote:
>
>> Thanks Mark
>>
>> On 18 November 2016 at 15:09, Mark Turansky  wrote:
>>
>>>
>>>
>>> On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
>>> wrote:
>>>
 Hi,

 Couple of questions regarding Persistent Volumes, in particular NFS
 ones.

 1) If I have a PV configured with the Retain policy it is not clear to
 me how this PV can be reused after the bound PVC is deleted. Deleting the
 PVC makes the PV status "Released". How do I make it "Available" again
 without losing the data?

>>>
>>> You can keep the PVC around longer if you intend to reuse it between
>>> pods. There is no way for a PV to go from Released to Available again in
>>> your scenario. You would have to delete and recreate the PV. It's a pointer
>>> to real storage (the NFS share), so you're just recreating the pointer. The
>>> data in the NFS volume itself is untouched.
>>>
>>>
>>>

 2) Is there anything (e.g. all nodes crashing due to some underlying
 infrastructure failure) that would cause the data in a "Retain" volume to
 be wiped out? We had a problem with all our vmware servers  (where I host
 my openshift POC)  and all my NFS mounted volumes were wiped out. The
 storage guys assure me that nothing at their end caused that and it must
 have been a running process that did it.

>>>
>>> "Retain" is just a flag to the recycling process to leave that PV alone
>>> when it's Released. The PV's retention policy wouldn't cause everything to
>>> be deleted. NFS volumes on the node are no different than if you called
>>> "mount" yourself. There is nothing inherent in OpenShift itself that is
>>> running in that share that would wipe out data.
>>>
>>>
>>>

 Thanks

 Lionel.

 ___
 users mailing list
 users@lists.openshift.redhat.com
 http://lists.openshift.redhat.com/openshiftmm/listinfo/users


>>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: PV manual reclamation and recycling

2016-11-17 Thread Lionel Orellana
What about NFS volumes added directly in build configs.

volumes:
  -
    name: jenkins-volume-1
    nfs:
      server: <server name>
      path: /poc_runtime/jenkins/home


We just restarted all the servers hosting my openshift cluster and all the
data in the path above disappeared. Simply by restarting the host VM!



On 18 November 2016 at 16:19, Lionel Orellana  wrote:

> Thanks Mark
>
> On 18 November 2016 at 15:09, Mark Turansky  wrote:
>
>>
>>
>> On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
>> wrote:
>>
>>> Hi,
>>>
>>> Couple of questions regarding Persistent Volumes, in particular NFS
>>> ones.
>>>
>>> 1) If I have a PV configured with the Retain policy it is not clear to
>>> me how this PV can be reused after the bound PVC is deleted. Deleting the
>>> PVC makes the PV status "Released". How do I make it "Available" again
>>> without losing the data?
>>>
>>
>> You can keep the PVC around longer if you intend to reuse it between
>> pods. There is no way for a PV to go from Released to Available again in
>> your scenario. You would have to delete and recreate the PV. It's a pointer
>> to real storage (the NFS share), so you're just recreating the pointer. The
>> data in the NFS volume itself is untouched.
>>
>>
>>
>>>
>>> 2) Is there anything (e.g. all nodes crashing due to some underlying
>>> infrastructure failure) that would cause the data in a "Retain" volume to
>>> be wiped out? We had a problem with all our vmware servers  (where I host
>>> my openshift POC)  and all my NFS mounted volumes were wiped out. The
>>> storage guys assure me that nothing at their end caused that and it must
>>> have been a running process that did it.
>>>
>>
>> "Retain" is just a flag to the recycling process to leave that PV alone
>> when it's Released. The PV's retention policy wouldn't cause everything to
>> be deleted. NFS volumes on the node are no different than if you called
>> "mount" yourself. There is nothing inherent in OpenShift itself that is
>> running in that share that would wipe out data.
>>
>>
>>
>>>
>>> Thanks
>>>
>>> Lionel.
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: PV manual reclamation and recycling

2016-11-17 Thread Lionel Orellana
Thanks Mark

On 18 November 2016 at 15:09, Mark Turansky  wrote:

>
>
> On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana 
> wrote:
>
>> Hi,
>>
>> Couple of questions regarding Persistent Volumes, in particular NFS ones.
>>
>> 1) If I have a PV configured with the Retain policy it is not clear to me
>> how this PV can be reused after the bound PVC is deleted. Deleting the PVC
>> makes the PV status "Released". How do I make it "Available" again without
>> losing the data?
>>
>
> You can keep the PVC around longer if you intend to reuse it between pods.
> There is no way for a PV to go from Released to Available again in your
> scenario. You would have to delete and recreate the PV. It's a pointer to
> real storage (the NFS share), so you're just recreating the pointer. The
> data in the NFS volume itself is untouched.
>
>
>
>>
>> 2) Is there anything (e.g. all nodes crashing due to some underlying
>> infrastructure failure) that would cause the data in a "Retain" volume to
>> be wiped out? We had a problem with all our vmware servers  (where I host
>> my openshift POC)  and all my NFS mounted volumes were wiped out. The
>> storage guys assure me that nothing at their end caused that and it must
>> have been a running process that did it.
>>
>
> "Retain" is just a flag to the recycling process to leave that PV alone
> when it's Released. The PV's retention policy wouldn't cause everything to
> be deleted. NFS volumes on the node are no different than if you called
> "mount" yourself. There is nothing inherent in OpenShift itself that is
> running in that share that would wipe out data.
>
>
>
>>
>> Thanks
>>
>> Lionel.
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users