Hi,

A couple of questions regarding Persistent Volumes, in particular NFS ones.

1) If I have a PV configured with the Retain policy, it is not clear to me
how this PV can be reused after the bound PVC is deleted. Deleting the PVC
makes the PV status "Released". How do I make it "Available" again without
recreating it?
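(For anyone finding this thread later: one way to make a Released PV
claimable again, assuming the retained data has already been dealt with, is
to remove its claimRef stanza. A sketch with a hypothetical PV name
pv-nfs-01 -- substitute your own:)

```shell
# Clearing claimRef returns the PV to "Available" without
# touching the retained data on the NFS export.
oc patch pv pv-nfs-01 --type=json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'

# STATUS column should now show Available.
oc get pv pv-nfs-01
```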
On Thu, Nov 17, 2016 at 10:41 PM, Lionel Orellana
wrote:
> 1) If I have a PV configured with the Retain policy it is not clear to me
> how this PV can be reused after the bound PVC is deleted.
Thanks Mark
On 18 November 2016 at 15:09, Mark Turansky wrote:
What about NFS volumes added directly in build configs?

volumes:
  - name: jenkins-volume-1
    nfs:
      server:
      path: /poc_runtime/jenkins/home

We just restarted all the servers hosting my OpenShift cluster and all the
data in the path above disappeared.
In fact, whatever is deleting my files is still doing it:
[root@poc-docker03 evs]# touch x
[root@poc-docker03 evs]# ls
[root@poc-docker03 evs]#
evs is a path on an NFS volume that I have added directly to some
deployment configs:

  - name: evs
    nfs:
      server:

Files in other dirs on the same NFS server don't get deleted (e.g.
/poc_runtime/test/). There is something in my OpenShift node deleting files
in /poc_runtime/evs as soon as I put them there!
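(In case it helps others hitting this: the deletions described here turned
out to be the PV recycler, per the bug linked later in the thread. A quick
way to check whether a recycler pod is the culprit is to look for one while
the deletion is happening; the exact pod name and namespace vary by
release, so treat the grep pattern as an assumption:)

```shell
# Recycler pods scrub the volume with rm -rf; their names
# usually contain "recycler".
oc get pods --all-namespaces | grep -i recycler

# Also check the PV's events for recycle activity
# (<pv-name> is a placeholder):
oc describe pv <pv-name> | grep -i recycle
```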
On 18 November 2016 at 18:04, Lionel Orellana wrote:
> In fact, whatever is deleting my files is still doing it:
This sounds very very familiar:
https://github.com/kubernetes/kubernetes/issues/30637
Particularly comment:
https://github.com/kubernetes/kubernetes/issues/30637#issuecomment-243276076
That is a nasty bug. How can I upgrade Kubernetes in my cluster?
My current versions are:
-bash-4.2$ oc version
Good find on that bug. Our upgrade guide can help you get started on a fix.
https://docs.openshift.com/container-platform/3.3/install_config/upgrading/index.html
Mark
On Fri, Nov 18, 2016 at 3:13 AM, Lionel Orellana wrote:
>
> This sounds very very familiar:
> https://github.com/kubernetes/kubernetes/issues/30637
So the fix is in Kubernetes 1.3.6. The upgrade guide you mention is for
OpenShift as a whole, unless I'm missing something.
On Sat., 19 Nov. 2016 at 12:29 am, Mark Turansky
wrote:
> Good find on that bug. Our upgrade guide can help you get started on a
> fix.
> https://docs.openshift.com/container-platform/3.3/install_config/upgrading/index.html
OpenShift is a distribution of Kubernetes, so I don't think you can upgrade
Kubernetes without upgrading OpenShift.
On Fri, Nov 18, 2016 at 1:52 PM, Lionel Orellana wrote:
> So the fix is on Kubernetes 1.3.6. The upgrade guide you mention is for
> Openshift as a whole unless I'm missing something.
The only "released" version of OpenShift that includes Kubernetes 1.3.6 is
v1.4.0-alpha.1. I don't want to upgrade to an alpha1 release.
Can I request a patch of OpenShift Origin to include Kubernetes 1.3.6 or
higher? (The Kubernetes 1.3 branch is up to 1.3.10.)
On 19 November 2016 at 07:26, Ale
This is a pretty bad issue in Kubernetes. We are talking about deleting
data from NFS volumes. Lucky for me I'm just doing a POC. Is this not
considered bad enough to warrant a patch release for Origin 1.3.x?
Cheers
Lionel.
On 19 November 2016 at 07:38, Lionel Orellana wrote:
> The only "released" version of Openshift that includes Kubernetes 1.3.6
> is v1.4.0-alpha.1.
It's likely; we don't have an ETA yet while the scope of the pick is
assessed.
On Thu, Nov 24, 2016 at 5:52 PM, Lionel Orellana wrote:
> This is a pretty bad issue in Kubernetes. We are talking about deleting
> data from NFS volumes. Lucky for me I'm just doing a POC. Is this not
> considered bad enough to warrant a patch release for Origin 1.3.x?
Thanks Clayton. Keep us posted.
On Wed., 30 Nov. 2016 at 2:48 am, Clayton Coleman
wrote:
> It's likely, don't have an eta yet while the scope of the pick is assessed.
Hello,

Has anyone been able to get a wildcard cert chain working successfully in
an OSE 3.3 HA configuration?

I believe my issue resides in the way I'm encoding the PEM file and
presenting it with Ansible. Any help would be greatly appreciated.

Current config is 3 masters/etcd, 3 nodes.
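(Not sure this is your problem, but the usual failure mode is chain
ordering in the PEM file. A sketch of the conventional ordering and the
openshift-ansible inventory variable; all filenames and hostnames below
are illustrative, not from your setup:)

```shell
# PEM order: leaf certificate first, then intermediates
# (the root CA is optional and often omitted).
cat wildcard.example.com.crt intermediate.crt > fullchain.crt

# Inventory fragment for openshift-ansible:
# openshift_master_named_certificates=[{"certfile": "/path/fullchain.crt",
#   "keyfile": "/path/wildcard.example.com.key",
#   "names": ["master.example.com"]}]
```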