Re: ceph pv
If you do see behavior with Ceph locks like that, please file a bug. Most of the serious issues were fixed in 1.3, but we definitely want to ensure no such issues are still occurring.

On Jan 12, 2017, at 6:40 AM, James Wilkins <james.wilk...@fasthosts.com> wrote:

> Out of interest, assuming you're using Ceph RBDs: do you hit an issue whereby the locks don't correctly "move" when a container migrates? Historically we've had to clean up manually with rbd lock list / rbd lock remove to permit the pod to move properly. Admittedly, we haven't tested this since 1.2.
_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
RE: ceph pv
Out of interest, assuming you're using Ceph RBDs: do you hit an issue whereby the locks don't correctly "move" when a container migrates? Historically we've had to clean up manually with rbd lock list / rbd lock remove to permit the pod to move properly. Admittedly, we haven't tested this since 1.2.
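The manual cleanup described above can be sketched roughly as follows (the pool, image name, and locker are illustrative placeholders, not values from this thread):

```shell
# Show who currently holds a lock on the image backing the stuck pod's volume.
rbd lock list rbd/mypv

# Remove the stale lock so the pod can re-attach on its new node.
# "<lock-id>" and "client.4567" come from the 'rbd lock list' output.
rbd lock remove rbd/mypv "<lock-id>" client.4567
```

These commands require a live Ceph cluster and appropriate client credentials, so they are a sketch of the workflow rather than something runnable offline.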
Re: ceph pv
Hello. You can use a PV without worrying about secrets if you create a keyring file on each node at /etc/ceph/ceph.client.openshift.keyring and point the PV object to it:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
  labels:
    size: 1024
spec:
  capacity:
    storage: 1024
  accessModes:
    - "ReadWriteOnce"
  rbd:
    monitors:
      - "osm-0:6789"
      - "osm-1:6789"
      - "osm-2:6789"
    pool: rbd
    image: mypv
    user: openshift
    keyring: /etc/ceph/ceph.client.openshift.keyring
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: "Retain"

For more information on creating a Ceph user:
http://docs.ceph.com/docs/giant/rados/operations/user-management/#managing-users

---
Diego Castro / The CloudFather
GetupCloud.com - Eliminamos a Gravidade

2017-01-09 17:42 GMT-03:00 Philippe Lafoucrière <philippe.lafoucri...@tech-angels.com>:

> On Mon, Jan 9, 2017 at 3:42 AM, James Eckersall <ja...@jeckersall.co.uk> wrote:
>
>> Our use case would be utilisation of OpenShift clusters with untrusted clients in distinct projects, so we're trying to ensure they can't access each other's storage.
>
> We are in the same situation, and we generally let our clients access their projects without permissions for secrets :)
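Per the Ceph user-management link above, the node-local keyring can be produced with something like the following; the capabilities shown are an assumption for illustration (adjust the pool name and permissions for your cluster):

```shell
# Create (or fetch, if it already exists) a Ceph client named 'openshift'
# limited to the 'rbd' pool, and write its keyring to the path the PV
# definition references. Run once against the cluster, then distribute
# the resulting file to every node.
ceph auth get-or-create client.openshift \
    mon 'allow r' \
    osd 'allow rwx pool=rbd' \
    -o /etc/ceph/ceph.client.openshift.keyring
```

This needs admin access to a live Ceph cluster, so it is a sketch of the procedure rather than a verified recipe.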
Re: ceph pv
On Mon, Jan 9, 2017 at 3:42 AM, James Eckersall <ja...@jeckersall.co.uk> wrote:

> Our use case would be utilisation of OpenShift clusters with untrusted clients in distinct projects, so we're trying to ensure they can't access each other's storage.

We are in the same situation, and we generally let our clients access their projects without permissions for secrets :)
ceph pv
Hi,

Looking for some feedback with regards to utilisation of RBD devices as PVs in a multi-tenanted OpenShift platform. At present, it appears you need to reference the secret as follows within a PV declaration:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.122.133:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle

This means the following (unless I'm missing something!):

o) 'ceph-secret' needs to exist within the correct project/namespace that wants to create a PVC against an RBD-backed PV. I can't see a way to have a general secret (for example, located within the openshift namespace).

o) On this basis, the contents of ceph-secret can be read by any project that requires access to the storage system (and can thus expose the keys needed to mount any volume within that pool). Or is there a way to make it so only the OpenShift processes (and not the user) can read the contents of ceph-secret?

Our use case would be utilisation of OpenShift clusters with untrusted clients in distinct projects, so we're trying to ensure they can't access each other's storage.

Any input appreciated - cheers!

James.
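For context, the 'ceph-secret' object referenced above is just a plain v1 Secret whose data.key holds the base64-encoded Ceph client key. A sketch (the key value here is a placeholder, not a real credential):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  # Placeholder: base64 of the Ceph client key, e.g. the output of
  #   ceph auth get-key client.admin | base64
  key: UExBQ0VIT0xERVItTk9ULUEtUkVBTC1LRVk=
```

Because the Secret must live in each consuming project, anyone who can read secrets in that project can read the key, which is exactly the multi-tenancy concern raised here.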