Re: [ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-23 Thread Mykola Golub
On Tue, Jan 22, 2019 at 01:26:29PM -0800, Void Star Nill wrote:
> Regarding Mykola's suggestion to use Read-Only snapshots, what is the
> overhead of creating these snapshots? I assume these are copy-on-write
> snapshots, so there's no extra space consumed except for the metadata?

Yes.
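
For readers who want to script this, the sketch below creates such a copy-on-write snapshot with the librbd Python bindings. It is only an illustration: the pool name 'rbd', image name 'shared-data', and snapshot name 'ro-snap-1' are placeholders, not names taken from this thread.

# Sketch: create a copy-on-write, protected snapshot of an RBD image
# using the librbd Python bindings. All names below are placeholders.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')            # pool name (assumed)
    try:
        image = rbd.Image(ioctx, 'shared-data')  # image name (assumed)
        try:
            # Creating a snapshot is a metadata-only operation; data
            # blocks stay shared with the image until it is written to.
            image.create_snap('ro-snap-1')
            # Protecting it prevents accidental removal while readers
            # (or clones) still depend on it.
            image.protect_snap('ro-snap-1')
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()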

Re: [ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-22 Thread Void Star Nill
Thanks all for the great advice and inputs.

Regarding Mykola's suggestion to use Read-Only snapshots, what is the overhead of creating these snapshots? I assume these are copy-on-write snapshots, so there's no extra space consumed except for the metadata?

Thanks,
Shridhar

On Fri, 18 Jan 2019

Re: [ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-18 Thread Ilya Dryomov
On Fri, Jan 18, 2019 at 11:25 AM Mykola Golub wrote:
>
> On Thu, Jan 17, 2019 at 10:27:20AM -0800, Void Star Nill wrote:
> > Hi,
> >
> > We are trying to use Ceph in our products to address some of our use cases.
> > We think Ceph block device is a good fit for us. One of the use cases is that we have a

Re: [ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-18 Thread Ilya Dryomov
On Fri, Jan 18, 2019 at 9:25 AM Burkhard Linke wrote:
>
> Hi,
>
> On 1/17/19 7:27 PM, Void Star Nill wrote:
>
> Hi,
>
> We are trying to use Ceph in our products to address some of our use cases. We think Ceph block device is a good fit for us. One of the use cases is that we have a number of jobs running

Re: [ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-18 Thread Mykola Golub
On Thu, Jan 17, 2019 at 10:27:20AM -0800, Void Star Nill wrote:
> Hi,
>
> We are trying to use Ceph in our products to address some of our use cases.
> We think Ceph block device is a good fit for us. One of the use cases is that we have a
> number of jobs running in containers that need to have Read-Only

Re: [ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-18 Thread Burkhard Linke
Hi,

On 1/17/19 7:27 PM, Void Star Nill wrote:

Hi,

We are trying to use Ceph in our products to address some of our use cases. We think Ceph block device is a good fit for us. One of the use cases is that we have a number of jobs running in containers that need to have Read-Only access to shared data. The

Re: [ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-17 Thread Oliver Freyermuth
Hi,

First off: I'm probably not the expert you are waiting for, but we are using CephFS for HPC / HTC (storing data files), and make use of containers for all jobs (up to ~2000 running in parallel). We also use RBD, but for our virtualization infrastructure. While I'm always one of the first

[ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-17 Thread Void Star Nill
Hi,

We are trying to use Ceph in our products to address some of our use cases. We think Ceph block device is a good fit for us. One of the use cases is that we have a number of jobs running in containers that need to have Read-Only access to shared data. The data is written once and is consumed multiple times.
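
As a rough sketch of how consumers could read such write-once data in parallel (reusing the placeholder pool/image/snapshot names from the snapshot example earlier in the thread, which are not from this discussion): each node opens the snapshot read-only through librbd, so any number of readers can work on the same data at once. Hosts that need a filesystem mount instead can map the snapshot read-only with the kernel client (rbd map --read-only) and mount it read-only.

# Sketch: open a read-only snapshot from any number of client nodes
# and read from it in parallel. Placeholder pool/image/snapshot names.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')
    try:
        # read_only=True opens the image at the given snapshot; any
        # attempt to write through this handle will fail.
        image = rbd.Image(ioctx, 'shared-data',
                          snapshot='ro-snap-1', read_only=True)
        try:
            data = image.read(0, 4096)   # read the first 4 KiB
            print('read %d bytes' % len(data))
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()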