Re: [ceph-users] Ceph RBD latencies

2016-03-06 Thread Christian Balzer
Hello, On Mon, 7 Mar 2016 00:38:46 + Adrian Saul wrote: > > >The Samsungs are the 850 2TB > > > (MZ-75E2T0BW). Chosen primarily on price. > > > > These are spec'ed at 150TBW, or an amazingly low 0.04 DWPD (over 5 > > years). Unless you have a read-only cluster, you will wind up spending >
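The 0.04 DWPD figure follows directly from the quoted endurance rating; as a quick sanity check (assuming the 150 TBW rating, 2 TB capacity and a 5-year warranty period mentioned above):

    # Endurance sanity check for the figures quoted above:
    # 150 TBW rated endurance, 2 TB capacity, 5-year warranty period.
    capacity_tb = 2.0
    rated_tbw = 150.0
    warranty_days = 5 * 365

    dwpd = rated_tbw / (capacity_tb * warranty_days)
    print(f"DWPD ~= {dwpd:.3f}")  # ~= 0.041 drive writes per day

For comparison, write-oriented datacenter SSDs are typically rated in the 1-10 DWPD range, which is why they are the usual choice for journal-heavy workloads.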

Re: [ceph-users] Ceph RBD latencies

2016-03-06 Thread Adrian Saul
> >The Samsungs are the 850 2TB > > (MZ-75E2T0BW). Chosen primarily on price. > > These are spec'ed at 150TBW, or an amazingly low 0.04 DWPD (over 5 years). > Unless you have a read-only cluster, you will wind up spending MORE on > replacing them (and/or losing data when 2 fail at the same time)

Re: [ceph-users] Ceph RBD latencies

2016-03-04 Thread Christian Balzer
Hello, On Thu, 3 Mar 2016 23:26:13 + Adrian Saul wrote: > > > Samsung EVO... > > Which exact model, I presume this is not a DC one? > > > > If you had put your journals on those, you would already be pulling > > your hair out due to abysmal performance. > > > > Also with Evo ones, I'd be

Re: [ceph-users] Ceph RBD latencies

2016-03-03 Thread Adrian Saul
> Samsung EVO... > Which exact model, I presume this is not a DC one? > > If you had put your journals on those, you would already be pulling your hair > out due to abysmal performance. > > Also with Evo ones, I'd be worried about endurance. No, I am using the P3700DCs for journals. The

Re: [ceph-users] Ceph RBD latencies

2016-03-03 Thread Nick Fisk
> I think the latency comes from journal flushing > > Try tuning > > filestore min syn

Re: [ceph-users] Ceph RBD latencies

2016-03-03 Thread Jan Schermer
I think the latency comes from journal flushing. Try tuning: filestore min sync interval = .1, filestore max sync interval = 5, and also /proc/sys/vm/dirty_bytes (I suggest 512MB) and /proc/sys/vm/dirty_background_bytes (I suggest 256MB). See if that helps. It would be useful to see the job you are
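As a rough sketch, the suggested tuning would look something like the following; the ceph.conf section placement is an assumption, and the sysctl values are simply the suggested sizes expressed in bytes:

    # ceph.conf ([osd] section assumed) -- values as suggested above
    filestore min sync interval = .1
    filestore max sync interval = 5

    # /etc/sysctl.conf equivalents of the /proc/sys/vm knobs (bytes)
    vm.dirty_bytes = 536870912              # 512 MB
    vm.dirty_background_bytes = 268435456   # 256 MB

Shrinking dirty_bytes/dirty_background_bytes bounds how much dirty page-cache data can accumulate before writeback kicks in, which smooths out the large flush spikes that show up as periodic latency.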

Re: [ceph-users] Ceph RBD latencies

2016-03-03 Thread RDS
A couple of suggestions: 1) # of pgs per OSD should be 100-200 2) When dealing with SSD or Flash, performance of these devices hinges on how you partition them and how you tune Linux: a) if using partitions, did you align the partitions on a 4k boundary? I start at sector 2048 using
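On the first point, the usual per-OSD guideline translates into a simple calculation; a hypothetical helper (the OSD count, target PGs per OSD and pool size below are illustrative assumptions, not values from the thread):

    # Rough PG sizing following the common 100-200 PGs-per-OSD guideline
    def suggested_pg_num(osd_count, pgs_per_osd=100, pool_size=3):
        raw = osd_count * pgs_per_osd / pool_size
        pg_num = 1
        while pg_num < raw:   # round up to the next power of two
            pg_num *= 2
        return pg_num

    print(suggested_pg_num(12))   # e.g. 12 OSDs, 3x replication -> 512

On the alignment point, starting at sector 2048 with 512-byte sectors is a 1 MiB offset, which also satisfies 4k alignment.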

Re: [ceph-users] Ceph RBD latencies

2016-03-03 Thread Christian Balzer
Hello, On Thu, 3 Mar 2016 07:41:09 + Adrian Saul wrote: > Hi Ceph-users, > > TL;DR - I can't seem to pin down why an unloaded system with flash based > OSD journals has higher than desired write latencies for RBD devices. > Any ideas? > > > I am developing a storage system based on
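Later in the thread the actual benchmark job is asked for; for reference, a minimal sketch of the kind of fio job that could be used to measure small synchronous write latency against a mapped RBD device (the device path, block size and runtime are assumptions, not taken from the original post):

    ; hypothetical fio job -- point filename at the mapped RBD device under test
    [global]
    ioengine=libaio
    direct=1
    rw=randwrite
    bs=4k
    iodepth=1
    runtime=60
    time_based=1

    [rbd-write-latency]
    filename=/dev/rbd0

With iodepth=1 and direct=1, the completion latency fio reports approximates the per-write round trip through the RBD and OSD path, which is the number the original post is concerned with.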