I am not sure about the enterprise-grade drives and the provisioning, but for
the Intel 520s I've got the 240GB models (the 240GB is a bit faster than the
120GB), and I've left 50% of each drive unpartitioned. I've got 10GB per
journal and I am using 4 OSDs per SSD.
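For anyone following along, a quick back-of-the-envelope check of that layout
in Python (the figures are simply the numbers above, nothing measured):

# Sketch of the layout described above: a 240GB Intel 520 carrying
# 4 x 10GB journal partitions, with ~50% of the drive left unpartitioned
# as spare area for the controller.
ssd_capacity_gb = 240      # Intel 520, 240GB model
journals_per_ssd = 4       # one journal per OSD sharing the SSD
journal_size_gb = 10       # per-journal partition size
spare_fraction = 0.50      # fraction intentionally left unpartitioned

journal_space_gb = journals_per_ssd * journal_size_gb
spare_gb = ssd_capacity_gb * spare_fraction
headroom_gb = ssd_capacity_gb - journal_space_gb - spare_gb
print(f"journal partitions:  {journal_space_gb} GB")   # 40 GB
print(f"unpartitioned spare: {spare_gb:.0f} GB")       # 120 GB
print(f"remaining headroom:  {headroom_gb:.0f} GB")    # 80 GB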

Andrei 

----- Original Message -----

> From: "Tony Harris" <neth...@gmail.com>
> To: "Andrei Mikhailovsky" <and...@arhont.com>
> Cc: ceph-users@lists.ceph.com, "Christian Balzer" <ch...@gol.com>
> Sent: Sunday, 1 March, 2015 8:49:56 PM
> Subject: Re: [ceph-users] SSD selection

> OK, any size suggestion? Can I get a 120GB and be OK? I see I can get the
> DC S3500 120GB for under $120/drive, so it's possible to get 6 of them...

> -Tony

> On Sun, Mar 1, 2015 at 12:46 PM, Andrei Mikhailovsky <and...@arhont.com>
> wrote:

> > I would not use a single SSD for 5 OSDs. I would recommend 3-4 OSDs max
> > per SSD, or you will get a bottleneck on the SSD side.
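A rough way to see why (the throughput figures below are illustrative
assumptions of mine, not Andrei's): every write lands in the journal on the
SSD before it reaches the HDDs, so one SSD has to absorb the combined write
throughput of every OSD behind it.

# Back-of-the-envelope check with assumed speeds; adjust to your hardware.
ssd_seq_write_mb_s = 450   # assumed sequential write speed of the journal SSD
hdd_write_mb_s = 100       # assumed sustained write speed of one HDD OSD

for osds_per_ssd in (3, 4, 5):
    demand = osds_per_ssd * hdd_write_mb_s
    verdict = "SSD is the bottleneck" if demand > ssd_seq_write_mb_s else "OK"
    print(f"{osds_per_ssd} OSDs -> {demand} MB/s needed "
          f"vs {ssd_seq_write_mb_s} MB/s SSD: {verdict}")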

> > I've had a reasonable experience with the Intel 520 SSDs (which are no
> > longer produced). I've found the Samsung 840 Pro to be horrible!

> > Otherwise, it seems that everyone here recommends the DC S3500 or
> > DC S3700, as they have the best wear-per-$ ratio of all the drives.

> > Andrei

> > > From: "Tony Harris" <neth...@gmail.com>
> > > To: "Christian Balzer" <ch...@gol.com>
> > > Cc: ceph-users@lists.ceph.com
> > > Sent: Sunday, 1 March, 2015 4:19:30 PM
> > > Subject: Re: [ceph-users] SSD selection

> > > Well, although I have 7 now per node, you make a good point, and I'm
> > > in a position where I can either increase to 8 and split them 4/4
> > > across 2 SSDs, or reduce to 5 and use a single SSD per node (the
> > > system is not in production yet).

> > > Do all the DC lines have power-loss caps in them, or just the DC S
> > > line?

> > > -Tony

> > > On Sat, Feb 28, 2015 at 11:21 PM, Christian Balzer <ch...@gol.com>
> > > wrote:

> > > > On Sat, 28 Feb 2015 20:42:35 -0600 Tony Harris wrote:

> > > > > Hi all,
> > > > >
> > > > > I have a small cluster together and it's running fairly well
> > > > > (3 nodes, 21 OSDs). I'm looking to improve the write performance a
> > > > > bit though, which I was hoping that using SSDs for journals would
> > > > > do. But I was wondering what people had as recommendations for
> > > > > SSDs to act as journal drives. If I read the docs on ceph.com
> > > > > correctly, I'll need 2 SSDs per node (with 7 drives in each node,
> > > > > I think the recommendation was 1 SSD per 4-5 drives?), so I'm
> > > > > looking for drives that will work well without breaking the bank
> > > > > for where I work (I'll probably have to purchase them myself and
> > > > > donate, so my budget is somewhat small). Any suggestions? I'd
> > > > > prefer one that can finish its writes in a power-outage case; the
> > > > > only one I know of off hand is the Intel DC S3700, I think, but at
> > > > > $300 it's WAY above my affordability range.

> > > > Firstly, an uneven number of OSDs (HDDs) per node will bite you in
> > > > the proverbial behind down the road when combined with journal SSDs,
> > > > as one of those SSDs will wear out faster than the other.
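To put a rough number on that point (the 4/3 split and the arithmetic are
mine, purely illustrative): with 7 OSDs per node spread over two journal
SSDs, the busier SSD takes a proportionally larger share of the journal
writes and wears faster.

# Relative wear under an uneven 4/3 journal split, assuming all OSDs see a
# similar write mix.
osds_per_node = 7
journals_per_ssd = (4, 3)   # uneven split across the two SSDs

shares = [n / osds_per_node for n in journals_per_ssd]
print(f"journal write share: {shares[0]:.0%} vs {shares[1]:.0%}")   # 57% vs 43%
ratio = journals_per_ssd[0] / journals_per_ssd[1] - 1
print(f"busier SSD wears roughly {ratio:.0%} faster")               # ~33%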

> > > > Secondly, how many SSDs you need is basically a trade-off between
> > > > price, performance, endurance and limiting failure impact.

> > > > I have a cluster where I used 4 100GB DC S3700s with 8 HDD OSDs,
> > > > optimizing the write paths, IOPS and failure domain, but not the
> > > > sequential speed or cost.

> > > > Depending on what your write load is and the expected lifetime of
> > > > this cluster, you might be able to get away with DC S3500s or, even
> > > > better, the new DC S3610s.

> > > > Keep in mind that buying a cheap, low-endurance SSD now might cost
> > > > you more down the road if you have to replace it after a year
> > > > (TBW/$).
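One way to compare drives on that basis (note that the endurance and price
figures below are placeholders, not real specs; substitute the datasheet TBW
ratings and current street prices before drawing any conclusions):

# Hypothetical TBW-per-dollar comparison with made-up figures.
drives = {
    "cheap consumer SSD": {"tbw": 70, "price_usd": 90},
    "DC-class SSD": {"tbw": 900, "price_usd": 250},
}

for name, d in drives.items():
    print(f"{name}: {d['tbw'] / d['price_usd']:.2f} TB written per $")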

> > > > All the cheap alternatives to DC-level SSDs tend to wear out too
> > > > fast, have no power caps, and tend to have unpredictable (caused by
> > > > garbage collection) and steadily decreasing performance.

> > > > Christian
> > > > --
> > > > Christian Balzer    Network/Systems Engineer
> > > > ch...@gol.com       Global OnLine Japan/Fusion Communications
> > > > http://www.gol.com/

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
