Hello,

On Wed, 1 Oct 2014 13:31:38 +0200 Martin B Nielsen wrote:

> Hi,
> 
> We settled on Samsung 840 Pro 240GB drives 1½ years ago and we've been
> happy so far. We've over-provisioned them a lot (left 120GB
> unpartitioned).
> 
> We have 16x 240GB and 32x 500GB - we've lost 1x 500GB so far.
> 
> smartctl states something like
> Wear = 092%, Hours = 12883, Datawritten = 15321.83 TB avg on those. I
> think that is ~30TB/day if I'm doing the calc right.
>
Something very much does not add up there.
Either you've written 15321.83 GB on those drives, making it about
30GB/day and well within the Samsung specs, or you've written 10-20 times
the expected TBW level of those drives...
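
To make that concrete, here's the arithmetic behind both readings (a
minimal sketch in Python, using only the figures quoted from Martin's
mail above):

    # Quick check of the smartctl figures quoted above.
    power_on_hours = 12883
    days = power_on_hours / 24      # ~537 days powered on
    written = 15321.83              # the reported "Datawritten" value

    # Reading 1: the counter really is in TB, as printed.
    print(f"{written / days:.1f} TB/day")  # ~28.5 TB/day, matching the ~30TB/day calc

    # Reading 2: the counter is actually GB (off by a factor of 1000).
    print(f"{written / days:.1f} GB/day")  # ~28.5 GB/day, within the Samsung specs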

In the article I mentioned previously:
http://www.anandtech.com/show/8239/update-on-samsung-850-pro-endurance-vnand-die-size

The author clearly establishes a relationship between durability and SSD
size, as one would expect. But the Samsung homepage just states 150TBW
for all those models...
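
As a back-of-the-envelope model of why endurance should scale with size:
rated TBW is roughly capacity times P/E cycles divided by write
amplification, since every extra NAND die adds cycles to spread writes
over. A sketch in Python; the 3000 P/E cycles (typical for consumer MLC)
and the write amplification of 2 are illustrative assumptions, not
Samsung's numbers:

    # Rough endurance model: expected TBW should grow with capacity.
    PE_CYCLES = 3000      # assumed P/E cycles per cell (consumer MLC)
    WRITE_AMP = 2.0       # assumed average write amplification

    for capacity_gb in (128, 256, 512, 1024):
        tbw = capacity_gb * PE_CYCLES / WRITE_AMP / 1000   # in TB
        print(f"{capacity_gb:>4} GB drive -> ~{tbw:.0f} TBW expected")

Under those assumptions even a 128GB model comes out near 190TBW, and
every doubling of capacity doubles that, so a flat 150TBW across the
whole range reads like a warranty figure rather than a physical limit.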

Christian

> Not to advertise or say every samsung 840 ssd is like this:
> http://www.vojcik.net/samsung-ssd-840-endurance-destruct-test/
>
Seen it before, but I have a feeling that this test doesn't quite put the
same strain on the poor NANDs as Emmanuel's environment does.
 
Christian

> Cheers,
> Martin
> 
> 
> On Wed, Oct 1, 2014 at 10:18 AM, Christian Balzer <ch...@gol.com> wrote:
> 
> > On Wed, 1 Oct 2014 09:28:12 +0200 Kasper Dieter wrote:
> >
> > > On Tue, Sep 30, 2014 at 04:38:41PM +0200, Mark Nelson wrote:
> > > > On 09/29/2014 03:58 AM, Dan Van Der Ster wrote:
> > > > > Hi Emmanuel,
> > > > > This is interesting, because we've had sales guys telling us that
> > > > > those Samsung drives are definitely the best for a Ceph journal
> > > > > O_o !
> > > >
> > > > Our sales guys or Samsung sales guys?  :)  If it was ours, let me
> > > > know.
> > > >
> > > > > The conventional wisdom has been to use the Intel DC S3700
> > > > > because of its massive durability.
> > > >
> > > > The S3700 is definitely one of the better drives on the market for
> > > > Ceph journals.  Some of the higher end PCIE SSDs have pretty high
> > > > durability (and performance) as well, but cost more (though you can
> > > > save SAS bay space, so it's a trade-off).
> > > Intel P3700 could be an alternative with 10 Drive-Writes/Day for 5
> > > years (see attachment)
> > >
> > They're certainly nice and competitively priced (TBW/$ wise at least).
> > However, as I said in another thread, once your SSDs start to outlive
> > your planned server deployment time (in our case 5 years), that's
> > probably good enough.
> >
> > It's all about finding the balance between cost, speed (BW and IOPS),
> > durability and space.
> >
> > For example, I'm currently building a cluster based on 2U, 12 hotswap
> > bay servers (because I already had 2 floating around) and am using 4x
> > 100GB DC S3700 (at US$200 each) and 8 HDDs in them.
> > Putting in a 400GB DC P3700 (US$1200) instead and 4 more HDDs would
> > have pushed me over the budget and left me with a less than 30% "used"
> > SSD 5 years later, at a time when we clearly can expect these things
> > to be massively faster and cheaper.
> >
> > Now if you actually have a cluster that would wear out a P3700 in
> > 5 years (or you're planning to run your machines until they burst into
> > flames), then that's another story. ^.^
> >
> > Christian
> >
> > > -Dieter
> > >
> > > >
> > > > >
> > > > > Anyway, I'm curious what the SMART counters say on your SSDs...
> > > > > are they really failing due to worn-out P/E cycles or is it
> > > > > something else?
> > > > >
> > > > > Cheers, Dan
> > > > >
> > > > >
> > > > >> On 29 Sep 2014, at 10:31, Emmanuel Lacour
> > > > >> <elac...@easter-eggs.com> wrote:
> > > > >>
> > > > >>
> > > > >> Dear ceph users,
> > > > >>
> > > > >>
> > > > >> we have been managing ceph clusters for a year now. Our setup
> > > > >> typically consists of Supermicro servers with OSD sata drives and
> > > > >> journals on SSD.
> > > > >>
> > > > >> Those SSDs are all failing one after the other after one year :(
> > > > >>
> > > > >> We used Samsung 850 pro (120GB) drives in two setups (small nodes
> > > > >> with 2 SSDs, 2 HDs in 1U):
> > > > >>
> > > > >> 1) raid 1 :( (bad idea, each SSD supports all the OSDs' journal
> > > > >> writes :()
> > > > >> 2) raid 1 for OS (nearly no writes) and dedicated partitions for
> > > > >> journals (one per OSD)
> > > > >>
> > > > >>
> > > > >> I'm convinced that the second setup is better, and we are
> > > > >> migrating old setups to it.
> > > > >>
> > > > >> Though, statistics show 60GB (option 2) to 100GB (option 1) of
> > > > >> writes per day per SSD on a not really overloaded cluster.
> > > > >> Samsung claims to give a 5-year warranty only if under 40GB/day.
> > > > >> Those numbers seem very low to me.
> > > > >>
> > > > >> What are your experiences with this? What write volumes do you
> > > > >> encounter, on which SSD models, with which setup, and what MTBF?
> > > > >>
> > > > >>
> > > > >> --
> > > > >> Easter-eggs                              Spécialiste GNU/Linux
> > > > >> 44-46 rue de l'Ouest  -  75014 Paris  -  France -  Métro Gaité
> > > > >> Phone: +33 (0) 1 43 35 00 37    -   Fax: +33 (0) 1 43 35 00 76
> > > > >> mailto:elac...@easter-eggs.com  -   http://www.easter-eggs.com
> >
> >
> > --
> > Christian Balzer        Network/Systems Engineer
> > ch...@gol.com           Global OnLine Japan/Fusion Communications
> > http://www.gol.com/
> >


-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
