If you can accept the failure domain, we find a 12:1 ratio of SATA spinners
to a 400GB P3700 reasonable. Benchmarks can saturate it, but it is
entirely bored under our real-world workload and only 30-50% utilized during
backfills. I am sure one could push beyond 12:1 if they wanted to, but we
haven't tested that.
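For rough context, a minimal back-of-envelope sketch (Python); the per-drive
and P3700 throughput figures are assumptions in the ballpark of typical
datasheets, not measured numbers:

    # Back-of-envelope journal headroom for 12 SATA OSDs behind one 400GB P3700.
    # All figures are assumptions for illustration, not measurements.
    osds = 12
    p3700_write_mb_s = 1080        # assumed sequential-write spec of a 400GB P3700
    spinner_seq_mb_s = 120         # assumed best-case sequential write per spinner
    typical_per_osd_mb_s = 30      # assumed per-OSD write rate outside benchmarks

    # With filestore, every write hits the journal first, so the NVMe has to
    # absorb the aggregate write rate of all OSDs behind it.
    benchmark_mb_s = osds * spinner_seq_mb_s     # 1440 MB/s -> a benchmark can saturate it
    typical_mb_s = osds * typical_per_osd_mb_s   #  360 MB/s -> roughly a third of the NVMe

    print("benchmark utilization: %3.0f%%" % (100.0 * benchmark_mb_s / p3700_write_mb_s))
    print("typical utilization:   %3.0f%%" % (100.0 * typical_mb_s / p3700_write_mb_s))

The point is simply that only a synthetic all-sequential load drives all 12
spinners flat out at once; real traffic leaves the NVMe with plenty of headroom.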

On Thu, Jul 9, 2015 at 4:47 AM, Götz Reinicke - IT Koordinator <
goetz.reini...@filmakademie.de> wrote:

> Hi Christian,
> On 09.07.15 at 09:36, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Thu, 09 Jul 2015 08:57:27 +0200 Götz Reinicke - IT Koordinator wrote:
> >
> >> Hi again,
> >>
> >> time is passing, and so is my budget :-/ so I have to recheck the options
> >> for a "starter" cluster. An expansion next year, maybe for an OpenStack
> >> installation or for more performance if demands rise, is possible. The
> >> "starter" could always be reused as a test cluster or a slow dark archive.
> >>
> >> At the beginning I was at 16 SATA OSDs with 4 SSDs for journals per node,
> >> but now I'm looking at 12 SATA OSDs without SSD journals. Less
> >> performance, less capacity, I know. But that's OK!
> >>
> > Leave the space to upgrade these nodes with SSDs in the future.
> > If your cluster grows large enough (more than 20 nodes) even a single
> > P3700 might do the trick and will need only a PCIe slot.
>
> If I get you right, the 12-disk setup is not a bad idea; if there turns
> out to be a need for SSD journals, I can add the PCIe P3700.
>
> In the 12 OSD setup I should get 2 P3700s, one per 6 OSDs.
>
> Good or bad idea?
>
> >
> >> There should be 6, maybe 8, nodes with 12 OSDs each and a repl. of 2.
> >>
> > Danger, Will Robinson.
> > This is essentially a RAID5 and you're plain asking for a double disk
> > failure to happen.
>
> Maybe I do not understand that. size = 2 is, I think, more like RAID1
> ... ? And why am I asking for a double disk failure?
>
> Too few nodes or OSDs, or because of the size = 2?
>
> >
> > See this recent thread:
> > "calculating maximum number of disk and node failure that can be handled
> > by cluster with out data loss"
> > for some discussion and a Python script, which you will need to modify
> > for 2-disk replication.
> >
> > With a RAID5 failure calculator you're at 1 data loss event per 3.5
> > years...
> >
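The back-of-envelope behind that kind of calculator is roughly the following;
a minimal sketch with placeholder inputs (the script from that thread and the
RAID5 calculator may well model it differently, e.g. they usually also count
read errors during rebuild, which this ignores):

    # Rough model of data-loss risk with size = 2: data is lost if a second disk
    # holding the other copy of some PG dies while the first failure is still
    # re-replicating. With CRUSH, those copies are spread over most of the other
    # disks, so almost any second failure hurts. All inputs are assumptions.
    disks = 96                # e.g. 8 nodes x 12 OSDs
    afr = 0.04                # assumed annualized failure rate per disk
    recovery_hours = 24.0     # assumed time to fully re-replicate a lost OSD
    hours_per_year = 8766.0

    first_failures_per_year = disks * afr
    # Chance that any of the remaining disks also fails inside the recovery window.
    p_second_during_recovery = (disks - 1) * afr * recovery_hours / hours_per_year

    loss_events_per_year = first_failures_per_year * p_second_during_recovery
    print("expected data-loss events per year: %.3f" % loss_events_per_year)
    print("roughly one event every %.1f years" % (1.0 / loss_events_per_year))
    # The result swings by an order of magnitude with the AFR and recovery-time
    # assumptions, so treat it as the shape of the argument, not a prediction.

The risk scales with the square of the per-disk failure rate times the recovery
window, which is why with size = 2 the time it takes to re-replicate a lost OSD
matters so much.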
>
> Thanks for that thread, but I don't quite get the point of it for my case.
>
> I see that calculating the reliability involves some rather complex math ...
>
> >> The workload I expect is mostly writes: maybe some GB of office files
> >> per day and some TB of larger video files from a few users per week.
> >>
> >> By the end of this year we expect to have roughly 60 to 80 TB of larger
> >> video files in that cluster, which are accessed from time to time.
> >>
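As a quick sanity check on capacity, a rough sketch; the 4 TB drive size and
the 85% fill ceiling are assumptions, so plug in the real drive size:

    # Rough usable-capacity check for the proposed cluster.
    # Drive size and fill ceiling are assumptions for illustration.
    nodes = 6                 # or 8
    osds_per_node = 12
    drive_tb = 4.0            # assumed SATA drive size
    replication = 2           # size = 2 as proposed
    max_fill = 0.85           # stay under the near-full warning ratio

    raw_tb = nodes * osds_per_node * drive_tb
    usable_tb = raw_tb / replication * max_fill
    print("raw: %.0f TB, usable at size=%d: %.0f TB" % (raw_tb, replication, usable_tb))
    # 6 nodes of 4 TB drives -> ~122 TB usable, above the 60-80 TB target.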
> >> Any suggestions on dropping the SSD journals?
> >>
> > You will miss them when the cluster does write, be it from clients or
> > when re-balancing a lost OSD.
>
> I can imagine that I might miss the SSD journals, but if I can add the
> P3700 later, I feel comfortable with that for now. It's budget and
> evaluation related.
>
>         Thanks for your helpful input and feedback. /Götz
>
> --
> Götz Reinicke
> IT Coordinator
>
> Tel. +49 7141 969 82420
> E-Mail goetz.reini...@filmakademie.de
>
> Filmakademie Baden-Württemberg GmbH
> Akademiehof 10
> 71638 Ludwigsburg
> www.filmakademie.de
>
> Registered at Amtsgericht Stuttgart, HRB 205016
>
> Chairman of the Supervisory Board: Jürgen Walter MdL
> State Secretary in the Ministry of Science,
> Research and the Arts of Baden-Württemberg
>
> Managing Director: Prof. Thomas Schadt
>


-- 
David Burley
NOC Manager, Sr. Systems Programmer/Analyst
Slashdot Media

e: da...@slashdotmedia.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
