Thank you.
Here I have NVMes from Intel, but as Intel support for these NVMes is not
available, we decided not to use them as journals.
Btw, if we split this SSD across multiple OSDs (for example, 1 SSD serving
2 or 4 OSDs), would this help the performance numbers?

On Sun, Aug 20, 2017 at 9:33 AM, Christian Balzer <ch...@gol.com> wrote:
>
> Hello,
>
> On Sat, 19 Aug 2017 23:22:11 +0530 M Ranga Swami Reddy wrote:
>
>> SSD make details : SSD 850 EVO 2.5" SATA III 4TB Memory & Storage -
>> MZ-75E4T0B/AM | Samsung
>>
> And there's your answer.
>
> A bit of googling in the archives here would have shown you that these are
> TOTALLY unsuitable for use with Ceph.
> Not only because of the horrid speed when used with/for Ceph journaling
> (direct/sync I/O) but also their abysmal endurance of 0.04 DWPD over 5
> years.
> Or in other words 160GB/day, which, after the Ceph journal double writes,
> FS journals, other overhead, and write amplification in general,
> probably means less than an effective 40GB/day.
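The endurance arithmetic above can be sketched out as follows; the 2x journal double-write is from the text, while the additional ~2x for FS journals and general write amplification is an illustrative assumption:

```python
# Rough endurance math for the 4TB 850 EVO, as a sketch.
# The amplification factors are illustrative assumptions, not measured values.

capacity_gb = 4000        # 4TB drive
dwpd = 0.04               # drive writes per day over the 5-year warranty

raw_budget = dwpd * capacity_gb          # rated host writes per day (GB)
journal_amplification = 2                # Ceph (filestore) journal double-write
other_amplification = 2                  # FS journals + general WA (assumed)

effective_budget = raw_budget / (journal_amplification * other_amplification)

print(f"rated:     {raw_budget:.0f} GB/day")
print(f"effective: {effective_budget:.0f} GB/day")
```

With these assumptions the rated 160 GB/day shrinks to roughly 40 GB/day of actual client writes, matching the figure above.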
>
> In contrast the lowest endurance DC grade SSDs tend to be 0.3 DWPD and
> more commonly 1 DWPD.
> And I'm not buying anything below 3 DWPD for use with Ceph.
>
> Your only chance to improve the speed here is to take the journals off
> them and put them onto fast and durable enough NVMes like the Intel DC P
> 3700 or at worst 3600 types.
>
> That still leaves you with their crappy endurance, only twice as high as
> before with the journals offloaded.
>
> Christian
>
>> On Sat, Aug 19, 2017 at 10:44 PM, M Ranga Swami Reddy
>> <swamire...@gmail.com> wrote:
>> > Yes, it's in production, and we used the PG count as per the PG calculator @
>> > ceph.com.
>> >
>> > On Fri, Aug 18, 2017 at 3:30 AM, Mehmet <c...@elchaka.de> wrote:
>> >> Which SSDs are used? Are they in production? If so, how is your PG count?
>> >>
>> >> Am 17. August 2017 20:04:25 MESZ schrieb M Ranga Swami Reddy
>> >> <swamire...@gmail.com>:
>> >>>
>> >>> Hello,
>> >>> I am using the Ceph cluster with HDDs and SSDs. Created separate pool for
>> >>> each.
>> >>> Now, when I ran "ceph osd bench", the HDD OSDs show around 500 MB/s
>> >>> and the SSD OSDs around 280 MB/s.
>> >>>
>> >>> Ideally, what I expected was that the SSD OSDs should be at least 40%
>> >>> higher than the HDD OSD bench results.
>> >>>
>> >>> Did I miss anything here? Any hint is appreciated.
>> >>>
>> >>> Thanks
>> >>> Swami
>> >>> ________________________________
>> >>>
>> >>> ceph-users mailing list
>> >>> ceph-users@lists.ceph.com
>> >>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >>
>> >>
>> >>
>>
>
>
> --
> Christian Balzer        Network/Systems Engineer
> ch...@gol.com           Rakuten Communications