Honestly, there isn't enough information here about your use case: RBD with
small I/O, object storage with large files, object storage with small files,
or any number of other workloads all behave very differently.  The answer to
your question might even be that your needs call for a completely different
hardware configuration than what you're running.  There is no single correct
way to configure your cluster based only on the hardware you have; the
hardware you use and the configuration settings you choose should both follow
from your needs and use case.

On Wed, Aug 16, 2017 at 12:13 PM Mehmet <c...@elchaka.de> wrote:

> :( no suggestions or recommendations on this?
>
> On August 14, 2017 16:50:15 CEST, Mehmet <c...@elchaka.de> wrote:
>
>> Hi friends,
>>
>> My current hardware setup per OSD node is as follows:
>>
>> # 3 OSD-Nodes with
>> - 2x Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz ==> 12 Cores, no
>> Hyper-Threading
>> - 64GB RAM
>> - 12x 4TB HGST 7K4000 SAS2 (6Gb/s) Disks as OSDs
>> - 1x INTEL SSDPEDMD400G4 (Intel DC P3700 NVMe) as Journaling Device for
>> 12 Disks (20G Journal size)
>> - 1x Samsung SSD 840/850 Pro only for the OS
>>
>> # and 1x OSD Node with
>> - 1x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz (10 Cores 20 Threads)
>> - 64GB RAM
>> - 23x 2TB TOSHIBA MK2001TRKB SAS2 (6Gb/s) Disks as OSDs
>> - 1x SEAGATE ST32000445SS SAS2 (6Gb/s) Disk as OSD
>> - 1x INTEL SSDPEDMD400G4 (Intel DC P3700 NVMe) as Journaling Device for
>> 24 Disks (15G Journal size)
>> - 1x Samsung SSD 850 Pro only for the OS
>>
>> As you can see, I am using one NVMe device (Intel DC P3700, 400GB) for
>> all of the spinning disks (partitioned) on each OSD node.
>>
>> When "Luminous" is available (as the next LTS release), I plan to switch
>> from "filestore" to "bluestore" 😊
>>
>> As far as I have read, BlueStore consists of:
>> - "the device": the main block device that stores the object data
>> - "block-DB": device that stores RocksDB metadata
>> - "block-WAL": device that stores the RocksDB write-ahead log
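>> If I understand this correctly, I would then carve the NVMe into one DB
>> and one WAL partition per OSD. A rough sketch (device name and sizes are
>> only placeholders, not a recommendation):
>>
>>   # example: DB and WAL partitions for the first OSD on the shared NVMe
>>   sgdisk --new=1:0:+30G --change-name=1:osd0-block-db  /dev/nvme0n1
>>   sgdisk --new=2:0:+2G  --change-name=2:osd0-block-wal /dev/nvme0n1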
>>
>> Which setup would be useful in my case?
>> I would set up the disks via "ceph-deploy".
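>> For example, would something along these lines be the right direction?
>> (Host and device names are just placeholders, and I know the exact flags
>> depend on the ceph-deploy version.)
>>
>>   # HDD as the data device, DB and WAL on partitions of the shared NVMe
>>   ceph-deploy osd create --data /dev/sdb \
>>       --block-db /dev/nvme0n1p1 \
>>       --block-wal /dev/nvme0n1p2 \
>>       osdnode1
>>
>> ...repeated for each OSD with its own pair of NVMe partitions.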
>>
>> Thanks in advance for your suggestions!
>> - Mehmet
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
