Hi Jake,

I would definitely go for the "leave the rest unused" solution.
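
To put some rough numbers on it, here is a quick illustrative Python sketch (not a Ceph tool; the ~30 GB / ~300 GB useful block.db sizes and the 2x compaction headroom are taken from the thread below, and the 266 GB budget comes from your example):

    # Back-of-the-envelope sizing: assumes the ~30 GB / ~300 GB useful
    # block.db sizes cited below, plus 2x headroom so compaction can
    # rewrite the biggest layer before deleting the old data.
    USEFUL_DB_SIZES_GB = (30, 300)
    COMPACTION_FACTOR = 2        # rewrite-before-delete during compaction
    BUDGET_GB = 266              # space budgeted per DB/WAL in your example

    for level_gb in USEFUL_DB_SIZES_GB:
        needed_gb = level_gb * COMPACTION_FACTOR
        if needed_gb <= BUDGET_GB:
            unused_gb = BUDGET_GB - needed_gb
            print(f"{level_gb} GB db -> ~{needed_gb} GB partition, "
                  f"~{unused_gb} GB ({unused_gb / BUDGET_GB:.0%}) left unused")
        else:
            print(f"{level_gb} GB db -> needs ~{needed_gb} GB, "
                  f"over the {BUDGET_GB} GB budget")

With a 60 GB partition you would leave roughly 206 GB of the 266 GB untouched, which effectively acts as extra over-provisioning for wear levelling and garbage collection.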

Regards,
Mattia

On 5/29/19 4:25 PM, Jake wrote:
> Thank you for all the detailed and useful information :)
> 
> I'm tempted to ask a related question on SSD endurance...
> 
> If 60GB is the sweet spot for each DB/WAL partition and the SSD has
> spare capacity (for example, I'd budgeted 266GB per DB/WAL), would it
> then be better to create 60GB "sweet spot" sized DB/WALs and leave the
> remaining SSD unused, as this would maximise the lifespan of the SSD
> and speed up garbage collection?
> 
> many thanks
> 
> Jake
> 
> 
> 
> On 5/29/19 9:56 AM, Mattia Belluco wrote:
>> On 5/29/19 5:40 AM, Konstantin Shalygin wrote:
>>> block.db should be 30GB or 300GB - anything in between is pointless.
>>> The reason is described here:
>>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-February/033286.html
>>
>> Following some discussions we had at the last Cephalocon, I beg to differ
>> on this point: when RocksDB needs to compact a layer it rewrites it
>> *before* deleting the old data; if you'd like to be sure your db does not
>> spill over to the spindle you should allocate twice the size of the
>> biggest layer to allow for compaction. I guess ~60 GB would be the sweet
>> spot, assuming you don't plan to mess with the size and multiplier of the
>> RocksDB layers and don't want to go all the way to 600 GB (300 GB x2).
>>
>> regards,
>> Mattia
>>
>>
> 


-- 
Mattia Belluco
S3IT Services and Support for Science IT
Office Y11 F 52
University of Zürich
Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)
Tel: +41 44 635 42 22
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
