On Fri, Jul 13, 2018 at 2:50 AM Robert Stanford <rstanford8...@gmail.com>
wrote:

>
>  This is what leads me to believe it's other settings being referred to as
> well:
> https://ceph.com/community/new-luminous-rados-improvements/
>
> *"There are dozens of documents floating around with long lists of Ceph
> configurables that have been tuned for optimal performance on specific
> hardware or for specific workloads.  In most cases these ceph.conf
> fragments tend to induce funny looks on developers’ faces because the
> settings being adjusted seem counter-intuitive, unrelated to the
> performance of the system, and/or outright dangerous.  Our goal is to make
> Ceph work as well as we can out of the box without requiring any tuning at
> all, so we are always striving to choose sane defaults.  And generally, we
> discourage tuning by users. "*
>
> To me it's not just bluestore settings / ssd vs. hdd they're talking about
> ("dozens of documents floating around"... "our goal... without any tuning
> at all"). Am I off base?
>

Ceph is *extremely* tunable, because whenever we set up a new behavior
(snapshot trimming sleeps, scrub IO priorities, whatever) and we're not
sure how it should behave, we add a config option. For most of these
options we come up with a value through testing or informed guesswork, set
it as the default, and expect that users won't ever need to look at it. For
some settings we don't really know what the right value is, and we hope the
whole mechanism gets replaced before users run into it, but sometimes it
isn't. And some settings really ought to be auto-tuned, or set manually to
a different value for each deployment, to get optimal performance.
So there are lots of options for people to make things much better or much
worse for themselves.

However, by far the biggest-impact and most common tunables are those that
vary based on whether the OSD is using a hard drive or an SSD for its local
storage, since those two cases differ by an order of magnitude in expected
latency and throughput. So we now have separate default tunables for those
cases, and the appropriate set is applied automatically.
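
As a quick illustration (osd.0 is just a placeholder here, and the exact
values depend on your release), you can check which variant a running OSD
picked up through its admin socket:

    # was the backing device detected as rotational (hdd) or not (ssd)?
    ceph osd metadata 0 | grep rotational
    # the per-class values the OSD is actually using
    ceph daemon osd.0 config get osd_recovery_sleep_hdd
    ceph daemon osd.0 config get osd_recovery_sleep_ssd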

Could somebody who knows what they're doing tweak things even better for a
particular deployment? Undoubtedly. But do *most* people know what they're
doing that well? They don't.
In particular, the old "fix it" configuration settings that a lot of people
were sharing and using starting in the Cuttlefish days are rather
dangerously out of date, and we no longer have defaults that are quite as
stupid as some of those were.

So I'd generally recommend you remove any custom tuning you've set up
unless you have a specific reason to think it will do better than the
defaults for your currently-deployed release.
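
If you want to see exactly what you're overriding before you delete
anything, one way (a sketch; osd.0 is again a placeholder) is to ask a
running OSD over its admin socket which options differ from the built-in
defaults:

    # lists options whose current value differs from the default
    ceph daemon osd.0 config diff

Anything listed there that you can't justify keeping is a good candidate
for removal from ceph.conf.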
-Greg


>
>  Regards
>
> On Thu, Jul 12, 2018 at 9:12 PM, Konstantin Shalygin <k0...@k0ste.ru>
> wrote:
>
>>   I saw this in the Luminous release notes:
>>>
>>>   "Each OSD now adjusts its default configuration based on whether the
>>> backing device is an HDD or SSD. Manual tuning generally not required"
>>>
>>>   Which tuning in particular?  The ones in my configuration are
>>> osd_op_threads, osd_disk_threads, osd_recovery_max_active,
>>> osd_op_thread_suicide_timeout, and osd_crush_chooseleaf_type, among
>>> others.  Can I rip these out when I upgrade to
>>> Luminous?
>>>
>>
>> This means that some "bluestore_*" settings are now tuned separately for
>> nvme/hdd.
>>
>> Also with Luminous we have:
>>
>> osd_op_num_shards_(ssd|hdd)
>>
>> osd_op_num_threads_per_shard_(ssd|hdd)
>>
>> osd_recovery_sleep_(ssd|hdd)
>>
>> k
>>
>>