After thinking this through a bit more I propose to do the following:

1) Fully document IOQ1 as the workload management / IO queueing capability in 
CouchDB
2) Enable IOQ1 by default
3) Add a global bypass switch so users with big, fast servers can quickly 
configure CouchDB to make the most of that hardware (see the sketch below)
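
To make 3) concrete, here is one possible shape for that switch as a local.ini 
sketch. The [ioq] bypass key is purely illustrative; the exact name and 
granularity (one global flag vs. per-class flags) is what we would need to 
settle on:

    ; local.ini (illustrative only, key name not final)
    [ioq]
    ; when true, IO requests skip the queueing layer entirely and go
    ; straight to the underlying file handling processes
    bypass = true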

IOQ2 will still be included in the codebase but not publicly documented. 
Interested parties can continue to refine and simplify it and we can consider 
cutting over to it in a future 3.x build.

I think this is a conservative “do no harm” approach that will result in a 
performance profile similar to 2.x out of the box, while delivering a couple of 
extra knobs to refine the workload management or bypass it altogether in the 
name of performance.

Adam

> On Sep 16, 2019, at 11:58 AM, Adam Kocoloski <kocol...@apache.org> wrote:
> 
> Maybe it makes sense to look at the 2.x -> 3.x progression of each of these 
> individually:
> 
> ## Compaction
> 
> Smoosh replaces an earlier compaction daemon. It can certainly be configured 
> to use more resources than the old one. Changing the default configuration to 
> a single channel with no parallelism would, I think, put it more in line with 
> 2.x. https://github.com/apache/couchdb-smoosh/pull/3 restores the ability to 
> scope compaction to certain hours of the day, which is the other big gap.
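> 
> For illustration, a single-channel, no-parallelism setup might look roughly 
> like the ini sketch below; the channel names mirror smoosh’s defaults but 
> should be treated as assumptions here:
> 
>     ; local.ini sketch (illustrative; assumes smoosh’s default channel names)
>     [smoosh]
>     ; run just one database channel and one view channel
>     db_channels = ratio_dbs
>     view_channels = ratio_views
> 
>     [smoosh.ratio_dbs]
>     ; one compaction job at a time in this channel
>     concurrency = 1
> 
>     [smoosh.ratio_views]
>     concurrency = 1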
> 
> ## View Builds
> 
> Does 2.x have a built-in background view updater? I didn’t think so. Ken 
> could cause a lot of IO to show up unexpectedly, for sure. The daemon doesn’t 
> have a global on/off switch at the moment.
> 
> ## IO Queueing
> 
> 2.x has an undocumented IOQ implementation. If I’m reading the code correctly, 
> it de-prioritizes compaction IO and dumps everything else (including view 
> updates) into a single queue. The architecture is otherwise similar to what I 
> called IOQ1 in my original email. It does not appear possible to bypass the 
> queueing system in this version. Tracing back to the original COUCHDB-1775 
> issue in JIRA, one finds:
> 
>> Note: For demonstration purposes at the moment, the code is likely too slow 
>> for production use.
> 
> And yet, as far as I can tell this is substantively the same code that’s been 
> in production for the entire 2.x line …
> 
> —
> 
> Knowing that our users have lived with the IOQ1 performance ceiling for all 
> of 2.x does change my perspective on the options. I agree that we shouldn’t 
> bypass the whole thing at this juncture, especially not if we’re making it 
> easy to crank up more background jobs. At the same time, I’m really reluctant 
> to introduce a whole bunch of knobs and dials. I’m not sure where to go from 
> here, but maybe others will find the background details above helpful.
> 
> Adam
> 
>> On Sep 14, 2019, at 3:10 PM, Joan Touzet <woh...@apache.org> wrote:
>> 
>> On 2019-09-12 6:00 a.m., Will Holley wrote:
>>> I defer to those with more operational experience of ken and smoosh, but
>>> wouldn't those new subsystems radically impact performance if IOQ is
>>> completely bypassed (assuming ken/smoosh are enabled by default)?
>> 
>> A very good point. I'd be uncomfortable with a ken+smoosh+IOQ1 combination 
>> without safeguards of some sort; a modified version of option 2, I guess.
>> 
>> Disabling those daemons by default is a regression from 2.x so I don't 
>> consider that a realistic option, either.
>> 
>> We want CouchDB 3.x to be "the best home-grown clustered CouchDB available," 
>> and completely disabling IOQ doesn't sound like that.
>> 
>> I guess my preferences in order are 1, 2, 3.
>> 
>> -Joan
>> 
>>> On Wed, 11 Sep 2019 at 22:04, Adam Kocoloski <kocol...@apache.org> wrote:
>>>> A few months ago a bunch of code landed on master around IO QoS and
>>>> prioritization. I think we need to have a conversation about the defaults
>>>> for that system and what we want to allow users to enable.
>>>> 
>>>> First topic: there are actually two different generations of the IOQ
>>>> system: IOQ and IOQ2. Only one can be active at a given time, and the
>>>> configurations are not compatible. The best use case for this queueing
>>>> system is to de-prioritize IO for bookkeeping tasks like internal
>>>> replication and compaction in favor of IO to respond to client requests.
>>>> 
>>>> The original and currently default IOQ system primarily works by
>>>> classifying the IO based on whether it’s serving an interactive read or
>>>> write request, an index build, a compaction job, etc. It builds queues for
>>>> each of these IO classes and allows for relative prioritization of the
>>>> different classes of IO. The main downside of this system is that it can
>>>> only sustain a total throughput of about 20,000 operations/sec/node.
>>>> Heavily-loaded systems frequently have to configure “bypasses” for certain
>>>> classes of IO to keep latencies low.
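>>>> 
>>>> As a rough illustration of what those knobs look like today (section and 
>>>> key names are approximations from memory, so treat them as assumptions):
>>>> 
>>>>     ; local.ini sketch, illustrative only
>>>>     [ioq]
>>>>     concurrency = 10   ; roughly, how many IO requests are in flight at once
>>>>     ratio = 0.01       ; share of scheduling slots granted to background IO
>>>> 
>>>>     [ioq.bypass]
>>>>     ; the per-class escape hatches mentioned above
>>>>     compaction = true
>>>>     shard_sync = true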
>>>> 
>>>> IOQ2 was conceived to deliver higher throughput without resorting to
>>>> bypasses and thus defeating the QoS. It’s a significantly more complex
>>>> system. Tenants are a first-class concept in IOQ2, but of course they’re
>>>> not in the rest of CouchDB, so some of the code in there that computes
>>>> per-user priorities will not work correctly. As far as I can tell it will
>>>> fail gracefully (i.e., it will bucket every database as belonging to the
>>>> same “user”), but I doubt this has been tested. IOQ2 can definitely sustain
>>>> higher throughputs, though it has been known to enqueue so many more IO
>>>> requests than it can issue that the backlog effectively caused an outage
>>>> anyway. It still carries a material overhead compared to bypassing the QoS
>>>> entirely.
>>>> 
>>>> I think there are a few possible paths forward:
>>>> 
>>>> 1) Switch to IOQ2 and only document that one.
>>>> 2) Document IOQ, installing bypasses across the board by default to avoid
>>>> a big performance regression on upgrade
>>>> 3) Just bypass the whole thing and don’t document it, to avoid introducing
>>>> a big new admin capability in 3.0 and removing it in 4.0
>>>> 
>>>> Personally I think I’m leaning towards 3) at this point, but could be
>>>> convinced otherwise.
>>>> 
>>>> Regards,
>>>> 
>>>> Adam
> 
