Hello David Ribeiro Alves, Todd Lipcon,

I'd like you to do a code review. Please visit

    http://gerrit.cloudera.org:8080/9522

to review the following change.


Change subject: KUDU-1913: cap number of threads on server-wide pools
......................................................................

KUDU-1913: cap number of threads on server-wide pools

The last remaining piece of work is to do away with the unbounded number of
threads that may be started in the Raft and Prepare server-wide threadpools.
These caps make it easier for admins to reason about an appropriate value
for the Kudu process's RLIMIT_NPROC resource limit.

KUDU-1913 proposed a cap of "number of cores + number of disks", but a
lively Slack discussion yielded a better solution: set the cap at some
percentage of the process' RLIMIT_NPROC value. Given that the rest of Kudu
generally uses a constant number of threads, this should prevent thread
spikes (such as those caused by an election storm) from ever exceeding
RLIMIT_NPROC and crashing the server. This patch implements a cap of 10% of
RLIMIT_NPROC per pool and also provides a new
gflag as an "escape hatch" (in case we were horribly wrong).
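
For reference, a minimal sketch of how such a cap might be derived from
RLIMIT_NPROC. The flag name, default, and helper below are illustrative
assumptions for this sketch, not necessarily what the patch adds:

    #include <sys/resource.h>

    #include <algorithm>
    #include <limits>

    #include <gflags/gflags.h>
    #include <glog/logging.h>

    // Hypothetical escape-hatch flag; the actual flag name, default, and
    // semantics in the patch may differ.
    DEFINE_int32(server_wide_pool_max_threads, -1,
                 "Maximum number of threads in each server-wide pool. "
                 "-1 means derive the cap from RLIMIT_NPROC.");

    // Sketch: cap each server-wide pool at 10% of the process's
    // RLIMIT_NPROC soft limit.
    int ServerWidePoolMaxThreads() {
      if (FLAGS_server_wide_pool_max_threads != -1) {
        return FLAGS_server_wide_pool_max_threads;  // operator override
      }
      struct rlimit rl;
      PCHECK(getrlimit(RLIMIT_NPROC, &rl) == 0) << "getrlimit() failed";
      if (rl.rlim_cur == RLIM_INFINITY) {
        // No process limit configured; effectively leave the pool uncapped.
        return std::numeric_limits<int>::max();
      }
      rlim_t cap = std::max<rlim_t>(1, rl.rlim_cur / 10);
      return static_cast<int>(
          std::min<rlim_t>(cap, std::numeric_limits<int>::max()));
    }

Presumably the resulting value would then be supplied as the maximum thread
count when constructing the Raft and Prepare pools in kserver.cc.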

Note: it's still possible for a massive number of "hot" replicas to exceed
RLIMIT_NPROC by virtue of each replica's log append thread, but the server
is more likely to run out of memory for MemRowSets before that happens.

Change-Id: I194907a7f8a483c9cba71eba8caed6bc6090f618
---
M src/kudu/kserver/kserver.cc
1 file changed, 55 insertions(+), 11 deletions(-)



  git pull ssh://gerrit.cloudera.org:29418/kudu refs/changes/22/9522/1
--
To view, visit http://gerrit.cloudera.org:8080/9522
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: newchange
Gerrit-Change-Id: I194907a7f8a483c9cba71eba8caed6bc6090f618
Gerrit-Change-Number: 9522
Gerrit-PatchSet: 1
Gerrit-Owner: Adar Dembo <a...@cloudera.com>
Gerrit-Reviewer: David Ribeiro Alves <davidral...@gmail.com>
Gerrit-Reviewer: Todd Lipcon <t...@apache.org>