Sean Mackrory created HADOOP-15729:
--------------------------------------
Summary: [s3a] stop treating fs.s3a.max.threads as the long-term
minimum
Key: HADOOP-15729
URL: https://issues.apache.org/jira/browse/HADOOP-15729
Project: Hadoop Common
Issue Type: Bug
Reporter: Sean Mackrory
Assignee: Sean Mackrory
A while ago the s3a connector started deadlocking because the AWS SDK
effectively requires an unbounded threadpool: it places monitoring tasks on the
work queue ahead of the tasks they wait on, so a bounded executor can become
permanently saturated and deadlock. This has happened even with
larger-than-default threadpools.
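The effect is easy to reproduce with a plain ThreadPoolExecutor. The sketch
below is illustrative, not S3A code: a single-thread bounded pool stands in for
a saturated fs.s3a.max.threads pool, and the outer task plays the role of an
SDK monitoring task blocking on a task queued behind it.

```java
import java.util.concurrent.*;

public class BoundedPoolDeadlock {

    // Returns true if the outer task deadlocks waiting on the inner one.
    // A pool size of 1 is just the smallest pool that shows the effect.
    static boolean deadlocks() throws InterruptedException {
        ExecutorService pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());

        // The outer task blocks on a task queued behind it in the same
        // bounded executor -- the shape of the SDK's monitoring tasks.
        Future<String> outer = pool.submit(() -> {
            Future<String> inner = pool.submit(() -> "part uploaded");
            return inner.get(); // never returns: the only worker is busy right here
        });

        try {
            outer.get(2, TimeUnit.SECONDS);
            return false; // completed, no deadlock
        } catch (TimeoutException e) {
            return true;  // permanently stuck; we only waited 2s to prove it
        } catch (ExecutionException e) {
            return false;
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(deadlocks() ? "deadlocked as expected" : "completed (unexpected)");
    }
}
```

With a larger pool the same thing happens once enough monitoring tasks occupy
every worker at the same time, which is why raising the pool size only delays
the problem.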
So we started giving the SDK an unbounded threadpool executor, and using a
bounded, blocking threadpool service for everything else S3A needs (currently
only the S3ABlockOutputStream). fs.s3a.max.threads now limits only that bounded
threadpool; however, we also used fs.s3a.max.threads as the number of core
threads in the unbounded threadpool, which in hindsight is pretty terrible.
Currently those core threads never time out, so the setting actually acts as a
sort of minimum: once that many tasks have been submitted, the pool can burst
beyond that number but will only ever spin back down to it. If
fs.s3a.max.threads is set reasonably high and someone uses a bunch of S3
buckets, they could easily have thousands of permanently idle threads.
We should either stop using fs.s3a.max.threads for the core pool size and
introduce a new configuration, or simply allow core threads to time out. I'm
reading the OpenJDK source now to see what subtle differences remain between
core threads and other threads once core threads are allowed to time out.
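For reference, ThreadPoolExecutor.allowCoreThreadTimeOut(true) is the knob in
question. A minimal sketch of both behaviors, assuming a core size of 4 as a
stand-in for fs.s3a.max.threads (the class and method names here are
illustrative, not S3A code):

```java
import java.util.concurrent.*;

public class CoreThreadTimeoutDemo {

    // Returns {poolSizeWhileIdle, poolSizeAfterCoreTimeout}.
    static int[] poolSizes() throws InterruptedException {
        // Core size 4 stands in for fs.s3a.max.threads; the unbounded queue
        // mirrors the executor handed to the AWS SDK.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, Integer.MAX_VALUE, 200L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());

        // Touch all four core threads once, then let the pool go idle.
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> { });
        }
        Thread.sleep(500);
        int whileIdle = pool.getPoolSize(); // core threads linger even when idle

        // With core-thread timeout enabled, idle core threads die after
        // keepAliveTime (which must be > 0 for this to be legal).
        pool.allowCoreThreadTimeOut(true);
        Thread.sleep(1000);
        int afterTimeout = pool.getPoolSize(); // pool spins all the way down

        pool.shutdown();
        return new int[] { whileIdle, afterTimeout };
    }

    public static void main(String[] args) throws Exception {
        int[] sizes = poolSizes();
        System.out.println("idle core threads: " + sizes[0]);
        System.out.println("after core timeout: " + sizes[1]);
    }
}
```

Note that allowCoreThreadTimeOut requires a nonzero keepAliveTime, so enabling
it for the SDK's pool would also mean picking a sensible keep-alive value.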
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]