[
https://issues.apache.org/jira/browse/HTTPCLIENT-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15844013#comment-15844013
]
Oleg Kalnichevski edited comment on HTTPCLIENT-1809 at 1/28/17 10:41 AM:
-------------------------------------------------------------------------
Andrew,
Pool lock contention is a known issue. We get similar reports every once in a
while. However, in most cases I have had a chance to look at, the root cause
turned out to be a configuration issue or a design flaw: having too many
threads contend for too few connections, or having thousands of threads where
a hundred would be sufficient to process the same number of requests per second.
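As a rough sizing rule (Little's law), the number of requests in flight equals
the request rate times the mean response time; for example, at 8000 TPS and an
assumed mean response time of 12.5 ms, only about 100 requests are in flight at
once, so roughly 100 threads and 100 pooled connections suffice. For
illustration, here is a minimal sketch (HttpClient 4.x; the class and method
names are mine) of matching the pool limits to the worker thread count so that
lease() rarely has to block:

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class PoolSizing {
    public static CloseableHttpClient newClient(int workerThreads) {
        PoolingHttpClientConnectionManager cm =
                new PoolingHttpClientConnectionManager();
        // Size the pool to the worker thread count so that threads rarely
        // queue up inside the pool waiting for a free connection.
        cm.setMaxTotal(workerThreads);
        cm.setDefaultMaxPerRoute(workerThreads);
        return HttpClients.custom()
                .setConnectionManager(cm)
                .build();
    }
}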
Having said that, we do have plans to build a lock-less pool implementation at
some point (see HTTPCORE-390), but right now I have no bandwidth to do it
myself. I am in the process of refactoring the connection management code in
trunk (the 5.0.x branch) and making sure that all potentially expensive
operations are executed outside the pool lock. This should help reduce lock
contention.
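Schematically, the principle looks like the sketch below (an illustration of
the approach, not the actual 5.0.x code): expensive connection setup happens
with no lock held, and the lock is taken only for pool bookkeeping.

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

class PoolSketch<C> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Queue<C> available = new ArrayDeque<>();

    C lease(Supplier<C> connectionFactory) {
        C conn;
        lock.lock();
        try {
            conn = available.poll();      // cheap: pool bookkeeping only
        } finally {
            lock.unlock();
        }
        if (conn == null) {
            // Expensive socket setup runs with no lock held, so other
            // threads can lease and release connections in the meantime.
            conn = connectionFactory.get();
        }
        return conn;
    }

    void release(C conn) {
        lock.lock();
        try {
            available.add(conn);          // cheap: pool bookkeeping only
        } finally {
            lock.unlock();
        }
    }
}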
If I were to look into this problem, I would start off by finding out whether
the application actually needs so many threads to begin with, and only then
proceed with the idea of building a custom connection manager.
Oleg
> Thread contention in PoolingHttpClientConnectionManager
> -------------------------------------------------------
>
> Key: HTTPCLIENT-1809
> URL: https://issues.apache.org/jira/browse/HTTPCLIENT-1809
> Project: HttpComponents HttpClient
> Issue Type: Bug
> Components: HttpClient (classic)
> Affects Versions: 4.5
> Reporter: Andrew Shore
> Priority: Minor
> Labels: performance
>
> We (AWS SDK for Java) have been investigating reports of poor performance in
> the SDK and have narrowed it down to thread contention issues in
> PoolingHttpClientConnectionManager. Up to a certain TPS, performance is great
> and there is no issue. After a certain TPS (approx. 8000 in our load tests),
> performance tanks hard and most threads end up stuck waiting on a lock in
> AbstractConnPool (in either lease or releaseConnection).
> https://github.com/apache/httpcore/blob/4.4.x/httpcore/src/main/java/org/apache/http/pool/AbstractConnPool.java#L403
> This quickly locks up the application as it tries to meet the incoming TPS.
> We have been able to work around this and achieve much higher throughput by
> having multiple SDK clients and round-robin selecting one to hand off to each
> thread. This allowed us to easily scale up to 16,000 TPS. We wanted to open
> up a dialog with the maintainers of the Apache HTTP client to see if this is
> a known issue/limitation and what options we have for getting around it. We
> aren’t opposed to re-implementing the connection manager to be more
> performant, but since it’s a pretty sizable chunk of work, we wanted to ensure
> that’s the best path forward before proceeding.
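A minimal sketch of the round-robin workaround described above (the class name
and construction are illustrative): each client owns its own connection pool
and therefore its own pool lock, so spreading requests across n clients divides
the contention roughly by n.

import java.util.concurrent.atomic.AtomicInteger;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

class RoundRobinClients {
    private final CloseableHttpClient[] clients;
    private final AtomicInteger counter = new AtomicInteger();

    RoundRobinClients(int n) {
        clients = new CloseableHttpClient[n];
        for (int i = 0; i < n; i++) {
            // Each client gets its own PoolingHttpClientConnectionManager,
            // and with it a separate pool lock.
            clients[i] = HttpClients.createDefault();
        }
    }

    CloseableHttpClient next() {
        // floorMod keeps the index non-negative once the counter overflows.
        return clients[Math.floorMod(counter.getAndIncrement(), clients.length)];
    }
}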