Chris

Sure, I will use DNS and try to change the service calls.

Also, once in a while we see the error below and Tomcat stops serving
requests. Could you please let me know the cause of this issue?

org.apache.tomcat.util.net.NioEndpoint$Acceptor run
SEVERE: Socket accept failed
java.nio.channels.ClosedChannelException
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:235)
        at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:682)
        at java.lang.Thread.run(Thread.java:748)

On Thu, 4 Jun 2020, 19:47 Christopher Schultz, <ch...@christopherschultz.net>
wrote:

> Ayub,
>
> On 6/4/20 11:05, Ayub Khan wrote:
> > Christopher Schultz wrote:
> >> There's no particular reason why a request to node1:/app1 needs
> >> to have its loopback request call node1:/app2, is there? Can
> >> node1:/app1 call node2:/app2?
> >
> >
> > Yes, we can do that, but then we would have to use the DNS URLs,
> > and won't this cause network latency compared to a localhost call?
>
> DNS lookups are cheap and cached. Connecting to "localhost"
> technically performs a DNS lookup, too. Once the DNS resolver has
> "node1" in its cache, it'll be just as fast as looking-up "localhost".
>
> > I have not changed the maxThreads config on any of the
> > connectors. If I have to customize it, how do I decide what value
> > to use for maxThreads?
>
> The number of threads you allocate has more to do with your
> application than anything else. I've seen applications (on hardware)
> that can handle thousands of simultaneous threads. Others I've seen
> will fall over if more than 4 or 5 requests come in simultaneously.
>
> So you'll need to load-test your application to be sure what the right
> numbers are.
>
> Remember that if your application is database-heavy, then the number
> of connections to the database will soon become a bottleneck. There's
> no sense accepting connections from 100 users if every request
> requires a database connection and you can only handle 20 connections
> to the database.
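>
> To make that concrete (the numbers here are purely illustrative, not
> recommendations): a connector like
>
>   <Connector port="8080" maxThreads="200"
>              protocol="org.apache.coyote.http11.Http11NioProtocol"
>              connectionTimeout="20000" URIEncoding="UTF-8" />
>
> in front of a database pool capped at 20 connections means that,
> under load, up to 180 request-processing threads can sit blocked
> waiting for a database connection.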
>
> -chris
>
> > On Mon, Jun 1, 2020 at 10:24 PM Christopher Schultz <
> > ch...@christopherschultz.net> wrote:
> >
> > Ayub,
> >
> > On 6/1/20 11:12, Ayub Khan wrote:
> >>>> Chris,
> >>>>
> >>>> As you described, I have added two new connectors in
> >>>> server.xml and am using nginx to redirect requests to
> >>>> different connector ports, routing the traffic of each app to
> >>>> a different connector port of Tomcat.
> >>>>
> >>>> In the config of each app I am using the port specific to the
> >>>> app being called: localhost:8081 for app2 and localhost:8082
> >>>> for app3.
> >>>>
> >>>> So now in the config of app1 we call app2 using
> >>>> localhost:8081/app2 and app3 using localhost:8082/app3.
> >
> > Perfect.
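> >
> > For reference, the server.xml side of that setup might look roughly
> > like this (just a sketch; the ports match what you described, and
> > everything else should be tuned for your load):
> >
> >   <Connector port="8081"
> >              protocol="org.apache.coyote.http11.Http11NioProtocol"
> >              connectionTimeout="20000" URIEncoding="UTF-8" />
> >   <Connector port="8082"
> >              protocol="org.apache.coyote.http11.Http11NioProtocol"
> >              connectionTimeout="20000" URIEncoding="UTF-8" />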
> >
> >>>> Could you explain the benefit of using this type of config?
> >>>> Will this help avoid blocking requests for each app?
> >
> > This ensures that requests for /app1 do not starve the thread pool
> > for requests to /app2. Imagine that you have a single connector,
> > single thread pool, etc. for two apps: /app1 and /app2 and there is
> > only a SINGLE thread in the pool available, and that each request
> > to /app1 makes a call to /app2. Here's what happens:
> >
> > 1. Client requests /app1
> > 2. /app1 makes connection to /app2
> > 3. Request to /app2 stalls waiting for a thread to become available
> >    (it's already allocated to the request from #1 above)
> >
> > You basically have a deadlock, here, because /app1 isn't going to
> > give-up its thread, and the thread for the request to /app2 will
> > not be allocated until the request to /app1 gives up that thread.
> >
> > Now, nobody runs their application server with a SINGLE thread in
> > the pool, but this is instructive: it means that deadlock CAN
> > occur.
> >
> > Let's take a more reasonable situation: you have 100 threads in the
> > pool.
> >
> > Let's say that you get REALLY unlucky and the following series of
> > events occurs:
> >
> > 1. 100 requests come in simultaneously for /app1. All requests are
> >    allocated a thread from the thread pool before anything else
> >    happens. Note that the thread pool is now completely exhausted
> >    with requests to /app1.
> > 2. All 100 threads from /app1 make requests to /app2. Now you have
> >    100 threads in deadlock, similar to the contrived SINGLE-thread
> >    situation above.
> >
> > Sure, it's unlikely, but it CAN happen, especially if requests to
> > /app1 are mostly waiting on requests to /app2 to complete: you can
> > very easily run out of threads in a high-load situation. And a
> > high-load situation is EXACTLY what you reported.
> >
> > Let's take the example of separate thread pools per application.
> > Same number of total threads, except that 50 are in one pool for
> > /app1 and the other 50 are in the pool for /app2. Here's the series
> > of events:
> >
> > 1. 100 requests come in simultaneously for /app1. 50 requests are
> >    allocated a thread from the thread pool before anything else
> >    happens. The other 50 requests are queued waiting on a
> >    request-processing thread to become available. Note that the
> >    thread pool for /app1 is now completely exhausted with requests
> >    to /app1.
> > 2. 50 threads from /app1 make requests to /app2. All 50 requests
> >    get request-processing threads allocated, perform their work,
> >    and complete.
> > 3. The 50 queued requests from step #1 above are now allocated
> >    request-processing threads and proceed to make requests to /app2.
> > 4. 50 threads (the second batch) from /app1 make requests to /app2.
> >    All 50 requests get request-processing threads allocated,
> >    perform their work, and complete.
> >
> > Here, you have avoided any possibility of deadlock.
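> >
> > In server.xml terms, that split might look something like this (a
> > sketch; the <Executor> names are placeholders, and 50/50 is just
> > the example split from above):
> >
> >   <Executor name="app1Pool" namePrefix="app1-exec-" maxThreads="50" />
> >   <Executor name="app2Pool" namePrefix="app2-exec-" maxThreads="50" />
> >
> >   <Connector port="8081" executor="app1Pool"
> >              protocol="org.apache.coyote.http11.Http11NioProtocol" />
> >   <Connector port="8082" executor="app2Pool"
> >              protocol="org.apache.coyote.http11.Http11NioProtocol" />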
> >
> > Personally, I'd further decouple these services so that they are
> > running (possibly) on other servers, with different
> > load-balancers, etc. There's no particular reason why a request to
> > node1:/app1 needs to have its loopback request call node1:/app2, is
> > there? Can node1:/app1 call node2:/app2? If so, you should let it
> > happen. It will make your overall service more robust. If not, you
> > should fix things so it CAN be done.
> >
> > You might also want to make sure that you do the same thing for
> > any database connections you might use, although holding a
> > database connection open while making a REST API call might be
> > considered a Bad Idea.
> >
> > Hope that helps, -chris
> >
> >>>> On Mon, 1 Jun 2020, 16:27 Christopher Schultz,
> > <ch...@christopherschultz.net>
> >>>> wrote:
> >>>>
> >>>> Ayub,
> >>>>
> >>>> On 5/31/20 09:20, Ayub Khan wrote:
> >>>>>>> On a single Tomcat instance, how do we map each app to a
> >>>>>>> different port number?
> >>>>
> >>>> You'd have to use multiple <Engine> elements, which means
> >>>> separating everything, not just the <Connector>. It's more
> >>>> work on the Tomcat side, with the same problem of having a
> >>>> different port number, which you can get just by using a
> >>>> separate <Connector>.
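> >>>>
> >>>> For illustration, the multiple-<Engine> route means a whole
> >>>> second <Service> block, something like this sketch (the names
> >>>> and appBase are placeholders):
> >>>>
> >>>>   <Service name="Catalina2">
> >>>>     <Connector port="8081"
> >>>>                protocol="org.apache.coyote.http11.Http11NioProtocol" />
> >>>>     <Engine name="Catalina2" defaultHost="localhost">
> >>>>       <Host name="localhost" appBase="webapps2"
> >>>>             unpackWARs="true" autoDeploy="true" />
> >>>>     </Engine>
> >>>>   </Service>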
> >>>>
> >>>> Since you have a reverse-proxy already, it's simpler to use
> >>>> the reverse-proxy as the port-selector and not worry about
> >>>> trying to actually enforce it at the Tomcat level.
> >>>>
> >>>> -chris
> >>>>
> >>>>>>> On Sun, 31 May 2020, 15:44 Christopher Schultz, <
> >>>>>>> ch...@christopherschultz.net> wrote:
> >>>>>>>
> >>>>>>> Ayub,
> >>>>>>>
> >>>>>>> On 5/29/20 20:23, Ayub Khan wrote:
> >>>>>>>>>> Chris,
> >>>>>>>>>>
> >>>>>>>>>> You might want (2) and (3) to have their own,
> >>>>>>>>>> independent connector and thread pool, just to
> >>>>>>>>>> be safe. You don't want a connection in (1) to
> >>>>>>>>>> stall because a loopback connection can't be made
> >>>>>>>>>> to (2)/(3). Meanwhile, it's sitting there making
> >>>>>>>>>> no progress but also consuming a
> >>>>>>>>>> connection+thread.
> >>>>>>>>>>
> >>>>>>>>>> *There is only one connector per Tomcat where all
> >>>>>>>>>> the applications receive the requests; they do not
> >>>>>>>>>> have an independent connector and thread pool per
> >>>>>>>>>> Tomcat. How do we configure an independent connector
> >>>>>>>>>> and thread pool per application per Tomcat instance?
> >>>>>>>>>> Below is the current connector config in each Tomcat
> >>>>>>>>>> instance:*
> >>>>>>>
> >>>>>>> You can't allocate a connector to a particular web
> >>>>>>> application -- at least not in the way that you think.
> >>>>>>>
> >>>>>>> What you have to do is use different port numbers.
> >>>>>>> Users will never use them, though. But since you have
> >>>>>>> nginx (finally! A reason to have it!), you can map
> >>>>>>> /app1 to port 8080 and /app2 to port 8081 and /app3 to
> >>>>>>> port 8083 or whatever you want.
> >>>>>>>
> >>>>>>> Internal loopback connections will either have to go
> >>>>>>> through nginx (which I wouldn't recommend) or know the
> >>>>>>> correct port numbers to use (which I *do* recommend).
> >>>>>>>
> >>>>>>> -chris
> >>>>>>>
> >>>>>>>>>> *<Connector port="8080"
> >>>>>>>>>> protocol="org.apache.coyote.http11.Http11NioProtocol"
> >>>>>>>>>> connectionTimeout="20000" URIEncoding="UTF-8"
> >>>>>>>>>> redirectPort="8443" />*
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> On Fri, May 29, 2020 at 9:05 PM Christopher
> >>>>>>>>>> Schultz < ch...@christopherschultz.net> wrote:
> >>>>>>>>>>
> >>>>>>>>>> Ayub,
> >>>>>>>>>>
> >>>>>>>>>> On 5/28/20 17:25, Ayub Khan wrote:
> >>>>>>>>>>>>> Nginx is being used for image caching and
> >>>>>>>>>>>>> for converting HTTPS to HTTP requests before
> >>>>>>>>>>>>> hitting Tomcat.
> >>>>>>>>>> So you encrypt between the ALB and your app
> >>>>>>>>>> server nodes? That's fine, though nginx probably
> >>>>>>>>>> won't offer any performance improvement for
> >>>>>>>>>> images (unless it's really caching
> >>>>>>>>>> dynamically-generated images from your
> >>>>>>>>>> application) or TLS termination.
> >>>>>>>>>>
> >>>>>>>>>>>>> The behavior I am noticing is that the
> >>>>>>>>>>>>> application first throws Broken pipe (client
> >>>>>>>>>>>>> abort) exceptions at random API calls,
> >>>>>>>>>>>>> followed by socket timeouts and then database
> >>>>>>>>>>>>> connection leak errors. This happens only
> >>>>>>>>>>>>> during high load.
> >>>>>>>>>>
> >>>>>>>>>> If you are leaking connections, that's going to
> >>>>>>>>>> be an application resource-management problem.
> >>>>>>>>>> Definitely solve that, but it shouldn't affect
> >>>>>>>>>> anything with Tomcat connections and/or threads.
> >>>>>>>>>>
> >>>>>>>>>>>>> During normal traffic open files for
> >>>>>>>>>>>>> tomcat process goes up and down to not more
> >>>>>>>>>>>>> than 500. However during high traffic if I
> >>>>>>>>>>>>> keep track of the open files for each
> >>>>>>>>>>>>> tomcat process as soon as the open files
> >>>>>>>>>>>>> count reaches above 10k that tomcat
> >>>>>>>>>>>>> instance stops serving the requests.
> >>>>>>>>>>
> >>>>>>>>>> Any other errors shown in the logs? Like
> >>>>>>>>>> OutOfMemoryError (e.g. for open files)?
> >>>>>>>>>>
> >>>>>>>>>>>>> If the open file count goes beyond 5k, it is
> >>>>>>>>>>>>> certain that this number will never come back
> >>>>>>>>>>>>> below 500; at this point we need to restart
> >>>>>>>>>>>>> Tomcat.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> There are three applications installed on
> >>>>>>>>>>>>> each Tomcat instance:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> 1) portal: portal calls (2) and (3) using
> >>>>>>>>>>>>> localhost; should we change this to use DNS
> >>>>>>>>>>>>> names instead of localhost calls?
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> 2) Services for portal
> >>>>>>>>>>>>> 3) Services for portal and mobile clients
> >>>>>>>>>>
> >>>>>>>>>> Are they all sharing the same connector / thread
> >>>>>>>>>> pool?
> >>>>>>>>>>
> >>>>>>>>>> You might want (2) and (3) to have their own,
> >>>>>>>>>> independent connector and thread pool, just to
> >>>>>>>>>> be safe. You don't want a connection in (1) to
> >>>>>>>>>> stall because a loopback connection can't be made
> >>>>>>>>>> to (2)/(3). Meanwhile, it's sitting there making
> >>>>>>>>>> no progress but also consuming a
> >>>>>>>>>> connection+thread.
> >>>>>>>>>>
> >>>>>>>>>> -chris
> >>>>>>>>>>
> >>>>>>>>>>>>> On Thu, May 28, 2020 at 4:50 PM
> >>>>>>>>>>>>> Christopher Schultz <
> >>>>>>>>>>>>> ch...@christopherschultz.net> wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Ayub,
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> On 5/27/20 19:43, Ayub Khan wrote:
> >>>>>>>>>>>>>>>> If we have an 18-core CPU and 100
> >>>>>>>>>>>>>>>> GB of RAM, what value can I set
> >>>>>>>>>>>>>>>> for maxConnections?
> >>>>>>>>>>>>> Your CPU and RAM really have nothing to do
> >>>>>>>>>>>>> with it. It's more about your usage
> >>>>>>>>>>>>> profile.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> For example, if you are serving small
> >>>>>>>>>>>>> static files, you can serve a million
> >>>>>>>>>>>>> requests a minute on a Raspberry Pi, many
> >>>>>>>>>>>>> of them concurrently.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> But if you are performing fluid dynamic
> >>>>>>>>>>>>> simulations with each request, you will
> >>>>>>>>>>>>> obviously need more horsepower to service
> >>>>>>>>>>>>> a single request, let alone thousands of
> >>>>>>>>>>>>> concurrent requests.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> If you have tons of CPU and memory to
> >>>>>>>>>>>>> spare, feel free to crank-up the max
> >>>>>>>>>>>>> connections. The default is 10000 which is
> >>>>>>>>>>>>> fairly high. At some point, you will run
> >>>>>>>>>>>>> out of connection allocation space in the
> >>>>>>>>>>>>> OS's TCP/IP stack, so that is really your
> >>>>>>>>>>>>> upper-limit. You simply cannot have more
> >>>>>>>>>>>>> than the OS will allow. See
> >>>>>>>>>>>>> https://stackoverflow.com/a/2332756/276232
> >>>>>>>>>>>>> for some information about that.
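> >>>>>>>>>>>>>
> >>>>>>>>>>>>> For example (the values are illustrative; tune them
> >>>>>>>>>>>>> against your OS limits and your load tests):
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>   <Connector port="8080"
> >>>>>>>>>>>>>              protocol="org.apache.coyote.http11.Http11NioProtocol"
> >>>>>>>>>>>>>              maxConnections="10000"
> >>>>>>>>>>>>>              acceptCount="200" />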
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Once you adjust your settings, perform a
> >>>>>>>>>>>>> load-test. You may find that adding more
> >>>>>>>>>>>>> resources actually slows things down.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> We want to make sure we are
> >>>>>>>>>>>>>>>> utilizing the hardware to its
> >>>>>>>>>>>>>>>> maximum capacity. Is there any
> >>>>>>>>>>>>>>>> Tomcat config which, when enabled,
> >>>>>>>>>>>>>>>> could help serve more requests per
> >>>>>>>>>>>>>>>> Tomcat instance?
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Not really. Improving performance usually
> >>>>>>>>>>>>> comes down to tuning the application to make
> >>>>>>>>>>>>> the requests take less time to process.
> >>>>>>>>>>>>> Tomcat is rarely the source of performance
> >>>>>>>>>>>>> problems (but /sometimes/ is, and it's
> >>>>>>>>>>>>> usually a bug that can be fixed).
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> You can improve throughput somewhat by
> >>>>>>>>>>>>> pipelining requests. That means HTTP
> >>>>>>>>>>>>> keepalive for direct connections (but with
> >>>>>>>>>>>>> a small timeout; you don't want clients who
> >>>>>>>>>>>>> aren't making any follow-up requests to
> >>>>>>>>>>>>> waste your resources waiting for a
> >>>>>>>>>>>>> keep-alive timeout to close a connection).
> >>>>>>>>>>>>> For proxy connections (e.g. from nginx),
> >>>>>>>>>>>>> you'll want those connections to remain
> >>>>>>>>>>>>> open as long as possible to avoid the
> >>>>>>>>>>>>> re-negotiation of TCP and possibly TLS
> >>>>>>>>>>>>> handshakes. Using HTTP/2 can be helpful
> >>>>>>>>>>>>> for performance, at the cost of some CPU on
> >>>>>>>>>>>>> the back-end to perform the complicated
> >>>>>>>>>>>>> connection management that h2 requires.
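> >>>>>>>>>>>>>
> >>>>>>>>>>>>> For the direct-connection case, that might look like
> >>>>>>>>>>>>> this (attribute values are examples only, not
> >>>>>>>>>>>>> recommendations):
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>   <Connector port="8080"
> >>>>>>>>>>>>>              protocol="org.apache.coyote.http11.Http11NioProtocol"
> >>>>>>>>>>>>>              keepAliveTimeout="5000"
> >>>>>>>>>>>>>              maxKeepAliveRequests="100" />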
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Eliminating useless buffering is often
> >>>>>>>>>>>>> very helpful. That's why I asked about
> >>>>>>>>>>>>> nginx. What are you using it for, other
> >>>>>>>>>>>>> than as a barrier between the load-balancer
> >>>>>>>>>>>>> and your Tomcat instances? If you remove
> >>>>>>>>>>>>> nginx, I suspect you'll see a measurable
> >>>>>>>>>>>>> performance increase. This isn't a knock
> >>>>>>>>>>>>> against nginx: you'd see a performance
> >>>>>>>>>>>>> improvement by removing *any* reverse-proxy
> >>>>>>>>>>>>> that isn't providing any value. But you
> >>>>>>>>>>>>> haven't said anything about why it's there
> >>>>>>>>>>>>> in the first place, so I don't know if it
> >>>>>>>>>>>>> /is/ providing any value to you.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> The current setup is able to
> >>>>>>>>>>>>>>>> handle most of the load; however,
> >>>>>>>>>>>>>>>> there are predictable times when
> >>>>>>>>>>>>>>>> there is an avalanche of requests,
> >>>>>>>>>>>>>>>> and we are thinking about how to
> >>>>>>>>>>>>>>>> handle it gracefully.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> You are using AWS: use auto-scaling. That's
> >>>>>>>>>>>>> what it's for.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> -chris
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On Wed, May 27, 2020 at 5:38 PM
> >>>>>>>>>>>>>>>> Christopher Schultz <
> >>>>>>>>>>>>>>>> ch...@christopherschultz.net> wrote:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Ayub,
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On 5/27/20 09:26, Ayub Khan wrote:
> >>>>>>>>>>>>>>>>>>> Previously I was using the
> >>>>>>>>>>>>>>>>>>> HTTP/1.1 connector; recently I
> >>>>>>>>>>>>>>>>>>> changed to NIO2 to see the
> >>>>>>>>>>>>>>>>>>> performance. I read that NIO2
> >>>>>>>>>>>>>>>>>>> is non-blocking, so I am trying
> >>>>>>>>>>>>>>>>>>> to check how this works.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Both NIO and NIO2 are non-blocking.
> >>>>>>>>>>>>>>>> They use different strategies for
> >>>>>>>>>>>>>>>> certain things. Anything but the
> >>>>>>>>>>>>>>>> "BIO" connector will be non-blocking
> >>>>>>>>>>>>>>>> for most operations. The default is
> >>>>>>>>>>>>>>>> NIO.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Which connector protocol do
> >>>>>>>>>>>>>>>>>>> you recommend, and what is the
> >>>>>>>>>>>>>>>>>>> best configuration for the
> >>>>>>>>>>>>>>>>>>> connector?
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> This depends on your environment,
> >>>>>>>>>>>>>>>> usage profile, etc. Note that
> >>>>>>>>>>>>>>>> non-blocking IO means more CPU usage:
> >>>>>>>>>>>>>>>> there is a trade-off.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Which stable version of Tomcat
> >>>>>>>>>>>>>>>>>>> would you recommend?
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Always the latest, of course. Tomcat
> >>>>>>>>>>>>>>>> 8.0 is unsupported, replaced by
> >>>>>>>>>>>>>>>> Tomcat 8.5. Tomcat 9.0 is stable and
> >>>>>>>>>>>>>>>> probably the best version if you are
> >>>>>>>>>>>>>>>> looking to upgrade. Both Tomcat 8.5
> >>>>>>>>>>>>>>>> and 9.0 are continuing to get regular
> >>>>>>>>>>>>>>>> updates. But definitely move away
> >>>>>>>>>>>>>>>> from 8.0.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Are there any Ubuntu-specific
> >>>>>>>>>>>>>>>>>>> configs for Tomcat?
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> No. There is nothing particularly
> >>>>>>>>>>>>>>>> special about Ubuntu. Linux is one of
> >>>>>>>>>>>>>>>> the most well-performing platforms
> >>>>>>>>>>>>>>>> for the JVM. I wouldn't recommend
> >>>>>>>>>>>>>>>> switching platforms.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Why are you using nginx? You already
> >>>>>>>>>>>>>>>> have load-balancing happening in the
> >>>>>>>>>>>>>>>> ALB. Inserting another layer of
> >>>>>>>>>>>>>>>> proxying is probably just adding
> >>>>>>>>>>>>>>>> another buffer to the mix. I'd remove
> >>>>>>>>>>>>>>>> nginx if it's not providing any
> >>>>>>>>>>>>>>>> specific, measurable benefit.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> We are using the OkHttp client
> >>>>>>>>>>>>>>>>>>> library to call a REST API and
> >>>>>>>>>>>>>>>>>>> the stack trace shows the
> >>>>>>>>>>>>>>>>>>> failure at the API call. The
> >>>>>>>>>>>>>>>>>>> API being called is running on
> >>>>>>>>>>>>>>>>>>> the same Tomcat instance
> >>>>>>>>>>>>>>>>>>> (different context) using the
> >>>>>>>>>>>>>>>>>>> URL localhost. This does not
> >>>>>>>>>>>>>>>>>>> happen when the number of
> >>>>>>>>>>>>>>>>>>> requests is lower.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Your Tomcat server is calling this
> >>>>>>>>>>>>>>>> REST API? Or your server is serving
> >>>>>>>>>>>>>>>> those API requests? If your service
> >>>>>>>>>>>>>>>> is calling itself, then you have to
> >>>>>>>>>>>>>>>> make sure you have double-capacity:
> >>>>>>>>>>>>>>>> every incoming request will cause a
> >>>>>>>>>>>>>>>> loopback request to your own
> >>>>>>>>>>>>>>>> service.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Other than the timeouts, are you able
> >>>>>>>>>>>>>>>> to handle the load with your
> >>>>>>>>>>>>>>>> existing infrastructure? Sometimes,
> >>>>>>>>>>>>>>>> the solution is simply to throw more
> >>>>>>>>>>>>>>>> hardware at the problem.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> -chris
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> On Wed, May 27, 2020 at 11:48
> >>>>>>>>>>>>>>>>>>> AM Mark Thomas
> >>>>>>>>>>>>>>>>>>> <ma...@apache.org> wrote:
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> On 26/05/2020 23:28, Ayub
> >>>>>>>>>>>>>>>>>>>> Khan wrote:
> >>>>>>>>>>>>>>>>>>>>> Hi,
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> During high load I am
> >>>>>>>>>>>>>>>>>>>>> seeing the below error in
> >>>>>>>>>>>>>>>>>>>>> the Tomcat logs:
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> java.util.concurrent.ExecutionException:
> >>>>>>>>>>>>>>>>>>>>> java.net.SocketTimeoutException: timeout
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> And the rest of that stack
> >>>>>>>>>>>>>>>>>>>> trace? It is hard to provide
> >>>>>>>>>>>>>>>>>>>> advice without context. We
> >>>>>>>>>>>>>>>>>>>> need to know what is timing
> >>>>>>>>>>>>>>>>>>>> out when trying to do what.
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> We have 4 C5.18xlarge VMs
> >>>>>>>>>>>>>>>>>>>>> running Tomcat 8 behind an
> >>>>>>>>>>>>>>>>>>>>> AWS application load
> >>>>>>>>>>>>>>>>>>>>> balancer. We are seeing
> >>>>>>>>>>>>>>>>>>>>> socket timeouts during peak
> >>>>>>>>>>>>>>>>>>>>> hours. What should the
> >>>>>>>>>>>>>>>>>>>>> configuration of Tomcat be
> >>>>>>>>>>>>>>>>>>>>> if we get 60,000 to 70,000
> >>>>>>>>>>>>>>>>>>>>> requests per minute on
> >>>>>>>>>>>>>>>>>>>>> average?
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Tomcat 8.0.32 on Ubuntu
> >>>>>>>>>>>>>>>>>>>>> 16.04.5 LTS
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Tomcat 8.0.x is no longer
> >>>>>>>>>>>>>>>>>>>> supported.
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Below is the java version:
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> java version "1.8.0_181"
> >>>>>>>>>>>>>>>>>>>>> Java(TM) SE Runtime
> >>>>>>>>>>>>>>>>>>>>> Environment (build
> >>>>>>>>>>>>>>>>>>>>> 1.8.0_181-b13) Java
> >>>>>>>>>>>>>>>>>>>>> HotSpot(TM) 64-Bit Server
> >>>>>>>>>>>>>>>>>>>>> VM (build 25.181-b13, mixed
> >>>>>>>>>>>>>>>>>>>>> mode)
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Below is the server.xml
> >>>>>>>>>>>>>>>>>>>>> connector configuration:
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> <Connector port="8080"
> >>>>>>>>>>>>>>>>>>>>> protocol="org.apache.coyote.http11.Http11Nio2Protocol"
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Why NIO2?
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Mark
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> connectionTimeout="20000"
> >>>>>>>>>>>>>>>>>>>>> URIEncoding="UTF-8"
> >>>>>>>>>>>>>>>>>>>>> redirectPort="8443" />
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> We have 4 C5.18xlarge VMs,
> >>>>>>>>>>>>>>>>>>>>> and each VM has an nginx
> >>>>>>>>>>>>>>>>>>>>> and a Tomcat instance
> >>>>>>>>>>>>>>>>>>>>> running. All 4 VMs are
> >>>>>>>>>>>>>>>>>>>>> connected to the AWS
> >>>>>>>>>>>>>>>>>>>>> application load balancer.
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> I tried to add
> >>>>>>>>>>>>>>>>>>>>> maxConnections=50000 but
> >>>>>>>>>>>>>>>>>>>>> this does not seem to have
> >>>>>>>>>>>>>>>>>>>>> any effect, and I still saw
> >>>>>>>>>>>>>>>>>>>>> the timeouts.
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Thanks and Regards Ayub