Mark,
There's no particular reason why a request to node1:/app1 needs
to have its loopback request call node1:/app2, is there? Can
node1:/app1 call node2:/app2?
Yes, we can do that, but then we would have to use the DNS URLs. Won't
this cause network latency compared to a localhost call?
I have not changed the maxThreads config on any of the connectors. If I
have to customize it, how do I decide what value to use for maxThreads?
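
If it helps, this is roughly what I understand the change would look like
on one of the new per-app connectors (just a sketch on my side; the
maxThreads="200" below is Tomcat's default and only a placeholder, not a
value we have tested):

    <Connector port="8081"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               connectionTimeout="20000" URIEncoding="UTF-8"
               maxThreads="200"
               redirectPort="8443" />
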
Also, once in a while we see the below error and Tomcat stops serving
requests. Could you please let me know the cause of this issue?
org.apache.tomcat.util.net.NioEndpoint$Acceptor run
SEVERE: Socket accept failed
java.nio.channels.ClosedChannelException
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:235)
        at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:682)
        at java.lang.Thread.run(Thread.java:748)
On Thu, Jun 4, 2020 at 6:05 PM Ayub Khan <[email protected]> wrote:
> Mark,
>
>
> There's no particular reason why a request to node1:/app1 needs
> to have its loopback request call node1:/app2, is there? Can
> node1:/app1 call node2:/app2?
>
>
> Yes, we can do that, but then we would have to use the DNS URLs. Won't
> this cause network latency compared to a localhost call?
>
> I have not changed the maxThreads config on any of the connectors. If I
> have to customize it, how do I decide what value to use for maxThreads?
>
>
>
>
> On Mon, Jun 1, 2020 at 10:24 PM Christopher Schultz <
> [email protected]> wrote:
>
>>
>> Ayub,
>>
>> On 6/1/20 11:12, Ayub Khan wrote:
>> > Chris,
>> >
>> > As you described, I have added two new connectors in server.xml and
>> > am using nginx to redirect requests to the different connector ports.
>> > I have also configured nginx to route each app's traffic to a
>> > different connector port of Tomcat.
>> >
>> > In the config of each app I am using the port specific to the app
>> > being called: localhost:8081 for app2 and localhost:8082 for app3.
>> >
>> > So now in the config of app1 we call app2 using localhost:8081/app2
>> > and app3 using localhost:8082/app3.
>>
>> Perfect.
>>
>> > Could you explain the benefit of using this type of config? Will
>> > this help keep requests for one app from blocking the others?
>>
>> This ensures that requests for /app1 do not starve the thread pool for
>> requests to /app2. Imagine that you have a single connector, single
>> thread pool, etc. for two apps: /app1 and /app2 and there is only a
>> SINGLE thread in the pool available, and that each request to /app1
>> makes a call to /app2. Here's what happens:
>>
>> 1. Client requests /app1
>> 2. /app1 makes connection to /app2
>> 3. Request to /app2 stalls waiting for a thread to become available
>> (it's already allocated to the request from #1 above)
>>
>> You basically have a deadlock here, because /app1 isn't going to
>> give-up its thread, and the thread for the request to /app2 will not
>> be allocated until the request to /app1 gives up that thread.
>>
>> Now, nobody runs their application server with a SINGLE thread in the
>> pool, but this is instructive: it means that deadlock CAN occur.
>>
>> Let's take a more reasonable situation: you have 100 threads in the pool.
>>
>> Let's say that you get REALLY unlucky and the following series of
>> events occurs:
>>
>> 1. 100 requests come in simultaneously for /app1. All requests are
>> allocated a thread from the thread pool for these requests before
>> anything else happens. Note that the thread pool is currently
>> completely exhausted with requests to /app1.
>> 2. All 100 threads from /app1 make requests to /app2. Now you have 100
>> threads in deadlock similar to the contrived SINGLE thread situation
>> above.
>>
>> Sure, it's unlikely, but it CAN happen, especially if requests to
>> /app1 are mostly waiting on requests to /app2 to complete: you can
>> very easily run out of threads in a high-load situation. And a
>> high-load situation is EXACTLY what you reported.
>>
>> Let's take the example of separate thread pools per application. Same
>> number of total threads, except that 50 are in one pool for /app1 and
>> the other 50 are in the pool for /app2. Here's the series of events:
>>
>> 1. 100 requests come in simultaneously for /app1. 50 requests are
>> allocated a thread from the thread pool for these requests before
>> anything else happens. The other 50 requests are queued waiting on a
>> request-processing thread to become available. Note that the thread
>> pool for /app1 is currently completely exhausted with requests to /app1.
>> 2. 50 threads from /app1 make requests to /app2. All 50 requests get
>> request-processing threads allocated, perform their work, and complete.
>> 3. The 50 queued requests from step #1 above are now allocated
>> request-processing threads and proceed to make requests to /app2
>> 4. 50 threads (the second batch) from /app1 make requests to /app2.
>> All 50 requests get request-processing threads allocated, perform
>> their work, and complete.
>>
>> Here, you have avoided any possibility of deadlock.
>>
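>> To make this concrete, here is a rough sketch of what the two-pool
>> setup could look like in server.xml (ports and thread counts are
>> illustrative only, matching the 50/50 example above, not a
>> recommendation):
>>
>>     <!-- Pool for /app1; nginx proxies /app1 traffic to this port -->
>>     <Connector port="8081"
>>                protocol="org.apache.coyote.http11.Http11NioProtocol"
>>                maxThreads="50"
>>                connectionTimeout="20000" URIEncoding="UTF-8" />
>>
>>     <!-- Pool for /app2; loopback calls from /app1 go to localhost:8082 -->
>>     <Connector port="8082"
>>                protocol="org.apache.coyote.http11.Http11NioProtocol"
>>                maxThreads="50"
>>                connectionTimeout="20000" URIEncoding="UTF-8" />
>>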
>> Personally, I'd further decouple these services so that they are
>> running (possibly) on other servers, with different load-balancers,
>> etc. There's no particular reason why a request to node1:/app1 needs
>> to have its loopback request call node1:/app2, is there? Can
>> node1:/app1 call node2:/app2? If so, you should let it happen. It will
>> make your overall service more robust. If not, you should fix things
>> so it CAN be done.
>>
>> You might also want to make sure that you do the same thing for any
>> database connections you might use, although holding a database
>> connection open while making a REST API call might be considered a Bad
>> Idea.
>>
>> Hope that helps,
>> - -chris
>>
>> > On Mon, 1 Jun 2020, 16:27 Christopher Schultz,
>> <[email protected]>
>> > wrote:
>> >
>> > Ayub,
>> >
>> > On 5/31/20 09:20, Ayub Khan wrote:
>> >>>> On single tomcat instance how to map each app to different
>> >>>> port number?
>> >
>> > You'd have to use multiple <Engine> elements, which means separating
>> > everything, not just the <Connector>. It's more work on the
>> > Tomcat side, with the same problem of having a different port
>> > number, which you can get just by using a separate <Connector>.
>> >
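>> > To illustrate why it's more work, a sketch of what one extra
>> > <Engine> would mean in server.xml (names and ports are illustrative;
>> > each <Engine> needs its own enclosing <Service>):
>> >
>> >     <Service name="app2Service">
>> >       <Connector port="8081" protocol="HTTP/1.1"
>> >                  connectionTimeout="20000" />
>> >       <Engine name="app2Engine" defaultHost="localhost">
>> >         <Host name="localhost" appBase="webapps-app2"
>> >               unpackWARs="true" autoDeploy="true" />
>> >       </Engine>
>> >     </Service>
>> >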
>> > Since you have a reverse-proxy already, it's simpler to use the
>> > reverse-proxy as the port-selector and not worry about trying to
>> > actually enforce it at the Tomcat level.
>> >
>> > -chris
>> >
>> >>>> On Sun, 31 May 2020, 15:44 Christopher Schultz, <
>> >>>> [email protected]> wrote:
>> >>>>
>> >>>> Ayub,
>> >>>>
>> >>>> On 5/29/20 20:23, Ayub Khan wrote:
>> >>>>>>> Chris,
>> >>>>>>>
>> >>>>>>> You might want (2) and (3) to have their own,
>> >>>>>>> independent connector and thread pool, just to be
>> >>>>>>> safe. You don't want a connection in (1) to stall
>> >>>>>>> because a loopback connection can't be made to (2)/(3).
>> >>>>>>> Meanwhile, it's sitting there making no progress but
>> >>>>>>> also consuming a connection+thread.
>> >>>>>>>
>> >>>>>>> *There is only one connector per Tomcat, where all the
>> >>>>>>> applications receive the requests; they do not have an
>> >>>>>>> independent connector and thread pool per Tomcat. How do I
>> >>>>>>> configure an independent connector and thread pool per
>> >>>>>>> application per Tomcat instance? Below is the current
>> >>>>>>> connector config in each Tomcat instance:*
>> >>>>
>> >>>> You can't allocate a connector to a particular web
>> >>>> application -- at least not in the way that you think.
>> >>>>
>> >>>> What you have to do is use different port numbers. Users will
>> >>>> never use them, though. But since you have nginx (finally!
>> >>>> A reason to have it!), you can map /app1 to port 8080 and
>> >>>> /app2 to port 8081 and /app3 to port 8083 or whatever you
>> >>>> want.
>> >>>>
>> >>>> Internal loopback connections will either have to go through
>> >>>> nginx (which I wouldn't recommend) or know the correct port
>> >>>> numbers to use (which I *do* recommend).
>> >>>>
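>> >>>> As a rough sketch of the nginx side of that mapping (paths and
>> >>>> ports only illustrative):
>> >>>>
>> >>>>     location /app1/ { proxy_pass http://127.0.0.1:8080; }
>> >>>>     location /app2/ { proxy_pass http://127.0.0.1:8081; }
>> >>>>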
>> >>>> -chris
>> >>>>
>> >>>>>>> *<Connector port="8080"
>> >>>>>>> protocol="org.apache.coyote.http11.Http11NioProtocol"
>> >>>>>>> connectionTimeout="20000" URIEncoding="UTF-8"
>> >>>>>>> redirectPort="8443" />*
>> >>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> On Fri, May 29, 2020 at 9:05 PM Christopher Schultz <
>> >>>>>>> [email protected]> wrote:
>> >>>>>>>
>> >>>>>>> Ayub,
>> >>>>>>>
>> >>>>>>> On 5/28/20 17:25, Ayub Khan wrote:
>> >>>>>>>>>> Nginx is being used for image caching and
>> >>>>>>>>>> converting https to http requests before hitting
>> >>>>>>>>>> tomcat.
>> >>>>>>> So you encrypt between the ALB and your app server
>> >>>>>>> nodes? That's fine, though nginx probably won't offer
>> >>>>>>> any performance improvement for images (unless it's
>> >>>>>>> really caching dynamically-generated images from your
>> >>>>>>> application) or TLS termination.
>> >>>>>>>
>> >>>>>>>>>> The behavior I am noticing is the
>> >>>>>>>>>> application first throws Broken
>> >>>>>>>>>> pipe (client abort) exceptions at
>> >>>>>>>>>> random API calls, followed by
>> >>>>>>>>>> socket timeouts and then database
>> >>>>>>>>>> connection leak errors. This
>> >>>>>>>>>> happens only during high load.
>> >>>>>>>
>> >>>>>>> If you are leaking connections, that's going to be an
>> >>>>>>> application resource-management problem. Definitely
>> >>>>>>> solve that, but it shouldn't affect anything with
>> >>>>>>> Tomcat connections and/or threads.
>> >>>>>>>
>> >>>>>>>>>> During normal traffic, the open-file
>> >>>>>>>>>> count for the Tomcat process
>> >>>>>>>>>> fluctuates but stays below 500.
>> >>>>>>>>>> However, during high traffic, as
>> >>>>>>>>>> soon as the open-file count of a
>> >>>>>>>>>> Tomcat process goes above 10k, that
>> >>>>>>>>>> instance stops serving requests.
>> >>>>>>>
>> >>>>>>> Any other errors shown in the logs? Like
>> >>>>>>> OutOfMemoryError (for e.g. open files)?
>> >>>>>>>
>> >>>>>>>>>> If the open-file count goes beyond
>> >>>>>>>>>> 5k, it is certain it will never come
>> >>>>>>>>>> back below 500; at that point we
>> >>>>>>>>>> need to restart Tomcat.
>> >>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>>> There are three applications
>> >>>>>>>>>> installed on each Tomcat instance:
>> >>>>>>>>>>
>> >>>>>>>>>> 1) portal: the portal calls (2) and
>> >>>>>>>>>> (3) using localhost. Should we
>> >>>>>>>>>> change this to use DNS names
>> >>>>>>>>>> instead of localhost calls?
>> >>>>>>>>>>
>> >>>>>>>>>> 2) Services for portal 3) Services for portal and
>> >>>>>>>>>> mobile clients
>> >>>>>>>
>> >>>>>>> Are they all sharing the same connector / thread pool?
>> >>>>>>>
>> >>>>>>> You might want (2) and (3) to have their own,
>> >>>>>>> independent connector and thread pool, just to be
>> >>>>>>> safe. You don't want a connection in (1) to stall
>> >>>>>>> because a loopback connection can't be made to (2)/(3).
>> >>>>>>> Meanwhile, it's sitting there making no progress but
>> >>>>>>> also consuming a connection+thread.
>> >>>>>>>
>> >>>>>>> -chris
>> >>>>>>>
>> >>>>>>>>>> On Thu, May 28, 2020 at 4:50 PM Christopher
>> >>>>>>>>>> Schultz < [email protected]> wrote:
>> >>>>>>>>>>
>> >>>>>>>>>> Ayub,
>> >>>>>>>>>>
>> >>>>>>>>>> On 5/27/20 19:43, Ayub Khan wrote:
>> >>>>>>>>>>>>> If we have an 18-core CPU and
>> >>>>>>>>>>>>> 100 GB RAM, what value can I set
>> >>>>>>>>>>>>> for maxConnections?
>> >>>>>>>>>> Your CPU and RAM really have nothing to do with
>> >>>>>>>>>> it. It's more about your usage profile.
>> >>>>>>>>>>
>> >>>>>>>>>> For example, if you are serving small static
>> >>>>>>>>>> files, you can serve a million requests a minute
>> >>>>>>>>>> on a Raspberry Pi, many of them concurrently.
>> >>>>>>>>>>
>> >>>>>>>>>> But if you are performing fluid dynamic
>> >>>>>>>>>> simulations with each request, you will
>> >>>>>>>>>> obviously need more horsepower to service a
>> >>>>>>>>>> single request, let alone thousands of concurrent
>> >>>>>>>>>> requests.
>> >>>>>>>>>>
>> >>>>>>>>>> If you have tons of CPU and memory to spare,
>> >>>>>>>>>> feel free to crank-up the max connections. The
>> >>>>>>>>>> default is 10000 which is fairly high. At some
>> >>>>>>>>>> point, you will run out of connection allocation
>> >>>>>>>>>> space in the OS's TCP/IP stack, so that is really
>> >>>>>>>>>> your upper-limit. You simply cannot have more
>> >>>>>>>>>> than the OS will allow. See
>> >>>>>>>>>> https://stackoverflow.com/a/2332756/276232 for
>> >>>>>>>>>> some information about that.
>> >>>>>>>>>>
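>> >>>>>>>>>> For reference, maxConnections is just
>> >>>>>>>>>> an attribute on the <Connector>; a
>> >>>>>>>>>> sketch of cranking it up (the value
>> >>>>>>>>>> is purely illustrative and still
>> >>>>>>>>>> subject to the OS limits above):
>> >>>>>>>>>>
>> >>>>>>>>>>     <Connector port="8080"
>> >>>>>>>>>>                protocol="org.apache.coyote.http11.Http11NioProtocol"
>> >>>>>>>>>>                maxConnections="20000" />
>> >>>>>>>>>>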
>> >>>>>>>>>> Once you adjust your settings, perform a
>> >>>>>>>>>> load-test. You may find that adding more
>> >>>>>>>>>> resources actually slows things down.
>> >>>>>>>>>>
>> >>>>>>>>>>>>> We want to make sure we are
>> >>>>>>>>>>>>> utilizing the hardware to its
>> >>>>>>>>>>>>> maximum capacity. Is there any
>> >>>>>>>>>>>>> Tomcat config which, when enabled,
>> >>>>>>>>>>>>> could help serve more requests per
>> >>>>>>>>>>>>> Tomcat instance?
>> >>>>>>>>>>
>> >>>>>>>>>> Not really. Improving performance usually comes
>> >>>>>>>>>> down to tuning the application to make the
>> >>>>>>>>>> requests take less time to process. Tomcat is
>> >>>>>>>>>> rarely the source of performance problems (but
>> >>>>>>>>>> /sometimes/ is, and it's usually a bug that can
>> >>>>>>>>>> be fixed).
>> >>>>>>>>>>
>> >>>>>>>>>> You can improve throughput somewhat by
>> >>>>>>>>>> pipelining requests. That means HTTP keepalive
>> >>>>>>>>>> for direct connections (but with a small timeout;
>> >>>>>>>>>> you don't want clients who aren't making any
>> >>>>>>>>>> follow-up requests to waste your resources
>> >>>>>>>>>> waiting for a keep-alive timeout to close a
>> >>>>>>>>>> connection). For proxy connections (e.g. from
>> >>>>>>>>>> nginx), you'll want those connections to remain
>> >>>>>>>>>> open as long as possible to avoid the
>> >>>>>>>>>> re-negotiation of TCP and possibly TLS
>> >>>>>>>>>> handshakes. Using HTTP/2 can be helpful for
>> >>>>>>>>>> performance, at the cost of some CPU on the
>> >>>>>>>>>> back-end to perform the complicated connection
>> >>>>>>>>>> management that h2 requires.
>> >>>>>>>>>>
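>> >>>>>>>>>> As a sketch of the knobs involved
>> >>>>>>>>>> (values are illustrative only): for a
>> >>>>>>>>>> proxy-facing connector you want
>> >>>>>>>>>> connections from nginx to stay open a
>> >>>>>>>>>> long time,
>> >>>>>>>>>>
>> >>>>>>>>>>     <Connector port="8080"
>> >>>>>>>>>>                protocol="org.apache.coyote.http11.Http11NioProtocol"
>> >>>>>>>>>>                keepAliveTimeout="300000"
>> >>>>>>>>>>                maxKeepAliveRequests="-1" />
>> >>>>>>>>>>
>> >>>>>>>>>> while for direct client connections
>> >>>>>>>>>> you would keep keepAliveTimeout small
>> >>>>>>>>>> (a few seconds).
>> >>>>>>>>>>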
>> >>>>>>>>>> Eliminating useless buffering is often very
>> >>>>>>>>>> helpful. That's why I asked about nginx. What
>> >>>>>>>>>> are you using it for, other than as a barrier
>> >>>>>>>>>> between the load-balancer and your Tomcat
>> >>>>>>>>>> instances? If you remove nginx, I suspect you'll
>> >>>>>>>>>> see a measurable performance increase. This isn't
>> >>>>>>>>>> a knock against nginx: you'd see a performance
>> >>>>>>>>>> improvement by removing *any* reverse-proxy that
>> >>>>>>>>>> isn't providing any value. But you haven't said
>> >>>>>>>>>> anything about why it's there in the first place,
>> >>>>>>>>>> so I don't know if it /is/ providing any value to
>> >>>>>>>>>> you.
>> >>>>>>>>>>
>> >>>>>>>>>>>>> The current setup is able to
>> >>>>>>>>>>>>> handle most of the load; however,
>> >>>>>>>>>>>>> there are predictable times when
>> >>>>>>>>>>>>> there is an avalanche of requests,
>> >>>>>>>>>>>>> and we are thinking about how to
>> >>>>>>>>>>>>> handle that gracefully.
>> >>>>>>>>>>
>> >>>>>>>>>> You are using AWS: use auto-scaling. That's what
>> >>>>>>>>>> it's for.
>> >>>>>>>>>>
>> >>>>>>>>>> -chris
>> >>>>>>>>>>
>> >>>>>>>>>>>>> On Wed, May 27, 2020 at 5:38 PM Christopher
>> >>>>>>>>>>>>> Schultz < [email protected]>
>> >>>>>>>>>>>>> wrote:
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> Ayub,
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> On 5/27/20 09:26, Ayub Khan wrote:
>> >>>>>>>>>>>>>>>> Previously I was using the
>> >>>>>>>>>>>>>>>> HTTP/1.1 connector; recently I
>> >>>>>>>>>>>>>>>> changed to NIO2 to see the
>> >>>>>>>>>>>>>>>> performance. I read that NIO2
>> >>>>>>>>>>>>>>>> is non-blocking, so I am trying
>> >>>>>>>>>>>>>>>> to check how this works.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> Both NIO and NIO2 are non-blocking. They
>> >>>>>>>>>>>>> use different strategies for certain
>> >>>>>>>>>>>>> things. Anything but the "BIO" connector
>> >>>>>>>>>>>>> will be non-blocking for most operations.
>> >>>>>>>>>>>>> The default is NIO.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>> Which connector protocol do you
>> >>>>>>>>>>>>>>>> recommend, and what is the best
>> >>>>>>>>>>>>>>>> configuration for the connector?
>> >>>>>>>>>>>>> This depends on your environment, usage
>> >>>>>>>>>>>>> profile, etc. Note that non-blocking IO
>> >>>>>>>>>>>>> means more CPU usage: there is a
>> >>>>>>>>>>>>> trade-off.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>> Which stable version of tomcat would
>> >>>>>>>>>>>>>>>> you recommend ?
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> Always the latest, of course. Tomcat 8.0 is
>> >>>>>>>>>>>>> unsupported, replaced by Tomcat 8.5.
>> >>>>>>>>>>>>> Tomcat 9.0 is stable and probably the best
>> >>>>>>>>>>>>> version if you are looking to upgrade. Both
>> >>>>>>>>>>>>> Tomcat 8.5 and 9.0 are continuing to get
>> >>>>>>>>>>>>> regular updates. But definitely move away
>> >>>>>>>>>>>>> from 8.0.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>> Are there any Ubuntu-specific
>> >>>>>>>>>>>>>>>> configs for Tomcat?
>> >>>>>>>>>>>>> No. There is nothing particular special
>> >>>>>>>>>>>>> about Ubuntu. Linux is one of the most
>> >>>>>>>>>>>>> well-performing platforms for the JVM. I
>> >>>>>>>>>>>>> wouldn't recommend switching platforms.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> Why are you using nginx? You already have
>> >>>>>>>>>>>>> load-balancing happening in the ALB.
>> >>>>>>>>>>>>> Inserting another layer of proxying is
>> >>>>>>>>>>>>> probably just adding another buffer to the
>> >>>>>>>>>>>>> mix. I'd remove nginx if it's not
>> >>>>>>>>>>>>> providing any specific, measurable
>> >>>>>>>>>>>>> benefit.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>> We are using the OkHttp client
>> >>>>>>>>>>>>>>>> library to call the REST API,
>> >>>>>>>>>>>>>>>> and the stack trace shows the
>> >>>>>>>>>>>>>>>> failure at the API call. The
>> >>>>>>>>>>>>>>>> API being called is running on
>> >>>>>>>>>>>>>>>> the same Tomcat instance
>> >>>>>>>>>>>>>>>> (different context), using a
>> >>>>>>>>>>>>>>>> localhost URL. This does not
>> >>>>>>>>>>>>>>>> happen when the number of
>> >>>>>>>>>>>>>>>> requests is lower.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> Your Tomcat server is calling this REST
>> >>>>>>>>>>>>> API? Or your server is serving those API
>> >>>>>>>>>>>>> requests? If your service is calling
>> >>>>>>>>>>>>> itself, then you have to make sure you have
>> >>>>>>>>>>>>> double-capacity: every incoming request
>> >>>>>>>>>>>>> will cause a loopback request to your own
>> >>>>>>>>>>>>> service.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> Other than the timeouts, are you able to
>> >>>>>>>>>>>>> handle the load with your existing
>> >>>>>>>>>>>>> infrastructure? Sometimes, the
>> >>>>>>>>>>>>> solution is simply to throw more
>> >>>>>>>>>>>>> hardware at the problem.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> -chris
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>> On Wed, May 27, 2020 at 11:48 AM Mark
>> >>>>>>>>>>>>>>>> Thomas <[email protected]> wrote:
>> >>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>> On 26/05/2020 23:28, Ayub Khan
>> >>>>>>>>>>>>>>>>> wrote:
>> >>>>>>>>>>>>>>>>>> Hi,
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> During high load I am seeing
>> >>>>>>>>>>>>>>>>>> the below error in the Tomcat
>> >>>>>>>>>>>>>>>>>> logs:
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> java.util.concurrent.ExecutionException:
>> >>>>>>>>>>>>>>>>>> java.net.SocketTimeoutException: timeout
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>> And the rest of that stack trace?
>> >>>>>>>>>>>>>>>>> It is hard to provide advice
>> >>>>>>>>>>>>>>>>> without context. We need to know
>> >>>>>>>>>>>>>>>>> what is timing out when trying to
>> >>>>>>>>>>>>>>>>> do what.
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> We have 4 c5.18xlarge VMs
>> >>>>>>>>>>>>>>>>>> running Tomcat 8 behind an AWS
>> >>>>>>>>>>>>>>>>>> application load balancer. We
>> >>>>>>>>>>>>>>>>>> are seeing socket timeouts
>> >>>>>>>>>>>>>>>>>> during peak hours. What should
>> >>>>>>>>>>>>>>>>>> the configuration of Tomcat be
>> >>>>>>>>>>>>>>>>>> if we get 60,000 to 70,000
>> >>>>>>>>>>>>>>>>>> requests per minute on average?
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> Tomcat 8.0.32 on Ubuntu 16.04.5
>> >>>>>>>>>>>>>>>>>> LTS
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>> Tomcat 8.0.x is no longer
>> >>>>>>>>>>>>>>>>> supported.
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> Below is the java version:
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> java version "1.8.0_181"
>> >>>>>>>>>>>>>>>>>> Java(TM) SE Runtime Environment
>> >>>>>>>>>>>>>>>>>> (build 1.8.0_181-b13) Java
>> >>>>>>>>>>>>>>>>>> HotSpot(TM) 64-Bit Server VM
>> >>>>>>>>>>>>>>>>>> (build 25.181-b13, mixed mode)
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> Below is the server.xml connector
>> >>>>>>>>>>>>>>>>>> configuration:
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> <Connector port="8080"
>> >>>>>>>>>>>>>>>>>> protocol="org.apache.coyote.http11.Http11Nio2Protocol"
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>> Why NIO2?
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>> Mark
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> connectionTimeout="20000"
>> >>>>>>>>>>>>>>>>>> URIEncoding="UTF-8"
>> >>>>>>>>>>>>>>>>>> redirectPort="8443" />
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> We have 4 c5.18xlarge VMs, and
>> >>>>>>>>>>>>>>>>>> each VM has an nginx and a
>> >>>>>>>>>>>>>>>>>> Tomcat instance running. All 4
>> >>>>>>>>>>>>>>>>>> VMs are connected to the AWS
>> >>>>>>>>>>>>>>>>>> application load balancer.
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> I tried to add
>> >>>>>>>>>>>>>>>>>> maxConnections="50000" but this
>> >>>>>>>>>>>>>>>>>> does not seem to have any
>> >>>>>>>>>>>>>>>>>> effect, and I still saw the
>> >>>>>>>>>>>>>>>>>> timeouts.
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> Thanks and Regards Ayub
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>
>> >>>>>>>>
>> >>>>>>>
>> >>>>>
>> >>>>
>> >>
>> >
>>
>>
>
--
--------------------------------------------------------------------
Sun Certified Enterprise Architect 1.5
Sun Certified Java Programmer 1.4
Microsoft Certified Systems Engineer 2000
http://in.linkedin.com/pub/ayub-khan/a/811/b81
mobile:+966-502674604
----------------------------------------------------------------------
It is proved that Hard Work and knowledge will get you close but attitude
will get you there. However, it's the Love
of God that will put you over the top!!