Web socket connections scalability
Hi All, We are on Tomcat 9.0.44. I understand the NIO HTTP connector is used by default in Tomcat. We are planning to enable WebSocket communication. I would like to understand how many parallel WebSocket connections can be opened. I understand that the achievable connection count depends on the maxConnections setting and on the OS and environment where the server is running. Has anyone performed load testing on the number of WebSocket connections that can be achieved with this configuration? Best Regards, Saurav
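For reference, a hedged sketch of the NIO connector attributes that bound concurrent WebSocket connections (the attribute values here are illustrative assumptions, not recommendations). In Tomcat 9 the NIO connector's maxConnections defaults to 10000; an idle upgraded WebSocket connection does not occupy a request-processing thread, so maxThreads bounds concurrent message processing rather than open connections.

```xml
<!-- server.xml sketch; values are illustrative, tune per environment -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200"
           maxConnections="10000"
           connectionTimeout="20000" />
```

Beyond maxConnections, OS limits such as the per-process file descriptor cap (ulimit -n on Linux) typically become the binding constraint in load tests.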
Re: CVE-2021-44228 Log4j 2 Vulnerability -- How does this affect Tomcat?
Hi All, How do Tomcat access valves/logs work? Since the access log prints the whole URL, will there be any issue if the access logs use a Log4j2 implementation? Best Regards, Saurav On Sun, Dec 12, 2021 at 7:32 PM Christopher Schultz < ch...@christopherschultz.net> wrote: > Mark, > > On 12/11/21 18:39, Mark Thomas wrote: > > On 11/12/2021 22:04, Sebastian Hennebrüder wrote: > >> Hi all, > >> > >> I reproduced the attack against Tomcat 9.0.56 with latest Java 8 and > >> Java 11. Actually the Java patch version is not relevant. > > > > Utter nonsense. Tomcat is not vulnerable to this attack. > > > >> It is possible with a deployed Tomcat 9 and Spring Boot with Tomcat > >> embedded. > > > > The above statement fails to make clear that it is only true if a number > > of pre-conditions are also true. > > > > The Spring team have a blog that describes the vulnerable configurations > > and provides several possible workarounds: > > > > https://spring.io/blog/2021/12/10/log4j2-vulnerability-and-spring-boot > > > >> If your server can reach arbitrary servers on the Internet, you can > >> execute random code in the shell. > > > > The above statement fails to make clear that it is only true if a number > > of pre-conditions are also true. > > > >> The attack is not using RMI remote class loading but uses Tomcat's > >> BeanFactory to create an ELExpression library. As the BeanFactory has > >> features to manipulate instantiated classes, it can inject a Script. > >> In plain Java application this would still be blocked by RMI class > >> loading but Tomcat circumvents this. > > > > More mis-leading nonsense attempting to suggest that Tomcat is > > vulnerable. It isn't. 
> > > >> The attack is explained in 2019 by > >> https://www.veracode.com/blog/research/exploiting-jndi-injections-java > > > > What the authors of that blog make clear, but appears to have been > > completely ignored by the person posting to this list is nicely summed > > up at the end of the article: > > > > > > The actual problem here is not within the JDK or Apache Tomcat library, > > but rather in custom applications that pass user-controllable data to > > the "InitialContext.lookup()" function, as it still represents a > > security risk even in fully patched JDK installations. > > > > > > Any application that takes any user provided data and uses it without > > performing any validation and/or sanitization - particularly if that > > data is then used to construct a request to an external service - is > > probably going to create a security vulnerability in the application. > > +1 > > *This* is what makes the log4j vulnerability such a problem: it takes > something that should be allowed to be untrustworthy (raw user input) > and turns it into logic signalling. > > It is very easy to write an application that contains vulnerabilities. > Those vulnerabilities are NOT vulnerabilities in the hosting service, etc. > > Anyone running a reasonably recent version of Java should *not* be > subject to RCE. Exfiltration of data available through JNDI (which /may/ > be very interesting to attackers) is much more likely and much more > difficult to mitigate without either upgrading log4j or applying log4j's > mitigations (system property or format-modifier). > > -chris > > - > To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org > For additional commands, e-mail: users-h...@tomcat.apache.org > >
Internals of setMaxInactiveInterval
Hi All, I would like to understand the internals of Session#setMaxInactiveInterval in Tomcat. I understand that if no HTTP requests are received within the configured interval, the session is invalidated and all objects belonging to the session are gone. Does that also mean that existing requests that are part of the session will be terminated? I have a scenario where a large file transfer happens over a single request. So if one request (A) is long running and no other requests are sent within the interval, will request A be terminated, and will subsequent requests, even with a valid cookie, no longer be part of the same session? Or is the ongoing request's fate determined by other timeouts such as the read timeout? Thanks a lot, Best Regards, Saurav
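The mechanics of the idle timeout can be sketched with plain Java (this is an illustrative simplification, not Tomcat's actual StandardSession/StandardManager code; the class and method names here are hypothetical). The key point: expiration is an idle-time check run by a background thread, comparing the last access time against the interval; it does not interrupt a request-processing thread that is already running.

```java
// Simplified sketch of a session idle-timeout check (illustrative only,
// not Tomcat's actual implementation).
final class SessionSketch {
    private final long maxInactiveIntervalMillis;
    private long lastAccessedTime; // updated when a request arrives

    SessionSketch(int maxInactiveSeconds, long now) {
        this.maxInactiveIntervalMillis = maxInactiveSeconds * 1000L;
        this.lastAccessedTime = now;
    }

    // Called at the start of each request carrying this session's cookie.
    void access(long now) {
        lastAccessedTime = now;
    }

    // A background reaper calls this periodically; an expired session is
    // invalidated and its attributes dropped, but the reaper does not stop
    // any in-flight request thread.
    boolean isExpired(long now) {
        return (now - lastAccessedTime) >= maxInactiveIntervalMillis;
    }
}
```

Under this model, a long-running request that arrived before the interval elapsed may outlive its session; whether its socket survives is governed by connection-level timeouts (read/write timeouts), not by setMaxInactiveInterval.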
HTTP error response payload
Hi All, Through the Tomcat access valve I can view the HTTP request URL, response code, etc., but I cannot view the error response being sent in the form of a JSON payload. Is there any valve, filter, or other Tomcat-level setting that can enable this, or do the applications (server and clients) have to log it themselves? Best Regards, Saurav
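The access log valve only records request/response metadata, so capturing the body has to happen where the body is written. One common approach is to wrap the response's output stream so everything written is also copied into a buffer that a filter can log. The sketch below shows only the stream-duplication core in plain Java; wiring it into an HttpServletResponseWrapper installed by a Filter is left out and the integration point is an assumption, not a built-in Tomcat facility.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Duplicates everything written to a stream into a side buffer so the
// payload (e.g. a JSON error body) can be logged after the response
// is committed. Illustrative sketch; in a webapp the "main" stream
// would be the real ServletOutputStream.
final class TeeOutputStream extends OutputStream {
    private final OutputStream main;
    private final ByteArrayOutputStream copy = new ByteArrayOutputStream();

    TeeOutputStream(OutputStream main) {
        this.main = main;
    }

    @Override
    public void write(int b) throws IOException {
        main.write(b);
        copy.write(b);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        main.write(b, off, len);
        copy.write(b, off, len);
    }

    // What was sent to the client, available for logging.
    String captured() {
        return copy.toString();
    }
}
```

Note that buffering the full body this way costs memory, so real implementations usually cap the captured size.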
Re: Getting more information on connection refused error
Yes... I thought so. Thanks for the JMX hint, Chris. Best Regards, Saurav On Wed, Nov 13, 2019 at 10:00 PM Christopher Schultz < ch...@christopherschultz.net> wrote: > Saurav, > > On 11/13/19 10:19, Saurav Sarkar wrote: > > Hi All, > > > > We invoke one service which runs on a OSGI Virgo based embeddable > > tomcat servlet container. > > org.eclipse.gemini.web.tomcat_2.2.6.RELEASE.jar is the version we > > use. > > > > After sending some load to the service, it starts giving > > "connection refused error". > > > > errno: 111 (Connection refused), error: Connection refused (local > > port to address (), remote port > > 8443 to address ). > > > > I assume this is due to fact that the server socket has run out of > > connections and is not accepting any any more connection because > > this happens under heavy load testing. > > > > I would like to know how can i get more information on why > > connection was refused ? > > > > Will tomcat log this and what is logged under this scenario ? or > > does this reaches at all to tomcat and rather terminated at the OS > > layer itself ? > > The connections are being refused by the OS's TCP/IP stack. Tomcat > does not even know that these connections are being refused, so it > cannot put anything in the logs. > > If you monitor your Tomcat instance[1], you should be able to see how > many connections are in-flight and guess that you might be sometimes > hitting your limit. 
> > - -chris > > [1] http://tomcat.apache.org/presentations.html#latest-monitoring-with-jmx >
Getting more information on connection refused error
Hi All, We invoke a service which runs on an OSGi Virgo based embeddable Tomcat servlet container. org.eclipse.gemini.web.tomcat_2.2.6.RELEASE.jar is the version we use. After sending some load to the service, it starts giving a "connection refused" error. errno: 111 (Connection refused), error: Connection refused (local port to address (), remote port 8443 to address ). I assume this is because the server socket has run out of connections and is not accepting any more, since this happens under heavy load testing. I would like to know how I can get more information on why the connection was refused. Will Tomcat log this, and what is logged in this scenario? Or does this not reach Tomcat at all and is instead terminated at the OS layer itself? Best Regards, Saurav
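The "terminated at the OS layer" behaviour can be demonstrated without Tomcat at all: when nothing is listening (or the kernel rejects the attempt), the client sees ECONNREFUSED and the server process never observes the connection, so it has nothing to log. A minimal sketch (assumes a local free port can be found by binding and releasing port 0; there is a tiny race if another process grabs the port in between):

```java
import java.io.IOException;
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// "Connection refused" comes from the OS TCP stack, not the application:
// connect to a port with no listener and the kernel answers with a reset
// before any server code could see or log the attempt.
final class RefusedDemo {
    static boolean connectIsRefused() throws IOException {
        int freePort;
        try (ServerSocket ss = new ServerSocket(0)) {
            freePort = ss.getLocalPort();
        } // socket closed here: nothing listens on freePort any more
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("127.0.0.1", freePort), 1000);
            return false; // unexpectedly connected
        } catch (ConnectException e) {
            return true; // refused by the OS; no server-side log exists
        }
    }
}
```

Under heavy load the analogous situation is the listen backlog (Tomcat's acceptCount) overflowing, which is likewise handled by the kernel before Tomcat sees the socket.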
Re: Default Max response size in Tomcat
Thanks a lot Olaf for the response. Yes, I have taken care of that condition. Please find the full code below:

protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    try {
        String param = request.getParameter("size");
        if (param != null) {
            int kByte = Integer.parseInt(param);
            response.setContentType("application/octet-stream");
            long length = kByte * 1024L;
            if (length <= Integer.MAX_VALUE) {
                response.setContentLength((int) length);
            } else {
                response.addHeader("Content-Length", Long.toString(length));
            }
            // response.setContentLength(kByte * 1024);
            ServletOutputStream outputStream = response.getOutputStream();
            byte[] buffer = new byte[1024];
            Random random = new Random(System.currentTimeMillis());
            long size = 0;
            while (size < kByte) {
                random.nextBytes(buffer);
                outputStream.write(buffer);
                size += 1;
            }
            outputStream.flush();
            return;
        }
    } catch (Exception e) {
        e.printStackTrace();
        response.sendError(500, e.getMessage());
        return;
    }
}

On Wed, Mar 20, 2019 at 6:26 PM Olaf Kock wrote: > > > On 20.03.19 12:08, Saurav Sarkar wrote: > > Just to add the stack trace. > > > > I am getting ClientAbortException "Connection reset by peer" when i am > > trying to write to the response stream > > > > 2019-03-20T10:32:28.501+ [APP/PROC/WEB/0] ERR > > org.apache.catalina.connector.ClientAbortException: java.io.IOException: > > Connection reset by peer > > 2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at > > org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:364) > ... > > > > > On Wed, Mar 20, 2019 at 3:51 PM Saurav Sarkar > > wrote: > > > >> Hi All > >> > >> I have a very basic test application which serves bytes from memory and > >> gives it back to the client. > >> > >> Whenever i try to send the request for byte size which is of over 2 GB i > >> get a connection reset error in my server code and a 502 error in my chrome > >> console. Below 2 GB it is working fine. 
> >> In my client side i execute java script which i execute from the > browser. > >> This basically executes an XMLHTTPRequest , gets the response (stores in > >> browser memory) and asks for a save. > >> > >> I would like to know if there Is there max response size default value > >> which is set in default tomcat configuration. ? or any other hints which > >> you can provide in my use. > >> > >> > >> Thanks and Regards, > >> > >> Saurav > >> > >> Below is the servlet or server side code > >> > >> > >> > >> response.setContentLength((int)length); > >> > >> } > >> > >> else > >> > >> { > >> > >> response.addHeader("Content-Length", Long.toString(length)); > >> > >> } > > You don't include the initial condition in your code, but I'm assuming > that the first line is hit: response.setContentLength((int)length); > > int in Java is defined to be 32 bit, and always signed. That means that > any value larger than 2^31 or Integer.MAX_VALUE can't be expressed in > int as a positive number. In fact, anything between 2^31 and 2^32 will > be interpreted as a negative number, so you're effectively setting the > content length to be negative. > > Note that there's also a setContentLengthLong method > > https://docs.oracle.com/javaee/7/api/javax/servlet/ServletResponse.html#setContentLengthLong-long-
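Olaf's explanation of the signed 32-bit wrap can be verified directly. Any length in the range (2^31, 2^32) survives the `(int)` cast only as a negative number, which is why the response with the ~3 GB Content-Length fails (Servlet 3.1 also offers setContentLengthLong to avoid the cast entirely):

```java
// Why setContentLength((int) length) breaks above 2 GB: the cast keeps
// only the low 32 bits, and values between 2^31 and 2^32 come out negative.
final class OverflowDemo {
    static int castToInt(long length) {
        return (int) length; // truncates to the low 32 bits
    }
}
```

For example, 3 GiB (3221225472) truncates to a negative int, while anything up to Integer.MAX_VALUE round-trips unchanged.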
Re: Default Max response size in Tomcat
/0] ERR at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at java.lang.Thread.run(Thread.java:836)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR Caused by: java.io.IOException: Connection reset by peer
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:50)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at sun.nio.ch.IOUtil.write(IOUtil.java:65)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:478)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at org.apache.tomcat.util.net.NioChannel.write(NioChannel.java:134)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at org.apache.tomcat.util.net.NioBlockingSelector.write(NioBlockingSelector.java:101)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at org.apache.tomcat.util.net.NioSelectorPool.write(NioSelectorPool.java:157)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.doWrite(NioEndpoint.java:1306)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at org.apache.tomcat.util.net.SocketWrapperBase.doWrite(SocketWrapperBase.java:726)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at org.apache.tomcat.util.net.SocketWrapperBase.writeBlocking(SocketWrapperBase.java:496)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at org.apache.tomcat.util.net.SocketWrapperBase.write(SocketWrapperBase.java:434)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at org.apache.coyote.http11.Http11OutputBuffer$SocketOutputBuffer.doWrite(Http11OutputBuffer.java:623)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at org.apache.coyote.http11.filters.IdentityOutputFilter.doWrite(IdentityOutputFilter.java:127)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at org.apache.coyote.http11.Http11OutputBuffer.doWrite(Http11OutputBuffer.java:225)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at org.apache.coyote.Response.doWrite(Response.java:602)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:352)
2019-03-20T10:32:28.502+ [APP/PROC/WEB/0] ERR ... 39 more
On Wed, Mar 20, 2019 at 3:51 PM Saurav Sarkar wrote: > Hi All > > I have a very basic test application which serves bytes from memory and > gives it back to the client. > > Whenever i try to send the request for byte size which is of over 2 GB i > get a connection reset error in my server code and a 502 error in my chrome > console. Below 2 GB it is working fine. > > In my client side i execute java script which i execute from the browser. > This basically executes an XMLHTTPRequest , gets the response (stores in > browser memory) and asks for a save. > > I would like to know if there Is there max response size default value > which is set in default tomcat configuration. ? or any other hints which > you can provide in my use. 
> > > Thanks and Regards, > > Saurav > > Below is the servlet or server side code
>
> response.setContentLength((int)length);
> }
> else
> {
> response.addHeader("Content-Length", Long.toString(length));
> }
> // response.setContentLength(kByte * 1024);
>
> ServletOutputStream outputStream = response.getOutputStream();
> byte[] buffer = new byte[1024];
> Random random = new Random(System.currentTimeMillis());
>
> long size = 0;
> while (size < kByte) {
> random.nextBytes(buffer);
> outputStream.write(buffer);
> size += 1;
> }
>
> outputStream.flush();
>
> return;
> }
> }catch (Exception e) {
> e.printStackTrace();
> response.sendError(500, e.getMessage());
> return;
> }
> }
Default Max response size in Tomcat
Hi All, I have a very basic test application which serves bytes from memory and gives them back to the client. Whenever I send a request for a byte size of over 2 GB, I get a connection reset error in my server code and a 502 error in my Chrome console. Below 2 GB it works fine. On my client side I execute JavaScript from the browser. This executes an XMLHttpRequest, gets the response (stored in browser memory) and asks for a save. I would like to know if there is a default max response size set in the default Tomcat configuration, or any other hints you can provide for my use case. Thanks and Regards, Saurav Below is the servlet (server side) code:

response.setContentLength((int)length);
}
else
{
    response.addHeader("Content-Length", Long.toString(length));
}
// response.setContentLength(kByte * 1024);
ServletOutputStream outputStream = response.getOutputStream();
byte[] buffer = new byte[1024];
Random random = new Random(System.currentTimeMillis());
long size = 0;
while (size < kByte) {
    random.nextBytes(buffer);
    outputStream.write(buffer);
    size += 1;
}
outputStream.flush();
return;
}
} catch (Exception e) {
    e.printStackTrace();
    response.sendError(500, e.getMessage());
    return;
}
}
Re: Parsing of multi part content
Thanks a lot Chris for the reply. I think even if I parse the request myself I always have to load the content into memory/disk, because in order to extract the uploaded file from the request I have to go through the whole request stream and trim off the boundaries. Best Regards, Saurav On Thu, Jan 3, 2019 at 3:20 AM Christopher Schultz < ch...@christopherschultz.net> wrote: > Saurav, > > On 1/2/19 12:20, Saurav Sarkar wrote: > > Hi All, > > > > This is regarding the reading of multi part content in java server > > side. > > > > ServletRequest has an API getParts() API for reading the parts of a > > multi part request > > https://docs.oracle.com/javaee/6/api/javax/servlet/http/HttpServletRequest.html#getParts() > > > > . > > > > It has part.getInputStream which can be used to read the content of > > a specific part. > > > > Tomcat also provides an implementation for this API. > > > > But this API parses the multi part content and keeps it in memory. > > If the size increase then the content can be offloaded to disk. > > > > Why does the getPart API or any multi part parsing need to load the > > content in memory ? Why can't direct streaming of content happen ? > > Loading the content in memory and reading/writing to disk brings > > extra cost. This will be specially costly when large files are > > getting uploaded. > > True. You can always limit the part-size or request-size, but you > can't stream huge uploads if you want to use getParts(). > > > Is there no way where at least the file content loading could be > > avoided ? > > Yes, there is a way. > > Instead of calling HttpServletRequest.getParameter* or > HttpServletRequest.getPart*, you can call > HttpServletRequest.getInputStream and parse everything yourself. 
> > > It may be not be a very specific question for tomcat but more > > applicable to any servlet container. > > Correct, this is applicable to any servlet container. > > The multipart code in Tomcat parses everything to memory/disk at once > because servlet code needs to be able to call > HttpServletRequest.getParameter(String) in any order regardless of > how the request data is actually ordered. Also, getParts must > return before the calling code can actually do anything with the data. > There is no "register a stream handler for a multipart request part > called 'foo'" or anything like that. > > If you want those semantics, you'll have to parse the request yourself. > > - -chris
Parsing of multi part content
Hi All, This is regarding the reading of multipart content on the Java server side. ServletRequest has a getParts() API for reading the parts of a multipart request: https://docs.oracle.com/javaee/6/api/javax/servlet/http/HttpServletRequest.html#getParts() . Each Part has getInputStream(), which can be used to read the content of a specific part. Tomcat also provides an implementation of this API. But this API parses the multipart content and keeps it in memory; if the size increases, the content can be offloaded to disk. Why does the getParts() API, or any multipart parsing, need to load the content into memory? Why can't the content be streamed directly? Loading the content into memory and reading/writing to disk brings extra cost, which is especially costly when large files are being uploaded. Is there no way to at least avoid loading the file content? This may not be a very Tomcat-specific question but rather applicable to any servlet container. Best Regards, Saurav
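To see why the whole stream must be walked to recover an uploaded file, here is a greatly simplified, text-only sketch of multipart parsing: the body is split on the boundary delimiter and each part's headers are separated from its content at the blank line. This is illustrative only; a real parser (such as Tomcat's) works on bytes, honours strict CRLF framing, streams rather than holding the body in a String, and enforces size limits.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

// Text-only multipart sketch: split on "--boundary", then separate each
// part's headers from its content at the first blank line (CRLF CRLF).
final class MultipartSketch {
    static List<String> partBodies(String body, String boundary) {
        List<String> bodies = new ArrayList<>();
        for (String chunk : body.split(Pattern.quote("--" + boundary))) {
            int headerEnd = chunk.indexOf("\r\n\r\n"); // blank line ends part headers
            if (headerEnd < 0) {
                continue; // preamble or the closing "--" marker
            }
            String content = chunk.substring(headerEnd + 4);
            // strip the trailing CRLF that precedes the next boundary
            if (content.endsWith("\r\n")) {
                content = content.substring(0, content.length() - 2);
            }
            bodies.add(content);
        }
        return bodies;
    }
}
```

Even in this toy form, a part's content can only be handed over after scanning forward to the next boundary, which is the structural reason getParts() buffers to memory or disk.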
Re: Connector protocol and request handling in Tomcat 8
Thanks a lot Mark again. Actually I made a mistake in getting the correct thread dump from my server when it was not accepting any further requests. We make blocking network I/O calls which block the threads. I see that the current threads go into the park state when waiting to read from the input stream. The system stops responding when 200 concurrent requests have been fired. This situation obviously improves after some time when request processing finishes. I think this can be improved by implementing non-blocking asynchronous calls in my Java servlet code, which should free up my threads. Please provide your further inputs. Best Regards, Saurav On Mon, Dec 3, 2018 at 10:26 PM Mark Thomas wrote: > On 03/12/2018 15:26, Saurav Sarkar wrote: > > Thanks a lot Mark for the reply. > > > > Please bear with me for my follow up questions :) > > > > Does the park state (in visual vm) depicts the connection is idle and > > waiting for requests ? > > There is no direct correlation between thread and connection. A thread > is only assigned to a connection when there is a request on that > connection to process. Once the request has been processed the thread > returns to the pool. > > The park state (as shown below) means the thread is idle in the pool > waiting to be assigned to a connection with a request to process. > > > I see all threads reaching to this stage and my tomcat stops accepting > any > > further requests > > Then there is something wrong in your system but it isn't related to the > size of Tomcat's thread pool. 
> > > "http-nio-0.0.0.0-8080-exec-1357" - Thread t@80536 > >java.lang.Thread.State: WAITING > > at jdk.internal.misc.Unsafe.park(Native Method) > > - parking to wait for (a > > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > > at sun.misc.Unsafe.park(Unsafe.java:1079) > > at > java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > > at > > > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) > > at > > > java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) > > at > org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:103) > > at > org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:31) > > at > > > java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) > > at > > > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) > > at > > > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > > at > > > org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) > > at java.lang.Thread.run(Thread.java:836) > > > > Also one general question : Isn't the persistent connection mechanism > > counter productive with nio handling ? > > No. > > > Because i will be never able to achieve high throughput if persistent > > connections are established. > > Also incorrect. > > > Only way for me to achieve is to increase the number of threads. > > Given you have a large number of idle threads, that statement does not > seem logical. > > > We have 8G instances for 200 threads. I don't know how many threads we > can > > scale up to. > > That is highly application dependent. I've seen apps that can choke a > server with 8G RAM and just 5 concurrent requests and apps that are > barely loading a server with 1G RAM and over 1500 concurrent requests. 
> > There is something else going wrong in your system if the system freezes > with Tomcat threads in the idle state. > > What other components are there between the clients and Tomcat (proxies, > firewalls, etc.)? > > If you provide a complete thread dump for when the system is hung we can > try and provide additional pointers. > > Mark > > > > > > Best Regards, > > Saurav > > > > > > On Mon, Dec 3, 2018 at 4:14 PM Mark Thomas wrote: > > > >> On 03/12/2018 09:24, Saurav Sarkar wrote: > >>> Hi All, > >>> > >>> I want to know the connector's protocol which is being used in my > tomcat > >> 8 > >>> container and clear the behaviour of request handling > >>> > >>> We have a cloud foundry based application running on java build pack. > >>> > >>> Below is the connector settings in server.xml > >>> > >>>>>> > >
Re: Connector protocol and request handling in Tomcat 8
Thanks a lot Mark for the reply. Please bear with me for my follow-up questions :) Does the park state (in VisualVM) mean the connection is idle and waiting for requests? I see all threads reaching this state and my Tomcat stops accepting any further requests.

"http-nio-0.0.0.0-8080-exec-1357" - Thread t@80536
java.lang.Thread.State: WAITING
at jdk.internal.misc.Unsafe.park(Native Method)
- parking to wait for (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at sun.misc.Unsafe.park(Unsafe.java:1079)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:103)
at org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:31)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:836)

Also one general question: isn't the persistent connection mechanism counterproductive with NIO handling? Because I will never be able to achieve high throughput if persistent connections are established. The only way for me to achieve it is to increase the number of threads. We have 8G instances for 200 threads. I don't know how many threads we can scale up to. 
Best Regards, Saurav On Mon, Dec 3, 2018 at 4:14 PM Mark Thomas wrote: > On 03/12/2018 09:24, Saurav Sarkar wrote: > > Hi All, > > > > I want to know the connector's protocol which is being used in my tomcat > 8 > > container and clear the behaviour of request handling > > > > We have a cloud foundry based application running on java build pack. > > > > Below is the connector settings in server.xml > > > >> > >bindOnInit="false" > > > >compression="on" > > > > > compressableMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json" > > > >allowTrace="false" > > > >address="${connector.address}" > > > >maxHttpHeaderSize="8192" > > > >maxThreads="200" > > > >server="tomcat" /> > > > > > > It does not show any connector details. > > > > > > My thread dumps shows http-nio-exec threads and reaches to maximum of 200 > > threads. > > > > > > > > Does that mean Nio connector is used ? > > Yes. > > > But i am not able to address more than 200 threads . I understand that if > > Nio connector is used then maxThreads values be ignored and i can at > least > > accept more requests. > > maxThreads is not ignored in your configuration. > > That configuration will support a maximum of 200 concurrent requests and > 10,000 concurrent connections. > > Note that with HTTP keep-alive connections are often idle (not currently > processing request) so concurrent connections > concurrent requests. > > Mark >
Connector protocol and request handling in Tomcat 8
Hi All, I want to know which connector protocol is being used in my Tomcat 8 container and to understand the behaviour of request handling. We have a Cloud Foundry based application running on the Java buildpack. Below are the connector settings in server.xml (the Connector element does not specify a protocol attribute). My thread dumps show http-nio-exec threads, reaching a maximum of 200 threads. Does that mean the NIO connector is used? But I am not able to go beyond 200 threads. I understood that if the NIO connector is used then the maxThreads value would be ignored and I could at least accept more requests. Best Regards, Saurav
Basic question related to NIO connector and Async servlet processing
Hi All, I have a basic question about using async servlets with the Tomcat NIO connector. I want to use async servlets with non-blocking I/O as per the servlet spec: https://docs.oracle.com/javaee/7/tutorial/servlets013.htm such that the HTTP worker threads are released and the container threads won't sit idle on I/O operations either. I am on Tomcat 7. As I understand it, the default Tomcat connector (BIO) is blocking and uses a thread-per-connection model. I am not clear on whether using async non-blocking I/O in servlets alone would suffice. Won't the HTTP worker threads be released here, or will a thread be held for the lifetime of the connection? The NIO connector allocates threads only when processing is required. Will using the NIO selector release the HTTP worker threads only if it is used in conjunction with asynchronous non-blocking I/O servlets? Best Regards, Saurav
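The thread-freeing effect that async processing buys can be illustrated with plain java.util.concurrent, independent of the Servlet API (this sketch is an analogy, not the AsyncContext/startAsync mechanism itself; pool sizes and sleep durations are arbitrary). With blocking handling, a small "request pool" can only make progress on as many slow requests as it has threads; handing the slow work to a separate pool lets the request threads return immediately, which is the effect a servlet gets from startAsync() plus non-blocking I/O.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Compare total time for 6 "requests" of slow work on a 2-thread request
// pool: handled inline (blocking) vs. offloaded to a worker pool (async).
final class AsyncDemo {
    static long runBlocking(int requests, long workMillis) throws InterruptedException {
        ExecutorService requestPool = Executors.newFixedThreadPool(2);
        CountDownLatch done = new CountDownLatch(requests);
        long start = System.nanoTime();
        for (int i = 0; i < requests; i++) {
            // request thread is tied up for the whole slow operation
            requestPool.execute(() -> { sleep(workMillis); done.countDown(); });
        }
        done.await();
        requestPool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    static long runOffloaded(int requests, long workMillis) throws InterruptedException {
        ExecutorService requestPool = Executors.newFixedThreadPool(2);
        ExecutorService workerPool = Executors.newFixedThreadPool(requests);
        CountDownLatch done = new CountDownLatch(requests);
        long start = System.nanoTime();
        for (int i = 0; i < requests; i++) {
            // request thread returns at once; the slow wait happens elsewhere
            requestPool.execute(() ->
                workerPool.execute(() -> { sleep(workMillis); done.countDown(); }));
        }
        done.await();
        requestPool.shutdown();
        workerPool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Note that on Tomcat 7's BIO connector a thread is still held per connection at the socket level, so the full benefit of this pattern needs the NIO connector underneath.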