Re: [External] Re: Supporting Proxy Protocol in Tomcat

2023-11-27 Thread Jonathan S. Fisher
Hello Adwait,
I originally was going to sponsor this to get it done before year end.
Unfortunately my timeline got pushed to 2024 as we found a more
impactful area to make performance improvements. It's still a very
valuable and important Tomcat feature. The original PR is a good
starting place, and needs Mark's feedback implemented along with the
discussion notes I sent a few months back. After that, some tests need to be
written to check for edge cases, especially around protocol parsing.
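
For context, here is a rough, untested sketch (not the PR code) of the kind of
v2 signature and length handling those parsing tests would need to exercise;
the layout follows the PROXY protocol v2 specification, and the class and
method names are illustrative only:

import java.nio.ByteBuffer;

public class ProxyV2HeaderSketch {

    // 12-byte signature defined by the PROXY protocol v2 specification
    private static final byte[] SIGNATURE = {
            0x0D, 0x0A, 0x0D, 0x0A, 0x00, 0x0D, 0x0A, 0x51, 0x55, 0x49, 0x54, 0x0A };

    /**
     * Returns the declared length of the address block, or -1 if the buffer
     * (assumed to be positioned at 0) does not yet hold a valid v2 header.
     * Real code also has to handle partial reads, LOCAL commands and
     * unsupported address families.
     */
    static int validateHeader(ByteBuffer buf) {
        if (buf.remaining() < 16) {
            return -1; // need more bytes before a decision can be made
        }
        for (int i = 0; i < SIGNATURE.length; i++) {
            if (buf.get(i) != SIGNATURE[i]) {
                return -1; // not PROXY protocol v2 - reject the connection
            }
        }
        int versionAndCommand = buf.get(12) & 0xFF;
        if ((versionAndCommand >> 4) != 0x2) {
            return -1; // unknown protocol version
        }
        // Bytes 14-15: length of the address block, network byte order
        return ((buf.get(14) & 0xFF) << 8) | (buf.get(15) & 0xFF);
    }
}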

On Tue, Nov 21, 2023 at 3:56 PM Adwait Kumar Singh  wrote:
>
> Hey,
>
> Checking in on this thread. Is someone actively working on this?
>
> I am more than happy to contribute/help in any way to move this forward
> quickly.
>
>
> Thanks,
> Adwait.
>
> On Tue, Sep 5, 2023 at 1:11 PM Mark Thomas  wrote:
>
> > On 04/09/2023 15:41, Jonathan S. Fisher wrote:
> > > Mark thank you again for your leadership and setting expectations. I'm
> > > going to commit to working on this with anyone else that wants to help
> > > with the goal of a patch by year end. I want to nail the patch with minimal
> > > rework that meets Tomcat project quality standards. To that end, I'll
> > > attempt to summarize what you expect here and if you could comment and
> > > correct my understanding that would be appreciated.
> > >
> > > It sounds like you're satisfied with the ubiquity of the Proxy protocol
> > > and that it has an RFC
> > > We'll target just implementing the latest version of the Proxy protocol
> > > We'll implement a "TrustedProxies" feature similar to what the Remote IP
> > > Valve does
> > > We'll implement a new valve, or modify the RemoteIp valve, to be able to
> > > set the remote IP from Proxy protocol headers
> > > We'll follow the RFC spec and reject any request that does not send a
> > > proper Proxy protocol header
> > > I'm particularly interested in the Proxy protocol over Unix Domain Sockets,
> > > so expect to see a lot of the work focused on this, but accepting Proxy
> > > Protocol over TCP looks to be quite important from the comments on this
> > > email chain
> > >
> > > If I may ask two things:
> > > Can you summarize your desired implementation? What point in the stack
> > > should we target to implement this?
> >
> > See my response earlier in this thread that suggested it sits alongside
> > SNI processing. I still think that makes sense. If during implementation
> > you reach a different conclusion then make the case for the alternative
> > approach on list.
> >
> > > One thing I'm not familiar with on Tomcat is the testing expectations. If
> > > you can point to a set of unit tests and a set of integration tests and say
> > > "Do it like this"
> >
> > Something like (only a guide)
> >
> >
> > https://github.com/apache/tomcat/blob/main/test/org/apache/tomcat/util/net/TestTLSClientHelloExtractor.java
> >
> > to test the implementation directly and probably something based on
> > SimpleHttpClient see
> >
> >
> > https://github.com/apache/tomcat/blob/main/test/org/apache/coyote/http11/TestHttp11Processor.java
> >
> > for various examples. The main thing is I suspect you'll need control of
> > the individual bytes and SimpleHttpClient provides a reasonably simple
> > basis for that.
> >
> > What we often do when we want to test things like setting remote IP
> > addresses etc. is echo the value in the response body and then check
> > that value in the client.
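
A minimal sketch of that echo pattern, assuming a Jakarta Servlet test webapp
(the class name is illustrative, not taken from the existing test suite):

import java.io.IOException;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

public class EchoRemoteAddrServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Echo the remote address so the test client can assert on the body
        resp.setContentType("text/plain");
        resp.getWriter().print(req.getRemoteAddr());
    }
}

The test client would then send a request preceded by a PROXY protocol header
and assert that the response body matches the source address carried in that
header.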
> >
> > > Anything else on the original patch you liked/didn't like? (
> > > https://bz.apache.org/bugzilla/show_bug.cgi?id=57830)
> >
> > It helps if you enable Checkstyle for your local build. It helps keep
> > things in roughly the same coding style (we are slowly tightening up on
> > that). Ideally, use the clean-up and formatting configurations we have
> > for Eclipse in res/ide-support/eclipse .
> >
> > This is sufficiently complex that I am expecting several iterations to
> > be required. If it is simpler for you to manage with a PR then that is
> > fine and probably easier to work with than a patch in Bugzilla.
> >
> > Mark
> >
> > >
> > > Thank you,
> > >
> > >
> > > On Tue, Aug 29, 2023 at 3:13 PM Mark Thomas  wrote:
> > >
> > >> On 28/08/2023 18:44, Amit Pande wrote:
> > >>> Oh, sure. So, what would be the best way to get some conclusion on this
> > >> thread?
> > >>
> > >> Provide a patch for review based on the feedback provided here and in
> > >> the BZ issue.
> > >>
> > >>> https://bz.apache.org/bugzilla/show_bug.cgi?id=57830 The state of the
> > >>> ticket hasn't been updated in a long time. Perhaps add comments/ask the
> > >>> folks on the user list to vote?
> > >>
> > >> That is more likely to irritate folks than to encourage them to help
> > >> you progress your patch.
> > >>
> > >> Mark
> > >>
> > >>
> > >>>
> > >>> Thanks,
> > >>> Amit
> > >>>
> > >>> -----Original Message-----
> > >>> From: Mark Thomas 
> > >>> Sent: Monday, August 28, 2023 11:20 AM
> > >>> To: Tomcat Users List 
> > >>> Subject: Re: [External] Re: Supporting Proxy Protocol in Tomcat
> > >>>
> > 

400 Bad Request - where do I find the detailed reason for the bad request so I can fix it?

2023-11-27 Thread Graham Leggett
Hi all,

Long-running webapps; Tomcat was recently updated from Tomcat 7 to Tomcat 9.0.65.
One webapp sends a request to another.

The request fails with a 400 Bad Request, with the detail message "The server 
cannot or will not process the request due to something that is perceived to be 
a client error (e.g., malformed request syntax, invalid request message 
framing, or deceptive request routing)."

I am aware of what a 400 Bad Request is; however, the message above gives me an
incomplete list of possible reasons for the bad request, rather than the actual
specific reason for this specific bad request. Google is filled with generic
results and is of no help.

What do I need to do to see the exception that generated the bad request, so 
that I know specifically what’s wrong and can fix it?

Regards,
Graham
—





Re: Performance tuning embedded Tomcat 10.1.7: High requests/second, HTTPs and a lot of keep alive connections

2023-11-27 Thread Christopher Schultz

Daniel,

This is obviously a "big" question whose answer will likely take months to
really determine. But we can get started :)


On 11/27/23 08:59, Daniel Andres Pelaez Lopez wrote:

We are facing some challenges with performance tuning for embedded
Tomcat using Spring Boot 3 (Tomcat version 10.1.7) and we would like
to ask for advice. The following is an overview of what our workload
looks like:
- The client is a CDN distributed around the world
- Tomcat serves files and media for video streaming, around hundreds
of kilobytes per media file
- The files and media are in memory (most of the time)
- The CDN opens a lot of keep-alive HTTPS connections; we have seen up to 25000
- There is no proxy or similar in front of Tomcat. Tomcat is handling
the HTTPS connections directly
- We have only one instance of Tomcat running.
- We are avoiding scaling Tomcat horizontally, as it is pretty hard
for our domain problem
- We can scale Tomcat up; today in some cases we are using an EC2
instance with 64 cores and 62 GiB of memory. We can scale up more if
we must, but it would be better if we could downscale instead.
- The EC2 instance is shared with other processes, like transcoders. This
is to decrease the latency as much as we can between the components of
the solution
- We have virtual threads active in Tomcat
- We have seen up to 2000 requests/second for light files (less than
10 kilobytes), and 500 requests/second for bigger files.
- Request spikes happen in a short time, from 100 requests/second to
1700 requests/second in 2 minutes.


We have seen the server using 75% of the CPU, so we want to optimize Tomcat as
much as we can in order to downscale the machine.


Thank you for the summary.


We have done some research and found some possible points to check:
- Should we use NIO or NIO2 connectors? I didn't find an answer for
this; we are using NIO. Maybe NIO2 handles a lot of keep-alive
connections better?


NIO vs NIO2 shouldn't matter much. If it were me, I'd stick with NIO 
since it gets /much/ more usage than NIO2 and most of the issues in NIO 
that NIO2 was supposed to resolve have actually been fixed in NIO itself 
retroactively.
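
For what it's worth, a minimal sketch (assuming Spring Boot 3's
TomcatServletWebServerFactory; the class and bean names are illustrative) of
pinning the connector protocol explicitly, which also makes it easy to run a
comparison against NIO2:

import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConnectorProtocolConfig {

    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> protocolCustomizer() {
        // NIO is the default; setting it explicitly makes swapping in
        // "org.apache.coyote.http11.Http11Nio2Protocol" for a test run trivial
        return factory -> factory.setProtocol("org.apache.coyote.http11.Http11NioProtocol");
    }
}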



- Should we use tcnative to improve the performance for SSL? We are
concerned about virtual threads and possible pinning here, as this
might use JNI


If you require TLS, then tcnative is definitely an option you will want 
to consider. In most of our tests, OpenSSL outperforms JSSE's 
cryptographic implementation significantly (something like 2x 
improvement with OpenSSL).


Your use of Virtual Threads might complicate things, here, but the good 
news is that I/O through JNI -- which would pin a Virtual Thread to a 
Platform Thread -- should be "fast". It seems that your VM is mostly 
dedicated to pushing bytes around, anyway, so maybe letting it use the 
CPU to push them around isn't so bad.


Only testing will tell you whether this is a "good idea" or a "bad 
idea". I suspect it will be a little of both for you.



- Should we put nginx or a similar server in front of Tomcat to handle
SSL? We are avoiding this for latency reasons, and also, nginx will
add to the other processes we have on the same machine


Using Tomcat with JSSE+OpenSSL or even APR+OpenSSL is essentially the 
same as using Apache httpd. I don't have enough experience with nginx to 
know if it's much different, but I suspect not. The time "wasted" 
re-interpreting everything -- not just TLS but also HTTP itself -- will 
likely lose any gains you get by adding them to the mix.


Now, if Apache httpd, Nginx, etc. can get you *caching* as well as TLS 
termination, etc. then maybe it's worth it. But my guess is that the CDN 
itself is supposed to be the primary cache in this equation.



- Should we increase maxKeepAliveRequests? We don't understand how
this works entirely. Is this the maximum number of requests per keep-alive
connection? Parallel requests or sequential? It seems like the default is
100, and probably we should increase it, as the CDN might not open more
connections if it can send more requests over existing ones.


This might be a good idea, depending upon how much "traffic" each of
your persistent connections actually gets. If you find that KeepAlive
connections are being "wasted" then you might want to limit the total
number of requests each connection will allow. My guess is that you
probably want to re-use the connections from the CDN for as long as you
possibly can.
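
As a side note, maxKeepAliveRequests is the number of (sequential) HTTP
requests a single persistent connection may carry before Tomcat closes it. A
hedged sketch of raising it on the embedded connector, assuming Spring Boot 3's
TomcatConnectorCustomizer (class and bean names are illustrative; -1 means
"unlimited" per the Tomcat connector documentation):

import org.apache.coyote.http11.AbstractHttp11Protocol;
import org.springframework.boot.web.embedded.tomcat.TomcatConnectorCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KeepAliveConfig {

    @Bean
    public TomcatConnectorCustomizer keepAliveCustomizer() {
        return connector -> {
            if (connector.getProtocolHandler() instanceof AbstractHttp11Protocol<?> protocol) {
                // Let each CDN connection be reused for many sequential requests
                protocol.setMaxKeepAliveRequests(-1);
            }
        };
    }
}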


You will want to use NIO connectors here and specifically /not/ APR if 
you are going to use tcnative/OpenSSL because APR-keep-alive is a 
*blocking* operation which will kill your threads.



- Should we increase socket.txBufSize? It seems like we should; as we are
sending media files, having a bigger buffer makes sense
- Should we use direct buffers (socket.directSslBuffer)?
- Should we increase socket.appWriteBufSize?


These settings are detailed enough that I'm not a good person to answer
those questions. I suspect that increasing the socket transmission
buffer might help, but
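
For reference, a hedged sketch of setting those NIO socket properties on the
embedded connector, assuming Spring Boot 3's TomcatConnectorCustomizer; the
property names follow the Tomcat HTTP connector documentation, and the sizes
are illustrative only -- measure before and after changing them:

import org.springframework.boot.web.embedded.tomcat.TomcatConnectorCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SocketBufferConfig {

    @Bean
    public TomcatConnectorCustomizer socketBufferCustomizer() {
        return connector -> {
            // Larger socket send buffer for serving few-hundred-kilobyte media files
            connector.setProperty("socket.txBufSize", String.valueOf(256 * 1024));
            // Larger application write buffer used by the NIO endpoint
            connector.setProperty("socket.appWriteBufSize", String.valueOf(256 * 1024));
            // Direct ByteBuffers for the TLS buffers
            connector.setProperty("socket.directSslBuffer", "true");
        };
    }
}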

Performance tuning embedded Tomcat 10.1.7: High requests/second, HTTPs and a lot of keep alive connections

2023-11-27 Thread Daniel Andres Pelaez Lopez
Hi community,

We are facing some challenges with performance tuning for embedded
Tomcat using Spring Boot 3 (Tomcat version 10.1.7) and we would like
to ask for advice. The following is an overview of what our workload
looks like:
- The client is a CDN distributed around the world
- Tomcat serves files and media for video streaming, around hundreds
of kilobytes per media file
- The files and media are in memory (most of the time)
- The CDN opens a lot of keep-alive HTTPS connections; we have seen up to 25000
- There is no proxy or similar in front of Tomcat. Tomcat is handling
the HTTPS connections directly
- We have only one instance of Tomcat running.
- We are avoiding scaling Tomcat horizontally, as it is pretty hard
for our domain problem
- We can scale Tomcat up; today in some cases we are using an EC2
instance with 64 cores and 62 GiB of memory. We can scale up more if
we must, but it would be better if we could downscale instead.
- The EC2 instance is shared with other processes, like transcoders. This
is to decrease the latency as much as we can between the components of
the solution
- We have virtual threads active in Tomcat
- We have seen up to 2000 requests/second for light files (less than
10 kilobytes), and 500 requests/second for bigger files.
- Request spikes happen in a short time, from 100 requests/second to
1700 requests/second in 2 minutes.

We have seen the server using 75% of the CPU, so we want to optimize Tomcat as
much as we can in order to downscale the machine.

We have done some research and found some possible points to check:
- Should we use NIO or NIO2 connectors? I didn't find an answer for
this; we are using NIO. Maybe NIO2 handles a lot of keep-alive
connections better?
- Should we use tcnative to improve the performance for SSL? We are
concerned about virtual threads and possible pinning here, as this
might use JNI
- Should we put nginx or a similar server in front of Tomcat to handle
SSL? We are avoiding this for latency reasons, and also, nginx will
add to the other processes we have on the same machine
- Should we increase maxKeepAliveRequests? We don't understand how
this works entirely. Is this the maximum number of requests per keep-alive
connection? Parallel requests or sequential? It seems like the default is
100, and probably we should increase it, as the CDN might not open more
connections if it can send more requests over existing ones.
- Should we increase socket.txBufSize? It seems like we should; as we are
sending media files, having a bigger buffer makes sense
- Should we use direct buffers (socket.directSslBuffer)?
- Should we increase socket.appWriteBufSize?

We are exploring JVM performance options also, but any help regarding
Tomcat will be appreciated.

Regards.




-- 
Daniel Andrés Pelaez López




Re: 9.0.83 addSslHostConfig failures?

2023-11-27 Thread Daniel Skiles
Thanks for taking a look.  My lightly scrubbed connector example is
attached.

On Tue, Nov 21, 2023 at 6:45 AM Michael Osipov  wrote:

> On 2023/11/21 11:25:11 Michael Osipov wrote:
> > On 2023/11/20 22:14:14 Daniel Skiles wrote:
> > > Was there a change to the addSslHostConfig JMX mbean operation between
> > > 9.0.82 and 9.0.83?  I have some code that works in 82, but fails with an
> > > MBeanException: Cannot find operation [addSslHostConfig] in 9.0.83.
> > >
> > > When I attempt to look at the available operations on ProtocolHandler in
> > > jconsole, it throws an exception in 83 that opens a new window, but works
> > > in 82.
> >
> > I have the following with 8.5.x:
> > > Error setting Operation panel :org.apache.coyote.Request
> > > Error setting Operation panel :org.apache.tomcat.util.net.SSLHostConfig
> >
> > addSslHostConfig is greyed out for me...let me go back a patch version...
>
> Tried on 8.5.92, same behavior. You should share your connector config.
>



Re: Possible way to avoid Tomcat from recycling the request/response on error?

2023-11-27 Thread Mark Thomas

On 27/11/2023 01:49, Adwait Kumar Singh wrote:

Hmm, this gives me the impression that the Servlet APIs expect the
request/response processing to *always* happen on the container thread.
If I attempt to perform it on a non-container thread after making the
request async, I run the risk of the Request/Response objects being
recycled without my non-container thread being aware of it, or I have to
block my container thread.


The concurrency requirements for asynchronous processing are set out in 
section 2.3.3.4 of the Servlet specification.


Implementing error handling is significantly more complicated with
asynchronous servlets, but it boils down to avoiding access to the request,
response and associated objects after complete()/dispatch() have been
called.
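
A rough sketch of that rule, assuming the Jakarta Servlet API (the servlet and
class names are made up, and this doesn't remove every possible race -- it just
shows the worker thread backing off once the async lifecycle has ended and only
one side calling complete()):

import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;
import jakarta.servlet.AsyncContext;
import jakarta.servlet.AsyncEvent;
import jakarta.servlet.AsyncListener;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/async", asyncSupported = true)
public class AsyncSketchServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        AtomicBoolean finished = new AtomicBoolean(false);

        ctx.addListener(new AsyncListener() {
            @Override public void onComplete(AsyncEvent e) { finished.set(true); }
            @Override public void onTimeout(AsyncEvent e)  { finished.set(true); }
            @Override public void onError(AsyncEvent e) {
                // The error handler calls complete(); after that the request,
                // response and associated objects must not be touched again
                finished.set(true);
                e.getAsyncContext().complete();
            }
            @Override public void onStartAsync(AsyncEvent e) { /* no-op */ }
        });

        new Thread(() -> {
            try {
                if (!finished.get()) {
                    ctx.getResponse().getWriter().print("OK");
                    ctx.complete();
                }
            } catch (IOException | IllegalStateException ignored) {
                // The container may already have completed and recycled things
            }
        }).start();
    }
}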


Mark



On Sat, Nov 25, 2023 at 5:42 AM Mark Thomas  wrote:


On 25/11/2023 05:30, Adwait Kumar Singh wrote:


Is there a way around this, to keep the async context open even on an error
and not close it till complete is invoked?


No. The spec requires the error handler to call complete() in onError(), and
if the error handler doesn't, the container must.

Mark






