Re: [External] Re: Supporting Proxy Protocol in Tomcat

2023-07-27 Thread Christopher Schultz

All,

On 7/27/23 12:39, Mark Thomas wrote:

On 27/07/2023 16:27, Jonathan S. Fisher wrote:

On the topic of security, may we consider a trustedProxies setting? This
would be an analog to the internalProxies setting on RemoteIpValve. It
would need to be able to function with APR/NIO listening in a Unix Domain
Socket.

I'm not sure if this is super useful, but the goal would be an added layer
of security to prevent Proxy Protocol header injection.


Seems reasonable.


We should probably look at what httpd did for all of this.

-chris


Re: [External] Re: Supporting Proxy Protocol in Tomcat

2023-07-27 Thread Mark Thomas

On 27/07/2023 16:27, Jonathan S. Fisher wrote:

On the topic of security, may we consider a trustedProxies setting? This
would be an analog to the internalProxies setting on RemoteIpValve. It
would need to be able to function with APR/NIO listening in a Unix Domain
Socket.

I'm not sure if this is super useful, but the goal would be an added layer
of security to prevent Proxy Protocol header injection.


Seems reasonable.

Mark


Re: [External] Re: Supporting Proxy Protocol in Tomcat

2023-07-27 Thread Jonathan S. Fisher
On the topic of security, may we consider a trustedProxies setting? This
would be an analog to the internalProxies setting on RemoteIpValve. It
would need to be able to function with APR/NIO listening in a Unix Domain
Socket.

I'm not sure if this is super useful, but the goal would be an added layer
of security to prevent Proxy Protocol header injection.
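
To make the idea concrete, a minimal sketch of what such a check might look
like, modeled on RemoteIpValve's regex-based internalProxies attribute (the
class and method names below are invented for illustration; nothing like
this exists in Tomcat yet):

import java.util.regex.Pattern;

// Hypothetical sketch: a PROXY protocol header would only be honoured when
// the directly connected peer matches the configured trustedProxies pattern.
final class TrustedProxyCheck {

    private final Pattern trustedProxies;

    TrustedProxyCheck(String trustedProxiesRegex) {
        this.trustedProxies = Pattern.compile(trustedProxiesRegex);
    }

    // peerAddress is the address of the socket peer (the proxy itself),
    // not the client address carried inside the PROXY header.
    boolean allowProxyHeader(String peerAddress) {
        return trustedProxies.matcher(peerAddress).matches();
    }
}

// e.g. new TrustedProxyCheck("10\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}")
//          .allowProxyHeader(socket.getInetAddress().getHostAddress());

For a Unix Domain Socket there is no peer IP address, so presumably the check
would have to key off the listener itself rather than the peer address.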


Re: Possible AbstractProtocol.waitingProcessors leak in Tomcat 9.0.75

2023-07-27 Thread mario
I also spent quite some time trying to create such a test case, but did not 
yet manage to find anything. For now, a workaround in production is to remove 
those dangling processors once per hour (via bad reflection stuff ;-) ) … I 
will try to add some logging to get a better grasp of what is going on.
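
For the curious, a rough sketch of that reflection-based workaround, assuming
Tomcat 9.0.75 internals (the waitingProcessors field name comes straight from
the thread subject; the staleness test is a placeholder since it needs further
digging into the socket wrapper):

import java.lang.reflect.Field;
import java.util.Set;
import org.apache.catalina.connector.Connector;
import org.apache.coyote.Processor;
import org.apache.coyote.ProtocolHandler;

final class WaitingProcessorReaper {

    // Unsupported workaround: reads AbstractProtocol's private
    // waitingProcessors set via reflection.
    @SuppressWarnings("unchecked")
    static Set<Processor> waitingProcessors(Connector connector) throws Exception {
        ProtocolHandler handler = connector.getProtocolHandler();
        Class<?> clazz = handler.getClass();
        while (clazz != null && !"org.apache.coyote.AbstractProtocol".equals(clazz.getName())) {
            clazz = clazz.getSuperclass();
        }
        if (clazz == null) {
            throw new IllegalStateException("Not an AbstractProtocol handler");
        }
        Field field = clazz.getDeclaredField("waitingProcessors");
        field.setAccessible(true);
        return (Set<Processor>) field.get(handler);
    }

    // Scheduled once per hour in our case.
    static void reap(Connector connector) throws Exception {
        waitingProcessors(connector).removeIf(WaitingProcessorReaper::isStale);
    }

    private static boolean isStale(Processor processor) {
        // Placeholder: inspect the processor's socket wrapper here and
        // return true for the "Closed NioChannel" entries from the dump.
        return false;
    }
}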

Best regards,
Mario


> On 27.07.2023 at 12:45, Mark Thomas wrote:
> 
> I've taken a look at the code and can't see how this might be happening. I 
> think a reproducible test case is going to be required to investigate this 
> further.
> 
> Mark
> 

Re: Pinned threads for HTTP2 using Virtual Threads on Tomcat 10.1.7

2023-07-27 Thread Mark Thomas
I've refactored things to the point where the unit tests run without 
generating any warnings for pinning. I suspect further issues will be 
identified over time and we can address those as they are found.


Mark


On 25/07/2023 10:21, Mark Thomas wrote:
Never mind. Pretty much as soon as I hit send I managed to trigger the 
issue.


Mark


On 25/07/2023 10:19, Mark Thomas wrote:

Daniel,

How did you trigger the pinning? I'm running some basic tests with 
-Djdk.tracePinnedThreads=short and I'm not seeing any pinned threads 
reported.


Mark
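
For reference, a minimal self-contained example of the kind of pinning that
flag reports, assuming JDK 21 (blocking while holding a monitor pins the
virtual thread to its carrier):

// Run with: java -Djdk.tracePinnedThreads=short Pinning.java
public class Pinning {

    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() -> {
            synchronized (LOCK) {      // holding a monitor...
                try {
                    Thread.sleep(100); // ...while blocking pins the carrier thread
                } catch (InterruptedException ignored) {
                }
            }
        });
        vt.join(); // the JVM prints a stack trace for the pinned thread
    }
}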


On 07/07/2023 13:45, Daniel Andres Pelaez Lopez wrote:

Mark,

Thanks for letting me know. I will wait for the August release to test.

Regards.


On Thu, 6 Jul 2023 at 15:13, Mark Thomas wrote:



On 6 Jul 2023 20:09:01, Daniel Andres Pelaez Lopez wrote:


I am aware the Tomcat community made a great effort to make Tomcat
Virtual Threads friendly, but I am not sure why HTTP2 was not part of
that effort?


The plan was always to see where the bottlenecks were as folks started to
experiment with Loom support, and to fix issues as they arose. It helps
focus effort on where it is really needed.

These fixes look fairly simple. We should be able to get them done for
the August releases (the July releases have already been tagged). We can
make -dev builds available earlier or you can build Tomcat from source to
test the changes if you are interested.

As it happens I've spent most of today looking at ThreadLocal vs
SynchronizedStack vs new Object() in various places in the Tomcat code
base without reaching a clear conclusion. Which kind of proves the point
that if we guess where bottlenecks might be we'll probably be wrong.

Mark




Re: Possible AbstractProtocol.waitingProcessors leak in Tomcat 9.0.75

2023-07-27 Thread Mark Thomas
I've taken a look at the code and can't see how this might be happening. 
I think a reproducible test case is going to be required to investigate 
this further.


Mark


On 12/07/2023 09:25, Mark Thomas wrote:

Hi Mario,

That does look like a possible bug.

I'll try to do a code review before the next release but, from
experience, if you are able to figure out how to reproduce it that would
help a lot.


Thanks,

Mark


On 06/07/2023 15:19, ma...@datenwort.at.INVALID wrote:

Hello!

I guess I found a memory leak which I am not quite sure how to
reproduce yet. Our application is using WebSockets, and sometimes it
seems that Tomcat 9.0.75 does not always remove the
„UpgradeProcessorInternal“ processor from the list of
„waitingProcessors“ in „org.apache.coyote.AbstractProtocol“.


I managed to create a dump (in the e-mail below) of that data from a
live VM and found that some of those UpgradeProcessorInternal
instances are referencing a "Closed Socket", which somehow feels odd
to me. This list slowly grows; the entries are never removed.
The UpgradeProcessorInternal instances referencing a live socket match
exactly the number of live WebSocket sessions.


What do you think?

Env:
Tomcat: 9.0.75.0
Spring Boot: 2.7.12
Java: 17.0.7


Best regards,
Mario


connector: Connector[HTTP/1.1-2888]
waitingProcessors: 325
processor#1: UpgradeProcessorInternal@1087825622 socketWrapper state: Closed NioChannel msgSent=11 msgReceived=13
processor#2: UpgradeProcessorInternal@92090237 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=13
processor#3: UpgradeProcessorInternal@1641815002 socketWrapper state: Closed NioChannel msgSent=11 msgReceived=9
processor#4: UpgradeProcessorInternal@1749164616 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=13
processor#5: UpgradeProcessorInternal@508142663 socketWrapper state: Closed NioChannel msgSent=11 msgReceived=12
processor#6: UpgradeProcessorInternal@1482593872 socketWrapper state: Closed NioChannel msgSent=11 msgReceived=12
processor#7: UpgradeProcessorInternal@1176145940 socketWrapper state: Closed NioChannel msgSent=4389 msgReceived=205
processor#8: UpgradeProcessorInternal@1420482234 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=12
processor#9: UpgradeProcessorInternal@31846900 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=9
processor#10: UpgradeProcessorInternal@1726295282 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=12
processor#11: UpgradeProcessorInternal@288536869 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=13
processor#12: UpgradeProcessorInternal@969872343 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=13
processor#13: UpgradeProcessorInternal@420227862 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=13
processor#14: UpgradeProcessorInternal@552002299 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=12
processor#15: UpgradeProcessorInternal@255507237 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=12
processor#16: UpgradeProcessorInternal@724602640 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=9
processor#17: UpgradeProcessorInternal@1706491794 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=12
processor#18: UpgradeProcessorInternal@1925947118 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=13
processor#19: UpgradeProcessorInternal@2044888516 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=12
processor#20: UpgradeProcessorInternal@905596380 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=13
processor#21: UpgradeProcessorInternal@897538387 socketWrapper state: org.apache.tomcat.util.net.SecureNioChannel@61aa8978:java.nio.channels.SocketChannel[connected local=/***.***.***.***:2888 remote=/***.***.***.***:40566] msgSent=7277 msgReceived=447
processor#22: UpgradeProcessorInternal@1070005238 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=13
processor#23: UpgradeProcessorInternal@34919463 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=9
processor#24: UpgradeProcessorInternal@565924232 socketWrapper state: Closed NioChannel msgSent=12 msgReceived=14
processor#25: UpgradeProcessorInternal@1146279009 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=9
processor#26: UpgradeProcessorInternal@1467208007 socketWrapper state: Closed NioChannel msgSent=11 msgReceived=9
processor#27: UpgradeProcessorInternal@348911347 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=9
processor#28: UpgradeProcessorInternal@38694518 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=9
processor#29: UpgradeProcessorInternal@1164156250 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=9
processor#30: UpgradeProcessorInternal@1675936734 socketWrapper state: Closed NioChannel msgSent=10 msgReceived=13
processor#31: UpgradeProcessorInternal@441511021 socketWrapper state: 
Re: [External] Re: Supporting Proxy Protocol in Tomcat

2023-07-27 Thread Mark Thomas

On 26/07/2023 21:53, Christopher Schultz wrote:

Mark,

On 7/26/23 13:58, Mark Thomas wrote:
I'm not a huge fan of this feature in general. I prefer supporting 
features backed by specifications rather than vendor specific hacks.


I think the PROXY protocol is fairly standard, even if it's not backed 
by an RFC. It's published by haproxy, but supported by nginx, 
(obviously) haproxy, AWS, httpd[1], and a whole bunch of others 
(https://www.haproxy.com/blog/use-the-proxy-protocol-to-preserve-a-clients-ip-address).


ACK. That reduces my concerns somewhat.

Well, the reality is that people want to use this in the real world and 
this is essentially the only way to do it, barring coming up with a 
whole new protocol for the purpose (I'm looking at /you/ AJP!).


Indeed.

So why not use /the/ protocol that (a) exists and (b) is supported by 
every single product that currently supports this type of thing?


My support for any patch is going to depend on the specifics of the 
patch.


In addition to the comments in the BZ:
- exposing the data as a request attribute is inconsistent with other
  mechanisms that solve the same problem (e.g. see RemoteIpFilter)
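
To make the inconsistency concrete, a sketch of the two styles (Tomcat 9's
javax.servlet API; the attribute key below is invented for illustration, not
the one from the proposed patch):

import javax.servlet.http.HttpServletRequest;

final class ClientIdentity {

    // RemoteIpFilter style: the real client address flows through the
    // standard Servlet API, so existing code works unchanged.
    static String viaStandardApi(HttpServletRequest request) {
        return request.getRemoteAddr();
    }

    // Request-attribute style: every consumer must know a bespoke key.
    static String viaAttribute(HttpServletRequest request) {
        return (String) request.getAttribute("org.example.proxy.REMOTE_ADDR");
    }
}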


+1

The whole point of PROXY is to kind of mix-together the capabilities of 
both the RemoteIPFilter/Valve (which uses HTTP headers for 
source-information) and the top-level idea of a Connector (something 
that binds to a socket and pushes bytes around).


The confusing thing here is that those two jobs are performed at 
relatively different levels in Tomcat at the moment, as I understand 
things.


Yes and no. RemoteIP[Filter|Valve] insert/modify the data at a higher 
level because that is where they sit but the data originates from the 
SocketWrapper.


If some kind of UberConnector could be built which essentially does 
something like the following, it would be ideal:


public void accept(Socket s) {
    // Sip the PROXY header off the front of the connection first
    ProxyHeader proxyHeader = readProxyHeader(s);

    Connector realConnector = getRealConnector();

    // Hypothetical setters: seed the real connector with the client
    // identity carried in the header before it ever sees the socket
    realConnector.setRemoteIP(proxyHeader.getRemoteIP());
    realConnector.setRemotePort(proxyHeader.getRemotePort());

    realConnector.takeItAway(s);
}

I'm sure there are other pieces of information that would be good to 
pass-through, but the identity of the remote client is the most 
interesting one.


Yes, that is the general idea. Just a couple of minor tweaks to use the 
SocketWrapper rather than the Connector and to do it in a slightly 
different place. The Acceptor is too early as we want to do as little as 
possible on the Acceptor thread.



- needs to be implemented for all Connectors


I hope not. The connectors should be able to just have a thin layer in 
front of them "sipping" the header off the beginning of the connection. 
I am *way* out of my depth here when it comes to Tomcat internals and so 
I don't want to appear to be telling you (Mark) "how it works/should 
work", but conceptually it "seems easy". That may not translate into 
"easy implementation" or it may mean "tons of refactoring that we 
wouldn't need if we didn't care that much."


My point was that the provided patch only implements this for NIO. It 
needs to implement it for NIO2 as well. APR/Native looks to be a lot 
more difficult to implement and I'd be happy not implementing it for 
APR/Native.



- I'd expect it to look more like the SNI processing


SNI processing is very connector-dependent, of course, because it's 
HTTPS-only. PROXY should allow HTTP, HTTPS, AJP, SFTP, JDBC, anything. 
So if it can be implemented as something that can just "sit in front of" 
*any* connector now or in the future of Tomcat, that would be ideal. It 
could definitely be implemented as an "optional feature" on a 
Connector-by-Connector basis, but my sense is that it can be done 
separately and globally.


Ah. You are thinking Connector as in protocol (HTTP, AJP, etc) whereas I 
am thinking in terms of implementation (NIO, NIO2, etc).


SNI is handled independently of implementation and I think PROXY should 
be handled the same way. They also sit at almost the same point in the 
processing (PROXY needs to be first). PROXY parsing could be implemented 
within the existing handshake() method but I think it would be much 
cleaner in a separate method.


Without looking at it too closely I think the implementation would look 
something like:


- a new method on SocketWrapperBase that
  - checks if PROXY is enabled
  - returns immediately if PROXY is not enabled or has already been parsed
  - uses a new utility class (or classes) to parse the header (reading via
    the read() methods on SocketWrapperBase); a strawman sketch of such a
    class follows below
  - sets the cached values for remoteAddr, remoteHost, remotePort etc
- The SocketProcessor.doRun() implementations add a call to this new
  method just before the TLS handshake
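
As a strawman for that utility class, a sketch that handles only the
human-readable v1 form of the header (all names invented for illustration; a
real implementation would read via the SocketWrapperBase read() methods and
would also need the binary v2 format):

import java.util.Objects;

// Parses a PROXY protocol v1 line such as:
//   "PROXY TCP4 192.0.2.1 198.51.100.1 56324 443\r\n"
// or the degenerate "PROXY UNKNOWN\r\n" form.
final class ProxyV1Header {

    final String protocol;      // "TCP4", "TCP6" or "UNKNOWN"
    final String sourceAddress; // the real client, as reported by the proxy
    final String destAddress;
    final int sourcePort;
    final int destPort;

    private ProxyV1Header(String protocol, String sourceAddress,
            String destAddress, int sourcePort, int destPort) {
        this.protocol = Objects.requireNonNull(protocol);
        this.sourceAddress = sourceAddress;
        this.destAddress = destAddress;
        this.sourcePort = sourcePort;
        this.destPort = destPort;
    }

    static ProxyV1Header parse(String line) {
        String[] parts = line.trim().split(" ");
        if (parts.length < 2 || !"PROXY".equals(parts[0])) {
            throw new IllegalArgumentException("Not a PROXY v1 header: " + line);
        }
        if ("UNKNOWN".equals(parts[1])) {
            // The spec says to ignore any address information in this case
            // and fall back to the socket's own peer address.
            return new ProxyV1Header("UNKNOWN", null, null, -1, -1);
        }
        if (parts.length != 6) {
            throw new IllegalArgumentException("Malformed PROXY v1 header: " + line);
        }
        return new ProxyV1Header(parts[1], parts[2], parts[3],
                Integer.parseInt(parts[4]), Integer.parseInt(parts[5]));
    }
}

The new SocketWrapperBase method would then copy the parsed source address
and port into its cached remoteAddr/remotePort values, exactly as the
bullets above describe.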

If we want to support the TLS information then a little additional 
refactoring will be required (probably to cache the result of 
SocketWrapperBase.getSslSupport) so the new utility classes can insert a