On 2020-05-29 at 10:04, Oleg Kalnichevski wrote:
On Fri, 2020-05-29 at 00:46 +0200, Michael Osipov wrote:
On 2020-05-28 at 22:51, Oleg Kalnichevski wrote:
On Thu, 2020-05-28 at 22:01 +0200, Michael Osipov wrote:
On 2020-05-28 at 10:40, Oleg Kalnichevski wrote:
On Thu, 2020-05-28 at 00:02 +0200, Michael Osipov wrote:
...
Please try out this patch. This should now give us a proper out of sequence response check with minimal overhead. The check will now be performed for chunks of 2048 bytes instead of on each and every write operation.
https://github.com/ok2c/httpcomponents-core/commit/7baa51f873da759d4e81973d4ca0a16f4199c51b
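Just to restate the idea in code (a rough sketch only, with assumed names, not the actual patch): count the outgoing bytes and only probe for an early response once a full chunk has been written, rather than on every write.

    import java.io.IOException;

    // Sketch of the chunked check described above; class and method names
    // are illustrative, not the actual httpcomponents-core code.
    class ChunkedEarlyResponseCheck {

        private static final int CHECK_INTERVAL = 2048;

        private long bytesSinceLastCheck;

        void onBytesWritten(final int len) throws IOException {
            bytesSinceLastCheck += len;
            if (bytesSinceLastCheck >= CHECK_INTERVAL) {
                bytesSinceLastCheck = 0;
                checkForEarlyResponse();
            }
        }

        void checkForEarlyResponse() throws IOException {
            // placeholder for the actual probe of the connection's input stream
        }
    }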
This one does not work at all. It even breaks the final (authenticated) request. See for yourself:
http://home.apache.org/~michaelo/issues/early-response/try-3/
Please confirm this one works, at least for plain http:
https://github.com/ok2c/httpcomponents-core/commit/fb115a562921d8249a6bd8f3a5f5c4d6dba76949
Good news, this looks very decent for HTTP. HTTPS always fails.
I am quite amazed that the SSL socket stream (AppInputStream) behaves so differently. Are there other internal buffers which make #available() fail? There is an internal byte buffer of 4 KiB.
Output:
http://home.apache.org/~michaelo/issues/early-response/try-4/
Great. Please pull one more commit from the same branch. It _should_ fix the out of sequence response check with SSL connections.
Eureka, it works!
Can you explain why SSLSocket requires different handling? Moreover, why not use "isDataAvailable(Timeout.ONE_MILLISECOND)" all the time? To avoid filling the buffer?
#isDataAvailable is not cheap even with a 1 ms timeout. Socket timeout granularity is far from being 1 ms. In fact, it is pretty close to 50 ms. If we are not careful, we can easily end up with a 50 ms overhead for every write operation.
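For illustration only (a sketch with assumed names, not the actual BHttpConnectionBase code): such a timed probe has to lower the socket timeout temporarily and attempt a blocking read, and the effective wait is bounded by the OS timer granularity rather than the nominal 1 ms.

    import java.io.IOException;
    import java.io.PushbackInputStream;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    // Sketch of a timed availability probe; names are assumptions for illustration.
    final class AvailabilityProbe {

        static boolean isDataAvailable(final Socket socket,
                                       final PushbackInputStream in,
                                       final int timeoutMillis) throws IOException {
            final int oldTimeout = socket.getSoTimeout();
            socket.setSoTimeout(timeoutMillis);
            try {
                // Blocks until a byte arrives, EOF is reached, or the timeout expires.
                // The effective wait depends on the OS timer granularity, so even a
                // 1 ms timeout can cost considerably more per call.
                final int b = in.read();
                if (b == -1) {
                    return false;
                }
                in.unread(b); // do not consume the byte, leave it for the response parser
                return true;
            } catch (final SocketTimeoutException ex) {
                return false; // nothing arrived within the timeout
            } finally {
                socket.setSoTimeout(oldTimeout); // restore the original socket timeout
            }
        }
    }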
Where did you get the 50 ms from? Is that a Java thing or an OS thing? If it is an OS thing, which OS did you try, and is there a reference for this value?
I am _hoping_ that the SSL input stream would not actually read anything from the underlying socket as long as there is unencrypted data stuck in its buffers.
Thanks, this makes sense. 50 ms is quite surprising and absolutely unacceptable by default. Do you think there is an implementation error in AppInputStream, given that #available() behaves differently compared to plain sockets?
If the different handling is necessary, here is a small proposal:
diff --git a/httpcore5/src/main/java/org/apache/hc/core5/http/impl/io/DefaultBHttpClientConnection.java b/httpcore5/src/main/java/org/apache/hc/core5/http/impl/io/DefaultBHttpClientConnection.java
index 37d750fc2..050da1864 100644
--- a/httpcore5/src/main/java/org/apache/hc/core5/http/impl/io/DefaultBHttpClientConnection.java
+++ b/httpcore5/src/main/java/org/apache/hc/core5/http/impl/io/DefaultBHttpClientConnection.java
@@ -165,8 +165,4 @@ void checkForEarlyResponse() throws IOException {
-        if (ssl) {
-            if (isDataAvailable(Timeout.ONE_MILLISECOND)) {
-                throw new ResponseOutOfOrderException();
-            }
-        } else {
-            if (socketInputStream.available() > 0) {
-                throw new ResponseOutOfOrderException();
-            }
+        final boolean dataAvailable = ssl ? isDataAvailable(Timeout.ONE_MILLISECOND) :
+                (socketInputStream.available() > 0);
+        if (dataAvailable) {
+            throw new ResponseOutOfOrderException();
Makes sense. Please see the updates in my branch.
Will pick this up on Monday as soon as I get access to our infrastructure.
Now we need to decide whether this is as good as it gets or we want to put more research into it.
I really liked your idea of the chunks. 2048 seems too low to me because most will write with a buffer of at least 4 KiB.
So I think we can try two things to minimize the overhead:
* Make this feature optional, just like we do with Expectation.
* Try to increase the step chunk size, e.g., to 64 KiB or greater. With that we don't peek on each and every invocation (see the rough sketch below).
WDYT?
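Purely as a hypothetical illustration of those two points (none of these names exist in httpcomponents-core): an opt-in flag plus a configurable check interval could be bundled like this:

    // Hypothetical sketch only; not an existing HttpComponents API.
    final class EarlyResponseCheckConfig {

        static final EarlyResponseCheckConfig DISABLED =
                new EarlyResponseCheckConfig(false, 64 * 1024);

        private final boolean enabled;
        private final int checkIntervalBytes; // e.g. 64 KiB instead of 2048 bytes

        EarlyResponseCheckConfig(final boolean enabled, final int checkIntervalBytes) {
            this.enabled = enabled;
            this.checkIntervalBytes = checkIntervalBytes;
        }

        boolean isEnabled() {
            return enabled;
        }

        int getCheckIntervalBytes() {
            return checkIntervalBytes;
        }
    }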