Re: [squid-users] Squid: Small packets and low performance between squid and icap

2016-02-23 Thread Prashanth Prabhu
[+ squid-dev; bcc squid-users]

Hi Alex,

Sorry about the late reply.

Please see inline.

>> Here's the behavior I have seen: When the connection is set up, the
>> buffer gets a size of 16KB (default). Squid reads from the socket,
>> parses the data, and then sends it towards c-icap as appropriate. Now,
>> as part of parsing the data, the buffer is NUL-terminated via a call
>> to c_str(). This NUL-termination, however, is not accounted for by an
>> increase in the "offset" (off) in the underlying MemBlob; therefore,
>> the offset and size go out of sync.
>
> Just to avoid a misunderstanding:
>
> * MemBlob does not have an "offset".

Indeed. I was imprecise in my explanation -- an effect of drafting the
email while I was still in the middle of investigating the code.

The c_str() code doesn't increment SBuf::len_. As a result, the offset
that MemBlob::canAppend() receives (SBuf::off_ + SBuf::len_) no longer
matches MemBlob::size_, which was incremented as part of the c_str()
call.
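
To spell out the mismatch, here is a self-contained sketch; the method
names mirror MemBlob's, but the bodies are my paraphrase and may not
match 3.5.1 exactly:

// Sketch of the append check that fails after c_str().
#include <cstddef>

struct BlobSketch {
    std::size_t size;      // bytes of the blob already in use
    std::size_t capacity;  // bytes allocated

    std::size_t spaceSize() const { return capacity - size; }
    bool isAppendOffset(std::size_t off) const { return off == size; }
    bool willFit(std::size_t n) const { return n <= spaceSize(); }
    bool canAppend(std::size_t off, std::size_t n) const {
        return isAppendOffset(off) && willFit(n);
    }
};

// c_str() does "++store_->size" without touching SBuf::len_, so the
// next caller passes off_ + len_ == size - 1: isAppendOffset() is
// false, canAppend() fails, and rawSpace() reallocates instead of
// reusing the existing 16KB blob.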


> * A call to c_str() should not increase SBuf::len_  either because it
> does not add a new character to the SBuf object. That call just
> terminates the underlying buffer.

Well, without the increment of MemBlob::size_ (or with a matching
increment of SBuf::len_) this would have been fine. However, once
size_ is incremented, the SBuf's notion of how much of the buffer is
used falls out of sync with the MemBlob's.

FWIW, I don't quite understand how the NUL character doesn't count as
a new character in the SBuf object. Yes, it terminates the string, but
in C it is also a legitimate character, so it is unclear to me what
this magic is attempting, or why. Not incrementing len_ seems like a
mistake to me.


> Single-owner optimizations aside (a known TODO), the above is the
> desired behavior according to the documented c_str() guarantees:

Can you please explain or point me to a document that has more info
about this "Single-owner" optimization?

>
>>  * The returned value points to an internal location whose contents
>>  * are guaranteed to remain unchanged only until the next call
>>  * to a non-constant member function of the SBuf object.
>
> In other words, we cannot allow some _other_ SBuf object to overwrite
> our null-termination character in the MemBlob we share with that other SBuf.
>
> The high price for that strong guarantee is one of the reasons we should
> avoid c_str() calls in Squid code.

Note that the issue I have described occurs on a mostly stock Squid
3.5.1 codebase; it does not stem from any new c_str() calls added to
the code.


>> When canAppend() fails, a new
>> buffer is re-allocated. When this reallocation occurs, however, the
>> new size of the buffer is dependent on the size being reserved.
>
> If we are still talking about the I/O buffer (and not just some random
> SBuf string somewhere), then the I/O buffer _capacity_ should not shrink
> below a certain minimum, regardless of how much content the buffer has
> already stored. There should be some Squid code that ensures the minimum
> capacity of the I/O buffer used to read requests. If it is missing, it
> is a Squid bug.

It does shrink, as you can see from the debugs that I posted earlier.


>> As a temporary measure, I have an experimental change that checks
>> whether the body size is known and if known always reserves a large
>> enough size (currently 16K).
>
> It is difficult to discuss this without seeing your changes, but the
> reservation should probably be unconditional -- the I/O buffer capacity
> should always be at least 16KB (or whatever size we start with).

Yes, that would be another way of fixing this issue.

I have posted below the changes that are working for me for the most
part. They use the current capacity as a heuristic for how far to bump
the size up. The diff also embeds some earlier fixes that were pointed
out to me on this thread.

---
diff --git a/3rdparty/squid-3.5.1/src/MemBlob.h b/3rdparty/squid-3.5.1/src/MemBlob.h
index b96330e..d265576 100644
--- a/3rdparty/squid-3.5.1/src/MemBlob.h
+++ b/3rdparty/squid-3.5.1/src/MemBlob.h
@@ -94,6 +94,8 @@ public:
     /// extends the available space to the entire allocated blob
     void clear() { size = 0; }
 
+    size_type currentCapacity() const { return capacity; }
+
     /// dump debugging information
     std::ostream & dump(std::ostream &os) const;

diff --git a/3rdparty/squid-3.5.1/src/SBuf.cc b/3rdparty/squid-3.5.1/src/SBuf.cc
index 53221d6..91886a0 100644
--- a/3rdparty/squid-3.5.1/src/SBuf.cc
+++ b/3rdparty/squid-3.5.1/src/SBuf.cc
@@ -76,7 +76,7 @@ SBufStats::operator +=(const SBufStats& ss)
 SBuf::SBuf()
     : store_(GetStorePrototype()), off_(0), len_(0)
 {
-    debugs(24, 8, id << " created");
+    debugs(24, 8, id << " created, size=" << spaceSize());
     ++stats.alloc;
     ++stats.live;
 }
@@ -171,6 +171,7 @@ SBuf::rawSpace(size_type minSpace)
 // the store knows the last-used portion. If
 // it's available, w

Re: [squid-users] Squid: Small packets and low performance between squid and icap

2016-02-09 Thread Alex Rousskov
[this should be on squid-dev instead]

On 02/09/2016 01:20 PM, Prashanth Prabhu wrote:

> Here's the behavior I have seen: When the connection is set up, the
> buffer gets a size of 16KB (default). Squid reads from the socket,
> parses the data, and then sends it towards c-icap as appropriate. Now,
> as part of parsing the data, the buffer is NUL-terminated via a call
> to c_str(). This NUL-termination, however, is not accounted for by an
> increase in the "offset" (off) in the underlying MemBlob; therefore,
> the offset and size go out of sync.

Just to avoid a misunderstanding:

* MemBlob does not have an "offset".

* SBuf::off_ should not change when we are adding characters to SBuf
because it is the start of the buffer, not the end of it.

* A call to c_str() should not increase SBuf::len_  either because it
does not add a new character to the SBuf object. That call just
terminates the underlying buffer.

Based on your comments below, I think I know what you mean by "go out
of sync", but everything is as "in sync" as it can be when one adds
termination characters that are not really there from SBuf::length()'s
point of view. The bug is elsewhere.


> MemBlob::canAppend() failing because
> MemBlob::isAppendOffset() fails -- the 'off' and 'size' are not the
> same due to the above c_str() call.

Single-owner optimizations aside (a known TODO), the above is the
desired behavior according to the documented c_str() guarantees:

>  * The returned value points to an internal location whose contents
>  * are guaranteed to remain unchanged only until the next call
>  * to a non-constant member function of the SBuf object.

In other words, we cannot allow some _other_ SBuf object to overwrite
our null-termination character in the MemBlob we share with that other SBuf.

The high price for that strong guarantee is one of the reasons we should
avoid c_str() calls in Squid code.
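
To make the hazard concrete, here is a sketch (with assumed sharing
semantics; not real Squid code) of what the ++size inside c_str()
protects against:

// Sketch only: two SBufs sharing one MemBlob. Appends land at the
// blob's 'size' offset, so c_str() must claim the terminator byte.
SBuf a("CONNECT example.com:443 HTTP/1.1");
SBuf b = a;                // b shares a's MemBlob; no copy yet
const char *p = a.c_str(); // writes '\0' after a's last byte and
                           // bumps the blob's size to claim that byte
b.append("X", 1);          // cannot append in place: the blob no longer
                           // accepts an append at b's end offset, so b
                           // copies-on-write instead of clobbering '\0'
// p stays valid until the next non-const member call on 'a' itself.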


> When canAppend() fails, a new
> buffer is re-allocated. When this reallocation occurs, however, the
> new size of the buffer is dependent on the size being reserved.

If we are still talking about the I/O buffer (and not just some random
SBuf string somewhere), then the I/O buffer _capacity_ should not shrink
below a certain minimum, regardless of how much content the buffer has
already stored. There should be some Squid code that ensures the minimum
capacity of the I/O buffer used to read requests. If it is missing, it
is a Squid bug.


> As a temporary measure, I have an experimental change that checks
> whether the body size is known and if known always reserves a large
> enough size (currently 16K). 

It is difficult to discuss this without seeing your changes, but the
reservation should probably be unconditional -- the I/O buffer capacity
should always be at least 16KB (or whatever size we start with).


HTH,

Alex.



Re: [squid-users] Squid: Small packets and low performance between squid and icap

2016-02-09 Thread Prashanth Prabhu
Hi Amos,

I have had a chance to perform some further investigation into the
slow-upload issue. And, it appears to be due to how the buffer is used
when reading from the client-socket.

Here's the behavior I have seen: when the connection is set up, the
buffer gets a size of 16KB (the default). Squid reads from the socket,
parses the data, and then sends it towards c-icap as appropriate. Now,
as part of parsing the data, the buffer is NUL-terminated via a call
to c_str(). This NUL-termination, however, is not accounted for by an
increase in the "offset" (off) in the underlying MemBlob; therefore,
the offset and size go out of sync. This seems to be OK in some cases,
but in others this out-of-sync accounting causes problems.
Specifically, it can result in MemBlob::canAppend() failing because
MemBlob::isAppendOffset() fails -- the 'off' and 'size' are no longer
equal after the above c_str() call. When canAppend() fails, a new
buffer is re-allocated, and the size of that new buffer depends on the
amount of space being reserved. Since that amount is usually smaller
than 16KB, the new buffer is usually smaller than the old one;
sometimes it drops down to a few hundred bytes, or as low as 40B. Once
the new buffer is allocated, its size becomes the new maximum, and no
subsequent read can be larger than it. Read calls therefore end up
reduced to a few bytes at a time.
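
As far as I can tell, the new capacity is driven by the requested
reservation. Paraphrasing the path (the actual 3.5.1 code has more
bookkeeping; only the sizing logic is kept here):

// Paraphrase of SBuf::rawSpace(), not a verbatim copy:
char *
SBuf::rawSpace(size_type minSpace)
{
    if (store_->canAppend(off_ + len_, minSpace))
        return bufEnd();          // fast path: reuse the blob
    // Slow path: the replacement blob is sized to the current content
    // plus the request. After a consume() has emptied the buffer,
    // length() is ~0, so a 40-byte request yields a ~40B blob -- and
    // that becomes the ceiling for later reads.
    cow(minSpace + length());
    return buf();
}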

As a temporary measure, I have an experimental change that checks
whether the body size is known and, if it is, always reserves a large
enough size (currently 16K). With this in place, although there are
occasional low-byte-count read calls, larger reads occur overall, and
the upload speed remains consistently high.
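
Schematically, the change amounts to something like this (illustrative
names only; my actual diff is larger):

// Illustrative-only sketch of the workaround: when the body size is
// known, never reserve less than the default read buffer size.
#include <cstddef>

static const std::size_t kDefaultReadBufSize = 16 * 1024;

std::size_t
clampReservation(std::size_t wanted, bool bodySizeKnown)
{
    if (bodySizeKnown && wanted < kDefaultReadBufSize)
        return kDefaultReadBufSize;  // keep reads large
    return wanted;
}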

The version I have is 3.5.1.

I have some snippets from the logs below, to help with the flow. You
can see, for instance, that between 22:09:07.469 and 22:09:07.470, the
buffer drops down to the smallest possible 40B. Let me know if you
need any further data on this.

Regards.
Prashanth


src/SBuf.cc: SBuf::c_str

const char*
SBuf::c_str()
{
    ++stats.rawAccess;
    /* null-terminate the current buffer, by hand-appending a \0 at its
     * tail but without increasing its length. May COW; the side effect
     * is to guarantee that the MemBlob's tail is available for us to use */
    *rawSpace(1) = '\0';
    ++store_->size;
    ++stats.setChar;
    ++stats.nulTerminate;
    return buf();
}



Snippets from the logs, showing the buffer SBuf2851

2016/01/06 22:09:06.398| SBuf.cc(79) SBuf: SBuf2851 created
2016/01/06 22:09:06.398| SBuf.cc(79) SBuf: SBuf2852 created
2016/01/06 22:09:06.398| SBuf.cc(79) SBuf: SBuf2853 created
...
2016/01/06 22:09:06.399| client_side.cc(3228) clientReadRequest:
local=10.0.49.133:443 remote=10.0.0.254:59837 FD 15 flags=1
2016/01/06 22:09:06.399| cbdata.cc(394) cbdataInternalLock: 0x1123d58=7
2016/01/06 22:09:06.399| SBuf.cc(168) rawSpace: reserving 16382 for SBuf2851
2016/01/06 22:09:06.399| SBuf.cc(910) cow: new size:16382
2016/01/06 22:09:06.399| SBuf.cc(880) reAlloc: new size: 16382
2016/01/06 22:09:06.399| MemBlob.cc(57) MemBlob: constructed,
this=0x12b10f0 id=blob4211 reserveSize=16382
2016/01/06 22:09:06.399| MemBlob.cc(102) memAlloc: blob4211 memAlloc:
requested=16382, received=16384
2016/01/06 22:09:06.399| SBuf.cc(889) reAlloc: new store capacity: 16384
2016/01/06 22:09:06.399| Read.cc(91) ReadNow: local=10.0.49.133:443
remote=10.0.0.254:59837 FD 15 flags=1, size 16382, retval 202, errno 0
2016/01/06 22:09:06.399| SBuf.cc(215) append: from c-string to id SBuf2851
2016/01/06 22:09:06.399| SBuf.cc(168) rawSpace: reserving 202 for SBuf2851
2016/01/06 22:09:06.399| SBuf.cc(175) rawSpace: not growing
2016/01/06 22:09:06.399| client_side.cc(3177) clientParseRequests:
local=10.0.49.133:443 remote=10.0.0.254:59837 FD 15 flags=1:
attempting to parse
2016/01/06 22:09:06.399| SBuf.cc(168) rawSpace: reserving 1 for SBuf2851
2016/01/06 22:09:06.399| SBuf.cc(175) rawSpace: not growing
2016/01/06 22:09:06.399| HttpParser.cc(37) reset: Request buffer is
CONNECT www.box.com:443 HTTP/1.1^M
...
2016/01/06 22:09:06.400| client_side.h(95) mayUseConnection: This
0x125d2b8 marked 1
2016/01/06 22:09:06.400| SBuf.cc(487) consume: consume 202
2016/01/06 22:09:06.400| SBuf.cc(87) SBuf: SBuf2857 created from id SBuf2851
2016/01/06 22:09:06.400| SBuf.cc(124) ~SBuf: SBuf2857 destructed
...
2016/01/06 22:09:06.462| client_side.cc(3228) clientReadRequest:
local=10.0.49.133:443 remote=10.0.0.254:59837 FD 15 flags=1
2016/01/06 22:09:06.462| cbdata.cc(394) cbdataInternalLock: 0x1123d58=13
2016/01/06 22:09:06.462| SBuf.cc(168) rawSpace: reserving 16181 for SBuf2851
2016/01/06 22:09:06.462| SBuf.cc(910) cow: new size:16181
2016/01/06 22:09:06.462| SBuf.cc(880) reAlloc: new size: 16181
2016/01/06 22:09:06.462| MemBlob.cc(57) MemBlob: constructed,
this=0xcab240 id=blob4215 reserveSize=16181
2016/01/06 22:09:06.462| MemBlob.cc(102) memAllo

Re: [squid-users] Squid: Small packets and low performance between squid and icap

2015-11-05 Thread Prashanth Prabhu
Hi Amos,

>> I failed to mention that I am on 3.5.1. And, readSomeData() is already 
>> "fixed":
>
> Bug 4353 exists because the initial fix for 4206 was not enough to fully
> remove the behaviour. Sometimes yes, sometimes no.
>
> Only the nasty hack of allocating buffers twice and throwing one away
> unused seems to work fully so far. That is the patch in 4353.


To be clear, the code in 3.5.1 is already using the
in.maybeMakeSpaceAvailable() call; therefore, the patch for 4353 is
useless for me.

It appears that sometime around 3.5.3 the code was modified to use the
following check instead, and that change is being backed out by 4353.
 if (Config.maxRequestBufferSize - in.buf.length() < 2)


I thought that perhaps the first patch from 4206 would help, but a
quick test has shown that it doesn't.

Are there any documents on how buffer management is done in Squid? I
am seeing small buffers being used to read from the client-side
connection, and I don't quite understand why. Why not read as much as
possible, within the bounds of the space available in the "bodypipe",
so that we maximize the reads?
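
What I would have expected is something like this (illustrative only,
not actual Squid code):

// Size each read by what the body pipe can absorb, bounded by the
// free space in the I/O buffer, instead of letting a shrunken buffer
// cap the read size.
#include <algorithm>
#include <cstddef>

std::size_t
nextReadSize(std::size_t pipeSpace, std::size_t bufCapacity, std::size_t bufUsed)
{
    const std::size_t bufSpace = bufCapacity > bufUsed ? bufCapacity - bufUsed : 0;
    return std::min(pipeSpace, bufSpace);
}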


Regards.
Prashanth

On 5 November 2015 at 07:14, Amos Jeffries wrote:
> On 5/11/2015 10:41 p.m., Prashanth Prabhu wrote:
>> Hello Amos,
>>
>> Thanks for the quick response.
>>
>> I failed to mention that I am on 3.5.1. And, readSomeData() is already 
>> "fixed":
>
> Bug 4353 exists because the initial fix for 4206 was not enough to fully
> remove the behaviour. Sometimes yes, sometimes no.
>
> Only the nasty hack of allocating buffers twice and throwing one away
> unused seems to work fully so far. That is the patch in 4353.
>
>
>> 
>> void
>> ConnStateData::readSomeData()
>> {
>>     if (reading())
>>         return;
>>
>>     debugs(33, 4, HERE << clientConnection << ": reading request...");
>>
>>     if (!in.maybeMakeSpaceAvailable())
>>         return;
>>
>>     typedef CommCbMemFunT<ConnStateData, CommIoCbParams> Dialer;
>>     reader = JobCallback(33, 5, Dialer, this, ConnStateData::clientReadRequest);
>>     Comm::Read(clientConnection, reader);
>> }
>> 
>>
>> I am planning to try the "patch client_side.cc to call
>> maybeMakeSpaceAvailable()" from #4206. Anything else I should try?
>
> The patch from 4353.
>
> And also upgrading to 3.5.11 unless that was a typo in the version
> number *.1 above.
>
> Amos
>


Re: [squid-users] Squid: Small packets and low performance between squid and icap

2015-11-05 Thread Amos Jeffries
On 5/11/2015 10:41 p.m., Prashanth Prabhu wrote:
> Hello Amos,
> 
> Thanks for the quick response.
> 
> I failed to mention that I am on 3.5.1. And, readSomeData() is already 
> "fixed":

Bug 4353 exists because the initial fix for 4206 was not enough to fully
remove the behaviour. Sometimes yes, sometimes no.

Only the nasty hack of allocating buffers twice and throwing one away
unused seems to work fully so far. That is the patch in 4353.


> 
> void
> ConnStateData::readSomeData()
> {
>     if (reading())
>         return;
> 
>     debugs(33, 4, HERE << clientConnection << ": reading request...");
> 
>     if (!in.maybeMakeSpaceAvailable())
>         return;
> 
>     typedef CommCbMemFunT<ConnStateData, CommIoCbParams> Dialer;
>     reader = JobCallback(33, 5, Dialer, this, ConnStateData::clientReadRequest);
>     Comm::Read(clientConnection, reader);
> }
> 
> 
> I am planning to try the "patch client_side.cc to call
> maybeMakeSpaceAvailable()" from #4206. Anything else I should try?

The patch from 4353.

And also upgrading to 3.5.11 unless that was a typo in the version
number *.1 above.

Amos



Re: [squid-users] Squid: Small packets and low performance between squid and icap

2015-11-05 Thread Prashanth Prabhu
Hello Amos,

Thanks for the quick response.

I failed to mention that I am on 3.5.1. And, readSomeData() is already "fixed":

void
ConnStateData::readSomeData()
{
    if (reading())
        return;

    debugs(33, 4, HERE << clientConnection << ": reading request...");

    if (!in.maybeMakeSpaceAvailable())
        return;

    typedef CommCbMemFunT<ConnStateData, CommIoCbParams> Dialer;
    reader = JobCallback(33, 5, Dialer, this, ConnStateData::clientReadRequest);
    Comm::Read(clientConnection, reader);
}


I am planning to try the "patch client_side.cc to call
maybeMakeSpaceAvailable()" from #4206. Anything else I should try?


Regards.
Prashanth

On 4 November 2015 at 19:40, Amos Jeffries wrote:
> On 5/11/2015 4:30 p.m., Amos Jeffries wrote:
>> On 5/11/2015 3:43 p.m., Prashanth Prabhu wrote:
>>> Hi folks,
>>>
>>> I have a setup with ICAP running a custom server alongside Squid.
>>> While testing file upload scenarios, I ran into a slow upload issue
>>> and have narrowed it down to slowness between squid and icap,
>>> especially in the request handling path.
>>
>>
>> Hi Prashanth.
>>
>> This is bugs 4353 and 4206. There is a workaround patch in bug 4353.
>
> Sorry, here is the link 
>
> Amos
>


Re: [squid-users] Squid: Small packets and low performance between squid and icap

2015-11-04 Thread Amos Jeffries
On 5/11/2015 4:30 p.m., Amos Jeffries wrote:
> On 5/11/2015 3:43 p.m., Prashanth Prabhu wrote:
>> Hi folks,
>>
>> I have a setup with ICAP running a custom server alongside Squid.
>> While testing file upload scenarios, I ran into a slow upload issue
>> and have narrowed it down to slowness between squid and icap,
>> especially in the request handling path.
> 
> 
> Hi Prashanth.
> 
> This is bugs 4353 and 4206. There is a workaround patch in bug 4353.

Sorry, here is the link 

Amos



Re: [squid-users] Squid: Small packets and low performance between squid and icap

2015-11-04 Thread Amos Jeffries
On 5/11/2015 3:43 p.m., Prashanth Prabhu wrote:
> Hi folks,
> 
> I have a setup with ICAP running a custom server alongside Squid.
> While testing file upload scenarios, I ran into a slow upload issue
> and have narrowed it down to slowness between squid and icap,
> especially in the request handling path.


Hi Prashanth.

This is bugs 4353 and 4206. There is a workaround patch in bug 4353.

Amos



[squid-users] Squid: Small packets and low performance between squid and icap

2015-11-04 Thread Prashanth Prabhu
Hi folks,

I have a setup with ICAP running a custom server alongside Squid.
While testing file upload scenarios, I ran into a slow upload issue
and have narrowed it down to slowness between squid and icap,
especially in the request handling path.

The slowness is down to extremely small packets sent by Squid towards
the ICAP server. These packets are a few tens of bytes in size,
despite Squid receiving large packets from the client over the HTTPS
connection. The ICAP server acknowledges quickly enough, so this isn't
a case of small packets being generated because the server isn't
reading fast enough.

The debugs haven't shown any hints. It appears that there are times
when Squid allocates only small buffers to read from the HTTPS
connection. I see buffers of even a single byte being allocated during
message processing. I am new to the Squid code, so I might be reading
it all wrong.

I have pasted below a sample TCP dump (incomplete) showing the
behavior, resulting from a curl request. The request was generated on
the same node where squid and ICAP are resident.

Any hints/tips on what may be going wrong here? Appreciate any help in
this matter. Thank you.

Regards.
Prashanth


TCP dump:
Note that on my setup, Squid is running on port 443.
Note also that packets to and from both ports 443 and 1344 appear in
this sequence.

20:53:31.479166 IP localhost.56475 > localhost.https: Flags [S], seq
2166915705, win 32792, options [mss 16396,sackOK,TS val 3300947254 ecr
0,nop,wscale 12], length 0
20:53:31.479178 IP localhost.https > localhost.56475: Flags [S.], seq
728006122, ack 2166915706, win 32768, options [mss 16396,sackOK,TS val
3300947254 ecr 3300947254,nop,wscale 12], length 0
20:53:31.479186 IP localhost.56475 > localhost.https: Flags [.], ack
1, win 9, options [nop,nop,TS val 3300947254 ecr 3300947254], length 0
20:53:31.479308 IP localhost.56475 > localhost.https: Flags [P.], seq
1:221, ack 1, win 9, options [nop,nop,TS val 3300947254 ecr
3300947254], length 220
20:53:31.479317 IP localhost.https > localhost.56475: Flags [.], ack
221, win 9, options [nop,nop,TS val 3300947254 ecr 3300947254], length
0
20:53:31.483620 IP localhost.https > localhost.56475: Flags [P.], seq
1:40, ack 221, win 9, options [nop,nop,TS val 3300947255 ecr
3300947254], length 39
20:53:31.483636 IP localhost.56475 > localhost.https: Flags [.], ack
40, win 9, options [nop,nop,TS val 3300947255 ecr 3300947255], length
0
20:53:31.497413 IP localhost.56475 > localhost.https: Flags [P.], seq
221:534, ack 40, win 9, options [nop,nop,TS val 3300947259 ecr
3300947255], length 313
20:53:31.530394 IP localhost.https > localhost.56475: Flags [P.], seq
40:3153, ack 534, win 9, options [nop,nop,TS val 3300947267 ecr
3300947259], length 3113
20:53:31.531331 IP localhost.56475 > localhost.https: Flags [P.], seq
534:1108, ack 3153, win 10, options [nop,nop,TS val 3300947267 ecr
3300947267], length 574
20:53:31.549229 IP localhost.https > localhost.56475: Flags [P.], seq
3153:3204, ack 1108, win 9, options [nop,nop,TS val 3300947272 ecr
3300947267], length 51
20:53:31.549589 IP localhost.56475 > localhost.https: Flags [P.], seq
1108:1453, ack 3204, win 10, options [nop,nop,TS val 3300947272 ecr
3300947272], length 345

20:53:31.556517 IP localhost.46489 > localhost.1344: Flags [S], seq
2773005283, win 32792, options [mss 16396,sackOK,TS val 3300947274 ecr
0,nop,wscale 12], length 0
20:53:31.556527 IP localhost.1344 > localhost.46489: Flags [S.], seq
2778855454, ack 2773005284, win 32768, options [mss 16396,sackOK,TS
val 3300947274 ecr 3300947274,nop,wscale 12], length 0
20:53:31.556534 IP localhost.46489 > localhost.1344: Flags [.], ack 1,
win 9, options [nop,nop,TS val 3300947274 ecr 3300947274], length 0
20:53:31.559075 IP localhost.46489 > localhost.1344: Flags [P.], seq
1:602, ack 1, win 9, options [nop,nop,TS val 3300947274 ecr
3300947274], length 601
20:53:31.559092 IP localhost.1344 > localhost.46489: Flags [.], ack
602, win 9, options [nop,nop,TS val 3300947274 ecr 3300947274], length
0

20:53:31.588467 IP localhost.https > localhost.56475: Flags [.], ack
1453, win 10, options [nop,nop,TS val 3300947282 ecr 3300947272],
length 0
20:53:32.550821 IP localhost.56475 > localhost.https: Flags [.], seq
1453:17837, ack 3204, win 10, options [nop,nop,TS val 3300947522 ecr
3300947282], length 16384
20:53:32.550849 IP localhost.https > localhost.56475: Flags [.], ack
17837, win 12, options [nop,nop,TS val 3300947522 ecr 3300947522],
length 0
20:53:32.550856 IP localhost.56475 > localhost.https: Flags [P.], seq
17837:17866, ack 3204, win 10, options [nop,nop,TS val 3300947522 ecr
3300947282], length 29
20:53:32.550859 IP localhost.https > localhost.56475: Flags [.], ack
17866, win 12, options [nop,nop,TS val 3300947522 ecr 3300947522],
length 0
20:53:32.550916 IP localhost.56475 > localhost.https: Flags [.], seq
17866:34250, ack 3204, win 10, options [nop,nop,TS val 3300947522 ecr
3300947522], length 16384
20:53:32.550938 IP localh