Re: Load Balancing with haproxy makes my application slower?

2011-01-29 Thread Sean Hess
Unfortunately, the results are almost exactly the same with haproxy 1.4 and
those changes you recommended. I'm so confused...

Thanks for your help!



Re: Load Balancing with haproxy makes my application slower?

2011-01-29 Thread Sean Hess
Ok, here are the results from Apache Benchmark (ab) *before* making any other
changes to the system (1.4, timeouts, etc).

The Test - https://gist.github.com/802251

The Results against the 1*256 haproxy -> 4*512 node cluster -
https://gist.github.com/802268
Here's the haproxy status after the test -
http://dl.dropbox.com/u/1165308/ab_haproxy.png

The Results against the 1*512 node instance - https://gist.github.com/802271

So, it looks about the same. The single instance outperforms the cluster,
which doesn't make any sense. I'll try those changes and see if it gets any
better.





Re: Load Balancing with haproxy makes my application slower?

2011-01-29 Thread Sean Hess
(Sorry for the double-post Joel, I accidentally only sent this to you
instead of the mailing list)

Thanks Joel,

I'm working on converting the test to ab (shouldn't take long) and trying
out 1.4, but to answer your questions: RSTavg is average response time.
There's a 500ms timer in the http response, and some serialization. It's
over the local network, so that should be about 550ms under no load.

Users per second, yes.

I didn't use ab to start because I'm not interested in response time per
se, but in the load at which response time starts to degrade. I don't know an
effective way to do this with ab, partly because it doesn't support stepping:
my test steps through the concurrency levels specified by "users" (I should
rename Usersps to sessions per second, because if a "user" finishes in less
than 1 second it starts again right away). My testing harness also lets me write
tests in my application language, blah blah.. you get the idea. But yes,
I'll run ab and see if I get the same results.
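
The stepping idea, roughly (an illustrative sketch only, not the actual harness code — names and numbers here are made up):

```javascript
// Rough sketch of the stepping behaviour described above: ramp concurrency
// from `start` to `max` in fixed increments. At each level, every simulated
// "user" loops — if a session finishes in under a second it immediately
// starts again, which is why Usersps is really sessions per second.
function concurrencySteps(start, max, increment) {
  const levels = [];
  for (let c = start; c <= max; c += increment) {
    levels.push(c);
  }
  return levels;
}

// e.g. stepping from 250 up to 1500 users in steps of 250:
// concurrencySteps(250, 1500, 250) → [250, 500, 750, 1000, 1250, 1500]
```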

I'll also try your changes to the timeouts. Thanks for your help!



Re: Load Balancing with haproxy makes my application slower?

2011-01-29 Thread Joel Krauska

Sean,

I think it would be helpful to further explain your testing scenario.

How do you simulate concurrent users?

What is RSTav?

Usersps is sessions per second??

I think most folks use Apache Bench
http://httpd.apache.org/docs/2.0/programs/ab.html
as a fairly common industry standard for HTTP server performance.

Would you consider rerunning your test using ab as well?

Alternatively, you might look at httperf (see the haproxy web page for
some notes)



One tuning thing you might try is dropping down your timeouts.
You have:
timeout connect 1
timeout client 30
timeout server 30

I typically use an order of magnitude smaller:
timeout connect 5000
timeout client  50000
timeout server  50000
(these are the example defaults listed in section 2.3 of the HAProxy docs)
http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
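
In config terms, that's a defaults section along the lines of the following (values taken from the doc example — adjust to taste):

```
defaults
    timeout connect 5000
    timeout client  50000
    timeout server  50000
```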


Best of luck,

Joel







Re: Load Balancing with haproxy makes my application slower?

2011-01-29 Thread Sean Hess
Oh, here's my haproxy config

https://gist.github.com/802098

and here's what my haproxy status looks like shortly after the test

http://dl.dropbox.com/u/1165308/haproxy.png



Load Balancing with haproxy makes my application slower?

2011-01-29 Thread Sean Hess
I'm performing real-world load tests for the first time, and my results
aren't making a lot of sense.

Just to make sure I have the test harness working, I'm not testing "real"
application code yet, I'm just hitting a web page that simulates an IO delay
(500 ms), and then serializes out some json (about 85 bytes of content).
It's not accessing the database, or doing anything other than printing out
that data. My application servers are written in node.js, on 512MB VPSes on
rackspace (centos55).

Here are the results that don't make sense:

https://gist.github.com/802082

When I run this test against a single application server (bottom one), you
can see that it stays pretty flat (about 550ms response time) until it gets
to 1500 simultaneous users, when it starts to error out and slow down.

When I run it against an haproxy instance in front of 4 of the same nodes
(top one), my performance is worse. It doesn't drop any connections, but the
response time edges up much earlier than against a single node.

Does this make any sense to you? Does haproxy need more RAM? I was watching
the box while the test was running and the haproxy process didn't get higher
than 20% CPU and 10% RAM.

Please help, thanks!


Re: x-forwarded-for rpaf

2011-01-29 Thread Willy Tarreau
Hi Phil,

On Sat, Jan 29, 2011 at 10:19:15AM -0600, Phil Parris wrote:
> Something I don't see mentioned in any documentation, something that
> took me days to find is rpaf http://stderr.net/apache/rpaf/  This will
> allow apache, php, logs etc to see the real x-forwarded-for ip without
> any program modifications.  Install the apache addon, enable it and
> now all code sees the x-forwarded-for ip.   Please add this to
> documentation so others don't waste their time finding it.

rpaf is regularly suggested here on the list. I agree it could also be
mentioned in the doc. Care to send a patch for the "xforwardfor" section?

Thanks,
Willy




x-forwarded-for rpaf

2011-01-29 Thread Phil Parris
Something I don't see mentioned in any documentation, and something that
took me days to find, is rpaf (http://stderr.net/apache/rpaf/). It allows
Apache, PHP, logs, etc. to see the real X-Forwarded-For IP without any
program modifications. Install the Apache addon, enable it, and all code
sees the X-Forwarded-For IP. Please add this to the documentation so others
don't waste their time finding it.
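
For reference, a minimal setup looks something like this (the module path and proxy IPs are examples only — adjust for your install):

```
# httpd.conf — example only; module filename and IPs depend on your install
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable On
RPAFsethostname On
RPAFproxy_ips 127.0.0.1 10.0.0.1
RPAFheader X-Forwarded-For
```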

Thanks



Re: BADREQ with large cookies?

2011-01-29 Thread Willy Tarreau
Hi Seth,

On Sat, Jan 29, 2011 at 11:52:16PM +1100, Seth Yates wrote:
> Hi,
> 
> We're getting quite a few BADREQ entries in the logs, and we're thinking its
> because some clients are accumulating a lot of cookies.  Is HAPROXY
> returning 400 Bad Request when a request or cookie or header size is
> exceeded?

Yes, a request must fit in a buffer in order to be processed. The default
buffer size is 16kB with one half reserved for rewrite purposes, which leaves
you with 8kB max per request. You can change that in your global section
using tune.bufsize and tune.maxrewrite (the latter being the reserved size).
You can, for instance, just reduce maxrewrite to 1024 so that you'll have a
limit of 15kB per request.
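
For instance, in the global section (16384 is already the default bufsize, shown here only for clarity):

```
global
    tune.bufsize    16384
    tune.maxrewrite 1024
```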

> If so, is there a way to turn off these checks?

It's not a check per se, it's a limitation by design. Parsing and processing
HTTP requires some memory and all products have limits.

> We're using the
> latest haproxy downloaded from the 1wt website.  Here's a tcpdump of one of
> the sessions:
(...)
> seq 1:1381, ack 1, win 4140, length 1380
> GET /serve?p=3&n=4d4406...
(...)
> Referer: http://.xx...
> Cookie: cv-%21%21%21%21%...

> seq 1381:2761, ack 1, win 4140, length 1380
> seq 2761:4141, ack 1, win 4140, length 1380
> seq 4141:5521, ack 1, win 4140, length 1380
> seq 5521:6901, ack 1, win 4140, length 1380
> seq 6901:8281, ack 1, win 4140, length 1380
> ack 8281, win 183, length 0

See above? Your client is sending more than 8kB of data, of which approximately
7kB is for the cookie alone. The request and the referrer are very large
too.

The problem your site's visitors will face with this is a very slow access
to your site. All requests will have a large path and a large referrer, and
above all an extremely large cookie. If your site has 20 images to display,
the 8kB above will have to be sent 20 times, which means 160 kB of requests
to upload from a slow client. With a 512/128 ADSL line, this means that the
line is saturated for 10 seconds before the page can load. If your site has
to be accessed from smartphones, it will be even worse, because the upload
speed will be even smaller, and the amount of uploaded data will force the
client to wait for ACKs to be sent every two packets or so, resulting in the
RTT being added many times to the download time.
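
Spelling out the arithmetic above:

```javascript
// The numbers above, spelled out: ~8 kB of request headers repeated for
// 20 objects, uploaded over the 128 kbit/s side of a 512/128 ADSL line.
const perRequestKB = 8;
const objects = 20;
const uplinkKBps = 128 / 8;             // 128 kbit/s = 16 kB/s

const totalKB = perRequestKB * objects; // 160 kB of requests to upload
const seconds = totalKB / uplinkKBps;   // 10 s with the uplink saturated
```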

You should really find a way to reduce these cookies and URLs. A 1kB cookie
should already be considered the worst tolerable case, and URLs should be
a lot shorter to avoid them appearing multiple times in referrers (or use
POST instead of GET).

Also, while haproxy (and many other products) has a per-request size limit,
others such as Apache have a per-line size limit. Apache limits headers to
8kB. That means your cookie is about to be rejected by Apache too, and
likewise by many other products, because Apache is often used as a reference
for setting limits: whatever does not pass through it has little reason to
pass through anything else, given how widespread it is.

Hoping this helps,
Willy




BADREQ with large cookies?

2011-01-29 Thread Seth Yates
Hi,

We're getting quite a few BADREQ entries in the logs, and we're thinking it's
because some clients are accumulating a lot of cookies.  Is haproxy
returning 400 Bad Request when a request, cookie, or header size is
exceeded?  If so, is there a way to turn off these checks?  We're using the
latest haproxy downloaded from the 1wt website.  Here's a tcpdump of one of
the sessions:

20:23:37.752552 IP 124.179.205.xxx.62316 > 122.200.132.xxx.80: Flags [S],
seq 296260122, win 8192, options [mss 1380,nop,wscale 2,nop,nop,sackOK],
length 0
E..4..@.r...|...zl.P.. .m..d
20:23:37.752563 IP 122.200.132.xxx.80 > 124.179.205.xxx.62316: Flags [S.],
seq 1379012113, ack 296260123, win 5840, options [mss
1460,nop,nop,sackOK,nop,wscale 7], length 0
E..4..@.@...z...|P.lR2...S..
20:23:38.066083 IP 124.179.205.xxx.62316 > 122.200.132.xxx.80: Flags [.],
ack 1, win 4140, length 0
E..(..@.r...|...zl.PR2..P..,].
20:23:38.100677 IP 124.179.205.xxx.62316 > 122.200.132.xxx.80: Flags [.],
seq 1:1381, ack 1, win 4140, length 1380
E.@.r..Q|...zl.PR2..P..,GET
/serve?p=3&n=4d4406c99a4983a8438569f8&ad=c0669601ff51c32406000200&wid=1660829904&vid=0&mod=0&type=js&sz=160x600&wb=0.483000&ord=20110129T122337&click=
http://track..com/AdServer/AdDisplayTrackerServlet?clickData=MEQAAERjAAC1TgAAaQgAAAEA8wAAAKBYAgI4RDFEQTdGNi03Nzc3LTQyMTQtOUQwOS02NzhDMjgxQzdDMEIAAE5DT0xPUgAATkNPTE9SAABOQ09MT1IAAE5DT0xPUgAATkNPTE9SAA==_url=HTTP/1.1
Host: xxx.x.net
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.13)
Gecko/20101203 Firefox/3.6.13
Accept: */*
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Referer:
http://..com/sda/b/20100920/yl_160x600_catchall_www_facebook_com.htm?clientId=bf740958-a2f1-43c9-9f34-08cd9a53fcfc
Cookie:
cv-%21%21%21%21%21%21%21%21%21%21%21%40%40-%23=Eh4IABIMJRZwAf9RwywVIgIAGLvDhNjDJSC7s6qK3iUSHggAEgzUmv8E_1HDXBA3AAAY787K68QlIIiu_cDFJRIeCAASDJhP_gT_UcNcEDUAABiLsNCgxSUgp-uowcUlEh4IABIMHxAWAf9RwwgKoQAAGJPSt8LEJSCTgrbexyUSHggAEgyMMP0E_1HDXBAxAAAYhPXzmMUlIOeEj43GJRIeCAASDMn0GwD_UcNcEHgAABj2nsGjxSUg_urSrsglEh4IABIMY1IjBf9Rw1wQYQAAGMaSg5jFJSDDtf_ayCUSHggAEgzyinEB_1HDLBUmAgAYm-XV9cQlIJvl7IjKJRIeCAASDI5o-wT_UcNcEC0AABjypYuYxSUg8vWjj8clEh4IABIMBM79BP9Rw1wQMwAAGKKBvsDFJSCIyK-CyiUSHggAEgx4ufwE_1HDXBAvAAA
20:23:38.133713 IP 124.179.205.xxx.62316 > 122.200.132.xxx.80: Flags [.],
seq 1381:2761, ack 1, win 4140, length 1380
E.@.r..P|...zl.PR2..P..,0...YvYnimcUlIIvRhLrHJQ..;
cv-%21%21%21%21%21%21%21%21%21%21%21%5B%5B6%24=Eh4IABIM1Jr_BP9Rw1wQNwAAGLv0l8HFJSD1h6aS2SUSHggAEgwEzv0E_1HDXBAzAAAY7srKwcUlIO6i2pLZJRIpCAASDNNh1wL_UcNkF00CABi-ht-c1yUg0I2wofIlKNivs8j6_wE.;
bsuid=%3DiNfHXl%23Fg%3Fis4L;
cv-jkhVJx%28%25_g%25wV0%2F=EikIABIM3OtbAP9RwxgNcwUAGKq4lZrYJSC7gfiU3yUo2K-zyPr_ARIpCAASDL_NXAD_UcMYDXcFABjw95WR2CUg-Iu9lt8lKNivs8j6_wESKQgAEgwQaFwA_1HDGA11BQAYvoKe0NglIKD3zZrfJSjYr7PI-v8B;
cv-%21%21%21%21%21%21%21%21%21%211kbTP=EiQIABIMmKYsAf9Rw1AQ7wAAGIq7-8HVJSCK06WgrSsolpLl4AQ.;
cv-%21%21%21%21%21%21%21%21%21%212LBgR=EikIABIMmKYsAf9Rw1AQ7wAAGLC75sPVJSCw49qi5Sso2K-zyPr_AQ..;
cv-k.9S%3Ax%28%25a%5D%2A%29y%21E=EiQIABIMmKYsAf9Rw1AQ7wAAGPavu_jVJSDSq-eZ3yUolpLl4AQSKQgAEgwRrfIA_1HDzBYpAQAYrNTwgNYlIK-KiZ3fJSjYr7PI-v8BEiQIABIMRzxGAf9RwzQL1gUAGJKb37PYJSC3rIH33yUolpLl4AQSKQgAEgwWtUUB_1HDNAvUBQAYstHR8NclIOayoangJSjYr7PI-v8B;
cv-%21%21%21%21%21%21%21%21%21%21%23%3D%3CI%27=EikIABIMEa3yAP9Rw8wWKQEAGLCZq8rVJSCw4eXp7y0o2K-zyPr_AQ..;
cv-%21%21%21%21%21%21%21%21%21%21%3ChWmp=EikIABIMEa3yAP9Rw8wWKQEAGJmnx_jVJSCZx6vA4zIo2K-zyPr_AQ..;
cv-%21%21%21%21%21%21%21%21%21%211OFKO=EikIABIMmKYsAf9Rw1AQ7wAAGMHllfrVJSDB7b-t8TIo2K-zyPr_AQ..;
cv-HSYZ.x%28%25a%3C%29%21%2B%298=EikIABIMBPlbAf9Rw3AOcgEAGLHCws3YJSDHh-Da2iUo2K-zyPr_ARIkCAASDHWbdgH_UcNwDnQBABihpcvh2CUgiajG3NwlKJaS5eAEEikIABIMbIx3Af9Rw3AOdgEAGMS13tDYJSC
20:23:38.133722 IP 122.200.132.xxx.80 > 124.179.205.xxx.62316: Flags [.],
ack 2761, win 92, length 0
E..(.d@.@..Pz...|P.lR2..P..\b...
20:23:38.478917 IP 124.179.205.xxx.62316 > 122.200.132.xxx.80: Flags [P.],
seq 2761:4141, ack 1, win 4140, length 1380
E.@.r..O|...zl.PR2..P..,Eu5qR3SUo2K-zyPr_AQ..;
cv-%21%21%21%21%21%21%21%21%21%21%21ww%40%25=EikIABIMbIx3Af9Rw3AOdgEAGIfJs6LWJSCHkd-y5SUo2K-zyPr_AQ..;
cv-%21%21%21%21%21%21%21%21%21%21%29ea79=EikIABIMFrVFAf9RwzQL1AUAGPXqlOfWJSD1os3V_Cco2K-zyPr_ARIpCAASDEc8RgH_UcM0C9YFABjTwpnn1iUg0_rR1fwnKNivs8j6_wE.;
cv-%21%21%21%21%21%21%21%21%21%21%2FqitJ=EiQIABIMBdtYAP9Rw8wX7wgAGLji8-fWJSC4srqF4yUolpLl4AQ.;
cv-U%40KP8x%28%25__%24%5BQjf=EikIABIMLpRXAP9Rw8wXsggAGIK4z-jWJSCCyMq84CUo2K-zyPr_ARIpCAASDCmjSAD_UcNwDioAABiYnJij2CUgtqmgl-ElKNivs8j6_wESKQgAEgzP3kcA_1HDcA4kAAAYsJ3fs9glIJjug5nhJSjYr7PI-v8BEikIABIMCSdKAP9Rw3AONAAAGKC3x83XJSDj-YSh4SUo2K-zyPr_ARIpCAASDErISgD_UcNwDjo