David Lobron wrote:

2007-07-26 20:18:04.375 [3317] GS: Got response from sendDataPending
2007-07-26 20:18:04.376 [3317] GS: Calling poll with timeout 60000
2007-07-26 20:18:04.376 [3317] GS: Checking poll results
2007-07-26 20:18:04.376 [3317] GS: calling SSL_write on buffer of length 1281
2007-07-26 20:18:04.376 [3317] GS: done with SSL_write
2007-07-26 20:18:04.376 [3317] Called advanceSendBuffer:len
2007-07-26 20:18:04.377 [3317] GS: Calling poll with timeout 60000
2007-07-26 20:18:04.426 [3317] GS: Checking poll results <------------- 50ms delay occurs here
2007-07-26 20:18:04.426 [3317] GS: calling SSL_read
2007-07-26 20:18:04.427 [3317] GS: done with SSL_read, len = 142, text = GET


And if you run tcpdump on the TCP packets, does it also agree there is a 50ms delay ? which end is causing the delay ?


I ran a session with tcpdump on the server, stepping the client in a debugger, and I am having some trouble interpreting the results. The tcpdump output for a session is as follows (I've filtered to show only TCP transmissions in both directions with the client, and replaced the host names with CLIENT and SERVER):

21:08:30.190534 CLIENT.39044 > SERVER.8443: P 844:1025(181) ack 4827 win 65535 <nop,nop,timestamp 173778704 299402767> (DF)
21:08:30.199567 SERVER.8443 > CLIENT.39044: P 4827:4917(90) ack 1025 win 8576 <nop,nop,timestamp 299406197 173778704> (DF)
21:08:30.202752 SERVER.8443 > CLIENT.39044: . 4917:6285(1368) ack 1025 win 8576 <nop,nop,timestamp 299406200 173778704> (DF)
21:08:30.290932 CLIENT.39044 > SERVER.8443: . ack 4917 win 65535 <nop,nop,timestamp 173778704 299406197> (DF)
21:08:30.290977 SERVER.8443 > CLIENT.39044: P 6285:6483(198) ack 1025 win 8576 <nop,nop,timestamp 299406288 173778704> (DF)

That does "feel" like the server TCP was waiting for the client's ACK before sending the 198 bytes of data. However, it isn't a slam-dunk that Nagle was involved here, since we don't know the MSS (we don't have the connection establishment) and cannot tell for certain whether the previous exchanges would have grown the TCP congestion window to a size that would have allowed the send anyway.

Given the spread of the ACKno (4917) and the SEQno of what the server sends (6285) I'm inclined to think this was a congestion window-induced delay rather than Nagle. Typically for Nagle one sees the ACKno and the subsequent SEQno as the same value. Congestion windows can be in bytes, some stacks count them in terms of segments (packets). Which OSes are involved here again and what revs?
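If Nagle does turn out to be the culprit, the usual application-side workaround is to set TCP_NODELAY on the socket before handing it to OpenSSL. A minimal sketch (not from the original post; the helper name `disable_nagle` is made up, and it assumes an already-created TCP socket descriptor):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable the Nagle algorithm on a TCP socket so small segments
 * are sent immediately rather than waiting for an outstanding ACK.
 * Returns 0 on success, -1 on error (errno set by setsockopt). */
static int disable_nagle(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}
```

Note that TCP_NODELAY trades the delay for potentially many more small packets on the wire, which is why batching writes (discussed below) is usually the better first fix.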

21:08:30.366362 CLIENT.39044 > SERVER.8443: . ack 6285 win 65535 <nop,nop,timestamp 173778704 299406200> (DF)

That is the ACKno I would have expected to "shake loose" the Nagle-delayed data, and since it arrives fairly far in time from when that data was sent, I'm guessing it was just a plain standalone ACK and not an inadvertent reordering of traffic in the trace.

21:08:30.382245 CLIENT.39044 > SERVER.8443: . ack 6483 win 65466 <nop,nop,timestamp 173778704 299406288> (DF)


The greater the quantity of data you can present to the transport at one time, the better (broad handwaving terms). For example, if the stack is doing congestion windows in units of segments (packets) then you want to get as much into those segments as you can.
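One way to present more data to the transport at once is to stage several small application messages in a buffer and hand them to SSL_write() as a single larger chunk. A minimal sketch of the staging step (my illustration, not code from the thread; the name `stage_append` is made up):

```c
#include <stddef.h>
#include <string.h>

/* Append src (len bytes) into a staging buffer so that several small
 * application messages can later go to SSL_write() -- and hence TCP --
 * as one larger record.  buf/used/cap describe the staging buffer.
 * Returns 0 on success, -1 if the data does not fit (caller should
 * flush the buffer with one SSL_write() and retry). */
static int stage_append(char *buf, size_t *used, size_t cap,
                        const char *src, size_t len)
{
    if (len > cap - *used)
        return -1;
    memcpy(buf + *used, src, len);
    *used += len;
    return 0;
}
```

The flush would then be a single SSL_write(ssl, buf, used), which also avoids paying the per-record SSL framing overhead for each tiny write.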

rickjones
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           [EMAIL PROTECTED]
