[ https://issues.apache.org/jira/browse/TS-2983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14107020#comment-14107020 ]

Sudheer Vinukonda commented on TS-2983:
---------------------------------------

[~amc] asked me to test the following fix and it seems to solve the issue. With
the protocol probe enabled and Alan's fix applied, the logs did not show any
corrupted requests (I also confirmed that, without Alan's fix, the logs showed
tons of corrupted requests).

{code}
diff --git a/proxy/ProtocolProbeSessionAccept.cc b/proxy/ProtocolProbeSessionAccept.cc
index 46adb48..97a397a 100644
--- a/proxy/ProtocolProbeSessionAccept.cc
+++ b/proxy/ProtocolProbeSessionAccept.cc
@@ -30,19 +30,20 @@ struct ProtocolProbeTrampoline : public Continuation, public ProtocolProbeSessio
 {
   static const size_t minimum_read_size = 1;
  static const unsigned buffer_size_index = CLIENT_CONNECTION_FIRST_READ_BUFFER_SIZE_INDEX;
+  IOBufferReader *  reader;
 
   explicit
  ProtocolProbeTrampoline(const ProtocolProbeSessionAccept * probe, ProxyMutex * mutex)
     : Continuation(mutex), probeParent(probe)
   {
     this->iobuf = new_MIOBuffer(buffer_size_index);
+    reader = iobuf->alloc_reader();
     SET_HANDLER(&ProtocolProbeTrampoline::ioCompletionEvent);
   }
 
   int ioCompletionEvent(int event, void * edata)
   {
     VIO *             vio;
-    IOBufferReader *  reader;
     NetVConnection *  netvc;
     ProtoGroupKey  key = N_PROTO_GROUPS; // use this as an invalid value.
 
@@ -64,7 +65,6 @@ struct ProtocolProbeTrampoline : public Continuation, public ProtocolProbeSessio
       return EVENT_ERROR;
     }
 
-    reader = iobuf->alloc_reader();
     ink_assert(netvc != NULL);
 
     if (!reader->is_read_avail_more_than(minimum_read_size - 1)) {
{code}
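
For context, here is my reading of why moving the alloc_reader() call helps; this is an assumption on my part about the MIOBuffer semantics, not something I have verified in the buffer code. The trampoline now attaches an IOBufferReader to the buffer before the netvc starts filling it, instead of allocating the reader lazily in ioCompletionEvent() after bytes may already have been written. A minimal sketch of the two orderings (the header names, the do_io_read() call and the free functions below are placeholders standing in for the real call sites, not the actual trampoline code):

{code}
// Sketch only, not tested code. Header names are approximate for the ATS tree.
#include "I_IOBuffer.h"        // new_MIOBuffer(), MIOBuffer, IOBufferReader
#include "I_NetVConnection.h"  // NetVConnection::do_io_read()

// Before the fix: the reader was allocated lazily inside ioCompletionEvent(),
// i.e. only after the VC had already been writing into iobuf. My assumption is
// that data written while the buffer has no reader attached is not retained
// for the probe, which would explain the corrupted requests.
void probe_before(NetVConnection *netvc, Continuation *cont, unsigned buffer_size_index)
{
  MIOBuffer *iobuf = new_MIOBuffer(buffer_size_index);
  netvc->do_io_read(cont, BUFFER_SIZE_FOR_INDEX(buffer_size_index), iobuf);
  // ... ioCompletionEvent() fires later and only then calls ...
  IOBufferReader *reader = iobuf->alloc_reader();
  (void)reader;
}

// After the fix: the reader exists for the buffer's whole lifetime, so
// everything the VC writes stays readable when the probe inspects it.
void probe_after(NetVConnection *netvc, Continuation *cont, unsigned buffer_size_index)
{
  MIOBuffer *iobuf = new_MIOBuffer(buffer_size_index);
  IOBufferReader *reader = iobuf->alloc_reader();
  netvc->do_io_read(cont, BUFFER_SIZE_FOR_INDEX(buffer_size_index), iobuf);
  (void)reader;
}
{code}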

> request headers, http object corrupted in 5.0.x
> -----------------------------------------------
>
>                 Key: TS-2983
>                 URL: https://issues.apache.org/jira/browse/TS-2983
>             Project: Traffic Server
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 5.1.0
>            Reporter: Sudheer Vinukonda
>            Assignee: Alan M. Carroll
>            Priority: Blocker
>             Fix For: 5.1.0
>
>
>  We have run into an HTTP header/field corruption issue on our proxy 
> infrastructure production hosts when we enabled 5.0.x. The issue results in 
> corruption of the Host header, the method, and other fields. 
> For example, this is what we see in our squid access logs:
> {code}
> 1406999819.698 0 69.109.120.92 ERR_CONNECT_FAIL 404 0 nas=0&disclaimer=2; 
> https://AMCV_att1=xxxxxxxxxxxxxxxxx;%20ypcdb=xxxxxxxxxxxxxxxxxx - NONE/- - 
> {code}
> After a lot of debugging, I figured out that the request was getting corrupted 
> even before remap and, in fact, was being parsed incorrectly in the read-request 
> state. Further analysis led me to the commit for TS-2197 (commit 
> 30fcc2b2e698831d1a9e4db1474d8cfc202818a3 in Oct '13), which slightly altered 
> the way the request is read. Reverting that commit seems to have fixed the 
> issue. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)
