[jira] [Updated] (TS-2269) regex_remap plugin has a problem handling the case when the URL path is empty

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2269:
--

Fix Version/s: 4.1.0
 Assignee: Leif Hedstrom

 regex_remap plugin has a problem handling the case when the URL path is empty
 ---

 Key: TS-2269
 URL: https://issues.apache.org/jira/browse/TS-2269
 Project: Traffic Server
  Issue Type: Bug
  Components: Plugins
Reporter: Kit Chan
Assignee: Leif Hedstrom
 Fix For: 4.1.0

 Attachments: regex_remap.diff


 Specifically the block of code is here - 
 https://github.com/apache/trafficserver/blob/master/plugins/regex_remap/regex_remap.cc#L802-806
   *(match_buf + match_len) = '/';
   if (req_url.path && req_url.path_len > 0) {
     memcpy(match_buf + match_len + 1, req_url.path, req_url.path_len);
     match_len += (req_url.path_len + 1);
   }
 So if req_url.path is empty (e.g. when the request URL is http://www.xyx.com/), 
 match_len is never incremented to account for the leading '/'.
 As a result, a regular expression such as '^/$' will not match in this case.
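
 A minimal sketch of the fix (the attached regex_remap.diff is authoritative and may differ in detail): always count the leading '/' in match_len, whether or not a path follows.
 {code}
   *(match_buf + match_len) = '/';
   ++match_len;                                  // always account for the leading '/'
   if (req_url.path && req_url.path_len > 0) {
     memcpy(match_buf + match_len, req_url.path, req_url.path_len);
     match_len += req_url.path_len;
   }
 {code}
 With this, an empty path still produces the string "/", so '^/$' matches as expected.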



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2269) regex_remap plugin has a problem handling the case when the URL path is empty

2013-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793366#comment-13793366
 ] 

ASF subversion and git services commented on TS-2269:
-

Commit 98d06d2df771798954801ea53ee3148ff0f1f631 in branch refs/heads/master 
from [~kichan]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=98d06d2 ]

TS-2269 regex_remap plugin does not deal with empty paths properly.

This is a problem when the request URL path is just '/', which inside
of ATS is an empty string.

Review: leif





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2264) The current cache connection counter is not reliably decremented with use_client_source_port

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2264:
--

Fix Version/s: 4.1.0

 The current cache connection counter is not reliably decremented with 
 use_client_source_port
 

 Key: TS-2264
 URL: https://issues.apache.org/jira/browse/TS-2264
 Project: Traffic Server
  Issue Type: Bug
  Components: TProxy
Reporter: Alan M. Carroll
Assignee: Alan M. Carroll
Priority: Minor
 Fix For: 4.1.0


 If use_client_target_addr and use_client_source_port are enabled with 
 transparent proxy, in some cases the current cache connection counter is not 
 decremented. This happens when a client connection is terminated due to an 
 EADDRNOTAVAIL failure.
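
 A general way to keep such a gauge balanced is to tie the decrement to scope exit rather than to each individual error branch. A minimal self-contained sketch (illustrative names only, not the ATS code):
 {code}
 #include <atomic>

 static std::atomic<int> current_cache_connections(0);

 // RAII guard: the gauge is decremented on every exit path, including an
 // early return from an EADDRNOTAVAIL failure.
 struct ConnectionGauge {
   ConnectionGauge()  { ++current_cache_connections; }
   ~ConnectionGauge() { --current_cache_connections; }
 };

 void handle_connection(bool bind_failed /* e.g. EADDRNOTAVAIL */) {
   ConnectionGauge guard;  // incremented here
   if (bind_failed) {
     return;               // guard still decrements the gauge
   }
   // ... normal request processing ...
 }
 {code}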



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2262) range request for cached content with small size (around 2k bytes) fails.

2013-10-12 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793368#comment-13793368
 ] 

Leif Hedstrom commented on TS-2262:
---

I'm going to mark this for v3.2.6, but I don't know if anyone will work on it.

 range request for cached content with small size (around 2k bytes) fails.
 

 Key: TS-2262
 URL: https://issues.apache.org/jira/browse/TS-2262
 Project: Traffic Server
  Issue Type: Bug
Reporter: jaekyung oh
 Fix For: 3.2.6


 After caching a piece of content around 2k bytes in size, a range request for it 
 fails with a timeout.
 Version: ATS 3.2.4
 curl -v -o /dev/null --range 100-200 "http://ats-test.test.net/1-test.2k" 
 shows
 * About to connect() to ats-test.test.net port 80 (#0)
 *   Trying 110.45.197.30... connected
   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                  Dload  Upload   Total   Spent    Left  Speed
   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 * Connected to ats-test.test.net (xxx.xxx.xxx.xxx) port 80 (#0)
 > GET ats-test.test.net/1-test.2k HTTP/1.1
 > Range: bytes=100-200
 > User-Agent: curl/7.20.1 (x86_64-unknown-linux-gnu) libcurl/7.20.1 OpenSSL/1.0.0 zlib/1.2.5 libidn/1.15 libssh2/1.2.2_DEV
 > Host: ats-test.test.net
 > Accept: */*
 >
 < HTTP/1.1 206 Partial Content
 < Accept-Ranges: bytes
 < ETag: "2429143783"
 < Last-Modified: Mon, 22 Apr 2013 07:46:30 GMT
 < Date: Tue, 01 Oct 2013 09:15:00 GMT
 < Server: ATS/3.2.4.3.0
 < Content-Type: multipart/byteranges; boundary=RANGE_SEPARATOR
 < Content-Length: 1000
 < Age: 172
 < Connection: keep-alive
 <
 { [data not shown]
   0  1000    0     1    0     0      0      0 --:--:--  0:00:30 --:--:--     0
 * transfer closed with 999 bytes remaining to read
   0  1000    0     1    0     0      0      0 --:--:--  0:00:30 --:--:--     0
 * Closing connection #0
 curl: (18) transfer closed with 999 bytes remaining to read
 Is there any limit on the content size for range request processing?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2269) regex_remap plugin has a problem handling the case when the URL path is empty

2013-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793367#comment-13793367
 ] 

ASF subversion and git services commented on TS-2269:
-

Commit 4c5fbfa06830ce9167a9be4a1992de22ba753c5e in branch refs/heads/master 
from [~zwoop]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=4c5fbfa ]

Added TS-2269.





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2262) range request for cached content with small size (around 2k bytes) fails.

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2262:
--

Fix Version/s: 3.2.6




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2254) ink_atomic_increment should return the old value

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2254:
--

Fix Version/s: 4.1.0

 ink_atomic_increment should return the old value
 

 Key: TS-2254
 URL: https://issues.apache.org/jira/browse/TS-2254
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Yu Qing
Assignee: Yu Qing
 Fix For: 4.1.0

 Attachments: 
 0001-TS-2254-ink_atomic_increment-should-return-the-old-v.patch


 lib/ts/ink_atomic.h
 template<>
 inline int64_t
 ink_atomic_increment<int64_t>(pvint64 mem, int64_t value) {
   int64_t curr;
   ink_mutex_acquire(&__global_death);
   curr = *mem;
   *mem = curr + value;
   ink_mutex_release(&__global_death);
   return curr + value;  // SHOULD return curr!
 }
 This function should return the old value (curr, NOT curr + value); it should 
 return the same value as the gcc builtin __sync_fetch_and_add.
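
 For clarity, a sketch of the corrected specialization (the attached patch is authoritative):
 {code}
 template<>
 inline int64_t
 ink_atomic_increment<int64_t>(pvint64 mem, int64_t value) {
   int64_t curr;
   ink_mutex_acquire(&__global_death);
   curr = *mem;
   *mem = curr + value;
   ink_mutex_release(&__global_death);
   return curr;  // the old value, same as __sync_fetch_and_add(mem, value)
 }
 {code}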



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2254) ink_atomic_increment should return the old value

2013-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793370#comment-13793370
 ] 

ASF subversion and git services commented on TS-2254:
-

Commit a3d98482c1ad8b81494efae5222dd2fff4504a3c in branch refs/heads/master 
from [~happy_fish100]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=a3d9848 ]

TS-2254 On ARM arch, ink_atomic_increment returns wrong value

It should return the old value, before the increment. This is
consistent with how the other ink_atomic_increment implementation
works.

Review: leif





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2254) ink_atomic_increment should return the old value

2013-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793371#comment-13793371
 ] 

ASF subversion and git services commented on TS-2254:
-

Commit 898d0a558dc15c9dddc5bad48ee582f5a1471a3c in branch refs/heads/master 
from [~zwoop]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=898d0a5 ]

Added TS-2254.





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2252) bison version mis-detected on ubuntu 13.10

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2252:
--

Fix Version/s: 4.2.0

 bison version mis-detected on ubuntu 13.10
 --

 Key: TS-2252
 URL: https://issues.apache.org/jira/browse/TS-2252
 Project: Traffic Server
  Issue Type: Bug
  Components: Portability
Reporter: James Peach
Assignee: Alan M. Carroll
 Fix For: 4.2.0


 On Ubuntu 13.10 (Saucy), the --enable-wccp option mis-detects bison 
 2.7.12-4996:
 {code}
 checking for bison... bison
 checking for flex... flex
 checking lex output file root... lex.yy
 checking lex library... -lfl
 checking whether yytext is a pointer... yes
 configure: error: Need bison version 2.4.1 or better to enable WCCP (found no 
 version data)
 ...
 vagrant@vagrant-ubuntu-saucy-64:~/build$ bison --version
 bison (GNU Bison) 2.7.12-4996
 ...
 {code}
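
 The version string at fault carries a Debian-style suffix ("-4996") that a strict major.minor.patch parse chokes on. A minimal sketch of suffix-tolerant parsing (illustrative C++, not the actual configure macro):
 {code}
 #include <cstdio>

 int main() {
   const char *line = "bison (GNU Bison) 2.7.12-4996";
   int major = 0, minor = 0, patch = 0;
   const char *p = line;
   while (*p && (*p < '0' || *p > '9'))
     ++p;                                  // skip to the first digit
   if (std::sscanf(p, "%d.%d.%d", &major, &minor, &patch) >= 2) {
     // Require 2.4.1 or newer; ignore anything after the patch number.
     bool ok = major > 2 || (major == 2 && (minor > 4 || (minor == 4 && patch >= 1)));
     std::printf("bison %d.%d.%d: %s\n", major, minor, patch, ok ? "ok" : "too old");
   }
   return 0;
 }
 {code}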



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2251) LogBuffer::destroy() defeated by compiler optimizations

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2251:
--

Fix Version/s: 4.1.0

 LogBuffer::destroy() defeated by compiler optimizations
 ---

 Key: TS-2251
 URL: https://issues.apache.org/jira/browse/TS-2251
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging, Portability, Quality
Reporter: James Peach
Assignee: James Peach
Priority: Trivial
 Fix For: 4.1.0


 {{LogBuffer::destroy()}} uses atomic compare and swaps on the {{LogBuffer}} 
 reference count to decrement the refcount and destroy the LogBuffer object. 
 However, the compiler (Apple clang-500.2.75) hoists the read of 
 LogBuffer::m_references out of the loop, so it won't work correctly if 
 {{ink_atomic_cas}} ever fails:
 {code}
 __ZN9LogBuffer7destroyEPS_:             ## @_ZN9LogBuffer7destroyEPS_
   .cfi_startproc
   .cfi_personality 155, ___gxx_personality_v0
 Leh_func_begin1:
   .cfi_lsda 16, Lexception1
 Lfunc_begin1:
   .loc 1 66 0   ## /Users/jpeach/src/trafficserver.git/proxy/logging/LogBuffer.cc:66:0
 ## BB#0:
   pushq   %rbp
 Ltmp13:
   .cfi_def_cfa_offset 16
 Ltmp14:
   .cfi_offset %rbp, -16
   movq    %rsp, %rbp
 Ltmp15:
   .cfi_def_cfa_register %rbp
   pushq   %r14
   pushq   %rbx
   subq    $32, %rsp
 Ltmp16:
   .cfi_offset %rbx, -32
 Ltmp17:
   .cfi_offset %r14, -24
   ##DEBUG_VALUE: destroy:lb <- RDI+0
   movq    %rdi, %rbx
 {code}
 Notice that the following load of LogBuffer::m_references is outside the loop 
 labelled {{LBB1_1}}:
 {code}
 Ltmp18:
   ##DEBUG_VALUE: destroy:lb <- RBX+0
   .loc 1 70 0 prologue_end   ## /Users/jpeach/src/trafficserver.git/proxy/logging/LogBuffer.cc:70:0
   leaq    104(%rbx), %rsi
 Ltmp19:
   ##DEBUG_VALUE: ink_atomic_cas<int>:mem <- RSI+0
   .align  4, 0x90
 LBB1_1:                                 ## =>This Inner Loop Header: Depth=1
   ##DEBUG_VALUE: destroy:lb <- RBX+0
   ##DEBUG_VALUE: ink_atomic_cas<int>:mem <- RSI+0
   movl    (%rsi), %ecx
 Ltmp20:
   ##DEBUG_VALUE: old_ref <- ECX+0
   ##DEBUG_VALUE: ink_atomic_cas<int>:prev <- ECX+0
   .loc 1 71 0   ## /Users/jpeach/src/trafficserver.git/proxy/logging/LogBuffer.cc:71:0
   leal    -1(%rcx), %edx
 Ltmp21:
   ##DEBUG_VALUE: ink_atomic_cas<int>:next <- EDX+0
   ##DEBUG_VALUE: new_ref <- EDX+0
   .loc 29 153 0   ## /Users/jpeach/src/trafficserver.git/lib/ts/ink_atomic.h:153:0
   movl    %ecx, %eax
   lock
   cmpxchgl %edx, (%rsi)
   cmpl    %ecx, %eax
 Ltmp22:
   .loc 1 73 0   ## /Users/jpeach/src/trafficserver.git/proxy/logging/LogBuffer.cc:73:0
   jne LBB1_1
 Ltmp23:
 ## BB#2:
   ##DEBUG_VALUE: destroy:lb <- RBX+0
   .loc 1 75 0   ## /Users/jpeach/src/trafficserver.git/proxy/logging/LogBuffer.cc:75:0
   testl   %ecx, %ecx
   jle LBB1_15
 ## BB#3:
   ##DEBUG_VALUE: destroy:lb <- RBX+0
   .loc 1 77 0   ## /Users/jpeach/src/trafficserver.git/proxy/logging/LogBuffer.cc:77:0
   testl   %edx, %edx
   jne LBB1_14
 ## BB#4:
   ##DEBUG_VALUE: destroy:lb <- RBX+0
   testq   %rbx, %rbx
   jne LBB1_5
 {code}
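
 For context, the decrement is only correct if the value fed to the CAS is re-loaded on every retry. A minimal self-contained sketch of that shape (illustrative types and GCC-style builtins, not the actual LogBuffer code):
 {code}
 struct RefCounted {
   int m_references;
 };

 void
 release(RefCounted *obj)
 {
   int old_ref, new_ref;
   do {
     // The volatile load forces a fresh read of m_references on every
     // iteration, so a failed CAS cannot retry with a hoisted stale value.
     old_ref = *static_cast<volatile int *>(&obj->m_references);
     new_ref = old_ref - 1;
   } while (!__sync_bool_compare_and_swap(&obj->m_references, old_ref, new_ref));

   if (new_ref == 0)
     delete obj;  // last reference dropped
 }
 {code}
 Alternatively, a single __sync_fetch_and_sub(&obj->m_references, 1) expresses the same operation without a hand-rolled retry loop at all.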



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2149) loop in dir_clean_bucket()

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2149:
--

Fix Version/s: 4.1.0

 loop in dir_clean_bucket()
 --

 Key: TS-2149
 URL: https://issues.apache.org/jira/browse/TS-2149
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Reporter: Bin Chen
 Fix For: 4.1.0

 Attachments: Screenshot.png


 TS will enter a loop in dir_clean_bucket(). The define LOOP_CHECK_MODE is not 
 used by default.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2140) Stats module is showing proxy.process.cache.direntries.used greater than proxy.process.cache.direntries.total

2013-10-12 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793372#comment-13793372
 ] 

Leif Hedstrom commented on TS-2140:
---

Is anyone else experiencing this? Yahoo or Taobao, are you seeing this too? 

 Stats module is showing proxy.process.cache.direntries.used greater than 
 proxy.process.cache.direntries.total
 -

 Key: TS-2140
 URL: https://issues.apache.org/jira/browse/TS-2140
 Project: Traffic Server
  Issue Type: Bug
  Components: Stats
Affects Versions: 3.3.4
Reporter: Adam Twardowski
Priority: Minor
 Fix For: 4.2.0


 Using the stats_over_http.so module, the reported 
 proxy.process.cache.direntries.used is greater than 
 proxy.process.cache.direntries.total.
 Also, direntries.used is continuously increasing over time. traffic_line -r 
 <key> gives the same numbers as the http stats.
 { "global": {
 "proxy.process.version.server.short": "3.3.4-dev",
 "proxy.process.version.server.long": "Apache Traffic Server - traffic_server - 3.3.4-dev - (build # 63013 on Jul 30 2013 at 13:23:55)",
 "proxy.process.version.server.build_number": "63013",
 "proxy.process.version.server.build_time": "13:23:55",
 "proxy.process.version.server.build_date": "Jul 30 2013",
 "proxy.process.version.server.build_machine": "X",
 "proxy.process.version.server.build_person": "root",
 "proxy.process.http.completed_requests": "1771631878",
 "proxy.process.http.total_incoming_connections": "1771779452",
 "proxy.process.http.total_client_connections": "1771779452",
 "proxy.process.http.total_client_connections_ipv4": "1771779452",
 "proxy.process.http.total_client_connections_ipv6": "0",
 "proxy.process.http.total_server_connections": "486478731",
 "proxy.process.http.total_parent_proxy_connections": "0",
 "proxy.process.http.avg_transactions_per_client_connection": "1.15",
 "proxy.process.http.avg_transactions_per_server_connection": "1.00",
 "proxy.process.http.avg_transactions_per_parent_connection": "0.00",
 "proxy.process.http.client_connection_time": "0",
 "proxy.process.http.parent_proxy_connection_time": "0",
 "proxy.process.http.server_connection_time": "0",
 "proxy.process.http.cache_connection_time": "0",
 "proxy.process.http.transaction_counts.errors.pre_accept_hangups": "0",
 "proxy.process.http.transaction_totaltime.errors.pre_accept_hangups": "0.00",
 "proxy.process.http.transaction_counts.errors.empty_hangups": "0",
 "proxy.process.http.transaction_totaltime.errors.empty_hangups": "0.00",
 "proxy.process.http.transaction_counts.errors.early_hangups": "0",
 "proxy.process.http.transaction_totaltime.errors.early_hangups": "0.00",
 "proxy.process.http.incoming_requests": "1766073976",
 "proxy.process.http.outgoing_requests": "484585207",
 "proxy.process.http.incoming_responses": "486478535",
 "proxy.process.http.invalid_client_requests": "240",
 "proxy.process.http.missing_host_hdr": "0",
 "proxy.process.http.get_requests": "1765677046",
 "proxy.process.http.head_requests": "392884",
 "proxy.process.http.trace_requests": "8",
 "proxy.process.http.options_requests": "1562",
 "proxy.process.http.post_requests": "662",
 "proxy.process.http.put_requests": "7",
 "proxy.process.http.push_requests": "0",
 "proxy.process.http.delete_requests": "0",
 "proxy.process.http.purge_requests": "0",
 "proxy.process.http.connect_requests": "0",
 "proxy.process.http.extension_method_requests": "1807",
 "proxy.process.http.client_no_cache_requests": "0",
 "proxy.process.http.broken_server_connections": "63475",
 "proxy.process.http.cache_lookups": "1765896837",
 "proxy.process.http.cache_writes": "119147534",
 "proxy.process.http.cache_updates": "7812961",
 "proxy.process.http.cache_deletes": "60058",
 "proxy.process.http.tunnels": "5017",
 "proxy.process.http.throttled_proxy_only": "0",
 "proxy.process.http.request_taxonomy.i0_n0_m0": "0",
 "proxy.process.http.request_taxonomy.i1_n0_m0": "0",
 "proxy.process.http.request_taxonomy.i0_n1_m0": "0",
 "proxy.process.http.request_taxonomy.i1_n1_m0": "0",
 "proxy.process.http.request_taxonomy.i0_n0_m1": "0",
 "proxy.process.http.request_taxonomy.i1_n0_m1": "0",
 "proxy.process.http.request_taxonomy.i0_n1_m1": "0",
 "proxy.process.http.request_taxonomy.i1_n1_m1": "0",
 "proxy.process.http.icp_suggested_lookups": "0",
 "proxy.process.http.client_transaction_time": "0",
 "proxy.process.http.client_write_time": "0",
 "proxy.process.http.server_read_time": "0",
 "proxy.process.http.icp_transaction_time": "0",
 "proxy.process.http.icp_raw_transaction_time": "0",
 "proxy.process.http.parent_proxy_transaction_time": "0",
 "proxy.process.http.parent_proxy_raw_transaction_time": "0",
 "proxy.process.http.server_transaction_time": "0",
 "proxy.process.http.server_raw_transaction_time": "0",
 "proxy.process.http.user_agent_request_header_total_size": "797103745655",
 "proxy.process.http.user_agent_response_header_total_size": "697699513781",
 "proxy.process.http.user_agent_request_document_total_size": "419549",
 

[jira] [Updated] (TS-2140) Stats module is showing proxy.process.cache.direntries.used greater than proxy.process.cache.direntries.total

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2140:
--

Fix Version/s: 4.2.0

 

[jira] [Commented] (TS-2149) loop in dir_clean_bucket()

2013-10-12 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793374#comment-13793374
 ] 

Leif Hedstrom commented on TS-2149:
---

Can we expect a fix for this? If not, please move out to v4.2.0 or 5.0.0.




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2184) Fetch from cluster with proxy.config.http.cache.cluster_cache_local enabled

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2184:
--

Fix Version/s: 5.0.0

 Fetch from cluster with proxy.config.http.cache.cluster_cache_local enabled
 ---

 Key: TS-2184
 URL: https://issues.apache.org/jira/browse/TS-2184
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache, Clustering
Reporter: Scott Harris
Assignee: Bin Chen
 Fix For: 5.0.0


 With proxy.config.http.cache.cluster_cache_local enabled, I would like cluster 
 nodes to store content locally, but to try to retrieve content from the cluster 
 first (if not cached locally), and only if no cluster node has the content 
 cached to retrieve it from the origin.
 Example - 2 cluster nodes in full cluster mode.
 1. Node1 and Node2 are both empty.
 2. Request to Node1 for "http://www.example.com/foo.html".
 3. Query cluster for the object.
 4. Not cached in the cluster, so retrieve from origin, serve to client; object 
 now cached on Node1.
 5. Request comes to Node2 for "http://www.example.com/foo.html".
 6. Node2 retrieves the cached version from Node1, serves it to the client, and 
 stores it locally.
 7. Subsequent request comes to Node1 or Node2 for 
 "http://www.example.com/foo.html"; the object is served to the client from the 
 local cache.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2192) sm_list may miss some http_sms if there is lock contention

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2192:
--

Fix Version/s: 4.2.0

 sm_list may miss some http_sms if there is lock contention
 ---

 Key: TS-2192
 URL: https://issues.apache.org/jira/browse/TS-2192
 Project: Traffic Server
  Issue Type: Improvement
  Components: HTTP
Reporter: weijin
 Fix For: 4.2.0


 It is a subtask of TS-2191.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2222) mgmt/api/CfgContextImpl.cc duplicates the effort of lib/ts/MatcherUtils.cc specifically for SplitDNS

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2222:
--

Fix Version/s: 4.2.0

 mgmt/api/CfgContextImpl.cc duplicates the effort of lib/ts/MatcherUtils.cc 
 specifically for SplitDNS
 

 Key: TS-2222
 URL: https://issues.apache.org/jira/browse/TS-2222
 Project: Traffic Server
  Issue Type: Bug
  Components: Cleanup, Configuration, DNS
Reporter: Igor Galić
 Fix For: 4.2.0


 In trying to answer a user's question of whether {{url_regex}} matches the 
 Scheme, or just the host, I crawled through our source-code and decided that 
 the answer is: Maybe. It depends on where you're putting that {{url_regex}}. 
 In {{splitdns.config}} it definitely matches only the host.
 Speaking of which. Why *is* there a duplicate implementation of this for 
 splitdns specifically?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2248) Segmentation fault HttpTunnel

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2248:
--

Fix Version/s: 4.2.0

 Segmentation fault  HttpTunnel
 --

 Key: TS-2248
 URL: https://issues.apache.org/jira/browse/TS-2248
 Project: Traffic Server
  Issue Type: Bug
Affects Versions: 4.0.1
Reporter: bettydramit
 Fix For: 4.2.0


 ENV: centos 6 x86_64 ts-4.0.1
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/bin/traffic_server - STACK TRACE: 
 /lib64/libpthread.so.0(+0xf500)[0x2b934f388500]
 /usr/bin/traffic_server(_ZN10HttpTunnel17consumer_reenableEP18HttpTunnelConsumer+0x13a)[0x56aa2a]
 /usr/bin/traffic_server(_ZN10HttpTunnel16consumer_handlerEiP18HttpTunnelConsumer+0x16b)[0x56aceb]
 /usr/bin/traffic_server(_ZN10HttpTunnel12main_handlerEiPv+0x10d)[0x56bc2d]
 /usr/bin/traffic_server[0x6807bb]
 /usr/bin/traffic_server(_Z15write_to_net_ioP10NetHandlerP18UnixNetVConnectionP7EThread+0x553)[0x6841a3]
 /usr/bin/traffic_server(_ZN10NetHandler12mainNetEventEiP5Event+0x283)[0x67bd93]
 /usr/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x8f)[0x6a36df]
 /usr/bin/traffic_server(_ZN7EThread7executeEv+0x4a3)[0x6a40c3]
 /usr/bin/traffic_server[0x6a257a]
 /lib64/libpthread.so.0(+0x7851)[0x2b934f380851]
 /lib64/libc.so.6(clone+0x6d)[0x2b935002494d]



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2248) Segmentation fault HttpTunnel

2013-10-12 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793382#comment-13793382
 ] 

Leif Hedstrom commented on TS-2248:
---

Any more details on how to reproduce or test this? Are there any plugins 
involved?




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-1807) shutdown on a write VIO to TSHttpConnect() doesn't propagate

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-1807:
--

Fix Version/s: 5.0.0

 shutdown on a write VIO to TSHttpConnect() doesn't propagate
 

 Key: TS-1807
 URL: https://issues.apache.org/jira/browse/TS-1807
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: William Bardwell
 Fix For: 5.0.0


 In a plugin I am doing a TSHttpConnect() and then sending HTTP requests and 
 getting responses.  But when I try to do TSVIONBytesSet() and 
 TSVConnShutdown() on the write VIO (because the client side is done sending 
 requests), the write VIO just sits there and never wakes up the other side, 
 and the response side doesn't try to close up until an inactivity timeout 
 happens.
 I think that PluginVC::do_io_shutdown() needs to do 
 other_side->read_state.vio.reenable(); when a shutdown for write shows up.  
 Then the other side wakes up and sees the EOF due to the shutdown.
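
 A sketch of the suggested change (the surrounding code is hypothetical; the real PluginVC internals may differ):
 {code}
 void
 PluginVC::do_io_shutdown(ShutdownHowTo_t howto)
 {
   if (howto == IO_SHUTDOWN_WRITE || howto == IO_SHUTDOWN_READWRITE) {
     write_state.shutdown = true;
     // Wake the peer's reader so it sees EOF now instead of idling
     // until the inactivity timeout fires.
     other_side->read_state.vio.reenable();
   }
   // (existing read-side shutdown handling continues here)
 }
 {code}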



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2184) Fetch from cluster with proxy.config.http.cache.cluster_cache_local enabled

2013-10-12 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793383#comment-13793383
 ] 

Leif Hedstrom commented on TS-2184:
---

Moving out to v5.0.0 for now, move back to v4.2.0 if it'll be worked on in the 
next 3-4 months.




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-1807) shutdown on a write VIO to TSHttpConnect() doesn't propagate

2013-10-12 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793384#comment-13793384
 ] 

Leif Hedstrom commented on TS-1807:
---

William: Is this a bug you plan on working on? If so, can you please assign it 
to yourself, and give it a Fix Version.




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2197) http_sm may not perceive the client_vc's abort if it is not a new connection

2013-10-12 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793386#comment-13793386
 ] 

Leif Hedstrom commented on TS-2197:
---

Weijin: Is this going to land in v4.1.0?

 http_sm may not perceive the client_vc's abort if it is not a new connection
 --

 Key: TS-2197
 URL: https://issues.apache.org/jira/browse/TS-2197
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: weijin
 Fix For: 4.1.0

 Attachments: TS-2197.wj.diff


 For a keepalive connection, the second request may already be in the 
 ua_buffer. In that case the current code disables the client_vc's read while 
 parsing the http_hdr, which means we cannot be notified as soon as possible 
 that the client_vc has aborted.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2197) http_sm may not perceive the client_vc's abort if it is not a new connection

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2197:
--

Fix Version/s: 4.1.0




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2189) enabling proxy.config.http.cache.cluster_cache_local causes all traffic to store local

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2189:
--

Fix Version/s: 4.1.0

 enabling proxy.config.http.cache.cluster_cache_local causes all traffic to 
 store local
 --

 Key: TS-2189
 URL: https://issues.apache.org/jira/browse/TS-2189
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache, Clustering
Reporter: Scott Harris
 Fix For: 4.1.0


 Setting proxy.config.http.cache.cluster_cache_local=1 causes all requests 
 to be cached locally instead of working in cluster mode.
 cache.config contains no rules, so nothing should be cached locally.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2189) enabling proxy.config.http.cache.cluster_cache_local causes all traffic to store local

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2189:
--

Component/s: Documentation




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2189) enabling proxy.config.http.cache.cluster_cache_local causes all traffic to store local

2013-10-12 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793401#comment-13793401
 ] 

Leif Hedstrom commented on TS-2189:
---

I'm wondering if this is a misunderstanding / undocumented problem. As far as I 
can tell, this configuration does exactly what it says; it'll force all 
requests to be cached locally. You probably almost never want to enable this in 
records.config, but instead via conf_remap.so (remap plugin) or a custom 
plugin, and enable it per remap rule or per transaction.

This stems from the fact that cache.config behavior cannot be modified per 
remap rule or from a plugin API, which I think is why this records.config 
setting was added. Basically, it is an alternative way to configure this 
behavior, outside of cache.config.
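
For reference, a sketch of the per-remap-rule approach with conf_remap.so (hostnames and the file name are hypothetical):

{code}
# remap.config
map http://www.example.com/ http://origin.example.com/ @plugin=conf_remap.so @pparam=cluster_local.config

# cluster_local.config
CONFIG proxy.config.http.cache.cluster_cache_local INT 1
{code}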

We really ought to document this, so I'm going to move this to a documentation 
bug.




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2189) enabling proxy.config.http.cache.cluster_cache_local causes all traffic to store local

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2189:
--

Fix Version/s: (was: 4.1.0)
   Docs




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (TS-2189) enabling proxy.config.http.cache.cluster_cache_local causes all traffic to store local

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom reassigned TS-2189:
-

Assignee: Leif Hedstrom




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2189) enabling proxy.config.http.cache.cluster_cache_local causes all traffic to store local

2013-10-12 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793405#comment-13793405
 ] 

Leif Hedstrom commented on TS-2189:
---

I think that long term we ought to make cache.config overridable per transaction / 
via plugin APIs, and then the purpose of this records.config setting goes away.




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2189) enabling proxy.config.http.cache.cluster_cache_local causes all traffic to store local

2013-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793407#comment-13793407
 ] 

ASF subversion and git services commented on TS-2189:
-

Commit cfc86f1eab0e95b25bc57be7d47720d23e42f6c3 in branch refs/heads/master 
from [~zwoop]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=cfc86f1 ]

TS-2189 Document proxy.config.http.cache.cluster_cache_local.

This is confusing for sure, in that it's actually an alternative to
controlling local caching via records.config. The intent, as far as I can
tell, is that you'd use this as an overridable configuration.





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (TS-2189) enabling proxy.config.http.cache.cluster_cache_local causes all traffic to store local

2013-10-12 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom resolved TS-2189.
---

Resolution: Fixed

Closing this as resolved. Hopefully I got this right in the (updated) 
documentation; please reopen if I did not.




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2189) enabling proxy.config.http.cache.cluster_cache_local causes all traffic to store local

2013-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793493#comment-13793493
 ] 

ASF subversion and git services commented on TS-2189:
-

Commit a32bc3a83538eff27fedf0f6664a9180c027b368 in branch refs/heads/master 
from [~zwoop]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=a32bc3a ]

TS-2189 Document cluster-cache-local in cache.config





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-1807) shutdown on a write VIO to TSHttpConnect() doesn't propagate

2013-10-12 Thread William Bardwell (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793530#comment-13793530
 ] 

William Bardwell commented on TS-1807:
--

I have some tentative fixes; not sure about a release... some time in the next 
couple of months.




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (TS-1807) shutdown on a write VIO to TSHttpConnect() doesn't propagate

2013-10-12 Thread William Bardwell (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Bardwell reassigned TS-1807:


Assignee: William Bardwell




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-1807) shutdown on a write VIO to TSHttpConnect() doesn't propagate

2013-10-12 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793535#comment-13793535
 ] 

Leif Hedstrom commented on TS-1807:
---

Great! Mark it for 4.2 then, which we will release mid-February.






--
This message was sent by Atlassian JIRA
(v6.1#6144)