[jira] [Commented] (TS-965) cache.config can't deal with both revalidate= and ttl-in-cache= specified

2015-03-08 Thread Zhao Yongming (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352036#comment-14352036
 ] 

Zhao Yongming commented on TS-965:
--

I am not sure about the details. cache.config is a multi-matching rule 
system, and there are hard-coded behaviors that are not explained anywhere; for 
example, if a request matches a 'no-cache' rule, it will not be cached.

I don't like the cache-control matching approach, which is hard to extend 
and hard to use in the real world. Maybe we should avoid it in favor of 
Lua remapping and Lua plugins.

> cache.config can't deal with both revalidate= and ttl-in-cache= specified
> -
>
> Key: TS-965
> URL: https://issues.apache.org/jira/browse/TS-965
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Cache
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Igor Galić
>Assignee: Alan M. Carroll
>  Labels: A, cache-control
> Fix For: 5.3.0
>
>
> If both of these options are specified (with the same time?), nothing is 
> cached at all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3374) Issues with cache.config implementation

2015-03-08 Thread Zhao Yongming (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352039#comment-14352039
 ] 

Zhao Yongming commented on TS-3374:
---

With the current matching in cache.config, things get very complex when a URL 
matches rules with different actions. In your case, 'never-cache' is a terminal 
action: once any URL matches it, that URL will not be cached, regardless of what 
other rules it matches.

The example in the cache.config documentation works because all of its rules use 
the same action, 'revalidate='.
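To restate that with the rule sets already in this ticket (no new semantics added, 
just the two configurations side by side): the documented example combines rules 
that all use revalidate=, while the failing configuration mixes revalidate= with 
the terminal action=never-cache:

{code}
# Documented example: every matching rule uses the same action (revalidate=)
dest_domain=mydomain.com suffix=gif revalidate=6h
dest_domain=mydomain.com suffix=jpeg revalidate=6h
dest_domain=mydomain.com revalidate=1h

# Reported configuration: revalidate= mixed with action=never-cache, so any
# .js URL matching the second rule is never cached, whatever else it matches
dest_domain=mydomain.com prefix=somepath suffix=js revalidate=7d
dest_domain=mydomain.com suffix=js action=never-cache
{code}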

> Issues with cache.config implementation
> ---
>
> Key: TS-3374
> URL: https://issues.apache.org/jira/browse/TS-3374
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Cache
>Reporter: Dan Morgan
>  Labels: cache-control
> Fix For: sometime
>
>
> The documentation implies that entries in the cache.config file are processed 
> in 'order'.
> For example, this example in the docs:
> ---
> The following example configures Traffic Server to revalidate gif and jpeg 
> objects in the domain mydomain.com every 6 hours, and all other objects in 
> mydomain.com every hour. The rules are applied in the order listed.
> dest_domain=mydomain.com suffix=gif revalidate=6h
> dest_domain=mydomain.com suffix=jpeg revalidate=6h
> dest_domain=mydomain.com revalidate=1h
> ---
> However, running with version 5.1.2 and having the following lines:
> dest_domain=mydomain.com prefix=somepath suffix=js revalidate=7d
> dest_domain=mydomain.com suffix=js action=never-cache
> I would expect it to not cache any .js URLs from mydomain.com, except those 
> that have a prefix of 'somepath'. However, what happens is that the 
> action=never-cache is applied to all URLs on mydomain.com (even the ones 
> that have a prefix of 'somepath').



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3431) enable_read_while_writer delays requests for mis-matched HTTP methods

2015-03-08 Thread Nick Muerdter (JIRA)
Nick Muerdter created TS-3431:
-

 Summary: enable_read_while_writer delays requests for mis-matched 
HTTP methods
 Key: TS-3431
 URL: https://issues.apache.org/jira/browse/TS-3431
 Project: Traffic Server
  Issue Type: Bug
Reporter: Nick Muerdter


If enable_read_while_writer is enabled (which it is by default), then a GET 
request can hold up the processing of a POST request to the same URL endpoint. 
Since the POST request is fundamentally different, it doesn't seem like the 
POST request should be waiting for the fulfillment of the GET request before 
processing.

An example might be the easiest way to demonstrate this: Let's say you have a 
backend that has both a GET and POST endpoint at the same URL. Each of these 
requests takes 10 seconds to complete. If you make the GET request first, and 
then quickly follow it by the POST request to the same URL, then the GET 
request will complete in 10 seconds, while the POST request will take 20 
seconds (since it first waits 10 seconds for the GET request to complete, then 
apparently realizes it can't actually use the cache, and then proceeds to the 
POST request which takes another 10 seconds). However, it's worth noting that 
if you make the requests in the opposite order (POST first, and then GET), then 
there are no delays.
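(Side note added for context, not from the original report: this behavior is 
controlled by the records.config setting below; presumably setting it to 0 works 
around the delay, at the cost of disabling read-while-writer for ordinary 
concurrent GET traffic.)

{code}
CONFIG proxy.config.cache.enable_read_while_writer INT 0
{code}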

Here are some example scripts to demonstrate this. Here's a node.js backend that 
responds to both GET and POST requests at the same URL and takes 10 seconds:

{code}
var http = require("http");
http.createServer(function(request, response) {
  setTimeout(function() {
    response.writeHead(200);
    response.write('example response');
    response.end();
  }, 10000); // delay of 10 seconds (10000 ms), matching the description above
}).listen(3000);
{code}

I then took a default TrafficServer 5.2.0 install with the only change being to 
use this backend in remap.config:

{code}
map / http://127.0.0.1:3000/
{code}

Here's the output from a GET request with a POST request following shortly 
after and happening in parallel (note the POST request takes nearly 20 seconds 
to complete):

{code}
$ time curl -v "http://127.0.0.1:8080/";
* About to connect() to 127.0.0.1 port 8080 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:8080
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Sun, 08 Mar 2015 21:49:36 GMT
< Age: 10
< Transfer-Encoding: chunked
< Connection: keep-alive
< Server: ATS/5.2.0
< 
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
example response
real    0m10.017s
user    0m0.005s
sys     0m0.002s


$ time curl --data "foo=bar" -v "http://127.0.0.1:8080/";
* About to connect() to 127.0.0.1 port 8080 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> POST / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:8080
> Accept: */*
> Content-Length: 7
> Content-Type: application/x-www-form-urlencoded
> 
< HTTP/1.1 200 OK
< Date: Sun, 08 Mar 2015 21:49:46 GMT
< Age: 10
< Transfer-Encoding: chunked
< Connection: keep-alive
< Server: ATS/5.2.0
< 
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
example response
real    0m19.531s
user    0m0.004s
sys     0m0.002s
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3431) enable_read_while_writer delays requests for mis-matched HTTP methods

2015-03-08 Thread Nick Muerdter (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Muerdter updated TS-3431:
--
Priority: Minor  (was: Major)

> enable_read_while_writer delays requests for mis-matched HTTP methods
> -
>
> Key: TS-3431
> URL: https://issues.apache.org/jira/browse/TS-3431
> Project: Traffic Server
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Nick Muerdter
>Priority: Minor
>
> If enable_read_while_writer is enabled (which it is by default), then a GET 
> request can hold up the processing of a POST request to the same URL 
> endpoint. Since the POST request is fundamentally different, it doesn't seem 
> like the POST request should be waiting for the fulfillment of the GET 
> request before processing.
> An example might be the easiest way to demonstrate this: Let's say you have a 
> backend that has both a GET and POST endpoint at the same URL. Each of these 
> requests takes 10 seconds to complete. If you make the GET request first, and 
> then quickly follow it by the POST request to the same URL, then the GET 
> request will complete in 10 seconds, while the POST request will take 20 
> seconds (since it first waits 10 seconds for the GET request to complete, 
> then apparently realizes it can't actually use the cache, and then proceeds 
> to the POST request which takes another 10 seconds). However, it's worth 
> noting that if you make the requests in the opposite order (POST first, and 
> then GET), then there are no delays.
> Here's some example scripts to demonstrate this. Here's a node.js backend 
> that will respond to both GET and POST requests at the same URL and take 10 
> seconds:
> {code}
> var http = require("http");
> http.createServer(function(request, response) {
>   setTimeout(function() {
> response.writeHead(200);
> response.write('example response');
> response.end();
>   }, 1);
> }).listen(3000);
> {code}
> I then took a default TrafficServer 5.2.0 install with the only change being 
> to use this backend in remap.config:
> {code}
> map / http://127.0.0.1:3000/
> {code}
> Here's the output from a GET request with a POST request following shortly 
> after and happening in parallel (note the POST request takes nearly 20 
> seconds to complete):
> {code}
> $ time curl -v "http://127.0.0.1:8080/";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > GET / HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> < Date: Sun, 08 Mar 2015 21:49:36 GMT
> < Age: 10
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Server: ATS/5.2.0
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response
> real  0m10.017s
> user  0m0.005s
> sys   0m0.002s
> $ time curl --data "foo=bar" -v "http://127.0.0.1:8080/";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > POST / HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > Content-Length: 7
> > Content-Type: application/x-www-form-urlencoded
> > 
> < HTTP/1.1 200 OK
> < Date: Sun, 08 Mar 2015 21:49:46 GMT
> < Age: 10
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Server: ATS/5.2.0
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response
> real  0m19.531s
> user  0m0.004s
> sys   0m0.002s
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3431) enable_read_while_writer delays requests for mis-matched HTTP methods

2015-03-08 Thread Nick Muerdter (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Muerdter updated TS-3431:
--
Affects Version/s: 5.2.0

> enable_read_while_writer delays requests for mis-matched HTTP methods
> -
>
> Key: TS-3431
> URL: https://issues.apache.org/jira/browse/TS-3431
> Project: Traffic Server
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Nick Muerdter
>
> If enable_read_while_writer is enabled (which it is by default), then a GET 
> request can hold up the processing of a POST request to the same URL 
> endpoint. Since the POST request is fundamentally different, it doesn't seem 
> like the POST request should be waiting for the fulfillment of the GET 
> request before processing.
> An example might be the easiest way to demonstrate this: Let's say you have a 
> backend that has both a GET and POST endpoint at the same URL. Each of these 
> requests takes 10 seconds to complete. If you make the GET request first, and 
> then quickly follow it by the POST request to the same URL, then the GET 
> request will complete in 10 seconds, while the POST request will take 20 
> seconds (since it first waits 10 seconds for the GET request to complete, 
> then apparently realizes it can't actually use the cache, and then proceeds 
> to the POST request which takes another 10 seconds). However, it's worth 
> noting that if you make the requests in the opposite order (POST first, and 
> then GET), then there are no delays.
> Here's some example scripts to demonstrate this. Here's a node.js backend 
> that will respond to both GET and POST requests at the same URL and take 10 
> seconds:
> {code}
> var http = require("http");
> http.createServer(function(request, response) {
>   setTimeout(function() {
> response.writeHead(200);
> response.write('example response');
> response.end();
>   }, 1);
> }).listen(3000);
> {code}
> I then took a default TrafficServer 5.2.0 install with the only change being 
> to use this backend in remap.config:
> {code}
> map / http://127.0.0.1:3000/
> {code}
> Here's the output from a GET request with a POST request following shortly 
> after and happening in parallel (note the POST request takes nearly 20 
> seconds to complete):
> {code}
> $ time curl -v "http://127.0.0.1:8080/";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > GET / HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> < Date: Sun, 08 Mar 2015 21:49:36 GMT
> < Age: 10
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Server: ATS/5.2.0
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response
> real  0m10.017s
> user  0m0.005s
> sys   0m0.002s
> $ time curl --data "foo=bar" -v "http://127.0.0.1:8080/";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > POST / HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > Content-Length: 7
> > Content-Type: application/x-www-form-urlencoded
> > 
> < HTTP/1.1 200 OK
> < Date: Sun, 08 Mar 2015 21:49:46 GMT
> < Age: 10
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Server: ATS/5.2.0
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response
> real  0m19.531s
> user  0m0.004s
> sys   0m0.002s
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3432) XDebug X-Cache header erroneously reports "hit-fresh" for mismatched HTTP methods

2015-03-08 Thread Nick Muerdter (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Muerdter updated TS-3432:
--
Affects Version/s: 5.2.0

> XDebug X-Cache header erroneously reports "hit-fresh" for mismatched HTTP 
> methods
> -
>
> Key: TS-3432
> URL: https://issues.apache.org/jira/browse/TS-3432
> Project: Traffic Server
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Nick Muerdter
>Priority: Trivial
>
> I noticed the XDebug experimental plugin can sometimes give what appears to 
> be incorrect responses to POST requests if there is a cached GET response at 
> the same URL endpoint. If a GET response is cached at a specific URL, and 
> then a POST request is made to the same URL, the XDebug plugin reports that 
> it's a cache hit according to the "X-Cache: hit-fresh" header. However, 
> TrafficServer is correctly not serving up the cached GET request in response 
> to the POST, so the issue appears to simply be XDebug's "X-Cache" header 
> returning incorrect information.
> Here's a some example scripts that demonstrate the issue. First here's a 
> simple nodejs backend server that will respond to both GET and POST requests:
> {code}
> var http = require("http");
> http.createServer(function(request, response) {
>   if(request.method == 'GET') {
> response.writeHead(200, { 'Cache-Control': 'max-age=300' });
>   } else {
> response.writeHead(200);
>   }
>   response.write('example response');
>   response.end();
> }).listen(3000);
> {code}
> Here's the response to the initial GET request:
> {code}
> $ curl -v -H "X-Debug: X-Cache" "http://127.0.0.1:8080/test";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > GET /test HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > X-Debug: X-Cache
> > 
> < HTTP/1.1 200 OK
> < Cache-Control: max-age=300
> < Date: Sun, 08 Mar 2015 22:12:07 GMT
> < Age: 0
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Via: http/1.1  (ApacheTrafficServer/5.2.0 [uScMsSfWpSeN:t cCMi 
> p sS])
> < Server: ATS/5.2.0
> < X-Cache: miss
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response - HTTP method: GET
> {code}
> Here's the response to a subsequent POST request. Note the "X-Cache: 
> hit-fresh" response header despite the fact that it's not delivering a cached 
> response.
> {code}
> $ curl --data "foo=bar" -H "X-Debug: X-Cache" -v "http://127.0.0.1:8080/test";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > POST /test HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > X-Debug: X-Cache
> > Content-Length: 7
> > Content-Type: application/x-www-form-urlencoded
> > 
> < HTTP/1.1 200 OK
> < Date: Sun, 08 Mar 2015 22:12:32 GMT
> < Age: 0
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Via: http/1.1  (ApacheTrafficServer/5.2.0 [uScSsSfDpSeN:t cCDi 
> p sS])
> < Server: ATS/5.2.0
> < X-Cache: hit-fresh
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response - HTTP method: POST
> {code}
> In this case, I have the detailed Via response headers turned on, and 
> according to the cache-lookup value in there, the POST response is "in cache, 
> stale (a cache “MISS”)" ("cS" code).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3432) XDebug X-Cache header erroneously reports "hit-fresh" for mismatched HTTP methods

2015-03-08 Thread Nick Muerdter (JIRA)
Nick Muerdter created TS-3432:
-

 Summary: XDebug X-Cache header erroneously reports "hit-fresh" for 
mismatched HTTP methods
 Key: TS-3432
 URL: https://issues.apache.org/jira/browse/TS-3432
 Project: Traffic Server
  Issue Type: Bug
Reporter: Nick Muerdter


I noticed the XDebug experimental plugin can sometimes give what appears to be 
incorrect responses to POST requests if there is a cached GET response at the 
same URL endpoint. If a GET response is cached at a specific URL, and then a 
POST request is made to the same URL, the XDebug plugin reports that it's a 
cache hit according to the "X-Cache: hit-fresh" header. However, TrafficServer 
is correctly not serving up the cached GET request in response to the POST, so 
the issue appears to simply be XDebug's "X-Cache" header returning incorrect 
information.
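For reproduction context (setup assumed here rather than spelled out in the 
ticket): the experimental XDebug plugin is loaded as a global plugin from 
plugin.config, and it only adds the X-Cache header when the request carries an 
X-Debug header, as in the curl commands below:

{code}
# plugin.config
xdebug.so
{code}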

Here are some example scripts that demonstrate the issue. First, here's a simple 
node.js backend server that will respond to both GET and POST requests:

{code}
var http = require("http");
http.createServer(function(request, response) {
  if(request.method == 'GET') {
    response.writeHead(200, { 'Cache-Control': 'max-age=300' });
  } else {
    response.writeHead(200);
  }

  response.write('example response');
  response.end();
}).listen(3000);
{code}

Here's the response to the initial GET request:

{code}
$ curl -v -H "X-Debug: X-Cache" "http://127.0.0.1:8080/test";
* About to connect() to 127.0.0.1 port 8080 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET /test HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:8080
> Accept: */*
> X-Debug: X-Cache
> 
< HTTP/1.1 200 OK
< Cache-Control: max-age=300
< Date: Sun, 08 Mar 2015 22:12:07 GMT
< Age: 0
< Transfer-Encoding: chunked
< Connection: keep-alive
< Via: http/1.1  (ApacheTrafficServer/5.2.0 [uScMsSfWpSeN:t cCMi p sS])
< Server: ATS/5.2.0
< X-Cache: miss
< 
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
example response - HTTP method: GET
{code}

Here's the response to a subsequent POST request. Note the "X-Cache: hit-fresh" 
response header despite the fact that it's not delivering a cached response.

{code}
$ curl --data "foo=bar" -H "X-Debug: X-Cache" -v "http://127.0.0.1:8080/test";
* About to connect() to 127.0.0.1 port 8080 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> POST /test HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:8080
> Accept: */*
> X-Debug: X-Cache
> Content-Length: 7
> Content-Type: application/x-www-form-urlencoded
> 
< HTTP/1.1 200 OK
< Date: Sun, 08 Mar 2015 22:12:32 GMT
< Age: 0
< Transfer-Encoding: chunked
< Connection: keep-alive
< Via: http/1.1  (ApacheTrafficServer/5.2.0 [uScSsSfDpSeN:t cCDi p sS])
< Server: ATS/5.2.0
< X-Cache: hit-fresh
< 
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
example response - HTTP method: POST
{code}

In this case, I have the detailed Via response headers turned on, and according 
to the cache-lookup value in there, the POST response is "in cache, stale (a 
cache “MISS”)" ("cS" code).
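As an aside (assuming the traffic_via utility shipped with this ATS release; the 
exact invocation may differ), the bracketed Via flags can be decoded from the 
command line rather than by hand:

{code}
$ traffic_via '[uScSsSfDpSeN:t cCDi p sS]'
{code}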



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3432) XDebug X-Cache header erroneously reports "hit-fresh" for mismatched HTTP methods

2015-03-08 Thread Nick Muerdter (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Muerdter updated TS-3432:
--
Priority: Trivial  (was: Major)

> XDebug X-Cache header erroneously reports "hit-fresh" for mismatched HTTP 
> methods
> -
>
> Key: TS-3432
> URL: https://issues.apache.org/jira/browse/TS-3432
> Project: Traffic Server
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Nick Muerdter
>Priority: Trivial
>
> I noticed the XDebug experimental plugin can sometimes give what appears to 
> be incorrect responses to POST requests if there is a cached GET response at 
> the same URL endpoint. If a GET response is cached at a specific URL, and 
> then a POST request is made to the same URL, the XDebug plugin reports that 
> it's a cache hit according to the "X-Cache: hit-fresh" header. However, 
> TrafficServer is correctly not serving up the cached GET request in response 
> to the POST, so the issue appears to simply be XDebug's "X-Cache" header 
> returning incorrect information.
> Here's a some example scripts that demonstrate the issue. First here's a 
> simple nodejs backend server that will respond to both GET and POST requests:
> {code}
> var http = require("http");
> http.createServer(function(request, response) {
>   if(request.method == 'GET') {
> response.writeHead(200, { 'Cache-Control': 'max-age=300' });
>   } else {
> response.writeHead(200);
>   }
>   response.write('example response');
>   response.end();
> }).listen(3000);
> {code}
> Here's the response to the initial GET request:
> {code}
> $ curl -v -H "X-Debug: X-Cache" "http://127.0.0.1:8080/test";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > GET /test HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > X-Debug: X-Cache
> > 
> < HTTP/1.1 200 OK
> < Cache-Control: max-age=300
> < Date: Sun, 08 Mar 2015 22:12:07 GMT
> < Age: 0
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Via: http/1.1  (ApacheTrafficServer/5.2.0 [uScMsSfWpSeN:t cCMi 
> p sS])
> < Server: ATS/5.2.0
> < X-Cache: miss
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response - HTTP method: GET
> {code}
> Here's the response to a subsequent POST request. Note the "X-Cache: 
> hit-fresh" response header despite the fact that it's not delivering a cached 
> response.
> {code}
> $ curl --data "foo=bar" -H "X-Debug: X-Cache" -v "http://127.0.0.1:8080/test";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > POST /test HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > X-Debug: X-Cache
> > Content-Length: 7
> > Content-Type: application/x-www-form-urlencoded
> > 
> < HTTP/1.1 200 OK
> < Date: Sun, 08 Mar 2015 22:12:32 GMT
> < Age: 0
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Via: http/1.1  (ApacheTrafficServer/5.2.0 [uScSsSfDpSeN:t cCDi 
> p sS])
> < Server: ATS/5.2.0
> < X-Cache: hit-fresh
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response - HTTP method: POST
> {code}
> In this case, I have the detailed Via response headers turned on, and 
> according to the cache-lookup value in there, the POST response is "in cache, 
> stale (a cache “MISS”)" ("cS" code).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3431) enable_read_while_writer delays requests for mis-matched HTTP methods

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3431:
--
Fix Version/s: sometime

> enable_read_while_writer delays requests for mis-matched HTTP methods
> -
>
> Key: TS-3431
> URL: https://issues.apache.org/jira/browse/TS-3431
> Project: Traffic Server
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Nick Muerdter
>Priority: Minor
> Fix For: sometime
>
>
> If enable_read_while_writer is enabled (which it is by default), then a GET 
> request can hold up the processing of a POST request to the same URL 
> endpoint. Since the POST request is fundamentally different, it doesn't seem 
> like the POST request should be waiting for the fulfillment of the GET 
> request before processing.
> An example might be the easiest way to demonstrate this: Let's say you have a 
> backend that has both a GET and POST endpoint at the same URL. Each of these 
> requests takes 10 seconds to complete. If you make the GET request first, and 
> then quickly follow it by the POST request to the same URL, then the GET 
> request will complete in 10 seconds, while the POST request will take 20 
> seconds (since it first waits 10 seconds for the GET request to complete, 
> then apparently realizes it can't actually use the cache, and then proceeds 
> to the POST request which takes another 10 seconds). However, it's worth 
> noting that if you make the requests in the opposite order (POST first, and 
> then GET), then there are no delays.
> Here's some example scripts to demonstrate this. Here's a node.js backend 
> that will respond to both GET and POST requests at the same URL and take 10 
> seconds:
> {code}
> var http = require("http");
> http.createServer(function(request, response) {
>   setTimeout(function() {
> response.writeHead(200);
> response.write('example response');
> response.end();
>   }, 1);
> }).listen(3000);
> {code}
> I then took a default TrafficServer 5.2.0 install with the only change being 
> to use this backend in remap.config:
> {code}
> map / http://127.0.0.1:3000/
> {code}
> Here's the output from a GET request with a POST request following shortly 
> after and happening in parallel (note the POST request takes nearly 20 
> seconds to complete):
> {code}
> $ time curl -v "http://127.0.0.1:8080/";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > GET / HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> < Date: Sun, 08 Mar 2015 21:49:36 GMT
> < Age: 10
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Server: ATS/5.2.0
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response
> real  0m10.017s
> user  0m0.005s
> sys   0m0.002s
> $ time curl --data "foo=bar" -v "http://127.0.0.1:8080/";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > POST / HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > Content-Length: 7
> > Content-Type: application/x-www-form-urlencoded
> > 
> < HTTP/1.1 200 OK
> < Date: Sun, 08 Mar 2015 21:49:46 GMT
> < Age: 10
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Server: ATS/5.2.0
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response
> real  0m19.531s
> user  0m0.004s
> sys   0m0.002s
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3432) XDebug X-Cache header erroneously reports "hit-fresh" for mismatched HTTP methods

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3432:
--
Fix Version/s: 6.0.0

> XDebug X-Cache header erroneously reports "hit-fresh" for mismatched HTTP 
> methods
> -
>
> Key: TS-3432
> URL: https://issues.apache.org/jira/browse/TS-3432
> Project: Traffic Server
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Nick Muerdter
>Priority: Trivial
> Fix For: 6.0.0
>
>
> I noticed the XDebug experimental plugin can sometimes give what appears to 
> be incorrect responses to POST requests if there is a cached GET response at 
> the same URL endpoint. If a GET response is cached at a specific URL, and 
> then a POST request is made to the same URL, the XDebug plugin reports that 
> it's a cache hit according to the "X-Cache: hit-fresh" header. However, 
> TrafficServer is correctly not serving up the cached GET request in response 
> to the POST, so the issue appears to simply be XDebug's "X-Cache" header 
> returning incorrect information.
> Here's a some example scripts that demonstrate the issue. First here's a 
> simple nodejs backend server that will respond to both GET and POST requests:
> {code}
> var http = require("http");
> http.createServer(function(request, response) {
>   if(request.method == 'GET') {
> response.writeHead(200, { 'Cache-Control': 'max-age=300' });
>   } else {
> response.writeHead(200);
>   }
>   response.write('example response');
>   response.end();
> }).listen(3000);
> {code}
> Here's the response to the initial GET request:
> {code}
> $ curl -v -H "X-Debug: X-Cache" "http://127.0.0.1:8080/test";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > GET /test HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > X-Debug: X-Cache
> > 
> < HTTP/1.1 200 OK
> < Cache-Control: max-age=300
> < Date: Sun, 08 Mar 2015 22:12:07 GMT
> < Age: 0
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Via: http/1.1  (ApacheTrafficServer/5.2.0 [uScMsSfWpSeN:t cCMi 
> p sS])
> < Server: ATS/5.2.0
> < X-Cache: miss
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response - HTTP method: GET
> {code}
> Here's the response to a subsequent POST request. Note the "X-Cache: 
> hit-fresh" response header despite the fact that it's not delivering a cached 
> response.
> {code}
> $ curl --data "foo=bar" -H "X-Debug: X-Cache" -v "http://127.0.0.1:8080/test";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > POST /test HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > X-Debug: X-Cache
> > Content-Length: 7
> > Content-Type: application/x-www-form-urlencoded
> > 
> < HTTP/1.1 200 OK
> < Date: Sun, 08 Mar 2015 22:12:32 GMT
> < Age: 0
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Via: http/1.1  (ApacheTrafficServer/5.2.0 [uScSsSfDpSeN:t cCDi 
> p sS])
> < Server: ATS/5.2.0
> < X-Cache: hit-fresh
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response - HTTP method: POST
> {code}
> In this case, I have the detailed Via response headers turned on, and 
> according to the cache-lookup value in there, the POST response is "in cache, 
> stale (a cache “MISS”)" ("cS" code).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3432) XDebug X-Cache header erroneously reports "hit-fresh" for mismatched HTTP methods

2015-03-08 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352367#comment-14352367
 ] 

Leif Hedstrom commented on TS-3432:
---

This is likely not so much an issue with the X-Debug plugin itself as with how 
this is exposed inside the core.

> XDebug X-Cache header erroneously reports "hit-fresh" for mismatched HTTP 
> methods
> -
>
> Key: TS-3432
> URL: https://issues.apache.org/jira/browse/TS-3432
> Project: Traffic Server
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Nick Muerdter
>Priority: Trivial
> Fix For: 6.0.0
>
>
> I noticed the XDebug experimental plugin can sometimes give what appears to 
> be incorrect responses to POST requests if there is a cached GET response at 
> the same URL endpoint. If a GET response is cached at a specific URL, and 
> then a POST request is made to the same URL, the XDebug plugin reports that 
> it's a cache hit according to the "X-Cache: hit-fresh" header. However, 
> TrafficServer is correctly not serving up the cached GET request in response 
> to the POST, so the issue appears to simply be XDebug's "X-Cache" header 
> returning incorrect information.
> Here's a some example scripts that demonstrate the issue. First here's a 
> simple nodejs backend server that will respond to both GET and POST requests:
> {code}
> var http = require("http");
> http.createServer(function(request, response) {
>   if(request.method == 'GET') {
> response.writeHead(200, { 'Cache-Control': 'max-age=300' });
>   } else {
> response.writeHead(200);
>   }
>   response.write('example response');
>   response.end();
> }).listen(3000);
> {code}
> Here's the response to the initial GET request:
> {code}
> $ curl -v -H "X-Debug: X-Cache" "http://127.0.0.1:8080/test";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > GET /test HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > X-Debug: X-Cache
> > 
> < HTTP/1.1 200 OK
> < Cache-Control: max-age=300
> < Date: Sun, 08 Mar 2015 22:12:07 GMT
> < Age: 0
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Via: http/1.1  (ApacheTrafficServer/5.2.0 [uScMsSfWpSeN:t cCMi 
> p sS])
> < Server: ATS/5.2.0
> < X-Cache: miss
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response - HTTP method: GET
> {code}
> Here's the response to a subsequent POST request. Note the "X-Cache: 
> hit-fresh" response header despite the fact that it's not delivering a cached 
> response.
> {code}
> $ curl --data "foo=bar" -H "X-Debug: X-Cache" -v "http://127.0.0.1:8080/test";
> * About to connect() to 127.0.0.1 port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> > POST /test HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: 127.0.0.1:8080
> > Accept: */*
> > X-Debug: X-Cache
> > Content-Length: 7
> > Content-Type: application/x-www-form-urlencoded
> > 
> < HTTP/1.1 200 OK
> < Date: Sun, 08 Mar 2015 22:12:32 GMT
> < Age: 0
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < Via: http/1.1  (ApacheTrafficServer/5.2.0 [uScSsSfDpSeN:t cCDi 
> p sS])
> < Server: ATS/5.2.0
> < X-Cache: hit-fresh
> < 
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> example response - HTTP method: POST
> {code}
> In this case, I have the detailed Via response headers turned on, and 
> according to the cache-lookup value in there, the POST response is "in cache, 
> stale (a cache “MISS”)" ("cS" code).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3423) Support for wildcard (globbing) on .include directives

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3423:
--
Fix Version/s: 6.0.0

> Support for wildcard (globbing) on .include directives
> --
>
> Key: TS-3423
> URL: https://issues.apache.org/jira/browse/TS-3423
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Neil Craig
> Fix For: 6.0.0
>
>
> Hi guys
> I would really love to see support for wildcards in a glob style for the 
> .include directive. That'd dramatically improve the possible config file 
> architectures (critical for my PoC project).
> I've not written any non-web code (apart from bash) for years but will try to 
> find the time to work on it (since it's me asking for it :-)) if that's 
> preferable and the concept is acceptable to the direction of ATS.
> Cheers
> Neil



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3423) Support for wildcard (globbing) on .include directives

2015-03-08 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352369#comment-14352369
 ] 

Leif Hedstrom commented on TS-3423:
---

I think this is similar to TS-2325; we should consider doing these together in 
some way so that we get good, consistent semantics and behavior.
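Purely as a hypothetical illustration of the requested syntax (neither form is 
implemented today), combining the two tickets might allow something like:

{code}
# hypothetical remap.config fragments
.include rules.d/*.config
.include rules.d
{code}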

> Support for wildcard (globbing) on .include directives
> --
>
> Key: TS-3423
> URL: https://issues.apache.org/jira/browse/TS-3423
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Neil Craig
> Fix For: 6.0.0
>
>
> Hi guys
> I would really love to see support for wildcards in a glob style for the 
> .include directive. That'd dramatically improve the possible config file 
> architectures (critical for my PoC project).
> I've not written any non-web code (apart from bash) for years but will try to 
> find the time to work on it (since it's me asking for it :-)) if that's 
> preferable and the concept is acceptable to the direction of ATS.
> Cheers
> Neil



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-2325) remap.config .include should support directories

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2325:
--
Fix Version/s: (was: 5.3.0)
   6.0.0

> remap.config .include should support directories
> 
>
> Key: TS-2325
> URL: https://issues.apache.org/jira/browse/TS-2325
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Configuration, Core
>Reporter: James Peach
>Assignee: Leif Hedstrom
>  Labels: review
> Fix For: 6.0.0
>
> Attachments: ts2325.diff
>
>
> The remap.config .include directive should support including a directory. The 
> implementation for this would be to simply read all the files in the 
> directory and include each one.
> I don't think the files in the directory should be sorted, since that 
> requires us to read all the names into memory, and there might be a very 
> large number of them. Typical ordering constraints can be expressed using 
> multiple directories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-2325) remap.config .include should support directories

2015-03-08 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352370#comment-14352370
 ] 

Leif Hedstrom commented on TS-2325:
---

James, do we want to land this before the 5.3.0 branch (imminent), or land it 
for 6.0.0?

> remap.config .include should support directories
> 
>
> Key: TS-2325
> URL: https://issues.apache.org/jira/browse/TS-2325
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Configuration, Core
>Reporter: James Peach
>Assignee: Leif Hedstrom
>  Labels: review
> Fix For: 6.0.0
>
> Attachments: ts2325.diff
>
>
> The remap.config .include directive should support including a directory. The 
> implementation for this would be to simply read all the files in the 
> directory and include each one.
> I don't think the files in the directory should be sorted, since that 
> requires us to read all the names into memory, and there might be a very 
> large number of them. Typical ordering constraints can be expressed using 
> multiple directories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3411) Add stat to track total number of network connections

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3411:
--
Fix Version/s: 6.0.0

> Add stat to track total number of network connections
> -
>
> Key: TS-3411
> URL: https://issues.apache.org/jira/browse/TS-3411
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Network, Performance
>Reporter: Susan Hinrichs
>Assignee: Susan Hinrichs
> Fix For: 6.0.0
>
>
> Currently, we can use proxy.process.http.total_client_connections or 
> proxy.process.http.total_incoming_connections to determine the number of TCP 
> connections that have been made from the user_agent to ATS, assuming that 
> HTTP/1.0 or HTTP/1.1 are used.
> If SPDY or HTTP/2 is used, these counters will be incremented for each 
> stream, not for each network connection.
> It would be useful to add counters in the iocore/net area to directly track 
> the number of TCP connections that have been opened over time. Right now only 
> the number of currently open connections is tracked, via 
> proxy.process.net.connections_currently_open.
> I propose adding two new metrics: proxy.process.net.connections_successful_in 
> and proxy.process.net.connections_successful_out.
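For reference (a tooling note added here, not part of the ticket): individual 
metrics, including the proposed ones once they exist, could be read at runtime 
with traffic_line, e.g.:

{code}
$ traffic_line -r proxy.process.net.connections_currently_open
{code}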



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (TS-2794) Build failure related to header requirements of atscppapi

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom resolved TS-2794.
---
   Resolution: Duplicate
Fix Version/s: (was: 5.3.0)

Closing this as a duplicate of TS-3427, which has a patch.

> Build failure related to header requirements of atscppapi
> -
>
> Key: TS-2794
> URL: https://issues.apache.org/jira/browse/TS-2794
> Project: Traffic Server
>  Issue Type: Bug
>  Components: CPP API
>Reporter: Ryo Okubo
>Assignee: Brian Geffon
> Attachments: extend-tsxs.diff, shared_ptr_h_in.patch
>
>
> When I built my plugin outside of the trafficserver source tree, I hit a build 
> failure related to the header requirements of atscppapi, as shown in the logs below.
> {noformat}
> # I set /usr/local/trafficserver/ as prefix.
> In file included from 
> /usr/local/trafficserver/include/atscppapi/Transaction.h:30:
> /usr/local/trafficserver/include/atscppapi/shared_ptr.h:28:10: fatal error: 
> 'ink_autoconf.h' file not found
> #include "ink_autoconf.h"
>  ^
> 1 error generated.
> {noformat}
> shared_ptr.h requires a variable defined in ink_autoconf.h, but that file 
> doesn't exist in the destination directory, so I've already posted a pull request 
> on GitHub to fix it. Please review, and let me know if you have a better solution.
> https://github.com/apache/trafficserver/pull/80



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3427) compilation error of atscppapi when configured for a out-of-tree build

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3427:
--
Fix Version/s: 5.3.0

> compilation error of atscppapi when configured for a out-of-tree build
> --
>
> Key: TS-3427
> URL: https://issues.apache.org/jira/browse/TS-3427
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Build, CPP API
>Reporter: Bin
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: atscppapi_1.diff
>
>
> A header-file-not-found error occurs when --enable-cppapi is enabled for an 
> out-of-tree build on RHEL 6.4. There are no complaints for an in-tree build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3417) Use madvise() with MADV_DONTDUMP option to limit core sizes

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3417:
--
Fix Version/s: 5.3.0

> Use madvise() with MADV_DONTDUMP option to limit core sizes
> ---
>
> Key: TS-3417
> URL: https://issues.apache.org/jira/browse/TS-3417
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Core
>Reporter: Phil Sorber
>Assignee: Phil Sorber
> Fix For: 5.3.0
>
>
> When ATS crashes it often leaves behind very large core files, in the 
> hundreds of gigabytes. A large percentage of these core files are useless 
> data in the IO buffers. We can limit the pages that the kernel dumps with 
> madvise().
> Note: This will only work on Linux 3.4+.
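A minimal sketch of the mechanism (illustration only, not the actual ATS change), 
showing madvise(2) with MADV_DONTDUMP on a page-aligned buffer:

{code}
#define _DEFAULT_SOURCE 1  /* for MADV_DONTDUMP and posix_memalign */
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Pretend this is a large IO buffer we don't want in core dumps. */
    size_t len = 64 * 1024 * 1024;
    void  *buf;

    if (posix_memalign(&buf, 4096, len) != 0)
        return 1;

    /* Ask the kernel to exclude this range from core dumps (Linux 3.4+). */
    if (madvise(buf, len, MADV_DONTDUMP) != 0)
        perror("madvise(MADV_DONTDUMP)");  /* older kernels return EINVAL */

    free(buf);
    return 0;
}
{code}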



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3422) mismatched ID in certificate indexing warnings

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3422:
--
Fix Version/s: 5.3.0

> mismatched ID in certificate indexing warnings
> --
>
> Key: TS-3422
> URL: https://issues.apache.org/jira/browse/TS-3422
> Project: Traffic Server
>  Issue Type: Bug
>  Components: SSL
>Reporter: James Peach
>Assignee: Susan Hinrichs
> Fix For: 5.3.0
>
>
> Previously, the SSL certificate SNI collision warnings always referenced the 
> pointer value ({{%p}}) of the context so that you could use SSL diagnostic 
> output to find out which certificates contained the collision.
> Now, the relevant warnings are:
> {code}
> Warning("previously indexed wildcard certificate for '%s' as '%s', 
> cannot index it with SSL_CTX #%d now",
> name, reversed, idx);
> {code}
> This message has the index number rather than the pointer value.
> {code}
>   Warning("previously indexed '%s' with SSL_CTX %p, cannot index it with 
> SSL_CTX #%d now", name, value, idx);
> {code}
> This message shows the pointer value of the existing context and the index 
> value of the new one.
> To make these more helpful, we should use consistent IDs across all the 
> messages. Either pointer value or index is probably OK, though AFAICT indices 
> can be reused after collisions which could be confusing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3416) Enabling HTTP2 breaks proxying

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3416:
--
Fix Version/s: sometime

> Enabling HTTP2 breaks proxying
> --
>
> Key: TS-3416
> URL: https://issues.apache.org/jira/browse/TS-3416
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, HTTP/2
>Reporter: Neil Craig
> Fix For: sometime
>
>
> Hi guys
> Firstly, apologies if this is the wrong place to ask.
> I have ATS 5.3, compiled (with experimental plugins) from a GitHub pull a 
> couple of days ago, running on CentOS 6.6 64-bit. I'm trying to get ATS 
> working with h2 as a reverse proxy, but every time I enable h2 via 
> proxy.config.http2.enabled in records.config, proxying breaks. I've tried 
> both http and https backends and many variants of the http_ports config.
> H2 is working, in that the Chrome/Firefox indicator shows it and I can see it 
> in chrome://net-internals, but as I say, proxying breaks. The moment I disable 
> h2 via proxy.config.http2.enabled INT 0, proxying works again (as does 
> vanilla TLS).
> I can't see anything helpful in the logs. My configs are below:
> records.config:
> CONFIG proxy.config.http2.enabled INT 1
> CONFIG proxy.config.http.server_ports STRING 80:http 443:ssl:proto=http2
> CONFIG proxy.config.log.logfile_dir STRING /var/log/trafficserver
> CONFIG proxy.config.body_factory.template_sets_dir STRING 
> etc/trafficserver/body_factory
> CONFIG proxy.config.url_remap.filename STRING remap.config
> proxy.config.log.common_log_enabled INT 1
> proxy.config.log.common_log_is_ascii INT 1
> proxy.config.log.common_log_name STRING nutscrape.log
> CONFIG proxy.config.cache.control.filename STRING cache.config
> CONFIG proxy.config.ssl.server.multicert.filename STRING ssl_multicert.config
> CONFIG proxy.config.log.extended_log_enabled INT 1
> CONFIG proxy.config.log.extended_log_is_ascii INT 1
> CONFIG proxy.config.log.extended_log_name STRING ext.log
> CONFIG proxy.config.ssl.server.cert.path STRING /usr/local/etc/tls-certs/
> CONFIG proxy.config.ssl.server.private_key.path STRING 
> /usr/local/etc/tls-certs/
> remap.config:
> map_with_recv_port https:// http://
> reverse_map http:// https://
> I haven't changed anything else I can think of and have no plugins running.
> In terms of logs, the error.log shows 404s for the origin requests (but I can 
> curl/wget the same resources from the server). The diags.log looks like this:
> [Feb 27 11:23:55.589] {0x2b0a0481e060} STATUS: opened 
> /var/log/trafficserver/diags.log
> [Feb 27 11:23:55.589] {0x2b0a0481e060} NOTE: updated diags config
> [Feb 27 11:23:55.591] Server {0x2b0a0481e060} NOTE: cache clustering disabled
> [Feb 27 11:23:55.592] Server {0x2b0a0481e060} NOTE: ip_allow.config updated, 
> reloading
> [Feb 27 11:23:55.594] Server {0x2b0a0481e060} NOTE: cache clustering disabled
> [Feb 27 11:23:55.595] Server {0x2b0a0481e060} NOTE: logging initialized[3], 
> logging_mode = 3
> [Feb 27 11:23:55.600] Server {0x2b0a0481e060} NOTE: loading SSL certificate 
> configuration from /usr/local/etc/trafficserver/ssl_multicert.config
> [Feb 27 11:23:55.627] Server {0x2b0a0481e060} NOTE: traffic server running
> [Feb 27 11:23:55.637] Server {0x2b0a06e94700} WARNING: skipping access 
> control checks for HTTP/2 connection
> [Feb 27 11:23:55.653] Server {0x2b0a06e94700} WARNING: skipping access 
> control checks for HTTP/2 connection
> [Feb 27 11:23:55.727] Server {0x2b0a0481e060} NOTE: cache enabled
> (after a restart).
> That's about all I can think of that's likely to be useful.
> Any advice or a pointer to a better place to ask would be very gratefully 
> received.
> Cheers
> Neil



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3429) TSContScheduleEvery does not increment event count correctly

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3429:
--
Fix Version/s: 6.0.0

> TSContScheduleEvery does not increment event count correctly
> 
>
> Key: TS-3429
> URL: https://issues.apache.org/jira/browse/TS-3429
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: Bin
> Fix For: 6.0.0
>
>
> TSContScheduleEvery only increments the event count the first time it is 
> scheduled. When the event handler gets invoked, it decrements the event 
> count. So it triggers the assertion at InkAPI.cc:987. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3430) why cpu 100% on a occasion?

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3430:
--
Fix Version/s: sometime

> why cpu 100% on a occasion?
> ---
>
> Key: TS-3430
> URL: https://issues.apache.org/jira/browse/TS-3430
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: Zhaonanli
> Fix For: sometime
>
>
> trafficserver 4.2.2; Centos 6.5 64bit; 32G mem.
> 1. top:
> Cpu0  : 99.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  1.0%si,  0.0%st
> Cpu1  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu2  : 99.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  1.0%si,  0.0%st
> Cpu3  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu4  :  0.0%us,100.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu5  : 99.0%us,  1.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu6  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu7  : 99.0%us,  1.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu8  : 99.0%us,  1.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu9  : 99.0%us,  1.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu10 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu11 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu12 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu13 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu14 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu15 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:  32819596k total, 32507016k used,   312580k free,   325852k buffers
> Swap: 16777212k total,25276k used, 16751936k free, 11826164k cached
>   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND  
>
> 21089 traffics  20   0 22.2g  18g  29m R 100.1 58.9  17:20.61 [ET_NET 0]  
>
> 21091 traffics  20   0 22.2g  18g  29m R 100.1 58.9  17:11.08 [ET_NET 1]  
> All threads are at 100%.
> 2. perf top:
>  58.50%  traffic_server   [.] LogObject::_checkout_write(unsigned 
> long*, unsigned long)
>  34.01%  traffic_server   [.] bool 
> ink_atomic_cas<__int128>(__int128 volatile*, __int128, __int128)
> Is this a logging issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3430) why cpu 100% on a occasion?

2015-03-08 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352412#comment-14352412
 ] 

Leif Hedstrom commented on TS-3430:
---

Interesting, sounds like some sort of race / deadlock (intermittent) in the 
logging code.

> why cpu 100% on a occasion?
> ---
>
> Key: TS-3430
> URL: https://issues.apache.org/jira/browse/TS-3430
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: Zhaonanli
> Fix For: sometime
>
>
> trafficserver 4.2.2; Centos 6.5 64bit; 32G mem.
> 1. top:
> Cpu0  : 99.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  1.0%si,  0.0%st
> Cpu1  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu2  : 99.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  1.0%si,  0.0%st
> Cpu3  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu4  :  0.0%us,100.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu5  : 99.0%us,  1.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu6  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu7  : 99.0%us,  1.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu8  : 99.0%us,  1.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu9  : 99.0%us,  1.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu10 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu11 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu12 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu13 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu14 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu15 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:  32819596k total, 32507016k used,   312580k free,   325852k buffers
> Swap: 16777212k total,25276k used, 16751936k free, 11826164k cached
>   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND  
>
> 21089 traffics  20   0 22.2g  18g  29m R 100.1 58.9  17:20.61 [ET_NET 0]  
>
> 21091 traffics  20   0 22.2g  18g  29m R 100.1 58.9  17:11.08 [ET_NET 1]  
> all the threads are at 100%.
> 2. perf top:
>  58.50%  traffic_server   [.] LogObject::_checkout_write(unsigned 
> long*, unsigned long)
>  34.01%  traffic_server   [.] bool 
> ink_atomic_cas<__int128>(__int128 volatile*, __int128, __int128)
> is this a logging issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3362) Do not staple negative OCSP response

2015-03-08 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352430#comment-14352430
 ] 

Leif Hedstrom commented on TS-3362:
---

Do we still want to do this? If not, please close (remove fix version) as won't 
fix.

> Do not staple negative OCSP response
> 
>
> Key: TS-3362
> URL: https://issues.apache.org/jira/browse/TS-3362
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: SSL
>Reporter: Feifei Cai
>  Labels: review
> Fix For: sometime
>
> Attachments: TS-3362.diff
>
>
> When we get an OCSP response, we check it before caching/stapling it. If it's 
> negative, I think we'd better discard it instead of sending it back to the user 
> agent. This would not increase the security risk: the user agent would query the 
> CA for an OCSP response if ATS does not staple one with the certificate.
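
For reference, a hedged sketch of the kind of check being proposed, using the standard OpenSSL OCSP calls; this is not the actual ATS stapling code, and the helper name is made up.

{code}
#include <openssl/ocsp.h>

// Hypothetical helper, illustrative only: staple/cache only a successful
// response whose certificate status is "good".
static bool
ocsp_response_ok_to_staple(OCSP_RESPONSE *resp, OCSP_CERTID *id)
{
  if (OCSP_response_status(resp) != OCSP_RESPONSE_STATUS_SUCCESSFUL)
    return false; // negative / error response: discard instead of stapling

  OCSP_BASICRESP *basic = OCSP_response_get1_basic(resp);
  if (basic == nullptr)
    return false;

  int status = V_OCSP_CERTSTATUS_UNKNOWN, reason = 0;
  ASN1_GENERALIZEDTIME *revtime = nullptr, *thisupd = nullptr, *nextupd = nullptr;
  bool ok = OCSP_resp_find_status(basic, id, &status, &reason,
                                  &revtime, &thisupd, &nextupd) == 1 &&
            status == V_OCSP_CERTSTATUS_GOOD;
  OCSP_BASICRESP_free(basic);
  return ok;
}
{code}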



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3348) read while write config and range issues

2015-03-08 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352431#comment-14352431
 ] 

Leif Hedstrom commented on TS-3348:
---

[~amc] Any thoughts on this? Time is running short on 5.3.0.

> read while write config and range issues
> 
>
> Key: TS-3348
> URL: https://issues.apache.org/jira/browse/TS-3348
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: William Bardwell
>Assignee: William Bardwell
>  Labels: review
> Fix For: 5.3.0
>
> Attachments: TS-3348.diff
>
>
> We had a number of problems with the read-while-write logic.
> #1) you can't set background fill config options to keep background fill from 
> behaving badly because they are shared too much with read-while-write
> #2) logic around filling range requests out of partial cache entries is too 
> restrictive
> #3) issues around read_while_write not working if there is a transform 
> anywhere
> #4) some related config is not overridable
> So we think that our patch fixes all of these issues...mostly.
> (The background fill timeout doesn't get reinstated if a download switches 
> to read-while-write and then back.  The range-is-in-cache code doesn't seem 
> right for small objects, or possibly for serving from the current fragment that 
> is only partially downloaded.)
> But we would like some review of this to see if we are doing anything 
> dangerous/not right/not helpful.
> Might also help TS-2761 and issue around range handling.
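
For readers following along, these are the records.config knobs the ticket is talking about (read-while-writer and background fill). The values shown are illustrative examples, not recommendations, and per-transaction overridability is exactly what point #4 above is about.

{code}
# Illustrative records.config excerpt; values are examples only.
CONFIG proxy.config.cache.enable_read_while_writer INT 1
CONFIG proxy.config.http.background_fill_active_timeout INT 60
CONFIG proxy.config.http.background_fill_completed_threshold FLOAT 0.500000
{code}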



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3348) read while write config and range issues

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3348:
--
Labels: review  (was: )

> read while write config and range issues
> 
>
> Key: TS-3348
> URL: https://issues.apache.org/jira/browse/TS-3348
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: William Bardwell
>Assignee: William Bardwell
>  Labels: review
> Fix For: 5.3.0
>
> Attachments: TS-3348.diff
>
>
> We had a number of problems with the read-while-write logic.
> #1) you can't set background fill config options to keep background fill from 
> behaving badly because they are shared too much with read-while-write
> #2) logic around filling range requests out of partial cache entries is too 
> restrictive
> #3) issues around read_while_write not working if there is a transform 
> anywhere
> #4) some related config is not overridable
> So we think that our patch fixes all of these issues...mostly.
> (The background fill timeout doesn't get reinstated if a download switches 
> to read-while-write and then back.  The range-is-in-cache code doesn't seem 
> right for small objects, or possibly for serving from the current fragment that 
> is only partially downloaded.)
> But we would like some review of this to see if we are doing anything 
> dangerous/not right/not helpful.
> Might also help TS-2761 and issue around range handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3408) Add a "config describe" command to traffic_ctl

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3408:
--
Fix Version/s: (was: 5.3.0)
   6.0.0

> Add a "config describe" command to traffic_ctl
> --
>
> Key: TS-3408
> URL: https://issues.apache.org/jira/browse/TS-3408
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Configuration, Management API
>Reporter: James Peach
>Assignee: James Peach
> Fix For: 6.0.0
>
>
> Add a {{config describe}} command to {{traffic_ctl}} so that operators can 
> get easy access to everything that the records system knows about a 
> configuration variable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3331) negative responses cached even when headers indicate otherwise

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3331:
--
Labels: review  (was: )

> negative responses cached even when headers indicate otherwise
> --
>
> Key: TS-3331
> URL: https://issues.apache.org/jira/browse/TS-3331
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: William Bardwell
>Assignee: William Bardwell
>  Labels: review
> Fix For: 5.3.0
>
>
> Negative status codes get cached even when headers such as Cache-Control: 
> no-store are present, which positive caching would honor.  So the fix is to 
> apply the response headers (and the general caching config) to negative caching 
> decisions too.
> My patch might also fix [TS-2633], 406 negative responses being cached for too long.
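
A minimal, self-contained sketch of the policy being described, assuming hypothetical helper names (this is not the HttpTransact implementation): negative caching should only apply when the response headers would also have permitted positive caching.

{code}
#include <string>

// Illustrative only: allow storage unless Cache-Control forbids it.
static bool
cache_control_allows_store(const std::string &cache_control)
{
  return cache_control.find("no-store") == std::string::npos &&
         cache_control.find("private") == std::string::npos;
}

// Negative caching applies to e.g. 404/406/503 responses, but only when the
// origin's response headers would also permit positive caching.
static bool
should_negative_cache(int status, const std::string &cache_control,
                      bool negative_caching_enabled)
{
  const bool negative_status = (status == 404 || status == 406 || status == 503);
  return negative_caching_enabled && negative_status &&
         cache_control_allows_store(cache_control);
}
{code}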



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3342) Non-standard method in bad request can cause crash

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3342:
--
Labels: review  (was: )

> Non-standard method in bad request can cause crash
> --
>
> Key: TS-3342
> URL: https://issues.apache.org/jira/browse/TS-3342
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: William Bardwell
>Assignee: William Bardwell
>  Labels: review
> Fix For: 5.3.0
>
> Attachments: TS-3342.diff
>
>
> The fix is to check for a normal method (one that would actually need a cache 
> lookup) in HttpTransact::HandleCacheOpenReadMiss() and do
> {code}
>  s->cache_info.action = CACHE_DO_NO_ACTION;
> {code}
> instead of
> {code}
>  s->cache_info.action = CACHE_PREPARE_TO_WRITE;
> {code}
> for anything weird.  But I am concerned that this might cause problems if 
> someone wants to add support for a weird method...but maybe that never works 
> right with the cache anyway...
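
A self-contained sketch of the guard described above; the enum values mirror the names quoted in the ticket, but the function and the string-based method check are purely illustrative, not the real HttpTransact types.

{code}
#include <string>

enum CacheAction { CACHE_DO_NO_ACTION, CACHE_PREPARE_TO_WRITE };

// Only methods that can meaningfully populate the cache get a write lock;
// any non-standard method falls through to "do nothing", which avoids the
// crash at the cost of never caching such methods.
static CacheAction
cache_action_for_miss(const std::string &method)
{
  if (method == "GET" || method == "HEAD") {
    return CACHE_PREPARE_TO_WRITE;
  }
  return CACHE_DO_NO_ACTION;
}
{code}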



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-871) Subversion (1.7) with serf fails with ATS in forward and reverse proxy mode

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-871:
-
Fix Version/s: (was: sometime)
   6.0.0

> Subversion (1.7) with serf fails with ATS in forward and reverse proxy mode
> ---
>
> Key: TS-871
> URL: https://issues.apache.org/jira/browse/TS-871
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HTTP
>Affects Versions: 3.0.0
>Reporter: Igor Galić
>Assignee: Leif Hedstrom
> Fix For: 6.0.0
>
> Attachments: TS-871-20121107.diff, TS-871.diff, 
> ats_Thttp.debug.notime.txt, ats_Thttp.debug.txt, 
> revats_Thttp.debug.notime.txt, revats_Thttp.debug.txt, serf_proxy.cap, 
> serf_revproxy.cap, stats.diff
>
>
> When accessing a remote subversion repository via http or https with svn 1.7, 
> it currently times out:
> {noformat}
> igalic@tynix ~/src/asf % svn co 
> http://svn.apache.org/repos/asf/trafficserver/plugins/stats_over_http/
> svn: E020014: Unable to connect to a repository at URL 
> 'http://svn.apache.org/repos/asf/trafficserver/plugins/stats_over_http'
> svn: E020014: Unspecified error message: 504 Connection Timed Out
> 1 igalic@tynix ~/src/asf %
> {noformat}
> I have started traffic_server -Thttp and captured the output, which I'm 
> attaching.
> There's also a capture from the network.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3235) PluginVC crashed with unrecognized event

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3235:
--
Fix Version/s: (was: 5.3.0)
   6.0.0

> PluginVC crashed with unrecognized event
> 
>
> Key: TS-3235
> URL: https://issues.apache.org/jira/browse/TS-3235
> Project: Traffic Server
>  Issue Type: Bug
>  Components: CPP API, HTTP, Plugins
>Reporter: kang li
>Assignee: Brian Geffon
> Fix For: 6.0.0
>
> Attachments: pluginvc-crash.diff
>
>
> We are using atscppapi to create an intercept plugin.
>  
> From the coredump, it seems the Continuation of the InterceptPlugin had 
> already been destroyed. 
> {code}
> #0  0x00375ac32925 in raise () from /lib64/libc.so.6
> #1  0x00375ac34105 in abort () from /lib64/libc.so.6
> #2  0x2b21eeae3458 in ink_die_die_die (retval=1) at ink_error.cc:43
> #3  0x2b21eeae3525 in ink_fatal_va(int, const char *, typedef 
> __va_list_tag __va_list_tag *) (return_code=1, 
> message_format=0x2b21eeaf08d8 "%s:%d: failed assert `%s`", 
> ap=0x2b21f4913ad0) at ink_error.cc:65
> #4  0x2b21eeae35ee in ink_fatal (return_code=1, 
> message_format=0x2b21eeaf08d8 "%s:%d: failed assert `%s`") at ink_error.cc:73
> #5  0x2b21eeae2160 in _ink_assert (expression=0x76ddb8 "call_event == 
> core_lock_retry_event", file=0x76dd04 "PluginVC.cc", line=203)
> at ink_assert.cc:37
> #6  0x00530217 in PluginVC::main_handler (this=0x2b24ef007cb8, 
> event=1, data=0xe0f5b80) at PluginVC.cc:203
> #7  0x004f5854 in Continuation::handleEvent (this=0x2b24ef007cb8, 
> event=1, data=0xe0f5b80) at ../iocore/eventsystem/I_Continuation.h:146
> #8  0x00755d26 in EThread::process_event (this=0x309b250, 
> e=0xe0f5b80, calling_code=1) at UnixEThread.cc:145
> #9  0x0075610a in EThread::execute (this=0x309b250) at 
> UnixEThread.cc:239
> #10 0x00755284 in spawn_thread_internal (a=0x2849330) at Thread.cc:88
> #11 0x2b21ef05f9d1 in start_thread () from /lib64/libpthread.so.0
> #12 0x00375ace8b7d in clone () from /lib64/libc.so.6
> (gdb) p sm_lock_retry_event
> $13 = (Event *) 0x2b2496146e90
> (gdb) p core_lock_retry_event
> $14 = (Event *) 0x0
> (gdb) p active_event
> $15 = (Event *) 0x0
> (gdb) p inactive_event
> $16 = (Event *) 0x0
> (gdb) p *(INKContInternal*)this->core_obj->connect_to
> Cannot access memory at address 0x2b269cd46c10
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-1807) shutdown on a write VIO to TSHttpConnect() doesn't propogate

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-1807:
--
Labels: review  (was: )

> shutdown on a write VIO to TSHttpConnect() doesn't propogate
> 
>
> Key: TS-1807
> URL: https://issues.apache.org/jira/browse/TS-1807
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HTTP
>Reporter: William Bardwell
>  Labels: review
> Fix For: 5.3.0
>
> Attachments: TS-1807.diff
>
>
> In a plugin I am doing a TSHttpConnect() and then sending HTTP requests and 
> getting responses.  But when I try to do TSVIONBytesSet() and 
> TSVConnShutdown() on the write vio (due to the client side being done sending 
> requests) the write vio just sits there and never wakes up the other side, 
> and the response side doesn't try to close up until an inactivity timeout 
> happens.
> I think that PluginVC::do_io_shutdown() needs to do  
> other_side->read_state.vio.reenable(); when a shutdown for write shows up.  
> Then the other side wakes up and sees the EOF due to the shutdown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-2439) next_slotnum is not recomputed

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2439:
--
Fix Version/s: (was: 5.3.0)
   60.0

> next_slotnum is not recomputed 
> ---
>
> Key: TS-2439
> URL: https://issues.apache.org/jira/browse/TS-2439
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HTTP
>Reporter: xiongzongtao
>Assignee: Alan M. Carroll
> Fix For: 6.0.0
>
> Attachments: ts_2439.diff
>
>
> In the function mime_hdr_field_attach in MIME.cc:
> while (prev_slotnum < field_slotnum)// break if prev after field
> {
>   if (next_dup == NULL)
> break;  // no next dup, we're done
>   if (next_slotnum > field_slotnum)
> break;  // next dup is after us, we're done
>   prev_dup = next_dup;
>   prev_slotnum = next_slotnum;
>   next_dup = prev_dup->m_next_dup;
> }
> In the while loop above, next_slotnum is never recomputed.
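
A self-contained illustration of the fix being asked for, using analogous (not the real) types: the slot number of the next duplicate has to be refreshed every time next_dup advances, otherwise the comparison uses a stale value.

{code}
struct Dup {
  int  slotnum;
  Dup *m_next_dup;
};

// Walk the dup chain until we pass field_slotnum. The marked line is the fix:
// next_slotnum must be refreshed whenever next_dup advances.
static Dup *
find_prev_dup(Dup *prev_dup, int prev_slotnum, int field_slotnum)
{
  Dup *next_dup     = prev_dup ? prev_dup->m_next_dup : nullptr;
  int  next_slotnum = next_dup ? next_dup->slotnum : -1;

  while (prev_slotnum < field_slotnum) { // break if prev after field
    if (next_dup == nullptr)
      break;                             // no next dup, we're done
    if (next_slotnum > field_slotnum)
      break;                             // next dup is after us, we're done
    prev_dup     = next_dup;
    prev_slotnum = next_slotnum;
    next_dup     = prev_dup->m_next_dup;
    next_slotnum = next_dup ? next_dup->slotnum : -1; // <-- the missing recomputation
  }
  return prev_dup;
}
{code}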



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-2439) next_slotnum is not recomputed

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2439:
--
Fix Version/s: (was: 60.0)
   6.0.0

> next_slotnum is not recomputed 
> ---
>
> Key: TS-2439
> URL: https://issues.apache.org/jira/browse/TS-2439
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HTTP
>Reporter: xiongzongtao
>Assignee: Alan M. Carroll
> Fix For: 6.0.0
>
> Attachments: ts_2439.diff
>
>
> In the function mime_hdr_field_attach in MIME.cc:
> while (prev_slotnum < field_slotnum)// break if prev after field
> {
>   if (next_dup == NULL)
> break;  // no next dup, we're done
>   if (next_slotnum > field_slotnum)
> break;  // next dup is after us, we're done
>   prev_dup = next_dup;
>   prev_slotnum = next_slotnum;
>   next_dup = prev_dup->m_next_dup;
> }
> In the while loop above, next_slotnum is never recomputed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-1334) congestion control - observed issues

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-1334:
--
Labels: review  (was: )

> congestion control - observed issues
> 
>
> Key: TS-1334
> URL: https://issues.apache.org/jira/browse/TS-1334
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 3.0.2
>Reporter: Aidan McGurn
>Assignee: Alan M. Carroll
>  Labels: review
> Fix For: 5.3.0
>
> Attachments: TS-1334.diff
>
>
> Hi,
> I have investigated using ATS congestion control, but I have some 
> observations. I can split these out if they are bugs that need separate attention.
> (Queries are against ATS v3.0.2 as the test code, assuming not much has changed 
> here for v3.2.)
> • Is it feasible for a new congestion hook to be added to the 
> architecture at some point, i.e. for these events:
> CONGESTION_EVENT_CONGESTED_ON_F
> CONGESTION_EVENT_CONGESTED_ON_M
> It would be desirable to send a hook event upwards to inform any plugins of a 
> congested site.
> • How is the congestion cache managed? I don’t see it deleting 
> entries –
> in CongestionDB.cc, function remove_congested_entry, I set breakpoints, 
> I congest, then I uncongest, but I never see this function called.
> Therefore, does the cache grow and grow with old entries?
> The reason for checking this is that I would also need to inform plugin land when 
> a site becomes UNCONGESTED, but I don’t even see an HttpSM event for this. 
> (This is the biggest issue with CC for me.)
> • traffic_line -q doesn’t appear to work, i.e. no congestion stats are 
> returned;
> there has been a Jira open on this for a long time without further response:
> https://issues.apache.org/jira/browse/TS-1221
> • Some other, less important observations: the parameters
> live_os_conn_retries
> live_os_conn_timeout
> dead_os_conn_timeout
> dead_os_conn_retries
> appear to have no effect whatsoever, but this is not as important as the previous points.
> • It doesn't look like the status response code can be customised.
> Maybe this is not supported much as an ATS feature? 
> Any pointers on any of these are appreciated, even just to let me know whether the 
> observations are correct and won't be fixed in coming releases…
> Thanks,
> /aidan



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-2848) ATS crash in HttpSM::release_server_session

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2848:
--
Labels: crash review yahoo  (was: crash yahoo)

> ATS crash in HttpSM::release_server_session
> ---
>
> Key: TS-2848
> URL: https://issues.apache.org/jira/browse/TS-2848
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HTTP
>Reporter: Feifei Cai
>Assignee: Alan M. Carroll
>  Labels: crash, review, yahoo
> Fix For: 5.3.0
>
> Attachments: TS-2848.diff
>
>
> We deploy ATS on production hosts, and noticed crashes with the following 
> stack trace. This does not happen very frequently, roughly once a week or even 
> less often. It has crashed repeatedly over the last 2 months; however, the root 
> cause has not been found, and we cannot reproduce the crash on demand, we can 
> only wait for it to happen.
> {noformat}
> NOTE: Traffic Server received Sig 11: Segmentation fault
> /home/y/bin/traffic_server - STACK TRACE:
> /lib64/libpthread.so.0(+0x321e60f500)[0x2b69adf8f500]
> /home/y/bin/traffic_server(_ZN6HttpSM22release_server_sessionEb+0x35)[0x529eb5]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x2db)[0x5362bb]
> /home/y/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x3aa)[0x53537a]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x2b0)[0x52dbd0]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x1f2)[0x5361d2]
> /home/y/bin/traffic_server(_ZN6HttpSM16do_hostdb_lookupEv+0x282)[0x51e422]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0xbad)[0x536b8d]
> /home/y/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x3aa)[0x53537a]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x2b0)[0x52dbd0]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x1f2)[0x5361d2]
> /home/y/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x3aa)[0x53537a]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x2b0)[0x52dbd0]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x1f2)[0x5361d2]
> /home/y/bin/traffic_server(_ZN6HttpSM21state_cache_open_readEiPv+0xfe)[0x52ff8e]
> /home/y/bin/traffic_server(_ZN6HttpSM12main_handlerEiPv+0xd8)[0x533098]
> /home/y/bin/traffic_server(_ZN11HttpCacheSM21state_cache_open_readEiPv+0x1b2)[0x50bef2]
> /home/y/bin/traffic_server(_ZN7CacheVC8callcontEi+0x53)[0x5f0a93]
> /home/y/bin/traffic_server(_ZN7CacheVC17openReadStartHeadEiP5Event+0x7cf)[0x65934f]
> /home/y/bin/traffic_server(_ZN5Cache9open_readEP12ContinuationP7INK_MD5P7HTTPHdrP21CacheLookupHttpConfig13CacheFragTypePci+0x383)[0x656373]
> /home/y/bin/traffic_server(_ZN14CacheProcessor9open_readEP12ContinuationP3URLbP7HTTPHdrP21CacheLookupHttpConfigl13CacheFragType+0xad)[0x633a6d]
> /home/y/bin/traffic_server(_ZN11HttpCacheSM9open_readEP3URLP7HTTPHdrP21CacheLookupHttpConfigl+0x94)[0x50b944]
> /home/y/bin/traffic_server(_ZN6HttpSM24do_cache_lookup_and_readEv+0xf3)[0x51d893]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x722)[0x536702]
> /home/y/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x49d)[0x53546d]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x2b0)[0x52dbd0]
> /home/y/bin/traffic_server(_ZN6HttpSM18state_api_callbackEiPv+0x8b)[0x53328b]
> /home/y/bin/traffic_server(TSHttpTxnReenable+0x404)[0x4b9b14]
> /home/y/libexec64/trafficserver/header_filter.so(+0x2d5d)[0x2b69c3471d5d]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x114)[0x52da34]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x85d)[0x53683d]
> /home/y/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x3aa)[0x53537a]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x2b0)[0x52dbd0]
> /home/y/bin/traffic_server(_ZN6HttpSM18state_api_callbackEiPv+0x8b)[0x53328b]
> /home/y/bin/traffic_server(TSHttpTxnReenable+0x404)[0x4b9b14]
> /home/y/libexec64/trafficserver/header_rewrite.so(+0x1288d)[0x2b69c36d]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x114)[0x52da34]
> /home/y/bin/traffic_server(_ZN6HttpSM18state_api_callbackEiPv+0x8b)[0x53328b]
> /home/y/bin/traffic_server(TSHttpTxnReenable+0x404)[0x4b9b14]
> /home/y/libexec64/trafficserver/header_filter.so(+0x2d5d)[0x2b69c3471d5d]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x114)[0x52da34]
> /home/y/bin/traffic_server(_ZN6HttpSM33state_read_server_response_headerEiPv+0x398)[0x530828]
> /home/y/bin/traffic_server(_ZN6HttpSM12main_handlerEiPv+0xd8)[0x533098]
> /home/y/bin/traffic_server[0x68606b]
> /home/y/bin/traffic_server[0x688a14]
> /home/y/bin/traffic_server(_ZN10NetHandler12mainNetEventEiP5Event+0x1f2)[0x681582]
> /home/y/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x8f)[0x6a89bf]
> /home/y/bin/traffic_server(_ZN7EThread7executeEv+0x4a3)[0x6a93a3]
> /home/y/bin/traffic_server[0x6a785a]
> /lib64/libpthread

[jira] [Updated] (TS-3036) Add logging field to define the cache medium used to serve a HIT

2015-03-08 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3036:
--
Labels: review  (was: )

> Add logging field to define the cache medium used to serve a HIT
> 
>
> Key: TS-3036
> URL: https://issues.apache.org/jira/browse/TS-3036
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Logging
>Reporter: Ryan Frantz
>Assignee: Leif Hedstrom
>  Labels: review
> Fix For: 5.3.0
>
>
> I want to be able to differentiate between RAM cache HITs and disk cache 
> HITs. Add a logging field to inform the administrator if the HIT came from 
> RAM, at least.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3362) Do not staple negative OCSP response

2015-03-08 Thread Feifei Cai (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352446#comment-14352446
 ] 

Feifei Cai commented on TS-3362:


Thanks [~zwoop]. I'll close this ticket.

> Do not staple negative OCSP response
> 
>
> Key: TS-3362
> URL: https://issues.apache.org/jira/browse/TS-3362
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: SSL
>Reporter: Feifei Cai
>  Labels: review
> Attachments: TS-3362.diff
>
>
> When we get an OCSP response, we check it before caching/stapling it. If it's 
> negative, I think we'd better discard it instead of sending it back to the user 
> agent. This would not increase the security risk: the user agent would query the 
> CA for an OCSP response if ATS does not staple one with the certificate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-3362) Do not staple negative OCSP response

2015-03-08 Thread Feifei Cai (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feifei Cai closed TS-3362.
--
   Resolution: Won't Fix
Fix Version/s: (was: sometime)

> Do not staple negative OCSP response
> 
>
> Key: TS-3362
> URL: https://issues.apache.org/jira/browse/TS-3362
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: SSL
>Reporter: Feifei Cai
>  Labels: review
> Attachments: TS-3362.diff
>
>
> When we get an OCSP response, we check it before caching/stapling it. If it's 
> negative, I think we'd better discard it instead of sending it back to the user 
> agent. This would not increase the security risk: the user agent would query the 
> CA for an OCSP response if ATS does not staple one with the certificate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3433) Use extra memory than expected

2015-03-08 Thread pjack (JIRA)
pjack created TS-3433:
-

 Summary: Use extra memory than expected
 Key: TS-3433
 URL: https://issues.apache.org/jira/browse/TS-3433
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Reporter: pjack


The storage size is 16G, and ram_cache.size uses the default value (-1).
I expected the memory of the process to be less than 200MB, but it used 1.5G 
and more until my server ran out of memory.
I checked the ram_cache size and I found bytes_used > total_bytes.

OS: Ubuntu 12.04 with 3.13.0-32 kernel
traffic server version: 4.2.3

$curl http://localhost:8080/_status | grep ram
"proxy.process.cache.ram_cache.total_bytes": "13410304",
"proxy.process.cache.ram_cache.bytes_used": "133754880",
"proxy.process.cache.ram_cache.hits": "12322",
"proxy.process.cache.ram_cache.misses": "55584",


But if I set ram_cache.size to a fixed value, then bytes_used always stays 
smaller than total_bytes. Please let me know whether this is expected behavior 
or some kind of bug.

Thanks!
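
As a workaround consistent with the reporter's observation, pinning the RAM cache to an explicit size in records.config keeps bytes_used bounded; the value below is only an example, not a recommendation.

{code}
# Illustrative records.config line: cap the RAM cache explicitly instead of
# relying on the automatic (-1) sizing. 1 GB shown purely as an example.
CONFIG proxy.config.cache.ram_cache.size INT 1073741824
{code}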
 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3408) Add a "config describe" command to traffic_ctl

2015-03-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352555#comment-14352555
 ] 

ASF subversion and git services commented on TS-3408:
-

Commit 857766c5caefd2c1b890e34e379cfcd7bbde475d in trafficserver's branch 
refs/heads/master from [~jpe...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=857766c ]

TS-3408: add a "config describe" command to traffic_ctl

Add a new management API TSConfigRecordDescribe() to publish all
the information that we know about a configuration record. Plumb
this through the messaging layer and expose it in traffic_ctl as
the "config describe" subcommand.


> Add a "config describe" command to traffic_ctl
> --
>
> Key: TS-3408
> URL: https://issues.apache.org/jira/browse/TS-3408
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Configuration, Management API
>Reporter: James Peach
>Assignee: James Peach
> Fix For: 6.0.0
>
>
> Add a {{config describe}} command to {{traffic_ctl}} so that operators can 
> get easy access to everything that the records system knows about a 
> configuration variable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)