[jira] [Updated] (TS-4285) Add support for overriding proxy.config.http.attach_server_session_to_client

2016-03-19 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-4285:
--
Fix Version/s: 6.2.0

> Add support for overriding proxy.config.http.attach_server_session_to_client
> 
>
> Key: TS-4285
> URL: https://issues.apache.org/jira/browse/TS-4285
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: TS API
>Reporter: Phillip Moore
> Fix For: 6.2.0
>
>
> I need to enable the "proxy.config.http.attach_server_session_to_client" 
> setting for a single remap rule in our configuration. Currently this isn't one 
> of the settings that can be overridden via the conf_remap plugin.
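> Once the override is supported, a hypothetical remap.config line for this 
> (the hostnames are placeholders) might look like:
> {code}
> map http://example.com/ http://origin.example.com/ @plugin=conf_remap.so @pparam=proxy.config.http.attach_server_session_to_client=1
> {code}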



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4279) ats fallen into dead loop for cache directory overflow

2016-03-19 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4279:
---
Affects Version/s: 5.3.1

> ats fallen into dead loop for cache directory overflow
> --
>
> Key: TS-4279
> URL: https://issues.apache.org/jira/browse/TS-4279
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Cache
>Affects Versions: 5.3.1
>Reporter: taoyunxing
> Fix For: 6.2.0
>
>
> CPU: 40 cores; Mem: 120 GB; Disk: 1 * 300 GB (system) + 11 * 899 GB. 
> records.config:
> CONFIG proxy.config.cache.min_average_object_size INT 1048576
> CONFIG proxy.config.cache.ram_cache.algorithm INT 1
> CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
> CONFIG proxy.config.cache.ram_cache.size INT 64424509440
> storage.config:
> /dev/sdc id=cache.disk.1
> I encountered a dead-loop situation with ATS 5.3.1 on two production 
> hosts; a burst of warnings appears in the diags.log like this:
> {code}
> [Mar 16 13:04:32.730] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.732] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.733] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.735] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.737] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.739] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.742] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.744] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.747] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.750] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.753] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.756] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
> {code}
> ATS restarts every several hours, and the TIME_WAIT count is far above the 
> ESTABLISHED TCP connection count.
> The following is the current directory snapshot of those hosts:
> {code}
> Directory for [cache.disk.1 172032:109741163]
> Bytes: 8573600
> Segments:  14
> Buckets:   15310
> Entries:   857360
> Full:  852904
> Empty: 4085
> Stale: 0
> Free:  371
> Bucket Fullness:   4085158003204441621 
>   42175331372223212605 
>
> Segment Fullness:  60903 60918 60914 60947 60956 
>60947 60872 60943 60918 60927 
>60858 60917 60927 60957 
> Freelist Fullness:45302713 0 
>789 53212 
>   83 020 8 
> {code}
> I wonder why; can anyone help me? Thanks a lot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4282) Socket read failure isn't handled in AdminClient.pm

2016-03-19 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-4282:
--
Fix Version/s: 6.2.0

> Socket read failure isn't handled in AdminClient.pm
> ---
>
> Key: TS-4282
> URL: https://issues.apache.org/jira/browse/TS-4282
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Management API
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
> Fix For: 6.2.0
>
>
> {{_do_read}} can return {{undef}} but its caller {{get_stat}} doesn't check 
> and so can generate errors by attempting to dereference {{undef}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3999) Add option to go direct for specific parent entry

2016-03-19 Thread Alan M. Carroll (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201645#comment-15201645
 ] 

Alan M. Carroll commented on TS-3999:
-

Although, maybe we should change it to
{code}
peers="tidus:8080, yuna:8080, rikku:8080"
{code}

> Add option to go direct for specific parent entry
> -
>
> Key: TS-3999
> URL: https://issues.apache.org/jira/browse/TS-3999
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Parent Proxy
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
>  Labels: A
> Fix For: 7.0.0
>
>
> We want to use parent proxying in a peer relationship, so that a host can be 
> both a parent and a child in general, but only for a specific URL. Currently 
> this can be done using the CARP plugin, but that can create some difficulties. 
> Being able to specify that one parent in the list actually goes direct to the 
> origin would make that much easier.
> The current suggestion is to overload the port specifier as an indicator of 
> direct. For example, if you had three hosts in a pod, {{rikku}}, {{tidus}}, 
> and {{yuna}}, then you would configure the parent proxying on {{tidus}} as
> {code}
> "tidus:@direct, yuna:8080, rikku:8080"
> {code}
> while the configuration on {{yuna}} would be
> {code}
> "tidus:8080, yuna:@direct, rikku:8080"
> {code}
> Similarly, for {{rikku}} the port would be changed to "@direct" just for the 
> "rikku" parent. I discussed several configuration options for this with 
> [~dcarlin], and putting the override directly in the parent list was his 
> preferred mechanism. Note that in order to have consistency between hosts in 
> a pod, the list of names *must* be exactly the same across all the peers. In 
> this case, it must be "tidus, yuna, rikku" for all three, or loops will occur 
> because of different hash seeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3999) Add option to go direct for specific parent entry

2016-03-19 Thread Alan M. Carroll (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201636#comment-15201636
 ] 

Alan M. Carroll commented on TS-3999:
-

In addition, a flag will be added to the parent rule to indicate that one of 
the parents is self.  If this flag is present and self cannot be identified, 
then ATS will terminate with an appropriate warning. This would make case (2) 
much safer while allowing the same configuration across multiple machines, 
because the flag is attached to the rule, not to a specific parent.

For example
{code}
parent="tidus:8080, yuna:8080, rikku:8080" peers=true
{code}
If the host cannot be positively identified as one of {{tidus}}, {{yuna}}, or 
{{rikku}} then ATS fails out. Setting {{peers}} is allowed but not required if 
{{direct}} is used because use of {{direct}} implies {{peers}}.

> Add option to go direct for specific parent entry
> -
>
> Key: TS-3999
> URL: https://issues.apache.org/jira/browse/TS-3999
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Parent Proxy
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
>  Labels: A
> Fix For: 7.0.0
>
>
> We want to use parenty proxying in a peer relationship so that a host can be 
> both a parent and child in general but only for a specific URL. Currently 
> this can be done using the CARP plugin but that can create some difficulties. 
> Being able to specify that one parent in the list is actually direct to 
> origin would make that much easier.
> The current suggestion is to overload the port specifier as a indicator of 
> direct. For example, if you had three hosts in a pod, {{rikku}}, {{tidus}}, 
> and {{yuna}}, then you would configure the parent proxying on {{tidus}} as
> {code}
> "tidus:@direct, yuna:8080, rikku:8080"
> {code}
> while the configuration on {{yuna}} would be
> {code}
> "tidus:8080, yuna:@direct, rikku:8080"
> {code}.
> Similarly for {{rikku}} the port would be changed to "@direct" just for the 
> "rikku" parent. I discussed several configuration options for this with 
> [~dcarlin] and putting the override directly in the parent list was his 
> preferred mechanism. Note that in order to have consistency between hosts in 
> a pod, the list of names *must* be exactly the same across all the peers. In 
> this case, it must be "tidus, yuna, rikku" for all three or loops will occur 
> because of different hash seeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4260) Change event loop to always stall on waiting for I/O.

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15198718#comment-15198718
 ] 

ASF subversion and git services commented on TS-4260:
-

Commit dfd9776990a6cf04385e5e315c277f2dad3f01d8 in trafficserver's branch 
refs/heads/master from [~amc]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=dfd9776 ]

TS-4261: Split stats API from process API.

This is useful for TS-4260 as noted in the related bugs. As part
of that work I set up some statistics to track the performance of
the event loop. Without this change doing that requires bringing
all of the process management support in to the event loop component
which is problematic. It seemed much simpler and better overall to
just split those unrelated items apart.

This closes #516.


> Change event loop to always stall on waiting for I/O.
> -
>
> Key: TS-4260
> URL: https://issues.apache.org/jira/browse/TS-4260
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Core
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
> Fix For: 6.2.0
>
>
> Currently the event loop has two wait conditions: one a condition variable, 
> the other I/O ({{epoll}} or equivalent). As far as I can tell, the condition 
> variable is useful only during startup, when the I/O wait data is not yet 
> available. The event loop should be changed to wait on one or the other but 
> not both, as this can create artificial latency when an event breaks one 
> wait condition but not the other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (TS-4271) metrics fail to clear

2016-03-19 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach resolved TS-4271.
-
Resolution: Fixed

> metrics fail to clear
> -
>
> Key: TS-4271
> URL: https://issues.apache.org/jira/browse/TS-4271
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Management API, Metrics
>Reporter: James Peach
>Assignee: James Peach
> Fix For: 6.2.0
>
>
> {{traffic_ctl metrics clear}} fails to clear metrics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4282) Socket read failure isn't handled in AdminClient.pm

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15198304#comment-15198304
 ] 

ASF subversion and git services commented on TS-4282:
-

Commit 73dedc34b61033978032941a1408487f760b0d49 in trafficserver's branch 
refs/heads/master from [~amc]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=73dedc3 ]

TS-4282: Check for _do_read returning undef in AdminClient.pm
This closes #527.


> Socket read failure isn't handled in AdminClient.pm
> ---
>
> Key: TS-4282
> URL: https://issues.apache.org/jira/browse/TS-4282
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Management API
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
> Fix For: 6.2.0
>
>
> {{_do_read}} can return {{undef}} but its caller {{get_stat}} doesn't check 
> and so can generate errors by attempting to dereference {{undef}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-4283) Remove some remnant comments and code from 64-bit conversion

2016-03-19 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom reassigned TS-4283:
-

Assignee: Leif Hedstrom

> Remove some remnant comments and code from 64-bit conversion
> 
>
> Key: TS-4283
> URL: https://issues.apache.org/jira/browse/TS-4283
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Core
>Reporter: Leif Hedstrom
>Assignee: Leif Hedstrom
> Fix For: 6.2.0
>
>
> Some of these comments no longer apply, so we should just remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4278) HostDB sync causes active transactions to block for 100's of ms

2016-03-19 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-4278:
--
Fix Version/s: 6.2.0

> HostDB sync causes active transactions to block for 100's of ms
> ---
>
> Key: TS-4278
> URL: https://issues.apache.org/jira/browse/TS-4278
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HostDB
>Reporter: Susan Hinrichs
> Fix For: 6.2.0
>
>
> When HostDB syncs to disk (by default every two minutes), active transactions 
> will block when they reach HttpSM::do_hostdb_lookup.  This is because 
> do_hostdb_lookup calls hostDBProcessor.getbyname_imm, which attempts to get 
> the bucket locks.  The delays generally last for 500-1200 ms.  This blocks 
> the event loop, so no other actions will be performed by the net handler 
> until the lock is dropped.
> I'm assuming that the bucket locks are grabbed by the sync logic.  When I 
> increased proxy.config.cache.hostdb.sync_frequency to 1200, the slowdown 
> every two minutes went away.  Fortunately, setting 
> proxy.config.cache.hostdb.sync_frequency to 0 seems to completely eliminate 
> the sync, which will be my suggested solution internally.
> I tried reducing the size of the hostdb table, but that didn't seem to affect 
> the delay time.
> The delay was only reliably exhibited on a loaded system.  Running my httperf 
> test case on a machine with no other activity did not show the delays.
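> In other words, the internal workaround is just the following records.config 
> line (0 disables the sync entirely, per the above):
> {code}
> CONFIG proxy.config.cache.hostdb.sync_frequency INT 0
> {code}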



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-4279) ats fallen into dead loop for cache directory overflow

2016-03-19 Thread taoyunxing (JIRA)
taoyunxing created TS-4279:
--

 Summary: ats fallen into dead loop for cache directory overflow
 Key: TS-4279
 URL: https://issues.apache.org/jira/browse/TS-4279
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Reporter: taoyunxing


CPU: 40 cores; Mem: 120 GB; Disk: 1 * 300 GB (system) + 11 * 899 GB. 

records.config:
CONFIG proxy.config.cache.min_average_object_size INT 1048576
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
CONFIG proxy.config.cache.ram_cache.size INT 64424509440

storage.config:
/dev/sdc id=cache.disk.1

I encountered a dead-loop situation with ATS 5.3.1 on two production 
hosts; a burst of warnings appears in the diags.log like this:
{code}
[Mar 16 13:04:32.730] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.732] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.733] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.735] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.737] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.739] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.742] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.744] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.747] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.750] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.753] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.756] Server {0x2b8ffc544700} WARNING: (freelist_clean) cache directory overflow on '/dev/sde' segment 4, purging...
{code}
ATS restarts every several hours, and the TIME_WAIT count is far above the 
ESTABLISHED TCP connection count.
The following is the current directory snapshot of those hosts:
{code}
Directory for [cache.disk.1 172032:109741163]
Bytes: 8573600
Segments:  14
Buckets:   15310
Entries:   857360
Full:  852904
Empty: 4085
Stale: 0
Free:  371
Bucket Fullness:   4085158003204441621 
  42175331372223212605 
   
Segment Fullness:  60903 60918 60914 60947 60956 
   60947 60872 60943 60918 60927 
   60858 60917 60927 60957 
Freelist Fullness:45302713 0 
   789 53212 
  83 020 8 
{code}
I wonder why; can anyone help me? Thanks a lot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4286) Wrong configuration instructions in docs for disabling caching

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15202509#comment-15202509
 ] 

ASF subversion and git services commented on TS-4286:
-

Commit 0468512bbd89d31a5239e0e9b077104fb8ac8e44 in trafficserver's branch 
refs/heads/master from [~jsime]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=0468512 ]

TS-4286: docs: correction to cache disabling instructions


> Wrong configuration instructions in docs for disabling caching
> --
>
> Key: TS-4286
> URL: https://issues.apache.org/jira/browse/TS-4286
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Docs
>Reporter: Daniel Xu
>Assignee: Jon Sime
> Fix For: Docs
>
>
> https://docs.trafficserver.apache.org/en/latest/admin-guide/configuration/cache-basics.en.html#disabling-http-object-caching
> The docs tell you to turn off HTTP proxying via proxy.config.http.enabled, 
> instead of telling you to turn off caching via proxy.config.http.cache.http.
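> That is, the page should presumably recommend something like:
> {code}
> CONFIG proxy.config.http.cache.http INT 0
> {code}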



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (TS-4286) Wrong configuration instructions in docs for disabling caching

2016-03-19 Thread Jon Sime (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TS-4286 started by Jon Sime.

> Wrong configuration instructions in docs for disabling caching
> --
>
> Key: TS-4286
> URL: https://issues.apache.org/jira/browse/TS-4286
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Docs
>Reporter: Daniel Xu
>Assignee: Jon Sime
> Fix For: Docs
>
>
> https://docs.trafficserver.apache.org/en/latest/admin-guide/configuration/cache-basics.en.html#disabling-http-object-caching
> The docs tell you to turn off HTTP proxying via proxy.config.http.enabled, 
> instead of telling you to turn off caching via proxy.config.http.cache.http.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-4287) Add a simple and dead server retry feature to Parent Selection

2016-03-19 Thread John Rushford (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Rushford reassigned TS-4287:
-

Assignee: John Rushford

> Add a simple and dead server retry feature to Parent Selection
> --
>
> Key: TS-4287
> URL: https://issues.apache.org/jira/browse/TS-4287
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Parent Proxy
>Reporter: John Rushford
>Assignee: John Rushford
>
> Parent Selection now supports the use of origin servers in the parent list.  
> It would be useful to add a simple retry feature that would try another 
> parent when a 404 response is received, for content that may not be available 
> on one origin but may be available on another parent origin in the parent 
> list. This can happen when packagers are pushing live video chunks to 
> multiple origins and a request comes into an origin that has not yet 
> received the requested file.
> It would also be useful to mark a parent origin down and retry the request 
> using another parent origin if a 503 Service Unavailable response, or some 
> other application 5xx response, is received.
> A pull request will follow this ticket that adds the following configuration 
> parameters to parent.config and that implements the retry functionality.
> The following new configuration parameters are available in parent.config 
> when parent_is_proxy is false (parent origin):
> 'parent_retry', 'dead_server_retry_responses', 'max_simple_retries', and 
> 'max_dead_server_retries'.
> 'parent_retry' - May be set to 'simple_retry', 'dead_server_retry', or 
> 'both'.  parent_retry is disabled by default and may only be enabled with one 
> of these values when 'parent_is_proxy' is false (parent origin).
> If 'parent_retry' is set to 'simple_retry', another parent will be retried 
> when a 404 response is received from the parent origin.  By default only one 
> retry will be attempted for a 404 response, but this may be increased from 1 
> to 5 with the 'max_simple_retries' parameter.
> If 'parent_retry' is set to 'dead_server_retry' and a response is received 
> that is contained in a configurable list of response codes (503 by default), 
> the parent that returned the code is marked down and another parent is 
> retried.  By default only one retry will be attempted, but this may be 
> increased from 1 to 5 using the 'max_dead_server_retries' configuration 
> parameter.
> 'dead_server_retry_responses' is an optional comma-separated list of response 
> codes that may be configured to trigger a dead_server_retry when 
> 'parent_retry' is set to 'dead_server_retry'.  If not specified in 
> parent.config, the default response code is 503.
> If 'parent_retry' is set to 'both', then both simple_retry and 
> dead_server_retry are enabled for this list of parents.
> 'max_simple_retries' is set to 1 by default but may be increased within the 
> range 1 to 5.
> 'max_dead_server_retries' is set to 1 by default but may be increased within 
> the range 1 to 5.
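> As a sketch, a parent.config rule using these parameters might look like 
> this (the hostnames and the extra response code are placeholders):
> {code}
> dest_domain=example.com parent="origin1.example.com:80, origin2.example.com:80" parent_is_proxy=false parent_retry=both dead_server_retry_responses="503,502" max_simple_retries=2 max_dead_server_retries=2
> {code}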



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4284) Modernize the geoip_acl plugin

2016-03-19 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-4284:
--
Description: 
I'd like to

1) Refactor it a little bit, to make it a little cleaner.

2) Add support for IPv6

  was:
I'd like to

1) Refactor it a little bit, to make it a little cleaner.

2) Add support for IPv6

3) Possibly, add support for the newer MaxMind APIs.


> Modernize the geoip_acl plugin
> --
>
> Key: TS-4284
> URL: https://issues.apache.org/jira/browse/TS-4284
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Plugins
>Reporter: Leif Hedstrom
>Assignee: Leif Hedstrom
> Fix For: 6.2.0
>
>
> I'd like to
> 1) Refactor it a little bit, to make it a little cleaner.
> 2) Add support for IPv6



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (TS-4282) Socket read failure isn't handled in AdminClient.pm

2016-03-19 Thread Alan M. Carroll (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan M. Carroll resolved TS-4282.
-
Resolution: Fixed

> Socket read failure isn't handled in AdminClient.pm
> ---
>
> Key: TS-4282
> URL: https://issues.apache.org/jira/browse/TS-4282
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Management API
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
> Fix For: 6.2.0
>
>
> {{_do_read}} can return {{undef}} but its caller {{get_stat}} doesn't check 
> and so can generate errors by attempting to dereference {{undef}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3999) Add option to go direct for specific parent entry

2016-03-19 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15200748#comment-15200748
 ] 

Leif Hedstrom commented on TS-3999:
---

Talked with [~amc] again, and I think we agreed that a solution allowing for 
both explicit and implicit self detection would be doable. Something like:

1) If explicitly given a self identifier, use that (this then requires a unique 
config file for each host in the cluster).

Otherwise:

2) We apply a heuristic, trying to use either the hostname or the interface 
IPs, and compare that with the selected parent.

The heuristics would likely not cover every possible setup or environment, but 
we feel they could handle a good portion of common setups. For setups where the 
heuristics don't work, the operator simply has to use the explicit self 
identification tagging.

> Add option to go direct for specific parent entry
> -
>
> Key: TS-3999
> URL: https://issues.apache.org/jira/browse/TS-3999
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Parent Proxy
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
>  Labels: A
> Fix For: 7.0.0
>
>
> We want to use parent proxying in a peer relationship, so that a host can be 
> both a parent and a child in general, but only for a specific URL. Currently 
> this can be done using the CARP plugin, but that can create some difficulties. 
> Being able to specify that one parent in the list actually goes direct to the 
> origin would make that much easier.
> The current suggestion is to overload the port specifier as an indicator of 
> direct. For example, if you had three hosts in a pod, {{rikku}}, {{tidus}}, 
> and {{yuna}}, then you would configure the parent proxying on {{tidus}} as
> {code}
> "tidus:@direct, yuna:8080, rikku:8080"
> {code}
> while the configuration on {{yuna}} would be
> {code}
> "tidus:8080, yuna:@direct, rikku:8080"
> {code}
> Similarly, for {{rikku}} the port would be changed to "@direct" just for the 
> "rikku" parent. I discussed several configuration options for this with 
> [~dcarlin], and putting the override directly in the parent list was his 
> preferred mechanism. Note that in order to have consistency between hosts in 
> a pod, the list of names *must* be exactly the same across all the peers. In 
> this case, it must be "tidus, yuna, rikku" for all three, or loops will occur 
> because of different hash seeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4115) Add a multi origin hierarchy to parent selection.

2016-03-19 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15200743#comment-15200743
 ] 

Leif Hedstrom commented on TS-4115:
---

[~jrushford] Can this be closed?

> Add a multi origin hierarchy to parent selection.
> -
>
> Key: TS-4115
> URL: https://issues.apache.org/jira/browse/TS-4115
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: John Rushford
>Assignee: John Rushford
>Priority: Minor
> Fix For: 6.2.0
>
>
> Parent Selection is currently used to create a hierarchy of cache server 
> groups.  It would be useful to create an origin server hierarchy through 
> parent selection, so that, say, mid-tier caches could distribute requests to 
> multiple origin servers, possibly at different sites, using round-robin load 
> balancing or consistent hashing.
> FEATURE DESCRIPTION:
> A pull request accompanies this ticket that adds this feature to parent 
> selection.  A new configuration parameter "parent_is_proxy" is available in 
> parent.config.  parent_is_proxy=true is the default and indicates that the 
> lists of parents and secondary_parents are the usual parent caches; when set 
> to false, it indicates that the parents are origin servers.
> When marked as origin servers, the server FQDN is removed from the HTTP GET 
> request so that only the relative path is in the request.  Note that if 
> connectivity fails to all the origins listed, or all are marked down, there 
> is no go-direct behavior, as these are the origins.
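> As a sketch, such a rule in parent.config might look like the following (the 
> hostnames are placeholders):
> {code}
> dest_domain=example.com parent="origin1.example.com:80, origin2.example.com:80" round_robin=strict parent_is_proxy=false
> {code}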



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4271) metrics fail to clear

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15198708#comment-15198708
 ] 

ASF subversion and git services commented on TS-4271:
-

Commit cfcf6c64ee0bc9251f0817214dc04552d190bb41 in trafficserver's branch 
refs/heads/master from [~jpe...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=cfcf6c6 ]

TS-4271: Fix clearing metrics.


> metrics fail to clear
> -
>
> Key: TS-4271
> URL: https://issues.apache.org/jira/browse/TS-4271
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Management API, Metrics
>Reporter: James Peach
>Assignee: James Peach
> Fix For: 6.2.0
>
>
> {{traffic_ctl metrics clear}} fails to clear metrics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-4284) Modernize the geoip_acl plugin

2016-03-19 Thread Leif Hedstrom (JIRA)
Leif Hedstrom created TS-4284:
-

 Summary: Modernize the geoip_acl plugin
 Key: TS-4284
 URL: https://issues.apache.org/jira/browse/TS-4284
 Project: Traffic Server
  Issue Type: Improvement
  Components: Plugins
Reporter: Leif Hedstrom


I'd like to

1) Refactor it a little bit, to make it a little cleaner.

2) Add support for IPv6

3) Possibly, add support for the newer MaxMind APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4115) Add a multi origin hierarchy to parent selection.

2016-03-19 Thread John Rushford (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15200795#comment-15200795
 ] 

John Rushford commented on TS-4115:
---

Leif,

Yes, I'll go in and close it.

thanks




> Add a multi origin hierarchy to parent selection.
> -
>
> Key: TS-4115
> URL: https://issues.apache.org/jira/browse/TS-4115
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: John Rushford
>Assignee: John Rushford
>Priority: Minor
> Fix For: 6.2.0
>
>
> Parent Selection is currently used to create a hierarchy of cache server 
> groups.  It would be useful to create an origin server hierarchy through 
> parent selection, so that, say, mid-tier caches could distribute requests to 
> multiple origin servers, possibly at different sites, using round-robin load 
> balancing or consistent hashing.
> FEATURE DESCRIPTION:
> A pull request accompanies this ticket that adds this feature to parent 
> selection.  A new configuration parameter "parent_is_proxy" is available in 
> parent.config.  parent_is_proxy=true is the default and indicates that the 
> lists of parents and secondary_parents are the usual parent caches; when set 
> to false, it indicates that the parents are origin servers.
> When marked as origin servers, the server FQDN is removed from the HTTP GET 
> request so that only the relative path is in the request.  Note that if 
> connectivity fails to all the origins listed, or all are marked down, there 
> is no go-direct behavior, as these are the origins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-4285) Add support for overriding proxy.config.http.attach_server_session_to_client

2016-03-19 Thread Phillip Moore (JIRA)
Phillip Moore created TS-4285:
-

 Summary: Add support for overriding 
proxy.config.http.attach_server_session_to_client
 Key: TS-4285
 URL: https://issues.apache.org/jira/browse/TS-4285
 Project: Traffic Server
  Issue Type: Improvement
  Components: TS API
Reporter: Phillip Moore


I need to enable the "proxy.config.http.attach_server_session_to_client" 
setting for a single remap rule in our configuration. Currently this isn't one 
of the settings that can be overridden via the conf_remap plugin.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4087) H2 flexible resource limitation

2016-03-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15202370#comment-15202370
 ] 

ASF GitHub Bot commented on TS-4087:


Github user bryancall commented on the pull request:

https://github.com/apache/trafficserver/pull/485#issuecomment-198585206
  
How would this prevent a DDoS attack if clients established a bunch of 
connections and then made requests up to the max number of streams per 
connection (100)?  I think it would be better to dynamically adjust the max 
streams when a new stream is created.

This is better than nothing, which is what we have now, so I am OK with it. :+1: 


> H2 flexible resource limitation
> ---
>
> Key: TS-4087
> URL: https://issues.apache.org/jira/browse/TS-4087
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: HTTP/2
>Reporter: Ryo Okubo
>Assignee: Masaori Koshiba
> Fix For: 6.2.0
>
>
> The current H2 implementation depends on FetchSM and PluginVC to forward 
> requests, but their memory footprint is very high, which may leave ATS 
> vulnerable to DoS attacks.
> As simple ways to avoid the problem, we can use two limitations, 
> _proxy.config.net.connections_throttle_ and 
> _proxy.config.http2.max_concurrent_streams_in_. But lowering 
> _proxy.config.net.connections_throttle_ also lowers the number of acceptable 
> HTTP/1.1 requests, and reducing 
> _proxy.config.http2.max_concurrent_streams_in_ restricts the benefits of H2.
> I'd like to propose a more flexible resource limitation for the current H2 
> implementation, based on the number of active H2 streams: add an upper limit 
> on active H2 streams, and if it is exceeded, ATS sends a low 
> SETTINGS_MAX_CONCURRENT_STREAMS value to clients and/or RST_STREAM frames.
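> For reference, the two existing knobs are set in records.config (the values 
> shown here are only illustrative):
> {code}
> CONFIG proxy.config.net.connections_throttle INT 30000
> CONFIG proxy.config.http2.max_concurrent_streams_in INT 100
> {code}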



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (TS-4256) fix abort with ats-inliner plug-in

2016-03-19 Thread Alan M. Carroll (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan M. Carroll resolved TS-4256.
-
Resolution: Fixed

> fix abort with ats-inliner plug-in
> --
>
> Key: TS-4256
> URL: https://issues.apache.org/jira/browse/TS-4256
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Plugins
>Reporter: Daniel Vitor Morilha
>Assignee: Daniel Vitor Morilha
> Fix For: 6.2.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4256) fix abort with ats-inliner plug-in

2016-03-19 Thread Alan M. Carroll (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan M. Carroll updated TS-4256:

Assignee: Daniel Vitor Morilha

> fix abort with ats-inliner plug-in
> --
>
> Key: TS-4256
> URL: https://issues.apache.org/jira/browse/TS-4256
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Plugins
>Reporter: Daniel Vitor Morilha
>Assignee: Daniel Vitor Morilha
> Fix For: 6.2.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4256) fix abort with ats-inliner plug-in

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15202835#comment-15202835
 ] 

ASF subversion and git services commented on TS-4256:
-

Commit 1e9c9484cacbfaa4ded7ad2c7e7baad74cb0f4f6 in trafficserver's branch 
refs/heads/master from [~dmorilha]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=1e9c948 ]

[TS-4256] flagging aborts and avoiding consuming the reader
This closes #511.


> fix abort with ats-inliner plug-in
> --
>
> Key: TS-4256
> URL: https://issues.apache.org/jira/browse/TS-4256
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Plugins
>Reporter: Daniel Vitor Morilha
> Fix For: 6.2.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-4290) Cleanup header_rewrite, such that we don't pollute the statement base class

2016-03-19 Thread Leif Hedstrom (JIRA)
Leif Hedstrom created TS-4290:
-

 Summary: Cleanup header_rewrite, such that we don't pollute the 
statement base class
 Key: TS-4290
 URL: https://issues.apache.org/jira/browse/TS-4290
 Project: Traffic Server
  Issue Type: Improvement
  Components: Plugins
Reporter: Leif Hedstrom


There's a bunch of junk in the statement base class, which is clearly specific 
to the implementation of the various feature-specific classes. We should move 
this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4290) Cleanup header_rewrite, such that we don't pollute the statement base class

2016-03-19 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-4290:
--
Fix Version/s: 6.2.0

> Cleanup header_rewrite, such that we don't pollute the statement base class
> ---
>
> Key: TS-4290
> URL: https://issues.apache.org/jira/browse/TS-4290
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Plugins
>Reporter: Leif Hedstrom
> Fix For: 6.2.0
>
>
> There's a bunch of junk in the statement base class, which is clearly 
> specific to the implementation of the various feature-specific classes. We 
> should move this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4056) MemLeak: ~NetAccept() do not free alloc_cache(vc)

2016-03-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15202407#comment-15202407
 ] 

ASF GitHub Bot commented on TS-4056:


Github user asfgit closed the pull request at:

https://github.com/apache/trafficserver/pull/381


> MemLeak: ~NetAccept() do not free alloc_cache(vc)
> -
>
> Key: TS-4056
> URL: https://issues.apache.org/jira/browse/TS-4056
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 6.1.0
>Reporter: Oknet Xu
>Assignee: Bryan Call
>  Labels: review
> Fix For: 6.2.0
>
>
> NetAccept::alloc_cache is a void pointer used in net_accept().
> The alloc_cache is not released after the NetAccept is cancelled.
> Having looked through all the code, I believe the "alloc_cache" is a bad idea 
> here.
> I created a pull request on GitHub: 
> https://github.com/apache/trafficserver/pull/366
> It also adds a condition check for vc == NULL after allocate_vc().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-4290) Cleanup header_rewrite, such that we don't pollute the statement base class

2016-03-19 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom reassigned TS-4290:
-

Assignee: Leif Hedstrom

> Cleanup header_rewrite, such that we don't pollute the statement base class
> ---
>
> Key: TS-4290
> URL: https://issues.apache.org/jira/browse/TS-4290
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Plugins
>Reporter: Leif Hedstrom
>Assignee: Leif Hedstrom
> Fix For: 6.2.0
>
>
> There's a bunch of junk in the statement base class, which is clearly 
> specific to the implementation of the various feature-specific classes. We 
> should move this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-4115) Add a multi origin hierarchy to parent selection.

2016-03-19 Thread John Rushford (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Rushford closed TS-4115.
-
Resolution: Fixed

This feature has been implemented and the code has been committed to the master 
branch.

> Add a multi origin hierarchy to parent selection.
> -
>
> Key: TS-4115
> URL: https://issues.apache.org/jira/browse/TS-4115
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: John Rushford
>Assignee: John Rushford
>Priority: Minor
> Fix For: 6.2.0
>
>
> Parent Selection is currently used to create a hierarchy of cache server 
> groups.  It would be useful to create an origin server hierarchy through 
> parent selection, so that, say, mid-tier caches could distribute requests to 
> multiple origin servers, possibly at different sites, using round-robin load 
> balancing or consistent hashing.
> FEATURE DESCRIPTION:
> A pull request accompanies this ticket that adds this feature to parent 
> selection.  A new configuration parameter "parent_is_proxy" is available in 
> parent.config.  parent_is_proxy=true is the default and indicates that the 
> lists of parents and secondary_parents are the usual parent caches; when set 
> to false, it indicates that the parents are origin servers.
> When marked as origin servers, the server FQDN is removed from the HTTP GET 
> request so that only the relative path is in the request.  Note that if 
> connectivity fails to all the origins listed, or all are marked down, there 
> is no go-direct behavior, as these are the origins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-4282) Socket read failure isn't handled in AdminClient.pm

2016-03-19 Thread Alan M. Carroll (JIRA)
Alan M. Carroll created TS-4282:
---

 Summary: Socket read failure isn't handled in AdminClient.pm
 Key: TS-4282
 URL: https://issues.apache.org/jira/browse/TS-4282
 Project: Traffic Server
  Issue Type: Bug
  Components: Management API
Reporter: Alan M. Carroll


{{_do_read}} can return {{undef}} but its caller {{get_stat}} doesn't check and 
so can generate errors by attempting to dereference {{undef}}.
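A minimal sketch of the missing guard (everything around the {{_do_read}} call 
is assumed for illustration):
{code}
sub get_stat {
  my ($self, $name) = @_;
  my $response = $self->_do_read();
  # _do_read returns undef on a socket read failure; bail out here instead
  # of dereferencing undef further down.
  return unless defined $response;
  # ... existing parsing of $response ...
}
{code}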



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4282) Socket read failure isn't handled in AdminClient.pm

2016-03-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15198294#comment-15198294
 ] 

ASF GitHub Bot commented on TS-4282:


Github user SolidWallOfCode commented on the pull request:

https://github.com/apache/trafficserver/pull/527#issuecomment-197579454
  
Nope, it's wrong - let me fix it (should use `defined` rather than `== 
undef`). Too long since I really worked in Perl.


> Socket read failure isn't handled in AdminClient.pm
> ---
>
> Key: TS-4282
> URL: https://issues.apache.org/jira/browse/TS-4282
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Management API
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
> Fix For: 6.2.0
>
>
> {{_do_read}} can return {{undef}} but its caller {{get_stat}} doesn't check 
> and so can generate errors by attempting to dereference {{undef}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3999) Add option to go direct for specific parent entry

2016-03-19 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3999:
--
Labels: A  (was: )

> Add option to go direct for specific parent entry
> -
>
> Key: TS-3999
> URL: https://issues.apache.org/jira/browse/TS-3999
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Parent Proxy
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
>  Labels: A
> Fix For: 7.0.0
>
>
> We want to use parent proxying in a peer relationship, so that a host can be 
> both a parent and a child in general, but only for a specific URL. Currently 
> this can be done using the CARP plugin, but that can create some difficulties. 
> Being able to specify that one parent in the list actually goes direct to the 
> origin would make that much easier.
> The current suggestion is to overload the port specifier as an indicator of 
> direct. For example, if you had three hosts in a pod, {{rikku}}, {{tidus}}, 
> and {{yuna}}, then you would configure the parent proxying on {{tidus}} as
> {code}
> "tidus:@direct, yuna:8080, rikku:8080"
> {code}
> while the configuration on {{yuna}} would be
> {code}
> "tidus:8080, yuna:@direct, rikku:8080"
> {code}
> Similarly, for {{rikku}} the port would be changed to "@direct" just for the 
> "rikku" parent. I discussed several configuration options for this with 
> [~dcarlin], and putting the override directly in the parent list was his 
> preferred mechanism. Note that in order to have consistency between hosts in 
> a pod, the list of names *must* be exactly the same across all the peers. In 
> this case, it must be "tidus, yuna, rikku" for all three, or loops will occur 
> because of different hash seeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4115) Add a multi origin hierarchy to parent selection.

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15197599#comment-15197599
 ] 

ASF subversion and git services commented on TS-4115:
-

Commit cd04bda906c5bcba0fc810ddb6ccc9f912fd5909 in trafficserver's branch 
refs/heads/master from John J. Rushford
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=cd04bda ]

TS-4115: fix clang-format warning.


> Add a multi origin hierarchy to parent selection.
> -
>
> Key: TS-4115
> URL: https://issues.apache.org/jira/browse/TS-4115
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: John Rushford
>Assignee: John Rushford
>Priority: Minor
> Fix For: 6.2.0
>
>
> Parent Selection is currently used to create a hierarchy of cache server 
> groups.  It would be useful to create an origin server hierarchy through 
> parent selection, so that, say, mid-tier caches could distribute requests to 
> multiple origin servers, possibly at different sites, using round-robin load 
> balancing or consistent hashing.
> FEATURE DESCRIPTION:
> A pull request accompanies this ticket that adds this feature to parent 
> selection.  A new configuration parameter "parent_is_proxy" is available in 
> parent.config.  parent_is_proxy=true is the default and indicates that the 
> lists of parents and secondary_parents are the usual parent caches; when set 
> to false, it indicates that the parents are origin servers.
> When marked as origin servers, the server FQDN is removed from the HTTP GET 
> request so that only the relative path is in the request.  Note that if 
> connectivity fails to all the origins listed, or all are marked down, there 
> is no go-direct behavior, as these are the origins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4056) MemLeak: ~NetAccept() do not free alloc_cache(vc)

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15202406#comment-15202406
 ] 

ASF subversion and git services commented on TS-4056:
-

Commit fbb5c07162bc6d3a8b44bc2a0d30b6cbeb2153bc in trafficserver's branch 
refs/heads/master from Oknet
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=fbb5c07 ]

TS-4056: Remove cached netVC from NetAccept.
This close #381.


> MemLeak: ~NetAccept() do not free alloc_cache(vc)
> -
>
> Key: TS-4056
> URL: https://issues.apache.org/jira/browse/TS-4056
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 6.1.0
>Reporter: Oknet Xu
>Assignee: Bryan Call
>  Labels: review
> Fix For: 6.2.0
>
>
> NetAccept::alloc_cache is a void pointer used in net_accept().
> The alloc_cache is not released after the NetAccept is cancelled.
> Having looked through all the code, I believe the "alloc_cache" is a bad idea 
> here.
> I created a pull request on GitHub: 
> https://github.com/apache/trafficserver/pull/366
> It also adds a condition check for vc == NULL after allocate_vc().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-4281) Slow log and milestone updates

2016-03-19 Thread Bryan Call (JIRA)
Bryan Call created TS-4281:
--

 Summary: Slow log and milestone updates
 Key: TS-4281
 URL: https://issues.apache.org/jira/browse/TS-4281
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core, HTTP
Reporter: Bryan Call


Add additional milestones/slow log entries for:
* Receiving the POST body from the client
* Sending the POST body to the origin

Add additional slow log information to help diagnose the request:
* Origin IP and port
* Method
* Protocol used (HTTP/1.1 or HTTP/2)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4261) Split statistic update logic from process handling.

2016-03-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15198719#comment-15198719
 ] 

ASF GitHub Bot commented on TS-4261:


Github user asfgit closed the pull request at:

https://github.com/apache/trafficserver/pull/516


> Split statistic update logic from process handling.
> ---
>
> Key: TS-4261
> URL: https://issues.apache.org/jira/browse/TS-4261
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Manager
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
> Fix For: 6.2.0
>
>
> The logic for updating statistics is mixed in the same source file as process 
> management. This creates unnecessary circular dependencies if lower level 
> components use statistics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (TS-4261) Split statistic update logic from process handling.

2016-03-19 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach resolved TS-4261.
-
Resolution: Fixed

> Split statistic update logic from process handling.
> ---
>
> Key: TS-4261
> URL: https://issues.apache.org/jira/browse/TS-4261
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Manager
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
> Fix For: 6.2.0
>
>
> The logic for updating statistics is mixed in the same source file as process 
> management. This creates unnecessary circular dependencies if lower level 
> components use statistics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: clang-format #642

2016-03-19 Thread jenkins
See 

Changes:

[John_Rushford] TS-4115: Add a multi origin hierarchy to parent selection.

--
[...truncated 1606 lines...]
./proxy/logging/LogHost.h
./proxy/logging/LogAccessHttp.h
./proxy/logging/LogCollationAccept.h
./proxy/logging/LogBuffer.h
./proxy/logging/LogLimits.h
./proxy/logging/LogAccessTest.cc
./proxy/logging/LogAccess.cc
./proxy/logging/LogCollationHostSM.cc
./proxy/logging/Log.h
./proxy/logging/LogAccess.h
./proxy/logging/LogAccessTest.h
./proxy/logging/LogStandalone.cc
./proxy/logging/LogField.cc
./proxy/logging/LogConfig.h
./proxy/logging/LogCollationClientSM.h
./proxy/ParentConsistentHash.cc
./proxy/StufferUdpReceiver.cc
./proxy/ICPevents.h
./proxy/logstats.cc
./proxy/UDPAPITest.cc
./proxy/api/ts/InkAPIPrivateIOCore.h
./proxy/api/ts/ts.h
./proxy/api/ts/experimental.h
./proxy/api/ts/TsException.h
./proxy/api/ts/remap.h
./proxy/Milestones.h
./proxy/Transform.cc
./proxy/test_xml_parser.cc
./proxy/AbstractBuffer.h
./proxy/CoreUtils.h
./proxy/ControlMatcher.cc
./proxy/ICPStats.cc
./proxy/TransformInternal.h
./proxy/Main.h
./proxy/ICPlog.h
./proxy/StatPages.cc
./proxy/ProtocolProbeSessionAccept.cc
./proxy/logcat.cc
./proxy/AbstractBuffer.cc
./proxy/TestDNS.cc
./proxy/Plugin.h
./proxy/congest/Congestion.h
./proxy/congest/CongestionTest.cc
./proxy/congest/CongestionStats.h
./proxy/congest/CongestionDB.h
./proxy/congest/CongestionStats.cc
./proxy/congest/CongestionDB.cc
./proxy/congest/Congestion.cc
./proxy/congest/MT_hashtable.h
./proxy/TestPreProc.cc
./proxy/ProtocolProbeSessionAccept.h
./proxy/FetchSM.h
./proxy/Crash.cc
./proxy/ParentRoundRobin.h
./proxy/InkAPIInternal.h
./proxy/Show.h
./proxy/spdy/SpdyDefs.h
./proxy/spdy/SpdySessionAccept.h
./proxy/spdy/SpdySessionAccept.cc
./proxy/spdy/SpdyClientSession.h
./proxy/spdy/SpdyClientSession.cc
./proxy/spdy/SpdyCallbacks.cc
./proxy/spdy/SpdyCommon.h
./proxy/spdy/SpdyCommon.cc
./proxy/spdy/SpdyCallbacks.h
./proxy/PluginVC.h
./proxy/RegressionSM.cc
./proxy/FetchSM.cc
./proxy/TestClusterHash.cc
./proxy/ParentRoundRobin.cc
./proxy/TimeTrace.h
./proxy/ParentSelection.h
./proxy/InkIOCoreAPI.cc
./proxy/Transform.h
./proxy/InkAPI.cc
./proxy/RegressionSM.h
./proxy/ProxyClientSession.cc
./proxy/CoreUtils.cc
./proxy/shared/Error.h
./proxy/shared/UglyLogStubs.cc
./proxy/shared/DiagsConfig.h
./proxy/shared/InkXml.cc
./proxy/shared/DiagsConfig.cc
./proxy/shared/InkXml.h
./proxy/shared/Error.cc
./proxy/UnixCompletionUtil.h
./proxy/ICPConfig.cc
./proxy/IPAllow.h
./proxy/TestProxy.cc
./proxy/ParentSelection.cc
./proxy/HttpTransStats.h
./proxy/PluginVC.cc
./proxy/EventName.h
./proxy/TestSimpleProxy.cc
./proxy/ConfigParse.h
./proxy/Prefetch.h
./proxy/http/TestUrl.cc
./proxy/http/HttpBodyFactory.h
./proxy/http/HttpClientSession.h
./proxy/http/HttpTransactHeaders.cc
./proxy/http/HttpTransact.cc
./proxy/http/HttpPages.h
./proxy/http/HttpProxyServerMain.cc
./proxy/http/HttpServerSession.h
./proxy/http/HttpPages.cc
./proxy/http/HttpTunnel.cc
./proxy/http/TestHttpTransact.cc
./proxy/http/test_socket_close.cc
./proxy/http/HttpSM.cc
./proxy/http/HttpConfig.h
./proxy/http/HttpUpdateTester.cc
./proxy/http/HttpSessionManager.h
./proxy/http/HttpProxyAPIEnums.h
./proxy/http/HttpCacheSM.cc
./proxy/http/HttpTransactHeaders.h
./proxy/http/HttpTransactCache.h
./proxy/http/HttpTransactCache.cc
./proxy/http/HttpProxyServerMain.h
./proxy/http/HttpTunnel.h
./proxy/http/HttpUpdateSM.h
./proxy/http/HttpDebugNames.h
./proxy/http/HttpBodyFactory.cc
./proxy/http/HttpClientSession.cc
./proxy/http/HttpTransact.h
./proxy/http/HttpConfig.cc
./proxy/http/HttpServerSession.cc
./proxy/http/HttpUpdateSM.cc
./proxy/http/HttpConnectionCount.h
./proxy/http/testheaders.cc
./proxy/http/HttpConnectionCount.cc
./proxy/http/HttpSessionAccept.h
./proxy/http/HttpDebugNames.cc
./proxy/http/RegressionHttpTransact.cc
./proxy/http/HttpSessionAccept.cc
./proxy/http/remap/RemapConfig.cc
./proxy/http/remap/UrlMapping.h
./proxy/http/remap/RemapProcessor.h
./proxy/http/remap/UrlRewrite.h
./proxy/http/remap/RemapPlugins.cc
./proxy/http/remap/UrlMapping.cc
./proxy/http/remap/AclFiltering.h
./proxy/http/remap/RemapPluginInfo.h
./proxy/http/remap/UrlMappingPathIndex.h
./proxy/http/remap/RemapProcessor.cc
./proxy/http/remap/RemapPlugins.h
./proxy/http/remap/UrlRewrite.cc
./proxy/http/remap/RemapPluginInfo.cc
./proxy/http/remap/RemapConfig.h
./proxy/http/remap/AclFiltering.cc
./proxy/http/remap/UrlMappingPathIndex.cc
./proxy/http/HttpSessionManager.cc
./proxy/http/HttpCacheSM.h
./proxy/http/HttpSM.h
./proxy/TestRegex.cc
./proxy/IPAllow.cc
./proxy/ICPProcessor.cc
./proxy/ReverseProxy.cc
./proxy/ControlMatcher.h
./proxy/SocksProxy.cc
./proxy/UDPAPITest.h
./proxy/UserNameCacheTest.h
./proxy/ICP.h
./proxy/ProxyClientSession.h
./proxy/ParentConsistentHash.h
./proxy/CacheControl.h
./proxy/InkAPITestTool.cc
./proxy/ControlBase.cc
./proxy/http2/Http2DebugNames.cc
./proxy/http2/Http2ClientSession.cc
./proxy/http2/RegressionHPACK.

[jira] [Commented] (TS-4282) Socket read failure isn't handled in AdminClient.pm

2016-03-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15198290#comment-15198290
 ] 

ASF GitHub Bot commented on TS-4282:


Github user SolidWallOfCode commented on the pull request:

https://github.com/apache/trafficserver/pull/527#issuecomment-197578795
  
I took out {{max_read_attempts}} as well because it was never used. Perhaps 
the original intent was to call {{_do_read}} that many times.


> Socket read failure isn't handled in AdminClient.pm
> ---
>
> Key: TS-4282
> URL: https://issues.apache.org/jira/browse/TS-4282
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Management API
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
> Fix For: 6.2.0
>
>
> {{_do_read}} can return {{undef}} but its caller {{get_stat}} doesn't check 
> and so can generate errors by attempting to dereference {{undef}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-4280) Refactor gzip plugin to eliminate memory leak and reduce global hooks

2016-03-19 Thread Leif Hedstrom (JIRA)
Leif Hedstrom created TS-4280:
-

 Summary: Refactor gzip plugin to eliminate memory leak and reduce 
global hooks
 Key: TS-4280
 URL: https://issues.apache.org/jira/browse/TS-4280
 Project: Traffic Server
  Issue Type: Bug
  Components: Plugins
Reporter: Leif Hedstrom


I'd like to achieve three things with this refactoring:

1) Reduce the number of global hooks (this makes TS-4147 nicer)

2) Eliminate all use of TXN data slots (it currently uses three, which is 
excessive; thanks to 1), we can get away with using none).

3) Fix the memory leaks on configuration reloads, by properly ref-counting the 
HostConfiguration objects through the TXN's lifetime (a sketch of one possible 
approach follows below).
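
A minimal sketch of what 3) could look like (my illustration, not the actual 
gzip plugin code): a hypothetical {{HostConfiguration}} carries an atomic 
reference count, each TXN pins the active config in a reserved argument slot, 
and the reference is dropped on {{TS_EVENT_HTTP_TXN_CLOSE}}, so a reload can 
swap in a new config and the old one is freed only when its last in-flight TXN 
finishes:

{code}
// Sketch only -- not the actual gzip plugin code. HostConfiguration,
// current_config, and "gzip-sketch" are invented names; the TS API calls
// (TSHttpArgIndexReserve, TSHttpTxnArgSet/Get, TSHttpHookAdd) are real.
#include <ts/ts.h>
#include <atomic>

struct HostConfiguration {
  std::atomic<int> refcount{1};

  void hold() { refcount.fetch_add(1, std::memory_order_relaxed); }
  void release()
  {
    if (refcount.fetch_sub(1, std::memory_order_acq_rel) == 1) {
      delete this; // whoever drops the last reference frees the config
    }
  }
};

static std::atomic<HostConfiguration *> current_config{nullptr};
static int arg_idx; // reserved once at plugin init

static int
txn_hook(TSCont /* contp */, TSEvent event, void *edata)
{
  TSHttpTxn txnp = static_cast<TSHttpTxn>(edata);

  if (event == TS_EVENT_HTTP_READ_REQUEST_HDR) {
    // A real implementation must make load+hold atomic (e.g. under a lock)
    // so a concurrent reload cannot free the config in between.
    HostConfiguration *hc = current_config.load();
    if (hc) {
      hc->hold();                         // pin the config for this TXN
      TSHttpTxnArgSet(txnp, arg_idx, hc); // the only per-TXN slot needed
    }
  } else if (event == TS_EVENT_HTTP_TXN_CLOSE) {
    auto *hc = static_cast<HostConfiguration *>(TSHttpTxnArgGet(txnp, arg_idx));
    if (hc) {
      hc->release(); // freed here if a reload already dropped its reference
    }
  }
  TSHttpTxnReenable(txnp, TS_EVENT_HTTP_CONTINUE);
  return 0;
}

void
TSPluginInit(int /* argc */, const char ** /* argv */)
{
  // TSPluginRegister() omitted for brevity.
  TSHttpArgIndexReserve("gzip-sketch", "ref-counted host config", &arg_idx);
  TSCont contp = TSContCreate(txn_hook, nullptr);
  TSHttpHookAdd(TS_HTTP_READ_REQUEST_HDR_HOOK, contp);
  TSHttpHookAdd(TS_HTTP_TXN_CLOSE_HOOK, contp);
}
{code}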




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4283) Remove some remnant comments and code from 64-bit conversion

2016-03-19 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-4283:
--
Fix Version/s: 6.2.0

> Remove some remnant comments and code from 64-bit conversion
> 
>
> Key: TS-4283
> URL: https://issues.apache.org/jira/browse/TS-4283
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Core
>Reporter: Leif Hedstrom
> Fix For: 6.2.0
>
>
> Some of these comments no longer apply, so we should just remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4279) ats fallen into dead loop for cache directory overflow

2016-03-19 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4279:
---
Description: 
CPU: 40 cores, Mem: 120GB, Disk: 1*300GB sys + 11 * 899GB data(naked), OS: 
CentOS 6.6

records.config:
CONFIG proxy.config.cache.min_average_object_size INT 1048576
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
CONFIG proxy.config.cache.ram_cache.size INT 64424509440

storage.config:
/dev/sdc id=cache.disk.1

I encountered a dead-loop situation with ATS 5.3.1 on two production hosts; a 
burst of warnings shows up in diags.log like this:
{code}
[Mar 16 13:04:32.730] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.732] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.733] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.735] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.737] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.739] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.742] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.744] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.747] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.750] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.753] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.756] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
{code}
ATS restarts every several hours, and the TIME_WAIT count is huge compared to 
the ESTABLISHED TCP connection count.
The following is the current dir snapshot of disk /dev/sdc on one of the hosts:
{code}
Directory for [cache.disk.1 172032:109741163]
Bytes: 8573600
Segments:  14
Buckets:   15310
Entries:   857360
Full:  852904
Empty: 4085
Stale: 0
Free:  371
Bucket Fullness:   4085158003204441621 
  42175331372223212605 
   
Segment Fullness:  60903 60918 60914 60947 60956 
   60947 60872 60943 60918 60927 
   60858 60917 60927 60957 
Freelist Fullness:45302713 0 
   789 53212 
  83 020 8 
{code}
I wonder why the value of freelist[4] is zero, which causes the ATS dead loop. 
Can anyone help me? Thanks a lot.

  was:
CPU: 40 cores, Mem: 120GB, Disk: 1*300 sys + 11 * 899GB, OS: CentOS 6.6

records.config:
CONFIG proxy.config.cache.min_average_object_size INT 1048576
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
CONFIG proxy.config.cache.ram_cache.size INT 64424509440

storage.config:
/dev/sdc id=cache.disk.1

I encountered a dead-loop situation with ATS 5.3.1 on two production hosts; a 
burst of warnings shows up in diags.log like this:
{code}
[Mar 16 13:04:32.730] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.732] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.733] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.735] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.737] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.739] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.742] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.744] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.747] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.750] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.753] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32

[jira] [Updated] (TS-4279) ats fallen into dead loop for cache directory overflow

2016-03-19 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4279:
---
Fix Version/s: 6.2.0

> ats fallen into dead loop for cache directory overflow
> --
>
> Key: TS-4279
> URL: https://issues.apache.org/jira/browse/TS-4279
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Cache
>Affects Versions: 5.3.1
>Reporter: taoyunxing
> Fix For: 6.2.0
>
>
> CPU 40 cores, Mem: 120GB, Disk: 1*300 sys + 11 * 899GB, 
> records.config:
> CONFIG proxy.config.cache.min_average_object_size INT 1048576
> CONFIG proxy.config.cache.ram_cache.algorithm INT 1
> CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
> CONFIG proxy.config.cache.ram_cache.size INT 64424509440
> storage.config:
> /dev/sdc id=cache.disk.1
> I encountered a dead-loop situation with ATS 5.3.1 on two production hosts; a 
> burst of warnings shows up in diags.log like this:
> {code}
> [Mar 16 13:04:32.730] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.732] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.733] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.735] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.737] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.739] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.742] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.744] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.747] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.750] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.753] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sde' segment 4, purging...
> [Mar 16 13:04:32.756] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sde' segment 4, purging...
> {code}
> ATS restarts every several hours, and the TIME_WAIT count is huge compared to 
> the ESTABLISHED TCP connection count.
> The following is the current dir snapshot of one of the hosts:
> {code}
> Directory for [cache.disk.1 172032:109741163]
> Bytes: 8573600
> Segments:  14
> Buckets:   15310
> Entries:   857360
> Full:  852904
> Empty: 4085
> Stale: 0
> Free:  371
> Bucket Fullness:   4085158003204441621 
>   42175331372223212605 
>
> Segment Fullness:  60903 60918 60914 60947 60956 
>60947 60872 60943 60918 60927 
>60858 60917 60927 60957 
> Freelist Fullness:45302713 0 
>789 53212 
>   83 020 8 
> {code}
> I wonder why; can anyone help me? Thanks a lot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4279) ats fallen into dead loop for cache directory overflow

2016-03-19 Thread Alan M. Carroll (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15200197#comment-15200197
 ] 

Alan M. Carroll commented on TS-4279:
-

Yes. Having a free list of length 371 out of 857360 entries is a very full 
situation, and you need to set your average object size down by at least half, 
possibly more. The only cost of making that value smaller is a larger memory 
footprint, which you'll want to keep an eye on. Do note that changing the value 
will invalidate the cache contents. I suspect the crash is from bad 
interactions with the HTTP state machine when the cache is full, and is 
unlikely to be fixed any time soon, unfortunately. I am working on other cache 
fixes and may eventually be able to look at this, or do a better job of 
reclaiming. Currently it does quite a poor job of it (essentially decimating 
the first doc entries in a segment, which is not very successful if there are 
lots of multi-fragment objects or alternates). What might do better is to 
pretend to write to a large chunk of the stripe and use the reclaim logic for 
that to clear space.
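
As a rough sanity check on the sizing (my arithmetic, not from the thread, 
assuming the directory is provisioned at roughly stripe bytes divided by 
proxy.config.cache.min_average_object_size): one ~899 GB data disk at the 
configured 1 MiB average yields about the 857360 entries seen in the snapshot, 
and halving the setting doubles the entry count.

{code}
// Back-of-the-envelope only; assumes directory entries are provisioned at
// roughly (stripe bytes / min_average_object_size).
#include <cstdint>
#include <cstdio>

int
main()
{
  const std::uint64_t disk_bytes = 899ULL * 1000 * 1000 * 1000; // one ~899 GB data disk
  const std::uint64_t avg_obj    = 1048576;                     // configured 1 MiB

  // ~857k entries, which matches the "Entries: 857360" in the snapshot.
  std::printf("entries @ 1 MiB:   %llu\n",
              static_cast<unsigned long long>(disk_bytes / avg_obj));
  // Halving min_average_object_size doubles the entry count (at the cost of
  // a larger in-memory directory, and a cache wipe when the value changes).
  std::printf("entries @ 512 KiB: %llu\n",
              static_cast<unsigned long long>(disk_bytes / (avg_obj / 2)));
  return 0;
}
{code}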

> ats fallen into dead loop for cache directory overflow
> --
>
> Key: TS-4279
> URL: https://issues.apache.org/jira/browse/TS-4279
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Cache
>Affects Versions: 5.3.1
>Reporter: taoyunxing
> Fix For: 6.2.0
>
>
> CPU: 40 cores, Mem: 120GB, Disk: 1*300GB sys + 11 * 899GB data(naked), OS: 
> CentOS 6.6, ATS: 5.3.1
> records.config:
> CONFIG proxy.config.cache.min_average_object_size INT 1048576
> CONFIG proxy.config.cache.ram_cache.algorithm INT 1
> CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
> CONFIG proxy.config.cache.ram_cache.size INT 64424509440
> storage.config:
> /dev/sdc id=cache.disk.1
> I encountered a dead-loop situation with ATS 5.3.1 on two production hosts; a 
> burst of warnings shows up in diags.log for a long time like this:
> {code}
> [Mar 16 13:04:32.730] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sdc' segment 4, purging...
> [Mar 16 13:04:32.732] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sdc' segment 4, purging...
> [Mar 16 13:04:32.733] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sdc' segment 4, purging...
> [Mar 16 13:04:32.735] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sdc' segment 4, purging...
> [Mar 16 13:04:32.737] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sdc' segment 4, purging...
> [Mar 16 13:04:32.739] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sdc' segment 4, purging...
> [Mar 16 13:04:32.742] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sdc' segment 4, purging...
> [Mar 16 13:04:32.744] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sdc' segment 4, purging...
> [Mar 16 13:04:32.747] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sdc' segment 4, purging...
> [Mar 16 13:04:32.750] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sdc' segment 4, purging...
> [Mar 16 13:04:32.753] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sdc' segment 4, purging...
> [Mar 16 13:04:32.756] Server {0x2b8ffc544700} WARNING:  (freelist_clean)> cache directory overflow on '/dev/sdc' segment 4, purging...
> {code}
> ATS restarts every several hours, and the TIME_WAIT count is huge compared to 
> the ESTABLISHED TCP connection count.
> The following is the current dir snapshot of disk /dev/sdc on one of the 
> hosts:
> {code}
> Directory for [cache.disk.1 172032:109741163]
> Bytes: 8573600
> Segments:  14
> Buckets:   15310
> Entries:   857360
> Full:  852904
> Empty: 4085
> Stale: 0
> Free:  371
> Bucket Fullness:   4085158003204441621 
>   42175331372223212605 
>
> Segment Fullness:  60903 60918 60914 60947 60956 
>60947 60872 60943 60918 60927 
>60858 60917 60927 60957 
> Freelist Fullness:45302713 0 
>789 53212 
>   83 020 8 
> {code}
> I wonder why the value of freelist[4] is zero, which causes the ATS dead 
> loop. Can anyone help me? Thanks a lot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4279) ats fallen into dead loop for cache directory overflow

2016-03-19 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4279:
---
Description: 
CPU: 40 cores, Mem: 120GB, Disk: 1*300 sys + 11 * 899GB, OS: CentOS 6.6

records.config:
CONFIG proxy.config.cache.min_average_object_size INT 1048576
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
CONFIG proxy.config.cache.ram_cache.size INT 64424509440

storage.config:
/dev/sdc id=cache.disk.1

I encountered a dead-loop situation with ATS 5.3.1 on two production hosts; a 
burst of warnings shows up in diags.log like this:
{code}
[Mar 16 13:04:32.730] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.732] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.733] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.735] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.737] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.739] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.742] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.744] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.747] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.750] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.753] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.756] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
{code}
ATS restarts every several hours, and the TIME_WAIT count is huge compared to 
the ESTABLISHED TCP connection count.
The following is the current dir snapshot of disk /dev/sdc on one of the hosts:
{code}
Directory for [cache.disk.1 172032:109741163]
Bytes: 8573600
Segments:  14
Buckets:   15310
Entries:   857360
Full:  852904
Empty: 4085
Stale: 0
Free:  371
Bucket Fullness:   4085158003204441621 
  42175331372223212605 
   
Segment Fullness:  60903 60918 60914 60947 60956 
   60947 60872 60943 60918 60927 
   60858 60917 60927 60957 
Freelist Fullness:45302713 0 
   789 53212 
  83 020 8 
{code}
I wonder why the value of freelist[4] is zero, which causes the ATS dead loop. 
Can anyone help me? Thanks a lot.

  was:
CPU 40 cores, Mem: 120GB, Disk: 1*300 sys + 11 * 899GB, 

records.config:
CONFIG proxy.config.cache.min_average_object_size INT 1048576
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
CONFIG proxy.config.cache.ram_cache.size INT 64424509440

storage.config:
/dev/sdc id=cache.disk.1

I encountered a dead-loop situation with ATS 5.3.1 on two production hosts; a 
burst of warnings shows up in diags.log like this:
{code}
[Mar 16 13:04:32.730] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.732] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.733] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.735] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.737] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.739] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.742] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.744] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.747] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.750] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.753] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.756] Server {0x2b8ffc544700} 

[jira] [Commented] (TS-4278) HostDB sync causes active transactions to block for 100's of ms

2016-03-19 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15197581#comment-15197581
 ] 

Leif Hedstrom commented on TS-4278:
---

Nice catch. I'm +1 on setting this to 0 as a default for v7.0.0 (maybe file an 
appropriate Jira for that if you agree). Unless of course we manage to replace 
HostDB entirely for 7.0.0 :). That said, would it help if we scheduled this on 
a task thread? That would also imply changing the MultiCacheBase and how it 
schedules as well.

Fwiw, we run traffic_server with the "-k" option in production, to force a 
flush on every startup. But this is better IMO.

Also, I noticed that setting this config to 0 on a running system does not stop 
it from syncing. Checking the code, we do reload the config, but it doesn't 
seem to allow for the case of disabling the continuation when we set it to 0. 
[~shinrich] Can you confirm this? If so, should we fix that for 6.2?
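
For reference, a records.config sketch of the workaround discussed above (with 
the caveat just noted that a running system may not honor a reload to 0):

{code}
# Disable the periodic HostDB disk sync (default is every two minutes).
CONFIG proxy.config.cache.hostdb.sync_frequency INT 0
{code}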

> HostDB sync causes active transactions to block for 100's of ms
> ---
>
> Key: TS-4278
> URL: https://issues.apache.org/jira/browse/TS-4278
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HostDB
>Reporter: Susan Hinrichs
> Fix For: 6.2.0
>
>
> When HostDB syncs to disk (by default every two minutes), active transactions 
> will block when they reach HttpSM::do_hostdb_lookup.  This is because 
> do_hostdb_lookup calls hostDBProcessor.getbyname_imm which attempts to get 
> the bucket locks.   The delays generally last for 500-1200ms.  This blocks 
> the event loop so no other actions will be performed by the net handler until 
> the lock is dropped.
> I'm assuming that the bucket locks are grabbed by the sync logic.  When I 
> increased proxy.config.cache.hostdb.sync_frequency to 1200, the every two 
> minute slow down went away.  Fortunately 
> proxy.config.cache.hostdb.sync_frequency set to 0 seems to completely 
> eliminate the sync, which will be my suggested solution internally.
> I tried reducing the size of the hostdb table, but that didn't seem to affect 
> the delay time.
> The delay was only reliably exhibited on a loaded system. Running my httperf 
> test case on a machine with no other activity did not show the delays.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4272) HostDB not clearing HostDBInfos for hosts file entries

2016-03-19 Thread Thomas Jackson (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Jackson updated TS-4272:
---
Summary: HostDB not clearing HostDBInfos for hosts file entries  (was: 
HostDB not clearning HostDBInfos for hosts file entries)

> HostDB not clearing HostDBInfos for hosts file entries
> --
>
> Key: TS-4272
> URL: https://issues.apache.org/jira/browse/TS-4272
> Project: Traffic Server
>  Issue Type: Bug
>Reporter: Thomas Jackson
>Assignee: Thomas Jackson
> Fix For: 6.2.0
>
>
> std::map is creating an entry, and we are setting only a subset of the 
> values. Since we were not clearing the memory that we got, we end up with 
> some uninitialized structure fields, meaning the HostDBInfo object we return 
> is corrupt.
> Found during investigation of TS-4207
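
A minimal illustration of this pitfall (generic C++, not the actual HostDB 
code; the struct and field names are invented): a mapped type whose 
user-provided constructor sets only some members leaves the rest indeterminate 
when {{operator[]}} creates the entry, so the record must be cleared 
explicitly.

{code}
// Sketch of the bug class, not the HostDB code; Info and its fields are
// invented. operator[] default-constructs the mapped value, and a
// user-provided constructor that sets only some fields leaves the rest
// indeterminate.
#include <cstring>
#include <map>
#include <string>

struct Info {
  int  ttl;    // never set by the constructor below
  bool is_srv; // never set either
  int  ip;
  Info() { ip = 0; } // ttl and is_srv remain indeterminate
};

int
main()
{
  std::map<std::string, Info> table;
  Info &rec = table["example.com"]; // creates the entry via Info()
  // Reading rec.ttl or rec.is_srv here would be undefined behavior.
  std::memset(&rec, 0, sizeof(rec)); // the fix: clear the record first
  rec.ip = 0x7f000001;
  return 0;
}
{code}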



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-4261) Split statistic update logic from process handling.

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15198717#comment-15198717
 ] 

ASF subversion and git services commented on TS-4261:
-

Commit dfd9776990a6cf04385e5e315c277f2dad3f01d8 in trafficserver's branch 
refs/heads/master from [~amc]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=dfd9776 ]

TS-4261: Split stats API from process API.

This is useful for TS-4260 as noted in the related bugs. As part
of that work I set up some statistics to track the performance of
the event loop. Without this change, doing that requires bringing
all of the process management support into the event loop component,
which is problematic. It seemed much simpler and better overall to
just split those unrelated items apart.

This closes #516.


> Split statistic update logic from process handling.
> ---
>
> Key: TS-4261
> URL: https://issues.apache.org/jira/browse/TS-4261
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Manager
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
> Fix For: 6.2.0
>
>
> The logic for updating statistics is mixed in the same source file as process 
> management. This creates unnecessary circular dependencies if lower level 
> components use statistics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3999) Add option to go direct for specific parent entry

2016-03-19 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201911#comment-15201911
 ] 

Leif Hedstrom commented on TS-3999:
---

Also, I think we'd want some additional information here about the intent of 
cacheability. If this is to replace cache clustering (which I think it could / 
should), we want control both via configuration (in parent.config) and perhaps 
even via API over whether the peer's data should be cached on the local node 
or not. A configuration is a minimum requirement, but an API opens up some 
interesting possibilities, such as

   1) Duplicate certain content types across the peering caches (e.g. the HLS 
playlists)

   2) Duplicate content of a particular size (small or large)

   3) Duplicate content for a particular path or domain


etc. This allows for a much better cache clustering than we have today. 
However, maybe now we're wandering into HTCP land? :-) But the consistent 
hashing is a really nice feature, which HTCP doesn't have afaik.

> Add option to go direct for specific parent entry
> -
>
> Key: TS-3999
> URL: https://issues.apache.org/jira/browse/TS-3999
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Parent Proxy
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
>  Labels: A
> Fix For: 7.0.0
>
>
> We want to use parent proxying in a peer relationship so that a host can be 
> both a parent and child in general but only for a specific URL. Currently 
> this can be done using the CARP plugin but that can create some difficulties. 
> Being able to specify that one parent in the list is actually direct to 
> origin would make that much easier.
> The current suggestion is to overload the port specifier as an indicator of 
> direct. For example, if you had three hosts in a pod, {{rikku}}, {{tidus}}, 
> and {{yuna}}, then you would configure the parent proxying on {{tidus}} as
> {code}
> "tidus:@direct, yuna:8080, rikku:8080"
> {code}
> while the configuration on {{yuna}} would be
> {code}
> "tidus:8080, yuna:@direct, rikku:8080"
> {code}.
> Similarly for {{rikku}} the port would be changed to "@direct" just for the 
> "rikku" parent. I discussed several configuration options for this with 
> [~dcarlin] and putting the override directly in the parent list was his 
> preferred mechanism. Note that in order to have consistency between hosts in 
> a pod, the list of names *must* be exactly the same across all the peers. In 
> this case, it must be "tidus, yuna, rikku" for all three or loops will occur 
> because of different hash seeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3999) Add option to go direct for specific parent entry

2016-03-19 Thread John Rushford (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15202074#comment-15202074
 ] 

John Rushford commented on TS-3999:
---

I agree with and like the idea of using a list labeled "peers" rather than 
"parent", as I think that makes the intent clear, as was pointed out. So when 
the list is labeled "peers", one of the hostnames is self, and if on startup 
ATS cannot identify itself in the list, the entire rule is rejected and the 
error logged.

Now, I added the secondary_parent list with the secondary hash ring feature; 
it is entirely optional when using consistent_hash. Do you think this 
secondary_parent list should be allowed as part of a "peers" config? If the 
"peers" are unreachable, the secondary_parent list could then be used as 
alternate "peers"?
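
To make the proposal concrete, here is a hypothetical parent.config rule 
combining the proposed "peers" label, the @direct marker from the description 
below, and an optional secondary_parent fallback. The "peers" key does not 
exist yet, and fallback.example.com is invented; this is purely a sketch of 
the syntax under discussion, as it might look on host "tidus":

{code}
# Hypothetical: "peers" names every host in the pod, with self marked @direct;
# secondary_parent would supply fallbacks if all peers are unreachable.
dest_domain=. peers="tidus:@direct, yuna:8080, rikku:8080" secondary_parent="fallback.example.com:8080" round_robin=consistent_hash
{code}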

> Add option to go direct for specific parent entry
> -
>
> Key: TS-3999
> URL: https://issues.apache.org/jira/browse/TS-3999
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Parent Proxy
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
>  Labels: A
> Fix For: 7.0.0
>
>
> We want to use parent proxying in a peer relationship so that a host can be 
> both a parent and child in general but only for a specific URL. Currently 
> this can be done using the CARP plugin but that can create some difficulties. 
> Being able to specify that one parent in the list is actually direct to 
> origin would make that much easier.
> The current suggestion is to overload the port specifier as an indicator of 
> direct. For example, if you had three hosts in a pod, {{rikku}}, {{tidus}}, 
> and {{yuna}}, then you would configure the parent proxying on {{tidus}} as
> {code}
> "tidus:@direct, yuna:8080, rikku:8080"
> {code}
> while the configuration on {{yuna}} would be
> {code}
> "tidus:8080, yuna:@direct, rikku:8080"
> {code}.
> Similarly for {{rikku}} the port would be changed to "@direct" just for the 
> "rikku" parent. I discussed several configuration options for this with 
> [~dcarlin] and putting the override directly in the parent list was his 
> preferred mechanism. Note that in order to have consistency between hosts in 
> a pod, the list of names *must* be exactly the same across all the peers. In 
> this case, it must be "tidus, yuna, rikku" for all three or loops will occur 
> because of different hash seeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4282) Socket read failure isn't handled in AdminClient.pm

2016-03-19 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-4282:
--
Assignee: Alan M. Carroll

> Socket read failure isn't handled in AdminClient.pm
> ---
>
> Key: TS-4282
> URL: https://issues.apache.org/jira/browse/TS-4282
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Management API
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
> Fix For: 6.2.0
>
>
> {{_do_read}} can return {{undef}} but its caller {{get_stat}} doesn't check 
> and so can generate errors by attempting to dereference {{undef}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)