[jira] [Commented] (TS-3395) Hit ratio drops with high concurrency

2015-02-23 Thread Luca Bruno (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333179#comment-14333179
 ] 

Luca Bruno commented on TS-3395:


This is with 1 vol (same with 5 vols) and 
{{proxy.config.http.origin_max_connections}} set to 10 (same with 100 connections):

!http://i.imgur.com/vYXDE0J.png?1!

With 10 vols and max 10 origin connections:

!http://i.imgur.com/frRQ5r6.png?1!

!http://i.imgur.com/Xwt09Hj.png?1!

So, the same result as without setting the max origin connections. At this point 
the only tuning option I see working is having 10 vols, or perhaps it depends on 
the number of client connections.


 Hit ratio drops with high concurrency
 -

 Key: TS-3395
 URL: https://issues.apache.org/jira/browse/TS-3395
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Reporter: Luca Bruno
 Fix For: 5.3.0


 I'm doing some tests and I've noticed that the hit ratio drops with more than 
 300 simultaneous HTTP connections.
 The cache is on a 500 GB raw disk and it's not filled, so there is no eviction. 
 The RAM cache is disabled.
 The test is done with web-polygraph. Content sizes vary from 5 KB to 20 KB 
 uniformly, the expected hit ratio is 60%, there are 2000 HTTP connections, and 
 documents expire after months. There's no Vary.
 !http://i.imgur.com/Zxlhgnf.png!
 Then I thought it could be a problem with polygraph. I wrote my own 
 client/server test code; it also works fine with squid, varnish and nginx. I 
 register a hit if I get either cR or cH in the headers.
 {noformat}
 2015/02/19 12:38:28 Starting 1000000 requests
 2015/02/19 12:37:58 Elapsed: 3m51.23552164s
 2015/02/19 12:37:58 Total average: 231.235µs/req, 4324.60req/s
 2015/02/19 12:37:58 Average size: 12.50kb/req
 2015/02/19 12:37:58 Bytes read: 12498412.45kb, 54050.57kb/s
 2015/02/19 12:37:58 Errors: 0
 2015/02/19 12:37:58 Offered Hit ratio: 59.95%
 2015/02/19 12:37:58 Measured Hit ratio: 37.20%
 2015/02/19 12:37:58 Hit bytes: 4649000609
 2015/02/19 12:37:58 Hit success: 599476/599476 (100.00%), 469.840902ms/req
 2015/02/19 12:37:58 Miss success: 400524/400524 (100.00%), 336.301464ms/req
 {noformat}
 So similar results, 37.20% on average. Then I thought it could be a problem 
 with how I'm testing things, and tried with the nginx cache. It achieves 60% hit 
 ratio, but the request rate is very slow compared to ATS, for obvious reasons.
 Then I wanted to check whether the hit ratio also dropped with 200 connections 
 over a longer test time, but no, it's fine:
 !http://i.imgur.com/oMHscuf.png!
 So I guess it's not a problem with my tests.
 Then, by debugging the test server, I realized that the same URL was being 
 requested more than once.
 Out of 1000000 requests, 78600 URLs were requested at least twice. One URL was 
 even requested 9 times. These repeat requests for the same URL are not close to 
 each other: more than 30 sec can pass between two requests for the same URL.
 I also tweaked the following parameters:
 {noformat}
 CONFIG proxy.config.http.cache.fuzz.time INT 0
 CONFIG proxy.config.http.cache.fuzz.min_time INT 0
 CONFIG proxy.config.http.cache.fuzz.probability FLOAT 0.00
 CONFIG proxy.config.http.cache.max_open_read_retries INT 4
 CONFIG proxy.config.http.cache.open_read_retry_time INT 500
 {noformat}
 And this is the result with polygraph, again similar:
 !http://i.imgur.com/YgOndhY.png!
 I also tweaked the read-while-writer option, with similar results.
 Then I enabled 1 GB of RAM cache; it is slightly better at the beginning, but 
 then it drops:
 !http://i.imgur.com/dFTJI16.png!
 traffic_top says 25% ram hit, 37% fresh, 63% cold.
 So given that it doesn't seem to be a concurrency problem when requesting the 
 URL from the origin server, could it be a problem of concurrent write access to 
 the cache, so that some pages are not cached at all? The traffic_top fresh 
 percentage also makes me think it could be a problem with writing to the cache.
 I may not have explained the problem clearly; ask me for further information if 
 needed. In summary: the hit ratio drops with a high number of connections, and 
 the problem seems related to pages that are not written to the cache.
 A related issue: 
 http://mail-archives.apache.org/mod_mbox/trafficserver-users/201301.mbox/%3ccd28cb1f.1f44a%25peter.wa...@email.disney.com%3E
 Also this: 
 http://apache-traffic-server.24303.n7.nabble.com/why-my-proxy-node-cache-hit-ratio-drops-td928.html
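
 (For reference, the read-while-writer behavior mentioned above is governed by 
 settings along these lines; the values here are illustrative, not a 
 recommendation:)
 {noformat}
 CONFIG proxy.config.cache.enable_read_while_writer INT 1
 CONFIG proxy.config.http.cache.max_open_read_retries INT 4
 CONFIG proxy.config.http.cache.open_read_retry_time INT 500
 {noformat}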



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (TS-3395) Hit ratio drops with high concurrency

2015-02-23 Thread Luca Bruno (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333179#comment-14333179
 ] 

Luca Bruno edited comment on TS-3395 at 2/23/15 10:17 AM:
--

This is with 1 vol (same with 5 vols) and 
{{proxy.config.http.origin_max_connections}} set to 10 (same with 100 connections):

!http://i.imgur.com/vYXDE0J.png?1!

With 10 vols and max 10 origin connections:

!http://i.imgur.com/frRQ5r6.png?1!

!http://i.imgur.com/Xwt09Hj.png?1!

So, the same result as without setting the max origin connections. At this point 
the only tuning option I see working is having 10 vols, or perhaps it depends on 
the number of simultaneous client connections.



was (Author: lethalman):
This is with 1 vol (same with 5 vols) and 
{{proxy.config.http.origin_max_connections}} set to 10 (same with 100 connections):

!http://i.imgur.com/vYXDE0J.png?1!

With 10 vols and max 10 origin connections:

!http://i.imgur.com/frRQ5r6.png?1!

!http://i.imgur.com/Xwt09Hj.png?1!

So, the same result as without setting the max origin connections. At this point 
the only tuning option I see working is having 10 vols, or perhaps it depends on 
the number of client connections.


 Hit ratio drops with high concurrency
 -

 Key: TS-3395
 URL: https://issues.apache.org/jira/browse/TS-3395
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Reporter: Luca Bruno
 Fix For: 5.3.0


 I'm doing some tests and I've noticed that the hit ratio drops with more than 
 300 simultaneous HTTP connections.
 The cache is on a 500 GB raw disk and it's not filled, so there is no eviction. 
 The RAM cache is disabled.
 The test is done with web-polygraph. Content sizes vary from 5 KB to 20 KB 
 uniformly, the expected hit ratio is 60%, there are 2000 HTTP connections, and 
 documents expire after months. There's no Vary.
 !http://i.imgur.com/Zxlhgnf.png!
 Then I thought it could be a problem with polygraph. I wrote my own 
 client/server test code; it also works fine with squid, varnish and nginx. I 
 register a hit if I get either cR or cH in the headers.
 {noformat}
 2015/02/19 12:38:28 Starting 1000000 requests
 2015/02/19 12:37:58 Elapsed: 3m51.23552164s
 2015/02/19 12:37:58 Total average: 231.235µs/req, 4324.60req/s
 2015/02/19 12:37:58 Average size: 12.50kb/req
 2015/02/19 12:37:58 Bytes read: 12498412.45kb, 54050.57kb/s
 2015/02/19 12:37:58 Errors: 0
 2015/02/19 12:37:58 Offered Hit ratio: 59.95%
 2015/02/19 12:37:58 Measured Hit ratio: 37.20%
 2015/02/19 12:37:58 Hit bytes: 4649000609
 2015/02/19 12:37:58 Hit success: 599476/599476 (100.00%), 469.840902ms/req
 2015/02/19 12:37:58 Miss success: 400524/400524 (100.00%), 336.301464ms/req
 {noformat}
 So similar results, 37.20% on average. Then I thought it could be a problem 
 with how I'm testing things, and tried with the nginx cache. It achieves 60% hit 
 ratio, but the request rate is very slow compared to ATS, for obvious reasons.
 Then I wanted to check whether the hit ratio also dropped with 200 connections 
 over a longer test time, but no, it's fine:
 !http://i.imgur.com/oMHscuf.png!
 So I guess it's not a problem with my tests.
 Then, by debugging the test server, I realized that the same URL was being 
 requested more than once.
 Out of 1000000 requests, 78600 URLs were requested at least twice. One URL was 
 even requested 9 times. These repeat requests for the same URL are not close to 
 each other: more than 30 sec can pass between two requests for the same URL.
 I also tweaked the following parameters:
 {noformat}
 CONFIG proxy.config.http.cache.fuzz.time INT 0
 CONFIG proxy.config.http.cache.fuzz.min_time INT 0
 CONFIG proxy.config.http.cache.fuzz.probability FLOAT 0.00
 CONFIG proxy.config.http.cache.max_open_read_retries INT 4
 CONFIG proxy.config.http.cache.open_read_retry_time INT 500
 {noformat}
 And this is the result with polygraph, again similar:
 !http://i.imgur.com/YgOndhY.png!
 I also tweaked the read-while-writer option, with similar results.
 Then I enabled 1 GB of RAM cache; it is slightly better at the beginning, but 
 then it drops:
 !http://i.imgur.com/dFTJI16.png!
 traffic_top says 25% ram hit, 37% fresh, 63% cold.
 So given that it doesn't seem to be a concurrency problem when requesting the 
 URL from the origin server, could it be a problem of concurrent write access to 
 the cache, so that some pages are not cached at all? The traffic_top fresh 
 percentage also makes me think it could be a problem with writing to the cache.
 I may not have explained the problem clearly; ask me for further information if 
 needed. In summary: the hit ratio drops with a high number of connections, and 
 the problem seems related to pages that are not written to the cache.
 This is some related issue: 
 

[jira] [Commented] (TS-3311) Possible lookups on NULL hostnames in HostDB

2015-02-23 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1468#comment-1468
 ] 

Leif Hedstrom commented on TS-3311:
---

[~smalenfant] Are you by chance using parent proxy (parent.config etc.) and/or 
the authproxy plugin?

 Possible lookups on NULL hostnames in HostDB
 

 Key: TS-3311
 URL: https://issues.apache.org/jira/browse/TS-3311
 Project: Traffic Server
  Issue Type: Bug
  Components: HostDB
Reporter: Steve Malenfant
Assignee: Alan M. Carroll
 Fix For: 5.3.0


 Getting multiple segfaults per day on 4.2.1. 
 [4324544.324222] [ET_NET 23][10504]: segfault at 0 ip 2acd66546168 sp 
 2acd71f190b8 error 4 in libtsutil.so.4.2.1[2acd66521000+34000]
 [4410696.820857] [ET_NET 19][22738]: segfault at 0 ip 2af09f339168 sp 
 2af0aa9230b8 error 4 in libtsutil.so.4.2.1[2af09f314000+34000]
 [4497039.474253] [ET_NET 12][34872]: segfault at 0 ip 2ad17e6a1168 sp 
 2ad1896100b8 error 4 in libtsutil.so.4.2.1[2ad17e67c000+34000]
 [4583372.073916] [ET_NET 3][46994]: segfault at 0 ip 2aced4227168 sp 
 2aceda7d80b8 error 4 in libtsutil.so.4.2.1[2aced4202000+34000]
 [4756046.944373] [ET_NET 22][10799]: segfault at 0 ip 2b1771f76168 sp 
 2b177d9130b8 error 4 in libtsutil.so.4.2.1[2b1771f51000+34000]
 Stack Trace :
 (gdb) bt
 #0  ink_inet_addr (s=<value optimized out>) at ink_inet.cc:107
 #1  0x005e0df5 in is_dotted_form_hostname (mutex=0x1d32cb0, md5=..., 
 ignore_timeout=false) at P_HostDBProcessor.h:545
 #2  probe (mutex=0x1d32cb0, md5=..., ignore_timeout=false) at HostDB.cc:668
 #3  0x005e2b34 in HostDBProcessor::getby (this=<value optimized out>, 
 cont=0x2b514cc749d0, hostname=0x0, len=<value optimized out>, 
 ip=0x2b50e8f092b0, aforce_dns=false, host_res_style=HOST_RES_NONE, 
 dns_lookup_timeout=0)
 at HostDB.cc:772
 #4  0x00517f2c in getbyaddr_re (this=0x2b514cc749d0) at 
 ../../iocore/hostdb/I_HostDBProcessor.h:417
 #5  HttpSM::do_hostdb_reverse_lookup (this=0x2b514cc749d0) at HttpSM.cc:3968
 #6  0x0052f028 in HttpSM::set_next_state (this=0x2b514cc749d0) at 
 HttpSM.cc:6932
 #7  0x00518242 in HttpSM::do_hostdb_lookup (this=0x2b514cc749d0) at 
 HttpSM.cc:3950
 #8  0x0052f44a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
 HttpSM.cc:6925
 #9  0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
 HttpSM.cc:1559
 #10 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
 HttpSM.cc:6825
 #11 0x0052ea8a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
 HttpSM.cc:7224
 #12 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
 HttpSM.cc:1559
 #13 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
 HttpSM.cc:6825
 #14 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
 HttpSM.cc:1559
 #15 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
 HttpSM.cc:6825
 #16 0x0052fef6 in HttpSM::state_read_client_request_header 
 (this=0x2b514cc749d0, event=100, data=<value optimized out>) at HttpSM.cc:821
 #17 0x0052a5b8 in HttpSM::main_handler (this=0x2b514cc749d0, 
 event=100, data=0x2b514802ca08) at HttpSM.cc:2539
 #18 0x0068793b in handleEvent (event=<value optimized out>, 
 vc=0x2b514802c900) at ../../iocore/eventsystem/I_Continuation.h:146
 #19 read_signal_and_update (event=<value optimized out>, vc=0x2b514802c900) 
 at UnixNetVConnection.cc:138
 #20 0x00689ec4 in read_from_net (nh=0x2b50e2e17c10, 
 vc=0x2b514802c900, thread=<value optimized out>) at UnixNetVConnection.cc:320
 #21 0x0067fb12 in NetHandler::mainNetEvent (this=0x2b50e2e17c10, 
 event=<value optimized out>, e=<value optimized out>) at UnixNet.cc:384
 #22 0x006ac8cf in handleEvent (this=0x2b50e2e14010, e=0x1a9ef30, 
 calling_code=5) at I_Continuation.h:146
 #23 EThread::process_event (this=0x2b50e2e14010, e=0x1a9ef30, calling_code=5) 
 at UnixEThread.cc:145
 #24 0x006ad273 in EThread::execute (this=0x2b50e2e14010) at 
 UnixEThread.cc:269
 #25 0x006abc2a in spawn_thread_internal (a=0x198f820) at Thread.cc:88
 #26 0x2b50e026b9d1 in start_thread () from /lib64/libpthread.so.0
 #27 0x00381b2e8b6d in clone () from /lib64/libc.so.6
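
 Frame #0 shows {{ink_inet_addr()}} entered with a NULL string (note 
 {{hostname=0x0}} in frame #3). For illustration only, a self-contained sketch of 
 the kind of defensive check involved (names are hypothetical, not the actual 
 fix):
 {code}
 #include <cstdio>
 
 // Sketch: tolerate a NULL string instead of dereferencing it, mirroring
 // the crash in ink_inet_addr(). Names are illustrative.
 static bool parse_dotted_form(const char *s) {
   if (s == nullptr || *s == '\0')
     return false; // not a dotted-form address; caller takes the DNS path
   for (; *s; ++s)
     if ((*s < '0' || *s > '9') && *s != '.')
       return false;
   return true;
 }
 
 int main() {
   std::printf("%d %d %d\n", parse_dotted_form("10.0.0.1"),
               parse_dotted_form("example.com"), parse_dotted_form(nullptr));
   return 0;
 }
 {code}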
  





[jira] [Created] (TS-3400) Use common FNV hash code

2015-02-23 Thread James Peach (JIRA)
James Peach created TS-3400:
---

 Summary: Use common FNV hash code
 Key: TS-3400
 URL: https://issues.apache.org/jira/browse/TS-3400
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cleanup, Core
Reporter: James Peach


There are multiple copies of the FNV hash. Use the central one from {{libts}}.
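
For context, FNV-1a is only a few lines, which is presumably why copies 
proliferated. A generic 32-bit sketch (the {{libts}} version may differ in 
width and seeding):
{code}
#include <cstdint>
#include <cstddef>

// Generic FNV-1a (32-bit): XOR each byte into the hash, then multiply
// by the FNV prime. The constants are the standard FNV parameters.
static uint32_t fnv1a_32(const void *data, size_t len) {
  const unsigned char *p = static_cast<const unsigned char *>(data);
  uint32_t hash = 2166136261u; // FNV offset basis
  for (size_t i = 0; i < len; ++i) {
    hash ^= p[i];
    hash *= 16777619u; // FNV prime
  }
  return hash;
}
{code}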





[jira] [Resolved] (TS-3400) Use common FNV hash code

2015-02-23 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach resolved TS-3400.
-
   Resolution: Fixed
Fix Version/s: 5.3.0
 Assignee: James Peach

 Use common FNV hash code
 

 Key: TS-3400
 URL: https://issues.apache.org/jira/browse/TS-3400
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cleanup, Core
Reporter: James Peach
Assignee: James Peach
 Fix For: 5.3.0


 There are multiple copies of the FNV hash. Use the central one from {{libts}}.





[jira] [Commented] (TS-3403) stop parsing command-line options at the first non-option

2015-02-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333804#comment-14333804
 ] 

ASF subversion and git services commented on TS-3403:
-

Commit 1d6f7d156c2dbbd66be769ea943c146458ed0860 in trafficserver's branch 
refs/heads/master from [~jpe...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=1d6f7d1 ]

TS-3403: stop parsing command-line options at the first non-option

ink_args was skipping over file arguments to parse subsequent option
flags. While this allows options and file arguments to intermingle,
it makes it impossible to pass unparsed options down to subcommands.

To solve this, we stop parsing the command line at the first
non-option and declare that the rest of the options are file arguments.
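
The behavior change, as a generic sketch (not the {{ink_args}} implementation):
{code}
#include <cstdio>

// Parse flags until the first non-option; everything after it is left
// as file arguments for a subcommand to interpret.
int main(int argc, char **argv) {
  int i = 1;
  for (; i < argc; ++i) {
    if (argv[i][0] != '-')
      break; // first non-option ends option parsing
    std::printf("option: %s\n", argv[i]);
  }
  for (; i < argc; ++i)
    std::printf("file argument: %s\n", argv[i]);
  return 0;
}
{code}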


 stop parsing command-line options at the first non-option
 -

 Key: TS-3403
 URL: https://issues.apache.org/jira/browse/TS-3403
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Reporter: James Peach

 The {{ink_args}} API ignores file arguments that are intermingled with 
 command flags. This is not really a useful feature, and it makes it 
 impossible to implement subcommands with options as found in {{traffic_ctl}}.





[jira] [Resolved] (TS-3403) stop parsing command-line options at the first non-option

2015-02-23 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach resolved TS-3403.
-
   Resolution: Fixed
Fix Version/s: 5.3.0
 Assignee: James Peach

 stop parsing command-line options at the first non-option
 -

 Key: TS-3403
 URL: https://issues.apache.org/jira/browse/TS-3403
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Reporter: James Peach
Assignee: James Peach
 Fix For: 5.3.0


 The {{ink_args}} API ignores file arguments that are intermingled with 
 command flags. This is not really a useful feature, and it makes it 
 impossible to implement subcommands with options as found in {{traffic_ctl}}.





[jira] [Commented] (TS-3367) add a new command line management tool

2015-02-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333842#comment-14333842
 ] 

ASF subversion and git services commented on TS-3367:
-

Commit 683a377d199bddb41b1b5497bb894778df44ff90 in trafficserver's branch 
refs/heads/master from [~jpe...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=683a377 ]

TS-3367: add traffic_ctl, a new command-line interface to the management API

traffic_ctl is a new command-line tool to administer Traffic Server using
the management API. It uses the subcommand pattern, making the
syntax more regular and extensible than traffic_line's. This first
implementation is feature-equivalent to traffic_line.


 add a new command line management tool
 --

 Key: TS-3367
 URL: https://issues.apache.org/jira/browse/TS-3367
 Project: Traffic Server
  Issue Type: New Feature
  Components: Configuration, Console, Management API
Reporter: James Peach
Assignee: James Peach
 Fix For: 5.3.0


 There's a lot of potential in the management API that can't be exposed in 
 {{traffic_line}} due to its poor command-line interface. Replace it with a 
 new tool, {{traffic_ctl}}, which uses a subcommand-oriented interface to 
 expose more features in a more regular way.
 For example:
 {code}
 [vagrant@localhost ~]$ sudo /opt/ats/bin/traffic_ctl
 Usage: traffic_ctl [OPTIONS] CMD [ARGS ...]
 Subcommands:
 alarm   Manipulate alarms
 cluster Stop, restart and examine the cluster
 config  Manipulate configuration records
 metric  Manipulate performance metrics
 server  Stop, restart and examine the server
 storage Manipulate cache storage
 Options:
   switch          type  default  description
   --debug         on    false    Enable debugging output
   -h, --help                     Print usage information
   -V, --version                  Print version string
 {code}
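
 For example, assuming the {{config}} subcommand mirrors {{traffic_line}}'s 
 get/set (record name illustrative):
 {noformat}
 $ traffic_ctl config get proxy.config.http.cache.http
 $ traffic_ctl config set proxy.config.http.cache.http 1
 {noformat}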





[jira] [Resolved] (TS-3367) add a new command line management tool

2015-02-23 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach resolved TS-3367.
-
Resolution: Fixed

 add a new command line management tool
 --

 Key: TS-3367
 URL: https://issues.apache.org/jira/browse/TS-3367
 Project: Traffic Server
  Issue Type: New Feature
  Components: Configuration, Console, Management API
Reporter: James Peach
Assignee: James Peach
 Fix For: 5.3.0


 There's a lot of potential in the management API that can't be exposed in 
 {{traffic_line}} due to its poor command-line interface. Replace it with a 
 new tool, {{traffic_ctl}}, which uses a subcommand-oriented interface to 
 expose more features in a more regular way.
 For example:
 {code}
 [vagrant@localhost ~]$ sudo /opt/ats/bin/traffic_ctl
 Usage: traffic_ctl [OPTIONS] CMD [ARGS ...]
 Subcommands:
 alarm   Manipulate alarms
 cluster Stop, restart and examine the cluster
 config  Manipulate configuration records
 metric  Manipulate performance metrics
 server  Stop, restart and examine the server
 storage Manipulate cache storage
 Options:
   switch          type  default  description
   --debug         on    false    Enable debugging output
   -h, --help                     Print usage information
   -V, --version                  Print version string
 {code}





[jira] [Commented] (TS-3393) traffic_line gives cryptic error message when trying to set records which cannot be changed dynamically

2015-02-23 Thread James Peach (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333838#comment-14333838
 ] 

James Peach commented on TS-3393:
-

This happens because the record specifies the {{RECC_STR}} checker but fails to 
provide a checking regex. {{traffic_server}} crashes when it tries to check the 
non-existent regex.

{code}
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f3b27fff700 (LWP 19519)]
0x7f3b30aa2008 in pcre_compile2 () from /lib/x86_64-linux-gnu/libpcre.so.3
(gdb) where
#0  0x7f3b30aa2008 in pcre_compile2 () from 
/lib/x86_64-linux-gnu/libpcre.so.3
#1  0x004712ef in recordRegexCheck (pattern=0x0, value=0x7f3b14004d60 
"james")
at /opt/src/trafficserver.git/mgmt/WebMgmtUtils.cc:1176
#2  0x00471243 in recordValidityCheck (varName=0x7f3b14004bf0 
"proxy.config.hostdb.ip_resolve",
value=0x7f3b14004d60 "james") at 
/opt/src/trafficserver.git/mgmt/WebMgmtUtils.cc:1142
#3  0x00446ebf in MgmtRecordSet (rec_name=0x7f3b14004bf0 
"proxy.config.hostdb.ip_resolve",
val=0x7f3b14004d60 "james", action_need=0x7f3b27ffed0c) at 
/opt/src/trafficserver.git/mgmt/api/CoreAPI.cc:615
#4  0x0044a01a in handle_record_set (fd=16, req=0x7f3b14004d20, 
reqlen=49)
at /opt/src/trafficserver.git/mgmt/api/TSControlMain.cc:448
#5  0x0044af3c in handle_control_message (fd=16, req=0x7f3b14004d20, 
reqlen=49)
at /opt/src/trafficserver.git/mgmt/api/TSControlMain.cc:1038
#6  0x0044995b in ts_ctrl_main (arg=0x7f3b2d0a4d34)
at /opt/src/trafficserver.git/mgmt/api/TSControlMain.cc:212
#7  0x7f3b30881182 in start_thread (arg=0x7f3b27fff700) at 
pthread_create.c:312
#8  0x7f3b2ff43fbd in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb) up
#1  0x004712ef in recordRegexCheck (pattern=0x0, value=0x7f3b14004d60 
"james")
at /opt/src/trafficserver.git/mgmt/WebMgmtUtils.cc:1176
1176  regex = pcre_compile(pattern, 0, &error, &erroffset, NULL);
(gdb) up
#2  0x00471243 in recordValidityCheck (varName=0x7f3b14004bf0 
"proxy.config.hostdb.ip_resolve",
value=0x7f3b14004d60 "james") at 
/opt/src/trafficserver.git/mgmt/WebMgmtUtils.cc:1142
1142    if (recordRegexCheck(pattern, value)) {
(gdb) quit
{code}

{{traffic_line}} has a hack to emit the misleading message on any failure to 
set a record.
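
A guard sketch for the NULL-pattern case (illustrative, not the committed fix):
{code}
#include <cstring>
#include <pcre.h>

// Skip the regex check when no checking pattern was registered, rather
// than handing a NULL pattern to pcre_compile().
static bool record_regex_check(const char *pattern, const char *value) {
  if (pattern == NULL)
    return true; // no pattern registered: accept the value

  const char *error;
  int erroffset;
  pcre *regex = pcre_compile(pattern, 0, &error, &erroffset, NULL);
  if (regex == NULL)
    return false;

  int rc = pcre_exec(regex, NULL, value, static_cast<int>(std::strlen(value)),
                     0, 0, NULL, 0);
  pcre_free(regex);
  return rc >= 0;
}
{code}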

 traffic_line gives cryptic error message when trying to set records which 
 cannot be changed dynamically
 ---

 Key: TS-3393
 URL: https://issues.apache.org/jira/browse/TS-3393
 Project: Traffic Server
  Issue Type: Bug
  Components: Configuration
Reporter: Igor Galić
Assignee: James Peach
 Fix For: 5.3.0


 for example, trying to set {{proxy.config.hostdb.ip_resolve}}, which for 
 whatever reason cannot be changed dynamically, gives the following error:
 {code}
 traffic_line -s proxy.config.hostdb.ip_resolve -v 'ipv6;none'
 traffic_line: Please correct your variable name and|or value
 {code}
 perhaps we should consider this in the upcoming {{traffic_ctl}} effort





[jira] [Comment Edited] (TS-3072) Debug logging for a single connection in production traffic.

2015-02-23 Thread Sudheer Vinukonda (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14137626#comment-14137626
 ] 

Sudheer Vinukonda edited comment on TS-3072 at 2/23/15 5:54 PM:


Thanks, [~jpe...@apache.org] - 

Yes, I am using the same framework as TSHttpTxnDebugGet()/Set(), except 
that I'm making it a little easier to use (e.g. adding a setting that lets 
the user specify an arbitrary client IP via traffic_line rather than 
having to write a plugin, etc.) and also extending the logging support beyond 
HTTP (e.g. NetVC, SPDY, FetchSM, etc.). Currently we have it only in 
HttpSM/HttpTransact.




was (Author: sudheerv):
Thanks, [~jpeach] - 

Yes, I am using the same framework as TSHttpTxnDebugGet()/Set(), except 
that I'm making it a little easier to use (e.g. adding a setting that lets 
the user specify an arbitrary client IP via traffic_line rather than 
having to write a plugin, etc.) and also extending the logging support beyond 
HTTP (e.g. NetVC, SPDY, FetchSM, etc.). Currently we have it only in 
HttpSM/HttpTransact.



 Debug logging for a single connection in production traffic.
 

 Key: TS-3072
 URL: https://issues.apache.org/jira/browse/TS-3072
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core, Logging
Affects Versions: 5.0.1
Reporter: Sudheer Vinukonda
  Labels: Yahoo
 Fix For: 5.3.0


 Presently, when there's a production issue (e.g. TS-3049, TS-2983, etc.), it is 
 really hard to isolate/debug under high traffic. Turning on debug logs in 
 production traffic is unfortunately not an option due to the performance impact. 
 Even if you took the performance hit and turned on the logs, it is just as hard 
 to separate out the logs for a single connection/transaction among the millions 
 of log lines output in a short period of time.
 I think it would be good if there were a way to turn on debug logs in a 
 controlled manner in a production environment. One simple option is to support 
 a config setting, for example a client IP, which when set would turn on 
 debug logs for any connection made by just that one client. If needed, 
 instead of one client IP, we may allow configuring up to 'n' (say, 5) 
 client IPs. 
 If there are other ideas, please comment.
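
 A sketch of the per-transaction mechanism mentioned in the comment above, as a 
 plugin hook ({{TSHttpTxnDebugSet}} is the real API; the hard-coded IP stands in 
 for the proposed configurable setting):
 {code}
 #include <arpa/inet.h>
 #include <cstring>
 #include <ts/ts.h>
 
 static const char *target_ip = "192.0.2.1"; // illustrative client IP
 
 // Enable ATS debug output only for transactions from the target client.
 static int
 on_txn_start(TSCont, TSEvent, void *edata)
 {
   TSHttpTxn txnp = static_cast<TSHttpTxn>(edata);
   const sockaddr *sa = TSHttpTxnClientAddrGet(txnp);
   char buf[INET_ADDRSTRLEN] = "";
 
   if (sa && sa->sa_family == AF_INET) {
     const sockaddr_in *sin = reinterpret_cast<const sockaddr_in *>(sa);
     inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof(buf));
     if (std::strcmp(buf, target_ip) == 0)
       TSHttpTxnDebugSet(txnp, 1); // debug logging for this transaction only
   }
   TSHttpTxnReenable(txnp, TS_EVENT_HTTP_CONTINUE);
   return 0;
 }
 
 void
 TSPluginInit(int, const char **)
 {
   TSHttpHookAdd(TS_HTTP_TXN_START_HOOK, TSContCreate(on_txn_start, NULL));
 }
 {code}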





[jira] [Commented] (TS-3396) new_tsqa SPDY protocol selection tests

2015-02-23 Thread Eric Schwartz (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333631#comment-14333631
 ] 

Eric Schwartz commented on TS-3396:
---

Sorry for the confusion with the 2 PRs here. The first one contained a cert that 
was unnecessary. I've rewritten the test to simply use one of [~jacksontj]'s 
certs from test_ssl_multicert, as reflected in the more recent PR.

 new_tsqa SPDY protocol selection tests
 --

 Key: TS-3396
 URL: https://issues.apache.org/jira/browse/TS-3396
 Project: Traffic Server
  Issue Type: Test
  Components: SPDY, tsqa
Reporter: Eric Schwartz
Assignee: Thomas Jackson
Priority: Minor
 Fix For: sometime


 Opening this JIRA to submit some SPDY protocol selection tests using spdycat 
 I wrote using the new_tsqa.





[jira] [Commented] (TS-3356) make inactivity cop default inactivity timeout dynamic

2015-02-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333617#comment-14333617
 ] 

ASF subversion and git services commented on TS-3356:
-

Commit d4263b1f7d1dd6468db87842030bf79e1e604fd0 in trafficserver's branch 
refs/heads/master from [~jpe...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=d4263b1 ]

TS-3356: make proxy.config.net.default_inactivity_timeout reloadable

Make proxy.config.net.default_inactivity_timeout reloadable.  Update
the docs and add a metric to count how many times this happens.


 make inactivity cop default inactivity timeout dynamic
 --

 Key: TS-3356
 URL: https://issues.apache.org/jira/browse/TS-3356
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Reporter: James Peach
  Labels: A
 Fix For: 5.3.0


 Make {{proxy.config.net.default_inactivity_timeout}} reloadable. Update the 
 docs and add a metric to count how many times this happens.
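
 Once the record is reloadable, the usual config-reload path applies a new 
 value without a restart, e.g.:
 {noformat}
 $ traffic_line -x   # apply records.config changes to the running server
 {noformat}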





[jira] [Created] (TS-3401) AIO blocks under lock contention

2015-02-23 Thread Brian Geffon (JIRA)
Brian Geffon created TS-3401:


 Summary: AIO blocks under lock contention
 Key: TS-3401
 URL: https://issues.apache.org/jira/browse/TS-3401
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Brian Geffon


In {{aio_thread_main()}} while trying to process AIO ops the AIO thread will 
wait on the mutex for the op which obviously blocks other AIO ops from 
processing. We should use a try lock instead and reschedule the ops that we 
couldn't immediately process. Patch attached. Waiting for reviews.
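
A generic sketch of the proposed pattern (not the attached patch):
{code}
#include <pthread.h>

// Try the op's lock and requeue on contention instead of blocking the
// AIO thread. Types are illustrative.
struct aio_op {
  pthread_mutex_t *mutex;
  aio_op *next;
};

static void requeue(aio_op **queue, aio_op *op) {
  op->next = *queue; // push back onto the pending queue for a later retry
  *queue = op;
}

static void process_one(aio_op **queue, aio_op *op) {
  if (pthread_mutex_trylock(op->mutex) != 0) {
    requeue(queue, op); // lock busy: don't block, retry this op later
    return;
  }
  // ... perform the I/O and invoke the op's callback ...
  pthread_mutex_unlock(op->mutex);
}
{code}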





[jira] [Resolved] (TS-3356) make inactivity cop default inactivity timeout dynamic

2015-02-23 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach resolved TS-3356.
-
Resolution: Fixed
  Assignee: James Peach

 make inactivity cop default inactivity timeout dynamic
 --

 Key: TS-3356
 URL: https://issues.apache.org/jira/browse/TS-3356
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Reporter: James Peach
Assignee: James Peach
  Labels: A
 Fix For: 5.3.0


 Make {{proxy.config.net.default_inactivity_timeout}} reloadable. Update the 
 docs and add a metric to count how many times this happens.





[jira] [Commented] (TS-3118) Feature to stop accepting new connections

2015-02-23 Thread James Peach (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333641#comment-14333641
 ] 

James Peach commented on TS-3118:
-

There's already a mechanism to automatically delay a restart until the number 
of client sessions drops below a configured threshold. It would not be that 
hard to extend this to stop accepting new connections. However, typically there 
is a layer above that is draining connections (i.e. some balancing 
infrastructure), so implementing this behavior would imply serving errors to 
clients, which is probably not desirable.

If we published the "shutting down" state as a metric, then it would be 
possible to write a plugin to take the appropriate action. Options could be 
serving the request, dropping the client, serving an HTTP redirect, etc.
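
A sketch of the plugin side ({{TSMgmtIntGet}} is the real API; the 
{{proxy.process.shutdown_pending}} metric name is invented for illustration):
{code}
#include <ts/ts.h>

// Consult a (hypothetical) "shutting down" metric and pick a per-request
// action; here we only log the state.
static int
on_read_request(TSCont, TSEvent, void *edata)
{
  TSHttpTxn txnp = static_cast<TSHttpTxn>(edata);
  TSMgmtInt draining = 0;

  if (TSMgmtIntGet("proxy.process.shutdown_pending", &draining) == TS_SUCCESS &&
      draining != 0) {
    TSDebug("drain", "shutting down: could redirect or drop this client");
  }
  TSHttpTxnReenable(txnp, TS_EVENT_HTTP_CONTINUE);
  return 0;
}

void
TSPluginInit(int, const char **)
{
  TSHttpHookAdd(TS_HTTP_READ_REQUEST_HDR_HOOK, TSContCreate(on_read_request, NULL));
}
{code}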

 Feature to stop accepting new connections
 -

 Key: TS-3118
 URL: https://issues.apache.org/jira/browse/TS-3118
 Project: Traffic Server
  Issue Type: New Feature
Reporter: Miles Libbey
  Labels: A
 Fix For: 5.3.0


 When taking an ATS machine out of production, it would be nice to have ATS 
 stop accepting new connections without affecting the existing client 
 connections to minimize client disruption.





[jira] [Updated] (TS-3401) AIO blocks under lock contention

2015-02-23 Thread Brian Geffon (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Geffon updated TS-3401:
-
Attachment: aio.patch

 AIO blocks under lock contention
 

 Key: TS-3401
 URL: https://issues.apache.org/jira/browse/TS-3401
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Brian Geffon
 Attachments: aio.patch


 In {{aio_thread_main()}} while trying to process AIO ops the AIO thread will 
 wait on the mutex for the op which obviously blocks other AIO ops from 
 processing. We should use a try lock instead and reschedule the ops that we 
 couldn't immediately process. Patch attached. Waiting for reviews.





[jira] [Assigned] (TS-3401) AIO blocks under lock contention

2015-02-23 Thread Brian Geffon (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Geffon reassigned TS-3401:


Assignee: Brian Geffon

 AIO blocks under lock contention
 

 Key: TS-3401
 URL: https://issues.apache.org/jira/browse/TS-3401
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Brian Geffon
Assignee: Brian Geffon
 Attachments: aio.patch


 In {{aio_thread_main()}} while trying to process AIO ops the AIO thread will 
 wait on the mutex for the op which obviously blocks other AIO ops from 
 processing. We should use a try lock instead and reschedule the ops that we 
 couldn't immediately process. Patch attached. Waiting for reviews.





[jira] [Commented] (TS-3347) make ink_assert a no-op in release builds

2015-02-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333572#comment-14333572
 ] 

ASF subversion and git services commented on TS-3347:
-

Commit 0143ca195270fb17f003b05cf457570aa5a807ba in trafficserver's branch 
refs/heads/master from [~jpe...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=0143ca1 ]

TS-3347: fix assertions that have side-effects

Coverity CID #1242347
Coverity CID #1242346
Coverity CID #1242344
Coverity CID #1242343
Coverity CID #1242341
Coverity CID #1242342


 make ink_assert a no-op in release builds
 -

 Key: TS-3347
 URL: https://issues.apache.org/jira/browse/TS-3347
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Reporter: Sudheer Vinukonda
 Fix For: 5.3.0


 ink_assert() is expected to be enabled only in DEBUG builds. However, the 
 current definition of ink_assert() for non-DEBUG (release) builds still 
 evaluates the expression passed to it, which seems totally unnecessary. 
 Opening this jira to make ink_assert() a complete no-op for non-DEBUG 
 (release) builds. Along with the change of definition, the entire TS code 
 base needs to be scanned for code that relies on the expression evaluated 
 in the ink_assert(), and fixed accordingly. 
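
 For illustration (the macro names and the write() example are ours, not the 
 libts definitions), the difference between evaluating and discarding the 
 expression:
 {code}
 #include <unistd.h>
 
 // A release definition that still evaluates its argument:
 #define ink_assert_evaluating(EX) ((void)(EX))
 // A true no-op that discards the expression entirely:
 #define ink_assert_noop(EX) ((void)0)
 
 int main() {
   const char buf[4] = "abc";
   // Code like this silently relies on the side effect; with the no-op
   // definition the write() never runs, hence the need to audit call sites.
   ink_assert_noop(write(STDOUT_FILENO, buf, 3) == 3);
   (void)buf;
   return 0;
 }
 {code}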





[jira] [Commented] (TS-3287) Coverity fixes for v5.3.0 by zwoop

2015-02-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333592#comment-14333592
 ] 

ASF subversion and git services commented on TS-3287:
-

Commit 8c71ba11240fe6cd62cf2a5d818616b5adee33b1 in trafficserver's branch 
refs/heads/master from [~jpe...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=8c71ba1 ]

TS-3287: fix memory management in the escalate plugin

Keeping duplicate pointers in the escalation map will double-free.
Convert this code to just copy the retry target to each entry.

This also fixes Coverity CID #1200022.
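
The fix pattern, sketched generically (not the plugin code):
{code}
#include <map>
#include <string>

// Each map entry owns its own copy of the retry target, so cleanup frees
// each exactly once; two entries sharing one raw pointer would double-free.
int main() {
  std::map<unsigned, std::string> escalation; // status code -> retry target
  std::string target = "origin.example.com";  // illustrative retry target
  escalation[502] = target; // independent copy
  escalation[504] = target; // independent copy
  return 0;
}
{code}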


 Coverity fixes for v5.3.0 by zwoop
 --

 Key: TS-3287
 URL: https://issues.apache.org/jira/browse/TS-3287
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Leif Hedstrom
Assignee: Leif Hedstrom
 Fix For: 5.3.0


 This is my JIRA for Coverity commits for v5.3.0.





[jira] [Commented] (TS-3072) Debug logging for a single connection in production traffic.

2015-02-23 Thread James Peach (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333597#comment-14333597
 ] 

James Peach commented on TS-3072:
-

I'll be committing {{traffic_ctl}} today. From now on, please add any 
management support to {{traffic_ctl}} rather than {{traffic_line}}.

 Debug logging for a single connection in production traffic.
 

 Key: TS-3072
 URL: https://issues.apache.org/jira/browse/TS-3072
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core, Logging
Affects Versions: 5.0.1
Reporter: Sudheer Vinukonda
  Labels: Yahoo
 Fix For: 5.3.0


 Presently, when there's a production issue (e.g. TS-3049, TS-2983, etc.), it is 
 really hard to isolate/debug under high traffic. Turning on debug logs in 
 production traffic is unfortunately not an option due to the performance impact. 
 Even if you took the performance hit and turned on the logs, it is just as hard 
 to separate out the logs for a single connection/transaction among the millions 
 of log lines output in a short period of time.
 I think it would be good if there were a way to turn on debug logs in a 
 controlled manner in a production environment. One simple option is to support 
 a config setting, for example a client IP, which when set would turn on 
 debug logs for any connection made by just that one client. If needed, 
 instead of one client IP, we may allow configuring up to 'n' (say, 5) 
 client IPs. 
 If there are other ideas, please comment.





[jira] [Commented] (TS-3358) add access checking to the management API

2015-02-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333746#comment-14333746
 ] 

ASF subversion and git services commented on TS-3358:
-

Commit 5f332c4b9a9f471af4a043f041e02ab8766f0c50 in trafficserver's branch 
refs/heads/master from [~jpe...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=5f332c4 ]

TS-3358: peer credential checking on the management socket

Add peer credential checking to the management API socket. This
allows non-privileged processes to perform read-only operations,
reducing the need to run traffic_line as root, and reducing the
level of privilege needed by monitoring tools.

Factor out common unix domain socket creation. Add
proxy.config.admin.api.restricted configuration option to retain
the original socket permissions.


 add access checking to the management API
 -

 Key: TS-3358
 URL: https://issues.apache.org/jira/browse/TS-3358
 Project: Traffic Server
  Issue Type: Improvement
  Components: Configuration, Management API
Reporter: James Peach
Assignee: James Peach
 Fix For: 5.3.0


 Many of the most common uses of the management API are checking metrics and 
 configuration values. For these read-only cases, running {{traffic_line}} as 
 root is overkill. Add unix domain socket credential checking so that we can 
 allow read-only operations to be performed by unprivileged processes. This 
 change also adds a configuration option to retain the current behaviour 
 (retained by default).
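
 A minimal sketch (not the ATS implementation) of peer credential checking on 
 a unix domain socket:
 {code}
 #define _GNU_SOURCE 1 // for struct ucred on glibc
 #include <sys/socket.h>
 #include <sys/types.h>
 
 // Read the peer's uid via SO_PEERCRED (Linux) and allow privileged
 // operations only for root peers.
 static bool peer_is_root(int fd) {
   struct ucred cred;
   socklen_t len = sizeof(cred);
   if (getsockopt(fd, SOL_SOCKET, SO_PEERCRED, &cred, &len) == -1)
     return false; // cannot verify: deny privileged access
   return cred.uid == 0;
 }
 {code}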





[jira] [Created] (TS-3402) rationalize lock debugging diagnostics

2015-02-23 Thread James Peach (JIRA)
James Peach created TS-3402:
---

 Summary: rationalize lock debugging diagnostics
 Key: TS-3402
 URL: https://issues.apache.org/jira/browse/TS-3402
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cleanup, Core
Reporter: James Peach


The locking objects have various debug capabilities that are conditional on the 
{{ERROR_CONFIG_TAG_LOCKS}} define. No-one ever knows to turn this on, so make 
it conditional on {{DEBUG}} instead.
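
Sketch of the change (the names here are illustrative):
{code}
#include <cstdio>

// Hypothetical diagnostic hook.
static void record_lock_site(const char *file, int line) {
  std::printf("lock taken at %s:%d\n", file, line);
}

// Gate lock diagnostics on the standard DEBUG flag rather than a
// bespoke define nobody sets:
#if defined(DEBUG) // previously: #if defined(ERROR_CONFIG_TAG_LOCKS)
#define LOCK_DIAG() record_lock_site(__FILE__, __LINE__)
#else
#define LOCK_DIAG() ((void)0)
#endif

int main() {
  LOCK_DIAG(); // a no-op unless built with -DDEBUG
  return 0;
}
{code}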





[jira] [Commented] (TS-3401) AIO blocks under lock contention

2015-02-23 Thread Brian Geffon (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333769#comment-14333769
 ] 

Brian Geffon commented on TS-3401:
--

After looking into this issue with [~amc], the blocking is actually required. 
After more digging, it appears the only place that uses 
{{AIO_CALLBACK_THREAD_AIO}} is in CacheWrite.cc, in {{aggWrite()}}. There is a 
comment that states:

{code}
  /*
Callback on AIO thread so that we can issue a new write ASAP
as all writes are serialized in the volume.  This is not necessary
for reads proceed independently.
   */
{code}

Because the offsets are calculated beforehand, the writes must be serialized. 
Thus we cannot reschedule the writes. I'll close this as NOT A BUG.

 AIO blocks under lock contention
 

 Key: TS-3401
 URL: https://issues.apache.org/jira/browse/TS-3401
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Brian Geffon
Assignee: Brian Geffon
 Attachments: aio.patch


 In {{aio_thread_main()}} while trying to process AIO ops the AIO thread will 
 wait on the mutex for the op which obviously blocks other AIO ops from 
 processing. We should use a try lock instead and reschedule the ops that we 
 couldn't immediately process. Patch attached. Waiting for reviews.





[jira] [Resolved] (TS-3402) rationalize lock debugging diagnostics

2015-02-23 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach resolved TS-3402.
-
   Resolution: Fixed
Fix Version/s: 5.3.0
 Assignee: James Peach

 rationalize lock debugging diagnostics
 --

 Key: TS-3402
 URL: https://issues.apache.org/jira/browse/TS-3402
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cleanup, Core
Reporter: James Peach
Assignee: James Peach
 Fix For: 5.3.0


 The locking objects have various debug capabilities that are conditional on 
 the {{ERROR_CONFIG_TAG_LOCKS}} define. No-one ever knows to turn this on, so 
 make it conditional on {{DEBUG}} instead.





[jira] [Closed] (TS-3401) AIO blocks under lock contention

2015-02-23 Thread Brian Geffon (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Geffon closed TS-3401.

Resolution: Not a Problem

 AIO blocks under lock contention
 

 Key: TS-3401
 URL: https://issues.apache.org/jira/browse/TS-3401
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Brian Geffon
Assignee: Brian Geffon
 Attachments: aio.patch


 In {{aio_thread_main()}} while trying to process AIO ops the AIO thread will 
 wait on the mutex for the op which obviously blocks other AIO ops from 
 processing. We should use a try lock instead and reschedule the ops that we 
 couldn't immediately process. Patch attached. Waiting for reviews.





[jira] [Commented] (TS-3402) rationalize lock debugging diagnostics

2015-02-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333771#comment-14333771
 ] 

ASF subversion and git services commented on TS-3402:
-

Commit ac4a7dbe26ec83b66d311d3b9e1f9280a99172f6 in trafficserver's branch 
refs/heads/master from [~jpe...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=ac4a7db ]

TS-3402: rationalize lock debugging infrastructure

Use SrcLoc to track lock-holding locations. Enable lock diagnostics
in DEBUG builds when the lock debug tag is set. Previously you
had to do this *and* toggle some other magic #defines.


 rationalize lock debugging diagnostics
 --

 Key: TS-3402
 URL: https://issues.apache.org/jira/browse/TS-3402
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cleanup, Core
Reporter: James Peach
 Fix For: 5.3.0


 The locking objects have various debug capabilities that are conditional on 
 the {{ERROR_CONFIG_TAG_LOCKS}} define. No-one ever knows to turn this on, so 
 make it conditional on {{DEBUG}} instead.





Re: Build failed in Jenkins: tsqa-master #158

2015-02-23 Thread James Peach
This failed because TS-3358 added explicit access checks to the management 
socket. Unless proxy.config.admin.api.restricted is 0, access is restricted 
to root processes. In the case of tsqa, we run the whole thing unprivileged. 
This used to work because access was controlled by filesystem permissions.

I'm open to suggestions as to what the right behaviour should be in this case 
...
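
For an unprivileged run like tsqa's, something along these lines in 
records.config should presumably restore access (per the behaviour described 
above):

  CONFIG proxy.config.admin.api.restricted INT 0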


 On Feb 23, 2015, at 2:25 PM, jenk...@ci.trafficserver.apache.org wrote:
 
 See https://ci.trafficserver.apache.org/job/tsqa-master/158/changes
 
 Changes:
 
 [James Peach] TS-3358: peer credential checking on the management socket
 
 --
 [...truncated 14737 lines...]
 FAIL: failed to fetch value for proxy.config.log.extended2_log_name
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.extended2_log_header
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.separate_icp_logs
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.separate_host_logs
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_host
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_port
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_secret
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_host_tagged
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_retry_sec
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_max_send_buffers
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_preproc_threads
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.rolling_offset_hr
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.sampling_frequency
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.space_used_frequency
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.file_stat_frequency
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.ascii_buffer_size
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.max_line_size
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_rolling_interval_sec
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_log_enabled
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_server_ip_addr
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_server_port
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_top_sites
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_url_filter
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_log_filters
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.url_remap.default_to_server_pac
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for 
 proxy.config.url_remap.default_to_server_pac_port
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.url_remap.filename
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.url_remap.url_remap_mode
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.url_remap.handle_backdoor_urls
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.ssl.enabled
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.ssl.SSLv2
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.ssl.SSLv3
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.ssl.TLSv1
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.ssl.TLSv1_1
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.ssl.TLSv1_2
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.ssl.client.SSLv2
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.ssl.client.SSLv3
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.ssl.client.TLSv1
 traffic_line: [13] Operation not permitted.
 FAIL: 

Build failed in Jenkins: clang-analyzer #453

2015-02-23 Thread jenkins
See https://ci.trafficserver.apache.org/job/clang-analyzer/453/changes

Changes:

[Bryan Call] TS-2729: Add HTTP/2 support to ATS

--
[...truncated 1878 lines...]
  CXX  tcpinfo.lo
  CXXLDtcpinfo.la
make[2]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/tcpinfo'
Making all in experimental
make[2]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental'
Making all in authproxy
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/authproxy'
  CXX  utils.lo
  CXX  authproxy.lo
  CXXLDauthproxy.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/authproxy'
Making all in background_fetch
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/background_fetch'
  CXX  background_fetch.lo
  CXXLDbackground_fetch.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/background_fetch'
Making all in balancer
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/balancer'
  CXX  roundrobin.lo
  CXX  hash.lo
  CXX  balancer.lo
  CXXLDbalancer.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/balancer'
Making all in buffer_upload
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/buffer_upload'
  CXX  buffer_upload.lo
  CXXLDbuffer_upload.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/buffer_upload'
Making all in channel_stats
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/channel_stats'
  CXX  channel_stats.lo
  CXXLDchannel_stats.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/channel_stats'
Making all in collapsed_connection
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/collapsed_connection'
  CXX  collapsed_connection.lo
  CXX  MurmurHash3.lo
  CXXLDcollapsed_connection.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/collapsed_connection'
Making all in custom_redirect
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/custom_redirect'
  CXX  custom_redirect.lo
  CXXLDcustom_redirect.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/custom_redirect'
Making all in epic
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/epic'
  CXX  epic.lo
  CXXLDepic.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/epic'
Making all in escalate
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/escalate'
  CXX  escalate.lo
  CXXLDescalate.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/escalate'
Making all in esi
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/esi'
  CXX  esi.lo
  CXX  serverIntercept.lo
  CXX  lib/DocNode.lo
  CXX  combo_handler.lo
  CXX  lib/EsiParser.lo
  CXX  lib/EsiGzip.lo
  CXX  lib/EsiGunzip.lo
  CXX  lib/EsiProcessor.lo
In file included from combo_handler.cc:27:
In file included from 
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.8.3/../../../../include/c++/4.8.3/vector:64:
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.8.3/../../../../include/c++/4.8.3/bits/stl_vector.h:771:9:
 warning: Returning null reference
  { return *(this->_M_impl._M_start + __n); }
^~
1 warning generated.
  CXX  lib/Expression.lo
  CXX  lib/FailureInfo.lo
  CXX  lib/HandlerManager.lo
  CXX  lib/Stats.lo
  CXX  lib/Utils.lo
  CXX  lib/Variables.lo
  CXX  lib/gzip.lo
  CXX  test/print_funcs.lo
  CXX  test/HandlerMap.lo
  CXX  test/StubIncludeHandler.lo
  CXX  test/TestHandlerManager.lo
  CXX  fetcher/HttpDataFetcherImpl.lo
  CXXLDlibtest.la
  CXXLDlibesicore.la
  CXXLDcombo_handler.la
  CXXLDesi.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/esi'
Making all in generator
make[3]: Entering directory 

[jira] [Commented] (TS-2729) Add HTTP/2 support to ATS

2015-02-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333851#comment-14333851
 ] 

ASF subversion and git services commented on TS-2729:
-

Commit e4347ef80f2d4035eef7e3f5bd899d176e68cc23 in trafficserver's branch 
refs/heads/master from [~rokubo]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=e4347ef ]

TS-2729: Add HTTP/2 support to ATS


 Add HTTP/2 support to ATS
 -

 Key: TS-2729
 URL: https://issues.apache.org/jira/browse/TS-2729
 Project: Traffic Server
  Issue Type: New Feature
  Components: HTTP/2
Reporter: Ryo Okubo
Assignee: James Peach
  Labels: review
 Fix For: 5.3.0

 Attachments: 0003-h2-prototype.patch, 0004-h2-prototype.patch, 
 0005-h2-prototype.patch, h2c_upgrade.patch, hpack.patch, http2-0004.patch, 
 improve-mime.patch


 h2. Overview
 Support HTTP/2 as a client-side L7 protocol. This feature is implemented in 
 the ATS core.
 It currently supports the latest HTTP/2 draft version, h2-16:
 https://tools.ietf.org/html/draft-ietf-httpbis-http2-16
 h2. How to test
 # Build ATS normally; you need neither any special build option nor an 
 external HTTP/2 library.
 # Configure settings to use https.
 # Add this to records.config to enable HTTP/2:
 {noformat}
 CONFIG proxy.config.http2.enabled INT 1
 {noformat}
 # Access ATS with an HTTP/2 client, for example as shown below.
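
 With the {{nghttp}} client from the nghttp2 project, for instance (hostname 
 illustrative):
 {noformat}
 $ nghttp -v https://ats.example.com/
 {noformat}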
 h2. Descriptions of current attached patches.
 * 0003-h2-prototype.patch
 ** For experimentation; please don't merge it. It enables ATS to interpret 
 HTTP/2 requests and respond to them, but for now the code is unsafe and dirty. 
 More refactoring is required.
 h2. DONE
 * Fundamental HTTP/2 frame handling
 * Flow control
 * Some error handling
 h2. TODO
 * Refactoring
 * More debugging
 * Write documents
 * Add test tools for HPACK, HTTP/2 frames
 h2. No plan
 * [Server 
 Push|https://tools.ietf.org/html/draft-ietf-httpbis-http2-16#section-8.2] 
 This would probably require support for [Link 
 preload|http://w3c.github.io/preload/#interoperability-with-http-link-header]?
 * [Stream 
 Priority|https://tools.ietf.org/html/draft-ietf-httpbis-http2-16#section-5.3]
 * [Alternative 
 Services|https://tools.ietf.org/html/draft-ietf-httpbis-alt-svc-06]





[jira] [Commented] (TS-2729) Add HTTP/2 support to ATS

2015-02-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333998#comment-14333998
 ] 

ASF subversion and git services commented on TS-2729:
-

Commit a12b6d29a0de12623b253c99348897ae322c6693 in trafficserver's branch 
refs/heads/master from [~bcall]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=a12b6d2 ]

TS-2729: Add HTTP/2 support to ATS
Fixed sign comparison and cleaned up the code


 Add HTTP/2 support to ATS
 -

 Key: TS-2729
 URL: https://issues.apache.org/jira/browse/TS-2729
 Project: Traffic Server
  Issue Type: New Feature
  Components: HTTP/2
Reporter: Ryo Okubo
Assignee: James Peach
  Labels: review
 Fix For: 5.3.0

 Attachments: 0003-h2-prototype.patch, 0004-h2-prototype.patch, 
 0005-h2-prototype.patch, h2c_upgrade.patch, hpack.patch, http2-0004.patch, 
 improve-mime.patch


 h2. Overview
 Support HTTP/2 as a client-side L7 protocol. This feature is implemented in 
 the ATS core.
 It currently supports the latest HTTP/2 draft version, h2-16:
 https://tools.ietf.org/html/draft-ietf-httpbis-http2-16
 h2. How to test
 # Build the ATS code normally. You need neither a special build option nor an 
 external HTTP/2 library.
 # Configure the settings to use HTTPS.
 # Add the following to records.config to enable HTTP/2:
 {noformat}
 CONFIG proxy.config.http2.enabled INT 1
 {noformat}
 # Access ATS with an HTTP/2 client.
 h2. Descriptions of current attached patches.
 * 0003-h2-prototype.patch
 ** For experimentation only; please don't merge it. It enables ATS to 
 interpret HTTP/2 requests and respond to them, but for now the code is 
 unsafe and dirty. More refactoring is required.
 h2. DONE
 * Fundamental HTTP/2 frame handling
 * Flow control
 * Some error handling
 h2. TODO
 * Refactoring
 * More debugging
 * Write documentation
 * Add test tools for HPACK, HTTP/2 frames
 h2. No plan
 * [Server 
 Push|https://tools.ietf.org/html/draft-ietf-httpbis-http2-16#section-8.2] 
 This would probably require support for [Link 
 preload|http://w3c.github.io/preload/#interoperability-with-http-link-header]?
 * [Stream 
 Priority|https://tools.ietf.org/html/draft-ietf-httpbis-http2-16#section-5.3]
 * [Alternative 
 Services|https://tools.ietf.org/html/draft-ietf-httpbis-alt-svc-06]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: clang-analyzer #452

2015-02-23 Thread jenkins
See https://ci.trafficserver.apache.org/job/clang-analyzer/452/changes

Changes:

[Bryan Call] TS-2729: Add HTTP/2 support to ATS

--
[...truncated 1878 lines...]
  CXX  tcpinfo.lo
  CXXLD    tcpinfo.la
make[2]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/tcpinfo'
Making all in experimental
make[2]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental'
Making all in authproxy
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/authproxy'
  CXX  utils.lo
  CXX  authproxy.lo
  CXXLD    authproxy.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/authproxy'
Making all in background_fetch
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/background_fetch'
  CXX  background_fetch.lo
  CXXLD    background_fetch.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/background_fetch'
Making all in balancer
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/balancer'
  CXX  hash.lo
  CXX  roundrobin.lo
  CXX  balancer.lo
  CXXLD    balancer.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/balancer'
Making all in buffer_upload
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/buffer_upload'
  CXX  buffer_upload.lo
  CXXLD    buffer_upload.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/buffer_upload'
Making all in channel_stats
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/channel_stats'
  CXX  channel_stats.lo
  CXXLD    channel_stats.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/channel_stats'
Making all in collapsed_connection
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/collapsed_connection'
  CXX  collapsed_connection.lo
  CXX  MurmurHash3.lo
  CXXLD    collapsed_connection.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/collapsed_connection'
Making all in custom_redirect
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/custom_redirect'
  CXX  custom_redirect.lo
  CXXLD    custom_redirect.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/custom_redirect'
Making all in epic
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/epic'
  CXX  epic.lo
  CXXLD    epic.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/epic'
Making all in escalate
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/escalate'
  CXX  escalate.lo
  CXXLD    escalate.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/escalate'
Making all in esi
make[3]: Entering directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/esi'
  CXX  esi.lo
  CXX  serverIntercept.lo
  CXX  combo_handler.lo
  CXX  lib/DocNode.lo
  CXX  lib/EsiParser.lo
In file included from combo_handler.cc:27:
In file included from 
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.8.3/../../../../include/c++/4.8.3/vector:64:
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.8.3/../../../../include/c++/4.8.3/bits/stl_vector.h:771:9:
 warning: Returning null reference
  { return *(this->_M_impl._M_start + __n); }
^~
1 warning generated.
  CXX  lib/EsiGzip.lo
  CXX  lib/EsiGunzip.lo
  CXX  lib/EsiProcessor.lo
  CXX  lib/Expression.lo
  CXX  lib/FailureInfo.lo
  CXX  lib/HandlerManager.lo
  CXX  lib/Stats.lo
  CXX  lib/Utils.lo
  CXX  lib/Variables.lo
  CXX  lib/gzip.lo
  CXX  test/print_funcs.lo
  CXX  test/HandlerMap.lo
  CXX  test/StubIncludeHandler.lo
  CXX  test/TestHandlerManager.lo
  CXX  fetcher/HttpDataFetcherImpl.lo
  CXXLD    libtest.la
  CXXLD    libesicore.la
  CXXLD    esi.la
  CXXLD    combo_handler.la
make[3]: Leaving directory 
`https://ci.trafficserver.apache.org/job/clang-analyzer/ws/src/plugins/experimental/esi'
Making all in generator
make[3]: Entering directory 

Jenkins build is back to normal : clang-analyzer #454

2015-02-23 Thread jenkins
See https://ci.trafficserver.apache.org/job/clang-analyzer/454/changes



Build failed in Jenkins: tsqa-master #159

2015-02-23 Thread jenkins
See https://ci.trafficserver.apache.org/job/tsqa-master/159/changes

Changes:

[James Peach] TS-3402: rationalize lock debugging infrastructure

[James Peach] TS-3403: stop parsing command-line options at the first non-option

[James Peach] TS-3367: add traffic_ctl, a new command-line interface to the 
management API

[Bryan Call] TS-2729: Add HTTP/2 support to ATS

--
[...truncated 14793 lines...]
FAIL: failed to fetch value for proxy.config.log.extended2_log_header
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.separate_icp_logs
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.separate_host_logs
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_host
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_port
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_secret
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_host_tagged
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_retry_sec
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_max_send_buffers
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_preproc_threads
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.rolling_offset_hr
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.sampling_frequency
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.space_used_frequency
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.file_stat_frequency
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.ascii_buffer_size
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.max_line_size
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_rolling_interval_sec
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_log_enabled
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_server_ip_addr
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_server_port
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_top_sites
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_url_filter
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_log_filters
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.url_remap.default_to_server_pac
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for 
proxy.config.url_remap.default_to_server_pac_port
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.url_remap.filename
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.url_remap.url_remap_mode
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.url_remap.handle_backdoor_urls
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.enabled
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.SSLv2
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.SSLv3
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.TLSv1
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.TLSv1_1
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.TLSv1_2
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.SSLv2
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.SSLv3
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.TLSv1
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.TLSv1_1
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.TLSv1_2
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.compression
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.cipher_suite
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch 

[jira] [Commented] (TS-3118) Feature to stop accepting new connections

2015-02-23 Thread Adam W. Dace (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334158#comment-14334158
 ] 

Adam W. Dace commented on TS-3118:
--

If it helps, our environment was configured solely on the basis of a -socket- 
layer.

That's what makes it so easy to take an application node out of rotation.

I'm sure different shops manage these types of things in different ways.

But honestly, and I'm no consultant, I almost guarantee many people have come 
to the same conclusion about the same problem.
It happens a lot in the IT industry.  Frankly, if it weren't for the long 
hours, I'd still be -in- the IT industry.  Amazing stuff.

 Feature to stop accepting new connections
 -

 Key: TS-3118
 URL: https://issues.apache.org/jira/browse/TS-3118
 Project: Traffic Server
  Issue Type: New Feature
Reporter: Miles Libbey
  Labels: A
 Fix For: 5.3.0


 When taking an ATS machine out of production, it would be nice to have ATS 
 stop accepting new connections without affecting the existing client 
 connections to minimize client disruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (TS-3118) Feature to stop accepting new connections

2015-02-23 Thread Adam W. Dace (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334158#comment-14334158
 ] 

Adam W. Dace edited comment on TS-3118 at 2/24/15 12:32 AM:


If it helps, our environment was configured solely on the basis of a -socket- 
layer.

That's what makes it so easy to take an application node out of rotation.

I'm sure different shops manage these types of things in different ways.

But honestly, and I'm no consultant, I almost guarantee many people have come 
to the same conclusion about the same problem.
It happens a lot in the IT industry.  Frankly, if it weren't for the long 
hours, I'd still be in the IT industry.  Amazing stuff.


was (Author: adace):
If it helps, our environment was configured solely on the basis of a -socket- 
layer.

That's what makes it so easy to take an application node out of rotation.

I'm sure different shops manage these types of things in different ways.

But honestly, and I'm no consultant, I almost guarantee many people have come 
to the same conclusion about the same problem.
It happens a lot in the IT industry.  Frankly, if it weren't for the long 
hours, I'd still be -in- the IT industry.  Amazing stuff.

 Feature to stop accepting new connections
 -

 Key: TS-3118
 URL: https://issues.apache.org/jira/browse/TS-3118
 Project: Traffic Server
  Issue Type: New Feature
Reporter: Miles Libbey
  Labels: A
 Fix For: 5.3.0


 When taking an ATS machine out of production, it would be nice to have ATS 
 stop accepting new connections without affecting the existing client 
 connections to minimize client disruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-3406) Change TS_NPN_PROTOCOL_HTTP_2_0 to h2

2015-02-23 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom reassigned TS-3406:
-

Assignee: Leif Hedstrom

 Change TS_NPN_PROTOCOL_HTTP_2_0 to h2
 ---

 Key: TS-3406
 URL: https://issues.apache.org/jira/browse/TS-3406
 Project: Traffic Server
  Issue Type: Improvement
  Components: HTTP/2
Reporter: Leif Hedstrom
Assignee: Leif Hedstrom
 Fix For: 5.3.0


 With H2 landed on master, and the RFC finalized, we should change the 
 identifier string to just h2.
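 Presumably the fix is a one-line rename where the NPN protocol identifiers 
 are defined; as a sketch (not the actual commit, and assuming the previous 
 value was the draft token h2-16 from TS-2729):
 {code}
 /* sketch only: the rename discussed above */
 #define TS_NPN_PROTOCOL_HTTP_2_0 "h2" /* was a draft token such as "h2-16" */
 {code}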



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (TS-3406) Change TS_NPN_PROTOCOL_HTTP_2_0 to h2

2015-02-23 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom resolved TS-3406.
---
Resolution: Fixed

 Change TS_NPN_PROTOCOL_HTTP_2_0 to h2
 ---

 Key: TS-3406
 URL: https://issues.apache.org/jira/browse/TS-3406
 Project: Traffic Server
  Issue Type: Improvement
  Components: HTTP/2
Reporter: Leif Hedstrom
Assignee: Leif Hedstrom
 Fix For: 5.3.0


 With H2 landed on master, and the RFC finalized, we should change the 
 identifier string to just h2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3406) Change TS_NPN_PROTOCOL_HTTP_2_0 to h2

2015-02-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334422#comment-14334422
 ] 

ASF subversion and git services commented on TS-3406:
-

Commit 6041b439aa78e74d0e35bbfb2aeb3e66bc64368d in trafficserver's branch 
refs/heads/master from [~zwoop]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=6041b43 ]

TS-3406 Change TS_NPN_PROTOCOL_HTTP_2_0 to h2


 Change TS_NPN_PROTOCOL_HTTP_2_0 to h2
 ---

 Key: TS-3406
 URL: https://issues.apache.org/jira/browse/TS-3406
 Project: Traffic Server
  Issue Type: Improvement
  Components: HTTP/2
Reporter: Leif Hedstrom
 Fix For: 5.3.0


 With H2 landed on master, and the RFC finalized, we should change the 
 identifier string to just h2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3118) Feature to stop accepting new connections

2015-02-23 Thread Adam W. Dace (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334140#comment-14334140
 ] 

Adam W. Dace commented on TS-3118:
--

Meh.  Sorry for my reluctance but honestly I'm not sure if I'm violating any 
legal non-disclosure terms I'd agreed to by talking about this too much.

To directly answer your question, I believe the logic(from the load-balancer) 
went something like this:

1) Client HTTP request comes in.  We hold the socket open as the load-balancer 
is also acting as a socket-layer endpoint(security reasons).
2) Find a server that can handle the request.
3) Server isn't listening on the required port?  No problem, failover to the 
next node(various failover schemes, round-robin, etc).  Note, no errors here.  
Just delays.
4) Deliver HTTP request to an active node, with the socket-layer coming from 
the load-balancer(again, security).
5) Fulfill request.
6) Deliver that HTTP response to the actual client, waiting for its HTTP 
response on another socket.
7) HTTP request complete.

FWIW, I hope this helps.  This was my understanding of the behavior of the 
load-balancer itself.
Not impossible in coding terms, but definitely impressive.  :-)
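
In server terms, the failover in step 3 above relies on the node simply not 
listening any more: closing the listen socket makes new connects fail (so the 
load-balancer moves on) while already-accepted connections keep being served. 
A minimal sketch of that pattern (plain C, not ATS code; listen_fd stands in 
for the real accept socket):

{code}
#include <unistd.h>

static int listen_fd = -1; /* stand-in: the server's accept socket */

/* Stop accepting new connections: once the listening socket is closed
 * the kernel refuses new connects on this port, so the load-balancer
 * fails over to the next node, while connections that were already
 * accepted stay open and keep being serviced. */
void
stop_accepting(void)
{
    if (listen_fd >= 0) {
        close(listen_fd);
        listen_fd = -1;
    }
}
{code}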


 Feature to stop accepting new connections
 -

 Key: TS-3118
 URL: https://issues.apache.org/jira/browse/TS-3118
 Project: Traffic Server
  Issue Type: New Feature
Reporter: Miles Libbey
  Labels: A
 Fix For: 5.3.0


 When taking an ATS machine out of production, it would be nice to have ATS 
 stop accepting new connections without affecting the existing client 
 connections to minimize client disruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : freebsd_10-master » clang,freebsd_10,release #772

2015-02-23 Thread jenkins
See 
https://ci.trafficserver.apache.org/job/freebsd_10-master/compiler=clang,label=freebsd_10,type=release/772/



[jira] [Work started] (TS-3299) InactivityCop broken

2015-02-23 Thread Sudheer Vinukonda (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TS-3299 started by Sudheer Vinukonda.
-
 InactivityCop broken
 

 Key: TS-3299
 URL: https://issues.apache.org/jira/browse/TS-3299
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Affects Versions: 5.2.0
Reporter: Sudheer Vinukonda
Assignee: Sudheer Vinukonda
 Fix For: 5.3.0


 The patch in TS-3196 seems to result in fd leak in our prod. There are a 
 bunch of hung sockets (in close-wait state), stuck forever (remain leaked for 
 days after stopping the traffic). Debugging further, it seems that the 
 InactivityCop is broken by this patch. [NOTE: We have spdy enabled in prod, 
 but, I am not entirely sure, if this bug only affects spdy connections]
 Some info below for the leaked sockets (in close_wait state):
 {code}
 $ ss -s ; sudo traffic_line -r
 proxy.process.net.connections_currently_open; sudo traffic_line -r
 proxy.process.http.current_client_connections; sudo traffic_line -r
 proxy.process.http.current_server_connections; sudo ls -l /proc/$(pidof
 traffic_server)/fd/ 3/dev/null| wc -l
 Total: 29367 (kernel 29437)
 TCP:   78235 (estab 5064, closed 46593, orphaned 0, synrecv 0, timewait 15/0),
 ports 918
 Transport Total IP IPv6
 *  29437 - -
 RAW  0 0 0
 UDP  16 13 3
 TCP  31642 31637 5
 INET  31658 31650 8
 FRAG  0 0 0
 Password: 
 27689
 1
 1
 27939
 A snippet from lsof -p $(pidof traffic_server)
 [ET_NET 10024 nobody  240u  IPv4 2138575429 0t0 TCP 
 67.195.33.62:443->66.87.139.114:29381 (CLOSE_WAIT)
 [ET_NET 10024 nobody  241u  IPv4 2137093945 0t0 TCP 
 67.195.33.62:443->201.122.209.180:49274 (CLOSE_WAIT)
 [ET_NET 10024 nobody  243u  IPv4 2136018789 0t0 TCP 
 67.195.33.62:443->173.225.111.9:38133 (CLOSE_WAIT)
 [ET_NET 10024 nobody  245u  IPv4 2135996293 0t0 TCP 
 67.195.33.62:443->172.243.79.180:52701 (CLOSE_WAIT)
 [ET_NET 10024 nobody  248u  IPv4 2136468896 0t0 TCP 
 67.195.33.62:443->173.225.111.82:42273 (CLOSE_WAIT)
 [ET_NET 10024 nobody  253u  IPv4 2140213864 0t0 TCP 
 67.195.33.62:443->174.138.185.120:34936 (CLOSE_WAIT)
 [ET_NET 10024 nobody  259u  IPv4 2137861176 0t0 TCP 
 67.195.33.62:443->76.199.250.133:60631 (CLOSE_WAIT)
 [ET_NET 10024 nobody  260u  IPv4 2139081493 0t0 TCP 
 67.195.33.62:443->187.74.154.214:58800 (CLOSE_WAIT)
 [ET_NET 10024 nobody  261u  IPv4 2134948565 0t0 TCP 
 67.195.33.62:443->23.242.49.117:4127 (CLOSE_WAIT)
 [ET_NET 10024 nobody  262u  IPv4 2135708046 0t0 TCP 
 67.195.33.62:443->66.241.71.243:50318 (CLOSE_WAIT)
 [ET_NET 10024 nobody  263u  IPv4 2138896897 0t0 TCP 
 67.195.33.62:443->73.35.151.106:52414 (CLOSE_WAIT)
 [ET_NET 10024 nobody  264u  IPv4 2135589029 0t0 TCP 
 67.195.33.62:443->96.251.12.27:62426 (CLOSE_WAIT)
 [ET_NET 10024 nobody  265u  IPv4 2134930235 0t0 TCP 
 67.195.33.62:443->207.118.3.196:50690 (CLOSE_WAIT)
 [ET_NET 10024 nobody  267u  IPv4 2137837515 0t0 TCP 
 67.195.33.62:443->98.112.195.98:52028 (CLOSE_WAIT)
 [ET_NET 10024 nobody  269u  IPv4 2135272855 0t0 TCP 
 67.195.33.62:443->24.1.230.25:57265 (CLOSE_WAIT)
 [ET_NET 10024 nobody  270u  IPv4 2135820802 0t0 TCP 
 67.195.33.62:443->24.75.122.66:14345 (CLOSE_WAIT)
 [ET_NET 10024 nobody  271u  IPv4 2135475042 0t0 TCP 
 67.195.33.62:443->65.102.35.112:49188 (CLOSE_WAIT)
 [ET_NET 10024 nobody  272u  IPv4 2135328974 0t0 TCP 
 67.195.33.62:443->209.242.195.252:54890 (CLOSE_WAIT)
 [ET_NET 10024 nobody  273u  IPv4 2137542791 0t0 TCP 
 67.195.33.62:443->76.79.183.188:47048 (CLOSE_WAIT)
 [ET_NET 10024 nobody  274u  IPv4 2134806135 0t0 TCP 
 67.195.33.62:443->189.251.149.36:58106 (CLOSE_WAIT)
 [ET_NET 10024 nobody  275u  IPv4 2140126017 0t0 TCP 
 67.195.33.62:443->68.19.173.44:1397 (CLOSE_WAIT)
 [ET_NET 10024 nobody  276u  IPv4 2134636089 0t0 TCP 
 67.195.33.62:443->67.44.192.72:22112 (CLOSE_WAIT)
 [ET_NET 10024 nobody  278u  IPv4 2134708339 0t0 TCP 
 67.195.33.62:443->107.220.216.155:51242 (CLOSE_WAIT)
 [ET_NET 10024 nobody  279u  IPv4 2134580888 0t0 TCP 
 67.195.33.62:443->50.126.116.209:59432 (CLOSE_WAIT)
 [ET_NET 10024 nobody  281u  IPv4 2134868131 0t0 TCP 
 67.195.33.62:443->108.38.255.44:4612 (CLOSE_WAIT)
 [ET_NET 10024 nobody  

[jira] [Resolved] (TS-3299) InactivityCop broken

2015-02-23 Thread Sudheer Vinukonda (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudheer Vinukonda resolved TS-3299.
---
Resolution: Fixed

 InactivityCop broken
 

 Key: TS-3299
 URL: https://issues.apache.org/jira/browse/TS-3299
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Affects Versions: 5.2.0
Reporter: Sudheer Vinukonda
Assignee: Sudheer Vinukonda
 Fix For: 5.3.0


 The patch in TS-3196 seems to result in fd leak in our prod. There are a 
 bunch of hung sockets (in close-wait state), stuck forever (remain leaked for 
 days after stopping the traffic). Debugging further, it seems that the 
 InactivityCop is broken by this patch. [NOTE: We have spdy enabled in prod, 
 but, I am not entirely sure, if this bug only affects spdy connections]
 Some info below for the leaked sockets (in close_wait state):
 {code}
 $ ss -s ; sudo traffic_line -r
 proxy.process.net.connections_currently_open; sudo traffic_line -r
 proxy.process.http.current_client_connections; sudo traffic_line -r
 proxy.process.http.current_server_connections; sudo ls -l /proc/$(pidof
 traffic_server)/fd/ 3/dev/null| wc -l
 Total: 29367 (kernel 29437)
 TCP:   78235 (estab 5064, closed 46593, orphaned 0, synrecv 0, timewait 15/0),
 ports 918
 Transport Total IP IPv6
 *  29437 - -
 RAW  0 0 0
 UDP  16 13 3
 TCP  31642 31637 5
 INET  31658 31650 8
 FRAG  0 0 0
 Password: 
 27689
 1
 1
 27939
 A snippet from lsof -p $(pidof traffic_server)
 [ET_NET 10024 nobody  240u  IPv4 2138575429 0t0 TCP 
 67.195.33.62:443->66.87.139.114:29381 (CLOSE_WAIT)
 [ET_NET 10024 nobody  241u  IPv4 2137093945 0t0 TCP 
 67.195.33.62:443->201.122.209.180:49274 (CLOSE_WAIT)
 [ET_NET 10024 nobody  243u  IPv4 2136018789 0t0 TCP 
 67.195.33.62:443->173.225.111.9:38133 (CLOSE_WAIT)
 [ET_NET 10024 nobody  245u  IPv4 2135996293 0t0 TCP 
 67.195.33.62:443->172.243.79.180:52701 (CLOSE_WAIT)
 [ET_NET 10024 nobody  248u  IPv4 2136468896 0t0 TCP 
 67.195.33.62:443->173.225.111.82:42273 (CLOSE_WAIT)
 [ET_NET 10024 nobody  253u  IPv4 2140213864 0t0 TCP 
 67.195.33.62:443->174.138.185.120:34936 (CLOSE_WAIT)
 [ET_NET 10024 nobody  259u  IPv4 2137861176 0t0 TCP 
 67.195.33.62:443->76.199.250.133:60631 (CLOSE_WAIT)
 [ET_NET 10024 nobody  260u  IPv4 2139081493 0t0 TCP 
 67.195.33.62:443->187.74.154.214:58800 (CLOSE_WAIT)
 [ET_NET 10024 nobody  261u  IPv4 2134948565 0t0 TCP 
 67.195.33.62:443->23.242.49.117:4127 (CLOSE_WAIT)
 [ET_NET 10024 nobody  262u  IPv4 2135708046 0t0 TCP 
 67.195.33.62:443->66.241.71.243:50318 (CLOSE_WAIT)
 [ET_NET 10024 nobody  263u  IPv4 2138896897 0t0 TCP 
 67.195.33.62:443->73.35.151.106:52414 (CLOSE_WAIT)
 [ET_NET 10024 nobody  264u  IPv4 2135589029 0t0 TCP 
 67.195.33.62:443->96.251.12.27:62426 (CLOSE_WAIT)
 [ET_NET 10024 nobody  265u  IPv4 2134930235 0t0 TCP 
 67.195.33.62:443->207.118.3.196:50690 (CLOSE_WAIT)
 [ET_NET 10024 nobody  267u  IPv4 2137837515 0t0 TCP 
 67.195.33.62:443->98.112.195.98:52028 (CLOSE_WAIT)
 [ET_NET 10024 nobody  269u  IPv4 2135272855 0t0 TCP 
 67.195.33.62:443->24.1.230.25:57265 (CLOSE_WAIT)
 [ET_NET 10024 nobody  270u  IPv4 2135820802 0t0 TCP 
 67.195.33.62:443->24.75.122.66:14345 (CLOSE_WAIT)
 [ET_NET 10024 nobody  271u  IPv4 2135475042 0t0 TCP 
 67.195.33.62:443->65.102.35.112:49188 (CLOSE_WAIT)
 [ET_NET 10024 nobody  272u  IPv4 2135328974 0t0 TCP 
 67.195.33.62:443->209.242.195.252:54890 (CLOSE_WAIT)
 [ET_NET 10024 nobody  273u  IPv4 2137542791 0t0 TCP 
 67.195.33.62:443->76.79.183.188:47048 (CLOSE_WAIT)
 [ET_NET 10024 nobody  274u  IPv4 2134806135 0t0 TCP 
 67.195.33.62:443->189.251.149.36:58106 (CLOSE_WAIT)
 [ET_NET 10024 nobody  275u  IPv4 2140126017 0t0 TCP 
 67.195.33.62:443->68.19.173.44:1397 (CLOSE_WAIT)
 [ET_NET 10024 nobody  276u  IPv4 2134636089 0t0 TCP 
 67.195.33.62:443->67.44.192.72:22112 (CLOSE_WAIT)
 [ET_NET 10024 nobody  278u  IPv4 2134708339 0t0 TCP 
 67.195.33.62:443->107.220.216.155:51242 (CLOSE_WAIT)
 [ET_NET 10024 nobody  279u  IPv4 2134580888 0t0 TCP 
 67.195.33.62:443->50.126.116.209:59432 (CLOSE_WAIT)
 [ET_NET 10024 nobody  281u  IPv4 2134868131 0t0 TCP 
 67.195.33.62:443->108.38.255.44:4612 (CLOSE_WAIT)
 [ET_NET 10024 

[jira] [Assigned] (TS-3299) InactivityCop broken

2015-02-23 Thread Sudheer Vinukonda (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudheer Vinukonda reassigned TS-3299:
-

Assignee: Sudheer Vinukonda

 InactivityCop broken
 

 Key: TS-3299
 URL: https://issues.apache.org/jira/browse/TS-3299
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Affects Versions: 5.2.0
Reporter: Sudheer Vinukonda
Assignee: Sudheer Vinukonda
 Fix For: 5.3.0


 The patch in TS-3196 seems to result in fd leak in our prod. There are a 
 bunch of hung sockets (in close-wait state), stuck forever (remain leaked for 
 days after stopping the traffic). Debugging further, it seems that the 
 InactivityCop is broken by this patch. [NOTE: We have spdy enabled in prod, 
 but, I am not entirely sure, if this bug only affects spdy connections]
 Some info below for the leaked sockets (in close_wait state):
 {code}
 $ ss -s ; sudo traffic_line -r
 proxy.process.net.connections_currently_open; sudo traffic_line -r
 proxy.process.http.current_client_connections; sudo traffic_line -r
 proxy.process.http.current_server_connections; sudo ls -l /proc/$(pidof
 traffic_server)/fd/ 3/dev/null| wc -l
 Total: 29367 (kernel 29437)
 TCP:   78235 (estab 5064, closed 46593, orphaned 0, synrecv 0, timewait 15/0),
 ports 918
 Transport Total IP IPv6
 *  29437 - -
 RAW  0 0 0
 UDP  16 13 3
 TCP  31642 31637 5
 INET  31658 31650 8
 FRAG  0 0 0
 Password: 
 27689
 1
 1
 27939
 A snippet from lsof -p $(pidof traffic_server)
 [ET_NET 10024 nobody  240u  IPv4 2138575429 0t0 TCP 
 67.195.33.62:443->66.87.139.114:29381 (CLOSE_WAIT)
 [ET_NET 10024 nobody  241u  IPv4 2137093945 0t0 TCP 
 67.195.33.62:443->201.122.209.180:49274 (CLOSE_WAIT)
 [ET_NET 10024 nobody  243u  IPv4 2136018789 0t0 TCP 
 67.195.33.62:443->173.225.111.9:38133 (CLOSE_WAIT)
 [ET_NET 10024 nobody  245u  IPv4 2135996293 0t0 TCP 
 67.195.33.62:443->172.243.79.180:52701 (CLOSE_WAIT)
 [ET_NET 10024 nobody  248u  IPv4 2136468896 0t0 TCP 
 67.195.33.62:443->173.225.111.82:42273 (CLOSE_WAIT)
 [ET_NET 10024 nobody  253u  IPv4 2140213864 0t0 TCP 
 67.195.33.62:443->174.138.185.120:34936 (CLOSE_WAIT)
 [ET_NET 10024 nobody  259u  IPv4 2137861176 0t0 TCP 
 67.195.33.62:443->76.199.250.133:60631 (CLOSE_WAIT)
 [ET_NET 10024 nobody  260u  IPv4 2139081493 0t0 TCP 
 67.195.33.62:443->187.74.154.214:58800 (CLOSE_WAIT)
 [ET_NET 10024 nobody  261u  IPv4 2134948565 0t0 TCP 
 67.195.33.62:443->23.242.49.117:4127 (CLOSE_WAIT)
 [ET_NET 10024 nobody  262u  IPv4 2135708046 0t0 TCP 
 67.195.33.62:443->66.241.71.243:50318 (CLOSE_WAIT)
 [ET_NET 10024 nobody  263u  IPv4 2138896897 0t0 TCP 
 67.195.33.62:443->73.35.151.106:52414 (CLOSE_WAIT)
 [ET_NET 10024 nobody  264u  IPv4 2135589029 0t0 TCP 
 67.195.33.62:443->96.251.12.27:62426 (CLOSE_WAIT)
 [ET_NET 10024 nobody  265u  IPv4 2134930235 0t0 TCP 
 67.195.33.62:443->207.118.3.196:50690 (CLOSE_WAIT)
 [ET_NET 10024 nobody  267u  IPv4 2137837515 0t0 TCP 
 67.195.33.62:443->98.112.195.98:52028 (CLOSE_WAIT)
 [ET_NET 10024 nobody  269u  IPv4 2135272855 0t0 TCP 
 67.195.33.62:443->24.1.230.25:57265 (CLOSE_WAIT)
 [ET_NET 10024 nobody  270u  IPv4 2135820802 0t0 TCP 
 67.195.33.62:443->24.75.122.66:14345 (CLOSE_WAIT)
 [ET_NET 10024 nobody  271u  IPv4 2135475042 0t0 TCP 
 67.195.33.62:443->65.102.35.112:49188 (CLOSE_WAIT)
 [ET_NET 10024 nobody  272u  IPv4 2135328974 0t0 TCP 
 67.195.33.62:443->209.242.195.252:54890 (CLOSE_WAIT)
 [ET_NET 10024 nobody  273u  IPv4 2137542791 0t0 TCP 
 67.195.33.62:443->76.79.183.188:47048 (CLOSE_WAIT)
 [ET_NET 10024 nobody  274u  IPv4 2134806135 0t0 TCP 
 67.195.33.62:443->189.251.149.36:58106 (CLOSE_WAIT)
 [ET_NET 10024 nobody  275u  IPv4 2140126017 0t0 TCP 
 67.195.33.62:443->68.19.173.44:1397 (CLOSE_WAIT)
 [ET_NET 10024 nobody  276u  IPv4 2134636089 0t0 TCP 
 67.195.33.62:443->67.44.192.72:22112 (CLOSE_WAIT)
 [ET_NET 10024 nobody  278u  IPv4 2134708339 0t0 TCP 
 67.195.33.62:443->107.220.216.155:51242 (CLOSE_WAIT)
 [ET_NET 10024 nobody  279u  IPv4 2134580888 0t0 TCP 
 67.195.33.62:443->50.126.116.209:59432 (CLOSE_WAIT)
 [ET_NET 10024 nobody  281u  IPv4 2134868131 0t0 TCP 
 67.195.33.62:443->108.38.255.44:4612 (CLOSE_WAIT)
 

[jira] [Commented] (TS-2729) Add HTTP/2 support to ATS

2015-02-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334415#comment-14334415
 ] 

ASF subversion and git services commented on TS-2729:
-

Commit fb08ddd0952a53391b9ba654cf2bcec1fd2ea33f in trafficserver's branch 
refs/heads/master from [~jpe...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=fb08ddd ]

TS-2729: fix format string error


 Add HTTP/2 support to ATS
 -

 Key: TS-2729
 URL: https://issues.apache.org/jira/browse/TS-2729
 Project: Traffic Server
  Issue Type: New Feature
  Components: HTTP/2
Reporter: Ryo Okubo
Assignee: Bryan Call
  Labels: review
 Fix For: 5.3.0

 Attachments: 0003-h2-prototype.patch, 0004-h2-prototype.patch, 
 0005-h2-prototype.patch, h2c_upgrade.patch, hpack.patch, http2-0004.patch, 
 improve-mime.patch


 h2. Overview
 Support HTTP/2 as a client-side L7 protocol. This feature is implemented in 
 the ATS core.
 It currently supports the latest HTTP/2 draft version, h2-16:
 https://tools.ietf.org/html/draft-ietf-httpbis-http2-16
 h2. How to test
 # Build the ATS code normally. You need neither a special build option nor an 
 external HTTP/2 library.
 # Configure the settings to use HTTPS.
 # Add the following to records.config to enable HTTP/2:
 {noformat}
 CONFIG proxy.config.http2.enabled INT 1
 {noformat}
 # Access ATS with an HTTP/2 client.
 h2. Descriptions of current attached patches.
 * 0003-h2-prototype.patch
 ** For experimentation only; please don't merge it. It enables ATS to 
 interpret HTTP/2 requests and respond to them, but for now the code is 
 unsafe and dirty. More refactoring is required.
 h2. DONE
 * Fundamental HTTP/2 frame handling
 * Flow control
 * Some error handling
 h2. TODO
 * Refactoring
 * More debugging
 * Write documentation
 * Add test tools for HPACK, HTTP/2 frames
 h2. No plan
 * [Server 
 Push|https://tools.ietf.org/html/draft-ietf-httpbis-http2-16#section-8.2] 
 This would probably require support for [Link 
 preload|http://w3c.github.io/preload/#interoperability-with-http-link-header]?
 * [Stream 
 Priority|https://tools.ietf.org/html/draft-ietf-httpbis-http2-16#section-5.3]
 * [Alternative 
 Services|https://tools.ietf.org/html/draft-ietf-httpbis-alt-svc-06]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-2193) Trafficserver 4.1 Crash with proxy.config.dns.dedicated_thread = 1

2015-02-23 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334254#comment-14334254
 ] 

Leif Hedstrom commented on TS-2193:
---

I stumbled upon this more or less by mistake. In my setup, if I enable parent 
proxy, then the synthetic health checks can trigger this. It'd only trigger 
once per day in my test, when the entry expires out of HostDB, but it pretty 
consistently reproduces if I clear HostDB first. All I did was enable the DNS 
thread and parent proxying (the details there probably don't matter, as long 
as you have at least one rule in parent.config), and then I did

{code}
curl -x localhost:80 http://127.0.0.1/synthetic.txt
{code}
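
For reference, a minimal parent.config rule of the kind referred to above 
could look like this (a sketch; the parent hostname is hypothetical):

{code}
dest_domain=. parent="parent1.example.com:8080" round_robin=true
{code}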

 Trafficserver 4.1 Crash with proxy.config.dns.dedicated_thread = 1
 --

 Key: TS-2193
 URL: https://issues.apache.org/jira/browse/TS-2193
 Project: Traffic Server
  Issue Type: Bug
  Components: DNS
Affects Versions: 4.1.2
Reporter: Tommy Lee
Assignee: Alan M. Carroll
  Labels: Crash
 Fix For: 5.3.0

 Attachments: bt-01.txt


 Hi all,
   I've tried to enable DNS Thread without luck.
   When I set proxy.config.dns.dedicated_thread to 1, it crashes with the 
 information below.
   The ATS is working in Forward Proxy mode.
   Thanks in advance.
 --
 traffic.out
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/cache-4.1/bin/traffic_server - STACK TRACE: 
 /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x2af714875cb0]
 /usr/local/cache-4.1/bin/traffic_server(_Z16_acquire_sessionP13SessionBucketPK8sockaddrR7INK_MD5P6HttpSM+0x52)[0x51dac2]
 /usr/local/cache-4.1/bin/traffic_server(_ZN18HttpSessionManager15acquire_sessionEP12ContinuationPK8sockaddrPKcP17HttpClientSessionP6HttpSM+0x3d1)[0x51e0f1]
 /usr/local/cache-4.1/bin/traffic_server(_ZN6HttpSM19do_http_server_openEb+0x30c)[0x53644c]
 /usr/local/cache-4.1/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x6a0)[0x537560]
 /usr/local/cache-4.1/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x57e)[0x53743e]
 /usr/local/cache-4.1/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x57e)[0x53743e]
 /usr/local/cache-4.1/bin/traffic_server(_ZN6HttpSM27state_hostdb_reverse_lookupEiPv+0xb9)[0x526b99]
 /usr/local/cache-4.1/bin/traffic_server(_ZN6HttpSM12main_handlerEiPv+0xd8)[0x531be8]
 /usr/local/cache-4.1/bin/traffic_server[0x5d7c8a]
 /usr/local/cache-4.1/bin/traffic_server(_ZN18HostDBContinuation8dnsEventEiP7HostEnt+0x821)[0x5decd1]
 /usr/local/cache-4.1/bin/traffic_server(_ZN8DNSEntry9postEventEiP5Event+0x44)[0x5f7a94]
 /usr/local/cache-4.1/bin/traffic_server[0x5fd382]
 /usr/local/cache-4.1/bin/traffic_server(_ZN10DNSHandler8recv_dnsEiP5Event+0x852)[0x5fee72]
 /usr/local/cache-4.1/bin/traffic_server(_ZN10DNSHandler9mainEventEiP5Event+0x14)[0x5ffd94]
 /usr/local/cache-4.1/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x91)[0x6b2a41]
 /usr/local/cache-4.1/bin/traffic_server(_ZN7EThread7executeEv+0x514)[0x6b3534]
 /usr/local/cache-4.1/bin/traffic_server[0x6b17ea]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x2af71486de9a]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x2af71558dccd]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: tsqa-master #161

2015-02-23 Thread jenkins
See https://ci.trafficserver.apache.org/job/tsqa-master/161/changes

Changes:

[Bryan Call] TS-2729: Add HTTP/2 support to ATS

--
[...truncated 14778 lines...]
FAIL: failed to fetch value for proxy.config.log.extended2_log_header
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.separate_icp_logs
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.separate_host_logs
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_host
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_port
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_secret
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_host_tagged
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_retry_sec
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_max_send_buffers
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_preproc_threads
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.rolling_offset_hr
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.sampling_frequency
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.space_used_frequency
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.file_stat_frequency
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.ascii_buffer_size
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.max_line_size
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_rolling_interval_sec
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_log_enabled
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_server_ip_addr
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_server_port
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_top_sites
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_url_filter
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_log_filters
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.url_remap.default_to_server_pac
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for 
proxy.config.url_remap.default_to_server_pac_port
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.url_remap.filename
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.url_remap.url_remap_mode
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.url_remap.handle_backdoor_urls
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.enabled
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.SSLv2
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.SSLv3
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.TLSv1
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.TLSv1_1
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.TLSv1_2
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.SSLv2
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.SSLv3
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.TLSv1
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.TLSv1_1
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.TLSv1_2
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.compression
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.cipher_suite
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.server.honor_cipher_order
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.server_port
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for 

[jira] [Updated] (TS-3405) Memory use after free in HTTP/2

2015-02-23 Thread Bryan Call (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Call updated TS-3405:
---
Fix Version/s: 5.3.0

 Memory use after free in HTTP/2
 ---

 Key: TS-3405
 URL: https://issues.apache.org/jira/browse/TS-3405
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP/2
Reporter: Bryan Call
 Fix For: 5.3.0


 From Leif running on docs.trafficserver.apache.org:
  
 {code}
 traffic_server: using root directory '/opt/ats'
 =================================================================
 ==31101==ERROR: AddressSanitizer: heap-use-after-free on address 
 0x6180c888 at pc 0x4f3558 bp 0x2aaf10c88930 sp 0x2aaf10c88928
 READ of size 8 at 0x6180c888 thread T2 ([ET_NET 1])
 #0 0x4f3557 in Continuation::handleEvent(int, void*) 
 ../iocore/eventsystem/I_Continuation.h:146
 #1 0x4f3557 in FetchSM::InvokePluginExt(int) 
 /usr/local/src/trafficserver/proxy/FetchSM.cc:301
 #2 0x4f3a7a in FetchSM::process_fetch_read(int) 
 /usr/local/src/trafficserver/proxy/FetchSM.cc:465
 #3 0x4f5112 in FetchSM::fetch_handler(int, void*) 
 /usr/local/src/trafficserver/proxy/FetchSM.cc:514
 #4 0x59f1b7 in Continuation::handleEvent(int, void*) 
 ../iocore/eventsystem/I_Continuation.h:146
 #5 0x59f1b7 in PluginVC::process_read_side(bool) 
 /usr/local/src/trafficserver/proxy/PluginVC.cc:640
 #6 0x5abcb9 in PluginVC::main_handler(int, void*) 
 /usr/local/src/trafficserver/proxy/PluginVC.cc:206
 #7 0xc821fe in Continuation::handleEvent(int, void*) 
 /usr/local/src/trafficserver/iocore/eventsystem/I_Continuation.h:146
 #8 0xc821fe in EThread::process_event(Event*, int) 
 /usr/local/src/trafficserver/iocore/eventsystem/UnixEThread.cc:144
 #9 0xc84819 in EThread::execute() 
 /usr/local/src/trafficserver/iocore/eventsystem/UnixEThread.cc:238
 #10 0xc80e18 in spawn_thread_internal 
 /usr/local/src/trafficserver/iocore/eventsystem/Thread.cc:88
 #11 0x2aaf0b083df2 in start_thread (/lib64/libpthread.so.0+0x7df2)
 #12 0x2aaf0c8ec1ac in clone (/lib64/libc.so.6+0xf61ac)
 0x6180c888 is located 8 bytes inside of 816-byte region 
 [0x6180c880,0x6180cbb0)
 freed by thread T0 ([ET_NET 0]) here:
 #0 0x2aaf08c131c7 in __interceptor_free 
 ../../.././libsanitizer/asan/asan_malloc_linux.cc:62
 #1 0x7b7d42 in Http2ClientSession::do_io_close(int) 
 /usr/local/src/trafficserver/proxy/http2/Http2ClientSession.cc:194
 #2 0x7b7d42 in Http2ClientSession::main_event_handler(int, void*) 
 /usr/local/src/trafficserver/proxy/http2/Http2ClientSession.cc:237
 #3 0xc1351f in Continuation::handleEvent(int, void*) 
 ../../iocore/eventsystem/I_Continuation.h:146
 #4 0xc1351f in read_signal_and_update 
 /usr/local/src/trafficserver/iocore/net/UnixNetVConnection.cc:140
 #5 0xc1351f in read_signal_done 
 /usr/local/src/trafficserver/iocore/net/UnixNetVConnection.cc:185
 #6 0xc1351f in UnixNetVConnection::readSignalDone(int, NetHandler*) 
 /usr/local/src/trafficserver/iocore/net/UnixNetVConnection.cc:939
 #7 0xbbabf8 in SSLNetVConnection::net_read_io(NetHandler*, EThread*) 
 /usr/local/src/trafficserver/iocore/net/SSLNetVConnection.cc:596
 #8 0xbda09c in NetHandler::mainNetEvent(int, Event*) 
 /usr/local/src/trafficserver/iocore/net/UnixNet.cc:513
 #9 0xc85089 in Continuation::handleEvent(int, void*) 
 /usr/local/src/trafficserver/iocore/eventsystem/I_Continuation.h:146
 #10 0xc85089 in EThread::process_event(Event*, int) 
 /usr/local/src/trafficserver/iocore/eventsystem/UnixEThread.cc:144
 #11 0xc85089 in EThread::execute() 
 /usr/local/src/trafficserver/iocore/eventsystem/UnixEThread.cc:268
 #12 0x498f96 in main /usr/local/src/trafficserver/proxy/Main.cc:1826
 #13 0x2aaf0c817af4 in __libc_start_main (/lib64/libc.so.6+0x21af4)
 previously allocated by thread T0 ([ET_NET 0]) here:
 #0 0x2aaf08c1393b in __interceptor_posix_memalign 
 ../../.././libsanitizer/asan/asan_malloc_linux.cc:130
 #1 0x2aaf09afd2f9 in ats_memalign 
 /usr/local/src/trafficserver/lib/ts/ink_memory.cc:96
 #2 0x7cd804 in ClassAllocator<Http2ClientSession>::alloc() 
 ../../lib/ts/Allocator.h:124
 #3 0x7cd804 in Http2SessionAccept::accept(NetVConnection*, MIOBuffer*, 
 IOBufferReader*) 
 /usr/local/src/trafficserver/proxy/http2/Http2SessionAccept.cc:57
 #4 0x7cd3c4 in Http2SessionAccept::mainEvent(int, void*) 
 /usr/local/src/trafficserver/proxy/http2/Http2SessionAccept.cc:69
 #5 0xbc2fae in SSLNextProtocolTrampoline::ioCompletionEvent(int, void*) 
 /usr/local/src/trafficserver/iocore/net/SSLNextProtocolAccept.cc:101
 #6 0xc1351f in Continuation::handleEvent(int, void*) 
 ../../iocore/eventsystem/I_Continuation.h:146
 #7 0xc1351f in read_signal_and_update 
 /usr/local/src/trafficserver/iocore/net/UnixNetVConnection.cc:140
 #8 0xc1351f in 

[jira] [Created] (TS-3405) Memory use after free in HTTP/2

2015-02-23 Thread Bryan Call (JIRA)
Bryan Call created TS-3405:
--

 Summary: Memory use after free in HTTP/2
 Key: TS-3405
 URL: https://issues.apache.org/jira/browse/TS-3405
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP/2
Reporter: Bryan Call


From Leif running on docs.trafficserver.apache.org:
 
{code}
traffic_server: using root directory '/opt/ats'
=================================================================
==31101==ERROR: AddressSanitizer: heap-use-after-free on address 0x6180c888 
at pc 0x4f3558 bp 0x2aaf10c88930 sp 0x2aaf10c88928
READ of size 8 at 0x6180c888 thread T2 ([ET_NET 1])
#0 0x4f3557 in Continuation::handleEvent(int, void*) 
../iocore/eventsystem/I_Continuation.h:146
#1 0x4f3557 in FetchSM::InvokePluginExt(int) 
/usr/local/src/trafficserver/proxy/FetchSM.cc:301
#2 0x4f3a7a in FetchSM::process_fetch_read(int) 
/usr/local/src/trafficserver/proxy/FetchSM.cc:465
#3 0x4f5112 in FetchSM::fetch_handler(int, void*) 
/usr/local/src/trafficserver/proxy/FetchSM.cc:514
#4 0x59f1b7 in Continuation::handleEvent(int, void*) 
../iocore/eventsystem/I_Continuation.h:146
#5 0x59f1b7 in PluginVC::process_read_side(bool) 
/usr/local/src/trafficserver/proxy/PluginVC.cc:640
#6 0x5abcb9 in PluginVC::main_handler(int, void*) 
/usr/local/src/trafficserver/proxy/PluginVC.cc:206
#7 0xc821fe in Continuation::handleEvent(int, void*) 
/usr/local/src/trafficserver/iocore/eventsystem/I_Continuation.h:146
#8 0xc821fe in EThread::process_event(Event*, int) 
/usr/local/src/trafficserver/iocore/eventsystem/UnixEThread.cc:144
#9 0xc84819 in EThread::execute() 
/usr/local/src/trafficserver/iocore/eventsystem/UnixEThread.cc:238
#10 0xc80e18 in spawn_thread_internal 
/usr/local/src/trafficserver/iocore/eventsystem/Thread.cc:88
#11 0x2aaf0b083df2 in start_thread (/lib64/libpthread.so.0+0x7df2)
#12 0x2aaf0c8ec1ac in clone (/lib64/libc.so.6+0xf61ac)

0x6180c888 is located 8 bytes inside of 816-byte region 
[0x6180c880,0x6180cbb0)
freed by thread T0 ([ET_NET 0]) here:
#0 0x2aaf08c131c7 in __interceptor_free 
../../.././libsanitizer/asan/asan_malloc_linux.cc:62
#1 0x7b7d42 in Http2ClientSession::do_io_close(int) 
/usr/local/src/trafficserver/proxy/http2/Http2ClientSession.cc:194
#2 0x7b7d42 in Http2ClientSession::main_event_handler(int, void*) 
/usr/local/src/trafficserver/proxy/http2/Http2ClientSession.cc:237
#3 0xc1351f in Continuation::handleEvent(int, void*) 
../../iocore/eventsystem/I_Continuation.h:146
#4 0xc1351f in read_signal_and_update 
/usr/local/src/trafficserver/iocore/net/UnixNetVConnection.cc:140
#5 0xc1351f in read_signal_done 
/usr/local/src/trafficserver/iocore/net/UnixNetVConnection.cc:185
#6 0xc1351f in UnixNetVConnection::readSignalDone(int, NetHandler*) 
/usr/local/src/trafficserver/iocore/net/UnixNetVConnection.cc:939
#7 0xbbabf8 in SSLNetVConnection::net_read_io(NetHandler*, EThread*) 
/usr/local/src/trafficserver/iocore/net/SSLNetVConnection.cc:596
#8 0xbda09c in NetHandler::mainNetEvent(int, Event*) 
/usr/local/src/trafficserver/iocore/net/UnixNet.cc:513
#9 0xc85089 in Continuation::handleEvent(int, void*) 
/usr/local/src/trafficserver/iocore/eventsystem/I_Continuation.h:146
#10 0xc85089 in EThread::process_event(Event*, int) 
/usr/local/src/trafficserver/iocore/eventsystem/UnixEThread.cc:144
#11 0xc85089 in EThread::execute() 
/usr/local/src/trafficserver/iocore/eventsystem/UnixEThread.cc:268
#12 0x498f96 in main /usr/local/src/trafficserver/proxy/Main.cc:1826
#13 0x2aaf0c817af4 in __libc_start_main (/lib64/libc.so.6+0x21af4)

previously allocated by thread T0 ([ET_NET 0]) here:
#0 0x2aaf08c1393b in __interceptor_posix_memalign 
../../.././libsanitizer/asan/asan_malloc_linux.cc:130
#1 0x2aaf09afd2f9 in ats_memalign 
/usr/local/src/trafficserver/lib/ts/ink_memory.cc:96
#2 0x7cd804 in ClassAllocator<Http2ClientSession>::alloc() 
../../lib/ts/Allocator.h:124
#3 0x7cd804 in Http2SessionAccept::accept(NetVConnection*, MIOBuffer*, 
IOBufferReader*) 
/usr/local/src/trafficserver/proxy/http2/Http2SessionAccept.cc:57
#4 0x7cd3c4 in Http2SessionAccept::mainEvent(int, void*) 
/usr/local/src/trafficserver/proxy/http2/Http2SessionAccept.cc:69
#5 0xbc2fae in SSLNextProtocolTrampoline::ioCompletionEvent(int, void*) 
/usr/local/src/trafficserver/iocore/net/SSLNextProtocolAccept.cc:101
#6 0xc1351f in Continuation::handleEvent(int, void*) 
../../iocore/eventsystem/I_Continuation.h:146
#7 0xc1351f in read_signal_and_update 
/usr/local/src/trafficserver/iocore/net/UnixNetVConnection.cc:140
#8 0xc1351f in read_signal_done 
/usr/local/src/trafficserver/iocore/net/UnixNetVConnection.cc:185
#9 0xc1351f in UnixNetVConnection::readSignalDone(int, NetHandler*) 
/usr/local/src/trafficserver/iocore/net/UnixNetVConnection.cc:939
#10 0xbbba59 in 

[jira] [Commented] (TS-3118) Feature to stop accepting new connections

2015-02-23 Thread Adam W. Dace (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334113#comment-14334113
 ] 

Adam W. Dace commented on TS-3118:
--

Oh that's easy.  Where I worked we had something called a RedLine(hardware 
load-balancer) providing
both load-balancing and SSL compression.

Basically, any load-balancer worth its weight can handle this sort of thing.  
That was my impression, anyways.

The load-balancer itself is responsible for knowing and managing the difference 
between a new connection,
and an existing connection.  Ideally, you'd want all existing connections 
serviced properly, right?
Typically, the load-balancer(with some configuration) will handle this.

That being the case, the application itself is responsible for handling any 
outstanding connections.
And to handle those connections without error.  That's where the coding 
difficulty comes in.


 Feature to stop accepting new connections
 -

 Key: TS-3118
 URL: https://issues.apache.org/jira/browse/TS-3118
 Project: Traffic Server
  Issue Type: New Feature
Reporter: Miles Libbey
  Labels: A
 Fix For: 5.3.0


 When taking an ATS machine out of production, it would be nice to have ATS 
 stop accepting new connections without affecting the existing client 
 connections to minimize client disruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (TS-3118) Feature to stop accepting new connections

2015-02-23 Thread Adam W. Dace (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334140#comment-14334140
 ] 

Adam W. Dace edited comment on TS-3118 at 2/24/15 12:21 AM:


Meh.  Sorry for my reluctance but honestly I'm not sure if I'm violating any 
legal non-disclosure terms I'd agreed to by talking about this too much.

To directly answer your question, I believe the logic(from the load-balancer) 
went something like this:

1) Client HTTP request comes in.  We hold the socket open as the load-balancer 
is also acting as a socket-layer endpoint(security reasons).
2) Find a server that can handle the request.
3) Server isn't listening on the required port?  No problem, failover to the 
next node(various failover schemes, round-robin, etc).  Note, no errors here.  
Just delays.
4) Deliver HTTP request to an active node, with the socket-layer coming from 
the load-balancer(again, security).
5) Fulfill request.
6) Deliver that HTTP response to the actual client, waiting for its HTTP 
response on another socket.
7) HTTP request complete.

FWIW, I hope this helps.  This was my understanding of the behavior of the 
load-balancer itself.
Not impossible in coding terms, but definitely impressive.  :-)



was (Author: adace):
Meh.  Sorry for my reluctance but honestly I'm not sure if I'm violating any 
legal non-disclosure terms I'd agreed to by talking about this too much.

To directly answer your question, I believe the logic(from the load-balancer) 
went something like this:

1) Client HTTP request comes in.  We hold the socket open as the load-balancer 
is also acting as a socket-layer endpoint(security reasons).
2) Find a server that can handle the request.
3) Server isn't listening on the required port?  No problem, failover to the 
next node(various failover schemes, round-robin, etc).  Note, no errors here.  
Just delays.
4) Deliver HTTP request to an active node, with the socket-layer coming from 
the load-balancer(again, security).
5) Fulfill request.
6) Deliver that HTTP request to the actual client, waiting for its HTTP 
response on another socket.
7) HTTP request complete.

FWIW, I hope this helps.  This was my understanding of the behavior of the 
load-balancer itself.
Not impossible in coding terms, but definitely impressive.  :-)


 Feature to stop accepting new connections
 -

 Key: TS-3118
 URL: https://issues.apache.org/jira/browse/TS-3118
 Project: Traffic Server
  Issue Type: New Feature
Reporter: Miles Libbey
  Labels: A
 Fix For: 5.3.0


 When taking an ATS machine out of production, it would be nice to have ATS 
 stop accepting new connections without affecting the existing client 
 connections to minimize client disruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Build failed in Jenkins: tsqa-master #158

2015-02-23 Thread Leif Hedstrom

 On Feb 23, 2015, at 3:49 PM, James Peach jpe...@apache.org wrote:
 
 This failed because TS-3358 added explicit access checks to the management 
 socket. Unless proxy.config.admin.api.restricted is 0, access is restricted 
 to root processes. In the case of tsqa, we run the whole thing unprivileged. 
 This used to work because access was controlled by filesystem permissions.
 
 I'm open to suggestions as to what the right behaviour should be in this 
 case …


Hmmm, at a minimum, that seems like an incompatible change no matter what? So 
maybe we should make proxy.config.admin.api.restricted = 0 by default, and use 
the file-system permissions that people are used to?
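
Concretely, that would mean shipping records.config with something like this 
(the INT form is my assumption of how the record is declared):

    CONFIG proxy.config.admin.api.restricted INT 0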

It feels rather sketchy to require CI / Jenkins to run as user “root”. But if 
that’s what is required, we can try to modify Jenkins to run as “root” instead 
of the jenkins user; it just gives me a really bad vibe to have a web UI 
running as “root”.

— Leif


 
 
 On Feb 23, 2015, at 2:25 PM, jenk...@ci.trafficserver.apache.org wrote:
 
 See https://ci.trafficserver.apache.org/job/tsqa-master/158/changes
 
 Changes:
 
 [James Peach] TS-3358: peer credential checking on the management socket
 
 --
 [...truncated 14737 lines...]
 FAIL: failed to fetch value for proxy.config.log.extended2_log_name
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.extended2_log_header
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.separate_icp_logs
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.separate_host_logs
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_host
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_port
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_secret
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_host_tagged
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_retry_sec
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_max_send_buffers
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.collation_preproc_threads
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.rolling_offset_hr
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.sampling_frequency
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.space_used_frequency
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.file_stat_frequency
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.ascii_buffer_size
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.max_line_size
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_rolling_interval_sec
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_log_enabled
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_server_ip_addr
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_server_port
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_top_sites
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_url_filter
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.log.search_log_filters
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.url_remap.default_to_server_pac
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.url_remap.default_to_server_pac_port
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.url_remap.filename
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.url_remap.url_remap_mode
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.url_remap.handle_backdoor_urls
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.ssl.enabled
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.ssl.SSLv2
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.ssl.SSLv3
 traffic_line: [13] Operation not permitted.
 FAIL: failed to fetch value for proxy.config.ssl.TLSv1
 traffic_line: [13] Operation not 

[jira] [Updated] (TS-3404) PluginVC not notifying ActiveSide (FetchSM) of EOS due to race condition in handling terminating chunk.

2015-02-23 Thread Sudheer Vinukonda (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudheer Vinukonda updated TS-3404:
--
Description: 
When there's a race condition in receiving the terminating chunk (of size 0), 
{{PluginVC}} does not notify the ActiveSide ({{FetchSM}}) of EOS, causing it to 
hang until an eventual timeout occurs. 

The code below checks whether the {{other_side}} is closed or in write-shutdown 
state before sending the EOS:
https://github.com/apache/trafficserver/blob/master/proxy/PluginVC.cc#L638

but, in the race condition observed in our environment, it is the 
{{PassiveSide}}'s own write_state that is in shutdown (set via 
{{consumer_handler}} handling the {{VC_EVENT_WRITE_COMPLETE}} event at the 
final terminating chunk, and HttpSM calling {{do_io_close}} with 
{{IO_SHUTDOWN_WRITE}} on the passive side), so the existing check never fires.

The simple fix below resolves the issue:

{code}
  if (act_on <= 0) {
    if (other_side->closed || other_side->write_state.shutdown || write_state.shutdown) {
      read_state.vio._cont->handleEvent(VC_EVENT_EOS, &read_state.vio);
    }
    return;
  }
{code}

Below are the debug logs that indicate the failed and working cases due to the 
race condition:

Working Case:
{code}
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] adding 
producer 'http server'
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] adding 
consumer 'user agent'
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http) [205] 
perform_cache_write_action CACHE_DO_NO_ACTION
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) tunnel_run 
started, p_arg is NULL
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
[producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() = 
368
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
[producer_run] do_dechunking::Copied header of size 179
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) 
tcp_init_cwnd_set 0
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) desired TCP 
congestion window is 0
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
[producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() = 
368
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
[producer_run] do_dechunking p->chunked_handler.skip_bytes = 179
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
producer_handler [http server VC_EVENT_READ_READY]
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
producer_handler_chunked [http server VC_EVENT_READ_READY]
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
size of 57 bytes
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
read of chunk of 57 bytes
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
size of 120 bytes
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
read of chunk of 120 bytes
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
[HttpTunnel::producer_handler] enable_redirection: [1 0 0] event: 100
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
producer_handler [http server VC_EVENT_READ_READY]
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
producer_handler_chunked [http server VC_EVENT_READ_READY]
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
size of 3 bytes
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
read of chunk of 3 bytes
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
size of 0 bytes
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
read of trailers
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
[HttpTunnel::producer_handler] enable_redirection: [1 0 0] event: 102
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http) [205] 
[HttpSM::tunnel_handler_server, VC_EVENT_READ_COMPLETE]
[Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_ss) [205] session 
closing, netvc 0x7f85ec0158b0
[Feb 22 22:03:16.552] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
consumer_handler [user agent VC_EVENT_WRITE_COMPLETE]
[Feb 22 22:03:16.552] Server {0x7f865d664700} DEBUG: (http) [205] 
[HttpSM::tunnel_handler_ua, VC_EVENT_WRITE_COMPLETE]
[Feb 22 22:03:16.552] Server {0x7f865d664700} DEBUG: (http_cs) [205] session 
half close
[Feb 22 22:03:16.552] Server {0x7f865d664700} DEBUG: (http) [205] 
[HttpSM::main_handler, HTTP_TUNNEL_EVENT_DONE]
[Feb 22 22:03:16.552] Server {0x7f865d664700} DEBUG: (http) [205] 
[HttpSM::tunnel_handler, HTTP_TUNNEL_EVENT_DONE]
[Feb 22 22:03:16.552] Server {0x7f865d664700} DEBUG: (http_redirect) 
[HttpTunnel::deallocate_postdata_copy_buffers]
[Feb 22 22:03:16.552] Server {0x7f865d664700} DEBUG: (http) [205] calling 
plugin on 

[jira] [Assigned] (TS-2729) Add HTTP/2 support to ATS

2015-02-23 Thread Bryan Call (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Call reassigned TS-2729:
--

Assignee: Bryan Call  (was: James Peach)

 Add HTTP/2 support to ATS
 -

 Key: TS-2729
 URL: https://issues.apache.org/jira/browse/TS-2729
 Project: Traffic Server
  Issue Type: New Feature
  Components: HTTP/2
Reporter: Ryo Okubo
Assignee: Bryan Call
  Labels: review
 Fix For: 5.3.0

 Attachments: 0003-h2-prototype.patch, 0004-h2-prototype.patch, 
 0005-h2-prototype.patch, h2c_upgrade.patch, hpack.patch, http2-0004.patch, 
 improve-mime.patch


 h2. Overview
 Support HTTP/2 as a client-side L7 protocol. This feature is implemented in 
 the ATS core.
 It currently supports the latest HTTP/2 draft version, h2-16.
 https://tools.ietf.org/html/draft-ietf-httpbis-http2-16
 h2. How to test
 # Build ATS normally; you need neither any special build option nor an 
 external HTTP/2 library.
 # Configure settings to use https.
 # Add settings to records.config to use http2.
 {noformat}
 CONFIG proxy.config.http2.enabled INT 1
 {noformat}
 # Access ATS with an HTTP/2 client (see the example below).
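 For instance, one quick way to exercise the h2 path (assuming the nghttp2 
 client tools are installed; the host name is hypothetical):
 {noformat}
 nghttp -v https://ats.example.com/
 {noformat}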
 h2. Descriptions of current attached patches.
 * 0003-h2-prototype.patch
 ** For experimentation only; please don't merge it. It enables ATS to 
 interpret HTTP/2 requests and respond to them, but the code is currently 
 unsafe and dirty. More refactoring is required.
 h2. DONE
 * Fundamental HTTP/2 frame handling
 * Flow control
 * Some error handling
 h2. TODO
 * Refactoring
 * More debugging
 * Write documents
 * Add test tools for HPACK, HTTP/2 frames
 h2. No plan
 * [Server 
 Push|https://tools.ietf.org/html/draft-ietf-httpbis-http2-16#section-8.2] 
 This would probably require support for [Link 
 preload|http://w3c.github.io/preload/#interoperability-with-http-link-header]?
 * [Stream 
 Priority|https://tools.ietf.org/html/draft-ietf-httpbis-http2-16#section-5.3]
 * [Alternative 
 Services|https://tools.ietf.org/html/draft-ietf-httpbis-alt-svc-06]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3401) AIO blocks under lock contention

2015-02-23 Thread John Plevyak (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14334450#comment-14334450
 ] 

John Plevyak commented on TS-3401:
--

I generally agree, but it is true that aio_thread_main() uses 
ink_atomiclist_popall() to grab the entire atomic queue associated with an 
AIO_Req for a single file descriptor/disk.  This means that a bunch of reads 
could be blocked behind the disk operation (as well as behind acquiring the 
mutex for write callbacks, but that is probably less important).  We could 
switch to using ink_atomiclist_pop in aio_move, which would cause only a single 
op to be moved to the local queue.

That said, we should probably reexamine using Linux native AIO now that the 
eventfd code has landed.  I think it will be more efficient, and with the new 
Linux multi-queue support for SSDs we can do millions of ops/sec, so we want to 
be able to keep that queue loaded; native AIO with eventfd looks like a good 
way to do it.

We should also consider changing all the delay periods (e.g. AIO_PERIOD) to be 
100 ms or more if we have eventfd, as we don't need to busy-poll anything... we 
will be awoken if anything appears in a queue or on a file descriptor.
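
As a concrete reference for the native-AIO-plus-eventfd idea, here is a minimal 
sketch (plain libaio, not ATS code; the file path and queue depth are 
arbitrary):

{code}
#include <fcntl.h>
#include <libaio.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
  io_context_t ctx = 0;
  if (io_setup(128, &ctx) != 0)       // kernel AIO queue, depth 128
    return 1;
  int efd = eventfd(0, 0);            // completions will signal this fd

  int fd = open("/tmp/data", O_RDONLY | O_DIRECT);
  alignas(512) static char buf[4096]; // O_DIRECT needs aligned buffers

  iocb cb;
  iocb *cbs[1] = {&cb};
  io_prep_pread(&cb, fd, buf, sizeof(buf), 0);
  io_set_eventfd(&cb, efd);           // tie this op's completion to the eventfd
  io_submit(ctx, 1, cbs);

  // No AIO_PERIOD-style busy polling: block (or epoll) on the eventfd and
  // wake up only when completions are actually available.
  uint64_t n = 0;
  read(efd, &n, sizeof(n));

  io_event events[1];
  io_getevents(ctx, 1, 1, events, nullptr);
  printf("read %ld bytes\n", (long)events[0].res);

  io_destroy(ctx);
  close(efd);
  close(fd);
  return 0;
}
{code}

In ATS terms the eventfd would sit in the regular event loop next to the 
network fds, which is what would make a much longer AIO_PERIOD (or none at all) 
safe.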

 AIO blocks under lock contention
 

 Key: TS-3401
 URL: https://issues.apache.org/jira/browse/TS-3401
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Brian Geffon
Assignee: Brian Geffon
 Attachments: aio.patch


 In {{aio_thread_main()}}, while trying to process AIO ops, the AIO thread will 
 wait on the mutex for the op, which obviously blocks other AIO ops from 
 processing. We should use a try lock instead and reschedule the ops that we 
 couldn't immediately process. Patch attached; waiting for reviews.
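 A generic illustration of the try-lock-and-reschedule idea (plain C++, not 
 the attached aio.patch; names are hypothetical):
 {code}
 #include <deque>
 #include <mutex>

 struct AioOp {
   std::mutex m;
   void execute() { /* do the disk op and fire its callback */ }
 };

 static std::deque<AioOp *> retry_queue;

 void process_op(AioOp *op) {
   std::unique_lock<std::mutex> lk(op->m, std::try_to_lock);
   if (!lk.owns_lock()) {
     retry_queue.push_back(op); // don't block the AIO thread; retry later
     return;
   }
   op->execute();               // lock held: run the op now
 }
 {code}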



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3118) Feature to stop accepting new connections

2015-02-23 Thread Adam W. Dace (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14334117#comment-14334117
 ] 

Adam W. Dace commented on TS-3118:
--

That said, I'll side-step out of this conversation.

If I remember correctly, the load-balancer would also take application nodes 
out of the active pool if they returned HTTP error codes.
Honestly, though, that was years ago.  My memory may not be the best on that 
one.



Build failed in Jenkins: tsqa-master #160

2015-02-23 Thread jenkins
See https://ci.trafficserver.apache.org/job/tsqa-master/160/changes

Changes:

[Bryan Call] TS-2729: Add HTTP/2 support to ATS

--
[...truncated 14772 lines...]
FAIL: failed to fetch value for proxy.config.log.extended2_log_header
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.separate_icp_logs
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.separate_host_logs
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_host
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_port
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_secret
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_host_tagged
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_retry_sec
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_max_send_buffers
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.collation_preproc_threads
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.rolling_offset_hr
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.sampling_frequency
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.space_used_frequency
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.file_stat_frequency
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.ascii_buffer_size
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.max_line_size
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_rolling_interval_sec
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_log_enabled
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_server_ip_addr
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_server_port
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_top_sites
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_url_filter
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.log.search_log_filters
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.url_remap.default_to_server_pac
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.url_remap.default_to_server_pac_port
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.url_remap.filename
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.url_remap.url_remap_mode
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.url_remap.handle_backdoor_urls
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.enabled
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.SSLv2
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.SSLv3
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.TLSv1
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.TLSv1_1
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.TLSv1_2
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.SSLv2
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.SSLv3
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.TLSv1
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.TLSv1_1
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.TLSv1_2
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.compression
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.client.cipher_suite
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.server.honor_cipher_order
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for proxy.config.ssl.server_port
traffic_line: [13] Operation not permitted.
FAIL: failed to fetch value for 

[jira] [Created] (TS-3406) Change TS_NPN_PROTOCOL_HTTP_2_0 to h2

2015-02-23 Thread Leif Hedstrom (JIRA)
Leif Hedstrom created TS-3406:
-

 Summary: Change TS_NPN_PROTOCOL_HTTP_2_0 to h2
 Key: TS-3406
 URL: https://issues.apache.org/jira/browse/TS-3406
 Project: Traffic Server
  Issue Type: Improvement
  Components: HTTP/2
Reporter: Leif Hedstrom


With H2 landed on master and the RFC finalized, we should change the 
identifier string to just h2.
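
Presumably the change amounts to something like the following (whether the 
identifier lives in a macro or a const string; the current value is assumed to 
be a draft token):

{code}
#define TS_NPN_PROTOCOL_HTTP_2_0 "h2" // was a draft token such as "h2-16"
{code}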



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3406) Change TS_NPN_PROTOCOL_HTTP_2_0 to h2

2015-02-23 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3406:
--
Fix Version/s: 5.3.0



[jira] [Commented] (TS-2729) Add HTTP/2 support to ATS

2015-02-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14334074#comment-14334074
 ] 

ASF subversion and git services commented on TS-2729:
-

Commit ca998f0ff504bcde2645561dfbd7bbdf23af5e36 in trafficserver's branch 
refs/heads/master from [~bcall]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=ca998f0 ]

TS-2729: Add HTTP/2 support to ATS
Fixed clang-analyzer bugs




[jira] [Commented] (TS-3118) Feature to stop accepting new connections

2015-02-23 Thread Adam W. Dace (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14334078#comment-14334078
 ] 

Adam W. Dace commented on TS-3118:
--

Actually, having worked in such a shop, we usually handled this as follows:

1) The application shuts down its listening socket.
2) The load-balancing hardware detects this and takes that server out of the 
active pool (no errors).
3) Existing connections are serviced.
4) Once all existing connections have completed, the application begins its 
actual shutdown (roughly as sketched below).
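
In code terms, the shape is roughly this (hypothetical names, not ATS 
internals):

{code}
#include <atomic>
#include <unistd.h>

std::atomic<int> active_connections{0}; // maintained by the request handlers

void drain_and_exit(int listen_fd) {
  close(listen_fd);                     // 1) stop listening; 2) the LB notices
  while (active_connections.load() > 0) // 3) let in-flight requests finish
    sleep(1);
  _exit(0);                             // 4) actual shutdown
}
{code}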

My knowledge is a bit out of date... but this might be the behavior some are 
looking for.
The upside is that this enables all sorts of operational work that would 
otherwise interrupt business.
There's nothing like shutting down an active application in the middle of the 
workday (i.e. high load).  :-)

P.S.  I know this is painful to implement, from a code perspective, but 
unfortunately attaining high availability isn't always easy.




[jira] [Commented] (TS-3118) Feature to stop accepting new connections

2015-02-23 Thread James Peach (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14334093#comment-14334093
 ] 

James Peach commented on TS-3118:
-

In that scheme, what prevents clients from receiving errors between steps (1) 
and (2)?

I think there's a lot of variety in this sort of orchestration. The typical 
workflow I have seen is that the load balancer polls an HTTP endpoint. When you 
take the server out of rotation, you arrange for the endpoint to return a 
non-200 status. That way, new client requests are still serviced until (2) 
takes place.
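
Sketch of such an endpoint (the drain-file path and names are invented, purely 
illustrative):

{code}
#include <string>
#include <sys/stat.h>

// The load balancer polls this; an operator creates the drain file to take
// the box out of rotation without dropping in-flight traffic.
std::string health_response() {
  struct stat st;
  const bool draining = (stat("/var/run/ats.drain", &st) == 0);
  if (draining)
    return "HTTP/1.1 503 Service Unavailable\r\nContent-Length: 0\r\n\r\n";
  return "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n";
}
{code}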



Build failed in Jenkins: freebsd_10-master » clang,freebsd_10,release #771

2015-02-23 Thread jenkins
See 
https://ci.trafficserver.apache.org/job/freebsd_10-master/compiler=clang,label=freebsd_10,type=release/771/

--
[...truncated 6399 lines...]
*** TEST 96 *** PASSED ***
*** TEST 97 *** STARTING ***
*** TEST 97 *** PASSED ***
*** TEST 98 *** STARTING ***
*** TEST 98 *** PASSED ***
*** TEST 99 *** STARTING ***
*** TEST 99 *** PASSED ***
*** TEST 100 *** STARTING ***
*** TEST 100 *** PASSED ***
*** TEST 101 *** STARTING ***
*** TEST 101 *** PASSED ***
*** TEST 102 *** STARTING ***
*** TEST 102 *** PASSED ***
*** TEST 103 *** STARTING ***
*** TEST 103 *** PASSED ***
*** TEST 104 *** STARTING ***
*** TEST 104 *** PASSED ***
*** TEST 105 *** STARTING ***
*** TEST 105 *** PASSED ***
*** TEST 106 *** STARTING ***
*** TEST 106 *** PASSED ***
*** TEST 107 *** STARTING ***
*** TEST 107 *** PASSED ***
*** TEST 108 *** STARTING ***
*** TEST 108 *** PASSED ***
*** TEST 109 *** STARTING ***
*** TEST 109 *** PASSED ***
*** TEST 110 *** STARTING ***
*** TEST 110 *** PASSED ***
*** TEST 111 *** STARTING ***
*** TEST 111 *** PASSED ***
*** TEST 112 *** STARTING ***
*** TEST 112 *** PASSED ***
*** TEST 113 *** STARTING ***
*** TEST 113 *** PASSED ***
*** TEST 114 *** STARTING ***
*** TEST 114 *** PASSED ***
*** TEST 115 *** STARTING ***
*** TEST 115 *** PASSED ***
*** TEST 116 *** STARTING ***
*** TEST 116 *** PASSED ***
*** TEST 117 *** STARTING ***
*** TEST 117 *** PASSED ***
*** TEST 118 *** STARTING ***
*** TEST 118 *** PASSED ***
*** TEST 119 *** STARTING ***
*** TEST 119 *** PASSED ***
*** TEST 120 *** STARTING ***
*** TEST 120 *** PASSED ***
*** TEST 121 *** STARTING ***
*** TEST 121 *** PASSED ***
*** TEST 122 *** STARTING ***
*** TEST 122 *** PASSED ***
*** TEST 123 *** STARTING ***
*** TEST 123 *** PASSED ***
*** TEST 124 *** STARTING ***
*** TEST 124 *** PASSED ***
*** TEST 125 *** STARTING ***
*** TEST 125 *** PASSED ***
*** TEST 126 *** STARTING ***
*** TEST 126 *** PASSED ***
*** TEST 127 *** STARTING ***
*** TEST 127 *** PASSED ***
*** TEST 128 *** STARTING ***
*** TEST 128 *** PASSED ***
*** TEST 129 *** STARTING ***
*** TEST 129 *** PASSED ***
*** TEST 130 *** STARTING ***
*** TEST 130 *** PASSED ***
*** TEST 131 *** STARTING ***
*** TEST 131 *** PASSED ***
*** TEST 132 *** STARTING ***
*** TEST 132 *** PASSED ***
*** TEST 133 *** STARTING ***
*** TEST 133 *** PASSED ***
*** TEST 134 *** STARTING ***
[SDK_API_TSThread] TSThreadCreate : [TestCase2] PASS { ok }
*** TEST 134 *** PASSED ***
*** TEST 135 *** STARTING ***
*** TEST 135 *** PASSED ***
*** TEST 136 *** STARTING ***
*** TEST 136 *** PASSED ***
*** TEST 137 *** STARTING ***
*** TEST 137 *** PASSED ***
*** TEST 138 *** STARTING ***
*** TEST 138 *** PASSED ***
*** TEST 139 *** STARTING ***
*** TEST 139 *** PASSED ***
*** TEST 140 *** STARTING ***
*** TEST 140 *** PASSED ***
*** TEST 141 *** STARTING ***
*** TEST 141 *** PASSED ***
*** TEST 142 *** STARTING ***
*** TEST 142 *** PASSED ***
*** TEST 143 *** STARTING ***
*** TEST 143 *** PASSED ***
*** TEST 144 *** STARTING ***
*** TEST 144 *** PASSED ***
*** TEST 145 *** STARTING ***
*** TEST 145 *** PASSED ***
*** TEST 146 *** STARTING ***
*** TEST 146 *** PASSED ***
*** TEST 147 *** STARTING ***
*** TEST 147 *** PASSED ***
*** TEST 148 *** STARTING ***
*** TEST 148 *** PASSED ***
*** TEST 149 *** STARTING ***
*** TEST 149 *** PASSED ***
*** TEST 150 *** STARTING ***
*** TEST 150 *** PASSED ***
*** TEST 151 *** STARTING ***
*** TEST 151 *** PASSED ***
*** TEST 152 *** STARTING ***
*** TEST 152 *** PASSED ***
*** TEST 153 *** STARTING ***
*** TEST 153 *** PASSED ***
*** TEST 154 *** STARTING ***
*** TEST 154 *** PASSED ***
*** TEST 155 *** STARTING ***
*** TEST 155 *** PASSED ***
*** TEST 156 *** STARTING ***
*** TEST 156 *** PASSED ***
*** TEST 157 *** STARTING ***
*** TEST 157 *** PASSED ***
*** TEST 158 *** STARTING ***
*** TEST 158 *** PASSED ***
*** TEST 159 *** STARTING ***
*** TEST 159 *** PASSED ***
*** TEST 160 *** STARTING ***
*** TEST 160 *** PASSED ***
*** TEST 161 *** STARTING ***
*** TEST 161 *** PASSED ***
*** TEST 162 *** STARTING ***
*** TEST 162 *** PASSED ***
*** TEST 163 *** STARTING ***
*** TEST 163 *** PASSED ***
*** TEST 164 *** STARTING ***
*** TEST 164 *** PASSED ***
*** TEST 165 *** STARTING ***
*** TEST 165 *** PASSED ***
*** TEST 166 *** STARTING ***
*** TEST 166 *** PASSED ***
*** TEST 167 *** STARTING ***
*** TEST 167 *** PASSED ***
*** TEST 168 *** STARTING ***
*** TEST 168 *** PASSED ***
*** TEST 169 *** STARTING ***
*** TEST 169 *** PASSED ***
*** TEST 170 *** STARTING ***
*** TEST 170 *** PASSED ***
*** TEST 171 *** STARTING ***
*** TEST 171 *** PASSED ***
*** TEST 172 *** STARTING ***
*** TEST 172 *** PASSED ***
Tests Passed: 172
Tests Failed: 0
REGRESSION_RESULT PARENTSELECTION:  PASSED
REGRESSION TEST PVC started
[SDK_API_TSTextLog] TSTextLogObjectDestroy : [TestCase1] PASS { ok }
[SDK_API_TSTextLog] TSTextLogObject : [TestCase1] PASS { ok }
RPRINT DNS: host www.apple.com [e3191.dscc.akamaiedge.net]