[jira] [Commented] (TS-3216) Add HPKP (Public Key Pinning Extension for HTTP) support

2015-03-16 Thread bettydramit (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363134#comment-14363134
 ] 

bettydramit commented on TS-3216:
-

It is a very nice feature.

> Add HPKP (Public Key Pinning Extension for HTTP) support
> 
>
> Key: TS-3216
> URL: https://issues.apache.org/jira/browse/TS-3216
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: SSL
>Reporter: Masaori Koshiba
>Assignee: James Peach
>  Labels: review
> Fix For: 5.3.0
>
> Attachments: hpkp-001.patch, hpkp-002.patch
>
>
> Add "Public Key Pinning Extension for HTTP" Support in Traffic Server.
> Public Key Pinning Extension for HTTP (draft-ietf-websec-key-pinning-21)
> - https://tools.ietf.org/html/draft-ietf-websec-key-pinning-21
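> For reference, the pinning header defined by the draft is delivered on HTTPS 
> responses and looks roughly like this (the pin values below are placeholders 
> in the style of the draft's own examples):
> {code}
> Public-Key-Pins: pin-sha256="base64+primary=="; pin-sha256="base64+backup==";
>   max-age=5184000; includeSubDomains
> {code}
> A user agent that has seen this header pins the host to the listed SPKI 
> hashes for max-age seconds.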



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3036) Add logging field to define the cache medium used to serve a HIT

2015-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363295#comment-14363295
 ] 

ASF subversion and git services commented on TS-3036:
-

Commit 65b5917aac7e7461b65ffdedf956c986ff0e333b in trafficserver's branch 
refs/heads/master from [~zwoop]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=65b5917 ]

TS-3036 Fix lookup table


> Add logging field to define the cache medium used to serve a HIT
> 
>
> Key: TS-3036
> URL: https://issues.apache.org/jira/browse/TS-3036
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Logging
>Reporter: Ryan Frantz
>Assignee: Leif Hedstrom
>  Labels: review
> Fix For: 5.3.0
>
>
> I want to be able to differentiate between RAM cache HITs and disk cache 
> HITs. Add a logging field to inform the administrator if the HIT came from 
> RAM, at least.
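> As an illustration of how such a field would be consumed, a custom format in 
> logs_xml.config could reference it like any other log tag; the {{%<chm>}} tag 
> below is a hypothetical stand-in for whatever name the patch settles on:
> {code}
> <LogFormat>
>   <Name = "cache_medium"/>
>   <Format = "%<cqtq> %<cqhm> %<cquc> %<crc> %<chm>"/>
> </LogFormat>
> {code}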



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3060) Attempt to send back an HTTP status code (e.g. 408) upon a transaction activity timeout from the client

2015-03-16 Thread Sudheer Vinukonda (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363316#comment-14363316
 ] 

Sudheer Vinukonda commented on TS-3060:
---

The memory leak turned out to be unrelated to this commit; it is related to 
TS-2497 instead. Closing this jira, since this works as expected.

> Attempt to send back an HTTP status code (e.g. 408) upon a transaction 
> activity timeout from the client
> -
>
> Key: TS-3060
> URL: https://issues.apache.org/jira/browse/TS-3060
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Core, HTTP
>Affects Versions: 4.0.2
>Reporter: Sudheer Vinukonda
>Assignee: Sudheer Vinukonda
>  Labels: yahoo
> Fix For: 5.3.0
>
> Attachments: TS-3060.diff
>
>
> This bug is similar to TS-3054, but on the client connection.
> Currently, when ATS sees a transaction activity timeout on the client 
> connection, it just closes the connection and releases the resources. As long 
> as the socket is still active, it might be better to attempt sending back an 
> HTTP status code to the client. For example, the use case might be a client 
> that sends a POST request with a content-length but never sends the body. ATS 
> times out and aborts the connection without notifying the client. Even though 
> the inactivity timeout might indicate that the client connection is "dead", 
> it's possible that the body the client sent was "lost" somewhere on the 
> network before reaching ATS. The status code response may never make it to 
> the client for the same reasons, but it's nevertheless worth a try.
> One thing to keep in mind: if the response headers have already been sent to 
> the client, sending a status code is not possible.
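> As a rough sketch of the intent, using plain POSIX sockets rather than ATS's 
> internal machinery (the function and parameter names are illustrative only):
> {code}
> #include <sys/socket.h>
> #include <unistd.h>
> 
> // On an inactivity timeout, make a best-effort attempt to tell the
> // client why the connection is going away before closing it.
> void close_on_inactivity_timeout(int fd, bool response_headers_sent) {
>   if (!response_headers_sent) {
>     static const char resp[] =
>       "HTTP/1.1 408 Request Timeout\r\n"
>       "Connection: close\r\n"
>       "Content-Length: 0\r\n\r\n";
>     // The write may fail if the peer is already gone; that is fine.
>     (void)send(fd, resp, sizeof(resp) - 1, MSG_NOSIGNAL); // Linux-specific flag
>   }
>   close(fd);
> }
> {code}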



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : ubuntu_14_10-master » gcc,ubuntu_14_10,debug #314

2015-03-16 Thread jenkins
See 




Jenkins build is back to normal : fedora_21-master » gcc,fedora_21,debug #286

2015-03-16 Thread jenkins
See 




[jira] [Commented] (TS-3102) Improve memory reuse for SPDY contexts by reusing memory released by streams within a client session

2015-03-16 Thread Sudheer Vinukonda (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363329#comment-14363329
 ] 

Sudheer Vinukonda commented on TS-3102:
---

Closing this ticket (TS-3121 has indeed helped with releasing memory faster 
during origin outage scenarios).

> Improve memory reuse for SPDY contexts by reusing memory released by streams 
> within a client session
> 
>
> Key: TS-3102
> URL: https://issues.apache.org/jira/browse/TS-3102
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: SPDY
>Affects Versions: 5.0.1
>Reporter: Sudheer Vinukonda
>Assignee: Sudheer Vinukonda
>  Labels: yahoo
> Fix For: 5.3.0
>
> Attachments: TS-3102.diff
>
>
> In the present SPDY implementation in ATS, there is no client context reuse. 
> Though the SPDY session is reused, each stream (even the non-concurrent ones) 
> is allocated a set of new/separate context buffers (including the internal 
> plugin_vc, client_session, SM object, header heap, transaction buffers, 
> server session objects, etc.). Some of these objects are allocated from a 
> global pool, while some are from a per-thread pool. The context memory is not 
> reused, unlike a non-SPDY session, where there can be at most one transaction 
> on a given connection/session at a time.
> Besides the allocation/deallocation being very inefficient, this also leads 
> to a larger memory footprint over time, due to the relatively poor reuse of 
> per-thread pool objects (especially when there is a high number of threads, 
> e.g. 100+ like we have).
> I am currently testing a patch that does not deallocate the streams when the 
> transaction completes, and instead reuses them for new/subsequent streams.
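> The reuse idea, sketched very loosely (the class and member names below are 
> hypothetical, not the actual patch):
> {code}
> #include <memory>
> #include <vector>
> 
> struct StreamContext {
>   std::vector<char> header_heap; // stand-ins for the per-stream buffers
>   std::vector<char> txn_buffer;
>   void reset() { header_heap.clear(); txn_buffer.clear(); }
> };
> 
> class SpdySession {
> public:
>   // Hand out a previously released context if available, else allocate.
>   std::unique_ptr<StreamContext> acquire_stream() {
>     if (!free_list_.empty()) {
>       std::unique_ptr<StreamContext> ctx = std::move(free_list_.back());
>       free_list_.pop_back();
>       return ctx;
>     }
>     return std::unique_ptr<StreamContext>(new StreamContext);
>   }
>   // On stream completion, scrub the state but keep the memory around.
>   void release_stream(std::unique_ptr<StreamContext> ctx) {
>     ctx->reset();
>     free_list_.push_back(std::move(ctx));
>   }
> private:
>   std::vector<std::unique_ptr<StreamContext>> free_list_;
> };
> {code}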



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3153) Ability to disable/modify protocols based on SNI information

2015-03-16 Thread Sudheer Vinukonda (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudheer Vinukonda updated TS-3153:
--
Fix Version/s: (was: 5.3.0)
   6.0.0

> Ability to disable/modify protocols based on SNI information
> 
>
> Key: TS-3153
> URL: https://issues.apache.org/jira/browse/TS-3153
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: HTTP/2, SPDY
>Reporter: Bryan Call
>Assignee: Sudheer Vinukonda
> Fix For: 6.0.0
>
> Attachments: TS-3153.diff
>
>
> We are running into problems where certain origin servers are having issues 
> when SPDY is enabled.  It would be great to have more control over when 
> protocols are enabled.
> One way to do this would be to add a protocol option to the entry in the 
> ssl_multicert config.  We would then add additional entries for domains that 
> need to disable the protocols.  All protocols should be enabled by default.
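> A hypothetical ssl_multicert.config entry with such an option (the {{proto}} 
> key below does not exist today and is only an illustration) might look like:
> {code}
> dest_ip=* ssl_cert_name=ogre.com.pem proto=http/1.1
> {code}
> with only the listed protocols advertised for connections that select this 
> certificate via SNI, and all protocols enabled when the option is omitted.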



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3153) Ability to disable/modify protocols based on SNI information

2015-03-16 Thread Sudheer Vinukonda (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363332#comment-14363332
 ] 

Sudheer Vinukonda commented on TS-3153:
---

Moving this to 6.0 (I plan to work on the new API and plugin described in the 
previous comment)

> Ability to disable/modify protocols based on SNI information
> 
>
> Key: TS-3153
> URL: https://issues.apache.org/jira/browse/TS-3153
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: HTTP/2, SPDY
>Reporter: Bryan Call
>Assignee: Sudheer Vinukonda
> Fix For: 6.0.0
>
> Attachments: TS-3153.diff
>
>
> We are running into problems where certain origin servers are having issues 
> when SPDY is enabled.  It would be great to have more control over when 
> protocols are enabled.
> One way to do this would be to add a protocol option to the entry in the 
> ssl_multicert config.  We would then add additional entries for domains that 
> need to disable the protocols.  All protocols should be enabled by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (TS-3287) Coverity fixes for v5.3.0 by zwoop

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom resolved TS-3287.
---
Resolution: Fixed

> Coverity fixes for v5.3.0 by zwoop
> --
>
> Key: TS-3287
> URL: https://issues.apache.org/jira/browse/TS-3287
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: Leif Hedstrom
>Assignee: Leif Hedstrom
> Fix For: 5.3.0
>
>
> This is my JIRA for Coverity commits for v5.3.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : debian_jessie-master » gcc,debian_jessie,debug #369

2015-03-16 Thread jenkins
See 




[jira] [Created] (TS-3444) Coverity fixes for v6.0.0 by zwoop

2015-03-16 Thread Leif Hedstrom (JIRA)
Leif Hedstrom created TS-3444:
-

 Summary: Coverity fixes for v6.0.0 by zwoop
 Key: TS-3444
 URL: https://issues.apache.org/jira/browse/TS-3444
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: Leif Hedstrom


Starting a new Jira for my Coverity fixes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3287) Coverity fixes for v5.3.0 by zwoop

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3287:
--
Fix Version/s: (was: 6.0.0)
   5.3.0

> Coverity fixes for v5.3.0 by zwoop
> --
>
> Key: TS-3287
> URL: https://issues.apache.org/jira/browse/TS-3287
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: Leif Hedstrom
>Assignee: Leif Hedstrom
> Fix For: 5.3.0
>
>
> This is my JIRA for Coverity commits for v5.3.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3285) Seg fault when 100 CONT handling is enabled

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3285:
--
Fix Version/s: 5.2.1

> Seg fault when 100 CONT handling is enabled
> ---
>
> Key: TS-3285
> URL: https://issues.apache.org/jira/browse/TS-3285
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 5.0.1
>Reporter: Sudheer Vinukonda
>Assignee: Sudheer Vinukonda
> Fix For: 5.2.1, 5.3.0
>
>
> With 100 CONT handling enabled on our ats5 production hosts, we are seeing 
> the seg fault below.
> {code}
> (gdb) bt
> #0  0x00316e432925 in raise (sig=6) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:64
> #1  0x00316e434105 in abort () at abort.c:92
> #2  0x2b6869944458 in ink_die_die_die (retval=1) at ink_error.cc:43
> #3  0x2b6869944525 in ink_fatal_va(int, const char *, typedef 
> __va_list_tag __va_list_tag *) (return_code=1, 
> message_format=0x2b68699518d8 "%s:%d: failed assert `%s`", 
> ap=0x2b686bb1bf00) at ink_error.cc:65
> #4  0x2b68699445ee in ink_fatal (return_code=1, 
> message_format=0x2b68699518d8 "%s:%d: failed assert `%s`") at ink_error.cc:73
> #5  0x2b6869943160 in _ink_assert (expression=0x7a984e "buf_index_inout 
> == NULL", file=0x7a96e3 "MIME.cc", line=2676) at ink_assert.cc:37
> #6  0x0068212d in mime_mem_print (src_d=0x2b686bb1c090 "HTTP/1.1", 
> src_l=8, buf_start=0x0, buf_length=-1811908575, 
> buf_index_inout=0x2b686bb1c1bc, buf_chars_to_skip_inout=0x2b686bb1c1b8) 
> at MIME.cc:2676
> #7  0x00671df3 in http_version_print (version=65537, buf=0x0, 
> bufsize=-1811908575, bufindex=0x2b686bb1c1bc, dumpoffset=0x2b686bb1c1b8)
> at HTTP.cc:415
> #8  0x006724fb in http_hdr_print (heap=0x2b6881019010, 
> hdr=0x2b6881019098, buf=0x0, bufsize=-1811908575, bufindex=0x2b686bb1c1bc, 
> dumpoffset=0x2b686bb1c1b8) at HTTP.cc:539
> #9  0x004f259b in HTTPHdr::print (this=0x2b68ac06f058, buf=0x0, 
> bufsize=-1811908575, bufindex=0x2b686bb1c1bc, dumpoffset=0x2b686bb1c1b8)
> at ./hdrs/HTTP.h:897
> #10 0x005da903 in HttpSM::write_header_into_buffer 
> (this=0x2b68ac06e910, h=0x2b68ac06f058, b=0x2f163e0) at HttpSM.cc:5554
> #11 0x005e5129 in HttpSM::write_response_header_into_buffer 
> (this=0x2b68ac06e910, h=0x2b68ac06f058, b=0x2f163e0) at HttpSM.h:594
> #12 0x005dcef2 in HttpSM::setup_server_transfer (this=0x2b68ac06e910) 
> at HttpSM.cc:6295
> #13 0x005cd336 in HttpSM::handle_api_return (this=0x2b68ac06e910) at 
> HttpSM.cc:1554
> #14 0x005cd040 in HttpSM::state_api_callout (this=0x2b68ac06e910, 
> event=0, data=0x0) at HttpSM.cc:1446
> #15 0x005d89b7 in HttpSM::do_api_callout_internal 
> (this=0x2b68ac06e910) at HttpSM.cc:4858
> #16 0x005dfdec in HttpSM::set_next_state (this=0x2b68ac06e910) at 
> HttpSM.cc:7115
> #17 0x005df0ec in HttpSM::call_transact_and_set_next_state 
> (this=0x2b68ac06e910, f=0) at HttpSM.cc:6900
> #18 0x005cd1e3 in HttpSM::handle_api_return (this=0x2b68ac06e910) at 
> HttpSM.cc:1514
> #19 0x005cd040 in HttpSM::state_api_callout (this=0x2b68ac06e910, 
> event=6, data=0x0) at HttpSM.cc:1446
> #20 0x005cc7d6 in HttpSM::state_api_callback (this=0x2b68ac06e910, 
> event=6, data=0x0) at HttpSM.cc:1264
> #21 0x00515bb5 in TSHttpTxnReenable (txnp=0x2b68ac06e910, 
> event=TS_EVENT_HTTP_CONTINUE) at InkAPI.cc:5554
> #22 0x2b68806f945b in transform_plugin 
> (event=TS_EVENT_HTTP_READ_RESPONSE_HDR, edata=0x2b68ac06e910) at gzip.cc:693
> #23 0x0050a40c in INKContInternal::handle_event (this=0x2ea2bb0, 
> event=60006, edata=0x2b68ac06e910) at InkAPI.cc:1000
> #24 0x004f597e in Continuation::handleEvent (this=0x2ea2bb0, 
> event=60006, data=0x2b68ac06e910) at 
> ../iocore/eventsystem/I_Continuation.h:146
> #25 0x0050ac53 in APIHook::invoke (this=0x2ea3c80, event=60006, 
> edata=0x2b68ac06e910) at InkAPI.cc:1219
> #26 0x005ccda9 in HttpSM::state_api_callout (this=0x2b68ac06e910, 
> event=0, data=0x0) at HttpSM.cc:1371
> #27 0x005d89b7 in HttpSM::do_api_callout_internal 
> (this=0x2b68ac06e910) at HttpSM.cc:4858
> #28 0x005e54fc in HttpSM::do_api_callout (this=0x2b68ac06e910) at 
> HttpSM.cc:448
> #29 0x005ce277 in HttpSM::state_read_server_response_header 
> (this=0x2b68ac06e910, event=100, data=0x2b68a802afc0) at HttpSM.cc:1861
> #30 0x005d0582 in HttpSM::main_handler (this=0x2b68ac06e910, 
> event=100, data=0x2b68a802afc0) at HttpSM.cc:2507
> #31 0x004f597e in Continuation::handleEvent (this=0x2b68ac06e910, 
> event=100, data=0x2b68a802afc0) at ../iocore/eventsystem/I_Continuation.h:146
> #32 0x00531d7d in PluginVC::process_read_side (this=0x2b68a802aec0, 
> other_side_call=true) at Plug

[jira] [Updated] (TS-3404) PluginVC not notifying ActiveSide of EOS due to race condition in handling terminating chunk.

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3404:
--
Backport to Version:   (was: 5.2.1)

> PluginVC not notifying ActiveSide of EOS due to race condition in handling 
> terminating chunk.
> -
>
> Key: TS-3404
> URL: https://issues.apache.org/jira/browse/TS-3404
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 5.3.0
>Reporter: Sudheer Vinukonda
>Assignee: Sudheer Vinukonda
> Fix For: 6.0.0
>
>
> When there's a race condition in receiving the terminating chunk (of size 0), 
> {{PluginVC}} does not notify the ActiveSide (e.g. {{FetchSM}}) of EOS, 
> causing it to hang until an eventual timeout occurs. 
> The code below checks whether the {{other_side}} is closed or in write 
> shutdown state before sending the EOS,
> https://github.com/apache/trafficserver/blob/master/proxy/PluginVC.cc#L638
> but in the race condition observed in our environment, the {{PassiveSide}}'s 
> write_state is in shutdown (set via consumer_handler handling the event 
> {{VC_EVENT_WRITE_COMPLETE}} at the final terminating chunk, and HttpSM calling 
> {{do_io_close}} with {{IO_SHUTDOWN_WRITE}} on the passive side).
> The simple fix below resolves the issue:
> {code}
>   if (act_on <= 0) {
> if (other_side->closed || other_side->write_state.shutdown || 
> write_state.shutdown) {
>   read_state.vio._cont->handleEvent(VC_EVENT_EOS, &read_state.vio);
> }
> return;
>   }
> {code}
> Below are the debug logs that indicate the failed and working cases due to 
> the race condition:
> Working Case:
> {code}
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> adding producer 'http server'
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> adding consumer 'user agent'
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http) [205] 
> perform_cache_write_action CACHE_DO_NO_ACTION
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) tunnel_run 
> started, p_arg is NULL
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() 
> = 368
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking::Copied header of size 179
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) 
> tcp_init_cwnd_set 0
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) desired TCP 
> congestion window is 0
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() 
> = 368
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.skip_bytes = 179
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler_chunked [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 57 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 57 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 120 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 120 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
> [HttpTunnel::producer_handler] enable_redirection: [1 0 0] event: 100
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler_chunked [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 3 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 3 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 0 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of trailers
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
> [HttpTunnel::producer_handler] enable_redirection: [1 0 0] event: 102
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http) [205] 
> [&HttpSM::tunnel_handler_server, VC_EVENT_READ_COMPLETE]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_ss) [205] session 
> closing, netvc 0x7f85ec0158b0
> [Feb 22 22:03:16.552] Server {0x7f865d664700} 

[jira] [Updated] (TS-3445) Allow purging a specific volume

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3445:
--
Fix Version/s: 6.0.0

> Allow purging a specific volume
> ---
>
> Key: TS-3445
> URL: https://issues.apache.org/jira/browse/TS-3445
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Cache
>Reporter: Leif Hedstrom
> Fix For: 6.0.0
>
>
> Today, we have a way to clear the entire cache, e.g.
> {code}
> traffic_server -Cclear
> {code}
> It'd be useful to be able to do this with finer granularity, e.g.
> {code}
> traffic_server -Cclear=0,3
> {code}
> or some such (I don't even know if the above would work with our clunky 
> option parsing ...). The use case would be that you set up some number of 
> volumes in volume.config, and then use hosting.config to allocate certain 
> volumes for certain content.
> For example, it's reasonable to have e.g.
> volume.config:
> {code}
> volume=1 scheme=http size=50%
> volume=2 scheme=http size=50%
> {code}
> hosting.config:
> {code}
> domain=ogre.com volume=1
> domain=boot.org volume=2
> {code}
> Wish we had more APIs for this, but that's a separate Jira :).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3445) Allow purging a specific volume

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3445:
--
Description: 
Today, we have a way to clear the entire cache, e.g.

{code}
traffic_server -Cclear
{code}

It'd be useful to be able to do this with finer granularity, e.g.

{code}
traffic_server -Cclear=1
{code}

or some such (I don't even know if the above would work with our clunky option 
parsing ...). The use case would be that you set up some number of volumes in 
volume.config, and then use hosting.config to allocate certain volumes for 
certain content.

For example, it's reasonable to have e.g.

volume.config:
{code}
volume=1 scheme=http size=50%
volume=2 scheme=http size=50%
{code}

hosting.config:
{code}
domain=ogre.com volume=1
domain=boot.org volume=2
{code}

Wish we had more APIs for this, but that's a separate Jira :).

  was:
Today, we have a way to clear the entire cache, e.g.

{code}
traffic_server -Cclear
{code}

It'd be useful to be able to do this with finer granularity, e.g.

{code}
traffic_server -Cclear=0,3
{code}

or some such (I don't even know if the above would work with our clunky option 
parsing ...). The use case would be that you set up some number of volumes in 
volume.config, and then use hosting.config to allocate certain volumes for 
certain content.

For example, it's reasonable to have e.g.

volume.config:
{code}
volume=1 scheme=http size=50%
volume=2 scheme=http size=50%
{code}

hosting.config:
{code}
domain=ogre.com volume=1
domain=boot.org volume=2
{code}

Wish we had more APIs for this, but that's a separate Jira :).


> Allow purging a specific volume
> ---
>
> Key: TS-3445
> URL: https://issues.apache.org/jira/browse/TS-3445
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Cache
>Reporter: Leif Hedstrom
>Assignee: Alan M. Carroll
> Fix For: 6.0.0
>
>
> Today, we have a way to clear the entire cache, e.g.
> {code}
> traffic_server -Cclear
> {code}
> It'd be useful to be able to do this with finer granularity, e.g.
> {code}
> traffic_server -Cclear=1
> {code}
> or some such (I don't even know if the above would work with our clunky 
> option parsing ...). The use case would be that you set up some number of 
> volumes in volume.config, and then use hosting.config to allocate certain 
> volumes for certain content.
> For example, it's reasonable to have e.g.
> volume.config:
> {code}
> volume=1 scheme=http size=50%
> volume=2 scheme=http size=50%
> {code}
> hosting.config:
> {code}
> domain=ogre.com volume=1
> domain=boot.org volume=2
> {code}
> Wish we had more APIs for this, but that's a separate Jira :).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3445) Allow purging a specific volume

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3445:
--
Description: 
Today, we have a way to clear the entire cache, e.g.

{code}
traffic_server -Cclear
{code}

It'd be useful to be able to do this with finer granularity, e.g.

{code}
traffic_server -Cclear=1
{code}

or some such (I don't even know if the above would work with our clunky option 
parsing ...). The use case would be that you set up some number of volumes in 
volume.config, and then use hosting.config to allocate certain volumes for 
certain content.

For example, it's reasonable to have e.g.

volume.config:
{code}
volume=1 scheme=http size=50%
volume=2 scheme=http size=50%
{code}

hosting.config:
{code}
domain=ogre.com volume=1
domain=boot.org volume=2
{code}

Wish we had more APIs for this, but that's a separate Jira :). Note that 
hosting.config allows specifying multiple volumes for a domain (or host), e.g. 
volume=1,3. For that, it might be useful for the "clear" command to take the 
same option (-Cclear=1,3).

  was:
Today, we have a way to clear the entire cache, e.g.

{code}
traffic_server -Cclear
{code}

It'd be useful to be able to do this with finer granularity, e.g.

{code}
traffic_server -Cclear=1
{code}

or some such (I don't even know if the above would work with our clunky option 
parsing ...). The use case would be that you set up some number of volumes in 
volume.config, and then use hosting.config to allocate certain volumes for 
certain content.

For example, it's reasonable to have e.g.

volume.config:
{code}
volume=1 scheme=http size=50%
volume=2 scheme=http size=50%
{code}

hosting.config:
{code}
domain=ogre.com volume=1
domain=boot.org volume=2
{code}

Wish we had more APIs for this, but that's a separate Jira :).


> Allow purging a specific volume
> ---
>
> Key: TS-3445
> URL: https://issues.apache.org/jira/browse/TS-3445
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Cache
>Reporter: Leif Hedstrom
>Assignee: Alan M. Carroll
> Fix For: 6.0.0
>
>
> Today, we have a way to clear the entire cache, e.g.
> {code}
> traffic_server -Cclear
> {code}
> It'd be useful to be able to do this with finer granularity, e.g.
> {code}
> traffic_server -Cclear=1
> {code}
> or some such (I don't even know if the above would work with our clunky 
> option parsing ...). The use case would be that you set up some number of 
> volumes in volume.config, and then use hosting.config to allocate certain 
> volumes for certain content.
> For example, it's reasonable to have e.g.
> volume.config:
> {code}
> volume=1 scheme=http size=50%
> volume=2 scheme=http size=50%
> {code}
> hosting.config:
> {code}
> domain=ogre.com volume=1
> domain=boot.org volume=2
> {code}
> Wish we had more APIs for this, but that's a separate Jira :). Note that 
> hosting.config allows specifying multiple volumes for a domain (or host), 
> e.g. volume=1,3. For that, it might be useful for the "clear" command to take 
> the same option (-Cclear=1,3).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3404) PluginVC not notifying ActiveSide of EOS due to race condition in handling terminating chunk.

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3404:
--
Fix Version/s: (was: 6.0.0)
   5.3.0

> PluginVC not notifying ActiveSide of EOS due to race condition in handling 
> terminating chunk.
> -
>
> Key: TS-3404
> URL: https://issues.apache.org/jira/browse/TS-3404
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 5.3.0
>Reporter: Sudheer Vinukonda
>Assignee: Sudheer Vinukonda
> Fix For: 5.2.1, 5.3.0
>
>
> When there's a race condition in receiving the terminating chunk (of size 0), 
> {{PluginVC}} does not notify the ActiveSide (e.g. {{FetchSM}}) of EOS, 
> causing it to hang until an eventual timeout occurs. 
> The code below checks whether the {{other_side}} is closed or in write 
> shutdown state before sending the EOS,
> https://github.com/apache/trafficserver/blob/master/proxy/PluginVC.cc#L638
> but in the race condition observed in our environment, the {{PassiveSide}}'s 
> write_state is in shutdown (set via consumer_handler handling the event 
> {{VC_EVENT_WRITE_COMPLETE}} at the final terminating chunk, and HttpSM calling 
> {{do_io_close}} with {{IO_SHUTDOWN_WRITE}} on the passive side).
> The simple fix below resolves the issue:
> {code}
>   if (act_on <= 0) {
> if (other_side->closed || other_side->write_state.shutdown || 
> write_state.shutdown) {
>   read_state.vio._cont->handleEvent(VC_EVENT_EOS, &read_state.vio);
> }
> return;
>   }
> {code}
> Below are the debug logs that indicate the failed and working cases due to 
> the race condition:
> Working Case:
> {code}
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> adding producer 'http server'
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> adding consumer 'user agent'
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http) [205] 
> perform_cache_write_action CACHE_DO_NO_ACTION
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) tunnel_run 
> started, p_arg is NULL
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() 
> = 368
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking::Copied header of size 179
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) 
> tcp_init_cwnd_set 0
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) desired TCP 
> congestion window is 0
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() 
> = 368
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.skip_bytes = 179
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler_chunked [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 57 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 57 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 120 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 120 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
> [HttpTunnel::producer_handler] enable_redirection: [1 0 0] event: 100
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler_chunked [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 3 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 3 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 0 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of trailers
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
> [HttpTunnel::producer_handler] enable_redirection: [1 0 0] event: 102
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http) [205] 
> [&HttpSM::tunnel_handler_server, VC_EVENT_READ_COMPLETE]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_ss) [205] session 
> closing, netvc 0x7f85ec0158b0
> [Feb 22 22:03:16.5

[jira] [Closed] (TS-3404) PluginVC not notifying ActiveSide of EOS due to race condition in handling terminating chunk.

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom closed TS-3404.
-
Resolution: Fixed

> PluginVC not notifying ActiveSide of EOS due to race condition in handling 
> terminating chunk.
> -
>
> Key: TS-3404
> URL: https://issues.apache.org/jira/browse/TS-3404
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 5.3.0
>Reporter: Sudheer Vinukonda
>Assignee: Sudheer Vinukonda
> Fix For: 5.2.1, 5.3.0
>
>
> When there's a race condition in receiving the terminating chunk (of size 0), 
> {{PluginVC}} does not notify the ActiveSide (e.g. {{FetchSM}}) of EOS, 
> causing it to hang until an eventual timeout occurs. 
> The code below checks whether the {{other_side}} is closed or in write 
> shutdown state before sending the EOS,
> https://github.com/apache/trafficserver/blob/master/proxy/PluginVC.cc#L638
> but in the race condition observed in our environment, the {{PassiveSide}}'s 
> write_state is in shutdown (set via consumer_handler handling the event 
> {{VC_EVENT_WRITE_COMPLETE}} at the final terminating chunk, and HttpSM calling 
> {{do_io_close}} with {{IO_SHUTDOWN_WRITE}} on the passive side).
> The simple fix below resolves the issue:
> {code}
>   if (act_on <= 0) {
> if (other_side->closed || other_side->write_state.shutdown || 
> write_state.shutdown) {
>   read_state.vio._cont->handleEvent(VC_EVENT_EOS, &read_state.vio);
> }
> return;
>   }
> {code}
> Below are the debug logs that indicate the failed and working cases due to 
> the race condition:
> Working Case:
> {code}
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> adding producer 'http server'
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> adding consumer 'user agent'
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http) [205] 
> perform_cache_write_action CACHE_DO_NO_ACTION
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) tunnel_run 
> started, p_arg is NULL
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() 
> = 368
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking::Copied header of size 179
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) 
> tcp_init_cwnd_set 0
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) desired TCP 
> congestion window is 0
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() 
> = 368
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.skip_bytes = 179
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler_chunked [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 57 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 57 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 120 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 120 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
> [HttpTunnel::producer_handler] enable_redirection: [1 0 0] event: 100
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler_chunked [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 3 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 3 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 0 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of trailers
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
> [HttpTunnel::producer_handler] enable_redirection: [1 0 0] event: 102
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http) [205] 
> [&HttpSM::tunnel_handler_server, VC_EVENT_READ_COMPLETE]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_ss) [205] session 
> closing, netvc 0x7f85ec0158b0
> [Feb 22 22:03:16.552] Server {0x7f865d664700} DEBUG: (http_

[jira] [Updated] (TS-3439) Chunked responses don't honor keep-alive

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3439:
--
Fix Version/s: 5.2.1

> Chunked responses don't honor keep-alive
> 
>
> Key: TS-3439
> URL: https://issues.apache.org/jira/browse/TS-3439
> Project: Traffic Server
>  Issue Type: Bug
>Reporter: Thomas Jackson
>Assignee: Brian Geffon
> Fix For: 5.2.1, 5.3.0
>
>
> If you have ATS configured with outbound keep-alive (keep_alive_out) 
> disabled, and an origin that responds with transfer-encoding chunked, ATS 
> puts the connection on the keep-alive pool after the transfer is finished. 
> Since keep_alive_out is disabled, the request contains a connection close 
> header. This means that we now have a race condition between the origin 
> actually closing the TCP session (assuming it's well behaved) and ATS 
> re-using that keep-alive session (which it shouldn't have kept).
> This means not only are we disobeying the configuration (which specified no 
> keep-alive), but we are "breaking" connections, as they will 502 (since the 
> tunnel will be shut down).
> test case: 
> https://github.com/jacksontj/trafficserver/commit/e221e91ad6466ef840f74a1016b8d51c821eb1e9#diff-ed49610150c2617c50f28a047a07c126R130
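> Conceptually, the session-reuse decision should look something like the 
> sketch below (names hypothetical, not ATS internals):
> {code}
> // A session must not go back to the keep-alive pool when we asked the
> // origin to close it via "Connection: close".
> bool may_reuse_origin_session(bool keep_alive_out_enabled,
>                               bool sent_connection_close) {
>   // Even after a clean chunked transfer, a connection we told the origin
>   // to close is racing the origin's close(); reusing it risks a 502 when
>   // the tunnel is torn down mid-request.
>   return keep_alive_out_enabled && !sent_connection_close;
> }
> {code}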



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3285) Seg fault when 100 CONT handling is enabled

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3285:
--
Backport to Version:   (was: 5.2.1)

> Seg fault when 100 CONT handling is enabled
> ---
>
> Key: TS-3285
> URL: https://issues.apache.org/jira/browse/TS-3285
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 5.0.1
>Reporter: Sudheer Vinukonda
>Assignee: Sudheer Vinukonda
> Fix For: 5.2.1, 5.3.0
>
>
> With 100 CONT handling enabled on our ats5 production hosts, we are seeing 
> the seg fault below.
> {code}
> (gdb) bt
> #0  0x00316e432925 in raise (sig=6) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:64
> #1  0x00316e434105 in abort () at abort.c:92
> #2  0x2b6869944458 in ink_die_die_die (retval=1) at ink_error.cc:43
> #3  0x2b6869944525 in ink_fatal_va(int, const char *, typedef 
> __va_list_tag __va_list_tag *) (return_code=1, 
> message_format=0x2b68699518d8 "%s:%d: failed assert `%s`", 
> ap=0x2b686bb1bf00) at ink_error.cc:65
> #4  0x2b68699445ee in ink_fatal (return_code=1, 
> message_format=0x2b68699518d8 "%s:%d: failed assert `%s`") at ink_error.cc:73
> #5  0x2b6869943160 in _ink_assert (expression=0x7a984e "buf_index_inout 
> == NULL", file=0x7a96e3 "MIME.cc", line=2676) at ink_assert.cc:37
> #6  0x0068212d in mime_mem_print (src_d=0x2b686bb1c090 "HTTP/1.1", 
> src_l=8, buf_start=0x0, buf_length=-1811908575, 
> buf_index_inout=0x2b686bb1c1bc, buf_chars_to_skip_inout=0x2b686bb1c1b8) 
> at MIME.cc:2676
> #7  0x00671df3 in http_version_print (version=65537, buf=0x0, 
> bufsize=-1811908575, bufindex=0x2b686bb1c1bc, dumpoffset=0x2b686bb1c1b8)
> at HTTP.cc:415
> #8  0x006724fb in http_hdr_print (heap=0x2b6881019010, 
> hdr=0x2b6881019098, buf=0x0, bufsize=-1811908575, bufindex=0x2b686bb1c1bc, 
> dumpoffset=0x2b686bb1c1b8) at HTTP.cc:539
> #9  0x004f259b in HTTPHdr::print (this=0x2b68ac06f058, buf=0x0, 
> bufsize=-1811908575, bufindex=0x2b686bb1c1bc, dumpoffset=0x2b686bb1c1b8)
> at ./hdrs/HTTP.h:897
> #10 0x005da903 in HttpSM::write_header_into_buffer 
> (this=0x2b68ac06e910, h=0x2b68ac06f058, b=0x2f163e0) at HttpSM.cc:5554
> #11 0x005e5129 in HttpSM::write_response_header_into_buffer 
> (this=0x2b68ac06e910, h=0x2b68ac06f058, b=0x2f163e0) at HttpSM.h:594
> #12 0x005dcef2 in HttpSM::setup_server_transfer (this=0x2b68ac06e910) 
> at HttpSM.cc:6295
> #13 0x005cd336 in HttpSM::handle_api_return (this=0x2b68ac06e910) at 
> HttpSM.cc:1554
> #14 0x005cd040 in HttpSM::state_api_callout (this=0x2b68ac06e910, 
> event=0, data=0x0) at HttpSM.cc:1446
> #15 0x005d89b7 in HttpSM::do_api_callout_internal 
> (this=0x2b68ac06e910) at HttpSM.cc:4858
> #16 0x005dfdec in HttpSM::set_next_state (this=0x2b68ac06e910) at 
> HttpSM.cc:7115
> #17 0x005df0ec in HttpSM::call_transact_and_set_next_state 
> (this=0x2b68ac06e910, f=0) at HttpSM.cc:6900
> #18 0x005cd1e3 in HttpSM::handle_api_return (this=0x2b68ac06e910) at 
> HttpSM.cc:1514
> #19 0x005cd040 in HttpSM::state_api_callout (this=0x2b68ac06e910, 
> event=6, data=0x0) at HttpSM.cc:1446
> #20 0x005cc7d6 in HttpSM::state_api_callback (this=0x2b68ac06e910, 
> event=6, data=0x0) at HttpSM.cc:1264
> #21 0x00515bb5 in TSHttpTxnReenable (txnp=0x2b68ac06e910, 
> event=TS_EVENT_HTTP_CONTINUE) at InkAPI.cc:5554
> #22 0x2b68806f945b in transform_plugin 
> (event=TS_EVENT_HTTP_READ_RESPONSE_HDR, edata=0x2b68ac06e910) at gzip.cc:693
> #23 0x0050a40c in INKContInternal::handle_event (this=0x2ea2bb0, 
> event=60006, edata=0x2b68ac06e910) at InkAPI.cc:1000
> #24 0x004f597e in Continuation::handleEvent (this=0x2ea2bb0, 
> event=60006, data=0x2b68ac06e910) at 
> ../iocore/eventsystem/I_Continuation.h:146
> #25 0x0050ac53 in APIHook::invoke (this=0x2ea3c80, event=60006, 
> edata=0x2b68ac06e910) at InkAPI.cc:1219
> #26 0x005ccda9 in HttpSM::state_api_callout (this=0x2b68ac06e910, 
> event=0, data=0x0) at HttpSM.cc:1371
> #27 0x005d89b7 in HttpSM::do_api_callout_internal 
> (this=0x2b68ac06e910) at HttpSM.cc:4858
> #28 0x005e54fc in HttpSM::do_api_callout (this=0x2b68ac06e910) at 
> HttpSM.cc:448
> #29 0x005ce277 in HttpSM::state_read_server_response_header 
> (this=0x2b68ac06e910, event=100, data=0x2b68a802afc0) at HttpSM.cc:1861
> #30 0x005d0582 in HttpSM::main_handler (this=0x2b68ac06e910, 
> event=100, data=0x2b68a802afc0) at HttpSM.cc:2507
> #31 0x004f597e in Continuation::handleEvent (this=0x2b68ac06e910, 
> event=100, data=0x2b68a802afc0) at ../iocore/eventsystem/I_Continuation.h:146
> #32 0x00531d7d in PluginVC::process_read_side (this=0x2b68a802aec0, 
> other_side_cal

[jira] [Updated] (TS-3404) PluginVC not notifying ActiveSide of EOS due to race condition in handling terminating chunk.

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3404:
--
Fix Version/s: 5.2.1

> PluginVC not notifying ActiveSide of EOS due to race condition in handling 
> terminating chunk.
> -
>
> Key: TS-3404
> URL: https://issues.apache.org/jira/browse/TS-3404
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 5.3.0
>Reporter: Sudheer Vinukonda
>Assignee: Sudheer Vinukonda
> Fix For: 5.2.1, 5.3.0
>
>
> When there's a race condition in receiving the terminating chunk (of size 0), 
> {{PluginVC}} does not notify the ActiveSide (e.g. {{FetchSM}}) of EOS, 
> causing it to hang until an eventual timeout occurs. 
> The code below checks whether the {{other_side}} is closed or in write 
> shutdown state before sending the EOS,
> https://github.com/apache/trafficserver/blob/master/proxy/PluginVC.cc#L638
> but in the race condition observed in our environment, the {{PassiveSide}}'s 
> write_state is in shutdown (set via consumer_handler handling the event 
> {{VC_EVENT_WRITE_COMPLETE}} at the final terminating chunk, and HttpSM calling 
> {{do_io_close}} with {{IO_SHUTDOWN_WRITE}} on the passive side).
> The simple fix below resolves the issue:
> {code}
>   if (act_on <= 0) {
> if (other_side->closed || other_side->write_state.shutdown || 
> write_state.shutdown) {
>   read_state.vio._cont->handleEvent(VC_EVENT_EOS, &read_state.vio);
> }
> return;
>   }
> {code}
> Below are the debug logs that indicate the failed and working cases due to 
> the race condition:
> Working Case:
> {code}
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> adding producer 'http server'
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> adding consumer 'user agent'
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http) [205] 
> perform_cache_write_action CACHE_DO_NO_ACTION
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) tunnel_run 
> started, p_arg is NULL
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() 
> = 368
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking::Copied header of size 179
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) 
> tcp_init_cwnd_set 0
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) desired TCP 
> congestion window is 0
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() 
> = 368
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.skip_bytes = 179
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler_chunked [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 57 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 57 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 120 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 120 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
> [HttpTunnel::producer_handler] enable_redirection: [1 0 0] event: 100
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler_chunked [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 3 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 3 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 0 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of trailers
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
> [HttpTunnel::producer_handler] enable_redirection: [1 0 0] event: 102
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http) [205] 
> [&HttpSM::tunnel_handler_server, VC_EVENT_READ_COMPLETE]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_ss) [205] session 
> closing, netvc 0x7f85ec0158b0
> [Feb 22 22:03:16.552] Server {0x7f865d664700} DEBUG: (

[jira] [Updated] (TS-3445) Allow purging a specific volume

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3445:
--
Assignee: Alan M. Carroll

> Allow purging a specific volume
> ---
>
> Key: TS-3445
> URL: https://issues.apache.org/jira/browse/TS-3445
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Cache
>Reporter: Leif Hedstrom
>Assignee: Alan M. Carroll
> Fix For: 6.0.0
>
>
> Today, we have a way to clear the entire cache, e.g.
> {code}
> traffic_server -Cclear
> {code}
> It'd be useful to be able to do this with finer granularity, e.g.
> {code}
> traffic_server -Cclear=0,3
> {code}
> or some such (I don't even know if the above would work with our clunky 
> option parsing ...). The use case would be that you set up some number of 
> volumes in volume.config, and then use hosting.config to allocate certain 
> volumes for certain content.
> For example, it's reasonable to have e.g.
> volume.config:
> {code}
> volume=1 scheme=http size=50%
> volume=2 scheme=http size=50%
> {code}
> hosting.config:
> {code}
> domain=ogre.com volume=1
> domain=boot.org volume=2
> {code}
> Wish we had more APIs for this, but that's a separate Jira :).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3424) SSL error: SSL3_GET_RECORD:decryption failed or bad record mac

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3424:
--
Fix Version/s: (was: 6.0.0)
   5.3.0

> SSL error: SSL3_GET_RECORD:decryption failed or bad record mac
> --
>
> Key: TS-3424
> URL: https://issues.apache.org/jira/browse/TS-3424
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, SSL
>Reporter: Brian Geffon
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: ts-3424-2.diff, ts-3424-3.diff, ts-3424-for-52-2.diff, 
> ts-3424-for-52.diff, ts-3424.diff, undo-handshake-buffer.diff
>
>
> Starting with 5.2.x, we're seeing SSL_ERROR_SSL type errors in 
> {{ssl_read_from_net}}; when calling OpenSSL's {{ERR_error_string_n}}, we see 
> that the error is {{1408F119:SSL routines:SSL3_GET_RECORD:decryption failed 
> or bad record mac}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3439) Chunked responses don't honor keep-alive

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3439:
--
Backport to Version:   (was: 5.2.1)

> Chunked responses don't honor keep-alive
> 
>
> Key: TS-3439
> URL: https://issues.apache.org/jira/browse/TS-3439
> Project: Traffic Server
>  Issue Type: Bug
>Reporter: Thomas Jackson
>Assignee: Brian Geffon
> Fix For: 5.2.1, 5.3.0
>
>
> If you have ATS configured with outbound keep-alive (keep_alive_out) 
> disabled, and an origin that responds with transfer-encoding chunked, ATS 
> puts the connection on the keep-alive pool after the transfer is finished. 
> Since keep_alive_out is disabled, the request contains a connection close 
> header. This means that we now have a race condition between the origin 
> actually closing the TCP session (assuming it's well behaved) and ATS 
> re-using that keep-alive session (which it shouldn't have kept).
> This means not only are we disobeying the configuration (which specified no 
> keep-alive), but we are "breaking" connections, as they will 502 (since the 
> tunnel will be shut down).
> test case: 
> https://github.com/jacksontj/trafficserver/commit/e221e91ad6466ef840f74a1016b8d51c821eb1e9#diff-ed49610150c2617c50f28a047a07c126R130



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3445) Allow purging a specific volume

2015-03-16 Thread Leif Hedstrom (JIRA)
Leif Hedstrom created TS-3445:
-

 Summary: Allow purging a specific volume
 Key: TS-3445
 URL: https://issues.apache.org/jira/browse/TS-3445
 Project: Traffic Server
  Issue Type: Improvement
  Components: Cache
Reporter: Leif Hedstrom


Today, we have a way to clear the entire cache, e.g.

{code}
traffic_server -Cclear
{code}

It'd be useful to be able to do this with finer granularity, e.g.

{code}
traffic_server -Cclear=0,3
{code}

or some such (I don't even know if the above would work with our clunky option 
parsing ...). The use case would be that you set up some number of volumes in 
volume.config, and then use hosting.config to allocate certain volumes for 
certain content.

For example, it's reasonable to have e.g.

volume.config:
{code}
volume=1 scheme=http size=50%
volume=2 scheme=http size=50%
{code}

hosting.config:
{code}
domain=ogre.com volume=1
domain=boot.org volume=2
{code}

Wish we had more APIs for this, but that's a separate Jira :).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3439) Chunked responses don't honor keep-alive

2015-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363376#comment-14363376
 ] 

ASF subversion and git services commented on TS-3439:
-

Commit 52cddbc866776d676e3bf84716bc0566c816bf4a in trafficserver's branch 
refs/heads/5.2.x from [~zwoop]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=52cddbc ]

Add TS-3439


> Chunked responses don't honor keep-alive
> 
>
> Key: TS-3439
> URL: https://issues.apache.org/jira/browse/TS-3439
> Project: Traffic Server
>  Issue Type: Bug
>Reporter: Thomas Jackson
>Assignee: Brian Geffon
> Fix For: 5.2.1, 5.3.0
>
>
> If you have ATS configured with outbound keep-alive (keep_alive_out) 
> disabled, and an origin that responds with transfer-encoding chunked, ATS 
> puts the connection on the keep-alive pool after the transfer is finished. 
> Since keep_alive_out is disabled, the request contains a connection close 
> header. This means that we now have a race condition between the origin 
> actually closing the TCP session (assuming it's well behaved) and ATS 
> re-using that keep-alive session (which it shouldn't have kept).
> This means not only are we disobeying the configuration (which specified no 
> keep-alive), but we are "breaking" connections, as they will 502 (since the 
> tunnel will be shut down).
> test case: 
> https://github.com/jacksontj/trafficserver/commit/e221e91ad6466ef840f74a1016b8d51c821eb1e9#diff-ed49610150c2617c50f28a047a07c126R130



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3285) Seg fault when 100 CONT handling is enabled

2015-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363372#comment-14363372
 ] 

ASF subversion and git services commented on TS-3285:
-

Commit 4a8bac11223dc8a408f2543d9d239946cb8b6967 in trafficserver's branch 
refs/heads/5.2.x from [~zwoop]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=4a8bac1 ]

Add TS-3285


> Seg fault when 100 CONT handling is enabled
> ---
>
> Key: TS-3285
> URL: https://issues.apache.org/jira/browse/TS-3285
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 5.0.1
>Reporter: Sudheer Vinukonda
>Assignee: Sudheer Vinukonda
> Fix For: 5.2.1, 5.3.0
>
>
> With 100 CONT handling enabled on our ats5 production hosts, we are seeing 
> the seg fault below.
> {code}
> (gdb) bt
> #0  0x00316e432925 in raise (sig=6) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:64
> #1  0x00316e434105 in abort () at abort.c:92
> #2  0x2b6869944458 in ink_die_die_die (retval=1) at ink_error.cc:43
> #3  0x2b6869944525 in ink_fatal_va(int, const char *, typedef 
> __va_list_tag __va_list_tag *) (return_code=1, 
> message_format=0x2b68699518d8 "%s:%d: failed assert `%s`", 
> ap=0x2b686bb1bf00) at ink_error.cc:65
> #4  0x2b68699445ee in ink_fatal (return_code=1, 
> message_format=0x2b68699518d8 "%s:%d: failed assert `%s`") at ink_error.cc:73
> #5  0x2b6869943160 in _ink_assert (expression=0x7a984e "buf_index_inout 
> == NULL", file=0x7a96e3 "MIME.cc", line=2676) at ink_assert.cc:37
> #6  0x0068212d in mime_mem_print (src_d=0x2b686bb1c090 "HTTP/1.1", 
> src_l=8, buf_start=0x0, buf_length=-1811908575, 
> buf_index_inout=0x2b686bb1c1bc, buf_chars_to_skip_inout=0x2b686bb1c1b8) 
> at MIME.cc:2676
> #7  0x00671df3 in http_version_print (version=65537, buf=0x0, 
> bufsize=-1811908575, bufindex=0x2b686bb1c1bc, dumpoffset=0x2b686bb1c1b8)
> at HTTP.cc:415
> #8  0x006724fb in http_hdr_print (heap=0x2b6881019010, 
> hdr=0x2b6881019098, buf=0x0, bufsize=-1811908575, bufindex=0x2b686bb1c1bc, 
> dumpoffset=0x2b686bb1c1b8) at HTTP.cc:539
> #9  0x004f259b in HTTPHdr::print (this=0x2b68ac06f058, buf=0x0, 
> bufsize=-1811908575, bufindex=0x2b686bb1c1bc, dumpoffset=0x2b686bb1c1b8)
> at ./hdrs/HTTP.h:897
> #10 0x005da903 in HttpSM::write_header_into_buffer 
> (this=0x2b68ac06e910, h=0x2b68ac06f058, b=0x2f163e0) at HttpSM.cc:5554
> #11 0x005e5129 in HttpSM::write_response_header_into_buffer 
> (this=0x2b68ac06e910, h=0x2b68ac06f058, b=0x2f163e0) at HttpSM.h:594
> #12 0x005dcef2 in HttpSM::setup_server_transfer (this=0x2b68ac06e910) 
> at HttpSM.cc:6295
> #13 0x005cd336 in HttpSM::handle_api_return (this=0x2b68ac06e910) at 
> HttpSM.cc:1554
> #14 0x005cd040 in HttpSM::state_api_callout (this=0x2b68ac06e910, 
> event=0, data=0x0) at HttpSM.cc:1446
> #15 0x005d89b7 in HttpSM::do_api_callout_internal 
> (this=0x2b68ac06e910) at HttpSM.cc:4858
> #16 0x005dfdec in HttpSM::set_next_state (this=0x2b68ac06e910) at 
> HttpSM.cc:7115
> #17 0x005df0ec in HttpSM::call_transact_and_set_next_state 
> (this=0x2b68ac06e910, f=0) at HttpSM.cc:6900
> #18 0x005cd1e3 in HttpSM::handle_api_return (this=0x2b68ac06e910) at 
> HttpSM.cc:1514
> #19 0x005cd040 in HttpSM::state_api_callout (this=0x2b68ac06e910, 
> event=6, data=0x0) at HttpSM.cc:1446
> #20 0x005cc7d6 in HttpSM::state_api_callback (this=0x2b68ac06e910, 
> event=6, data=0x0) at HttpSM.cc:1264
> #21 0x00515bb5 in TSHttpTxnReenable (txnp=0x2b68ac06e910, 
> event=TS_EVENT_HTTP_CONTINUE) at InkAPI.cc:5554
> #22 0x2b68806f945b in transform_plugin 
> (event=TS_EVENT_HTTP_READ_RESPONSE_HDR, edata=0x2b68ac06e910) at gzip.cc:693
> #23 0x0050a40c in INKContInternal::handle_event (this=0x2ea2bb0, 
> event=60006, edata=0x2b68ac06e910) at InkAPI.cc:1000
> #24 0x004f597e in Continuation::handleEvent (this=0x2ea2bb0, 
> event=60006, data=0x2b68ac06e910) at 
> ../iocore/eventsystem/I_Continuation.h:146
> #25 0x0050ac53 in APIHook::invoke (this=0x2ea3c80, event=60006, 
> edata=0x2b68ac06e910) at InkAPI.cc:1219
> #26 0x005ccda9 in HttpSM::state_api_callout (this=0x2b68ac06e910, 
> event=0, data=0x0) at HttpSM.cc:1371
> #27 0x005d89b7 in HttpSM::do_api_callout_internal 
> (this=0x2b68ac06e910) at HttpSM.cc:4858
> #28 0x005e54fc in HttpSM::do_api_callout (this=0x2b68ac06e910) at 
> HttpSM.cc:448
> #29 0x005ce277 in HttpSM::state_read_server_response_header 
> (this=0x2b68ac06e910, event=100, data=0x2b68a802afc0) at HttpSM.cc:1861
> #30 0x005d0582 in HttpSM::main_handler (this=0x2b68ac06e910, 
> event=100, data=0x2b68a802afc0) at HttpSM.cc:2507
> #3

[jira] [Commented] (TS-3404) PluginVC not notifying ActiveSide of EOS due to race condition in handling terminating chunk.

2015-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363374#comment-14363374
 ] 

ASF subversion and git services commented on TS-3404:
-

Commit 457f59f457f03d93b3f66c358eb3493bd44a in trafficserver's branch 
refs/heads/5.2.x from [~zwoop]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=457 ]

Add TS-3404


> PluginVC not notifying ActiveSide of EOS due to race condition in handling 
> terminating chunk.
> -
>
> Key: TS-3404
> URL: https://issues.apache.org/jira/browse/TS-3404
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 5.3.0
>Reporter: Sudheer Vinukonda
>Assignee: Sudheer Vinukonda
> Fix For: 5.2.1, 5.3.0
>
>
> When there's a race condition in receiving the terminating chunk (of size 0), 
> {{PluginVC}} does not notify the ActiveSide (e.g. {{FetchSM}}) of EOS, 
> causing it to hang until an eventual timeout occurs. 
> The code below checks if the {{other_side}} is closed or in write shutdown 
> state to send the EOS:
> https://github.com/apache/trafficserver/blob/master/proxy/PluginVC.cc#L638
> But in the race condition observed in our environment, the {{PassiveSide}}'s 
> write_state is in shutdown (set via consumer_handler handling the event 
> {{VC_EVENT_WRITE_COMPLETE}} at the final terminating chunk, and HttpSM calling 
> {{do_io_close}} with {{IO_SHUTDOWN_WRITE}} on the passive side).
> The simple fix below resolves the issue:
> {code}
>   if (act_on <= 0) {
> if (other_side->closed || other_side->write_state.shutdown || 
> write_state.shutdown) {
>   read_state.vio._cont->handleEvent(VC_EVENT_EOS, &read_state.vio);
> }
> return;
>   }
> {code}
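> A minimal annotated restatement of that check (a sketch; condition names as in 
> PluginVC.cc, comments are my reading of the report):
> {code}
> // Deliver EOS to this side's reader when no further data can arrive:
> //   other_side->closed               - the peer half of the PluginVC is gone
> //   other_side->write_state.shutdown - the peer shut down its write half
> //   write_state.shutdown             - our own write half was shut down
> //                                      (the disjunct this fix adds)
> if (other_side->closed || other_side->write_state.shutdown || write_state.shutdown) {
>   read_state.vio._cont->handleEvent(VC_EVENT_EOS, &read_state.vio);
> }
> {code}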
> Below are the debug logs that indicate the failed and working cases due to 
> the race condition:
> Working Case:
> {code}
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> adding producer 'http server'
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> adding consumer 'user agent'
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http) [205] 
> perform_cache_write_action CACHE_DO_NO_ACTION
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) tunnel_run 
> started, p_arg is NULL
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() 
> = 368
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking::Copied header of size 179
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) 
> tcp_init_cwnd_set 0
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) desired TCP 
> congestion window is 0
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() 
> = 368
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.skip_bytes = 179
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler_chunked [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 57 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 57 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 120 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 120 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
> [HttpTunnel::producer_handler] enable_redirection: [1 0 0] event: 100
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler_chunked [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 3 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 3 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 0 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of trailers
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
> [HttpTunnel::producer_handler] enable_redirection: [1 0 0] event: 102
> [Feb 22 22:03:16.551] Server {0

[jira] [Commented] (TS-3439) Chunked responses don't honor keep-alive

2015-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363375#comment-14363375
 ] 

ASF subversion and git services commented on TS-3439:
-

Commit 32b46b94d0c78bbebbcddf86493efb84b8d868a8 in trafficserver's branch 
refs/heads/5.2.x from [~briang]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=32b46b9 ]

TS-3439: Chunked responses don't honor keep-alive


> Chunked responses don't honor keep-alive
> 
>
> Key: TS-3439
> URL: https://issues.apache.org/jira/browse/TS-3439
> Project: Traffic Server
>  Issue Type: Bug
>Reporter: Thomas Jackson
>Assignee: Brian Geffon
> Fix For: 5.2.1, 5.3.0
>
>
> If you have ATS configured with outbound keep-alive disabled, and an origin that 
> responds with transfer-encoding: chunked, ATS puts the connection in the 
> keep-alive pool after the transfer is finished. Since keep_alive_out is 
> disabled, the request contains a Connection: close header. This means that we 
> now have a race condition between the origin actually closing the TCP session 
> (assuming it's well behaved) and ATS re-using that keep-alive session (which 
> it shouldn't have kept).
> This means not only are we disobeying the configuration (which specified no 
> keep-alive), but we are "breaking" connections-- as they will 502 (since the 
> tunnel will be shut down).
> test case: 
> https://github.com/jacksontj/trafficserver/commit/e221e91ad6466ef840f74a1016b8d51c821eb1e9#diff-ed49610150c2617c50f28a047a07c126R130



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3404) PluginVC not notifying ActiveSide of EOS due to race condition in handling terminating chunk.

2015-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363373#comment-14363373
 ] 

ASF subversion and git services commented on TS-3404:
-

Commit c7ae5881666a67c8cdf755a7b261ac12485d9604 in trafficserver's branch 
refs/heads/5.2.x from [~jacksontj]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=c7ae588 ]

TS-3404: PluginVC not notifying ActiveSide of EOS due to race condition in 
handling terminating chunk.


> PluginVC not notifying ActiveSide of EOS due to race condition in handling 
> terminating chunk.
> -
>
> Key: TS-3404
> URL: https://issues.apache.org/jira/browse/TS-3404
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 5.3.0
>Reporter: Sudheer Vinukonda
>Assignee: Sudheer Vinukonda
> Fix For: 5.2.1, 5.3.0
>
>
> When there's a race condition in receiving the terminating chunk (of size 0), 
> {{PluginVC}} does not notify the ActiveSide (e.g. {{FetchSM}}) of EOS, 
> causing it to hang until an eventual timeout occurs. 
> The code below checks if the {{other_side}} is closed or in write shutdown 
> state to send the EOS:
> https://github.com/apache/trafficserver/blob/master/proxy/PluginVC.cc#L638
> But in the race condition observed in our environment, the {{PassiveSide}}'s 
> write_state is in shutdown (set via consumer_handler handling the event 
> {{VC_EVENT_WRITE_COMPLETE}} at the final terminating chunk, and HttpSM calling 
> {{do_io_close}} with {{IO_SHUTDOWN_WRITE}} on the passive side).
> The simple fix below resolves the issue:
> {code}
>   if (act_on <= 0) {
> if (other_side->closed || other_side->write_state.shutdown || 
> write_state.shutdown) {
>   read_state.vio._cont->handleEvent(VC_EVENT_EOS, &read_state.vio);
> }
> return;
>   }
> {code}
> Below are the debug logs that indicate the failed and working cases due to 
> the race condition:
> Working Case:
> {code}
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> adding producer 'http server'
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> adding consumer 'user agent'
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http) [205] 
> perform_cache_write_action CACHE_DO_NO_ACTION
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) tunnel_run 
> started, p_arg is NULL
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() 
> = 368
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking::Copied header of size 179
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) 
> tcp_init_cwnd_set 0
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_cs) desired TCP 
> congestion window is 0
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.chunked_reader->read_avail() 
> = 368
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) 
> [producer_run] do_dechunking p->chunked_handler.skip_bytes = 179
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler_chunked [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 57 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 57 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 120 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 120 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
> [HttpTunnel::producer_handler] enable_redirection: [1 0 0] event: 100
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_tunnel) [205] 
> producer_handler_chunked [http server VC_EVENT_READ_READY]
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 3 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of chunk of 3 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) read chunk 
> size of 0 bytes
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_chunk) completed 
> read of trailers
> [Feb 22 22:03:16.551] Server {0x7f865d664700} DEBUG: (http_redirect) 
> [HttpTu

Jenkins build is back to normal : tsqa-master #239

2015-03-16 Thread jenkins



[jira] [Commented] (TS-3424) SSL error: SSL3_GET_RECORD:decryption failed or bad record mac

2015-03-16 Thread Susan Hinrichs (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363570#comment-14363570
 ] 

Susan Hinrichs commented on TS-3424:


Should definitely commit ts-3424-3.diff for 5.3 and ts-3424-for-52-2.diff for 
5.2.x.

Probably want to file a new bug to track the DHE issue.

> SSL error: SSL3_GET_RECORD:decryption failed or bad record mac
> --
>
> Key: TS-3424
> URL: https://issues.apache.org/jira/browse/TS-3424
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, SSL
>Reporter: Brian Geffon
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: ts-3424-2.diff, ts-3424-3.diff, ts-3424-for-52-2.diff, 
> ts-3424-for-52.diff, ts-3424.diff, undo-handshake-buffer.diff
>
>
> Starting with 5.2.x we're seeing SSL_ERROR_SSL type errors in 
> {{ssl_read_from_net}}; when calling OpenSSL's {{ERR_error_string_n}} we see 
> the error is {{1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad 
> record mac}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3440) Connect_retries re-connects even if request made it to origin

2015-03-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363675#comment-14363675
 ] 

ASF GitHub Bot commented on TS-3440:


Github user asfgit closed the pull request at:

https://github.com/apache/trafficserver/pull/179


> Connect_retries re-connects even if request made it to origin
> -
>
> Key: TS-3440
> URL: https://issues.apache.org/jira/browse/TS-3440
> Project: Traffic Server
>  Issue Type: Bug
>Reporter: Thomas Jackson
>Assignee: Brian Geffon
> Fix For: 6.0.0
>
>
> While trying to work around TS-3439, I decided to test out the connect retries 
> option. During testing I found a case where it retries when it should not.
> The scenario is as follows:
> - ATS makes a connection to an origin
> - the origin acks the entire request
> - the origin starts to send back a response (let's say the first line of the 
> header)
> - the origin sends an RST
> In this scenario ATS will re-connect to the origin, which is bad since we 
> have already sent the request (and we aren't sure if the URL is re-entrant).
> Test case: 
> https://github.com/jacksontj/trafficserver/commit/28059ccb93f9fb173792aeebf90062882dfdf9d5#diff-06f9ddbe6cc45d76ebb2cb21479dc805R182
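
A sketch of the guard this implies (hypothetical helper; the real decision lives in the HttpSM/HttpTransact retry path):

{code}
// Retrying a connect is only safe while the request has not reached the
// origin; once any request bytes were acked or response bytes received,
// replaying the request is unsafe unless it is known to be idempotent.
static bool may_retry_connect(bool request_sent, bool response_started,
                              int attempts, int max_retries)
{
  if (request_sent || response_started) {
    return false;
  }
  return attempts < max_retries;
}
{code}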



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-2157) Replace "addr" with appropriate "src_addr" and "dst_addr" in ConnectionAttributes

2015-03-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363673#comment-14363673
 ] 

ASF GitHub Bot commented on TS-2157:


Github user ericcarlschwartz closed the pull request at:

https://github.com/apache/trafficserver/pull/157


> Replace "addr" with appropriate "src_addr" and "dst_addr" in 
> ConnectionAttributes
> -
>
> Key: TS-2157
> URL: https://issues.apache.org/jira/browse/TS-2157
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Network
>Reporter: Leif Hedstrom
>Assignee: Eric Schwartz
> Fix For: 6.0.0
>
>
> This would more clearly let us encapsulate the two endpoints (IpEndpoint) 
> for each connection. In addition, we ought to be able to remove the "port" 
> member from ConnectionAttributes as well, and its convoluted and overloaded 
> semantics. The appropriate IpEndpoint (src_addr or dst_addr) would hold the 
> port information as necessary.
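
A sketch of the proposed shape (illustrative only, not the committed change):

{code}
struct ConnectionAttributes {
  IpEndpoint src_addr; // source endpoint; an IpEndpoint carries its own port
  IpEndpoint dst_addr; // destination endpoint, likewise
  // ... the old "addr" member and the overloaded "port" member go away ...
};
{code}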



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3424) SSL error: SSL3_GET_RECORD:decryption failed or bad record mac

2015-03-16 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs updated TS-3424:
---
Attachment: undo-handshake-buffer-for-52.diff

undo-handshake-buffer-for-52.diff undoes the handshake buffering for 5.2.x

> SSL error: SSL3_GET_RECORD:decryption failed or bad record mac
> --
>
> Key: TS-3424
> URL: https://issues.apache.org/jira/browse/TS-3424
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, SSL
>Reporter: Brian Geffon
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: ts-3424-2.diff, ts-3424-3.diff, ts-3424-for-52-2.diff, 
> ts-3424-for-52.diff, ts-3424.diff, undo-handshake-buffer-for-52.diff, 
> undo-handshake-buffer.diff
>
>
> Starting with 5.2.x we're seeing SSL_ERROR_SSL type errors in 
> {{ssl_read_from_net}}; when calling OpenSSL's {{ERR_error_string_n}} we see 
> the error is {{1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad 
> record mac}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-2907) Unix Sockets

2015-03-16 Thread Brian Geffon (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363917#comment-14363917
 ] 

Brian Geffon commented on TS-2907:
--

I apologize for the delay; I'll review.

> Unix Sockets
> 
>
> Key: TS-2907
> URL: https://issues.apache.org/jira/browse/TS-2907
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: Luca Rea
>Assignee: Brian Geffon
>  Labels: review
> Fix For: 5.3.0
>
> Attachments: TS-2907.diff, unixsocket-backpost.diff
>
>
> Feature request to support listeners and parents on unix sockets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-2894) Spdy slow start..

2015-03-16 Thread Phil Sorber (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363922#comment-14363922
 ] 

Phil Sorber commented on TS-2894:
-

[~sudheerv], we should decide what to do with this, ASAP.

> Spdy slow start..
> -
>
> Key: TS-2894
> URL: https://issues.apache.org/jira/browse/TS-2894
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: SPDY
>Reporter: Sudheer Vinukonda
>Assignee: Sudheer Vinukonda
>  Labels: yahoo
> Fix For: 5.3.0
>
> Attachments: TS-2894.diff
>
>
> When production testing with spdy/5.0.0, we ran into an issue in some of our 
> systems where the spdy hosts would flap constantly due to the flood of 
> requests. We further noticed that, whereas hosts running 4.0.x or 5.0.0 with 
> spdy turned off would recover quickly following a restart, spdy-enabled hosts 
> would continue to receive a flood of requests and continue to flap. During this 
> time, traffic server is generally busy reading from the disk, cannot handle 
> many requests, and is made miserable by spdy's support of multiple 
> concurrent streams. 
> To handle such a sudden flood of requests, I'm implementing a simple slow 
> start mechanism with spdy. The idea is to increase 
> max_concurrent_streams_in gradually based on a configured timer, rather than 
> use the configured value right away. The steps I chose to implement are 1, 
> 25, 50, 75 and 100% of the configured max_concurrent_streams_in. Note that, 
> currently, max_concurrent_streams_in only affects new spdy sessions. Existing 
> sessions (if any) would continue to use their older values.
> Not sure if everyone would be interested in this, but I thought I'd still 
> upload my patch in case someone is.
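
A sketch of the ramp described above (the step table and period handling are assumed from the description, not taken from the attached patch):

{code}
#include <algorithm>

// Percent of the configured max_concurrent_streams_in permitted after
// each elapsed ramp period: 1%, 25%, 50%, 75%, then the full 100%.
static const int kRampPct[] = {1, 25, 50, 75, 100};

int allowed_concurrent_streams(int configured_max, int elapsed_periods)
{
  const int i = std::min(std::max(elapsed_periods, 0), 4);
  return std::max(1, configured_max * kRampPct[i] / 100);
}
{code}

Since the value is only sampled when a session is created, this matches the note above that existing sessions keep the cap they started with.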



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-2848) ATS crash in HttpSM::release_server_session

2015-03-16 Thread Phil Sorber (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363923#comment-14363923
 ] 

Phil Sorber commented on TS-2848:
-

[~amc], any thoughts on this patch?

> ATS crash in HttpSM::release_server_session
> ---
>
> Key: TS-2848
> URL: https://issues.apache.org/jira/browse/TS-2848
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HTTP
>Reporter: Feifei Cai
>Assignee: Alan M. Carroll
>  Labels: crash, review, yahoo
> Fix For: 5.3.0
>
> Attachments: TS-2848.diff
>
>
> We deploy ATS on production hosts, and noticed crashes with the following 
> stack trace. This happens infrequently, about once a week or even less often. 
> It has crashed repeatedly over the last 2 months; however, the root cause has 
> not been found and we cannot reproduce the crash at will, only wait for it to 
> happen.
> {noformat}
> NOTE: Traffic Server received Sig 11: Segmentation fault
> /home/y/bin/traffic_server - STACK TRACE:
> /lib64/libpthread.so.0(+0x321e60f500)[0x2b69adf8f500]
> /home/y/bin/traffic_server(_ZN6HttpSM22release_server_sessionEb+0x35)[0x529eb5]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x2db)[0x5362bb]
> /home/y/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x3aa)[0x53537a]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x2b0)[0x52dbd0]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x1f2)[0x5361d2]
> /home/y/bin/traffic_server(_ZN6HttpSM16do_hostdb_lookupEv+0x282)[0x51e422]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0xbad)[0x536b8d]
> /home/y/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x3aa)[0x53537a]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x2b0)[0x52dbd0]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x1f2)[0x5361d2]
> /home/y/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x3aa)[0x53537a]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x2b0)[0x52dbd0]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x1f2)[0x5361d2]
> /home/y/bin/traffic_server(_ZN6HttpSM21state_cache_open_readEiPv+0xfe)[0x52ff8e]
> /home/y/bin/traffic_server(_ZN6HttpSM12main_handlerEiPv+0xd8)[0x533098]
> /home/y/bin/traffic_server(_ZN11HttpCacheSM21state_cache_open_readEiPv+0x1b2)[0x50bef2]
> /home/y/bin/traffic_server(_ZN7CacheVC8callcontEi+0x53)[0x5f0a93]
> /home/y/bin/traffic_server(_ZN7CacheVC17openReadStartHeadEiP5Event+0x7cf)[0x65934f]
> /home/y/bin/traffic_server(_ZN5Cache9open_readEP12ContinuationP7INK_MD5P7HTTPHdrP21CacheLookupHttpConfig13CacheFragTypePci+0x383)[0x656373]
> /home/y/bin/traffic_server(_ZN14CacheProcessor9open_readEP12ContinuationP3URLbP7HTTPHdrP21CacheLookupHttpConfigl13CacheFragType+0xad)[0x633a6d]
> /home/y/bin/traffic_server(_ZN11HttpCacheSM9open_readEP3URLP7HTTPHdrP21CacheLookupHttpConfigl+0x94)[0x50b944]
> /home/y/bin/traffic_server(_ZN6HttpSM24do_cache_lookup_and_readEv+0xf3)[0x51d893]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x722)[0x536702]
> /home/y/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x49d)[0x53546d]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x2b0)[0x52dbd0]
> /home/y/bin/traffic_server(_ZN6HttpSM18state_api_callbackEiPv+0x8b)[0x53328b]
> /home/y/bin/traffic_server(TSHttpTxnReenable+0x404)[0x4b9b14]
> /home/y/libexec64/trafficserver/header_filter.so(+0x2d5d)[0x2b69c3471d5d]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x114)[0x52da34]
> /home/y/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x85d)[0x53683d]
> /home/y/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x3aa)[0x53537a]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x2b0)[0x52dbd0]
> /home/y/bin/traffic_server(_ZN6HttpSM18state_api_callbackEiPv+0x8b)[0x53328b]
> /home/y/bin/traffic_server(TSHttpTxnReenable+0x404)[0x4b9b14]
> /home/y/libexec64/trafficserver/header_rewrite.so(+0x1288d)[0x2b69c36d]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x114)[0x52da34]
> /home/y/bin/traffic_server(_ZN6HttpSM18state_api_callbackEiPv+0x8b)[0x53328b]
> /home/y/bin/traffic_server(TSHttpTxnReenable+0x404)[0x4b9b14]
> /home/y/libexec64/trafficserver/header_filter.so(+0x2d5d)[0x2b69c3471d5d]
> /home/y/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x114)[0x52da34]
> /home/y/bin/traffic_server(_ZN6HttpSM33state_read_server_response_headerEiPv+0x398)[0x530828]
> /home/y/bin/traffic_server(_ZN6HttpSM12main_handlerEiPv+0xd8)[0x533098]
> /home/y/bin/traffic_server[0x68606b]
> /home/y/bin/traffic_server[0x688a14]
> /home/y/bin/traffic_server(_ZN10NetHandler12mainNetEventEiP5Event+0x1f2)[0x681582]
> /home/y/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x8f)[0x6a89bf]
> /home/y/bin/traffic_server(_ZN7EThread7executeEv+0x4a3)[0x6a93a3]
> /home/y/bin/traff

[jira] [Commented] (TS-1334) congestion control - observed issues

2015-03-16 Thread Phil Sorber (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363925#comment-14363925
 ] 

Phil Sorber commented on TS-1334:
-

[~sudheerv], [~amc], Thoughts on if we should commit or push out?

> congestion control - observed issues
> 
>
> Key: TS-1334
> URL: https://issues.apache.org/jira/browse/TS-1334
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 3.0.2
>Reporter: Aidan McGurn
>Assignee: Alan M. Carroll
>  Labels: review
> Fix For: 5.3.0
>
> Attachments: TS-1334.diff
>
>
> Hi,
> I have investigated the use of ATS congestion control, but I had some 
> observations. I can split these out if they are bugs which need separate 
> attention.
> (Queries are with ATS v3.0.2 as test code, assuming not much changed here for 
> v3.2.)
> • Is it feasible for a new Congestion hook to be added to the 
> architecture at some point, i.e. for these events:
> CONGESTION_EVENT_CONGESTED_ON_F
> CONGESTION_EVENT_CONGESTED_ON_M
> It would be desirable to send a hook event upwards to inform any plugins of a 
> congested site.
> • How is the congestion cache managed, in that I don't see it deleting 
> entries?
> In CongestionDB.cc, function remove_congested_entry - I set breakpoints here; 
> I congest, then I uncongest, but I never see this function called.
> Therefore does the cache grow and grow with old entries?
> The reason for checking this is I would also need to inform plugin land when 
> a site becomes UNCONGESTED, but I don't even see a HttpSM event for this. 
> (This is the biggest issue with CC for me.)
> • traffic_line -q doesn't appear to work, i.e. no congested stats are 
> returned;
> there is a Jira open for a long time on this without further response:
> https://issues.apache.org/jira/browse/TS-1221
> • Some other less important observations, like the parameters:
> live_os_conn_retries
> live_os_conn_timeout
> dead_os_conn_timeout
> dead_os_conn_retries
> appear to have no effect whatsoever, but not as important as previous points.
> • It doesn't look like the status response code can be customised.
> Maybe this is not supported much as an ATS feature? 
> Any pointers on any of these are appreciated, even to let me know if the 
> observations are correct and won't be fixed in coming releases…
> Thanks,
> /aidan



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-1807) shutdown on a write VIO to TSHttpConnect() doesn't propogate

2015-03-16 Thread Phil Sorber (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363934#comment-14363934
 ] 

Phil Sorber commented on TS-1807:
-

[~wbardwel], should I just push this out further?

> shutdown on a write VIO to TSHttpConnect() doesn't propogate
> 
>
> Key: TS-1807
> URL: https://issues.apache.org/jira/browse/TS-1807
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HTTP
>Reporter: William Bardwell
>  Labels: review
> Fix For: 5.3.0
>
> Attachments: TS-1807.diff
>
>
> In a plugin I am doing a TSHttpConnect() and then sending HTTP requests and 
> getting responses.  But when I try to do TSVIONBytesSet() and 
> TSVConnShutdown() on the write vio (due to the client side being done sending 
> requests) the write vio just sits there and never wakes up the other side, 
> and the response side doesn't try to close up until an inactivity timeout 
> happens.
> I think that PluginVC::do_io_shutdown() needs to do  
> other_side->read_state.vio.reenable(); when a shutdown for write shows up. 
> Then the other side wakes up and sees the EOF due to the shutdown.
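
A sketch of where that reenable would land (assumed shape of the method; untested):

{code}
void PluginVC::do_io_shutdown(ShutdownHowTo_t howto)
{
  if (howto == IO_SHUTDOWN_WRITE || howto == IO_SHUTDOWN_READWRITE) {
    write_state.shutdown = true;
    // Wake the other side's reader so it observes EOF immediately,
    // instead of idling until an inactivity timeout fires.
    other_side->read_state.vio.reenable();
  }
  // ... read-shutdown handling elided ...
}
{code}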



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3424) SSL error: SSL3_GET_RECORD:decryption failed or bad record mac

2015-03-16 Thread Brian Geffon (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363939#comment-14363939
 ] 

Brian Geffon commented on TS-3424:
--

I'll give this a shot and get back to you; I'll also try to add more debug 
logging to expedite the resolution of this.

> SSL error: SSL3_GET_RECORD:decryption failed or bad record mac
> --
>
> Key: TS-3424
> URL: https://issues.apache.org/jira/browse/TS-3424
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, SSL
>Reporter: Brian Geffon
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: ts-3424-2.diff, ts-3424-3.diff, ts-3424-for-52-2.diff, 
> ts-3424-for-52.diff, ts-3424.diff, undo-handshake-buffer-for-52.diff, 
> undo-handshake-buffer.diff
>
>
> Starting with 5.2.x we're seeing SSL_ERROR_SSL type errors in 
> {{ssl_read_from_net}}; when calling OpenSSL's {{ERR_error_string_n}} we see 
> the error is {{1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad 
> record mac}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3348) read while write config and range issues

2015-03-16 Thread Phil Sorber (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Sorber updated TS-3348:

Fix Version/s: (was: 5.3.0)
   6.0.0

> read while write config and range issues
> 
>
> Key: TS-3348
> URL: https://issues.apache.org/jira/browse/TS-3348
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: William Bardwell
>Assignee: William Bardwell
>  Labels: review
> Fix For: 6.0.0
>
> Attachments: TS-3348.diff
>
>
> We had a number of problems with the read-while-write logic.
> #1) you can't set background fill config options to keep background fill from 
> behaving badly because they are shared too much with read-while-write
> #2) logic around filling range requests out of partial cache entries is too 
> restrictive
> #3) issues around read_while_write not working if there is a transform 
> anywhere
> #4) some related config is not overridable
> So we think that our patch fixes all of these issues...mostly.
> (The background fill timeout doesn't get reinstated if a download switches 
> to read-while-write and then back.  The "Range is in cache" code doesn't seem 
> right for small things, or possibly for seeing the current fragment that is 
> only partially downloaded.)
> But we would like some review of this to see if we are doing anything 
> dangerous/not right/not helpful.
> Might also help TS-2761 and issues around range handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3348) read while write config and range issues

2015-03-16 Thread Phil Sorber (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363940#comment-14363940
 ] 

Phil Sorber commented on TS-3348:
-

This sounds potentially superseded by stuff that [~amc] is working on, so I am 
just going to move this out to 6.0.0.

> read while write config and range issues
> 
>
> Key: TS-3348
> URL: https://issues.apache.org/jira/browse/TS-3348
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: William Bardwell
>Assignee: William Bardwell
>  Labels: review
> Fix For: 6.0.0
>
> Attachments: TS-3348.diff
>
>
> We had a number of problems with the read-while-write logic.
> #1) you can't set background fill config options to keep background fill from 
> behaving badly because they are shared too much with read-while-write
> #2) logic around filling range requests out of partial cache entries is too 
> restrictive
> #3) issues around read_while_write not working if there is a transform 
> anywhere
> #4) some related config is not overridable
> So we think that our patch fixes all of these issues...mostly.
> (The background fill timeout doesn't get reinstated if a download switches 
> to read-while-write and then back.  The "Range is in cache" code doesn't seem 
> right for small things, or possibly for seeing the current fragment that is 
> only partially downloaded.)
> But we would like some review of this to see if we are doing anything 
> dangerous/not right/not helpful.
> Might also help TS-2761 and issues around range handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3312) KA timeout to origin does not seem to honor configurations

2015-03-16 Thread Dzmitry Markovich (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dzmitry Markovich updated TS-3312:
--
Attachment: (was: keep_alive_timeout.diff)

> KA timeout to origin does not seem to honor configurations
> --
>
> Key: TS-3312
> URL: https://issues.apache.org/jira/browse/TS-3312
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, HTTP
>Reporter: Leif Hedstrom
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
>
> Doing some basic testing, with the following settings:
> {code}
> CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 120
> CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
> {code}
> I see ATS timing out the origin sessions after 30sec, with a 
> {code}
> CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
> {code}
> What's also interesting, after I made a config change per Geffon's suggestion:
> {code}
> CONFIG proxy.config.http.origin_min_keep_alive_connections INT 10
> {code}
> I see the following in the diagnostic trace:
> {code}
> [Jan 21 14:19:19.416] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] [release 
> session] session placed into shared pool
> [Jan 21 14:19:49.558] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
> reseting timeout to maintain minimum number of connections
> [Jan 21 14:20:19.633] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
> reseting timeout to maintain minimum number of connections
> [Jan 21 14:20:19.670] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_pool] session 0x1cc5aa0 received io notice [VC_EVENT_EOS]
> {code}
> So, not only is it resetting the timeout twice, it also gets a VC_EVENT_EOS. 
> I first thought it was the origin that closed the connection, but from what I 
> could tell, the timeout on the origin was set to 60s.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3312) KA timeout to origin does not seem to honor configurations

2015-03-16 Thread Dzmitry Markovich (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364107#comment-14364107
 ] 

Dzmitry Markovich commented on TS-3312:
---

I did some digging around the connection code and did not find anything better 
than exposing the timeout value as a parameter when we release the keep-alive 
connection to the pool. I made it backward compatible, so the default behavior 
still works as before. [~amc] - could you please take a look?
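
Something like the following shape, as a sketch (hypothetical signature; the actual change is in keep_alive2.diff):

{code}
// A zero timeout keeps today's behavior; a nonzero value overrides the
// inactivity timeout applied while the session sits in the shared pool.
HSMresult_t HttpSessionManager::release_session(HttpServerSession *to_release,
                                                ink_hrtime ka_timeout = 0);
{code}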

> KA timeout to origin does not seem to honor configurations
> --
>
> Key: TS-3312
> URL: https://issues.apache.org/jira/browse/TS-3312
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, HTTP
>Reporter: Leif Hedstrom
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: keep_alive2.diff
>
>
> Doing some basic testing, with the following settings:
> {code}
> CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 120
> CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
> {code}
> I see ATS timing out the origin sessions after 30sec, with a 
> {code}
> CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
> {code}
> What's also interesting, after I made a config change per Geffon's suggestion:
> {code}
> CONFIG proxy.config.http.origin_min_keep_alive_connections INT 10
> {code}
> I see the following in the diagnostic trace:
> {code}
> [Jan 21 14:19:19.416] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] [release 
> session] session placed into shared pool
> [Jan 21 14:19:49.558] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
> reseting timeout to maintain minimum number of connections
> [Jan 21 14:20:19.633] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
> reseting timeout to maintain minimum number of connections
> [Jan 21 14:20:19.670] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_pool] session 0x1cc5aa0 received io notice [VC_EVENT_EOS]
> {code}
> So, not only is it resetting the timeout twice, it also gets a VC_EVENT_EOS. 
> I first thought it was the origin that closed the connection, but from what I 
> could tell, the timeout on the origin was set to 60s.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3312) KA timeout to origin does not seem to honor configurations

2015-03-16 Thread Dzmitry Markovich (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dzmitry Markovich updated TS-3312:
--
Attachment: keep_alive2.diff

> KA timeout to origin does not seem to honor configurations
> --
>
> Key: TS-3312
> URL: https://issues.apache.org/jira/browse/TS-3312
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, HTTP
>Reporter: Leif Hedstrom
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: keep_alive2.diff
>
>
> Doing some basic testing, with the following settings:
> {code}
> CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 120
> CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
> {code}
> I see ATS timing out the origin sessions after 30sec, with a 
> {code}
> CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
> {code}
> What's also interesting, after I made a config change per Geffon's suggestion:
> {code}
> CONFIG proxy.config.http.origin_min_keep_alive_connections INT 10
> {code}
> I see the following in the diagnostic trace:
> {code}
> [Jan 21 14:19:19.416] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] [release 
> session] session placed into shared pool
> [Jan 21 14:19:49.558] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
> reseting timeout to maintain minimum number of connections
> [Jan 21 14:20:19.633] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
> reseting timeout to maintain minimum number of connections
> [Jan 21 14:20:19.670] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_pool] session 0x1cc5aa0 received io notice [VC_EVENT_EOS]
> {code}
> So, not only is it resetting the timeout twice, it also gets a VC_EVENT_EOS. 
> I first thought it was the origin that closed the connection, but from what I 
> could tell, the timeout on the origin was set to 60s.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (TS-3312) KA timeout to origin does not seem to honor configurations

2015-03-16 Thread Dzmitry Markovich (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364107#comment-14364107
 ] 

Dzmitry Markovich edited comment on TS-3312 at 3/16/15 10:38 PM:
-

I did some digging around the connection code and did not find anything better 
than exposing the timeout value as a parameter when we release the keep-alive 
connection to the pool. I made it backward compatible, so the default behavior 
still works as before. [~amc] - could you please take a look?


was (Author: dmich):
I did some learning around a connections code and did not find anything better 
than exposing timeout value as a parameter while we release the keep alive 
connection to the pool. I made it backward compatible, so default behavior 
still works as before. [~amc] - could you please take a look?

> KA timeout to origin does not seem to honor configurations
> --
>
> Key: TS-3312
> URL: https://issues.apache.org/jira/browse/TS-3312
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, HTTP
>Reporter: Leif Hedstrom
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: keep_alive2.diff
>
>
> Doing some basic testing, with the following settings:
> {code}
> CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 120
> CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
> {code}
> I see ATS timing out the origin sessions after 30sec, with a 
> {code}
> CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
> {code}
> What's also interesting, after I made a config change per Geffon's suggestion:
> {code}
> CONFIG proxy.config.http.origin_min_keep_alive_connections INT 10
> {code}
> I see the following in the diagnostic trace:
> {code}
> [Jan 21 14:19:19.416] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] [release 
> session] session placed into shared pool
> [Jan 21 14:19:49.558] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
> reseting timeout to maintain minimum number of connections
> [Jan 21 14:20:19.633] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
> reseting timeout to maintain minimum number of connections
> [Jan 21 14:20:19.670] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_pool] session 0x1cc5aa0 received io notice [VC_EVENT_EOS]
> {code}
> So, not only is it resetting the timeout twice, it also gets a VC_EVENT_EOS. 
> I first thought it was the origin that closed the connection, but from what I 
> could tell, the timeout on the origin was set to 60s.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3216) Add HPKP (Public Key Pinning Extension for HTTP) support

2015-03-16 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach updated TS-3216:

Fix Version/s: (was: 5.3.0)
   6.0.0

> Add HPKP (Public Key Pinning Extension for HTTP) support
> 
>
> Key: TS-3216
> URL: https://issues.apache.org/jira/browse/TS-3216
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: SSL
>Reporter: Masaori Koshiba
>Assignee: James Peach
>  Labels: review
> Fix For: 6.0.0
>
> Attachments: hpkp-001.patch, hpkp-002.patch
>
>
> Add "Public Key Pinning Extension for HTTP" Support in Traffic Server.
> Public Key Pinning Extension for HTTP (draft-ietf-websec-key-pinning-21)
> - https://tools.ietf.org/html/draft-ietf-websec-key-pinning-21



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3216) Add HPKP (Public Key Pinning Extension for HTTP) support

2015-03-16 Thread James Peach (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364147#comment-14364147
 ] 

James Peach commented on TS-3216:
-

I don't like this approach, for a number of reasons:

- It's based on {{ssl_multicert.config}} configuration, so it is not consistent 
with HSTS, which is based on {{records.config}}.

- It assumes that there is only 1 backup pin, that the backup pin is contained 
in a CSR, and that the CSR is available to ATS. All of these assumptions seem 
shaky to me.

- There are many HPKP options missing (e.g., {{Public-Key-Pins-Report-Only}}, 
{{report-uri}}; see the example header below) and it's not clear to me that 
configuring this in {{ssl_multicert.config}} would be a good approach.

- I really would like to avoid adding more knobs to {{ssl_multicert.config}}, 
since it is way too complex already.
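
For reference, a reasonably complete pin header per the draft exercises several 
of those missing options (placeholder pin values):

{code}
Public-Key-Pins: pin-sha256="<base64 SPKI hash, primary>";
                 pin-sha256="<base64 SPKI hash, backup>";
                 max-age=2592000; includeSubDomains;
                 report-uri="https://example.com/hpkp-report"
{code}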

> Add HPKP (Public Key Pinning Extension for HTTP) support
> 
>
> Key: TS-3216
> URL: https://issues.apache.org/jira/browse/TS-3216
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: SSL
>Reporter: Masaori Koshiba
>Assignee: James Peach
>  Labels: review
> Fix For: 5.3.0
>
> Attachments: hpkp-001.patch, hpkp-002.patch
>
>
> Add "Public Key Pinning Extension for HTTP" Support in Traffic Server.
> Public Key Pinning Extension for HTTP (draft-ietf-websec-key-pinning-21)
> - https://tools.ietf.org/html/draft-ietf-websec-key-pinning-21



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3036) Add logging field to define the cache medium used to serve a HIT

2015-03-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364406#comment-14364406
 ] 

ASF GitHub Bot commented on TS-3036:


Github user jacksontj commented on the pull request:

https://github.com/apache/trafficserver/pull/104#issuecomment-82027946
  
@zwoop 


> Add logging field to define the cache medium used to serve a HIT
> 
>
> Key: TS-3036
> URL: https://issues.apache.org/jira/browse/TS-3036
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Logging
>Reporter: Ryan Frantz
>Assignee: Leif Hedstrom
>  Labels: review
> Fix For: 5.3.0
>
>
> I want to be able to differentiate between RAM cache HITs and disk cache 
> HITs. Add a logging field to inform the administrator if the HIT came from 
> RAM, at least.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3446) custom logging for Cookie header

2015-03-16 Thread Scott Beardsley (JIRA)
Scott Beardsley created TS-3446:
---

 Summary: custom logging for Cookie header
 Key: TS-3446
 URL: https://issues.apache.org/jira/browse/TS-3446
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Reporter: Scott Beardsley


I have a use case that requires logging a specific cookie in the UA request. 
The only way to do this today is to log the entire Cookie request header and 
then parse out the cookie name/value pair that I am interested in after the 
fact. The problem with that approach is that some cookie data is sensitive and 
must not exist in logs. Plus, logging the entire Cookie header is just wasteful.
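
For context, the closest thing available today is the generic request-header 
field in a custom log format, which captures the whole header (a sketch of the 
current workaround in logs_xml.config, not the requested feature):

{code}
<LogFormat>
  <Name = "cookie_fmt"/>
  <Format = "%<cqtq> %<cquuc> %<{Cookie}cqh>"/>
</LogFormat>
{code}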



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3446) custom logging for Cookie header

2015-03-16 Thread Scott Beardsley (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Beardsley updated TS-3446:

Priority: Minor  (was: Major)

> custom logging for Cookie header
> 
>
> Key: TS-3446
> URL: https://issues.apache.org/jira/browse/TS-3446
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Core
>Reporter: Scott Beardsley
>Priority: Minor
>
> I have a use case that requires logging a specific cookie in the UA request. 
> The only way to do this today is to log the entire Cookie request header and 
> then parse out the cookie name/value pair that I am interested in after the 
> fact. The problem with that approach is that some cookie data is sensitive and 
> must not exist in logs. Plus, logging the entire Cookie header is just wasteful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3447) Failing to remap POST requests if buffer_upload plugin is enabled and Host header contains port number

2015-03-16 Thread Ethan Lai (JIRA)
Ethan Lai created TS-3447:
-

 Summary: Failing to remap POST requests if buffer_upload plugin is 
enabled and Host header contains port number
 Key: TS-3447
 URL: https://issues.apache.org/jira/browse/TS-3447
 Project: Traffic Server
  Issue Type: Bug
  Components: Plugins
Reporter: Ethan Lai


We've experienced POST request mapping issues in some situations while the 
buffer_upload plugin is enabled.
After cross-referencing, we found that a Host header with a port value cannot 
be mapped correctly.

Sample remap.config:
{quote}
  map http://www.example.com/   http://127.0.0.1:8001/
  map http://www.example.com:8080/  http://127.0.0.1:8001/
{quote}

Sample plugin.config
{quote}
  buffer_upload.so conf/trafficserver/buffer_upload/buffer_upload.config
{quote}

Sample buffer_upload.config
{quote}
  use_disk_buffer 1
  convert_url 0
  chunk_size  1024
  url_list_file   conf/trafficserver/buffer_upload/url_list.config
  base_dirvar/buffer_upload_tmp
  subdir_num  100
  thread_num  10
  mem_buffer_size 51000
{quote}

Sample buffer upload url_list.config
{quote}
  http://www.example.com/upload/upload.php
{quote}

Sample cmds & responses
{quote}
  curl http://www.example.com/test.php -X POST -d 'blah'
  > map ok
  curl http://www.example.com:8080/test.php
  > map ok
  curl http://www.example.com:8080/test.php -X POST -d 'blah'
  > 404 Not Found! (Not Found on Accelerator)
{quote}


I've tried to correct this and will follow up with a pull request shortly, thanks
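
The idea of the fix, as a sketch (assumed shape; the actual change is in the 
pull request): when copying the Host header into the request URL, split out an 
explicit port instead of leaving it embedded in the host string.

{code}
/* value: the plugin's NUL-terminated copy of the Host header,
   e.g. "www.example.com:8080" */
const char *colon = strchr(value, ':');
if (colon != NULL) {
  TSUrlHostSet(bufp, url_loc, value, (int)(colon - value));
  TSUrlPortSet(bufp, url_loc, atoi(colon + 1)); /* explicit port from Host */
} else {
  TSUrlHostSet(bufp, url_loc, value, (int)strlen(value));
}
{code}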



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3447) Failing to remap POST requests if buffer_upload plugin is enabled and Host header contains port number

2015-03-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364446#comment-14364446
 ] 

ASF GitHub Bot commented on TS-3447:


GitHub user yzlai opened a pull request:

https://github.com/apache/trafficserver/pull/181

TS-3447 [buffer_upload plugin] set UrlPort if port number present in Host 
header


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yzlai/trafficserver master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafficserver/pull/181.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #181


commit 3ad5a4c1653f2c68472a959cfe9eff40f91356a7
Author: Ethan Lai 
Date:   2015-03-17T02:28:44Z

TS-3447 [buffer_upload plugin] set UrlPort if port number present in Host 
header




> Failing to remap POST requests if buffer_upload plugin is enabled and Host 
> header contains port number
> --
>
> Key: TS-3447
> URL: https://issues.apache.org/jira/browse/TS-3447
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Plugins
>Reporter: Ethan Lai
>
> We've experienced POST request mapping issues in some situations while the 
> buffer_upload plugin is enabled.
> After cross-referencing, we found that a Host header with a port value cannot 
> be mapped correctly.
> Sample remap.config:
> {quote}
>   map http://www.example.com/   http://127.0.0.1:8001/
>   map http://www.example.com:8080/  http://127.0.0.1:8001/
> {quote}
> Sample plugin.config
> {quote}
>   buffer_upload.so conf/trafficserver/buffer_upload/buffer_upload.config
> {quote}
> Sample buffer_upload.config
> {quote}
>   use_disk_buffer 1
>   convert_url 0
>   chunk_size  1024
>   url_list_file   conf/trafficserver/buffer_upload/url_list.config
>   base_dirvar/buffer_upload_tmp
>   subdir_num  100
>   thread_num  10
>   mem_buffer_size 51000
> {quote}
> Sample buffer upload url_list.config
> {quote}
>   http://www.example.com/upload/upload.php
> {quote}
> Sample cmds & responses
> {quote}
>   curl http://www.example.com/test.php -X POST -d 'blah'
>   > map ok
>   curl http://www.example.com:8080/test.php
>   > map ok
>   curl http://www.example.com:8080/test.php -X POST -d 'blah'
>   > 404 Not Found! (Not Found on Accelerator)
> {quote}
> I've tried to correct this and will follow up with a pull request shortly, thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3447) Failing to remap POST requests if buffer_upload plugin is enabled and Host header contains port number

2015-03-16 Thread Ethan Lai (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364447#comment-14364447
 ] 

Ethan Lai commented on TS-3447:
---

https://github.com/apache/trafficserver/pull/181

> Failing to remap POST requests if buffer_upload plugin is enabled and Host 
> header contains port number
> --
>
> Key: TS-3447
> URL: https://issues.apache.org/jira/browse/TS-3447
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Plugins
>Reporter: Ethan Lai
>
> We've experienced POST request mapping issues in some situations while the 
> buffer_upload plugin is enabled.
> After cross-referencing, we found that a Host header with a port value cannot 
> be mapped correctly.
> Sample remap.config:
> {quote}
>   map http://www.example.com/   http://127.0.0.1:8001/
>   map http://www.example.com:8080/  http://127.0.0.1:8001/
> {quote}
> Sample plugin.config
> {quote}
>   buffer_upload.so conf/trafficserver/buffer_upload/buffer_upload.config
> {quote}
> Sample buffer_upload.config
> {quote}
>   use_disk_buffer 1
>   convert_url 0
>   chunk_size  1024
>   url_list_file   conf/trafficserver/buffer_upload/url_list.config
>   base_dirvar/buffer_upload_tmp
>   subdir_num  100
>   thread_num  10
>   mem_buffer_size 51000
> {quote}
> Sample buffer upload url_list.config
> {quote}
>   http://www.example.com/upload/upload.php
> {quote}
> Sample cmds & responses
> {quote}
>   curl http://www.example.com/test.php -X POST -d 'blah'
>   > map ok
>   curl http://www.example.com:8080/test.php
>   > map ok
>   curl http://www.example.com:8080/test.php -X POST -d 'blah'
>   > 404 Not Found! (Not Found on Accelerator)
> {quote}
> I've tried to correct this and will follow up with a pull request shortly, thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (TS-3447) Failing to remap POST requests if buffer_upload plugin is enabled and Host header contains port number

2015-03-16 Thread Ethan Lai (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Lai updated TS-3447:
--
Comment: was deleted

(was: https://github.com/apache/trafficserver/pull/181)

> Failing to remap POST requests if buffer_upload plugin is enabled and Host 
> header contains port number
> --
>
> Key: TS-3447
> URL: https://issues.apache.org/jira/browse/TS-3447
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Plugins
>Reporter: Ethan Lai
>
> We've experienced POST request mapping issues in some situations while the 
> buffer_upload plugin is enabled.
> After cross-referencing, we found that a Host header with a port value cannot 
> be mapped correctly.
> Sample remap.config:
> {quote}
>   map http://www.example.com/   http://127.0.0.1:8001/
>   map http://www.example.com:8080/  http://127.0.0.1:8001/
> {quote}
> Sample plugin.config
> {quote}
>   buffer_upload.so conf/trafficserver/buffer_upload/buffer_upload.config
> {quote}
> Sample buffer_upload.config
> {quote}
>   use_disk_buffer 1
>   convert_url 0
>   chunk_size  1024
>   url_list_file   conf/trafficserver/buffer_upload/url_list.config
>   base_dirvar/buffer_upload_tmp
>   subdir_num  100
>   thread_num  10
>   mem_buffer_size 51000
> {quote}
> Sample buffer upload url_list.config
> {quote}
>   http://www.example.com/upload/upload.php
> {quote}
> Sample cmds & responses
> {quote}
>   curl http://www.example.com/test.php -X POST -d 'blah'
>   > map ok
>   curl http://www.example.com:8080/test.php
>   > map ok
>   curl http://www.example.com:8080/test.php -X POST -d 'blah'
>   > 404 Not Found! (Not Found on Accelerator)
> {quote}
> I've tried to correct this and will update with a pull request shortly, thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3448) Add an "internal_request" Mod to ControlMatcher (a boolean value)

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3448:
--
Fix Version/s: 5.3.0

> Add an "internal_request" Mod to ControlMatcher (a boolean value)
> -
>
> Key: TS-3448
> URL: https://issues.apache.org/jira/browse/TS-3448
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Configuration, Core
>Reporter: Leif Hedstrom
> Fix For: 5.3.0
>
>
> This allows, for example, excluding parent.config rules for requests that are 
> internal, or applying different cache.config rules to internal requests. 
> Example usage could be:
> {code}
> dest_domain=.  parent="proxy1.example.com:8080; proxy2.example.com:8080" 
> internal_request=false
> {code}
> This would allow this rule to trigger only if the request is not an internal 
> (plugin) request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3448) Add an "internal_request" Mod to ControlMatcher (a boolean value)

2015-03-16 Thread Leif Hedstrom (JIRA)
Leif Hedstrom created TS-3448:
-

 Summary: Add an "internal_request" Mod to ControlMatcher (a 
boolean value)
 Key: TS-3448
 URL: https://issues.apache.org/jira/browse/TS-3448
 Project: Traffic Server
  Issue Type: New Feature
  Components: Configuration, Core
Reporter: Leif Hedstrom


This allows, for example, excluding parent.config rules for requests that are 
internal, or applying different cache.config rules to internal requests. 
Example usage could be:
{code}
dest_domain=.  parent="proxy1.example.com:8080; proxy2.example.com:8080" 
internal_request=false
{code}

This would allow this rule to trigger only if the request is not an internal 
(plugin) request.
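
Assuming the new mod would be honored by cache.config as well, a hypothetical 
rule (the {{internal_request}} tag is the proposed feature, and {{never-cache}} 
is just one illustrative action) could look like:
{code}
dest_domain=example.com  internal_request=true  action=never-cache
{code}
With a rule like this, responses to internal (plugin-originated) requests for 
example.com would bypass the cache, while regular client traffic keeps the 
default behavior.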



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TS-3448) Add an "internal_request" Mod to ControlMatcher (a boolean value)

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom reassigned TS-3448:
-

Assignee: Leif Hedstrom

> Add an "internal_request" Mod to ControlMatcher (a boolean value)
> -
>
> Key: TS-3448
> URL: https://issues.apache.org/jira/browse/TS-3448
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Configuration, Core
>Reporter: Leif Hedstrom
>Assignee: Leif Hedstrom
> Fix For: 5.3.0
>
>
> This allows, for example, excluding parent.config rules for requests that are 
> internal, or applying different cache.config rules to internal requests. 
> Example usage could be:
> {code}
> dest_domain=.  parent="proxy1.example.com:8080; proxy2.example.com:8080" 
> internal_request=false
> {code}
> This would allow this rule to trigger only if the request is not an internal 
> (plugin) request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3216) Add HPKP (Public Key Pinning Extension for HTTP) support

2015-03-16 Thread Masaori Koshiba (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364576#comment-14364576
 ] 

Masaori Koshiba commented on TS-3216:
-

[~jpe...@apache.org], thanks for the review.

I agree that I should avoid making {{ssl_multicert.config}} more complex.

> It assumes that there is only 1 backup pin, the backup pin is contained in a 
> CSR,
> and that the CSR is available to ATS. All of these assumptions seem shaky to 
> me.
Do you mean that even if there are 2 cert settings in {{ssl_multicert.config}}, 
only one backup pin is enough?

At first, I thought of adding the HPKP setting to {{records.config}}, like HSTS.
But, AFAIU, when we have 2 certs, each cert needs a different CSR to generate 
its backup pin. When the current cert expires, the current pin and the backup 
pin are still cached in the browser, so we have to generate the new cert from 
the CSR that was used to generate the backup pin.
This is why I added the HPKP settings to {{ssl_multicert.config}}.
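
For reference, the pins themselves can be computed with stock openssl, one 
from the deployed certificate and one from the backup CSR (the file names 
below are placeholders):
{code}
# Pin for the currently deployed certificate:
openssl x509 -in current_cert.pem -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary | base64

# Backup pin, from the CSR that will be used to issue the next certificate:
openssl req -in backup.csr -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary | base64
{code}
The two base64 digests then go into the response header, e.g. 
{{Public-Key-Pins: pin-sha256="<primary>"; pin-sha256="<backup>"; max-age=2592000}}.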


> Add HPKP (Public Key Pinning Extension for HTTP) support
> 
>
> Key: TS-3216
> URL: https://issues.apache.org/jira/browse/TS-3216
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: SSL
>Reporter: Masaori Koshiba
>Assignee: James Peach
>  Labels: review
> Fix For: 6.0.0
>
> Attachments: hpkp-001.patch, hpkp-002.patch
>
>
> Add "Public Key Pinning Extension for HTTP" Support in Traffic Server.
> Public Key Pinning Extension for HTTP (draft-ietf-websec-key-pinning-21)
> - https://tools.ietf.org/html/draft-ietf-websec-key-pinning-21



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3446) custom logging for Cookie header

2015-03-16 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364608#comment-14364608
 ] 

Leif Hedstrom commented on TS-3446:
---

Can you use the "trim" features of the custom logs?
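
A sketch of what that could look like in {{logs_xml.config}}, assuming the 
field-slice ("trim") syntax can be applied to the {{cqh}} header field; the 
{{[0:64]}} slice and the format name here are illustrative, not verified:
{code}
<LogFormat>
  <Name = "cookie_slice"/>
  <Format = "%<chi> %<cqu> %<{Cookie}cqh[0:64]>"/>
</LogFormat>
{code}
That would trim the logged Cookie header to its first 64 bytes, but it still 
would not extract a single named cookie, which is what this issue asks for.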

> custom logging for Cookie header
> 
>
> Key: TS-3446
> URL: https://issues.apache.org/jira/browse/TS-3446
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Core
>Reporter: Scott Beardsley
>Priority: Minor
>
> I have a use case that requires logging a specific cookie in the UA request. 
> The only way to do this today is to log the entire Cookie request header and 
> then parse out the cookie name/value pair that I am interested in after the 
> fact. The problem with that approach is that some cookie data is sensitive 
> and must not exist in logs. Plus, logging the entire Cookie header is just 
> wasteful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3447) Failing to remap POST requests if buffer_upload plugin is enabled and Host header contains port number

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3447:
--
Fix Version/s: 6.0.0

> Failing to remap POST requests if buffer_upload plugin is enabled and Host 
> header contains port number
> --
>
> Key: TS-3447
> URL: https://issues.apache.org/jira/browse/TS-3447
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Plugins
>Reporter: Ethan Lai
> Fix For: 6.0.0
>
>
> We've experienced POST request mapping issues in some situations while the 
> buffer_upload plugin is enabled.
> After cross-referencing, we found that a Host header with a port value cannot 
> be mapped correctly.
> Sample remap.config:
> {quote}
>   map http://www.example.com/   http://127.0.0.1:8001/
>   map http://www.example.com:8080/  http://127.0.0.1:8001/
> {quote}
> Sample plugin.config
> {quote}
>   buffer_upload.so conf/trafficserver/buffer_upload/buffer_upload.config
> {quote}
> Sample buffer_upload.config
> {quote}
>   use_disk_buffer 1
>   convert_url 0
>   chunk_size  1024
>   url_list_file   conf/trafficserver/buffer_upload/url_list.config
>   base_dirvar/buffer_upload_tmp
>   subdir_num  100
>   thread_num  10
>   mem_buffer_size 51000
> {quote}
> Sample buffer upload url_list.config
> {quote}
>   http://www.example.com/upload/upload.php
> {quote}
> Sample cmds & responses
> {quote}
>   curl http://www.example.com/test.php -X POST -d 'blah'
>   > map ok
>   curl http://www.example.com:8080/test.php
>   > map ok
>   curl http://www.example.com:8080/test.php -X POST -d 'blah'
>   > 404 Not Found! (Not Found on Accelerator)
> {quote}
> I've tried to correct this and will update with a pull request shortly, thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3444) Coverity fixes for v6.0.0 by zwoop

2015-03-16 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3444:
--
Fix Version/s: 6.0.0

> Coverity fixes for v6.0.0 by zwoop
> --
>
> Key: TS-3444
> URL: https://issues.apache.org/jira/browse/TS-3444
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: Leif Hedstrom
> Fix For: 6.0.0
>
>
> Starting a new Jira for Coverity fixes for me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (TS-3216) Add HPKP (Public Key Pinning Extension for HTTP) support

2015-03-16 Thread Masaori Koshiba (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364576#comment-14364576
 ] 

Masaori Koshiba edited comment on TS-3216 at 3/17/15 6:08 AM:
--

[~jpe...@apache.org], thanks for the review.

I agree that I should avoid making {{ssl_multicert.config}} more complex.

{quote}
It assumes that there is only 1 backup pin, the backup pin is contained in a 
CSR, and that the CSR is available to ATS. All of these assumptions seem shaky 
to me.
{quote}
Do you mean that even if there are 2 cert settings in {{ssl_multicert.config}}, 
only one backup pin is enough?

At first, I thought of adding the HPKP setting to {{records.config}}, like HSTS.
But, AFAIU, when we have 2 certs, each cert needs a different CSR to generate 
its backup pin. When the current cert expires, the current pin and the backup 
pin are still cached in the browser, so we have to generate the new cert from 
the CSR that was used to generate the backup pin.
This is why I added the HPKP settings to {{ssl_multicert.config}}.



was (Author: masaori):
[~jpe...@apache.org], thanks for the review.

I agree that I should avoid making {{ssl_multicert.config}} more complex.

> It assumes that there is only 1 backup pin, the backup pin is contained in a 
> CSR,
> and that the CSR is available to ATS. All of these assumptions seem shaky to 
> me.
Do you mean that even if there are 2 cert settings in {{ssl_multicert.config}}, 
only one backup pin is enough?

At first, I thought of adding the HPKP setting to {{records.config}}, like HSTS.
But, AFAIU, when we have 2 certs, each cert needs a different CSR to generate 
its backup pin. When the current cert expires, the current pin and the backup 
pin are still cached in the browser, so we have to generate the new cert from 
the CSR that was used to generate the backup pin.
This is why I added the HPKP settings to {{ssl_multicert.config}}.


> Add HPKP (Public Key Pinning Extension for HTTP) support
> 
>
> Key: TS-3216
> URL: https://issues.apache.org/jira/browse/TS-3216
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: SSL
>Reporter: Masaori Koshiba
>Assignee: James Peach
>  Labels: review
> Fix For: 6.0.0
>
> Attachments: hpkp-001.patch, hpkp-002.patch
>
>
> Add "Public Key Pinning Extension for HTTP" Support in Traffic Server.
> Public Key Pinning Extension for HTTP (draft-ietf-websec-key-pinning-21)
> - https://tools.ietf.org/html/draft-ietf-websec-key-pinning-21



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)