[jira] [Updated] (TS-3156) Mutex[Try]Lock bool() operator change and unused API removal
[ https://issues.apache.org/jira/browse/TS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryo Okubo updated TS-3156: -- Attachment: fix-MutexLock.patch The cause of the above issue is probably a call to pthread_mutex_lock() on an already-destroyed mutex. If a ProxyMutex* variable is passed to MUTEX_LOCK(), the mutex is destroyed when leaving that scope. For example, pthread_mutex_destroy() is called in MakeHttpProxyAcceptor():
{noformat}
(gdb) bt
#0  0x77830e70 in pthread_mutex_destroy () from /lib64/libpthread.so.0
#1  0x004e03da in ink_mutex_destroy (m=0xac7890) at ../lib/ts/ink_mutex.h:87
#2  0x004e15c2 in ProxyMutex::free (this=0xac7880) at ../iocore/eventsystem/I_Lock.h:543
#3  0x004e2f39 in Ptr<ProxyMutex>::~Ptr (this=0x7fffe3b0, __in_chrg=<value optimized out>) at ../lib/ts/Ptr.h:393
#4  0x00520f17 in MutexLock::~MutexLock (this=0x7fffe3b0, __in_chrg=<value optimized out>) at ../iocore/eventsystem/I_Lock.h:465
#5  0x005cea60 in MakeHttpProxyAcceptor (acceptor=..., port=..., nthreads=1) at HttpProxyServerMain.cc:230
#6  0x005cec45 in init_HttpProxyServer (n_accept_threads=1) at HttpProxyServerMain.cc:270
#7  0x0052aded in main (argv=0x7fffe7a8) at Main.cc:1554
{noformat}
I tried replacing ProxyMutex* with Ptr<ProxyMutex>. It seems to work fine. @Powell, what do you think about fix-MutexLock.patch? Mutex[Try]Lock bool() operator change and unused API removal Key: TS-3156 URL: https://issues.apache.org/jira/browse/TS-3156 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Powell Molleti Assignee: James Peach Priority: Minor Labels: review Fix For: 5.2.0 Attachments: MutexLock-ats.patch, MutexLock-ats.patch, fix-MutexLock.patch Removed the unused constructor in MutexLock along with the set_and_take() method; had to change FORCE_PLUGIN_MUTEX() for that. Removed the release() method. Replaced the default bool and ! operators on both MutexLock and MutexTryLock with an is_locked() API, changing if (lock) to if (lock.is_locked()) across the code base. Ran make test; will be performing more system testing. Posted early for comments / feedback. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-2959) Compiler warnings from gcc 4.9.1
[ https://issues.apache.org/jira/browse/TS-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Susan Hinrichs updated TS-2959: --- Attachment: ts-2959.diff Compiler warnings from gcc 4.9.1 Key: TS-2959 URL: https://issues.apache.org/jira/browse/TS-2959 Project: Traffic Server Issue Type: Bug Components: Core, DNS Reporter: Leif Hedstrom Assignee: Susan Hinrichs Fix For: 5.2.0 Attachments: ts-2959.diff, ts-2959.patch We get:
{code}
In file included from ../../iocore/hostdb/P_HostDB.h:47:0, from ../../proxy/Main.cc:63:
../../iocore/hostdb/P_MultiCache.h: In member function ‘void MultiCache<C>::rebuild_element(int, char*, RebuildMC&) [with C = HostDBInfo]’:
../../iocore/hostdb/P_MultiCache.h:468:23: error: array subscript is above array bounds [-Werror=array-bounds] char *offset = data + level_offset[level] + bucketsize[level] * bucket; ^
../../iocore/hostdb/P_MultiCache.h:468:65: error: array subscript is above array bounds [-Werror=array-bounds] char *offset = data + level_offset[level] + bucketsize[level] * bucket; ^
../../iocore/hostdb/P_MultiCache.h:487:29: error: array subscript is above array bounds [-Werror=array-bounds] for (block = b; block < b + elements[level]; block++) { ^
../../iocore/hostdb/P_MultiCache.h:509:39: error: array subscript is above array bounds [-Werror=array-bounds] if (hits > ((max_hits / 2) + 1) * elements[level]) ^
../../iocore/hostdb/P_MultiCache.h:511:33: error: array subscript is above array bounds [-Werror=array-bounds] for (block = b; block < b + elements[level]; block++) { ^
../../iocore/hostdb/P_MultiCache.h:468:23: error: array subscript is above array bounds [-Werror=array-bounds] char *offset = data + level_offset[level] + bucketsize[level] * bucket; ^
../../iocore/hostdb/P_MultiCache.h:468:65: error: array subscript is above array bounds [-Werror=array-bounds] char *offset = data + level_offset[level] + bucketsize[level] * bucket; ^
../../iocore/hostdb/P_MultiCache.h:487:29: error: array subscript is above array bounds [-Werror=array-bounds] for (block = b; block < b + elements[level]; block++) { ^
../../iocore/hostdb/P_MultiCache.h:509:39: error: array subscript is above array bounds [-Werror=array-bounds] if (hits > ((max_hits / 2) + 1) * elements[level]) ^
../../iocore/hostdb/P_MultiCache.h:511:33: error: array subscript is above array bounds [-Werror=array-bounds] for (block = b; block < b + elements[level]; block++) { ^
../../iocore/hostdb/P_MultiCache.h:552:31: error: array subscript is above array bounds [-Werror=array-bounds] for (block = b; block < b + elements[level]; block++) { ^
../../iocore/hostdb/P_MultiCache.h:558:31: error: array subscript is above array bounds [-Werror=array-bounds] for (block = b; block < b + elements[level]; block++) ^
../../iocore/hostdb/P_MultiCache.h:552:31: error: array subscript is above array bounds [-Werror=array-bounds] for (block = b; block < b + elements[level]; block++) { ^
../../iocore/hostdb/P_MultiCache.h:558:31: error: array subscript is above array bounds [-Werror=array-bounds] for (block = b; block < b + elements[level]; block++) ^
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-2959) Compiler warnings from gcc 4.9.1
[ https://issues.apache.org/jira/browse/TS-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204880#comment-14204880 ] ASF GitHub Bot commented on TS-2959: GitHub user shinrich opened a pull request: https://github.com/apache/trafficserver/pull/137 TS-2959 Fix gcc 4.9.2 compiler warning. You can merge this pull request into a Git repository by running: $ git pull https://github.com/shinrich/trafficserver ts-2959 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/trafficserver/pull/137.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #137 commit 91d67d5358e2892fa2690fb5106da0b19503dcbb Author: shinrich shinr...@network-geographics.com Date: 2014-11-10T15:20:28Z TS-2959 Fix gcc 4.9.2 compiler warning. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (TS-3182) collapsed_connection plugin
Faysal Banna created TS-3182: Summary: collapsed_connection plugin Key: TS-3182 URL: https://issues.apache.org/jira/browse/TS-3182 Project: Traffic Server Issue Type: Bug Components: Plugins Reporter: Faysal Banna I have enabled the plugin and noticed that it won't work if the background_fill threshold is anything other than 0.0. But with a background fill threshold of 0.0, I end up consuming more bandwidth than I actually serve. And with the threshold at 0.0, read_while_writer will always collapse connections anyway, so there is no need for the plugin. Can collapsed_connection work when background_fill_threshold is, for example, 0.5? That would make it much more useful. Much regards -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3024) build with OPENSSL_NO_SSL_INTERN
[ https://issues.apache.org/jira/browse/TS-3024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14204887#comment-14204887 ] ASF GitHub Bot commented on TS-3024: GitHub user shinrich opened a pull request: https://github.com/apache/trafficserver/pull/138 TS-3024 Compile with the OPENSSL_NO_SSL_INTERN flag. Move intern exceptions into SSLInternal.c. You can merge this pull request into a Git repository by running: $ git pull https://github.com/shinrich/trafficserver ts-3024 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/trafficserver/pull/138.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #138 commit 340a84d392ad9792eb20f68692f8ea2bb4e2b8b7 Author: shinrich shinr...@network-geographics.com Date: 2014-11-07T15:05:10Z TS-3024 Add in the -DOPENSSL_NO_SSL_INTERN flag for compiling and isolate exceptions in SSLInternal.cc. commit 81a66b36b3b2370a3cc85fe478efda753a4e6bde Author: shinrich shinr...@network-geographics.com Date: 2014-11-07T21:02:50Z Fix up comment build with OPENSSL_NO_SSL_INTERN Key: TS-3024 URL: https://issues.apache.org/jira/browse/TS-3024 Project: Traffic Server Issue Type: Bug Components: Build, SSL Reporter: James Peach Assignee: Susan Hinrichs Fix For: 5.2.0 Attachments: ts-3024.patch I think we should enable {{OPENSSL_NO_SSL_INTERN}} to make ourselves more robust to OpenSSL implementation changes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3155) Add a value test method to the MIMEField class
[ https://issues.apache.org/jira/browse/TS-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14204889#comment-14204889 ] ASF GitHub Bot commented on TS-3155: GitHub user shinrich opened a pull request: https://github.com/apache/trafficserver/pull/139 TS-3155 Adding value_get_index to test for presence of value in MIME header field. Using it in the keep alive checks. There are other places in the code where this change can be used to avoid full value copy and parsing. You can merge this pull request into a Git repository by running: $ git pull https://github.com/shinrich/trafficserver ts-3155 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/trafficserver/pull/139.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #139 commit e3dd78f250e40316895415614fe7a73b64ad061b Author: shinrich shinr...@network-geographics.com Date: 2014-11-07T17:31:03Z TS-3155 Adding value_get_index to test for presence of value in MIME header field. Using it in the keep alive checks. There are other places in the code where this change can be used to avoid full value copy and parsing. Add a value test method to the MIMEField class -- Key: TS-3155 URL: https://issues.apache.org/jira/browse/TS-3155 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Susan Hinrichs Assignee: Susan Hinrichs Fix For: 5.2.0 Attachments: ts-3155.patch In some cases, you don't need to directly manipulate the strings of values in a mime field. But you do need to test if a mime field contains a value (e.g. does the Connection field contain the value close). Currently, you must call MIMEField::value_get, but that does a bunch of copies and string allocation which is not needed in our case. 
We propose adding a MIMEField::value_get_index method which returns the index of the value in the list if it is present and -1 otherwise. Callers will still need to do the string parsing, but not the copies and allocation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3105) Combination of fixes for TS-3084 and TS-3073 causing asserts and segfaults on 5.1 and beyond
[ https://issues.apache.org/jira/browse/TS-3105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14204891#comment-14204891 ] ASF GitHub Bot commented on TS-3105: GitHub user shinrich opened a pull request: https://github.com/apache/trafficserver/pull/140 TS-3105 Fixes to improve post, memory usage, and reduce connection leaks. You can merge this pull request into a Git repository by running: $ git pull https://github.com/shinrich/trafficserver ts-3105 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/trafficserver/pull/140.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #140 commit 0127f2d59893f1cb24694cccb3c5db8cb581c7d0 Author: shinrich shinr...@network-geographics.com Date: 2014-11-07T16:06:40Z TS-3105 Fixes to improve post, memory usage, and reduce connection leaks. commit c426fcf4f6fbc1f4ca0df1d528fb76ad9c1b7402 Author: shinrich shinr...@network-geographics.com Date: 2014-11-07T16:52:21Z Clean up some warnings and comments in preparation for a merge request. Combination of fixes for TS-3084 and TS-3073 causing asserts and segfaults on 5.1 and beyond Key: TS-3105 URL: https://issues.apache.org/jira/browse/TS-3105 Project: Traffic Server Issue Type: Bug Reporter: Susan Hinrichs Assignee: Susan Hinrichs Fix For: 5.2.0 Attachments: ts-3073-and-3084-and-3105-against-510.patch, ts-3105-master-7.patch, ts-3105-master-9.patch These two patches were run in a production environment on top of 5.0.1 without problem for several weeks. Now running with these patches on top of 5.1 causes either an assert or a segfault. Another person has reported the same segfault when running master in a production environment. In the assert, the handler_state of the producers is 0 (UNKNOWN) rather than a terminal state which is expected. 
I'm assuming that either we are being directed into the terminal state by a connection that terminates too quickly, or an event has hung around too long and is being executed against the state machine after it has been recycled. The event is HTTP_TUNNEL_EVENT_DONE. The assert stack trace is: FATAL: HttpSM.cc:2632: failed assert `0` /z/bin/traffic_server - STACK TRACE: /z/lib/libtsutil.so.5(+0x25197)[0x2b8bd08dc197] /z/lib/libtsutil.so.5(+0x23def)[0x2b8bd08dadef] /z/bin/traffic_server(HttpSM::tunnel_handler_post_or_put(HttpTunnelProducer*)+0xcd)[0x5982ad] /z/bin/traffic_server(HttpSM::tunnel_handler_post(int, void*)+0x86)[0x5a32d6] /z/bin/traffic_server(HttpSM::main_handler(int, void*)+0xd8)[0x5a1e18] /z/bin/traffic_server(HttpTunnel::main_handler(int, void*)+0xee)[0x5dd6ae] /z/bin/traffic_server(write_to_net_io(NetHandler*, UnixNetVConnection*, EThread*)+0x136e)[0x721d1e] /z/bin/traffic_server(NetHandler::mainNetEvent(int, Event*)+0x28c)[0x7162fc] /z/bin/traffic_server(EThread::process_event(Event*, int)+0x91)[0x744df1] /z/bin/traffic_server(EThread::execute()+0x4fc)[0x7458ac] /z/bin/traffic_server[0x7440ca] /lib64/libpthread.so.0(+0x7034)[0x2b8bd1ee4034] /lib64/libc.so.6(clone+0x6d)[0x2b8bd2c2875d] The segfault stack trace is: /z/bin/traffic_server - STACK TRACE: /lib64/libpthread.so.0(+0xf280)[0x2abccd0d8280] /z/bin/traffic_server(HttpSM::tunnel_handler_ua(int, HttpTunnelConsumer*)+0x122)[0x591462] /z/bin/traffic_server(HttpTunnel::consumer_handler(int, HttpTunnelConsumer*)+0x9e)[0x5dd15e] /z/bin/traffic_server(HttpTunnel::main_handler(int, void*)+0x117)[0x5dd6d7] /z/bin/traffic_server(UnixNetVConnection::mainEvent(int, Event*)+0x3f0)[0x725190] /z/bin/traffic_server(InactivityCop::check_inactivity(int, Event*)+0x275)[0x716b75] /z/bin/traffic_server(EThread::process_event(Event*, int)+0x91)[0x744df1] /z/bin/traffic_server(EThread::execute()+0x2fb)[0x7456ab] /z/bin/traffic_server[0x7440ca] /lib64/libpthread.so.0(+0x7034)[0x2abccd0d0034]
/lib64/libc.so.6(clone+0x6d)[0x2abccde1475d] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3105) Combination of fixes for TS-3084 and TS-3073 causing asserts and segfaults on 5.1 and beyond
[ https://issues.apache.org/jira/browse/TS-3105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204920#comment-14204920 ] ASF GitHub Bot commented on TS-3105: Github user shinrich closed the pull request at: https://github.com/apache/trafficserver/pull/140 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3105) Combination of fixes for TS-3084 and TS-3073 causing asserts and segfaults on 5.1 and beyond
[ https://issues.apache.org/jira/browse/TS-3105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204919#comment-14204919 ] ASF GitHub Bot commented on TS-3105: Github user shinrich commented on the pull request: https://github.com/apache/trafficserver/pull/140#issuecomment-62404722 Tracking down one more timing issue. Will request again in a bit. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-2959) Compiler warnings from gcc 4.9.1
[ https://issues.apache.org/jira/browse/TS-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14205029#comment-14205029 ] James Peach commented on TS-2959: - I think the compiler's analysis is reasonable. {{MultiCacheHeader::levels}} is a public instance variable, so in principle its value can be anything. The unchecked use of {{level}} parameters throughout this file is dangerous and easily fixed. We should make the {{level}} parameter and member variable unsigned, since there's no rationale for a negative level. In {{MultiCacheBase::initialize}}, we should check the {{alevels}} variable against {{MULTI_CACHE_MAX_LEVELS}}. In the various {{MultiCache}} methods we should check that the {{level}} parameter is less than {{MULTI_CACHE_MAX_LEVELS}}. I'm not sure how we can tell the compiler about the invariant that {{MultiCacheHeader::levels}} should never exceed {{MULTI_CACHE_MAX_LEVELS}}, but it would be nice to be able to do so. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3155) Add a value test method to the MIMEField class
[ https://issues.apache.org/jira/browse/TS-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14205046#comment-14205046 ] ASF GitHub Bot commented on TS-3155: Github user jpeach commented on the pull request: https://github.com/apache/trafficserver/pull/139#issuecomment-62421425 This looks pretty good. Please use `strncasecmp() == 0` instead of `!strncasecmp()`. The former is easier to read and consistent with the rest of the code. I'm not sure whether `strncasecmp` is the right comparison operation here. The code in `MIME.cc` uses a mixture of `strcasecmp` and the `ParseRules` API. I suspect that `ptr_len_casecmp` would be the right API to use. I'm not sure how involved it would be, but please consider adding a regression test for this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-2959) Compiler warnings from gcc 4.9.1
[ https://issues.apache.org/jira/browse/TS-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205048#comment-14205048 ] ASF GitHub Bot commented on TS-2959: Github user jpeach commented on the pull request: https://github.com/apache/trafficserver/pull/137#issuecomment-62421501 I commented in TS-2959 Compiler warnings from gcc 4.9.1 Key: TS-2959 URL: https://issues.apache.org/jira/browse/TS-2959 Project: Traffic Server Issue Type: Bug Components: Core, DNS Reporter: Leif Hedstrom Assignee: Susan Hinrichs Fix For: 5.2.0 Attachments: ts-2959.diff We get:
{code}
In file included from ../../iocore/hostdb/P_HostDB.h:47:0,
                 from ../../proxy/Main.cc:63:
../../iocore/hostdb/P_MultiCache.h: In member function ‘void MultiCache<C>::rebuild_element(int, char*, RebuildMC) [with C = HostDBInfo]’:
../../iocore/hostdb/P_MultiCache.h:468:23: error: array subscript is above array bounds [-Werror=array-bounds]
  char *offset = data + level_offset[level] + bucketsize[level] * bucket;
../../iocore/hostdb/P_MultiCache.h:468:65: error: array subscript is above array bounds [-Werror=array-bounds]
  char *offset = data + level_offset[level] + bucketsize[level] * bucket;
../../iocore/hostdb/P_MultiCache.h:487:29: error: array subscript is above array bounds [-Werror=array-bounds]
  for (block = b; block < b + elements[level]; block++) {
../../iocore/hostdb/P_MultiCache.h:509:39: error: array subscript is above array bounds [-Werror=array-bounds]
  if (hits > ((max_hits / 2) + 1) * elements[level])
../../iocore/hostdb/P_MultiCache.h:511:33: error: array subscript is above array bounds [-Werror=array-bounds]
  for (block = b; block < b + elements[level]; block++) {
../../iocore/hostdb/P_MultiCache.h:468:23: error: array subscript is above array bounds [-Werror=array-bounds]
  char *offset = data + level_offset[level] + bucketsize[level] * bucket;
../../iocore/hostdb/P_MultiCache.h:468:65: error: array subscript is above array bounds [-Werror=array-bounds]
  char *offset = data + level_offset[level] + bucketsize[level] * bucket;
../../iocore/hostdb/P_MultiCache.h:487:29: error: array subscript is above array bounds [-Werror=array-bounds]
  for (block = b; block < b + elements[level]; block++) {
../../iocore/hostdb/P_MultiCache.h:509:39: error: array subscript is above array bounds [-Werror=array-bounds]
  if (hits > ((max_hits / 2) + 1) * elements[level])
../../iocore/hostdb/P_MultiCache.h:511:33: error: array subscript is above array bounds [-Werror=array-bounds]
  for (block = b; block < b + elements[level]; block++) {
../../iocore/hostdb/P_MultiCache.h:552:31: error: array subscript is above array bounds [-Werror=array-bounds]
  for (block = b; block < b + elements[level]; block++) {
../../iocore/hostdb/P_MultiCache.h:558:31: error: array subscript is above array bounds [-Werror=array-bounds]
  for (block = b; block < b + elements[level]; block++)
../../iocore/hostdb/P_MultiCache.h:552:31: error: array subscript is above array bounds [-Werror=array-bounds]
  for (block = b; block < b + elements[level]; block++) {
../../iocore/hostdb/P_MultiCache.h:558:31: error: array subscript is above array bounds [-Werror=array-bounds]
  for (block = b; block < b + elements[level]; block++)
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
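The flagged pattern above reduces to indexing fixed-size per-level arrays with a runtime level variable. The following is a hypothetical, simplified sketch — not the actual MultiCache code; the struct name, array sizes, and values are invented for illustration — of the shape of code that trips gcc 4.9's -Werror=array-bounds when the compiler cannot prove the index stays in range, with an assertion documenting the invariant:

```cpp
#include <cassert>

// Illustrative stand-in for the per-level bookkeeping arrays in the
// flagged code; names and sizes are invented for this example.
constexpr int MAX_LEVELS = 3;

struct MultiCacheSketch {
  int level_offset[MAX_LEVELS] = {0, 100, 300};
  int bucketsize[MAX_LEVELS]   = {10, 20, 40};

  // Indexing fixed-size arrays with a runtime `level` is the pattern gcc
  // warns about; asserting the invariant documents the expected range.
  char *element_offset(char *data, int level, int bucket) {
    assert(level >= 0 && level < MAX_LEVELS);
    return data + level_offset[level] + bucketsize[level] * bucket;
  }
};
```

Whether the warning is a false positive depends on whether gcc can see that `level` is bounded at every call site.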
[jira] [Updated] (TS-3156) Mutex[Try]Lock bool() operator change and unused API removal
[ https://issues.apache.org/jira/browse/TS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Powell Molleti updated TS-3156: --- Attachment: Use-Ryo-s-patch-to-pass-shared_ptr-to-MutexLock.patch Fixes the issue caught by Ryo; modified Ryo's patch to bring the code closer to the original implementation. Mutex[Try]Lock bool() operator change and unused API removal Key: TS-3156 URL: https://issues.apache.org/jira/browse/TS-3156 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Powell Molleti Assignee: James Peach Priority: Minor Labels: review Fix For: 5.2.0 Attachments: MutexLock-ats.patch, MutexLock-ats.patch, Use-Ryo-s-patch-to-pass-shared_ptr-to-MutexLock.patch, fix-MutexLock.patch Removed the unused constructor in MutexLock along with the set_and_take() method; had to change FORCE_PLUGIN_MUTEX() for that. Removed the release() method and the default bool and ! operators from both MutexLock and MutexTryLock, replacing them with an is_locked() API. Changed if (lock) to if (lock.is_locked()) across the code base. Ran make test; will be performing more system testing. Posted early for comments / feedback. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3156) Mutex[Try]Lock bool() operator change and unused API removal
[ https://issues.apache.org/jira/browse/TS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205110#comment-14205110 ] Powell Molleti commented on TS-3156: Hi Ryo, I have attached the patch above, with a slight modification on top of what you attached. Thanks again for catching the issue. I appreciate it. Mutex[Try]Lock bool() operator change and unused API removal Key: TS-3156 URL: https://issues.apache.org/jira/browse/TS-3156 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Powell Molleti Assignee: James Peach Priority: Minor Labels: review Fix For: 5.2.0 Attachments: MutexLock-ats.patch, MutexLock-ats.patch, Use-Ryo-s-patch-to-pass-shared_ptr-to-MutexLock.patch, fix-MutexLock.patch Removed the unused constructor in MutexLock along with the set_and_take() method; had to change FORCE_PLUGIN_MUTEX() for that. Removed the release() method and the default bool and ! operators from both MutexLock and MutexTryLock, replacing them with an is_locked() API. Changed if (lock) to if (lock.is_locked()) across the code base. Ran make test; will be performing more system testing. Posted early for comments / feedback. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3156) Mutex[Try]Lock bool() operator change and unused API removal
[ https://issues.apache.org/jira/browse/TS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205141#comment-14205141 ] Powell Molleti commented on TS-3156: Alan/Leif, Any idea why MutexLock in iocore/eventsystem/I_Lock.h uses a smart pointer for the actual lock? Scoped/Autolock code should not take ownership of the lock; it should just do a lock() (in the constructor) and an unlock() (in the destructor). Reference: boost_1_57_0/boost/interprocess/sync/scoped_lock.hpp Powell. Mutex[Try]Lock bool() operator change and unused API removal Key: TS-3156 URL: https://issues.apache.org/jira/browse/TS-3156 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Powell Molleti Assignee: James Peach Priority: Minor Labels: review Fix For: 5.2.0 Attachments: MutexLock-ats.patch, MutexLock-ats.patch, Use-Ryo-s-patch-to-pass-shared_ptr-to-MutexLock.patch, fix-MutexLock.patch Removed the unused constructor in MutexLock along with the set_and_take() method; had to change FORCE_PLUGIN_MUTEX() for that. Removed the release() method and the default bool and ! operators from both MutexLock and MutexTryLock, replacing them with an is_locked() API. Changed if (lock) to if (lock.is_locked()) across the code base. Ran make test; will be performing more system testing. Posted early for comments / feedback. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
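A non-owning scoped lock of the kind described in the comment above can be sketched as follows. This is a hypothetical standalone illustration built on std::mutex, not the actual I_Lock.h code: the guard borrows the mutex by reference (no smart pointer, no ownership), try-locks in the constructor, unlocks in the destructor, and exposes an explicit is_locked() query in place of an implicit bool conversion:

```cpp
#include <cassert>
#include <mutex>

// Hypothetical sketch of the non-owning scoped-lock idea: the guard holds a
// plain reference to the mutex, so it cannot outlive or destroy it; it only
// does try_lock() in the constructor and unlock() in the destructor.
class ScopedTryLock {
public:
  explicit ScopedTryLock(std::mutex &m) : mutex_(m), locked_(m.try_lock()) {}
  ~ScopedTryLock() {
    if (locked_) {
      mutex_.unlock();
    }
  }

  // Explicit query, mirroring the is_locked() API the ticket introduces in
  // place of the default bool and ! operators.
  bool is_locked() const { return locked_; }

  ScopedTryLock(const ScopedTryLock &) = delete;
  ScopedTryLock &operator=(const ScopedTryLock &) = delete;

private:
  std::mutex &mutex_;
  bool locked_;
};
```

Usage then reads like the converted call sites in the patch: `ScopedTryLock lock(m); if (lock.is_locked()) { /* critical section */ }`.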
[jira] [Commented] (TS-3105) Combination of fixes for TS-3084 and TS-3073 causing asserts and segfaults on 5.1 and beyond
[ https://issues.apache.org/jira/browse/TS-3105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205145#comment-14205145 ] ASF GitHub Bot commented on TS-3105: Github user jpeach commented on the pull request: https://github.com/apache/trafficserver/pull/140#issuecomment-62433191 Overall, this makes me a bit nervous. There are some changes which look obviously good, but other changes that I really don't know about. I'd feel a lot more comfortable if the different changes were broken out into smaller, independent commits. Combination of fixes for TS-3084 and TS-3073 causing asserts and segfaults on 5.1 and beyond Key: TS-3105 URL: https://issues.apache.org/jira/browse/TS-3105 Project: Traffic Server Issue Type: Bug Reporter: Susan Hinrichs Assignee: Susan Hinrichs Fix For: 5.2.0 Attachments: ts-3073-and-3084-and-3105-against-510.patch, ts-3105-master-7.patch, ts-3105-master-9.patch These two patches were run in a production environment on top of 5.0.1 without problem for several weeks. Now running with these patches on top of 5.1 causes either an assert or a segfault. Another person has reported the same segfault when running master in a production environment. In the assert, the handler_state of the producers is 0 (UNKNOWN) rather than a terminal state, which is expected. I'm assuming either we are being directed into the terminal state from a connection that terminates too quickly, or an event has hung around for too long and is being executed against the state machine after it has been recycled.
The event is HTTP_TUNNEL_EVENT_DONE. The assert stack trace is:
{noformat}
FATAL: HttpSM.cc:2632: failed assert `0`
/z/bin/traffic_server - STACK TRACE:
/z/lib/libtsutil.so.5(+0x25197)[0x2b8bd08dc197]
/z/lib/libtsutil.so.5(+0x23def)[0x2b8bd08dadef]
/z/bin/traffic_server(HttpSM::tunnel_handler_post_or_put(HttpTunnelProducer*)+0xcd)[0x5982ad]
/z/bin/traffic_server(HttpSM::tunnel_handler_post(int, void*)+0x86)[0x5a32d6]
/z/bin/traffic_server(HttpSM::main_handler(int, void*)+0xd8)[0x5a1e18]
/z/bin/traffic_server(HttpTunnel::main_handler(int, void*)+0xee)[0x5dd6ae]
/z/bin/traffic_server(write_to_net_io(NetHandler*, UnixNetVConnection*, EThread*)+0x136e)[0x721d1e]
/z/bin/traffic_server(NetHandler::mainNetEvent(int, Event*)+0x28c)[0x7162fc]
/z/bin/traffic_server(EThread::process_event(Event*, int)+0x91)[0x744df1]
/z/bin/traffic_server(EThread::execute()+0x4fc)[0x7458ac]
/z/bin/traffic_server[0x7440ca]
/lib64/libpthread.so.0(+0x7034)[0x2b8bd1ee4034]
/lib64/libc.so.6(clone+0x6d)[0x2b8bd2c2875d]
{noformat}
The segfault stack trace is:
{noformat}
/z/bin/traffic_server - STACK TRACE:
/lib64/libpthread.so.0(+0xf280)[0x2abccd0d8280]
/z/bin/traffic_server(HttpSM::tunnel_handler_ua(int, HttpTunnelConsumer*)+0x122)[0x591462]
/z/bin/traffic_server(HttpTunnel::consumer_handler(int, HttpTunnelConsumer*)+0x9e)[0x5dd15e]
/z/bin/traffic_server(HttpTunnel::main_handler(int, void*)+0x117)[0x5dd6d7]
/z/bin/traffic_server(UnixNetVConnection::mainEvent(int, Event*)+0x3f0)[0x725190]
/z/bin/traffic_server(InactivityCop::check_inactivity(int, Event*)+0x275)[0x716b75]
/z/bin/traffic_server(EThread::process_event(Event*, int)+0x91)[0x744df1]
/z/bin/traffic_server(EThread::execute()+0x2fb)[0x7456ab]
/z/bin/traffic_server[0x7440ca]
/lib64/libpthread.so.0(+0x7034)[0x2abccd0d0034]
/lib64/libc.so.6(clone+0x6d)[0x2abccde1475d]
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3178) ProxyAllocator improvements
[ https://issues.apache.org/jira/browse/TS-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brian Geffon updated TS-3178: - Attachment: patch.diff ProxyAllocator improvements --- Key: TS-3178 URL: https://issues.apache.org/jira/browse/TS-3178 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Brian Geffon Assignee: Cynthia Gu Fix For: 5.2.0 Attachments: patch.diff, patch.diff Currently, when a ProxyAllocator (Thread Local) has more than a configurable number of elements, it will return them one-by-one to a ClassAllocator (Global Freelist). Returning every item in this fashion is inefficient, as we'll likely need more items in the future. Therefore we should maintain a low watermark (a minimum number) of items that should be in a ProxyAllocator at any one time. When the number of elements reaches the high watermark, a free-up is triggered to bring the count back down to the low watermark. Additionally, the free should be a block free instead of one-by-one, as we can reduce several hundred compare-and-swap operations to a single CAS. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
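The watermark scheme described above can be sketched roughly as follows. All names, thresholds, and the mutex-protected global list are illustrative, not the actual ProxyAllocator/ClassAllocator code: when the per-thread free list grows past the high watermark, the surplus is unlinked down to the low watermark and spliced back into the global list as a single chain (so a lock-free global list would need only one CAS instead of one per item):

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>

// Illustrative intrusive free-list node: freed objects are chained through
// their first word.
struct FreeNode {
  FreeNode *next;
};

// Stand-in for the global (cross-thread) freelist. A real implementation
// would use a lock-free stack and splice the chain with a single CAS; a
// mutex keeps this sketch simple.
struct GlobalFreelist {
  std::mutex lock;
  FreeNode *head = nullptr;

  void free_chain(FreeNode *first, FreeNode *last) {
    std::lock_guard<std::mutex> guard(lock);
    last->next = head; // one splice for the whole block, not one per item
    head = first;
  }
};

// Hypothetical thread-local allocator with low/high watermarks
// (low_watermark is assumed to be at least 1).
struct ThreadFreelist {
  FreeNode *head = nullptr;
  std::size_t count = 0;
  std::size_t low_watermark;
  std::size_t high_watermark;
  GlobalFreelist *global;

  void free_item(FreeNode *n) {
    n->next = head;
    head = n;
    if (++count > high_watermark) {
      // Keep the first low_watermark nodes; unlink the rest as one chain.
      FreeNode *keep_tail = head;
      for (std::size_t i = 1; i < low_watermark; ++i) {
        keep_tail = keep_tail->next;
      }
      FreeNode *surplus = keep_tail->next;
      keep_tail->next = nullptr;
      // Walk to the end of the surplus chain, then splice it globally.
      FreeNode *last = surplus;
      while (last->next) {
        last = last->next;
      }
      global->free_chain(surplus, last);
      count = low_watermark;
    }
  }
};
```

The point of the design is the single splice in free_chain(): however many items are returned, the global list is touched once.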
[jira] [Commented] (TS-3178) ProxyAllocator improvements
[ https://issues.apache.org/jira/browse/TS-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205177#comment-14205177 ] Brian Geffon commented on TS-3178: -- [~amc] a new version of [~cynthiagu]'s patch is attached. ProxyAllocator improvements --- Key: TS-3178 URL: https://issues.apache.org/jira/browse/TS-3178 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Brian Geffon Assignee: Cynthia Gu Fix For: 5.2.0 Attachments: patch.diff, patch.diff Currently, when a ProxyAllocator (Thread Local) has more than a configurable number of elements, it will return them one-by-one to a ClassAllocator (Global Freelist). Returning every item in this fashion is inefficient, as we'll likely need more items in the future. Therefore we should maintain a low watermark (a minimum number) of items that should be in a ProxyAllocator at any one time. When the number of elements reaches the high watermark, a free-up is triggered to bring the count back down to the low watermark. Additionally, the free should be a block free instead of one-by-one, as we can reduce several hundred compare-and-swap operations to a single CAS. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3178) ProxyAllocator improvements
[ https://issues.apache.org/jira/browse/TS-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brian Geffon updated TS-3178: - Attachment: (was: patch.diff) ProxyAllocator improvements --- Key: TS-3178 URL: https://issues.apache.org/jira/browse/TS-3178 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Brian Geffon Assignee: Cynthia Gu Fix For: 5.2.0 Attachments: patch.diff, patch.diff Currently, when a ProxyAllocator (Thread Local) has more than a configurable number of elements, it will return them one-by-one to a ClassAllocator (Global Freelist). Returning every item in this fashion is inefficient, as we'll likely need more items in the future. Therefore we should maintain a low watermark (a minimum number) of items that should be in a ProxyAllocator at any one time. When the number of elements reaches the high watermark, a free-up is triggered to bring the count back down to the low watermark. Additionally, the free should be a block free instead of one-by-one, as we can reduce several hundred compare-and-swap operations to a single CAS. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3178) ProxyAllocator improvements
[ https://issues.apache.org/jira/browse/TS-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brian Geffon updated TS-3178: - Attachment: patch.diff ProxyAllocator improvements --- Key: TS-3178 URL: https://issues.apache.org/jira/browse/TS-3178 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Brian Geffon Assignee: Cynthia Gu Fix For: 5.2.0 Attachments: patch.diff, patch.diff Currently, when a ProxyAllocator (Thread Local) has more than a configurable number of elements, it will return them one-by-one to a ClassAllocator (Global Freelist). Returning every item in this fashion is inefficient, as we'll likely need more items in the future. Therefore we should maintain a low watermark (a minimum number) of items that should be in a ProxyAllocator at any one time. When the number of elements reaches the high watermark, a free-up is triggered to bring the count back down to the low watermark. Additionally, the free should be a block free instead of one-by-one, as we can reduce several hundred compare-and-swap operations to a single CAS. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3024) build with OPENSSL_NO_SSL_INTERN
[ https://issues.apache.org/jira/browse/TS-3024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205194#comment-14205194 ] ASF subversion and git services commented on TS-3024: - Commit f1a144df2e5a3f81e3fe11187d3bcb7e8e0f44e5 in trafficserver's branch refs/heads/master from [~shinrich] [ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=f1a144d ] TS-3024: build with OPENSSL_NO_SSL_INTERN Add in the -DOPENSSL_NO_SSL_INTERN flag for compiling and isolate exceptions in SSLInternal.cc. This closes #138. build with OPENSSL_NO_SSL_INTERN Key: TS-3024 URL: https://issues.apache.org/jira/browse/TS-3024 Project: Traffic Server Issue Type: Bug Components: Build, SSL Reporter: James Peach Assignee: Susan Hinrichs Fix For: 5.2.0 Attachments: ts-3024.patch I think we should enable {{OPENSSL_NO_SSL_INTERN}} to make ourselves more robust to OpenSSL implementation changes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3024) build with OPENSSL_NO_SSL_INTERN
[ https://issues.apache.org/jira/browse/TS-3024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205256#comment-14205256 ] ASF GitHub Bot commented on TS-3024: Github user asfgit closed the pull request at: https://github.com/apache/trafficserver/pull/138 build with OPENSSL_NO_SSL_INTERN Key: TS-3024 URL: https://issues.apache.org/jira/browse/TS-3024 Project: Traffic Server Issue Type: Bug Components: Build, SSL Reporter: James Peach Assignee: Susan Hinrichs Fix For: 5.2.0 Attachments: ts-3024.patch I think we should enable {{OPENSSL_NO_SSL_INTERN}} to make ourselves more robust to OpenSSL implementation changes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3155) Add a value test method to the MIMEField class
[ https://issues.apache.org/jira/browse/TS-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205280#comment-14205280 ] Susan Hinrichs commented on TS-3155: Looks like I should be able to extend test_mime() easily enough to exercise the new method. In terms of strncasecmp() vs ptr_len_casecmp(): they seem to be functionally equivalent. Since strncasecmp() is standard these days (which was not the case 10 years ago), shouldn't we be migrating to the OS-standard version of this kind of utility function? Add a value test method to the MIMEField class -- Key: TS-3155 URL: https://issues.apache.org/jira/browse/TS-3155 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Susan Hinrichs Assignee: Susan Hinrichs Fix For: 5.2.0 Attachments: ts-3155.patch In some cases, you don't need to directly manipulate the strings of values in a mime field, but you do need to test whether a mime field contains a value (e.g. does the Connection field contain the value close). Currently, you must call MIMEField::value_get, but that does a bunch of copies and string allocation which is not needed in our case. We propose adding a MIMEField::value_get_index method which returns the index of the value in the list if it is present and -1 otherwise. Callers will still need to do the string parsing, but do not need to do the copies and allocation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
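The proposed lookup could look roughly like the following sketch. This is an illustrative standalone function, not the actual MIMEField API: it scans a comma-separated header value (given as pointer plus length, not NUL-terminated) case-insensitively, returning the 0-based index of a match or -1, with no copies or allocation:

```cpp
#include <cassert>
#include <strings.h> // strncasecmp (POSIX)

// Illustrative sketch of the proposed value_get_index-style lookup: scan a
// comma-separated field value for `value`, comparing case-insensitively,
// and return its index in the list or -1 if absent. No allocation is done;
// the field is only walked in place.
int value_get_index(const char *field, int field_len, const char *value, int value_len) {
  int index = 0;
  const char *p = field;
  const char *end = field + field_len;
  while (p < end) {
    // Skip leading whitespace in this element.
    while (p < end && (*p == ' ' || *p == '\t')) {
      ++p;
    }
    // Find the end of this element (next comma or end of field).
    const char *elem_end = p;
    while (elem_end < end && *elem_end != ',') {
      ++elem_end;
    }
    // Trim trailing whitespace.
    const char *e = elem_end;
    while (e > p && (e[-1] == ' ' || e[-1] == '\t')) {
      --e;
    }
    if ((e - p) == value_len && strncasecmp(p, value, value_len) == 0) {
      return index;
    }
    p = elem_end + 1; // step past the comma
    ++index;
  }
  return -1;
}
```

For example, looking up "close" in a Connection field of "keep-alive, Close" would return index 1 despite the case difference.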
[jira] [Created] (TS-3183) Clean up (and/or eliminate?) proxy.node metrics
Leif Hedstrom created TS-3183: - Summary: Clean up (and/or eliminate?) proxy.node metrics Key: TS-3183 URL: https://issues.apache.org/jira/browse/TS-3183 Project: Traffic Server Issue Type: Improvement Components: Metrics Reporter: Leif Hedstrom There are two types of metrics, it seems: proxy.node and proxy.process. Unfortunately, in at least a few cases, the same metrics get duplicated into both. I'm not sure why that ever makes sense, so I think we should clean that up. That would certainly make it less confusing. Secondly, it seems the proxy.node metrics, which are registered via RecordsConfig.cc, are primarily used for computed metrics. These are metrics synthesized via the stats.config.xml mechanism. Although useful, what's bad with the current design here is that adding new metrics to the XML config file still requires a change to RecordsConfig.cc. This makes no sense to me, so perhaps the proxy.node metrics should all be eliminated from RecordsConfig.cc and exclusively owned (registered and updated) via the XML config format. Elimination, or renaming, of any of these metrics would be an incompatible change. Therefore, it would go into 6.0.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (TS-3183) Clean up (and/or eliminate?) proxy.node metrics
[ https://issues.apache.org/jira/browse/TS-3183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom reassigned TS-3183: - Assignee: Leif Hedstrom Clean up (and/or eliminate?) proxy.node metrics --- Key: TS-3183 URL: https://issues.apache.org/jira/browse/TS-3183 Project: Traffic Server Issue Type: Improvement Components: Metrics Reporter: Leif Hedstrom Assignee: Leif Hedstrom Labels: compatibility Fix For: 6.0.0 There are two types of metrics, it seems: proxy.node and proxy.process. Unfortunately, in at least a few cases, the same metrics get duplicated into both. I'm not sure why that ever makes sense, so I think we should clean that up. That would certainly make it less confusing. Secondly, it seems the proxy.node metrics, which are registered via RecordsConfig.cc, are primarily used for computed metrics. These are metrics synthesized via the stats.config.xml mechanism. Although useful, what's bad with the current design here is that adding new metrics to the XML config file still requires a change to RecordsConfig.cc. This makes no sense to me, so perhaps the proxy.node metrics should all be eliminated from RecordsConfig.cc and exclusively owned (registered and updated) via the XML config format. Elimination, or renaming, of any of these metrics would be an incompatible change. Therefore, it would go into 6.0.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3183) Clean up (and/or eliminate?) proxy.node metrics
[ https://issues.apache.org/jira/browse/TS-3183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-3183: -- Fix Version/s: 6.0.0 Clean up (and/or eliminate?) proxy.node metrics --- Key: TS-3183 URL: https://issues.apache.org/jira/browse/TS-3183 Project: Traffic Server Issue Type: Improvement Components: Metrics Reporter: Leif Hedstrom Labels: compatibility Fix For: 6.0.0 There are two types of metrics, it seems: proxy.node and proxy.process. Unfortunately, in at least a few cases, the same metrics get duplicated into both. I'm not sure why that ever makes sense, so I think we should clean that up. That would certainly make it less confusing. Secondly, it seems the proxy.node metrics, which are registered via RecordsConfig.cc, are primarily used for computed metrics. These are metrics synthesized via the stats.config.xml mechanism. Although useful, what's bad with the current design here is that adding new metrics to the XML config file still requires a change to RecordsConfig.cc. This makes no sense to me, so perhaps the proxy.node metrics should all be eliminated from RecordsConfig.cc and exclusively owned (registered and updated) via the XML config format. Elimination, or renaming, of any of these metrics would be an incompatible change. Therefore, it would go into 6.0.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3170) Remove autoconf.pac feature
[ https://issues.apache.org/jira/browse/TS-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-3170: -- Labels: compatibility (was: ) Remove autoconf.pac feature --- Key: TS-3170 URL: https://issues.apache.org/jira/browse/TS-3170 Project: Traffic Server Issue Type: Improvement Reporter: Susan Hinrichs Labels: compatibility Fix For: 6.0.0 This is a legacy feature no longer needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3168) Remove Log Collation
[ https://issues.apache.org/jira/browse/TS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-3168: -- Labels: compatibility (was: ) Remove Log Collation Key: TS-3168 URL: https://issues.apache.org/jira/browse/TS-3168 Project: Traffic Server Issue Type: Task Components: Logging Reporter: Susan Hinrichs Labels: compatibility Fix For: 6.0.0 In discussion with [~amc], [~zwoop], and [~bcall], we decided to remove this feature. It does not work; better to use the newer dedicated log subsystems and syslog-ng. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3126) Deprecate the config setting that controls sending of HTTP status response on a POST failure (e.g timeout)
[ https://issues.apache.org/jira/browse/TS-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-3126: -- Labels: compatibility (was: ) Deprecate the config setting that controls sending of HTTP status response on a POST failure (e.g. timeout) -- Key: TS-3126 URL: https://issues.apache.org/jira/browse/TS-3126 Project: Traffic Server Issue Type: Bug Components: Core, HTTP Affects Versions: 5.2.0 Reporter: Sudheer Vinukonda Assignee: Sudheer Vinukonda Labels: compatibility Fix For: 6.0.0 This is a follow-up jira for TS-3060. TS-3060 enhances the handling of timeout scenarios during POST transactions by sending an HTTP status response where possible. However, this change is being made configurable with default OFF in 5.2. This jira proposes to deprecate that setting in 6.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3107) Remove RHEL5 from supported OS
[ https://issues.apache.org/jira/browse/TS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-3107: -- Labels: compatibility (was: ) Remove RHEL5 from supported OS -- Key: TS-3107 URL: https://issues.apache.org/jira/browse/TS-3107 Project: Traffic Server Issue Type: Task Reporter: Phil Sorber Priority: Blocker Labels: compatibility Fix For: 6.0.0 We want to drop support for RHEL5 (and clones) when we release 6.0. We need more modern supporting libs like Flex and Bison. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3045) set default value for proxy.config.ssl.number.threads to -1, to default ET_NET threads to handle SSL connections
[ https://issues.apache.org/jira/browse/TS-3045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-3045: -- Labels: compatibility (was: ) set default value for proxy.config.ssl.number.threads to -1, to default ET_NET threads to handle SSL connections -- Key: TS-3045 URL: https://issues.apache.org/jira/browse/TS-3045 Project: Traffic Server Issue Type: Bug Components: Core Reporter: Sudheer Vinukonda Priority: Critical Labels: compatibility Fix For: 6.0.0 Using ET_SSL threads to handle SSL connections affects sharing origin sessions per thread pool (refer to TS-2574). This limitation is addressed in TS-2574, which forces the use of ET_NET threads for SSL connections when proxy.config.ssl.number.threads is configured to -1. This bug is to change the current default value of 0 for proxy.config.ssl.number.threads to -1, making this the default behavior. A separate bug for release 7.0 will be opened to phase out proxy.config.ssl.number.threads completely and always use ET_NET threads for SSL connections. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-2698) Change prototype of TSHttpTxnErrorBodySet() to take a length of the mime type string
[ https://issues.apache.org/jira/browse/TS-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-2698: -- Labels: api-change compatibility (was: api-change) Change prototype of TSHttpTxnErrorBodySet() to take a length of the mime type string --- Key: TS-2698 URL: https://issues.apache.org/jira/browse/TS-2698 Project: Traffic Server Issue Type: Improvement Components: TS API Reporter: Leif Hedstrom Assignee: Leif Hedstrom Labels: api-change, compatibility Fix For: 6.0.0 This would be more consistent with how we generally always pass along strings without an assumption of them being NULL terminated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-2244) remove legacy proxy.config.log.search_log_enabled feature
[ https://issues.apache.org/jira/browse/TS-2244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-2244: -- Labels: compatibility (was: ) remove legacy proxy.config.log.search_log_enabled feature - Key: TS-2244 URL: https://issues.apache.org/jira/browse/TS-2244 Project: Traffic Server Issue Type: Bug Components: Cleanup, Logging Reporter: James Peach Assignee: James Peach Labels: compatibility Fix For: 6.0.0 While analyzing logging code, I came across the {{proxy.config.log.search_log_enabled}} setting. This enabled a hard-coded XML custom log format that may have been used to drive an Inktomi appliance feature. This has never been documented and is not generally useful to modern Traffic Server deployments. We should remove it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-2326) Should proxy.config.cluster.ethernet_interface be LOCAL?
[ https://issues.apache.org/jira/browse/TS-2326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-2326: -- Labels: compatibility (was: ) Should proxy.config.cluster.ethernet_interface be LOCAL? - Key: TS-2326 URL: https://issues.apache.org/jira/browse/TS-2326 Project: Traffic Server Issue Type: Bug Components: Clustering, Configuration Reporter: Leif Hedstrom Assignee: Leif Hedstrom Labels: compatibility Fix For: 6.0.0 In the default config, we have CONFIG proxy.config.cluster.ethernet_interface STRING lo But why isn't this LOCAL? It seems a bit draconian to require the interface to be the same on all cluster members? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-2213) Body log tags have a default value of 0, should they be -1 now?
[ https://issues.apache.org/jira/browse/TS-2213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-2213: -- Labels: compatibility (was: ) Body log tags have a default value of 0, should they be -1 now? --- Key: TS-2213 URL: https://issues.apache.org/jira/browse/TS-2213 Project: Traffic Server Issue Type: Bug Components: Logging Reporter: Leif Hedstrom Labels: compatibility Fix For: 6.0.0 In our logging tags that provide a value for the body length, we default to 0 in the absence of a value. This is somewhat bad now, because we actually support caching objects with a zero-length body. Hence, a zero-length body entry is not distinguishable from one where we couldn't retrieve a body length (for whatever reason). I'm wondering if we should change all the defaults for all byte counts in logging from 0 to -1? This would be an incompatible change, so marking this for v5.0.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3183) Clean up (and/or eliminate?) proxy.node metrics
[ https://issues.apache.org/jira/browse/TS-3183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-3183: -- Labels: compatibility (was: ) Clean up (and/or eliminate?) proxy.node metrics --- Key: TS-3183 URL: https://issues.apache.org/jira/browse/TS-3183 Project: Traffic Server Issue Type: Improvement Components: Metrics Reporter: Leif Hedstrom Labels: compatibility Fix For: 6.0.0 There are two types of metrics, it seems: proxy.node and proxy.process. Unfortunately, in at least a few cases, the same metrics get duplicated into both. I'm not sure why that ever makes sense, so I think we should clean that up. That would certainly make it less confusing. Secondly, it seems the proxy.node metrics, which are registered via RecordsConfig.cc, are primarily used for computed metrics. These are metrics synthesized via the stats.config.xml mechanism. Although useful, what's bad with the current design here is that adding new metrics to the XML config file still requires a change to RecordsConfig.cc. This makes no sense to me, so perhaps the proxy.node metrics should all be eliminated from RecordsConfig.cc and exclusively owned (registered and updated) via the XML config format. Elimination, or renaming, of any of these metrics would be an incompatible change. Therefore, it would go into 6.0.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3158) switch traffic_manager to standard argument processing
[ https://issues.apache.org/jira/browse/TS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-3158: -- Labels: compatibility (was: ) switch traffic_manager to standard argument processing -- Key: TS-3158 URL: https://issues.apache.org/jira/browse/TS-3158 Project: Traffic Server Issue Type: Improvement Components: Manager Reporter: James Peach Assignee: James Peach Labels: compatibility Fix For: 6.0.0 {{traffic_manager}} does its own argument parsing. We should nuke that code and use the {{ink_args}} API. Unfortunately this is not backwards compatible, because {{ink_args}} uses double hyphens, and {{traffic_manager}} peculiarly only uses a single hyphen for long options. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-2421) HostDB creates files FOR THE DEVIL
[ https://issues.apache.org/jira/browse/TS-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-2421: -- Labels: compatibility (was: ) HostDB creates files FOR THE DEVIL -- Key: TS-2421 URL: https://issues.apache.org/jira/browse/TS-2421 Project: Traffic Server Issue Type: Bug Components: DNS, Security Reporter: Igor Galić Assignee: Leif Hedstrom Labels: compatibility Fix For: 6.0.0 from iocore/hostdb/MultiCache.cc: {code} // XXX: Shouldn't that be 0664? // if ((fd = ::open(p, O_CREAT | O_WRONLY | O_TRUNC, 0666)) >= 0) { {code} Yes. Yes it should. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
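For illustration, the difference between the two modes is only the world-write bit: 0666 requests write for owner, group, and everyone, while 0664 drops world write; the process umask is still applied on top of whichever mode is passed. A minimal hypothetical sketch of the suggested change (the function name is invented for this example, it is not the actual MultiCache.cc code):

```cpp
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cassert>

// Illustrative: create the file rw for owner and group only (0664) instead
// of 0666, which also requests world write. With a permissive umask (e.g.
// 002), an 0666 file would end up world-writable; an 0664 file never does.
int open_cache_file(const char *path) {
  return open(path, O_CREAT | O_WRONLY | O_TRUNC, 0664);
}
```

With the common umask of 022 the two modes happen to produce the same 0644 result, which is why the looser 0666 can go unnoticed; the 0664 mode makes the intent explicit regardless of umask.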
[jira] [Updated] (TS-3174) Kill LRU Ram Cache
[ https://issues.apache.org/jira/browse/TS-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-3174: -- Labels: compatibility (was: ) Kill LRU Ram Cache -- Key: TS-3174 URL: https://issues.apache.org/jira/browse/TS-3174 Project: Traffic Server Issue Type: Task Reporter: Susan Hinrichs Labels: compatibility Fix For: 6.0.0 Comment from [~zwoop]. Now that CLFUS is both stable and the default, is there even a reason to keep the old LRU cache? If there are no objections, we should remove it for the next major version change. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-1174) Should we eliminate all ERR_* status message in squid logging?
[ https://issues.apache.org/jira/browse/TS-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-1174: -- Labels: compatibility newbie (was: newbie) Should we eliminate all ERR_* status message in squid logging? Key: TS-1174 URL: https://issues.apache.org/jira/browse/TS-1174 Project: Traffic Server Issue Type: Improvement Components: Logging Reporter: Leif Hedstrom Labels: compatibility, newbie Fix For: 6.0.0 In more recent versions of Squid, ERR_* status messages have been merged into the status code. E.g. {code} ERR_* Errors are now contained in the status code. {code} Should we do likewise? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-1822) Do we still need proxy.config.system.mmap_max ?
[ https://issues.apache.org/jira/browse/TS-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-1822: -- Labels: compatibility (was: ) Do we still need proxy.config.system.mmap_max ? --- Key: TS-1822 URL: https://issues.apache.org/jira/browse/TS-1822 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Leif Hedstrom Assignee: Phil Sorber Labels: compatibility Fix For: 6.0.0 A long time ago, we added proxy.config.system.mmap_max to let the traffic_server increase the max number of mmap segments that we want to use. We currently set this to 2MM. I'm wondering, do we really need this still ? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-203) config files ownership
[ https://issues.apache.org/jira/browse/TS-203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-203: - Labels: compatibility (was: ) config files ownership -- Key: TS-203 URL: https://issues.apache.org/jira/browse/TS-203 Project: Traffic Server Issue Type: Bug Components: Build Reporter: Leif Hedstrom Priority: Minor Labels: compatibility Fix For: 6.0.0 It's semi-odd that the admin user (nobody) is also the user to which the traffic_server process changes its euid. This means that the traffic_server process has write permissions on the config files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-1983) ACL rules in remap.config does not take precedence over rules in ip_allow.config
[ https://issues.apache.org/jira/browse/TS-1983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-1983: -- Labels: compatibility (was: ) ACL rules in remap.config does not take precedence over rules in ip_allow.config Key: TS-1983 URL: https://issues.apache.org/jira/browse/TS-1983 Project: Traffic Server Issue Type: Bug Components: Configuration Reporter: Leif Hedstrom Assignee: Alan M. Carroll Labels: compatibility Fix For: 6.0.0 Let's say you want to allow DELETE for a small sub-set of requests, based on remap.config rules. The reasonable configuration is to do e.g. {code} map http://dav.example.com http://127.0.0.1 @method=DELETE @action=allow {code} However, this does not work, since the global DENY in ip_allow.config takes precedence (it denies all DELETEs). This is actually sort of a regression I think; I'm fairly certain it did not use to behave like this. The workaround (which is incredibly cumbersome if you have even a moderately large remap.config) is to invert the rules. E.g. {code} src_ip=0.0.0.0-255.255.255.255 action=ip_deny method=PUSH|PURGE {code} and {code} map http://other.example.com http://123 @method=DELETE @action=deny map http://another.example.com http://123 @method=DELETE @action=deny map http://more.example.com http://123 @method=DELETE @action=deny . . . {code} This kinda sucks to maintain, and also opens up a PEBKAC security problem, when someone adds a new remap.config rule and forgets to deny the DELETEs. I really feel that the ACLs from remap.config (if they match; you can specify IP ranges etc. as well) should take precedence over ip_allow.config. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-2080) Remove arbitrary 1 year max cache freshness limit
[ https://issues.apache.org/jira/browse/TS-2080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-2080: -- Labels: compatibility (was: ) Remove arbitrary 1 year max cache freshness limit - Key: TS-2080 URL: https://issues.apache.org/jira/browse/TS-2080 Project: Traffic Server Issue Type: Improvement Components: Cache Reporter: Leif Hedstrom Assignee: Leif Hedstrom Labels: compatibility Fix For: 6.0.0 For some reason (maybe John knows?) we have an upper limit on cache freshness at 1 year. I have no idea why this is; the only place it's used is in HttpTransact.cc: {code} max_freshness_bounds = min((MgmtInt)NUM_SECONDS_IN_ONE_YEAR, s->txn_conf->cache_guaranteed_max_lifetime); {code} Begs the question, why not just remove the min(), and always use the cache_guaranteed_max_lifetime? This is a records.config setting, which defaults to 1 year (go figure). {code} {RECT_CONFIG, "proxy.config.http.cache.guaranteed_max_lifetime", RECD_INT, 31536000, RECU_DYNAMIC, RR_NULL, RECC_NULL, NULL, RECA_NULL} {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-1985) Eliminate built-in log formats in favor of logs_xml.config
[ https://issues.apache.org/jira/browse/TS-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-1985: -- Labels: compatibility newbie (was: newbie) Eliminate built-in log formats in favor of logs_xml.config -- Key: TS-1985 URL: https://issues.apache.org/jira/browse/TS-1985 Project: Traffic Server Issue Type: Improvement Components: Logging Reporter: Leif Hedstrom Labels: compatibility, newbie Fix For: 6.0.0 I have a feeling that the hardcoded (built-in) log-formats were the old way of doing things, and logs_xml.config is the new way. As such, I'd like to propose that we eliminate all the built-ins, and provide all those formats in a default logs_xml.config. One thing that might be necessary to add is an option in the XML config to disable a log file. I don't know if that's easily doable without using XML comments, but it would be easy to add and useful. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-1822) Do we still need proxy.config.system.mmap_max ?
[ https://issues.apache.org/jira/browse/TS-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205374#comment-14205374 ] Phil Sorber commented on TS-1822: - We will very likely need more mmaps for huge page support. Do we still need proxy.config.system.mmap_max ? --- Key: TS-1822 URL: https://issues.apache.org/jira/browse/TS-1822 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Leif Hedstrom Assignee: Phil Sorber Labels: compatibility Fix For: 6.0.0 A long time ago, we added proxy.config.system.mmap_max to let the traffic_server increase the max number of mmap segments that we want to use. We currently set this to 2MM. I'm wondering, do we really need this still ? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3156) Mutex[Try]Lock bool() operator change and unused API removal
[ https://issues.apache.org/jira/browse/TS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205446#comment-14205446 ] Alan M. Carroll commented on TS-3156: - It doesn't take ownership, but it does hold a reference so the mutex can't evaporate before the destructor is called. This is generally a good idea if you need to access a smart-pointer-protected resource later. On Monday, November 10, 2014 12:43 PM, Powell Molleti (JIRA) j...@apache.org wrote: [ https://issues.apache.org/jira/browse/TS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205141#comment-14205141 ] Powell Molleti commented on TS-3156: Alan/Leif, Any idea why MutexLock in iocore/eventsystem/I_Lock.h uses a smart ptr for the actual lock? Scoped/Autolock code should not take ownership of the lock; it should just do a lock() (in the constructor) and unlock() (in the destructor). Reference: boost_1_57_0/boost/interprocess/sync/scoped_lock.hpp Powell. -- This message was sent by Atlassian JIRA (v6.3.4#6332) Mutex[Try]Lock bool() operator change and unused API removal Key: TS-3156 URL: https://issues.apache.org/jira/browse/TS-3156 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Powell Molleti Assignee: James Peach Priority: Minor Labels: review Fix For: 5.2.0 Attachments: MutexLock-ats.patch, MutexLock-ats.patch, Use-Ryo-s-patch-to-pass-shared_ptr-to-MutexLock.patch, fix-MutexLock.patch Removed unused constructor in MutexLock along with set_and_take() method, had to change FORCE_PLUGIN_MUTEX() for that. Removed release() method. Replaced the default bool and ! operators in both MutexLock and MutexTryLock with an is_locked() API. Changes if (lock) to if (lock.is_locked()) across the code base. Ran make test; will be performing more system testing. Posted early for comments / feedback. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (TS-3184) window_update not triggered correctly..
Sudheer Vinukonda created TS-3184: - Summary: window_update not triggered correctly.. Key: TS-3184 URL: https://issues.apache.org/jira/browse/TS-3184 Project: Traffic Server Issue Type: Bug Components: SPDY Reporter: Sudheer Vinukonda During a session start, spdy advertises the initial window size as the configured {{proxy.config.spdy.initial_window_size_in}}. A window_update is triggered whenever the current delta_window_size reaches half this advertised window size. However, the condition that checks for triggering the window update compares the delta_window_size for each stream with the initial window size. This fails to trigger a window_update when the delta_window_size for each stream individually is not half the initial_window_size, but, the aggregate of all the streams is high enough. Consequently, the sender stalls upon exhausting the send window size and eventually times out waiting for a window update (which never happens, since, individually, each stream doesn't hit half the initial window size). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3184) spdy window_update not triggered correctly..
[ https://issues.apache.org/jira/browse/TS-3184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sudheer Vinukonda updated TS-3184: -- Summary: spdy window_update not triggered correctly.. (was: window_update not triggered correctly..) spdy window_update not triggered correctly.. Key: TS-3184 URL: https://issues.apache.org/jira/browse/TS-3184 Project: Traffic Server Issue Type: Bug Components: SPDY Reporter: Sudheer Vinukonda During a session start, spdy advertises the initial window size as the configured {{proxy.config.spdy.initial_window_size_in}}. A window_update is triggered whenever the current delta_window_size reaches half this advertised window size. However, the condition that checks for triggering the window update compares the delta_window_size for each stream with the initial window size. This fails to trigger a window_update when the delta_window_size for each stream individually is not half the initial_window_size, but, the aggregate of all the streams is high enough. Consequently, the sender stalls upon exhausting the send window size and eventually times out waiting for a window update (which never happens, since, individually, each stream doesn't hit half the initial window size). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3184) spdy window_update not triggered correctly..
[ https://issues.apache.org/jira/browse/TS-3184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205561#comment-14205561 ] Sudheer Vinukonda commented on TS-3184: --- A simple fix (thanks to [~yzlai]) for this issue: {code}
diff --git a/proxy/spdy/SpdyCallbacks.cc b/proxy/spdy/SpdyCallbacks.cc
index 252f9a2..470ed71 100644
--- a/proxy/spdy/SpdyCallbacks.cc
+++ b/proxy/spdy/SpdyCallbacks.cc
@@ -373,6 +373,21 @@ spdy_on_data_chunk_recv_callback(spdylay_session * /*session*/, uint8_t /*flags*
   return;
 }
 
+unsigned
+spdy_session_delta_window_size(SpdyClientSession *sm)
+{
+  unsigned sess_delta_window_size = 0;
+  map<int, SpdyRequest*>::iterator iter = sm->req_map.begin();
+  map<int, SpdyRequest*>::iterator endIter = sm->req_map.end();
+  for (; iter != endIter; ++iter) {
+    SpdyRequest* req = iter->second;
+    sess_delta_window_size += req->delta_window_size;
+  }
+  Debug("spdy", "sm_id:%" PRId64 ", session delta_window_size:%u",
+        sm->sm_id, sess_delta_window_size);
+  return sess_delta_window_size;
+}
+
 void
 spdy_on_data_recv_callback(spdylay_session *session, uint8_t flags,
                            int32_t stream_id, int32_t length, void *user_data)
@@ -397,7 +412,7 @@ spdy_on_data_recv_callback(spdylay_session *session, uint8_t flags,
   Debug("spdy", "sm_id:%" PRId64 ", stream_id:%d, delta_window_size:%u",
         sm->sm_id, stream_id, req->delta_window_size);
 
-  if (req->delta_window_size >= spdy_initial_window_size/2) {
+  if (spdy_session_delta_window_size(sm) >= spdy_initial_window_size/2) {
     Debug("spdy", "Reenable write_vio for WINDOW_UPDATE frame, delta_window_size:%u",
           req->delta_window_size);
{code} spdy window_update not triggered correctly.. Key: TS-3184 URL: https://issues.apache.org/jira/browse/TS-3184 Project: Traffic Server Issue Type: Bug Components: SPDY Reporter: Sudheer Vinukonda During a session start, spdy advertises the initial window size as the configured {{proxy.config.spdy.initial_window_size_in}}. 
A window_update is triggered whenever the current delta_window_size reaches half this advertised window size. However, the condition that checks for triggering the window update compares the delta_window_size for each stream with the initial window size. This fails to trigger a window_update when the delta_window_size for each stream individually is not half the initial_window_size, but, the aggregate of all the streams is high enough. Consequently, the sender stalls upon exhausting the send window size and eventually times out waiting for a window update (which never happens, since, individually, each stream doesn't hit half the initial window size). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-2417) Add forward secrecy support with DHE (SSL related)
[ https://issues.apache.org/jira/browse/TS-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Eaglesham updated TS-2417: --- Attachment: ats_dhe-3.patch Patch updated with a clearer error message, documentation, and the config parameter added to the ancillary files. Add forward secrecy support with DHE (SSL related) -- Key: TS-2417 URL: https://issues.apache.org/jira/browse/TS-2417 Project: Traffic Server Issue Type: Improvement Components: HTTP, SSL Reporter: Bryan Call Assignee: John Eaglesham Fix For: 5.3.0 Attachments: ats_dhe-2.patch, ats_dhe-3.patch mod_ssl bug and changes: https://issues.apache.org/bugzilla/show_bug.cgi?id=49559 Discussion on httpd-dev list: http://mail-archives.apache.org/mod_mbox/httpd-dev/201309.mbox/%3c52358ed1.2070...@velox.ch%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3184) spdy window_update not triggered correctly..
[ https://issues.apache.org/jira/browse/TS-3184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205616#comment-14205616 ] ASF subversion and git services commented on TS-3184: - Commit 60bf04ddf9280af4dad4d267223ede29353b3a01 in trafficserver's branch refs/heads/master from [~sudheerv] [ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=60bf04d ] [TS-3184]: spdy window_update not triggered correctly spdy window_update not triggered correctly.. Key: TS-3184 URL: https://issues.apache.org/jira/browse/TS-3184 Project: Traffic Server Issue Type: Bug Components: SPDY Reporter: Sudheer Vinukonda During a session start, spdy advertises the initial window size as the configured {{proxy.config.spdy.initial_window_size_in}}. A window_update is triggered whenever the current delta_window_size reaches half this advertised window size. However, the condition that checks for triggering the window update compares the delta_window_size for each stream with the initial window size. This fails to trigger a window_update when the delta_window_size for each stream individually is not half the initial_window_size, but, the aggregate of all the streams is high enough. Consequently, the sender stalls upon exhausting the send window size and eventually times out waiting for a window update (which never happens, since, individually, each stream doesn't hit half the initial window size). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3105) Combination of fixes for TS-3084 and TS-3073 causing asserts and segfaults on 5.1 and beyond
[ https://issues.apache.org/jira/browse/TS-3105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205674#comment-14205674 ] Susan Hinrichs commented on TS-3105: Sigh, I think that you are probably right, even though it will be tedious to split apart. While tracking down crashes and performance problems I've addressed a number of issues via this bug. By my count, I think there are around 6 separable bug fixes going on here. I'll start filing sub-bugs tomorrow and promoting the fixes separately. BTW, I think your analysis of the keep_alive flag is accurate. I think that was a spurious fix introduced along the way in an attempt to fix something that was due to something else that got fixed later. Combination of fixes for TS-3084 and TS-3073 causing asserts and segfaults on 5.1 and beyond Key: TS-3105 URL: https://issues.apache.org/jira/browse/TS-3105 Project: Traffic Server Issue Type: Bug Reporter: Susan Hinrichs Assignee: Susan Hinrichs Fix For: 5.2.0 Attachments: ts-3073-and-3084-and-3105-against-510.patch, ts-3105-master-7.patch, ts-3105-master-9.patch These two patches were run in a production environment on top of 5.0.1 without problem for several weeks. Now running with these patches on top of 5.1 causes either an assert or a segfault. Another person has reported the same segfault when running master in a production environment. In the assert, the handler_state of the producers is 0 (UNKNOWN) rather than a terminal state which is expected. I'm assuming either we are being directed into the terminal state by a connection that terminates too quickly, or an event has hung around for too long and is being executed against the state machine after it has been recycled. 
The event is HTTP_TUNNEL_EVENT_DONE The assert stack trace is FATAL: HttpSM.cc:2632: failed assert `0` /z/bin/traffic_server - STACK TRACE: /z/lib/libtsutil.so.5(+0x25197)[0x2b8bd08dc197] /z/lib/libtsutil.so.5(+0x23def)[0x2b8bd08dadef] /z/bin/traffic_server(HttpSM::tunnel_handler_post_or_put(HttpTunnelProducer*)+0xcd)[0x5982ad] /z/bin/traffic_server(HttpSM::tunnel_handler_post(int, void*)+0x86)[0x5a32d6] /z/bin/traffic_server(HttpSM::main_handler(int, void*)+0xd8)[0x5a1e18] /z/bin/traffic_server(HttpTunnel::main_handler(int, void*)+0xee)[0x5dd6ae] /z/bin/traffic_server(write_to_net_io(NetHandler*, UnixNetVConnection*, EThread*)+0x136e)[0x721d1e] /z/bin/traffic_server(NetHandler::mainNetEvent(int, Event*)+0x28c)[0x7162fc] /z/bin/traffic_server(EThread::process_event(Event*, int)+0x91)[0x744df1] /z/bin/traffic_server(EThread::execute()+0x4fc)[0x7458ac] /z/bin/traffic_server[0x7440ca] /lib64/libpthread.so.0(+0x7034)[0x2b8bd1ee4034] /lib64/libc.so.6(clone+0x6d)[0x2b8bd2c2875d] The segfault stack trace is /z/bin/traffic_server - STACK TRACE: /lib64/libpthread.so.0(+0xf280)[0x2abccd0d8280] /z/bin/traffic_server(HttpSM::tunnel_handler_ua(int, HttpTunnelConsumer*)+0x122)[0x591462] /z/bin/traffic_server(HttpTunnel::consumer_handler(int, HttpTunnelConsumer*)+0x9e)[0x5dd15e] /z/bin/traffic_server(HttpTunnel::main_handler(int, void*)+0x117)[0x5dd6d7] /z/bin/traffic_server(UnixNetVConnection::mainEvent(int, Event*)+0x3f0)[0x725190] /z/bin/traffic_server(InactivityCop::check_inactivity(int, Event*)+0x275)[0x716b75] /z/bin/traffic_server(EThread::process_event(Event*, int)+0x91)[0x744df1] /z/bin/traffic_server(EThread::execute()+0x2fb)[0x7456ab] /z/bin/traffic_server[0x7440ca] /lib64/libpthread.so.0(+0x7034)[0x2abccd0d0034] /lib64/libc.so.6(clone+0x6d)[0x2abccde1475d] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-1570) remap doesn't reject request whose Host has extra characters after port (like test.com:80xxx)
[ https://issues.apache.org/jira/browse/TS-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205725#comment-14205725 ] Cynthia Gu commented on TS-1570: The root cause is in HTTPHdr::_fill_target_cache(). It processes port string and stops before the non-digit char. remap doesn't reject request whose Host has extra characters after port (like test.com:80xxx) --- Key: TS-1570 URL: https://issues.apache.org/jira/browse/TS-1570 Project: Traffic Server Issue Type: Bug Components: HTTP Affects Versions: 3.3.0 Reporter: Conan Wang Assignee: Cynthia Gu Priority: Minor Fix For: 5.3.0 remap.config:map http://test.com http://1.1.1.1 The request with Host: 'test.com:80xxx' or 'test.com:xxx' will get passed. Such host is not filtered strictly. Just report, didn't have big problem for me though. curl http://127.0.0.1:8080/ -H Host: test.com:80xxx or curl -x 127.0.0.1:8080 http://test.com:80xxx/ -v -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (TS-1570) remap doesn't reject request whose Host has extra characters after port (like test.com:80xxx)
[ https://issues.apache.org/jira/browse/TS-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205725#comment-14205725 ] Cynthia Gu edited comment on TS-1570 at 11/11/14 1:21 AM: -- The root cause is in HTTPHdr::_fill_target_cache(). It processes the port string and stops at the first non-digit character.
// Check in the URL first, then the HOST field.
if (0 != url->host_get(&m_host_length)) {
  m_target_in_url = true;
  m_port = url->port_get();
  m_port_in_header = 0 != url->port_get_raw();
  m_host_mime = NULL;
} else if (0 != (m_host_mime = const_cast<HTTPHdr*>(this)->get_host_port_values(0, &m_host_length, &port_ptr, 0))) {
  if (port_ptr) {
    m_port = 0;
    for ( ; is_digit(*port_ptr) ; ++port_ptr )
      m_port = m_port * 10 + *port_ptr - '0';
    m_port_in_header = (0 != m_port);
  }
  m_port = url_canonicalize_port(url->m_url_impl->m_url_type, m_port);
}
was (Author: cynthiagu): The root cause is in HTTPHdr::_fill_target_cache(). It processes the port string and stops at the first non-digit character. remap doesn't reject request whose Host has extra characters after port (like test.com:80xxx) --- Key: TS-1570 URL: https://issues.apache.org/jira/browse/TS-1570 Project: Traffic Server Issue Type: Bug Components: HTTP Affects Versions: 3.3.0 Reporter: Conan Wang Assignee: Cynthia Gu Priority: Minor Fix For: 5.3.0 remap.config: map http://test.com http://1.1.1.1 The request with Host: 'test.com:80xxx' or 'test.com:xxx' will get passed. Such hosts are not filtered strictly. Just reporting; it didn't cause a big problem for me though. curl http://127.0.0.1:8080/ -H "Host: test.com:80xxx" or curl -x 127.0.0.1:8080 http://test.com:80xxx/ -v -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-3156) Mutex[Try]Lock bool() operator change and unused API removal
[ https://issues.apache.org/jira/browse/TS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205749#comment-14205749 ] Powell Molleti commented on TS-3156: Hi Alan, It does take ownership of the object, since the API says:
class MutexLock {
private:
  Ptr<ProxyMutex> m;
public:
  MutexLock(ProxyMutex * am, EThread * t) : m(am)
};
It is perfectly ok from the compiler's perspective to use it as follows:
ProxyMutex foo;
{
  MutexLock(lock, &foo, this_ethread()); -- takes the lock, and the smart pointer owns it with ref count 1
} -- the lock object is destroyed, and since the smart pointer's ref count drops to zero when dying, it will free the ProxyMutex
The constructor should be MutexLock(Ptr<ProxyMutex> &m, EThread * t); and the call should be enforced as MutexLock(lock, Ptr<ProxyMutex>(&foo), this_ethread()); to force the refcount up. Let me know. Powell Mutex[Try]Lock bool() operator change and unused API removal Key: TS-3156 URL: https://issues.apache.org/jira/browse/TS-3156 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Powell Molleti Assignee: James Peach Priority: Minor Labels: review Fix For: 5.2.0 Attachments: MutexLock-ats.patch, MutexLock-ats.patch, Use-Ryo-s-patch-to-pass-shared_ptr-to-MutexLock.patch, fix-MutexLock.patch Removed unused constructor in MutexLock along with set_and_take() method, had to change FORCE_PLUGIN_MUTEX() for that. Removed release() method. default bool and ! operator from both MutexLock and MutexTryLock with is_locked() API. Changes if (lock) to if (lock.is_locked()) across the code base. Ran make test will be performing more system testing. Posted before for early comments / feedback. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (TS-1570) remap doesn't reject request whose Host has extra characters after port (like test.com:80xxx)
[ https://issues.apache.org/jira/browse/TS-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205725#comment-14205725 ] Cynthia Gu edited comment on TS-1570 at 11/11/14 1:21 AM: -- The root cause is in HTTPHdr::_fill_target_cache(). It processes the port string and stops at the first non-digit character. Pasting the code below:
// Check in the URL first, then the HOST field.
if (0 != url->host_get(&m_host_length)) {
  m_target_in_url = true;
  m_port = url->port_get();
  m_port_in_header = 0 != url->port_get_raw();
  m_host_mime = NULL;
} else if (0 != (m_host_mime = const_cast<HTTPHdr*>(this)->get_host_port_values(0, &m_host_length, &port_ptr, 0))) {
  if (port_ptr) {
    m_port = 0;
    for ( ; is_digit(*port_ptr) ; ++port_ptr )
      m_port = m_port * 10 + *port_ptr - '0';
    m_port_in_header = (0 != m_port);
  }
  m_port = url_canonicalize_port(url->m_url_impl->m_url_type, m_port);
}
was (Author: cynthiagu): The root cause is in HTTPHdr::_fill_target_cache(). It processes the port string and stops at the first non-digit character.
// Check in the URL first, then the HOST field.
if (0 != url->host_get(&m_host_length)) {
  m_target_in_url = true;
  m_port = url->port_get();
  m_port_in_header = 0 != url->port_get_raw();
  m_host_mime = NULL;
} else if (0 != (m_host_mime = const_cast<HTTPHdr*>(this)->get_host_port_values(0, &m_host_length, &port_ptr, 0))) {
  if (port_ptr) {
    m_port = 0;
    for ( ; is_digit(*port_ptr) ; ++port_ptr )
      m_port = m_port * 10 + *port_ptr - '0';
    m_port_in_header = (0 != m_port);
  }
  m_port = url_canonicalize_port(url->m_url_impl->m_url_type, m_port);
}
remap doesn't reject request whose Host has extra characters after port (like test.com:80xxx) --- Key: TS-1570 URL: https://issues.apache.org/jira/browse/TS-1570 Project: Traffic Server Issue Type: Bug Components: HTTP Affects Versions: 3.3.0 Reporter: Conan Wang Assignee: Cynthia Gu Priority: Minor Fix For: 5.3.0 remap.config: map http://test.com http://1.1.1.1 The request with Host: 'test.com:80xxx' or 'test.com:xxx' will get passed. 
Such hosts are not filtered strictly. Just reporting; it didn't cause a big problem for me though. curl http://127.0.0.1:8080/ -H "Host: test.com:80xxx" or curl -x 127.0.0.1:8080 http://test.com:80xxx/ -v -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TS-1822) Do we still need proxy.config.system.mmap_max ?
[ https://issues.apache.org/jira/browse/TS-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205773#comment-14205773 ] Zhao Yongming commented on TS-1822: --- We make use of the reclaim freelist on our 48G memory systems, handling about a 24-32G ram cache with an average content size of about 32KB. The default sysctl setting vm.max_map_count = 65530 is not enough; we had to raise it to 2x. So, if we choose to keep this, I'd make it an option that raises the default sysctl setting, for example via the cop process. Do we still need proxy.config.system.mmap_max ? --- Key: TS-1822 URL: https://issues.apache.org/jira/browse/TS-1822 Project: Traffic Server Issue Type: Improvement Components: Core Reporter: Leif Hedstrom Assignee: Phil Sorber Labels: compatibility Fix For: 6.0.0 A long time ago, we added proxy.config.system.mmap_max to let the traffic_server increase the max number of mmap segments that we want to use. We currently set this to 2MM. I'm wondering, do we really need this still ? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3184) spdy window_update not triggered correctly..
[ https://issues.apache.org/jira/browse/TS-3184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sudheer Vinukonda updated TS-3184: -- Description: During a session start, spdy advertises the initial window size as the configured {{proxy.config.spdy.initial_window_size_in}}. A window_update is triggered whenever the current delta_window_size reaches half this advertised window size. However, the condition that checks for triggering the window update compares the delta_window_size for each stream with the initial window size. This fails to trigger a window_update when the delta_window_size for each stream individually is not half the initial_window_size, even though, the aggregate of the delta_window_size for all the streams is high enough. Consequently, the sender stalls upon exhausting the send window size and eventually times out waiting for a window update (which never happens, since, individually, each stream doesn't hit half the initial window size). was: During a session start, spdy advertises the initial window size as the configured {{proxy.config.spdy.initial_window_size_in}}. A window_update is triggered whenever the current delta_window_size reaches half this advertised window size. However, the condition that checks for triggering the window update compares the delta_window_size for each stream with the initial window size. This fails to trigger a window_update when the delta_window_size for each stream individually is not half the initial_window_size, but, the aggregate of all the streams is high enough. Consequently, the sender stalls upon exhausting the send window size and eventually times out waiting for a window update (which never happens, since, individually, each stream doesn't hit half the initial window size). spdy window_update not triggered correctly.. 
Key: TS-3184 URL: https://issues.apache.org/jira/browse/TS-3184 Project: Traffic Server Issue Type: Bug Components: SPDY Reporter: Sudheer Vinukonda During a session start, spdy advertises the initial window size as the configured {{proxy.config.spdy.initial_window_size_in}}. A window_update is triggered whenever the current delta_window_size reaches half this advertised window size. However, the condition that checks for triggering the window update compares the delta_window_size for each stream with the initial window size. This fails to trigger a window_update when the delta_window_size for each stream individually is not half the initial_window_size, even though, the aggregate of the delta_window_size for all the streams is high enough. Consequently, the sender stalls upon exhausting the send window size and eventually times out waiting for a window update (which never happens, since, individually, each stream doesn't hit half the initial window size). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (TS-3185) Increase the default spdy initial_window_size_in setting to 1 mb
Sudheer Vinukonda created TS-3185: - Summary: Increase the default spdy initial_window_size_in setting to 1 mb Key: TS-3185 URL: https://issues.apache.org/jira/browse/TS-3185 Project: Traffic Server Issue Type: Improvement Components: SPDY Reporter: Sudheer Vinukonda Currently, proxy.config.spdy.initial_window_size_in is set to the default value 64K (suggested in http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1). This suggested value is probably a holdover from older spdy versions, where the window_size field was limited to 2 octets. Note that, ideally, the client (sender) should still be able to defer the upload when the send window is exhausted until the server (receiver) sends a WINDOW_UPDATE frame indicating that sending may resume. Even so, 64K may still be too small and may increase upload latencies, especially under large concurrent upload scenarios. Hence, I would like to bump up the default value for the spdy initial window size to a higher value (preferably 1 MB, which the more popular spdy-enabled web servers use). Here's some basic benchmarking data: 5 parallel uploads from APAC to the US totaling 2,299,250 bytes. spdy=off: 2.34 secs spdy=on, spdy.initial_window_size_in=64k: 11.28 secs spdy=on, spdy.initial_window_size_in=1M: 1.99 secs spdy=on, spdy.initial_window_size_in=10M: 2.01 secs
[jira] [Updated] (TS-3185) Increase the default spdy initial_window_size_in setting to 1 mb
[ https://issues.apache.org/jira/browse/TS-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sudheer Vinukonda updated TS-3185: -- Description: Currently, proxy.config.spdy.initial_window_size_in is set to the default value 64K (suggested in http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1). This suggested value is perhaps due to historic reasons when the window_size was only allowed 2 octets in the older spdy versions. Note that, ideally, the client (sender) should still be able to defer the upload when the send window is exhausted until the server (receiver) sends an WINDOW_UPDATE frame indicating to resume the sending. Even so, 64K may still be too small and may result in increasing upload latencies, especially, in case of large concurrent upload scenarios. Hence, I would like to bump up the default value for the spdy initial window size to a higher value (preferably, 1 mb that more popular spdy enabled web servers are using). Here's some basic benchmarking data (thanks to [~yzlai]): 5 parallel uploads from APAC to the US with a total of 2,299,250 bytes. spdy=off, 2.34 secs spdy=on, spdy.initial_window_size_in=64k: 11.28 secs spdy=on, spdy.initial_window_size_in=1M: 1.99 secs spdy=on, spdy.initial_window_size_in=10M: 2.01 secs was: Currently, proxy.config.spdy.initial_window_size_in is set to the default value 64K (suggested in http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1). This suggested value is perhaps due to historic reasons when the window_size was only allowed 2 octets in the older spdy versions. Note that, ideally, the client (sender) should still be able to defer the upload when the send window is exhausted until the server (receiver) sends an WINDOW_UPDATE frame indicating to resume the sending. Even so, 64K may still be too small and may result in increasing upload latencies, especially, in case of large concurrent upload scenarios. 
Hence, I would like to bump up the default value for the spdy initial window size to a higher value (preferably, 1 mb that more popular spdy enabled web servers are using). Here's some basic benchmarking data (thanks to [~yzlai]): 5 parallel uploads from APAC to the US with totally 2,299,250 bytes. spdy=off, 2.34 secs spdy=on, spdy.initial_window_size_in=64k: 11.28 secs spdy=on, spdy.initial_window_size_in=1M: 1.99 secs spdy=on, spdy.initial_window_size_in=10M: 2.01 secs Increase the default spdy initial_window_size_in setting to 1 mb Key: TS-3185 URL: https://issues.apache.org/jira/browse/TS-3185 Project: Traffic Server Issue Type: Improvement Components: SPDY Reporter: Sudheer Vinukonda Currently, proxy.config.spdy.initial_window_size_in is set to the default value 64K (suggested in http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1). This suggested value is perhaps due to historic reasons when the window_size was only allowed 2 octets in the older spdy versions. Note that, ideally, the client (sender) should still be able to defer the upload when the send window is exhausted until the server (receiver) sends an WINDOW_UPDATE frame indicating to resume the sending. Even so, 64K may still be too small and may result in increasing upload latencies, especially, in case of large concurrent upload scenarios. Hence, I would like to bump up the default value for the spdy initial window size to a higher value (preferably, 1 mb that more popular spdy enabled web servers are using). Here's some basic benchmarking data (thanks to [~yzlai]): 5 parallel uploads from APAC to the US with a total of 2,299,250 bytes. spdy=off, 2.34 secs spdy=on, spdy.initial_window_size_in=64k: 11.28 secs spdy=on, spdy.initial_window_size_in=1M: 1.99 secs spdy=on, spdy.initial_window_size_in=10M: 2.01 secs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3185) Increase the default spdy initial_window_size_in setting to 1 mb
[ https://issues.apache.org/jira/browse/TS-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sudheer Vinukonda updated TS-3185: -- Description: Currently, proxy.config.spdy.initial_window_size_in is set to the default value 64K (suggested in http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1). This suggested value is perhaps due to historic reasons when the window_size was only allowed 2 octets in the older spdy versions. Note that, ideally, the client (sender) should still be able to defer the upload when the send window is exhausted until the server (receiver) sends an WINDOW_UPDATE frame indicating to resume the sending. Even so, 64K may still be too small and may result in increasing upload latencies, especially, in case of large concurrent upload scenarios. Hence, I would like to bump up the default value for the spdy initial window size to a higher value (preferably, 1 mb that more popular spdy enabled web servers are using). Here's some basic benchmarking data (thanks to [~yzlai]): 5 parallel uploads from APAC to the US with totally 2,299,250 bytes. spdy=off, 2.34 secs spdy=on, spdy.initial_window_size_in=64k: 11.28 secs spdy=on, spdy.initial_window_size_in=1M: 1.99 secs spdy=on, spdy.initial_window_size_in=10M: 2.01 secs was: Currently, proxy.config.spdy.initial_window_size_in is set to the default value 64K (suggested in http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1). This suggested value is perhaps due to historic reasons when the window_size was only allowed 2 octets in the older spdy versions. Note that, ideally, the client (sender) should still be able to defer the upload when the send window is exhausted until the server (receiver) sends an WINDOW_UPDATE frame indicating to resume the sending. Even so, 64K may still be too small and may result in increasing upload latencies, especially, in case of large concurrent upload scenarios. 
Hence, I would like to bump up the default value for the spdy initial window size to a higher value (preferably, 1 mb that more popular spdy enabled web servers are using). Here's some basic benchmarking data: 5 parallel uploads from APAC to the US with totally 2,299,250 bytes. spdy=off, 2.34 secs spdy=on, spdy.initial_window_size_in=64k: 11.28 secs spdy=on, spdy.initial_window_size_in=1M: 1.99 secs spdy=on, spdy.initial_window_size_in=10M: 2.01 secs Increase the default spdy initial_window_size_in setting to 1 mb Key: TS-3185 URL: https://issues.apache.org/jira/browse/TS-3185 Project: Traffic Server Issue Type: Improvement Components: SPDY Reporter: Sudheer Vinukonda Currently, proxy.config.spdy.initial_window_size_in is set to the default value 64K (suggested in http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1). This suggested value is perhaps due to historic reasons when the window_size was only allowed 2 octets in the older spdy versions. Note that, ideally, the client (sender) should still be able to defer the upload when the send window is exhausted until the server (receiver) sends an WINDOW_UPDATE frame indicating to resume the sending. Even so, 64K may still be too small and may result in increasing upload latencies, especially, in case of large concurrent upload scenarios. Hence, I would like to bump up the default value for the spdy initial window size to a higher value (preferably, 1 mb that more popular spdy enabled web servers are using). Here's some basic benchmarking data (thanks to [~yzlai]): 5 parallel uploads from APAC to the US with totally 2,299,250 bytes. spdy=off, 2.34 secs spdy=on, spdy.initial_window_size_in=64k: 11.28 secs spdy=on, spdy.initial_window_size_in=1M: 1.99 secs spdy=on, spdy.initial_window_size_in=10M: 2.01 secs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3185) Increase the default spdy initial_window_size_in setting to 1 mb
[ https://issues.apache.org/jira/browse/TS-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sudheer Vinukonda updated TS-3185: -- Description: Currently, proxy.config.spdy.initial_window_size_in is set to the default value 64K (suggested in http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1). This suggested value is perhaps due to historic reasons when the window_size was only allowed 2 octets in the older spdy versions. Note that, ideally, the client (sender) should still be able to defer the upload when the send window is exhausted until the server (receiver) sends an WINDOW_UPDATE frame indicating to resume the sending. Even so, 64K may still be too small and may result in increasing upload latencies, especially, in case of large concurrent upload scenarios. Hence, I would like to bump up the default value for the spdy initial window size to a higher value (preferably to 1 mb that more popular spdy enabled web servers are using). Here's some basic benchmarking data (thanks to [~yzlai]): 5 parallel uploads from APAC to the US with a total of 2,299,250 bytes. spdy=off, 2.34 secs spdy=on, spdy.initial_window_size_in=64k: 11.28 secs spdy=on, spdy.initial_window_size_in=1M: 1.99 secs spdy=on, spdy.initial_window_size_in=10M: 2.01 secs was: Currently, proxy.config.spdy.initial_window_size_in is set to the default value 64K (suggested in http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1). This suggested value is perhaps due to historic reasons when the window_size was only allowed 2 octets in the older spdy versions. Note that, ideally, the client (sender) should still be able to defer the upload when the send window is exhausted until the server (receiver) sends an WINDOW_UPDATE frame indicating to resume the sending. Even so, 64K may still be too small and may result in increasing upload latencies, especially, in case of large concurrent upload scenarios. 
Hence, I would like to bump up the default value for the spdy initial window size to a higher value (preferably, 1 mb that more popular spdy enabled web servers are using). Here's some basic benchmarking data (thanks to [~yzlai]): 5 parallel uploads from APAC to the US with a total of 2,299,250 bytes. spdy=off, 2.34 secs spdy=on, spdy.initial_window_size_in=64k: 11.28 secs spdy=on, spdy.initial_window_size_in=1M: 1.99 secs spdy=on, spdy.initial_window_size_in=10M: 2.01 secs Increase the default spdy initial_window_size_in setting to 1 mb Key: TS-3185 URL: https://issues.apache.org/jira/browse/TS-3185 Project: Traffic Server Issue Type: Improvement Components: SPDY Reporter: Sudheer Vinukonda Currently, proxy.config.spdy.initial_window_size_in is set to the default value 64K (suggested in http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1). This suggested value is perhaps due to historic reasons when the window_size was only allowed 2 octets in the older spdy versions. Note that, ideally, the client (sender) should still be able to defer the upload when the send window is exhausted until the server (receiver) sends an WINDOW_UPDATE frame indicating to resume the sending. Even so, 64K may still be too small and may result in increasing upload latencies, especially, in case of large concurrent upload scenarios. Hence, I would like to bump up the default value for the spdy initial window size to a higher value (preferably to 1 mb that more popular spdy enabled web servers are using). Here's some basic benchmarking data (thanks to [~yzlai]): 5 parallel uploads from APAC to the US with a total of 2,299,250 bytes. spdy=off, 2.34 secs spdy=on, spdy.initial_window_size_in=64k: 11.28 secs spdy=on, spdy.initial_window_size_in=1M: 1.99 secs spdy=on, spdy.initial_window_size_in=10M: 2.01 secs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TS-3185) Increase the default spdy initial_window_size_in setting to 1 mb
[ https://issues.apache.org/jira/browse/TS-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sudheer Vinukonda updated TS-3185: -- Description: Currently, proxy.config.spdy.initial_window_size_in is set to the default value 64K (suggested in http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1). This suggested value is perhaps due to historic reasons when the window_size was only allowed 2 octets in the older spdy versions. Note that, ideally, the client (sender) should still be able to defer the upload when the send window is exhausted until the server (receiver) sends an WINDOW_UPDATE frame indicating to resume the sending. Even so, 64K may still be too small and may result in increasing upload latencies, especially, in case of large concurrent upload scenarios. Hence, I would like to bump up the default value for the spdy initial window size to a higher value (preferably to 1 mb like some of the more popular spdy enabled web servers are using). Here's some basic benchmarking data (thanks to [~yzlai]): 5 parallel uploads from APAC to the US with a total of 2,299,250 bytes. spdy=off, 2.34 secs spdy=on, spdy.initial_window_size_in=64k: 11.28 secs spdy=on, spdy.initial_window_size_in=1M: 1.99 secs spdy=on, spdy.initial_window_size_in=10M: 2.01 secs was: Currently, proxy.config.spdy.initial_window_size_in is set to the default value 64K (suggested in http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1). This suggested value is perhaps due to historic reasons when the window_size was only allowed 2 octets in the older spdy versions. Note that, ideally, the client (sender) should still be able to defer the upload when the send window is exhausted until the server (receiver) sends an WINDOW_UPDATE frame indicating to resume the sending. Even so, 64K may still be too small and may result in increasing upload latencies, especially, in case of large concurrent upload scenarios. 
Hence, I would like to bump up the default value for the spdy initial window size to a higher value (preferably to 1 mb that more popular spdy enabled web servers are using). Here's some basic benchmarking data (thanks to [~yzlai]): 5 parallel uploads from APAC to the US with a total of 2,299,250 bytes. spdy=off, 2.34 secs spdy=on, spdy.initial_window_size_in=64k: 11.28 secs spdy=on, spdy.initial_window_size_in=1M: 1.99 secs spdy=on, spdy.initial_window_size_in=10M: 2.01 secs Increase the default spdy initial_window_size_in setting to 1 mb Key: TS-3185 URL: https://issues.apache.org/jira/browse/TS-3185 Project: Traffic Server Issue Type: Improvement Components: SPDY Reporter: Sudheer Vinukonda Currently, proxy.config.spdy.initial_window_size_in is set to the default value 64K (suggested in http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1). This suggested value is perhaps due to historic reasons when the window_size was only allowed 2 octets in the older spdy versions. Note that, ideally, the client (sender) should still be able to defer the upload when the send window is exhausted until the server (receiver) sends an WINDOW_UPDATE frame indicating to resume the sending. Even so, 64K may still be too small and may result in increasing upload latencies, especially, in case of large concurrent upload scenarios. Hence, I would like to bump up the default value for the spdy initial window size to a higher value (preferably to 1 mb like some of the more popular spdy enabled web servers are using). Here's some basic benchmarking data (thanks to [~yzlai]): 5 parallel uploads from APAC to the US with a total of 2,299,250 bytes. spdy=off, 2.34 secs spdy=on, spdy.initial_window_size_in=64k: 11.28 secs spdy=on, spdy.initial_window_size_in=1M: 1.99 secs spdy=on, spdy.initial_window_size_in=10M: 2.01 secs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
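If the proposal is adopted, the change amounts to a one-line default bump; as a sketch, an operator could override it today in records.config like this, assuming the record is an INT measured in bytes (1 MB = 1048576; the exact type tag should be checked against RecordsConfig.cc):

{noformat}
CONFIG proxy.config.spdy.initial_window_size_in INT 1048576
{noformat}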
[jira] [Commented] (TS-2325) remap.config .include should support directories
[ https://issues.apache.org/jira/browse/TS-2325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14205871#comment-14205871 ] Masakazu Kitajo commented on TS-2325: - Why is this postponed to 6.0? I think it doesn't break any compatibility. remap.config .include should support directories Key: TS-2325 URL: https://issues.apache.org/jira/browse/TS-2325 Project: Traffic Server Issue Type: Improvement Components: Configuration, Core Reporter: James Peach Fix For: 6.0.0 Attachments: ts2325.diff The remap.config .include directive should support including a directory. The implementation for this would be to simply read all the files in the directory and include each one. I don't think the files in the directory should be sorted, since that requires us to read all the names into memory, and there might be a very large number of them. Typical ordering constraints can be expressed using multiple directories.
[jira] [Created] (TS-3186) support ocsp queries through a proxy
Atsutomo Kotani created TS-3186: --- Summary: support ocsp queries through a proxy Key: TS-3186 URL: https://issues.apache.org/jira/browse/TS-3186 Project: Traffic Server Issue Type: Improvement Components: SSL Reporter: Atsutomo Kotani When ATS is behind an http proxy, it needs to send its ocsp queries through that proxy for ocsp stapling to work.
[jira] [Updated] (TS-3186) support ocsp queries through a proxy
[ https://issues.apache.org/jira/browse/TS-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atsutomo Kotani updated TS-3186: Attachment: ocsp_proxy.diff This patch supports ocsp queries through a proxy. {noformat} CONFIG proxy.config.ssl.ocsp.proxy_host STRING localhost CONFIG proxy.config.ssl.ocsp.proxy_port STRING 8001 {noformat} support ocsp queries through a proxy - Key: TS-3186 URL: https://issues.apache.org/jira/browse/TS-3186 Project: Traffic Server Issue Type: Improvement Components: SSL Reporter: Atsutomo Kotani Attachments: ocsp_proxy.diff When ATS is behind an http proxy, it needs to send its ocsp queries through that proxy for ocsp stapling to work.
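For background on what routing an OCSP query through a forward proxy entails: the client opens its TCP connection to proxy_host:proxy_port and puts the responder's absolute URI in the request line instead of a relative path, per standard HTTP proxy semantics. A minimal sketch of the request framing, with hypothetical names not taken from ocsp_proxy.diff:

```cpp
#include <string>

// Illustrative only: build the header block for an OCSP POST that will be
// sent to a forward HTTP proxy rather than directly to the responder.
// The absolute URI in the request line tells the proxy where to forward it.
std::string build_proxied_ocsp_request(const std::string &responder_url,
                                       const std::string &responder_host,
                                       std::size_t body_len) {
  std::string req;
  req += "POST " + responder_url + " HTTP/1.1\r\n"; // absolute-form target
  req += "Host: " + responder_host + "\r\n";
  req += "Content-Type: application/ocsp-request\r\n";
  req += "Content-Length: " + std::to_string(body_len) + "\r\n\r\n";
  return req; // DER-encoded OCSP request body follows these headers
}
```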
[jira] [Assigned] (TS-2325) remap.config .include should support directories
[ https://issues.apache.org/jira/browse/TS-2325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom reassigned TS-2325: - Assignee: Leif Hedstrom remap.config .include should support directories Key: TS-2325 URL: https://issues.apache.org/jira/browse/TS-2325 Project: Traffic Server Issue Type: Improvement Components: Configuration, Core Reporter: James Peach Assignee: Leif Hedstrom Fix For: 5.2.0 Attachments: ts2325.diff The remap.config .include directive should support including a directory. The implementation for this would be to simply read all the files in the directory and include each one. I don't think the files in the directory should be sorted, since that requires us to read all the names into memory, and there might be a very large number of them. Typical ordering constraints can be expressed using multiple directories.
[jira] [Updated] (TS-2325) remap.config .include should support directories
[ https://issues.apache.org/jira/browse/TS-2325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-2325: -- Fix Version/s: (was: 6.0.0) 5.2.0 remap.config .include should support directories Key: TS-2325 URL: https://issues.apache.org/jira/browse/TS-2325 Project: Traffic Server Issue Type: Improvement Components: Configuration, Core Reporter: James Peach Fix For: 5.2.0 Attachments: ts2325.diff The remap.config .include directive should support including a directory. The implementation for this would be to simply read all the files in the directory and include each one. I don't think the files in the directory should be sorted, since that requires us to read all the names into memory, and there might be a very large number of them. Typical ordering constraints can be expressed using multiple directories.
[jira] [Updated] (TS-2325) remap.config .include should support directories
[ https://issues.apache.org/jira/browse/TS-2325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leif Hedstrom updated TS-2325: -- Labels: review (was: ) remap.config .include should support directories Key: TS-2325 URL: https://issues.apache.org/jira/browse/TS-2325 Project: Traffic Server Issue Type: Improvement Components: Configuration, Core Reporter: James Peach Assignee: Leif Hedstrom Labels: review Fix For: 5.2.0 Attachments: ts2325.diff The remap.config .include directive should support including a directory. The implementation for this would be to simply read all the files in the directory and include each one. I don't think the files in the directory should be sorted, since that requires us to read all the names into memory, and there might be a very large number of them. Typical ordering constraints can be expressed using multiple directories.
[jira] [Commented] (TS-2325) remap.config .include should support directories
[ https://issues.apache.org/jira/browse/TS-2325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14205973#comment-14205973 ] Leif Hedstrom commented on TS-2325: --- Yeah, not sure why it got moved out, moved it to 5.2.0. remap.config .include should support directories Key: TS-2325 URL: https://issues.apache.org/jira/browse/TS-2325 Project: Traffic Server Issue Type: Improvement Components: Configuration, Core Reporter: James Peach Assignee: Leif Hedstrom Labels: review Fix For: 5.2.0 Attachments: ts2325.diff The remap.config .include directive should support including a directory. The implementation for this would be to simply read all the files in the directory and include each one. I don't think the files in the directory should be sorted, since that requires us to read all the names into memory, and there might be a very large number of them. Typical ordering constraints can be expressed using multiple directories.