[jira] [Commented] (TS-3235) PluginVC crashed with unrecognized event

2015-01-21 Thread zouyu (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285386#comment-14285386
 ] 

zouyu commented on TS-3235:
---

Attaching the pull request for this problem. 

[TS-3235] fix crash problem caused by sync problem in PluginVC.
commit: e1ee3f517a6a916056bd95a4a3c78f3d81685da3


This fix is for the multi-thread synchronization problem in PluginVC described 
in TS-3235. The scenario is as follows:
1. The customer's intercept plugin calls TSVIOReenable and TSVConnClose, which 
in turn call PluginVC's APIs, from the customer's own work threads.
2. The customer creates those work threads with TSThreadCreate, so they are not 
managed by ATS.
3. Because the intercept plugin runs in those work threads, the lock the plugin 
provides can only synchronize the plugin's own threads; it cannot synchronize 
them against the ATS threads. The race therefore occurs when TSVIOReenable and 
TSVConnClose are called from the intercept plugin in the customer's work 
threads.

So we also need to handle this kind of scenario, since it is common for 
customers to start their own threads.

> PluginVC crashed with unrecognized event
> 
>
> Key: TS-3235
> URL: https://issues.apache.org/jira/browse/TS-3235
> Project: Traffic Server
>  Issue Type: Bug
>  Components: CPP API, HTTP, Plugins
>Reporter: kang li
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
>
> We are using atscppapi to create an Intercept plugin.
>  
> From the coredump, it seems the Continuation of the InterceptPlugin had 
> already been destroyed. 
> {code}
> #0  0x00375ac32925 in raise () from /lib64/libc.so.6
> #1  0x00375ac34105 in abort () from /lib64/libc.so.6
> #2  0x2b21eeae3458 in ink_die_die_die (retval=1) at ink_error.cc:43
> #3  0x2b21eeae3525 in ink_fatal_va(int, const char *, typedef 
> __va_list_tag __va_list_tag *) (return_code=1, 
> message_format=0x2b21eeaf08d8 "%s:%d: failed assert `%s`", 
> ap=0x2b21f4913ad0) at ink_error.cc:65
> #4  0x2b21eeae35ee in ink_fatal (return_code=1, 
> message_format=0x2b21eeaf08d8 "%s:%d: failed assert `%s`") at ink_error.cc:73
> #5  0x2b21eeae2160 in _ink_assert (expression=0x76ddb8 "call_event == 
> core_lock_retry_event", file=0x76dd04 "PluginVC.cc", line=203)
> at ink_assert.cc:37
> #6  0x00530217 in PluginVC::main_handler (this=0x2b24ef007cb8, 
> event=1, data=0xe0f5b80) at PluginVC.cc:203
> #7  0x004f5854 in Continuation::handleEvent (this=0x2b24ef007cb8, 
> event=1, data=0xe0f5b80) at ../iocore/eventsystem/I_Continuation.h:146
> #8  0x00755d26 in EThread::process_event (this=0x309b250, 
> e=0xe0f5b80, calling_code=1) at UnixEThread.cc:145
> #9  0x0075610a in EThread::execute (this=0x309b250) at 
> UnixEThread.cc:239
> #10 0x00755284 in spawn_thread_internal (a=0x2849330) at Thread.cc:88
> #11 0x2b21ef05f9d1 in start_thread () from /lib64/libpthread.so.0
> #12 0x00375ace8b7d in clone () from /lib64/libc.so.6
> (gdb) p sm_lock_retry_event
> $13 = (Event *) 0x2b2496146e90
> (gdb) p core_lock_retry_event
> $14 = (Event *) 0x0
> (gdb) p active_event
> $15 = (Event *) 0x0
> (gdb) p inactive_event
> $16 = (Event *) 0x0
> (gdb) p *(INKContInternal*)this->core_obj->connect_to
> Cannot access memory at address 0x2b269cd46c10
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: clang-analyzer #286

2015-01-21 Thread jenkins
See 

Changes:

[Leif Hedstrom] Fix indentation

[Leif Hedstrom] TS-3308 Add an explicit -lpthread for gcc to build with ASAN 
options

[Leif Hedstrom] Change the clean target for tsqa builds

[amc] Doc: proxy.config.http.attach_server_session_to_client

[Leif Hedstrom] [TS-2421] MultiCache could theoretically create world-writeable

[shinrich] TS-3307: TSVConnFdCreate does not allow non-socket file descriptor

--
[...truncated 1793 lines...]
Making all in tcpinfo
make[2]: Entering directory 
`
  CXX  tcpinfo.lo
  CXXLD  tcpinfo.la
make[2]: Leaving directory 
`
Making all in experimental
make[2]: Entering directory 
`
Making all in authproxy
make[3]: Entering directory 
`
  CXX  authproxy.lo
  CXX  utils.lo
  CXXLD  authproxy.la
make[3]: Leaving directory 
`
Making all in background_fetch
make[3]: Entering directory 
`
  CXX  background_fetch.lo
  CXXLD  background_fetch.la
make[3]: Leaving directory 
`
Making all in balancer
make[3]: Entering directory 
`
  CXX  roundrobin.lo
  CXX  balancer.lo
  CXX  hash.lo
  CXXLD  balancer.la
make[3]: Leaving directory 
`
Making all in buffer_upload
make[3]: Entering directory 
`
  CXX  buffer_upload.lo
  CXXLD  buffer_upload.la
make[3]: Leaving directory 
`
Making all in channel_stats
make[3]: Entering directory 
`
  CXX  channel_stats.lo
  CXXLD  channel_stats.la
make[3]: Leaving directory 
`
Making all in collapsed_connection
make[3]: Entering directory 
`
  CXX  collapsed_connection.lo
  CXX  MurmurHash3.lo
  CXXLD  collapsed_connection.la
make[3]: Leaving directory 
`
Making all in custom_redirect
make[3]: Entering directory 
`
  CXX  custom_redirect.lo
  CXXLD  custom_redirect.la
make[3]: Leaving directory 
`
Making all in epic
make[3]: Entering directory 
`
  CXX  epic.lo
  CXXLD  epic.la
make[3]: Leaving directory 
`
Making all in escalate
make[3]: Entering directory 
`
  CXX  escalate.lo
  CXXLD  escalate.la
make[3]: Leaving directory 
`
Making all in esi
make[3]: Entering directory 
`
  CXX  esi.lo
  CXX  serverIntercept.lo
  CXX  combo_handler.lo
  CXX  lib/DocNode.lo
  CXX  lib/EsiParser.lo
In file included from combo_handler.cc:27:
In file included from 
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.8.3/../../../../include/c++/4.8.3/vector:64:
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.8.3/../../../../include/c++/4.8.3/bits/stl_vector.h:771:9:
 warning: Returning null reference
  { return *(this->_M_impl._M_start + __n); }
^~
1 warning generated.
  CXX  lib/EsiGzip.lo
  CXX  lib/EsiGunzip.lo
  CXX  lib/EsiProcessor.lo
  CXX  lib/Expression.lo
  CXX  lib/FailureInfo.lo
  CXX  lib/HandlerManager.lo
  CXX  lib/Stats.lo
  CXX  lib/U

[jira] [Updated] (TS-3235) PluginVC crashed with unrecognized event

2015-01-21 Thread zouyu (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zouyu updated TS-3235:
--
Attachment: pluginvc-crash.diff

> PluginVC crashed with unrecognized event
> 
>
> Key: TS-3235
> URL: https://issues.apache.org/jira/browse/TS-3235
> Project: Traffic Server
>  Issue Type: Bug
>  Components: CPP API, HTTP, Plugins
>Reporter: kang li
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: pluginvc-crash.diff
>
>





[jira] [Comment Edited] (TS-3235) PluginVC crashed with unrecognized event

2015-01-21 Thread zouyu (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285386#comment-14285386
 ] 

zouyu edited comment on TS-3235 at 1/21/15 10:43 AM:
-

Attaching the pull request for this problem. 

[TS-3235] fix crash problem caused by sync problem in PluginVC.
commit: e1ee3f517a6a916056bd95a4a3c78f3d81685da3
https://github.com/apache/trafficserver/pull/164

This fix is for the multi-thread synchronization problem in PluginVC described 
in TS-3235. The scenario is as follows:
1. The customer's intercept plugin calls TSVIOReenable and TSVConnClose, which 
in turn call PluginVC's APIs, from the customer's own work threads.
2. The customer creates those work threads with TSThreadCreate, so they are not 
managed by ATS.
3. Because the intercept plugin runs in those work threads, the lock the plugin 
provides can only synchronize the plugin's own threads; it cannot synchronize 
them against the ATS threads. The race therefore occurs when TSVIOReenable and 
TSVConnClose are called from the intercept plugin in the customer's work 
threads.

So we also need to handle this kind of scenario, since it is common for 
customers to start their own threads.


was (Author: zouy):
Attaching the pull request for this problem. 

[TS-3235] fix crash problem caused by sync problem in PluginVC.
commit: e1ee3f517a6a916056bd95a4a3c78f3d81685da3


This fix is for the multi-thread synchronization problem in PluginVC described 
in TS-3235. The scenario is as follows:
1. The customer's intercept plugin calls TSVIOReenable and TSVConnClose, which 
in turn call PluginVC's APIs, from the customer's own work threads.
2. The customer creates those work threads with TSThreadCreate, so they are not 
managed by ATS.
3. Because the intercept plugin runs in those work threads, the lock the plugin 
provides can only synchronize the plugin's own threads; it cannot synchronize 
them against the ATS threads. The race therefore occurs when TSVIOReenable and 
TSVConnClose are called from the intercept plugin in the customer's work 
threads.

So we also need to handle this kind of scenario, since it is common for 
customers to start their own threads.

> PluginVC crashed with unrecognized event
> 
>
> Key: TS-3235
> URL: https://issues.apache.org/jira/browse/TS-3235
> Project: Traffic Server
>  Issue Type: Bug
>  Components: CPP API, HTTP, Plugins
>Reporter: kang li
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: pluginvc-crash.diff
>
>





[jira] [Commented] (TS-3304) segfault in libtsutils

2015-01-21 Thread Steve Malenfant (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285620#comment-14285620
 ] 

Steve Malenfant commented on TS-3304:
-

From the records.config, not sure if this has anything to do with it since 
that's the only place I could see a reference to "localhost" in the config:
records.config:CONFIG proxy.config.log.hostname STRING localhost

With hostdb and dns debug :

[Jan 21 12:18:01.733] Server {0x2aaca5e1d700} DEBUG: (hostdb) hostname = 
localhost
[Jan 21 12:18:11.733] Server {0x2aac9dc7c000} DEBUG: (hostdb) probe 127.0.0.1 
c3aab60f33f8019c 1 [ignore_timeout = 0]
[Jan 21 12:18:11.734] Server {0x2aac9dc7c000} DEBUG: (hostdb) immediate answer 
for 127.0.0.1
[Jan 21 12:18:11.734] Server {0x2aac9dc7c000} DEBUG: (hostdb) probe  
aa30de0f80a82135 1 [ignore_timeout = 0]
[Jan 21 12:18:11.734] Server {0x2aac9dc7c000} DEBUG: (hostdb) immediate answer 
for 127.0.0.1
[Jan 21 12:18:11.734] Server {0x2aac9dc7c000} DEBUG: (hostdb) hostname = 
localhost
[Jan 21 12:18:21.734] Server {0x2aaca4a09700} DEBUG: (hostdb) probe 127.0.0.1 
c3aab60f33f8019c 1 [ignore_timeout = 0]
[Jan 21 12:18:21.734] Server {0x2aaca4a09700} DEBUG: (hostdb) immediate answer 
for 127.0.0.1
[Jan 21 12:18:21.734] Server {0x2aaca4a09700} DEBUG: (hostdb) probe  
aa30de0f80a82135 1 [ignore_timeout = 0]
[Jan 21 12:18:21.734] Server {0x2aaca4a09700} DEBUG: (hostdb) serving stale 
entry 86400 | 6 | 86400 as requested by config
[Jan 21 12:18:21.734] Server {0x2aaca4a09700} DEBUG: (hostdb) serving stale 
entry 86400 | 6 | 86400 as requested by config
[Jan 21 12:18:21.734] Server {0x2aaca4a09700} DEBUG: (hostdb) stale 86400 
1421756537 86400, using it and refreshing it
NOTE: Traffic Server received Sig 11: Segmentation fault
/opt/trafficserver/bin/traffic_server - STACK TRACE:
/lib64/libpthread.so.0(+0x381b60f710)[0x2aac9d2e5710]
/opt/trafficserver/lib/libtsutil.so.4(_Z13ink_inet_addrPKc+0x8)[0x2aac9d0ba1d8]
/opt/trafficserver/bin/traffic_server(_Z5probeP10ProxyMutexRK9HostDBMD5b+0x315)[0x5e0df5]
/opt/trafficserver/bin/traffic_server(_ZN15HostDBProcessor5getbyEP12ContinuationPKciPK8sockaddrb12HostResStylei+0x4e4)[0x5e2b34]
/opt/trafficserver/bin/traffic_server(_ZN6HttpSM24do_hostdb_reverse_lookupEv+0x4c)[0x517f2c]
/opt/trafficserver/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x778)[0x52f028]
/opt/trafficserver/bin/traffic_server(_ZN6HttpSM16do_hostdb_lookupEv+0x282)[0x518242]
/opt/trafficserver/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0xb9a)[0x52f44a]
/opt/trafficserver/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x32a)[0x5284fa]
/opt/trafficserver/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x1ea)[0x52ea9a]
/opt/trafficserver/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x1da)[0x52ea8a]
/opt/trafficserver/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x32a)[0x5284fa]
/opt/trafficserver/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x1ea)[0x52ea9a]
/opt/trafficserver/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x32a)[0x5284fa]
/opt/trafficserver/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x1ea)[0x52ea9a]
/opt/trafficserver/bin/traffic_server(_ZN6HttpSM32state_read_client_request_headerEiPv+0x226)[0x52fef6]
/opt/trafficserver/bin/traffic_server(_ZN6HttpSM12main_handlerEiPv+0xd8)[0x52a5b8]
/opt/trafficserver/bin/traffic_server[0x68793b]
/opt/trafficserver/bin/traffic_server[0x689ec4]
/opt/trafficserver/bin/traffic_server(_ZN10NetHandler12mainNetEventEiP5Event+0x1f2)[0x67fb12]
/opt/trafficserver/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x8f)[0x6ac8cf]
/opt/trafficserver/bin/traffic_server(_ZN7EThread7executeEv+0x493)[0x6ad273]
/opt/trafficserver/bin/traffic_server[0x6abc2a]
/lib64/libpthread.so.0(+0x381b6079d1)[0x2aac9d2dd9d1]


> segfault in libtsutils
> --
>
> Key: TS-3304
> URL: https://issues.apache.org/jira/browse/TS-3304
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HostDB
>Reporter: Steve Malenfant
>Assignee: Leif Hedstrom
> Fix For: 5.3.0
>
>
> Getting multiple segfaults per day on 4.2.1. 
> [4324544.324222] [ET_NET 23][10504]: segfault at 0 ip 2acd66546168 sp 
> 2acd71f190b8 error 4 in libtsutil.so.4.2.1[2acd66521000+34000]
> [4410696.820857] [ET_NET 19][22738]: segfault at 0 ip 2af09f339168 sp 
> 2af0aa9230b8 error 4 in libtsutil.so.4.2.1[2af09f314000+34000]
> [4497039.474253] [ET_NET 12][34872]: segfault at 0 ip 2ad17e6a1168 sp 
> 2ad1896100b8 error 4 in libtsutil.so.4.2.1[2ad17e67c000+34000]
> [4583372.073916] [ET_NET 3][46994]: segfault at 0 ip 2aced4227168 sp 
> 2aceda7d80b8 error 4 in libtsutil.so.4.2.1[2aced4202000+34000]
> [4756046.944373] [ET_NET 22][10799]: segfault at 0 ip 2b1771f76168 sp 
> 2b177d9130b8 error 4 in libtsutil.so.4.2.1[2b1771f51000+34000]
> Stack Trace :
> (gdb) bt
> #0  ink_

[jira] [Commented] (TS-3235) PluginVC crashed with unrecognized event

2015-01-21 Thread portl4t (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285654#comment-14285654
 ] 

portl4t commented on TS-3235:
-

Well, I think there is no doubt that a continuation cannot be shared by 
several threads concurrently; some continuations are operated on in the plugin 
on the assumption that their locks are already held by the ATS worker threads.

> PluginVC crashed with unrecognized event
> 
>
> Key: TS-3235
> URL: https://issues.apache.org/jira/browse/TS-3235
> Project: Traffic Server
>  Issue Type: Bug
>  Components: CPP API, HTTP, Plugins
>Reporter: kang li
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: pluginvc-crash.diff
>
>





[jira] [Commented] (TS-3304) segfault in libtsutils

2015-01-21 Thread Phil Sorber (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285962#comment-14285962
 ] 

Phil Sorber commented on TS-3304:
-

[~amc], [~zwoop],

Are we ok with my patch as the backport fix, or is there some better patch that 
is imminent?

> segfault in libtsutils
> --
>
> Key: TS-3304
> URL: https://issues.apache.org/jira/browse/TS-3304
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HostDB
>Reporter: Steve Malenfant
>Assignee: Leif Hedstrom
> Fix For: 5.3.0
>
>
> Stack Trace :
> (gdb) bt
> #0  ink_inet_addr (s=) at ink_inet.cc:107
> #1  0x005e0df5 in is_dotted_form_hostname (mutex=0x1d32cb0, md5=..., 
> ignore_timeout=false) at P_HostDBProcessor.h:545
> #2  probe (mutex=0x1d32cb0, md5=..., ignore_timeout=false) at HostDB.cc:668
> #3  0x005e2b34 in HostDBProcessor::getby (this=, 
> cont=0x2b514cc749d0, hostname=0x0, len=, 
> ip=0x2b50e8f092b0, aforce_dns=false, host_res_style=HOST_RES_NONE, 
> dns_lookup_timeout=0)
> at HostDB.cc:772
> #4  0x00517f2c in getbyaddr_re (this=0x2b514cc749d0) at 
> ../../iocore/hostdb/I_HostDBProcessor.h:417
> #5  HttpSM::do_hostdb_reverse_lookup (this=0x2b514cc749d0) at HttpSM.cc:3968
> #6  0x0052f028 in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6932
> #7  0x00518242 in HttpSM::do_hostdb_lookup (this=0x2b514cc749d0) at 
> HttpSM.cc:3950
> #8  0x0052f44a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6925
> #9  0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #10 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #11 0x0052ea8a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:7224
> #12 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #13 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #14 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #15 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #16 0x0052fef6 in HttpSM::state_read_client_request_header 
> (this=0x2b514cc749d0, event=100, data=) at HttpSM.cc:821
> #17 0x0052a5b8 in HttpSM::main_handler (this=0x2b514cc749d0, 
> event=100, data=0x2b514802ca08) at HttpSM.cc:2539
> #18 0x0068793b in handleEvent (event=, 
> vc=0x2b514802c900) at ../../iocore/eventsystem/I_Continuation.h:146
> #19 read_signal_and_update (event=, vc=0x2b514802c900) 
> at UnixNetVConnection.cc:138
> #20 0x00689ec4 in read_from_net (nh=0x2b50e2e17c10, 
> vc=0x2b514802c900, thread=) at UnixNetVConnection.cc:320
> #21 0x0067fb12 in NetHandler::mainNetEvent (this=0x2b50e2e17c10, 
> event=, e=) at UnixNet.cc:384
> #22 0x006ac8cf in handleEvent (this=0x2b50e2e14010, e=0x1a9ef30, 
> calling_code=5) at I_Continuation.h:146
> #23 EThread::process_event (this=0x2b50e2e14010, e=0x1a9ef30, calling_code=5) 
> at UnixEThread.cc:145
> #24 0x006ad273 in EThread::execute (this=0x2b50e2e14010) at 
> UnixEThread.cc:269
> #25 0x006abc2a in spawn_thread_internal (a=0x198f820) at Thread.cc:88
> #26 0x2b50e026b9d1 in start_thread () from /lib64/libpthread.so.0
> #27 0x00381b2e8b6d in clone () from /lib64/libc.so.6
>  





[jira] [Commented] (TS-3309) TLS Session tickets docs

2015-01-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285911#comment-14285911
 ] 

ASF subversion and git services commented on TS-3309:
-

Commit 86bda3532dc72389f9e88d58af07309cf0d92411 in trafficserver's branch 
refs/heads/master from [~bzeng]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=86bda35 ]

TS-3309: document TLS session ticket rotation


> TLS Session tickets docs
> 
>
> Key: TS-3309
> URL: https://issues.apache.org/jira/browse/TS-3309
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, Security, SSL
>Reporter: Bin
>Assignee: James Peach
> Fix For: 5.3.0
>
> Attachments: traffic_line_rotation_doc.diff
>
>
> Add a few words to describe the TLS session ticket keys rotation for TS-3301. 
> jpeach is the best person to review it.





[jira] [Updated] (TS-3283) Certain SSL handshake error during client-hello hangs the client and leaves network connection open

2015-01-21 Thread Phil Sorber (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Sorber updated TS-3283:

Backport to Version: 4.2.3

> Certain SSL handshake error during client-hello hangs the client and leaves 
> network connection open
> ---
>
> Key: TS-3283
> URL: https://issues.apache.org/jira/browse/TS-3283
> Project: Traffic Server
>  Issue Type: Bug
>  Components: SSL
>Reporter: Joe Chung
>Assignee: Susan Hinrichs
> Fix For: 5.3.0
>
>
> h3. Problem Description
> Send an SSLv2 Client Hello with an old cipher suite request against Traffic 
> Server 4.2.2, and the connection will freeze on the client side and 
> eventually time out after 120 seconds.
> The Traffic Server detects the SSL error, but instead of closing the 
> connection, goes on to accept new connections.
> h3. Reproduction
> === Client: Macbook Pro running OSX Mavericks 10.9.5 ===
> {code:none}
> $ openssl version -a
> OpenSSL 0.9.8za 5 Jun 2014
> built on: Aug 10 2014
> platform: darwin64-x86_64-llvm
> options:  bn(64,64) md2(int) rc4(ptr,char) des(idx,cisc,16,int) blowfish(idx)
> compiler: -arch x86_64 -fmessage-length=0 -pipe -Wno-trigraphs 
> -fpascal-strings -fasm-blocks -O3 -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H 
> -DL_ENDIAN -DMD32_REG_T=int -DOPENSSL_NO_IDEA -DOPENSSL_PIC -DOPENSSL_THREADS 
> -DZLIB -mmacosx-version-min=10.6
> OPENSSLDIR: "/System/Library/OpenSSL"
> {code}
> h4. The following command triggers the bad behavior on the 4.2.2 server.
> {code:none}
> $ openssl s_client -connect 192.168.20.130:443 -ssl2 -debug
> CONNECTED(0003)
> write to 0x7fb9f2508610 [0x7fb9f300f201] (45 bytes => 45 (0x2D))
>  - 80 2b 01 00 02 00 12 00-00 00 10 07 00 c0 03 00   .+..
> 0010 - 80 01 00 80 06 00 40 04-00 80 02 00 80 f4 71 1a   ..@...q.
> 0020 - ad 23 06 59 4d f8 d2 c5-b2 57 a9 66 4c.#.YMW.fL
> ^C
> {code}
> At this point, the client is hung, and I have to hit ctrl-c to interrupt it 
> or wait 120 seconds for tcp timeout.
> h3. Server: Lubuntu 13.10 on VMware
> {code:none}
> $ openssl version -a
> OpenSSL 1.0.1e 11 Feb 2013
> built on: Fri Jun 20 18:52:25 UTC 2014
> platform: debian-i386
> options:  bn(64,32) rc4(8x,mmx) des(ptr,risc1,16,long) blowfish(idx) 
> compiler: cc -fPIC -DOPENSSL_PIC -DZLIB -DOPENSSL_THREADS -D_REENTRANT 
> -DDSO_DLFCN -DHAVE_DLFCN_H -DL_ENDIAN -DTERMIO -g -O2 -fstack-protector 
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security 
> -D_FORTIFY_SOURCE=2 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wa,--noexecstack 
> -Wall -DOPENSSL_NO_TLS1_2_CLIENT -DOPENSSL_MAX_TLS1_2_CIPHER_LENGTH=50 
> -DOPENSSL_BN_ASM_PART_WORDS -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT 
> -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM 
> -DRMD160_ASM -DAES_ASM -DVPAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM
> OPENSSLDIR: "/usr/lib/ssl"
> {code}
> {code:none}
> $ diff /usr/local/etc/trafficserver/records.config.422 
> /usr/local/etc/trafficserver/records.config
> 113c113
> < CONFIG proxy.config.http.server_ports STRING 8080
> ---
> > CONFIG proxy.config.http.server_ports STRING 8080 443:ssl
> 594,595c594,595
> < CONFIG proxy.config.diags.debug.enabled INT 0
> < CONFIG proxy.config.diags.debug.tags STRING http.*|dns.*
> ---
> > CONFIG proxy.config.diags.debug.enabled INT 1
> > CONFIG proxy.config.diags.debug.tags STRING ssl.*
> {code}
> {code:none}
> $ /usr/local/bin/traffic_server --version
> [TrafficServer] using root directory '/usr/local'
> Apache Traffic Server - traffic_server - 4.2.2 - (build # 0723 on Jan  7 2015 
> at 23:04:32)
> $ sudo /usr/local/bin/traffic_server
> [sudo] password for user:
> [TrafficServer] using root directory '/usr/local'
> [Jan  8 00:53:42.618] Server {0xb702e700} DEBUG: (ssl) setting SNI callbacks 
> with for ctx 0xa4a7928
> [Jan  8 00:53:42.618] Server {0xb702e700} DEBUG: (ssl) indexed '*' with 
> SSL_CTX 0xa4a7928
> [Jan  8 00:53:42.619] Server {0xb702e700} DEBUG: (ssl) importing SNI names 
> from /usr/local/etc/trafficserver
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) 
> [SSLNextProtocolAccept:mainEvent] event 202 netvc 0xb280fa90
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) ssl_callback_info ssl: 
> 0xb280fcb8 where: 16 ret: 1
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) ssl_callback_info ssl: 
> 0xb280fcb8 where: 8193 ret: 1
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) ssl_callback_info ssl: 
> 0xb280fcb8 where: 8194 ret: -1
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) 
> SSL::3055967040:error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown 
> protocol:s23_srvr.c:628
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG:  (sslServerHandShakeEvent)> (ssl) SSL handshake error: SSL_ERROR_SSL (1), 
> errno=0
> {code}

[jira] [Assigned] (TS-3283) Certain SSL handshake error during client-hello hangs the client and leaves network connection open

2015-01-21 Thread Phil Sorber (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Sorber reassigned TS-3283:
---

Assignee: Phil Sorber  (was: Susan Hinrichs)

> Certain SSL handshake error during client-hello hangs the client and leaves 
> network connection open
> ---
>
> Key: TS-3283
> URL: https://issues.apache.org/jira/browse/TS-3283
> Project: Traffic Server
>  Issue Type: Bug
>  Components: SSL
>Reporter: Joe Chung
>Assignee: Phil Sorber
> Fix For: 5.3.0
>
>
> ---
> > CONFIG proxy.config.diags.debug.enabled INT 1
> > CONFIG proxy.config.diags.debug.tags STRING ssl.*
> {code}
> {code:none}
> $ /usr/local/bin/traffic_server --version
> [TrafficServer] using root directory '/usr/local'
> Apache Traffic Server - traffic_server - 4.2.2 - (build # 0723 on Jan  7 2015 
> at 23:04:32)
> $ sudo /usr/local/bin/traffic_server
> [sudo] password for user:
> [TrafficServer] using root directory '/usr/local'
> [Jan  8 00:53:42.618] Server {0xb702e700} DEBUG: (ssl) setting SNI callbacks 
> with for ctx 0xa4a7928
> [Jan  8 00:53:42.618] Server {0xb702e700} DEBUG: (ssl) indexed '*' with 
> SSL_CTX 0xa4a7928
> [Jan  8 00:53:42.619] Server {0xb702e700} DEBUG: (ssl) importing SNI names 
> from /usr/local/etc/trafficserver
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) 
> [SSLNextProtocolAccept:mainEvent] event 202 netvc 0xb280fa90
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) ssl_callback_info ssl: 
> 0xb280fcb8 where: 16 ret: 1
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) ssl_callback_info ssl: 
> 0xb280fcb8 where: 8193 ret: 1
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) ssl_callback_info ssl: 
> 0xb280fcb8 where: 8194 ret: -1
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) 
> SSL::3055967040:error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown 
> protocol:s23_srvr.c:628
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG:  (sslServerHandShakeEvent)> (ssl) SSL handshake error: SSL_ERROR_SSL (1

[jira] [Commented] (TS-153) "Dynamic" keep-alive timeouts

2015-01-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286022#comment-14286022
 ] 

ASF subversion and git services commented on TS-153:


Commit 5e91acd24e67da73baa81ded14878e32273f1e20 in trafficserver's branch 
refs/heads/master from [~bcall]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=5e91acd ]

TS-153: Dynamic keep-alive timeouts

1. Changed the limit on the number of connections to include all
incoming connections instead of just ones that are keep-alive. This will
keep the number of incoming connections per thread more consistent.
However, it will only close keep-alive connections.
2. Changed the doubly linked list to a queue.
3. When adding to the queue, if the connection is already in the queue,
remove it and then add it to the end of the queue.
4. Properly closed the connection by mimicking what the inactivity cop does
to close the connection.
5. Added stats to determine the average KA timeout, since it is now dynamic.
6. Added support for SPDY connections.

Config option is now:
proxy.config.net.connections.threshold_shed_idle_in
Stats added are:
proxy.process.net.dynamic_keep_alive_timeout_in_total
proxy.process.net.dynamic_keep_alive_timeout_in_count
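The queue behavior in points 2-4 can be sketched roughly like this (a minimal illustration with invented names; KeepAliveQueue and its members are not the actual Traffic Server types):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <iterator>
#include <list>
#include <unordered_map>

// Connections are tracked oldest-first. Activity moves a connection to
// the tail (remove and re-add, as in point 3), and enforce() sheds idle
// connections from the head once the per-thread threshold is exceeded.
class KeepAliveQueue {
public:
  explicit KeepAliveQueue(std::size_t threshold) : threshold_(threshold) {}

  // Add a connection, or refresh it: an already-queued connection is
  // removed and re-added at the tail.
  void touch(std::uint64_t id) {
    auto it = pos_.find(id);
    if (it != pos_.end())
      order_.erase(it->second);  // drop the stale position
    order_.push_back(id);
    pos_[id] = std::prev(order_.end());
  }

  // Shed the oldest connections until we are back under the threshold.
  // Returns how many connections were closed.
  std::size_t enforce() {
    std::size_t shed = 0;
    while (order_.size() > threshold_) {
      pos_.erase(order_.front());
      order_.pop_front();
      ++shed;
    }
    return shed;
  }

  std::size_t size() const { return order_.size(); }
  std::uint64_t oldest() const { return order_.front(); }

private:
  std::size_t threshold_;
  std::list<std::uint64_t> order_;  // head = least recently active
  std::unordered_map<std::uint64_t, std::list<std::uint64_t>::iterator> pos_;
};
```

Keeping a map of list iterators makes the remove-and-re-add on activity O(1), which matters when it happens on every request of a keep-alive session.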


> "Dynamic" keep-alive timeouts
> -
>
> Key: TS-153
> URL: https://issues.apache.org/jira/browse/TS-153
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: Leif Hedstrom
>Assignee: Bryan Call
>Priority: Minor
>  Labels: A
> Fix For: 5.3.0
>
> Attachments: ts153.diff
>
>
> (This is from a Y! Bugzilla ticket 1821593, adding it here. Originally 
> posted by Leif Hedstrom on 2008-03-19):
> Currently you have to set static keep-alive idle timeouts in TS, e.g.
>CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 8
>CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 30
> Even with epoll() in 1.17.x, it is difficult to configure an appropriate 
> timeout. The key here is that the
> settings above need to assure that you stay below the max configured number 
> of connections, e.g.:
> CONFIG proxy.config.net.connections_throttle INT 75000
> I'm suggesting that we add one (or two) new configuration options, and 
> appropriate TS code support, to instead of
> specifying timeouts, we specify connection limits for idle KA connections. 
> For example:
> CONFIG proxy.config.http.keep_alive_max_idle_connections_in INT 5
> CONFIG proxy.config.http.keep_alive_max_idle_connections_out INT 5000
> (one still has to be careful to leave head-room for active connections here, 
> in the example above, 2 connections
> could be active, which is a lot of traffic).
> These would override the idle timeouts, so one could use the max_idle 
> connections for incoming (client) connections,
> and the idle timeouts for outgoing (origin) connections for instance.
> The benefit here is that it makes configuration not only easier, but also a 
> lot safer for many applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3283) Certain SSL handshake error during client-hello hangs the client and leaves network connection open

2015-01-21 Thread Phil Sorber (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Sorber updated TS-3283:

Labels:   (was: bac)

> Certain SSL handshake error during client-hello hangs the client and leaves 
> network connection open
> ---
>
> Key: TS-3283
> URL: https://issues.apache.org/jira/browse/TS-3283
> Project: Traffic Server
>  Issue Type: Bug
>  Components: SSL
>Reporter: Joe Chung
>Assignee: Susan Hinrichs
> Fix For: 5.3.0
>
>
> h3. Problem Description
> Send an SSLv2 Client Hello with an old cipher suite request against Traffic 
> Server 4.2.2, and the connection will freeze on the client side and 
> eventually time out after 120 seconds.
> The Traffic Server detects the SSL error, but instead of closing the 
> connection, goes on to accept new connections.
> h3. Reproduction
> === Client: Macbook Pro running OSX Mavericks 10.9.5 ===
> {code:none}
> $ openssl version -a
> OpenSSL 0.9.8za 5 Jun 2014
> built on: Aug 10 2014
> platform: darwin64-x86_64-llvm
> options:  bn(64,64) md2(int) rc4(ptr,char) des(idx,cisc,16,int) blowfish(idx)
> compiler: -arch x86_64 -fmessage-length=0 -pipe -Wno-trigraphs 
> -fpascal-strings -fasm-blocks -O3 -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H 
> -DL_ENDIAN -DMD32_REG_T=int -DOPENSSL_NO_IDEA -DOPENSSL_PIC -DOPENSSL_THREADS 
> -DZLIB -mmacosx-version-min=10.6
> OPENSSLDIR: "/System/Library/OpenSSL"
> {code}
> h4. The following command triggers the bad behavior on the 4.2.2 server.
> {code:none}
> $ openssl s_client -connect 192.168.20.130:443 -ssl2 -debug
> CONNECTED(0003)
> write to 0x7fb9f2508610 [0x7fb9f300f201] (45 bytes => 45 (0x2D))
>  - 80 2b 01 00 02 00 12 00-00 00 10 07 00 c0 03 00   .+..
> 0010 - 80 01 00 80 06 00 40 04-00 80 02 00 80 f4 71 1a   ..@...q.
> 0020 - ad 23 06 59 4d f8 d2 c5-b2 57 a9 66 4c.#.YMW.fL
> ^C
> {code}
> At this point, the client is hung, and I have to hit ctrl-c to interrupt it 
> or wait 120 seconds for tcp timeout.
> h3. Server: Lubuntu 13.10 on VMware
> {code:none}
> $ openssl version -a
> OpenSSL 1.0.1e 11 Feb 2013
> built on: Fri Jun 20 18:52:25 UTC 2014
> platform: debian-i386
> options:  bn(64,32) rc4(8x,mmx) des(ptr,risc1,16,long) blowfish(idx) 
> compiler: cc -fPIC -DOPENSSL_PIC -DZLIB -DOPENSSL_THREADS -D_REENTRANT 
> -DDSO_DLFCN -DHAVE_DLFCN_H -DL_ENDIAN -DTERMIO -g -O2 -fstack-protector 
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security 
> -D_FORTIFY_SOURCE=2 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wa,--noexecstack 
> -Wall -DOPENSSL_NO_TLS1_2_CLIENT -DOPENSSL_MAX_TLS1_2_CIPHER_LENGTH=50 
> -DOPENSSL_BN_ASM_PART_WORDS -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT 
> -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM 
> -DRMD160_ASM -DAES_ASM -DVPAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM
> OPENSSLDIR: "/usr/lib/ssl"
> {code}
> {code:none}
> $ diff /usr/local/etc/trafficserver/records.config.422 
> /usr/local/etc/trafficserver/records.config
> 113c113
> < CONFIG proxy.config.http.server_ports STRING 8080
> ---
> > CONFIG proxy.config.http.server_ports STRING 8080 443:ssl
> 594,595c594,595
> < CONFIG proxy.config.diags.debug.enabled INT 0
> < CONFIG proxy.config.diags.debug.tags STRING http.*|dns.*
> ---
> > CONFIG proxy.config.diags.debug.enabled INT 1
> > CONFIG proxy.config.diags.debug.tags STRING ssl.*
> {code}
> {code:none}
> $ /usr/local/bin/traffic_server --version
> [TrafficServer] using root directory '/usr/local'
> Apache Traffic Server - traffic_server - 4.2.2 - (build # 0723 on Jan  7 2015 
> at 23:04:32)
> $ sudo /usr/local/bin/traffic_server
> [sudo] password for user:
> [TrafficServer] using root directory '/usr/local'
> [Jan  8 00:53:42.618] Server {0xb702e700} DEBUG: (ssl) setting SNI callbacks 
> with for ctx 0xa4a7928
> [Jan  8 00:53:42.618] Server {0xb702e700} DEBUG: (ssl) indexed '*' with 
> SSL_CTX 0xa4a7928
> [Jan  8 00:53:42.619] Server {0xb702e700} DEBUG: (ssl) importing SNI names 
> from /usr/local/etc/trafficserver
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) 
> [SSLNextProtocolAccept:mainEvent] event 202 netvc 0xb280fa90
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) ssl_callback_info ssl: 
> 0xb280fcb8 where: 16 ret: 1
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) ssl_callback_info ssl: 
> 0xb280fcb8 where: 8193 ret: 1
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) ssl_callback_info ssl: 
> 0xb280fcb8 where: 8194 ret: -1
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) 
> SSL::3055967040:error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown 
> protocol:s23_srvr.c:628
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG:  (sslServerHandShakeEvent)> (ssl) SSL handshake error: SSL_ERROR_SSL (1), 
> errno=0
> {code}
> At 

[jira] [Commented] (TS-153) "Dynamic" keep-alive timeouts

2015-01-21 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286079#comment-14286079
 ] 

Leif Hedstrom commented on TS-153:
--

From the conversations on IRC, I really feel we need a better name. I like 
bcall's proposal of
{code}
max_connections_in
max_connections_out
max_connections_active_in
max_connections_active_out
{code}

When we started the discussions around these features, the vision was that 
we'd remove the current "throttle" mechanism, and instead the number of FDs we 
require would be the sum of the above. And we throttle accordingly on the sums 
of "_in" and "_out".

The point here is to simplify both the configuration and the code. What I had 
envisioned was that we allow up to the "sum" of the _in client connections. 
Once we hit that, we start shedding; the priority on shedding is:

{code}
1. Shed the oldest KA connections
2. Shed the oldest inactive connection
3. Shed the oldest active connection (this must be optional)
{code}
If after this there is still no free socket / FD, we throttle (deny the 
connection).

This would allow us to get rid of a significant amount of timeout shenanigans 
in the event system. There's no point (IMO) in ever timing out KA connections; 
they will get shed when needed. The other two types of connections would 
still have timeouts of course, so those configs can remain. But we can perhaps 
change the code to be less aggressive about how often we update / cancel such 
timeouts (i.e. we probably don't have to do it at a higher resolution than 
seconds).
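The three-tier shedding order above could be sketched like this (illustrative only; Conn, ConnState, and pick_victim are invented names for the sketch, not ATS code):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Each tier is scanned for its oldest member; active connections are
// only considered when the optional last-resort flag is enabled.
enum class ConnState { KeepAlive, Inactive, Active };

struct Conn {
  std::uint64_t id;
  std::uint64_t last_activity;  // smaller = older
  ConnState state;
};

// Pick the next connection to shed, or nullptr if nothing qualifies.
const Conn *pick_victim(const std::vector<Conn> &conns, bool shed_active) {
  for (ConnState tier :
       {ConnState::KeepAlive, ConnState::Inactive, ConnState::Active}) {
    if (tier == ConnState::Active && !shed_active)
      break;  // shedding active connections must be opt-in
    const Conn *victim = nullptr;
    for (const Conn &c : conns)
      if (c.state == tier &&
          (victim == nullptr || c.last_activity < victim->last_activity))
        victim = &c;
    if (victim)
      return victim;  // oldest member of the highest-priority non-empty tier
  }
  return nullptr;
}
```

A caller would invoke pick_victim repeatedly until the connection count is back under the limit, and throttle (deny the new connection) only if it returns nullptr.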

> "Dynamic" keep-alive timeouts
> -
>
> Key: TS-153
> URL: https://issues.apache.org/jira/browse/TS-153
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: Leif Hedstrom
>Assignee: Bryan Call
>Priority: Minor
>  Labels: A
> Fix For: 5.3.0
>
> Attachments: ts153.diff
>
>
> (This is from a Y! Bugzilla ticket 1821593, adding it here. Originally 
> posted by Leif Hedstrom on 2008-03-19):
> Currently you have to set static keep-alive idle timeouts in TS, e.g.
>CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 8
>CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 30
> Even with epoll() in 1.17.x, it is difficult to configure an appropriate 
> timeout. The key here is that the
> settings above need to assure that you stay below the max configured number 
> of connections, e.g.:
> CONFIG proxy.config.net.connections_throttle INT 75000
> I'm suggesting that we add one (or two) new configuration options, and 
> appropriate TS code support, to instead of
> specifying timeouts, we specify connection limits for idle KA connections. 
> For example:
> CONFIG proxy.config.http.keep_alive_max_idle_connections_in INT 5
> CONFIG proxy.config.http.keep_alive_max_idle_connections_out INT 5000
> (one still has to be careful to leave head-room for active connections here, 
> in the example above, 2 connections
> could be active, which is a lot of traffic).
> These would override the idle timeouts, so one could use the max_idle 
> connections for incoming (client) connections,
> and the idle timeouts for outgoing (origin) connections for instance.
> The benefit here is that it makes configuration not only easier, but also a 
> lot safer for many applications.





[jira] [Resolved] (TS-3309) TLS Session tickets docs

2015-01-21 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach resolved TS-3309.
-
Resolution: Fixed

> TLS Session tickets docs
> 
>
> Key: TS-3309
> URL: https://issues.apache.org/jira/browse/TS-3309
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, Security, SSL
>Reporter: Bin
>Assignee: James Peach
> Fix For: 5.3.0
>
> Attachments: traffic_line_rotation_doc.diff
>
>
> Add a few words to describe the TLS session ticket keys rotation for TS-3301. 
> jpeach is the best person to review it.





[jira] [Commented] (TS-3304) segfault in libtsutils

2015-01-21 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286205#comment-14286205
 ] 

Leif Hedstrom commented on TS-3304:
---

I think it's ok for a 4.2.x back port. I still think there's more work to be 
done here; maybe clone this Jira and assign it to amc?

[~smalenfant] Thanks for the detailed reports, that certainly makes a lot of 
sense (the 86400 TTL on localhost is standard, I think). Does [~psudaemon]'s fix 
solve your problem? You are not using log collation, are you?

> segfault in libtsutils
> --
>
> Key: TS-3304
> URL: https://issues.apache.org/jira/browse/TS-3304
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HostDB
>Reporter: Steve Malenfant
>Assignee: Leif Hedstrom
> Fix For: 5.3.0
>
>
> Getting multiple segfaults per day on 4.2.1. 
> [4324544.324222] [ET_NET 23][10504]: segfault at 0 ip 2acd66546168 sp 
> 2acd71f190b8 error 4 in libtsutil.so.4.2.1[2acd66521000+34000]
> [4410696.820857] [ET_NET 19][22738]: segfault at 0 ip 2af09f339168 sp 
> 2af0aa9230b8 error 4 in libtsutil.so.4.2.1[2af09f314000+34000]
> [4497039.474253] [ET_NET 12][34872]: segfault at 0 ip 2ad17e6a1168 sp 
> 2ad1896100b8 error 4 in libtsutil.so.4.2.1[2ad17e67c000+34000]
> [4583372.073916] [ET_NET 3][46994]: segfault at 0 ip 2aced4227168 sp 
> 2aceda7d80b8 error 4 in libtsutil.so.4.2.1[2aced4202000+34000]
> [4756046.944373] [ET_NET 22][10799]: segfault at 0 ip 2b1771f76168 sp 
> 2b177d9130b8 error 4 in libtsutil.so.4.2.1[2b1771f51000+34000]
> Stack Trace :
> (gdb) bt
> #0  ink_inet_addr (s=) at ink_inet.cc:107
> #1  0x005e0df5 in is_dotted_form_hostname (mutex=0x1d32cb0, md5=..., 
> ignore_timeout=false) at P_HostDBProcessor.h:545
> #2  probe (mutex=0x1d32cb0, md5=..., ignore_timeout=false) at HostDB.cc:668
> #3  0x005e2b34 in HostDBProcessor::getby (this=, 
> cont=0x2b514cc749d0, hostname=0x0, len=, 
> ip=0x2b50e8f092b0, aforce_dns=false, host_res_style=HOST_RES_NONE, 
> dns_lookup_timeout=0)
> at HostDB.cc:772
> #4  0x00517f2c in getbyaddr_re (this=0x2b514cc749d0) at 
> ../../iocore/hostdb/I_HostDBProcessor.h:417
> #5  HttpSM::do_hostdb_reverse_lookup (this=0x2b514cc749d0) at HttpSM.cc:3968
> #6  0x0052f028 in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6932
> #7  0x00518242 in HttpSM::do_hostdb_lookup (this=0x2b514cc749d0) at 
> HttpSM.cc:3950
> #8  0x0052f44a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6925
> #9  0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #10 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #11 0x0052ea8a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:7224
> #12 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #13 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #14 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #15 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #16 0x0052fef6 in HttpSM::state_read_client_request_header 
> (this=0x2b514cc749d0, event=100, data=) at HttpSM.cc:821
> #17 0x0052a5b8 in HttpSM::main_handler (this=0x2b514cc749d0, 
> event=100, data=0x2b514802ca08) at HttpSM.cc:2539
> #18 0x0068793b in handleEvent (event=, 
> vc=0x2b514802c900) at ../../iocore/eventsystem/I_Continuation.h:146
> #19 read_signal_and_update (event=, vc=0x2b514802c900) 
> at UnixNetVConnection.cc:138
> #20 0x00689ec4 in read_from_net (nh=0x2b50e2e17c10, 
> vc=0x2b514802c900, thread=) at UnixNetVConnection.cc:320
> #21 0x0067fb12 in NetHandler::mainNetEvent (this=0x2b50e2e17c10, 
> event=, e=) at UnixNet.cc:384
> #22 0x006ac8cf in handleEvent (this=0x2b50e2e14010, e=0x1a9ef30, 
> calling_code=5) at I_Continuation.h:146
> #23 EThread::process_event (this=0x2b50e2e14010, e=0x1a9ef30, calling_code=5) 
> at UnixEThread.cc:145
> #24 0x006ad273 in EThread::execute (this=0x2b50e2e14010) at 
> UnixEThread.cc:269
> #25 0x006abc2a in spawn_thread_internal (a=0x198f820) at Thread.cc:88
> #26 0x2b50e026b9d1 in start_thread () from /lib64/libpthread.so.0
> #27 0x00381b2e8b6d in clone () from /lib64/libc.so.6
>  





[jira] [Created] (TS-3311) CLONE - segfault in libtsutils

2015-01-21 Thread Leif Hedstrom (JIRA)
Leif Hedstrom created TS-3311:
-

 Summary: CLONE - segfault in libtsutils
 Key: TS-3311
 URL: https://issues.apache.org/jira/browse/TS-3311
 Project: Traffic Server
  Issue Type: Bug
  Components: HostDB
Reporter: Steve Malenfant
Assignee: Leif Hedstrom
 Fix For: 5.3.0


Getting multiple segfaults per day on 4.2.1. 

[4324544.324222] [ET_NET 23][10504]: segfault at 0 ip 2acd66546168 sp 
2acd71f190b8 error 4 in libtsutil.so.4.2.1[2acd66521000+34000]
[4410696.820857] [ET_NET 19][22738]: segfault at 0 ip 2af09f339168 sp 
2af0aa9230b8 error 4 in libtsutil.so.4.2.1[2af09f314000+34000]
[4497039.474253] [ET_NET 12][34872]: segfault at 0 ip 2ad17e6a1168 sp 
2ad1896100b8 error 4 in libtsutil.so.4.2.1[2ad17e67c000+34000]
[4583372.073916] [ET_NET 3][46994]: segfault at 0 ip 2aced4227168 sp 
2aceda7d80b8 error 4 in libtsutil.so.4.2.1[2aced4202000+34000]
[4756046.944373] [ET_NET 22][10799]: segfault at 0 ip 2b1771f76168 sp 
2b177d9130b8 error 4 in libtsutil.so.4.2.1[2b1771f51000+34000]

Stack Trace :
(gdb) bt
#0  ink_inet_addr (s=) at ink_inet.cc:107
#1  0x005e0df5 in is_dotted_form_hostname (mutex=0x1d32cb0, md5=..., 
ignore_timeout=false) at P_HostDBProcessor.h:545
#2  probe (mutex=0x1d32cb0, md5=..., ignore_timeout=false) at HostDB.cc:668
#3  0x005e2b34 in HostDBProcessor::getby (this=, 
cont=0x2b514cc749d0, hostname=0x0, len=, 
ip=0x2b50e8f092b0, aforce_dns=false, host_res_style=HOST_RES_NONE, 
dns_lookup_timeout=0)
at HostDB.cc:772
#4  0x00517f2c in getbyaddr_re (this=0x2b514cc749d0) at 
../../iocore/hostdb/I_HostDBProcessor.h:417
#5  HttpSM::do_hostdb_reverse_lookup (this=0x2b514cc749d0) at HttpSM.cc:3968
#6  0x0052f028 in HttpSM::set_next_state (this=0x2b514cc749d0) at 
HttpSM.cc:6932
#7  0x00518242 in HttpSM::do_hostdb_lookup (this=0x2b514cc749d0) at 
HttpSM.cc:3950
#8  0x0052f44a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
HttpSM.cc:6925
#9  0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
HttpSM.cc:1559
#10 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
HttpSM.cc:6825
#11 0x0052ea8a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
HttpSM.cc:7224
#12 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
HttpSM.cc:1559
#13 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
HttpSM.cc:6825
#14 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
HttpSM.cc:1559
#15 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
HttpSM.cc:6825
#16 0x0052fef6 in HttpSM::state_read_client_request_header 
(this=0x2b514cc749d0, event=100, data=) at HttpSM.cc:821
#17 0x0052a5b8 in HttpSM::main_handler (this=0x2b514cc749d0, event=100, 
data=0x2b514802ca08) at HttpSM.cc:2539
#18 0x0068793b in handleEvent (event=, 
vc=0x2b514802c900) at ../../iocore/eventsystem/I_Continuation.h:146
#19 read_signal_and_update (event=, vc=0x2b514802c900) at 
UnixNetVConnection.cc:138
#20 0x00689ec4 in read_from_net (nh=0x2b50e2e17c10, vc=0x2b514802c900, 
thread=) at UnixNetVConnection.cc:320
#21 0x0067fb12 in NetHandler::mainNetEvent (this=0x2b50e2e17c10, 
event=, e=) at UnixNet.cc:384
#22 0x006ac8cf in handleEvent (this=0x2b50e2e14010, e=0x1a9ef30, 
calling_code=5) at I_Continuation.h:146
#23 EThread::process_event (this=0x2b50e2e14010, e=0x1a9ef30, calling_code=5) 
at UnixEThread.cc:145
#24 0x006ad273 in EThread::execute (this=0x2b50e2e14010) at 
UnixEThread.cc:269
#25 0x006abc2a in spawn_thread_internal (a=0x198f820) at Thread.cc:88
#26 0x2b50e026b9d1 in start_thread () from /lib64/libpthread.so.0
#27 0x00381b2e8b6d in clone () from /lib64/libc.so.6
 





[jira] [Updated] (TS-3311) CLONE - segfault in libtsutils

2015-01-21 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3311:
--
Assignee: Alan M. Carroll  (was: Leif Hedstrom)

> CLONE - segfault in libtsutils
> --
>
> Key: TS-3311
> URL: https://issues.apache.org/jira/browse/TS-3311
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HostDB
>Reporter: Steve Malenfant
>Assignee: Alan M. Carroll
> Fix For: 5.3.0
>
>
> Getting multiple segfaults per day on 4.2.1. 
> [4324544.324222] [ET_NET 23][10504]: segfault at 0 ip 2acd66546168 sp 
> 2acd71f190b8 error 4 in libtsutil.so.4.2.1[2acd66521000+34000]
> [4410696.820857] [ET_NET 19][22738]: segfault at 0 ip 2af09f339168 sp 
> 2af0aa9230b8 error 4 in libtsutil.so.4.2.1[2af09f314000+34000]
> [4497039.474253] [ET_NET 12][34872]: segfault at 0 ip 2ad17e6a1168 sp 
> 2ad1896100b8 error 4 in libtsutil.so.4.2.1[2ad17e67c000+34000]
> [4583372.073916] [ET_NET 3][46994]: segfault at 0 ip 2aced4227168 sp 
> 2aceda7d80b8 error 4 in libtsutil.so.4.2.1[2aced4202000+34000]
> [4756046.944373] [ET_NET 22][10799]: segfault at 0 ip 2b1771f76168 sp 
> 2b177d9130b8 error 4 in libtsutil.so.4.2.1[2b1771f51000+34000]
> Stack Trace :
> (gdb) bt
> #0  ink_inet_addr (s=) at ink_inet.cc:107
> #1  0x005e0df5 in is_dotted_form_hostname (mutex=0x1d32cb0, md5=..., 
> ignore_timeout=false) at P_HostDBProcessor.h:545
> #2  probe (mutex=0x1d32cb0, md5=..., ignore_timeout=false) at HostDB.cc:668
> #3  0x005e2b34 in HostDBProcessor::getby (this=, 
> cont=0x2b514cc749d0, hostname=0x0, len=, 
> ip=0x2b50e8f092b0, aforce_dns=false, host_res_style=HOST_RES_NONE, 
> dns_lookup_timeout=0)
> at HostDB.cc:772
> #4  0x00517f2c in getbyaddr_re (this=0x2b514cc749d0) at 
> ../../iocore/hostdb/I_HostDBProcessor.h:417
> #5  HttpSM::do_hostdb_reverse_lookup (this=0x2b514cc749d0) at HttpSM.cc:3968
> #6  0x0052f028 in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6932
> #7  0x00518242 in HttpSM::do_hostdb_lookup (this=0x2b514cc749d0) at 
> HttpSM.cc:3950
> #8  0x0052f44a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6925
> #9  0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #10 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #11 0x0052ea8a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:7224
> #12 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #13 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #14 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #15 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #16 0x0052fef6 in HttpSM::state_read_client_request_header 
> (this=0x2b514cc749d0, event=100, data=) at HttpSM.cc:821
> #17 0x0052a5b8 in HttpSM::main_handler (this=0x2b514cc749d0, 
> event=100, data=0x2b514802ca08) at HttpSM.cc:2539
> #18 0x0068793b in handleEvent (event=, 
> vc=0x2b514802c900) at ../../iocore/eventsystem/I_Continuation.h:146
> #19 read_signal_and_update (event=, vc=0x2b514802c900) 
> at UnixNetVConnection.cc:138
> #20 0x00689ec4 in read_from_net (nh=0x2b50e2e17c10, 
> vc=0x2b514802c900, thread=) at UnixNetVConnection.cc:320
> #21 0x0067fb12 in NetHandler::mainNetEvent (this=0x2b50e2e17c10, 
> event=, e=) at UnixNet.cc:384
> #22 0x006ac8cf in handleEvent (this=0x2b50e2e14010, e=0x1a9ef30, 
> calling_code=5) at I_Continuation.h:146
> #23 EThread::process_event (this=0x2b50e2e14010, e=0x1a9ef30, calling_code=5) 
> at UnixEThread.cc:145
> #24 0x006ad273 in EThread::execute (this=0x2b50e2e14010) at 
> UnixEThread.cc:269
> #25 0x006abc2a in spawn_thread_internal (a=0x198f820) at Thread.cc:88
> #26 0x2b50e026b9d1 in start_thread () from /lib64/libpthread.so.0
> #27 0x00381b2e8b6d in clone () from /lib64/libc.so.6
>  





[jira] [Commented] (TS-3311) Possible lookups on NULL hostnames in HostDB

2015-01-21 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286252#comment-14286252
 ] 

Leif Hedstrom commented on TS-3311:
---

I cloned this so that we can look at the underlying problem that triggered 
this issue. Assigning to [~amc].
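For context, the backtrace shows HostDBProcessor::getby() entered with hostname=0x0, which suggests a NULL hostname string eventually reaches ink_inet_addr() and is dereferenced there. A guard along these lines would avoid the crash (a hedged sketch, not the actual fix; the parsing rule shown here is simplified):

```cpp
#include <cassert>
#include <cctype>

// Simplified stand-in for the dotted-form check on the crash path.
// Rejecting NULL / empty input up front avoids the NULL dereference
// seen in the backtrace; the digits-and-dots rule below is only a
// rough approximation of the real parser.
bool is_dotted_form_hostname(const char *hostname) {
  if (hostname == nullptr || *hostname == '\0')
    return false;  // nothing to parse
  for (const char *p = hostname; *p; ++p)
    if (!std::isdigit(static_cast<unsigned char>(*p)) && *p != '.')
      return false;
  return true;
}
```

The real fix may belong higher up (e.g. refusing NULL hostnames before the HostDB probe), but the shape of the defensive check is the same.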

> Possible lookups on NULL hostnames in HostDB
> 
>
> Key: TS-3311
> URL: https://issues.apache.org/jira/browse/TS-3311
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HostDB
>Reporter: Steve Malenfant
>Assignee: Alan M. Carroll
> Fix For: 5.3.0
>
>
> Getting multiple segfaults per day on 4.2.1. 
> [4324544.324222] [ET_NET 23][10504]: segfault at 0 ip 2acd66546168 sp 
> 2acd71f190b8 error 4 in libtsutil.so.4.2.1[2acd66521000+34000]
> [4410696.820857] [ET_NET 19][22738]: segfault at 0 ip 2af09f339168 sp 
> 2af0aa9230b8 error 4 in libtsutil.so.4.2.1[2af09f314000+34000]
> [4497039.474253] [ET_NET 12][34872]: segfault at 0 ip 2ad17e6a1168 sp 
> 2ad1896100b8 error 4 in libtsutil.so.4.2.1[2ad17e67c000+34000]
> [4583372.073916] [ET_NET 3][46994]: segfault at 0 ip 2aced4227168 sp 
> 2aceda7d80b8 error 4 in libtsutil.so.4.2.1[2aced4202000+34000]
> [4756046.944373] [ET_NET 22][10799]: segfault at 0 ip 2b1771f76168 sp 
> 2b177d9130b8 error 4 in libtsutil.so.4.2.1[2b1771f51000+34000]
> Stack Trace :
> (gdb) bt
> #0  ink_inet_addr (s=) at ink_inet.cc:107
> #1  0x005e0df5 in is_dotted_form_hostname (mutex=0x1d32cb0, md5=..., 
> ignore_timeout=false) at P_HostDBProcessor.h:545
> #2  probe (mutex=0x1d32cb0, md5=..., ignore_timeout=false) at HostDB.cc:668
> #3  0x005e2b34 in HostDBProcessor::getby (this=, 
> cont=0x2b514cc749d0, hostname=0x0, len=, 
> ip=0x2b50e8f092b0, aforce_dns=false, host_res_style=HOST_RES_NONE, 
> dns_lookup_timeout=0)
> at HostDB.cc:772
> #4  0x00517f2c in getbyaddr_re (this=0x2b514cc749d0) at 
> ../../iocore/hostdb/I_HostDBProcessor.h:417
> #5  HttpSM::do_hostdb_reverse_lookup (this=0x2b514cc749d0) at HttpSM.cc:3968
> #6  0x0052f028 in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6932
> #7  0x00518242 in HttpSM::do_hostdb_lookup (this=0x2b514cc749d0) at 
> HttpSM.cc:3950
> #8  0x0052f44a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6925
> #9  0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #10 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #11 0x0052ea8a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:7224
> #12 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #13 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #14 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #15 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #16 0x0052fef6 in HttpSM::state_read_client_request_header 
> (this=0x2b514cc749d0, event=100, data=) at HttpSM.cc:821
> #17 0x0052a5b8 in HttpSM::main_handler (this=0x2b514cc749d0, 
> event=100, data=0x2b514802ca08) at HttpSM.cc:2539
> #18 0x0068793b in handleEvent (event=, 
> vc=0x2b514802c900) at ../../iocore/eventsystem/I_Continuation.h:146
> #19 read_signal_and_update (event=, vc=0x2b514802c900) 
> at UnixNetVConnection.cc:138
> #20 0x00689ec4 in read_from_net (nh=0x2b50e2e17c10, 
> vc=0x2b514802c900, thread=) at UnixNetVConnection.cc:320
> #21 0x0067fb12 in NetHandler::mainNetEvent (this=0x2b50e2e17c10, 
> event=, e=) at UnixNet.cc:384
> #22 0x006ac8cf in handleEvent (this=0x2b50e2e14010, e=0x1a9ef30, 
> calling_code=5) at I_Continuation.h:146
> #23 EThread::process_event (this=0x2b50e2e14010, e=0x1a9ef30, calling_code=5) 
> at UnixEThread.cc:145
> #24 0x006ad273 in EThread::execute (this=0x2b50e2e14010) at 
> UnixEThread.cc:269
> #25 0x006abc2a in spawn_thread_internal (a=0x198f820) at Thread.cc:88
> #26 0x2b50e026b9d1 in start_thread () from /lib64/libpthread.so.0
> #27 0x00381b2e8b6d in clone () from /lib64/libc.so.6
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3311) Possible lookups on NULL hostnames in HostDB

2015-01-21 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3311:
--
Summary: Possible lookups on NULL hostnames in HostDB  (was: CLONE - 
segfault in libtsutils)

> Possible lookups on NULL hostnames in HostDB
> 
>
> Key: TS-3311
> URL: https://issues.apache.org/jira/browse/TS-3311
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HostDB
>Reporter: Steve Malenfant
>Assignee: Alan M. Carroll
> Fix For: 5.3.0
>
>
> Getting multiple segfaults per day on 4.2.1. 
> [4324544.324222] [ET_NET 23][10504]: segfault at 0 ip 2acd66546168 sp 
> 2acd71f190b8 error 4 in libtsutil.so.4.2.1[2acd66521000+34000]
> [4410696.820857] [ET_NET 19][22738]: segfault at 0 ip 2af09f339168 sp 
> 2af0aa9230b8 error 4 in libtsutil.so.4.2.1[2af09f314000+34000]
> [4497039.474253] [ET_NET 12][34872]: segfault at 0 ip 2ad17e6a1168 sp 
> 2ad1896100b8 error 4 in libtsutil.so.4.2.1[2ad17e67c000+34000]
> [4583372.073916] [ET_NET 3][46994]: segfault at 0 ip 2aced4227168 sp 
> 2aceda7d80b8 error 4 in libtsutil.so.4.2.1[2aced4202000+34000]
> [4756046.944373] [ET_NET 22][10799]: segfault at 0 ip 2b1771f76168 sp 
> 2b177d9130b8 error 4 in libtsutil.so.4.2.1[2b1771f51000+34000]
> Stack Trace :
> (gdb) bt
> #0  ink_inet_addr (s=) at ink_inet.cc:107
> #1  0x005e0df5 in is_dotted_form_hostname (mutex=0x1d32cb0, md5=..., 
> ignore_timeout=false) at P_HostDBProcessor.h:545
> #2  probe (mutex=0x1d32cb0, md5=..., ignore_timeout=false) at HostDB.cc:668
> #3  0x005e2b34 in HostDBProcessor::getby (this=, 
> cont=0x2b514cc749d0, hostname=0x0, len=, 
> ip=0x2b50e8f092b0, aforce_dns=false, host_res_style=HOST_RES_NONE, 
> dns_lookup_timeout=0)
> at HostDB.cc:772
> #4  0x00517f2c in getbyaddr_re (this=0x2b514cc749d0) at 
> ../../iocore/hostdb/I_HostDBProcessor.h:417
> #5  HttpSM::do_hostdb_reverse_lookup (this=0x2b514cc749d0) at HttpSM.cc:3968
> #6  0x0052f028 in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6932
> #7  0x00518242 in HttpSM::do_hostdb_lookup (this=0x2b514cc749d0) at 
> HttpSM.cc:3950
> #8  0x0052f44a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6925
> #9  0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #10 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #11 0x0052ea8a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:7224
> #12 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #13 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #14 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #15 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #16 0x0052fef6 in HttpSM::state_read_client_request_header 
> (this=0x2b514cc749d0, event=100, data=) at HttpSM.cc:821
> #17 0x0052a5b8 in HttpSM::main_handler (this=0x2b514cc749d0, 
> event=100, data=0x2b514802ca08) at HttpSM.cc:2539
> #18 0x0068793b in handleEvent (event=, 
> vc=0x2b514802c900) at ../../iocore/eventsystem/I_Continuation.h:146
> #19 read_signal_and_update (event=, vc=0x2b514802c900) 
> at UnixNetVConnection.cc:138
> #20 0x00689ec4 in read_from_net (nh=0x2b50e2e17c10, 
> vc=0x2b514802c900, thread=) at UnixNetVConnection.cc:320
> #21 0x0067fb12 in NetHandler::mainNetEvent (this=0x2b50e2e17c10, 
> event=, e=) at UnixNet.cc:384
> #22 0x006ac8cf in handleEvent (this=0x2b50e2e14010, e=0x1a9ef30, 
> calling_code=5) at I_Continuation.h:146
> #23 EThread::process_event (this=0x2b50e2e14010, e=0x1a9ef30, calling_code=5) 
> at UnixEThread.cc:145
> #24 0x006ad273 in EThread::execute (this=0x2b50e2e14010) at 
> UnixEThread.cc:269
> #25 0x006abc2a in spawn_thread_internal (a=0x198f820) at Thread.cc:88
> #26 0x2b50e026b9d1 in start_thread () from /lib64/libpthread.so.0
> #27 0x00381b2e8b6d in clone () from /lib64/libc.so.6
>  
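The crash above boils down to `ink_inet_addr()` being handed a NULL hostname through `is_dotted_form_hostname()`. The guard the fix needs can be modeled in a few lines of Python. This is a conceptual sketch only, not the actual C++ patch; the function name is borrowed from the trace, and the dotted-quad check is an approximation:

```python
def is_dotted_form_hostname(hostname):
    """Conceptual model of the guard: a NULL (None) hostname must be
    rejected before any character inspection, which is where the
    segfault in ink_inet_addr() occurred. Returns True only for a
    non-NULL, purely numeric dotted quad."""
    if hostname is None:  # the missing NULL check
        return False
    parts = hostname.split(".")
    return len(parts) == 4 and all(
        p.isdigit() and 0 <= int(p) <= 255 for p in parts
    )
```

With the guard in place, a NULL hostname falls through to the normal "not an IP literal" path instead of dereferencing a null pointer.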





[jira] [Updated] (TS-3311) Possible lookups on NULL hostnames in HostDB

2015-01-21 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3311:
--
Backport to Version: 5.2.1  (was: 4.2.3, 5.2.1)

> Possible lookups on NULL hostnames in HostDB
> 
>
> Key: TS-3311
> URL: https://issues.apache.org/jira/browse/TS-3311
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HostDB
>Reporter: Steve Malenfant
>Assignee: Alan M. Carroll
> Fix For: 5.3.0
>
>
> Getting multiple segfaults per day on 4.2.1. 
> [4324544.324222] [ET_NET 23][10504]: segfault at 0 ip 2acd66546168 sp 
> 2acd71f190b8 error 4 in libtsutil.so.4.2.1[2acd66521000+34000]
> [4410696.820857] [ET_NET 19][22738]: segfault at 0 ip 2af09f339168 sp 
> 2af0aa9230b8 error 4 in libtsutil.so.4.2.1[2af09f314000+34000]
> [4497039.474253] [ET_NET 12][34872]: segfault at 0 ip 2ad17e6a1168 sp 
> 2ad1896100b8 error 4 in libtsutil.so.4.2.1[2ad17e67c000+34000]
> [4583372.073916] [ET_NET 3][46994]: segfault at 0 ip 2aced4227168 sp 
> 2aceda7d80b8 error 4 in libtsutil.so.4.2.1[2aced4202000+34000]
> [4756046.944373] [ET_NET 22][10799]: segfault at 0 ip 2b1771f76168 sp 
> 2b177d9130b8 error 4 in libtsutil.so.4.2.1[2b1771f51000+34000]
> Stack Trace :
> (gdb) bt
> #0  ink_inet_addr (s=) at ink_inet.cc:107
> #1  0x005e0df5 in is_dotted_form_hostname (mutex=0x1d32cb0, md5=..., 
> ignore_timeout=false) at P_HostDBProcessor.h:545
> #2  probe (mutex=0x1d32cb0, md5=..., ignore_timeout=false) at HostDB.cc:668
> #3  0x005e2b34 in HostDBProcessor::getby (this=, 
> cont=0x2b514cc749d0, hostname=0x0, len=, 
> ip=0x2b50e8f092b0, aforce_dns=false, host_res_style=HOST_RES_NONE, 
> dns_lookup_timeout=0)
> at HostDB.cc:772
> #4  0x00517f2c in getbyaddr_re (this=0x2b514cc749d0) at 
> ../../iocore/hostdb/I_HostDBProcessor.h:417
> #5  HttpSM::do_hostdb_reverse_lookup (this=0x2b514cc749d0) at HttpSM.cc:3968
> #6  0x0052f028 in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6932
> #7  0x00518242 in HttpSM::do_hostdb_lookup (this=0x2b514cc749d0) at 
> HttpSM.cc:3950
> #8  0x0052f44a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6925
> #9  0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #10 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #11 0x0052ea8a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:7224
> #12 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #13 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #14 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #15 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #16 0x0052fef6 in HttpSM::state_read_client_request_header 
> (this=0x2b514cc749d0, event=100, data=) at HttpSM.cc:821
> #17 0x0052a5b8 in HttpSM::main_handler (this=0x2b514cc749d0, 
> event=100, data=0x2b514802ca08) at HttpSM.cc:2539
> #18 0x0068793b in handleEvent (event=, 
> vc=0x2b514802c900) at ../../iocore/eventsystem/I_Continuation.h:146
> #19 read_signal_and_update (event=, vc=0x2b514802c900) 
> at UnixNetVConnection.cc:138
> #20 0x00689ec4 in read_from_net (nh=0x2b50e2e17c10, 
> vc=0x2b514802c900, thread=) at UnixNetVConnection.cc:320
> #21 0x0067fb12 in NetHandler::mainNetEvent (this=0x2b50e2e17c10, 
> event=, e=) at UnixNet.cc:384
> #22 0x006ac8cf in handleEvent (this=0x2b50e2e14010, e=0x1a9ef30, 
> calling_code=5) at I_Continuation.h:146
> #23 EThread::process_event (this=0x2b50e2e14010, e=0x1a9ef30, calling_code=5) 
> at UnixEThread.cc:145
> #24 0x006ad273 in EThread::execute (this=0x2b50e2e14010) at 
> UnixEThread.cc:269
> #25 0x006abc2a in spawn_thread_internal (a=0x198f820) at Thread.cc:88
> #26 0x2b50e026b9d1 in start_thread () from /lib64/libpthread.so.0
> #27 0x00381b2e8b6d in clone () from /lib64/libc.so.6
>  





[jira] [Closed] (TS-3306) Keep_alive_out not working

2015-01-21 Thread Thomas Jackson (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Jackson closed TS-3306.
--
Resolution: Not a Problem

Turns out there was a bug in the origin that was miscalculating the 
Content-Length. Some clients (such as curl) will continue to reuse the TCP 
session even when there is such a mismatch, but ATS doesn't, which is more 
than fair. I've fixed the test case and can confirm that it is working as 
intended.

> Keep_alive_out not working
> --
>
> Key: TS-3306
> URL: https://issues.apache.org/jira/browse/TS-3306
> Project: Traffic Server
>  Issue Type: Bug
>Reporter: Thomas Jackson
> Fix For: 5.3.0
>
> Attachments: cop.log
>
>
> I *hope* that I'm just missing something very obvious, but as best I can tell 
> keep_alive_out is not working on 5.0, 5.1, and master. At some point this 
> worked, as we vetted the feature when we rolled it out; I think we were 
> running 4.2 at the time.
> To show the issue I created a test 
> (https://github.com/jacksontj/trafficserver/commit/f913b88666aef6502b8540149a60b2ce25853f9c)
>  which demonstrates it.
> In my test case there is a socket server which just returns a 200 OK 
> keep-alive response with the body being the number of HTTP requests this TCP 
> connection has seen. If you run the test you can see that it will always 
> return 1. I thought this could have been some issue with timeouts or 
> something, but you can see in the output (below) that ATS immediately closes 
> the connection to the origin.
> {code}
> test_basic_proxy (test_keepalive.TestKeepAliveOut) ... connection from 
> ('127.0.0.1', 39298)
> GET / HTTP/1.1
> Host: 127.0.0.1:45807
> Accept-Encoding: gzip
> Accept: */*
> User-Agent: python-requests/2.5.1 CPython/2.6.6 
> Linux/2.6.32-431.20.3.el6.ipvs.x86_64
> Client-ip: 127.0.0.1
> X-Forwarded-For: 127.0.0.1
> Via: http/1.1 [2620011950002221C32458757B78BB48] 
> (ApacheTrafficServer/5.3.0)
> sending data back to the client
> Client disconnected
> waiting for a connection
> connection from ('127.0.0.1', 39300)
> GET / HTTP/1.1
> Host: 127.0.0.1:45807
> Accept-Encoding: gzip
> Accept: */*
> User-Agent: python-requests/2.5.1 CPython/2.6.6 
> Linux/2.6.32-431.20.3.el6.ipvs.x86_64
> Client-ip: 127.0.0.1
> X-Forwarded-For: 127.0.0.1
> Via: http/1.1 [2620011950002221C32458757B78BB48] 
> (ApacheTrafficServer/5.3.0)
> sending data back to the client
> Client disconnected
> waiting for a connection
> FAIL
> {code}
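The origin described in the test can be approximated with a small Python socket server that answers every request on a connection with a correct Content-Length and a body equal to the per-connection request count. This is an illustrative sketch, not the actual test harness; the function name and port are invented:

```python
import socket
import threading

def serve_keepalive(port, max_requests=10):
    """Minimal keep-alive origin: the response body is the number of
    requests seen on this TCP connection. Content-Length must match the
    body exactly, or the proxy is entitled to drop the session (the
    mismatch was the actual root cause in this ticket)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    count = 0
    while count < max_requests:
        data = conn.recv(65536)
        if not data:
            break  # client closed the connection
        count += 1
        body = str(count).encode()
        conn.sendall(
            b"HTTP/1.1 200 OK\r\n"
            b"Connection: keep-alive\r\n"
            b"Content-Length: " + str(len(body)).encode() + b"\r\n"
            b"\r\n" + body
        )
    conn.close()
    srv.close()
    return count
```

A client that reuses the TCP session should see bodies "1", "2", ... on successive requests; a proxy that closes the origin connection after each request will only ever observe "1".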





[jira] [Updated] (TS-3306) Keep_alive_out not working

2015-01-21 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3306:
--
Fix Version/s: (was: 5.3.0)

> Keep_alive_out not working
> --
>
> Key: TS-3306
> URL: https://issues.apache.org/jira/browse/TS-3306
> Project: Traffic Server
>  Issue Type: Bug
>Reporter: Thomas Jackson
> Attachments: cop.log
>
>
> I *hope* that I'm just missing something very obvious, but as best I can tell 
> keep_alive_out is not working on 5.0, 5.1, and master. At some point this 
> worked, as we vetted the feature when we rolled it out; I think we were 
> running 4.2 at the time.
> To show the issue I created a test 
> (https://github.com/jacksontj/trafficserver/commit/f913b88666aef6502b8540149a60b2ce25853f9c)
>  which demonstrates it.
> In my test case there is a socket server which just returns a 200 OK 
> keep-alive response with the body being the number of HTTP requests this TCP 
> connection has seen. If you run the test you can see that it will always 
> return 1. I thought this could have been some issue with timeouts or 
> something, but you can see in the output (below) that ATS immediately closes 
> the connection to the origin.
> {code}
> test_basic_proxy (test_keepalive.TestKeepAliveOut) ... connection from 
> ('127.0.0.1', 39298)
> GET / HTTP/1.1
> Host: 127.0.0.1:45807
> Accept-Encoding: gzip
> Accept: */*
> User-Agent: python-requests/2.5.1 CPython/2.6.6 
> Linux/2.6.32-431.20.3.el6.ipvs.x86_64
> Client-ip: 127.0.0.1
> X-Forwarded-For: 127.0.0.1
> Via: http/1.1 [2620011950002221C32458757B78BB48] 
> (ApacheTrafficServer/5.3.0)
> sending data back to the client
> Client disconnected
> waiting for a connection
> connection from ('127.0.0.1', 39300)
> GET / HTTP/1.1
> Host: 127.0.0.1:45807
> Accept-Encoding: gzip
> Accept: */*
> User-Agent: python-requests/2.5.1 CPython/2.6.6 
> Linux/2.6.32-431.20.3.el6.ipvs.x86_64
> Client-ip: 127.0.0.1
> X-Forwarded-For: 127.0.0.1
> Via: http/1.1 [2620011950002221C32458757B78BB48] 
> (ApacheTrafficServer/5.3.0)
> sending data back to the client
> Client disconnected
> waiting for a connection
> FAIL
> {code}





[jira] [Commented] (TS-3309) TLS Session tickets docs

2015-01-21 Thread Bin (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286277#comment-14286277
 ] 

Bin commented on TS-3309:
-

Awesome! Thanks.

> TLS Session tickets docs
> 
>
> Key: TS-3309
> URL: https://issues.apache.org/jira/browse/TS-3309
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, Security, SSL
>Reporter: Bin
>Assignee: James Peach
> Fix For: 5.3.0
>
> Attachments: traffic_line_rotation_doc.diff
>
>
> Add a few words to describe the TLS session ticket keys rotation for TS-3301. 
> jpeach is the best person to review it.





[jira] [Created] (TS-3312) KA timeout to origin does not seem to honor configurations

2015-01-21 Thread Leif Hedstrom (JIRA)
Leif Hedstrom created TS-3312:
-

 Summary: KA timeout to origin does not seem to honor configurations
 Key: TS-3312
 URL: https://issues.apache.org/jira/browse/TS-3312
 Project: Traffic Server
  Issue Type: Bug
  Components: Core, HTTP
Reporter: Leif Hedstrom


Doing some basic testing, with the following settings:

{code}
CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 120
CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
{code}

I see ATS timing out the origin sessions after 30 seconds, consistent with:

{code}
CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
{code}


What's also interesting, after I made a config change per Geffon's suggestion:

{code}
CONFIG proxy.config.http.origin_min_keep_alive_connections INT 10
{code}

I see the following in the diagnostic trace:

{code}
[Jan 21 14:19:19.416] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] [release 
session] session placed into shared pool
[Jan 21 14:19:49.558] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
[session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
reseting timeout to maintain minimum number of connections
[Jan 21 14:20:19.633] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
[session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
reseting timeout to maintain minimum number of connections
[Jan 21 14:20:19.670] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
[session_pool] session 0x1cc5aa0 received io notice [VC_EVENT_EOS]
{code}

So, not only is it resetting the timeout twice, it also gets a VC_EVENT_EOS. I 
first thought it was the origin that closed the connection, but from what I 
could tell, the timeout on the origin was set to 60s.
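The complaint here is that the pool timeout (keep_alive_no_activity_timeout_out, 120s) should govern idle pooled sessions, while transaction_no_activity_timeout_out (30s) should only apply mid-transaction, yet the sessions die at 30 seconds. A toy Python model of the intended selection makes the distinction concrete; this is purely illustrative and not ATS code:

```python
# Configured values from the report above.
KEEP_ALIVE_NO_ACTIVITY_TIMEOUT_OUT = 120   # idle session in the shared pool
TRANSACTION_NO_ACTIVITY_TIMEOUT_OUT = 30   # stalled mid-transaction

def applicable_timeout(session_state):
    """Which inactivity timeout *should* apply to an origin session.

    The bug report says ATS applies the 30s transaction timeout even to
    pooled idle sessions, closing them far earlier than configured."""
    if session_state == "pooled":
        return KEEP_ALIVE_NO_ACTIVITY_TIMEOUT_OUT
    if session_state == "in_transaction":
        return TRANSACTION_NO_ACTIVITY_TIMEOUT_OUT
    raise ValueError("unknown session state: %r" % session_state)
```

Under this model, a session released into the shared pool at 14:19:19 should not see an inactivity event until 14:21:19, whereas the trace shows VC_EVENT_INACTIVITY_TIMEOUT arriving after roughly 30 seconds.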






[jira] [Updated] (TS-3312) KA timeout to origin does not seem to honor configurations

2015-01-21 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3312:
--
Fix Version/s: 5.3.0

> KA timeout to origin does not seem to honor configurations
> --
>
> Key: TS-3312
> URL: https://issues.apache.org/jira/browse/TS-3312
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, HTTP
>Reporter: Leif Hedstrom
> Fix For: 5.3.0
>
>
> Doing some basic testing, with the following settings:
> {code}
> CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 120
> CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
> {code}
> I see ATS timing out the origin sessions after 30 seconds, consistent with:
> {code}
> CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
> {code}
> What's also interesting, after I made a config change per Geffon's suggestion:
> {code}
> CONFIG proxy.config.http.origin_min_keep_alive_connections INT 10
> {code}
> I see the following in the diagnostic trace:
> {code}
> [Jan 21 14:19:19.416] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] [release 
> session] session placed into shared pool
> [Jan 21 14:19:49.558] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
> reseting timeout to maintain minimum number of connections
> [Jan 21 14:20:19.633] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
> reseting timeout to maintain minimum number of connections
> [Jan 21 14:20:19.670] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_pool] session 0x1cc5aa0 received io notice [VC_EVENT_EOS]
> {code}
> So, not only is it resetting the timeout twice, it also gets a VC_EVENT_EOS. 
> I first thought it was the origin that closed the connection, but from what I 
> could tell, the timeout on the origin was set to 60s.





[jira] [Created] (TS-3313) New World order for connection management and timeouts

2015-01-21 Thread Leif Hedstrom (JIRA)
Leif Hedstrom created TS-3313:
-

 Summary: New World order for connection management and timeouts
 Key: TS-3313
 URL: https://issues.apache.org/jira/browse/TS-3313
 Project: Traffic Server
  Issue Type: New Feature
  Components: Core
Reporter: Leif Hedstrom


This is an umbrella ticket for all issues related to connection management and 
timeouts.





[jira] [Updated] (TS-3313) New World order for connection management and timeouts

2015-01-21 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3313:
--
Labels: Umbrella  (was: )

> New World order for connection management and timeouts
> --
>
> Key: TS-3313
> URL: https://issues.apache.org/jira/browse/TS-3313
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: Leif Hedstrom
>  Labels: Umbrella
>
> This is an umbrella ticket for all issues related to connection management 
> and timeouts.





[jira] [Updated] (TS-3313) New World order for connection management and timeouts

2015-01-21 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3313:
--
Fix Version/s: 5.3.0

> New World order for connection management and timeouts
> --
>
> Key: TS-3313
> URL: https://issues.apache.org/jira/browse/TS-3313
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: Leif Hedstrom
>  Labels: Umbrella
> Fix For: 5.3.0
>
>
> This is an umbrella ticket for all issues related to connection management 
> and timeouts.





[jira] [Commented] (TS-3287) Coverity fixes for v5.3.0 by zwoop

2015-01-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286336#comment-14286336
 ] 

ASF subversion and git services commented on TS-3287:
-

Commit 9dc726cb24808e7ee99c890c6326d501bc7be049 in trafficserver's branch 
refs/heads/master from [~jpe...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=9dc726c ]

TS-3287: fix URL rewrite allocation size

We are allocating the URL rewrite path in units of "char", not units
of "char *".

Coverity CID #1254805
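The commit message describes a classic allocation-size slip: reserving `n * sizeof(char *)` bytes for a path that only needs `n` chars (plus a NUL). The difference can be made concrete with ctypes; this is an illustration of the sizing arithmetic only, the real fix lives in the C++ URL-rewrite code:

```python
import ctypes

def alloc_sizes(path):
    """Bytes reserved for a rewrite path under the buggy and the fixed
    sizing. The bug allocated in units of char*, over-allocating by a
    factor of sizeof(char *); the fix allocates in units of char."""
    wrong = len(path) * ctypes.sizeof(ctypes.c_char_p)      # units of char *
    right = (len(path) + 1) * ctypes.sizeof(ctypes.c_char)  # units of char, +1 for NUL
    return wrong, right
```

On a 64-bit build the buggy form reserves eight times the needed space (harmless but wasteful); the reverse mistake, allocating chars where pointers are stored, would be a buffer overrun, which is why Coverity flags both directions.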


> Coverity fixes for v5.3.0 by zwoop
> --
>
> Key: TS-3287
> URL: https://issues.apache.org/jira/browse/TS-3287
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: Leif Hedstrom
>Assignee: Leif Hedstrom
> Fix For: 5.3.0
>
>
> This is my JIRA for Coverity commits for v5.3.0.





[jira] [Commented] (TS-3313) New World order for connection management and timeouts

2015-01-21 Thread Bryan Call (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286340#comment-14286340
 ] 

Bryan Call commented on TS-3313:


Here is a rough design draft. It will become a wiki page once it is more 
polished.

https://docs.google.com/document/d/1Y5iKRas1Bd-LbsHltKvB4rNd5ySA-L5aSMHoEK8YlvE/edit

> New World order for connection management and timeouts
> --
>
> Key: TS-3313
> URL: https://issues.apache.org/jira/browse/TS-3313
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: Leif Hedstrom
>  Labels: Umbrella
> Fix For: 5.3.0
>
>
> This is an umbrella ticket for all issues related to connection management 
> and timeouts.





[jira] [Commented] (TS-1336) High CPU Usage at idle

2015-01-21 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286338#comment-14286338
 ] 

Leif Hedstrom commented on TS-1336:
---

I spoke with God (jplevyak) on this issue recently. One suspicion is that 
we're either using too-short timeout intervals or scheduling things too 
frequently. Definitely worth looking into; maybe TS-3313 will also help with 
some of these issues.


> High CPU Usage at idle
> --
>
> Key: TS-1336
> URL: https://issues.apache.org/jira/browse/TS-1336
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 3.0.5, 3.0.2
> Environment: Ubuntu 12.04 server, amd64, Xenon E5520 (4-core, 16 
> cores with HT)
>Reporter: Greg Smolyn
>  Labels: A
> Fix For: sometime
>
>
> On this unloaded system, a very basic traffic server instance is using 180% 
> CPU, with 3 threads ET_TASK 0, ET_TASK 1, and LOGGING taking up about 60% 
> each.
> top -H output:
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
> 10723 traffics  20   0 1960m 113m 4168 R   61  0.4   9:11.27 [ET_TASK 1]
> 10722 traffics  20   0 1960m 113m 4168 R   60  0.4   8:41.61 [ET_TASK 0]
> 10720 traffics  20   0 1960m 113m 4168 S   59  0.4   8:49.19 [LOGGING]
>    19 root      20   0     0    0    0 R   15  0.0 898:45.74 ksoftirqd/3
>    10 root      20   0     0    0    0 S   15  0.0 930:16.92 ksoftirqd/1
>    27 root      20   0     0    0    0 S   14  0.0 893:18.41 ksoftirqd/5
>    35 root      20   0     0    0    0 S   14  0.0 888:54.41 ksoftirqd/7
>     3 root      20   0     0    0    0 S    8  0.0 942:48.39 ksoftirqd/0
>    15 root      20   0     0    0    0 S    7  0.0 906:40.98 ksoftirqd/2
>    23 root      20   0     0    0    0 S    7  0.0 907:30.33 ksoftirqd/4
>    31 root      20   0     0    0    0 S    7  0.0 898:13.05 ksoftirqd/6
> 13530 root      20   0 98.2m 3244 2572 S    1  0.0  29:28.86 flip_server
>  9425 root      20   0 17568 1592 1060 R    0  0.0   0:04.16 top
> 10689 traffics  20   0 1960m 113m 4168 S    0  0.4   0:00.54 [ET_NET 5]
> 10693 traffics  20   0 1960m 113m 4168 S    0  0.4   0:00.51 [ET_NET 9]
> 10701 traffics  20   0 1960m 113m 4168 S    0  0.4   0:00.56 [ET_NET 17]
> 10702 traffics  20   0 1960m 113m 4168 S    0  0.4   0:00.53 [ET_NET 18]
> 10705 traffics  20   0 1960m 113m 4168 S    0  0.4   0:00.54 [ET_NET 21]
>     1 root      20   0 24328 2256 1344 S    0  0.0   0:02.53 init
>     2 root      20   0     0    0    0 S    0  0.0   0:00.05 kth

[jira] [Commented] (TS-2406) Sig 11: Segmentation fault

2015-01-21 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286349#comment-14286349
 ] 

Leif Hedstrom commented on TS-2406:
---

[~shinrich] If you looked at this, can you please either close this as "can't 
reproduce" or at least update the Subject line to something more descriptive 
than segfault :).

> Sig 11: Segmentation fault
> --
>
> Key: TS-2406
> URL: https://issues.apache.org/jira/browse/TS-2406
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 4.2.0
>Reporter: Neddy
>Assignee: Susan Hinrichs
>  Labels: Crash
> Fix For: 6.0.0
>
> Attachments: traffic.out.bak
>
>
> I've noticed this today, still don't know why
> [Nov 27 21:10:49.280] Manager {0x7f54eff15720} ERROR: 
> [LocalManager::pollMgmtProcessServer] Server Process terminated due to Sig 
> 11: Segmentation fault
> [Nov 28 07:53:42.853] Manager {0x7f54eff15720} ERROR: 
> [LocalManager::pollMgmtProcessServer] Server Process terminated due to Sig 
> 11: Segmentation fault
> How can I trace this?





[jira] [Updated] (TS-2573) Exponentional increasing of cluster timeouts

2015-01-21 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-2573:
--
Fix Version/s: (was: sometime)

> Exponentional increasing of cluster timeouts
> 
>
> Key: TS-2573
> URL: https://issues.apache.org/jira/browse/TS-2573
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Clustering
>Reporter: Peter Walsh
>
> Occasionally we see cluster operations will start timing out after 5 seconds. 
>  This will continue at an increasing rate until traffic server is restarted.  
> The following stats increase when this happens, 
> proxy.process.cluster.remote_op_timeouts and 
> proxy.process.cluster.connections_open. 
> I can tell that spikes in IO wait can contribute to this issue.
> Any ideas?





[jira] [Resolved] (TS-2573) Exponentional increasing of cluster timeouts

2015-01-21 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom resolved TS-2573.
---
Resolution: Invalid

Closing as per Peter's recommendation.

> Exponentional increasing of cluster timeouts
> 
>
> Key: TS-2573
> URL: https://issues.apache.org/jira/browse/TS-2573
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Clustering
>Reporter: Peter Walsh
> Fix For: sometime
>
>
> Occasionally we see cluster operations will start timing out after 5 seconds. 
>  This will continue at an increasing rate until traffic server is restarted.  
> The following stats increase when this happens, 
> proxy.process.cluster.remote_op_timeouts and 
> proxy.process.cluster.connections_open. 
> I can tell that spikes in IO wait can contribute to this issue.
> Any ideas?





[jira] [Commented] (TS-2903) Connections are leaked at about 1000 per hour

2015-01-21 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286373#comment-14286373
 ] 

Leif Hedstrom commented on TS-2903:
---

[~shinrich] Wasn't there a fix related to this committed fairly recently?

> Connections are leaked at about 1000 per hour
> -
>
> Key: TS-2903
> URL: https://issues.apache.org/jira/browse/TS-2903
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Reporter: Puneet Dhaliwal
>Assignee: Susan Hinrichs
> Fix For: sometime
>
>
> For version 3.2.5, with keep alive on for in/out and post out, connections 
> were leaked at about 1000 per hour. The limit of 
> proxy.config.net.connections_throttle was reached at 30k and at 60k after 
> enough time.
> CONFIG proxy.config.http.keep_alive_post_out INT 1
> CONFIG proxy.config.http.keep_alive_enabled_in INT 1
> CONFIG proxy.config.http.keep_alive_enabled_out INT 1
> This might also be happening for 4.2.1 and 5.0.
> Please let me know if any further information is required.





[jira] [Updated] (TS-3283) Certain SSL handshake error during client-hello hangs the client and leaves network connection open

2015-01-21 Thread Phil Sorber (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Sorber updated TS-3283:

Fix Version/s: 4.2.3

> Certain SSL handshake error during client-hello hangs the client and leaves 
> network connection open
> ---
>
> Key: TS-3283
> URL: https://issues.apache.org/jira/browse/TS-3283
> Project: Traffic Server
>  Issue Type: Bug
>  Components: SSL
>Reporter: Joe Chung
>Assignee: Phil Sorber
> Fix For: 4.2.3, 5.3.0
>
>
> h3. Problem Description
> Send an SSLv2 Client Hello with an old cipher suite request against Traffic 
> Server 4.2.2, and the connection will freeze on the client side and 
> eventually time out after 120 seconds.
> Traffic Server detects the SSL error, but instead of closing the 
> connection it goes on to accept new connections.
> h3. Reproduction
> === Client: Macbook Pro running OSX Mavericks 10.9.5 ===
> {code:none}
> $ openssl version -a
> OpenSSL 0.9.8za 5 Jun 2014
> built on: Aug 10 2014
> platform: darwin64-x86_64-llvm
> options:  bn(64,64) md2(int) rc4(ptr,char) des(idx,cisc,16,int) blowfish(idx)
> compiler: -arch x86_64 -fmessage-length=0 -pipe -Wno-trigraphs 
> -fpascal-strings -fasm-blocks -O3 -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H 
> -DL_ENDIAN -DMD32_REG_T=int -DOPENSSL_NO_IDEA -DOPENSSL_PIC -DOPENSSL_THREADS 
> -DZLIB -mmacosx-version-min=10.6
> OPENSSLDIR: "/System/Library/OpenSSL"
> {code}
> h4. The following command triggers the bad behavior on the 4.2.2 server.
> {code:none}
> $ openssl s_client -connect 192.168.20.130:443 -ssl2 -debug
> CONNECTED(0003)
> write to 0x7fb9f2508610 [0x7fb9f300f201] (45 bytes => 45 (0x2D))
>  - 80 2b 01 00 02 00 12 00-00 00 10 07 00 c0 03 00   .+..
> 0010 - 80 01 00 80 06 00 40 04-00 80 02 00 80 f4 71 1a   ..@...q.
> 0020 - ad 23 06 59 4d f8 d2 c5-b2 57 a9 66 4c.#.YMW.fL
> ^C
> {code}
> At this point, the client is hung, and I have to hit ctrl-c to interrupt it 
> or wait 120 seconds for the TCP timeout.
> h3. Server: Lubuntu 13.10 on VMware
> {code:none}
> $ openssl version -a
> OpenSSL 1.0.1e 11 Feb 2013
> built on: Fri Jun 20 18:52:25 UTC 2014
> platform: debian-i386
> options:  bn(64,32) rc4(8x,mmx) des(ptr,risc1,16,long) blowfish(idx) 
> compiler: cc -fPIC -DOPENSSL_PIC -DZLIB -DOPENSSL_THREADS -D_REENTRANT 
> -DDSO_DLFCN -DHAVE_DLFCN_H -DL_ENDIAN -DTERMIO -g -O2 -fstack-protector 
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security 
> -D_FORTIFY_SOURCE=2 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wa,--noexecstack 
> -Wall -DOPENSSL_NO_TLS1_2_CLIENT -DOPENSSL_MAX_TLS1_2_CIPHER_LENGTH=50 
> -DOPENSSL_BN_ASM_PART_WORDS -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT 
> -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM 
> -DRMD160_ASM -DAES_ASM -DVPAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM
> OPENSSLDIR: "/usr/lib/ssl"
> {code}
> {code:none}
> $ diff /usr/local/etc/trafficserver/records.config.422 
> /usr/local/etc/trafficserver/records.config
> 113c113
> < CONFIG proxy.config.http.server_ports STRING 8080
> ---
> > CONFIG proxy.config.http.server_ports STRING 8080 443:ssl
> 594,595c594,595
> < CONFIG proxy.config.diags.debug.enabled INT 0
> < CONFIG proxy.config.diags.debug.tags STRING http.*|dns.*
> ---
> > CONFIG proxy.config.diags.debug.enabled INT 1
> > CONFIG proxy.config.diags.debug.tags STRING ssl.*
> {code}
> {code:none}
> $ /usr/local/bin/traffic_server --version
> [TrafficServer] using root directory '/usr/local'
> Apache Traffic Server - traffic_server - 4.2.2 - (build # 0723 on Jan  7 2015 
> at 23:04:32)
> $ sudo /usr/local/bin/traffic_server
> [sudo] password for user:
> [TrafficServer] using root directory '/usr/local'
> [Jan  8 00:53:42.618] Server {0xb702e700} DEBUG: (ssl) setting SNI callbacks 
> with for ctx 0xa4a7928
> [Jan  8 00:53:42.618] Server {0xb702e700} DEBUG: (ssl) indexed '*' with 
> SSL_CTX 0xa4a7928
> [Jan  8 00:53:42.619] Server {0xb702e700} DEBUG: (ssl) importing SNI names 
> from /usr/local/etc/trafficserver
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) 
> [SSLNextProtocolAccept:mainEvent] event 202 netvc 0xb280fa90
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) ssl_callback_info ssl: 
> 0xb280fcb8 where: 16 ret: 1
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) ssl_callback_info ssl: 
> 0xb280fcb8 where: 8193 ret: 1
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) ssl_callback_info ssl: 
> 0xb280fcb8 where: 8194 ret: -1
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG: (ssl) 
> SSL::3055967040:error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown 
> protocol:s23_srvr.c:628
> [Jan  8 00:54:02.256] Server {0xb6265b40} DEBUG:  (sslServerHandShakeEvent)> (ssl) SSL handshake error: SSL_ERROR_SSL (1), 
> errno=0
> {code}
>

[jira] [Commented] (TS-3304) segfault in libtsutils

2015-01-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286403#comment-14286403
 ] 

ASF subversion and git services commented on TS-3304:
-

Commit dab592a536960b43634fb6663530e9d4ed2b in trafficserver's branch 
refs/heads/4.2.x from [~psudaemon]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=dab ]

TS-3304: Add NULL check to ink_inet_addr() input

(cherry picked from commit f93ca30fd7e67168805c8bce0dd8f72a7fc73934)

Conflicts:
CHANGES
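The backtrace below shows hostname=0x0 flowing from HostDB's reverse-lookup path into ink_inet_addr(), which then dereferences it. A minimal sketch of the idea behind the fix, assuming a simplified dotted-quad parser (toy_inet_addr, its parsing rules, and the UINT32_MAX failure value are illustrative stand-ins, not the actual ATS implementation):

```cpp
#include <cctype>
#include <cstdint>

// Hypothetical stand-in for ink_inet_addr(): parse "a.b.c.d" into a
// 32-bit value, returning UINT32_MAX (playing the role of INADDR_NONE)
// on failure. The part that mirrors the TS-3304 fix is the explicit
// null guard before the first dereference of the input.
uint32_t toy_inet_addr(const char *s) {
  if (s == nullptr) {  // TS-3304: guard before dereferencing the input
    return UINT32_MAX;
  }
  uint32_t addr = 0;
  for (int octet = 0; octet < 4; ++octet) {
    if (!std::isdigit(static_cast<unsigned char>(*s))) {
      return UINT32_MAX;  // each octet must start with a digit
    }
    uint32_t v = 0;
    while (std::isdigit(static_cast<unsigned char>(*s))) {
      v = v * 10 + static_cast<uint32_t>(*s++ - '0');
      if (v > 255) {
        return UINT32_MAX;  // octet out of range
      }
    }
    addr = (addr << 8) | v;
    if (octet < 3) {
      if (*s != '.') {
        return UINT32_MAX;  // octets must be dot-separated
      }
      ++s;
    }
  }
  return (*s == '\0') ? addr : UINT32_MAX;  // reject trailing junk
}
```

Without the guard, a nullptr hostname (as seen in frame #3 of the trace) segfaults on the very first character read, matching the "segfault at 0" kernel messages.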


> segfault in libtsutils
> --
>
> Key: TS-3304
> URL: https://issues.apache.org/jira/browse/TS-3304
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HostDB
>Reporter: Steve Malenfant
>Assignee: Leif Hedstrom
> Fix For: 5.3.0
>
>
> Getting multiple segfaults per day on 4.2.1. 
> [4324544.324222] [ET_NET 23][10504]: segfault at 0 ip 2acd66546168 sp 
> 2acd71f190b8 error 4 in libtsutil.so.4.2.1[2acd66521000+34000]
> [4410696.820857] [ET_NET 19][22738]: segfault at 0 ip 2af09f339168 sp 
> 2af0aa9230b8 error 4 in libtsutil.so.4.2.1[2af09f314000+34000]
> [4497039.474253] [ET_NET 12][34872]: segfault at 0 ip 2ad17e6a1168 sp 
> 2ad1896100b8 error 4 in libtsutil.so.4.2.1[2ad17e67c000+34000]
> [4583372.073916] [ET_NET 3][46994]: segfault at 0 ip 2aced4227168 sp 
> 2aceda7d80b8 error 4 in libtsutil.so.4.2.1[2aced4202000+34000]
> [4756046.944373] [ET_NET 22][10799]: segfault at 0 ip 2b1771f76168 sp 
> 2b177d9130b8 error 4 in libtsutil.so.4.2.1[2b1771f51000+34000]
> Stack Trace :
> (gdb) bt
> #0  ink_inet_addr (s=) at ink_inet.cc:107
> #1  0x005e0df5 in is_dotted_form_hostname (mutex=0x1d32cb0, md5=..., 
> ignore_timeout=false) at P_HostDBProcessor.h:545
> #2  probe (mutex=0x1d32cb0, md5=..., ignore_timeout=false) at HostDB.cc:668
> #3  0x005e2b34 in HostDBProcessor::getby (this=, 
> cont=0x2b514cc749d0, hostname=0x0, len=, 
> ip=0x2b50e8f092b0, aforce_dns=false, host_res_style=HOST_RES_NONE, 
> dns_lookup_timeout=0)
> at HostDB.cc:772
> #4  0x00517f2c in getbyaddr_re (this=0x2b514cc749d0) at 
> ../../iocore/hostdb/I_HostDBProcessor.h:417
> #5  HttpSM::do_hostdb_reverse_lookup (this=0x2b514cc749d0) at HttpSM.cc:3968
> #6  0x0052f028 in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6932
> #7  0x00518242 in HttpSM::do_hostdb_lookup (this=0x2b514cc749d0) at 
> HttpSM.cc:3950
> #8  0x0052f44a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6925
> #9  0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #10 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #11 0x0052ea8a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:7224
> #12 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #13 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #14 0x005284fa in HttpSM::handle_api_return (this=0x2b514cc749d0) at 
> HttpSM.cc:1559
> #15 0x0052ea9a in HttpSM::set_next_state (this=0x2b514cc749d0) at 
> HttpSM.cc:6825
> #16 0x0052fef6 in HttpSM::state_read_client_request_header 
> (this=0x2b514cc749d0, event=100, data=) at HttpSM.cc:821
> #17 0x0052a5b8 in HttpSM::main_handler (this=0x2b514cc749d0, 
> event=100, data=0x2b514802ca08) at HttpSM.cc:2539
> #18 0x0068793b in handleEvent (event=, 
> vc=0x2b514802c900) at ../../iocore/eventsystem/I_Continuation.h:146
> #19 read_signal_and_update (event=, vc=0x2b514802c900) 
> at UnixNetVConnection.cc:138
> #20 0x00689ec4 in read_from_net (nh=0x2b50e2e17c10, 
> vc=0x2b514802c900, thread=) at UnixNetVConnection.cc:320
> #21 0x0067fb12 in NetHandler::mainNetEvent (this=0x2b50e2e17c10, 
> event=, e=) at UnixNet.cc:384
> #22 0x006ac8cf in handleEvent (this=0x2b50e2e14010, e=0x1a9ef30, 
> calling_code=5) at I_Continuation.h:146
> #23 EThread::process_event (this=0x2b50e2e14010, e=0x1a9ef30, calling_code=5) 
> at UnixEThread.cc:145
> #24 0x006ad273 in EThread::execute (this=0x2b50e2e14010) at 
> UnixEThread.cc:269
> #25 0x006abc2a in spawn_thread_internal (a=0x198f820) at Thread.cc:88
> #26 0x2b50e026b9d1 in start_thread () from /lib64/libpthread.so.0
> #27 0x00381b2e8b6d in clone () from /lib64/libc.so.6
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3283) Certain SSL handshake error during client-hello hangs the client and leaves network connection open

2015-01-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286404#comment-14286404
 ] 

ASF subversion and git services commented on TS-3283:
-

Commit cadb017ecee0c53ab1cf9d5b0ab5f7ead9204156 in trafficserver's branch 
refs/heads/4.2.x from [~joechung]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=cadb017 ]

TS-3283: Certain SSL handshake error during client-hello hangs the client and 
leaves network connection open
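The intent of the change can be modeled as a small state machine: on a fatal handshake result (such as SSL_ERROR_SSL from the SSLv2 ClientHello in the reproduction below) the vconnection must be torn down immediately rather than left open waiting for further events. This toy model uses invented names and is only a sketch of that behavior, not the ATS code:

```cpp
// Illustrative model of the handshake-error handling fixed by TS-3283.
enum class HandshakeResult { Ok, WantRead, FatalError };
enum class ConnState { Handshaking, Established, Closed };

ConnState on_handshake_event(ConnState st, HandshakeResult r) {
  if (st != ConnState::Handshaking) {
    return st;  // only handshake-phase events are modeled here
  }
  switch (r) {
    case HandshakeResult::Ok:
      return ConnState::Established;
    case HandshakeResult::WantRead:
      return ConnState::Handshaking;  // non-fatal: retry on next I/O event
    case HandshakeResult::FatalError:
      return ConnState::Closed;  // the fix: close now, don't leave it open
  }
  return st;
}
```

Before the fix, the FatalError case effectively behaved like WantRead: the server logged the error but kept the connection open, so the client hung until the roughly 120-second TCP timeout.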


> Certain SSL handshake error during client-hello hangs the client and leaves 
> network connection open
> ---
>
> Key: TS-3283
> URL: https://issues.apache.org/jira/browse/TS-3283
> Project: Traffic Server
>  Issue Type: Bug
>  Components: SSL
>Reporter: Joe Chung
>Assignee: Phil Sorber
> Fix For: 4.2.3, 5.3.0
>
>
> h3. Problem Description
> Send an SSLv2 Client Hello with an old cipher suite request against Traffic 
> Server 4.2.2, and the connection will freeze on the client side and 
> eventually time out after 120 seconds.
> The Traffic Server detects the SSL error, but instead of closing the 
> connection, goes on to accept new connections.

[jira] [Updated] (TS-3283) Certain SSL handshake error during client-hello hangs the client and leaves network connection open

2015-01-21 Thread Phil Sorber (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Sorber updated TS-3283:

Backport to Version:   (was: 4.2.3)

> Certain SSL handshake error during client-hello hangs the client and leaves 
> network connection open
> ---
>
> Key: TS-3283
> URL: https://issues.apache.org/jira/browse/TS-3283
> Project: Traffic Server
>  Issue Type: Bug
>  Components: SSL
>Reporter: Joe Chung
>Assignee: Phil Sorber
> Fix For: 4.2.3, 5.3.0
>
>

[jira] [Updated] (TS-3283) Certain SSL handshake error during client-hello hangs the client and leaves network connection open

2015-01-21 Thread Phil Sorber (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Sorber updated TS-3283:

Fix Version/s: (was: 5.3.0)

> Certain SSL handshake error during client-hello hangs the client and leaves 
> network connection open
> ---
>
> Key: TS-3283
> URL: https://issues.apache.org/jira/browse/TS-3283
> Project: Traffic Server
>  Issue Type: Bug
>  Components: SSL
>Reporter: Joe Chung
>Assignee: Phil Sorber
> Fix For: 4.2.3
>
>

[jira] [Updated] (TS-3304) segfault in libtsutils

2015-01-21 Thread Phil Sorber (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Sorber updated TS-3304:

Backport to Version: 5.2.1  (was: 4.2.3, 5.2.1)

> segfault in libtsutils
> --
>
> Key: TS-3304
> URL: https://issues.apache.org/jira/browse/TS-3304
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HostDB
>Reporter: Steve Malenfant
>Assignee: Leif Hedstrom
> Fix For: 4.2.3, 5.3.0
>
>





[jira] [Updated] (TS-3313) New World order for connection management and timeouts

2015-01-21 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3313:
--
Assignee: Bryan Call

> New World order for connection management and timeouts
> --
>
> Key: TS-3313
> URL: https://issues.apache.org/jira/browse/TS-3313
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: Leif Hedstrom
>Assignee: Bryan Call
>  Labels: Umbrella
> Fix For: 5.3.0
>
>
> This is an umbrella ticket for all issues related to connection management 
> and timeouts.





[jira] [Resolved] (TS-3283) Certain SSL handshake error during client-hello hangs the client and leaves network connection open

2015-01-21 Thread Phil Sorber (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Sorber resolved TS-3283.
-
Resolution: Fixed

This is backported and will be in the next 4.2.x release.

> Certain SSL handshake error during client-hello hangs the client and leaves 
> network connection open
> ---
>
> Key: TS-3283
> URL: https://issues.apache.org/jira/browse/TS-3283
> Project: Traffic Server
>  Issue Type: Bug
>  Components: SSL
>Reporter: Joe Chung
>Assignee: Phil Sorber
> Fix For: 4.2.3
>
>

[jira] [Updated] (TS-3304) segfault in libtsutils

2015-01-21 Thread Phil Sorber (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Sorber updated TS-3304:

Fix Version/s: 4.2.3

> segfault in libtsutils
> --
>
> Key: TS-3304
> URL: https://issues.apache.org/jira/browse/TS-3304
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HostDB
>Reporter: Steve Malenfant
>Assignee: Leif Hedstrom
> Fix For: 4.2.3, 5.3.0
>
>
> #23 EThread::process_event (this=0x2b50e2e14010, e=0x1a9ef30, calling_code=5) 
> at UnixEThread.cc:145
> #24 0x006ad273 in EThread::execute (this=0x2b50e2e14010) at 
> UnixEThread.cc:269
> #25 0x006abc2a in spawn_thread_internal (a=0x198f820) at Thread.cc:88
> #26 0x2b50e026b9d1 in start_thread () from /lib64/libpthread.so.0
> #27 0x00381b2e8b6d in clone () from /lib64/libc.so.6
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3313) New World order for connection management and timeouts

2015-01-21 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom updated TS-3313:
--
Fix Version/s: (was: 5.3.0)
   6.0.0

> New World order for connection management and timeouts
> --
>
> Key: TS-3313
> URL: https://issues.apache.org/jira/browse/TS-3313
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: Leif Hedstrom
>Assignee: Bryan Call
>  Labels: Umbrella
> Fix For: 6.0.0
>
>
> This is an umbrella ticket for all issues related to connection management 
> and timeouts.





[jira] [Commented] (TS-3313) New World order for connection management and timeouts

2015-01-21 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286414#comment-14286414
 ] 

Leif Hedstrom commented on TS-3313:
---

Marking this as a 6.0.0 fix version target. I'm hoping that much of this will 
go in for v5.3.0, but things that would break compatibility would need to go 
into 6.0.0.

> New World order for connection management and timeouts
> --
>
> Key: TS-3313
> URL: https://issues.apache.org/jira/browse/TS-3313
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: Leif Hedstrom
>Assignee: Bryan Call
>  Labels: Umbrella
> Fix For: 6.0.0
>
>
> This is an umbrella ticket for all issues related to connection management 
> and timeouts.





[jira] [Closed] (TS-2406) Sig 11: Segmentation fault

2015-01-21 Thread Susan Hinrichs (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Susan Hinrichs closed TS-2406.
--
   Resolution: Cannot Reproduce
Fix Version/s: (was: 6.0.0)

Not enough information to pursue.  Based on the log info, it looks like SSL was 
enabled.  The SSL connection handling has evolved considerably since 4.2.

Please reopen if this is still occurring.  A stack trace or core would be 
useful in tracking down the situation.  In addition, a description of the 
traffic and configuration would be useful (transparent or explicit proxy, 
forward or reverse proxy). 


> Sig 11: Segmentation fault
> --
>
> Key: TS-2406
> URL: https://issues.apache.org/jira/browse/TS-2406
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 4.2.0
>Reporter: Neddy
>Assignee: Susan Hinrichs
>  Labels: Crash
> Attachments: traffic.out.bak
>
>
> I've noticed this today, still don't know why
> [Nov 27 21:10:49.280] Manager {0x7f54eff15720} ERROR: 
> [LocalManager::pollMgmtProcessServer] Server Process terminated due to Sig 
> 11: Segmentation fault
> [Nov 28 07:53:42.853] Manager {0x7f54eff15720} ERROR: 
> [LocalManager::pollMgmtProcessServer] Server Process terminated due to Sig 
> 11: Segmentation fault
> How can I trace this?





[jira] [Commented] (TS-153) "Dynamic" keep-alive timeouts

2015-01-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286467#comment-14286467
 ] 

ASF subversion and git services commented on TS-153:


Commit 1dfc029c78bd986e42fcd10a2a30f94aa8f54e7a in trafficserver's branch 
refs/heads/master from [~bcall]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=1dfc029 ]

TS-153: Updates to traffic_top to report dynamic KA times


> "Dynamic" keep-alive timeouts
> -
>
> Key: TS-153
> URL: https://issues.apache.org/jira/browse/TS-153
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: Leif Hedstrom
>Assignee: Bryan Call
>Priority: Minor
>  Labels: A
> Fix For: 5.3.0
>
> Attachments: ts153.diff
>
>
> (This is from a Y! Bugzilla ticket 1821593, adding it here. Originally 
> posted by Leif Hedstrom on 2008-03-19):
> Currently you have to set static keep-alive idle timeouts in TS, e.g.
>CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 8
>CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 30
> even with epoll() in 1.17.x, this is difficult to configure with an 
> appropriate timeout. The key here is that the settings above need to ensure 
> that you stay below the max configured number of connections, e.g.:
> CONFIG proxy.config.net.connections_throttle INT 75000
> I'm suggesting that we add one (or two) new configuration options, and 
> appropriate TS code support, so that instead of specifying timeouts, we 
> specify connection limits for idle KA connections. For example:
> CONFIG proxy.config.http.keep_alive_max_idle_connections_in INT 5
> CONFIG proxy.config.http_keep_alive_max_idle_connections_out INT 5000
> (one still has to be careful to leave head-room for active connections here; 
> in the example above, 2 connections could be active, which is a lot of 
> traffic).
> These would override the idle timeouts, so one could use the max_idle 
> connections for incoming (client) connections,
> and the idle timeouts for outgoing (origin) connections for instance.
> The benefit here is that it makes configuration not only easier, but also a 
> lot safer for many applications.





[jira] [Commented] (TS-3312) KA timeout to origin does not seem to honor configurations

2015-01-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286509#comment-14286509
 ] 

ASF subversion and git services commented on TS-3312:
-

Commit 0b7bf112a778111469f837a3fbba3982d217bb5d in trafficserver's branch 
refs/heads/master from [~bcall]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=0b7bf11 ]

TS-153: Dynamic keep-alive timeouts
Renaming proxy.config.net.connections.threshold_shed_idle_in to
proxy.config.net.max_connections_in to be inline with the overall
design in TS-3312


> KA timeout to origin does not seem to honor configurations
> --
>
> Key: TS-3312
> URL: https://issues.apache.org/jira/browse/TS-3312
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core, HTTP
>Reporter: Leif Hedstrom
> Fix For: 5.3.0
>
>
> Doing some basic testing, with the following settings:
> {code}
> CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 120
> CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
> {code}
> I see ATS timing out the origin sessions after 30sec, with a 
> {code}
> CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 30
> {code}
> What's also interesting, after I made a config change per Geffon's suggestion:
> {code}
> CONFIG proxy.config.http.origin_min_keep_alive_connections INT 10
> {code}
> I see the following in the diagnostic trace:
> {code}
> [Jan 21 14:19:19.416] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] [release 
> session] session placed into shared pool
> [Jan 21 14:19:49.558] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
> reseting timeout to maintain minimum number of connections
> [Jan 21 14:20:19.633] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_bucket] session received io notice [VC_EVENT_INACTIVITY_TIMEOUT], 
> reseting timeout to maintain minimum number of connections
> [Jan 21 14:20:19.670] Server {0x7fb1b4f06880} DEBUG: (http_ss) [0] 
> [session_pool] session 0x1cc5aa0 received io notice [VC_EVENT_EOS]
> {code}
> So, not only is it resetting the timeout twice, it also gets a VC_EVENT_EOS. 
> I first thought it was the origin that closed the connection, but from what I 
> could tell, the timeout on the origin was set to 60s.





[jira] [Commented] (TS-153) "Dynamic" keep-alive timeouts

2015-01-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286508#comment-14286508
 ] 

ASF subversion and git services commented on TS-153:


Commit 0b7bf112a778111469f837a3fbba3982d217bb5d in trafficserver's branch 
refs/heads/master from [~bcall]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=0b7bf11 ]

TS-153: Dynamic keep-alive timeouts
Renaming proxy.config.net.connections.threshold_shed_idle_in to
proxy.config.net.max_connections_in to be inline with the overall
design in TS-3312


> "Dynamic" keep-alive timeouts
> -
>
> Key: TS-153
> URL: https://issues.apache.org/jira/browse/TS-153
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Core
>Reporter: Leif Hedstrom
>Assignee: Bryan Call
>Priority: Minor
>  Labels: A
> Fix For: 5.3.0
>
> Attachments: ts153.diff
>
>
> (This is from a Y! Bugzilla ticket 1821593, adding it here. Originally 
> posted by Leif Hedstrom on 2008-03-19):
> Currently you have to set static keep-alive idle timeouts in TS, e.g.
>CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 8
>CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 30
> even with epoll() in 1.17.x, this is difficult to configure with an 
> appropriate timeout. The key here is that the settings above need to ensure 
> that you stay below the max configured number of connections, e.g.:
> CONFIG proxy.config.net.connections_throttle INT 75000
> I'm suggesting that we add one (or two) new configuration options, and 
> appropriate TS code support, so that instead of specifying timeouts, we 
> specify connection limits for idle KA connections. For example:
> CONFIG proxy.config.http.keep_alive_max_idle_connections_in INT 5
> CONFIG proxy.config.http_keep_alive_max_idle_connections_out INT 5000
> (one still has to be careful to leave head-room for active connections here; 
> in the example above, 2 connections could be active, which is a lot of 
> traffic).
> These would override the idle timeouts, so one could use the max_idle 
> connections for incoming (client) connections,
> and the idle timeouts for outgoing (origin) connections for instance.
> The benefit here is that it makes configuration not only easier, but also a 
> lot safer for many applications.





[jira] [Commented] (TS-3235) PluginVC crashed with unrecognized event

2015-01-21 Thread zouyu (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286878#comment-14286878
 ] 

zouyu commented on TS-3235:
---

@portl4t,
Does that mean InterceptPlugin in atscppapi cannot be used from the customer's 
threads?
And if it can, should the customer thread make sure that it acquires all the 
locks provided by that continuation before using it?

> PluginVC crashed with unrecognized event
> 
>
> Key: TS-3235
> URL: https://issues.apache.org/jira/browse/TS-3235
> Project: Traffic Server
>  Issue Type: Bug
>  Components: CPP API, HTTP, Plugins
>Reporter: kang li
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: pluginvc-crash.diff
>
>
> We are using atscppapi to create Intercept plugin.
>  
> From the coredump , that seems Continuation of the InterceptPlugin was 
> already been destroyed. 
> {code}
> #0  0x00375ac32925 in raise () from /lib64/libc.so.6
> #1  0x00375ac34105 in abort () from /lib64/libc.so.6
> #2  0x2b21eeae3458 in ink_die_die_die (retval=1) at ink_error.cc:43
> #3  0x2b21eeae3525 in ink_fatal_va(int, const char *, typedef 
> __va_list_tag __va_list_tag *) (return_code=1, 
> message_format=0x2b21eeaf08d8 "%s:%d: failed assert `%s`", 
> ap=0x2b21f4913ad0) at ink_error.cc:65
> #4  0x2b21eeae35ee in ink_fatal (return_code=1, 
> message_format=0x2b21eeaf08d8 "%s:%d: failed assert `%s`") at ink_error.cc:73
> #5  0x2b21eeae2160 in _ink_assert (expression=0x76ddb8 "call_event == 
> core_lock_retry_event", file=0x76dd04 "PluginVC.cc", line=203)
> at ink_assert.cc:37
> #6  0x00530217 in PluginVC::main_handler (this=0x2b24ef007cb8, 
> event=1, data=0xe0f5b80) at PluginVC.cc:203
> #7  0x004f5854 in Continuation::handleEvent (this=0x2b24ef007cb8, 
> event=1, data=0xe0f5b80) at ../iocore/eventsystem/I_Continuation.h:146
> #8  0x00755d26 in EThread::process_event (this=0x309b250, 
> e=0xe0f5b80, calling_code=1) at UnixEThread.cc:145
> #9  0x0075610a in EThread::execute (this=0x309b250) at 
> UnixEThread.cc:239
> #10 0x00755284 in spawn_thread_internal (a=0x2849330) at Thread.cc:88
> #11 0x2b21ef05f9d1 in start_thread () from /lib64/libpthread.so.0
> #12 0x00375ace8b7d in clone () from /lib64/libc.so.6
> (gdb) p sm_lock_retry_event
> $13 = (Event *) 0x2b2496146e90
> (gdb) p core_lock_retry_event
> $14 = (Event *) 0x0
> (gdb) p active_event
> $15 = (Event *) 0x0
> (gdb) p inactive_event
> $16 = (Event *) 0x0
> (gdb) p *(INKContInternal*)this->core_obj->connect_to
> Cannot access memory at address 0x2b269cd46c10
> {code}





[jira] [Commented] (TS-3235) PluginVC crashed with unrecognized event

2015-01-21 Thread portl4t (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286904#comment-14286904
 ] 

portl4t commented on TS-3235:
-

Yeah, if a thread wants to schedule a continuation, it must hold the 
continuation's lock first. So, if you want to schedule a continuation from the 
customer's thread, you should take the lock in that thread. We don't have to 
hold the lock explicitly in the plugin when there are no customer threads, 
because the ATS worker thread has already done this in the core code.

> PluginVC crashed with unrecognized event
> 
>
> Key: TS-3235
> URL: https://issues.apache.org/jira/browse/TS-3235
> Project: Traffic Server
>  Issue Type: Bug
>  Components: CPP API, HTTP, Plugins
>Reporter: kang li
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: pluginvc-crash.diff
>
>
> We are using atscppapi to create Intercept plugin.
>  
> From the coredump , that seems Continuation of the InterceptPlugin was 
> already been destroyed. 
> {code}
> #0  0x00375ac32925 in raise () from /lib64/libc.so.6
> #1  0x00375ac34105 in abort () from /lib64/libc.so.6
> #2  0x2b21eeae3458 in ink_die_die_die (retval=1) at ink_error.cc:43
> #3  0x2b21eeae3525 in ink_fatal_va(int, const char *, typedef 
> __va_list_tag __va_list_tag *) (return_code=1, 
> message_format=0x2b21eeaf08d8 "%s:%d: failed assert `%s`", 
> ap=0x2b21f4913ad0) at ink_error.cc:65
> #4  0x2b21eeae35ee in ink_fatal (return_code=1, 
> message_format=0x2b21eeaf08d8 "%s:%d: failed assert `%s`") at ink_error.cc:73
> #5  0x2b21eeae2160 in _ink_assert (expression=0x76ddb8 "call_event == 
> core_lock_retry_event", file=0x76dd04 "PluginVC.cc", line=203)
> at ink_assert.cc:37
> #6  0x00530217 in PluginVC::main_handler (this=0x2b24ef007cb8, 
> event=1, data=0xe0f5b80) at PluginVC.cc:203
> #7  0x004f5854 in Continuation::handleEvent (this=0x2b24ef007cb8, 
> event=1, data=0xe0f5b80) at ../iocore/eventsystem/I_Continuation.h:146
> #8  0x00755d26 in EThread::process_event (this=0x309b250, 
> e=0xe0f5b80, calling_code=1) at UnixEThread.cc:145
> #9  0x0075610a in EThread::execute (this=0x309b250) at 
> UnixEThread.cc:239
> #10 0x00755284 in spawn_thread_internal (a=0x2849330) at Thread.cc:88
> #11 0x2b21ef05f9d1 in start_thread () from /lib64/libpthread.so.0
> #12 0x00375ace8b7d in clone () from /lib64/libc.so.6
> (gdb) p sm_lock_retry_event
> $13 = (Event *) 0x2b2496146e90
> (gdb) p core_lock_retry_event
> $14 = (Event *) 0x0
> (gdb) p active_event
> $15 = (Event *) 0x0
> (gdb) p inactive_event
> $16 = (Event *) 0x0
> (gdb) p *(INKContInternal*)this->core_obj->connect_to
> Cannot access memory at address 0x2b269cd46c10
> {code}





[jira] [Commented] (TS-3235) PluginVC crashed with unrecognized event

2015-01-21 Thread zouyu (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286914#comment-14286914
 ] 

zouyu commented on TS-3235:
---

Thanks @portl4t. It seems that InterceptPlugin in atscppapi lacks locking of 
the continuation.


> PluginVC crashed with unrecognized event
> 
>
> Key: TS-3235
> URL: https://issues.apache.org/jira/browse/TS-3235
> Project: Traffic Server
>  Issue Type: Bug
>  Components: CPP API, HTTP, Plugins
>Reporter: kang li
>Assignee: Brian Geffon
> Fix For: 5.3.0
>
> Attachments: pluginvc-crash.diff
>
>
> We are using atscppapi to create Intercept plugin.
>  
> From the coredump , that seems Continuation of the InterceptPlugin was 
> already been destroyed. 
> {code}
> #0  0x00375ac32925 in raise () from /lib64/libc.so.6
> #1  0x00375ac34105 in abort () from /lib64/libc.so.6
> #2  0x2b21eeae3458 in ink_die_die_die (retval=1) at ink_error.cc:43
> #3  0x2b21eeae3525 in ink_fatal_va(int, const char *, typedef 
> __va_list_tag __va_list_tag *) (return_code=1, 
> message_format=0x2b21eeaf08d8 "%s:%d: failed assert `%s`", 
> ap=0x2b21f4913ad0) at ink_error.cc:65
> #4  0x2b21eeae35ee in ink_fatal (return_code=1, 
> message_format=0x2b21eeaf08d8 "%s:%d: failed assert `%s`") at ink_error.cc:73
> #5  0x2b21eeae2160 in _ink_assert (expression=0x76ddb8 "call_event == 
> core_lock_retry_event", file=0x76dd04 "PluginVC.cc", line=203)
> at ink_assert.cc:37
> #6  0x00530217 in PluginVC::main_handler (this=0x2b24ef007cb8, 
> event=1, data=0xe0f5b80) at PluginVC.cc:203
> #7  0x004f5854 in Continuation::handleEvent (this=0x2b24ef007cb8, 
> event=1, data=0xe0f5b80) at ../iocore/eventsystem/I_Continuation.h:146
> #8  0x00755d26 in EThread::process_event (this=0x309b250, 
> e=0xe0f5b80, calling_code=1) at UnixEThread.cc:145
> #9  0x0075610a in EThread::execute (this=0x309b250) at 
> UnixEThread.cc:239
> #10 0x00755284 in spawn_thread_internal (a=0x2849330) at Thread.cc:88
> #11 0x2b21ef05f9d1 in start_thread () from /lib64/libpthread.so.0
> #12 0x00375ace8b7d in clone () from /lib64/libc.so.6
> (gdb) p sm_lock_retry_event
> $13 = (Event *) 0x2b2496146e90
> (gdb) p core_lock_retry_event
> $14 = (Event *) 0x0
> (gdb) p active_event
> $15 = (Event *) 0x0
> (gdb) p inactive_event
> $16 = (Event *) 0x0
> (gdb) p *(INKContInternal*)this->core_obj->connect_to
> Cannot access memory at address 0x2b269cd46c10
> {code}





[jira] [Commented] (TS-2715) Compiler warning in ESI plugin

2015-01-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286930#comment-14286930
 ] 

ASF subversion and git services commented on TS-2715:
-

Commit 55ddb12bbb568bd08cc958b77f90c1c1f8d40281 in trafficserver's branch 
refs/heads/4.2.x from [~zwoop]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=55ddb12 ]

TS-2715 Fix some ESI compile warnings

(cherry picked from commit 1e6b4194e80a99e9d9b9ede0a66611209b52c178)

Conflicts:
CHANGES


> Compiler warning in ESI plugin
> --
>
> Key: TS-2715
> URL: https://issues.apache.org/jira/browse/TS-2715
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Plugins
>Reporter: Leif Hedstrom
>Assignee: Leif Hedstrom
>Priority: Trivial
> Fix For: 5.0.0
>
>
> lib/EsiGzip.cc:36:18: warning: unused variable 'GZIP_TRAILER_SIZE' 
> [-Wunused-const-variable]
> static const int GZIP_TRAILER_SIZE = 8;





Jenkins build is back to normal : clang-analyzer #287

2015-01-21 Thread jenkins
See