[jira] [Commented] (TS-1405) apply time-wheel scheduler about event system

2013-04-10 Thread Leif Hedstrom (JIRA)

[ https://issues.apache.org/jira/browse/TS-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13627875#comment-13627875 ]

Leif Hedstrom commented on TS-1405:
---

I think the max being down is an artifact of less pressure on the box (since it 
now can only do about 60% of the traffic it used to). I ran a few more tests; 
the second one reduces the pressure on the box, to verify that the max 
response time is due to the system being on its knees:

With this patch, and 500 connections (there's no noticeable difference, other 
than the mean time being 30% worse):
{code}
6378965 fetches on 580129 conns, 498 max parallel, 6.378960E+08 bytes, in 60 seconds
100 mean bytes/fetch
106315.6 fetches/sec, 1.063156E+07 bytes/sec
msecs/connect: 0.245 mean, 8.846 max, 0.042 min
msecs/first-response: 3.791 mean, 207.045 max, 0.079 min
{code}

Current master with 300 connections, but at a lower QPS (so less pressure):
{code}
8850329 fetches on 8 conns, 300 max parallel, 8.850330E+08 bytes, in 60 seconds
100 mean bytes/fetch
147505.5 fetches/sec, 1.475055E+07 bytes/sec
msecs/connect: 0.191 mean, 2.037 max, 0.043 min
msecs/first-response: 0.678 mean, 77.340 max, 0.085 min
{code}


So even though this second test on master is doing significantly more QPS 
(almost 50% more), it still has much better response times across the board. 
By reducing the throughput in this last test, such that the system resources 
aren't at their limits (and probably with less rescheduling on lock 
contention), the response times improve. I think that's why, with the patch, 
you see slightly better max response times, but it's really not indicative of 
the patch improving anything: with the patch, ATS simply can't put the system 
under pressure.

This is pretty much the same problem I posted about earlier in this issue. As 
far as I can tell, it's gotten noticeably worse since the first patch sets :).


 apply time-wheel scheduler about event system
 --

 Key: TS-1405
 URL: https://issues.apache.org/jira/browse/TS-1405
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 3.2.0
Reporter: Bin Chen
Assignee: Bin Chen
 Fix For: 3.3.2

 Attachments: linux_time_wheel.patch, linux_time_wheel_v10jp.patch, 
 linux_time_wheel_v11jp.patch, linux_time_wheel_v2.patch, 
 linux_time_wheel_v3.patch, linux_time_wheel_v4.patch, 
 linux_time_wheel_v5.patch, linux_time_wheel_v6.patch, 
 linux_time_wheel_v7.patch, linux_time_wheel_v8.patch, 
 linux_time_wheel_v9jp.patch


 As more and more events accumulate in the event system scheduler, it gets 
 worse. This is the reason we use InactivityCop to handle keep-alives. The 
 new scheduler is a timing wheel, which has better time complexity (O(1)).
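
 For reference, a minimal sketch of the timing-wheel idea (illustrative 
 only, not the code in the attached patches): events are bucketed into 
 slots by expiry tick, so scheduling, cancelling, and advancing all cost 
 O(1) per event, instead of degrading as the event count grows.

{code}
// Minimal timing-wheel sketch (illustrative; not the attached patch).
// Events are placed in the slot for their expiry tick; advancing one
// tick fires exactly one slot, so schedule/cancel/advance cost O(1)
// per event. Delays longer than one wheel turn are not handled here.
#include <cstddef>
#include <functional>
#include <list>
#include <vector>

class TimingWheel {
public:
  explicit TimingWheel(std::size_t slots) : slots_(slots), now_(0) {}

  // Schedule cb to fire `delay` ticks from now (delay < number of slots).
  void schedule(std::size_t delay, std::function<void()> cb) {
    slots_[(now_ + delay) % slots_.size()].push_back(std::move(cb));
  }

  // Advance the wheel one tick and fire everything that just expired.
  void tick() {
    now_ = (now_ + 1) % slots_.size();
    for (auto &cb : slots_[now_])
      cb();
    slots_[now_].clear();
  }

private:
  std::vector<std::list<std::function<void()>>> slots_;
  std::size_t now_;
};
{code}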

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (TS-1719) Lua Plugin breaks build on Linux (Ubuntu 12.10/amd64)

2013-04-10 Thread James Peach (JIRA)

 [ https://issues.apache.org/jira/browse/TS-1719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

James Peach updated TS-1719:


Description: 
{noformat}
Making all in lua
make[3]: Entering directory `/home/igalic/src/asf/trafficserver/plugins/experimental/lua'
/bin/bash ../../../libtool  --tag=CXX   --mode=compile g++ -DHAVE_CONFIG_H -I. -I../../../lib/ts  -I/usr//include/luajit-2.0   -I../../../proxy/api -I../../../proxy/api -D_LARGEFILE64_SOURCE=1 -D_COMPILE64BIT_SOURCE=1 -D_GNU_SOURCE -D_REENTRANT -Dlinux -I/usr/include/tcl8.5  -std=c++11 -g -pipe -Wall -Werror -O3 -feliminate-unused-debug-symbols -fno-strict-aliasing -Wno-invalid-offsetof -MT lua_la-state.lo -MD -MP -MF .deps/lua_la-state.Tpo -c -o lua_la-state.lo `test -f 'state.cc' || echo './'`state.cc
libtool: compile:  g++ -DHAVE_CONFIG_H -I. -I../../../lib/ts -I/usr//include/luajit-2.0 -I../../../proxy/api -I../../../proxy/api -D_LARGEFILE64_SOURCE=1 -D_COMPILE64BIT_SOURCE=1 -D_GNU_SOURCE -D_REENTRANT -Dlinux -I/usr/include/tcl8.5 -std=c++11 -g -pipe -Wall -Werror -O3 -feliminate-unused-debug-symbols -fno-strict-aliasing -Wno-invalid-offsetof -MT lua_la-state.lo -MD -MP -MF .deps/lua_la-state.Tpo -c state.cc  -fPIC -DPIC -o .libs/lua_la-state.o
state.cc: In function 'instanceid_t LuaPluginRegister(unsigned int, const char**)':
state.cc:169:21: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
state.cc: In member function 'bool LuaThreadState::init(LuaPluginInstance*)':
state.cc:251:7: error: 'strerror' was not declared in this scope
cc1plus: all warnings being treated as errors
make[3]: *** [lua_la-state.lo] Error 1
{noformat}
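
Both diagnostics are routine C++ issues. A hedged sketch of the usual fixes 
(state.cc is not shown in this thread, so the names and context below are 
assumed, not the actual plugin code):

{code}
// Hypothetical illustration of the usual fixes; the real state.cc is
// not shown in this thread, so names and context here are assumed.
#include <cerrno>
#include <cstdio>
#include <cstring>  // declares strerror(); a missing include like this
                    // is the common cause of "'strerror' was not
                    // declared in this scope"

void walk_args(unsigned int argc, const char **argv) {
  // -Werror=sign-compare: give the loop index the same signedness as
  // argc, rather than comparing a signed int to an unsigned count.
  for (unsigned int i = 0; i < argc; ++i)
    std::printf("arg: %s\n", argv[i]);
}

void report_failure() {
  std::printf("init failed: %s\n", std::strerror(errno));
}
{code}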


 Lua Plugin breaks build on Linux (Ubuntu 12.10/amd64)
 -

 Key: TS-1719
 URL: https://issues.apache.org/jira/browse/TS-1719
 Project: Traffic Server
  Issue Type: Bug
  Components: Build
Reporter: Igor Galić
Assignee: James Peach
 Fix For: 3.3.2



[jira] [Updated] (TS-608) Is HttpSessionManager::purge_keepalives() too aggressive?

2013-04-10 Thread Leif Hedstrom (JIRA)

 [ https://issues.apache.org/jira/browse/TS-608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Leif Hedstrom updated TS-608:
-

Fix Version/s: 3.3.3 (was: 3.3.2)

 Is HttpSessionManager::purge_keepalives()  too aggressive?
 --

 Key: TS-608
 URL: https://issues.apache.org/jira/browse/TS-608
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Leif Hedstrom
 Fix For: 3.3.3

 Attachments: TS-608.patch


 It seems that if we trigger the max server connections, we call this purge 
 function in the session manager, which will close all currently open 
 keep-alive connections. This seems very aggressive; why not limit it to, 
 say, only removing 10% of each bucket or some such? Also, how does this work 
 together with per-origin limits? Ideally, if per-origin limits are in place, 
 we would only purge sessions for the IP we wish to connect to.
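
 A hedged sketch of the 10%-per-bucket idea (the session and bucket types 
 below are placeholders, not the actual HttpSessionManager internals):

{code}
// Placeholder types; not the real HttpSessionManager internals.
#include <cstddef>
#include <list>
#include <vector>

struct ServerSession {
  void do_io_close() {}  // stand-in for closing the origin connection
};

using Bucket = std::list<ServerSession *>;

// Close only a fraction of each bucket (oldest first) instead of
// draining every open keep-alive connection at once.
void purge_some_keepalives(std::vector<Bucket> &buckets, double fraction = 0.10) {
  for (Bucket &bucket : buckets) {
    std::size_t quota = static_cast<std::size_t>(bucket.size() * fraction) + 1;
    while (quota > 0 && !bucket.empty()) {
      --quota;
      ServerSession *s = bucket.front();
      bucket.pop_front();
      s->do_io_close();
    }
  }
}
{code}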



[jira] [Commented] (TS-1053) get combo_handler compiled

2013-04-10 Thread Leif Hedstrom (JIRA)

[ https://issues.apache.org/jira/browse/TS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13628487#comment-13628487 ]

Leif Hedstrom commented on TS-1053:
---

So, I have some patches for this. But I'm also thinking we should move this 
plugin in under the esi plugin source tree, such that it builds both esi.so 
and combo_handler.so. They share code, and I think it makes sense to combine 
them.

 get combo_handler compiled
 --

 Key: TS-1053
 URL: https://issues.apache.org/jira/browse/TS-1053
 Project: Traffic Server
  Issue Type: Task
  Components: Plugins
Reporter: Conan Wang
Assignee: Leif Hedstrom
 Fix For: 3.3.4

 Attachments: combo_handler.diff, fetcher.diff, Makefile


 combo_handler requires ESI's code. Until ESI is built as a library, you can 
 try it this way: make esi/lib and esi/fetcher subdirectories of 
 combo_handler and use the attached Makefile (a sketch of the setup commands 
 follows the layout below).
 {noformat} 
 combo_handler
 |combo_handler.cc
 |fetcher
 |lib
 |LICENSE
 |Makefile
 |README
 {noformat}
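
 A hedged sketch of the setup commands (the paths assume the usual 
 plugins/experimental layout of the ATS tree; adjust to your checkout):

 {noformat}
 # Hypothetical commands to get the layout above; adjust paths as needed.
 cd plugins/experimental/combo_handler
 cp -r ../esi/lib ../esi/fetcher .
 make
 {noformat}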



[jira] [Updated] (TS-1053) get combo_handler compiled

2013-04-10 Thread Leif Hedstrom (JIRA)

 [ https://issues.apache.org/jira/browse/TS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Leif Hedstrom updated TS-1053:
--

Fix Version/s: 3.3.2 (was: 3.3.4)



[jira] [Commented] (TS-1779) Crash using SNI and ssl_ca_name

2013-04-10 Thread Rodney (JIRA)

[ https://issues.apache.org/jira/browse/TS-1779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13628491#comment-13628491 ]

Rodney commented on TS-1779:


It happens every time, so it's very easy to reproduce. Let me know how you 
want to receive it.




 Crash using SNI and ssl_ca_name
 ---

 Key: TS-1779
 URL: https://issues.apache.org/jira/browse/TS-1779
 Project: Traffic Server
  Issue Type: Bug
  Components: SSL
Reporter: Rodney
Assignee: James Peach
 Fix For: 3.3.2


 When I add 'ssl_ca_name' to include a chain-cert CA, Traffic Server fails to 
 start with a core dump. It seems to be okay if I have just one entry in the 
 'ssl_multicert.config' file, but as soon as I use SNI, Traffic Server will 
 not start, dumping core.
 This was witnessed on 3.2.0 and currently on 3.2.4, on Debian Squeeze.
 Example entries:
 ssl_cert_name=my1.crt ssl_key_name=my1.key ssl_ca_name=my1CA.crt
 ssl_cert_name=my2.crt ssl_key_name=my2.key ssl_ca_name=my2CA.crt
 #Default
 dest_ip=* ssl_cert_name=my1.crt ssl_key_name=my1.key ssl_ca_name=my1CA.crt



[jira] [Commented] (TS-1779) Crash using SNI and ssl_ca_name

2013-04-10 Thread James Peach (JIRA)

[ https://issues.apache.org/jira/browse/TS-1779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13628602#comment-13628602 ]

James Peach commented on TS-1779:
-



There should be a stack trace in the log file, probably traffic.out. That would 
actually be more useful than the core file, since I probably won't have the 
matching toolchain to symbolicate it. If you have a throwaway set of 
certificates you can give me, that would be great too :)






[jira] [Commented] (TS-1405) apply time-wheel scheduler about event system

2013-04-10 Thread Bin Chen (JIRA)

[ https://issues.apache.org/jira/browse/TS-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13628638#comment-13628638 ]

Bin Chen commented on TS-1405:
--

http_load -parallel 100 -seconds 60 -keep_alive 100 /tmp/URL
Are the URLs in /tmp/URL all hits or misses? What's the hit ratio?



[jira] [Commented] (TS-1405) apply time-wheel scheduler about event system

2013-04-10 Thread Leif Hedstrom (JIRA)

[ https://issues.apache.org/jira/browse/TS-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13628664#comment-13628664 ]

Leif Hedstrom commented on TS-1405:
---

100% cache hit ratio. Note that I run 3x of those http_load instances, for a 
total of 300 connections.
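
For concreteness, that setup looks something like this (same flags as the 
command quoted above; the URL-list path is whatever the test uses):

{noformat}
# Three http_load instances in parallel, 100 connections each = 300 total
http_load -parallel 100 -seconds 60 -keep_alive 100 /tmp/URL &
http_load -parallel 100 -seconds 60 -keep_alive 100 /tmp/URL &
http_load -parallel 100 -seconds 60 -keep_alive 100 /tmp/URL &
wait
{noformat}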



[jira] [Updated] (TS-1106) redirect map generates multiple Via: header entries.

2013-04-10 Thread Leif Hedstrom (JIRA)

 [ https://issues.apache.org/jira/browse/TS-1106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Leif Hedstrom updated TS-1106:
--

Fix Version/s: 3.5.0 (was: 3.3.3)

 redirect map generates multiple Via: header entries.
 

 Key: TS-1106
 URL: https://issues.apache.org/jira/browse/TS-1106
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 3.1.2
Reporter: Leif Hedstrom
Assignee: Leif Hedstrom
 Fix For: 3.5.0


 It seems that using the redirect rule in remap.config ends up duplicating 
 the entry in the Via: header, e.g.
 {code}
 Via: http/1.1 kramer.ogre.com (ApacheTrafficServer/3.1.3-unstable [u c s f p eS:tNc  i p s ]), http/1.1 kramer.ogre.com (ApacheTrafficServer/3.1.3-unstable [u c s f p eS:tNc  i p s ])
 {code}
 I'm not sure why this happens: whether the header is being duplicated, or 
 the request is going through the SM twice. I know I'm not proxying through 
 the box twice.
