[jira] [Resolved] (TS-1278) Clang warns: Volatile fields read but results discarded

2012-08-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/TS-1278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Galić resolved TS-1278.


Resolution: Fixed
  Assignee: Igor Galić

57035900278a94310705c48200b098dc0cf2c164
22bd64a667f3300d94bb147683925ed51b30e18d

> Clang warns: Volatile fields read but results discarded
> ---
>
> Key: TS-1278
> URL: https://issues.apache.org/jira/browse/TS-1278
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
> Environment: clang version 3.2 (trunk 157601)
> Target: x86_64-unknown-linux-gnu
> Thread model: posix
>Reporter: Igor Galić
>Assignee: Igor Galić
> Fix For: 3.3.1
>
> Attachments: volatile.patch
>
>
> {noformat}
> Making all in eventsystem
> make[2]: Entering directory 
> `/home/igalic/src/asf/trafficserver/BUILD/iocore/eventsystem'
>   CXX      EventSystem.o
> In file included from ../../../iocore/eventsystem/EventSystem.cc:31:
> In file included from ../../../iocore/eventsystem/P_EventSystem.h:41:
> In file included from ../../../iocore/eventsystem/I_EventSystem.h:35:
> In file included from ../../../iocore/eventsystem/I_Action.h:30:
> In file included from ../../../iocore/eventsystem/I_Continuation.h:40:
> ../../../iocore/eventsystem/I_Lock.h:404:19: error: expression result unused; 
> assign into a variable to force a volatile load
>   [-Werror,-Wunused-volatile-lvalue]
> ink_assert(m->thread_holding);
> ~~^~~
> ../../../lib/ts/ink_assert.h:54:31: note: expanded from macro 'ink_assert'
> #define ink_assert(EX) (void)(EX)
>   ^
> 1 error generated.
> make[2]: *** [EventSystem.o] Error 1
> {noformat}
> Discussion from {{#llvm}} on IRC:
> {noformat}
> < jMCg> volatile EThreadPtr thread_holding;
> <@baldrick> the clang warning sounds very sensible then
> < jMCg> 
> http://git-wip-us.apache.org/repos/asf?p=trafficserver.git;a=blob;f=iocore/eventsystem/I_Lock.h#l80
> < jMCg> The comment is great. "You must not modify or set this value 
> directly." -- that's why it's public!
> < jMCg> baldrick: can you still help me understand it?
> <@baldrick> jMCg: reading a volatile variable may have side-effects (that's 
> what volatile is for).  For example, if it is mapped to some I/O area, each 
> read could fire off a nuclear warhead for example.
> <@baldrick> jMCg: thus it seems sensible to warn if it looks like someone is 
> reading it but not using the result.
> < jMCg> Oh. Now I'm back on track. *Reading* a volatile variable may have 
> *side-effects* - sometimes I'm slow.
> < jMCg> baldrick: I'll open a Bug in our project. Thank you very much.
> {noformat}
> This is it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (TS-745) Support ssd

2012-08-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/TS-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443029#comment-13443029
 ] 

Igor Galić commented on TS-745:
---

Can you please rebase this branch against the current trunk?

> Support ssd
> ---
>
> Key: TS-745
> URL: https://issues.apache.org/jira/browse/TS-745
> Project: Traffic Server
>  Issue Type: New Feature
>  Components: Cache
>Reporter: mohan_zl
>Assignee: mohan_zl
> Fix For: 3.3.1
>
> Attachments: TS-ssd-2.patch, TS-ssd.patch
>
>
> A patch for supporting SSD; it does not work well over long runs with --enable-debug



[jira] [Resolved] (TS-1420) traffic.out cannot currently be rotated nicely

2012-08-28 Thread Leif Hedstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leif Hedstrom resolved TS-1420.
---

Resolution: Duplicate

> traffic.out cannot currently be rotated nicely
> --
>
> Key: TS-1420
> URL: https://issues.apache.org/jira/browse/TS-1420
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 3.2.0
>Reporter: Jon Cowie
>
> There's currently no way to rotate traffic.out without having to restart 
> trafficserver to get it to release the file handle. This is sub-optimal and 
> could do with fixing.



[jira] [Commented] (TS-745) Support ssd

2012-08-28 Thread Zhao Yongming (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443208#comment-13443208
 ] 

Zhao Yongming commented on TS-745:
--

Whoa, that will be a tough task. Also, Wei Jin has made another implementation of 
tiered storage, based on the 3.0.x tree; the code is at 
https://gitorious.org/trafficserver/taobao/commits/new_tbtrunk_ssd, without 
docs as always.

The tiered storage does not work well with multi-volume configs for now.

I am still not sure whether we should merge it into our official tree, 
although it works perfectly for me. In my hardware spec, 
mem:ssd:sas = 1:10:100.

If you guys think tiered storage is the right way to go, we will merge it 
into the git master.

Thanks



[jira] [Commented] (TS-1405) apply time-wheel scheduler about event system

2012-08-28 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443216#comment-13443216
 ] 

Leif Hedstrom commented on TS-1405:
---

I'm wondering, with these improvements (they are improvements, right? :) ), 
could we get rid of the inactivity cop and re-enable the old code path which 
injected inactivity events? I believe the inactivity cop was added in response 
to "performance concerns" with those events, but right now the inactivity cop 
can itself be a serious performance problem.

> apply time-wheel scheduler  about event system
> --
>
> Key: TS-1405
> URL: https://issues.apache.org/jira/browse/TS-1405
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 3.2.0
>Reporter: kuotai
>Assignee: kuotai
> Fix For: 3.3.0
>
> Attachments: time-wheel.patch
>
>
> As more and more events pile up in the event system scheduler, performance gets 
> worse. This is the reason we use the inactivity cop to handle keep-alive. The 
> new scheduler is a time wheel, which has better time complexity (O(1)).



[jira] [Created] (TS-1422) TProxy + proxy.config.http.use_client_target_addr can cause site-specific DoS when DNS records are bad/stale or point to unreachable servers

2012-08-28 Thread B Wyatt (JIRA)
B Wyatt created TS-1422:
---

 Summary: TProxy + proxy.config.http.use_client_target_addr can 
cause site-specific DoS when DNS records are bad/stale or point to unreachable 
servers
 Key: TS-1422
 URL: https://issues.apache.org/jira/browse/TS-1422
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 3.2.0
 Environment: Version 3.2 running with TProxy interception and 
proxy.config.http.use_client_target_addr == 1
Reporter: B Wyatt
Assignee: Alan M. Carroll


In the presence of multiple A(AAA) records from DNS, most consumer browsers will 
choose an alternate record if their currently selected record is unreachable.  
This allows the browser to successfully mitigate downed servers and 
stale/erroneous DNS entries.

However, an intercepting proxy will establish a connection for a given endpoint 
regardless of the state of the upstream endpoint.  As a result, the browser's 
ability to detect downed origin servers is completely neutralized.

When proxy.config.http.use_client_target_addr is enabled, this situation creates 
a localized service outage.  ATS will skip DNS checks in favor of using the 
endpoint address that the client was attempting to connect to during 
interception.  If this endpoint is unreachable, ATS sends an error response 
(50x) to the user's browser.  Since the browser assumes this came from the 
origin server, it makes no attempt to move to the next DNS record.

In the event that a DNS record is erroneous, or the most-selected record (the 
first?) points to a down server, this can deny users behind the transparent 
proxy access to a destination, while users who are not intercepted merely 
see increased latency as their browser cycles through bad DNS entries looking 
for a good address.



[jira] [Created] (TS-1423) Blind tunneling of garbage/invalid requests when using transparent interception

2012-08-28 Thread B Wyatt (JIRA)
B Wyatt created TS-1423:
---

 Summary: Blind tunneling of garbage/invalid requests when using 
transparent interception
 Key: TS-1423
 URL: https://issues.apache.org/jira/browse/TS-1423
 Project: Traffic Server
  Issue Type: New Feature
Affects Versions: 3.2.0
 Environment: 3.2 with TProxy interception and 
proxy.config.http.use_client_target_addr == 1
Reporter: B Wyatt
Assignee: Alan M. Carroll


Presently, when ATS encounters a request that it cannot parse or that is 
malformed in any way, it sends an error response to the client.

When using transparent interception and 
proxy.config.http.use_client_target_addr, ATS should have enough information to 
blindly tunnel the original "transmission" to the desired endpoint and maintain 
service regardless of HTTP/1.x compliance, and even if it is non-HTTP 
communication over port 80. 

A bonus would be support for alien protocols where the server speaks 
first; however, the ambiguity between a slow incoming request and an expectation 
that the server speaks first can make that difficult.



[jira] [Created] (TS-1424) Transparent proxy with proxy.config.http.use_client_source_port==1 has problems if the client is keep-alive and the origin server is not.

2012-08-28 Thread B Wyatt (JIRA)
B Wyatt created TS-1424:
---

 Summary: Transparent proxy with 
proxy.config.http.use_client_source_port==1 has problems if the client is 
keep-alive and the origin server is not.
 Key: TS-1424
 URL: https://issues.apache.org/jira/browse/TS-1424
 Project: Traffic Server
  Issue Type: Bug
 Environment: 3.2 with transparent (TProxy) interception + 
proxy.config.http.use_client_source_port = 1
Reporter: B Wyatt
Assignee: Alan M. Carroll


As keep-alive is hop-by-hop, ATS will happily support client keep-alive in 
instances where an origin server terminates the connection after each 
transaction. 

However, when using proxy.config.http.use_client_source_port this behavior can 
cause some sites to break.  

When the client is kept alive, subsequent requests are made rapidly and with 
the same 4-tuple for addressing.  Since ATS is trying to match the 4-tuple (due 
to proxy.config.http.use_client_source_port), it enters a 3-way race between: 

# the FIN, FIN/ACK packets being exchanged with the origin server and the new 
request packets from the client.  If the OS is slow, it is possible that ATS 
will attempt to reconnect with the same port/address before the connection is 
legitimately closed.
# kernel timers for PAWS and recently closed sockets.  This is different from 
(and much shorter than) the TIME_WAIT state, and there is no way to disable it.
# everything working out just fine and the connection establishing like normal.

The best repro case I've seen is a slow origin server that serves pages in 
 tags from the same host but does not support keep-alive 
(http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp for instance).

It is possible that simply respecting a server's keep-alive settings when using 
proxy.config.http.use_client_source_port would work, as the original client 
would change the 4-tuple address for its next connection.



[jira] [Created] (TS-1425) Crash (rare) when an origin server disconnects mid POST request upload (client->tunnel->OS)

2012-08-28 Thread B Wyatt (JIRA)
B Wyatt created TS-1425:
---

 Summary: Crash (rare) when an origin server disconnects mid POST 
request upload (client->tunnel->OS) 
 Key: TS-1425
 URL: https://issues.apache.org/jira/browse/TS-1425
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP, Network
Affects Versions: 3.2.0
 Environment: ATS 3.2 proxying a POST request to an unstable server
Reporter: B Wyatt
Assignee: B Wyatt


When an origin server terminates the connection during a POST upload, the cache 
must reset and re-create the tunnel in order to deliver the 50X error message. 

In doing so we deallocate the buffers either being used to read the POST data 
from the client or to support a lingering read for client-side aborts, without 
cancelling either of these reads.

Best case scenario: the buffer is still deallocated the next time read_from_net 
is called for this VC, and the fact that it has been zero'd out disables 
the read (because it thinks it is out of buffer space).

Bad case scenario: the buffer has been re-alloc'd when read_from_net is called, 
and we potentially corrupt another buffer with data from the wire.

Worst case scenario: the buffer has been re-alloc'd when we enter read_from_net 
and modified/freed before the call to fill the buffer, resulting in a segfault 
and trashed data.



[jira] [Created] (TS-1426) TSHttpTxnOutgoingTransparencySet crashes if the User Agent has disconnected

2012-08-28 Thread B Wyatt (JIRA)
B Wyatt created TS-1426:
---

 Summary: TSHttpTxnOutgoingTransparencySet crashes if the User 
Agent has disconnected
 Key: TS-1426
 URL: https://issues.apache.org/jira/browse/TS-1426
 Project: Traffic Server
  Issue Type: Bug
  Components: TS API
Affects Versions: 3.2.0
 Environment: 3.2 with a plugin that uses 
TSHttpTxnOutgoingTransparencySet and a user agent that disconnects
Reporter: B Wyatt
Assignee: B Wyatt
Priority: Trivial


Null deref when sm->ua_session has become NULL due to a client-side disconnect.  
Simple fix incoming.



[jira] [Created] (TS-1427) PluginVC's inactivity timeout support is expensive due to Mutex and Thread Scheduling load

2012-08-28 Thread B Wyatt (JIRA)
B Wyatt created TS-1427:
---

 Summary: PluginVC's inactivity timeout support is expensive due to 
Mutex and Thread Scheduling load
 Key: TS-1427
 URL: https://issues.apache.org/jira/browse/TS-1427
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP, TS API
Affects Versions: 3.2.0
 Environment: 3.2 with plugin that makes use of 1 or more PluginVC's 
with long/large requests
Reporter: B Wyatt
Assignee: B Wyatt
Priority: Minor


PluginVC's method for handling the inactivity timer is to cancel and reschedule 
an event every time activity happens.  When data arrives in rapid, small 
packets this results in a few hundred cancels and resets per second per 
PluginVC, which spikes futex and thread-scheduling load in the kernel. 

By contrast, the socket-based VCs use an inactivity cop that fires once a 
second to check whether any connection has been inactive for too long.  This 
only requires maintaining a timeout timestamp per connection, which is far 
cheaper.  However, it reduces the accuracy of the inactivity timeout to 
~1 second resolution.

Well worth the accuracy trade to get performance back up to par, in my opinion.



[jira] [Commented] (TS-1405) apply time-wheel scheduler about event system

2012-08-28 Thread John Plevyak (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443575#comment-13443575
 ] 

John Plevyak commented on TS-1405:
--

The current code "should" have a complexity which is bounded by the need to 
scan the entire queue every 5 seconds.  This is necessary because cancelling an 
event involves setting the volatile "cancelled" flag, and not scanning them would 
result in running out of memory.  Assuming an event is inserted with a 30 
second timeout and waits until it runs, it will be touched 30/5 = 6 + 10 = 16 
times.  For a 300 second timeout it will be touched 300/5 = 60 + 10 = 70 times.

If an event is cancelled (the normal case for timeouts), then it will be 
touched once (after an average of 2.5 seconds).  So, at least according to the 
design, the cost of the current design should be only a small constant factor 
worse than the time wheel, and should average slightly more than 1 touch per 
event, which is the best that can be expected.  Of course, that is the 
design; if it is causing problems, then likely there is a bug or something 
about the workload which is causing problems.

The time wheel can bring this down to 1 touch every N seconds with an expected 
1 touch per event, or 6 and 60 above.

So, I think this is a very reasonable change, assuming that it can deal with 
the out-of-memory issue, and I am interested in seeing the benchmarks, as I am 
curious to see how theory and practice collide.



[jira] [Commented] (TS-1405) apply time-wheel scheduler about event system

2012-08-28 Thread John Plevyak (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443583#comment-13443583
 ] 

John Plevyak commented on TS-1405:
--

Sorry, the numbers for 30 seconds should be 30/5 + ~17 (every time a power-of-2 
bucket is touched, 1/2 of the elements will be moved out, and 1/2 of 
those will be moved down 2 levels, etc.) = 27, vs 7 for the time wheel.

So the time wheel, in the case of short expired timeouts, can be several times 
more efficient.



[jira] [Commented] (TS-1405) apply time-wheel scheduler about event system

2012-08-28 Thread kuotai (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443716#comment-13443716
 ] 

kuotai commented on TS-1405:


Thanks for your comments :-) Yeah, we will run more tests. In my environment 
(cluster mode), TS has 15K+ QPS and 200K+ events in the scheduler.



[jira] [Comment Edited] (TS-1405) apply time-wheel scheduler about event system

2012-08-28 Thread kuotai (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443716#comment-13443716
 ] 

kuotai edited comment on TS-1405 at 8/29/12 12:15 PM:
--

Thanks for your comments :-) Yeah, we will run more tests. In my environment 
(cluster mode), TS has 15K+ QPS and 70K+ events in the scheduler.

  was (Author: kuotai):
Thanks for your comments :-) Yeah, we will run more tests. In my environment 
(cluster mode), TS has 15K+ QPS and 200K+ events in the scheduler.
  


[jira] [Commented] (TS-1405) apply time-wheel scheduler about event system

2012-08-28 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443773#comment-13443773
 ] 

Leif Hedstrom commented on TS-1405:
---

Hmmm, I need to play with this some more, but with "few" connections (300), the 
time-wheel patch shows very noticeable performance degradation. Just doing a 
quick test (I will fiddle with it some more), I get:

{code}
http_load  -parallel 100 -seconds 20 -keep_alive 100 /tmp/URL
2644059 fetches on 26310 conns, 300 max parallel, 2.644059E+06 bytes, in 20 
seconds
1 mean bytes/fetch
132202.7 fetches/sec, 1.322027E+05 bytes/sec
msecs/connect: 0.156 mean, 1.884 max, 0.048 min
msecs/first-response: 2.156 mean, 82.044 max, 0.076 min

tinkerballa (21:15) 272/0 $ ~/benchit.sh 100 20 100
http_load  -parallel 100 -seconds 20 -keep_alive 100 /tmp/URL
3275553 fetches on 32567 conns, 300 max parallel, 3.275550E+06 bytes, in 20 
seconds
1 mean bytes/fetch
163776.5 fetches/sec, 1.637765E+05 bytes/sec
msecs/connect: 0.171 mean, 2.251 max, 0.047 min
msecs/first-response: 1.440 mean, 117.784 max, 0.090 min
{code}

The first run is with the time-wheel patch, the second is plain trunk (which is 
still a little slower than I normally see; I need to look into that 
too). But both throughput (QPS) and latency are worse with the patch.



[jira] [Commented] (TS-1405) apply time-wheel scheduler about event system

2012-08-28 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443774#comment-13443774
 ] 

Leif Hedstrom commented on TS-1405:
---

I should point out that CPU usage is lower with the time-wheel patch. So 
perhaps there's lock contention or something that triggers now, preventing us 
from consuming all available CPU?



[jira] [Commented] (TS-475) HTTP SM should support efficient byte range requests

2012-08-28 Thread Pong (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443794#comment-13443794
 ] 

Pong commented on TS-475:
-

http://learn.iis.net/page.aspx/653/configure-byte-range-request-segment-size-in-application-request-routing/
Is it the same idea?

> HTTP SM should support efficient byte range requests
> 
>
> Key: TS-475
> URL: https://issues.apache.org/jira/browse/TS-475
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Cache, HTTP
>Reporter: Leif Hedstrom
>Assignee: Alan M. Carroll
>Priority: Critical
> Fix For: 3.1.4
>
> Attachments: diff.out
>
>
> The cache has support for efficiently locating a particular range in a cached 
> object, but the HTTP SM does not support this. In order to make Range: 
> requests efficient (particularly on large objects), the SM should support this 
> new cache feature.
